How to start up an existing backup from scratch?

#1

I have an existing backup and I just want to discard everything that was done and start with a fresh run that backs up everything anew.

What is the easiest way to do this? Can I just delete the local database and all remote files, or will that not work?

#2

I think that’s fine. The config is in a different file, so it will just be like an initial backup. If you think there’s a chance the initial backup won’t complete without interruption, you could do a small starter backup to get something uploaded, then add folders slowly (this gives more intermediate points in case an issue arises), and so on. The easiest approach is to just do it, but interrupted initial backups don’t recover well (so it might have to be done again).
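
To make the reset concrete, here is a minimal Python sketch of the manual steps (not anything Duplicati does internally), assuming a plain local-folder destination; both paths are hypothetical, and the database path should come from the job’s Database screen. For a remote destination such as box.com, clear the files through that service’s own web UI or client instead.

```python
import os
import shutil

# Hypothetical paths: take the real database path from the job's Database screen
# and the destination path from the job's Destination settings.
job_db = r"C:\Users\me\AppData\Local\Duplicati\ABCDEFGHIJ.sqlite"
destination = r"D:\Backups\MyJob"

# Remove the job's local database so the next run no longer knows about old versions.
if os.path.exists(job_db):
    os.remove(job_db)

# Empty the destination folder so the next run behaves like an initial backup.
if os.path.isdir(destination):
    for name in os.listdir(destination):
        path = os.path.join(destination, name)
        if os.path.isdir(path):
            shutil.rmtree(path)
        else:
            os.remove(path)
```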

Failed first upload has more discussion of a recent case.

#3

Did not work.

I deleted the remote folder (box.com), ran a “Test connection”, and the folder was created again.

Then I deleted the local database of this backup (not the main Duplicati database, just the database of this job).

Now I get this error:

“The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again.”

If I try “Repair” in the GUI, I get “No files were found at the remote location, perhaps the target url is incorrect?”

The Repair created a 120 kB sqlite file, but the backup still shows the same error.

#4

When I last did this, I exported the job, deleted the job plus the remote files, then re-imported the job and went from there.

You might need to manually delete the old database though; I don’t recall what I did.

#5

@Taomyn gave a good idea, which I think was to delete the database manually after getting its path from the Database screen. That’s what I usually do (more or less; sometimes I “delete” by renaming, just to keep the old one around in case I need it). There was (and maybe still is) a problem where the database isn’t deleted when the job gets deleted, which basically just wastes some space and leaves clutter around to confuse me…
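
For the rename-instead-of-delete variant, a tiny sketch (the path is hypothetical; use the one shown on the job’s Database screen):

```python
import shutil
from datetime import datetime

# Hypothetical path; take the real one from the job's Database screen.
job_db = r"C:\Users\me\AppData\Local\Duplicati\ABCDEFGHIJ.sqlite"

# Rename rather than delete, so the old database can be put back if needed.
shutil.move(job_db, job_db + "." + datetime.now().strftime("%Y%m%d") + ".bak")
```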

Although I didn’t see your exact results, I did see the Delete button become unavailable (grey instead of blue), and I’m not sure what that’s about. I have a somewhat odd installation, so maybe that’s related somehow.

The other surprise came in Google Drive testing, where using drive.google.com to delete all files from the backup folder left them still listed to Duplicati, which then complained they weren’t in its (deleted) local database. They were not visible from a different client (Cyberduck), but they were still available to Duplicati, which could Recreate from them.

Files: trash possibly explains what’s going on; it says files in the trash are still listed. Maybe they’re still available too?

Your situation sounds like the opposite, where you managed to make the files disappear, but the database somehow remained (the counter-argument to that is your comment about seeing a new 120 kB file created…).

The export/import/adjust-various-things route is a reasonable option, although it has more steps to get right.

For some situations, export/delete-original-job-and-backup-files/import might do better, for example if that would give Google Drive a files.delete that it needs in order for the file to actually disappear rather than move to Trash. Another advantage is that one doesn’t need to worry about accidentally having two jobs with the same remote (one of the adjustments one should make manually when doing an export/import to create a similar job).
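
To illustrate the Trash point outside of Duplicati: in the Drive v3 API, files.delete removes a file permanently, whereas the web UI only trashes it, and files.emptyTrash purges whatever is already trashed. A rough sketch with google-api-python-client, assuming OAuth credentials were already saved to a token file and using a hypothetical folder ID:

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Assumes an OAuth token was obtained earlier and saved to token.json (hypothetical path).
creds = Credentials.from_authorized_user_file("token.json", ["https://www.googleapis.com/auth/drive"])
service = build("drive", "v3", credentials=creds)

folder_id = "HYPOTHETICAL_FOLDER_ID"  # ID of the Duplicati backup folder

# List the backup files still visible in the folder...
files = service.files().list(
    q=f"'{folder_id}' in parents and trashed = false",
    fields="files(id, name)",
).execute().get("files", [])

# ...and delete them permanently; files.delete skips the Trash entirely.
for f in files:
    service.files().delete(fileId=f["id"]).execute()

# Alternatively, purge anything the web UI already moved to Trash.
service.files().emptyTrash().execute()
```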

EDIT: I tested 2.0.4.5 by deleting the Google Drive job. Duplicati pre-checked the database delete (but then did not actually delete the database, when I check with File Explorer). I had to manually check Delete Remote Files, which it did do. I was watching Trash in drive.google.com, and the dlist I had seen disappeared from there.

Failure to delete the local database does not confuse Duplicati, as an import creates a new database with a random name. Sorry I still haven’t nailed down what might have gone wrong with your attempt, but at least here are some ideas.

#6

Since I do various testing with different versions and the latest canary updates, sometimes a bad version messes with files… So under your current job(s), expand the job, click Export under Configuration, and save the JSON file somewhere safe but easy for you to access. Then edit the settings to stop the automatic run of the old backup.

Add a new backup and import from that JSON file, editing the settings and ensuring it backs up to a new destination folder. Create the new folder in the OS itself before creating the new backup job: if the old destination folder was named “Public Backup”, just name the new one “Public Backup2” and set that version 2 folder as the destination. Trying to back up to the previous folder will make it look at the previous backups and database, which will cause it to fail, so in your case it may help to delete all files and folders found under your initial backup folder (in the OS).
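
As a small illustration of the new-folder step (the folder names are just the example ones from above, and the paths are hypothetical):

```python
import os

old_dest = r"D:\Backups\Public Backup"   # destination of the old job
new_dest = r"D:\Backups\Public Backup2"  # fresh destination for the imported job

# Create the new, empty destination before pointing the imported job at it.
os.makedirs(new_dest, exist_ok=True)

# Sanity check: the imported job should start against an empty folder,
# otherwise it will see the old backup files and fail.
if os.listdir(new_dest):
    raise RuntimeError("new destination folder is not empty")
```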

Oh and here are some pro tips:

  1. Do not use a Wi-Fi connection for backups of any kind; always use a hard-wired connection.
  2. Do not expect it to back up files from a PC (desktop or laptop) that is set to go to sleep from lack of input, as that will also fail/corrupt the backups.
  3. Expect the initial backup to run at around 5-10GB worth of files per hour. If you have 500GB, that is 50-100 hours minimum with a fast local Gigabit connection, a fast source drive (SSD/NVMe), and a fast destination drive/server. To an online destination, expect slower than 5GB per hour (see the rough estimate after this list).
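
A quick back-of-the-envelope check of those figures (just the numbers from tip 3, nothing measured):

```python
# Rough duration estimate for an initial backup at the throughput figures above.
source_size_gb = 500
for rate_gb_per_hour in (5, 10):
    hours = source_size_gb / rate_gb_per_hour
    print(f"{source_size_gb} GB at {rate_gb_per_hour} GB/hour is about {hours:.0f} hours")
# 500 GB at 5 GB/hour is about 100 hours; at 10 GB/hour, about 50 hours.
```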

Now with that said, I have found the latest canary 2.0.4 versions 16 to 18 to be quite a bit faster than the previous versions (including the 2.0.4.5 beta). Where the previous versions would take 20 hours for an initial backup of 200GB (with my fast SSD drives, a fast 1Gb or 10Gb LAN connection, and a decent server with 8 CPUs and 32GB RAM), the latest versions take 2-3 hours for the same initial backup. Still, never use Wi-Fi and never use a PC that goes to sleep or shuts off after a certain period of non-use; it will always fail and cause problems.