Backup has restarted from the beginning after machine reboot

I have Duplicati running on a FreeNAS server and have been running it nightly to back up approximately 190 GB of photos to OneDrive.

Over the past month I’ve managed to push only about 80 GB of that to OneDrive, thanks to woefully slow Australian internet speeds.

My issue is that yesterday I decided to update FreeNAS. I made sure the Duplicati backup job was paused and shut down the jail before updating. The machine rebooted and I started all the jails without problems; however, it seems my photo backup has started over from the beginning.

I came across the post below, which describes a similar issue, and decided to let the backup run overnight to see if it would catch up to where it was previously (with roughly 100 GB to go):

Interrupted backups due to a reboot

Unfortunately, when I logged into OneDrive I could see that files had been uploaded throughout the night, yet the progress bar still shows 185 GB to go.

Is there a way to force Duplicati to check the existing backup, or do I need to start from scratch? Are there any other options I could try?

Thanks for the help.

This has been addressed in several previous threads. The gist of it is that there is no guarantee (or even the faintest assumption) that when a backup job starts, Duplicati will scan through the files that have already been successfully backed up and account for them in the “size remaining” indicator. It will, however, still skip them once it reaches that point in the backup run, whenever that may be. You could do what I did: back up a smaller set at first, wait until it’s done, add more, and so on. Otherwise, the only way to know for sure is to watch the Information logs as they go by and see what’s being backed up or skipped; a rough filter sketch follows below.
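For counting those backend events without staring at the live log, something like the following could work against a log exported to a file. This is only a rough sketch, not an official tool: the path is hypothetical, the phrases are taken from the log excerpts quoted later in this thread, and per-file “skipped” messages may only appear at a more verbose log level.

# Rough sketch: tally Duplicati backend events in an exported log file.
# The phrases matched below come from the log excerpts in this thread and
# may differ between Duplicati versions -- adjust them to match yours.
from collections import Counter

LOG_PATH = "duplicati.log"  # hypothetical path to your exported log

counts = Counter()
with open(LOG_PATH, encoding="utf-8") as log:
    for line in log:
        if "Backend event: Put - Started" in line:
            counts["uploads started"] += 1
        elif "Backend event: Put - Completed" in line:
            counts["uploads completed"] += 1
        elif "Backend event: Put - Retrying" in line:
            counts["uploads retried"] += 1

for event, n in counts.most_common():
    print(f"{event}: {n}")

If uploads keep completing steadily, data is still flowing; if the job is mostly skipping already-backed-up files, you’ll see long stretches with no Put events at all.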

As an aside: did you only pause your backup job before restarting, rather than actually stopping it? That worries me slightly, as stopping it would ensure the open database entries and such are resolved correctly. I’m not 100% sure what the effect of rebooting with the job merely paused would be (hopefully nothing), but in the future, if you need to interrupt a backup job, I’d recommend stopping it. That said, if your current backup operation isn’t actually giving you errors, you’re probably OK.
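Coming back to the smaller-set approach: if you do end up rebuilding the job, the same idea can be scripted against the command-line client so each run adds one more folder once the previous run finishes. A very rough sketch, assuming a duplicati-cli binary on the PATH plus hypothetical folders and destination URL; in a FreeNAS jail you’d likely invoke mono Duplicati.CommandLine.exe instead, and you’d add your usual passphrase and other options.

# Rough sketch of "start small, grow the source list" with Duplicati's CLI.
# The folders, destination URL, and binary name are hypothetical -- adapt
# them (and add passphrase/retention options) before trying anything.
import subprocess

DESTINATION = "onedrivev2://Backups/Photos"  # hypothetical storage URL
FOLDERS = [                                  # hypothetical sources, smallest first
    "/mnt/photos/2016",
    "/mnt/photos/2017",
    "/mnt/photos/2018",
]

for i in range(1, len(FOLDERS) + 1):
    sources = FOLDERS[:i]  # each run backs up one more folder than the last
    print(f"Run {i}: backing up {len(sources)} folder(s)")
    # Re-using the same destination means previously uploaded blocks are
    # deduplicated, so only the newly added folder actually gets uploaded.
    subprocess.run(["duplicati-cli", "backup", DESTINATION, *sources], check=True)

The check=True makes the script stop if a run fails, so the source list never grows past the last successful backup.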

Thanks for the additional info, drakar. Good idea to start the backup with a smaller set and then keep adding to it … wish I had thought of that earlier.

I did pause the job rather than stopping it. I’ll make sure to stop it instead if any future interruptions are needed.

The latest available backup log shows the following (which looks fairly normal to me):

Apr 24, 2018 9:01 AM: put duplicati-b00630d6612a148b0a4884ee4e6a5e2c9.dblock.zip.aes
Apr 24, 2018 9:01 AM: put duplicati-i01f2390fa4e84da7a516a7355342b0b3.dindex.zip.aes
Apr 24, 2018 8:45 AM: put duplicati-be18bcb054e934fd89bb5021c68056084.dblock.zip.aes
Apr 24, 2018 8:45 AM: put duplicati-ifb9645f9d7dc4b9389f92531c60b523f.dindex.zip.aes
Apr 24, 2018 8:45 AM: put duplicati-b47f04c608c454e2586166c58210ad624.dblock.zip.aes
Apr 24, 2018 8:13 AM: put duplicati-beee64e90cb93419d80a7354a01a713eb.dblock.zip.aes

However, the system log shows the following, and I’m wondering whether the renaming and the failures are a sign I should just restart the backup from scratch:

Apr 25, 2018 1:56 PM: Backend event: Put - Started: duplicati-b6a23dc5feca64b96a54d9e436e609025.dblock.zip.aes (99.91 MB)
Apr 25, 2018 1:56 PM: Renaming "duplicati-b0cd8dbc5652346be989efa53faa2c31a.dblock.zip.aes" to "duplicati-b6a23dc5feca64b96a54d9e436e609025.dblock.zip.aes"
Apr 25, 2018 1:56 PM: Backend event: Put - Rename: duplicati-b6a23dc5feca64b96a54d9e436e609025.dblock.zip.aes (99.91 MB)
Apr 25, 2018 1:56 PM: Backend event: Put - Rename: duplicati-b0cd8dbc5652346be989efa53faa2c31a.dblock.zip.aes (99.91 MB)
Apr 25, 2018 1:56 PM: Backend event: Put - Retrying: duplicati-b0cd8dbc5652346be989efa53faa2c31a.dblock.zip.aes (99.91 MB)
Apr 25, 2018 1:56 PM: Operation Put with file duplicati-b0cd8dbc5652346be989efa53faa2c31a.dblock.zip.aes attempt 2 of 5 failed with message: Cannot access a disposed object. Object name: 'System.Net.Sockets.NetworkStream'.
Apr 24, 2018 2:18 PM: Backend event: Put - Started: duplicati-b0cd8dbc5652346be989efa53faa2c31a.dblock.zip.aes (99.91 MB)
Apr 24, 2018 2:18 PM: Renaming "duplicati-b00630d6612a148b0a4884ee4e6a5e2c9.dblock.zip.aes" to "duplicati-b0cd8dbc5652346be989efa53faa2c31a.dblock.zip.aes"
Apr 24, 2018 2:18 PM: Backend event: Put - Rename: duplicati-b0cd8dbc5652346be989efa53faa2c31a.dblock.zip.aes (99.91 MB)
Apr 24, 2018 2:18 PM: Backend event: Put - Rename: duplicati-b00630d6612a148b0a4884ee4e6a5e2c9.dblock.zip.aes (99.91 MB)
Apr 24, 2018 2:18 PM: Backend event: Put - Retrying: duplicati-b00630d6612a148b0a4884ee4e6a5e2c9.dblock.zip.aes (99.91 MB)
Apr 24, 2018 2:18 PM: Operation Put with file duplicati-b00630d6612a148b0a4884ee4e6a5e2c9.dblock.zip.aes attempt 1 of 5 failed with message: Cannot access a disposed object. Object name: 'System.Net.Sockets.NetworkStream'.

The renames are most likely part of the normal retry handling: as I understand it, when an upload attempt fails, Duplicati retries it under a fresh volume name rather than re-using the old one, so a partially uploaded file on the backend can’t be mistaken for a complete one.

And I suspect the “Cannot access a disposed object” errors are a known, harmless message: they don’t seem to cause any functional problems, which is probably why they haven’t been addressed yet.
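For anyone reading similar logs: since a failed attempt is retried under a fresh random volume name, one logical upload can leave a chain of renames behind. Here’s a rough sketch that reconstructs those chains from a log export in the format quoted above; the regexes match those exact line shapes (including the curly quotes some forum pastes produce), so they may need adjusting for other versions.

# Rough sketch: follow Duplicati's rename chains in an exported system log.
# "system.log" is a hypothetical export of log lines like the ones above.
import re

text = open("system.log", encoding="utf-8").read()

# 'Renaming "old" to "new"' lines map each volume name to its replacement.
renames = dict(re.findall(r'Renaming [“"]([^”"]+)[”"] to [“"]([^”"]+)[”"]', text))
# 'attempt N of M failed' lines tell us which attempt each name represents.
attempts = dict(re.findall(r'file (\S+) attempt (\d+) of \d+ failed', text))

# Walk each chain from its first name to the name the next retry went out under.
for name in set(renames) - set(renames.values()):
    chain = [name]
    while chain[-1] in renames:
        chain.append(renames[chain[-1]])
    print(" -> ".join(
        f"{n} (attempt {attempts[n]} failed)" if n in attempts else n
        for n in chain
    ))

Run against the excerpt above, this prints a single chain: the volume that failed on attempt 1, the renamed copy that failed on attempt 2, and the fresh name the third attempt went out under.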

Well, I stopped the backup and restarted it later that day. It quickly went from 189 GB to go down to 185 GB to go, so it seems it picked up the files that had been uploaded after the system restart, but not the ones before it.

I’ve decided to just scrap the backup and restart it from scratch with what I’ve learned here.

Thanks for the help.

Thanks for letting us know how things went.

Sorry to hear you had to start over, but hopefully the new one will work better for you.