I'm having an odd issue when trying to back up my NAS from a local folder to a dedicated Backblaze B2 bucket. Attached is my backup config (sanitized of private info) for reference. The backup itself is about 700GB in size, and the upload runs at about 1MB/sec. I've taken some of the advice I've gleaned from other forum threads (such as this one), and set my upload volume size to 300MB and my blocksize to 500KB. This has dramatically improved the speed of verifying backend data and so on, but doesn't seem to have affected the basic issue I'm seeing.
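For concreteness, those two settings correspond to Duplicati's --dblock-size and --blocksize advanced options. Expressed as a CLI invocation it would look roughly like this (just an illustrative sketch; my real job runs from the web UI, and the bucket name, source path and passphrase below are placeholders, not my actual values):

```
# Sketch of the backup job with the two options I changed
# ("Upload volume size" = --dblock-size, hash block size = --blocksize);
# the bucket name, source path and passphrase are placeholders.
duplicati-cli backup "b2://my-bucket/NAS-Backups" /mnt/photos \
  --dblock-size=300MB \
  --blocksize=500KB \
  --passphrase=...
```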
The problem is this: once I begin the backup task, it starts out progressing as expected; files upload at about 1MB/s, and the count of remaining files decreases at a rate of about one every 10 seconds. Then, for whatever reason, after roughly 15 minutes the upload speed still shows about 1MB/sec, but the remaining-files count starts decreasing at a MUCH faster rate, hundreds per update interval. That is definitely too fast to be accurate, since the files are all high-resolution .RAW photographs, each at least 50 to 100MB in size. Once the file count reaches 0, the backup task finishes, and I see a notification at the bottom of the page that says "Previous volume not finished. Call finishVolume before starting a new volume". I have found that if I perform a database repair task and then start the backup again, the process resumes from where it left off for another 15 minutes or so, and then triggers the same error message after the same rapid churn in the remaining-files count.
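For clarity, by "database repair task" I mean the Repair action on the job's Database page in the web UI; my understanding is that the equivalent command-line call would be roughly the following (a sketch only, with the backend URL, database path and passphrase shown as placeholders to match my sanitized config):

```
# Rough CLI equivalent of the web UI's "Repair" button; the URL,
# --dbpath and passphrase here are placeholders, not my real values.
duplicati-cli repair "b2://my-bucket/NAS-Backups" \
  --dbpath=/path/to/NAS-Backups.sqlite \
  --passphrase=...
```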
When I look into the logs, I don't see anything interesting. In the General tab I just see a message showing the result of the "Repair" process as "Success", and one message showing "Re-creating missing index file for duplicati- …" with the same timestamp, presumably as part of the repair process. The "Remote" tab only shows two entries with timestamps from after the repair process:
Jan 4, 2018 2:18 PM: put duplicati-bb94a0ec1951d4ed7bec71b4fa564ce57.dblock.zip.aes
{"Size":314164205,"Hash":"xEFUB/m6BB9TBHUt1pVN9jJw5POEfXjIERC0AZXcobI="}
Jan 4, 2018 2:14 PM: put duplicati-ie50c404efe3e4a69bdc418ba51439dd0.dindex.zip.aes
{"Size":42381,"Hash":"CraLlQjPbJRlzhKQgB9nEHEbneYJ9rOIKR75wF6H6C0="}
I read in a separate forum post that the backup task was rewritten in the Canary builds to avoid similar messages reported by another user, so I tried upgrading my Duplicati install from the current Beta build to the latest Canary build. I see exactly the same behaviour.
Does anyone have any thoughts on what might be causing this? Any suggestions as to what I can try? Needless to say, I can't use this solution as it stands, making progress on my 700GB archive one 15-minute chunk at a time…
Thanks in advance for your help,
-A

NAS-Backups-duplicati-config.zip (828 Bytes)