Unable to complete backup using Backblaze B2

I’m having an odd issue when trying to back up my NAS from a local folder to a dedicated Backblaze B2 bucket. Attached is my backup config (sanitized of private info) for reference. The backup itself is about 700GB in size, and the upload is happening at about 1MB/sec. I’ve taken some of the advice I’ve gleaned from reading through other forum threads (such as this one), and set my upload volume size to 300MB and blocksize to 500KB. This has dramatically improved the speed of verifying backend data and so on, but doesn’t seem to have affected the basic issue I’m seeing.

The problem is this: once I begin the backup task, it starts by progressing as expected; files upload at about 1MB/s, and the count of remaining files decreases at a rate of about one every 10 seconds. Then, for whatever reason, after about 15 minutes the upload speed still displays about 1MB/sec, but the remaining files start decreasing at a MUCH faster rate, hundreds per update interval. Definitely way too fast to be accurate, since the files are all high resolution .RAW photographs, each at least 50 to 100MB in size. Once the file count reaches 0, the backup task finishes, and I see a notification at the bottom of the page that says “Previous volume not finished. Call finishVolume before starting a new volume”. I have found that if I perform a database repair task and then start the backup again, the process resumes from where it left off for another 15 minutes or so, and then triggers the same error message after the same high remaining-files churn.

When I look into the logs, I don’t see anything interesting. In the General tab I just see a message showing the result of the “Repair” process as “Success”, and one message showing the “Re-creating missing index file for duplicati- …” with the same timestamp, presumably as part of the repair process. The “remote” tab only shows two entries with a timestamp from after the repair process:

Jan 4, 2018 2:18 PM: put duplicati-bb94a0ec1951d4ed7bec71b4fa564ce57.dblock.zip.aes
{"Size":314164205,"Hash":"xEFUB/m6BB9TBHUt1pVN9jJw5POEfXjIERC0AZXcobI="}
Jan 4, 2018 2:14 PM: put duplicati-ie50c404efe3e4a69bdc418ba51439dd0.dindex.zip.aes
{"Size":42381,"Hash":"CraLlQjPbJRlzhKQgB9nEHEbneYJ9rOIKR75wF6H6C0="}

I read in a separate forum posting that the backup task was rewritten in the Canary builds in a way that avoids similar messages posted by a different user, so I tried upgrading my Duplicati install from the current Beta build to the latest Canary build. I see exactly the same behaviour.

Does anyone have any thoughts on what might be causing this? Any suggestions as to what I can try? Needless to say, I can’t use this solution as it stands, inching through my 700GB archive one chunk at a time in 15-minute intervals…

Thanks in advance for your help,

-A

NAS-Backups-duplicati-config.zip (828 Bytes)

It’s been suggested that the “Previous volume not finished” error is sometimes a red herring - something else is blowing up but not interrupting the job when it should, thus allowing the previous volume error to happen.

If you don’t do a repair and re-run the backup does it work?

As a test only, does it work if you enable the --no-backend-verification parameter?

As far as the speed of the file counter goes, it’s possible you’re seeing the difference between new files / changes being backed up (slower) and already-backed-up files being verified as unchanged (faster) or having blocks that match already-backed-up files.

For example, if your raw images were all of a solid red wall then it’s possible many of them have 500KB chunks of matching data, so the first file would be slow as it’s processed and written to the database, while the following ones would be faster since those blocks don’t need to be written anywhere.

Granted that’s a very unlikely scenario with photos…
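
If you’re curious, you can get a rough feel for that kind of block-level matching outside of Duplicati with a quick shell test (the paths are placeholders, and this only mimics the fixed-block idea - it isn’t what Duplicati literally does):

split -b 500K /path/to/sample.raw /tmp/blk_                               # cut a sample file into 500KB blocks
md5sum /tmp/blk_* | awk '{print $1}' | sort | uniq -c | sort -rn | head   # count how often each block hash repeats
rm -f /tmp/blk_*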

If you look at the main Log -> Live -> Profiling section when the faster file count is going, what sorts of items do you see being logged? Alternatively, you could try running with --log-file=<path> and --log-level=profiling parameters to generate a text file of the log of a run.
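
For example, a command-line run with those two options might look something like this (the executable name, B2 URL, and log path are placeholders and the credentials are omitted; alternatively, just add the two options to the job’s advanced options):

mono Duplicati.CommandLine.exe backup "b2://<bucket>/<prefix>" /source \
    --log-file=/var/log/duplicati-profiling.log --log-level=profiling
tail -f /var/log/duplicati-profiling.log    # in a second terminal, to watch it live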

Thanks for digging into this with me.

If I don’t repair the database first, it seems as though the backup doesn’t make any progress. It verifies the backend data, then instead of uploading for 15 minutes it just immediately begins the accelerated rate of file completion behaviour I described before.

Since you mention the possibility of the file completion rate not matching the upload transfer rate, I should probably clarify my intended usage here. I have several desktops and laptops in my house configured with local installs of Duplicati, all backing up to the NAS under discussion here. These LAN backups are completing successfully, and presumably they are each performing their own de-duplication. My intent is that on a weekly basis, the NAS will also use Duplicati to back up the collection of desktop backups to Backblaze. So not only are the individual files being processed each fairly large, but they’re also pre-de-duplicated, presumably removing any red wall photoshoots. :grin: I can’t imagine that Duplicati would care that the files it’s backing up are themselves the result of a Duplicati backup (they are just .zip files after all), but I figured I should mention it in case there’s a subtlety I’m not thinking of.

I configured the backup task to log to file with a “profiling” log level as you suggested, and there’s definitely something interesting going on in there. I was watching with tail -f while the task was running, and as soon as the accelerated file completion behaviour began, I started seeing the following message flood the log:

Warning: Failed to process path: (filename xxx)
System.NullReferenceException: Object reference not set to an instance of an object

Also, since I’m still a bit new to Duplicati, I’m still getting used to the UI. I hadn’t clued into the fact that there are separate logs for task-specific logging and system-wide logging. While it’s true that there aren’t any relevant entries when I check the backup-specific log, in checking Show Log -> Stored I do see the message “Failed while executing “Backup” with id: 1”. This appears to just be the location of the “Previous volume not finished” message discussed previously, though there are some details in the body that might be helpful.

I’ve attached a .zip with both of the logs to this message.

Thanks again for your help.

-A

nas-backup-log.zip (175.0 KB)

I’ll try to check your logs in a bit, thanks for including those.

So the issue is happening in the NAS to B2 backup, and that issue seems to be a pile of null object errors. My first guess would be some sort of permissions issue - are you running the NAS backup as tray icon, service, or both?

Regarding your plan, I suspect you won’t see much deduplication in the NAS backup, particularly as each machine backup is likely encrypted, so the likelihood of encrypted blocks from different machines matching is pretty low.

Additionally, you may not see much compression benefits as source files will all be compressed already (unless you have turned that down / off).

That being said, if you don’t care about a second layer of deduplication then your plan should work just fine. Though a restore from B2 becomes a bit of a slog as you have to restore ALL of the B2 backup just to restore a single file to a source desktop.

Well, technically you could figure out all the B2 archive files that contain all the blocks needed to restore the NAS archive files that have all the blocks needed to restore a single desktop file…

Thanks. Let me know if you need any further information/clarification.

The permissions on all of the source files are 755. Am I correct in assuming that Duplicati only needs read access to the backup source? The server is being run in a Docker container on Ubuntu 16.04 LTS using the .deb installation option from the download page.

The backups from the desktop PCs to the local NAS are actually configured to have no encryption, while the backup offsite to Backblaze has a very strong password. I’m not really too worried about the level of de-duplication or compression that I see on Backblaze. Storage is cheap, and this whole process should happen unattended in the background on a weekly basis. The Backblaze backup is just intended as an offsite last resort. My expectation is that most of the time individual machines or files can be restored quickly off the local NAS. Offsite backup only becomes necessary when both a PC AND the NAS (whose datastores are p+q parity protected) have failed simultaneously, in which case I’ll be less concerned about restore convenience and more just happy that the files still exist somewhere :grin:

Thanks again for your help,

-A

Sorry that took so long. I checked your logs and the error that shows before all the null reference errors is a disk full error:

2018-01-07 12:45:58Z - Warning: Failed to process path: /source/Backup/duplicati/jimDesktop-photos/2004-2010/duplicati-bb506a9248541499f8a78efca50133992.dblock.zip
System.IO.IOException: Disk full. Path /tmp/dup-34ed66af-86dd-4fd2-9d4a-46b712e288b2
  at System.IO.FileStream.WriteInternal (System.Byte[] src, System.Int32 offset, System.Int32 count) [0x00077] in <62f5937022004555807e6c57c33f6684>:0 
  at System.IO.FileStream.Write (System.Byte[] array, System.Int32 offset, System.Int32 count) [0x00090] in <62f5937022004555807e6c57c33f6684>:0 
  at SharpAESCrypt.SharpAESCrypt+LeaveOpenStream.Write (System.Byte[] buffer, System.Int32 offset, System.Int32 count) [0x00000] in <5e494c161b2e4b968957d9a34bd81877>:0 
  at SharpAESCrypt.Threading.DirectStreamLink.write (System.Byte[] buffer, System.Int32 offset, System.Int32 count) [0x0015b] in <5e494c161b2e4b968957d9a34bd81877>:0 
  at SharpAESCrypt.Threading.DirectStreamLink+LinkedWriterStream.Write (System.Byte[] buffer, System.Int32 offset, System.Int32 count) [0x00000] in <5e494c161b2e4b968957d9a34bd81877>:0 
  at System.Security.Cryptography.CryptoStream.Write (System.Byte[] buffer, System.Int32 offset, System.Int32 count) [0x001b7] in <62f5937022004555807e6c57c33f6684>:0 
  at SharpAESCrypt.SharpAESCrypt.Write (System.Byte[] buffer, System.Int32 offset, System.Int32 count) [0x0004d] in <5e494c161b2e4b968957d9a34bd81877>:0 
  at Duplicati.Library.Utility.Utility.CopyStream (System.IO.Stream source, System.IO.Stream target, System.Boolean tryRewindSource, System.Byte[] buf) [0x0002c] in <56b1245f594f44fe81bb104962d86e57>:0 
  at Duplicati.Library.Encryption.EncryptionBase.Encrypt (System.IO.Stream input, System.IO.Stream output) [0x00008] in <e4c2d4d1910c4cadb0c5bf8c5bcd37e3>:0 
  at Duplicati.Library.Encryption.EncryptionBase.Encrypt (System.String inputfile, System.String outputfile) [0x0000e] in <e4c2d4d1910c4cadb0c5bf8c5bcd37e3>:0 
  at Duplicati.Library.Main.BackendManager+FileEntryItem.Encrypt (Duplicati.Library.Interface.IEncryption encryption, Duplicati.Library.Main.IBackendWriter stat) [0x0001e] in <e7a77822a26d4113881277f9ff99933e>:0 
  at Duplicati.Library.Main.BackendManager.Put (Duplicati.Library.Main.Volumes.VolumeWriterBase item, Duplicati.Library.Main.Volumes.IndexVolumeWriter indexfile, System.Boolean synchronous) [0x00061] in <e7a77822a26d4113881277f9ff99933e>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.AddBlockToOutput (Duplicati.Library.Main.BackendManager backend, System.String key, System.Byte[] data, System.Int32 offset, System.Int32 len, Duplicati.Library.Interface.CompressionHint hint, System.Boolean isBlocklistData) [0x00299] in <e7a77822a26d4113881277f9ff99933e>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.ProcessStream (System.IO.Stream stream, Duplicati.Library.Interface.CompressionHint hint, Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Utility.FileBackedStringList blocklisthashes, Duplicati.Library.Utility.FileBackedStringList hashcollector, System.Boolean skipfilehash) [0x000d8] in <e7a77822a26d4113881277f9ff99933e>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.HandleFilesystemEntry (Duplicati.Library.Snapshots.ISnapshotService snapshot, Duplicati.Library.Main.BackendManager backend, System.String path, System.IO.FileAttributes attributes) [0x00471] in <e7a77822a26d4113881277f9ff99933e>:0 

So I think the initial issue is that you’re running out of disk space in /tmp/. If you could try a run and watch the space on /tmp/, that might help confirm the issue. The cause of this is often either low disk space (kinda obvious) or a very large dblock size (“Upload volume size” on step 5), usually due to a misunderstanding where people think it means how much space they want to use on the destination when it REALLY means how big each archive file should be.

Since the archive file has to be created locally before uploading to the destination, there has to be at least “Upload volume size” of space available on whatever drive is being used as temp.

My guess is the “Previous volume not finished. Call finishVolume before starting a new volume” error is probably a side effect of the disk space issue.
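
If you want to confirm it, something like this while the backup is running should make the problem visible (plain coreutils; the dup-* prefix matches the temp file in your stack trace):

watch -n 5 df -h /tmp                    # free space in /tmp, refreshed every 5 seconds
du -sch /tmp/dup-* 2>/dev/null | tail    # in another terminal: how much the Duplicati temp files are using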


As far as permissions go, you are correct that read-only access to the SOURCE is all that’s needed - write access is of course needed in temp, wherever the .json config and .sqlite database files are stored, and on the destination.


Hm. Interesting. I’ll have to dig into this. /source and /config (the mount point for the source files and .json respectively) both have 7.5TB of free space, but I didn’t think to check /tmp. This is actually mounted as a ramdrive. The server only has 4GB of RAM, and it’s currently about half used. My dblock size is currently set to 300MB for this backup task. How many archive files need to be stored simultaneously? Let’s assume we only have 1GB free in /tmp. Do you have a recommendation on dblock size assuming we have 1GB of temporary storage available?

Side question: In step 5 of the backup task configuration there is the “upload volume size” configuration parameter, which I have set to 100MB. But then there is also the advanced parameter “dblock-size” which I have set to 300MB based on a discussion I read in a separate thread on this forum. From your previous message, am I to understand that these are actually the same configuration parameter? In my situation, which one takes precedence?

Thanks,

-A

I’m pretty sure it will be the --dblock-size parameter, but you could double check by using the job’s Export link to export “As Command-line…” and seeing what shows up there.
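
Roughly speaking, the exported command should come out looking something like the sketch below (the URL and values are placeholders based on what you’ve described) - whichever --dblock-size value shows up there should be the one the job actually uses:

mono Duplicati.CommandLine.exe backup "b2://<bucket>/<prefix>" /source \
    --dblock-size=300MB --blocksize=500KB --passphrase=<...>    # ...plus the rest of the job's options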

I’ll have to get back to you on the default number of dblock (archive) files that are prepared while the “current” one is being uploaded. There’s a (poorly named?) parameter that lets you change that count if you want.

If the tmp ramdrive is global (not just for Duplicati) then my guess is it’s getting other system files in there as well.

It might be worth a test run telling Duplicati to use a physical drive with known ample space. It COULD be a “fix”, but it would also help confirm this is the issue.
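
A rough sketch of that test, assuming a hypothetical host path with plenty of room (the --tempdir advanced option is one way to point Duplicati somewhere else):

mkdir -p /mnt/storage/duplicati-tmp    # any disk-backed location with ample free space
df -h /mnt/storage/duplicati-tmp       # sanity check: want well over one 300MB dblock free
# then add  --tempdir=/mnt/storage/duplicati-tmp  as an advanced option on the job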

@kenkendk, do we know why a temp disk full error doesn’t abort the job right away?

Yup. Looks like that was the problem. I remapped /tmp to a path with plenty of free space, repaired the database, and restarted the backup. Since then, it’s been running fine overnight. Looks like the permanent solution is to either install more RAM in the server to increase free space in the /tmp ramdrive, or just leave /tmp mapped to a physical drive as it currently is. For now, I think I’ll just leave it as it is. It seems to be running quite well, and anyway I’m sure the primary bottleneck is my uplink data rate and not the scratch hard drive’s write rate :stuck_out_tongue:
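
In case it helps anyone else running Duplicati in Docker, the remap boils down to an extra volume mapping along these lines (the host paths and image name are examples, not my exact setup):

# /tmp now points at a disk-backed directory, and /source stays read-only
# since Duplicati only needs to read the backup source.
docker run -d --name duplicati \
    -v /mnt/storage/duplicati-tmp:/tmp \
    -v /srv/duplicati/config:/config \
    -v /srv/nas/backups:/source:ro \
    -p 8200:8200 \
    <your-duplicati-image>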

Thanks again for all your help.

Cheers,

-A

Glad to hear it’s working for you! :smiley:

I went ahead and flagged my post as the “solution” in case anybody else comes looking. Let me know if you disagree with that.

You and me both :wink:

Sure, I’m fine with marking your post as the solution. Your help was much appreciated. Hopefully our conversation can help someone else avoid struggling with the same problem.

Cheers,

-A


Not sure what the solution is.

Use the “--tempdir” option to define another directory for the temporary storage?

Just tried it. So far it has solved the issue and the backup is running. Hope it will finish :grinning: