Hang while compacting remote data

I’ve not had a successful backup for two days now. Duplicati hangs while compacting remote data.

Can someone please help me troubleshoot this? I am a very new user; I have only been using Duplicati for about 3 weeks now.

It may not be hung. Go to About → Show log → Live and set the dropdown to Verbose. Watch for activity there.

Did you customize the “remote volume size” setting? The default is 50 MB.

The remote volume size is set to 4 TB.

The reason I think it is hung is that ordinarily my backups take no more than 6 minutes. For example, the last successful backup took 5 minutes and 31 seconds. I tried running the backup 3 different times, and each time I let it run for about 3 hours, except the last time, when I actually let it run overnight. So, it ran for perhaps 10 hours.

Your remote volume of 4 TB is enormous. What made you decide on that value?

When restoring any file (even a tiny one), Duplicati would need to download at least one remote volume.

It also explains the problem you are seeing with compaction. The compact operation tries to reduce the number of remote files and reclaim wasted space by downloading volumes that contain wasted space and repackaging their contents into new ones.

I leave all my backups with a 50 MB remote volume size.

Check this out for more info: Choosing Sizes in Duplicati - Duplicati 2 User’s Manual
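If you ever want to see or tune what triggers compacting, the relevant advanced options look roughly like this (names as I recall them from the manual, so please verify the exact names and defaults there):

--dblock-size=50MB (the “remote volume size” from the Options screen)
--threshold=25 (percent of wasted space that triggers a compact; 25 is the default)
--small-file-size (what counts as a “small” volume relative to the dblock size)
--small-file-max-count (how many small volumes are tolerated before they get repackaged)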


I was confused. I thought that remote volume size referred to the size of the disk where I wanted my backup stored. In my case, I have a 4 TB USB Seagate NAS disk hanging off my WiFi router. I guess I should have specified a smaller volume size such as 50 MB. I will give that a try. BTW, as my disk is so large, I really do not need the data compacted. Is there any way to turn compaction off? Thank you for the help.

Sure, you can use the --no-auto-compact option to disable that feature. But if you are not using infinite retention, you should probably leave it on. Compacting is also how Duplicati cleans up unused blocks that are no longer referenced by any retained backup version.
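A rough sketch of how that looks on a command-line run (the destination and source paths here are just placeholders, not your actual job; in the GUI you would instead add no-auto-compact under Advanced options on the job’s Options screen):

Duplicati.CommandLine.exe backup "file://\\server\share\Duplicati" "C:\Data" --no-auto-compact=true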

I will leave compaction on for now. So, can I ask a possibly stupid question: do the symptoms of compacting taking forever definitely point to my goofy volume size (4 TB) as the cause? Could there be another cause? If it happens again, I will turn the verbose option on.

Didn’t this happen consistently on the 3 most recent runs? If it wants to compact, it will try to do so again.

You can get somewhat lighter, but probably sufficient, information at the Information level instead of Verbose.
Below I show only the lines commenting on the compacting decision. Other lines show file Get and Put operations.

2021-03-10 18:43:25 -05 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting not required
2021-03-10 19:43:13 -05 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting because there are 54.52 MB in small volumes and the volume size is 50.00 MB
2021-03-10 19:49:03 -05 - [Information-Duplicati.Library.Main.Operation.CompactHandler-CompactResults]: Downloaded 17 file(s) with a total size of 462.10 MB, deleted 34 file(s) with a total size of 462.80 MB, and compacted to 6 file(s) with a size of 132.08 MB, which reduced storage by 28 file(s) and 330.72 MB
2021-03-10 20:43:42 -05 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting not required
...
2021-03-15 10:42:40 -04 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting because there is 26.31% wasted space and the limit is 25%
2021-03-15 10:55:42 -04 - [Information-Duplicati.Library.Main.Operation.CompactHandler-CompactResults]: Downloaded 32 file(s) with a total size of 1.16 GB, deleted 64 file(s) with a total size of 1.16 GB, and compacted to 12 file(s) with a size of 274.94 MB, which reduced storage by 52 file(s) and 914.80 MB

The above was from a log file, but an easier method for casual use is About → Show log → Live → Information.
This should give a clear view of any automatic compacting that might happen after the backup is done.
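If you prefer the log-file route, adding something like this to the job’s advanced options should set it up (the file path is just an example):

--log-file=C:\Duplicati\backup-job.log
--log-file-log-level=Information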

How are you getting to the NAS? For example, what Storage Type are you using on the Destination screen?

How long did the initial backup take, and how big is the backup source area? You probably have a log for that under Show log for the job. With the huge remote volume size, all volumes probably look small, and small volumes eventually get compacted. I wonder if you’re repackaging everything you ever backed up?

An Information level log will show the size of each file being processed. If you see a huge one, that’s going to be slow.

Thanks for the information and your help. I will study the log file at the Information level the next time the problem occurs, if it occurs again.

Some more information from me: when I could not get a successful backup over 3 days, I panicked and tossed all of my backup files on my NAS (20 days’ worth), thinking that I would likely need to switch to a different backup system. Having recovered, I only have a couple of days of backups. If the problem does recur, I would expect it to happen in about 3 weeks’ time, once there are about 20 days of backups again.

The source size of my backups is 121 GB. The reason it is so large is that it contains drone videos and pictures, together with lots of backups of Quicken files.

On the NAS, after the Duplicati backup completed, the backup directory had 3,117 files in it, totaling about 77 GB.

My Duplicati backup is configured as a local drive, hanging off of a Windows ‘network share’ on my WiFi router. My router exports a network identifier called ‘TP-SHARE’, and on that share is a directory tree called ‘sda2’ on the NAS. To access the ‘TP-SHARE’ network share from Windows, I believe you use a UNC pathname, so the directory on my NAS can be accessed from my Windows laptop using the path \\tp-share\sda2.

My backup uses the full UNC path of \\TP-SHARE\sda2\Duplicati\Backup.

My initial backup took about 5 hours.

Again, thank you very much for your support in this matter. Very much appreciated.

Is this the one created with the 4 TB remote volume size, or a replacement with a more reasonable, smaller size?
I would have expected the initial 121 GB backup to fit into a single remote volume; adding the dindex and dlist makes 3 files.
If you can sort remote files by size, you can see if there’s still one that’s bigger than your intended size.
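One quick way to check from a Windows command prompt is to list the destination folder sorted by size, largest first, using something like this with the share path from your earlier post:

dir /O-S "\\TP-SHARE\sda2\Duplicati\Backup"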

SMB shares are sometimes troublesome, and Linux is worse. I think it’s usually corruptions, not hangs.

Sorry for the confusion. Here is the timeline with events on it:

On March 9, I created my first Duplicati backup with the goofy volume size of 4 TB, and that backup consisted of about 3 files. The source size was, again, 120 GB.

From March 10 through 17, Duplicati ran every day at about 1:00 PM local time, and each time it added about 3 files.

On March 18, Duplicati failed with this backup configuration, hanging during compaction. The same hang happened on March 19 and March 20.

As I am generally anal about backups, I deleted the Duplicati backup, and tried another backup tool called COBIAN. It was a joke. It took almost 12 hours to create the first backup, and spewed a bunch of errors. So, I abandoned that backup as well. At this point in time, my NAS had nothing on it.

Also on March 20, I posted to the Duplicati support forum about the hang during compaction, thinking that there might be a simple solution. Soon after I posted, someone replied saying the 4 TB volume size was goofy and I should try 50 MB. So, I started a new backup with Duplicati, with a 50 MB volume size and the same collection of source files (120 GB). This backup resulted in about 3,117 files, and none of the files are bigger than 50 MB (50 MB is 52,428,800 bytes). In fact, the largest dblock file is 51,201 KB.


Nice timeline. Normal sizes seem to have solved your slow compact. Before, you might have been compacting your entire backup, taking all the small files (relative to 4 TB) apart and filling a new file.

Regardless, I’m glad it seems well, and I hope it stays that way.
