How long to compact, and why no logging? Is it stuck?

There is activity in the temp folder as well as in the sqlite journal file, etc.

So during the hang we have:

  • 100+% CPU utilization for mono-sg*
  • no activity in “live log”
  • no activity in the temp folder
  • I think there was also no activity in the Duplicati database folder, but I’m less sure about that

System #2 is configured with --log-file, so if it hangs again, I can compare that log to the “live” log. I will be adding --log-file to System #1 as well once it finishes the database recreate.

What is the RAM and CPU configuration on these hosts?

Yeah, I understand… sometimes people don’t care about the history and don’t mind starting over. Thought I’d ask! Unfortunately you cannot change the deduplication block size without starting a new backup. There is no way to reprocess the data already on the back end.

System 1 - AMD Ryzen 5 2400G with Radeon Vega Graphics, 12 GB
System 2 - Intel® Core™ i7-9750H CPU @ 2.60GHz, 32 GB

Compact on system 2 just finished…

Last successful backup: Today at 11:07 AM (took 00:26:55)
Next scheduled run: Tomorrow at 2:00 AM
Source: 759.94 GB
Backup: 878.22 GB / 16 Versions

-rw-------. 1 root root 233472 Aug 5 11:14 Duplicati-server.sqlite
-rw-------. 1 root root 17183563776 Aug 5 11:07 UMYFGVUULA.sqlite

System 2 appears to be backing up OK now. And it has been deleting remote file sets.

My path out of the mess appears to have been re-creating the database and running a compact.

System 1 is still chugging along re-creating the database.

I am wondering - when the re-create is done, would it be better to compact right away, or to first trim the number of versions down to 1 and then compact? I expect the compact to take several days either way, but I was wondering if there was a way to save some time, or maybe even get a backup done without having to wait several weeks.

You can watch progress with About → Show log → Live → Verbose. What sort of files is it doing now?
I hope it’s not downloading dblock files because that’s going to be most of the 5 TB. Slow and costly…
Preferably it’s still processing dlist or dindex files (lots of block data due to the small default blocksize).

The more versions you trim, the longer the compact needs in order to remove the newly freed-up space. That may have been part of the original problem (which I wouldn’t want to happen again). I don’t know how far the troubled previous compact got, but it may be safer to finish any current compact before deleting further.

Ramping the compact threshold down slowly was suggested earlier. Or set no-auto-compact for a while.
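For reference, a rough sketch of those two knobs as advanced options on the job (the option names are Duplicati’s --threshold and --no-auto-compact; the values are only illustrative):

    # Start the waste threshold high so each compact run bites off less,
    # then lower it step by step toward the default of 25 (percent)
    --threshold=50

    # Or switch off automatic compacting entirely until you ask for one
    --no-auto-compact=true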

It sounded like you would wait for the recreate to finish, and I talked about taming/postponing compacts.

If you want something sooner than that, you can maybe script something together to copy files modified after the last Duplicati backup. The rclone forum topic “Is there any way to sync or copy files NEWER than a given date?” gives one way that might need some date math. Or you can set up cron to get age-based copying.
Having rclone consider some excess files is probably harmless. I think it only copies if files are different.
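As a rough sketch of that age-based copy (rclone’s --max-age filter takes a duration or an absolute date; the source path, remote, bucket, and cutoff below are placeholders):

    # Copy only files changed in roughly the last day to an interim bucket
    rclone copy /data b2remote:interim-bucket/data --max-age 24h

    # ...or only files newer than the last good Duplicati backup
    rclone copy /data b2remote:interim-bucket/data --max-age 2021-08-01

Having a configured rclone remote (here called b2remote) pointed at B2 is assumed.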

Ordinarily I’d worry about having only one version, but is that how you plan to use Duplicati in the future?

It’s processing dindex files - “Aug 8, 2021 8:47 AM: Processing indexlist volume 67524 of 261195”

I have no-auto-compact set, so it won’t start a compact until I tell it to do so.

Current plan is:

  • let the re-create finish (not that I have a choice!)
  • perform a backup
  • manually start a compact
  • note that I will not be reducing versions to 1 since it would not help the compact

In the meantime, if I want any interim off-site backups of system 1 I would need to create a tarball of modified files and copy it to one of my other systems (and then Duplicati would back up the tarball).
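Roughly what I have in mind for that, as a sketch (the paths and cutoff date are placeholders, and it assumes GNU find and tar):

    # Bundle files under /data modified since the cutoff date, then ship the
    # tarball to another system that Duplicati already backs up
    find /data -type f -newermt "2021-08-01" -print0 \
      | tar --null --files-from=- -czf /tmp/interim-$(date +%F).tar.gz
    scp /tmp/interim-$(date +%F).tar.gz othersystem:/backups/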

Looks like the database re-create has stalled - no log entries since yesterday, no disk activity, and no real CPU usage.

The last log entries are:

  • Aug 25, 2021 7:24 AM: Backend event: Get - Started: duplicati-i6cf6a21dd2f84b709d1fdbaad0841d93.dindex.zip.aes (31.75 KB)

  • Aug 25, 2021 7:24 AM: Backend event: Get - Retrying: duplicati-i6cf6a21dd2f84b709d1fdbaad0841d93.dindex.zip.aes (31.75 KB)

  • Aug 25, 2021 7:24 AM: Operation Get with file duplicati-i6cf6a21dd2f84b709d1fdbaad0841d93.dindex.zip.aes attempt 2 of 5 failed with message: The operation has timed out.

If this hasn’t progressed by tomorrow, is it safe to restart Duplicati? I.e., will it resume the re-create (I assume I have to tell it to do the re-create), or will it start over from scratch?

The Recreate (delete and repair) button is likely what you would use, and it starts from scratch.
There’s also a Repair button, but I think it’s more for fixing smaller inconsistencies.
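If the UI is awkward to get at, I believe the same recreate can also be run from the command line: with the job’s local database removed or missing, the repair command rebuilds it from the remote files. A rough sketch, where the install path, B2 URL, credentials, and dbpath are placeholders that would really come from Export As Command-line:

    # With the local job database absent, "repair" recreates it from the
    # dlist/dindex (and, if needed, dblock) files on the back end
    mono /usr/lib/duplicati/Duplicati.CommandLine.exe repair \
      "b2://my-bucket/my-folder?b2-accountid=KEYID&b2-applicationkey=APPKEY" \
      --dbpath=/root/.config/Duplicati/UMYFGVUULA.sqlite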

I wonder if this has any relationship to the seeming hang that followed it. I tried to provoke a B2 problem by disconnecting the WiFi USB adapter during a download, but couldn’t get this exact error (and also got no hangs).

It would also be interesting to know if these retries ever happen without going into a hang. They would show up in the backup log listing, under Complete log as RetryAttempts, but those stats come from the job database, meaning a Recreate will delete them. Setting log-file=<path> with log-file-log-level=retry is a way to keep that history.
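As advanced options on the job (or appended to a command-line run), that would look roughly like this; the log path is just an example:

    # Persistent retry history that survives a database Recreate
    --log-file=/var/log/duplicati/system1.log
    --log-file-log-level=retry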

What sort of OS and hardware is this on? There was seemingly some backup history, which is mostly uploading, but download problems aren’t a good thing because they could interfere with a needed restore.

I suppose you could run Duplicati.CommandLine.BackendTester.exe against an unused B2 folder for a while, giving it a URL based on Export As Command-line but with the folder changed to one used just for this test.
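Roughly, such a test might look like the following; the install path, bucket, folder, and credentials are placeholders, and the real URL would come from Export As Command-line with only the folder changed:

    # Repeatedly uploads, downloads, and deletes test files against the
    # scratch folder to shake out backend problems
    mono /usr/lib/duplicati/Duplicati.CommandLine.BackendTester.exe \
      "b2://my-bucket/backend-test?b2-accountid=KEYID&b2-applicationkey=APPKEY"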

We could consider trying a network packet capture, but it’d be kind of hard to configure and to stop at problem time. If there’s ample drive space and not a lot of other HTTPS traffic on the system, that will help.
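If it came to that, a capture could be left running and stopped after the next hang; a sketch with tcpdump, where the interface and rotation sizes are guesses (B2 traffic is HTTPS on port 443):

    # Rotate twenty 100 MB files so disk use stays bounded; after a hang,
    # stop tcpdump and keep the newest files for inspection
    sudo tcpdump -i any -w /var/tmp/duplicati-b2.pcap -C 100 -W 20 'tcp port 443'

The payload stays encrypted, of course, but the TCP-level behavior around the timeouts would still be visible.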

There’s a chance that using Backblaze B2 through Duplicati’s S3 support would work better, but that would mean moving the data to a different bucket, which rclone could perhaps do remotely using the B2 copy API.
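If that route were taken, rclone can in principle do the bucket-to-bucket copy server-side when both sides are on the same configured B2 remote, so the data would not have to pass through your machine (remote and bucket names are placeholders):

    # Server-side copy between buckets in the same B2 account
    rclone copy b2remote:old-duplicati-bucket b2remote:new-duplicati-bucket --progress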

Things are not going very well. So I am going to go with the “start over” option.

My plan is to set up 2 new backups:

  • 1 for the local system using more-or-less default options
  • 1 for the NAS using the larger blocksize

Initially the NAS backup will be set to exclude files larger than 1GB, and then I will increase it after getting a good backup.
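For the size cap I expect to use Duplicati’s skip-files-larger-than advanced option, something like this (the value just mirrors the 1 GB starting point):

    # Skip anything over 1 GB until the first full backup completes,
    # then raise or remove the limit
    --skip-files-larger-than=1GB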

Is there any way to exclude files smaller than a specified size? That would let me split the NAS backup into a backup for small files and a backup for large files.

Closing out this story. New backups have been completed, and at least for the moment all appears to be well.

Thanks to all for the help and advice!
