Although it’s not supposed to get stuck anywhere, narrowing down where it gets stuck might help.
Setting log-file=<path> and log-file-log-level=<level> is easier for long runs than trying to look over the live log.
A starter level of Retry might be reasonable. This may eventually wind up at Profiling, which can get big.
If you have the less command, I think it can handle big logs and even do tail -f if you want live view.
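For example, here's a small sketch of inspecting the log from a shell (the sample log is a stand-in for your real log-file path; `tail` and `less` are assumed to be available):

```shell
# Sketch: the sample log below stands in for your real --log-file path.
LOG=$(mktemp)
printf '%s\n' \
  'line 1: backup started' \
  'line 2: backend event' \
  'line 3: last activity before the hang' > "$LOG"

# The end of the log shows the last activity before a hang:
tail -n 2 "$LOG"

# For a live view (like tail -f) with scrollback, run: less +F "$LOG"
# Press Ctrl-C to stop following, then '?' to search backward.
```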
I have a Profiling log that I'm cutting down to Information level here. It gives clues about where the backup was in its run.
2021-05-22 11:42:49 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-20210522T154001Z.dlist.zip.aes (51.58 KB)
2021-05-22 11:42:49 -04 - [Information-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-StartCheck]: Start checking if backups can be removed
is the end of the backup itself (the dlist file says what's in the backup), followed by the retention decisions, which may delete versions and then compact:
2021-05-22 11:42:54 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Started: duplicati-20210521T154106Z.dlist.zip.aes (51.62 KB)
2021-05-22 11:42:55 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Completed: duplicati-20210521T154106Z.dlist.zip.aes (51.62 KB)
2021-05-22 11:42:55 -04 - [Information-Duplicati.Library.Main.Operation.DeleteHandler-DeleteResults]: Deleted 1 remote fileset(s)
2021-05-22 11:42:55 -04 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting because there is 25.47% wasted space and the limit is 25%
2021-05-22 11:42:57 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-b7baa95430b9245e0a18dcb829b5bd62b.dblock.zip.aes (41.35 MB)
A backup might not delete anything, and a delete might not require a compact (the compact decision is logged either way).
I show only the first Get, but there were more, then a Put of the compacted file, then Deletes of the original files.
Eventually the Compact finished and there is a final List to check destination, and a sample verify of 1 set:
2021-05-22 11:55:46 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started: ()
2021-05-22 11:55:47 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed: (178 bytes)
2021-05-22 11:55:49 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-20210522T154001Z.dlist.zip.aes (51.58 KB)
2021-05-22 11:55:51 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-20210522T154001Z.dlist.zip.aes (51.58 KB)
So that's how it's supposed to look at a fairly light log level. Profiling adds mostly SQL, which is sometimes relevant, but since we're still looking for the neighborhood of the hang, a lighter level is an easier start.
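If you do end up with a Profiling log, you can still recover the Information-level guideposts from it. A sketch, assuming the message level appears in brackets on each line (as in the excerpts above), so a fixed-string grep works; the sample lines are abbreviated stand-ins:

```shell
# Sketch: reduce a Profiling-level log to its Information-level guideposts.
# Sample lines are abbreviated stand-ins for real log content.
LOG=$(mktemp)
printf '%s\n' \
  '2021-05-22 11:42:49 -04 - [Profiling-Timer.Begin]: sql query' \
  '2021-05-22 11:42:49 -04 - [Information-BackendEvent]: Backend event: Put - Completed' \
  '2021-05-22 11:42:50 -04 - [Profiling-Timer.Finished]: sql query done' > "$LOG"

grep -F '[Information-' "$LOG"   # keeps only the Information lines
```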
Another way to find the neighborhood is to separate the deletes from the compact. Setting no-auto-compact lets the deletes finish on their own. If they finish, the backup should be as done as it gets, should leave you a job log, etc.
You can then use the Compact now item on the backup menu to try the compact after you've set up the logs again.
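For that experiment, the job's Advanced options might look something like this (option names are Duplicati's; the log path is a placeholder):

```
--no-auto-compact=true
--log-file=/path/to/duplicati.log
--log-file-log-level=Information
```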
I'm not sure how long this problem will stay reproducible. Did you previously limit backup versions at all? Were you hitting the limit and getting version deletes and compacts? Smart retention is just a different way of deciding which old versions to delete, but it's probably more likely to delete a larger number at a time, compared to, say, date-based or version-count-based retention, which (once set) might delete one at a time.
If you're willing, you could go to heavy (Profiling) logging from the start, and use the Information messages as guideposts if need be. If it gets stuck, looking at the end of the log and doing a reverse search may help.
Alternatively, start with lighter logs and see if the problem persists, in case a Profiling log is needed later.