We introduced a smart retention policy, which means backups are retained based on their age. Duplicati’s default retention policy keeps one backup for each of the last 7 days, one for each of the last 4 weeks, and one for each of the last 12 months.
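If you run backups from the command line, the announced defaults should correspond to the retention-policy option; the storage URL and source path below are placeholders, so adjust them to your setup:

Duplicati.CommandLine.exe backup "file://D:\BackupTarget" "C:\Users\me\Documents" --retention-policy="1W:1D,4W:1W,12M:1M"

Each timeframe:interval pair reads as “within this timeframe, keep one backup per interval”, so 1W:1D keeps one backup per day for the last week.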
New backends have been added: Duplicati now supports Sia and rclone. With these, Duplicati supports well over 20 backends, as well as five standard protocols, to transfer your backup to any storage system.
This version also received quite a few performance improvements: listing files during the restore process is faster, cleaning up after a backup is faster, and hashing of data blocks is faster.
And there have been many more smaller changes. About 40 “things” have been added, 70 “issues” have been fixed, and much more “stuff” has been updated, improved, or changed.
I had some trouble updating from an earlier 2.x beta: at first I got error messages and no working UI. I tried to downgrade to keep things functional, but then the UI could not connect to the server. I went back to 2.3.3, and then the app would not start at all, without any error messages. I then wanted to uninstall and reinstall, hoping that the backup jobs would be kept, but tried “repair” first, and that made 2.3.3 work. Strangely, the app is listed as 2.0.0.7 in the Win10 app manager, but it now clearly is 2.3.3 and working flawlessly.
(Win10 x64)
Is there a “best practice” when updating between betas?
Not really - a number of updater changes / improvements were made between the 2.0.2.1 and 2.0.3.3 betas. Unfortunately, for some users this caused issues like the ones you experienced. It happened to me once on a Windows 10 x64 machine, and I just downloaded the full installer from Duplicati and “over-installed” on top of my old version (no uninstall, no backup job loss).
You shouldn’t need to. I believe that message just means a task (such as a backup) is currently running, so Duplicati can’t activate the new version without making that task fail.
I’d suggest checking whether a task is running and, if so, letting it finish before trying to activate again. If nothing is running, you may have found an upgrade bug, but you still shouldn’t have to delete any backups. You could try “over-installing” from the downloads page (it has worked for me in the past), and if that doesn’t work, you can export your existing jobs to files before a full uninstall / reinstall and then import the files; you shouldn’t lose anything from your jobs (though maybe some global settings - I’m not sure).
There’s a year-old bug where restores don’t work with non-standard block sizes. Non-standard block sizes are very useful in some scenarios, and the inability to restore seems like a fairly big problem for a backup tool. Are there any plans to fix that bug at some point? I stopped using Duplicati because of it, since I couldn’t trust it to restore my data.
It took a while, but I’ve finally been able to set the smart retention policy in the new beta. Initially the setting wasn’t saved; it went back to the previous setting whenever I exited the options dialog and entered it again. Then I found that “Keep all backups” was the only choice being saved. Finally, setting “smart retention” using incognito mode seems to have done the trick, and now I can see the setting in the backup options. I hope it’s actually applied too; let’s wait for the next backup execution…
It looks like it is a parsing/logic issue. If you look at the comments, you can fix the problem by setting the block size manually (and remember to put a B at the end so it is not parsed as MB).
That said, we really should fix small issues like that; unfortunately, my time is limited.
Are there going to be updates on the experimental channel that bring safe fixes (such as the symbolic link fix) without the riskier changes (such as multi-threading)?
Or is another beta coming soon?
Basically, how can I get the latest fixes without the risks of running canary?
I had not planned on that, but I guess we could cherry-pick some of the minor changes into a new experimental build. Any volunteers for creating a branch with the fixes for an experimental release?
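Roughly, that would mean branching from the last experimental tag and cherry-picking the safe commits; the tag name and commit hashes below are placeholders, not real references:

git checkout -b experimental-fixes v2.0.3.3   # placeholder tag for the last experimental
git cherry-pick abc1234                       # placeholder hash for the symbolic link fix
git cherry-pick def5678                       # placeholder hash for another safe fix

The experimental build could then be produced from that branch without pulling in the multi-threading work.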
Ken, I spent quite a bit of time trying to do a restore, but I couldn’t work out how to do it. I’d really like to start using Duplicati again, but “cannot restore” bugs are a big red flag for me that I shouldn’t trust my data to that backup tool.
I agree. Making a backup that cannot be restored is pointless.
For the year-old bug, the solution is to supply --blocksize=1048576b.
Note the trailing b; otherwise the value is treated as 1048576 KiB => 1 GiB, which is too big a blocksize.
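For anyone doing this from the command line, a minimal sketch; the storage URL, restore path, and the 1 MiB value are placeholders, so use the blocksize your backup was actually created with:

Duplicati.CommandLine.exe restore "file://D:\BackupTarget" "*" --restore-path="C:\RestoredFiles" --blocksize=1048576b

The trailing b makes the value parse as 1048576 bytes, i.e. a 1 MiB blocksize.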
That said, I do have multiple unit tests that use odd blocksizes, and they work without any problems, so I suspect the issue is that the GUI somehow injects a new blocksize, causing it to not accept the blocksize first read from the manifest (inside the zip archives).