Worried... Can I change to "Smart Backup Retention" after a year of "Keep all backups"?

Hi! Backups are such an important and delicate subject. I have been running Duplicati backups to a BackBlaze B2 ‘bucket’ for about a year, and it seems to work quite well, although one never knows if a backup works until you need to restore it… :wink:
I have been using “Keep all backups” so far, but that is NOT very smart… So I am considering switching to “Smart Backup Retention”, which I presume would lead to the deletion of most of my current 700 GB backup. But would it be safe? I mean, Duplicati would have to perform an extremely big and complex sorting, filtering, and deleting task…!?
I appreciate any comment!
Henrik


Yep, you should be fine. There used to be a bug where if more than 999 backup versions needed to be pruned, Duplicati would fail. But if you are running the latest beta you won’t have that problem.

It’s up to you if you want to use the “Smart backup retention” or “custom backup retention”. I believe Smart backup retention is the same as using custom retention set to 7D:1D,4W:1W,12M:1M. Note that this will delete all backup versions > 1 year old.

Personally I use “custom” set to 7D:0s,3M:1D,2Y:1W,99Y:1M. That basically means keep every backup for the most recent 7 days, keep daily backups for 3 months, keep weekly backups for 2 years, and keep monthly backups for 99 years. (I don’t want it to delete old backups…)
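To make the rule syntax above concrete, here is a minimal sketch that splits a retention string of this shape into (timeframe, interval) pairs. It only illustrates the `timeframe:interval` comma-separated format shown in this thread; Duplicati's own parser handles the actual unit semantics, and the function name is just for illustration.

```python
def parse_retention(policy):
    """Split a Duplicati-style retention string such as
    "7D:0s,3M:1D,2Y:1W,99Y:1M" into (timeframe, interval) pairs."""
    rules = []
    for rule in policy.split(","):
        timeframe, interval = rule.split(":")  # e.g. "7D" and "1D"
        rules.append((timeframe, interval))
    return rules

print(parse_retention("7D:1D,4W:1W,12M:1M"))
# [('7D', '1D'), ('4W', '1W'), ('12M', '1M')]
```

Reading it this way makes the "Smart" preset easy to state in words: within the last 7 days keep one backup per day, within the last 4 weeks one per week, within the last 12 months one per month.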


I’d also suggest upgrading to 2.0.5.1 Beta if you’re not there already. There’s a fix for a bug in compact, which is what actually reclaims the space after version deletes turn some of it into waste.

Compacting files at the backend

Thank you very much to drwtsn32 and ts678 for your replies!
Duplicati suggested updating to 2.0.5.1_beta_2020-01-18 and repeatedly said that the update failed, but according to ‘About’ on the menu it actually was updated…
Then I removed and added some folders in my backup settings. But when I ran the backup, everything seemed to fail. Then I tried recreating the local database. But I don’t know if that job succeeded, because now Duplicati cannot connect to the server… Every time I try, I even get logged out of Duplicati itself. So I am kind of in a deadlock.
Addition: It also seems I can’t update the Duplicati settings.

Did you use the auto-update mechanism? A lot of people seem to have issues with it.

What platform are you on? Windows? I’d probably uninstall and then reinstall using the latest manual download.

I logged in to Duplicati and was asked to update again and again, until I discovered that it WAS updated. I use Ubuntu 18.04. So should I uninstall?

By the way: How much is lost with the uninstall?

If you’re using Ubuntu, try this first:

$ sudo systemctl restart duplicati

Afterward see if the Web UI comes up ok.

The mono from Ubuntu LTS is too old. Have you updated to at least v5?

Release: 2.0.5.1 (beta) 2020-01-18

Important notes:

On Linux, macOS, and other systems that require Mono, this version requires Mono v5 or later.

Install Duplicati on Ubuntu Server 18.04

After update to 2.0.4.27 duplicati crashes on synology


Uninstalling doesn’t remove your Duplicati configuration database (Duplicati-server.sqlite) or the job-specific databases. So it has always been a safe procedure on the systems I use.

That being said your mono version may very well be the issue. I’d check that first.

Unfortunately that didn’t help.
Concerning the version, the ‘About’-menu says:
“You are currently running Duplicati - 2.0.5.1_beta_2020-01-18”
But what do you mean by ‘mono’?

Cross-platform, open source .NET framework

Mono is a software platform designed to allow developers to easily create cross-platform applications. It is part of the .NET Foundation.

On Windows, Duplicati code is run by the .NET Framework. On non-Windows systems, it’s run by mono.

Mono (software)

If running mono --version doesn’t show at least version 5 (version 6 has been out too), it’s too old.

While you could try to stay on an obsolete mono, you’ll be stuck on old Duplicati and a compact bug.

If you hit the compact bug, the backup is damaged, possibly irrecoverably, so I suggest the upgrade.

After getting on the right mono, known upgrade problems include the Activate button sometimes not working. The workaround for that is a manual stop and restart. Windows has also seen some odd .dll version issues.
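If you want to script the version check, one way is to pull the major version out of the `mono --version` banner. A sketch, here run against a hard-coded sample line (Ubuntu 18.04's packaged mono) rather than a live install:

```shell
# Parse the major version from a `mono --version` banner line.
# Sample string hard-coded for illustration; on a real system you would
# capture it with: line=$(mono --version | head -n1)
line='Mono JIT compiler version 4.6.2 (Debian 4.6.2.7+dfsg-1ubuntu1)'
major=$(printf '%s\n' "$line" | grep -oE '[0-9]+\.[0-9]+(\.[0-9]+)?' | head -n1 | cut -d. -f1)
if [ "$major" -ge 5 ]; then
  echo "mono ${major}.x is new enough for Duplicati 2.0.5.1"
else
  echo "mono ${major}.x is too old; Duplicati 2.0.5.1 needs mono 5+"
fi
```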

Mono was definitely too old:
$ mono --version
Mono JIT compiler version 4.6.2 (Debian 4.6.2.7+dfsg-1ubuntu1)
I followed this install guide:
https://www.mono-project.com/download/stable/#download-lin
So now it’s:
$ mono --version
Mono JIT compiler version 6.8.0.96 (tarball Wed Jan 15 10:08:18 UTC 2020)
After that I had to, again, do:
sudo systemctl restart duplicati
But in order to be on the safe side, I just started another ‘recreate database’, which seems to be running smoothly (for the next several hours).

Well… Bad news:

When I tried to recreate the database I got this error:

Error while running Lenovo-Ubuntu
SQLite error cannot rollback - no transaction is active

When looking at the ‘Remote’ log, I get a long list of this type of entries:

Jan 28, 2020 6:03 AM: get duplicati-ifffe34153ef04ed1a5038f23bc5d0a2b.dindex.zip.aes
{"Size":28509,"Hash":"qelV4rdupKR8wULgcXNGZ13u89RkYoC+6dE7s/jCxeY="}

Does this mean that my backup is corrupt, and I should delete my whole backup and start all over?
And in that case what is the smartest way to do that?

A seemingly rare failure. Four forum reports, one solved:

[SOLVED] SQLite error cannot rollback - no transaction is active (NAS)

Please check free space on all partitions, e.g. using df -k.

Normal. You get one entry every time anything happens with a remote file.
Recreate does lots of downloads (Get), of mostly dindex (duplicati-i*).
If it does many dblocks (duplicati-b*) or repeats a file, that’s significant.
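If you want to check the free space programmatically rather than eyeballing the full `df` table, you can read the Available column for the directory that holds the Duplicati databases (the default on Linux is ~/.config/Duplicati). A sketch, shown here against the current directory:

```shell
# Read the "Available" column (in KB) for the filesystem holding a directory.
# Replace "." with ~/.config/Duplicati to check Duplicati's database location.
# -P forces POSIX one-line-per-filesystem output so awk's NR==2 is reliable.
avail_kb=$(df -Pk . | awk 'NR==2 {print $4}')
echo "Available: ${avail_kb} KB"
```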

Thank you!
Disk space could be the problem.
There is 1.6 GB free on the linux-partition(s) and Duplicati already takes up 8.1 GB.
I did the “df -k” command, which gives a lot of details that I don’t know exactly how to interpret:

$ df -k
Filesystem 1K-blocks Used Available Use% Mounted on
udev 3997928 0 3997928 0% /dev
tmpfs 804068 2048 802020 1% /run
/dev/nvme0n1p5 47123228 43032544 1673848 97% /
tmpfs 4020324 335436 3684888 9% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
tmpfs 4020324 0 4020324 0% /sys/fs/cgroup
/dev/loop0 172544 172544 0 100% /snap/brave/63
/dev/loop1 91264 91264 0 100% /snap/core/8268
/dev/loop3 8832 8832 0 100% /snap/canonical-livepatch/90
/dev/loop4 55936 55936 0 100% /snap/core18/1288
/dev/loop6 171648 171648 0 100% /snap/brave/62
/dev/loop8 354304 354304 0 100% /snap/pycharm-community/172
/dev/loop9 8832 8832 0 100% /snap/canonical-livepatch/88
/dev/loop10 141952 141952 0 100% /snap/kdictionary/2
/dev/loop12 45312 45312 0 100% /snap/gtk-common-themes/1353
/dev/loop13 91264 91264 0 100% /snap/core/8213
/dev/nvme0n1p4 157733884 154863180 2870704 99% /mnt/4AF15A0435E762B4
/dev/nvme0n1p1 262144 44792 217352 18% /boot/efi
tmpfs 804064 116 803948 1% /run/user/1000
/dev/loop14 46080 46080 0 100% /snap/gtk-common-themes/1440
/dev/loop5 56064 56064 0 100% /snap/core18/1650
/dev/loop2 356224 356224 0 100% /snap/pycharm-community/175

My question is whether Duplicati is trying to use ONE part of the disk which is already full, while there is room elsewhere, for instance in my home folder?

Where do you see the 8.1 GB of files? Can you do an ls -l in that folder and show us the results? You may have some old backups that can be deleted to free up space.

Duplicati will not automatically start using space elsewhere. If the default location doesn’t have a lot of capacity, then we can look at moving some databases to a different location.
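One low-tech way to relocate a job database to a roomier partition is to move the file and leave a symlink behind (with Duplicati stopped). The mechanics are demonstrated below on throwaway temp files; the real paths on your system are up to you:

```shell
# Demonstrate the move-and-symlink pattern on temporary files.
# On a real system: stop Duplicati first, move the job's .sqlite file out of
# ~/.config/Duplicati to the roomier partition, symlink it back, then restart.
src=$(mktemp -d)   # stands in for ~/.config/Duplicati
dst=$(mktemp -d)   # stands in for a partition with more free space
touch "$src/job.sqlite"
mv "$src/job.sqlite" "$dst/job.sqlite"
ln -s "$dst/job.sqlite" "$src/job.sqlite"
ls -l "$src/job.sqlite"   # now a symlink pointing at the new location
```

If I recall correctly, the job's Database screen in the web UI also lets you set a new local database path directly, which avoids symlinks entirely.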

Per the df man page that I linked, you can see how you can test specific folders, e.g. df -k ~
Just looking at the entire df output, I don’t see an obvious spot besides the 97% full mount on /.
Beyond that, it gets deeper because some filesystem types reserve a portion for root-only use.
I’m not sure whether df accounts for that. The same 97% might also appear if you df -k /tmp

Your drive was possibly even fuller at the point of the Recreate failure; temporary files may have been deleted since.

This is all somewhat speculative, but regardless of Duplicati Recreate, I think you need room…

Thank you both!
I am sorry I included the result of “df -k” in such a messy way!!
Here is the output of “ls -l /home/henrik/.config/Duplicati/”:

$ ls -l /home/henrik/.config/Duplicati/
total 7864752
-rwxr-xr-x 1 henrik henrik     126976 jan 30  2019 'backup 20190130101155.sqlite'
-rw------- 1 henrik henrik     122880 jan 30  2019 'backup 20190130103833.sqlite'
-rw------- 1 henrik henrik     122880 okt 16 18:55 'backup 20191016065537.sqlite'
-rw------- 1 henrik henrik 4830175232 jan 24 20:56 'backup NQWTCGLDTL 20200125082200.sqlite'
drwxrwxr-x 2 henrik henrik       4096 jan 25 16:55  control_dir_v2
-rwxrwxrwx 1 henrik henrik     143360 jan 28 11:54  Duplicati-server.sqlite
-rw------- 1 henrik henrik 3222753280 jan 28 06:24  NQWTCGLDTL.sqlite
drwxrwxr-x 4 henrik henrik       4096 jan 27 19:44  updates

It DOES seem that there are 2 copies of the database. Should I delete both copies and then start recreate again? And do I need even more space than that? (There is 7.4 GB elsewhere in my home folder that I could move temporarily.)
Here is “df -k” again as Preformatted:

$ df -k
Filesystem     1K-blocks      Used Available Use% Mounted on
udev             3997928         0   3997928   0% /dev
tmpfs             804068      2048    802020   1% /run
/dev/nvme0n1p5  47123228  43032544   1673848  97% /
tmpfs            4020324    335436   3684888   9% /dev/shm
tmpfs               5120         4      5116   1% /run/lock
tmpfs            4020324         0   4020324   0% /sys/fs/cgroup
/dev/loop0        172544    172544         0 100% /snap/brave/63
/dev/loop1         91264     91264         0 100% /snap/core/8268
/dev/loop3          8832      8832         0 100% /snap/canonical-livepatch/90
/dev/loop4         55936     55936         0 100% /snap/core18/1288
/dev/loop6        171648    171648         0 100% /snap/brave/62
/dev/loop8        354304    354304         0 100% /snap/pycharm-community/172
/dev/loop9          8832      8832         0 100% /snap/canonical-livepatch/88
/dev/loop10       141952    141952         0 100% /snap/kdictionary/2
/dev/loop12        45312     45312         0 100% /snap/gtk-common-themes/1353
/dev/loop13        91264     91264         0 100% /snap/core/8213
/dev/nvme0n1p4 157733884 154863180   2870704  99% /mnt/4AF15A0435E762B4
/dev/nvme0n1p1    262144     44792    217352  18% /boot/efi
tmpfs             804064       116    803948   1% /run/user/1000
/dev/loop14        46080     46080         0 100% /snap/gtk-common-themes/1440
/dev/loop5         56064     56064         0 100% /snap/core18/1650
/dev/loop2        356224    356224         0 100% /snap/pycharm-community/175

I look forward to hearing from you.

Yes. But just to be safe, I would probably move backup NQWTCGLDTL 20200125082200.sqlite to a different partition (just in case you want to go back to it for some reason). You could ultimately delete it once your database recreation succeeds.

But I would delete all the other backup*.sqlite files and the NQWTCGLDTL.sqlite file. Then do a database recreation and keep your fingers crossed.
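Spelled out as commands, that cleanup might look like the sketch below. The filenames come from the ls -l output above; the destination directory is a placeholder (pick any partition with room), and stopping Duplicati first is my own precaution:

```shell
sudo systemctl stop duplicati
cd ~/.config/Duplicati
# Keep a copy of the big recent database on another partition, just in case:
mv 'backup NQWTCGLDTL 20200125082200.sqlite' /path/to/other/partition/
# Delete the small old backup copies and the job database to be recreated:
rm 'backup 20190130101155.sqlite' 'backup 20190130103833.sqlite' \
   'backup 20191016065537.sqlite' NQWTCGLDTL.sqlite
sudo systemctl start duplicati
```

Then kick off the database recreation from the web UI and, as drwtsn32 says, keep your fingers crossed.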

It does seem like this partition is a bit tight on space.