Hi guys. I’m glad I found Duplicati and this forum, which looks active compared to those of other open source projects.
I’m starting to install Duplicati on my home and family computers and was wondering whether there is any reason I should use the stable 1.3.x version, or whether I can just trust my backups to the 2.0 beta release.
Honestly, I only tested version 2.0 for a couple of days. The first test I did was to set up a backup job, let it run a couple of times, and then simulate corruption or missing files at the remote destination, that is, an event where the remote host corrupts or loses a file for any reason. I expected Duplicati to identify the missing or corrupt file and perform the appropriate repair (re-upload) to bring everything back to the expected state. Unfortunately, I discovered that this doesn’t happen: Duplicati can only repair the local database, and even after the local database is repaired, the backup job will no longer run, so I had to create a new job and upload the whole backup set again. I don’t know whether Duplicati 1.3.x behaves the same way in this case. (A rough sketch of what I tried is below.)
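For reference, this is roughly the sequence I ran from the command line. The destination URL, authid, and paths are placeholders, and I may well be misusing some of these commands, so please take this as a sketch of my test rather than a recipe:

```
# initial backup, run a couple of times
Duplicati.CommandLine.exe backup "googledrive://backup-folder?authid=..." "C:\Data" --dbpath="C:\Duplicati\test.sqlite"

# then I deleted/corrupted one of the dblock files on the remote side by hand
# and tried to verify and repair
Duplicati.CommandLine.exe test "googledrive://backup-folder?authid=..." all --dbpath="C:\Duplicati\test.sqlite"
Duplicati.CommandLine.exe repair "googledrive://backup-folder?authid=..." --dbpath="C:\Duplicati\test.sqlite"
```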
I thought it would be able to recover from remote corruption, mainly after reading the description on its features page:
"Technology: Fail-Safe Design
Duplicati is designed to handle various kinds of issues: Network hiccups, interrupted backups, unavailable or corrupt storage systems. Even if a backup run was interrupted, it can be continued at a later time. Duplicati will then backup everything that was missed in the last backup. And even if remote files get corrupted, Duplicati can try to repair them if local data is still present or restore as much as possible."
Fact Sheet • Duplicati
I researched it and found a couple of commands to list and purge broken files, but in my quick tests it ended up purging all (or most) of the files and uploading them again. I’m not sure if that was because the backup set was very small.
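In case it helps, these are the commands I mean, again with placeholder URL and paths, so treat this as an approximation of what I typed:

```
# show which backup versions / source files are affected by missing or damaged remote volumes
Duplicati.CommandLine.exe list-broken-files "googledrive://backup-folder?authid=..." --dbpath="C:\Duplicati\test.sqlite"

# remove the broken entries from the backup set so the job can run again
Duplicati.CommandLine.exe purge-broken-files "googledrive://backup-folder?authid=..." --dbpath="C:\Duplicati\test.sqlite"
```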
Another “sad” thing I noticed (using 2.0) is the size of the local database. I did an initial backup of 10GB to Google Drive and the local database ended up at 44MB, which works out to about 4.4MB (44MB/10GB) for each 1GB uploaded. So if I back up my laptop data (about 100GB), it would grow to roughly 440MB, assuming the growth is proportional. And if I set up multiple backup jobs against my local NAS (~2TB), I would have to spare roughly 9GB just for the local databases. That is a really big space requirement, especially on laptops with disk space constraints. I believe the only trick currently available to reduce this is to set a higher blocksize (it defaults to 100KB), maybe 500KB or even 1MB. I don’t know what the resulting local database size would be, but I’m a bit afraid of changing this default setting and having issues in the future, maybe when restoring files (from the original source or from another laptop, as I have seen others having difficulties; here).
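If changing the blocksize is indeed the way to go, I assume it would be something like the following when the job is first created (my understanding is that the blocksize cannot be changed after the first backup has run, so please correct me if the option name or value format is wrong):

```
# use a larger dedup block size so the local database tracks fewer blocks (default is 100KB)
Duplicati.CommandLine.exe backup "googledrive://backup-folder?authid=..." "C:\Data" --blocksize=1MB --dbpath="C:\Duplicati\laptop.sqlite"
```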
I understand that I used the same topic to talk about different things (which version to use, and features such as database size and corruption recovery), but I thought it was better to post them together, since they all feed into which version would serve me better on these points.
Thanks in advance for any thoughts.