- Source files backup size: 200GB.
- Number of target backup files: ~5100.
- Target backup size: ~185GB.
- Backup timespan: ~2 years.
The backup destination is a second internal hard drive, which is synced by an independent third-party tool.
- On this device, the initial backup was made years ago with the 75MB (remote) volume size.
- The volume size was increased to 200MB, but that of course does not affect existing volumes.
The goal is to decrease the total number of (75MB) target backup volume files. This large number of files (~5000) seems to affect Duplicati's performance negatively on this system. It also makes backup file handling more complicated and, in some contexts, more sluggish.
Is it possible to have Duplicati repack (a set number of) existing volumes to the new size somehow?
I have looked into the compact command, but repacking to a new volume size does not seem to be its purpose.
If it is possible, please provide an example.
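For reference, this is roughly how I experimented with compact. The storage URL and database path are placeholders for my setup, and the option values are what I understood to be the documented defaults (apart from the increased dblock size), so please correct me if I misread them:

```shell
# Placeholder storage URL and --dbpath; adjust for the actual backup job.
Duplicati.CommandLine.exe compact "file://D:\Backups\duplicati" \
  --dbpath="C:\Users\me\AppData\Local\Duplicati\backup.sqlite" \
  --dblock-size=200MB \
  --threshold=25 \
  --small-file-size=20 \
  --small-file-max-count=20
```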
Ideally, over time, every time a regular backup runs, Duplicati would also do some 'transitioning work' and migrate/repack a few of the older, smaller volume files.
It is not a goal in itself that all ~5000 files get repacked in one go; if a fixed percentage or number of them were repacked on every backup, then, in theory, the total number of volume files would shrink after each run. This would also spread out the upload penalty/load on the third-party sync.
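To make the gradual approach concrete, here is a back-of-envelope sketch with my own illustrative numbers (a per-run budget of 100 volumes is an assumption, not anything Duplicati offers): it estimates how many backup runs a gradual migration would take, and roughly how many 200MB volumes would hold the same data as the old 75MB ones.

```python
# Back-of-envelope sketch; numbers are illustrative, not Duplicati internals.
OLD_VOLUMES = 5000      # current ~75MB volume files
OLD_SIZE_MB = 75
NEW_SIZE_MB = 200
REPACK_PER_RUN = 100    # hypothetical per-backup migration budget

def runs_needed(old_volumes, per_run):
    """Number of backup runs until no old-size volumes remain."""
    runs = 0
    while old_volumes > 0:
        old_volumes -= min(per_run, old_volumes)
        runs += 1
    return runs

def final_volume_count(old_volumes, old_mb, new_mb):
    """Rough count of new-size volumes holding the same total data."""
    total_mb = old_volumes * old_mb
    return -(-total_mb // new_mb)  # ceiling division

print(runs_needed(OLD_VOLUMES, REPACK_PER_RUN))                   # 50 runs
print(final_volume_count(OLD_VOLUMES, OLD_SIZE_MB, NEW_SIZE_MB))  # ~1875 volumes
```

So even a modest per-run budget would finish the transition within a couple of months of daily backups, while cutting the file count to well under half.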
However, if it can only be done with a separate CLI command, please let me know.
Setting up a whole new backup is not an option in this context. Although the second drive might have double the storage space, the synced storage space is limited, so the existing backup would need to be purged first. During that transition period there would be no viable backup in the cloud, which is not deemed acceptable. One would also lose access to modified/deleted files that are still in the existing backup if it were completely replaced.