How to repack existing backup remote volumes after volume size change?

  • Size of the source files being backed up: 200 GB.
  • Number of backup files at the target: 5,100.
  • Total backup size at the target: ~185 GB.
  • Backup timespan: ~2 years.

The backup destination is a second internal hard drive, which an independent third-party tool syncs to cloud storage.

  • On this device, the initial backup was made years ago with a 75 MB (remote) volume size.
  • The volume size setting was later increased to 200 MB, but that of course does not affect existing volumes.

The goal is to decrease the total number of (75 MB) target backup volume files. This large number of files (~5,000) seems to affect Duplicati’s performance negatively on this system, and it also makes handling the backup files more complicated and, in some contexts, more sluggish.

Is it possible to somehow have Duplicati repack (a set number of) existing volumes to the new size?

I have looked into the compact command, but repacking volumes to a new size does not seem to be its purpose.
If it is possible, please provide an example.

Ideally, every time a regular backup runs, Duplicati would also do some ‘transitioning work’ and repack some of the older volumes to the new size.
It is not a goal in itself that all ~5,000 files get repacked in one go; if a percentage or a fixed number of them were repacked on every backup, then in theory the total number of volume files would shrink after each run. This would also spread out the upload load on the third-party sync.

However, if it can only be done with a separate CLI command, please let me know.

Setting up a whole new backup is not an option in this context. Although the second drive might have double the storage space, the synced storage space is limited, so the existing backup would have to be purged first. That would leave no viable backup in the cloud during the transition period, which is not deemed acceptable. Completely replacing the backup would also mean losing access to modified/deleted files that are still retained in it.

How to repack existing backup remote volumes after volume size change?

There’s no GUI or CLI tool specifically designed for all the things you’re after.

The COMPACT command can be persuaded to increase (not decrease) the volume size; however, I’m not sure how much it will improve your performance, and it’s hard to control how much work it does in one run. There are a couple of forum posts about this:

Feature Request: Time Limit for Compaction

Compact - Limited / Partial

I’m not sure it would need to be purged. The current compact does a little at a time, downloading files and repacking their still-in-use blocks into a new file, which is then uploaded.

v2.1.0.119_canary_2025-05-29

Preserve space during compact by deleting files early, instead of at the end of the compact

You can see this in the CommandLine using the --dry-run flag to prevent actual changes:

  Downloading file duplicati-b79b25742c67f4def900c8edf919fee12.dblock.zip.aes (1.93 MiB) ...
  Downloading file duplicati-b70708693fbbb4707a4da5f1f83dfc1e6.dblock.zip.aes (1.05 MiB) ...
  Downloading file duplicati-b42f28d1af6724da9bd00ecca4e31d3ac.dblock.zip.aes (2.11 MiB) ...
  [Dryrun]: Would upload generated blockset of size 10.91 MiB
  [Dryrun]: Would delete remote file: duplicati-bf64e95282d714c3492449377ed87f142.dblock.zip.aes, size: 9.99 MiB
  [Dryrun]: Would delete remote file: duplicati-i7b6b165265204c3ca35f66120af65e1f.dindex.zip.aes, size: 66.29 KiB
  [Dryrun]: Would delete remote file: duplicati-b8d0c9f99438c4af1bae5234210af768c.dblock.zip.aes, size: 4.73 MiB
  [Dryrun]: Would delete remote file: duplicati-i9807acdf586c430e849b44bd8290d4f6.dindex.zip.aes, size: 42.64 KiB
  [Dryrun]: Would delete remote file: duplicati-b79b25742c67f4def900c8edf919fee12.dblock.zip.aes, size: 1.93 MiB
  [Dryrun]: Would delete remote file: duplicati-i76a6f06a0a574ed7ae5185cf8c384c01.dindex.zip.aes, size: 38.25 KiB
  Downloading file duplicati-ba24dc45e49cf4c7c91ffd11678c7acee.dblock.zip.aes (1.28 MiB) ...
  Downloading file duplicati-b46124700bbb244db869c1dc7fa80ee31.dblock.zip.aes (279.58 KiB) ...
  [Dryrun]: Would upload generated blockset of size 3.68 MiB
  [Dryrun]: Would delete remote file: duplicati-b70708693fbbb4707a4da5f1f83dfc1e6.dblock.zip.aes, size: 1.05 MiB
  [Dryrun]: Would delete remote file: duplicati-ifea8b78ddaad47d5a87c0979aed10f63.dindex.zip.aes, size: 37.06 KiB
  [Dryrun]: Would delete remote file: duplicati-b42f28d1af6724da9bd00ecca4e31d3ac.dblock.zip.aes, size: 2.11 MiB
  [Dryrun]: Would delete remote file: duplicati-i8bb745e6fdb1413ea922004167ac0cce.dindex.zip.aes, size: 41.67 KiB
  [Dryrun]: Would delete remote file: duplicati-ba24dc45e49cf4c7c91ffd11678c7acee.dblock.zip.aes, size: 1.28 MiB
  [Dryrun]: Would delete remote file: duplicati-i055b72e2df354f7285e994986b062c06.dindex.zip.aes, size: 37.67 KiB
  [Dryrun]: Would delete remote file: duplicati-b46124700bbb244db869c1dc7fa80ee31.dblock.zip.aes, size: 279.58 KiB
  [Dryrun]: Would delete remote file: duplicati-idfa32ce5d0d246b998e35f39b3069810.dindex.zip.aes, size: 1.26 KiB

Is an old backup in the cloud viable? A long compact may delay the usual backup. My test run above used --dblock-size=11MB because 10MB did nothing, while 20MB did more than I wanted. Basically, it appears only “roughly” controllable.

  Duplicati.CommandLine.exe compact destination-URL --dbpath=path-to-database --passphrase=REDACTED --dblock-size=11MB --dry-run=True

I could get 10MB to do something by adding --small-file-max-count=0, and I had some luck reducing the amount of processing per run by starting with --threshold=1000 and lowering it. Yes, that option is a percentage, but it seems values beyond 100 are accepted.
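
Putting those together, a combined dry run might look something like the sketch below. The destination URL, database path, and the --threshold value here are placeholders you would tune for your own backup; remove --dry-run=True only once the previewed amount of work looks right.

  Duplicati.CommandLine.exe compact destination-URL --dbpath=path-to-database --passphrase=REDACTED --dblock-size=10MB --small-file-max-count=0 --threshold=500 --dry-run=True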

So it’s sort of possible but kind of indirect and finicky, persuading the wrong tool…

Just had a somewhat expected “compact” avalanche that ran about two hours (my backup is pretty small, though; large backups would compact for longer).

My Options screen “Remote volume size” had previously been set to 10 MB, per my preference for a little more uploading and downloading, for better testing.

Release: 2.3.0.0 (Stable) 2026-04-14 presumably changed the Options screen by design, deemphasizing that formerly top-of-page setting (which also confused people).

Unfortunately, the setting got deleted in the process of being moved to “Advanced options”, meaning any such option is removed from the config, so the backup now relies on the 50 MB default.

I watched Duplicati compact some (but not all) of my 10 MB volumes into 50 MB ones. The mathematics of the algorithm aren’t totally clear to me, but for your backup you might already have had some 75 MB volumes compact into 200 MB ones. I’m not sure of the exact options to force the others. Regardless, if version 3.0.0 of the new UI edits your settings this way, your backup will probably start uploading 50 MB volumes. The restore developer says large volumes don’t help restores, but what about work besides restore?
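
If you want to try coaxing the remaining 75 MB volumes from the CLI, a variation of the earlier dry-run sketch might work. This is untested: the 200MB value matches your new setting, but whether compact will actually pick up every old volume is unclear.

  Duplicati.CommandLine.exe compact destination-URL --dbpath=path-to-database --passphrase=REDACTED --dblock-size=200MB --small-file-max-count=0 --dry-run=True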

What is your take on it generally? You give a general note, then single out backup speed.

Input from developers about non-restore operations would help; meanwhile, there is a new bug:

As a user noted, you can use the old UI after a 3.0.0 install and keep large volumes.
Personally, I’m probably just going to live with default volumes until a Stable fix.