I have an existing backup of about 40GB, and the dblock files are 750MB.
Is there a possibility to “convert” the existing backup with 750MB files into a backup with 100MB files?
Just changing --dblock-size only affects newly created files, not the existing ones.
Background for the question: I have to move to a different “cloud server”, and the new one has very limited download speed (the server sits behind asymmetric DSL, so its upload is the bottleneck), which is why a dblock size of 100MB would be better. With the existing 750MB files, verification takes a very long time. I would prefer to keep --backup-test-samples=X, just with smaller files.
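To put rough numbers on it (assuming, purely for illustration, an effective download rate of 10 Mbit/s from the remote server): a 750MB dblock is about 6,000 Mbit, so roughly 10 minutes per sampled file, while a 100MB dblock is about 800 Mbit, or roughly 80 seconds. With --backup-test-samples=X, that difference scales linearly with X.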
The conversion can be done locally. At the moment I have the whole backup on local USB storage, and this storage will then be transported to the server (at a friend’s house).
Although it should be technically possible, I can’t find a tool that does this (semi-)automatically.
I was hoping that the recompress command of the Duplicati.CommandLine.RecoveryTool.exe would accept the --dblock-size option, but this option seems to be ignored.
It would indeed be a useful feature to be able to change the upload volume size.
The recompress command in the Recovery Tool doesn’t seem to support resizing of archives; I guess the only thing it can do is switch from 7z to zip and vice versa. However, resizing archives would be a nice extra feature for the recompress command.
The --small-file-size and --small-file-max-count options are a nice trick to consolidate smaller archives into a single larger one, but the OP wants to do the opposite: split a large archive into smaller ones.
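For reference, a minimal sketch of that consolidation setup, as I understand these options (the values are illustrative, not recommendations):

--small-file-size=80MB
--small-file-max-count=20

With these in a job’s advanced options, compact treats volumes below 80MB as “small” and merges them into full-size volumes once 20 of them have accumulated.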
I tested the COMPACT process on a copy of a job and it seems to have done the trick - eventually. Feel free to test it yourself, or assume my results are valid and apply the process to your live backup (YMMV).
My test process…
I made a copy of an existing job by:
copy backend files to a new folder (see the copy sketch after this list)
export job as file
import job from file
edit name so unique
edit dest. to point to copy
disable schedule
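The backend copy in the first step is just a plain file copy; on Windows, for example, something like this (the paths are placeholders for wherever your backend files live):

robocopy “D:\DuplicatiBackend\MyJob” “D:\DuplicatiBackend\MyJob-copy” /E

or an equivalent cp -r on Linux.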
I then ran the following against it by:
Go to the job menu -> Commandline
Set “Command” to “compact”
Leave “Target URL” alone
Remove all “Commandline arguments”
Use “Edit as text” for “Advanced options” and replace EVERYTHING with this:
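As a minimal sketch (illustrative, with values matching my log below, and not necessarily the complete list), the core of it is:

--dblock-size=40MB
--threshold=0

plus whatever job-specific options the compact still needs from the original contents of the box, such as --dbpath and, for an encrypted backup, --passphrase.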
It seems to have happily re-compressed 30% of the files from the default 50MB dblock size to the requested 40MB on the first run, then stopped without error. I ran the process again, and it processed the rest of the files to the new 40MB dblock size.
Why it took two runs I don’t know.
I have a full-verification test running now just to make sure everything is good, as this also seems to have converted my very old .7z dblock files to .zip (yay!).
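For the verification itself, a sketch of what I mean (assuming the standalone CLI; the target URL, database path and passphrase are placeholders you would take from the job’s “Export as Command-line”):

Duplicati.CommandLine.exe test “<target-url>” all --dbpath=“<path-to-local-db>” --passphrase=“<passphrase>” --full-remote-verification=true

The same can be run from the job’s Commandline page by selecting the test command.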
Note that during the compact run the progress bar just said “Running task: Starting backup …” the whole time, so estimating how long it will take is tricky.
For those that care, here are the first & last few lines of the PROFILING console log:
The operation Compact has started
Starting - Running Compact
Found 0 fully deletable volume(s)
Found 0 small volumes(s) with a total size of 0 bytes
Found 93 volume(s) with a total of 0.00% wasted space (0 bytes of 4.08 GB)
Compacting because there is 0.00% wasted space and the limit is 0%
Starting - RemoteOperationList
Backend event: List - Started: ()
Listing remote folder ...
Backend event: List - Completed: (276 bytes)
RemoteOperationList took 0:00:00:00.035
...
Starting - CommitCompact
CommitCompact took 0:00:00:00.262
Running Compact took 0:00:28:07.987
Return code: 0
Nice hack of the compact command!
I guess it has never been the intention to use this command for resizing remote volumes, but it looks like it is more convenient than downloading everything to local storage, converting, and re-uploading with the Recovery Tool.
Do you think this was a simple enough process to be considered a “solution”?
Because I’m a glutton for punishment, I wiped my first test and repeated it - this time it handled everything in a single run, so I’m guessing something interrupted my first test.