Back up SVN repository

We are currently setting up backups with Duplicati for a big SVN repository. Previously this was done with a simple `svnadmin hotcopy` to a different drive. After reading a bit about the best ways to back up SVN, we came to the conclusion that `svnadmin dump` is probably a better approach.
Now the big question we are facing is how Duplicati will deal with the generated dump file. The dump command generates a single file, which changes every time a commit is added to the repository. How will Duplicati handle this? Will it diff the previous and new dump files and only upload a patch, or will it always upload the whole changed file?
The latter would be quite bad, as the repository is over 30 GB and we would like to keep the smart retention plan, so that would create a lot of duplicated data.
Does anyone have experience using Duplicati to back up an SVN repository?

Welcome to the forums @Mats391

Duplicati breaks files into blocks (small chunks of a file), which it then stores in dblocks (zip files). When Duplicati looks at a file, it compares its blocks against those already backed up: if a block has changed, it gets backed up; otherwise the block is the same as before and Duplicati moves on.

The deduplication engine should find the modified blocks in the file and only deal with those, but scanning that single 30 GB file for changes could take a long time.
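As an illustration only (this is not Duplicati's actual code, and the 100 KiB block size is just an assumption for the sketch), fixed-size block deduplication boils down to hashing each chunk of the file and re-uploading only the chunks whose hashes changed:

```python
import hashlib

BLOCK_SIZE = 100 * 1024  # assumed block size for this sketch (100 KiB)

def block_hashes(path, block_size=BLOCK_SIZE):
    """Split a file into fixed-size blocks and hash each one."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def changed_blocks(old_hashes, new_hashes):
    """Indices of blocks that differ from (or were added since) the last backup."""
    return [i for i, h in enumerate(new_hashes)
            if i >= len(old_hashes) or old_hashes[i] != h]
```

With a dump file that only grows at the end, all the earlier blocks keep their hashes, so only the trailing blocks would need uploading.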

Alright, thanks for clearing this up! We will experiment with different dump scripts then. We will probably have to set up incremental dumps with an occasional full dump.
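For what it's worth, a minimal sketch of that kind of incremental scheme (the repository path and the revision bookkeeping are hypothetical, so adapt them to your setup): remember the last revision you dumped, then dump only the new range with `--incremental`:

```python
import subprocess

REPO = "/srv/svn/myrepo"  # hypothetical repository path

def youngest_revision(repo=REPO):
    """Ask svnlook for the repository's latest revision number."""
    out = subprocess.run(["svnlook", "youngest", repo],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

def dump_command(last_dumped, youngest, repo=REPO):
    """Build the svnadmin invocation for the next incremental dump,
    or return None if there is nothing new to dump."""
    if youngest <= last_dumped:
        return None
    return ["svnadmin", "dump", repo, "--incremental",
            "-r", f"{last_dumped + 1}:{youngest}"]
```

Adding `--deltas` as well makes each dumped revision store diffs instead of full text, which shrinks the dump further at the cost of slower dumping and loading.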

I’m not sure why scanning a single 30 GB file would be worse than scanning 300 x 100 MB files, though. If it is, the file system could be seen as having a problem - unless Duplicati loads the whole file into memory, an unlikely proposition.

In the OP’s place, I’d take a look at two dumps and see if the beginnings look identical. If they do, it’s likely that new data is appended at the end of the file - an ideal scenario for deduplication. If the whole file is regenerated completely differently each time, dedup can’t possibly help.
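One quick way to check that is to measure how long the common prefix of two dumps is - a rough sketch (file paths are placeholders):

```python
def common_prefix_bytes(path_a, path_b, chunk=1024 * 1024):
    """Count how many leading bytes two files share."""
    total = 0
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            a, b = fa.read(chunk), fb.read(chunk)
            if a == b and a:
                total += len(a)   # whole chunk identical, keep going
                continue
            # chunks differ (or EOF): find the first differing byte
            for x, y in zip(a, b):
                if x != y:
                    break
                total += 1
            return total
```

On the command line, `cmp old.dump new.dump` answers the same question by reporting the offset of the first differing byte.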