Performance is really slow with high disk usage

In my experience I see something I'd call a "double copy"… even in the least complicated setup:

Backup source: D:
Temp dir: %temp% (on C:)
Destination: E:

First, files are read from D:, dblocks are created in the temp dir, and then they are moved to the destination.

No doubt the job gets done, but it takes a lot of resources. In this case, even with compression and encryption disabled, HDD I/O is high: every dblock is written to the temp dir on C: and then read back and written again to E:.
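As a rough sketch, a job like that could look like the command below (the drive letters and paths are just placeholders for this example). With no --tempdir set, the dblocks are still built in %temp% on C: before being sent to E:.

```
rem Baseline: dblocks are created in %temp% on C:, then copied over to E:
Duplicati.CommandLine.exe backup "file://E:\Backup" "D:\Data" --no-encryption --zip-compression-level=0
```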

For local storage like you're using, it might work better to use --tempdir= to point somewhere on E: and then --use-move-for-put to move the resulting files instead of copying them, roughly as sketched below.
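Something along these lines, assuming the destination is a plain folder on E: reached through the file backend (again, the exact paths are made up for the example; the ^ is just cmd-style line continuation):

```
rem Same job, but temp files are created on E: and then moved (renamed) into place
Duplicati.CommandLine.exe backup "file://E:\Backup" "D:\Data" ^
  --tempdir=E:\DuplicatiTemp ^
  --use-move-for-put ^
  --no-encryption --zip-compression-level=0
```

With the temp dir on the same volume as the destination, the final step should become a rename within E: rather than a second full copy across drives.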


2 ideas from @kenkendk that may improve performance and reduce disk usage:

Multithreaded backup engine

Processing DBLOCKS and Temp files in memory

I split this off to its own "disk usage" specific topic, as fixes for this and the CPU usage issues aren't likely to overlap much.