Why did it fail because of disk space?

Should be fine, as far as I know. If somehow you get into a slow spot in generating tmp files, you might miss some upload opportunity by reducing the amount of buffering, but that would merely delay the finish of the backup.

Yes, it can happen. With some storage types there are limits that one can hit. The 5000 limit that OneDrive either had or still has causes trouble. I haven’t heard much about people finding a Google Drive limit, however enormous numbers of files (or, for that matter, hash blocks -- the deduplication chunks of files) cause the tracking done with SQL operations on the local job database to grow slow. Scaling for large sources isn’t very good. Sometimes people raise their --blocksize, because the default 100KB means tracking 6 TB / 100 KB = about 64 million hash blocks, and if the database ever needs to be recreated, doing those inserts will be very slow…
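
As a rough sketch of that arithmetic (assuming a 6 TB source; the numbers are back-of-the-envelope estimates, not anything Duplicati reports):

```python
# Back-of-the-envelope block-count estimate for a given source size and --blocksize.
# Values are illustrative assumptions, not output from Duplicati itself.

def block_count(source_bytes: int, blocksize_bytes: int) -> int:
    """Approximate number of hash blocks the local database must track."""
    return source_bytes // blocksize_bytes

TIB = 1024 ** 4
KIB = 1024

source = 6 * TIB                      # assumed 6 TB source
for blocksize_kib in (100, 1024, 5120):
    blocks = block_count(source, blocksize_kib * KIB)
    print(f"--blocksize={blocksize_kib}KB -> about {blocks / 1e6:.0f} million blocks")
```

Raising --blocksize shrinks the block count (and the database) roughly in proportion, which is why people with multi-TB sources sometimes do it before the first backup.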

Using a huge dblock-size can backfire at restore time, because file updates are put in whatever dblock file is being produced at the time, so a restore might have to download many of those big files to gather the chunks it needs.
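
To make that tradeoff concrete, here is a toy sketch with hypothetical numbers (not measurements): a file whose blocks ended up scattered across several volumes forces the restore to fetch each whole volume.

```python
# Toy illustration: a file whose blocks landed in several dblock volumes
# forces a restore to fetch each whole volume. Numbers are hypothetical.

GIB = 1024 ** 3
MIB = 1024 ** 2

dblock_size = 1 * GIB        # a "huge" --dblock-size
file_size = 100 * MIB        # the file being restored
volumes_touched = 8          # its blocks spread over 8 volumes after edits

download = volumes_touched * dblock_size
print(f"Restore of a {file_size // MIB} MiB file may download "
      f"{download // GIB} GiB of dblock volumes.")
```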

“Duplicati creates to many backup files” is a recent discussion on settings to use. Some of it depends on usage.

Probably reasonable. A single TCP connection can only push so much data out, and Duplicati doesn’t yet have the ability to start parallel upload threads, like some programs do when they try to fill up the network. What sort of single-threaded upload speeds can you get from something else to a far-remote destination?
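
If you want a quick way to gauge single-stream throughput outside Duplicati, something like this rough sketch works (the URL is a placeholder; any far-remote server you control that accepts an HTTP PUT will do):

```python
# Rough single-stream upload timing. Replace the placeholder URL with a
# server you control; this only approximates what one TCP connection can do.
import time
import requests  # pip install requests

payload = b"\0" * (100 * 1024 * 1024)    # 100 MiB of dummy data
url = "https://example.com/upload"       # placeholder destination

start = time.monotonic()
requests.put(url, data=payload, timeout=600)
elapsed = time.monotonic() - start

mbps = len(payload) * 8 / elapsed / 1e6
print(f"Single-stream upload: {mbps:.1f} Mbit/s over {elapsed:.1f} s")
```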

Names would help. The ones Duplicati makes usually begin with duplicati- and include dblock, dindex, or dlist in the name. The only ones that use --dblock-size are the dblock files. Other files can be large or small.
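
If it helps to check what is actually taking the space, a quick sketch like this groups the destination files by type based on those name patterns (the folder path is a placeholder for wherever the backup files live):

```python
# Summarize Duplicati backend files by type, judging from the file name.
# "/path/to/destination" is a placeholder, not a real path.
from collections import defaultdict
from pathlib import Path

dest = Path("/path/to/destination")
totals = defaultdict(lambda: [0, 0])     # type -> [count, bytes]

for f in dest.iterdir():
    if not f.is_file() or not f.name.startswith("duplicati-"):
        continue
    kind = next((k for k in ("dblock", "dindex", "dlist") if k in f.name), "other")
    totals[kind][0] += 1
    totals[kind][1] += f.stat().st_size

for kind, (count, size) in sorted(totals.items()):
    print(f"{kind}: {count} files, {size / 1024**3:.2f} GiB")
```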
