Hi @Hassan_Ghaseminia welcome to the forum.
Duplicati tracks data in units of “blocks”. The default block size is 100KiB for v2.0.8.1 and earlier. With 100KiB blocks, Duplicati needs to keep track of approximately 10 million blocks per TiB, and perhaps double that to account for file metadata. You can set the block size with --blocksize=1mb, but you cannot change the blocksize of a backup that has already created remote files. Depending on your data patterns, you could consider a 10MiB blocksize, or larger, to reduce the number of blocks.
See the article on sizes for more general guidance.
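To make the numbers concrete, here is a rough sketch of the block-count math and of setting the block size when a backup is first created. This assumes the duplicati-cli wrapper shipped with the Linux packages; the destination URL and source path are placeholders, not your actual settings:

```
# 1 TiB of data at the default 100KiB block size:
#   1 TiB / 100 KiB ≈ 10.7 million blocks (plus metadata blocks)
# The same data at a 1MiB block size:
#   1 TiB / 1 MiB ≈ 1.05 million blocks

# The option only takes effect when the backup is first created;
# it cannot be changed once remote files exist.
duplicati-cli backup "s3://my-bucket/backup" /data/to/backup --blocksize=1MB
```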
The “the job gets deleted” part sounds like Docker is restarting and you have not set up a data folder. Duplicati uses the data folder to store the database with your backup configurations; without a mounted folder, that data lives inside the container and is wiped on updates and restarts. I do not understand why it would restart, but maybe it is related to the “out-of-disk” issue you mention.
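As a minimal sketch, assuming the official duplicati/duplicati image (other images such as linuxserver/duplicati use different paths), mounting a host folder over the container’s data directory keeps the configuration across restarts:

```
# Map a host folder to the container's /data directory so the backup
# configuration databases survive container updates and restarts.
# The host path /srv/duplicati/config is just an example; adjust to your setup.
docker run -d \
  --name duplicati \
  -p 8200:8200 \
  -v /srv/duplicati/config:/data \
  duplicati/duplicati:latest
```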
Duplicati stores a local database that serves as a lookup for the remote destination. This database makes various operations faster but takes up some space; most of it goes to block hashes and file paths, with some additional cache space used for parallel processing.
You can reduce the database size by adjusting the blocksize as described. I am not sure what remote volume size you are using, but Duplicati builds multiple remote volumes in parallel (default 4) while creating the output, so a large remote volume size also increases the temporary local storage used. You can adjust this with --asynchronous-upload-limit=4.
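As a rough sketch of how those two options interact (the 200MiB remote volume size is only an example value, and the destination/source are the same placeholders as above):

```
# Temporary local space used for staging uploads is roughly:
#   dblock-size x asynchronous-upload-limit
# e.g. with a 200MiB remote volume size and the default limit of 4:
#   200 MiB x 4 ≈ 800 MiB of temporary files
# Lowering either value reduces the local space needed:
duplicati-cli backup "s3://my-bucket/backup" /data/to/backup \
  --dblock-size=50MB \
  --asynchronous-upload-limit=2
```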