Backup job deleted before completion

Hello everyone,

I’m having an issue with Duplicati. I want to back up a drive that contains about 2 TB of data, but the backup process doesn’t complete. The backup is supposed to be done on another server via FTP.

I don’t know what the reason is. Usually, it doesn’t give an error, but the process is very slow, taking about two to three days to make a backup. My question is, is this tool suitable for backing up such a large volume of data?

I’ve set up the service in Docker, and everything seems to be working fine, but the job gets deleted and no backup is taken. When I start the job, it creates the files and the process begins, but after a day, while it still shows as running, I notice that there is no backup and the job has been deleted.

Could anyone help me understand what’s going wrong and how to fix it?

Thank you.

Another problem is that the server hard drive, where Duplicati is installed, has 150 GB of free space. When I start a job, this free space gets filled up. The backup destination is on another server, so why does the hard drive where Duplicati is running get filled up?

The question is, why does Duplicati use so much cache? How can I transfer the cache volume to the backup server so that it doesn’t take up space on the local drive?

Thank you for your help.

Hi @Hassan_Ghaseminia, welcome to the forum.

Duplicati tracks data in units called “blocks”. The default block size is 100 KiB for v2.0.8.1 and earlier. Using 100 KiB blocks means Duplicati needs to keep track of approximately 10 million blocks per TiB, and perhaps double that to account for file metadata. You can set the block size using --blocksize=1mb, but you cannot change the blocksize of a backup that has already created remote files. Depending on your data patterns, you can consider using a 10 MiB blocksize, or larger, to reduce the number of blocks.
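
As a rough illustration with the 2 TB source you mentioned: 2 TiB ÷ 100 KiB is roughly 20 million blocks to track, whereas 2 TiB ÷ 10 MiB is roughly 200,000 blocks, which keeps the local database far smaller.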

See the article on sizes for more general guidance.

The “the job gets deleted” part sounds like Docker is restarting and you have not set up a data folder. Duplicati uses the data folder to store the database with your backup configurations; without a mapped folder, that data sits inside the container and is wiped on updates and restarts. I do not understand why it would restart, but maybe it is related to the “out-of-disk” issue you mention.
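
As a sketch (the service name and host folder names here are placeholders; follow your image’s documentation for the exact container paths), a docker-compose mapping that keeps the databases on the host could look like this:

services:
  duplicati:
    image: duplicati/duplicati
    volumes:
      - ./data:/data      # Duplicati-server.sqlite and the job databases live here
      - /:/source         # the data you want to back up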

Duplicati stores a database that serves as a lookup for the remote destination. This database makes it faster to perform various operations but takes up some space. The primary space taken is from the hashes and file paths, with some additional cache space for parallel processing.

You can reduce the database size by adjusting the blocksize as described. I am not sure what remote volume size you are using, but Duplicati builds multiple remote volumes in parallel (default 4) while creating the output. If you have a large remote volume size, this will affect the local storage space used. You can tune this with the --asynchronous-upload-limit option (default 4).
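
Roughly speaking, the local staging space is about the upload limit times the remote volume size: 4 × 50 MB (the default) is around 200 MB, whereas with, say, a 5 GB volume size it would be around 20 GB, which starts to matter on a drive with only 150 GB free.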


Thank you very much, I learned a lot of new things.

I had set the block size to 5 GB; according to the article, I will reduce it to 5 MB.

For Docker I set these volumes:

volumes:
  - ./config:/config
  - ./db:/data/Duplicati
  - /:/source

Do I need to set another volume so that nothing is lost when restarting?

If you did set that originally (which seems kind of hard to even see), what is your Remote volume size?
That’s on the Options screen; it links to basically the same article, and people often misconfigure it.

Trying to put the whole backup into one file winds up filling up /tmp on the system Duplicati uses; however, in a Docker case (I don’t use Docker), I’d have thought it would leave host space alone.
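
If the temporary volume files do end up inside the container (for example under /tmp), one way to keep them on a disk of your choosing is to map a host folder over that path; the folder name below is just an example, not something the image requires:

volumes:
  - ./duplicati-tmp:/tmp    # temporary upload volumes land here instead of the container’s writable layer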


Please follow the directions for whatever Docker image you’re on. They vary. Examples:

https://hub.docker.com/r/duplicati/duplicati

https://hub.docker.com/r/linuxserver/duplicati

If you need to look around, e.g. with docker exec (or maybe even with the Duplicati GUI), the configuration database is Duplicati-server.sqlite. Job databases have names that start with random letters.


I am currently using the image duplicati/duplicati, and I followed the instructions without any issues with Docker itself. It came up easily and works. My biggest challenge is related to the block size. Now I want to test the default size.

The data inside the container gets deleted when the Docker container restarts. I defined volumes for this, but I am not sure if these volumes are enough. Could this be causing the problem? Also, I had set the block size to 5 GB, which might not be appropriate.

Directions call for /data, which is not what you set up. Possibly your subfolder works too.

Regardless, as soon as Duplicati starts, I think you have a Duplicati-server.sqlite created.

How To Use docker exec to Run Commands in a Docker Container will let you look for it.
Using the Duplicati Source browser is probably easier. If it’s in Docker, is it also on the host?


This volume mapping fixed my bug with the Docker container restarting:

  - ./data:/data

It should be faster for non-initial backups, as only changes are uploaded. Depending on file use, possibly you have a huge number of changes. The total file count can also slow things, because the backup has no way of knowing what changed except by looking. The job log has a summary:


The Examined figure in that summary is your Source area; yours is likely larger. Duplicati only opens a file if the examination suggests the file changed. Opened files are read, and only changed blocks are uploaded. The Complete log has:

“BytesUploaded”

or you can look at your FTP server, sorted by date. This will also hint at the Remote volume size, which I hope is something reasonable, as its default is 50 MB. If the destination is local, you can go a bit higher.
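
As a rough illustration: a 2 TB initial backup at the default 50 MB remote volume size ends up as something like 40,000 dblock files on the destination (fewer with compression and deduplication), so the FTP listing sorted by date gives a quick picture of both upload progress and the volume size actually in use.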
