Backup_ProcessingFiles step alone takes more than 9 hours

I’m really finding the tool fascinating.
You are to be congratulated.

However, I have two questions.
I’m using it on an email server from the company Kerio.
I have a folder with all the company’s emails; this folder is 900 GB.

However, there is a Backup_ProcessingFiles step that alone takes more than 9 hours.

I wanted to know whether this part is normally this slow.

My other question is about configuring where it compresses this data. Having it compress on the same drive as the folder we want to back up fills the main drive and crashes the server.

I have another hard drive and I want to configure the program to use it as temporary storage; from that drive the program would then upload to Google Drive.

Can you help me?

The tempdir option might help you with the disk space issue.
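If it helps, in the GUI you can add tempdir as an advanced option on the job’s Options page; as a command-line option it would look something like this (the path is only a placeholder for a folder on your second drive):

      --tempdir="D:\Duplicati-temp"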


Welcome to the forum @Silvio_Tavares

Also make sure that you didn’t greatly increase the Remote volume size on the Options page. See the note.

Choosing Sizes in Duplicati

At the default value, you’ll see roughly 50 MB files flowing through the Temp folder, and the upload queue size is limited.
I’m not sure how the main drive would fill unless Temp space is very low or the remote volumes are very large…
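For a rough sense of scale (assuming the defaults of 50 MB remote volumes and, if I recall the default correctly, an asynchronous-upload-limit of 4):

      50 MB per volume × (4 volumes queued for upload + 1 being built) ≈ 250 MB of Temp space

So with default settings the Temp usage should stay modest compared with a 900 GB source.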

That step is approximately the entire backup, isn’t it? Sometimes the post-backup delete and compact take time, but that wouldn’t happen on an initial backup. Or is this a non-initial backup? Those are typically faster because only changed data is backed up. A job log has statistics, and the Complete log in it has even more.


      "BackendStatistics": {
      "RemoteCalls": 26,
      "BytesUploaded": 39560587,
      "BytesDownloaded": 10514318,
      "FilesUploaded": 18,
      "FilesDownloaded": 6,

Going to Google Drive, beware of Google’s 750 GB daily upload limit, although maybe yours is less after compression. That and encryption add some time to the processing. Compression and deduplication happen based on a blocksize value that defaults to 100 KB, which means your roughly 1 TB backup is potentially creating about 10 million blocks that are individually tracked. I suggest you raise that to 1 MB; however, it can’t be changed without starting the backup over, for reasons detailed (and being tested) here:

Why the heck CAN’T we change the blocksize?
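For a rough sense of what that means for your data (taking the 900 GB source size, before compression, as an approximation of the backup size):

      900 GB ÷ 100 KB ≈ 9.4 million blocks at the default blocksize
      900 GB ÷   1 MB ≈ 0.9 million blocks at the suggested setting

If you do start over, blocksize is another advanced option, e.g. --blocksize=1MB.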

Does this look like MBOX format? That might be deduplication-friendly. Is this Kerio Connect? What OS?
Be careful of “live” backups that might get an inconsistent view. I don’t know if the server supports VSS.
For complex systems, it’s sometimes best to use dump tools provided by the system, then back that up.
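If the server is on Windows and you want Duplicati to attempt a VSS snapshot, there is a snapshot-policy advanced option; this is only a sketch, and whether Kerio’s data is actually consistent under VSS is a separate question:

      --snapshot-policy=on

As far as I know, VSS also requires Duplicati to run with administrator rights.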