Time to complete backup job has tripled in the last week

I have been using Duplicati for some 4 or 5 months without issue.
But recently (9th August 2021) a job did not run - and since that missed run, the backups have been taking about 2.5 hours, whereas prior to it the backup jobs were always between 40 and 60 minutes

I have checked the logs but can’t see anything which stands out - though I am also not too sure what I should be looking for
I know that the server running Duplicati was online and working on the 9th when the backup did not run, as I can see in Windows Event Viewer that other processes were working
There are no errors about Duplicati in Windows Event Viewer either

Windows Server 2019 (1809), OS build 17763.1397
New server as of 19/10/2020 running an Intel Xeon E5-2665 CPU (16 cores), 32GB RAM, 4TB SSD storage

The server is used as a file server
The job in question backs up the files from the local file server to a network location
This network location is another server on the LAN - which is used only for this backup - no issues with this other server either

For further troubleshooting I copied a folder from my PC to the file server running Duplicati - the folder was exactly 1GB, made up of circa 500 files, large and small
The copy from my PC to the file server running Duplicati took about 25 seconds
I then copied this same folder to the backup server (to test network performance) and it took only 22 seconds - so to me this seems to rule out the network, the backup server (which receives the backups from Duplicati), and the file server itself which is running Duplicati
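As a rough back-of-envelope, those copy times work out to approximately:

1 GB / 25 s ≈ 40 MB/s (PC to file server)
1 GB / 22 s ≈ 45 MB/s (file server to backup server)

so sustained throughput between the machines looks healthy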

Thanks in advance for any help!

Let’s first take a look at the basic stats. In the Duplicati web UI, click on your backup job to expand the options, then click Show Log. You’ll see a listing of the backup operations. Click on one that took a long time and one that took the normal (short) amount of time. Show us the “time” and “source files” stats for the two different backup operations. It looks something like this:
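(Illustrative values only - the exact labels can vary a little between versions, but the important fields are the duration and the examined/opened/added file counts and sizes:)

Duration: 00:42:10
BeginTime: 17/08/2021 10:00:00 PM
EndTime: 17/08/2021 10:42:10 PM
ExaminedFiles: 750000
OpenedFiles: 1200
AddedFiles: 950
ModifiedFiles: 250
SizeOfExaminedFiles: 1650000000000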

Thanks for your reply
Attached are 3 screenshots: 2 of good runs with both small and large amounts of data (opened, added or modified) and 1 of a job that is taking too long



Thanks - my initial guess was that maybe it was a compact phase, but I don’t see that happening on those jobs.

Can you tell me which options you’ve customized? Did you adjust the remote volume size (default 50MB) or the deduplication block size (default 100KB)?
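For reference, if those had been changed they would show up among the job’s advanced options along these lines (the values below are just the defaults written out, purely as an illustration):

--dblock-size=50MB
--blocksize=100KB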

No options changed from memory… All default except the Source and Destination, plus yesterday I added a URL for Duplicati Monitoring - but this doesn’t seem to have worked (yet - it needs some fixing once I fix this performance bug)
Here is the exported config JSON - I have replaced any confidential info such as IP addresses, URLs, domain names, etc.

{
  "CreatedByVersion": "2.0.5.1",
  "Schedule": {
    "ID": 6,
    "Tags": [
      "ID=8"
    ],
    "Time": "2021-08-18T12:00:00Z",
    "Repeat": "1D",
    "LastRun": "2021-08-17T12:00:00Z",
    "Rule": "AllowedWeekDays=Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday",
    "AllowedDays": [
      "mon",
      "tue",
      "wed",
      "thu",
      "fri",
      "sat",
      "sun"
    ]
  },
  "Backup": {
    "ID": "8",
    "Name": "Backup E Drive",
    "Description": "",
    "Tags": [],
    "TargetURL": "file://ServerLocation - I have removed this",
    "DBPath": "C:\\Users\\administrator.DOMAINNAME\\AppData\\Local\\Duplicati\\ZDARFALCWQ.sqlite",
    "Sources": [
      "E:\\10.Sales Plotting Requirements\\",
      "E:\\11.1. ACCOUNTING DATA HCC\\",
      "E:\\11.2.Confidential Accounting\\",
      "E:\\11.ACCOUNTING DATA\\",
      "E:\\12.CONTRACT ADMIN PRIVATE\\",
      "E:\\13. IT\\",
      "E:\\15. Management Confidential\\",
      "E:\\16. Graphic Design\\",
      "E:\\16. Legals\\",
      "E:\\18. Marketing Agreements\\",
      "E:\\General\\",
      "E:\\Metrik Constructions\\",
      "E:\\Queensland Retail Sales\\"
    ],
    "Settings": [
      {
        "Filter": "",
        "Name": "encryption-module",
        "Value": "aes",
        "Argument": null
      },
      {
        "Filter": "",
        "Name": "compression-module",
        "Value": "zip",
        "Argument": null
      },
      {
        "Filter": "",
        "Name": "dblock-size",
        "Value": "50mb",
        "Argument": null
      },
      {
        "Filter": "",
        "Name": "--send-http-url",
        "Value": "Our-Log-URL-addresss-was-here",
        "Argument": null
      }
    ],
    "Filters": [],
    "Metadata": {
      "LastErrorDate": "20210524T120521Z",
      "LastErrorMessage": "The folder does not exist",
      "LastBackupDate": "20210817T120000Z",
      "BackupListCount": "94",
      "TotalQuotaSpace": "9223372036854771712",
      "FreeQuotaSpace": "9223372036854771712",
      "AssignedQuotaSpace": "-1",
      "TargetFilesSize": "1284031276812",
      "TargetFilesCount": "48844",
      "TargetSizeString": "1.17 TB",
      "SourceFilesSize": "1804824887949",
      "SourceFilesCount": "799320",
      "SourceSizeString": "1.64 TB",
      "LastBackupStarted": "20210817T120000Z",
      "LastBackupFinished": "20210817T142718Z",
      "LastBackupDuration": "02:27:18.0641450",
      "LastCompactDuration": "00:22:23.8978687",
      "LastCompactStarted": "20210815T141813Z",
      "LastCompactFinished": "20210815T144037Z",
      "LastRestoreDuration": "00:03:02.5396943",
      "LastRestoreStarted": "20210728T045426Z",
      "LastRestoreFinished": "20210728T045728Z"
    },
    "IsTemporary": false
  },
  "DisplayNames": {
    "E:\\10.Sales Plotting Requirements\\": "10.Sales Plotting Requirements",
    "E:\\11.1. ACCOUNTING DATA HCC\\": "11.1. ACCOUNTING DATA HCC",
    "E:\\11.2.Confidential Accounting\\": "11.2.Confidential Accounting",
    "E:\\11.ACCOUNTING DATA\\": "11.ACCOUNTING DATA",
    "E:\\12.CONTRACT ADMIN PRIVATE\\": "12.CONTRACT ADMIN PRIVATE",
    "E:\\13. IT\\": "13. IT",
    "E:\\15. Management Confidential\\": "15. Management Confidential",
    "E:\\16. Graphic Design\\": "16. Graphic Design",
    "E:\\16. Legals\\": "16. Legals",
    "E:\\18. Marketing Agreements\\": "18. Marketing Agreements",
    "E:\\General\\": "General",
    "E:\\Metrik Constructions\\": "Metrik Constructions",
    "E:\\Queensland Retail Sales\\": "Queensland Retail Sales"
  }
}

Can you try adding the --no-auto-compact option and see if it makes any difference? The evaluation Duplicati does to decide whether or not it should compact can take time depending on your configuration (even if it ends up deciding it doesn’t need to compact).
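If you add it through the job’s Advanced options it is just this flag (shown here with an explicit value as an illustration; ticking the checkbox in the UI does the same thing):

--no-auto-compact=true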

Okay thanks
I have just added that option now
I’ll update here tomorrow with the result

Yes, I had the “issue” with automatic compacts as well. Now I run compacts as a separate task after taking the actual backups, so it doesn’t matter if those take a while to run. I have to lock some files during backup for data consistency and I want to minimize that time.
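As a rough sketch of what that separate task looks like (the destination URL and database path here are placeholders, not my real ones), it is just the command-line tool pointed at the same destination and job database:

Duplicati.CommandLine.exe compact "file://\\backup-server\backup-share" --dbpath="C:\path\to\job-database.sqlite"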

I’ve also been asking for an option to run compact completely independently on the back-end, but I do understand (yes, also technically) that it would complicate matters a lot. And as long as there are any stability / reliability issues with the current implementation, it would just be an extra way to ruin your day.

I just checked (sorry for the delay) and the issue is still present
The last 2 backups (after disabling auto compact) are still taking over 2 hours, versus 40 minutes previously for the same data set
Screenshot below of Duplicati Monitoring, which I have now got working

Ok, so the compact evaluation isn’t the culprit. Have you ever vacuumed your database? Maybe you can try a vacuum operation to see if it helps.

Excuse the ignorance - but how do I do this?

In the main web UI, click the backup job, and click the blue “Commandline …” link. Pick “vacuum” in the Command field and clear the “commandline arguments” field. Scroll to the bottom and click the “run vacuum command” button.
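The same operation can also be run outside the web UI with the command-line tool, roughly like this (the destination URL and database path are placeholders for your own values):

Duplicati.CommandLine.exe vacuum "file://\\backup-server\backup-share" --dbpath="C:\path\to\job-database.sqlite"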

Thanks
I have now run that. The next backup is due at 10pm my time
I will update here tomorrow on whether it worked or not

[screenshot]

Hi drwtsn32,
It looks like the vacuum helped, bringing it down to 1 hour. It’s a significant improvement, however it’s still not back to the 40 minutes it was previously

You have almost 800,000 files included in your backup source. I bet you could improve performance even more by using the USN option. This lets Duplicati use the NTFS change journal to find new/changed files instead of enumerating the filesystem.

Try setting:

--usn-policy=auto

I’m not certain, but it may not work the first time you run a backup after setting the option. You may have to wait for the second run to see if it helped.

Also, this option requires that Duplicati be running elevated. Since you’re using Windows Server, I am going to guess you’re running Duplicati as a service so it probably has admin rights.

Thanks, I’ll give that a go and report back
Will test it on the next run, and if it doesn’t improve I’ll let it do a second run the next day and then come back

You might want to set it to ‘on’ instead of ‘auto’ so it’s more obvious if Duplicati wasn’t able to use the USN.
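In other words, the same option as before but with an explicit value:

--usn-policy=on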

I decided to run an out-of-schedule backup to test this - but it failed

It sat on “Starting Backup” for about 45-60 mins
After many refreshes I decided to close and re-open the browser, and then it changed to backing up… after a few minutes it failed and I got the following error:

[screenshot]

So I went into About > Log > Live Log
Where the following was listed

Was the bottom of that the end of the whole thing? “The channel is retired” means an earlier error happened.
Sometimes the stack trace shows it further down. An example of a longer one that permitted an analysis is here.

That was it - nothing more before or after it

However, I have just restarted it and am watching the log, and it appears to be doing a lot more this time, so I will see how I go