My backups are slow, even when only a few files have changed.
I’d like to know what I can do to speed it up. Here are the details:
Duplicati Version 2.2.0.0 (2.2.0.0_stable_2025-10-23)
Linux Debian 13 AMD64
Old hardware: a roughly 10-year-old Intel CPU, 4 GB RAM, two 4 TB rotational disks in RAID 1.
Total source size: 1.57 TiB; duration: 7h33m
The source files are on Samba shares accessed over the LAN by Windows clients. I’m quite sure I could trust the file modification time and skip hashing completely. Does that make sense?
Where is it slow? The backup status bar starts with “Verifying”, then “Counting”.
Verifying is faster with fewer versions. “Counting” is part of checking the source.
Scanning over SMB may be slower than scanning locally, but maybe you’re stuck with it.
If those two phases take most of the time, then you can concentrate there.
Even though relatively few files changed, 3.5 GB is not nothing to back up.
Watching the job section of the old UI will show which file is currently being read.
Old hardware probably also slows it some. I think I see this on my own PC.
Old backups can be slow: before 2.1, the default blocksize was too small for big backups.
Unfortunately you can only set it on a new backup. How old is the backup?
If you can sort Destination files by time, you can look for the oldest file in it.
What hash? I don’t think a source-file block-hash scan happens without a reason.
A verbose log will show you the decision process for any file that must be opened.
EDIT 1:
It’s not just changed existing files; did you realize new files add more to the upload?
I’m not able to log in to the server and check the status bar while the backup is running. Can I retrieve the duration of the Verifying / Counting phases from a log file?
I explained myself badly: the backup source is the server’s local disks. It is basically a Samba NAS that I need to back up to the cloud.
The backup task was created in 2022, with the default blocksize. The number of versions is 38.
The files are stored with the cloud provider Backblaze.
How can I enable the verbose log file?
Yes, I know; I think that 3.2 GB compared to the full backup size of 1.5 TB is very little data.
Be ready with something to look at a possibly big log file; usually less can handle it.
To get the interesting parts out in sequence, grep can probably help.
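For instance, something like this (a sketch; the demo log and its file paths are made up here, but the message format matches the excerpts in this thread):

```shell
# Demo log: one CheckFileForChanges line plus an unrelated backend event
# (paths and timestamps are invented for the example).
cat > /tmp/duplicati-demo.log <<'EOF'
2025-11-09 08:01:09 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FilePreFilterProcess.FileEntry-CheckFileForChanges]: Checking file for changes /srv/share/report.txt, new: False, timestamp changed: True, size changed: True, metadatachanged: True, 11/8/2025 10:45:18 PM vs 11/5/2025 1:47:12 AM
2025-11-09 08:01:10 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: ()
EOF

# Pull only the change-detection decisions out, in the order they happened.
# Prints the CheckFileForChanges line and drops the backend event.
grep -F 'CheckFileForChanges' /tmp/duplicati-demo.log
```

Piping that through less, or adding a second grep for `new: True`, narrows it further.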
Here’s an example of how a file I changed shows up in CheckFileForChanges:
2025-11-09 08:01:09 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FilePreFilterProcess.FileEntry-CheckFileForChanges]: Checking file for changes C:\PortableApps\Notepad++Portable\App\Notepad++64\backup\webpages.txt@2025-11-04_204704, new: False, timestamp changed: True, size changed: True, metadatachanged: True, 11/8/2025 10:45:18 PM vs 11/5/2025 1:47:12 AM
You have quite a few new files too, and for those I would expect new: True. However, an easier way to find new (and modified) files is via the GUI Commandline:
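A hedged sketch of the equivalent CLI invocation (the GUI Commandline runs the same engine; the b2:// destination URL is a placeholder for your Backblaze one, and the exact option set may differ by version):

```shell
# Compare the previous backup version (1) against the latest (0) and
# list added, deleted, and modified files. URL below is a placeholder.
duplicati-cli compare "b2://<bucket>/<folder>" 1 0 --full-result
```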
I don’t know if Counting has a good indicator in the log, but it does in the GUI.
Can you log in to your server at all during the backup? I don’t understand the issue.
[Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put
and
[Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get
It’s interesting to see it pause 20 and 40 minutes between two file uploads for no clear reason:
If you’re looking at gaps, you don’t know what’s happening inside them. More logging can help.
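In the meantime, the gaps themselves can be found mechanically. A sketch, assuming GNU awk (for mktime) and the “YYYY-MM-DD HH:MM:SS” prefix shown in the excerpts; the demo log here is invented, with two events 40 minutes apart:

```shell
# Two demo lines, 40 minutes apart (invented for the example):
cat > /tmp/gap-demo.log <<'EOF'
2025-10-06 17:47:20 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: ()
2025-10-06 18:27:20 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: ()
EOF

# Report any pause longer than 10 minutes between consecutive log lines.
awk '{
    split($1, d, "-"); split($2, t, ":")
    now = mktime(d[1] " " d[2] " " d[3] " " t[1] " " t[2] " " t[3])
    if (prev != "" && now - prev > 600)
        printf "%d min gap before: %s\n", (now - prev) / 60, $0
    prev = now
}' /tmp/gap-demo.log
# reports the 40 minute gap before the second line
```

Run against the real log file, this points you at exactly which lines bracket the silent stretches.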
2025-10-06 17:47:20 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
2025-10-06 17:56:52 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started: ()
2025-10-06 17:56:53 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed: (2.97 KiB)
This is from my big backup. Most of that gap is probably PreBackupVerify (Profiling level). You can likely also use the log-file-log-filter option to log PreBackupVerify.
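Roughly, the options involved might look like this; the URL and paths are placeholders, in the GUI the same options go under the job’s Advanced options, and the filter syntax is my best understanding, so verify against your version’s built-in help:

```shell
# Write a Verbose-level log to a file, and additionally pass through
# PreBackupVerify messages. Destination URL and paths are placeholders.
duplicati-cli backup "b2://<bucket>/<folder>" /srv/share \
    --log-file=/var/log/duplicati-job.log \
    --log-file-log-level=Verbose \
    --log-file-log-filter="+*PreBackupVerify*"
```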
Your backup is bigger, and the old small default blocksize also slows down the SQL.
More SQLite cache can likely help, but starting a fresh backup should too (ouch?).
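On the SQLite cache: my understanding is that Duplicati reads a CUSTOMSQLITEOPTIONS_DUPLICATI environment variable and applies it to its database connections, so something like the following, set where the Duplicati server or service gets its environment, may help; verify this against your version before relying on it:

```shell
# A negative cache_size is in KiB per SQLite's PRAGMA cache_size docs,
# so this asks for roughly a 200 MB page cache.
# Assumes Duplicati honors this variable (check for your version).
export CUSTOMSQLITEOPTIONS_DUPLICATI="cache_size=-200000"
```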
As it scans files, no changes means no changed data to upload. The old UI shows this better, assuming whatever prevents you from logging in to the server to watch can be solved.
If the log file is at Verbose level or higher, you can check it for CheckFileForChanges, because no check definitely means no changed data will be put into upload volumes.
Even if a big file is checked, not all of it may have changed. It’s checked block by block, and blocks already in the backup are referenced instead of being uploaded again.