Slow backup even with unchanged files

Hi,

I have a slow backup issue, even when only a few files have changed.

I’d like to know what I can do to speed it up. Here are more details:

Duplicati Version 2.2.0.0 (2.2.0.0_stable_2025-10-23)

Linux Debian 13 AMD64

Old hardware: a roughly 10-year-old Intel CPU, 4 GB RAM, two 4 TB rotational disks in RAID 1

Total source size: 1.57 TiB, Duration 7h33m :unamused_face:

The source files are on Samba shares accessed over the LAN by Windows clients. I’m quite sure I could trust the file modification time and skip the hash completely. Does that make sense?
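If that’s reasonable, I was thinking of something like this, a sketch based on my reading of the manual: the check-filetime-only option should make Duplicati trust the timestamp alone and skip the size/metadata comparison. The bucket name and source path here are made up, not my real job.

```shell
# Sketch, not my real job: check-filetime-only (per my reading of the
# manual) makes Duplicati trust timestamps alone when deciding whether
# to scan a file. Bucket name and source path are made up.
duplicati-cli backup "b2://my-bucket/backup" /srv/samba-shares \
    --check-filetime-only=true
```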

Any other ideas?

More details from the log:

Result: successful
Deleted files: 559
Deleted folders: 89
Modified files: 75
Examined files: 906690
Opened files: 1781
Added files: 1706
Modified data: 127.47 MiB
New data: 3.20 GiB
Examined data: 1.57 TiB
Opened data: 3.32 GiB
Not processed files: 0
New folders: 274
Too large files: 0
Files with error: 0
Modified folders: 274
Modified symlinks: 0
Added symlinks: 0
Deleted symlinks: 0
Partial backup: no
Dry run: no
Operation: Backup
Report result: Success
Version: 2.2.0.0 (2.2.0.0_stable_2025-10-23)
End of backup: 08/11/2025 04:33
Begin of backup: 07/11/2025 21:00
Duration: 07:33:05
Time of report: 08/11/2025 04:33
Last change: 08/11/2025 04:33

thanks :folded_hands:

Where is it slow? The backup status bar starts with “Verifying”, then “Counting”.
Verifying is faster with fewer versions. “Counting” is part of checking the source.
Scanning over SMB may be slower than local disk, but maybe you’re stuck with it.
If those two phases take most of the time, then you can concentrate there.

Even though relatively few files changed, 3.5 GB is not nothing to back up.
Watching the job in the old UI will show which file is currently being read.
Old hardware probably also slows it some. I think I see this on my own PC.

Old backups can be slow. Before 2.1, the default blocksize was small for big backups.
Unfortunately you can only set it on a new backup. How old is the backup?
If you can sort the destination files by time, you can look for the oldest file there.

What hash? I don’t think a source file block hash scan happens without reason.
A verbose log will tell you the decision process on any file that must be opened.

EDIT 1:

It’s not just changed existing files. Did you notice there’s even more data in new files?

Added files: 1706
New data: 3.20 GiB

Thanks for your message. I can add some details:

I’m not able to log in to the server and check the status bar while the backup is running. Can I retrieve the duration of the Verifying / Counting phases from a log file?

I explained myself badly: the backup source is the server’s local disks. It is basically a Samba NAS that I need to back up to the cloud.

The backup task was created in 2022, with the default blocksize. The number of versions is 38.

The files are stored at the cloud provider Backblaze.

How can I enable the verbose log file?

Yes, I know. I think that 3.2 GB compared to the full backup size of 1.5 TB is very little data.

log-file=<path> log-file-log-level=verbose

Then be ready with something to view a possibly big file; usually `less` can.
To pull the interesting parts out in sequence, grep might be able to help.
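For example, something like this (the log path is made up) keeps only the change decisions and the backend transfers, in their original order:

```shell
# Keep only the per-file change decisions and the backend transfers,
# in original log order (log path is made up).
grep -E 'CheckFileForChanges|BackendEvent' /var/log/duplicati/backup.log | less
```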

Here’s an example of a file I changed, and why, logging CheckFileForChanges:

2025-11-09 08:01:09 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FilePreFilterProcess.FileEntry-CheckFileForChanges]: Checking file for changes C:\PortableApps\Notepad++Portable\App\Notepad++64\backup\webpages.txt@2025-11-04_204704, new: False, timestamp changed: True, size changed: True, metadatachanged: True, 11/8/2025 10:45:18 PM vs 11/5/2025 1:47:12 AM

You have quite a few new files too, and for these I would expect new: True. However, an easier way to find new (and modified) files is the GUI Commandline:

The COMPARE command
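From a terminal it would look roughly like this (storage URL made up; in the GUI Commandline you pick the compare command and put the versions in the arguments box). I believe --full-result lists each file rather than just totals:

```shell
# Compare the two newest versions (1 = older, 0 = newest);
# --full-result lists each file instead of just counts.
# Storage URL is made up.
duplicati-cli compare "b2://my-bucket/backup" 1 0 --full-result
```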

Partly, but you’d have to raise the level to profiling, which makes the log even bigger:

2025-11-06 07:50:03 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.BackupHandler-PreBackupVerify]: Starting - PreBackupVerify
2025-11-06 07:51:14 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.BackupHandler-PreBackupVerify]: PreBackupVerify took 0:00:01:10.952

I don’t know if Counting has a good indicator in the log, but it does in the GUI.
Can your server let you log in at all during backup? I don’t understand the issue.

Hi,

this week the backup took much less time: just 4 hours, with 1.7 GB of new data and 170 MB of modified data.

The log file shows interesting things. For example, at the very beginning it takes 23 minutes just to start up SQLite (as far as I understand):

2025-11-14 21:00:05 +01 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
2025-11-14 21:00:06 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'cache_size=-38912'.
2025-11-14 21:23:12 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()
2025-11-14 21:23:20 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (10,689 KiB)

then 2 hours of

Verbose-Duplicati.Library.Main.Operation.Backup.<something>

then 1.5 hours of

[Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put

and

[Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get

It’s interesting to see how it pauses for 20 minutes and 40 minutes between two file transfers for no clear reason:

2025-11-14 23:43:16 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-20251114T200006Z.dlist.zip.aes (89,600 MiB)
2025-11-14 23:43:25 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-20251114T200006Z.dlist.zip.aes (89,600 MiB)
2025-11-14 23:43:27 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-20230804T190005Z.dlist.zip.aes (70,182 MiB)
2025-11-14 23:43:34 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-20230804T190005Z.dlist.zip.aes (70,182 MiB)
2025-11-15 00:04:41 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-20250725T190005Z.dlist.zip.aes (86,913 MiB)
2025-11-15 00:04:50 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-20250725T190005Z.dlist.zip.aes (86,913 MiB)

2025-11-15 00:15:09 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-iea74e442a807450599ffb470774237c5.dindex.zip.aes (197,403 KiB)
2025-11-15 00:15:10 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-iea74e442a807450599ffb470774237c5.dindex.zip.aes (197,403 KiB)
2025-11-15 00:57:10 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-i1b5d349a03b5417687bcdfe51a805319.dindex.zip.aes (300,310 KiB)
2025-11-15 00:57:11 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-i1b5d349a03b5417687bcdfe51a805319.dindex.zip.aes (300,310 KiB)
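For what it’s worth, a small script like this can flag the pauses automatically. It assumes GNU date, and the two sample lines in the heredoc stand in for the real log file:

```shell
# Flag pauses longer than 10 minutes between backend events.
# The heredoc stands in for the real Duplicati log; requires GNU date.
cat > /tmp/sample.log <<'EOF'
2025-11-15 00:15:10 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-iea74e442a807450599ffb470774237c5.dindex.zip.aes (197,403 KiB)
2025-11-15 00:57:10 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-i1b5d349a03b5417687bcdfe51a805319.dindex.zip.aes (300,310 KiB)
EOF

gaps=$(grep 'BackendEvent' /tmp/sample.log | while read -r d t rest; do
  s=$(date -d "$d $t" +%s)                 # timestamp to epoch seconds
  if [ -n "$prev" ] && [ $((s - prev)) -gt 600 ]; then
    echo "$(((s - prev) / 60)) min gap before $d $t"
  fi
  prev=$s
done)
echo "$gaps"
```

On the sample lines above this reports the 42-minute pause before the second dindex download.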

If you’re looking at gaps, you don’t know what’s in there. More logging can help.

2025-10-06 17:47:20 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
2025-10-06 17:56:52 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()
2025-10-06 17:56:53 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (2.97 KiB)

is from my big backup. Most of that is probably PreBackupVerify (visible at profiling level). You can likely also use the log-file-log-filter option to log only PreBackupVerify.
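Something like these advanced options, though I haven’t double-checked the exact filter syntax, so treat it as a sketch (log path made up):

```
log-file=/var/log/duplicati/backup.log
log-file-log-level=profiling
log-file-log-filter=+[.*PreBackupVerify.*]
```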

Your backup is bigger, and the old small default blocksize also slows down the SQL.
More SQLite cache can likely help, but starting a fresh backup should too (ouch?).

As it scans files, no changes means no changes to upload. The old UI shows this better, assuming whatever was preventing you from logging in to the server to watch can be solved.

As the log file is at verbose level or above, you can check it for CheckFileForChanges, because no check definitely means no changed data will be put into upload volumes.

Even if a big file is checked, not all of it may have changed. It’s checked block by block, with blocks already in the backup being referenced instead of uploaded redundantly.
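A toy sketch of that idea, using made-up 10-byte blocks (real blocksizes are vastly larger): only blocks with an unseen hash get “uploaded”, repeats are just referenced.

```shell
# Toy block-level dedup: split a file into fixed-size blocks, hash each,
# and "upload" only blocks whose hash hasn't been seen before.
# 10-byte blocks are for illustration only.
cd "$(mktemp -d)"
printf 'aaaaaaaaaabbbbbbbbbbaaaaaaaaaa' > demo.bin   # 3 blocks; 1st = 3rd
split -b 10 demo.bin blk_

seen="" new=0 total=0
for b in blk_*; do
  total=$((total + 1))
  h=$(sha256sum "$b" | awk '{print $1}')
  case " $seen " in
    *" $h "*) ;;                                 # already stored: reference it
    *) seen="$seen $h" new=$((new + 1)) ;;       # genuinely new: upload it
  esac
done
echo "uploaded $new of $total blocks"
```

Here only 2 of the 3 blocks are “uploaded”, since the first and last block are identical.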