Slow Duplicati on large backups with few changes

I am running Duplicati on the systems of a few not-so-technical family members, who now have off-site (and in one case both on-site and off-site) backups to Minio S3-compatible storage that I host for them. These backups are large (200 GB) but hardly ever change (e.g. a photo collection, a music collection). I noticed that even though hardly anything changes on their computers, the backups still take a long time when there is hardly anything to do.

Take, for instance, this one, which backs up to a local rotational disk (FW800):
DeletedFiles: 2
DeletedFolders: 0
ModifiedFiles: 10
ExaminedFiles: 25442
OpenedFiles: 13
AddedFiles: 3
SizeOfModifiedFiles: 1293931
SizeOfAddedFiles: 17632
SizeOfExaminedFiles: 32041460912
SizeOfOpenedFiles: 1311799
NotProcessedFiles: 0
AddedFolders: 0
TooLargeFiles: 0
FilesWithError: 0
ModifiedFolders: 0
ModifiedSymlinks: 0
AddedSymlinks: 0
DeletedSymlinks: 0
PartialBackup: False
Dryrun: False
MainOperation: Backup
ParsedResult: Success
VerboseOutput: False
VerboseErrors: False
EndTime: 2/26/2018 9:18:37 AM
BeginTime: 2/26/2018 7:54:46 AM
Duration: 01:23:50.5412490
Messages: [
No remote filesets were deleted,
Compacting not required
]
Warnings: []
Errors: []

Hardly anything changed, but the backup took 1 hour and 23 minutes.

Or this one (off-site via a fast internet connection, 30 Mbps download from the client to the S3-compatible server, slower upload):
DeletedFiles: 6
DeletedFolders: 0
ModifiedFiles: 35
ExaminedFiles: 89385
OpenedFiles: 37
AddedFiles: 2
SizeOfModifiedFiles: 159173885
SizeOfAddedFiles: 6144
SizeOfExaminedFiles: 218757720928
SizeOfOpenedFiles: 177055237
NotProcessedFiles: 0
AddedFolders: 0
TooLargeFiles: 0
FilesWithError: 0
ModifiedFolders: 0
ModifiedSymlinks: 0
AddedSymlinks: 0
DeletedSymlinks: 0
PartialBackup: False
Dryrun: False
MainOperation: Backup
ParsedResult: Success
VerboseOutput: False
VerboseErrors: False
EndTime: 2/26/2018 7:53:19 AM
BeginTime: 2/26/2018 7:23:31 AM
Duration: 00:29:47.5007990
Messages: [
No remote filesets were deleted,
Compacting not required
]
Warnings: []
Errors: []

Now, here are the funny things: the off-site backup took 30 minutes, while the on-site one took 1 hour and 20 minutes. My suspicion: the off-site job ran first and woke the machine (macOS), which was probably sleeping at the time; while the backup was running the system went back to sleep, and the job only finished after the system was woken up again later. Even so, 30 minutes is already pretty long when there is almost nothing to do. I assume this is because the indexes need to be downloaded every time. A possible feature would be some sort of ‘index of indexes’ kept on the backend, with the actual indexes cached locally by Duplicati, so that the indexes are only downloaded when they are really needed.
Another feature would be some way, on the Mac, for a Duplicati background process to prevent the system from going to sleep while it is busy backing up.
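
In the meantime, something like this can already be done by hand: macOS ships a caffeinate utility that keeps the machine awake for as long as a wrapped command runs. A rough sketch of a wrapper (the duplicati-cli invocation, bucket and paths below are placeholders, not my actual job):

```python
#!/usr/bin/env python3
"""Keep a Mac awake while a backup runs (sketch, not part of Duplicati).

macOS's built-in `caffeinate -i` holds an anti-idle-sleep assertion for
as long as the wrapped command runs, so the job is not interrupted by
the machine dozing off halfway through.
"""
import subprocess
import sys

# Placeholder: substitute the command line exported from the Duplicati job
# (with passwords/passphrase removed or supplied via the environment).
backup_cmd = ["duplicati-cli", "backup", "s3://example-bucket/photos", "/Users/example/Pictures"]

# `caffeinate -i` prevents idle system sleep while the child process runs.
result = subprocess.run(["caffeinate", "-i", *backup_cmd])
sys.exit(result.returncode)
```

(This only prevents idle sleep; closing the lid will still suspend the machine.)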

@gctwnl Can you export your backup configuration as a command line, remove the passwords and the encryption passphrase, and post it here? Thanks.

I believe you are correct: if a job starts (and, say, runs for 5 minutes), then the machine sleeps (say, for 2 hours), then wakes, and the job finishes (say, in another 10 minutes), the run time will be reported as 2 hours 15 minutes.

I’m not sure what you mean by that, but unless you have parameters explicitly telling Duplicati NOT to use the local database, there should be no need to download anything from the destination to do a backup (not including the testing step at the end). Basically, the local database IS the ‘index of indexes’.
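
To illustrate why an unchanged source should need no downloads at all, here is a rough sketch of the idea (this is not Duplicati's actual schema or code): the client keeps a local database of what it saw last time, the filesystem scan is answered entirely from that local index, and only files whose size or timestamp changed ever get opened, hashed, and possibly uploaded.

```python
#!/usr/bin/env python3
"""Sketch of the 'local database as index' idea -- not Duplicati's actual
schema or logic. It shows why an unchanged source needs no downloads:
the change scan is answered entirely from a local SQLite index, and only
files whose size or mtime differ would ever be opened and uploaded."""
import os
import sqlite3


def scan_for_changes(source_root, db_path="local-index.sqlite"):
    """Return files whose size or mtime differs from the local index."""
    db = sqlite3.connect(db_path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, size INTEGER, mtime REAL)"
    )

    changed = []
    for dirpath, _dirs, names in os.walk(source_root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                continue  # keep the sketch simple: skip symlinks
            st = os.stat(path)
            row = db.execute(
                "SELECT size, mtime FROM files WHERE path = ?", (path,)
            ).fetchone()
            if row is None or row != (st.st_size, st.st_mtime):
                # Only these files would be opened, hashed and (for new
                # blocks) uploaded to the destination.
                changed.append(path)
                db.execute(
                    "INSERT OR REPLACE INTO files VALUES (?, ?, ?)",
                    (path, st.st_size, st.st_mtime),
                )
    db.commit()
    db.close()
    return changed


if __name__ == "__main__":
    # Placeholder path -- point it at any folder to try the idea.
    print(scan_for_changes("/Users/example/Pictures"))
```

That pattern matches the numbers in your logs: 25442 files examined but only 13 opened, and 89385 examined but only 37 opened.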

If the on-site backup is to the same physical disk as the source data and the temp folder, then you may be running into disk I/O performance issues.

No, this is a separate disk. So, it is probably some kind of sleeping issue.