This update adds concurrent processing to the backup: it can now use multiple cores to perform checking and compressing. Use the advanced option --concurrency-max-threads to control how many threads are used.
The options --concurrency-block-hashers and --concurrency-compressors adjust the number of hasher and compressor threads.
Beware that this update contains a lot of new code and should only be used in test environments.
The defaults are as follows:

--concurrency-block-hashers
Default value: “2”

--concurrency-compressors
Default value: “2”

--concurrency-max-threads
Default value: “0” (0 means dynamically adjusted to the system hardware)
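For anyone who wants to experiment, here is a minimal command-line sketch of how these options could be passed to a backup job. The destination URL and source path are placeholders (not from this thread), and the thread counts are example values only:

```
REM Sketch only: placeholder destination/source, example thread counts
Duplicati.CommandLine.exe backup "file://D:\Backups\Music" "M:\Musik" ^
  --concurrency-max-threads=4 ^
  --concurrency-block-hashers=2 ^
  --concurrency-compressors=2
```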
Should I assume that prior to this version, the values for “concurrent block hashers” and “compressors” were each essentially 1 (meaning no concurrency at all)?
Are you guys interested in having some users try alternative configurations to test stability/performance? If so, what values would correspond to, say, “on the high end but still reasonably safe”, “stress test”, and/or “so high there will probably be issues”? Is it a bad idea to set the value higher than the number of cores in my CPU, for example?
Everything was single-threaded before, so yes, in essence setting them all to 1 would be “the same”.
Very interested! It’s a lot of new code and it needs to be tested well before it’s released for everyone.
In general, having more task threads than you have cores/CPU threads will cause “context switching”. A little context switching can be good if the task threads are waiting for disk reads or uploading to the internet and not using much CPU, but if they’re all using 100% CPU, the context switching is just wasted CPU.
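As a rough illustration (not from the post), one way to see how many CPU threads the machine exposes before choosing a value, on a Windows setup like the ones discussed later in the thread; the numbers in the comments are examples only:

```
REM Show the number of logical processors Windows reports (cmd.exe)
echo %NUMBER_OF_PROCESSORS%

REM For CPU-bound work, keeping --concurrency-max-threads at or below this
REM number avoids the oversubscription described above; e.g. on an 8-thread
REM CPU, values above 8 mostly add context switching rather than throughput.
```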
Is this version working for you guys? I started my local2local backup; just about 10 files were added, around 1 GB. Usually it would take about 2 minutes, but now it has been running for 2 hours already. It says “Current action: Backup_ProcessingFiles”, yet it displays files which shouldn’t get updated. I don’t know if the comparison just takes that much time or if it is creating another version of the files…
EDIT: I looked into the target folders; no new files. So that means it has been doing file comparison for hours. I should mention that I use “check-filetime-only”.
It does not show logs; the log screen just says “Loading …”. Now an error message appears there: “Failed to connect: database is locked database is locked”
Well, the backup process is walking slowly through the files… not stalling. It seems to be doing file comparison at roughly one file every 3 seconds.
“Run now
Source: 1,15 TB
Backup: 1,22 TB / 3 Versions
Current action: Backup_ProcessingFiles
Progress: 0.00%
Current file: M:\Musik\Library\Klassik\Simpson, Robert_Complete Symphonies (Hyperion)\2. Symphonies Nos. 2 & 4 (Handley - BSO)\CDImage.flac”
Hmm, with file-time checking it should go much faster. I’m wondering if trouble reading from the database is causing the slow comparisons.
My Windows 10 snapshot/VSS doesn’t seem to work, and I am also getting an issue with backups directed to Google Drive. The backups I run to B2 Backblaze are also pretty slow compared to how they used to be.
I am seeing similar issues to others with Duplicati - 2.0.3.6_canary_2018-04-23.
Out of the 5 backups I have scheduled and set up, only one has actually completed so far. Another, a new backup of 125 GB of data started Thu Apr 28 after updating to 2.0.3.6, has been running for 5 days now and never seems to finish. This means the others are not able to start at their scheduled times, so I am now several days behind on the daily backups. I had to restart the server for updates earlier today, and Duplicati has been stuck on “verifying backend data” for at least 4 hours now on one of the others that has not run for 5 days.