Release: 2.0.3.6 (canary) 2018-04-23

2.0.3.6-2.0.3.6_canary_2018-04-23

This update adds concurrent processing to the backup. With this update, the backup will now use multiple cores to perform hashing and compressing.
Use the advanced option --concurrency-max-threads to control how many threads to use.
The options --concurrency-block-hashers and --concurrency-compressors can be used to adjust the number of hashers and compressors to use.
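As a rough sketch of how these could be combined on the command line (assuming the Linux duplicati-cli wrapper; on Windows the equivalent is Duplicati.CommandLine.exe, and the destination/source paths below are placeholders, not part of the release notes):

    # Hypothetical backup run using the new concurrency options
    # (option names are from this release; the values are examples only)
    duplicati-cli backup file:///mnt/backup /home/user/data \
      --concurrency-max-threads=4 \
      --concurrency-block-hashers=2 \
      --concurrency-compressors=2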

Beware that this update contains a lot of new code, and should only be used in test environments.

Other fixes in this build:

  • Fixes for filter groups, thanks @tygill
  • Fixed a backup import issue with empty metadata
  • Added upper bound to password checker, thanks @pectojin

Is this advanced option the only way to use the new feature? Or is it on by default and this option allows some manual override?

The defaults are as follows:
--concurrency-block-hashers
Default value: “2”
--concurrency-compressors
Default value: “2”
--concurrency-max-threads
Default value: “0” (0 means dynamically adjusted to system hardware)


Should I assume that prior to this version, the values for “concurrent block hashers” and “compressors” were each essentially 1 (meaning no concurrency at all)?

Are you guys interested in some users trying alternative configurations to test stability/performance? If so, what would the values be for, say, “on the high end but still reasonably safe”, “stress test”, and/or “so high there will probably be issues”? Is it a bad idea to set the value higher than the number of cores in my CPU, for example?


Everything was single-threaded prior, so yes, in essence setting them all to 1 would be “the same”.
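If you want to compare against the old behaviour, roughly reproducing it should be something like the sketch below (the paths are placeholders, and duplicati-cli is assumed as the Linux command-line wrapper):

    # Approximate the pre-2.0.3.6 single-threaded pipeline
    duplicati-cli backup file:///mnt/backup /home/user/data \
      --concurrency-max-threads=1 \
      --concurrency-block-hashers=1 \
      --concurrency-compressors=1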

Very interested! It’s a lot of new code and it needs to be tested well before it’s released for everyone :)

In general, having more task threads than you have cores/CPU threads will cause “context switching”. A little context switching can be good if the task threads are waiting for disk reads or uploads to the internet and are not using a lot of CPU, but if they’re all using 100% CPU, then the context switching is just wasted CPU time.
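As a hedged sketch of “on the high end but still reasonably safe” (not an official tuning guide; it assumes a Linux shell where nproc reports the number of CPU cores, plus placeholder paths), capping the worker threads at the core count would look something like:

    # Keep worker threads at or below the core count to limit context switching
    duplicati-cli backup file:///mnt/backup /home/user/data \
      --concurrency-max-threads="$(nproc)"

For a deliberate stress test you could set it well above the core count, but as described above, if every thread is CPU-bound the extra threads mostly add switching overhead.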


Is this version working for you guys? I started my local-to-local backup; only about 10 files were added, around 1 GB. Usually it would take about 2 minutes, but now it has been running for 2 hours already. “Current action:Backup_ProcessingFiles”, and it displays files which shouldn’t be getting updated. I don’t know if it takes so much time for comparison or if it creates another version of the files…

EDIT: I looked into the target folders; there are no new files. So that means it has been doing file comparison for hours. I should mention that I use “check-filetime-only”.

It seems to be a mixed experience. It’s been working just fine on my server, but I’ve seen multiple reports of performance issues on GitHub.

I don’t know what it is doing. The progress indicator is at 0%, and with check-filetime-only it should not need several seconds per file.

Do the logs indicate anything or does it seem to be stalling?

It does not show logs; the log screen just says “Loading …”. Now an error message appears there: “Failed to connect: database is locked database is locked”.

Well, the backup process is walking slowly through the files… not stalling. It seems to be doing file comparison at about one file every 3 seconds.

“Run now
Source: 1,15 TB
Backup: 1,22 TB / 3 Versions
Current action: Backup_ProcessingFiles
Progress: 0.00%
Current file: M:\Musik\Library\Klassik\Simpson, Robert_Complete Symphonies (Hyperion)\2. Symphonies Nos. 2 & 4 (Handley - BSO)\CDImage.flac”

Hmm, with file time checking it should go much faster. I’m wondering if it’s having trouble reading from the database, causing the slow comparisons.

Did you try restarting Duplicati?

My Windows 10 snapshot/VSS doesn’t seem to work, and I am also getting an issue with backups directed to Google Drive. The backups I run to Backblaze B2 are also pretty slow compared to how they used to be.

Hi, I created an issue on GitHub about Canary 2.0.3.6 and slow queries.

Yes, it happened on different days (i.e. with restarts of the computer in between).

I am not sure the performance issue came with the April 23 update; I am already seeing it with the April 13 canary.

I am seeing similar issues to other users with Duplicati 2.0.3.6_canary_2018-04-23.
Out of the 5 backups I have scheduled and set up, so far only one has actually completed. Another, a new backup dealing with 125 GB of data started Thu Apr 28 after the update to 2.0.3.6, has been running for 5 days now and never seems to finish. This means the others are not able to start at their scheduled times, so I am now several days behind on the daily backups. I had to restart the server for updates earlier today, and Duplicati has been stuck on “verifying backend data” for at least 4 hours now on one of the other backups that has not run for 5 days.

Hi @mikaitech
Just downgrade to 2.0.3.5 or wait for the new canary release.

The cause of the problem has already been identified by kenkendk in the GitHub topic.



I just did that, re-running the backups now.

Last successful backup: 25th April; since then: “error while running XXX; At least one error…”. No log entries since then except for repairs, for example:

…Warnings: [] Errors: [] TaskReader: ProgressAsync: Result: True Factory: CancellationToken: IsCancellationRequested: False CanBeCanceled: False WaitHandle: Handle: 3880 SafeWaitHandle: IsInvalid: False IsClosed: False…

What can I do?

Hello @SMichel, just downgrade: install Release v2.0.3.5-2.0.3.5_canary_2018-04-13 · duplicati/duplicati · GitHub
(from Releases · duplicati/duplicati · GitHub)

or wait a couple more days for a new, fixed version.

There are a couple of strange errors caused by long backup runs in the latest canary.
