Release: (canary) 2018-04-23

Should I assume that prior to this version, the values for “concurrent block hashers” and “compressors” were each essentially 1 (meaning no concurrency at all)?

Are you guys interested in some users trying alternative configurations to test stability/performance? If so, what would the values be for, say, “on the high end but still reasonably safe”, “stress test”, and/or “so high there will probably be issues”? Is it a bad idea to set the value higher than the number of cores in my CPU, for example?


Everything was single-threaded prior, so yes, in essence setting them all to 1 would be “the same”.

Very interested! It’s a lot of new code and it needs to be tested well before it’s released for everyone :)

In general, having more task threads than you have cores/CPU threads will cause “context switching”. A little context switching can be good if the task threads are waiting on disk reads or uploads to the internet and are not using much CPU, but if they are all at 100% CPU, then the context switching is just wasted CPU.
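As a rough illustration of that rule of thumb (a sketch only; the variable names below are mine, not actual Duplicati settings, and the 2x oversubscription factor is an assumption, not a measured optimum):

```python
import os

# Logical CPU threads available on this machine.
cores = os.cpu_count() or 1

# CPU-bound stages (hashing, compression) gain little beyond one
# worker per core; extra threads mostly add context-switching cost.
cpu_bound_workers = cores

# I/O-bound stages (disk reads, uploads) spend much of their time
# waiting, so modest oversubscription can keep the CPU busy.
io_bound_workers = cores * 2

print(cpu_bound_workers, io_bound_workers)
```

The actual sweet spot depends on your disk, network, and backup size, which is exactly why testing on real workloads is useful.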


Is this version working for you guys? I started my local-to-local backup; only about 10 files were added, roughly 1 GB. Usually it would take about 2 minutes, but now it has been running for 2 hours already. “Current action: Backup_ProcessingFiles”, and it displays files that should not need updating. I don’t know whether the comparison is just taking that long or whether it is creating new versions of the files…

EDIT: I looked into the target folders; there are no new files. So it has been doing file comparison for hours. Note that I use “check-filetime-only”.
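For context, a filetime-only check only compares stored timestamps, which is why it should be nearly instant per file. A minimal sketch of the idea (my own code, not Duplicati’s actual implementation):

```python
import os

def changed_by_filetime(path: str, recorded_mtime: float) -> bool:
    """Return True if the file's modification time differs from the
    one recorded at the last backup. A filetime-only check like this
    never reads file contents, so it should take microseconds per
    file, not seconds."""
    return os.path.getmtime(path) != recorded_mtime
```

If each file is taking seconds, the bottleneck is almost certainly not the timestamp comparison itself but something around it, such as database lookups.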

Seems to be a mixed experience. It’s been working just fine on my server but I’ve seen multiple reports of issues with performance on github.

I don’t know what it is doing. The progress indicator is at 0%, and with check-filetime-only it should not take several seconds per file.

Do the logs indicate anything or does it seem to be stalling?

It does not show any logs; the log screen just says “Loading …”. Now an error message appears there: “Failed to connect: database is locked database is locked”
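For what it’s worth, “database is locked” is a standard SQLite error raised when one connection holds a lock that another connection needs; since Duplicati keeps its local state in an SQLite database, two components touching it at once can trigger this. A minimal, self-contained reproduction (illustrative only, unrelated to Duplicati’s actual schema):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sqlite")

# First connection takes an exclusive lock and holds it.
writer = sqlite3.connect(path, isolation_level=None, timeout=0.1)
writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("BEGIN EXCLUSIVE")

# A second connection now cannot start its own transaction and,
# after the short busy timeout, fails with the familiar message.
reader = sqlite3.connect(path, isolation_level=None, timeout=0.1)
try:
    reader.execute("BEGIN EXCLUSIVE")
    error_message = None
except sqlite3.OperationalError as err:
    error_message = str(err)

print(error_message)  # database is locked
```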

Well, the backup process is walking slowly through the files, not stalling. It seems to be doing file comparison at roughly one file every 3 seconds.

“Run now
Source: 1,15 TB
Backup: 1,22 TB / 3 Versions
Current action: Backup_ProcessingFiles
Progress:
Current file: M:\Musik\Library\Klassik\Simpson, Robert_Complete Symphonies (Hyperion)\2. Symphonies Nos. 2 & 4 (Handley - BSO)\CDImage.flac”

Hmm, with file time checking it should go much faster. I’m wondering if it’s having trouble reading from the database causing the slow comparisons.

Did you try restarting Duplicati?

My Windows 10 snapshot/VSS doesn’t seem to work, and I am also getting an issue with backups directed to Google Drive. The backups I run to Backblaze B2 are also pretty slow compared to how they used to be.

Hi, I created an issue on GitHub about the canary and its slow queries.

Yes, it happened on different days (i.e. with restarts of the computer in between).

I am not sure the performance issue came with the April 23 update; I was already seeing it with the April 13 canary.

I am seeing similar issues to others with Duplicati.
Out of the 5 backups I have scheduled and set up, only one has actually completed so far. Another, a new backup dealing with 125 GB of data, started Thu Apr 28 after the update to, has been running for 5 days now and never seems to finish. This blocks the others from starting at their scheduled times, so I am now several days behind on the daily backups. I had to restart the server for updates earlier today, and Duplicati has been stuck on “verifying backend data” for at least 4 hours on one of the backups that has not run for 5 days.

Hi @mikaitech
Just downgrade to or wait for the new canary release.

The cause of the problem has already been identified by kenkendk in the GitHub topic.


I just did that, re-running the backups now.

Last successful backup: April 25. Since then: “error while running XXX; At least one error…”. No logs since then except for repairs, for example:

…Warnings: [] Errors: [] TaskReader: ProgressAsync: Result: True Factory: CancellationToken: IsCancellationRequested: False CanBeCanceled: False WaitHandle: Handle: 3880 SafeWaitHandle: IsInvalid: False IsClosed: False…

What can I do?

Hello @SMichel, just downgrade - install Release v2.0.3.5- · duplicati/duplicati · GitHub
(from Releases · duplicati/duplicati · GitHub )

or wait a couple more days for the new, fixed version.

There are a couple of strange errors caused by long backup runs in the latest canary.


Thanks a lot! After following Downgrading / reverting to a lower version, my Duplicati ( is working again, and I’m looking forward to the next update.

I downgraded/reinstalled after was not working properly. Luckily the backups are running like normal again, although missing several days’ worth of backups means they are taking much longer than a normal daily backup does.

Just as an FYI, it appears has introduced a bug causing SizeOfAddedFiles and SizeOfModifiedFiles to always be 0.

It has already been reported on GitHub.
