Release: 2.0.4.29 (canary) 2019-09-17

I did the same; no backups were created on any of my machines, both Windows and Linux.

Strange… I’m showing a version bump from 9 to 10:

root@stimpy [/root/.config/Duplicati]# sqlite3 LYKQSUOLIN.sqlite 'select * from Version;' 
1|10
root@stimpy [/root/.config/Duplicati]# sqlite3 backup\ LYKQSUOLIN\ 20190917053203.sqlite 'select * from Version;'                 
1|9

Yes, very strange, as mine on Windows and Linux all still show version 9.

And you did run a backup, correct? I believe job-specific databases are not upgraded until a backup job is run.

Ah, that could be it. I upgraded after I knew most backups had run, so I will check that out.

I confirmed with a backup that ran 15 minutes ago that the database is indeed updated to v10.

Thanks for helping to clear that up.

It will upgrade the db when a backup is run.

This is a total hunch, but was something changed in the de-duplication / cleanup code? I updated six systems to this version today, and observed three of them uploading a much larger amount than normal, with three systems also running a compact immediately afterwards.

Sure, it could be coincidence, especially the compaction. But the large upload quantity gave the impression that maybe something wasn't updated earlier that should have been, or that something was re-uploaded for some reason. The systems in this case are also completely separate, so it wasn't a shared data set change that would have affected multiple systems. Again, this is a total hunch, just a strange feeling; I'm only wondering if anyone noticed something similar. Maybe I'm just getting paranoid, no worries. If there had been a serious issue, the automated restore testing would of course have shown it.

Check the full log of the backup job to see how much was uploaded, and if it was a result of a compact operation.
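
If it's easier from a shell, the per-operation logs are also stored in the job database. A query along these lines should show the most recent entries (just a sketch: it assumes the LogData table with Timestamp and Message columns that current job databases use, and the database name from the session earlier in the thread):

sqlite3 LYKQSUOLIN.sqlite "select Timestamp, substr(Message, 1, 120) from LogData order by Timestamp desc limit 5;"

The result message for a finished backup includes the backend statistics, so the uploaded byte count and any compact activity should show up there.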

I didn’t notice anything unusual myself.

Yes. Deleted remote volumes stick around in the database for a while to work around backends that still report files as existing even though they have been deleted. Before this update, those entries would stay in the database indefinitely; they will now be purged.
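
If anyone wants to check for such lingering entries themselves, something like this should list them (a sketch, assuming the Remotevolume table's State column uses a 'Deleted' state, per the description above):

sqlite3 LYKQSUOLIN.sqlite "select Name, Size, State from Remotevolume where State = 'Deleted';"

An empty result means there is nothing left to purge (or nothing was deleted recently).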

I do not see a case where it should have the effect that you report, but maybe @BlueBlock has a better idea of how the fix works?

@drwtsn32 @Taomyn Sorry for the confusion, I somehow overlooked the database upgrade. But yes, this is the update:

On Windows 10 1903 I run Duplicati as a service. I upgraded Duplicati to version 2.0.4.29 yesterday.
After the first backup completed under the new version, I found that Duplicati.Server.exe keeps the CPU at about 45% (I have a 4-thread processor).
I don't know what the process is doing when no backup is running.

Here is the status from Process Explorer:

There are multiple reports of the CPU issue from multiple people on multiple OSes. I hope someone with the right tools (perhaps a profiler?) can narrow down what's going on. Meanwhile, I suggest care, because the database upgrade in this release will make it difficult to downgrade to 2.0.4.28: the pre-upgrade backup of the DB file gradually goes stale as more backups run after the copy was made. Its name these days is like backup.random-letters.date.sqlite.
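
For anyone who does decide to downgrade, one way is to stop Duplicati and put the pre-upgrade copy back over the live file, e.g. (a sketch using the file names from the session earlier in the thread; your random letters and timestamp will differ, and any backups run since the copy was made will be missing from it):

cp "backup LYKQSUOLIN 20190917053203.sqlite" LYKQSUOLIN.sqlite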

I have a suspicion that it is the usage reporter:

Perhaps someone could try turning it off in Settings as a workaround?

[screenshot of the usage statistics setting]
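
For a service install where the web UI is less convenient, I believe the reporter also honors an environment variable set before the service starts (this is an assumption on my part; the Settings toggle is the sure route):

set USAGEREPORTER_Duplicati_LEVEL=none    (Windows, before starting Duplicati.Server.exe)
export USAGEREPORTER_Duplicati_LEVEL=none    (Linux/macOS)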

That solved the CPU issue! So it really is the usage reporter.

Great, turning off these reports really helps! Thanks

@Reimi @Ferdis @ts678
I have found and fixed the problem and will send out a new canary build, hopefully today.

I see a case I need to fix in the removal of deleted records in Remotevolume, but I can’t think of what would cause the issue as described by the user.