VACUUM - performance improvement is huge


#21

Are you saying it was failing before? This is supposed to help performance (did it?), not fix things, though it might as a side effect. The 200 KB job that ran for 10 minutes without completing was probably a different issue. Please also run a connection test to make sure you’re not affected by the authentication changes at Mega, and open a new support topic if it looks like that problem has no relationship to VACUUM performance.

sqlite3 isn’t part of Windows, and Duplicati doesn’t ship the command-line tool, only the parts it needs for its own access, including the weak database encryption that Windows users get by default. Vacuuming externally might therefore require an sqlite3 build with encryption support, or use of --unencrypted-database together with a normal sqlite3.exe. However, my general thinking is:

To anyone who plans to vacuum Duplicati databases from anything besides the main Duplicati program: please try to avoid access conflicts. Turning the main Duplicati off while working behind its back would probably reduce the risks, but even better would be not bypassing it at all (the same thinking applies to manually changing destination files). Looking at the “SQLite VACUUM” article above, it sounds like (unsurprisingly) the result of the VACUUM is built from a temporary copy of the database, meaning any backup in progress at the time could possibly get corrupted…


#22

For Windows, download https://www.sqlite.org/2018/sqlite-dll-win32-x86-3250300.zip, place the 3 files in C:\Windows\system32, and run the command via CMD.
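Once sqlite3 is on the PATH, the vacuum itself is a one-liner. A minimal sketch, assuming Duplicati is fully stopped first (see the access-conflict warning above) and using a hypothetical database path — the real one is shown on the job’s Database page:

```shell
# Hypothetical path -- substitute the .sqlite file shown on the job's Database page.
# Stop Duplicati (service and tray icon) before running this, to avoid access conflicts.
sqlite3 "C:\Users\me\AppData\Local\Duplicati\XXXXXXXX.sqlite" "VACUUM;"
```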


#23

I think the delay happened because it was running some other command that did not finish … But with the sqlite executable itself I was able to do it.

My question now is whether Duplicati has any way to purge data that is no longer needed, such as data from 91 days ago when my retention is 90 days, for example.


#24

Compacting files at the backend is the usual way this happens. The manual command is here, and I point to it mainly because that page lists the options you can put in the backup’s web UI to control how aggressively compact runs. This is an example of really forcing a compact, for a special purpose. Compaction requires a lot of downloads and uploads as backed-up data gets repackaged, so storage provider charges may be a factor.
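As an illustration only (not taken from the linked page; check the option names against the Duplicati manual for your version), the knobs and the manual command look roughly like this, with `<storage-url>` and the paths as placeholders:

```shell
# Advanced options on the backup (web UI) that govern automatic compacting;
# names per the Duplicati manual -- verify values against your version:
#   --threshold=25              # % of wasted space before volumes get repackaged
#   --small-file-size=...       # volumes below this size count as "small"
#   --small-file-max-count=...  # how many small volumes trigger a compact
#   --no-auto-compact=true      # disable automatic compacting entirely

# Forcing a compact manually from the command line (placeholders throughout):
Duplicati.CommandLine.exe compact "<storage-url>" --dbpath="C:\path\to\job.sqlite"
```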


#25

Very good! I did not know that it would be necessary to download and upload the data again, but I believe that for servers with local backups over FTP, SSH, etc., it may be a good option.

Thank you very much for the feedback. =D


#26

Good afternoon, guys. Yesterday, after 2 days of the backup running normally, the following error occurred:

Failed: Unexpected difference in fileset version 0: 22/11/2018 21:00:00 (database id: 27), found 67153 entries, but expected 67252
Details: Duplicati.Library.Interface.UserInformationException: Unexpected difference in fileset version 0: 22/11/2018 21:00:00 (database id: 27), found 67153 entries, but expected 67252
at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency (System.Int64 blocksize, System.Int64 hashsize, System.Boolean verifyfilelists, System.Data.IDbTransaction transaction) [0x00370] in <a699962d1b954fd09198884685231873>:0 
at Duplicati.Library.Main.Operation.Backup.BackupDatabase+<>c__DisplayClass32_0.<VerifyConsistencyAsync>b__0 () [0x00000] in <a699962d1b954fd09198884685231873>:0 
at Duplicati.Library.Main.Operation.Common.SingleRunner+<>c__DisplayClass3_0.<RunOnMain>b__0 () [0x00000] in <a699962d1b954fd09198884685231873>:0 
at Duplicati.Library.Main.Operation.Common.SingleRunner+<DoRunOnMain>d__2`1[T].MoveNext () [0x000b0] in <a699962d1b954fd09198884685231873>:0 
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () [0x0000c] in <4ffb8394f71c471ab65d43c04283a838>:0 
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Threading.Tasks.Task task) [0x0004e] in <4ffb8394f71c471ab65d43c04283a838>:0 
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Threading.Tasks.Task task) [0x0002e] in <4ffb8394f71c471ab65d43c04283a838>:0 
at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd (System.Threading.Tasks.Task task) [0x0000b] in <4ffb8394f71c471ab65d43c04283a838>:0 
at System.Runtime.CompilerServices.TaskAwaiter.GetResult () [0x00000] in <4ffb8394f71c471ab65d43c04283a838>:0 
at Duplicati.Library.Main.Operation.BackupHandler+<RunAsync>d__19.MoveNext () [0x003d6] in <a699962d1b954fd09198884685231873>:0 
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () [0x0000c] in <4ffb8394f71c471ab65d43c04283a838>:0 
at CoCoL.ChannelExtensions.WaitForTaskOrThrow (System.Threading.Tasks.Task task) [0x00050] in <6973ce2780de4b28aaa2c5ffc59993b1>:0 
at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x00008] in <a699962d1b954fd09198884685231873>:0 
at Duplicati.Library.Main.Controller+<>c__DisplayClass13_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x00035] in <a699962d1b954fd09198884685231873>:0 
at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0011d] in <a699962d1b954fd09198884685231873>:0 

Log data:
2018-11-23 21:01:51 -03 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
Duplicati.Library.Interface.UserInformationException: Unexpected difference in fileset version 0: 22/11/2018 21:00:00 (database id: 27), found 67153 entries, but expected 67252
at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency (System.Int64 blocksize, System.Int64 hashsize, System.Boolean verifyfilelists, System.Data.IDbTransaction transaction) [0x00370] in <a699962d1b954fd09198884685231873>:0 
at Duplicati.Library.Main.Operation.Backup.BackupDatabase+<>c__DisplayClass32_0.<VerifyConsistencyAsync>b__0 () [0x00000] in <a699962d1b954fd09198884685231873>:0 
at Duplicati.Library.Main.Operation.Common.SingleRunner+<>c__DisplayClass3_0.<RunOnMain>b__0 () [0x00000] in <a699962d1b954fd09198884685231873>:0 
at Duplicati.Library.Main.Operation.Common.SingleRunner+<DoRunOnMain>d__2`1[T].MoveNext () [0x000b0] in <a699962d1b954fd09198884685231873>:0 
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () [0x0000c] in <4ffb8394f71c471ab65d43c04283a838>:0 
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Threading.Tasks.Task task) [0x0004e] in <4ffb8394f71c471ab65d43c04283a838>:0 
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Threading.Tasks.Task task) [0x0002e] in <4ffb8394f71c471ab65d43c04283a838>:0 
at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd (System.Threading.Tasks.Task task) [0x0000b] in <4ffb8394f71c471ab65d43c04283a838>:0 
at System.Runtime.CompilerServices.TaskAwaiter.GetResult () [0x00000] in <4ffb8394f71c471ab65d43c04283a838>:0 
at Duplicati.Library.Main.Operation.BackupHandler+<RunAsync>d__19.MoveNext () [0x003d6] in <a699962d1b954fd09198884685231873>:0

Could it have been caused by running VACUUM directly on the database without using Duplicati?


#27

I think not. If I run VACUUM via DB Browser for SQLite Portable and then kill it (because that program’s RAM usage was 18 GB), I get this error from Duplicati:

System.Data.SQLite.SQLiteException (0x80004005): database disk image is malformed
database disk image is malformed

So your error doesn’t look VACUUM-related.
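For what it’s worth, you don’t have to wait for Duplicati to trip over a damaged file; SQLite can check the image directly. A minimal sketch, with a hypothetical path and Duplicati stopped:

```shell
# Hypothetical path -- use your own job's .sqlite file, with Duplicati stopped.
sqlite3 "/path/to/Duplicati/XXXXXXXX.sqlite" "PRAGMA integrity_check;"
# A healthy database prints "ok"; a malformed image prints error rows instead.
```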


#28

If you’re saying there’s a chance you ran it outside of Duplicati while Duplicati might have been at work (such as doing a backup), then you possibly hit the access conflict I’d worried about, with stale data put back. Maybe there was even a problem in the copy of the data to the temporary file, if the database could change while it was being copied. Exactly how SQLite VACUUM is implemented is a deeper SQLite question than I can answer.

The test from @mr-flibble seems different from what I think you described, but it does show how one can get the database into that unfortunate state. I had hoped SQLite would be able to resist such problems. There’s currently such an error here, so if anyone is good at fixing these, please stop by to help with that…


#29

When I did the vacuum, Duplicati was not running the job; that was the first thing I checked before executing the command … The strange thing is that the error only occurred 2 days after I ran the command; in the meantime, it worked normally.

After rebuilding the database and running the purge-broken-files command, it returned to normal operation.
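For anyone landing here with the same “Unexpected difference in fileset” error, the recovery sequence described above looks roughly like this from the command line. This is a sketch only: `<storage-url>` and the database path are placeholders, and the same steps are also available from the job’s web UI:

```shell
# Recreate/repair the local database from the destination files:
Duplicati.CommandLine.exe repair "<storage-url>" --dbpath="C:\path\to\job.sqlite"

# Then remove data referencing files that can no longer be restored:
Duplicati.CommandLine.exe purge-broken-files "<storage-url>" --dbpath="C:\path\to\job.sqlite"
```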