Are you saying it was failing before? This is supposed to help performance (did it?), not fix things, though it might as a side effect. The 200 KB job that ran for 10 minutes without completing was probably a different issue. Please also run a connection test to make sure you’re not affected by the authentication changes at Mega, and open a new support topic if it looks like that problem has no relationship to VACUUM performance.
It’s not part of Windows, and Duplicati doesn’t ship the command, only the SQLite pieces it needs for its own access, including the weak database encryption that Windows users get by default. Getting at the database from outside might therefore require a sqlite3 build with matching encryption support, or use of --unencrypted-database with a normal sqlite3.exe. However, my general thinking is this:
To anyone who plans to vacuum Duplicati databases from anything besides the main Duplicati: please try to avoid access conflicts. Stopping the main Duplicati while working behind its back would probably reduce the risks, but better still would be not bypassing it at all (the same thinking applies to manually changing destination files). Looking at the “SQLite VACUUM” article above, it sounds like (unsurprisingly) VACUUM rebuilds the database into a temporary copy, meaning any backup that was in progress at the time could have been corrupted…
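If you do go outside Duplicati anyway, here’s a minimal sketch of the careful route using Python’s built-in sqlite3 module. It assumes Duplicati has been stopped first and that the database file is plain SQLite (e.g. --unencrypted-database is set, or a non-Windows install without the default encryption). The path is a made-up example; substitute the local database path that Duplicati shows for your job.

```python
import shutil
import sqlite3
from pathlib import Path

# Hypothetical example path -- use the actual local database path
# that Duplicati reports for the job.
db_path = Path(r"C:\Users\me\AppData\Local\Duplicati\EXAMPLE.sqlite")

# Safety copy first: VACUUM rebuilds the database into a temporary
# copy, and an interrupted or conflicting rebuild is exactly how a
# database gets hurt.
shutil.copy2(db_path, db_path.with_name(db_path.name + ".bak"))

con = sqlite3.connect(db_path)
try:
    # Fail immediately instead of waiting if something (e.g. a still
    # running Duplicati) holds a lock on the database.
    con.execute("PRAGMA busy_timeout = 0")
    con.execute("VACUUM")  # rebuilds the file and reclaims free pages
finally:
    con.close()
```

Note that Python’s sqlite3 cannot open an encrypted database; if the connect or VACUUM fails with something like “file is not a database”, the default encryption is probably still in place.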