Release: 2.1.1.101 (Canary) 2025-08-19

2.1.1.101_canary_2025-08-19

This release is a canary release intended to be used for testing.

Changes in this version

This version updates the restore process to reduce the amount of temporary space used during restores.
With the new option --restore-volume-cache-hint it is now possible to choose how much space to use; the default is 4GiB.
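
As an illustration, the option is passed like any other advanced option; the storage URL, restore path, and size value below are placeholders:

Duplicati.CommandLine.exe restore "s3://my-bucket/backup" --restore-path="C:\restore" --restore-volume-cache-hint=2GiB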

A problem with ngclient saving sizes with a non-default multiplier was fixed, and an improved list of destinations was added.

The breaking change in 2.1.1.100, --allow-empty-source, was tested and found to cause too many negative issues, so it was inverted to --prevent-empty-source: the check can be turned on, but by default a backup no longer fails if the source folder is empty.

For Debian packages, libICU is now a recommended dependency, so a missing libICU no longer prevents installing the package.
A pre-install script warns if no libICU package is found and shows how to run without libICU present.
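
If the no-libICU path uses .NET's invariant globalization mode (an assumption on my part, not confirmed by the notes), running without libICU would look something like this; the binary name is a placeholder:

export DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1
duplicati-server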

Detailed list of changes:

  • Fixed an issue with null values on Sharepoint mounted folders
  • Improved support for libICU on Debian
  • Fixed a case where testing would fail on a missing hash value
  • Added support for limiting the usage of local temporary disk space during restore
  • Inverted --allow-empty-source to --prevent-empty-source (opt-in)
  • Fixed an issue with S3 backend reading stream twice
  • Lowercased the S3 AWS client content hash for better compatibility with other providers

ngclient changes:

  • Improved searching for destinations
  • Resolved some issues with colors not being applied
  • Fixed an issue with non-default size multiplier causing invalid values to be saved
  • Streamlined size suffixes to be consistent
  • Removed support for picking bytes as the multiplier
  • Fixed issue with livelog page requesting invalid data when disabled

I went mad, upgraded all my machines and so far so good, at least with the installer - no backups yet.

I did go into my S3 backup to test the fix, which does work now with the AWS SDK (though I will stay with Minio for now), but I noticed some small issues:

The S3 logo won’t load; in fact, none of them do.



The destination section does not offer any scroll bar when the information goes beyond the page


Was this announced somewhere? I found a commit but no PR. The commit says:

“Add new destination searchable list + new view for selected destination”

My install keeps that image at webroot\ngclient\assets\dest-icons\aws.png

webroot\ngclient\chunk-KDYD6LMI.js has icon:“/assets/dest-icons/aws.png”

ProcessMonitor shows failing access to webroot\assets\dest-icons\aws.png

The leading slash looked a bit suspicious. How does it get into the ngclient tree?

It turns out “Relative paths for images” was done 7 hours ago in this commit.

Before I found that, I had already tried adding a symbolic link to align things:

webroot>mklink /d assets ngclient\assets
symbolic link created for assets <<===>> ngclient\assets
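
On Linux, the equivalent would presumably be (run from the webroot directory; untested on my side):

ln -s ngclient/assets assets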

You can try the workaround, or presumably a fixed Canary will come out later on.


Thanks for reporting. It is because the images load with absolute paths (which works for developer setups, but fails when deployed). I have fixed the issue for the next release.

For now, we are doing a bit more rapid development on ngclient, so not all changes are made as PRs.

Nice trick!

Yes, it will roll into the next release.


Upgraded my systems that were running 2.1.1.100. No issues with the upgrade.

I don’t know if this is related to the “missing files due to statusCode 503” error I reported in the .100 thread or not, but I will report here whether or not I still see it.

Is there an outstanding task to get the new UI working properly on mobile browsers?

It’s completely unusable at the moment:

Still sometimes getting missing files on the next backup after a “good” backup.

The log looks similar to the bits I posted in the .100 thread, except that there are multiple “503”s.

Please let me know if you want a bugreport and/or the logfile.

Since reporting this I have let that specific backup sit in an error state (it is just the backup for the Duplicati databases). Since then I have not had this error on any other backup on that Windows 10 system, or on the Linux system running 2.1.1.101.

Update - I raised the read-write-timeout to 60 seconds, and so far no “missing files” errors.

So there is a workaround, but it seems to me this is hiding a more fundamental error.
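
For anyone wanting to try the same workaround, the advanced option would look something like this (assuming Duplicati's usual duration syntax; the backup URL and source path are placeholders):

Duplicati.CommandLine.exe backup "b2://my-bucket/folder" "C:\source" --read-write-timeout=60s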

I wonder if the devs would be able to look at this easily with the Duplicati versions that showed it?

Post-fix Duplicati has been good for me at the default, though I still worry because packet loss can stall TCP for minutes – will that cause read-write-timeout to fire?

I don’t have any way to stall TCP (maybe a dev lab project?), but I did try using a tiny timeout.

Unfortunately, the switch back from B2 S3 to B2 native (which I assume you’re on) found numerous files unexpected by both Duplicati and me. I’m thinking this is because B2 buckets default to versioning enabled, which makes B2 S3 deletes into soft deletes (the evidence is in their web viewer).

I first cut version retention to 7 days in their web interface, and the next day had fewer extras. I then changed it to keep only the latest version, waited hours, researched, and read that the cleanup runs at midnight GMT.
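
For reference, the web-interface retention setting maps to a bucket lifecycle rule. With the b2 command-line tool it would be roughly as below; the rule keys are from the B2 lifecycle API, but the exact CLI syntax varies by version, so treat this as a sketch:

b2 update-bucket --lifecycleRules '[{"fileNamePrefix": "", "daysFromHidingToDeleting": 7, "daysFromUploadingToHiding": null}]' my-bucket allPrivate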

I decided to let Repair loose to delete the extra files faster. I got some noise, but the worst was:

The operation Test has failed => Detected 3 volumes with missing filesets

reported by my after-backup test script, which looks hard at a “copy” of the backup and DB. The backup is copied with rclone sync. The DB copy is only the main DB file, which turned out to be the problem with the new scheme, plus a possible bug in Duplicati.

Write-Ahead Logging is the new scheme. I’m on Canary, so this is still using the usual SQLite.

What I think is the problem is that Duplicati left over 30 MB in its BMTSLGPDSE.sqlite-wal file, which it does not have open at the Windows level (per Resource Monitor). Duplicati has run continuously since the file’s 12:15:57 modification time, so a crash can’t be used to excuse it.
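
For what it’s worth, a stranded -wal file can be merged manually with a checkpoint, but only while nothing else has the database open; sqlite3 is the stock SQLite shell, and the path here is a placeholder:

sqlite3 "C:\path\to\BMTSLGPDSE.sqlite" "PRAGMA wal_checkpoint(TRUNCATE);"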

The profiling log chugs along doing its thing until it runs into the error from my script:

2025-09-02 12:15:52 -04 - [Information-Duplicati.Library.Main.Operation.TestHandler-Test results]: Successfully verified 3 remote files
2025-09-02 12:15:52 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ReusableTransaction-CommitAfterVerification]: Starting - CommitTransaction: CommitAfterVerification
2025-09-02 12:15:52 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ReusableTransaction-CommitAfterVerification]: CommitTransaction: CommitAfterVerification took 0:00:00:00.000
2025-09-02 12:15:52 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ReusableTransaction-Dispose]: Starting - Rollback during transaction dispose
2025-09-02 12:15:52 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ReusableTransaction-Dispose]: Rollback during transaction dispose took 0:00:00:00.000
2025-09-02 12:15:52 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQueryAsync]: Starting - ExecuteNonQueryAsync: PRAGMA optimize
2025-09-02 12:15:52 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQueryAsync]: ExecuteNonQueryAsync: PRAGMA optimize took 0:00:00:00.000
2025-09-02 12:15:52 -04 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'synchronous=NORMAL'.
2025-09-02 12:15:52 -04 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'temp_store=MEMORY'.
2025-09-02 12:15:52 -04 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'journal_mode=WAL'.
2025-09-02 12:15:52 -04 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'cache_size=-65536'.
2025-09-02 12:15:52 -04 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'mmap_size=67108864'.
2025-09-02 12:15:52 -04 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'threads=8'.
2025-09-02 12:15:52 -04 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'shared_cache=true'.
2025-09-02 12:15:52 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64Async]: Starting - ExecuteScalarInt64Async: 
                    INSERT INTO "Operation" (
                        "Description",
                        "Timestamp"
                    )
                    VALUES (
                        "Backup",
                        1756829752
                    );
                    SELECT last_insert_rowid();
                
2025-09-02 12:15:52 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64Async]: ExecuteScalarInt64Async: 
                    INSERT INTO "Operation" (
                        "Description",
                        "Timestamp"
                    )
                    VALUES (
                        "Backup",
                        1756829752
                    );
                    SELECT last_insert_rowid();
                 took 0:00:00:00.000
2025-09-02 12:15:52 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ReusableTransaction-Dispose]: Starting - Rollback during transaction dispose
2025-09-02 12:15:52 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ReusableTransaction-Dispose]: Rollback during transaction dispose took 0:00:00:00.000
2025-09-02 12:15:52 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQueryAsync]: Starting - ExecuteNonQueryAsync: PRAGMA optimize
2025-09-02 12:15:52 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQueryAsync]: ExecuteNonQueryAsync: PRAGMA optimize took 0:00:00:00.000
2025-09-02 12:15:52 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunBackup]: Running Backup took 0:00:03:20.448
2025-09-02 12:15:52 -04 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'synchronous=NORMAL'.
2025-09-02 12:15:52 -04 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'temp_store=MEMORY'.
2025-09-02 12:15:52 -04 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'journal_mode=WAL'.
2025-09-02 12:15:52 -04 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'cache_size=-65536'.
2025-09-02 12:15:52 -04 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'mmap_size=67108864'.
2025-09-02 12:15:52 -04 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'threads=8'.
2025-09-02 12:15:52 -04 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'shared_cache=true'.
2025-09-02 12:15:52 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: 
                    SELECT
                        "ID",
                        "Timestamp"
                    FROM "Operation"
                    ORDER BY "Timestamp"
                    DESC LIMIT 1
                
2025-09-02 12:15:52 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: 
                    SELECT
                        "ID",
                        "Timestamp"
                    FROM "Operation"
                    ORDER BY "Timestamp"
                    DESC LIMIT 1
                 took 0:00:00:00.000
2025-09-02 12:15:52 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ReusableTransaction-Unnamed commit]: Starting - CommitTransaction: Unnamed commit
2025-09-02 12:15:52 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ReusableTransaction-Unnamed commit]: CommitTransaction: Unnamed commit took 0:00:00:00.000
2025-09-02 12:15:52 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ReusableTransaction-Unnamed commit]: Starting - CommitTransaction: Unnamed commit
2025-09-02 12:15:52 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ReusableTransaction-Unnamed commit]: CommitTransaction: Unnamed commit took 0:00:00:00.000
2025-09-02 12:15:52 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ReusableTransaction-Dispose]: Starting - Rollback during transaction dispose
2025-09-02 12:15:52 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ReusableTransaction-Dispose]: Rollback during transaction dispose took 0:00:00:00.000
2025-09-02 12:15:52 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQueryAsync]: Starting - ExecuteNonQueryAsync: PRAGMA optimize
2025-09-02 12:15:52 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQueryAsync]: ExecuteNonQueryAsync: PRAGMA optimize took 0:00:00:00.000
2025-09-02 12:18:10 -04 - [Warning-Duplicati.Library.Modules.Builtin.RunScript-StdErrorNotEmpty]: The script "HP4 clone 12 Backblaze B2\sync.bat" reported error messages: 
ErrorID: DatabaseInconsistency
Detected 3 volumes with missing filesets

The 12:15:57 time on the main DB is when my script did a CommandLine compare on it, but I’m hesitant to quickly blame that, because IMO Duplicati should have cleaned up before then.

I tracked down the cause of my volumes with missing filesets to three dlist files that were in Deleting state in my copy of the DB but looked OK in the original. There was no difference in the main DB file content. Then I found Duplicati had left a 30 MB change out of the main DB. Putting those together solved the mystery of the wrong file state that I had accidentally found.

Maybe the devs can comment on that issue. Another oddity is that log-socket-data=256 data doesn’t seem as long or as recognizable as I had expected. Is anyone seeing good data?

EDIT 1:

This was 2.1.1.101 Canary, so Duplicati is presumably using Microsoft.Data.SQLite here.

EDIT 2:

Duplicati has been up since 7:15:01 AM 8/31/2025. Its wal file was created at 10:23:53 AM yesterday.
As expected, DB Browser in normal (not read-only) mode merged the .wal into the primary.
Looking at the differences confirms the changes breaking the test were in RemoteVolume.

sqldiff.exe --summary "C:\tmp\waltest\BMTSLGPDSE.sqlite" "C:\tmp\waltest\BMTSLGPDSE-merged.sqlite"
Block: 0 changes, 0 inserts, 0 deletes, 138009 unchanged
BlocklistHash: 0 changes, 0 inserts, 0 deletes, 1617 unchanged
Blockset: 0 changes, 0 inserts, 0 deletes, 20859 unchanged
BlocksetEntry: 0 changes, 0 inserts, 0 deletes, 149275 unchanged
ChangeJournalData: 0 changes, 0 inserts, 0 deletes, 0 unchanged
Configuration: 0 changes, 0 inserts, 0 deletes, 5 unchanged
DeletedBlock: 0 changes, 0 inserts, 2255 deletes, 6755 unchanged
DuplicateBlock: 0 changes, 0 inserts, 0 deletes, 0 unchanged
FileLookup: 0 changes, 0 inserts, 0 deletes, 16228 unchanged
Fileset: 0 changes, 0 inserts, 0 deletes, 25 unchanged
FilesetEntry: 0 changes, 0 inserts, 0 deletes, 339624 unchanged
IndexBlockLink: 0 changes, 0 inserts, 17 deletes, 385 unchanged
LogData: 0 changes, 2 inserts, 2 deletes, 112 unchanged
Metadataset: 0 changes, 0 inserts, 0 deletes, 14395 unchanged
Operation: 0 changes, 0 inserts, 0 deletes, 1189 unchanged
PathPrefix: 0 changes, 0 inserts, 0 deletes, 6250 unchanged
RemoteOperation: 0 changes, 39 inserts, 12 deletes, 2504 unchanged
RemoteVolume: 54 changes, 0 inserts, 0 deletes, 1045 unchanged
Version: 0 changes, 0 inserts, 0 deletes, 1 unchanged
sqlite_stat1: 0 changes, 0 inserts, 0 deletes, 23 unchanged
sqlite_stat4: 0 changes, 0 inserts, 0 deletes, 394 unchanged

sqldiff.exe --table RemoteVolume "C:\tmp\waltest\BMTSLGPDSE.sqlite" "C:\tmp\waltest\BMTSLGPDSE-merged.sqlite"
UPDATE RemoteVolume SET VerificationCount=3 WHERE ID=45;
UPDATE RemoteVolume SET Size=-1, State='Deleted', DeleteGraceTime=1756836940 WHERE ID=3007;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3012;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3013;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3014;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3015;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3016;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3017;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3018;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3019;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3020;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3021;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3022;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3023;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3024;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3025;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3026;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3027;
UPDATE RemoteVolume SET Size=-1, State='Deleted', DeleteGraceTime=1756836940 WHERE ID=3042;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3043;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3044;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3045;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3046;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3047;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3048;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3049;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3051;
UPDATE RemoteVolume SET VerificationCount=3 WHERE ID=3375;
UPDATE RemoteVolume SET Size=-1, State='Deleted', DeleteGraceTime=1756836940 WHERE ID=3517;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3518;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3519;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3520;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3521;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3522;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3523;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3524;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3525;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3526;
UPDATE RemoteVolume SET OperationID=1189, Size=-1, Hash=NULL, State='Deleted', DeleteGraceTime=1756836945 WHERE ID=3527;
UPDATE RemoteVolume SET State='Verified', VerificationCount=3 WHERE ID=3839;
UPDATE RemoteVolume SET State='Verified' WHERE ID=3840;
UPDATE RemoteVolume SET State='Verified' WHERE ID=3841;
UPDATE RemoteVolume SET State='Verified' WHERE ID=3842;
UPDATE RemoteVolume SET State='Verified' WHERE ID=3843;
UPDATE RemoteVolume SET State='Verified' WHERE ID=3844;
UPDATE RemoteVolume SET State='Verified' WHERE ID=3845;
UPDATE RemoteVolume SET State='Verified' WHERE ID=3846;
UPDATE RemoteVolume SET State='Verified' WHERE ID=3847;
UPDATE RemoteVolume SET State='Verified' WHERE ID=3848;
UPDATE RemoteVolume SET State='Verified' WHERE ID=3849;
UPDATE RemoteVolume SET State='Verified' WHERE ID=3850;
UPDATE RemoteVolume SET State='Verified' WHERE ID=3851;
UPDATE RemoteVolume SET State='Verified' WHERE ID=3853;
UPDATE RemoteVolume SET State='Verified' WHERE ID=3854;

The three that were wrongly in Deleting state were the three with the shorter version of the lines:

UPDATE RemoteVolume SET Size=-1, State='Deleted', DeleteGraceTime=1756836940 WHERE ID=3007;

Marking it Deleted makes it correct for it not to be in the Fileset table, per the code, likely around here.
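
For anyone wanting to check their own DB, here is a sketch of the kind of query the test presumably runs; the table and column names are from Duplicati’s schema as I understand it, so treat them as assumptions:

sqlite3 BMTSLGPDSE.sqlite "SELECT Name, State FROM RemoteVolume WHERE Type = 'Files' AND State NOT IN ('Deleted', 'Deleting') AND ID NOT IN (SELECT VolumeID FROM Fileset);"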

In my case, 2.1.1.101 on Windows 10, using snapshots, backing up the folder with the Duplicati databases to B2 native with the default read-write-timeout of 30s, hits the issue on almost every backup.

Yes, B2 native. I don’t think I have versioning/soft deletes enabled, but I will check. Even if it is enabled, there are 3 backups that use folders within the same bucket: OS, user area, and Duplicati. Only the Duplicati-area backup has been failing consistently. I have seen less frequent errors on the others, but I’m thinking only 1 or 2 total since updating.

If the issue is with Microsoft.Data.SQLite (or with how Duplicati is using it), it would explain why I have only been seeing this on the Windows system and not the Linux system. I plan to update one of my other Linux systems since things seem a bit more stable there.

I haven’t tested Beta much. That one omits the Microsoft.Data.SQLite move and its related changes.

I put some detailed data in edits to my post above, in case it helps the devs figure it out.

Hmmm. B2 says “Keep all versions”, so I guess that means versioning is enabled. However, since every file Duplicati creates has a unique filename, does versioning even come into play? I can see how a soft delete could lead to unexpected “deleted” files still being present.
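
One way to see whether soft-deleted versions are piling up is to list object versions over the S3-compatible API; list-object-versions is a real AWS CLI command, but the bucket, prefix, and endpoint URL here are placeholders (B2 endpoints vary by region):

aws s3api list-object-versions --bucket my-bucket --prefix duplicati/ --endpoint-url https://s3.us-west-004.backblazeb2.com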

One bit I misremembered: the Duplicati config backup does not use the same bucket as the rest of the backups on the Windows 10 system. Rather, all my systems use folders within the same bucket for their Duplicati config backups. So that is 1 Windows 10 system and 1 Linux system running 2.1.1.101, plus a few Linux systems running a mix of versions prior to 2.1.1.101, and at the moment, out of all of them, the only one having an issue is the Windows 10 system.

I think it’s pretty minimal until one switches to S3. duplicati-verification.json files might get lots of versions if one has upload-verification-file=True, but the dblock buildup is much bigger, which is a pitfall for anybody on a provider that defaults S3 buckets to versioning enabled.

One doesn’t notice it (or at least I didn’t) until one watches space use, or returns to native.

Retaining multiple versions of objects with S3 Versioning (AWS docs)

By default, S3 Versioning is disabled on buckets, and you must explicitly enable it. For more information, see Enabling versioning on buckets.

S3-Compatible API Bucket Versions (Backblaze docs)

The buckets that are in Backblaze B2 Cloud Storage are always versioned by default.
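
For what it’s worth, either kind of bucket can be checked with the AWS CLI; get-bucket-versioning is a real command, while the endpoint URL is a placeholder since B2 endpoints vary by region:

aws s3api get-bucket-versioning --bucket my-bucket --endpoint-url https://s3.us-west-004.backblazeb2.com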

There are open enhancement issues on both.

Add soft-delete option to B2 backend #5727

Support AWS S3 soft-delete #5930

Duplicati already supports the versioning

which maybe is what I fell into by accident, although I’m wondering if something like my 7-day retention might add some ransomware resilience. Getting files back might be wanted.

As a side note, my rclone sync using B2 native didn’t sync aging deleted files to my drive.
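
That matches rclone’s documented behavior: by default its b2 backend only sees current versions, and including old ones takes an extra flag. A quick way to compare (--b2-versions is a real rclone flag; the bucket and folder are placeholders):

rclone ls b2:my-bucket/folder
rclone ls --b2-versions b2:my-bucket/folder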