Source: 5.69 GB, Backup: 1.03 MB? (Solved)

I’m back again, and having a helluva time trying to get a full backup of my /home folder. This filesize difference makes me awfully suspicious. And I can’t verify, as I’m aiming for cold storage.

It was going pretty well yesterday in small batches. Then I got excited/impatient and tried uploading about 60 gigs; a few hours later, I couldn’t tell if they were there or not.

Lesson learned; I’ll start with the most important stuff to backup and go in tiny batches (especially since I’m trying cold storage and can’t verify).

So I’m starting small again; however, I’m running into errors left and right. If I stop a job, I have to kill mono-sgen before restarting it (that seems to be the fix for the “database is locked” issue).
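For reference, this is roughly what I do to clear the stuck process before restarting (a sketch; `mono-sgen` is the process name Duplicati runs under on my system):

```shell
# If a stopped job leaves the database locked, stop the stuck mono process
# before restarting Duplicati. pkill exits non-zero when nothing matched,
# so tolerate that case. -x matches the exact process name only.
pkill -x mono-sgen || true

# Confirm nothing is left before restarting the service/tray icon.
if pgrep -x mono-sgen >/dev/null; then
  echo "mono-sgen still running" >&2
else
  echo "no mono-sgen processes left"
fi
```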

I did a dry-run, saw the source size (5.5G), then tried to run it. It issued a warning. I clicked “Show”, and got this:

  • Dec 30, 2018 10:09 AM: Result

  • Dec 30, 2018 10:07 AM: Result

  • Dec 30, 2018 10:06 AM: Result

  • Dec 30, 2018 10:05 AM: Result

  • Dec 30, 2018 10:01 AM: Result

I’m seeing the following put calls in the remote log; however, I don’t see any files in my S3 bucket (EDIT: OK, there are 3 files there now that add up to 1.03 KB):

  • Dec 30, 2018 10:05 AM: put duplicati-20181230T150531Z.dlist.zip.gpg

  • Dec 30, 2018 10:05 AM: put duplicati-ie92a344088394422b54055a8fd312939.dindex.zip.gpg

etc.
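To cross-check what actually landed in the bucket (independent of Duplicati’s UI), a listing with totals helps. A sketch, assuming the AWS CLI is installed and configured; the bucket name is a placeholder:

```shell
# List everything currently in the bucket and total it up.
# --summarize prints "Total Objects" and "Total Size" at the end.
BUCKET="my-backup-bucket"   # placeholder -- use your own bucket name

if command -v aws >/dev/null 2>&1; then
  aws s3 ls "s3://$BUCKET/" --recursive --human-readable --summarize \
    || echo "listing failed (check credentials/region)" >&2
else
  echo "aws CLI not available; skipping bucket listing" >&2
fi
```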

System Logs:
  • Dec 30, 2018 9:46 AM: Failed while executing “Backup” with id: 8

    System.Threading.ThreadAbortException: Thread was being aborted.

  • Dec 30, 2018 9:46 AM: Error in worker

    System.Threading.ThreadAbortException: Thread was being aborted.

The Live Error Log:

  • Dec 30, 2018 10:09 AM: The operation Backup has completed

  • Dec 30, 2018 10:09 AM: Running Backup took 0:00:00:05.761

  • Dec 30, 2018 10:09 AM: ExecuteNonQuery: DELETE FROM “RemoteOperation” WHERE “Timestamp” < 1543590566 took 0:00:00:00.000

  • Dec 30, 2018 10:09 AM: Starting - ExecuteNonQuery: DELETE FROM “RemoteOperation” WHERE “Timestamp” < 1543590566

  • Dec 30, 2018 10:09 AM: ExecuteNonQuery: DELETE FROM “LogData” WHERE “Timestamp” < 1543590566 took 0:00:00:00.000

  • Dec 30, 2018 10:09 AM: Starting - ExecuteNonQuery: DELETE FROM “LogData” WHERE “Timestamp” < 1543590566

  • Dec 30, 2018 10:09 AM: ExecuteReader: SELECT “ID”, “Timestamp” FROM “Fileset” ORDER BY “Timestamp” DESC took 0:00:00:00.000

  • Dec 30, 2018 10:09 AM: Starting - ExecuteReader: SELECT “ID”, “Timestamp” FROM “Fileset” ORDER BY “Timestamp” DESC

  • Dec 30, 2018 10:09 AM: ExecuteReader: SELECT “ID”, “Timestamp” FROM “Fileset” ORDER BY “Timestamp” DESC took 0:00:00:00.000

  • Dec 30, 2018 10:09 AM: Starting - ExecuteReader: SELECT “ID”, “Timestamp” FROM “Fileset” ORDER BY “Timestamp” DESC

  • Dec 30, 2018 10:09 AM: CommitFinalizingBackup took 0:00:00:00.000

  • Dec 30, 2018 10:09 AM: Starting - CommitFinalizingBackup

  • Dec 30, 2018 10:09 AM: CommitAfterUpload took 0:00:00:00.000

  • Dec 30, 2018 10:09 AM: Starting - CommitAfterUpload

  • Dec 30, 2018 10:09 AM: CommitUpdateRemoteVolume took 0:00:00:00.240

  • Dec 30, 2018 10:09 AM: Starting - CommitUpdateRemoteVolume

  • Dec 30, 2018 10:09 AM: ExecuteNonQuery: DROP TABLE IF EXISTS “DelVolSetIds-04205DD2D79CB64BBD4605BFBFB4F7BA” took 0:00:00:00.000

  • Dec 30, 2018 10:09 AM: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS “DelVolSetIds-04205DD2D79CB64BBD4605BFBFB4F7BA”

  • Dec 30, 2018 10:09 AM: ExecuteNonQuery: DROP TABLE IF EXISTS “DelBlockSetIds-04205DD2D79CB64BBD4605BFBFB4F7BA” took 0:00:00:00.000

  • Dec 30, 2018 10:09 AM: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS “DelBlockSetIds-04205DD2D79CB64BBD4605BFBFB4F7BA”

  • Dec 30, 2018 10:09 AM: ExecuteNonQuery: DELETE FROM “DelVolSetIds-04205DD2D79CB64BBD4605BFBFB4F7BA” took 0:00:00:00.000

  • Dec 30, 2018 10:09 AM: Starting - ExecuteNonQuery: DELETE FROM “DelVolSetIds-04205DD2D79CB64BBD4605BFBFB4F7BA”

  • Dec 30, 2018 10:09 AM: ExecuteNonQuery: DELETE FROM “DelBlockSetIds-04205DD2D79CB64BBD4605BFBFB4F7BA” took 0:00:00:00.000

  • Dec 30, 2018 10:09 AM: Starting - ExecuteNonQuery: DELETE FROM “DelBlockSetIds-04205DD2D79CB64BBD4605BFBFB4F7BA”

  • Dec 30, 2018 10:09 AM: ExecuteNonQuery: DELETE FROM “DeletedBlock” WHERE “VolumeID” IN (SELECT “ID” FROM “DelVolSetIds-04205DD2D79CB64BBD4605BFBFB4F7BA” ) took 0:00:00:00.000

  • Dec 30, 2018 10:09 AM: Starting - ExecuteNonQuery: DELETE FROM “DeletedBlock” WHERE “VolumeID” IN (SELECT “ID” FROM “DelVolSetIds-04205DD2D79CB64BBD4605BFBFB4F7BA” )

  • Dec 30, 2018 10:09 AM: ExecuteNonQuery: DELETE FROM “Block” WHERE “VolumeID” IN (SELECT “ID” FROM “DelVolSetIds-04205DD2D79CB64BBD4605BFBFB4F7BA” ) took 0:00:00:00.000

  • Dec 30, 2018 10:09 AM: Starting - ExecuteNonQuery: DELETE FROM “Block” WHERE “VolumeID” IN (SELECT “ID” FROM “DelVolSetIds-04205DD2D79CB64BBD4605BFBFB4F7BA” )

  • Dec 30, 2018 10:09 AM: ExecuteNonQuery: DELETE FROM “BlocklistHash” WHERE “Hash” IN (SELECT “Hash” FROM “Block” WHERE “VolumeID” IN (SELECT “ID” FROM “DelVolSetIds-04205DD2D79CB64BBD4605BFBFB4F7BA” )) took 0:00:00:00.000

  • Dec 30, 2018 10:09 AM: Starting - ExecuteNonQuery: DELETE FROM “BlocklistHash” WHERE “Hash” IN (SELECT “Hash” FROM “Block” WHERE “VolumeID” IN (SELECT “ID” FROM “DelVolSetIds-04205DD2D79CB64BBD4605BFBFB4F7BA” ))

Note that I’m running with no-autocompact and no-verify for cold-storage use. I also have an exclude filter for files under 1 KB.

EDIT: OK, it looks like this was just some confusing interface behavior. I raised the exclude filter and the upload size increased. So it appears the Source figure lists the total size of the source, including anything excluded (or, at the very least, including anything excluded based on file size).
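A rough way to see how much of a source tree a “files under 1 KB” exclude would drop is to compare the total size against the size of only the files above 1 KiB. A sketch (the source path is an example, and `-printf '%k'` assumes GNU find):

```shell
SRC="$HOME"   # example source path

# Total size of the tree in KiB (includes directory metadata).
TOTAL_KB=$(du -sk "$SRC" 2>/dev/null | cut -f1)

# Sum only files larger than 1 KiB, in KiB (find rounds sizes up).
KEPT_KB=$(find "$SRC" -type f -size +1k -printf '%k\n' 2>/dev/null \
  | awk '{s+=$1} END {print s+0}')

echo "total: ${TOTAL_KB} KiB, files over 1 KiB only: ${KEPT_KB} KiB"
```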

Did you try clicking on any of those “Result” timestamp lines to see the details (which would likely have included the actual warnings)?

Can you confirm that the OS-reported source size matches the Duplicati-reported Source size (regardless of filter/exclude settings)?

I just want to try to rule out an actual “Source size” bug if possible. I think there used to be an exclude-by-size bug, but I thought it had been resolved…

This sounds curious. When a filter of mine accidentally excluded all files, my source metadata reported 0 GB in size.

I ran into a different problem (Detected non-empty blocksets with no associated blocks!) and am in the middle of a database repair. I’m guessing those logs you’re looking for are part of the old database and I’d have to restore it to see what was going on. Is this correct?
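For what it’s worth, the repair can also be run from the CLI against the same backend URL and local database the job uses. A hedged sketch (the URL and dbpath are placeholders):

```shell
# Placeholder path to the job's local SQLite database.
DB="$HOME/.config/Duplicati/backup.sqlite"

if command -v duplicati-cli >/dev/null 2>&1; then
  # Rebuilds/repairs the local database from the remote file listing.
  duplicati-cli repair "s3://my-backup-bucket/home" --dbpath="$DB" || true
else
  echo "duplicati-cli not installed; run Repair from the job's menu instead" >&2
fi
```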

What I can remember is this: my backup config wouldn’t save a byte-quantity value for the size filter. I had to go up to the kilobyte range for it to save in the GUI.

Source size has gone up as I’ve added other folders, obviously. However, I haven’t been dealing with size-based excludes (since that would un-backup all the larger files from the other folders), so I don’t think I have any useful data there.

As a side-note, are database-repairs even possible without thawing cold storage?

Yep - logs are stored in the job & main databases, so during a database recreate you’ll lose the job-level logs. I think you keep them during a repair, though.

No - the database repair uses the remote dlist (and maybe dindex) files to figure out what’s wrong with the local database. (I think in certain scenarios it may even try to use the dblock files to repair bad hashes).

So if you thaw the dindex & dlist files you’ll PROBABLY be able to do a repair - but the thaw is still needed.
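Since only the dlist/dindex files need thawing for a repair, a restore request can be issued just for those objects. A hedged sketch with the AWS CLI (bucket name is a placeholder; days and retrieval tier are example values):

```shell
BUCKET="my-backup-bucket"   # placeholder

if command -v aws >/dev/null 2>&1; then
  # List all keys, keep only the dlist/dindex files, and request a
  # temporary Glacier restore for each one.
  aws s3 ls "s3://$BUCKET/" --recursive 2>/dev/null \
    | awk '{print $4}' \
    | grep -E '\.(dlist|dindex)\.' \
    | while read -r KEY; do
        aws s3api restore-object --bucket "$BUCKET" --key "$KEY" \
          --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Bulk"}}'
      done
else
  echo "aws CLI not available" >&2
fi
```

The Bulk tier is the cheapest but slowest retrieval option; once the objects are restored, the repair should be able to read them.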