What does "Completing previous backup..." mean?

I just completed a 5-day backup (around 300 GB) to Dropbox, and for the last 2 hours the UI has been showing me “Completing previous backup…”. What does that mean? What is it doing, concretely? I can’t find anything useful in the logs.

I’m not positive, but I think it means your previous backup didn’t fully finish (obviously), most likely in the sense that some files weren’t uploaded at all or were only uploaded partially.

When Duplicati starts a backup, one of the first things it does is check the destination to make sure all expected files (as listed in the local database) are present and have the correct size. If they aren’t, it needs to fix the problem before it can start a new backup.
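
If it helps to make that concrete, here’s a rough Python sketch of what that startup check amounts to. This is not Duplicati’s actual code; find_remote_problems and remote_listing are made-up names, and the "Remotevolume" table is the one that shows up in the profiling log later in this thread.

import sqlite3

def find_remote_problems(db_path, remote_listing):
    """remote_listing: dict of file name -> size, as reported by the backend."""
    con = sqlite3.connect(db_path)
    problems = []
    for name, size in con.execute('SELECT "Name", "Size" FROM "Remotevolume"'):
        if name not in remote_listing:
            problems.append((name, "missing"))
        elif size >= 0 and remote_listing[name] != size:
            # Only flag real mismatches; a negative size would mean "unknown".
            problems.append((name, "wrong size"))
    con.close()
    return problems

Anything a check like that returns would have to be repaired before a new backup can start.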

Assuming I’m correct (which I may not be), what exactly was found wrong and how it gets fixed isn’t something I know much about. If “Show Logs” -> “Live” -> “Profiling” isn’t showing you any useful info, I’m afraid somebody who has run into this before will have to help us out. :frowning:

After a reboot, it says “Verifying backend data…”. It seems to be doing the same thing, just with a different label :slight_smile: Very strange…

Is Show Logs > Live > Profiling the most verbose log I can get?
The last log line is this one, from more than 7 hours ago:

Oct 26, 2017 12:27 AM: Message
Re-creating missing index file for duplicati-b1f28b2774ee34f56b957d276950b57a9.dblock.zip

I also see a mono-sgen64 process taking up 100% CPU.

In the GUI, yes - I think that is the most verbose.

There are some log files you could review, but I don’t know if they’re updated more or less frequently than the Profiling GUI.


I think mono-sgen64 is the “garbage collection” part of the code that cleans up memory use and such when processing is complete. I don’t know why it would be so busy, nor why nothing has been logged in Duplicati for 7 hours…

After reading the link below, I think mono-sgen64 is just the mono VM using the sgen garbage collector. So I think it’s just the duplicati server itself.

http://www.mono-project.com/docs/advanced/garbage-collector/sgen

I’m taking a look at the other logs, thanks!

Yes, that is the correct answer. It builds a dlist file with as much data as possible (i.e., only fully stored files) and uploads that.
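
Loosely sketched in Python (this is just the idea, not Duplicati’s real code, and the file-entry shape here is hypothetical), “as much data as possible” means filtering out anything that wasn’t fully stored before writing the filelist:

import json, zipfile

def write_partial_dlist(path, files):
    """files: iterable of (entry_dict, fully_stored) pairs -- a made-up shape."""
    complete = [entry for entry, fully_stored in files if fully_stored]
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as z:
        # dlist volumes are zip archives with a filelist.json inside.
        z.writestr("filelist.json", json.dumps(complete))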

That means it is looking at the remote storage and checking that all files are present. It can also happen after the backup is complete, when it checks whether the files were uploaded correctly and downloads a few of them to verify that they have not been modified.
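
The sample check at the end is roughly this, in Python pseudocode: download_file stands in for whatever the backend provides, and I’m assuming (based on the "Hash" and "Size" columns in the profiling log later in this thread) that the database stores a base64 SHA-256 hash and a size per remote volume.

import base64, hashlib

def verify_sample(volumes, download_file, samples=1):
    """volumes: list of (name, expected_hash, expected_size) from the database."""
    for name, expected_hash, expected_size in volumes[:samples]:
        data = download_file(name)
        actual = base64.b64encode(hashlib.sha256(data).digest()).decode()
        if len(data) != expected_size or actual != expected_hash:
            raise ValueError("remote file %s failed verification" % name)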

Thanks for confirming that, but it still seems to be going nowhere. I started the service 3 days ago with extra logging:

./duplicati --log-file=/Users/xastor/duplicati.log --log-level=Profiling

These are the last lines of that log:

2017-10-28 17:47:01Z - Information: Re-creating missing index file for duplicati-b1f28b2774ee34f56b957d276950b57a9.dblock.zip
2017-10-28 17:47:01Z - Profiling: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (?, ?, ?, ?, ?, ?, ?); SELECT last_insert_rowid();
2017-10-28 17:47:01Z - Profiling: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (?, ?, ?, ?, ?, ?, ?); SELECT last_insert_rowid(); took 00:00:00.000
2017-10-28 17:47:01Z - Profiling: Starting - ExecuteReader: SELECT "Name", "Hash", "Size" FROM "RemoteVolume" WHERE "Name" = ?
2017-10-28 17:47:01Z - Profiling: ExecuteReader: SELECT "Name", "Hash", "Size" FROM "RemoteVolume" WHERE "Name" = ? took 00:00:00.000
2017-10-28 17:47:01Z - Profiling: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Remotevolume" WHERE "Name" = ?
2017-10-28 17:47:01Z - Profiling: ExecuteScalarInt64: SELECT "ID" FROM "Remotevolume" WHERE "Name" = ? took 00:00:00.000
2017-10-28 17:47:01Z - Profiling: Starting - ExecuteReader: SELECT DISTINCT "Hash", "Size" FROM "Block" WHERE "VolumeID" = ?
2017-10-28 17:47:01Z - Profiling: ExecuteReader: SELECT DISTINCT "Hash", "Size" FROM "Block" WHERE "VolumeID" = ? took 00:00:00.000
2017-10-28 17:47:02Z - Profiling: Starting - ExecuteReader: SELECT "A"."Hash", "C"."Hash" FROM (SELECT "BlocklistHash"."BlocksetID", "Block"."Hash", * FROM "BlocklistHash","Block" WHERE "BlocklistHash"."Hash" = "Block"."Hash" AND "Block"."VolumeID" = ?) A, "BlocksetEntry" B, "Block" C WHERE "B"."BlocksetID" = "A"."BlocksetID" AND "B"."Index" >= ("A"."Index" * 3200) AND "B"."Index" < (("A"."Index" + 1) * 3200) AND "C"."ID" = "B"."BlockID" ORDER BY "A"."BlocksetID", "B"."Index"

The mono-sgen process has been chugging along at 100% CPU for the last 3 days.
I’m running the latest beta on macOS and my backup target is Dropbox. Backup size is around 300 GB.

Maybe I’d better file a bug report with all this info?

I would not expect it to take days to re-create the index file, so obviously we need to tune that query.
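
If you want to see where the time goes, pointing Python’s sqlite3 at a copy of the job database and asking SQLite for its query plan should show whether it is falling back to full table scans. The query text is copied from the log above; the file name and the bound volume id are just placeholders.

import sqlite3

QUERY = """
SELECT "A"."Hash", "C"."Hash"
FROM (SELECT "BlocklistHash"."BlocksetID", "Block"."Hash", *
      FROM "BlocklistHash", "Block"
      WHERE "BlocklistHash"."Hash" = "Block"."Hash"
        AND "Block"."VolumeID" = ?) A,
     "BlocksetEntry" B, "Block" C
WHERE "B"."BlocksetID" = "A"."BlocksetID"
  AND "B"."Index" >= ("A"."Index" * 3200)
  AND "B"."Index" < (("A"."Index" + 1) * 3200)
  AND "C"."ID" = "B"."BlockID"
ORDER BY "A"."BlocksetID", "B"."Index"
"""

# Placeholder path -- point this at a copy of the job's .sqlite database.
con = sqlite3.connect("copy-of-job-database.sqlite")
# Bind an arbitrary volume id just to get a plan out of SQLite.
for row in con.execute("EXPLAIN QUERY PLAN " + QUERY, (1,)):
    print(row)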

How big is the database?

I think that’s these 2 files:

-rw-r--r--+ 1 xastor staff 57K Oct 30 17:42 Duplicati-server.sqlite
-rw-r--r--+ 1 xastor staff 1.9G Oct 28 17:47 OGMKLFAZBJ.sqlite
