Command line slower than GUI

I have a backup config that I run daily from a .bat file on a file set that’s currently about 1 TB in size. It used to take about 5 to 10 minutes to complete each day (when there weren’t too many large changes), which was perfectly adequate. I’m not sure which exact version of Duplicati I had installed (pretty sure it was a canary build from last year), but I installed one of the newer builds last month, and since then the same command takes at least 30 minutes to complete, which is much less desirable. I tried running the same config from the GUI (where I copied the command from in the first place), and it only takes about 10 minutes, which I found a bit strange.

I had a look at the profiling output from each, and the command-line version spends much longer on a step where it counts the number of distinct paths in each file set. In the GUI this takes about 2–3 seconds per file set, but from the command line it takes at least 7 seconds each.

I’ve tried rolling back to various previous versions and rebuilding the database a couple of times as well, but I can’t seem to get the command line to work as efficiently as before.

I’ll also note that even when I completely uninstall Duplicati to install an older version, the version shown on the GUI’s ‘About’ page always shows the newest version. Is this stored in the appdata?

Please see About --> System info for BaseVersionName and ServerVersionName. I suspect an uninstall does not remove all of the updates (this might also be OS-dependent), and if so a leftover update could still be found. At startup, the version of Duplicati you installed searches for updates and may run a later one it finds.

Downgrading / reverting to a lower version would be worth reading if you’re trying to go back to an old version.

I don’t have much to say about the performance difference between the command line and the GUI; it does seem odd. Both could probably be improved by choosing larger sizes such as --blocksize to suit a larger backup. Keeping track of lots of little blocks inside lots of somewhat larger remote volumes can make the database slow.
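For a sense of scale, here is a rough sketch of why a larger --blocksize shrinks the block-tracking tables for a ~1 TB source set. This is my own back-of-the-envelope arithmetic, not Duplicati’s actual accounting; 100 KiB is the documented default --blocksize, and 1 MiB is just an example value:

```python
# Rough illustration of block counts for a ~1 TB source set.
# Each block becomes (at least) one row the local database must track.
def block_count(source_bytes: int, blocksize: int) -> int:
    # Number of fixed-size blocks needed to cover the source data
    # (ceiling division).
    return -(-source_bytes // blocksize)

TB = 1024 ** 4
default_blocks = block_count(TB, 100 * 1024)  # default 100 KiB blocks
larger_blocks = block_count(TB, 1024 * 1024)  # example 1 MiB blocks

print(default_blocks)  # ~10.7 million blocks to track
print(larger_blocks)   # ~1 million blocks
```

Roughly a tenfold drop in rows, which is why the advice above tends to matter more as the backup grows.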

That was correct, the updates were still under ProgramData. I’ve tried doing this now, and noticed that the last version I was using was the 2.0.4.5 beta instead of one of the canary versions I thought I was on. I’ll probably try reverting to that and see if it helps (though I suspect I’ll have to rebuild the database again…). I still wouldn’t mind seeing a fix for the command-line issue whenever the next canary/beta is released, however.

I’ll consider this, but the speed, at least via the GUI, is tolerable enough for now.

Hello @littlerat, welcome to the forum!

Depending on what interim version you were on, I recall there being some performance issues (later fixed) with filters (they might have been regex-specific).

I think there was also a possible issue with the command-line version not correctly finding those updates you cleaned out.

Regardless, it’s definitely odd that performance would differ, as they both run the same “server” code.

I’m pretty sure I’m running the latest canary (2.0.4.18_canary_2019-05-12) after uninstalling, cleaning out ProgramData, and reinstalling now. I’m not completely sure how anything to do with filters is logged in the profiling output, but anything to do with listing and checking files shows up quite clearly.

I’ve copied one of the problem queries here, which may help:
SELECT COUNT(*) FROM (
  SELECT DISTINCT "Path" FROM (
    SELECT "L"."Path", "L"."Lastmodified", "L"."Filelength", "L"."Filehash", "L"."Metahash", "L"."Metalength",
           "L"."BlocklistHash", "L"."FirstBlockHash", "L"."FirstBlockSize", "L"."FirstMetaBlockHash",
           "L"."FirstMetaBlockSize", "M"."Hash" AS "MetaBlocklistHash"
    FROM (
      SELECT "J"."Path", "J"."Lastmodified", "J"."Filelength", "J"."Filehash", "J"."Metahash", "J"."Metalength",
             "K"."Hash" AS "BlocklistHash", "J"."FirstBlockHash", "J"."FirstBlockSize", "J"."FirstMetaBlockHash",
             "J"."FirstMetaBlockSize", "J"."MetablocksetID"
      FROM (
        SELECT "A"."Path" AS "Path", "D"."Lastmodified" AS "Lastmodified", "B"."Length" AS "Filelength",
               "B"."FullHash" AS "Filehash", "E"."FullHash" AS "Metahash", "E"."Length" AS "Metalength",
               "A"."BlocksetID" AS "BlocksetID", "F"."Hash" AS "FirstBlockHash", "F"."Size" AS "FirstBlockSize",
               "H"."Hash" AS "FirstMetaBlockHash", "H"."Size" AS "FirstMetaBlockSize", "C"."BlocksetID" AS "MetablocksetID"
        FROM "File" A
        LEFT JOIN "Blockset" B ON "A"."BlocksetID" = "B"."ID"
        LEFT JOIN "Metadataset" C ON "A"."MetadataID" = "C"."ID"
        LEFT JOIN "FilesetEntry" D ON "A"."ID" = "D"."FileID"
        LEFT JOIN "Blockset" E ON "E"."ID" = "C"."BlocksetID"
        LEFT JOIN "BlocksetEntry" G ON "B"."ID" = "G"."BlocksetID"
        LEFT JOIN "Block" F ON "G"."BlockID" = "F"."ID"
        LEFT JOIN "BlocksetEntry" I ON "E"."ID" = "I"."BlocksetID"
        LEFT JOIN "Block" H ON "I"."BlockID" = "H"."ID"
        WHERE "A"."BlocksetId" >= 0 AND "D"."FilesetID" = 22
          AND ("I"."Index" = 0 OR "I"."Index" IS NULL)
          AND ("G"."Index" = 0 OR "G"."Index" IS NULL)
      ) J
      LEFT OUTER JOIN "BlocklistHash" K ON "K"."BlocksetID" = "J"."BlocksetID"
      ORDER BY "J"."Path", "K"."Index"
    ) L
    LEFT OUTER JOIN "BlocklistHash" M ON "M"."BlocksetID" = "L"."MetablocksetID"
  )
  UNION
  SELECT DISTINCT "Path" FROM (
    SELECT "G"."BlocksetID", "G"."ID", "G"."Path", "G"."Length", "G"."FullHash", "G"."Lastmodified",
           "G"."FirstMetaBlockHash", "H"."Hash" AS "MetablocklistHash"
    FROM (
      SELECT "B"."BlocksetID", "B"."ID", "B"."Path", "D"."Length", "D"."FullHash", "A"."Lastmodified",
             "F"."Hash" AS "FirstMetaBlockHash", "C"."BlocksetID" AS "MetaBlocksetID"
      FROM "FilesetEntry" A, "File" B, "Metadataset" C, "Blockset" D, "BlocksetEntry" E, "Block" F
      WHERE "A"."FileID" = "B"."ID" AND "B"."MetadataID" = "C"."ID" AND "C"."BlocksetID" = "D"."ID"
        AND "E"."BlocksetID" = "C"."BlocksetID" AND "E"."BlockID" = "F"."ID" AND "E"."Index" = 0
        AND ("B"."BlocksetID" = -100 OR "B"."BlocksetID" = -200) AND "A"."FilesetID" = 22
    ) G
    LEFT OUTER JOIN "BlocklistHash" H ON "H"."BlocksetID" = "G"."MetaBlocksetID"
    ORDER BY "G"."Path", "H"."Index"
  )
)
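Since the GUI and the command line run the same query at such different speeds, one thing worth comparing between the two environments is the query plan SQLite chooses for it. A minimal sketch of the idea using Python’s sqlite3 module; the table and data here are an invented toy stand-in, not Duplicati’s real schema, and in practice you would run EXPLAIN QUERY PLAN on the actual statement against Duplicati’s local database from each environment:

```python
import sqlite3

# Inspect the plan SQLite chooses for a COUNT over a DISTINCT subquery,
# the same overall shape as the slow fileset query above.
con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE "File" ("Path" TEXT, "FilesetID" INTEGER)')
con.executemany('INSERT INTO "File" VALUES (?, ?)',
                [(f"/data/{i}", 22) for i in range(1000)])

plan = con.execute(
    'EXPLAIN QUERY PLAN '
    'SELECT COUNT(*) FROM '
    '(SELECT DISTINCT "Path" FROM "File" WHERE "FilesetID" = 22)'
).fetchall()
for row in plan:
    print(row)
```

If the two environments printed different plans for the real query, that would point at something like a missing index, different ANALYZE statistics, or a different SQLite build rather than the query itself.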

Is it possible the CMD version is being limited in resources (memory) somehow?

I’ve tried the --use-block-cache=true switch as well, which made no difference in either the GUI or CMD.