Is anything being done to address the slowness in browsing backups?

Backing up about 2TB total to an external drive with the Duplicati docker container within unRAID.

I’ve come to terms with the fact that Duplicati is on the slow side. I hope that one day backup speeds will improve, but now that my initial full backup is done and incrementals take 15-20 minutes a night, I’m OK with that.

However, I am getting increasingly worried about how long it takes to browse through backups. After hitting “Restore Files” on a backup set, it spends 5-10 minutes getting file versions and fetching file paths before finally presenting a file structure I can browse.

It doesn’t end there. It then takes at least 3 minutes to “think” every time I drill down into a folder. I can’t imagine what this would be like if I ever needed to restore a single file that’s more than a few folders deep.

I thought maybe this was happening because I was letting my number of versions get too high, so I set my retention policy to keep only one backup a night for a week. That trimmed my versions down from over 30 to just 8, but browsing speed didn’t improve at all.
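(Just to spell out the arithmetic behind those numbers: a week of nightly backups works out to roughly 8 versions. This is a toy sketch of the pruning, not Duplicati’s actual retention-policy logic:)

```python
# Toy arithmetic only -- not Duplicati's actual --retention-policy logic:
# 30 nightly backups, keep just the ones from the last week.
from datetime import datetime, timedelta

backups = [datetime(2018, 1, 1) + timedelta(days=i) for i in range(30)]
newest = backups[-1]

kept = [b for b in backups if newest - b <= timedelta(weeks=1)]
print(f"{len(backups)} versions -> {len(kept)} kept")  # 30 versions -> 8 kept
```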

I’ve even gone as far as using a stopwatch to time how long it takes for a folder to finally open after clicking on it. These times are getting longer and longer with every new backup taken. The scheduled backups themselves don’t seem to be taking any longer. I’ve also done test restores, and the actual restore process doesn’t seem to take any longer either. It’s just the browsing of the files that keeps getting longer and longer.
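For anyone who wants to put numbers on it without a stopwatch, something like this could time the file-listing step from the command line instead of the web UI. This is only a rough sketch; the find command, its arguments, and the destination URL are assumptions on my part, so check them against “duplicati-cli help” and your own setup:

```python
# Rough sketch: time Duplicati's file listing from the CLI as a stand-in
# for stopwatch-timing the web UI. The "find" command, its arguments, and
# the destination URL below are assumptions -- adjust for your backend,
# and note that options such as --passphrase=... or --no-encryption may
# also be required depending on how the backup is configured.
import subprocess
import time

target = "file:///mnt/external/duplicati"  # hypothetical backup destination

start = time.monotonic()
subprocess.run(
    ["duplicati-cli", "find", target, "*"],
    check=True,
    stdout=subprocess.DEVNULL,
)
print(f"Listing files took {time.monotonic() - start:.1f} s")
```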

Right now it’s just annoying and frustrating, but at this rate I’m afraid it will go beyond that, to the point where I have a set of backups that I can’t restore from at all.

Why would I get this behavior? Why does it get even longer with each backup taken? Is this a known issue and is anything being done to address it?

I have no solution, but browsing files to restore has been my biggest concern with Duplicati for a while now. With something like Arq, bringing up a file list is nearly instant. Duplicati gives me a spinning animation and a long wait (this is on a 7700K CPU, 32GB RAM, and a 960 EVO NVMe SSD, so the system shouldn’t be the bottleneck). Regardless of backup size, or where I’ve backed up to and from (local disk to local disk), there is always a long wait to view backed-up files.

I believe this is due to the number and size of file paths more than the number of versions.

Most likely (and this is just a guess) it’s getting longer simply because there are more file paths with each run as new files are added. In theory you should eventually reach a balance point where file additions and historical cleanup deletions even out, at which point performance should level out as well.
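If you want to sanity-check that guess, you could count how many file paths the local job database is tracking per backup version and see whether it climbs with each run. Rough sketch below; the table and column names are my assumption about Duplicati’s SQLite schema, so verify them against your own database first, and work on a copy while no backup is running:

```python
# Sketch: count how many file paths the local job database records per
# backup version. The table/column names (Fileset, FilesetEntry) are an
# assumption about Duplicati's SQLite schema -- confirm with ".tables" in
# sqlite3, and point this at a copy of the database, not the live one.
import sqlite3

db_path = "/config/COPY_OF_JOB_DB.sqlite"  # hypothetical path to a copy of the job database
con = sqlite3.connect(db_path)

rows = con.execute(
    """
    SELECT fs.ID, COUNT(fe.FileID) AS path_count
    FROM Fileset fs
    JOIN FilesetEntry fe ON fe.FilesetID = fs.ID
    GROUP BY fs.ID
    ORDER BY fs.ID
    """
).fetchall()

for fileset_id, path_count in rows:
    print(f"backup version (fileset {fileset_id}): {path_count} paths")

con.close()
```

If the per-version path count keeps climbing along with your browse times, that would support the “more paths” theory.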

Yes, it’s a known issue and yes, things are being done to address it. The problem is that it’s a MAJOR rewrite to implement a more efficient file path handling method, so it takes time to get it all in place and tested.

More recently (as in 2 hours ago) a suggestion was made that might be relatively easy to implement and could improve RESTORE times (but would likely have little to no effect on other performance issues):

I am aware of the problem and hear you clearly.

The most voiced concerns are “rebuild database is slow” and “browse files is slow”. I will prioritize solving these issues, but my time is limited so it might take a while before I get it done.

It appears that someone found a smart way to speed up the “browse files” part quite a bit:

I will try to get an update out ASAP.


Awesome news! Thanks for your work on this.

Since I’m using it in unRAID, I imagine I will see it a little later since I also have to wait for the docker container to be updated, but it’s awesome knowing that a fix is on the way.

This topic was automatically closed after 3 days.