Restore full 15 TB of data

Hi,

I am trying to restore 15 TB of data from Backblaze.

This is taking too long and hanging. I am trying to restore folder by folder, but it is getting very frustrating and confusing.

Is there any way to restore all data of the last version in a “simple” way?

I was able to recover the .sqlite database from the original server.

Thanks for your help

regards,

Diogo

It’s two screens: the last version is the default; say what you want and where you want it.

If you mean instant, no. The 15 TB took a long time to upload, but it was gradual…

Hangs, though, are not supposed to happen. How do you see it’s hung? Live logs?

The blog post Cut restore times by 3.8x - A deep dive into our new restore flow describes the new restore flow now in Canary test releases. My speedup (on old gear) was far less.
The latest Canary also gives SQLite more cache memory, which speeds up some work.

Restoring folder by folder will also download some data repeatedly whenever the same data block is in both restores.
You could consider downloading the backup once to local media, e.g. using rclone.
Repeated file reads will then take less time - and less money, if you’re past free egress.
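For the rclone route, a sketch like the one below builds the copy command. The remote name, bucket, and local path are all placeholders to adjust to your own configured B2 remote; a useful property of `rclone copy` is that it skips files already present locally, so an interrupted download can simply be re-run:

```python
import subprocess

def build_rclone_copy(remote_path: str, local_dir: str, transfers: int = 8):
    """Build an 'rclone copy' command line to mirror the backup locally.
    'copy' skips files that already exist at the destination, so reruns
    after an interruption only fetch what is still missing."""
    return [
        "rclone", "copy",
        remote_path,               # source on the B2 remote
        local_dir,                 # local staging area with enough free space
        "--transfers", str(transfers),  # parallel file transfers
        "--progress",
    ]

# Hypothetical remote/bucket/path names - substitute your own.
cmd = build_rclone_copy("b2remote:my-bucket/my-backup-folder",
                        "/mnt/restore-staging")
# subprocess.run(cmd, check=True)  # uncomment to actually start the copy
print(" ".join(cmd))
```

Once the volumes are local, restores (and any repeated reads) hit the disk instead of paid B2 egress.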

Don’t use the lack of new filenames in a restored folder as a sure sign of a hang. Files are restored gradually in the old flow, so you see partial files. A verbose live log is a better check.

If there’s actually a hang, it may also offer some clues as to what the hang might be.

I think the new flow may look more satisfying if you’re willing to try a Canary test build.
If I understand it correctly, files will be restored to completion one at a time; however, there is potentially more resource use than in the old way. The article discusses this.

Make sure it’s absolutely up to date; a database not matching the destination is no good.
If you think this DB is fine, you can save your own copy, but Canary will save one too.
There is an internal version upgrade, so the old one is always saved as a precaution.

That might someday allow more parallelism on things like restores or database recreate.
Until then, if you have enough disk space, you can let rclone do the downloads for you.

What OS is this? For any performance issues, using OS performance tools is good.

I am using the latest stable version, 2.1.0.5, on Windows.

I am downloading all the files to another server in another location, but it will take 7 days to download. Is there any way to download only the files of the last version? I have many years of versions…

When it hangs, the progress bar stays partly green, but the number of files/GB doesn’t change, and in the verbose log I see a lot of:

"7 de Mai de 2025 22:22: Failed to patch with remote file: “duplicati-bae5d294861ec4917951821042cf44143.dblock.zip.aes”, message: The operation was canceled. "

I am using 4 computers to download data separately from Backblaze over the same internet connection. Can this cause problems because the external IP is the same?

I can try installing the Canary version on one of the computers, as long as it doesn’t damage the backup files.

Thanks for your help and opinions.

Diogo

That’s what restoring the last version has to work out and do. It’s not available separately, because versions are not standalone. Typically, many files from the initial backup exist for a long time, kept as individual blocks. The next backup uploads only changes, back-referencing the old data.

Deduplication saves a lot of space compared to a full file copy, but files of the last version have blocks from various earlier versions. One can find its files, but it takes a lot of SQL…

If you want to install DB Browser for SQLite or similar, we can try, but then you’d get a big list of files that you’d have to pass into your download tool. Some scripting might help that.
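As a sketch of the kind of SQL involved: the table and column names below are guesses at a simplified, Duplicati-like job-database layout, demonstrated against a throwaway in-memory database rather than a real one. Check the actual schema in DB Browser for SQLite and adapt the joins before trusting any output:

```python
import sqlite3

# Assumed, simplified schema: a Fileset (backup version) lists Files, each
# File points at a Blockset, whose Blocks live inside RemoteVolume files.
# Real Duplicati table/column names may differ between versions.
SCHEMA = """
CREATE TABLE Fileset (ID INTEGER, Timestamp INTEGER);
CREATE TABLE FilesetEntry (FilesetID INTEGER, FileID INTEGER);
CREATE TABLE File (ID INTEGER, Path TEXT, BlocksetID INTEGER);
CREATE TABLE BlocksetEntry (BlocksetID INTEGER, BlockID INTEGER);
CREATE TABLE Block (ID INTEGER, VolumeID INTEGER);
CREATE TABLE RemoteVolume (ID INTEGER, Name TEXT);
"""

# Every distinct remote volume referenced by the newest fileset.
QUERY = """
SELECT DISTINCT rv.Name
FROM FilesetEntry fe
JOIN File f           ON f.ID = fe.FileID
JOIN BlocksetEntry be ON be.BlocksetID = f.BlocksetID
JOIN Block b          ON b.ID = be.BlockID
JOIN RemoteVolume rv  ON rv.ID = b.VolumeID
WHERE fe.FilesetID = (SELECT ID FROM Fileset ORDER BY Timestamp DESC LIMIT 1)
ORDER BY rv.Name;
"""

def volumes_for_latest_version(conn):
    """Return remote volume names needed to restore the newest version."""
    return [row[0] for row in conn.execute(QUERY)]

# Demo on mock data: one file in the newest fileset, spread over two volumes.
conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
conn.executescript("""
INSERT INTO Fileset VALUES (1, 100), (2, 200);  -- fileset 2 is the newest
INSERT INTO RemoteVolume VALUES (1, 'duplicati-b1.dblock.zip.aes'),
                               (2, 'duplicati-b2.dblock.zip.aes');
INSERT INTO Block VALUES (10, 1), (11, 2);
INSERT INTO BlocksetEntry VALUES (5, 10), (5, 11);
INSERT INTO File VALUES (7, '/data/file.bin', 5);
INSERT INTO FilesetEntry VALUES (2, 7);
""")
print(volumes_for_latest_version(conn))
```

The resulting name list is what you would then feed to your download tool, which is where the scripting comes in.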

Given that similar versions use little extra space, you can compare the source size and the destination size. If the destination shows only slow growth, it might not be worth fighting.

This probably needs developer interpretation, but are you sure you got the message right?
Google has never seen that combination. Was it perhaps “A task was canceled”?
There are a few of those reported, possibly resulting from problems downloading.
Surrounding log context would help. Did the download fail just before that failure?

  --number-of-retries (Integer): Number of times to retry a failed transmission
    If an upload or download fails, Duplicati will retry a number of times
    before failing. Use this to handle unstable network connections better.
    * default value: 5

can sometimes ride through temporary failures. Verbose log will show any retries done.

I wouldn’t think so. The port number would be unique, which allows connection tracking.

You might, however, be loading the connection heavily, so each computer gets less bandwidth.
If you’re getting into timeout trouble, some timeouts can be increased, but check the logs first.

Restore won’t change destination. Don’t run backup, repair, or other things that do.
Note that rclone to local drive would be an even better way to prevent any damage.

If you try Canary, I’d advise using the old user interface, for its familiarity and maturity.
If it comes up in a new UI, you can click “Revert to NGAX client” (need better name?).

Which one? The job database is random-letters.sqlite. The server database is Duplicati-server.sqlite and contains the job configuration, so it saves you from having to re-enter the configuration.

Which reminds me: to avoid damaging backup files, avoid Duplicati configurations that schedule backups, as you might be behind schedule and would get a backup on the restart.

Restoring files shows options including “Direct restore from backup files”, which requires more typing on your part compared to rescuing a Duplicati-server.sqlite or having a job export.

I’m not clear on the 4 computers. Did you lose 4, yet somehow still had .sqlite database?

The error message The operation was canceled typically happens when some kind of cancel has been activated. I assume you have not actively tried to stop the process, so I am thinking this could be a place where a timeout exception is not handled gracefully and returns this message instead of the more helpful Operation has timed out.

If my guess is correct, try setting --read-write-timeout=0s to disable the activity timeout.

I am not sure why this would be needed, but there was a report that it worked for Jottacloud, so maybe B2 will also work better?
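To experiment with that outside the GUI, a restore invocation might be assembled as in the sketch below. The storage URL and paths are placeholders; the binary is duplicati-cli on Linux and Duplicati.CommandLine.exe on Windows, and the same options can go into the GUI’s advanced-options box:

```python
import subprocess

def build_restore_cmd(storage_url: str, restore_path: str):
    """Sketch of a Duplicati CLI restore of all files from the last version,
    with the activity timeout disabled as suggested above. URL and paths are
    placeholders; binary name varies by platform."""
    return [
        "duplicati-cli", "restore",
        storage_url,
        "*",                               # restore everything
        f"--restore-path={restore_path}",
        "--read-write-timeout=0s",         # disable the activity timeout
        "--number-of-retries=5",           # default; raise if the link is flaky
    ]

cmd = build_restore_cmd("b2://my-bucket/my-backup-folder",
                        "/mnt/restore-staging")
# subprocess.run(cmd, check=True)  # uncomment to run the actual restore
print(" ".join(cmd))
```

If disabling the timeout makes the “operation was canceled” messages stop, that would support the timeout-exception theory.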