Full restore of incremental backup

How do I restore all files from an incremental backup?
I tried --all-versions=true and * for the files, but it doesn't work; it only restores the files partially.

Welcome to the forum @Marco_Del_Pin

The documentation for --all-versions=true seems to tell me it’s only relevant to a “find”.
Did it change anything in “restore” for you? I always use the GUI because it’s simpler…

https://duplicati.readthedocs.io/en/latest/06-advanced-options/#all-versions

Could you please clarify the restore issue? Do you mean that one or more files were the wrong size?

Does it mean that you got fewer files than expected, but that they were OK? Or is it something else?

Note that on many systems (e.g. Mac, Linux, probably most of them), a * gets expanded by the shell.

That may or may not be what you want. It might be right for the current directory, but it won't get into lower directories.
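To illustrate (a minimal sketch on a Linux/macOS shell; the file names are made up):

```
$ ls
a.txt  b.txt  sub
$ echo *          # the shell expands the glob before the program ever sees it
a.txt b.txt sub
$ echo "*"        # quoting passes a literal * through to the program
*
```

So an unquoted * only matches names in the current directory; it's the receiving program (given a literal *) that can treat it as "everything, recursively".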


Thanks for your availability,
I'll try to be more precise.

I did this:
t:\a\file00.dat
t:\a\file01.dat
etc., and made a backup.
Then I deleted those files and put in others, like:
t:\b\file02.dat
t:\b\file03.dat
etc.
Made a backup, deleted them, and added others:
t:\c\file04.dat
t:\c\file05.dat
etc.

Now I would like to restore all the files without specifying the exact backup version number or the file names, but so far I have only managed to recover the last backup, or I have to specify exact file names.

Hmm, so you want to “combine” all restore points and restore that?

I’m not sure there’s an option for that.

If it were to be implemented, how would it work if you deleted a file after backing it up and then later backed up a new file with the same name?
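That said, if the goal is simply to gather back everything that was ever backed up, one workaround (an untested sketch using the documented restore options; <storage-url> and the version count are placeholders) is to restore each version in turn into the same folder:

```
REM At a Windows cmd prompt: restore versions 0..2 (0 = newest)
REM into one combined folder. How same-name collisions are handled
REM depends on the --overwrite option; encryption options are omitted.
FOR /L %V IN (0,1,2) DO Duplicati.CommandLine.exe restore <storage-url> "*" --version=%V --restore-path=T:\combined
```

The version numbers to loop over can be found with the list command.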

I might be wrong about the intent, but this reminds me of How can Duplicati be used to Move folders to Storage Location? where the wish was to be able to periodically move big old files off of a small drive, to free up space. One of my thoughts at the time was how one was going to keep track of what one had uploaded, to get it back.

One approach leaves names behind as special links in the folders; opening them runs an action instead of being the original file. Seeing backslashes makes me think Windows, so I wonder whether Files On-Demand might help. In addition to its own bugs, I'd caution that this feature has caused Duplicati some pains from the odd things it does.


Skip older files, for example.
So far it's necessary to use the disaster recovery procedure to achieve that, but it's extremely slow and space-consuming. :frowning:

I’m guessing this is referring to restoring all files that can be recovered from any backup version, which is pretty coarse control… Some backup programs offer an option to show deleted files in the UI and then select them; however, I think it's unsafe to rely on deleted files hanging around forever. There's too much room for accident.

Based on the topic title I think there might be an initial misunderstanding here.

With Duplicati there isn’t such a thing as an “incremental” backup in the classic sense (meaning a file created yesterday goes only in yesterday’s backup while a file created today goes only in today’s backup). With such a classic “incremental” process one would have to restore yesterday’s backup to get yesterday’s file AND today’s backup to also get today’s file.

With Duplicati every backup version includes all files that were in that folder at the time of the backup, even if they had already been backed up previously. So you only need to restore one backup version and you’ll get ALL the files that existed in that folder when the backup was created.
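For example (a sketch with a placeholder URL and path), restoring only the most recent version already brings back every file that existed at that time:

```
REM Version 0 is the most recent backup; "*" restores all of its files.
Duplicati.CommandLine.exe restore <storage-url> "*" --version=0 --restore-path=T:\restored
```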

From the example, I suspect that "incremental backup" here refers to repeated create-backup-delete cycles, but the title could mislead. Maybe the title could be "Simplify viewing and restoring of previously deleted files".

The ordinary Duplicati limitation that the Recovery Tool avoids is the inability to get all the deleted files in one bunch; however, please also refer to my comments about possibly wanting more control over the restore than that tool offers, and one example of how a UI handled it. There are likely others, but I think there's no easy viewing in Duplicati.
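For reference, the disaster-recovery flow being referred to looks roughly like this (from memory of the Recovery Tool docs, so treat it as a sketch; the URL and folder are placeholders). It downloads every remote volume first, which is why it's slow and space-consuming:

```
REM Download all remote volumes, build a local index, then restore:
Duplicati.CommandLine.RecoveryTool.exe download <storage-url> D:\recovery
Duplicati.CommandLine.RecoveryTool.exe index D:\recovery
Duplicati.CommandLine.RecoveryTool.exe restore D:\recovery
```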

There’s a philosophical question of whether Duplicati wants to become an archiver of intentionally deleted files.

I’m pretty sure it doesn’t.

However there is a case to be made for “Oh no, my super important file has been deleted but I don’t know when and there’s no easier way to find it in Duplicati than to look at each version until I see it.”
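At least the "find it" part is covered by the option linked earlier in this thread: the find command can search every version in one go (a sketch; the file name is made up):

```
REM Lists every backup version containing a match, so you can see
REM when the file last existed and which --version to restore from.
Duplicati.CommandLine.exe find <storage-url> "super-important.doc" --all-versions=true
```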

"where the wish was to be able to periodically move big old files off of a small drive, to free up space. "

This is exactly my case

Duplicati is meant as a backup tool, not as a repository for old large files you don't want anymore. If you only have the files in one spot (even if that spot is inside Duplicati), then they're not backed up.

Hi Jon,

So the way Duplicati is designed speeds up the "Restore files" process, because there's no "full backup" to restore first before all subsequent incremental/differential backups. And when you said ALL files in the folders are backed up, that from a functional standpoint is effectively a full backup. Am I right? My question is: with this approach, what's the implication for the sizes of the backups? Say the size of my files is 20 GB and I do a daily backup, and only a few of those files are modified and a few new files are created daily. Does this mean that in 1 month I would get a backup which is at least 600 GB in size?

No, if there are just a few changes in your source files, the size of your backup will stay at about 20 GB. Only new data (blocks that haven't already been uploaded) and the changed parts of files are uploaded.
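As a rough worked example (the 1 % daily change rate is purely an assumption for illustration):

```
initial upload:          ~20 GB
daily new/changed data:  ~0.2 GB   (assumed 1 % of 20 GB)
after 30 days:           20 GB + 30 × 0.2 GB ≈ 26 GB
```

Nowhere near 30 × 20 GB = 600 GB, because unchanged blocks are never uploaded again.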

The process is explained here. A more technical article can be found here.

Hi kees-z,

Thanks for the reply. Under the section "Processing a larger file" in the article "How the Backup Process works", do the dblock files contain only the hashes of the blocks (chunks) and the hash of the entire file? It's not obvious to me from the documentation or the manual where the actual binary data of the blocks is stored. Is it inside the dblock files alongside the hashes, if the storage system I use is Amazon S3? I know SHA hashes are one-way operations; if the actual binary data is not also stored inside the dblocks, how is the restore going to recreate the binary data from the hashes alone?

DBLOCK files contain the binary data. The hashes you see in the docs are the filenames of the files inside the DBLOCK volume that contain the actual binary data.
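To make that concrete (a sketch from my reading of the format description, not authoritative): an unencrypted dblock is a zip archive whose entries are named by the Base64-encoded hash of each block, with the raw block bytes as the entry contents. The hash values here are invented:

```
duplicati-b0a1....dblock.zip      (normally also wrapped in AES encryption)
 ├─ manifest                      volume metadata
 ├─ qL0vT3...=                    entry name = block hash, contents = raw block bytes
 └─ 8fQm2k...=
```

On restore, Duplicati looks up which block hashes make up a file (from the dlist/dindex data), fetches those entries, and concatenates the bytes; the hashes are only used to identify and verify blocks, never to regenerate data.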