Daily backups without all the backups files stored

This can probably be treated like do-it-yourself cold storage, where in the event of a disaster you put the cloud back together with a huge upload from the NAS (assuming the NAS survived, if the disaster was local). Meanwhile, you have to keep Duplicati from wanting access to old data even to sanity-check things. --no-backend-verification can do that, but it raises risk. --no-auto-compact should also be used.
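As a rough sketch of what such a job might look like from the command line (using the Linux duplicati-cli name; the storage URL, source path, and passphrase are placeholders, not recommendations):

```
# Hypothetical cloud job set up so Duplicati never reads old data back:
# skip the remote file-list check, never compact, and skip post-backup sample tests.
duplicati-cli backup "s3://example-bucket/backup" "/home/user/data" \
  --passphrase="example-passphrase" \
  --no-backend-verification=true \
  --no-auto-compact=true \
  --backup-test-samples=0
```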

Basically, it’s a blind upload of your file changes at the block level, so you probably can’t even restore current files if some of their blocks are only on the NAS. You can certainly try a proof-of-concept, but this may sacrifice reliability. How would you test the backup if you can’t restore sample source files or even verify a sample of the backend?
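To make that concrete: the TEST command downloads a few sample dlist, dindex, and dblock files and checks them, so it would presumably start reporting errors as soon as the sampled dblock volumes exist only on the NAS (placeholder URL and passphrase again):

```
# Download and verify 5 sample files from the destination.
# If the sampled dblock volumes were moved off to the NAS, expect errors here.
duplicati-cli test "s3://example-bucket/backup" 5 \
  --passphrase="example-passphrase"
```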

Here are some ideas for doing a local backup as a quick-restore base, plus a more current backup to the cloud:
2 Backups available, which to choose?
Restoring a backup with deletion of files not contained in backup
If you keep fewer cloud versions than local versions, the cloud backup stays smaller, but it would normally still back up all the files.
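A sketch of that two-job idea (paths and retention numbers are made up): keep a long history on the NAS and only a short one in the cloud.

```
# Job 1: local NAS backup as the quick-restore base, with a longer history.
duplicati-cli backup "file:///mnt/nas/duplicati" "/home/user/data" \
  --passphrase="example-passphrase" \
  --keep-versions=30

# Job 2: separate cloud backup of the same source, with fewer versions kept.
duplicati-cli backup "s3://example-bucket/backup" "/home/user/data" \
  --passphrase="example-passphrase" \
  --keep-versions=5
```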

Is it possible to set a date limit? is a possible way to back up only files more recent than the local baseline. There are probably other posts about this around, for example from people who want to use an offline image as a base.

Further posts on the cold storage idea are also around. Some may refer to a brand name such as Glacier.

I don’t understand all the talk about hashes and hash lists. There are many hashes. Which do you mean?

Remote files are already not deleted when you delete a version; the version is just marked as deleted in the database. Eventually (if needed and allowed), a compact is done, which reclaims some storage at the destination.
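As a hedged example (placeholder URL): a delete only updates the bookkeeping and leaves remote volumes alone if compacting is off, and the compact is a separate step you can run whenever you choose.

```
# Mark version 10 as deleted without compacting; remote files stay where they are.
duplicati-cli delete "s3://example-bucket/backup" \
  --version=10 \
  --no-auto-compact=true

# Later, if and when you want to reclaim destination space:
duplicati-cli compact "s3://example-bucket/backup" \
  --passphrase="example-passphrase"
```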

Similarly, I don’t believe (but you could test if you like) that Duplicati needs the files to be present at the destination in order to upload changes. It relies on its local database, which contains info about the destination (and can be rebuilt from it). Recreates can be slow (depending on size and other factors), and your method requires a NAS upload too.
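For reference, the repair command is what recreates the database from the destination files, which is exactly when those files need to be readable (placeholder URL and database path):

```
# Recreate the local database from the dlist/dindex (and, if necessary, dblock)
# files at the destination. This only works if those files are actually retrievable.
duplicati-cli repair "s3://example-bucket/backup" \
  --passphrase="example-passphrase" \
  --dbpath="/home/user/.config/Duplicati/EXAMPLE.sqlite"
```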

Looking at RemoteListAnalysis, it combines verifying what’s there with updating its remote volume states, and you certainly want dblock states tracked properly. VerifyRemoteList runs this, and it is run after the backup, and maybe before (to make sure files look as expected) unless --no-backend-verification was set.
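If you want to watch this yourself, raising the console log level on a backup run should show the remote listing and verification activity, and you can compare a run with and without the option (placeholder URL; exact log wording varies by version):

```
# Normal run: the remote file list is checked against the database before backup.
duplicati-cli backup "s3://example-bucket/backup" "/home/user/data" \
  --passphrase="example-passphrase" \
  --console-log-level=Verbose

# Same run with the pre-backup check skipped.
duplicati-cli backup "s3://example-bucket/backup" "/home/user/data" \
  --passphrase="example-passphrase" \
  --console-log-level=Verbose \
  --no-backend-verification=true
```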

My conclusion is that there doesn’t seem to be an option to set this up, and I’m not sure the developers would even want one. It seems an unusual case: technically, you have a severely broken backup at that moment. Manually messing with the destination files is considered a bad idea, and I know cold storage does exactly that. :face_with_raised_eyebrow:

From a high-level view, you have a NAS with files on it that can’t be restored directly, and you have a cloud that’s possibly the same way for any file that has some or all of its blocks on the NAS. The NAS is trying to help by reducing the amount of cloud storage, but how is the cloud helping in this scenario? If you don’t care about possible local disasters, just back up directly to the NAS and worry less about somebody breaching the cloud. Note, though, that encryption is done locally at the client, so while the cloud copy can be destroyed, reading its data is tougher.
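For completeness, a direct-to-NAS job with client-side encryption is just a sketch like this (path and passphrase are placeholders); the volumes are encrypted before they are written, so the NAS share only ever sees encrypted files:

```
# Back up straight to the NAS; data is AES-encrypted locally before it is written.
duplicati-cli backup "file:///mnt/nas/duplicati" "/home/user/data" \
  --passphrase="example-passphrase" \
  --encryption-module=aes
```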