Create New Folder with Each Backup

Hello, is it possible for Duplicati to A.) create a new folder each time a backup runs, and B.) name that folder with the date/time? For example: daily backup to a cloud service; when Duplicati runs, create a sub-folder in the Destination Directory named “20190225” (or similar), then have all the backup files contained in that folder.

I’ve been Googling around for the better part of the morning and can’t seem to find a straight answer confirming or denying whether this is possible. Thanks!

Short answer: no.

Long answer: Each upload volume (DBLOCK file) contains a number of fragments of your source files. The first time a backup is run, a full backup is made; those backup files could be stored in a subfolder. The second backup could store its incremental data in another subfolder, resulting in 2 folders containing DBLOCK/DINDEX/DLIST files at the remote location.

After each backup, the retention policy is applied and compacting of the remote files takes place. During this operation, small DBLOCK files and DBLOCK files holding mostly no-longer-referenced data are downloaded, combined into a single DBLOCK file, and re-uploaded to the remote location.
This results in DBLOCK files containing data from multiple incremental backups. After a number of backups, there could be very few files that can be fully associated with a particular backup.
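To picture why, here is a toy sketch (nothing like Duplicati’s actual code) of the compacting idea: once small leftover volumes from different runs get merged, the combined volume belongs to several backups at once.

```python
# Toy illustration (not Duplicati's real code) of why compacting mixes
# backups together: small or partly-wasted volumes are merged, so the
# resulting volume holds blocks that came from different backup runs.

SMALL = 2  # volumes with fewer live blocks than this get compacted

def compact(volumes):
    """volumes: list of dicts like {"backup": run_id, "blocks": [...]}."""
    keep, merge = [], []
    for vol in volumes:
        (merge if len(vol["blocks"]) < SMALL else keep).append(vol)
    if merge:
        combined = {
            "backup": sorted({v["backup"] for v in merge}),  # mixed origins
            "blocks": [b for v in merge for b in v["blocks"]],
        }
        keep.append(combined)
    return keep

volumes = [
    {"backup": 1, "blocks": ["a1"]},        # leftover from backup 1
    {"backup": 2, "blocks": ["b1"]},        # leftover from backup 2
    {"backup": 3, "blocks": ["c1", "c2"]},  # healthy volume, left alone
]
print(compact(volumes))
# The merged volume now belongs to backups [1, 2] at once.
```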


“Is it possible” is not quite the same as “is it reasonable”. The first might be a “yes”, but the second is probably a “no”. To elaborate on what @kees-z said, Duplicati uploads changes from the previous backup. The initial upload uploads everything. If I change 1% of the data, the next backup uploads perhaps 1%. You could perhaps make Duplicati upload everything each time, but would you want to? I’ve heard of this only for specialized needs.

How the backup process works gets into the technical details, which preclude storing files as direct copies like the request in Create incremental backups without .zip or any compression. Maybe ideas there fit your request.


Thank you @kees-z and @ts678 for the explanations. I read through both of the provided links, and understand now that this question/request isn’t really a good fit for the way Duplicati works. Also, the 2nd link isn’t really the solution I’m looking for.

What I’m trying to do is send encrypted backups to a cloud storage service for “cold” storage, mostly SQL backups and transaction logs. I’m comfortable trusting the Duplicati agent to backup/restore properly when the time comes; that’s not my concern.

I want this folder structure for 2 reasons:
1. To double-verify that the backup ran and completed (I can go into the cloud storage and visually see a new folder made with files in it).
2. To fetch an online backup more quickly: I can begin downloading a specific day’s backup folder while preparing a replacement machine/drive in the event of a disaster, then restore locally from the downloaded files instead of restoring straight from the Internet once the new hardware is up AND online.

I guess to resolve point #2 I could install Duplicati on a spare workstation, or my own machine, and begin the restore process ahead of time. For point #1, I’m assuming I could just take a look at the dlist file and manually check for backup dates? Something like the sketch below is what I have in mind.
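Here I’m guessing at the usual dlist naming pattern (duplicati-YYYYMMDDTHHMMSSZ.dlist.zip, plus .aes when encrypted), so take this as a rough sketch:

```python
# Sketch: extract backup timestamps from Duplicati dlist file names.
# Assumes the usual naming pattern duplicati-YYYYMMDDTHHMMSSZ.dlist.zip
# (with a trailing .aes for encrypted backups).
import re
from datetime import datetime, timezone

DLIST_RE = re.compile(r"duplicati-(\d{8}T\d{6}Z)\.dlist\.zip(\.aes)?$")

def backup_dates(filenames):
    for name in filenames:
        m = DLIST_RE.search(name)
        if m:
            yield datetime.strptime(m.group(1), "%Y%m%dT%H%M%SZ").replace(
                tzinfo=timezone.utc
            )

listing = [
    "duplicati-20190225T020000Z.dlist.zip.aes",  # made-up example names
    "duplicati-bdeadbeef.dblock.zip.aes",
    "duplicati-20190226T020000Z.dlist.zip.aes",
]
for ts in sorted(backup_dates(listing), reverse=True):
    print(ts.isoformat())  # newest backup date first
```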

Thanks for the context. My impression is that SQL backups and transaction logs tend to defeat Duplicati’s deduplication efforts, due to the scattered changes. That’s certainly been the experience for people who back up Duplicati’s own SQLite database in a second job (hoping to avoid Recreate, which can be slow).

There are other hazards too. Sometimes people use the database’s own facilities to write backup-ready files in an application-consistent state (rather than relying on --snapshot-policy; VSS can sometimes get this with help from the database). I’m not familiar with database dump formats, but Duplicati has fixed deduplication block sizes and boundaries, so it is probably going to lose deduplication advantages if offsets change (e.g. in a sequential dump). Even if the changed-blocks-only scheme worked, it interferes with fast restores (your goal) because blocks need to be collected from various sources, and that slows a restore.
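To make the offset point concrete, here is a minimal sketch of fixed-boundary blocking (my own illustration, not Duplicati’s code): insert one byte near the front and every boundary after it shifts, so the block hashes no longer match the previous upload even though the data is nearly identical.

```python
# Fixed-boundary blocking: one inserted byte shifts every later block,
# so none of the block hashes match the earlier upload.
import hashlib

def block_hashes(data: bytes, blocksize: int = 8):
    return [hashlib.sha256(data[i:i + blocksize]).hexdigest()[:8]
            for i in range(0, len(data), blocksize)]

old = b"the quick brown fox jumps over the lazy dog"
new = b"Xthe quick brown fox jumps over the lazy dog"  # 1 byte inserted

old_blocks, new_blocks = block_hashes(old), block_hashes(new)
reused = sum(h in old_blocks for h in new_blocks)
print(f"{reused} of {len(new_blocks)} blocks reused")  # prints: 0 of 6
```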

Choosing sizes in Duplicati may be relevant. You might want a big --blocksize if dedup does little anyway. Some people use 1 MB (up from the 100 KB default) to reduce the overhead of slicing a backup too finely.
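A quick back-of-envelope on what “too finely” means, using a 500 GB source as a made-up figure:

```python
# Back-of-envelope block counts for a hypothetical 500 GiB source,
# just to show why fine slicing adds tracking overhead.
source = 500 * 1024**3                   # 500 GiB, an assumed figure
for blocksize in (100 * 1024, 1024**2):  # 100 KB default vs 1 MB
    blocks = source // blocksize
    print(f"blocksize {blocksize // 1024:>5} KB -> ~{blocks:,} blocks")
# ~5.2 million blocks at 100 KB vs ~512 thousand at 1 MB; every block
# is something Duplicati has to hash, record, and look up.
```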

So yours might actually be (in some ways) a reasonable case, although I’m not sure a direct copy using rclone wouldn’t get you there more directly, faster, and with less risk of Duplicati messing up all the block operations that might not help in your case. Duplicati is also not very happy with cold storage, because by default it does remote operations all the time. It can be somewhat pacified by reducing verifications, but that has dangers too. You could try searching the forum for earlier topics about adapting to cold storage; Converting already-started S3 backup to Glacier and Misc Glacier Questions touch on some of the issues.
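If the plain-copy route appeals, here is a minimal sketch of the rclone idea; the remote name and paths are placeholders for your own setup:

```python
# Sketch of the plain-copy alternative: rclone into a dated subfolder.
# "coldremote:" and both paths are placeholders, not real settings.
import subprocess
from datetime import datetime, timezone

stamp = datetime.now(timezone.utc).strftime("%Y%m%d")
dest = f"coldremote:sql-backups/{stamp}"

# rclone creates the destination folder as needed: one folder per day,
# holding verbatim copies of the dump files, easy to eyeball and fetch.
subprocess.run(["rclone", "copy", "/var/backups/sql", dest], check=True)
```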

There are lots of ways to get some indication the backup ran, including reporting options and third-party processors of the raw reports, such as duplicati-monitoring, or dupReport, which I think supports Apprise (extending notifications even further). You could also use --upload-verification-file to make a file with a fixed name, then make sure the file time changes, or you could use Duplicati.CommandLine.BackendTool.exe to grab a full file listing and pull the backup date out of the dlist file name, to make sure the right one is seen. Or instead of a listing, use it to download duplicati-verification.json to get dlist file information. You can run restore by date, in theory, but in my experience it’s a bit buggy compared to --version, where 0 is newest. The FIND command can show the available versions of the backup, and can list the files it finds in them.
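For the duplicati-verification.json route, something along these lines might do once you’ve downloaded the file; I’m assuming each entry carries at least a Name field, so check your version’s actual layout:

```python
# Sketch: confirm the newest backup from duplicati-verification.json.
# Assumes the file is a JSON list of remote-volume entries that each
# carry at least a "Name" field; adjust if your version's layout differs.
import json
import re

with open("duplicati-verification.json") as f:
    entries = json.load(f)

dlists = sorted(
    e["Name"] for e in entries
    if re.search(r"\.dlist\.zip(\.aes)?$", e["Name"])
)
# dlist names embed the backup timestamp, so the last one sorts newest.
print("Newest backup:", dlists[-1] if dlists else "none found")
```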

If you still want a new folder each time (shorter restore time, maybe more reliability, likely more storage), then the command line backup command is probably scriptable, whereas changing the GUI config is not. There’s a third-party client that tries to get both, though, being a CLI client that uses the raw GUI interface.
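For the scripted route, a rough sketch of what I mean; the binary name, destination URL, and paths are all placeholders, and note that a fresh folder per run makes each run effectively a full backup, which also wants its own local database:

```python
# Sketch of scripting a new destination folder per run via the CLI.
# Destination URL and paths are placeholders; each run uploads a full
# backup, and gets its own --dbpath so runs don't see each other's files.
import subprocess
from datetime import datetime, timezone

stamp = datetime.now(timezone.utc).strftime("%Y%m%d")
cmd = [
    "Duplicati.CommandLine.exe", "backup",
    f"b2://my-bucket/sql-backups/{stamp}",  # placeholder destination
    r"C:\SQLBackups",                       # placeholder source
    "--blocksize=1MB",                      # see the sizing note above
    "--dbpath=" + rf"C:\DuplicatiDb\{stamp}.sqlite",  # one db per run
]
subprocess.run(cmd, check=True)
```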