Change of Duplicati server (+ OS) without losing the remote data

I have been using Duplicati on a Windows VM for quite some time. Since I now use the VM exclusively for Duplicati, I wanted to switch to Duplicati with Docker on a Linux platform (unraid). Of course I want to keep the existing backups, so I exported the config files and imported them into the Duplicati container. When starting the existing backups I now get an error with the following description:

“The backup contains files that belong to another operating system. Proceeding with a backup would cause the database to contain paths from two different operation systems, which is not supported. To proceed without losing remote data, delete all filesets and make sure the --no-auto-compact option is set, then run the backup again to re-use the existing data on the remote store.”

I have read in other posts (some many years old) that Duplicati does not handle a change of operating system well.
Has a solution for this problem appeared in the meantime?

I have now carried out another test. If I run Duplicati on the new server not in a Docker container but again in a (completely new) Windows VM, the backup works fine and simply keeps running with the old settings and the old remote data.

So the change of OS is evidently the main reason.

Welcome to the forum @public1

Changing source OS would be worth reading then. It’s from last month and adds a new technique.
What did you think of the other solutions you saw while reading those older posts? Are any workable?


I’m not clear what Duplicati has and now wishes to back up, as opposed to restoring, which is easy.
When one starts dealing with VMs, containers, and hosts, it’s confusing what the exact situation is.


You also quoted one solution, which you might recognize from some of your readings. Need more?
There are several more technically difficult methods which involve edits of dlist files or databases.
The folder arrangement of new backup versus old backup could also influence which way is best…


Continuing with the thought of “how different is the new layout?”, you’ll need to tell Duplicati where backup Source files are now. If you didn’t change anything, it would still expect to use drive letters.
This is a somewhat separate topic from what to do with the Destination side, but it’s necessary too.

hello @public1

Duplicati is a backup tool. It is not a migration tool. The best way to migrate is to use OS tools, that is, restore under the same system that was used to back up, and then migrate the files to another computer with a different operating system.

The prompt “To proceed without losing remote data, delete all filesets and make sure the --no-auto-compact option is set, then run the backup again to re-use the existing data on the remote store.” does not say that it will allow you to keep your backups, but that it could allow you to keep your data.

The important difference is that the backup history allows you to know that a file was deleted or created at some specific date, and to restore a given version. Keeping your data is much more limited: the backup will save the files as they are on the date when you run the backup, and Duplicati’s deduplication will allow it to go faster, that is, see that the data already exists on the backend and not send it again.

But for a file that is currently deleted on your system, existed in the past, and still has data allocated on the backend, this procedure gives you no way to restore it. What you will get is ONE (1) fileset after this initial backup. The words ‘without losing data’ are a bit excessive IMO; it depends on what one calls ‘data’.
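The deduplication behaviour described above can be sketched in a few lines. This is only an illustration of the idea, not Duplicati’s actual code; it assumes SHA-256 block hashes and Duplicati’s default 100 KiB block size, with a plain `remote_hashes` set standing in for the backend’s known blocks:

```python
import hashlib

BLOCK_SIZE = 100 * 1024  # Duplicati's default block size is 100 KiB


def blocks_to_upload(data, remote_hashes):
    """Split data into fixed-size blocks and return only the blocks
    whose hash is not already known on the backend."""
    new_blocks = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in remote_hashes:
            new_blocks.append((digest, block))
            remote_hashes.add(digest)  # backend now "has" this block
    return new_blocks
```

A second run over unchanged data finds every hash already present and uploads nothing, which is why the suggested re-seeding backup is fast even though it produces only one fileset.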

To keep the backups (the history) would be a much more involved process - and is indeed not supported.

We’re still guessing the current situation and the goal. Maybe source is on host, so no migrate.

One clue is in the topic title “without losing the remote data” which of course doesn’t lose itself.
If desired for its old versions, it can be kept, and restored, except you’ll need to tell it the folder.

If instead the goal is to avoid a big, possibly slow, upload, the quoted solution will work; however,
deleting all old versions might not be desired. One also needs to set a special option to allow it.

Depending on remote storage cost and copy speed, having both old and new versions may do.

The tricky route, which has been tried but is somewhat experimental, are the dlist or DB edits…
If you experiment, work with someone and keep copies of files you change, to allow going back.

Hi all,

sorry for the late reply.
Let me present the situation again. I have Duplicati on a Windows VM and the backup destination is MS OneDrive. This all worked, but now I am changing the host on which the VM is running and at the same time I want to switch from Duplicati on Windows to Duplicati in a Linux Docker container. Since many TB of data are involved (and already uploaded), my plan was to keep using the old backups and just back up from the Docker container instead of the Windows VM. The target files in the remote folder should not be moved or touched. Ideally, the version history should remain intact.

Source situation is still unclear. Please clarify what OS it’s actually on, and how Duplicati gets files.
There’s talk of changing the host as well as Duplicati’s run environment. Please lay everything out.


Please fill the big hole in the middle, in the hope of finding nicer options than those already given.

                        old             new
Duplicati host OS                       Linux
Duplicati guest OS      Windows         Linux
Duplicati Source OS
Duplicati Source access
Duplicati Destination   OneDrive        OneDrive


OK - I didn’t think anything was still unclear, but let’s try:

                        old               new
Duplicati host OS       Synology DS       unraid
Duplicati guest OS      Windows VM        unraid docker
Duplicati Source OS     Synology DS       unraid
Duplicati Source access Synology DS       unraid
Duplicati Destination   OneDrive Business OneDrive Business

Thanks. So I guess previously you were backing up Linux from Windows, meaning some amount of attribute translation was being done in addition to path conversion from Linux to Windows. The new scheme might be better able to keep Linux attributes, but the question is – do you care about them?

In terms of file data (ignoring attributes), this is less OS-specific, and I guess you did a manual move?


I didn’t care about the attributes, and I only backed up file data: no Windows/Linux installed software or anything from the system.

The move from Synology to unraid was done manually by myself.

You backed up file attributes unless you went out of your way to add the Advanced option skip-metadata.
Installed software and system software don’t make much difference. The OS always has metadata.

Regardless, it’s good to know you don’t care about things like NTFS ACLs and Linux file permissions.

Is it the same set of files in the same folder tree layout? It’s not essential, but it makes the mapping easier. Because there is no built-in support, this is still a manual job for you, and you also get to see if it works. Unless you actually didn’t back up attributes (metadata) when on Windows, one question is how a Linux restore will deal with Windows metadata. If you somehow still have both environments somewhere, it’s simpler to do a small test. The alternative is to back the database up well and try not to change the destination.

It’s kind of your choice. The two methods for changing the paths are a dlist edit followed by a database recreate, or editing the database directly. There is a dlist per backup version, so lots of versions means lots of editing. If you happen to be good at scripting in a language that handles JSON, you might be able to automate the dlist changing.

If you want to do something like keep the old backup for its versions but start a new one without a huge upload, the method that Duplicati suggests (with one gap) will work, but you’ll need to keep the old backup somewhere.


I think (but am not sure) that the serious metadata restore only comes into play if you check this button:


Without that, you would probably still get the last-modified timestamp back. It’s stored separately.


If you had an explicit ACL set (most of them are just inherited), it would be saved in win-ext:accessrules.