Changing source OS

I’m in a similar position. I want to create a backup using my Windows PC, then export it to a NAS to continue later, but it doesn’t like the source changes.

Still no way to fix this? Or is there an easy way to change the paths in the database?

OK, so I’ve done some tests with renaming the paths inside the filelist.json file (inside the dlist.zip).

My initial backup on the Windows machine was from the source \192.168.1.205\backup\test\, which ends up in the file looking like \\192.168.1.205\backup\test\. When moving that over to the NAS, the path translates to /data/backup/test/, which is quite easy to find and replace.

What I found was that I then had to find and replace any subdirectories, which was easy enough by searching for “\” and replacing it with “/”.

This all seems to work great until you want to go the other way…

If you have a backup that is /data/backup/test/ and you want to rename it to \\192.168.1.205\backup\test\, that is easy enough. But it’s the subdirectories that cause trouble: you can’t just search for “/” and replace it with “\”, because there are hashes in the file that use the “/” character as part of the hash, and you can end up accidentally corrupting a hash.
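To make the pitfall concrete, here is a tiny illustration. The entry layout and the hash value below are made up for demonstration and are not copied from a real dlist:

  # A blind text replace corrupts hashes: Duplicati stores them as base64
  # strings, which can legitimately contain "/".  Entry and hash here are
  # illustrative only, not taken from a real filelist.json.
  line = '{"type":"File","path":"/data/backup/test/a.txt","hash":"qX1/Zt8mPd0="}'
  print(line.replace("/", "\\"))
  # The path gets its backslashes, but "qX1/Zt8mPd0=" becomes "qX1\Zt8mPd0="
  # and no longer matches any stored block.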

Anyone have any suggestions on fixing this file path? or somehow reindexing?

Let’s make sure I’m understanding the current situation.

You run a backup of source files on Windows (meaning all the paths are backslash Windows style).

Then you export the backup job and move it and the source files to a Linux environment, import the backup job and change the source and destination paths (now forward slash based) as appropriate for the new OS.

When you run the moved backup, it complains that it can’t find any files in filelist.json (inside the dlist.zip), because the source slashes (and root paths) have changed?

Correct, I get the error:

“The backup contains files that belong to another operating system. Proceeding with a backup would cause the database to contain paths from two different operation systems, which is not supported. To proceed without losing remote data, delete all filesets and make sure the --no-auto-compact option is set, then run the backup again to re-use the existing data on the remote store.”

But there is no way to ‘delete the filesets’.

Thanks for the confirmation.

This is speculation on my part (hopefully @kenkendk or somebody else can confirm) but it sounds like you need to purposefully cripple the backup so that a rebuild of dlist.zip data is triggered.

DO NOT TRY THESE until it’s confirmed by somebody else, but my guess is you’d need to do something LIKE:

  1. manually connect to your destination
  2. delete (or better yet, rename) the dlist.zip file (see the sketch after this list)
  3. add the --no-auto-compact Advanced parameter to your backup (otherwise I think it will delete your backups, as it will see every backup file as wasted space since it’s not referenced in the dlist.zip file)
  4. run your backup, at which point I expect it will
    a. realize there is no remote dlist.zip file
    b. scan all your local files to build new dlist.zip contents
    c. upload the new dlist.zip
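As a minimal sketch of step 2, assuming the destination is a folder you can reach from the filesystem (the mount point below is hypothetical; encrypted backups will have .dlist.zip.aes names instead). Renaming rather than deleting keeps a way back if this turns out not to work:

  import glob, os

  DEST = "/mnt/backup-destination"   # hypothetical path to the destination folder

  # Rename every dlist file so Duplicati no longer recognizes it,
  # but keep the originals around under a different suffix.
  for name in glob.glob(os.path.join(DEST, "*.dlist.zip*")):
      os.rename(name, name + ".disabled")
      print("renamed", name)

Whether a renamed file is truly ignored depends on how Duplicati lists the remote folder, so keep copies of everything before trying this.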

If you brought your local Windows sqlite files over to Linux rather than exporting/importing the backup job, then something like this MIGHT end up being needed to get paths in the local database structured correctly:

  1. export your backup
  2. delete the backup but DO NOT “Delete remote files”
  3. import the backup
  4. run the backup - at which point it will:
    a. realize it has no local data files
    b. download remote files to re-populate the local data files
    c. run the backup and NOT upload very much data, as it will see the contents already exist at the destination

Again, DO NOT do either of these until somebody can confirm either of them are the correct process.

OK, so I’ve tried your first suggestion, but no success…

Here were my steps; let me know if I misunderstood.

  1. Made windows backup
  2. Exported backup
  3. Imported into linux
  4. Renamed DLIST files manually at the backup destination
  5. Added --no-auto-compact parameter
  6. Ran backup - error: no list files present
  7. Attempted to run repair - Error
  8. Attempted to delete and run repair - Recreated
  9. Ran backup - Errored.

So it doesn’t look like this method works.

I’m not quite understanding your second method; are you saying that you need to redownload the whole backup again?

There is a check in the source that prevents you from continuing if you have no remote dlist files. You need to create one, but it can be empty (i.e. no files) and put it in the remote folder to “fool” Duplicati into thinking that it is a valid backup. Then you can run the repair, which will build the local database, but with no files.

The next backup should then add your filenames to the new dlist file, and you can use the commandline to DELETE the old/fake/empty version (version 1).
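Here is a rough sketch of building such an “empty” dlist from an existing one that has already been downloaded and decrypted. The internal layout and file naming are assumptions based on this thread (an array-only filelist.json plus a manifest), so compare against your own files first; a later post below also mentions updating the date inside the manifest:

  import zipfile
  from datetime import datetime, timezone

  SRC = "duplicati-20240101T000000Z.dlist.zip"   # an existing, decrypted dlist (example name)
  stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
  DST = "duplicati-" + stamp + ".dlist.zip"      # give the fake version today's timestamp

  with zipfile.ZipFile(SRC) as src, zipfile.ZipFile(DST, "w", zipfile.ZIP_DEFLATED) as dst:
      for item in src.namelist():
          if item == "filelist.json":
              dst.writestr(item, "[]")           # an empty version: no files
          else:
              dst.writestr(item, src.read(item)) # manifest etc. copied as-is;
                                                 # the manifest date may also need editing
  print("wrote", DST)

If the backup is encrypted, the result still has to be re-encrypted with the backup passphrase (Duplicati uses the AES Crypt format by default) before uploading it to the destination.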

For the manual fixing of the dlist file, you can use something like Python to load the JSON, manipulate it, and then write it out again as JSON:
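A minimal sketch of such a script, assuming the filelist.json entries keep their path under a “path” key (check against your own filelist.json, and remember the edited file still has to go back into the dlist zip and be re-encrypted):

  import json

  OLD_PREFIX = "/data/backup/test/"                  # prefix currently in the dlist
  NEW_PREFIX = "\\\\192.168.1.205\\backup\\test\\"   # i.e. \\192.168.1.205\backup\test\

  with open("filelist.json", encoding="utf-8") as f:
      entries = json.load(f)

  # Rewrite only the path fields, so hashes and other values are never touched.
  for entry in entries:
      path = entry.get("path", "")
      if path.startswith(OLD_PREFIX):
          tail = path[len(OLD_PREFIX):].replace("/", "\\")
          entry["path"] = NEW_PREFIX + tail

  with open("filelist.json", "w", encoding="utf-8") as f:
      json.dump(entries, f)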

This will allow you to do the replace only on the path fields, instead of a simple search-and-replace.

OK, well the fake dlist file sounds like a much easier way to go than making a script. So once you run the repair, will it create a proper dlist file?

No, “repair” will rebuild the local database.

The next backup you run will create the correct dlist file, and re-use the dblock data.

Ah, OK! So it won’t actually redo the backup. That’s good to know.

Sorry, but I don’t understand what I need to do.

I have the same problem when migrating from Windows to Ubuntu. I had only two filesets.

I used the web interface for all D2 operations.

What I did…

  1. Downloaded the dlist file and decrypted it.
  2. Emptied the JSON so the file contained only the text: []
  3. Changed the file name, and the date in the manifest, to today.
  4. Encrypted it again and uploaded it.
  5. Ran Delete and Repair.
  6. Ran the backup.

And I get the same error. But when I press Restore, I see my new empty fileset (the last one).
Also, I see two filesets instead of two in the web UI.

What did I do wrong, and how can I save my two filesets (maybe in other backups I would see more filesets)? Or is there no way to do it?

A related question: are you saving what you change, so that another try is possible?
For example, do you have the old dlist and the old database (and everything else)?
If things have been permanently destroyed, then that limits the options going forward.

From one point of view they were fine before changes, and could have been restored.
Restoring to non-Windows would have made you specify a new restore folder though,
because paths are different. File attributes are also very different from Windows ones.
How did you wind up putting prior Windows files on the NAS? Did timestamps persist?

To clarify terminology, a fileset is a backup version, and should correspond to a dlist file.
Do you mean you had many backups but you don’t know how many versions each had?
What’s the state of the Windows system? Don’t have two systems backing up to the same destination.

Is avoiding a re-upload of all the previous file data sufficient? That’s the easiest part to achieve,
provided you don’t mind losing the ability to restore old versions. Those steps were itemized above,
then dlist editing was briefly mentioned; it’s needed only if you want to continue the backups.
If you wish to do that, then you also get to try to migrate old configs (steps not described).

There’s a lot in the current version of your post that I find hard to follow. Consider reviewing and editing it.

There’s probably no way to have an ideal totally transparent migrate-and-continue because
systems are too different. Partial things you can get were listed. How much will be enough?
The more you want, the harder and less certain it gets. Major surgery is difficult and is risky.

Sorry I didn’t have time to answer before. After this post I got the idea to replace the paths in the fileset .json file, and after that everything worked fine.
If you want my opinion, D2 could store per-file information about whether a path is Linux or Windows, ask how to replace each path after migrating, and ask how to replace each path again when restoring. It would not be a very good solution, but an acceptable one.

Unfortunately I know neither C# nor Angular, so I can’t help implement this. I thought about a small utility, but I don’t know what would work best for the community.

A fileset .json file inside the .dlist file looks like the below, but whatever you did, I’m happy it worked.

{"IsFullBackup":true}

Paths would be in the filelist.json file. How the backup process works shows some examples of it.

Not following this. Duplicati before migrating doesn’t know where you’re going to put all the current files.

Not following this. You certainly would not want questions to the user for each path found in the backup.

Not following this for the same reason, plus it currently asks “Where do you want to restore the files to?”, which sets up one location for the whole restore instead of asking how to replace each path again.

While I’m somewhat interested in clarifying the idea, the chances of it happening depend on a volunteer. Volunteers from its community are what keep Duplicati going. There are ample opportunities to help out.

Thank you for your answer.

Duplicati before migrating doesn’t know where you’re going to put all the current files.
Yes, but my idea is for Duplicati to know. It could know about the change of OS type.
My idea is to change how paths get replaced, instead of everyone having to do what I did just to avoid re-uploading all the backups and to keep the backup history. I think that’s a fair price for it.

It could be some convenient interface for changing paths. For example, if you change the shortest path, all subfolders could be changed (whether to ask or not doesn’t matter now; for the first version I think not).

As I said before, this interface could appear after creating the DB when the last fileset is for a different OS type, or when restoring and the chosen fileset was created by a different OS type.

Is that clear? I don’t know how else I can help, except with money on BountySource.

Getting slightly closer. Before continuing though, what is the current situation? Still trying to migrate?

I’m not sure how well that works. I couldn’t work out from its website what its Duplicati success has been.

Finding out how well or how badly the current support (or the workarounds for lack of support) works could help.
I just copied a Windows database to Linux and asked for a Restore. As expected, it asked for a folder:

[screenshot: the restore prompt asking where to restore the files to]

I took its advice, gave it a path, and my Windows file appeared on Linux. If I had asked for a folder, it would have put the entire restored tree there. This might be the case behind your “change the shortest path”:

If so, I would call the “shortest path” the “start of the path” or “path prefix”. It changes for the whole folder tree.
Assuming this actually works as well as it seems to (you can try), you might be all set on the restore side.
The backup side, though, says:

[screenshot: the backup-side error message]

It neglected to tell me to also turn on the allow-full-removal option, but I did that too, used the Commandline delete command with an excessive version range of 0-999, verified the Restore screen showed no versions, removed the two options I had added, then ran the backup. I now have a Linux-path backup of the Linux path I configured manually. I will take the message at its word that the existing data blocks were reused, but I didn’t test that.

What I do not have any more is the old backup versions from Windows. If this is needed, it gets harder.

You can see that you already have cross-platform restore and a way to avoid upload of all the data again. There was no editing of dlist files or database, and no database recreate was needed. Is this sufficient?

If you still need old versions migrated, please say how many backups and how many versions you have. There’s a new method I devised which can do lots of versions, but it needs a lot of testing. Any interest?

No. All is OK, as I wrote before. I just find-and-replaced the old Windows paths with the new Linux paths, put it back in the .zip, and then encrypted it the same way I had decrypted and unzipped it.

But I used the .json file that I had exported via the “duplicati client” before (I think this is the same as exporting through the web GUI).
Fortunately I didn’t lose my data, so I don’t need to restore anything today. I’m only suggesting ideas for the future.

Thank you. I’ll keep that in mind. But I changed the dlist only because of this topic.

Yes, I could try it, because I have some backups too. I decided not to touch them because they consist of old data that shouldn’t be changed, but who knows. And of course, if it helps you or the project, I am ready.

The wild idea should be tried first on a test database before an actual one. You can find it in:
Delete old versions of backup without running a new backup

I tested on Windows, and I know sqlitebrowser exists for that. I don’t know if your NAS has it.
This method was in response to “loads of old versions”, meaning a lot of dlist files to edit…

The table I showed (not reproduced here) is in Windows format; it lists all the unique path prefixes across all backup versions.
Storing prefixes is just a space-saving measure to avoid repeating lots of paths that start the same way.

The first step would probably be to find-and-replace \ with /. The second step, set up above, uses a
regular expression to remove the drive letter, or edits the whole left part into the Linux version. A sketch of both steps follows.
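As a hedged sketch of those two steps, run against a copy of the local database. The table and column names (PathPrefix, Prefix, ID) are assumptions from memory of a recent Duplicati database layout, so open the copy in sqlitebrowser first and confirm they match what you actually have:

  import re, sqlite3

  # Work on a COPY of the database, never the live one.
  db = sqlite3.connect("copy-of-backup-database.sqlite")

  rows = db.execute("SELECT ID, Prefix FROM PathPrefix").fetchall()
  for row_id, prefix in rows:
      new = prefix.replace("\\", "/")         # step 1: backslashes to forward slashes
      new = re.sub(r"^[A-Za-z]:", "", new)    # step 2: drop the drive letter, e.g. C:
      db.execute("UPDATE PathPrefix SET Prefix = ? WHERE ID = ?", (new, row_id))

  db.commit()
  db.close()

Instead of just dropping the drive letter, you may want to map each whole prefix onto the actual Linux source folder, as mentioned above.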

Writing these changes out (on non-test database, make sure you saved a copy of original DB)
should be equivalent to editing all the dlist files and then doing a recreate (not needed here).

You can then move all the dlist files to a different folder, hiding them yet keeping them as a backup just in case.
Repair button should notice that they’re gone, and recreate them from its new Linux-path data.

So the theory goes, but I only tried it once, rather roughly. It’s possible the method needs changing.

When/if it seems to be working on a test backup, try it on the real one, but keep backup copies of everything.
As said in the other topic, “all of this internal surgery is poorly charted” so do it at your own risk.


Sorry, I didn’t write back before. After changing the tables everything was good, but after that I decided to return to Windows. I don’t know why, but I only have problems with Linux (Ubuntu, Lubuntu) on different machines.