Possible to relink an existing backup store (for pre-seeding purposes)

Still struggling with this problem. Isn’t there any way to resync the database so that it will work with a moved source?

Doooh! I had not thought about that, but yes, this will prevent you from doing it correctly.

For now you cannot re-sync in a cross-OS manner.

If you really want it, you can grab the original dlist file, remove all the files in there (or change the paths to fit the new OS), and place that custom dlist file in the remote folder.
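
In case it helps to see that concretely, here is a rough sketch of the "remove all the files" variant, assuming a GPG-encrypted backup; the dlist file name is a placeholder and the exact archive contents may vary between Duplicati versions:

# Sketch only: the file name is a placeholder, not taken from this thread
gpg --output dlist.zip --decrypt duplicati-20180101T000000Z.dlist.zip.gpg   # decrypt the dlist
unzip dlist.zip filelist.json          # pull out the file list
echo '[]' > filelist.json              # filelist.json holds a JSON array of file entries; empty it
zip dlist.zip filelist.json            # update the archive with the emptied list
gpg -c dlist.zip                       # re-encrypt (symmetric); upload the new .gpg over the original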


OK, in that case I will probably have to move the backups stemming from a Windows client using a Windows machine, and those from Linux clients using a Linux machine.
Anyway, thanks for the support! :slight_smile:

BTW: A good way of showing your appreciation for a post is to like it: just press the :heart: button under the post. If you asked the original question, you can also mark an answer as the accepted answer which solved your problem using the tick-box button under each reply. All of this also helps the forum software distinguish interesting from less interesting posts when compiling summary emails.


I discovered this thread after having a problem migrating from CrashPlan using the process I wrote up here: Seed Duplicati from CrashPlan?

I got the following message right at the end of the final migration, but was not able to understand it until I found this thread.
(screenshot of the error message)
I’m about to try the process described farther up this thread and see what happens.
Wish me luck :wink:

Ok, so this is by no means straightforward.

I had to:

  1. decrypt the dlist files from the backup destination, in a safe folder because I’m paranoid (gpg --decrypt file.name.zip.gpg)
  2. unzip the dlist file (unzip file.name.zip)
  3. construct some carefully crafted sed statements to translate the paths to match the source (sed commands below)
  4. rezip the dlist file (zip file.name.zip filelist.json manifest)
  5. re-encrypt the dlist file (gpg -c file.name.zip)

sed commands I used:
sed -i 's/F:\\Data\\Greg-temp//g' filelist.json
sed -i 's#\\#/#g' filelist.json
sed -i 's#//#/#g' filelist.json
(Yes, I could probably do this better, but this is a one-time gig.)
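
For anyone following along, here is what those five steps could look like combined into one loop. It is only a sketch: the destination path is a placeholder, and note that filelist.json stores Windows backslashes JSON-escaped (doubled), which is why the prefix pattern here doubles them again for sed; check how the paths actually appear in your own filelist.json before running anything.

#!/bin/bash
# Sketch only: adjust the destination path and the source prefix to your own backup.
set -e
mkdir -p dlist-edit && cd dlist-edit                       # work in a scratch folder
for f in /path/to/destination/*.dlist.zip.gpg; do          # placeholder path
    name=$(basename "$f" .gpg)                             # e.g. duplicati-...dlist.zip
    gpg --output "$name" --decrypt "$f"                    # 1. decrypt
    unzip -o "$name" filelist.json                         # 2. unzip the file list
    sed -i 's/F:\\\\Data\\\\Greg-temp//g' filelist.json    # 3. drop the Windows prefix (JSON doubles the backslashes)
    sed -i 's#\\#/#g' filelist.json                        #    backslashes -> slashes (leaves // pairs)
    sed -i 's#//#/#g' filelist.json                        #    collapse the doubled slashes
    zip "$name" filelist.json                              # 4. update the archive (manifest stays untouched inside)
    gpg -c "$name"                                         # 5. re-encrypt; upload "$name.gpg" in place of the original
done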

I’ll report back here once I know how well it has worked, if at all.


I’m starting to get a little worried. It’s been like this for almost 24 hours… no movement in the progress bar…

Just give it some more time; it should be OK, I think. Did you delete the db or repair it? You could also run the same path changes directly in the db, I think, and then no repair would be needed.
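
If you did want to try the edit-the-database route instead of a rebuild, the idea would be something like the sketch below. The table and column names here are assumptions (Duplicati’s local database schema differs between versions, and newer versions split paths across more than one table), so check with .schema first, and only ever work on a copy:

cp /path/to/job-database.sqlite job-database-backup.sqlite    # placeholder path; back it up first
sqlite3 /path/to/job-database.sqlite <<'SQL'
-- Assumed schema: a "File" table whose "Path" column holds the full source paths.
UPDATE File SET Path = REPLACE(Path, 'F:\Data\Greg-temp', '');
UPDATE File SET Path = REPLACE(Path, '\', '/');
SQL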

The / filesystem has filled up. The Duplicati process did not resume when I freed up space.

I’m not SQLite literate, so for me the easy route is to recreate it. :slight_smile:

I’ve restarted the repair now.

It’s been running since my last post. I selected a delete and repair on the database; I hope that was right, as it’s what’s mentioned above.

I’m monitoring the database files; their atime is changing (updating) every few minutes, so something is happening. The disks are all healthy, and the process (mono) is alive (fluctuations in the CPU and RAM allocated to it). The free space on the relevant volumes is consistently non-zero by the order of gigabytes, and overall CPU and RAM are available in acceptable quantities (never hitting the end-stops). I noticed that swap was low, so I added another 20 GB to the existing 5. Network traffic to the destination back end is active (packets are flowing, the TCP session is established, and send and receive flow rates are at expected levels, from non-zero to maxing out periodically).

Which all makes me wonder: what is it actually doing? What does a delete and repair actually do? Is it downloading each and every chunk and processing it before sending it back again? Is there any way to get more detailed statistics from Duplicati?

Bearing in mind I’m trying to seed a migrated backup set from CrashPlan, I’m wondering if I would be best placed to wipe the back end and start over? I’m keen to understand more.

Any advice welcome :slight_smile:

Thanks for your interest!

Duplicati stores 3 types of files in the backend:

  • dblock (the actual file blocks, generally about your "volume size" big)
  • dlist (what was backed up in a specific job run: mostly compressed text of file paths and hashes, generally pretty small, one per backup run)
  • dindex (which blocks are stored in which dblock files, generally pretty small)

A repair usually downloads dlist files to fill in the missing data in the database. A rebuild ends up downloading all the dlist files since all the data is missing.

If dlist files themselves are corrupt or missing then the raw dblock files are downloaded. This can take much longer due to their larger size.
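
If you have direct access to the destination folder (or a local mirror of it), something like the following gives a rough feel for how much of each type is there, and therefore how much a repair or rebuild might have to pull down. The path is a placeholder and this assumes GNU coreutils:

cd /path/to/backup/destination                        # placeholder: the backend folder or a local copy
for t in dblock dindex dlist; do
    echo "== $t =="
    find . -maxdepth 1 -name "*.${t}.*" | wc -l                                      # how many files
    find . -maxdepth 1 -name "*.${t}.*" -print0 | du -ch --files0-from=- | tail -1   # total size
done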


If you go to the main menu Show Log page you can look at the Live tab in Profiling mode to see pretty much everything Duplicati is doing.

Alternatively, if you use the --log-file and --log-level (set to profiling) parameters when the repair is started you’ll get a text file of the same info.
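
As a concrete example (the storage URL and passphrase are placeholders, and on Linux the command may be duplicati-cli or mono Duplicati.CommandLine.exe depending on how it was installed), the repair could be started along these lines and the log followed in another terminal:

duplicati-cli repair ssh://user@host/backup-folder \
    --passphrase="..." \
    --log-file=/tmp/duplicati-repair.log \
    --log-level=profiling
tail -f /tmp/duplicati-repair.log     # watch the profiling output as the repair runs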

Ok, thanks.

I have nothing for logs, except a droplist saying “Disabled”.

Where would I look to find the files that it has downloaded from the backend, to see what size they are and how many there are? (I see in the job-specific logs that it has downloaded “17517 volumes for blocklists” and is currently processing all of them.)

Is “blocklist” here the dblock files? (I have only 2 dlist files on the backend; these are the ones I modified to correct the file paths.)

I’m just wondering if I should cancel the database rebuild and start the backup from scratch. I’m worried that if I lose connection on my flaky ADSL broadband, or the SSH tunnel drops or something, I would have to restart the rebuild from scratch… in which case a fresh backup might be preferable, as at least it will pick up where it left off?

Thanks

I suppose what I’m really looking for is an ETA for completion.

I feel getting an ETA is difficult because, in the case of a backup operation, it depends on the file types and how fast they can be processed. The ETA will keep fluctuating; other factors are bandwidth, system performance, and so on.

In this case, though, we’re only talking about rebuilding the database, so I think that’s just analysing the blocklist files and comparing them to the source filesystem.

At least some indicator of how many it has processed would be helpful.

It sounds like you’re in the right place - does the droplist not show other options (such as Profiling)?

For a rebuild that could happen, but if all the files have already been downloaded, or the connection drops and then reconnects between downloads, it shouldn’t be an issue.

The way I see it, if you cancel the repair you can be 100% sure it won’t work - letting it run improves those odds quite a bit. :wink:

As for an ETA, on a database repair that’s very hard to pin down. I personally haven’t had any long (to me) rebuild times but we’ve heard from other users who have waited a week or two for the repair to finish.

I don’t know that we’ve pinned down yet why there’s such a huge difference in repair times, so we can’t really say whether yours is likely to be fast or slow.

The “Profiling” view in the “Log data from the server” shown above should give you a step-by-step list of what is going on. If you’re not seeing anything other than “Disabled” try refreshing (F5 or Ctrl-F5) your browser window.

See, now that’s an interesting point… I noticed a number of times in the last 3 days or so that the droplists in Duplicati were not populating properly… I figured it was just a symptom of it being busy.

If you weren’t getting a “Missing XSRF Token” error then there’s definitely something else going on…

Ok, so I tried the rebuild again, this time being careful not to touch anything else on the system (no apt update etc)

It ran for about 24 hours and appears to have stalled at this point yesterday.

Do I just leave it and hope it finishes?