Timespan to rebuild a database?

Not your word, then, but from gpatel-fr … credits to him :slight_smile: But credit to you as well for your kindness and wealth of ideas. “Should” sounds like a time-costly test…

I don’t expect the NTFS rights to be restored either - I don’t even want that, because the new server has a completely new setup for access rights which finally meets the goals the customers wanted. This is really the good side of the crash…

You’re right.

You’re right.

Hit the goal!

My plan is to restore the files to the “small” Debian server VM, do a folder and filename cleanup, and transfer them to the new Windows server. We are only missing about 20’000 files which weren’t copied from the crashed server to the new one - mostly because of “forbidden” characters (worse than you think) where NTFS says “No”!

Because the only task of the Debian VM is to restore the Duplicati backups. After restore, cleanup, transfer and verification, the restored folders on the Debian VM will be deleted and a reboot will be done to free RAM. This scenario exists because the crashed Debian server must be up as long as the new server setup will be switched to production. Thank god, the old Debian contains the most important files.

Originally we had to recover 3 TB of data in more than a million files onto a new Windows VM. This doesn’t work in a “short” timespan and ran into - you know it - some other “maybe-a-problem”. The next step was to repair the ext4 filesystem of the old (crashed) Debian VM - which worked - and transfer the data to the Windows server.

At this point, again the question, or as a comment: why didn’t Duplicati release the RAM it used? I think it would be great to investigate this.

I hope the transfer of the Duplicati database will work well. This will be the decisive point of the plan.

Crossing fingers.

Still not seeing it. I keep finding and making sad comments about DB recreate when dindex files are lost, however you’re not doing that. If this database already exists and works, just move it and test it.

Might be too late now to be relevant, but

Why so? Restore a sample first before you go for 3 TB (which will be a slow test whichever OS you’re on).

Setup includes typing at least the basic destination and encryption data, like you did for Direct restore. Uncheck the Schedule, then try to Save and it will say what more it insists on, e.g. the job needs a name. Satisfying the needs-source-file requirement just means picking a small, minor one. You won’t actually run a backup.

Once you have a job, it has an assigned database with no file yet, so you can copy your old DB into that path. Alternatively, if you want to keep the old DB name, set the path to point to it. Next up is a small restore-and-look.

Having said that, you have other needs, such as the forbidden-character cleanup, whose behavior in Duplicati you could test if you restore on Windows. I hope you’d get some sort of noise-then-continue, meaning you might be able to get a nice list of the names that are impossible to write. Read it or script from it.

Maybe instead of fixing the broken ones afterwards, you’d prefer to restore to Linux (no character breaks) and pre-fix the names using some scan-and-fix script (which needs no list because it makes its own). Many ways might work.
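In case it helps, here’s a minimal sketch of what such a scan-and-fix script could look like, assuming the restore lands under a placeholder path like /restore/data. The character and reserved-name rules are the usual NTFS ones, and replacing offenders with an underscore is just an example policy:

```python
#!/usr/bin/env python3
# Sketch: walk a Linux restore tree and rename anything NTFS would reject
# (reserved characters, trailing dots/spaces, reserved device names).
# RESTORE_ROOT is a placeholder; adjust the fix-up rules to your own policy.
import os
import re

RESTORE_ROOT = "/restore/data"
BAD_CHARS = re.compile(r'[<>:"\\|?*\x00-\x1f]')
RESERVED = {"CON", "PRN", "AUX", "NUL",
            *{f"COM{i}" for i in range(1, 10)},
            *{f"LPT{i}" for i in range(1, 10)}}

def ntfs_safe(name: str) -> str:
    """Return a name Windows should accept, replacing offenders with '_'."""
    fixed = BAD_CHARS.sub("_", name).rstrip(" .")
    if not fixed or fixed.split(".")[0].upper() in RESERVED:
        fixed = "_" + fixed
    return fixed

# Walk bottom-up so a directory's contents are renamed before the directory.
for dirpath, dirnames, filenames in os.walk(RESTORE_ROOT, topdown=False):
    for name in filenames + dirnames:
        fixed = ntfs_safe(name)
        if fixed == name:
            continue
        src, dst = os.path.join(dirpath, name), os.path.join(dirpath, fixed)
        if os.path.lexists(dst):
            print(f"SKIP (target already exists): {src!r}")
        else:
            print(f"RENAME: {src!r} -> {dst!r}")
            os.rename(src, dst)
```

Run it on a copy first, or turn the os.rename into a dry-run print, since names that collide after cleanup need a human decision.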

The situation is still confusing, but if your remaining need is 20,000 files, that might avoid a memory issue.

The rest were direct-copied? That might also have created an error output of whichever characters failed, however if the crashed server still has the files, direct copy would work to clean up the system prior to a final copy.

If it helps any: The RESTORE command takes a list of files, but the shell command line has length limits. Duplicati may also.
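If the list of missing files is longer than one command line can carry, one workaround is to feed the restore in chunks. This is only a sketch under assumptions: the storage URL, passphrase, database path, list file and chunk size are placeholders, and the exact flags should be taken from your job’s Export As Command-line output rather than from here:

```python
#!/usr/bin/env python3
# Sketch: restore a long list of files in chunks so no single invocation
# exceeds shell argument-length limits. All values below are placeholders;
# copy the real URL/options from "Export As Command-line".
import subprocess

STORAGE_URL = "file:///mnt/backup/duplicati"       # placeholder destination
DBPATH = "/root/.config/Duplicati/OLDJOB.sqlite"   # placeholder database path
RESTORE_TO = "/restore/data"
CHUNK = 500                                        # files per invocation

with open("missing-files.txt") as fh:              # one source path per line
    wanted = [line.rstrip("\n") for line in fh if line.strip()]

for i in range(0, len(wanted), CHUNK):
    chunk = wanted[i:i + CHUNK]
    cmd = ["duplicati-cli", "restore", STORAGE_URL, *chunk,
           f"--dbpath={DBPATH}",
           f"--restore-path={RESTORE_TO}",
           "--passphrase=SECRET"]                  # placeholder
    print(f"restoring files {i + 1}..{i + len(chunk)}")
    subprocess.run(cmd, check=True)
```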

If you’re going to do anything right at a shell, start with Export As Command-line, then edit as you need.

It sounds like you’re gap-filling on the new machine’s first setup. It’s unclear if you’ll keep it around after that.

If there’s a question, besides character-set loss, of whether you got everything, Duplicati can do a listing:
The FIND command plus some scripting. It won’t find file damage, though. For that, doing a Linux restore (to avoid character-set issues) followed by a compare to Windows would be one way to find gaps.
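The scripting part might look something like the sketch below. It assumes you’ve already saved a cleaned-up listing (e.g. derived from the FIND command output) as one source path per line; the path prefixes are placeholders, and it only checks presence, not file damage:

```python
#!/usr/bin/env python3
# Sketch: compare a listing of what the backup contains against what actually
# arrived on the new server, to spot gaps. "backup-listing.txt" is assumed to
# hold one source path per line; the prefixes below are placeholders.
import os

SOURCE_PREFIX = "/srv/data/"         # prefix of paths as stored in the backup
TARGET_ROOT = "/mnt/newserver/data"  # where the files should now exist

missing = []
with open("backup-listing.txt") as fh:
    for line in fh:
        path = line.strip()
        if not path.startswith(SOURCE_PREFIX):
            continue
        target = os.path.join(TARGET_ROOT, path[len(SOURCE_PREFIX):])
        if not os.path.exists(target):
            missing.append(path)

print(f"{len(missing)} files not found on the target")
for path in missing:
    print(path)
```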

Or maybe you’re eventually planning a redo based on the restore rather than on the FS fix and partial copy.

“as long as” rather than “until” means it’s not a setup thing but continuing? That’s a little bit precarious, but the older history isn’t anywhere else. If you need an occasional small restore, be sure to protect the database, because recreating it wasn’t going so smoothly. Consider the old backup to be only a fixed historical restorer.

Duplicati doesn’t directly release RAM, though it can have some influence, or bugs. Managed code has a runtime that deals with what you see at the OS level, and it varies with the OS. I commented on that earlier.

Finding memory leaks is hard and usually benefits from lots of characterization of which use cases trigger it versus which ones don’t. Your exact behavior isn’t described, but you could experiment, ideally getting it down to a small number of steps on a small configuration with a destination any developer can acquire.

Once you have an exact recipe, file an Issue; then perhaps someday (it could be a long wait) a sufficiently expert developer will arrive to look into it. There are far more things wanting attention than volunteers to assist.

Garbage Collection is the Mono page on what I mentioned, whose words describing the situation are:

The garbage collector, in short, is that part of the language’s or platform’s runtime that keeps a program from exhausting the computer’s memory, despite the lack of an explicit way to free objects that have been allocated.

This doesn’t mean there’s no part Duplicati might play in this, and some specialized tooling might help.

dotMemory from JetBrains is one we can get for free for core contributors, but those are rather scarce.
Measure memory usage in Visual Studio (C#, Visual Basic, C++, F#) might be another way to inspect.

Both would probably benefit from a specific use case, ideally an easily reproducible one to examine.

There are also Duplicati options that can impact the memory use, so options in use should be checked.

use-block-cache, for example, reduces database lookups for big backups, at the expense of memory…

Bad strategy, and it’s not your fault!