Hi, can someone explain to me what these errors are:
“Failed while executing “Restore” with id:”
“Found 23 remote files that are not recorded in local storage, please run repair”
I’m using Amazon S3 to back up one Linux machine, and those backups run successfully. When I try to restore from the config file onto another Linux instance (which also has Duplicati running, and where the destination directory is empty), the restore still fails after about an hour for 4 GB of data, creating a temp db and listing files. What are these files that are missing, and on which side: S3, or my new directory, which is empty? That doesn’t make any sense to me.
I was browsing some threads here, but most of them say to repair the db (which one, the “temp db”?) or to delete files in the destination, which doesn’t seem relevant in my case… I’m not sure, because I’m a new user, so if someone has faced a similar issue and is willing to share a solution, thank you!
I hope you don’t have two Duplicati instances trying to work with the exact same back end location. It’s a recipe for failure.
If you are trying to do a restore on a separate machine, that should work. But you need to make sure the original machine that was doing backups to that target location isn’t still actively trying to do backups. Can you confirm?
They are separate instances, and the one I’m trying to restore on isn’t set up for backups. So you’re saying that every time I need to restore the latest point, I should stop backups on the source instance before doing the restore? And one more thing comes to mind: if the backup instance is dead and cannot be recovered, should I avoid restoring the latest version because it could be damaged?
Yes, if you are testing or just otherwise want to do a restore on a different computer, make sure Duplicati is not running on the original computer.
If your original computer died in the middle of a backup, then I don’t think that partial backup will be shown at all when you look to restore data on another PC. The backup versions are stored in “dlist” files and that is only uploaded at the end of the backup job, from what I recall.
That’s my thought, although I still warn people to avoid collisions, i.e. restores elsewhere during backup.
Best case is it will just see some extra data that no source file appears to use (that info is in the dlist).
Worst case is something will be more confused by the partial backup. If the source system dies, there’s no choice but to pick up the incomplete backup, but if there’s an option, don’t tempt fate. Feel free to test it, because anybody who can find a repeatable failure potentially paves the way to a problem being solved, or at least to a better understanding of the limits of the system so that forum users can be better advised.
There’s a current topic talking about what’s safe, less safe, and don’t-do-it, e.g. don’t point two backups at one folder and run them – this will throw things way off. Good testing (including rare cases) can be beneficial.
Testing is one thing, but Duplicati is not a file sync program. Above topic on testing talks about (and gives references for) some different ways to do restores. Database should match the destination that it tracks, and this is easy to foul up when two databases (one per Duplicati installation) are configured to one folder.
Temporary databases (e.g. Direct restore from backup files or Restore from configuration) avoid having an out-of-sync database by building a partial temporary database (slowly?) when they’re run.
If “restore from conf file” means you checked the Restore from configuration option (and the description of the temp db sounds like you did), I’m not sure what it thought was extra. In a small test, I threw an extra dblock file into the destination and it didn’t mind, but possibly there’s a timing issue if it arrives at the wrong time.
You can either see if it does that when things are settled (no backup running) or see if you can get names from About → Show log → Live → Warning. I think it will talk about “Extra unknown file” and name names which can then be looked up to see, for example if they are recently uploaded files from a running backup.
“recorded in local storage” refers to the local database storage, and any message about missing or extra remote files refers to destination files relative to the database record. They aren’t source or restore-target.
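To make that concrete, here’s a minimal conceptual sketch (not Duplicati’s actual code; the file names are made up) of what a check like that does: list what is actually at the destination, compare it with what the local database says it uploaded, and complain about the difference in either direction.

```python
# Conceptual sketch of the destination-vs-database consistency check.
# File names are hypothetical examples, not from a real backup.

# What the local database says was uploaded:
recorded_in_db = {
    "duplicati-b001.dblock.zip.aes",
    "duplicati-i001.dindex.zip.aes",
}

# What a listing of the destination actually returns:
listed_remotely = {
    "duplicati-b001.dblock.zip.aes",
    "duplicati-i001.dindex.zip.aes",
    "duplicati-b999.dblock.zip.aes",  # e.g. uploaded by another backup job
}

# "Found N remote files that are not recorded in local storage":
extra = listed_remotely - recorded_in_db
# The opposite problem, remote files the database expects but can't find:
missing = recorded_in_db - listed_remotely

print(sorted(extra))    # ['duplicati-b999.dblock.zip.aes']
print(sorted(missing))  # []
```

Note that neither set involves the source files or the restore target folder; the complaint is purely about the destination relative to the database.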
My experience thus far suggests that Duplicati is great for these scenarios:
1. I run regular backups that retain previous versions. Some files or folders got deleted or messed up, and I want to restore them (on the same machine) from a previous version (or the last backup, if the files were still OK then).
2. I have to replace my computer or its mass storage, or wipe it and start over, and I want to install Duplicati, set up my old backup jobs, then restore everything from those jobs. It’s going to take some time, but that’s OK… I want it all back.
Duplicati is nearly useless for these situations:
3. I can’t get to my computer (whether it failed, was stolen, or I’m stranded somewhere else); I can use a computer temporarily (like a friend’s computer or a hotel computer), and I really need to get access quickly to a few critical files I backed up.
4. I back up some folders on one of my computers regularly, and I frequently want to update corresponding folders on a different computer to match the latest backup from the first computer.
The thing is that Duplicati is very dependent on the “local database” for each backup configuration. It can’t do much without it, and it takes a long time to create it from scratch. Even using the “Direct restore” option it has to build a partial database, and in my trials that has taken significantly longer than downloading all the data. (This is on a DSL connection around 30 Mbps with an i9-9900K. The rate-limiting process appears to be single-threaded and CPU-bound.)
So when you have the local database available, it’s great. When you have time and space to rebuild, that’s fine. For situations 3 and 4 above, though, it’s not the right tool. I’m noting this because it sounds like you’re in situation 4; if so, you probably won’t get what you want from Duplicati. I know that doesn’t answer your original question.
Very nice insights, or at least I agree. The ability of Duplicati to achieve things like efficiently stored multiple backups (storing only changes) relies heavily on block tracking, which means a lot of work in the database.
The backup process explained is a simple explanation that I forgot to mention last time, but basically every source file that’s backed up is backed up as fixed-size blocks (some of which might be elsewhere as well) which need to be reassembled into a file at restore time using database information as a reassembly map.
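As a toy sketch of that idea (made-up block size and in-memory “storage”; Duplicati’s real format and hashing details differ), each file is split into fixed-size blocks, duplicate blocks are stored only once, and restore reassembles the file from an ordered list of block hashes, i.e. the database’s reassembly map:

```python
# Toy illustration of fixed-size block deduplication and restore-time
# reassembly. Block size and the dict-based store are illustrative only.
import hashlib

BLOCK_SIZE = 4  # tiny for demonstration; real block sizes are ~100 KiB

block_store = {}  # hash -> block bytes (stands in for dblock storage)

def backup(data: bytes):
    """Split data into fixed-size blocks, store each unique block once,
    and return the ordered hash list (the reassembly map)."""
    hashes = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        block_store.setdefault(h, block)  # dedup: repeated blocks stored once
        hashes.append(h)
    return hashes

def restore(hashes):
    """Reassemble the original bytes from the map, as a restore must."""
    return b"".join(block_store[h] for h in hashes)

reassembly_map = backup(b"ABCDABCDEF")  # "ABCD" occurs twice -> stored once
assert restore(reassembly_map) == b"ABCDABCDEF"
print(len(reassembly_map), len(block_store))  # 3 blocks referenced, 2 stored
```

This is why the map matters so much: without the database (or a rebuilt one), the blocks sitting in storage are just anonymous chunks.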
This makes it impossible to run quick-and-casual restores (#3) without getting a bit of a database together. Backup programs (typically commercial) with central servers can solve this using server-side processing, however Duplicati is a bring-your-own-storage system, and the storage does just that – no special smarts.
One program that manages to do block-based deduplication without a database while using plain storage is Duplicacy, whose GUI version isn’t free, but I assume it can get a file restore going fast (#3). I’m unsure how well it solves #4, but at least it doesn’t mind multiple computers going to one remote storage location.
If anybody is motivated to see how it handles #4, I’ll point to one rather old post which sounds like one can make a second computer appear just like the original. This is good for restores to it, but backups are risky.
I agree with your #4, but not your #3 if you are prepared. I have all my backup configurations exported and stored in an encrypted archive which I save in my OneDrive and Dropbox folders. If my main PC dies I can easily grab the file and without too much trouble restore data on a different PC.
I do fear that some people don’t keep careful track of their backup configurations and will have difficulty in this scenario. There are many pros and cons to having a client-side backup program that works with “dumb” storage locations, and this aspect could definitely be a con if one is not prepared.
I was looking more at the #3 description than the “nearly useless” category heading. It’s probably better than that, provided you can install Duplicati (would a hotel computer allow it?), have prepared at least the essential info (an export of the config is best), and can wait for however long Recreate takes (once per bunch of critical files).
I think a can’t-install scenario would be challenging to any installed solution. An online service works best.
Duplicati being able to run without the usual install (unzip it into any folder you like) allows a portable install.
This is very subjective. I’d rate Duplicati better than “nearly useless” but not great on “get access quickly” depending on how slow the Recreate is (one can help it by keeping versions few and backup size small).
That is more or less what I meant. I understand, and accept, that Duplicati isn’t meant to be able to get to your files quickly in an emergency. I didn’t grasp that at first, though.
I knew Duplicati can’t run in a web browser, but at least it can run on Windows, Mac OSX and Linux, so I figured if I could get to any “real” computer that isn’t totally locked down, I could probably get my files. What I missed was that it won’t be like restoring an old version of a file on the original system. It can easily take an hour or more — some people report days — just to rebuild the database, and that’s a prerequisite for doing anything else. If I’m stranded away from home, my wallet and phone have been stolen, and I really need to get to that file with my important phone numbers in it… a Duplicati backup is not going to provide a very practical way to get it.
So now I know — and I’m still looking for a good solution for emergency/crisis access to data that is end-to-end encrypted — but until you’ve tried to recreate a Duplicati database, you might not see it coming. (Just as I think the original poster in this thread might have expected to routinely restore Duplicati backups on a different machine than the one on which they’re being made… it’s not obvious at first that Duplicati isn’t a good tool for that.)
I do know of two solutions (to my problem, not the original poster’s), each with its own limitations. Tresorit, assuming it works as described, can do web access without compromising end-to-end encryption. However, it has no point-in-time restore for folders — only files. That means if ransomware knocks out a folder with two thousand files in it, and Tresorit syncs them before I catch it, I’d have to select a previous version of each of those two thousand files one by one, manually. That, to me, rules out Tresorit as a backup solution. For my budget, it’s too expensive to be only a sync solution and not also a backup solution. Also, it’s closed source, which is always a bit suspect when it comes to encryption.
KeeWeb is accessible from a web page, but it only handles KeePass files, and the back ends are limited. (Apparently there’s no way for a web page to access an SFTP server directly, which really… sucks.) I use it with some spare space on my shared web hosting account accessed via WebDAV (long story… only way I could find to make it work was by installing NextCloud and using its WebDAV… feels like swatting a fly with a sledgehammer) as my password manager, but obviously it’s not a backup system. I’ll only be able to retrieve limited, specific information which I’ve predicted ahead of time that I might need to retrieve.
Hi again, thanks for the explanations, so my problem basically cannot be solved. In my case (just to be clearer), I’m doing this because I need to migrate the server where the backups are made, which at this point is in prod. The backup is around 300 GB, and I need to keep and rely on those backups for future use and precaution. That’s why I’m trying to restore them first on the new server, so I can estimate the time a restore would need if something bad happened in between. But as I can see, all of you are saying I can’t rely on that… I have a few other instances running, and I also tested some of them yesterday with the Duplicati service stopped; it went flawlessly on the other server, around 10 minutes, but the backup data was significantly smaller, 11 GB. For this test I’m doing, with the bigger data and a more active server, I guess I don’t have much choice. It leaves me just hoping that I won’t need the backups.
Moving Duplicati databases is the key to avoiding database Recreate time (and maybe other issues).
After migration, the new server can continue backups and keep all history, but don’t let the old one back up.
300 GB is not too huge. Do you have any time window for migration with original server unchanged?
You’ll either need to do it in one shot, or find and move later changes with rclone or maybe robocopy.
EDIT: Depending on your files, finer points like permissions might matter, which complicates copying.
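The “find and move later changes” step could be sketched like this (a minimal Python sketch of the idea; in practice rclone or robocopy handle edge cases such as permissions, symlinks, and deletions far better, and the paths here are hypothetical):

```python
# Sketch: copy only files modified after a cutoff time, to catch up a
# one-shot copy with later changes. A simplified stand-in for what
# rclone/robocopy do; it ignores deletions, symlinks, and ACLs.
import os
import shutil

def copy_newer(src_root, dst_root, cutoff_epoch):
    """Walk src_root and copy files with mtime after cutoff_epoch into
    dst_root, preserving the relative layout. Returns copied paths."""
    copied = []
    for dirpath, _dirs, files in os.walk(src_root):
        for name in files:
            src = os.path.join(dirpath, name)
            if os.path.getmtime(src) > cutoff_epoch:
                rel = os.path.relpath(src, src_root)
                dst = os.path.join(dst_root, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)  # copy2 also preserves timestamps
                copied.append(rel)
    return sorted(copied)
```

The cutoff would be the time of the original one-shot copy; anything touched since then gets re-copied.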