Hey there, I was able to run my backup to a NAS on my local network. I can see it on the storage device, which was successfully created with the “Local drive” setting, since the NAS is set up as a network drive in Windows 10.
However, when it attempts to verify the backup it throws the error “Found 33743 remote files that are not recorded in local storage, please run repair”. I am not sure how to resolve this and would love some assistance.
I haven’t attempted a repair at this stage since I don’t know what’s going on. I’m still relatively new to this program; I’ve only used it for a few months, after switching from Acronis, which was just garbage.
There is nothing else in the remote folder except for the backup files
I don’t even know which direction to think in yet, but can you tell us how many files are currently in the backup target in total?
Are you still running new backups?
And just to double check, when you say there’s nothing in the remote folder except for backup files, are these only files whose names start with duplicati and contain dlist, dblock or dindex? For instance: duplicati-b01ea13c51f2a44bea30b41d45c42676e.dblock.zip.aes
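If you want a quick per-type count from a Command Prompt, something like the lines below should do it (I’m assuming the NAS share is mapped as Z: and the backup sits in Z:\Backup, so adjust the paths to yours); each dir listing is piped to find, which counts the lines:
rem count each Duplicati file type on the destination (Z:\Backup is an assumed path)
dir /b "Z:\Backup\duplicati-*.dlist.*" | find /c /v ""
dir /b "Z:\Backup\duplicati-*.dblock.*" | find /c /v ""
dir /b "Z:\Backup\duplicati-*.dindex.*" | find /c /v ""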
I cannot run any more backups to the NAS since it throws that error. It is working for my external drive connected via USB, however (identical settings except for the destination).
All files have duplicati at the start and end in dindex.zip.aes or dblock.zip.aes
This looks like either a missing database or an empty one, although the fact that you have only dblock and dindex files, and no dlist files, is concerning. Are you sure of that? If so, this would be a bad situation: your backend would be damaged (don’t run repair in this case).
Does the database file actually exist at the location currently used by Duplicati? Mind that if you change Duplicati from running as a user to running as SYSTEM, the database location changes, and it’s necessary to either move the database to its new location or recreate it.
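A quick way to check both spots, assuming default locations (the SYSTEM path below is the usual one when running as a service, but verify it on your install):
rem per-user databases (Duplicati running as your own user)
dir /s /b "%LOCALAPPDATA%\Duplicati\*.sqlite"
rem usual location when Duplicati runs as SYSTEM / as a service
dir /s /b "C:\Windows\System32\config\systemprofile\AppData\Local\Duplicati\*.sqlite"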
I’m not sure how good this log is at showing history. If need be, you could test a backup with it running.
EDIT:
The test above was from a backup run after a Delete of the prior backup’s database, so any files it finds surprise it.
There’s a careful verification that the destination matches the records in the database, but yours did not.
The test for missing files looks like it happens after the test for extra files. Did the file names get mangled?
Possibly that could be inferred from the “Extra” file names.
This is what I see in that log section when I try and run the backup.
Initializing the backup on the external drive works fine. (That one was made with the same settings except that the destination was the external drive. I added filters to the NAS backup, took those settings, which were originally a duplicate of my older external drive backup, and imported them back into a fresh backup for my external drive. That ran fine just now, as of writing this post.)
There are no dlist files there, no. It’s worth noting that at one stage I ran Duplicati as administrator to try to rectify some warnings I was getting about certain files (it seemed to usually be temp file directories, or weird .log files that appear deep in folders, usually on my Windows drive under AppData).
That’s no longer the case; neither of the backups was run as admin, and both my NAS and external drive backups were fresh backups as of this week.
As for whether the database is stored with Duplicati, I am guessing that’s the dlist files? I can’t seem to find them in the Program Files directory, unless they would be found elsewhere.
No, the database files have a .sqlite extension.
A Duplicati backend has 3 file types: dblock, dindex and dlist. If there are no dlist files on your backend, the backend data has been destroyed, because the file lists are lost.
I don’t see what in Duplicati could delete the dlist files without touching the dblock and dindex files.
With that many dblock files you should have lots of dlist files, unless your backup is enormous and the first backup never finished (the dlist file is written last).
Names don’t look mangled, so that returns us to the database theory.
This can cause some confusion, depending on the sequence of steps. AppData\Local has the Temp folder; the Duplicati folder there might have databases.
You can have several Duplicati instances going, each one as a different user. The first one is usually on port 8200 (see your browser); the next may be 8300. On my system, running as an administrator changes the user. Does yours? The About → System info field UserName says which user Duplicati is using. I would like to rule out confusion from Duplicati being multi-user or running as another user.
Your Database screen will show the job database name. Is the path proper? Is the size substantial (lots of MB) or tiny (less than 1 MB)? If there is no clue so far, how about doing a tiny backup to the NAS and checking the result in seconds? As noted, use a different folder. If this works, how did the big backup differ?
Lack of a dlist suggests a possible earlier failure, but without a message? This may need some more logging, but a tiny test run should be far faster.
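If the GUI is suspect, the tiny test can even be done from the command line with a throwaway job; the paths and passphrase below are placeholders, and the install path assumes the default:
rem one-off test backup; Z:\duplicati-test and C:\TinyTestData are placeholder paths
"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" backup "file://Z:\duplicati-test" "C:\TinyTestData" --passphrase=test123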
To clarify, the user I usually run as is not in the Administrators group (an old habit to increase security). These days, even an Administrators-group user usually needs a UAC prompt to be empowered, so if your usual user is in the Administrators group, you would not have had to change user to get that power. If you did change user, you could have AppData\Local\Duplicati databases for both of those users.
Specifics of the warnings would help here. They might just be the normal but annoying inability to access something. Running as an elevated administrator can help; running as a service (extra steps) can also help.
The user under About > System Info is my default login username. I also checked for appdata files under a different user and they don’t exist.
I just checked the two database files. The top size (in KB) is the database file for the NAS-destination backup and the bottom is for the external-drive-destination backup.
Did a test with a very small backup to a new folder on the NAS.
The initial backup was successful, and subsequent forced backups with new files in them were successful and generated new dlist files, as well as a database file in AppData.
It should be mentioned that I ran into some issues after my last few reformats while troubleshooting my motherboard. My brother helped fix some of the permission issues (I suck at permissions). The permission problems even affected Bitdefender, which I was forced to stop using as it wouldn’t update anymore and eventually got stuck in an uninstall loop.
My primary user, the one I log in with, has administrator rights. Duplicati is also no longer “Run As Administrator”.
As stated above, I only have one database folder, under my primary user (the only login on this installation).
I can show some warnings I encounter on a regular basis (these don’t seem to impact my backups, though, as they never seem to involve any important files that I would need to restore).
These are pretty far apart, given that the source is the same (and is presumably not changing hugely).
The lack of a dlist file also suggests an early end. Is there anything in About → Show log → Stored?
Unfortunately the regular job log only has results information for runs that actually ran to completion…
You can also get a rough idea by comparing files or space use of external drive and NAS destinations.
Regarding the unanswered verification question: a backup run will, by default, compare the destination with the database information before the backup and again after. One can also click Verify files on the home page.
If the backup run ended early, it would probably not have gotten to Verifying backup files, so which kind of verification did you first notice the error from? If it was on a later backup run, or a manual verification, that seems consistent; however, if it failed early and then continued, that would seem pretty strange. Please clarify as you can.
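For reference, the same sort of check can also be run from the command line with the test command, pointed at the job’s own database; everything below is a placeholder to adapt:
rem verify remote files against the local database (all paths here are placeholders)
"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" test "file://Z:\Backup" all --dbpath="%LOCALAPPDATA%\Duplicati\<job-database>.sqlite" --passphrase=<your-passphrase>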
re: reformat, could it be that you got a ‘new’ system (formatted and installed anew), prepared a backup that copied almost nothing, later reloaded your system from one of these backups, and copied the backup data from the ‘good’ backend to the NAS? That would explain it. You should never attempt to manage the files on the backup support (the backend) yourself without first having a very good understanding of how the data is organized.
Don’t manage the databases in AppData directly either, e.g. restoring them from some other backup. You’ll get stale records that will be surprised at seeing new files which are not recorded in the old DB. The logs, BTW, are in the databases, so recent info may be missing if post-format restores were done. While I’d like to know an exact chronology of events if possible, starting the NAS backup fresh may work.
Unfortunately the summary of the cause behind the line shown is on the second line, which is not shown. Until that issue is fixed, About → Show log → Live → Warning and clicking on the entries can get the data. Some of them can be guessed at; for example, locked files likely need VSS snapshots turned on as well.
A re-do might also be a good time to raise the blocksize to limit backup blocks to around a million, to improve performance. The default 100 KB is OK for a 100 GB backup, but the file count suggests yours is larger.
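To make that concrete: the block count is roughly source size divided by blocksize, so 100 GB at the default 100 KB is about a million blocks, while 1 TB would be about ten million. Setting --blocksize=1MB on the new job (it must be chosen before the first backup; it can’t be changed afterwards) brings 1 TB back to roughly a million blocks.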
If running as Administrator is awkward to arrange, you can instead run Duplicati as a Windows service, but it’s slightly complicated to set up. Note it’s easier to set up before the first backup than to migrate an existing one.
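The basic registration, for what it’s worth, is a single command from an elevated prompt (default install path assumed); note the service runs as SYSTEM, so the job databases then live under the systemprofile path rather than your own AppData:
rem register Duplicati as a Windows service (run elevated; default install path assumed)
"C:\Program Files\Duplicati 2\Duplicati.WindowsService.exe" install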
re: reformat, could it be that you reloaded your system from one of these backups and copied the backup data from the ‘good’ backend to the NAS?
No, I do all my backups manually. I don’t use Duplicati to back my system up and restore that way; all files are copied via Windows Explorer, targeting files I need to keep to maintain certain settings and setups, like OBS, etc.
Hi all, I just had this issue and I subscribed to the forum to document how I fixed this issue for future Googlers.
TL;DR: don’t use leading slashes in your backup’s S3 destination folder path.
I started having the “please run repair” message that I couldn’t seem to fix. I checked my Wasabi S3 bucket and panicked when I saw it apparently empty. However, when I checked the usage/billing it showed I had over a TB of files in the bucket.
This post gave me the clue I needed: this happens when you use a leading slash in your S3 destination folder path. That creates a double slash in the S3 URL, which confuses both Duplicati and most S3 viewing GUIs.
The following command successfully listed the hidden files (notice the double slash):
aws s3 ls --profile wasabi --endpoint-url=https://s3.wasabisys.com s3://my-duplicati-bucket//
I fixed the issue by moving everything under the double slash into the bucket root with a recursive move along these lines:
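aws s3 mv --profile wasabi --endpoint-url=https://s3.wasabisys.com --recursive s3://my-duplicati-bucket// s3://my-duplicati-bucket/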
This took a few hours to complete. Then I fixed the leading slash in my Duplicati settings, re-ran the backup, and the problem was fixed.
I still don’t know why everything worked for months without errors and then suddenly stopped. Perhaps an update? In any case, I think it would be sensible for Duplicati to forbid leading slashes.
I would be interested in knowing why it would suddenly change. I saw S3 was updated as part of the beta release. Do you know when you first noticed the failure?
I think you are correct that a double slash is likely a mistake, but it would be annoying if we prevented other users from continuing their backups with a double slash, so I need to know the reason before we start thinking about handling it.