Error on backup after stop/start?

Should I delete my old backup and just start fresh?

Here’s the error log:

    {
      "DeletedFiles": 627325,
      "DeletedFolders": 918,
      "ModifiedFiles": 1,
      "ExaminedFiles": 312793,
      "OpenedFiles": 309649,
      "AddedFiles": 78145,
      "SizeOfModifiedFiles": 0,
      "SizeOfAddedFiles": 181822540622,
      "SizeOfExaminedFiles": 891916544419,
      "SizeOfOpenedFiles": 369995431801,
      "NotProcessedFiles": 0,
      "AddedFolders": 15030,
      "TooLargeFiles": 0,
      "FilesWithError": 0,
      "ModifiedFolders": 0,
      "ModifiedSymlinks": 0,
      "AddedSymlinks": 0,
      "DeletedSymlinks": 154,
      "PartialBackup": false,
      "Dryrun": false,
      "MainOperation": "Backup",
      "CompactResults": null,
      "VacuumResults": null,
      "DeleteResults": null,
      "RepairResults": null,
      "TestResults": null,
      "ParsedResult": "Fatal",
      "Interrupted": false,
      "Version": "2.0.7.100 (2.0.7.100_canary_2023-12-27)",
      "EndTime": "2024-01-11T18:03:05.120397Z",
      "BeginTime": "2024-01-11T15:58:35.831325Z",
      "Duration": "02:04:29.2890720",
      "MessagesActualLength": 109,
      "WarningsActualLength": 0,
      "ErrorsActualLength": 1,
      "Messages": [
        "2024-01-11 10:58:35 -05 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started",
        "2024-01-11 10:59:34 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started: ()",
        "2024-01-11 11:01:08 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed: (36.54 KB)",
        "2024-01-11 11:01:09 -05 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Temporary: duplicati-20240109T150000Z.dlist.zip.aes",
        "2024-01-11 13:00:44 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b01aac2432baa4a11ac59942636dcf1f2.dblock.zip.aes (49.91 MB)",
        "2024-01-11 13:00:44 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b87cba52e0dc442a4918e03fd2209e10f.dblock.zip.aes (49.94 MB)",
        "2024-01-11 13:00:44 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b2012900345134317ba0feeea274c30ee.dblock.zip.aes (49.92 MB)",
        "2024-01-11 13:00:45 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b45b98b2f71ea40479b17ca6bd1e3bec4.dblock.zip.aes (49.93 MB)",
        "2024-01-11 13:01:05 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-b45b98b2f71ea40479b17ca6bd1e3bec4.dblock.zip.aes (49.93 MB)",
        "2024-01-11 13:01:05 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-b01aac2432baa4a11ac59942636dcf1f2.dblock.zip.aes (49.91 MB)",
        "2024-01-11 13:01:05 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-b87cba52e0dc442a4918e03fd2209e10f.dblock.zip.aes (49.94 MB)",
        "2024-01-11 13:01:05 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-b2012900345134317ba0feeea274c30ee.dblock.zip.aes (49.92 MB)",
        "2024-01-11 13:01:15 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-b01aac2432baa4a11ac59942636dcf1f2.dblock.zip.aes (49.91 MB)",
        "2024-01-11 13:01:15 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-b87cba52e0dc442a4918e03fd2209e10f.dblock.zip.aes (49.94 MB)",
        "2024-01-11 13:01:15 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-b45b98b2f71ea40479b17ca6bd1e3bec4.dblock.zip.aes (49.93 MB)",
        "2024-01-11 13:01:15 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-b460a46dd4a7b45ae87e163422791ec75.dblock.zip.aes (49.91 MB)",
        "2024-01-11 13:01:15 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-b2012900345134317ba0feeea274c30ee.dblock.zip.aes (49.92 MB)",
        "2024-01-11 13:01:15 -05 - [Information-Duplicati.Library.Main.Operation.Backup.BackendUploader-RenameRemoteTargetFile]: Renaming \"duplicati-b01aac2432baa4a11ac59942636dcf1f2.dblock.zip.aes\" to \"duplicati-b460a46dd4a7b45ae87e163422791ec75.dblock.zip.aes\"",
        "2024-01-11 13:01:15 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-b2f150dd2a5ad4441b3100cbb7b6886d0.dblock.zip.aes (49.94 MB)",
        "2024-01-11 13:01:15 -05 - [Information-Duplicati.Library.Main.Operation.Backup.BackendUploader-RenameRemoteTargetFile]: Renaming \"duplicati-b87cba52e0dc442a4918e03fd2209e10f.dblock.zip.aes\" to \"duplicati-b2f150dd2a5ad4441b3100cbb7b6886d0.dblock.zip.aes\""
      ],
      "Warnings": [],
      "Errors": [
        "2024-01-11 13:03:05 -05 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error\nUserInformationException: Error writing file: duplicati-b3cabb6eda7e04e1b8fcc68ee5cdfad27.dblock.zip.aes"
      ],
      "BackendStatistics": {
        "RemoteCalls": 25,
        "BytesUploaded": 0,
        "BytesDownloaded": 0,
        "FilesUploaded": 0,
        "FilesDownloaded": 0,
        "FilesDeleted": 0,
        "FoldersCreated": 0,
        "RetryAttempts": 20,
        "UnknownFileSize": 0,
        "UnknownFileCount": 0,
        "KnownFileCount": 37412,
        "KnownFileSize": 978449070468,
        "LastBackupDate": "2024-01-03T10:14:20-05:00",
        "BackupListCount": 13,
        "TotalQuotaSpace": 0,
        "FreeQuotaSpace": 0,
        "AssignedQuotaSpace": -1,
        "ReportedQuotaError": false,
        "ReportedQuotaWarning": false,
        "MainOperation": "Backup",
        "ParsedResult": "Success",
        "Interrupted": false,
        "Version": "2.0.7.100 (2.0.7.100_canary_2023-12-27)",
        "EndTime": "0001-01-01T00:00:00",
        "BeginTime": "2024-01-11T15:58:35.831634Z",
        "Duration": "00:00:00",
        "MessagesActualLength": 0,
        "WarningsActualLength": 0,
        "ErrorsActualLength": 0,
        "Messages": null,
        "Warnings": null,
        "Errors": null
      }
    }

Hello

The problem looks to be with the backend (because of all the retrying), so starting fresh may not be the way to solve it. Can you say more about the context?

Yeah, I had a main drive SSD failure. I did a replacement and restored everything. Then I tried to do a backup, and it looks as if a file somehow got corrupted.

I think I’ll just start fresh.

I don't know your retention settings for older versions of your files. If the current file versions are sufficient for all your files, then just start a fresh setup of the backup.
Are you sure there is sufficient free space on your target system? No drive size or quota limitation?
Alternative: I recently found out that Duplicati's *.aes files can be decrypted with https://www.aescrypt.com/ and the password used by Duplicati. Did you try to manually decrypt the problematic file and then open it with a zip tool like 7-Zip? If the file is really corrupted, I would 1) make a backup copy of the file, 2) delete it in its original location, 3) run a file system check on the partition, 4) run "Recreate (delete and repair)" from the Advanced section of that backup's Database page in Duplicati, and 5) run the backup again.
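For example, the manual check could look something like this (a rough sketch assuming the third-party pyAesCrypt package, installed with pip install pyAesCrypt; the passphrase and output name are placeholders):

    # Decrypt a Duplicati dblock and test the inner zip archive.
    import zipfile
    import pyAesCrypt

    PASSWORD = "your-duplicati-passphrase"  # the backup's encryption passphrase
    SRC = "duplicati-b3cabb6eda7e04e1b8fcc68ee5cdfad27.dblock.zip.aes"
    DST = "decrypted.dblock.zip"

    # Decrypt the AES Crypt container (64 KB buffer; older pyAesCrypt
    # versions require the buffer size argument).
    pyAesCrypt.decryptFile(SRC, DST, PASSWORD, 64 * 1024)

    # testzip() returns None if all CRCs pass, otherwise the name of
    # the first corrupt archive member.
    with zipfile.ZipFile(DST) as zf:
        bad = zf.testzip()
        print("archive OK" if bad is None else "corrupt member: " + bad)

If the decrypt itself fails or testzip() reports a corrupt member, that would support running the file system check in step 3.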

2.0.7.100 was supposed to have better error reporting, but apparently not enough for this case.
You can run a backup with About → Show log → Live → Retry open, and click on error messages.
The end result in the posted log was that no files actually managed to be uploaded. What's the destination?

EDIT 1:

number-of-retries (default 5) sets the retry limit. retry-delay (default 10s) sets the delay before a retry.
A retry is done under a new name, as the status of the original file is rather unknown after an error.
asynchronous-concurrent-upload-limit (default 4) is the number of parallel uploaders. 4 * 5 = 20 retry attempts.
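To illustrate how those options interact, here is a rough sketch of the retry-under-a-new-name pattern (not Duplicati's actual code; upload and make_name are hypothetical stand-ins for the backend Put call and the volume name generator):

    import time

    def put_with_retries(upload, make_name, data, retries=5, delay=10.0):
        # retries mirrors number-of-retries; delay mirrors retry-delay.
        name = make_name()
        for attempt in range(retries + 1):    # initial try + `retries` retries
            try:
                upload(name, data)            # hypothetical backend Put
                return name
            except Exception:
                if attempt == retries:
                    raise                     # out of retries: fatal error
                time.sleep(delay)
                name = make_name()            # retry under a fresh name

With 4 parallel uploaders each allowed 5 retries, the worst case is the 20 retry attempts seen as RetryAttempts in the posted log.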

EDIT 2:

Or maybe you tried that. Were files uploading?

I don’t understand, from the initial report:

the actual reporting seems perfectly correct. The job has not worked, and that's reported. The reporting enhancement has completely done its job, IMO; or am I missing something?

There were two enhancements. The one I’m talking about is:

where I hoped to reduce the need to ask people for live logs, but here I am again, asking for a live log. Admittedly the enhancement does seem to be helping other cases. Maybe the failure path is different?

That is the exception message; it doesn't have any more detail. This one seems to come from the alternative FTP backend after the upload failed, so it is probably a connection error. The API doesn't return any more information, so that is all we get.

If you use FTPS, it might be related to TLS issues in newer Windows versions, since List works but Put fails (there can be session resumption issues with TLS 1.3). Is that relevant @Andor_Kiss? There is no information about the backend type in the logs you posted.
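If you want to test that outside Duplicati, a quick check could look something like this (a rough sketch using Python's standard ftplib; host, credentials, and file name are placeholders):

    import io
    from ftplib import FTP_TLS

    ftps = FTP_TLS("nas.example.local")
    ftps.login("user", "password")
    ftps.prot_p()                    # also encrypt the data channel

    print(ftps.nlst())               # "List" - works in the posted log

    # "Put" - a TLS problem typically surfaces here, on the data
    # connection, rather than on the listing.
    ftps.storbinary("STOR duplicati-tls-test.bin", io.BytesIO(b"test"))
    ftps.quit()

If the listing succeeds but the STOR fails, that points at the data-channel TLS problem described above.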

OK thanks. I guess you can’t return information that isn’t there. I thought maybe it got chopped off.

@Andor_Kiss what OS and version is Duplicati on? What destination Storage Type is being used?

While waiting for answers, I set up FTP (Alternative) on Windows 10 to go to a restricted folder that disallows access from the FTP server. This resulted in failed put requests, and a job log looking like

which is the other reporting enhancement. About → Show log → Stored and a server profiling log add information from the stack trace, but they can be huge. Clicking the live log's "Fatal error" entry gives a little bit more.

Basically this confirms both enhancements, but it’s disappointing that the FTP client didn’t say more.

Now back to waiting for OS/Destination/other info.

The connection didn't fail; it's a NAS. I'm using Linux (Pop!_OS 22.04 LTS). The backup is 1.4 TB total; I stopped it at the end of the day and attempted to continue the next day (laptop). This is when I got the error. I'll most likely delete the backup, start fresh, and run it overnight.

@Andor_Kiss

you don't say how you did your recovery. If you are the kind who does a full image backup from time to time and recovers from it when disaster happens, know that in this case you can't restart Duplicati directly from the image, since the database is out of sync with the backend (which is more or less up to date with the computer from before the crash). In this case, you need to delete the database on the reimaged computer and rebuild it from the backend; if the backend is correct, you should be able to restore up-to-date data from the Duplicati backup and continue backups.

If not, that is, if you run a backup from the reimaged computer using an out-of-date database, you will probably damage your backend in an irreversible way and need to start again (with lost data, of course).

I initially did a full image backup, and then do an incremental (differential) one every week. My current machine is okay. The backup is problematic, so I'll delete it and start fresh.

Meaning the FTP (Alternative) guess was wrong? If access is over SMB, there could still be permission issues preventing file writes, and if Duplicati runs as a service, it is likely not the same user as the one in the browser.
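One way to check could be something like this, run as the same account the Duplicati service uses (a rough sketch; the mount point is a placeholder):

    import getpass
    import tempfile

    DEST = "/mnt/nas/duplicati"   # hypothetical mount point of the backup target

    print("running as:", getpass.getuser())
    try:
        # Try to create (and automatically remove) a small file there.
        with tempfile.NamedTemporaryFile(dir=DEST) as f:
            f.write(b"write test")
        print("write OK")
    except OSError as e:
        print("write failed:", e)

If that fails for the service account but works for your desktop user, it's a permissions problem rather than a Duplicati one.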

I don't know why a fresh backup would upload better, and I don't know why the current one stopped writing.

Good luck.