Another backup failed due to the error “Cannot find file”.

Can you confirm that you have enough space in the tmp folder?

How long does a backup take? Why would these /tmp files NOT survive overnight? Are any source files huge?
Do you have really old files in your /tmp? Sometimes systems will delete old /tmp files periodically.
Unfortunately, Duplicati temporary file names don’t say much. Any other context you can provide?

Was this the second failure? Was the other failure similar with “Could not find file” and also “403”?

FWIW, Google Drive recently threw a temporary 403 at me, which exhausted my deliberately small --number-of-retries and caused problems, so I wonder whether your 403 is a cause, a result, or unrelated.

The third day succeeded; more about that at the end. First, about the failures.

/tmp has plenty of space. It's on the root filesystem, which has 54 GB free.

The “could not find file” error was from the second day. It came from the popup in the console; since there is no log when the backup aborts, I wouldn't know where to look for the first day's error now.

I am backing up a very large tree. On a typical day it takes 5 hours and reports something like

Examined 1140271 (4.65 TB)
Opened 8 (31.62 GB)
Added 7 (3.02 GB)
Modified 1 (28.60 GB)
Deleted 6

Now about the third day, which succeeded. I did get a warning:

2020-03-12 04:50:23 -04 - [Warning-Duplicati.Library.Main.Operation.Backup.UploadSyntheticFilelist-MissingTemporaryFilelist]: Expected there to be a temporary fileset for synthetic filelist (121, duplicati-i410c5a59a2cd4d19b39ce10de9f637e7.dindex.zip.aes), but none was found?

The backup took an unusually long time, a little over 8 hours, with:

Examined 1140283 (4.62 TB)
Opened 58 (76.87 GB)
Added 54 (25.18 GB)
Modified 4 (51.69 GB)
Deleted 38

I don't know if the extra 3 hours had to do with the warning, or were simply because it had two extra days of work to back up.

Yes, this is a support problem: the log isn't there when you need it. If it keeps happening, lots of logging to lots of drive space can be set up, but few people do that. Email on error is somewhat more feasible for constant use and picks up a little more, but less than the ideal amount; most reports are just one-line summaries…

This is, I believe, a Duplicati bug involving a bad lookup in an attempt to upload a backup interrupted by something (a fatal error may do it). It shows the previous backup plus whatever new data made it in. More here, where there's talk about whether this is still needed now that manual stopping is becoming more possible.

Your experience points out that proper stops aren't always possible, so then how should things go?

The good news is you're also confirming that the lack of a synthetic filelist for an interrupted day is not a big problem (relatively speaking), though you probably have a one-day gap instead of a synthetic backup for that day.

There would probably be some amount of inspection and repair, but more work to back up is likely too… Looking at that job log compared to the usual one might show some differences beyond what you've posted. “BytesUploaded” in “BackendStatistics” in “Complete log” might give an idea whether you're bandwidth-limited.
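To compare that counter across runs without clicking through the UI, a script can pull it from an exported job log. A minimal sketch, assuming the export is JSON containing a "BackendStatistics" object somewhere; the `bytes_uploaded` helper and the JSON shape are my assumptions, only the field names appear in this thread:

```python
import json

def bytes_uploaded(complete_log_text):
    """Extract BackendStatistics.BytesUploaded from a Duplicati 'Complete log'
    export, assuming it's JSON with a nested "BackendStatistics" object.
    Searches recursively so the exact nesting level doesn't matter."""
    def find(obj):
        if isinstance(obj, dict):
            if "BackendStatistics" in obj:
                return obj["BackendStatistics"]
            for v in obj.values():
                hit = find(v)
                if hit is not None:
                    return hit
        elif isinstance(obj, list):
            for v in obj:
                hit = find(v)
                if hit is not None:
                    return hit
        return None

    stats = find(json.loads(complete_log_text))
    return stats.get("BytesUploaded") if stats else None

# Hypothetical sample resembling a job log fragment:
sample = '{"BackendStatistics": {"BytesUploaded": 3250585600, "RetryAttempts": 2}}'
print(bytes_uploaded(sample))  # 3250585600
```

Dividing that number by the backup's wall-clock time gives a rough effective upload rate to compare against the link speed.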

I'm not sure if this is going to get solved easily. Even with logs, there's little to no logging of tmp file use, and tmp files are used for many things. I think another report had a stack trace that narrowed the scope a bit, to SpillCollector, which collects partially filled files of blocks to finish off the backup, and which might suffer from extreme skew, where a “long pole” file runs long enough that the early finishers get deleted. Your backup does not seem so hugely long that a tmp file cleanup would be the cause, but it's not totally ruled out.

For the “403” question, you could see if About --> Show log --> Stored shows it. If so, on which backups? Your experience does match mine, which is that the 403 is transient. I'm just not sure if it's related here.

You know, I had this problem. I did all sorts of things and could not solve it. This was backing up to a USB drive connected to a router. So I took the drive off the router, plugged it into a PC, and did a Check Disk/Repair on it. Put it back: problem solved! The error is so far removed from the solution.

After deleting my existing backup (500 TB; Duplicati could not repair it) and reinstalling Duplicati (as a service, but the problem also occurs with a standard installation), I keep getting this same error. I'm trying to back up over FTP to a NAS at my mother's place. It's highly reproducible. See the log below (unfortunately parts of the original are in Dutch):

Failed: One or more errors occurred.
Details: System.AggregateException: One or more errors occurred. ---> System.AggregateException: Cannot find file C:\WINDOWS\TEMP\dup-c6adb50d-67f2-456d-aae5-b059e229357a. ---> System.IO.FileNotFoundException: Cannot find file C:\WINDOWS\TEMP\dup-c6adb50d-67f2-456d-aae5-b059e229357a.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost)
at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share)
at Duplicati.Library.Main.Operation.Backup.SpillCollectorProcess.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at CoCoL.AutomationExtensions.<RunTask>d__10`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Duplicati.Library.Main.Operation.BackupHandler.<RunMainOperation>d__13.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
--- End of inner exception stack trace ---
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
--- End of inner exception stack trace ---
at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
at Duplicati.Library.Main.Controller.<>c__DisplayClass14_0.<Backup>b__0(BackupResults result)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
---> (Inner Exception #0) System.AggregateException: Cannot find file C:\WINDOWS\TEMP\dup-c6adb50d-67f2-456d-aae5-b059e229357a. ---> System.IO.FileNotFoundException: Cannot find file C:\WINDOWS\TEMP\dup-c6adb50d-67f2-456d-aae5-b059e229357a.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost)
at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share)
at Duplicati.Library.Main.Operation.Backup.SpillCollectorProcess.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at CoCoL.AutomationExtensions.<RunTask>d__10`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Duplicati.Library.Main.Operation.BackupHandler.<RunMainOperation>d__13.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
--- End of inner exception stack trace ---
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
---> (Inner Exception #0) System.IO.FileNotFoundException: Cannot find file C:\WINDOWS\TEMP\dup-c6adb50d-67f2-456d-aae5-b059e229357a.
File name: C:\WINDOWS\TEMP\dup-c6adb50d-67f2-456d-aae5-b059e229357a
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost)
at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share)
at Duplicati.Library.Main.Operation.Backup.SpillCollectorProcess.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at CoCoL.AutomationExtensions.<RunTask>d__10`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Duplicati.Library.Main.Operation.BackupHandler.<RunMainOperation>d__13.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()<---

---> (Inner Exception #1) System.AggregateException: One or more errors occurred. ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 145.133.54.229:50381
at System.Net.Sockets.Socket.InternalEndConnect(IAsyncResult asyncResult)
at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)
at System.Net.FtpControlStream.ConnectCallback(IAsyncResult asyncResult)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Duplicati.Library.Main.Operation.Backup.BackendUploader.<<Run>b__13_0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Duplicati.Library.Main.Operation.Backup.BackendUploader.<<Run>b__13_0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at CoCoL.AutomationExtensions.<RunTask>d__10`1.MoveNext()
--- End of inner exception stack trace ---
---> (Inner Exception #0) System.Net.Sockets.SocketException (0x80004005): A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 145.133.54.229:50381
at System.Net.Sockets.Socket.InternalEndConnect(IAsyncResult asyncResult)
at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)
at System.Net.FtpControlStream.ConnectCallback(IAsyncResult asyncResult)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Duplicati.Library.Main.Operation.Backup.BackendUploader.<<Run>b__13_0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Duplicati.Library.Main.Operation.Backup.BackendUploader.<<Run>b__13_0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at CoCoL.AutomationExtensions.<RunTask>d__10`1.MoveNext()<---
<---
<---

Log data:
2020-04-12 18:49:50 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.UploadSyntheticFilelist-MissingTemporaryFilelist]: Expected there to be a temporary fileset for synthetic filelist (1, duplicati-20200412T140509Z.dlist.zip.aes), but none was found?
2020-04-12 19:02:56 +02 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.IO.FileNotFoundException: Cannot find file C:\WINDOWS\TEMP\dup-c6adb50d-67f2-456d-aae5-b059e229357a.
File name: C:\WINDOWS\TEMP\dup-c6adb50d-67f2-456d-aae5-b059e229357a
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost)
at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share)
at Duplicati.Library.Main.Operation.Backup.SpillCollectorProcess.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at CoCoL.AutomationExtensions.<RunTask>d__10`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Duplicati.Library.Main.Operation.BackupHandler.<RunMainOperation>d__13.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()

That’s TB, not GB? If TB, this is probably not an ordinary computer. How long does the backup take?

At what time of the backup does the error occur? I'm guessing near the end, but I've never seen the issue.

From the temporary file path, I’d guess this is some sort of Windows OS. Is antivirus delete possible?

Some systems delete based on file age, but I think Linux is more likely to do that, and I don't know the backup time.

What Duplicati version (visible at top of About --> Changelog)?

Do you have other jobs that are working OK? If so, what’s different that might be related to the issue?

Please look in some successful jobs for this backup to see what "RetryAttempts" typically reports. Assuming you're on 2.0.5.1, this level of logging might require looking in the Complete log in the job log.

Perhaps you could get a special log of it with the Advanced options --log-file and --log-file-log-level=retry. Nobody's ever shown anything except the ending, and sometimes a poor end is from earlier problems.

There’s one report that sounds like they set this to 1 in Advanced options, and it solved the problem:

--asynchronous-concurrent-upload-limit = 4
When performing asynchronous uploads, the maximum number of concurrent
uploads allowed. Set to zero to disable the limit.

500 TB

That’s TB, not GB? If TB, this is probably not an ordinary computer. How long does the backup take?

Whoops, you're right, GB. Mostly movies & photos of the kids. I don't know how long the backup takes, since it has never finished yet. The old backup grew over the years.

At what time of the backup does the error occur? I'm guessing near the end, but I've never seen the issue.

Actually the opposite: quite early after the start, say 10 minutes or so.

From the temporary file path, I’d guess this is some sort of Windows OS. Is antivirus delete possible?

Yes, Windows 10. I’m running BitDefender but I’ve never had this problem before. I’ll try again with BitDefender disabled.

Some systems delete based on file age, but I think Linux is more likely to do that, and I don't know the backup time.

So OS is Windows and the error occurs quite early after the start of the backup.

What Duplicati version (visible at top of About --> Changelog)?

The latest, freshly installed: duplicati-2.0.5.1_beta_2020-01-18-x64.msi

Do you have other jobs that are working OK? If so, what’s different that might be related to the issue?

No, I don’t have other jobs.

Please look in some successful jobs for this backup to see what "RetryAttempts" typically reports. Assuming you're on 2.0.5.1, this level of logging might require looking in the Complete log in the job log.

p_vestjens:
highly reproducible

Perhaps you could get a special log of it with the Advanced options --log-file and --log-file-log-level=retry. Nobody's ever shown anything except the ending, and sometimes a poor end is from earlier problems.

I’ll see if I can get more logging.

There’s one report that sounds like they set this to 1 in Advanced options, and it solved the problem:

--asynchronous-concurrent-upload-limit = 4
When performing asynchronous uploads, the maximum number of concurrent
uploads allowed. Set to zero to disable the limit.

I’ll give it a try.

Thanks for the tips!

Best regards,

Patrick.

Hi,

I tried running the backup with BitDefender disabled and the asynchronous-concurrent-upload-limit option set to 1, but neither makes a difference.

I enabled log-file and set log-file-log-level to Retry. Attached is the generated log file.

Best regards,

Patrick.

Duplicati.zip (3.4 KB)

I encounter the same issue on two machines. The issue started directly after the update to v2.0.5.1-2.0.5.1_beta_2020-01-18.
On both machines I tried to delete the most recent backup version, but that did not solve the issue.
On machine A (a backup of around 8 GB to Box.com) I deleted the database and started over. This solved the issue; it has now been running fine for weeks.
On machine B (a backup of around 200 GB to TransIP Stack) I also deleted the database and started with a very small subset (about 40 MB). The issue returned immediately (see attached backup report). On this machine a local backup is also running, backing up the same data plus some extra folders. That backup runs without any issues.
On machine C (also a backup of around 200 GB to the same TransIP Stack account) everything is running fine.
All machines are running Windows 10 with the default Windows security solution and Duplicati v2.0.5.1-2.0.5.1_beta_2020-01-18 with default settings. The missing files are not placed in quarantine. The only real difference I see is the internet connection:
Machine A = 8/2 Mbit (8 Mbit down, 2 Mbit up, limited in Duplicati to 160 KByte/s upload)
Machine B = 4/1 Mbit (upload is limited in Duplicati to 65 KByte/s)
Machine C = 50/50 Mbit
Could it be that the slow internet connection (upload speed) is causing the issue?

Backup report machine B.zip (1.4 KB)

Welcome to the forum @Niels

If you have the same issue, please perform the same diagnostic step of taking a log at Retry level.
The other log is looking interesting due to all its retries and failures trying to talk to the destination.
A few more logs would help in figuring out if there’s a pattern to this issue. Or maybe yours differs.

Is machine B ever working to TransIP Stack? If so, please also answer the earlier "RetryAttempts" question.

Thanks ts678. I will try to get remote access to machine B in the coming days to perform the diagnostic steps.

Hi,

For what it's worth: I also use TransIP Stack and Duplicati v2.0.5.1_beta_2020-01-18.
No problems about not finding files here. But… I am on macOS, not Windows.
Since the logfile contains Dutch, a little translation: it says Windows cannot find the requested files.
Did you check for issues with the disk itself?

The log at retry level can be summarized (feel free to look at it yourself) as follows. The number is the try #:

List            1-4 failed, 5 worked.
Put dblock      1-5 failed, 6 worked.
Put dindex      1 worked.
Put dblock      1 worked.
Put dindex      1 worked.
Put dblock      1-6 failed.
2:12 later it complains about TEMP
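A summary like the one above can be produced mechanically from a Retry-level log. This is only a hedged sketch: the line shape in the regex is an assumption pieced together from fragments quoted in this thread ("attempt 1 of 5 failed with message: …"), so treat it as illustrative, not as Duplicati's exact format.

```python
import re
from collections import defaultdict

# Assumed (hypothetical) line shape, based on snippets quoted in this thread:
#   ... Operation Put with file duplicati-b1.dblock.zip.aes attempt 6 of 6 failed with message: ...
RETRY = re.compile(
    r"Operation (?P<op>\w+) with file (?P<file>\S+) attempt (?P<n>\d+) of (?P<max>\d+)"
)

def summarize(log_lines):
    """Record the highest failed attempt number per (operation, file)."""
    fails = defaultdict(int)
    for line in log_lines:
        m = RETRY.search(line)
        if m:
            key = (m["op"], m["file"])
            fails[key] = max(fails[key], int(m["n"]))
    return dict(fails)

sample = [
    "[Retry]: Operation List with file none attempt 4 of 5 failed with message: timeout",
    "[Retry]: Operation Put with file duplicati-b1.dblock.zip.aes attempt 6 of 6 failed with message: timeout",
]
print(summarize(sample))
```

Any (operation, file) pair whose highest attempt equals the retry limit is one that exhausted its retries, like the final dblock Put above.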

A dblock file is source file data. A dindex file is the much smaller index of what’s in a dblock file.
Retries are a common enough event that I hope the bugs are gone. Failure after retries is rarer.

Generally I’d have thought an upload (Put) failing all retries would immediately error the backup.
It sort of did here, but with a rather misleading error message. Unfortunately I can’t reproduce it.

Are you running any special Advanced options that might be relevant to how the backup works?
What’s total size of the source area you’re backing up? Can you test for minimum needed size?
We’re still looking to learn if the @Niels issue is similar, but there a 40 MB subset of files failed.

The @p_vestjens issue was on FTP. What FTP server is at the remote? Does it keep any logs?
It would be interesting to see what its view is, compared to the local view which in some cases is
looking like it can’t reach the server, meaning it’s a network issue (examinable but not so simple).

System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not answer correctly after a specified time, or the connection made failed because the connected host did not answer 145.133.54.229:50252

Is this network connection generally solid? The percentage of failed FTP operations is quite huge.
I suppose you could try running a continuous ping -t to it during backup to see how reliably it runs.
Seeing netstat -s Segments Retransmitted going up during backup might also be a trouble sign.

EDIT:

For anyone seeing that they're running out of retries, you can raise --number-of-retries from the default.
This would be a test and a poor workaround for whatever's causing unreliable remote operations…
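To put rough numbers on that: the retry count multiplies out with the delay between attempts. The arithmetic below assumes a fixed delay per retry (Duplicati's --retry-delay, which I believe defaults to 10 seconds; check the docs), so it's only an estimate of the extra stall per failing upload.

```python
def worst_case_stall(number_of_retries=5, retry_delay_seconds=10):
    """Seconds spent purely waiting between attempts for one failing upload:
    one delay before each of the `number_of_retries` retries.
    Assumes a constant delay; verify against Duplicati's --retry-delay docs."""
    return number_of_retries * retry_delay_seconds

print(worst_case_stall())        # default 5 retries x 10 s = 50 s of waiting
print(worst_case_stall(10, 10))  # the doubled setting tried later: 100 s
```

So doubling the retry count only buys time; it doesn't fix whatever makes the remote unreliable.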

It sort of did here, but with a rather misleading error message. Unfortunately I can’t reproduce it.

I’ve also got that impression now.

Are you running any special Advanced options that might be relevant to how the backup works?

These are all the options I was using last time:

--send-mail-to=
--send-mail-from=
--send-mail-url=smtp://smtp.ziggo.nl:587
--send-mail-username=
--send-mail-password=
--send-mail-subject=%PARSEDRESULT%: Duplicati %OPERATIONNAME% report for %backup-name%
--snapshot-policy=On
--log-file=C:\Windows\Temp\Duplicati.log
--log-file-log-level=Retry
--asynchronous-concurrent-upload-limit=1

Nothing special I think.

What’s total size of the source area you’re backing up? Can you test for minimum needed size?

About 528 GB.

The @p_vestjens issue was on FTP. What FTP server is at the remote? Does it keep any logs?

It would be interesting to see what its view is, compared to the local view which in some cases is

looking like it can’t reach the server, meaning it’s a network issue (examinable but not so simple).

At the remote is an el-cheapo LG NAS with built-in FTP. It does keep a log, but only at a very high level: connection established and file uploaded.

System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not answer correctly after a specified time, or the connection made failed because the connected host did not answer 145.133.54.229:50252

Is this network connection generally solid? The percentage of failed FTP operations is quite huge.

I suppose you could try running a continuous ping -t to it during backup to see how reliably it runs.

ping is showing decent results, most of the time 20-30 ms, for some periods 200-300 ms.

Seeing netstat -s Segments Retransmitted going up during backup might also be a trouble sign.

It’s going up (for IPv4): 21 segments retransmitted in 1 minute. That’s 1 every 2 seconds on average. Quite a lot I guess. And sometimes it increases faster, possibly coinciding with the increased ping times above (hard to check for sure).

The strange thing is that the backup didn't show these problems before I had to delete it and start all over again. Same Duplicati version, same crappy NAS on the other side. I don't know what changed.

For anyone seeing that they're running out of retries, you can raise --number-of-retries from the default.

This would be a test and a poor workaround for whatever’s causing unreliable remote operations…

I’ll give this a try, see if it helps.

Thanks again!

Best regards,

Patrick.

One thing that changed in 2.0.5.1 is that a feature was added to do concurrent uploads, to run faster.

  --asynchronous-concurrent-upload-limit (Integer): The number of concurrent
    uploads allowed
    When performing asynchronous uploads, the maximum number of concurrent
    uploads allowed. Set to zero to disable the limit.
    * default value: 4

You could see whether lowering that to 1 helps. I’m not sure from the language what 0 would do.

Perhaps this FTP server is upset by concurrent uploads, and that’s showing up as upload errors.

You can also try FTP (Alternative) instead of FTP. Use the Test connection button for a fast test.
If I recall correctly, one of these (and I think it's Alternative) doesn't like spaces in its folder name.

So I set number-of-retries to 10 (twice the default value), but that didn't make any difference. The final error was still the “file not found” one. (log file attached below)

In case you want to try to reproduce it yourself, I guess I could provide you with temporary access to the NAS.

Best regards,
Patrick.

Duplicati.zip (15.1 KB)

You still seem to be having extreme unreliability in connecting to the server. I did post some other ideas earlier (before your last post), but I'd suggest the Test connection button first as a basic check of the server.

I see you already tried --asynchronous-concurrent-upload-limit=1, plus it wouldn’t explain the List fail.

Windows also has a built-in ftp program that you may be able to try from Command Prompt; however, there's no encryption, so you should be sure you're using a VPN or encryption if going over the Internet.

Same safety advice holds for Duplicati. Does this NAS support SSH? If so, it might also support SFTP.

I looked at Windows netstat -s and found another count that should go up if an FTP connection fails:

C:\>netstat -s | findstr Failed
  Failed Connection Attempts          = 63441
  Failed Connection Attempts          = 25976

C:\>

If you look at the full output, you'll see that the first is IPv4 (more common; addresses are usually four decimal numbers) and the second is IPv6. I'm not sure why the IPv6 one is going up, unless IPv6 is tried when IPv4 can't get through.
You can see how those counters behave, e.g. when you do Test connection. Beware of noise that other programs such as browsers might throw in. You can also see how fast it goes up without Duplicati…
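If you'd rather not eyeball the counters, two samples of that output can be diffed with a few lines of script. A hedged sketch, assuming `netstat -s` output shaped exactly like the lines quoted above (IPv4 counter first, then IPv6), which may vary by Windows version:

```python
import re

def failed_connection_attempts(netstat_output):
    """Extract the 'Failed Connection Attempts' counters from `netstat -s`
    output. Assumes (per the sample quoted in this thread) the IPv4 value
    appears first, then IPv6."""
    return [int(n) for n in
            re.findall(r"Failed Connection Attempts\s*=\s*(\d+)", netstat_output)]

# Two hypothetical samples taken a minute apart; the difference is new failures.
before = "  Failed Connection Attempts          = 63441\n  Failed Connection Attempts          = 25976\n"
after  = "  Failed Connection Attempts          = 63460\n  Failed Connection Attempts          = 25976\n"
deltas = [b - a for a, b in
          zip(failed_connection_attempts(before), failed_connection_attempts(after))]
print(deltas)  # [19, 0]: 19 new IPv4 connection failures, none on IPv6
```

A steadily climbing delta while only the backup runs would point at the connection rather than at Duplicati.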

The “connection established” log entry is interesting if it's logged before login. The command-line FTP client can test that. Duplicati's error still sounds to me like a failure to connect. You might check the NAS to see if it has a tcpdump command, in case you need to look at what's coming in on the FTP port (port 21).

I'm trying not to get too complicated too fast, though. Other complicated steps later on might include Duplicati.CommandLine.BackendTool.exe, to do some manual testing without getting the usual backup software involved (for a URL you can give it the one from Export As Command-line). Getting deep into networking (maybe best saved for last), FTP (especially unencrypted) can be looked at with Wireshark.

But first it's easier to do simpler things, starting with Test connection, maybe the CLI client, and netstat.

I did Test connection a couple of times and it always approved the connection. Only sometimes it took a few seconds longer than other times.

No, the NAS doesn't support SSH or SFTP. So far I didn't consider that a problem, since Duplicati's ZIP files are encrypted anyhow.

I actually kind of gave up on the NAS and tried to use the 1 TB OneDrive coupled to a fresh Office 365 account instead, using the OneDrive v2 backend. For a smaller data set that seemed to work just fine, but when I tried the big data set (500+ GB), I ran into the same problem (see attached log file):

duplicati to OneDrive.zip (231.0 KB)

Again there are several warnings regarding failed retries (in this case for the put operation), eventually resulting in the fatal error that it cannot find a file in C:\Windows\Temp that it created (and deleted?) itself. As before, I think the file not found error is just a follow-up error caused by the handling of previous troubles (i.e. exceeding the number of retries).

So now I have two completely different destinations resulting in the same kind of problems. I also got the “Unexpected difference in fileset version X” error once which is hard to recover from, so I had to start all over again.

I’m sorry, but I’m afraid I need to start looking for an alternative backup solution. I really like the Duplicati initiative a lot and your support is fantastic, but I just keep running into problems costing me a lot of time which I frankly don’t have. I need a backup solution I can rely on without spending a lot of effort to keep it running.

So, thanks for all your help and I wish you all the best.

Goodbye for now,
Patrick.

Plain FTP protocol (i.e. not FTPS) doesn’t encrypt username and password, so be careful.

attempt 1 of 5 failed with message: HTTP timeout 00:01:40 exceeded.

is likely the default Microsoft 100-second timeout (although the log shows long operations working).
Raising --http-operation-timeout can generally get rid of these if the upload needs more time…
I’ve suggested increasing it (because this may be the only one so short), but it’s not done yet.

It’s also my suspicion but I haven’t been able to invent a test case that breaks in just that way.
I’ve seen some other ways for it to break though… This seems a target area for robustness…

Thanks for adding a log to the collection. Maybe that will lead to added insights on the error…