Keep getting System.AggregateException and System.IO.FileNotFoundException. Need help

Hello there,

A couple of weeks ago I made a post about a FileNotFoundException, and I'm not entirely sure whether this is the same issue. I thought I'd link the post anyway: Keep getting FileNotFoundException

I run Duplicati in Docker on a Synology NAS, and I'm currently trying to back up about 3 TB of data. My backup will run for a couple of hours and then stop with an error:

System.AggregateException: Could not find file "/tmp/dup-6f255783-2945-47fe-8786-8f3f19ece462\

and

System.IO.FileNotFoundException: Could not find file "/tmp/dup-6f255783-2945-47fe-8786-8f3f19ece462\

Both error names appear in the same error log. I'm running the official duplicati/duplicati image in Docker with "high privileges", as Synology calls it, and in the Synology Docker app the Duplicati container has the variables PGID = 0 and PUID = 0. I think I read somewhere that those have something to do with privileges, so I thought I'd at least mention it; maybe it is useful to someone reading this.

I would greatly appreciate it if someone could help me solve this problem, because Duplicati is pretty much my last shot at getting proper offsite backups to work.

I can still try the official Synology Duplicati package but… it should work in Docker, right? I shouldn't have to use that version. EDIT: Never mind, I also had an issue with that one.

Here is the error log, sorry it's very long:

System.AggregateException: One or more errors occurred.
 ---> System.AggregateException: Could not find file "/tmp/dup-6f255783-2945-47fe-8786-8f3f19ece462"
      ---> System.IO.FileNotFoundException: Could not find file "/tmp/dup-6f255783-2945-47fe-8786-8f3f19ece462"
           at System.IO.FileStream..ctor (System.String path, System.IO.FileMode mode, System.IO.FileAccess access, System.IO.FileShare share, ...)
           at Duplicati.Library.Main.Volumes.VolumeReaderBase.LoadCompressor (System.String compressor, System.String file, Duplicati.Library.Main.Options options, System.IO.Stream& stream)
           at Duplicati.Library.Main.Volumes.VolumeReaderBase..ctor (System.String compressor, System.String file, Duplicati.Library.Main.Options options)
           at Duplicati.Library.Main.Volumes.BlockVolumeReader..ctor (System.String compressor, System.String file, Duplicati.Library.Main.Options options)
           at Duplicati.Library.Main.Operation.Backup.SpillCollectorProcess+<>c__DisplayClass0_0.<Run>b__0 (...)
           at CoCoL.AutomationExtensions.RunTask[T] (T channels, System.Func`2[T,TResult] method, System.Boolean catchRetiredExceptions)
           at Duplicati.Library.Main.Operation.BackupHandler.RunMainOperation (...)
           at Duplicati.Library.Main.Operation.BackupHandler.RunAsync (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token)
 ---> System.AggregateException: One or more errors occurred.
      ---> System.Net.WebException: The remote server returned an error: (504) Gateway Time-out.
           at System.Net.HttpWebRequest.GetResponseFromData (System.Net.WebResponseStream stream, System.Threading.CancellationToken cancellationToken)
           at System.Net.HttpWebRequest.RunWithTimeoutWorker[T] (...)
           at Duplicati.Library.Utility.AsyncHttpRequest+AsyncWrapper.GetResponseOrStream ()
           at Duplicati.Library.Utility.AsyncHttpRequest.GetResponse ()
           at Duplicati.Library.Backend.WEBDAV.PutAsync (System.String remotename, System.IO.Stream stream, System.Threading.CancellationToken cancelToken)
           at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoPut (...)
           at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoWithRetry (...)
           at Duplicati.Library.Main.Operation.Backup.BackendUploader.UploadFileAsync (...)
           at Duplicati.Library.Main.Operation.Backup.BackendUploader.UploadVolumeWriter (...)
           at CoCoL.AutomationExtensions.RunTask[T] (...)
   at CoCoL.ChannelExtensions.WaitForTaskOrThrow (System.Threading.Tasks.Task task)
   at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token)
   at Duplicati.Library.Main.Controller+<>c__DisplayClass14_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method)
   at Duplicati.Library.Main.Controller.Backup (System.String[] inputsources, Duplicati.Library.Utility.IFilter filter)
   at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue)

(The raw JSON log nests these same two inner exceptions, the FileNotFoundException for the /tmp file and the 504 Gateway Time-out WebException from the WebDAV PutAsync, several more times with identical stack traces; FileNotFound_FileName is "/tmp/dup-6f255783-2945-47fe-8786-8f3f19ece462" throughout.)

This sort of "Could not find file" for a temporary file, with SpillCollectorProcess on the stack, seems to result from prior errors. Your earlier error might be further down the log. What is this backend, and do you have any idea why it errors for you?

Are you able to do smaller backups successfully? If so, look in the <job> → Log → Complete log for RetryAttempts to see if you're starting to get some random ones and have just survived because the default number-of-retries of 5 managed to get through eventually. Any logs on the remote that would show the 504?

You can also try getting the target URL from <job> → Export → As Command-line, editing it to point at a new folder (the proposed test needs an empty folder), and running Duplicati.CommandLine.BackendTester.exe.

It would be helpful to know what got the 504. A way to find that (and it may make a big file) is the log-file=<path> option along with log-file-log-level=retry, to see what sorts of things upload, how fast (timeout?), and what fails.
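For example, set as two advanced options on the job (the path here is just an illustration; any persistent, writable location works):

--log-file=/data/duplicati-job.log
--log-file-log-level=retry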

Are you running at the default 50 MB remote volume size, a.k.a. dblock-size? How many source files? Assuming you've had at least one backup finish, the job log shows this under Source Files Examined; otherwise I guess you could do some sort of find command or something to try to get an estimate…
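A rough version of that find idea (the source path is hypothetical; run it on the NAS) counts the regular files under the source folder, which is what drives dlist size:

find /volume1/video -type f | wc -l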

What I’m wondering is whether you’re making a dlist file that’s really large (lots of files), and timing out.

Hi, thanks for your detailed reply. It seems like you're onto something. My backend is a Nextcloud server, which I thought was working perfectly fine, but apparently not. I should say that I do have some rather large files in my backup; one of them in particular is about 100 GB.

I did a 300 GB backup successfully with "RetryAttempts": 1. I should also mention that I have had clear 504 errors from Duplicati before, but I thought I had resolved those by upping the max file size in my Nextcloud server's configuration. Apparently not; the error was just renamed or something.

Yes, I am running the default 50 MB remote volume size. On the backup that is 337 GB, Source Files Examined says 296393 (337.72 GB).

What I’m wondering is whether you’re making a dlist file that’s really large (lots of files), and timing out.

Can this happen with the 100 GB file? How would I prevent this?

You can also try getting the target URL from <job> → Export → As Command-line, editing it to point at a new folder (the proposed test needs an empty folder), and running Duplicati.CommandLine.BackendTester.exe.

I downloaded Duplicati for Windows and used the following command to run it:

Duplicati.CommandLine.BackendTester.exe "webdav://<username>:<password>@<backupurl>/TestFolder"

It's the same URL as in my Duplicati job config, just another folder, but in Duplicati I can check "Use SSL". I don't see that option for the command line, so all I'm getting is a 400 Bad Request response :confused: It also doesn't say webdavs is supported; when I use webdavs instead of webdav it says unknown protocol.
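Maybe the "Use SSL" checkbox corresponds to a use-ssl backend option that can be appended to the target URL as a query parameter, something like this, but I haven't verified that:

Duplicati.CommandLine.BackendTester.exe "webdav://<username>:<password>@<backupurl>/TestFolder?use-ssl=true"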

The dlist file is a list of file paths, with per-file information, and it grows with the file count, not the file size.
You would normally see it upload as one of the last files. It has dlist and a date in its file name:

2020-10-02 14:21:24 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-20201002T182004Z.dlist.zip.aes (40.26 KB)
2020-10-02 14:21:25 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-20201002T182004Z.dlist.zip.aes (40.26 KB)

Above was a small dlist; the largest I can recall was 5 GB. Can you please try looking at a retry log?
If you don't want to set up the log file, you might be able to use About → Show log → Live → Retry; however, it will probably be less reliable and harder to handle than a log written to a file.

If you decide your file count makes a dlist larger than your NAS can accept, then do smaller backups.
This is all also a guess until some log information comes out showing how far the backup had gotten.

You can also look at the destination folder to see how much got there. Is it plausible for source size?
Actually, this sounded like a problem on an existing backup, so maybe there are some dlists already.
Can you sort destination files by file size? That would show how large a file you’ve managed to write.

I can't tell you enough how much I appreciate your help. Without you I'd be at a complete loss for what to do.

On my 3 TB backup, which has already done about 1 TB, I'm now suddenly getting a "No filelists found on the remote destination" message. My other 300 GB backup job still works fine, and I've done a test backup on the folder that contains the 100 GB+ file, which has also gone through just fine. I can also verify both the 300 GB and the test backup, but the 3 TB backup still shows the same message. I do not understand what is going on. I really don't want to redo the entire thing.

I have tried to set up a completely new backup task with the same settings, but it gives the same error.

About → Show log → Live → Retry,

This only gives me two retry logs, which are both from today and are about the test job that I ran. Nothing about the 3 TB job is in there.

If you decide your file count makes a dlist larger than your NAS can accept, then do smaller backups.
This is all also a guess until some log information comes out showing how far the backup had gotten.

How can I find out if this is the issue? I would preferably have the entire backup in one job, but I guess, if really necessary, I can split it up. Not ideal, but it'll work.

You can also look at the destination folder to see how much got there. Is it plausible for source size?
Actually, this sounded like a problem on an existing backup, so maybe there are some dlists already.
Can you sort destination files by file size? That would show how large a file you’ve managed to write.

The destination folder size is 1.1 TB. If I sort by file size I get a bunch of 50 MB files. Sorting from smallest to largest gives me a bunch of 18 KB files. The file modified dates range from "a month ago" to "3 days ago".

For now I'd like to fix the "no filelists found" though, because I don't want those weeks of backing up to go to waste and have to redo all that. Also, the stress on my drives bothers me a lot with this trial-and-error situation I have going on now.

Did the first one show up at the end of the backup or the start? What about later tries?
Also check About → Show log → Stored. Sometimes error messages land there.

You can go through <job> → Show files → Remote and click on a list line you find.
If you need a new one done, you can click the Verify files link for the job and look.

Have you got another tool that can look at what's on Nextcloud over the same WebDAV protocol?
OneDrive used to turn invisible (no list sent) with lots of files. I hope Nextcloud doesn't.

On the other hand…

Is that two backups to the same destination? You never want that setup; they'll conflict.

Did you find the time when the 3 TB job errored saying it saw nothing, and see no list operation nearby?
All the output is intermixed, and most of it will be from jobs that are running, not from ones that stopped quickly.

But you’re in a different test now. Before we were trying to see the uploads suddenly fail.
Currently I assume we’re trying to figure out why it sees nothing and so won’t go further.

The previous plan (before the list seemingly broke – and that could be a timeout too) was
to see the size of the dlist and any retries and any failures, but failing at 1 TB on 3 TB might
mean that it never got as far as trying to upload a dlist. Depends on what data the NAS has.
Data that is hugely compressible or hugely duplicated would be able to shrink that much…

Good next step. We can't return to the original issue until that works, so please look at the list.

Did the first one show up at the end of the backup or the start? What about later tries?
Also check About → Show log → Stored. Sometimes error messages land there.

At the start of the backup.

In About → Show log → Stored there are the following errors:

  1. Duplicati.Library.Interface.UserInformationException: No filelists found on the remote destination
  2. Duplicati.Library.Interface.UserInformationException: The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again.

I already tried deleting the database and repairing; that did not solve the issue.

Is that two backups to the same destination? You never want that setup; they'll conflict.

No, I deleted the original backup task and then remade it using the exact same settings. I read on this forum that that was the recommended action.

Have you got another tool that can look at what's on Nextcloud over the same WebDAV protocol?
OneDrive used to turn invisible (no list sent) with lots of files. I hope Nextcloud doesn't.

I just logged in over WebDAV using WinSCP, with the exact same settings as I use in Duplicati, and it works. I can see all files in the folder, but I do notice that it takes a bit, maybe 5-10 seconds, to list all the files in the folder because there are so many.
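I suppose a raw listing against Nextcloud's standard WebDAV endpoint with curl would be another way to check (placeholders below; I haven't run this yet):

curl -u <username>:<password> -X PROPFIND -H "Depth: 1" "https://<backupurl>/remote.php/dav/files/<username>/<BackupFolder>/"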

You can go through <job> → Show files → Remote and click on a list line you find.
If you need a new one done, you can click the Verify files link for the job and look.

I can't find where to do this. Verify files gives the same "No filelists found on the remote destination" message.

I don't understand why I'm having so many issues with it. I just want a proper backup of my important data :frowning:. Again, thank you very much for your help; I appreciate it a lot.

If you mean where to find the list line, see the first line. Here’s an example screenshot:

You get a list and three get operations typically. Click on the list, see if anything came in.
If you see a big list and Duplicati denies it, that’s one thing. If you see no list, it’s another.

but failing at 1 TB on 3 TB might
mean that it never got as far as trying to upload a dlist. Depends on what data the NAS has.
Data that is hugely compressible or hugely duplicated would be able to shrink that much…

It is all video footage, which I think isn't very compressible. The backup data on the backup server is 1.1 TB right now.

If you mean where to find the list line, see the first line. Here’s an example screenshot:

You get a list and three get operations typically. Click on the list, see if anything came in.
If you see a big list and Duplicati denies it, that’s one thing. If you see no list, it’s another.

I found it, but it's 48597 lines long, and most lines seem to be the same, just with different filenames and timestamps.

Here are the first 3 lines:

[
{"Name":"duplicati-b0002d90ff6c941e5acf7cbac55cfba29.dblock.zip.aes","LastAccess":"2020-08-31T09:33:54+00:00","LastModification":"2020-08-31T09:33:54+00:00","Size":52402973,"IsFolder":false},
{"Name":"duplicati-b00037215cbb1486d9b4d8046a05dcb49.dblock.zip.aes","LastAccess":"2020-08-31T02:30:11+00:00","LastModification":"2020-08-31T02:30:11+00:00","Size":52408477,"IsFolder":false},
{"Name":"duplicati-b0009074c7c364f4db1c2bc7483a4da64.dblock.zip.aes","LastAccess":"2020-10-01T10:43:13+00:00","LastModification":"2020-10-01T10:43:13+00:00","Size":52394205,"IsFolder":false},

And here are the last 3 lines:

{"Name":"duplicati-ifffae22101884327907b572344856b16.dindex.zip.aes","LastAccess":"2020-09-10T18:15:54+00:00","LastModification":"2020-09-10T18:15:54+00:00","Size":119949,"IsFolder":false},
{"Name":"duplicati-ifffeb796a6a54fdc946ad65b8eed2668.dindex.zip.aes","LastAccess":"2020-09-11T09:00:37+00:00","LastModification":"2020-09-11T09:00:37+00:00","Size":18397,"IsFolder":false},
{"Name":"duplicati-ifffff970a8e64c0eb684e8d896aae51a.dindex.zip.aes","LastAccess":"2020-08-26T22:42:47+00:00","LastModification":"2020-08-26T22:42:47+00:00","Size":18397,"IsFolder":false}
]

Also there is just the list and no GET operations.

What should I try next? :confused: I'm lost.

I’m pretty close to lost. Might need to gather some logs on how this breaks. Meanwhile I misinterpreted this:

(sorry for thinking out loud – you don’t need all the details, but perhaps they will help somebody somehow)

"No filelists found on the remote destination" made me think of the no-files-seen issue, but you proved that files were seen by list. Looking up the message text (thanks for providing it), this actually comes from code attempting to recreate the database from backend files, which found it had no dlist (list of files) to start from. Because the dlist contains backup details, it can't be finalized for uploading until the very end of the backup.

The 3 usual get operations are sampled from remote files known to the database, but no DB may mean no samples. The question becomes: where did the database go? Unfortunately, I don't do Docker, but I think it sometimes becomes an issue of where to keep persistent data. Keeping it outside the container solves the problem of losing it all when you replace the container with a new version, but it needs some special steps.

Duplicati on Synology via Docker?! is a writeup by the person who you worked with on your previous issue.
Docker container sqlite location is an older discussion.
Can’t repair database on synology was one where a 512 MB Synology couldn’t do Recreate due to lack of memory. That initial crash was due to lack of temporary file space. tempdir and other settings control that.

Because you’re on a small system (what memory size?) that can run a small backup but not a larger one, possibly there is some sort of resource issue that arises. I don’t know how it leads to the original message whose source is not known, but appears to be the result of previous problems (which a log might capture).

Duplicati uses temporary file space heavily (from your error, I guess yours is at /tmp – is that inside the container?) for accumulating information and staging files for upload to the destination. Can you watch free space?

Can you check the job Database tab to see its path, then figure out where that really is? Watch free space. You can also watch the database itself. Especially from a clean start, I’d expect it to start small then grow.
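If Docker allows it, something along these lines (container name and paths are placeholders) would let you watch both from the host:

docker exec <duplicati-container> df -h /tmp
docker exec <duplicati-container> ls -lh /data/Duplicati

The first shows free space where the temporary files go; the second shows the database files, which should grow as the backup proceeds.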

I'm not sure why it's in the recreate code. I think in some cases it does a repair before backup if it feels the need (and this can probably be seen in the log); however (per the repair link), a recreate is due to no database. The specific issue of no filelist is probably because it never got far enough in the initial backup to upload the dlist.

A log file was suggested earlier to try to see what got the 504 from NextCloud. That log level was at retry, which is good because it doesn’t show private info like paths, however there are higher levels if necessary.

You can possibly gather some information on any stuck-here situation like the "No filelists found" (is there a file with dlist in its name in the backup destination area?) this way, or I suppose you could just try to restart by deleting the database (Delete button) and the corresponding Nextcloud files using some manual delete.

I’d be happy if @drwtsn32 has any thoughts on setting up Docker on Synology, and how to check issues.

"No filelists found on the remote destination" made me think of the no-files-seen issue, but you proved that files were seen by list. Looking up the message text (thanks for providing it), this actually comes from code attempting to recreate the database from backend files, which found it had no dlist (list of files) to start from. Because the dlist contains backup details, it can't be finalized for uploading until the very end of the backup.

The 3 usual get operations are sampled from remote files known to the database, but no DB may mean no samples. The question becomes: where did the database go? Unfortunately, I don't do Docker, but I think it sometimes becomes an issue of where to keep persistent data. Keeping it outside the container solves the problem of losing it all when you replace the container with a new version, but it needs some special steps.

This actually made me understand the process more, but I still have no idea what caused the problem. Thanks for thinking out loud.

I have mounted the container's /config folder to a folder on my NAS. I think I remember seeing .sqlite files in the folder, but it is currently completely empty. Is that normal? I still have an old linuxserver/duplicati container which I haven't used since switching to duplicati/duplicati, but when looking in the config folder for the old linuxserver/duplicati I do see a bunch of config files and .sqlite files.

Because you’re on a small system (what memory size?) that can run a small backup but not a larger one, possibly there is some sort of resource issue that arises. I don’t know how it leads to the original message whose source is not known, but appears to be the result of previous problems (which a log might capture).

My Synology NAS has 8 GB RAM with an Intel Pentium N3710. Usually only about 35% of RAM is in use, and the Duplicati container is not limited in resource usage.

Duplicati uses temporary file space heavily (from your error, I guess yours is at /tmp – is that inside the container?) for accumulating information and staging files for upload to the destination. Can you watch free space?

I have tried both mounting and not mounting the /tmp folder to the host filesystem, because I had the same idea about free space. With both options, everything that's been going on still happens. I have about 9 TB free on the NAS, so it shouldn't be running out of space when it's mounted, and since it does the same thing when it's not mounted, I guess it's also not a space issue in that case. (I'm not sure where the files are when it's not mounted.)

Can you check the job Database tab to see its path, then figure out where that really is? Watch free space. You can also watch the database itself. Especially from a clean start, I’d expect it to start small then grow.

I do see a problem here. The databases for each backup all say they are in /data/Duplicati, which is not a path I have mounted to the host, and thus must be getting cleared every time the container restarts, which I have done a couple of times when trying to solve problems. I will try to mount /data to the host filesystem now and see if that makes a difference. You'll hear about it in my next reply.

Can it be that linuxserver/duplicati uses /config and duplicati/duplicati uses /data? I had mounted /config, not /data. If this is really the cause of all of these issues (though I'm not sure how it would cause a 504), I'm going to be very frustrated.
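In plain docker terms, the mapping I'm trying now looks something like this (I set it via the Synology UI, and the host path is just mine):

docker run -d --name duplicati -p 8200:8200 -v /volume1/docker/duplicati:/data duplicati/duplicati

whereas the linuxserver image apparently keeps its databases under /config, which would explain why my /config mapping captured nothing from the official image.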

Now that I have mounted /data to the host filesystem, my backup tasks are gone completely. That kind of sucks, but I guess I can set them up again with the exact same settings. This hasn't happened before on container restarts, so maybe the /data folder never got cleared like I thought it had.

Edit: I also suddenly don't have an encryption option anymore?? It just says "No Encryption" instead of the 256-bit AES encryption that I used before. What can cause this??

Edit Edit: The encryption option is back after a container restart. I am so confused.

A log file was suggested earlier to try to see what got the 504 from NextCloud. That log level was at retry, which is good because it doesn't show private info like paths, however there are higher levels if necessary.

I'm sorry, I must have read past this. I'll add the log options to the backup task and see what that produces.

You can possibly gather some information on any stuck-here situation like the "No filelists found" (is there a file with dlist in its name in the backup destination area?) this way, or I suppose you could just try to restart by deleting the database (Delete button) and the corresponding Nextcloud files using some manual delete.

I cannot find a file with 'dlist' in its name on the 3 TB backup, but I do see one on the 300 GB backup.

Edit edit edit (3 hours later): I let the 300 GB backup recreate the database, and it suddenly says there are 15k files missing locally. That can't be right, so I'm even more confused than before. I guess I'll redo that backup, and if it says the same for the 3 TB backup I guess I'll redo that one too. I have no clue what's going on at this point.

Yeah, the two docker images are quite different in where they place data. Personally I would stick with the official image but you’ll definitely need to update your mappings if you haven’t done so already.

I also have this running on a Synology NAS, as @ts678 mentioned. I have not seen the issues you describe but I also use a different back end.

Answered at least tentatively (before issue reverted to unsolved) in the topic I linked.

I can’t do Docker specifics, but I did click on the links and they still appear to say what the post said.

Configurations are in Duplicati-server.sqlite which is usually right next to the per-backup files.
Getting everything that you need onto the host, if possible, would save you some redoing of things…
I’m not sure where your browser is, but it can export configurations to files (keep for safety anyway).

I'm not sure about that one, but some of the UI takes a while to populate data. System info is that way.
Sometimes fields will say something like "Unknown" for things that aren't yet known from server data.

That makes sense because the 300 GB finished, while the 3 TB hasn’t finished even its initial backup.

I can’t find an error message like that, even loosening and using a regular expression. Can you quote?

Hello, thanks for sticking with me.

Answered at least tentatively (before issue reverted to unsolved) in the topic I linked.

Thank you. I had originally started with linuxserver/duplicati before someone on this forum told me to switch to duplicati/duplicati, and apparently I had not realised that the mappings are different. That is a stupid mistake on my end.

Configurations are in Duplicati-server.sqlite which is usually right next to the per-backup files.
Getting everything that you need onto the host, if possible, would save you some redoing of things…

Unfortunately it looked like everything was cleared once I mounted the /data folder. I unmounted it to see if my configuration was back, but it wasn't.

It was:

Duplicati.Library.Interface.UserInformationException: Found 15774 remote files that are not recorded in local storage, please run repair

I have deleted the remote 300 GB backup though and am redoing the entire thing; I just want to be safe. I don't fancy redoing the 3 TB though; the 1 TB it's done so far took about 2 weeks. If it gives the same message on that one I'll report back. I can start it after the 300 GB backup is done, which will probably be tomorrow or the day after. It'll have to recreate the database, which will take a couple of hours, and then I'll see what it has to say. Thanks for the help so far, I appreciate it a lot.

The easiest way to get that is to lose the DB, which then thinks there should be no remote files.
You can see if the destination has 15774 files. If so, that's probably it. This can happen if one deletes
the DB on purpose to try to get a fresh start (the solution is also deleting the backup files manually).
In your case, the disappearing-DB issues might have caused a new DB to see the old backup…

EDIT:

If the goal is to recreate the DB from backup (to avoid backup uploads), use Database Recreate.
Generally this should be pretty fast, but if information is missing it can wind up in a big download.

I thought it meant that there were remote files found which are not in local storage, and that when I click repair it would download the files from the remote backup and put them in my local storage. That's why I didn't want to press repair; I didn't want to mess up my local files.

So what do you suggest doing on the 1 TB backup task once the current 300 GB one is done? Do I press repair on the message, since I'll probably get it if I understand right? Or do I go to Database → Repair there? Is it the same repair button? Will pressing repair on the error not edit my local files, but only repair the database?

If local files means source files, Repair won’t touch those. It’s a DB process to reconcile with the backup, however there’s a known issue when an old DB is restored (e.g. from an image backup) and Repair runs.

Apparently the stale DB is a convincing enough DB that agreement with remote is made by deleting new files from remote. Not good. I just did Repair to a totally new empty DB, and it rebuilt the DB from remote. Using the Recreate button deletes the old DB first, so there’s no chance of accidental damage to remote.

This is the one that got to 1 TB and never finished (and never made its dlist, so it won’t recreate) I believe.
It’s also the original debug target, but it takes two weeks to get to the 1 TB. If there’s anything left of its DB then looking inside may be possible. If the DB is gone, and no log files exist, then all there is is your recall.

You can see if the DB in the Database page has a plausible size. If it's below 1 MB, it's probably too empty. You can also see if you can spot any history on anything in the job's Show log General or Remote pages. Finding older history would mean there's something to look at or work on. If it's extremely new, that's bad.

Duplicati can ordinarily resume an interrupted backup from where it left off, but yours was getting an error. Possibly the problems in Docker contributed, but I guess first step is to see if there’s any sign of a DB left.

I do have an experimental method that “might” be able to make use of your backup files even if DB is gone, however I’d prefer if the original DB is still there, as there’s a chance it has clues. A big log might do better, however it’d be nice if we didn’t have to wait 2 weeks for something to fail before having some information.

I suspect the Repair button on the 1 TB partial will wind up in “No filelists found on the remote destination”, because I think Duplicati runs repair internally before the backup if needed, and that might go into recreate.

The REPAIR command

Tries to repair the backup. If no local database is found or the database is empty, the database is re-created with data from the storage. If the database is in place but the remote storage is corrupt, the remote storage gets repaired with local data (if available).

but there’s a lot of uncertainty about how much of a database existed at what times during all the testing.
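For reference, the same operation can presumably be driven from the command line too (the storage URL and database path are placeholders):

Duplicati.CommandLine.exe repair <storage-url> --dbpath=<path-to-job-database.sqlite>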

If local files means source files, Repair won’t touch those. It’s a DB process to reconcile with the backup, however there’s a known issue when an old DB is restored (e.g. from an image backup) and Repair runs.
Apparently the stale DB is a convincing enough DB that agreement with remote is made by deleting new files from remote. Not good. I just did Repair to a totally new empty DB, and it rebuilt the DB from remote. Using the Recreate button deletes the old DB first, so there’s no chance of accidental damage to remote.

Well, I redid this entire backup because I thought the message meant something other than what it did, so I guess that works too.

This is the one that got to 1 TB and never finished (and never made its dlist, so it won’t recreate) I believe.

Yes, I'm sorry, I called it the 1 TB backup task but it should have been the 3 TB backup task.

You can see if the DB in the Database page has a plausible size. If it's below 1 MB, it's probably too empty. You can also see if you can spot any history on anything in the job's Show log General or Remote pages. Finding older history would mean there's something to look at or work on. If it's extremely new, that's bad.

I don't think there's anything left of the DB. My entire Duplicati setup got cleared when I mounted the /data folder. When clicking on the job and then Show log, it says "Failed to connect: SQLite error no such table: LogData".

I suspect the Repair button on the 1 TB partial will wind up in “No filelists found on the remote destination”, because I think Duplicati runs repair internally before the backup if needed, and that might go into recreate.

I can try this to see if the message occurs. The 300 GB backup task has 80 GB left; it should be done later today.

So you suggest trying the repair, and if that doesn't work I come back here to try the experimental method you mentioned?

Also, I just wanted to mention again that I'm very grateful for your help.

The 3 TB backup task, as expected, returns "No filelists found on the remote destination". Going to Database → Delete & Repair does the same, as you expected.

What is the experimental method that might work? Does it affect data integrity at all? Obviously I can't have that, but if it doesn't, and it's only experimental because it's not guaranteed to work, then I'm willing to try it.

I just hope that if it doesn't work for some reason and I have to redo the entire backup, I don't get stuck at the same point.

It’s described here. It’s basically making a dummy dlist with no files listed, to get Recreate to populate block information into the DB so that blocks already uploaded are reattached to any files that have those blocks…

It's basically replacing one file in a .zip file. It sounds like you encrypt, but AES Crypt can do that part.

It’s just a Recreate helper, and risks were discussed earlier. Basically, if you mean source data you’re fine.

The Recreate button, if blue, will delete the DB first, so it can't think of deleting remote files (which are less critical than source data; in fact, if this doesn't work we're probably going to have to throw away the partial…).

If the DB is not present at all, I think Recreate won't go blue, but the Repair button will turn into a database recreate.

Assuming this loads block info successfully, you’d want to watch some of the files that go by on the home screen or live log at verbose level, and at least do a test restore of those to see if they backed up properly.

So none of this should endanger source files, but it’d take more doing to get convinced that backup is fine. What I’d probably do if you’re up for trying to save two weeks of uploads is do some more testing myself…

One advantage of the two-weeks-then-fail plan is that it could generate a log that might show what went wrong. Going that route should probably bump blocksize up from its default 100 KB, because 3 TB at 100 KB makes 30 million blocks for the database to track, and that slows down operations (Recreate shows it quite heavily).

Especially given video, which doesn't deduplicate well, something like a 5 MB blocksize might be reasonable.
Choosing sizes in Duplicati gets into this a bit. A big caveat is that the blocksize on an existing backup cannot change, which possibly will drive this effort into a 1 TB re-upload just to get a blocksize more appropriate for 3 TB.
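If a fresh start does happen, the sizes would be set as advanced options on the job before its first run, for example (these values are the suggestions above, not requirements):

--blocksize=5MB
--dblock-size=100MB

Note that blocksize is the deduplication unit and is fixed once the backup exists, while dblock-size (the remote volume size) can still be changed later.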

But this other path should be pretty fast to play with, and we might learn something without a 2 week wait:

Trying to read the current upload back into a DB might be interesting, to see whether it even starts, fails soon, or takes another two weeks. Is the network simple (e.g. a typical home network), with no boxes in the middle to time out?

Trying to debug something that takes so long to fail is certainly awkward, and I don’t know where it’ll land…