Keep getting System.AggregateException and System.IO.FileNotFoundException. Need help

Thank you for the writeup, but after reading it, and considering how valuable the source data is, how important it is to have a proper backup solution i can rely on for years to come, and how i have absolutely no idea where to start with the described process, i think i will just redo the entire thing and hope it goes well this time.

Going that route, you should probably bump the blocksize up from its default of 100 KB.
Especially given video, which doesn’t deduplicate well, something like a 5 MB blocksize might be reasonable.

You mean i should change the blocksize on the task to 5 MB so there will be fewer database entries? That sounds like a good plan actually.

Choosing a large value will cause a larger overhead on file changes,

I’m not entirely sure what this means.

Is there anything else i need to do before restarting the task from scratch? Any flags i need to add to the task? I have the following flags on it at the moment (with my guess at the equivalent command-line options sketched after the list):

auto-cleanup: Yes
blocksize: 5MByte
log-file: /data/video-log
log-file-log-level: Retry
upload-verification-file: Yes
zip-compression-level: 9
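
Roughly, i believe those correspond to the following command-line options if the job were run outside the GUI (this is just my guess at the exact syntax, so correct me if it’s off):

--auto-cleanup=true
--blocksize=5MB
--log-file=/data/video-log
--log-file-log-level=Retry
--upload-verification-file=true
--zip-compression-level=9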

Trying to read the current upload back into a DB might be interesting, to see whether it even starts, fails soon, or takes another two weeks. Is the network simple (e.g. a typical home network) with no boxes in the middle that could time out?

Yes typical home network on both sides.

Trying to debug something that takes so long to fail is certainly awkward, and I don’t know where it’ll land…

I know but i don’t think i have another option.

Also before i start it, is there a way to know which file was being processed at the moment an error occurred? I’ve been wondering if it has been happening on one specific file or not.

That was just the rough writeup, to gauge interest. There are other steps, but ultimately it’s still an experiment, and just the blocksize change means you might wind up with a performance problem.

Please get a log as described earlier. If you prefer, verbose level is more informative without being completely overwhelming in size (as profiling would be), but you’d have to sanitize it before posting.
Logs at retry level might be enough, and are likely postable. OTOH I’d sure hate to repeat this drill.

While it’s backing up, you could watch the DB grow, and you should see dup-* files in the temp area.
Most will probably be transient files that are created, uploaded, then deleted. Some are long-term.

I’m curious what type of file “/tmp/dup-6f255783-2945-47fe-8786-8f3f19ece462” (from the “Could not find file” error) was, however the naming doesn’t distinguish. There are some extreme measures you can take to gather info, but they don’t scale up. I’ve used Sysinternals Process Monitor to watch temp file activity, and also used a batch file to copy all dup-* files to another folder in case I wanted to look at one later… If you have extra TBs of space available, that might work, but in any case we’d still want the log file.
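
If you want to try the copy-everything trick on your NAS, a rough shell version of that batch file might look like the following. It’s only a sketch with example paths (the /volume1/dup-capture target is made up, and you’d leave it running while the backup runs):

# watch Duplicati's temp area and keep a copy of every dup-* file it creates
mkdir -p /volume1/dup-capture
while true; do
  # -n = don't overwrite copies we already grabbed; ignore files that vanish mid-copy
  cp -n /tmp/dup-* /volume1/dup-capture/ 2>/dev/null
  sleep 10
done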

For anything very valuable, two backups (done differently) is a good idea. Software is never perfect.
Do these videos have valuable old-version info? If it’s all just-the-latest, maybe direct copies will do.
That would give you more choice on how to get the copies, although 100 GB size may pose issues. Duplicati’s strengths include deduplication and compression to keep multiple versions compactly, but
video is good at defeating both of those. Unless you edit and want to undo, versions may be overkill.
OTOH versions are handy if ransomware clobbers things. You don’t want to clobber backup as well.

Yes, and I picked 5 MB because of my initial test results that showed a slowdown at 1 million entries.
There’s not much solid data, except that large backups get slow in their SQL queries and Recreates.
Currently the blocks are inserted one at a time, and recreate has to do every one of them that way…
Backups are better at (sometimes) hiding the slowness because they start with an existing database.
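
To put rough numbers on it (assuming the backup stays around 3 TB): at the default 100 KB blocksize that’s on the order of 3 TB / 100 KB ≈ 30 million block entries, while at 5 MB it’s about 3 TB / 5 MB ≈ 600,000 entries, comfortably under the million-entry point where I saw the slowdown.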

How does Duplicati Resume After Interruption? is an old post from the Duplicati author suggesting not using --auto-cleanup, but possibly that being set explains why repair ran (and maybe went on to recreate).
That also describes the experiment that we’re not trying, which is to try to patch up the DB for continuation.

Initial backup is especially troublesome (as you saw) due to there being no dlist file yet, so one way to proceed is to do a smaller subset backup initially (maybe the most important files first?) to get at least one complete initial backup onto the destination.
If something breaks later, at least you can recreate the DB, but we hope the DB loss issue is over now.

Processing is parallel. At higher log levels, you first see the file getting past the exclude and other filters. Actually reading through the file is (I think) silent in the log file. Data blocks collect in temp files, then upload as they fill to 50 MB (the default dblock-size, a.k.a. Remote volume size on the options screen), but they queue…
asynchronous-upload-limit controls how many queue. It’s a good question, but difficult to answer without much understanding of whether it’s related to a source file, a file made for upload, or the uploading.
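
If you ever want to tune that part, the relevant options (with what I believe are their defaults) look roughly like this:

--dblock-size=50MB
--asynchronous-upload-limit=4

The first is the Remote volume size from the options screen; the second is how many filled volumes are allowed to queue for upload at once.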

What’s interesting is that SpillCollectorProcess is trying to do something. Ordinarily its job is to collect the leftovers after the concurrent file processors have finished their work of filling dblock files, since they don’t all finish evenly. Your backup was seemingly nowhere near the end, so I’m not sure why that code is running at the point where it appears.

Here’s an example Verbose log of a small backup. I’m posting the lines with three backticks above and below, which helps with the formatting (and scrolling), but for a really big file you can zip it up and drag it to the window.

2020-10-06 16:03:48 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
2020-10-06 16:03:51 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()
2020-10-06 16:03:51 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  ()
2020-10-06 16:03:51 -04 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingSourcePath]: Including source path: C:\backup source\length1.txt
2020-10-06 16:03:51 -04 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingSourcePath]: Including source path: C:\backup source\short.txt
2020-10-06 16:03:51 -04 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingSourcePath]: Including source path: C:\backup source\length1.txt
2020-10-06 16:03:51 -04 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingSourcePath]: Including source path: C:\backup source\short.txt
2020-10-06 16:03:51 -04 - [Verbose-Duplicati.Library.Main.Operation.Backup.FilePreFilterProcess.FileEntry-CheckFileForChanges]: Checking file for changes C:\backup source\length1.txt, new: True, timestamp changed: True, size changed: True, metadatachanged: True, 10/4/2020 1:42:38 AM vs 1/1/0001 12:00:00 AM
2020-10-06 16:03:51 -04 - [Verbose-Duplicati.Library.Main.Operation.Backup.FilePreFilterProcess.FileEntry-CheckFileForChanges]: Checking file for changes C:\backup source\short.txt, new: True, timestamp changed: True, size changed: True, metadatachanged: True, 10/4/2020 6:56:47 PM vs 1/1/0001 12:00:00 AM
2020-10-06 16:03:51 -04 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileBlockProcessor.FileEntry-NewFile]: New file C:\backup source\length1.txt
2020-10-06 16:03:51 -04 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileBlockProcessor.FileEntry-NewFile]: New file C:\backup source\short.txt
2020-10-06 16:03:51 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b2c52d2a1185c4cd280e6f6b14133f540.dblock.zip (1.11 KB)
2020-10-06 16:03:51 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-b2c52d2a1185c4cd280e6f6b14133f540.dblock.zip (1.11 KB)
2020-10-06 16:03:51 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-20201006T200351Z.dlist.zip (748 bytes)
2020-10-06 16:03:51 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-ib2eaa999e8d44fe08d64af2e88704c82.dindex.zip (688 bytes)
2020-10-06 16:03:51 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-20201006T200351Z.dlist.zip (748 bytes)
2020-10-06 16:03:51 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-ib2eaa999e8d44fe08d64af2e88704c82.dindex.zip (688 bytes)
2020-10-06 16:03:51 -04 - [Verbose-Duplicati.Library.Main.Database.LocalDeleteDatabase-FullyDeletableCount]: Found 0 fully deletable volume(s)
2020-10-06 16:03:51 -04 - [Verbose-Duplicati.Library.Main.Database.LocalDeleteDatabase-SmallVolumeCount]: Found 1 small volumes(s) with a total size of 1.11 KB
2020-10-06 16:03:51 -04 - [Verbose-Duplicati.Library.Main.Database.LocalDeleteDatabase-WastedSpaceVolumes]: Found 0 volume(s) with a total of 0.00% wasted space (0 bytes of 285 bytes)
2020-10-06 16:03:51 -04 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting not required
2020-10-06 16:03:51 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()
2020-10-06 16:03:51 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (3 bytes)
2020-10-06 16:03:52 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-20201006T200351Z.dlist.zip (748 bytes)
2020-10-06 16:03:52 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-20201006T200351Z.dlist.zip (748 bytes)
2020-10-06 16:03:52 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-ib2eaa999e8d44fe08d64af2e88704c82.dindex.zip (688 bytes)
2020-10-06 16:03:52 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-ib2eaa999e8d44fe08d64af2e88704c82.dindex.zip (688 bytes)
2020-10-06 16:03:52 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-b2c52d2a1185c4cd280e6f6b14133f540.dblock.zip (1.11 KB)
2020-10-06 16:03:52 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-b2c52d2a1185c4cd280e6f6b14133f540.dblock.zip (1.11 KB)

I have to be honest, i’m getting slightly overwhelmed with information now.

Ok, so:

I’m ready to start the 3TB backup process from scratch and hope everything, for whatever reason, goes better this time, even though the only thing that has changed is the data/ folder, since i feel like this is really the only chance i have. I have set logging to verbose on the task and set the block size to 5MByte.

What do you suggest putting “Remote volume size” on? All the tiny 50 MB files were getting slightly difficult to search through when you asked me to find a file in them, and that was with only 1TB done. Is leaving it on 50 MB okay? Or should i increase it to decrease the number of files on the remote system?

Software is never perfect.
Do these videos have valuable old-version info? If it’s all just-the-latest, maybe direct copies will do.
That would give you more choice on how to get the copies, although 100 GB size may pose issues. Duplicati’s strengths include deduplication and compression to keep multiple versions compactly, but
video is good at defeating both of those. Unless you edit and want to undo, versions may be overkill.
OTOH versions are handy if ransomware clobbers things. You don’t want to clobber backup as well.

It’s just the latest honestly, and i have thought about direct copies too, however versioning as protection against ransomware seems like a plus, and the fact that if i delete a file it’ll still be in my backup for a couple of versions before it’s completely gone seems good too. I also wanted to compress as much as possible, but i do see now that the compression is pretty minimal, which is understandable.

For anything very valuable, two backups (done differently) is a good idea.

Yes i do know that, i’ll be looking into another backup option too but just having this one is already a million times better than none at all.

Duplicati author suggesting not using --auto-cleanup, but possibly that being set explains why repair ran (and maybe went to recreate).

“Cleanup” sounded like a positive thing, which is kind of why i added it. I have removed it from both tasks now.

I did read the rest of your post but i’m not sure what to say to all the information. Thanks for explaining and i’ll keep everything in mind. I’ll be waiting for an answer before i start the 3TB task again.

The “Choosing sizes” document talks about that some. There aren’t really any hard-and-fast rules on this.
Especially if you use versions of files (which may or may not fit video), restoring a version may take many dblock files, as the updates will certainly land in a different file (and the non-updated parts use the original dblock). Compacting can mix blocks around too. If you delete files and create wasted space, rearranging gets done.

Basically, larger remote volume size means possibly slower restores. It might not matter if this is just DR. When doing a restore of everything, all dblock files will likely download. For limited restores, it may matter.

Download speed is also a factor. If it’s fast, a larger volume might not hurt as much (if Nextcloud can keep up).
If lots of smallish files are annoying, and your connection is fairly fast and fairly reliable (we will see when something finishes and “RetryAttempts” shows up in the log – you can keep checking your working backup), increasing this to 500 MB might be reasonable if you don’t mind the possibility of restores getting slower…

Because you have “just the latest”, if you also have few deletes, then probably you’ll just keep on using the original upload dblocks, and not have to go chasing around for updates, so larger volumes will slow less…
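
As a rough feel for the resulting file counts on ~3 TB of mostly-unique video (dindex files add about the same count again, but they’re tiny):

50 MB volumes: about 3 TB / 50 MB ≈ 60,000 dblock files
250 MB volumes: about 3 TB / 250 MB ≈ 12,000 dblock files
500 MB volumes: about 3 TB / 500 MB ≈ 6,000 dblock files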

EDIT:

Large volumes increase the chance of timeouts. What might be nice is the small initial backup I proposed, then a restore to see if that seems to be working, then either add gradually or go for broke with the rest of the files.

Download speed is 500 Mbit/s, but upload speed at the backup server is rather slow at 20 Mbit/s. I’m already fearing the day i need to restore a backup with that slow upload speed at the other end.

You named 500 MB which was actually a lot higher than i originally had in mind. I was thinking about doubling or tripling the default 50 MB.

I honestly simply had absolutely no idea what to do, so i just chose a value in between your thoughts and my thoughts: 250 MB.

Before starting the backup i also went into the configuration of the Nextcloud server (the server the backups are going to) and increased the maximum execution time to the max (3600 seconds) to try and prevent the 504 errors, since those are timeouts. I thought that might help.
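
For anyone reading this later: i believe the underlying setting is PHP’s max_execution_time for the PHP that Nextcloud runs under. Where exactly it lives depends on the install (php.ini or the FPM pool config), but the line itself would be something like:

max_execution_time = 3600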

The backup has been running for about 5 days now and i should say i vastly underestimated the speed. I said 1TB takes about a week but it has done about 1.5TB now in these 5 days. The total backup size to be precise is 2.8TB, and currently it’s on 1.2TB remaining. The previous 504 errors usually occurred around 1.8TB remaining, and since it has already passed that point my hopes are high. I don’t want to speak too soon, but i have a feeling everything might work out this time :). I’ll report back once the backup is fully complete.

Should be fine assuming no timeouts. Value is quite arbitrary and depends on too many things to optimize. Changing it later is possible, but affects only new volumes (which could also be produced by compacting).

One other thing that can happen (though ideally it never does) is the earlier-mentioned Recreate, which can download the whole backup as opposed to just its small dlist and dindex files. Running in the 90% to 100% range on the progress bar is going to be slow. The live log at Verbose level shows what’s going on.

Carefully testing occasionally, by moving the database aside temporarily to see whether Repair can rebuild it quickly, would give you an early alert of a possible problem with a future rebuild, but it can’t help if the source drive dies.
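
A sketch of that drill, with example paths (the real database path is shown on the job’s Database screen, and the job must not be running while you try this):

# move the job database aside temporarily (path below is an example)
mv /data/Duplicati/CLIENTID.sqlite /data/Duplicati/CLIENTID.sqlite.saved
# now run Database -> Repair in the GUI and time how long the rebuild takes;
# if you only wanted the timing test, put the original back afterwards:
mv /data/Duplicati/CLIENTID.sqlite.saved /data/Duplicati/CLIENTID.sqlite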

Of course things can never go my way lmao.

Just as the backup was done i got the following error: “The remote server returned an error: (423) Locked.”.

Verbose logging was on. The log file has 102k lines. The last 300-ish lines (because i think those are the interesting ones) can be found here: 2020-10-13 20:53:09 +00 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileB - Pastebin.com

Once again, unfortunately, i do not know what to do in this situation.

As far as i can tell, for some reason it says the file /video/2002/2002-07-28 Spain/108-0894_MVI.AVI has been edited since the last time it checked the file, but i can confirm no one edited it. It’s a 3.2MB video file with a last modified date in 2002. I asked the other person who could have done anything to the files, but they say they haven’t been doing anything. I do not understand what is going on.

Also i noticed it is backing up files from the #recycle and @eaDir folders. I should probably exclude those because they are system folders from the NAS. Do i just go to Source Data -> Filters -> press “Exclude Folder” and enter @eaDir? Is it that simple? Should i do this while problem solving or wait for a successful backup first?

I can see that it tried 5 times to delete a file called duplicati-idea09091bb4e43699ae1bb7c1359de07.dindex.zip.aes and each time it got a 423 Locked error. At the time of this error (2020-10-13 21:36:21) i do not see anything in the logs at the remote server.

What do you suggest doing? Maybe just try run the backup task again?

It does look done, or pretty close. I see the dlist file put up at the end of the backup, and the verification file too.

There might be some cleanup code trying to delete files that didn’t make it up and so were retried under a different name. That’s the design: the file that got the upload error gets a delete attempt right then.

Maybe there’s another try at the end. Can you search the whole log for the early history of the named file?
The database should also have some records, if you’re willing to go browsing the RemoteOperation table.

Do you control the remote server? I wonder if rebooting either it or the local system would unlock the file?
Less disruptive than a reboot, if the remote is shared, might be a WebDAV restart. You can also research the error.
I’m not immediately seeing Duplicati locking files, but I wonder if the server can’t delete during an upload.

Got lines that show that? What I see for that path is that it looks like a newly backed up file, not an edit:

2020-10-13 20:53:13 +00 - [Verbose-Duplicati.Library.Main.Operation.Backup.FilePreFilterProcess.FileEntry-CheckFileForChanges]: Checking file for changes /source/video/2002/2002-07-28 Spain/108-0894_MVI.AVI, new: True, timestamp changed: True, size changed: True, metadatachanged: True, 08/05/2002 20:32:44 vs 01/01/0001 00:00:00
2020-10-13 20:53:14 +00 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileBlockProcessor.FileEntry-NewFile]: New file /source/video/2002/2002-07-28 Spain/108-0894_MVI.AVI

I don’t know the entire backup history and whether it used to work, but at the moment incomplete backups could very easily leave some files not backed up. If not backed up before (check old backups) they’re new.

Does it normally show delete operations I wonder? Or maybe it only shows ones that happen. No idea.

Look in early logs for duplicati-idea09091bb4e43699ae1bb7c1359de07.dindex.zip.aes operations.

Maybe look on the remote for the file date, to see whether it was even uploaded in the run with good logs.

If you can get technical, browse the database read-only and search the RemoteOperation table.
The Timestamp column is in UNIX form. Convert it with whatever converter you like, e.g. EpochConverter.
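
If you’re comfortable with sqlite3, something along these lines would do it. Work on a copy of the database to be safe, and note the table and column names here are from memory, so check them with .schema RemoteOperation first:

sqlite3 -readonly /path/to/job-database.sqlite \
  "SELECT datetime(Timestamp, 'unixepoch'), Operation, Path
     FROM RemoteOperation
    WHERE Path LIKE '%idea09091bb4e43699ae1bb7c1359de07%';"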

If the file has an early history of not uploading, and so needs a deletion, that need is remembered in the database’s Remotevolume table as a row with Deleting as its State. The delete will be tried again at the start of the next backup.

I’m not sure it will fare any better next time unless some things get restarted to unlock WebDAV…

You can also research the 423 error, including a search specific to the behavior of your particular server.

Or maybe it cleared up on its own, and next backup log will show it being deleted without an error.

@eaDir is specifically discussed in the Filters article here, but let’s try to get this backup fixed first.
There’s also a bug in the GUI filter builder, so using its three-dot menu’s Edit as text works better.
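
When you do get to it, the exclude expressions in Edit as text would look something like this, one per line (I’m quoting the syntax from memory, so double-check it against the Filters article):

-*/@eaDir/
-*/#recycle/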

I think i misread the logs.

Yes, i did do this, but i also wanted to wait for your reply because you’re the expert here :). It seems like the backend (Nextcloud) puts a lock on files when they’re in use, and somehow Duplicati interfered with its own lock.

I reran the task and it completed in about a minute. I think i will run a “verify” task now to verify the integrity of the backup. Does that seem like a good idea?

Also i noticed that the compression - as expected - literally only saved 2 GB on 2.7TB. Does turning compression off entirely for that task specifically increase upload speed OR decrease cpu usage at all? To maybe completely skip the attempt at compressing each file. Also, is this still something i can do now that i have already made one successful backup? I did read there’s a list of default extensions that Duplicati will not compress, and .mp4 and .avi are in the list.

Once again i want to make clear that i’m extremely grateful for your help, and once i get everything going well and have done a couple more successful restore tests i will absolutely consider donating to Duplicati. The support i have been getting on this free piece of software has been hundreds of times better than the support for some paid software.

It’s a good start, however I’m not sure it’s much more thorough than the verification done before the backup.
If you decide you want to ensure pulling down all the files for a check against records, the test command would let you set as high a number as you like (or all). It can be run in a Commandline adapted for test.

Verifying gently per-backup can be done with backup-test-samples or backup-test-percentage options.
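
For example (the values are just illustrations; the per-backup default is 1 sample):

--backup-test-samples=3
--backup-test-percentage=1

Use one or the other (backup-test-percentage may not exist on older versions).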

Since you have upload-verification-file, the fastest way to check file integrity is an on-remote-server script run, using one of the DuplicatiVerify scripts in the Duplicati utility-scripts folder. I guess you’d use the Python one.
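
On the Nextcloud box that would be roughly the following, run against the folder holding the backup files (check the comments at the top of the script for the exact usage):

python DuplicatiVerify.py /path/to/backup/folder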

Nothing quite replaces a test restore of some sort. You should use no-local-blocks to guarantee a download. Alternatively, Direct restore from backup files on another machine can’t copy original source blocks, and it also has to create a partial temporary database, which proves that that will work. If you really want to, a Repair after temporarily renaming your Database will let you see how it does. These are like DR tests in advance.

I think you can change it any time you like, however you’d have to measure the speed differences yourself. Heavier compression would presumably use more CPU but might upload slightly faster. What is limiting you?
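
If you do experiment, the knob is the one you already set, for example (0 should mean store-only for zip):

--zip-compression-level=0

Since .mp4 and .avi are already on the default don’t-compress list, the measurable difference may be small either way.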

Thank you for that. There are some rather experienced volunteers, but more (at any level) would help a lot.

The verify button does do the verification task but doesn’t do much after that. No errors, but also no explicit message saying it succeeded.

I found the “commandline” page rather confusing but i managed to do something. I selected “test” from the “command” dropdown. The “target URL” is already there by default.
The “Commandline arguments” box has my source folder directory in there by default for some reason. This confused me a bit. I removed it and entered “10” to hopefully pass it through as the “samples” argument the command needs. I executed it and this is the output:

Finished!

Listing remote folder …
Downloading file (1.21 MB) …
Downloading file (1.21 MB) …
Downloading file (2.53 KB) …
Downloading file (2.53 KB) …
Downloading file (9.11 KB) …
Downloading file (2.53 KB) …
Downloading file (2.53 KB) …
Downloading file (2.53 KB) …
Downloading file (2.53 KB) …
Downloading file (10.26 KB) …
Downloading file (2.53 KB) …
Downloading file (4.34 KB) …
Downloading file (245.08 MB) …
Downloading file (245.08 MB) …
Downloading file (246.43 MB) …
Downloading file (245.03 MB) …
Downloading file (245.08 MB) …
Downloading file (249.59 MB) …
Downloading file (249.73 MB) …
Downloading file (245.08 MB) …
Downloading file (248.18 MB) …
Downloading file (245.08 MB) …
Examined 22 files and found no errors
Return code: 0

Which is great, but that’s 22 files, so what did i enter “10” for exactly? I thought i told it to test 10 files. Now i’m scared that if i enter like 500 i’ll be waiting for 2000 files to be tested :P.

I will look into this. Does Duplicati use the verification file remotely as well, to check file integrity on the remote server? I actually thought it would do exactly that, which is why i turned the option on. I also realize i should read more documentation before turning options on and off.

My idea was to turn compression off completely because on 2.7TB it only saved 2GB. If it is still attempting to compress each file as much as it can, that poor Pentium CPU in the NAS is going crazy for such a tiny win in storage space. The 2GB per ~3ish TB of bandwidth it saves also doesn’t really make a difference in the overall upload time of the backup, but maybe turning compression off will make a difference in processing time.

This seems interesting. I will look into this. Thank you!

I will absolutely do a test restore with a considerable amount of data in the near future from the NAS itself to make sure it works there as well.

Once again i want to say thank you for all the support you have provided me. You’re like an angel at this point!

Using the Command line tools from within the Graphical User Interface

Most command line tools need one or more commandline arguments. For example, if you want to delete a specific backup, you have to supply a version number to the Delete command. The default value for this field is the set of source folders selected for backup, but in most situations you have to change this.

You did the right thing, changing backup into test, meaning <source path> became <samples>.

Duplicati.CommandLine.exe backup <storage-URL> "<source-path>" [<options>]

Duplicati.CommandLine.exe test <storage-URL> <samples> [<options>]

Not really. The test page explains what samples means:

Verifies integrity of a backup. A random sample of dlist, dindex, dblock files is downloaded, decrypted and the content is checked against recorded size values and data hashes. <samples> specifies the number of samples to be tested. If “all” is specified, all files in the backup will be tested. This is a rolling check, i.e. when executed another time different samples are verified than in the first run. A sample consists of 1 dlist, 1 dindex, 1 dblock.

The posted output isn’t totally clear on what’s what, but a good guess is that the 10 files just under 250 MB are dblock files. The other 12 might be 10 dindex files plus both dlist files (the two at 1.21 MB), if you have few versions.

If you want to check the math, you can watch an About → Show log → Live → Information of file names.

It should be 1500 (500 samples × 3 files each), but if you’ve only got 2 versions, it’ll be 1002 (500 dblock + 500 dindex + 2 dlist). You can also just tentatively assume all is uploaded correctly. There should be a directory listing and size check done every backup, so the question is whether a WebDAV upload (or a file sitting on the remote) managed to go bad without that check noticing.

I spoke earlier of gradual remote verifications after backup, or total verification done on the remote system:

It’s a manual run. Duplicati has no way to run a verification script on the remote server, but you might, and unless your network bandwidth to the remote is very high, reading the remote files from the remote itself is likely much faster…

One thing the manual isn’t great with yet is giving usage details such as where the scripts are that you run.

How to verify duplicati-verification.json? (findable from the forum search near upper right) is an explanation.

I’m sorry if i’m not supposed to reply to old solved threads, but i want to thank you very much for all the help you have provided me. It all seems to be working well now. I have made a donation to Duplicati as a thanks for the great support and software.

How to verify duplicati-verification.json?

I will look into this :).

I actually have one more question that i think i already know the answer to, but i couldn’t really find a clear answer when googling: If i ever do need to restore my backups, is it possible to take the hard drive that my backup files are on out of the destination server, attach it to the source server, and then restore the backup over USB instead of over the internet? I assume this is possible but i just want to make sure it wouldn’t cause any problems.

Yes. The backup files are portable. You’ll of course need to rebuild the local database unless somehow the old one survived, but it should be faster locally, as should the restore. The easiest path for a one-shot restore is Direct restore from backup files, pointed at the USB drive. I think “Restore from configuration” is buggy, or that’d be a good path. Regardless, you should put an Export of your backup configuration somewhere safe.
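
If you ever want to do that from the command line instead of the GUI, a rough sketch (prefix with mono on Linux; the local path, restore target, and options are examples, and passphrase is only needed if the backup is encrypted):

Duplicati.CommandLine.exe restore "file:///mnt/usb-backup" "*" --restore-path=/tmp/restore-test --no-local-blocks=true --passphrase=...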