Duplicati backup through Sia appliance VM

I have Duplicati pointed at my Sia daemon (siad). The job (45GB) starts running and creates the first 1GB file (my configured upload volume size for the job). I monitor the creation. When the file gets just over 1GB and presumably should start uploading, it vanishes, and Duplicati throws this error:

/renter/upload/cbc-media-srv/duplicati-20180512T192559Z%2Edlist%2Ezip?source=%2Fmedia%2Fdata%2Fduplicati-temp%2Fdup-cc09793c-bec8-42ad-9eaa-670a1d47e101 failed, response: {"message":"upload failed: stat /media/data/duplicati-temp/dup-cc09793c-bec8-42ad-9eaa-670a1d47e101: no such file or directory"}

(/media/data/duplicati-temp/ is my Duplicati temp folder on my Sia and Duplicati server)

Any ideas?

Is it possible something is auto-cleaning the temp folder and sees that nice juicy 1G file as a good target for deletion?

Some tests I’d suggest include:

  • try a smaller “Upload volume size” (--dblock-size) setting (this can be changed later without causing any problems)
  • try a test job going to a local folder (this removes your Sia daemon as a potential source of the issue)
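One way to test the auto-cleanup theory is a small polling watcher that logs whenever files appear or vanish in the temp folder. This is just a sketch; the path is the one from the error message above, so adjust as needed:

```python
import os
import time

WATCH_DIR = "/media/data/duplicati-temp"  # assumed path, taken from the error message

def diff_snapshots(before, after):
    """Return (created, deleted) file-name lists between two directory listings."""
    return sorted(set(after) - set(before)), sorted(set(before) - set(after))

def watch(path, interval=1.0):
    """Poll `path` and print a line whenever a file appears or vanishes."""
    seen = os.listdir(path)
    while True:
        time.sleep(interval)
        now = os.listdir(path)
        created, deleted = diff_snapshots(seen, now)
        for name in created:
            print("CREATED", name)
        for name in deleted:
            print("DELETED", name)
        seen = now
```

Run `watch(WATCH_DIR)` in a terminal during a backup; if something external is deleting the volume, you will see the DELETED line before Duplicati reports its upload.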

I don’t have anything running cleanup. It’s just a vanilla Ubuntu Server VM dedicated to Sia. I changed the upload volume size from 1GB to 500MB, and I also upgraded Duplicati. This time Duplicati created 10 or so files. None of them disappeared, and they are still there. However, the job failed with the following errors:

Operation List with file attempt 1 of 5 failed with message: Object reference not set to an instance of an object
System.NullReferenceException: Object reference not set to an instance of an object
at Duplicati.Library.Backend.Sia.Sia.getResponseBodyOnError (System.String context, System.Net.WebException wex) <0x402c9d60 + 0x000c5> in :0
at Duplicati.Library.Backend.Sia.Sia.GetFiles () <0x40246930 + 0x00507> in :0
at Duplicati.Library.Backend.Sia.Sia+d__26.MoveNext () <0x40246640 + 0x00057> in :0
at System.Collections.Generic.List`1[T]..ctor (IEnumerable`1 collection) <0x7f7564343c70 + 0x001fb> in :0
at System.Linq.Enumerable.ToList[TSource] (IEnumerable`1 source) <0x400d3290 + 0x00070> in :0
at Duplicati.Library.Main.BackendManager.DoList (Duplicati.Library.Main.FileEntryItem item) <0x402457c0 + 0x00083> in :0
at Duplicati.Library.Main.BackendManager.ThreadRun () <0x40229510 + 0x0037f> in :0

And then retries.

And finally:

Fatal error
System.NullReferenceException: Object reference not set to an instance of an object
at Duplicati.Library.Main.BackendManager.List () <0x40244ce0 + 0x000df> in :0
at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, IBackendWriter log, System.String protectedfile) <0x402416f0 + 0x0015f> in :0
at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, IBackendWriter log, System.String protectedfile) <0x4023cc70 + 0x000ab> in :0
at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify (Duplicati.Library.Main.BackendManager backend, System.String protectedfile) <0x4023b720 + 0x001af> in :0

I will create a test job to another repository tomorrow.

So I tested to a local repository and had no issues.

I downgraded back to beta (so I could get the progress bar back). Then I flushed the DBs and recreated the job, with the upload volume size at 500MB. When I repeated the test, as soon as Duplicati finished the initial 500MB volume, the file disappeared and Duplicati threw this error message:

/renter/upload/shalom/kcs/duplicati-b558f7c01faa24a82a2f54c43f470bd61%2Edblock%2Ezip?source=%2Fmedia%2Fdata%2Fduplicati-temp%2Fdup-bb6104cb-755b-46d4-ae0b-00eaa73abbcd failed, response: {"message":"upload failed: stat /media/data/duplicati-temp/dup-bb6104cb-755b-46d4-ae0b-00eaa73abbcd: no such file or directory"}

I then adjusted the same job to point at a local repository and started it. When the first block was complete, it vanished from the temp folder, but it appeared in the local repository (the same disk, just a different folder, which might explain why it happened so quickly).

I’m curious about the process Duplicati is taking. I do not know if the process is the same for all the supported repositories, but at least in the case of Sia, and based on the error message, it appears to be (1) creating the block locally, (2) attempting to upload the block to the repo, and then (3) failing to wait for confirmation before it deletes the cached block. In other words, the error message seems to indicate the cached block is purged by Duplicati prior to uploading, or at least prior to completing the upload.
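That presumed sequence can be sketched roughly as follows (hypothetical names, not Duplicati’s actual code); deleting the temp file before `upload` returns would produce exactly the stat error above:

```python
import os
import tempfile

def backup_volume(data, upload):
    """Hypothetical upload sequence: write a temp volume, hand it to the
    backend, and only delete it after the upload call has returned."""
    fd, path = tempfile.mkstemp(prefix="dup-")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        upload(path)          # the backend must finish before we fall through
    finally:
        if os.path.exists(path):
            os.remove(path)   # premature cleanup here would cause ENOENT in the backend
    return path
```

If the cleanup ran concurrently with (rather than after) the upload, a slow backend like Sia would be the first place the race shows up.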

I am trying to examine logs from both ends, Duplicati server and Sia daemon, but I’m not seeing a lot that I can dig into or research.

Let me know if you have any additional suggestions. Also, which file in the source contains the code for the specific process above (whether the same for all repos or specific to the Sia repo)? I am a coder by night, so I would like to examine that and learn what order events are taking and clues to where it is breaking down (either on the Duplicati side or the Sia side).


Thanks for the detailed checking! It’s nice that you’ve confirmed the issue exists with a Sia destination but not a local one.

I believe the ACTUAL issue is being covered up by a problem with the getResponseBodyOnError() method itself. As far as I can tell something went wrong in GetFiles() which causes the call to getResponseBodyOnError() which then throws an Object reference not set to an instance of an object error instead of reporting the actual GetFiles() error.

So my guess is that the context string or the wex WebException being passed to getResponseBodyOnError is either null (unlikely) or cannot be cast into the expected types, thus ending up as null.

I have no idea why that would happen or why it’s happening to you and (apparently) not other people.

The GetFiles() code (which calls the getResponseBodyOnError() method when there’s a problem) is located here:

Well, I gave it a go… I downloaded the source, added some debug statements to learn more about the objects you referenced and what value they may/may not have, and then managed to compile the code. But then I couldn’t figure out which binary files of the lot I needed to drop into my Duplicati install (overwriting existing files) in order to test!

Maybe I should go into a tad more detail on what I am actually doing with Sia and Duplicati. I hope no one rolls their eyes at this!

I have a lab at home (just a Dell 1U host) with a handful of VMs. I wanted to back up my VM data to Sia, which is the most inexpensive cloud storage I know of at this time, and also test Sia itself (as I have not yet put anything on it). When I was testing Sia, I created a headless server VM and ran the Sia GUI over SSH (with X forwarding). The entire Sia blockchain is downloaded to that VM. I then pointed the Duplicati instance on my host (which has access to the VM data) at the Sia daemon running on the Sia VM. So in essence I have a “Sia backup appliance” running on the host, and the host backs up to the Sia cloud through it.

Now here was the trick. To get Duplicati to even make a successful connection to the Sia daemon, I had to do the following:

Run this command on my Sia VM: socat tcp-listen:8001,reuseaddr,fork tcp:localhost:9980
This exposes the Sia daemon, which listens only on localhost, on an external port; Sia very much complains when you attempt to open it up directly, even on a LAN.

Then I run the Sia-UI over an SSH connection with X forwarding. This allows me to unlock my wallet using the UI even though my VM is technically headless.

The Duplicati instance in question runs on the host. It successfully makes the connection to the Sia daemon on port 8001. But unfortunately, the bizarre symptom herewith described plagues me.

I may have to do a “lift and shift” of my local blockchain to the host itself and just run Sia and Duplicati together on the same OS. It would probably work then because I think this is what most people are doing. But this kind of defeats the purpose though as I really wanted to create my own “backup gateway appliance” to the Sia blockchain. For instance, I want to point my laptop and my gaming computer and a handful of other machines to the same Sia VM so that I have them all backing up to Sia through my single wallet and “appliance.” Does this use case make sense?

I think I recognized a few words here and there. :blush:

Assuming I followed you correctly, you are attempting to have Duplicati use “Sia over an SSH connection” as your destination, but when Duplicati tries to upload any files you get a “no such file or directory” (on upload?) or “object reference…” (on error?) message?

Is it possible to try connecting to Sia without the SSH connection just to test if it works? My guess is it will and, if so, that might help us narrow down what the issue is with the SSH pipe (maybe some unexpected additional ports need forwarding).

I usually just run the tray-icon from the compile folder. As long as there are no underlying database changes from your currently active version it should work just fine, automatically opening on a different port than your installed version.

Alternatively, you could run it in portable-mode so it’s fully separate from your local install.

So I am conducting two tests at this time.

First test: I created an 8GB tar.gz archive and split it into 1GB segments. Then I used scp to send a few of those over to my headless Sia VM. After that, I started Sia-UI over a forwarded X session to the Sia VM and was able to use the interface to start uploading the files. So far so good…

Second test: After the first is done, I will do the same thing but using the siac command line tool.

I think you are correct that somehow the internal port forward is not working as I desire. The point of it is to allow multiple Duplicati instances to send their backups to a single Sia VM, which then uploads them through my single wallet. The Sia VM is like an appliance or gateway of sorts that puts my backups on the cloud. I definitely don’t want Sia installed on every machine, because that means wasting all that redundant blockchain space!

I will keep pushing forward. Thank you for continuing to work with me on this issue and suggest things to test!

Thanks for the update. Your setup using a single Sia VM is an interesting idea! If it works out it might be worth doing a #howto guide for those power users that might be interested. :slight_smile:

Definitely! That is if this can actually work. :slight_smile:

So I tried another method of internal port forwarding.

Previous method (as root): socat tcp-listen:8000,reuseaddr,fork tcp:localhost:9980
New method (as root): ssh -g -L 8000:localhost:9980 -f -N jwilmoth@localhost

Unfortunately the issue remained.
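For reference, both forwards accomplish the same thing: accept TCP connections on one port and relay the bytes to another. socat and ssh -L are the right tools for the job; purely as an illustration of what they do, a minimal Python sketch of such a relay:

```python
import socket
import threading

def forward(listen_port, target_host, target_port):
    """Rough equivalent of `socat tcp-listen:<port>,reuseaddr,fork tcp:host:port`:
    accept connections on listen_port and relay bytes both ways to the target."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", listen_port))
    srv.listen(5)

    def pipe(src, dst):
        # copy until EOF, then close both ends of this direction
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
        except OSError:
            pass
        finally:
            src.close()
            dst.close()

    def accept_loop():
        while True:
            client, _ = srv.accept()
            upstream = socket.create_connection((target_host, target_port))
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv  # caller can read the bound port via srv.getsockname()
```

The key point: a forward like this only moves bytes. It cannot make files on the client’s disk visible to the daemon on the other end, which turns out to matter below.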

This evening I tried the following:

I mapped a larger volume to the Sia VM to ensure I had plenty of space in case Duplicati was trying to copy the 1GB backup files to the Sia VM before Sia then started uploading them; this didn’t clear up the symptom.

I ran the port forward method using socat above. Then I started up the siad daemon. Then I tested (1) siac -a localhost:8000 which was successful and then from another physical computer entirely (2) siac -a wil-sia-srv:8000 which was also successful!

This latter test was eye-opening because it verified the port forwarding was working just fine! I continued testing and was able to unlock my wallet, check status, etc. all from a remote physical computer across my LAN to the siad daemon running behind a port forward using socat.

Does this help us narrow down the symptom? I wish I understood more the flow of logic in Duplicati when the backup runs to a Sia destination. And I wish there was a way to get more logging information. I feel like we’re almost there. And to be honest, the thing driving me is the beauty of this configuration if it can work. I could have ALL my VMs with Duplicati pointing to a single Sia VM that is passing everything on up to the blockchain using a single wallet. This would be centralization at its best, only to be surpassed by a Duplicati single-pane-of-glass view of all the instances. (Hey, I can dream, right?! lol)

As far as I know Duplicati treats Sia like any other destination and simply does file lists, gets, puts, renames, and deletes.
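To make that concrete, here is a hedged sketch of what such a destination contract might look like (hypothetical names, not Duplicati’s actual interface), with an in-memory stand-in that is handy for testing the flow locally:

```python
from abc import ABC, abstractmethod

class Backend(ABC):
    """Hypothetical destination contract: list/get/put/rename/delete."""

    @abstractmethod
    def list(self): ...
    @abstractmethod
    def get(self, remotename, localpath): ...
    @abstractmethod
    def put(self, remotename, localpath): ...
    @abstractmethod
    def rename(self, oldname, newname): ...
    @abstractmethod
    def delete(self, remotename): ...

class MemoryBackend(Backend):
    """In-memory stand-in for a destination, for local experiments."""
    def __init__(self):
        self.files = {}
    def list(self):
        return sorted(self.files)
    def get(self, remotename, localpath):
        with open(localpath, "wb") as f:
            f.write(self.files[remotename])
    def put(self, remotename, localpath):
        with open(localpath, "rb") as f:
            self.files[remotename] = f.read()
    def rename(self, oldname, newname):
        self.files[newname] = self.files.pop(oldname)
    def delete(self, remotename):
        del self.files[remotename]
```

The Sia backend would implement the same five verbs by calling the siad HTTP API, which is why the local path it passes for `put` has to be meaningful to siad.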

So using siac -a with socat works but using it with ssh did not?

I’ve used SSH before, so I kind of recognize what your socat and siac commands are likely doing, but I’m not sure why they wouldn’t have worked in the first place.

If you have the opportunity, I’d suggest rebooting everything and seeing if the above commands still work… :wink:

Both of them worked to allow siac to control the siad daemon remotely.

I had to work on my AC yesterday, so I actually did completely power off all my VMs and my host. No luck this morning either.

The next step I’m going to take is see if any/all of my other Duplicati instances exhibit the same symptom. Maybe that will narrow the symptom down and uncover the root cause.

@JonMikelV I think I may have found why this is not working.

So up until now, and to my chagrin, I had not actually fully tested siac from a remote computer to the Sia VM. When I found that another instance of Duplicati could not back up to the Sia VM either, I decided to send the files manually using siac. Well, to my surprise, siac kept telling me it could not see the files – they did not exist! I was like, “What the heck, they are right there!” I even copied the files into the remote computer’s Sia folder where the siac binary was located, but no joy. I then thought that maybe siac was merely invoking the siad daemon, which was looking for the files within the Sia VM file system, so I mounted there a share from my remote computer on which the files resided. No joy – siac on the remote computer just wouldn’t see them. So then, with this share still mounted, I ran siac from the Sia VM. Voilà. It found the files (mounted from my remote computer) immediately and started uploading them.

So this leads me to conclude that siac can control siad remotely over this socat connection but cannot upload a file that resides on the remote machine: the source path is resolved on the siad host, not where siac runs. I think the same constraint is causing the Duplicati issue.
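That conclusion matches the shape of the error messages earlier in the thread: the siad renter-upload endpoint takes a `source` query parameter, and siad stat()s that path on its own filesystem. A sketch of building such a request in Python (the address, siapath, and local path are hypothetical example values, and nothing here is actually sent to a daemon):

```python
from urllib.parse import quote, urlencode
from urllib.request import Request

def build_upload_request(api_addr, siapath, source):
    """Build a siad renter-upload request. `source` is an absolute path that
    siad resolves on ITS OWN filesystem, which is why a path that only exists
    on the client machine fails with 'no such file or directory'."""
    url = "http://%s/renter/upload/%s?%s" % (
        api_addr,
        quote(siapath),                 # slashes in the siapath are kept
        urlencode({"source": source}),  # the local path gets percent-encoded
    )
    # siad rejects API calls that don't send this User-Agent
    return Request(url, method="POST", headers={"User-Agent": "Sia-Agent"})

# hypothetical example values mirroring the failing URLs above
req = build_upload_request(
    "localhost:8000",
    "cbc-media-srv/duplicati-test.dblock.zip",
    "/media/data/duplicati-temp/dup-example",
)
```

Nothing in the request carries the file contents; only the path travels over the tunnel, so the file itself must already be reachable from the siad host.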

I haven’t scrapped my deployment idea in the least, but I do have to rethink it with this limitation in mind.

Sounds like a good plan. It may not have been forward progress, but it was still progress. :slight_smile:

I got a workable solution! It’s not perfect since it requires some OS-level config, but at least it works and demonstrates proof of concept. I’m sure the Duplicati backend for Sia could be improved to eliminate the need for even this, but I’m not up to the programming challenge yet.

Sia server configuration:

  • Must have a cache volume (e.g. /media/cache) which can be internal storage or a mounted iSCSI volume
  • Must share cache volume via Samba
  • Must run socat to listen on external 8000 and forward to internal 9980
  • Must run siad which listens internally on 9980
  • Must have wallet unlocked (I use siac to do this and to check status on uploads etc)

Client(s) configuration:

  • Must have the Sia server’s cache volume mounted via CIFS/Samba at the same location (e.g. /media/cache); in other words, both server and client(s) must use the same local path to the same volume
  • Duplicati must be installed (GUI or headless is fine), and the global tempdir must reference the local path to the cache volume (e.g. /media/cache); it is recommended to append a unique identifier, such as the client’s hostname, to this (e.g. /media/cache/myclient1); this keeps the clients’ Duplicati files separate from each other
  • Duplicati jobs must be configured to use port 8000
  • Duplicati job dblock (upload) sizes can be whatever, but I used 1GB as per what I found others recommended for Sia blockchain
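A tiny sanity check for the shared-path requirement, as a sketch: run this on a client, then stat the printed path on the Sia server. If the server cannot see the file at that exact path, siad’s source paths will not resolve either. The probe file name is made up; the cache path is the example from the list above.

```python
import os

def write_cache_probe(cache_dir, probe_name="duplicati-cache-probe"):
    """Drop a probe file into the shared cache and return its absolute path,
    so the very same path can be checked from the Sia server side."""
    probe = os.path.join(cache_dir, probe_name)
    with open(probe, "w") as f:
        f.write("visible from the Sia server?")
    return os.path.abspath(probe)
```

For example, `write_cache_probe("/media/cache/myclient1")` on the client, then `ls` that path on the server; a failure means the mount or the path layout is wrong.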

Great work, thanks for sharing!

Just to confirm, are you really using a block size of 1G or is that your dblock (Upload volume size)?

Upload size. Thanks for the catch!

I am monitoring the Sia VM’s performance with top, and I may need to give it some more RAM. It currently has 4GB assigned, and with two uploads in progress, the siad process is capping at 83% memory usage. CPU usage is low, between 2 and 15%.

Ooof. But that’s on a VM running the Sia stuff, not any Duplicati code right?

Correct. However, that wasn’t enough. The Sia daemon crashed on me. :frowning: I have since boosted it to 8GB of RAM. I tried lowering the cores to 2, but found that caused the CPU usage to spike. So I am settling on 8g/4c for the Sia appliance VM specs.

Well, I just completed my first restore of a VM that was about 45GB in size and 15GB in compressed backup file data (2 versions) on Sia blockchain.

It was successful from start to restore to import into VirtualBox to power up to web UI access!

Surprisingly, it took less than 15 minutes to restore. I do have a 200Mbps downstream (which means 15GB / 2 versions = 7.5GB @ 200Mbps = 5 min best case), but I still expected the download from the Sia blockchain to take a lot longer. Maybe I am mistaken. Just to be sure, is there any source-side deduplication taking place whereby a restore could technically be pulling blocks from the local source (i.e. protected data) and the remote source (backup repo, in this case the Sia blockchain) to compile restore results at my restore path?

For example:

  1. Block A and Block B exist in Sia repo
  2. Block A exists in protected data set (unchanged)
  3. Restore job chooses Block A from (2) above since local is faster
  4. Restore job chooses Block B from (1) above since only found therein
  5. Restore job merges Block A and Block B in restore location
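The back-of-the-envelope timing above can be sanity-checked (decimal gigabytes assumed; protocol overhead and Sia host latency ignored):

```python
def transfer_minutes(gigabytes, mbps):
    """Best-case transfer time for a given amount of data at a given line rate."""
    bits = gigabytes * 1e9 * 8        # decimal gigabytes -> bits
    return bits / (mbps * 1e6) / 60   # bits / (bits per second) -> minutes

# 15GB of backup data across 2 versions ~= 7.5GB fetched for one version
minutes = transfer_minutes(7.5, 200)
print(round(minutes))  # -> 5
```

So 5 minutes is indeed the theoretical floor for 7.5GB at 200Mbps; a sub-15-minute restore is fast but not impossible even without any local-block shortcut.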