I have mounted my bucket using the s3fs file system on my Linux host.
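(For anyone wanting to reproduce the setup: a typical s3fs mount looks roughly like this. The bucket name and mount point below are placeholders, not my actual values.)

```shell
# Placeholder values: replace "my-bucket" and /mnt/s3 with your own.
# Credentials are read from ~/.passwd-s3fs ("ACCESS_KEY:SECRET_KEY", chmod 600).
s3fs my-bucket /mnt/s3 -o passwd_file="${HOME}/.passwd-s3fs"

# Quick sanity checks against the mount:
cp somefile.txt /mnt/s3/
ls -l /mnt/s3/
```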
From the CLI I can copy and list files to confirm that the operations are really happening.
I can also verify in the AWS Management Console that the files are there.
I tried using Duplicati for a backup: I selected “Local folder or drive” as the destination, pointed it at the s3fs-mounted folder, and the backup apparently completed correctly.
I can even do a restore, and it really does restore the files.
The only problem is I DON’T KNOW WHERE THE FILES GO.
They’re not listed in the Management Console, they’re not listed from the CLI on the local host where the s3fs folder is mounted, and I searched my local file system and they’re not there. So WHERE DID THEY GO?
Have they been written to another bucket that I don’t own?
I have >30 years of computer experience and never seen anything like it!
I checked the Duplicati logs, but they don’t say where the files are being written, so not much help there.
And before anyone asks: yes, before all that I did try to select the S3 backup method, but that consistently gave me an “internal error” message.
I’ve often used Duplicati before, mostly with the SFTP method, and I know I should be able to see the created *.zip.aes files at the destination.
Any suggestions greatly appreciated.
OK, found out what happened.
Forgot to say that I was running the Docker version, and the files were being written inside the container thanks to the twisted way I was mounting the volumes in docker-compose.
My problem now is that with the correct settings, the s3fs folder mounted on the host no longer shows up inside the container once the container is restarted.
Anyway at least I know where the files were going.
Hello, are you saying the Docker bind mount doesn’t work after restarting the container?
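If that’s the case, one possible cause (a guess on my part, since I haven’t seen your compose file): Docker bind mounts use private mount propagation by default, so if the s3fs FUSE mount on the host is established (or re-established) after the container has started, the container keeps seeing the empty underlying directory. Declaring the bind mount with rshared propagation lets mounts made on the host propagate into the running container. A sketch, with placeholder image, paths and service name:

```yaml
# Hypothetical docker-compose fragment; image, paths and service name are placeholders.
services:
  duplicati:
    image: duplicati/duplicati
    volumes:
      - type: bind
        source: /mnt/s3        # host directory where s3fs is mounted
        target: /source        # where the backup source appears in the container
        bind:
          propagation: rshared # propagate host (re)mounts into the container
```

Note that rshared requires the host side to be a shared mount; on most systemd-based distributions / is shared by default, otherwise something like `mount --make-rshared /` is needed first.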
Also, while I have never used s3fs, I have used S3 successfully with Duplicati (as have many others). Would you like to troubleshoot that so you can use the native S3 support in Duplicati?
Hi @drwtsn32, thank you for your offer.
I also didn’t mention in my initial post that I have used Duplicati with S3 on another host, with success.
Which is strange, because that host (also running the same Docker version of Duplicati) was behind a router that didn’t even have port 8200 open, and it worked wonderfully.
But I have a few VPSes, which of course all have their own public IPs, and Duplicati on all of them issues an “internal server error”. Again, exactly the same version as the one behind the router.
I’ve read elsewhere that it might be a DNS problem, which of course is possible (although they have all been up a good while on their current domains). But how to explain, then, that the one behind the router does work, when it has no domain of its own and the router responds to no domain apart from the ISP’s?
I just noticed that if I run resolvectl status, the DNS entry for my public-IP hosts is listed as invalid, whereas for my behind-the-router host it says "~.ispname" (where ispname stands for my real ISP’s name).
This is getting real complicated now.
Will investigate and tell you how it plays out.
Opening port 8200 on your router (by which I believe you mean forwarding that port to the internal machine running Duplicati) is not required for Duplicati to back up properly. Most home routers allow internal machines to talk out TO the internet without any special configuration.
(By the way - I would advise against exposing Duplicati to the internet by setting up a port-forward on your router. Developers have stated that the Duplicati web engine has not been hardened for such exposure. You’d be better off setting up a VPN if you need to access the web UI remotely.)
Yeah, you do need to make sure the Duplicati Docker container can use DNS. It will need to be able to resolve the hostname of your S3 endpoint.
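A quick way to check, assuming the container is named duplicati (adjust the container name and endpoint to your setup):

```shell
# Test name resolution from inside the container (getent is present in most
# glibc-based images; substitute nslookup or ping if it isn't).
docker exec duplicati getent hosts s3.amazonaws.com

# Compare with the host's resolver configuration.
resolvectl status
```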
Dull end to a complicated story: I don’t know why, but I decided to give it one last try on the VPS and … it worked.
I didn’t dream it before: it did say “internal server error”, and I tried many, many times, taking the Duplicati container down and back up again.
Weird but it now works like magic and I didn’t change anything…
Thanks to @drwtsn32 for his help on this, much appreciated
Glad you were able to get it working!