I’ve been trying to use Duplicati for a while now to replace CrashPlan as a cross-offsite backup solution between my parents’ house and mine. I’ve gotten everything set up so that it should work…but something is nearly always broken on one system or another.
1.) Primary home server - Linux-based (unRAID) - runs Duplicati in a Docker container.
Has 2 backups configured. One goes to a local mount point/file share on a single backup drive; this tends to run fine. The other goes offsite to an Ubuntu server at my parents’ house, where the endpoint is a Minio Docker container (Amazon S3-compatible storage). This endpoint gives me nothing but trouble and hasn’t worked since 6/15/2018. Connection failures I understand and can usually fix, but even when the connection tests fine I can’t get it to do anything. Recently I’ve been getting an error telling me to repair the database; when I attempt that, it just sits on “Starting…”. I’m trying a recreate now and it only says “Recreating database…” IS THERE NO WAY TO SEE WHAT THE @Q#$%& IT’S DOING?! The logs show N.O.T.H.I.N.G. in the web UI, and tcpdump on my host only shows a few packets every few minutes going from my server out to their IP.
2.) My desktop - Windows 10 - backs up to Minio on my server and on my parents’ server. My biggest complaint here is that Duplicati doesn’t seem to want to launch at startup, so I went months without a backup because I didn’t know it wasn’t running. Any way to fix this that I’m missing? It backs up to the same endpoints as 1. Not sure if it’s having any better luck right now or not…I’ll let you know when the current “verify backend data” finishes. Though it does say the last offsite backup took 23:23:10 (W.T.F.?!)
3.) Parents’ server - Ubuntu 18.04.1 - running the same Minio Docker image I have on mine for an S3 endpoint. This one isn’t actually running Duplicati…it’s just an endpoint.
4.) Mom’s desktop - Windows 10 - Honestly, I haven’t checked up on this in a while. I need to go visit and check it out. I know it was having trouble with remote backups to my server. I believe it was backing up to their server just fine.
Minio endpoints are accessed via Let’s Encrypt Docker containers that handle the HTTPS certs for all internet traffic to/from both boxes. Each is reachable at a dynamic DNS name via web browser from the opposite location.
The original backups from my server and desktop were done with my parents’ server on my local network; the server was then moved offsite and the connection settings updated.
When data transfers were occurring they were SLOW AS *&%^! I’ll need to get the connections re-established to verify speeds, but it was in the Kbps range (measured from mine to theirs). I have ~100 Mbps down and about 15 Mbps up on a good day; they’ve got 60 down and ?? up…shouldn’t be far off mine, since we’re both on the same provider and about 15-20 miles apart geographically.
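Just to put the scale in perspective (rough math, decimal units, assuming a fully saturated link - real throughput would be worse):

```python
# Rough best-case time for an initial backup over a home uplink.
# Assumes decimal units (1 GB = 1000 MB = 8000 Mb) and full link saturation.
def upload_hours(size_gb: float, uplink_mbps: float) -> float:
    megabits = size_gb * 1000 * 8           # convert GB to megabits
    return megabits / uplink_mbps / 3600    # megabits / Mbps = seconds; then hours

print(round(upload_hours(637, 15), 1))  # ~94.4 hours (~4 days) at my 15 Mbps up
```

So even at my full upload speed the initial ~637 GB would take about four days; at Kbps rates it would effectively never finish.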
Now my desktop is running verification against my main server…it says 100% progress, but the top bar still says “Running…” and nothing appears to be happening. Is there ANY way to tell if this is doing something?!
Are there any really good guides for making sure your setup is correct? Sorry for ranting, but I’m getting really worried that I can’t get good backups in place for two households. With all our family memories being digital these days, I need to get this working.
Quite a lot of stuff going on there, but let me see if I can help address one of your frustrations at least a little.
The web UI (as is typical for such “user friendly” interfaces) leaves something to be desired when debugging, but you can also open it in another tab, go to About -> Show log -> Live, and crank the logging way up. The Profiling level can be overwhelming, but Information should give you at least some more insight into what’s going on.
There are a lot of options to control logging. On newer Duplicati canary versions (not yet in a beta, I think), you can specify separate settings for console logging and log-file logging. You can look over the logging options here.
Another way to get a traditional line-by-line view of the activity is to use the job’s Commandline menu option with the advanced options set to the desired verbosity level. Use console-log-level on the newer canary versions.
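For example, exporting the job via Export -> As Command-line and then running it with the logging cranked up looks roughly like this (a sketch - the storage URL and paths are placeholders from your own export, and the split log options are the newer-style ones, so check your version has them):

```shell
# Placeholders: substitute the URL and paths from your own job's
# Export -> As Command-line output.
duplicati-cli backup "s3://my-bucket/backup" /source/path \
  --console-log-level=Information \
  --log-file=/tmp/duplicati-job.log \
  --log-file-log-level=Profiling
```

That way the console shows readable progress while the log file catches the full Profiling detail for later digging.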
One Commandline caveat: if you do something besides a backup, the command line might need to change.
Even the increased logging might not show you everything you want, but you might as well get what’s there…
Does that mean you installed it as a service, expect it to start at boot, but find after boot that the service is stopped? Does it start manually? On some systems (such as mine…), setting the service to “Automatic (Delayed Start)” avoids this. There’s a technical explanation for this (involving updates - do you have any?), but first let’s learn the situation.
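And for reference, if it turns out the Windows service is simply gone, re-registering it should (I believe) just be a matter of running the bundled service wrapper from an elevated command prompt in the Duplicati install folder:

```shell
REM Run from an administrator prompt in the Duplicati install directory.
Duplicati.WindowsService.exe install

REM And to remove the service again later:
Duplicati.WindowsService.exe uninstall
```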
hmmm… Well, I could have sworn it was installed as a service, since it used to start at boot, but I can’t find it in my services list. Sorry…I wasn’t sure what mechanism it was supposed to use to start at boot (startup item, service, etc.). Is it possible a Windows update could have removed the service? Is there a way to re-register it?
Also, “biggest complaint here” just meant with that machine…
I wouldn’t have thought a monthly or even the twice-yearly Windows update would remove the service, but the twice-yearly one can clobber Duplicati configuration and database files if they are kept in the SYSTEM profile.
I know that there are other issues, but I thought I’d start where I could. Some others may need log information.
One other thing I could mention is that Linux distro versions of Mono seem to be a chronic problem, but there aren’t enough specifics here yet to advise an upgrade - unless you want to try it anyway, since the newer versions often work better.
Do you have anything else going to the Minio remotely, or between houses using any other communications?
OK, so it turns out I do have a shortcut to Duplicati.GUI.TrayIcon in my personal startup folder…it just apparently never actually starts.
I made the mistake once of setting my dockerized Duplicati to canary and updating it, letting a backup run, then updating the Docker image, which downgraded my version back to stable and caused a version mismatch in my backups. I’m not fond of attempting a Mono upgrade inside a Docker container only to have it revert later…though I suppose I could set the container not to auto-update and just do Duplicati updates manually… I will consider this.
Nothing else hits Minio on the servers. Mine gets hit only by my mom’s desktop and my desktop; theirs gets hit by her desktop, my server, and my desktop. I’d have to double-check the schedules, but I was pretty sure they were spaced far enough apart to avoid concurrency issues…though not with one task taking 23 hours!
Generally the advice seems to be to use the latest Mono - it tends to fix odd Duplicati issues without breaking Linux - although if you have concerns about this, use your judgment. I know the latest fixed an odd TLS failure…
I’ve got Duplicati running in a Docker container on unRAID and haven’t had any issues with Mono, but then I’m using local and Box.com destinations, so maybe they just don’t trip the SSL certificate issues.
Are you using the LinuxServer or Duplicati container?
I set up a Minio destination but never really got it working so moved to SFTP.
With the off-site support you seem to be doing you might want to consider a reporting aggregation tool like Dup-Report so you have a better idea of what might not be running.
I used the linuxserver.io container until an official Duplicati one (duplicati/duplicati) came out a few months ago.
Since the local destination is working fine I’m going to assume your container isn’t the problem. Just to check if it’s a connectivity issue, maybe you could set up a Minio container on the same box and use that as a destination for a few runs.
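If it helps, a throwaway Minio container for that kind of test is (roughly - the credentials and host path here are just examples) something like:

```shell
# Disposable Minio instance for testing; keys and host path are examples only.
docker run -d --name minio-test -p 9000:9000 \
  -e MINIO_ACCESS_KEY=testkey \
  -e MINIO_SECRET_KEY=testsecret123 \
  -v /tmp/minio-test:/data \
  minio/minio server /data
```

If backups to a same-box Minio run clean, that points the finger at the link between the houses rather than the container or the S3 backend.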
Sorry to fall off the planet - I’m back. I’ve switched to the official Duplicati container and moved to the experimental branch. Still no joy. Now even my local backup-to-folder job is having issues: I got a warning that a bunch of files were missing, and it’s been recreating the database…for the last ~5 days. Am I just backing up too much for Duplicati to handle? The source is 637.18 GB; the backup is 673.69 GB. This is my local server -> local server self-backup and should be the least complicated.
If I’m reading the logs right, it’s essentially pulling every DB record, checking the data against my filesystem, and re-creating the DB records since they’re somehow wrong - and it’s taking a LONG ASS TIME!!! Why are my databases constantly corrupted? It seems my backups are always broken and I can never get them fixed. Recreating them over and over again isn’t an option, and I’m starting to worry I’m going to wear out my disks.
Now it’s my turn to apologize - stupid family holidays!
A source of 640 GB shouldn’t be an issue - and the backup being larger than the source likely means you’ve got multiple versions of some files and/or you’re backing up non-compressible data.
Database recreates can be slow - and yes, multiple day kind of slow. We know this is an issue and are working to improve that but are not near any sort of release yet.
Until that time comes, one thing to consider is breaking your backup into multiple jobs. This keeps each job’s database smaller (and thus faster), and if something does happen to one of them you’ll have less to rebuild.
As for why you’re having so many issues, is it possible you’re storing the Duplicati databases IN the Docker container? If so, this could cause problems like:
- running out of space in the container (potentially corrupting the database due to failed writes)
- the database being deleted when the container is updated (linuxserver containers seem to get updates just about every week!)
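If the databases are inside the container, the usual fix is to map Duplicati’s data folder out to the host so they survive image updates. With the official image that’s something like this (the host-side paths are just examples - adjust for your unRAID share layout):

```shell
# Map /data (where the official image keeps Duplicati's databases) to the host
# so container updates can't wipe it. Host paths below are examples.
docker run -d --name duplicati \
  -p 8200:8200 \
  -v /mnt/user/appdata/duplicati:/data \
  -v /mnt/user:/source:ro \
  duplicati/duplicati
```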
Oh, and there was a bug found in the official Duplicati container. It really only affected certain destinations (they weren’t even available to be chosen as a destination), so I’m pretty sure that’s not an issue you’re having. But if you want the latest, consider the current experimental build (though I believe a beta with the fix should be coming out shortly if you’d prefer to wait).
Thanks. I’ve switched to the official Docker image, and yeah, in doing so I inadvertently messed up the pointers and locations for some of my databases. I’ve sorted that out by moving /data and /config out of the container properly, and I’m working on creating a bunch of smaller backups. I’ve been on the latest experimental version since the switch as well. The destination is S3-compatible (Minio S3 server), if that makes a difference for the bug you mentioned. I’ll let this setup ride for a week or two and see how it goes. If it works out, I’ll need to move my remote server back to being remote and see if it keeps running (the remote box is on-site right now until my initial re-backups are settled).
Will the fixed build be released on the experimental branch too? I remember seeing something about a bug in the beta (stable) branch, and it sat stagnant for a while. I’d rather stay on the faster-moving branch for now (I was on canary for a while…), but only if beta releases also come out on experimental. It seems like builds should bubble up: a build starts as a canary, becomes an experimental when tested well enough, and if it reaches release, goes out as a beta. Are you not doing “bubble up” paths like that with your build lanes?
Releases shows the history. Experimental doesn’t seem to happen much except as the lead-up to a beta, so beta releases do sort of come out on experimental - just earlier. If you like faster-moving, try canary…
experimental didn’t get a release matching the new beta, though… I tried changing my container to :beta and it didn’t work right - all the drop-down menus (advanced settings, destination, etc.) were empty. I switched back to experimental and it’s fine again. Any recommendations on how to upgrade/change branches on the Docker containers?
I have very little Docker experience, but I can refer you to “OnedriveV2 not available as provider choice”, which got into this. Several of those users follow the forum (so they might see this), and maybe someone else will chime in. It might help to describe in advance how you’re changing your container: is it a different image, or is it treated like a non-container system typically would be, where you use the Duplicati updater and make manual changes?
I got it to work. I internally upgraded to beta inside the experimental container, then used the specific version tag for the new beta to pull a fresh container image (apparently :beta isn’t up to date, because the error said I had downgraded the DB). After that everything worked fine.