I've just started using Duplicati on Fedora and I'm finding the speed to be very slow. My initial full backup has been averaging about 12 GB per hour for the last 6 hours on a 400 GB backup. The destination is an internal disk which will accept a 12 GB file copy in about 2 minutes, which works out to roughly 360 GB per hour. I can understand some additional overhead, but taking 30 times as long for a backup? Is there something I'm missing, or something I can do to improve performance?
Do you have any monitoring tools that might identify stressed resources while the backup is running? I would expect the bottleneck to be CPU, disk IO, memory constraints (leading to MORE disk IO), or network bandwidth (not likely an issue in this scenario).
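If you don't already have something in place, a few generic Linux tools run from a terminal while the backup is going would tell us a lot. Nothing here is Duplicati-specific, and this is just a suggested starting point:

```bash
# iostat comes from the sysstat package on Fedora
sudo dnf install sysstat

# Per-device disk utilization and wait times, refreshed every 5 seconds
iostat -x 5

# Memory, swap, and overall CPU pressure, refreshed every 5 seconds
vmstat 5

# Interactive per-process view - look for the mono / Duplicati process
htop
```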
My system isn't very stressed. It's a Core i7 with 16 GB of memory. The source disk is a mirrored btrfs filesystem. The destination is a single ext4 filesystem on the same system. The 12 GB file copy test was done while the backup was running.
I cancelled the Duplicati backup because a ~33-hour backup of 400 GB is just silly. I was having some minor but annoying issues with Gnome Backup (deja-dup) flaking out on the encryption password during incrementals, which is why I gave Duplicati a try. I just started a fresh backup with deja-dup using the same source file set and destination, and it has completed more in 1 hour than Duplicati did in 6 hours.
Yeah, that sounds quite a bit more reasonable. If you're interested in working with us to identify the Duplicati issue you're seeing, that would be great! But if Deja-Dup is otherwise doing what you want and you choose to stick with that, I wish you luck - we'd rather have people backing up with just about ANYTHING than not having backups at all!
Sure, I would love to assist in identifying a potential issue. Duplicati looks to have some great features and I'd like to continue using it if I can get past this initial performance issue. If you can give me a few pointers on how to start troubleshooting effectively, I'll give it a shot.
Great! Let’s start with the basics - would you mind pasting an Export of your backup job “As Command-line” here with any important text (passwords, hashes, etc.) replaced? That should give me something I can try to simulate on my system to see if it’s also slow.
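For reference, the export usually ends up looking something like the line below. This is just an illustration with made-up paths and values, not your actual job - yours will show whatever settings the job really uses, which is exactly what I'd like to see (with the passphrase and any hashes redacted, of course):

```bash
# Illustrative only - binary path, destination, source, and option values are assumptions
mono /usr/lib/duplicati/Duplicati.CommandLine.exe backup \
  "file:///mnt/backup/duplicati/" \
  "/home/scott/" \
  --backup-name="Home backup" \
  --dblock-size=50mb \
  --encryption-module=aes \
  --passphrase="REDACTED"
```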
Oh - and I assume that if you run something like htop while the backup is going, none of the "graphs" (CPU, Mem, Swp) show much higher than normal…
Nothing is abnormal when examining resource usage. No swap is used and memory usage is fine. mono is the highest CPU user during the backup, but not extreme (`top` shows 65-90 %CPU for the process on an 8-core system, i.e. less than one full core).
Scott_Brown, I edited your post to surround your command line with ` (backticks) to help it stand out more as a command.
Thanks for the settings - I'll see if I can get a Fedora VM going tomorrow and try running a similar command on my (decidedly less powerful) system.
Oh, and you mentioned this backup is of about 400GB of data but I don’t recall seeing a file count - if you’ve got a rough estimate that might help me better simulate things than just making a single 400GB junk file.
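If you don't have a number handy, a quick count along these lines should be close enough (the path is just a guess at your source - adjust it to match whatever your job actually backs up):

```bash
# Rough count of regular files under the backup source; -xdev keeps find on one filesystem
find /home -xdev -type f | wc -l
```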
Roughly 1 million files. The usual junk in dot directories in ~, web dev files and local git repos, around 100 GB of VMware disk images, and tons of ripped music.
Ooof - maybe I’ll start with a simple Fedora test and see what happens there.
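To get test data that's at least vaguely shaped like that (lots of small files plus a handful of big ones), I'll probably generate junk data with a quick script along these lines - everything here is made up and scaled well down for my test VM:

```bash
#!/usr/bin/env bash
# Build a scaled-down junk data set: many small files plus a few large ones.
mkdir -p ~/testdata/small ~/testdata/large

# Lots of small files (stand-in for dotfiles, web dev files, and git repos)
for i in $(seq 1 10000); do
    head -c 4096 /dev/urandom > ~/testdata/small/file_$i.bin
done

# A few large files (stand-in for VM disk images and ripped music)
for i in $(seq 1 5); do
    head -c 1G /dev/urandom > ~/testdata/large/image_$i.bin
done
```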
Since this is a custom test environment, is there a particular flavor of Fedora you're using? I was planning on using whatever ISO showed up on getfedora.org.
Also, this may not apply to your situation but if you’re interested there’s a discussion going on about settings and performance over here.
Scott, I apologize for not getting back to you yesterday - I’m having “issues” getting Fedora installed so haven’t had a chance to test anything out yet.
I'll try again with VirtualBox and see how far I can get. The performance won't be as good, but it may still let me identify problem areas.