Slow Duplicati Backup on macOS


Hi, I’m having some issues with Duplicati on my macOS computer. It is very slow to back up; it takes a long time to back up a folder with no more than 3 images.
On Windows, which I also tried, I do not have this problem.



Hi @andresalvear10, welcome to the forum!

Sorry to hear you seem to be having speed issues with your macOS backup.

Note that performance varies a LOT by hardware (CPU, memory, drives, internet speed, etc.), and remember the first backup is a LOT slower than subsequent “delta” backups.

That being said, unless you’ve got gigantic images it shouldn’t take very long to handle a backup of just 3 of them…

Are you using the same version of Duplicati on both the macOS and Windows systems (and what are they)?

Are the settings (encryption, compression, destination) about the same on both systems?


Hi @JonMikelV, the settings on both systems are the same, and I’m using the latest version. Yes, I know this:

Note that performance varies a LOT by hardware (CPU, memory, drives, internet speed, etc.), and remember the first backup is a LOT slower than subsequent “delta” backups.

But, unfortunately, the Mac computer is “very good”: it is a latest-generation Mac. I have tried a different Mac computer and got the same issue.


It sounds like your machine should be able to handle the Duplicati load just fine, so I’m not sure what would be causing the problem.

Hopefully @Pectojin might have some thoughts on this if he’s available.


What’s the log output from the backup? Do you notice a specific part of the backup taking the longest?

Also, is this a completely clean backup, or were the 3 images added to an existing backup?


I am having a similar issue. I am new to Duplicati and am backing up to BackBlaze. My first backup was 35Gb of my favorite items and the speeds were fine (>600Kb/s). Once that completed, I added another 30Gb to the backup, and now things have been super slow (<3Kb/s) for about 8 hrs. Any ideas?


I suspect it’s due to the overhead of all the database lookups to see whether or not all the new blocks already exist.

Assuming that is the issue, the only things I can think of are:

  • put the database on a faster drive (SSD or RAM drive)
  • upgrade to a newer version where some performance improvements have been added (though at the moment you’re running the newest beta, so you’d have to switch to the canary update path, which I wouldn’t recommend)
  • create a second backup job for additional data. It can go to the same destination, but should be in a different folder

Note that once that initial backup of the new data is done you’ll likely see the speed improve as Duplicati will only have to do database lookups for changed data.
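To illustrate why those lookups add up, here is a minimal Python sketch of block-level deduplication (a simplified illustration, not Duplicati’s actual code): each file is split into fixed-size blocks, each block is hashed, and every hash is checked against the set of already-stored blocks, so the per-block lookup cost grows with the size of the existing backup.

```python
import hashlib
import os

BLOCK_SIZE = 100 * 1024  # Duplicati's default block size is 100 KB

def backup_blocks(data: bytes, known_hashes: set) -> list:
    """Split data into fixed-size blocks and return only the blocks
    whose hash is not already in the (database-backed) hash set."""
    new_blocks = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).digest()
        if digest not in known_hashes:   # one lookup per block
            known_hashes.add(digest)
            new_blocks.append(block)
    return new_blocks

# First backup: all blocks are unseen and would be uploaded.
data = os.urandom(250_000)
known = set()
first = backup_blocks(data, known)      # 3 new blocks
# Re-running on unchanged data: every lookup hits, nothing to upload.
second = backup_blocks(data, known)     # 0 new blocks
```

In a real backup the hash set is an SQLite database on disk rather than an in-memory set, which is why database and drive speed dominate once the backup holds hundreds of gigabytes.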


I am still struggling with this issue. New information: if I stop the backup, restart the computer, and then restart the backup, I get speeds in excess of 1 Mb/s. It then gradually slows down so that after a few hours it is running at about 1 Kb/s. During that time it will back up around 2Gb, but then it crawls and does nothing.

Note: I am running on a MacBook Pro with an SSD, so the database is already on an SSD.
I have not installed a canary version. Do I really need to in order to use Duplicati?

Question: it was suggested that I make multiple, smaller backups. I can try this. Do I have to abandon the 300Gb I have already backed up to make multiple new backups?

Ideally, I would like to get through the initial backup of my 600Gb. I just can’t get there when I have to spend time every few hours restarting the system to get a few more Gb done.

Here is info that may help:

APIVersion : 1
PasswordPlaceholder : **********
ServerVersion :
ServerVersionName : -
ServerVersionType : Beta
BaseVersionName :
DefaultUpdateChannel : Beta
DefaultUsageReportLevel : Information
ServerTime : 2018-08-11T12:09:24.440295-04:00
OSType : OSX
DirectorySeparator : /
PathSeparator : :
CaseSensitiveFilesystem : false
MonoVersion :
MachineName : Marks-MacBook-Pro-2.local
NewLine :
CLRVersion : 4.0.30319.42000
CLROSInfo : {"Platform":"Unix","ServicePack":"","Version":"","VersionString":"Unix"}
ServerModules : []
UsingAlternateUpdateURLs : false
LogLevels : ["Profiling","Information","Warning","Error"]
SuppressDonationMessages : false
SpecialFolders : [{"ID":"%MY_DOCUMENTS%","Path":"/Users/markdaubenmier"},{"ID":"%MY_MUSIC%","Path":"/Users/markdaubenmier/Music"},{"ID":"%MY_PICTURES%","Path":"/Users/markdaubenmier/Pictures"},{"ID":"%DESKTOP%","Path":"/Users/markdaubenmier/Desktop"},{"ID":"%HOME%","Path":"/Users/markdaubenmier"}]
BrowserLocale : {"Code":"en-US","EnglishName":"English (United States)","DisplayName":"English (United States)"}
SupportedLocales : [{"Code":"cs","EnglishName":"Czech","DisplayName":"čeština"},{"Code":"da","EnglishName":"Danish","DisplayName":"dansk"},{"Code":"de","EnglishName":"German","DisplayName":"Deutsch"},{"Code":"en","EnglishName":"English","DisplayName":"English"},{"Code":"es","EnglishName":"Spanish","DisplayName":"español"},{"Code":"fi","EnglishName":"Finnish","DisplayName":"suomi"},{"Code":"fr","EnglishName":"French","DisplayName":"français"},{"Code":"it","EnglishName":"Italian","DisplayName":"italiano"},{"Code":"lt","EnglishName":"Lithuanian","DisplayName":"lietuvių"},{"Code":"lv","EnglishName":"Latvian","DisplayName":"latviešu"},{"Code":"nl-NL","EnglishName":"Dutch (Netherlands)","DisplayName":"Nederlands (Nederland)"},{"Code":"pl","EnglishName":"Polish","DisplayName":"polski"},{"Code":"pt","EnglishName":"Portuguese","DisplayName":"português"},{"Code":"pt-BR","EnglishName":"Portuguese (Brazil)","DisplayName":"português (Brasil)"},{"Code":"ru","EnglishName":"Russian","DisplayName":"русский"},{"Code":"sk-SK","EnglishName":"Slovak (Slovakia)","DisplayName":"slovenčina (Slovensko)"},{"Code":"zh-CN","EnglishName":"Chinese (Simplified)","DisplayName":"中文 (中国)"},{"Code":"zh-HK","EnglishName":"Chinese (Traditional, Hong Kong SAR China)","DisplayName":"中文 (中国香港特别行政区)"},{"Code":"zh-TW","EnglishName":"Chinese (Traditional)","DisplayName":"中文 (台湾)"}]
BrowserLocaleSupported : true
backendgroups : {"std":{"ftp":null,"ssh":null,"webdav":null,"openstack":"OpenStack Object Storage / Swift","s3":"S3 Compatible","aftp":"FTP (Alternative)"},"local":{"file":null},"prop":{"s3":null,"azure":null,"googledrive":null,"onedrive":null,"cloudfiles":null,"gcs":null,"openstack":null,"hubic":null,"amzcd":null,"b2":null,"mega":null,"box":null,"od4b":null,"mssp":null,"dropbox":null,"sia":null,"jottacloud":null,"rclone":null}}
GroupTypes : ["Local storage","Standard protocols","Proprietary","Others"]
Backend modules:

Compression modules:

Encryption modules:

Server state properties

lastEventId : 89
lastDataUpdateId : 7
lastNotificationUpdateId : 2
estimatedPauseEnd : 0001-01-01T00:00:00
activeTask : {"Item1":3,"Item2":"3"}
programState : Running
lastErrorMessage :
connectionState : connected
xsfrerror : false
connectionAttemptTimer : 0
failedConnectionAttempts : 0
lastPgEvent : {"BackupID":"3","TaskID":3,"BackendAction":"Put","BackendPath":"","BackendFileSize":52389837,"BackendFileProgress":52389837,"BackendSpeed":6265,"BackendIsBlocking":true,"CurrentFilename":"/Users/markdaubenmier/Documents/Pictures & Video/2010 Decade/2015 Pictures/2015.04.01 Trip to the US/IMG_2052.JPG","CurrentFilesize":4270830,"CurrentFileoffset":1843200,"Phase":"Backup_ProcessingFiles","OverallProgress":0,"ProcessedFileCount":55946,"ProcessedFileSize":307967128985,"TotalFileCount":106553,"TotalFileSize":658285479687,"StillCounting":false}
updaterState : Waiting
updatedVersion :
updateReady : false
updateDownloadProgress : 0
proposedSchedule : [{"Item1":"3","Item2":"2018-08-11T23:00:00Z"}]
schedulerQueueIds : [{"Item1":4,"Item2":"3"}]
pauseTimeRemain : 0


Not at all. You can have multiple jobs backing up the same data if you want (I do that all the time). One of the benefits there is that each job can have different settings, like destination, run frequency, retention policy, etc.

If you go to the main menu “About” -> “Show log” page, click on the “Live” tab, and select “Profiling”, you should see a self-updating list of individual commands being executed.

When it starts getting slow can you tell what the last command is that seemed slow?

If that’s too awkward, you could also try running with --log-file=[path] and --log-level=profiling which will produce pretty much the same content but in a text file.
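Once you have a profiling log file, a short script can surface the slowest statements. This is a sketch under assumptions: the sample log lines below are hypothetical, and real Duplicati profiling lines may differ in detail, though they typically end with a “took <duration>” timing — adjust the regex to match your actual log.

```python
import re

# Hypothetical profiling-log excerpt; real lines will differ in detail,
# but the trailing "took D:HH:MM:SS.fff" timing is what we parse here.
log_lines = [
    'ExecuteNonQuery: INSERT INTO "Block" ... took 0:00:00:00.004',
    'ExecuteScalarInt64: SELECT "ID" FROM "Block" ... took 0:00:00:12.500',
    'ExecuteNonQuery: UPDATE "File" ... took 0:00:00:00.031',
]

TOOK = re.compile(r'took (\d+):(\d+):(\d+):(\d+)\.(\d+)$')

def duration_seconds(line: str) -> float:
    """Parse a trailing 'took D:HH:MM:SS.fff' duration into seconds."""
    m = TOOK.search(line)
    if not m:
        return 0.0
    d, h, mnt, s, frac = m.groups()
    return (int(d) * 86400 + int(h) * 3600 + int(mnt) * 60 + int(s)
            + int(frac) / 10 ** len(frac))

# Sort slowest-first to see which statements dominate the run time.
slowest = sorted(log_lines, key=duration_seconds, reverse=True)
```

If a `SELECT` against the block table keeps showing up at the top, that points back at the database-lookup overhead discussed above.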

What I’m wondering is if there’s something silly going on with the dblock file uploads like maybe connection re-use is slowing down for some reason.

Hang on… you’re backing up to BackBlaze, right? I don’t use them myself, but they wouldn’t happen to have a daily upload limit of around 2G, would they?


Thanks for the response. I don’t know if there is a default limit for BackBlaze, but I do not have one. I broke up my big job into several smaller ones, and the jobs under 40GB all ran fine in less than a day. Once the smaller jobs complete, I will return to the larger backup that was failing, get the list of commands being executed, and post them here. Thanks again.