Slow Duplicati Backup on macOS

Hi, I’m having some issues with Duplicati on my macOS computer. It is very slow to back up: it takes a long time to process a folder with no more than 3 images.
On Windows, where I also tried it, I do not have this problem.

Thanks.


Hi @andresalvear10, welcome to the forum!

Sorry to hear you seem to be having speed issues with your macOS backup.

Note that performance tuning varies a LOT by hardware (CPU, memory, drives, internet speed, etc.), and remember the first backup is a LOT slower than subsequent “delta” backups.

That being said, unless you’ve got gigantic images it shouldn’t take very long to handle a backup of just 3 of them…

Are you using the same version of Duplicati on both the macOS and Windows systems (and what are they)?

Are the settings (encryption, compression, destination) about the same on both systems?

Hi @JonMikelV, the settings on both systems are the same. I used the latest version (2.0.3.3). Yes, I know this:

Note that performance tuning varies a LOT by hardware (CPU, memory, drives, internet speed, etc.), and remember the first backup is a LOT slower than subsequent “delta” backups.

But, unfortunately, the Mac is “very good”: it is a latest-generation Mac. I have tried on a different Mac as well, and I get the same issue.

It sounds like your machine should be able to handle Duplicati’s load just fine, so I’m not sure what would be causing the problem.

Hopefully @Pectojin might have some thoughts on this if he’s available.

What’s the log output from the backup? Do you notice a specific part of the backup taking the longest?

Also, is this a completely clean backup, or were the 3 images added to an existing backup?

I am having a similar issue running 2.0.3.3_beta_2018-04-02. I am new to Duplicati and am backing up to Backblaze. My first backup was 35 GB of my favorite items, and the speeds were fine (>600 KB/s). Once that completed, I added another 30 GB to the backup, and now things have been super slow (<3 KB/s) for about 8 hours. Any ideas?

I suspect it’s due to the overhead of all the database lookups to see whether or not all the new blocks already exist.

Assuming that is the issue, the only things I can think of are:

  • put the database on a faster drive (SSD or RAM drive)
  • upgrade to a newer version where some performance improvements have been added (but at the moment you’re running the newest beta, so you’d have to switch to the canary update path; in that case I would avoid 2.0.3.5 through 2.0.3.9)
  • create a second backup job for additional data. It can go to the same destination, but should be in a different folder

Note that once that initial backup of the new data is done you’ll likely see the speed improve as Duplicati will only have to do database lookups for changed data.
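
To make the lookup overhead concrete, here is a minimal Python/SQLite sketch of the per-block dedup check of the kind that shows up in Duplicati’s profiling logs (you’ll see the same SELECT/INSERT pair on the “Block” table later in this thread). The schema and index name are simplified assumptions for illustration, not Duplicati’s actual code; the point is that every new block costs at least one hash lookup, so lookup speed (and thus the speed of the drive holding the database) governs throughput once the block count is large:

```python
import hashlib
import sqlite3

# Minimal sketch of the per-block dedup check, using a simplified
# version of the "Block" table seen in Duplicati's profiling logs.
# Schema and index are illustrative assumptions, not Duplicati's code.
con = sqlite3.connect("local.sqlite")
con.execute('CREATE TABLE IF NOT EXISTS "Block" '
            '("ID" INTEGER PRIMARY KEY, "Hash" TEXT, "VolumeID" INTEGER, "Size" INTEGER)')
# Without an index on (Hash, Size), every lookup below is a full table
# scan, so cost grows with the number of blocks already backed up.
con.execute('CREATE INDEX IF NOT EXISTS "BlockHashSize" ON "Block" ("Hash", "Size")')

def add_block(data: bytes, volume_id: int) -> bool:
    """Return True if this block is new and must be uploaded."""
    h = hashlib.sha256(data).hexdigest()
    size = len(data)
    row = con.execute('SELECT "ID" FROM "Block" WHERE "Hash" = ? AND "Size" = ?',
                      (h, size)).fetchone()
    if row is not None:
        return False  # block already known: skip the upload
    con.execute('INSERT INTO "Block" ("Hash", "VolumeID", "Size") VALUES (?, ?, ?)',
                (h, volume_id, size))
    return True
```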

I am still struggling with this issue. New information: if I stop the backup, restart the computer, and then restart the backup, I get speeds in excess of 1 MB/s. Then it gradually slows down, so that after a few hours it is running at about 1 KB/s. During that time it will back up around 2 GB, but then it crawls and does nothing.

Note: I am running on a MacBook Pro with an SSD, so the database is already on an SSD.
I have not installed a canary version. Do I really need to in order to use Duplicati?

Question: It is suggested that I make multiple smaller backups. I can try this. Do I have to abandon the 300 GB I have already backed up to make the multiple new backups?

Ideally, I would like to get through my initial backup of my 600 GB. I just can’t get there when I have to spend time every few hours restarting the system to get a few more GB done.

Here is info that may help:

APIVersion : 1
PasswordPlaceholder : **********
ServerVersion : 2.0.3.3
ServerVersionName : - 2.0.3.3_beta_2018-04-02
ServerVersionType : Beta
BaseVersionName : 2.0.3.3_beta_2018-04-02
DefaultUpdateChannel : Beta
DefaultUsageReportLevel : Information
ServerTime : 2018-08-11T12:09:24.440295-04:00
OSType : OSX
DirectorySeparator : /
PathSeparator : :
CaseSensitiveFilesystem : false
MonoVersion : 5.10.1.47
MachineName : Marks-MacBook-Pro-2.local
NewLine :
CLRVersion : 4.0.30319.42000
CLROSInfo : {"Platform":"Unix","ServicePack":"","Version":"16.7.0.0","VersionString":"Unix 16.7.0.0"}
ServerModules : []
UsingAlternateUpdateURLs : false
LogLevels : ["Profiling","Information","Warning","Error"]
SuppressDonationMessages : false
SpecialFolders : [{"ID":"%MY_DOCUMENTS%","Path":"/Users/markdaubenmier"},{"ID":"%MY_MUSIC%","Path":"/Users/markdaubenmier/Music"},{"ID":"%MY_PICTURES%","Path":"/Users/markdaubenmier/Pictures"},{"ID":"%DESKTOP%","Path":"/Users/markdaubenmier/Desktop"},{"ID":"%HOME%","Path":"/Users/markdaubenmier"}]
BrowserLocale : {"Code":"en-US","EnglishName":"English (United States)","DisplayName":"English (United States)"}
SupportedLocales : [{"Code":"cs","EnglishName":"Czech","DisplayName":"čeština"},{"Code":"da","EnglishName":"Danish","DisplayName":"dansk"},{"Code":"de","EnglishName":"German","DisplayName":"Deutsch"},{"Code":"en","EnglishName":"English","DisplayName":"English"},{"Code":"es","EnglishName":"Spanish","DisplayName":"español"},{"Code":"fi","EnglishName":"Finnish","DisplayName":"suomi"},{"Code":"fr","EnglishName":"French","DisplayName":"français"},{"Code":"it","EnglishName":"Italian","DisplayName":"italiano"},{"Code":"lt","EnglishName":"Lithuanian","DisplayName":"lietuvių"},{"Code":"lv","EnglishName":"Latvian","DisplayName":"latviešu"},{"Code":"nl-NL","EnglishName":"Dutch (Netherlands)","DisplayName":"Nederlands (Nederland)"},{"Code":"pl","EnglishName":"Polish","DisplayName":"polski"},{"Code":"pt","EnglishName":"Portuguese","DisplayName":"português"},{"Code":"pt-BR","EnglishName":"Portuguese (Brazil)","DisplayName":"português (Brasil)"},{"Code":"ru","EnglishName":"Russian","DisplayName":"русский"},{"Code":"sk-SK","EnglishName":"Slovak (Slovakia)","DisplayName":"slovenčina (Slovensko)"},{"Code":"zh-CN","EnglishName":"Chinese (Simplified)","DisplayName":"中文 (中国)"},{"Code":"zh-HK","EnglishName":"Chinese (Traditional, Hong Kong SAR China)","DisplayName":"中文 (中国香港特别行政区)"},{"Code":"zh-TW","EnglishName":"Chinese (Traditional)","DisplayName":"中文 (台湾)"}]
BrowserLocaleSupported : true
backendgroups : {"std":{"ftp":null,"ssh":null,"webdav":null,"openstack":"OpenStack Object Storage / Swift","s3":"S3 Compatible","aftp":"FTP (Alternative)"},"local":{"file":null},"prop":{"s3":null,"azure":null,"googledrive":null,"onedrive":null,"cloudfiles":null,"gcs":null,"openstack":null,"hubic":null,"amzcd":null,"b2":null,"mega":null,"box":null,"od4b":null,"mssp":null,"dropbox":null,"sia":null,"jottacloud":null,"rclone":null}}
GroupTypes : ["Local storage","Standard protocols","Proprietary","Others"]
Backend modules:

aftp

amzcd

azure

b2

box

cloudfiles

dropbox

ftp

file

googledrive

gcs

hubic

jottacloud

mega

onedrive

openstack

rclone

s3

ssh

od4b

mssp

sia

tahoe

webdav
Compression modules:

zip

7z
Encryption modules:

aes

gpg

Server state properties

lastEventId : 89
lastDataUpdateId : 7
lastNotificationUpdateId : 2
estimatedPauseEnd : 0001-01-01T00:00:00
activeTask : {"Item1":3,"Item2":"3"}
programState : Running
lastErrorMessage :
connectionState : connected
xsfrerror : false
connectionAttemptTimer : 0
failedConnectionAttempts : 0
lastPgEvent : {"BackupID":"3","TaskID":3,"BackendAction":"Put","BackendPath":"duplicati-b848ecf8e13894c72b7cac5f54373d140.dblock.zip.aes","BackendFileSize":52389837,"BackendFileProgress":52389837,"BackendSpeed":6265,"BackendIsBlocking":true,"CurrentFilename":"/Users/markdaubenmier/Documents/Pictures & Video/2010 Decade/2015 Pictures/2015.04.01 Trip to the US/IMG_2052.JPG","CurrentFilesize":4270830,"CurrentFileoffset":1843200,"Phase":"Backup_ProcessingFiles","OverallProgress":0,"ProcessedFileCount":55946,"ProcessedFileSize":307967128985,"TotalFileCount":106553,"TotalFileSize":658285479687,"StillCounting":false}
updaterState : Waiting
updatedVersion :
updateReady : false
updateDownloadProgress : 0
proposedSchedule : [{"Item1":"3","Item2":"2018-08-11T23:00:00Z"}]
schedulerQueueIds : [{"Item1":4,"Item2":"3"}]
pauseTimeRemain : 0

Not at all. You can have multiple jobs backing up the same data if you want (I do that all the time). One of the benefits there is that each job can have different settings, like destination, run frequency, retention policy, etc.

If you go to the main menu “About” → “Show log” page, click on the “Live” tab, and select “Profiling”, you should see a self-updating list of individual commands being executed.

When it starts getting slow, can you tell which command was the last one that seemed slow?

If that’s too awkward, you could also try running with --log-file=[path] and --log-level=profiling which will produce pretty much the same content but in a text file.
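
If you end up with a large profiling log file, a small script can help narrow things down. This is a hypothetical helper (not part of Duplicati), and it assumes the “took HH:MM:SS” duration suffix shown in the live Profiling view also appears in the log file:

```python
import re
import sys

# Hypothetical helper (not part of Duplicati): scan a log produced
# with --log-file=[path] --log-level=profiling and print any statement
# whose reported duration meets a threshold.
TOOK = re.compile(r"took (\d+):(\d{2}):(\d{2})(?:\.(\d+))?")

def slow_lines(path: str, threshold_s: float = 1.0):
    with open(path, errors="replace") as f:
        for line in f:
            m = TOOK.search(line)
            if not m:
                continue
            h, mnt, s = int(m.group(1)), int(m.group(2)), int(m.group(3))
            frac = float("0." + m.group(4)) if m.group(4) else 0.0
            if h * 3600 + mnt * 60 + s + frac >= threshold_s:
                yield line.rstrip()

if __name__ == "__main__":
    for line in slow_lines(sys.argv[1]):
        print(line)
```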

What I’m wondering is if there’s something silly going on with the dblock file uploads like maybe connection re-use is slowing down for some reason.

Hang on… you’re backing up to Backblaze, right? I don’t use them myself, but they wouldn’t happen to have a daily upload limit of around 2 GB, would they?

Thanks for the response. I don’t know if there is a default limit for Backblaze, but I do not have one. I broke up my big job into several smaller ones, and the jobs under 40 GB all ran fine in less than a day. Once the smaller jobs complete, I will return to the larger backup that was failing, get the list of commands being executed, and post them here. Thanks again.

Thanks for following up and letting us know how the smaller jobs worked for you. :slight_smile:

Some more information about the slow (or stalled?) backups I am seeing on Mac OS. This effect was just reproduced on two different Macs running Duplicati to Backblaze. Here is a refined description of the problem.

The backup runs just fine (over the last 24 hours, 120 GB backed up at speeds around 1 MB/s) and then gradually slows down, falling to < 5 KB/s. On both computers I noticed that the specific file the job was stalled on was large: 780 MB on one computer and 3.2 GB on the other.

My first theory was that maybe the file was backing up just fine but the displayed speed in the progress bar does not calculate properly in the middle of an individual file being backed up. So I let the computers continue to work for about 3 hours. No progress. I then stopped the job and restarted it, and now things are running fine again (speeds greater than 1 MB/s and not stuck on a single file).

Additional notes:

  1. There were no entries in the log files (shown through the GUI) during the slowdown.
  2. On restart, I was not looking closely enough to notice whether the large file completed just fine or not.
  3. Here is one of the large files the backup was stuck on:

  • lastPgEvent : {"BackupID":"8","TaskID":6,"BackendAction":"Put","BackendPath":"duplicati-b0e045ddc77f14524a2274aaabb0297b2.dblock.zip.aes","BackendFileSize":52418413,"BackendFileProgress":52418413,"BackendSpeed":5036,"BackendIsBlocking":true,"CurrentFilename":"/Users/markdaubenmier/Pictures/Pictures & Video/2010 Decade/2016 Pictures/2016.05.15 Titchie Sunday (The Grace Race)/MVI_4880.MP4","CurrentFilesize":780787623,"CurrentFileoffset":390553600,"Phase":"Backup_ProcessingFiles","OverallProgress":0,"ProcessedFileCount":16408,"ProcessedFileSize":225721171745,"TotalFileCount":66971,"TotalFileSize":552746595645,"StillCounting":false}
  • updaterState : Waiting

Any thoughts on this issue would be appreciated.

Forget what I said about large files. That must have been a coincidence. The backup just stalled again on a 6 MB file. 30 minutes ago this showed up in the logs. Not sure if this is related to the current slowdown or not.

  • Aug 16, 2018 5:20 PM: CommitAddBlockToOutputFlush took 00:00:00.047

  • Aug 16, 2018 5:20 PM: Starting - CommitAddBlockToOutputFlush

  • Aug 16, 2018 5:20 PM: ExecuteReader: SELECT DISTINCT "Hash", "Size" FROM "Block" WHERE "VolumeID" = ? took 00:00:00.000

  • Aug 16, 2018 5:20 PM: Starting - ExecuteReader: SELECT DISTINCT "Hash", "Size" FROM "Block" WHERE "VolumeID" = ?

  • Aug 16, 2018 5:20 PM: ExecuteScalarInt64: INSERT INTO "Block" ("Hash", "VolumeID", "Size") VALUES (?, ?, ?); SELECT last_insert_rowid(); took 00:00:00.000

  • Aug 16, 2018 5:20 PM: Starting - ExecuteScalarInt64: INSERT INTO "Block" ("Hash", "VolumeID", "Size") VALUES (?, ?, ?); SELECT last_insert_rowid();

  • Aug 16, 2018 5:20 PM: ExecuteScalarInt64: SELECT "ID" FROM "Block" WHERE "Hash" = ? AND "Size" = ? took 00:00:00.000

  • Aug 16, 2018 5:20 PM: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Block" WHERE "Hash" = ? AND "Size" = ?

[the INSERT INTO "Block" / SELECT "ID" FROM "Block" pair above repeats dozens more times, each taking 00:00:00.000]

  • Aug 16, 2018 5:20 PM: Checking file for changes /Users/markdaubenmier/Pictures/Pictures & Video/2010 Decade/2016 Pictures/2016.01.30 Titchie water day/IMG_4295.JPG, new: True, timestamp changed: True, size changed: True, metadatachanged: True, 1/30/2016 7:39:14 AM vs 1/1/0001 12:00:00 AM

  • Aug 16, 2018 5:20 PM: ExecuteScalarInt64: INSERT INTO "File" ("Path","BlocksetID", "MetadataID") VALUES (?, ? ,?); SELECT last_insert_rowid(); took 00:00:00.000

  • Aug 16, 2018 5:20 PM: Starting - ExecuteScalarInt64: INSERT INTO "File" ("Path","BlocksetID", "MetadataID") VALUES (?, ? ,?); SELECT last_insert_rowid();

  • Aug 16, 2018 5:20 PM: ExecuteScalarInt64: SELECT "ID" FROM "File" WHERE "BlocksetID" = ? AND "MetadataID" = ? AND "Path" = ? took 00:00:00.000

  • Aug 16, 2018 5:20 PM: Starting - ExecuteScalarInt64: SELECT "ID" FROM "File" WHERE "BlocksetID" = ? AND "MetadataID" = ? AND "Path" = ?

  • Aug 16, 2018 5:20 PM: ExecuteScalarInt64: INSERT INTO "Blockset" ("Length", "FullHash") VALUES (?, ?); SELECT last_insert_rowid(); took 00:00:00.000

  • Aug 16, 2018 5:20 PM: Starting - ExecuteScalarInt64: INSERT INTO "Blockset" ("Length", "FullHash") VALUES (?, ?); SELECT last_insert_rowid();

  • Aug 16, 2018 5:20 PM: ExecuteScalarInt64: SELECT "ID" FROM "Blockset" WHERE "Fullhash" = ? AND "Length" = ? took 00:00:00.000

  • Aug 16, 2018 5:20 PM: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Blockset" WHERE "Fullhash" = ? AND "Length" = ?

  • Aug 16, 2018 5:20 PM: ExecuteScalarInt64: INSERT INTO "Metadataset" ("BlocksetID") VALUES (?); SELECT last_insert_rowid(); took 00:00:00.000

  • Aug 16, 2018 5:20 PM: Starting - ExecuteScalarInt64: INSERT INTO "Metadataset" ("BlocksetID") VALUES (?); SELECT last_insert_rowid();

  • Aug 16, 2018 5:20 PM: ExecuteScalarInt64: INSERT INTO "Blockset" ("Length", "FullHash") VALUES (?, ?); SELECT last_insert_rowid(); took 00:00:00.000

  • Aug 16, 2018 5:20 PM: Starting - ExecuteScalarInt64: INSERT INTO "Blockset" ("Length", "FullHash") VALUES (?, ?); SELECT last_insert_rowid();

  • Aug 16, 2018 5:20 PM: ExecuteScalarInt64: SELECT "ID" FROM "Blockset" WHERE "Fullhash" = ? AND "Length" = ? took 00:00:00.000

  • Aug 16, 2018 5:20 PM: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Blockset" WHERE "Fullhash" = ? AND "Length" = ?

  • Aug 16, 2018 5:20 PM: ExecuteScalarInt64: SELECT "A"."ID" FROM "Metadataset" A, "BlocksetEntry" B, "Block" C WHERE "A"."BlocksetID" = "B"."BlocksetID" AND "B"."BlockID" = "C"."ID" AND "C"."Hash" = ? AND "C"."Size" = ? took 00:00:00.000

  • Aug 16, 2018 5:20 PM: Starting - ExecuteScalarInt64: SELECT "A"."ID" FROM "Metadataset" A, "BlocksetEntry" B, "Block" C WHERE "A"."BlocksetID" = "B"."BlocksetID" AND "B"."BlockID" = "C"."ID" AND "C"."Hash" = ? AND "C"."Size" = ?

  • Aug 16, 2018 5:20 PM: ExecuteScalarInt64: INSERT INTO "Block" ("Hash", "VolumeID", "Size") VALUES (?, ?, ?); SELECT last_insert_rowid(); took 00:00:00.000

  • Aug 16, 2018 5:20 PM: Starting - ExecuteScalarInt64: INSERT INTO "Block" ("Hash", "VolumeID", "Size") VALUES (?, ?, ?); SELECT last_insert_rowid();

  • Aug 16, 2018 5:20 PM: ExecuteScalarInt64: SELECT "ID" FROM "Block" WHERE "Hash" = ? AND "Size" = ? took 00:00:00.000

Final update (I think): My backup stalled (1 KB/s for a few hours). Rather than restarting everything, I took a different approach: I just let it run. Sometime in the last 6 hours it recovered and sped back up (2 MB/s).

Is your Mac backup running slow? My new advice: Just let it run and come back in a day and see if it has recovered on its own.

I also have concerns about slow macOS backups (Duplicati beta 2.0.3.3). I don’t have a Windows PC to compare with, but maintenance backups on four Macs are pretty slow.

It has nothing to do with the upload speed: backups to a locally attached network drive take the same time as Backblaze B2 online backups. I actually think the initial backup is reasonably fast. But I did notice that my test backups in the 250 MB range had upload speeds of more than 9 MB/s, while a 400 GB upload achieved no more than 2 MB/s, and those large backups often got stuck and caused the computer to behave sluggishly.

What is also very slow are subsequent maintenance backups on backup sets in the 200-400 GB range. Small test sets of 20 GB are fast. But it looks like Duplicati spends a really long time looking through the database. A subsequent maintenance backup with 300 MB of new data and 700 MB of modified data took 3 hours. On the same computer the next day, a maintenance backup with 2 MB of new data and 170 MB of modified data still took 2.25 hours.

I moved the Duplicati database from an old-fashioned hard drive to a built-in SSD with no significant speed improvement.

And file restores are even slower. I wait forever (3.5 to 5 minutes) just to fetch the path information to get to the file selection, and every click on the file selection requires a wait of 1 to 2 minutes.

So I don’t know. I still suspect there is something not optimized for macOS in either Mono or Duplicati. I guess there are not many Mac users on the forum or using Duplicati. I’d like to hear other Mac users’ experiences.

Running Duplicati 2.0.3.9 on a Mac and on an Ubuntu virtual machine.
The Mac is consistently slower despite a smaller source size.

Hmm, that performance doesn’t sound great :frowning:

For what it’s worth, I’m running Duplicati on a Mac but only back up around 45 GB. However, speeds are pretty good: backups usually finish in under 10 minutes unless lots of data has changed.

Our forum doesn’t ask or track anything about a user’s setup (like what OSes they use), but the usage statistics show that in July 2018 there were about 64,000 backups done on OSX.

Granted, that’s pretty small compared to 341,000 for Linux and 1,667,000 for Windows, but it’s still a big enough number that I would expect to hear more about performance issues if every macOS user was having your experience.

Despite all of @Mark_Daubenmier’s shared log info showing a bunch of SQL steps happening all at 5:20, I suspect (as in totally guess) that the performance issue is rooted in under-optimized database usage, and that the variations in speed people have seen are likely related to other things happening on the computer using resources that Duplicati would otherwise have used.

I know there are some database optimization changes in the works, though I’m not sure when they’ll go live (let alone make it to a beta version). At least the issue is being addressed (assuming my gut feeling about the source of the problem is correct).
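
As a generic illustration of what “under-optimized database usage” can mean (nothing Duplicati-specific; this is just standard SQLite behavior), compare issuing one commit per row against batching the same inserts in a single transaction:

```python
import sqlite3
import time

# Generic SQLite illustration (nothing Duplicati-specific): the same
# 100,000 inserts issued with one commit per row vs. batched in a
# single transaction. Per-statement and per-commit overhead, not raw
# drive speed, often dominates this kind of workload.
rows = [("hash%d" % i, 1, 102400) for i in range(100_000)]

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE "Block" ("Hash" TEXT, "VolumeID" INTEGER, "Size" INTEGER)')

def timed(label, fn):
    t0 = time.perf_counter()
    fn()
    print("%s: %.2fs" % (label, time.perf_counter() - t0))

def one_by_one():
    for r in rows:
        con.execute('INSERT INTO "Block" VALUES (?, ?, ?)', r)
        con.commit()  # per-row commits add transaction overhead (plus a disk sync on a file-backed database)

def batched():
    with con:  # one transaction for the whole batch
        con.executemany('INSERT INTO "Block" VALUES (?, ?, ?)', rows)

timed("one commit per row", one_by_one)
timed("single transaction", batched)
```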

If the issue is under-optimized database usage, is there data I can gather to help confirm this? I am new to Duplicati so I don’t know a lot, but I would be happy to gather any data that might help the smarter people see what is going on.

I also note that, as far as I am aware, nothing else was happening on the computer and competing for resources. I was letting Duplicati run and had shut down CrashPlan as well, since some had said running both at the same time could be a problem. Also, during the slowdowns there was virtually no CPU usage by anything (including Duplicati) on my computer.

If there is some data I can gather, let me know.

@Mark_Daubenmier & @andresalvear10 – were you ever able to resolve your macOS-related slowness issues? I may have been experiencing the same (or a similar) issue for a while now. I started a new thread about it here, but no responses so far. If you have managed to overcome the slow backups from macOS, I’d love to hear how you did that :slight_smile: