Duplicati on TrueNAS SCALE 25.04 - DB recreate performance

Hi All,

A few days ago I moved to TrueNAS SCALE 25.04. I was able to install Duplicati as an app and successfully migrated several of the backups. Each time, Duplicati recreated the database and started backing up normally.

The challenge started when I decided to go ahead with the big backup (almost 2 TB to cloud - e2). The database recreation has been running for close to 48 h and is still not done.

  • Duplicati is installed as an app (Docker based, but I just downloaded it - no customization).
  • I can see the DB that it created - judging by disk space, it is less than 3 GB.
  • Everything is on spinning disks (no SSD).
  • The recreate is still working - it hasn’t crashed yet.
  • All data had to move - before, the computer (Pop!_OS) backed up data from a network drive; now the data is on TrueNAS, so Duplicati sees it as local.
  • I also had to reconsider what I am backing up to the cloud and, as a result, selected a bit less than what is already in the cloud.
  • The TrueNAS server has 8 GB of RAM.

What puzzles me is the performance:

  • The app reports only 1-2% CPU utilization.
  • RAM usage is roughly 300 MB.
  • Network utilization is next to nothing.
  • Disk reads are under 5 MB/s and writes under 43 MB/s. However, this is the total for the whole system - including my regular access of the server - and I am sure the server can do much more when it is pushed.

If it were a question of resources, I would expect to see it. But it does not look like it. Does anybody have experience with such a configuration? Are there any areas that I may be overlooking?

Most likely I will let the process continue, but I expect it will take at least another 24-48 h, on top of the nearly 48 h already spent. That seems like quite a lot for a DB recreation.

What version are you on? Any idea what version you originally had? Any idea whose Docker image you got? The Docker Hub favorites are LinuxServer.io and Duplicati; the app’s setup plan is one way to guess which.

Migrating Duplicati to a new machine describes some possibly easier approaches that move the databases instead of recreating them.

It’s puzzling partly because there’s no Duplicati context. Looking at the progress bar on 2.1.0.5 or earlier would be a hint. What percent is it at? Or watch About → Show log → Live → Verbose. Seeing dblock files being downloaded is a bad sign, and the last 10% of the progress bar can take a long time.

Before the 70% mark, it’s processing dlist files (one per version, and they can add up) and dindex files (which say what blocks are available - and here a smaller blocksize adds work).

If you really want, you can increase the logging to Profiling to see if any SQLite actions have become slow.
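One way to capture that without staring at the live log is the log-file advanced options on the job; a sketch, where the log path is an assumption:

--log-file=/config/recreate-profiling.log
--log-file-log-level=Profiling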

You might need better tools. I’m especially suspicious of mechanical drives, which can be slow.

Using iostat -xd 5 or similar can show %util, i.e. how busy a drive is. I/O rates on mechanical drives vary greatly depending on random versus sequential access. You can see this in benchmarks.

Using iotop --only can show process I/O at the system call level, and how much it’s slowing processing. To show the IO column, you might need to toggle delay accounting on for a while, as prompted by:

task_delayacct is OFF
press Ctrl-T to toggle
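If the IO column stays empty, delay accounting can also be switched on at runtime; a sketch, assuming the sysctl is available on your kernel:

# enable kernel delay accounting so iotop can fill in the IO column
sysctl kernel.task_delayacct=1
# then watch only the processes actually doing I/O
iotop --only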

Network load during a recreate will be bursty: a file download, then processing. If you look at the wrong time, or for too short a time, you don’t see the download at all. Processing uses SQLite on one logical core, so a multi-core CPU (not mentioned, but I assume) will show low overall load (assuming the tool looks at utilization of the whole CPU). For disk reads/writes, are you aware of your regular access of the server? Would it have that much tilt toward writes? Better tools might show.

One thing that causes slowdowns on big backups first made on Duplicati 2.0 is the default 100 KB blocksize: each block needs tracking in the SQLite database, which has a tiny default cache_size.

Some Duplicati versions use a larger cache_size, some allow manual adjustment, and 2.1 raising the default blocksize to 1 MB helps; however, blocksize can’t be changed except at the time of the initial backup.
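For illustration, a sketch of how blocksize would be set on a brand-new backup from the CLI; the destination URL and source path are placeholders, not your actual job:

# blocksize (and dblock-size) only take effect at initial backup time
duplicati-cli backup "s3://bucket/folder" /mnt/tank/data --blocksize=1MB --dblock-size=50MB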

I have these as sources:

**Name:** duplicati
**App Version:** v2.1.0.5
**Version:** v1.0.21
**Source:** github.com/duplicati/duplicati, hub.docker.com/r/duplicati/duplicati

I was not aware of the process for preserving the DB. I did have the backup configurations exported (encrypted), and I was able to import them back with all the settings, but in my approach I was lacking the DB. Part of my question relates to a catastrophic scenario: most likely the files a user will have are the exported backup configuration files. It is less likely that a user will have a copy of the DB (especially an up-to-date one).

Here is some context - hopefully it helps.

Oct 21, 2025 8:33 AM: Backend event: Get - Completed: duplicati-ic83f00a489424c16936aa02cf57f7315.dindex.zip.aes (541 bytes)
Oct 21, 2025 8:33 AM: Backend event: Get - Started: duplicati-ic83f00a489424c16936aa02cf57f7315.dindex.zip.aes (541 bytes)
Oct 21, 2025 8:33 AM: Processing indexlist volume 33584 of 42893
Oct 21, 2025 8:33 AM: Backend event: Get - Completed: duplicati-ic83c239fa5a84249b14e5b1d4f8510af.dindex.zip.aes (40.06 KB)
Oct 21, 2025 8:33 AM: Backend event: Get - Started: duplicati-ic83c239fa5a84249b14e5b1d4f8510af.dindex.zip.aes (40.06 KB)
Oct 21, 2025 8:33 AM: Processing indexlist volume 33583 of 42893
Oct 21, 2025 8:32 AM: Backend event: Get - Completed: duplicati-ic83c10c6025e4820a2587ae730a7936f.dindex.zip.aes (541 bytes)
Oct 21, 2025 8:32 AM: Backend event: Get - Started: duplicati-ic83c10c6025e4820a2587ae730a7936f.dindex.zip.aes (541 bytes)
Oct 21, 2025 8:32 AM: Processing indexlist volume 33582 of 42893
Oct 21, 2025 8:32 AM: Backend event: Get - Completed: duplicati-ic83a8ec57d7d44b4bca033d627da7048.dindex.zip.aes (18.23 KB)
Oct 21, 2025 8:32 AM: Backend event: Get - Started: duplicati-ic83a8ec57d7d44b4bca033d627da7048.dindex.zip.aes (18.23 KB)
Oct 21, 2025 8:32 AM: Processing indexlist volume 33581 of 42893
Oct 21, 2025 8:32 AM: Backend event: Get - Completed: duplicati-ic8367301c27b4cb493fab0994d4112b4.dindex.zip.aes (541 bytes)
Oct 21, 2025 8:32 AM: Backend event: Get - Started: duplicati-ic8367301c27b4cb493fab0994d4112b4.dindex.zip.aes (541 bytes)
Oct 21, 2025 8:32 AM: Processing indexlist volume 33580 of 42893
Oct 21, 2025 8:32 AM: Backend event: Get - Completed: duplicati-ic83630bfa1954f999861882aacf37fd4.dindex.zip.aes (19.37 KB)
Oct 21, 2025 8:32 AM: Backend event: Get - Started: duplicati-ic83630bfa1954f999861882aacf37fd4.dindex.zip.aes (19.37 KB)
Oct 21, 2025 8:32 AM: Processing indexlist volume 33579 of 42893 

I was able to get this. Only sda, sdb, and sdc are relevant, as they form the RAIDZ1, and I had to install Duplicati on it because I don’t have a spare SSD.

Linux 6.12.15-production+truenas (truenas) 	10/21/25 	_x86_64_	(12 CPU)

Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
nvme0n1          0.62     18.64     0.00   0.00    0.16    30.25    1.24      9.91     0.00   0.00    0.05     8.00    0.00      0.00     0.00   0.00    0.00     0.00    0.04    0.38    0.00   0.01
nvme1n1          0.00      0.34     0.00   0.00    7.10    76.85    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
sda             40.79   2916.45     0.03   0.08    7.99    71.49   31.81   7417.92     0.38   1.17    4.27   233.16    0.00      0.00     0.00   0.00    0.00     0.00    1.35   14.52    0.48  28.59
sdb             40.75   2916.36     0.03   0.08    7.93    71.57   32.30   7417.91     0.39   1.20    4.17   229.68    0.00      0.00     0.00   0.00    0.00     0.00    1.35   14.83    0.48  28.56
sdc             40.80   2911.90     0.03   0.08    7.97    71.37   32.02   7417.90     0.39   1.20    4.20   231.67    0.00      0.00     0.00   0.00    0.00     0.00    1.35   14.67    0.48  28.59

I can’t see a meaningful network load. This is the WAN screenshot over 24 h for total connections from my network. The overnight segment (12 am - 7 am) is not even noticeable on the graph, and it is the most accurate, as there are no other activities in that time frame.

My original machine version is:

You are currently running Duplicati - 2.1.0.5_stable_2025-03-04

I can only see an option for Remote volume size, which is set to 50 MB. That was the recommendation when I started using Duplicati a few years ago. I was not able to see an option for cache_size (so I am using the default), and I did not change blocksize either, so I must be using the default. But even if I wanted to change it, I can’t unless I discard the backup and start over (if I am following correctly).

Should I understand that nothing can be done - that I just have to accept it? Or can I maybe optimize in some way if I let go of the backup that is already in place in the cloud?

I don’t know what v1.0.21 is, but 2.1.0.5_stable_2025-03-04 follows 2.1.0.4_stable_2025-01-31. An initial Duplicati backup made in 2025 might have been on a 2.1, whereas an initial backup in 2024 might be on a 2.0. Decrypting and looking at the manifest in any zip file would say for sure, but it’s a bit complicated. Possibly a developer would have another idea for finding out what blocksize a backup actually uses.

If the destination keeps dates intact, just looking for its oldest file would probably find the initial backup.

That’s why it’s a good thing that you started this now rather than in actual disaster recovery.

It’s finished the dlist files and is churning through the dindex files, possibly slowing as it goes. Seeing that would need a longer view. Basically, a linear extrapolation might guess low.

The dindex files are pretty small (looking at your log, tens of KB), so this is about as expected.

I think performance is little affected by the install method, but are the database and /tmp on slow or fast drives?

Then the question is whether that’s an upgrade of something earlier, for example 2.0.8.1 Beta.

It’s still the default, but this isn’t the blocksize. It’s the size of the volume that blocks are packed into.

Backup size parameters

If this large backup is a few years old, it might have the old blocksize. I gave hints to examine.

I’m not sure if it ever became an option, and the devs have also been rearranging things lately.

The reason I asked for iotop (not just iostat) was that it’s probably a more accurate reflection of Duplicati’s I/O requests and their speed loss. Most requests get soaked up by the OS cache. An insufficient SQLite cache causes excessive read and write requests, sort of a “thrashing” effect.

The default cache_size from SQLite is 2 MB, and that doesn’t go far when the DB grows to multiple GB.

Below is batch file syntax. The negative value sets cache_size in KB, so this gets a 200 MB cache:

set CUSTOMSQLITEOPTIONS_DUPLICATI=cache_size=-200000
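For a Docker install like yours, the same variable goes into the container environment instead; a sketch (image name per your sources, other run options omitted):

docker run -e CUSTOMSQLITEOPTIONS_DUPLICATI=cache_size=-200000 duplicati/duplicati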

I’m not sure what value your backup should get. Regardless, it can’t be changed in the middle of a Recreate.

Developers have been working on performance. Some speedups are underway, but for now:

  • Consider iotop, but that’s not fixing anything, just helping to understand the situation better.
  • See if there’s a fast drive to keep the DB on. Location and movement are on the Database page.
  • Unless /tmp is already on a fast drive, consider the tempdir option to put one into use (see the sketch after this list).
  • Look at the history of the backup as suggested, to see if the initial backup was done on a 2.0 version; however, increasing blocksize requires a fresh backup. Maybe the old one can be kept awhile.
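For the tempdir item above, a sketch; the fast-drive path is an assumption:

# as an advanced option on the job (or in Settings → default options)
--tempdir=/mnt/fastpool/duplicati-tmp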

iotop is not available in TrueNAS. I am concerned I will break the system if I install it.

I am afraid this will have to wait (budget constraint).

For this particular backup the retention is 4 months.

I can see that the space used for the Duplicati configuration (that is all) is 3.59 GiB.

Thank you for your help, @ts678. I am going to have to wait (and hope the power won’t cut out). I may need to come back in a week (or longer), once everything is functional, to assess the optimization. I also have to learn how to tweak the DB cache size in Docker - this is my first exposure to Docker to begin with.

That’s likely irrelevant. Unless your data turnover since the initial backup is massive, the initial backup’s destination files are still around with their original dates, because that data is still used in current files.

This sometimes gives people the urge to delete old files. Don’t. It’s probably your initial backup.

If you have a destination that preserves dates, just sort by date and see what date the oldest is.
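If the destination is awkward to browse directly, a sketch with rclone, where the remote name and path are placeholders:

# list size, date, path for every remote file; oldest dates sort first
rclone lsl e2remote:bucket/backup | sort -k2 | head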

A harder way is the AES Crypt GUI or the duplicati-aescrypt CLI, to decrypt a file and read its manifest.
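A sketch of that route; I’m assuming SharpAESCrypt-style arguments (mode, password, input, output), and the filename is one from your log:

# decrypt one dindex file, then read the manifest entry inside the zip
duplicati-aescrypt d "passphrase" duplicati-ic83f00a489424c16936aa02cf57f7315.dindex.zip.aes dindex.zip
unzip -p dindex.zip manifest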

Here’s my manifest from a very recent Duplicati version and backup still on prior blocksize:

{"Version":2,"Created":"20251022T112016Z","Encoding":"utf8","Blocksize":102400,"BlockHash":"SHA256","FileHash":"SHA256","AppVersion":"2.1.1.105"}

This backup’s oldest destination file is circa June 2023, although retention only goes back a year. Guessing entirely from my dblock-to-dindex size ratio being about 1500 to 1, yours reminds me of mine.

Another way to check the blocksize after decryption is to just unzip a dblock and look at file sizes.
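A sketch of that check, assuming you already decrypted a dblock to dblock.zip:

# block entries can’t exceed the blocksize, so the largest sizes reveal it
# (ignore the final total line; mostly 102400-byte entries => 100 KB blocksize)
unzip -l dblock.zip | sort -n | tail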

I am afraid this may be game over for me.

If I end up waiting till tonight, it will be 120 h, and I am still unable to recreate the DB.

Even with the log set to Profiling, it is dead silent for a long time (about 1 h, I guess).

The only indication that something is happening is “Duplicati-server.sqlite” - its timestamp changes every 15 min or so. The 5 GB DB hasn’t moved since 6:59 am - 4 h or so.

total 4429973
drwxrwx--- 3 apps root         20 Oct 23 11:18 .
-rwxrwx--- 1 apps root     102400 Oct 23 11:18 Duplicati-server.sqlite
-rwx------ 1 apps root 5016473600 Oct 23 06:59 WFNPWZJPTM.sqlite
-rwx------ 1 apps root   16683008 Oct 18 23:30 WFNPWZJPTM.backup
-rw------- 1 apps root  124116992 Oct 18 22:40 UCCATZRKEP.sqlite
-rwx------ 1 apps root    1548288 Oct 18 21:30 YYCPSPYOXU.sqlite

My conclusion at this time is that even if it does complete, it will be far in the future.

My plan is to try recreating the DB on a local machine and copying it to the server (perhaps I should have done that sooner). I will report back if I am able to do so and what the results are.
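A sketch of the eventual copy step; the paths are assumptions, and the file must end up where the job’s Database page on the server points (per your listing, WFNPWZJPTM.sqlite):

# stop the server-side job first, then overwrite its DB with the locally recreated one
scp /path/to/recreated.sqlite admin@truenas:/mnt/tank/apps/duplicati/WFNPWZJPTM.sqlite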

If DB recreation alone takes way more than 120 h on a 2 TB archive, without even fetching any of the actual data from the cloud, it raises serious concern about the total time needed in a disaster recovery scenario.

So compared to the previous Verbose log of several dindex per minute, has it slowed or stopped?

The latest live log image shows two dindex processed in the same minute - nearing their end.
It got to 99.44% done, which, totally off-topic, is the purity number that Ivory soap ads used.

I don’t know if TrueNAS has a command to see I/O, but Linux likely has /proc/PID/io.
man proc will say how to interpret the reads and writes. You could try that inside Docker.
It seems like there ought to be other signs of work (or the lack of it), such as seeing CPU used.
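A sketch of that inside the container; the container name is an assumption, and PID 1 assumes duplicati-server is the entrypoint:

docker exec duplicati cat /proc/1/io
# read_bytes/write_bytes are real storage I/O; rchar/wchar also count cache hits (man proc)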

There’s no information available so far to say if it’s working hard or has somehow hung.

It would be nice if the developers had some input. They did increase the blocksize to 1 MB;
however, you probably need to run the requested steps to find out what blocksize your backup is on.

In addition to the destination date method, the manifest method, and the dblock method,
DB Browser for SQLite could look at a copy (for safety) of the old DB from the other system.
The Configuration table shows blocksize directly. That might be its authoritative value. IDK.
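The sqlite3 CLI could do the same lookup; a sketch, assuming the Configuration table holds Key/Value pairs, run against a copy:

sqlite3 copy-of-old-job.sqlite "SELECT Value FROM Configuration WHERE Key='blocksize';"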

Make sure to increase the SQLite cache size, or run a recent version. What Duplicati version will that be?
2.2.0.0_stable_2025-10-23 just came out, including Duplicati Docker. The LinuxServer one is out too.