Using a RAM disk for the asynchronous-upload-folder

I’m doing a test to see how putting the asynchronous-upload-folder into a ramdisk helps performance.
My main reasons for doing this are:

A. My system drive is an SSD, and I did not want the software to chew through its write cycles with the default settings.
B. All of my drives hold data that will be backed up at some point, and I did not want an HDD getting pegged with double duty (e.g., data being read, written to the temp dir, then the temp dir being read again and written to the backup location).
C. A RAM disk is fast. Fast = good. Right?

I would love to know if there is a technical reason why this is a good, bad, or pointless idea. I will also let you know how it performs on my end.

I can’t get in too deep, having never tried anything like this, and not knowing your OS or RAM disk type.

The Difference Between a tmpfs and ramfs RAM Disk applies to Linux, and claims tmpfs can use swap (so where’s swap living?). A similar question applies to Windows: if the RAM disk holds onto memory persistently, that leaves less for everything else, which might then swap or page more to a drive. When the system has plenty of spare memory, the RAM disk will probably be fast but limited by the network…
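To make the Linux distinction concrete, here is a minimal sketch (assuming a Linux system with root access and pre-created mount points; both paths are placeholders) of setting up each kind of RAM disk:

```python
# Sketch only: contrast tmpfs and ramfs mounts on Linux (requires root).
# The mount points /mnt/tmpfs-upload and /mnt/ramfs-upload are assumed to exist.
import subprocess

# tmpfs enforces the size= limit, and its pages can be pushed to swap under
# memory pressure -- so "RAM disk" data may still end up on a drive.
subprocess.run(
    ["mount", "-t", "tmpfs", "-o", "size=4g", "tmpfs", "/mnt/tmpfs-upload"],
    check=True,
)

# ramfs ignores size limits and never swaps: it grows until RAM runs out.
subprocess.run(
    ["mount", "-t", "ramfs", "ramfs", "/mnt/ramfs-upload"],
    check=True,
)
```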

On Windows systems with Administrator group accounts, --usn-policy might improve speed and cut disk load.

Because Duplicati works block-based and only uploads the changed parts of files, the amount of data flowing through the temp area is reduced.

--tempdir is another option for having less data flow through your SSD (though vendors claim a lot of writes are fine).

Make sure TRIM is enabled.
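Put together, a backup run using the options mentioned above might look like the following sketch. The install path, drive letter, source, and target are all placeholders; --tempdir, --asynchronous-upload-folder, and --usn-policy are the real Duplicati options discussed here.

```python
# Hedged sketch: route Duplicati's scratch I/O to a RAM disk on Windows.
# R:\ is a hypothetical RAM disk drive letter; source and target are placeholders.
import subprocess

RAMDISK = r"R:\duplicati-temp"

subprocess.run([
    r"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe", "backup",
    r"file://\\NAS\backups",                    # placeholder target URL
    r"C:\Users\me\Documents",                   # placeholder source folder
    f"--tempdir={RAMDISK}",                     # temp files (volume assembly) go to RAM
    f"--asynchronous-upload-folder={RAMDISK}",  # finished volumes queue in RAM
    "--usn-policy=auto",                        # use the NTFS change journal (needs admin)
], check=True)
```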

It is a persistent 4 GB RAM disk that I always have online. I’m using Windows, and the target is an iSCSI drive connection. I have 32 GB of RAM and paging disabled, as I always have plenty of RAM to spare.

From my end, it did seem to make things go a bit faster. Before, my network was only peaking at 750 Mbps, but after switching to the RAM disk I was seeing peaks of 850-950 Mbps, nearly saturating my gigabit connection to my NAS. Average speeds were higher overall as well.

I’m not going to claim this test was scientific in any way, but it would be good to know whether others get the same results if they want to give it a shot.
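For anyone who wants to give it a shot, a rough micro-benchmark along these lines can at least show the raw write-speed difference between the two temp locations. Both paths are assumptions to adjust for your machine; 50 MB matches Duplicati’s default volume size.

```python
# Rough, unscientific micro-benchmark: time writing volume-sized files to the
# SSD temp dir vs. the RAM disk.
import os
import time

def time_writes(path: str, volume_mb: int = 50, count: int = 20) -> None:
    chunk = os.urandom(volume_mb * 1024 * 1024)
    start = time.perf_counter()
    for i in range(count):
        name = os.path.join(path, f"dblock-{i}.tmp")
        with open(name, "wb") as f:
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # force data out so OS caching doesn't hide the cost
        os.remove(name)
    elapsed = time.perf_counter() - start
    print(f"{path}: {volume_mb * count / elapsed:.0f} MB/s")

time_writes(r"C:\Temp")       # SSD-backed temp dir (assumed path)
time_writes(r"R:\duplicati")  # RAM disk (assumed drive letter)
```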

As for vendors claiming a lot of writes are fine: call me a cynic, but of course they say that. They would love for that SSD to wear out sooner rather than later, since then you need to buy a new one.

PS: Good job, dev team! Duplicati is looking to be my new go-to backup solution.

I had thought this was the purpose of --use-block-cache, but it’s a barely documented feature whose specific behavior isn’t described. One would assume it means that assembling the block file, compression, and encryption all happen in the cache during backup and never touch the disk, but whenever I do a backup or restore, the I/O activity makes it clear that the blocks are being cached on disk before transmission or restore.

I think the block cache is an in-memory copy of the SQLite Block table, meant to speed up the lookup of whether a data block read from a file being backed up has already been backed up. If it has, the block doesn’t need to be uploaded again.
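If that reading is right, the idea is roughly the following. This is a conceptual sketch, not Duplicati’s actual code: Duplicati does hash blocks with SHA-256, but the cache structure here is simplified.

```python
# Conceptual sketch, not Duplicati's actual code: an in-memory set of block
# hashes answers "was this block backed up already?" without a SQLite lookup.
import hashlib

block_cache: set[bytes] = set()  # stands in for the cached Block table

def block_needs_upload(data: bytes) -> bool:
    digest = hashlib.sha256(data).digest()  # Duplicati hashes blocks with SHA-256
    if digest in block_cache:
        return False          # cache hit: block already stored, skip it
    block_cache.add(digest)   # cache miss: remember it and upload the block
    return True
```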

https://forum.duplicati.com/search?q=use-block-cache found a few mentions, with one of the better ones at:

Duplicati 2 vs. Duplicacy 2

I have just tested with a small backup (~2000 blocks), and a block lookup cache had a tiny positive effect on the backup speed. But it is possible that this speedup is much more pronounced with larger backups, so I made a canary build with the option --use-block-cache. If it turns out that it really does improve performance without ridiculous memory overhead, I will convert it to a --disable-block-cache option.

v2.0.2.6-2.0.2.6_canary_2017-09-16

Added an experimental --use-block-cache flag to test performance potential
