In short, is there any reason that Duplicati doesn’t like using a ramdrive for its temp directory?
Here’s the background.
By default, Duplicati uses the current %TEMP% folder for temporary storage, which usually works out to be C:\Users\<username>\AppData\Local\Temp, and these days most people are running an SSD as their OS drive. Observing that Duplicati is fairly ‘TEMP-intensive’, and in the interests of SSD preservation, I’ve toyed around with moving Duplicati’s TEMP to other locations.
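For reference, redirecting the temp folder is just a matter of the --tempdir advanced option; a minimal sketch (the storage URL, source path and ramdrive folder below are placeholders, not my real setup):

```
:: Point Duplicati's temporary storage at the ramdrive folder for this run
Duplicati.CommandLine.exe backup "s3://my-bucket/my-prefix?..." "C:\Data" ^
  --tempdir="C:\RamDisk\DuplicatiTemp"
```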
One trial I ran was with a ramdrive. I used the ImDisk toolkit (ImDisk Toolkit download | SourceForge.net), which is well regarded and still developed/supported (though I’ve just noticed that, as of Dec 2024, ImDisk has been superseded by the AIM Toolkit project; I started testing before that, so was unaware until now). The ramdrive used dynamic allocation and was mounted as an existing folder. It was permanently enabled, not spun up/down specifically for Duplicati’s use.
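For the record, I configured the dynamic allocation and mount-to-folder settings through the toolkit’s GUI; the rough classic ImDisk command-line equivalent of the drive itself (fixed-size, drive letter rather than folder mount) would be something like:

```
:: Attach a 3 GB ramdisk and format it as NTFS
imdisk -a -s 3G -m R: -p "/fs:ntfs /q /y"
```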
I ran my ramdrive trial for around 4 months but recently declared defeat.
I initially used a 2GB ramdrive, later expanded to 3GB. It couldn’t be used for restore operations, since even 3GB still wasn’t enough, but it was used for most other operations. In other words, I was naturally aware of space-exhaustion issues and mitigated them by disabling ramdrive use for the specific operation (see the sketch below).
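Disabling the ramdrive for a given operation was simply a matter of overriding --tempdir back to a disk folder, e.g. for my restore tests (URL and paths are placeholders):

```
:: Restore tests ran with temporary storage back on a physical disk
Duplicati.CommandLine.exe restore "s3://my-bucket/my-prefix?..." "*" ^
  --tempdir="D:\DuplicatiTemp" --restore-path="D:\RestoreTest"
```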
The most common problem I encountered was corrupted backup databases. However, there were also cases of files missing from the remote storage, files that couldn’t be decrypted, and backup operations stalling (i.e. simply stopping with no errors and no further log entries, as if there were a deadlock, though it still responded to a Ctrl-C). Repeated repair, rebuild and simply delete-all-data-and-start-again operations ensued. I also run automated restore tests, and non-critical errors have crept into those as well, where some files can’t be restored (‘touching’ the file so it gets picked up in the next backup resolves those).
There was never anything ‘specific’ in the errors that I could chase down; the problems seemed to occur randomly. I have a weak and entirely unsubstantiated feeling that deleting anything (e.g. pruning, compacting, etc.) was at the root of the problems. I found that the stalling issue could generally be traced to a bad DB, despite there being no errors, and a rebuild usually fixed that. I did spend time trying to pick apart the stalling issue by inspecting the DB in an external application, but quickly drowned in data. This is also why I haven’t posted about this till now: I had nothing specific to pinpoint.
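For completeness, the recovery sequence I kept falling back to looked roughly like this (the URL and db path are placeholders; repair recreates the local database from the remote volumes when the db file is missing):

```
:: Recreate the local database from remote storage, then verify the backup
del "C:\DuplicatiDBs\backup1.sqlite"
Duplicati.CommandLine.exe repair "s3://my-bucket/my-prefix?..." --dbpath="C:\DuplicatiDBs\backup1.sqlite"
Duplicati.CommandLine.exe test "s3://my-bucket/my-prefix?..." all --dbpath="C:\DuplicatiDBs\backup1.sqlite"
```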
To round out the saga, my ramdrive testing roughly coincided with the update from 2.0.7.103 to the 2.1.* .NET 8 versions and, I have to admit, I aimed most of my frustration at the new platform, to the point that I was seriously considering going back to the 2.0.* branch. The only thing that stopped me was that I didn’t see any similar issues raised on this forum.
Then, I just up and decided one day to kill the ramdrive. Lo and behold, things have improved.
Now, yes, I use the canary release branch so ‘caveat emptor’ is understood and accepted. I’ve been using the canary branch for years, largely without issue.
Yes, it could well be the ImDisk driver, though that would seem rather poor form for a disk driver, and all the more surprising considering its age and popularity. I’ve considered whether SQLite has a known aversion to ramdrive use, but found nothing. While I don’t think there’s a “slap-you-in-the-face” bug in Duplicati, I do wonder if there’s some sort of interaction with the async/await calls, or perhaps an overlooked missing await that only causes problems if a filesystem/DB call returns very fast. [I’m a .NET dev by day and, as a long-time avid user, would gladly donate spare time, if I had it.]
I don’t see how it should make a difference, but in case it does: I do all my Duplicati’ing as Duplicati.CommandLine.exe scripts, rarely using the web UI. Besides finding scripting much more convenient, it also makes capturing detailed (verbose by default) logs much easier; not that this helped much in this situation. A skeleton of one of the scripts is below. I have multiple backups run on various schedules and stored in Minio NAS storage (though I’m still using the S3 provider) and, as mentioned above, I regularly run full restoration tests for all backups.
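In case it helps, here’s roughly what those scripts boil down to (bucket, host, credentials, paths and option values are placeholders; the Minio endpoint is passed via the usual S3 provider options):

```
:: Skeleton of a scripted backup with verbose logging to a file
Duplicati.CommandLine.exe backup ^
  "s3://duplicati-bucket/pc1?s3-server-name=minio.local:9000&use-ssl=true" ^
  "C:\Data" ^
  --auth-username=%MINIO_KEY% --auth-password=%MINIO_SECRET% ^
  --passphrase=%BACKUP_PASSPHRASE% ^
  --dbpath="C:\DuplicatiDBs\pc1.sqlite" ^
  --log-file="C:\DuplicatiLogs\pc1.log" --log-file-log-level=Verbose
```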
I realise there’s a lot to pick apart here and thank you for reading this far.
I’d love to hear of anyone else’s experience with using a ramdrive with Duplicati.
Thanks