OOM killer with Duplicati and systemd-userdbd

Hello, I have updated Duplicati to 2.0.6.104_canary_2022-06-15 on Arch Linux from the AUR package duplicati-latest 2.0.6.104-3 (2022-09-10 18:50). Since then the process is killed from time to time by the out-of-memory killer, and there is also a strange interaction with systemd-userdbd: every time Duplicati runs a backup, the systemd-userdbd service crashes.
I had no problem before. Does anyone have an idea how I could debug this?
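
For reference, this is roughly how I have been collecting the logs so far (unit names as installed by the AUR package on my machine, adjust if yours differ):

journalctl -b -u duplicati.service --no-pager
journalctl -b -u systemd-userdbd.service --no-pager
# kernel side of the OOM kill; the exact message text may vary
journalctl -k -b | grep -i -A 15 "out of memory"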

OOM message:

Sep 13 15:17:39 xxx systemd[1]: duplicati.service: A process of this unit has been killed by the OOM killer.
Sep 13 15:17:39 xxx systemd[1]: duplicati.service: Main process exited, code=killed, status=9/KILL
Sep 13 15:17:39 xxx systemd[1]: duplicati.service: Failed with result 'oom-kill'.
Sep 13 15:17:39 xxx systemd[1]: duplicati.service: Consumed 2h 51min 2.407s CPU time.

systemd-userdbd message:

Sep 25 15:50:16 xxx systemd[1]: Starting User Database Manager...
Sep 25 15:50:16 xxx systemd[1]: Started User Database Manager.
Sep 25 15:50:19 vps-356d03eb systemd-userdbd[9303]: Worker threads requested too frequently, something is wrong.
Sep 25 15:50:19 xxx systemd[1]: systemd-userdbd.service: Deactivated successfully.
Sep 25 15:50:19 xxx systemd[1]: systemd-userdbd.service: Start request repeated too quickly.
Sep 25 15:50:19 xxx systemd[1]: systemd-userdbd.service: Failed with result 'start-limit-hit'.
Sep 25 15:50:19 xxx systemd[1]: Failed to start User Database Manager.
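
For now I just clear the start-rate counter by hand after a backup to get the user database back (a workaround only, assuming nothing else is wrong with the unit):

systemctl reset-failed systemd-userdbd.service
systemctl start systemd-userdbd.service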

Hello

As far as I can tell there is no memory leak in Duplicati 104. So either your system is so memory constrained that a small increase in memory requirements has pushed it over the limit, or your package has a defect (I use Ubuntu with the standard install), or you use a specific backend that has acquired a new problem. I can't say more without knowing your config; I tested with SFTP and OneDrive v2 and did not see a memory problem.

Duplicati's requirements are rather low: my container uses 300 MB (including all the system stuff; it's a system container, not Docker), 310 MB when the web interface is connected, and about 350 MB while backing up. So if you have, say, 500 MB of RAM free before launching Duplicati, you should be good for incremental backups at least (I did not check with huge backups; it's a test setup).
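
If you want to compare numbers, something along these lines shows the headroom on the host and what systemd accounts to the unit (MemoryCurrent assumes memory accounting is enabled, which is the default on recent systemd):

free -h
systemctl show duplicati.service -p MemoryCurrent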

Either way, if you have a memory problem, it's no wonder that the Duplicati service could restart so often that systemd notices it and shuts down systemd-userdbd (the start-limit-hit in your log).
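
If it does turn out to be memory pressure, one experiment (not a fix) would be to cap the service with a drop-in, so that hitting the limit only affects Duplicati's own cgroup instead of destabilizing other units. A sketch, assuming cgroup v2, that the unit really is called duplicati.service, and with placeholder values:

# systemctl edit duplicati.service
[Service]
# soft limit: the kernel starts reclaiming from the unit above this
MemoryHigh=500M
# hard limit: beyond this the OOM killer acts inside the unit's cgroup
MemoryMax=600M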

I'm not sure about the memory. At the end of the backup process, during the delete phase, it uses a lot of memory, and I notice that when the backup is finished, mono does not release the memory completely:

ps aux --sort -%mem | head -3

USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root      118327  4.8  9.9 2502220 388908 ?      Ssl  20:22   6:49 /usr/bin/mono /opt/duplicati-latest/Duplicati.Server.exe --webservice-port=8200
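
I am also experimenting with capping mono's managed heap through a drop-in, though I am not sure it is the right knob, and max-heap-size only limits the GC heap, not the total RSS:

# systemctl edit duplicati.service
[Service]
# cap the SGen GC heap; 512m is an arbitrary test value
Environment=MONO_GC_PARAMS=max-heap-size=512m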

That's not a lot of precise information, I'm afraid.

The ~380 MB of RSS in your output can't be judged as a lot or not without knowing the backend; maybe you use a backend with big libraries. Or are you talking about the VSZ? That figure doesn't mean much.
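
If you want to give more precise numbers, the kernel's own accounting for the process says more than the ps summary; for example, with the PID from your output (it changes after every restart):

grep -E 'VmRSS|VmHWM|VmSwap' /proc/118327/status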