I was hoping to avoid that level of logging… Heh.
The job I’m running is this (sanitized):
docker run \
--rm \
--name repair-job \
--hostname repair-job \
--memory 1024m \
--memory-swap="1536m" \
--cpus="0.75" \
--volume duplicati_data_storage:/data/duplicati \
--volume /path/to/data/on/nfs/mount/:/path/in/container/:ro \
--volume /opt/databasedumps:/opt/databasedumps:rw \
--volume /var/www/app:/var/www/app:ro \
--volume /root/scripts:/root/scripts:ro \
duplicat-in-docker-image \
duplicati-cli repair googledriveuri \
--backup-name="blah" \
--dbpath=/data/duplicati/NamedDB.sqlite \
--passphrase=password \
--disable-module=console-password-input \
--rebuild-missing-dblock-files \
--debug-output=true \
--debug-retry-errors=true \
--console-log-level=profiling
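In case the console output gets unwieldy, I could also write the same profiling detail to a file by adding these two options to the command above (the log path is just an example pointing into the rw data volume):
--log-file=/data/duplicati/repair-profiling.log
--log-file-log-level=profiling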
I forked David Reagan / Duplicati-In-Docker · GitLab to my workplace's internal GitLab instance. (Yes, the original project is mine as well.) That's the image I'm running.
The SQLite DB lives on an NFS mount, as do the data uploads. The code is local to the VM.
I believe the SAN backing the NFS mount is the same one VMware (an ESXi cluster) uses for the VMs' drives, so I'd hope that isn't the bottleneck.
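To sanity-check that, I could run a quick sequential write test against the NFS mount from the VM (the path, file name, and size are just placeholders; conv=fdatasync makes dd flush to the server so the number is realistic):
# rough sequential write test on the NFS mount (placeholder path/file)
dd if=/dev/zero of=/path/to/nfs/mount/dd-test.bin bs=1M count=512 conv=fdatasync
rm /path/to/nfs/mount/dd-test.bin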
Is there anything at the profiling log level that would be sensitive? What I'm seeing is mostly hashes; no file names.
Once I’m sure the job is fully stalled, I’ll post some output.
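Before I post anything, I'll probably grep the captured log for strings I wouldn't want public; the patterns and log path below are just examples matching the flags I mentioned above:
# scan the captured log for obviously sensitive strings before posting (example patterns only)
grep -iE 'passphrase|repair-job|/var/www/app|/opt/databasedumps' /data/duplicati/repair-profiling.log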