Multiple backup issues

I have been running 2.0.5.1_beta_2020-01-18 for many months now with no issues on CentOS 7, backing up to Wasabi.
Over the weekend I had both backup jobs fail. I discovered that the root partition (50G) had filled up, most of it being the sqlite files. One was ~2GB and the other ~30GB. (Should they get this big?)
I found another partition on the server with several TB free, so I went into each job's database configuration and changed the local database path to that partition. The files moved successfully.
My backup jobs are still failing, and even a database repair fails.
One job runs for a bit and then says “The requested folder does not exist”.
Partial log info here:
2020-09-15 07:18:29 -06 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started: ()
2020-09-15 07:18:30 -06 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed: ()
2020-09-15 07:20:42 -06 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation List has started
2020-09-15 07:20:58 -06 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Test has started
2020-09-15 07:20:58 -06 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started: ()
2020-09-15 07:20:58 -06 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed: ()
2020-09-15 07:21:03 -06 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Test has started
2020-09-15 07:21:03 -06 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started: ()
2020-09-15 07:21:03 -06 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed: ()
2020-09-15 07:21:05 -06 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Test has started
2020-09-15 07:21:05 -06 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started: ()
2020-09-15 07:21:05 -06 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed: ()
2020-09-15 07:23:16 -06 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Repair has started
2020-09-15 07:23:16 -06 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started: ()
2020-09-15 07:23:17 -06 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed: ()
2020-09-15 07:28:38 -06 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Repair has started
2020-09-15 09:29:40 -06 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started: ()
2020-09-15 09:29:41 -06 - [Retry-Duplicati.Library.Main.BackendManager-RetryList]: Operation List with file attempt 1 of 5 failed with message: The requested folder does not exist
Duplicati.Library.Interface.FolderMissingException: The requested folder does not exist —> Amazon.S3.AmazonS3Exception: The specified bucket does not exist —> Amazon.Runtime.Inte$
at System.Net.HttpWebRequest.GetResponseFromData (System.Net.WebResponseStream stream, System.Threading.CancellationToken cancellationToken) [0x00146] in <8a0944092ae944f79161e3ab1$
at System.Net.HttpWebRequest.RunWithTimeoutWorker[T] (System.Threading.Tasks.Task1[TResult] workerTask, System.Int32 timeout, System.Action abort, System.Func1[TResult] aborted, $
at System.Net.HttpWebRequest.GetResponse () [0x00016] in <8a0944092ae944f79161e3ab1237b7dd>:0
at Amazon.Runtime.Internal.HttpRequest.GetResponse () [0x00000] in :0

The second job shows the error “The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again”

The log only shows that the job started and then an email was sent.

When I try a database repair on this job, I get “No files were found at the remote location, perhaps the target url is incorrect?”

I was able to create a new job and run it on a test directory (using the same bucket/key/secret) and it worked fine.

Any ideas where I can go from here?

Welcome to the forum @backup_sam

Running out of space while writing the sqlite files is worrying, because it’s unknown what state they were left in.
Have you examined the job configurations to make sure they still have the right access info for Wasabi?
You can check in a variety of ways, such as Edit (and look through the screens), Commandline, or Export.

The latter two will give you the target URL to use with Duplicati.CommandLine.BackendTool.exe for careful testing;
avoid changing any files that begin with your backup’s prefix (default is duplicati-).
This might let you figure out the folder access or target URL issue without drastic steps.
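
As a rough sketch (the mono path, bucket, folder, and credentials below are all placeholders; take the real target URL from your Export output), a read-only list would look something like this:

  # Placeholders throughout - substitute the target URL shown by
  # Export > As Command-line for your job.
  # LIST only reads, so it will not touch the duplicati-* files in the bucket.
  mono /usr/lib/duplicati/Duplicati.CommandLine.BackendTool.exe list \
    "s3://my-bucket/my-folder?s3-server-name=s3.wasabisys.com&auth-username=ACCESS_KEY&auth-password=SECRET_KEY"

If that list fails the same way the job does, the problem is in the destination settings or the bucket itself rather than in the local database.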

What was going on during that partial log? It looks like a Test connection on the Destination screen.
The problem is that there were so many in a row, and no Get operations (though those can be configured away).

Assuming the backup destination is good despite the possible hard stop, the Recreate button could reconstruct the database from the backup files. Large backups may take a while, and the worst case is downloading everything; the best case downloads just the dlist and dindex files (you can watch the progress in the live log).
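
If the web UI is awkward to drive on the CentOS box, the CLI repair command is a rough equivalent of the Recreate button when no database exists at --dbpath. This is only a sketch; the install path, URL, database path, and passphrase below are placeholders to replace with your own values from Export > As Command-line:

  # Placeholders throughout. With no file at --dbpath, repair rebuilds the
  # local database from the dlist/dindex files on the destination.
  mono /usr/lib/duplicati/Duplicati.CommandLine.exe repair \
    "s3://my-bucket/my-folder?s3-server-name=s3.wasabisys.com&auth-username=ACCESS_KEY&auth-password=SECRET_KEY" \
    --dbpath=/data/duplicati/JOBNAME.sqlite \
    --passphrase="BACKUP_PASSPHRASE"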

Duplicati-server.sqlite should be a small DB with the configuration. Job DBs can be kept trimmed:

  --auto-vacuum (Boolean): Allow automatic rebuilding of local database to
    save space.
    Some operations that manipulate the local database leave unused entries
    behind. These entries are not deleted from a hard drive until a VACUUM
    operation is run. This operation saves disk space in the long run but
    needs to temporarily create a copy of all valid entries in the database.
    Setting this to true will allow Duplicati to perform VACUUM operations at
    its discretion.
    * default value: false

  --auto-vacuum-interval (Timespan): Minimum time between auto vacuums
    The minimum amount of time that must elapse after the last vacuum before
    another will be automatically triggered at the end of a backup job.
    Automatic vacuum can be a long-running process and may not be desirable
    to run after every single backup.
    * default value: 0m

but after a Recreate (if it works), the DB size should be nice and small. Limiting versions also helps.
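
Either option can be added on the job’s Options screen under Advanced options, or you can run a one-off vacuum from the CLI. Again just a sketch, with placeholder paths and URL:

  # Placeholder paths/URL. The vacuum command compacts the local job database;
  # the ongoing equivalent is --auto-vacuum=true on the job, optionally with
  # something like --auto-vacuum-interval=1W to limit how often it runs.
  mono /usr/lib/duplicati/Duplicati.CommandLine.exe vacuum \
    "s3://my-bucket/my-folder?s3-server-name=s3.wasabisys.com&auth-username=ACCESS_KEY&auth-password=SECRET_KEY" \
    --dbpath=/data/duplicati/JOBNAME.sqlite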

Thank you very much for taking the time to reply. After a few days of struggling, I finally got the jobs running again, and it appears there is no data loss. The job is still running after a couple of days of catching up; it is currently working on uploading ~7TB of backup data. The sqlite database is currently ~40GB, keeping 30 days’ worth of backups. Is this sqlite file size to be expected for the backup size and retention time we have set up?

Database size is a function not only of your source data size, but also of the number of files and the blocksize you’re using. 7TB is a very large source, so a larger blocksize is usually recommended; it helps keep the database smaller. (Unfortunately you cannot change that setting without starting over.)
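
As a back-of-the-envelope sketch (assuming the default 100KiB blocksize; per-block database overhead varies, so treat this only as an order-of-magnitude guide), the number of blocks the database has to track looks like this:

  # ~7TiB of source data split into fixed-size blocks:
  echo $(( 7 * 1024 * 1024 * 1024 / 100 ))   # ~75 million blocks at the default 100KiB
  echo $(( 7 * 1024 * 1024 ))                # ~7.3 million blocks at --blocksize=1MB

Every block is tracked in the local database, so a roughly 10x larger blocksize means roughly 10x fewer rows to store and index.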

This is good to know for the future. I will keep an eye on the database size over time.
Thanks to all of you for creating this software, and to everyone who helps maintain it and support the community!
