How to recover from "There is not enough space on the disk"?

Hello,

I’ve had a hunt through Google and the results I could find there, but I’m still unable to easily recover from a NAS getting full.

My backup has failed due to “Failed: There is not enough space on the disk.”… I guess I was too optimistic over how many versions I could fit on a disk.

Once in this state, how do I fix it so it runs? If I change the number of versions down to, for example, 4, I still get the same error when I click run.

If I delete some random old files from the NAS (as I tried the last time the disk got full) then all hell breaks loose and it is easier to just delete everything and start the backup from scratch! (Note: don’t delete old files from backup sets manually!)

I even attempted to follow some forum post about running a command line to delete snapshots older than $date but I couldn’t get it to work. The command would take ages to run but never delete anything.

What is the simplest way to get the backup into normal working state again? Can I set a flag for it to just delete whatever old versions can no longer fit on the disk?

Sorry if this seems like something that is already answered somewhere else - if it is please link me to it… I’ve failed to find it in my searching.

I’m still no further with this. I’ve now managed to work out the correct delete command:

.\duplicati.commandline.exe delete "file://\\10.142.115.140\ServerBackup/" --keep-versions=1 --passphrase=REDACTED --dbpath="C:\Users\administrator\AppData\Roaming\Duplicati\QYHDYVWMQL.sqlite"

However, it still fails to do this, as I only have 192 MB free on the storage.

Any idea how I can get Duplicati to do whatever work it needs in a temporary location on another drive, so it can free up space on the full NAS? It seems frustrating that you can get into an unrecoverable situation where you can’t delete old versions once a drive or destination becomes full.
The destination is a drive used only for Duplicati 2 backups, so there are no unrelated files that can be deleted or moved to make space temporarily.

I’d be grateful for any advice.

What are the hardware specs of the NAS and the Windows machine running Duplicati?

We also need to know how you are using Duplicati (i.e. what are you backing up, and where are you backing it up to).

Thanks for the reply.

Not sure how the specs would change what happens, but it is a standard Buffalo-branded NAS using CIFS / standard Windows file sharing.

The Duplicati machine is a Xeon running Windows Server 2008 R2.
Not sure how it matters what I’m backing up, but… it’s a bunch of user files and databases.

PS C:\Duplicati> .\duplicati.commandline.exe delete "file://\\10.142.115.140\ServerBackup/" --keep-versions=1 --passphrase=REDACTED --dbpath="C:\Users\administrator\AppData\Roaming\Duplicati\QYHDYVWMQL.sqlite"
  Listing remote folder ...
Backend quota is close to being exceeded: Using 464.42 GB of 464.62 GB (192.43 MB available)
Update "2.0.4.5_beta_2018-11-28" detected

Unhandled Exception: System.AggregateException: One or more errors occurred. ---> System.IO.IOException: Not enough storage is available to process this command.

   at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
   at System.IO.__ConsoleStream.Read(Byte[] buffer, Int32 offset, Int32 count)
   at System.IO.Stream.<BeginReadInternal>b__a(Object param0)
   at System.Threading.Tasks.Task`1.InnerInvoke()
   at System.Threading.Tasks.Task.Execute()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.IO.Stream.EndRead(IAsyncResult asyncResult)
   at System.Threading.Tasks.TaskFactory`1.FromAsyncTrimPromise`1.Complete(TInstance thisRef, Func`3 endMethod, IAsyncResult asyncResult, Boolean requiresSynchronization)
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
   at System.IO.Stream.<CopyToAsyncInternal>d__2.MoveNext()
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
   at Duplicati.Library.AutoUpdater.UpdaterManager.RunFromMostRecentSpawn(MethodInfo method, String[] cmdargs, AutoUpdateStrategy defaultstrategy)
   at Duplicati.CommandLine.Program.Main(String[] args)

Moving to another storage provider is the basic recipe, but it might be easier to move something else instead. Duplicati needs some room in order to free up space, because it frees space with an operation called “compact”, described here for the command line, but probably more typically run automatically per Compacting files at the backend.
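
If you end up wanting to run it by hand, the command takes the same URL, passphrase and --dbpath as your delete (I haven’t tried this against your exact setup, so treat it as a sketch):

# manual compact: downloads partly-wasted volumes, uploads repacked replacements, then deletes the originals
.\duplicati.commandline.exe compact "file://\\10.142.115.140\ServerBackup/" --passphrase=REDACTED --dbpath="C:\Users\administrator\AppData\Roaming\Duplicati\QYHDYVWMQL.sqlite"

It still needs room for the uploads before the old files are removed, which is exactly the corner you’re in at the moment.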

There’s a log of what one I did looked like at the bottom of this post (giving some log data for a problem). You can see how it downloads data blocks (50MB default size, but I don’t know if you configured yours larger), then uploads the compacted result, then (and only then) deletes the original files whose contents it repacked.

Commonly a compact will run after a delete does (assuming your job options have a retention setting that allows deletions), but the deletion itself just records in the database that some space is no longer in use; the space analysis done by compact then decides whether it’s time to repack (which needs space in order to free up space…).
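
If you want to keep those two steps apart, adding --no-auto-compact to your delete should (as far as I understand it) do only the database bookkeeping, which needs very little destination space; the compact can then be run separately once there is some room:

# record the deleted versions only; no repacking happens, so almost no destination space is needed
.\duplicati.commandline.exe delete "file://\\10.142.115.140\ServerBackup/" --keep-versions=1 --passphrase=REDACTED --dbpath="C:\Users\administrator\AppData\Roaming\Duplicati\QYHDYVWMQL.sqlite" --no-auto-compact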

Block-based storage engine is a short piece on Duplicati’s design, if that helps to understand my description.

How to limit backups to a certain total size? might be useful to avoid future overfills, if your Duplicati is recent.
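
If I remember right, the option from that thread is --quota-size, which tells Duplicati to treat a figure you pick as the quota rather than whatever the backend reports, so the quota warning kicks in before the drive is actually full. A hypothetical example, with C:\Data standing in for your real source folders and 420GB leaving some headroom on a 464GB drive:

# hypothetical backup command: C:\Data is a placeholder source, 420GB is an example quota
.\duplicati.commandline.exe backup "file://\\10.142.115.140\ServerBackup/" "C:\Data" --passphrase=REDACTED --dbpath="C:\Users\administrator\AppData\Roaming\Duplicati\QYHDYVWMQL.sqlite" --quota-size=420GB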

Very much agreed, and even Duplicati has to follow the process I described. I hope things are in reasonably good shape after the disk filled, but the best path is probably to free up space and then see if things appear OK… Duplicati is usually fairly resilient to being interrupted mid-backup. On the next backup it checks what the destination storage actually looks like. This sometimes looks like a backup from scratch, but it’s not.

Using the Command line tools from within the Graphical User Interface is another way to force a delete when space is too tight to rely on the usual delete after the backup. After that, I guess you’d do a manual compact. But before that, give Duplicati room to upload compacted files, maybe by temporarily moving other NAS files.

The specs won’t change what happens, but they will help us understand why it is happening and possibly even give us ways to suggest how to alleviate the situation.

Are you using Duplicati to back up files on the Windows machine onto the Buffalo NAS, making the Buffalo NAS your backend?

I am asking all this because I recently had similar messages that were resolved once I changed the temp directory to somewhere else; I don’t have enough RAM for a big enough tmpfs, as I am using Duplicati on Debian. I was wondering if that might work for your case too. (Yes, I am aware that on Windows temp directories are not in RAM, but it could be something for you to look into.)
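
On Windows the equivalent experiment would be pointing --tempdir at a drive with more room, something like this (D:\DuplicatiTemp is just an example path - create it first):

# --tempdir moves Duplicati's local scratch files to another drive; it does not change the backup destination
.\duplicati.commandline.exe delete "file://\\10.142.115.140\ServerBackup/" --keep-versions=1 --passphrase=REDACTED --dbpath="C:\Users\administrator\AppData\Roaming\Duplicati\QYHDYVWMQL.sqlite" --tempdir="D:\DuplicatiTemp"

Of course that only helps if the failure is really about local temporary space rather than the NAS itself being full.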

I am finding this part strange too; I don’t recall seeing the DELETE command using more storage.

The delete command might use more storage temporarily while it shuffles files around in the destination.

What I think has solved it is… changing the file size from 250MB to 50MB and then running the delete command - it now actually seems to be doing something, reporting deletions and uploads, and some more free space has appeared!
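
For anyone else who lands here: the size I changed is the “Remote volume size” in the job settings, which I believe maps to --dblock-size on the command line, so the same change there would look roughly like:

# --dblock-size only affects newly written volumes; the existing 250MB files stay as they are until compacted
.\duplicati.commandline.exe delete "file://\\10.142.115.140\ServerBackup/" --keep-versions=1 --passphrase=REDACTED --dbpath="C:\Users\administrator\AppData\Roaming\Duplicati\QYHDYVWMQL.sqlite" --dblock-size=50MB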

As noted in my original question - the NAS is the backend and is only for this Duplicati backup - so there is no scope to delete or move other files to make space :( … really, what has happened is that my source has nearly outgrown the destination, so the NAS does need upgrading anyway… I just needed a working and up-to-date backup until then.

Thanks so much for the replies. Merry Christmas!

I think I’ve seen an automatic compact run after a delete. If it happened, it might have used up the last little bit of space.
The transition from delete to compact might be here. I suggest seeing if --no-auto-compact stops the filling. Increasing logging, e.g. (for recent Duplicati) --console-log-level=Information, will show actions such as:

2018-10-28 13:02:02 -04 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting not required
2018-10-29 13:01:24 -04 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting because there are 52.41 MB in small volumes and the volume size is 50.00 MB
2018-10-29 13:05:05 -04 - [Information-Duplicati.Library.Main.Operation.CompactHandler-CompactResults]: Downloaded 22 file(s) with a total size of 117.33 MB, deleted 44 file(s) with a total size of 117.49 MB, and compacted to 4 file(s) with a size of 74.83 MB, which reduced storage by 40 file(s) and 42.65 MB

or the actual uploads that claim the last space, so that we can better observe what was happening then…

Sorry I missed that statement before suggesting that plan. So it’s a physical drive, thus resizing is also out?

I see you’re already getting the quota warning I’d referenced. The default --quota-warning-threshold is 10%.
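
If the warning is arriving too late to be useful, you could also raise it so it fires earlier, e.g. (20 is just an example value; as I read the option, it means warn once less than 20% of the quota remains):

# example: warn when less than 20% of the reported quota remains (the default is 10)
--quota-warning-threshold=20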

The best outcome would be if --no-auto-compact helps. If not, the “migration” might be an option, especially convenient (for speed reasons) if the move can be done within the NAS instead of having to go over the network.

A tricky technical maneuver would be to try to take advantage of information here to see what files could be sacrificed, knowing the next backup will fix things. On Windows, you’d need --unencrypted-database to look.

Another tricky technical maneuver is to try to delete the products of the last failed backup; however, the risk there is that a compact will typically package old data into new files, which might then be deleted by mistake. The current situation is different because the uncompleted delete command already said to delete old versions.

How about looking at the restore dropdown to see if that actually did such a heavy trim? If not, maybe trim more lightly for starters (use a higher --keep-versions), add --no-auto-compact, and try to get heavier logging.
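
Putting that together, a gentler first pass might look something like this (4 is just the figure you mentioned earlier, and I haven’t run this against your backup):

# trim to 4 versions, skip the automatic compact for now, and log what it decides at Information level
.\duplicati.commandline.exe delete "file://\\10.142.115.140\ServerBackup/" --keep-versions=4 --passphrase=REDACTED --dbpath="C:\Users\administrator\AppData\Roaming\Duplicati\QYHDYVWMQL.sqlite" --no-auto-compact --console-log-level=Information

Once the restore dropdown shows the versions you expect and some space has been made available, a separate compact run can reclaim the rest.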