Why did it fail because of disk space?

Did you also get some big ones? ls -lhS | head might find them. Did you really mean to have the remote files that hold data blocks be 30GB? That's roughly 600 times the 50MB default, and by default you queue up 4 of them, per --asynchronous-upload-limit. That 120GB could fill your 60GB of free space. How large is the source?
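If you want to check what is actually eating the space, something like this might help. It's only a sketch: the /tmp path is an assumption (Duplicati builds its in-progress volumes in the system temp directory unless --tempdir points elsewhere), and the arithmetic in the comments just restates the worst case above.

```sh
# List the largest files in the suspect directory, biggest first.
# Duplicati's in-progress volumes usually sit in the system temp
# directory, with names starting with "dup-".
ls -lhS /tmp | head

# Worst case with your settings: 4 queued volumes
# (--asynchronous-upload-limit default) x 30GB (--dblock-size)
# = 120GB of temporary space, double your 60GB free.
```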

Choosing sizes in Duplicati talks about the remote volume size, whose option is named --dblock-size after the files it produces. Each dblock file gets a smaller companion dindex file. If you see nothing at the destination so far, you might still be building the first dblock, and a 30GB volume will take a while. Does your network monitoring show any upload?
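For reference, this is roughly how you would set a saner volume size from the command line. A sketch only: the destination URL and source path are placeholders, and duplicati-cli is the Linux launcher name. In the GUI it's the "Remote volume size" field on the Options page of the backup configuration.

```sh
# Placeholders for the destination URL and source path.
# 50MB is the default remote volume size; changing --dblock-size
# affects newly created volumes, not ones already uploaded.
duplicati-cli backup "s3://example-bucket/backup" "/home/user/data" \
  --dblock-size=50MB
```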

What storage type is the destination? Some show partially uploaded files, and others don't show them until they finish. Viewing the Duplicati Server Logs at Information level should show you when file uploads start or complete.
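If the destination happens to be a local or mounted folder, you can also watch it directly instead of relying on network monitoring. A small sketch with a placeholder path:

```sh
# Re-list the destination every 10 seconds; a growing partial file
# means an upload is in progress, and new dblock/dindex pairs mean
# volumes are finishing.
watch -n 10 'ls -lht /path/to/destination | head'
```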

I'm not sure what all those small files are. As you can see, the names aren't very informative. Temporary files get used for various purposes, and in some cases they are (unfortunately) left around instead of being deleted…
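If you want to see whether the leftovers are Duplicati's, the temp files it creates typically start with dup-. A hedged sketch, assuming the default system temp directory:

```sh
# Show Duplicati-style temp files with sizes and timestamps.
# Stale ones (while Duplicati is stopped) are usually safe to
# delete, but check the timestamps before removing anything.
ls -lht /tmp/dup-* 2>/dev/null | head
```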
