Backup /var/: backup larger than file system?


#1

I’m trying to back up /var/; when I add it, my backup size suddenly balloons to 100+TB. My total local storage is only about 1.9TB.

I’ve tried excluding gvfs, as I figured that was the culprit. But no luck there.

I assume this is a link issue of some sort, but I don’t know how to debug it.


#2

You might be experiencing the same issue as described here:

Unfortunately, we have yet to determine what the cause is, as it’s difficult to reproduce. What operating system and filesystem are you running? On my Arch Linux system with ext4, I can select /var as a source and it reports the size correctly. Most people who have this issue appear to be using a copy-on-write filesystem, but at least one user is on ext4.


#3

I wonder if your file manager can measure directory sizes? Start with /var; if it reports something huge, go into subdirectories until you find the cause. On Windows there are also tools for this, and TreeSize Alternatives for Linux has Linux ideas.

It turned out that my Linux Mint system came with baobab, a.k.a. Disk Usage Analyzer, which is mentioned in that list.

A non-GUI tool is du, which has numerous options for how links should be treated. I’m not sure about the other tools.
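For example, a rough sketch (GNU du flags; adjust the path as needed) of drilling down one level at a time and comparing how symlinks are counted:

$ sudo du -h --max-depth=1 /var | sort -h    # per-subdirectory totals, smallest to largest
$ sudo du -sh /var                           # default (-P): symlinks themselves are counted, not followed
$ sudo du -shL /var                          # -L: follow symlinks; links can inflate the total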

Files with sizes larger than the file system could be sparse files, but even for that, 100TB seems like a whole lot.
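Just to illustrate (a throwaway sketch; the filename is made up, and the size has to stay under the filesystem’s maximum file size, roughly 16TiB on ext4), a sparse file shows exactly this kind of mismatch between apparent size and disk usage:

$ truncate -s 15T sparse-test.img
$ ls -lh sparse-test.img    # apparent size: 15T
$ du -h sparse-test.img     # actual blocks on disk: 0
$ rm sparse-test.img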

Another possibility is that Duplicati’s counting is off. If this backup finishes anytime soon, that’s likely the case…


#4

I’m running Arch Linux. ext4 file system.

du -sh ./ says 11 gigs.

sudo du -sh ./*/ has an interesting entry. However, it shouldn’t be affecting me due to my excludes:

du: cannot access './run/user/1000/gvfs': Permission denied

Per the Arch wiki, this should be excluded from backups. GVFS = GNOME Virtual File System. I’ve got an exclude in place to account for it.
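If you want to double-check that, gvfs is a FUSE mount under /run/user/<uid>, not under /var, so it shouldn’t even be in scope for a /var backup. A quick way to confirm where it’s mounted:

$ mount | grep gvfs    # should show a fuse mount at /run/user/1000/gvfs, i.e. outside /var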

I killed the backup when it got that big. I’ll give it a shot now and see what happens.


#5

Ok, I did a (partial) backup of /var/, adding one subdirectory at a time.

Partial because I only got 3 directories in before things ballooned.

It was /var/lib that caused the size to balloon. du says it’s 5.6G; Duplicati says adding it adds 256TB. But it uploads in a few minutes, so I’m pretty sure it’s not actually uploading 256TB.

Also worth noting that the source size (displayed under the profile name) seems appropriate.


#6

If the high size sticks on future backups (it might be hard to tell because it might flash by too fast, but starting a new test backup would solve that), you could test subdirectories of /var/lib to isolate this further. Rather than wade through 70 (on my system – yours may have more) subdirectories, you could get a suspects list with

sudo find /var/lib ! -type f ! -type d -exec ls -lnd {} \; | less

basically looking for any folders containing something that’s not a plain old directory or file, and testing there first…

Simpler but indirect would be to add --apparent-size to du. Here I play with a nearly 16TB sparse file I created:

$ du -sh
4.0K	.
$ du -sh --apparent-size
16T	.

You probably don’t have an actual sparse file, but there could be similar filesystem oddities at play.


#7

I’m going to wait for this backup to finish before I start making test backups (this current backup is my only real backup; the true local backup hard drive is in the mail…).

However, I did just run du -sh --apparent-size on the entire directory, folder-by-folder. Nothing suspicious that I could find.
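(For reference, something like this covers every folder at one level in a single pass; it’s just a sketch, so point it at whatever directory you’re checking:)

$ sudo du -sh --apparent-size /var/*/ 2>/dev/null | sort -h    # sort by apparent size; the redirect hides error noise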

One thing that does seem odd is that it displays 256.25TB. Note that the actual size of my source is 256GB, which accounts for the 0.25TB.
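Sanity-checking that arithmetic: 256GB is 256/1024 = 0.25TB in binary units, so the displayed figure looks like the real source size sitting on top of a phantom 256TB.

$ echo "scale=2; 256/1024" | bc
.25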

Occasionally (after uploading one set of some recursive links or something, maybe), it displays 128.XXTB.

Is it just a weird coincidence that it’s 256 and 128 (maybe next would be 64 and 32)? Or is it just a factor-of-1000 error? EDIT: Nope on the 1000x. I increased my source files to a total of 285G, and Duplicati is now displaying 256.28TB.
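For what it’s worth, those figures are exact powers of two when expressed in bytes (assuming the display uses binary units): 256TB is 2^48 bytes and 128TB is 2^47 bytes, which at least agrees with the conclusion that it isn’t a decimal factor-of-1000 slip. A quick check in the shell:

$ echo $((256 * 2**40)) $((2**48))
281474976710656 281474976710656
$ echo $((128 * 2**40)) $((2**47))
140737488355328 140737488355328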