I have rather large DBs… about 40 different files; the largest is about 200 GB and the rest total about 120 GB. So… roughly 320 GB of .bak files.
I’m favoring about a 4 GB chunk size for each backup chunk; the default is 50 MB, but I read through their file-blobbing document and I think my bandwidth is solid enough to handle it. So far so good.
I think with ANY backup software the first run (potentially) takes a LONG time, and that seems to be the case here too. The initial OS snapshot of a 66 GB OS took about 13 hours; it compressed on disk down to 49 GB, and I was pleased with that.
The MSSQL backup has been running since I posted this (roughly) and still has 100 GB left to go. That’s backing up over the LAN to a server share that exposes a folder on a USB disk, which I’ll sneakernet (drive over) to my office’s NAS and then manually copy to the destination mount point and folder. After that I’ll switch the target from the local network share to SSH with the mount point and folder path, and we’ll see how she goes!
I think I’ll let it do two backups (this one and another tomorrow) and see what I get for total size. Again, I suspect the rdiff tech will only copy the diff, and with it being the holiday, DB volume is low for my client.
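To illustrate why the second backup should be much smaller: rdiff-style delta transfer only ships the pieces of a file that changed since the last run. This is a toy sketch of the idea (fixed-size block hashes compared against the previous run), not the backup software’s actual code — real rdiff/librsync uses rolling checksums so inserted bytes don’t shift every block, but the principle is the same. The function names and block size here are my own invention.

```python
import hashlib

# Hypothetical block size for the sketch; the real tool negotiates its own.
BLOCK = 4 * 1024 * 1024  # 4 MB

def block_hashes(data: bytes, block: int = BLOCK) -> list[str]:
    """Hash each fixed-size block of the file contents."""
    return [hashlib.sha256(data[i:i + block]).hexdigest()
            for i in range(0, len(data), block)]

def changed_blocks(old: bytes, new: bytes, block: int = BLOCK) -> list[int]:
    """Return indices of blocks that differ (or are new) versus the old copy.
    Only these blocks would need to cross the wire on the second backup."""
    old_h = block_hashes(old, block)
    new_h = block_hashes(new, block)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]
```

With a 200 GB .bak where holiday-week activity touched only a handful of blocks, only those blocks get copied, which is why the follow-up backup should finish in a fraction of the initial 13-hour run.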
Thanks again and I’ll update this when I get more results.