Increasing backup duration


#1

Server X:
I have about 1TB of data and a backup normally takes 5-6h, but for the last couple of days it has been increasing; the last backup took 8h30m. Is it because I keep too many versions? Every day I get 100-200MB of new data, not more, and I now have 37 versions.
The Duplicati sqlite DB is now 13GB.

Server S:
33GB of data, new data not every day and less than 30MB; the backup time has increased from 3h30m to more than 6 hours, 10 versions.
The Duplicati sqlite DB is now 6.7GB.

Is it normal that it takes this much longer after a number of runs?


#2

Just now it hung with this error:

Failed: Insertion failed because the database is full
database or disk is full
Details: Mono.Data.Sqlite.SqliteException (0x80004005): Insertion failed because the database is full
database or disk is full

But I do have space:
150GB free in /, where the Duplicati log is, and 1.5TB free in /srv/dev-disk-by-label-Backup, where the backup files and tmpdir are,
and also more than 90% of inodes free.
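
In case it’s useful, here is a minimal sketch (plain Python; the two paths are the ones mentioned above) for printing the free space and free inodes on those filesystems:

```python
import os

# Quick check of free space and free inodes on the two filesystems from the
# post above: / (Duplicati log) and the backup disk (backup files + tmpdir).
for path in ("/", "/srv/dev-disk-by-label-Backup"):
    st = os.statvfs(path)
    free_gib = st.f_bavail * st.f_frsize / 1024**3
    inode_pct = 100 * st.f_favail / st.f_files if st.f_files else 100
    print(f"{path}: {free_gib:.1f} GiB free, {inode_pct:.0f}% inodes free")
```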


#3

After a new run it seems OK, no more errors.

Could this be related to my problem with the increasing backup time?

2018-04-02 - 2.0.3.3_beta_2018-04-02

  • Added a new retention policy and UI which allows backup versions to decrease over time

#4

Hi @mmiat, welcome to the forum!

Assuming you’re using the standard block size of 100KB, 100-200MB of new data a day would mean around 1k-2k new block hash lookups per day.

Depending on your system specs and the size of the sqlite database for your backup job, that could be what’s taking so long. Of course, if you’ve got a slow connection to your destination, that could be the issue as well (but it would have to be pretty slow to take 8 hours for 200MB).
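
For reference, a back-of-envelope sketch of that estimate (100KB is Duplicati’s default block size; the daily figures come from the first post):

```python
# Rough count of new blocks (and hash lookups) per day, using the default
# 100KB block size and the 100-200MB/day of new data mentioned above.
block_size = 100 * 1024              # 100KB default block size
for new_data_mb in (100, 200):
    new_blocks = (new_data_mb * 1024**2) // block_size
    print(f"{new_data_mb}MB of new data -> ~{new_blocks} new blocks/day")
# -> roughly 1000-2000 new block hash lookups per day
```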


#5

I’ve investigated; here are some results:
14/03: 6h
19/03: 7h (about 800MiB of new data, 30GiB of modified data)
26/03: 8h (about 4GiB of new data, 30GiB of modified data)
03/04: 11h (about 2GiB of new data, 30GiB of modified data)
Backups are scheduled from Monday to Friday; I’ve listed the sum of the data for those days (not very precise).
The block size is 100MB.
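
For what it’s worth, a rough sketch of the effective processing rate those figures imply (new plus modified data divided by run time; the inputs are the approximate values listed above):

```python
# Effective processing rate per run, from the approximate figures above
# (new + modified data in GiB, run time in hours).
runs = [("19/03", 0.8 + 30, 7),
        ("26/03", 4 + 30, 8),
        ("03/04", 2 + 30, 11)]
for date, gib, hours in runs:
    rate_mib_s = gib * 1024 / (hours * 3600)
    print(f"{date}: ~{gib:.0f} GiB in {hours}h -> ~{rate_mib_s:.1f} MiB/s")
```

That works out to around 1 MiB/s of changed data; since the destination appears to be a local disk, that would point more at per-block processing and database lookups than at transfer speed, if those figures are representative.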


#6

Technically it’s not a problem, but a 100MB block size is likely to use more bandwidth than a smaller size - at least with modified data.

On the flip side, with ~6GB of new/changed data every day (~32GB / 5 days?), a larger block size is likely what’s keeping your backups from being even slower.
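
As a rough illustration of that trade-off (a sketch using the totals from this thread, ~1TB of source data and ~6GB of daily change):

```python
# Block-count comparison for ~1TB of source data and ~6GB of daily change,
# at the default 100KB block size versus the 100MB size in use here.
total, changed = 1024**4, 6 * 1024**3
for label, block in (("100KB", 100 * 1024), ("100MB", 100 * 1024**2)):
    print(f"block size {label}: ~{total // block:,} blocks tracked overall, "
          f"~{changed // block:,} blocks touched per day")
```

Fewer blocks means a smaller local database and fewer hash lookups per run, but any change inside a 100MB block means re-uploading that whole block, which is where the extra bandwidth goes.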

Is your Duplicati sqlite database still at 13GB, or is it growing a lot every day/week as well?


#7

The sqlite DB is still 13GB.
Should I delete the backup and start over with a smaller block size? 10MB?


#8

If you have the space available, I’d suggest you leave the current backup in place and try making a second backup (into a different folder) with a different block size, and see how the performance of that one works for you.

The current “runs-too-long” backup is still good and can be restored from, so there’s no reason to delete it unless you’re out of space. You can always turn off the scheduling of the first backup for a while to let the second backup run a few times so you can compare performance.


#9

OK, I’ll try.
I’ve 7.4M files and 1TB of data; the average file size is 141KB. Which block size should I choose?
Thanks


#10

It’s tough to suggest a block size because the results can vary depending on your CPU, RAM, disk I/O, bandwidth, etc. However, did you read this page at all?

Is this 141KB average file size on server X (with the 13GB sqlite file after only 37 versions)?
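
To make that concrete, here is a small sketch relating the figures above (7.4M files, ~1TB, ~141KB average file size) to a few candidate block sizes; which one actually performs best still depends on the hardware and data, as noted:

```python
# Rough lower bound on the number of blocks to track for ~1TB across
# 7.4M files: ignoring deduplication, the block count is at least
# total/block and at least one block per (non-empty) file.
files, total = 7_400_000, 1024**4
for label, block in (("100KB", 100 * 1024),
                     ("1MB", 1024**2),
                     ("10MB", 10 * 1024**2)):
    lower = max(total // block, files)
    print(f"block size {label}: at least ~{lower:,} blocks")
```

By this rough measure, with files averaging ~141KB, block sizes much beyond ~1MB stop shrinking the block count, since most files already fit in a single block; beyond that point the differences come down to the hardware and how the data changes.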