What caused your backup to go from 4 TB to 800 GB?
It seems he changed retention, cutting down the number of versions stored in the backup.
That would mean there was more than 3 TB of wasted space after the retention-policy cleanup. Highly unlikely, no? It depends on the type of data, of course. But it could also be a wrong block size, I think. To be checked/confirmed by @StevenKSanford.
I went from 350+ versions down to 10. I had been saving every change for a few years. Seems reasonable to me?
Thank you… Steve
I have the same problem… Backblaze storage, the database is about 1 GB, with about 150 GB of source data. I can't complete a database repair over a weekend.
Could the repair task run on another computer? After completion, move the .sqlite file back to my server? I could export/import to replicate the backup setup, but the source data would not be present. Would this work?
One other question: there is 150 GB of source data and a similar volume of backup data at Backblaze… how much download and upload will there be to the Backblaze buckets during the repair process? If it will be 150 GB of traffic, then deleting and starting over is a good option.
I also have this problem. I back up my laptop (approx. 400 GB, Windows 10) to a NAS disk over a cabled (100 Mbit) connection when docked. The backup is 275 GB, with 11 versions using a custom retention policy.
Since June I have not been able to back up, because of a database problem.
I've tried to repair (no luck) and to delete and rebuild. Over the summer I managed to let it rebuild for about two weeks (without finishing) before having to cancel.
After reading this forum, I have looked at the OverallProgress status variable as a function of time (see table below). Rebuild is very quick from 0 to 90%, and then progresses extremely slowly from there.
Extrapolating from the last two points, it should take more than 200 days to complete… (a rough check of that arithmetic follows the table).
Duplicati is using around 20-30% of the CPU, with intensive disk usage but only sporadic network usage.
Can anyone explain this behavior? How can we provide information useful for figuring out what the basic problem is?
Is the database rebuild only needed for continued backup? I seem to have read somewhere on the forum that it is also needed as the first step in a complete restore… That would make a restore virtually impossible.
Thanks for comments and suggestions.
Date/Time | Time (h) | Progress |
---|---|---|
2019-09-17 11:51:00 | 0.0000 | 0.054545 |
2019-09-17 11:52:00 | 0.0166 | 0.109091 |
2019-09-17 12:00:00 | 0.1500 | 0.700000 |
2019-09-17 12:17:00 | 0.4333 | 0.800000 |
2019-09-17 12:28:00 | 0.6167 | 0.900071 |
2019-09-17 12:45:00 | 0.9000 | 0.900160 |
2019-09-17 13:56:00 | 2.0833 | 0.900604 |
2019-09-17 23:17:00 | 11.4333 | 0.904137 |
2019-09-18 10:54:00 | 23.0500 | 0.905735 |
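A quick check of that arithmetic (a sketch of my own, fitting a line through the last two rows; that fit is the optimistic floor, since the per-hour rate is itself still falling):

```python
# Last two rows of the table above: (hours elapsed, OverallProgress)
t1, p1 = 11.4333, 0.904137
t2, p2 = 23.0500, 0.905735

rate = (p2 - p1) / (t2 - t1)        # ~1.4e-4 progress per hour
days_left = (1.0 - p2) / rate / 24  # time to reach 1.0 if the rate held
print(f"optimistic linear estimate: {days_left:.0f} days")  # ~29 days

# The rate over this final interval is already ~2.7x lower than over the
# previous one (2.08 h -> 11.43 h); if that decay continues, the projection
# runs into hundreds of days, hence the 200+ day figure.
```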
What version are you using? There have been a few fixes recently with regards to lengthy database rebuilds, but they haven't made it to the beta channel yet.
I have found the rebuild progress bar % is very non-linear, so it's difficult to estimate time remaining.
If it slows down after 70%, it is likely Duplicati is downloading dblock files. There was a bug in older versions that would cause Duplicati to unnecessarily download ALL dblocks - a long process.
Even with that fix there are some situations where dblocks need to be downloaded. The latest canary version improves the live logging so you can see the current and total number of dblocks needed, making it easier to gauge progress.
If you are willing to upgrade to the newest canary you can see how much things improve for you…
Thanks for the feedback.
I am currently using version 2.0.4.23_beta_2019-07-14.
I'll try out the canary channel right away.
Great… Please let us know how 2.0.4.29 works for you.
And I'd probably switch back to the beta channel once the next one is released, unless you like living on the edge.
OK, new version is 2.0.4.29_canary_2019-09-17, and something has definitely changed.
I timed a new attempt at recreating the database:
Date/Time | Time (h) | Progress |
---|---|---|
2019-09-18 14:29:00 | 0.000000 | 0.072727 |
2019-09-18 14:35:20 | 0.105556 | 0.200000 |
2019-09-18 14:38:30 | 0.158333 | 0.401667 |
2019-09-18 14:40:20 | 0.188889 | 0.522277 |
2019-09-18 14:45:30 | 0.275000 | 0.700000 |
2019-09-18 14:49:30 | 0.341667 | 0.716667 |
2019-09-18 15:00:40 | 0.527778 | 0.750000 |
2019-09-18 15:05:15 | 0.604167 | 0.783333 |
It seemed to again slow down around 70% progress.
When I returned around 15:25, Duplicati seems to have finished (or quit?) the recreate phase and entered a backup phase. But it does not look like an incremental backup - it seems to be a full backup (all files).
Currently the status bar says: 221232 files (325.55 GB) left at 1.00 MB/s
(translated from the Danish interface).
Looking at the backup profile entry, it still says that the backup set holds 11 versions, and that the current action is Backup_ProcessingFiles.
The System log says:
lastPgEvent : {"BackupID":"3","TaskID":4,"BackendAction":"Put","BackendPath":"ParallelUpload","BackendFileSize":1100783698,"BackendFileProgress":1100783698,"BackendSpeed":895058,"BackendIsBlocking":false,"CurrentFilename":"C:\****.","CurrentFilesize":2500200424,"CurrentFileoffset":0,"CurrentFilecomplete":false,"Phase":"Backup_ProcessingFiles","OverallProgress":0,"ProcessedFileCount":21346,"ProcessedFileSize":104830596617,"TotalFileCount":240934,"TotalFileSize":440838404714,"StillCounting":false}
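Pulling the useful numbers out of that blob (a quick Python parse; the JSON string here is abbreviated to just the fields used):

```python
import json

# Abbreviated lastPgEvent from the log above
raw = '{"Phase":"Backup_ProcessingFiles","ProcessedFileSize":104830596617,"TotalFileSize":440838404714}'

event = json.loads(raw)
done = event["ProcessedFileSize"] / event["TotalFileSize"]
print(f'{event["Phase"]}: {done:.1%} of {event["TotalFileSize"] / 2**30:.0f} GiB processed')
# -> Backup_ProcessingFiles: 23.8% of 411 GiB processed
```

So it is roughly a quarter of the way through re-processing the files.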
I will let the backup run, and report back when it completes. At least I am no longer stuck in recreate mode.
The fact that it's running a backup means the db recreation succeeded! If you click on the backup set and then click Show Log you'll be able to see past job logs in more detail, including a db recreation.
I noticed this, too, after a full database rebuild on one of my systems. Checking the Live Log with high enough verbosity, I could see that it says the metadata changed on files, which triggers file reprocessing. I am not sure if this is intentional design or a bug.
The good thing is that Duplicati will realize it doesn't need to reupload blocks (unless file contents really did change since your last backup), so you'll probably find almost nothing changes on the back end. It will take a bit more time to complete this backup, but the next ones should be back to normal, where only new/changed files are processed.
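To illustrate what "realize it doesn't need to reupload" means, here is a simplified sketch (not Duplicati's actual code): files are split into fixed-size blocks - 100 KB by default - and any block whose hash is already in the local database is skipped.

```python
import hashlib

BLOCK_SIZE = 100 * 1024  # Duplicati's default --blocksize is 100 KB

def blocks_to_upload(path, known_hashes):
    """Yield only the blocks whose content is not already on the back end."""
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(chunk).digest()
            if digest not in known_hashes:  # already stored -> skip upload
                known_hashes.add(digest)
                yield digest, chunk
```

So a "metadata changed" reprocess re-reads and re-hashes everything (which is where the time goes), but almost nothing new actually gets uploaded.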
Correct! The log shows a successful recreation (the log gives a duration of 55 min), and the following backup completed after 1.5 hours. The backup set grew by only a few GB, so it did not do a full backup after all. Backup versions are now down to 7, which seems to reflect the retention policy and the fact that a long time passed with no backup (since June).
In short: Duplicati version 2.0.4.29_canary_2019-09-17 solved my problem with extremely slow database recreation!
Thanks to @drwtsn32 for comments and suggestions!
And thanks to the Duplicati developers for excellent software - Duplicati is now again my favorite backup solution.
PS: can I downgrade now to the latest beta version, or should I wait until the beta branch gets a new release (version number >=2.0.4.29)?
I realize now that @drwtsn32 already stated that I should wait for the next beta release. Thanks.
I think you're probably safe to change your update channel back to "Beta" now. It shouldn't offer you any updates then until the next Beta is released.
Andy
Thanks, have done so…
Recreate woes even on the latest canary (105.1) and mono (6.8.0.105). It zooms along to 70%, slows to a still-acceptable 5% per 15 minutes, but then gets to 90%. Now it is doing 1/100th of 1% every 15 minutes.
So it literally does 90% in less than 30 minutes, and the final 10% is projected to take 24+ hours per 1%, which translates to 10 days for the final 10% of the recreate.
The entire *.zip.aes directory is less than 250 GB and my db is currently 703 MB. It bumps up by a small fraction every time the completion percentage bumps.
Something else from the server state properties tab: my BackendSpeed is less than 1, drops to zero, then back up to 0.9, then drops to zero, etc. Most of the time when it "recycles" back to 0.9, the completion percentage changes by 1-2 thousandths of a percent. Can someone explain what might be happening here?
Is there a way to get the server state properties on a command line, so I can grep out these values?
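In the meantime I am polling it myself. A sketch of what I'm using, assuming the REST endpoint the web UI appears to call (/api/v1/serverstate on the default port 8200, with no UI password; both are guesses for other setups):

```python
import json
import time
import urllib.request

URL = "http://localhost:8200/api/v1/serverstate"  # assumed endpoint/port

while True:
    with urllib.request.urlopen(URL) as resp:
        state = json.load(resp)
    pg = state.get("lastPgEvent") or {}  # progress fields, as in the log above
    print(time.strftime("%F %T"),
          "progress:", pg.get("OverallProgress"),
          "speed:", pg.get("BackendSpeed"),
          flush=True)  # grep-friendly: one line per sample
    time.sleep(60)
```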
Can you verify? I don't recognize that version number. The latest canary is 2.0.5.104, is that what you mean?
Sorry, 2.0.5.104 is what I've installed on both Windows and Ubuntu.
If you see it "stick" at 90% that usually means it has to download dblocks in order to rebuild the database. Unfortunately this can be a lengthy process.
Can you check About -> Show Log -> Live Log -> Verbose while this database rebuild is in progress? It should show something like "processing dblock X of Y" to give you an idea of the progress. You may need to leave it on this Live Log screen for several minutes to see this information slowly scroll by.
I understand how that could take a bunch of time and I will check the logs. But I am running the recreate on the machine where the dblocks are, and "destination" is set as a local filesystem. Shouldn't that fly? Or is there some kind of throttle built-in?
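To rule raw decryption speed in or out, I may time a local decrypt of one dblock. A sketch using the third-party pyAesCrypt package (it reads the AES Crypt format Duplicati writes); the path and passphrase are placeholders:

```python
import time

import pyAesCrypt  # pip install pyAesCrypt

SRC = "/backups/duplicati-b1234.dblock.zip.aes"  # placeholder dblock name
DST = "/tmp/test.dblock.zip"

start = time.time()
pyAesCrypt.decryptFile(SRC, DST, "my-passphrase", 64 * 1024)  # 64 KiB buffer
print(f"decrypted in {time.time() - start:.1f} s")
```

If one ~50 MB dblock (the default volume size) decrypts locally in a second or two, the bottleneck is elsewhere - most likely the database work, not the "download".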