Also using 126.96.36.199 on a QNAP device?
I’m seeing the same thing here.
I have a large backup that I’ve just changed a large number of files in. I set it running overnight, and planned to stop it today to allow my regular ‘daily’ backups to run. I clicked ‘Stop after upload’ about 6 or 7 hours ago now, and it’s still backing up.
Running 188.8.131.52_beta_2018-11-28 on Debian if it helps.
No. Don’t even know what that is… Windows client, Duplicati, Backup to Google Drive. Latest Canary .14 - “Stop after upload” does nothing, but a full stop instantly interrupts.
I’ll try to check this myself, but after choosing “Stop after upload”, check the job log’s Remote tab and see whether 2 more uploads start.
My GUESS is it’s an issue with the multi-threading updates a few versions ago. Something like:
- thread one starts an upload
- thread two starts a compress
- stop after upload requested
- thread one finishes upload
- stop request rejected because thread two still busy (when instead it should be left in process queue so every thread can respond to it)
If that’s the case, then using Duplicati with single threads MIGHT not exhibit the issue.
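The theory above boils down to how the stop request is delivered: if it’s a one-shot message consumed by a single worker thread, the other workers never see it, whereas a shared cancellation flag lets every thread respond. A hypothetical sketch of the shared-flag approach (this is not Duplicati’s actual code, just an illustration of the idea):

```python
import queue
import threading
import time

# Shared cancellation flag: every worker checks it, rather than one worker
# consuming a one-shot "stop" message that the others never see.
stop_requested = threading.Event()
work = queue.Ueue() if False else queue.Queue()

def worker():
    while not stop_requested.is_set():
        try:
            item = work.get(timeout=0.05)
        except queue.Empty:
            continue
        time.sleep(0.01)          # simulate an upload or a compress step
        work.task_done()
    # Exits promptly once the shared flag is set, regardless of what
    # the *other* thread happens to be doing.

for i in range(50):
    work.put(i)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

time.sleep(0.1)
stop_requested.set()   # "stop after upload": visible to ALL threads, not just one
for t in threads:
    t.join()

print("all workers stopped:", not any(t.is_alive() for t in threads))
```

With a one-shot message instead of the `Event`, the scenario in the bullets above falls out naturally: thread one consumes the stop, thread two is still compressing, and the request is effectively lost.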
Of course that’s all just a theory, I haven’t looked at the Stop code yet.
Any news on this? I have just installed Duplicati today and with initial backups (Many TB) not being able to stop after upload is making the program unusable to me…
Many Thanks in advance, looking great apart from this!
Any news? I’m currently reading:
" [Local] Bilder : Starting backup …"
It’s a 200 GByte backup, and I cannot stop the “Starting backup” phase at all without killing the Duplicati process. It feels like it insists on counting files, or whatever it does…
If it decided to stop the process at some point, there still could be subsequent jobs waiting…
So, killing duplicati.exe makes me feel uncomfortable.
-> The inability to see or edit the current backup queue is still a big thing IMO.
Probably a lot of work for you hard working guys I guess
Thanks for your, and @mbc9’s, interest. Unfortunately, I don’t think much progress has been made on this.
As an “annoying but (likely) doesn’t actually break anything” issue this is probably lower in the priority list than you would like.
That being said, having more details might help narrow down what needs to be done.
For example, it sounds like @Tapio wants to be able to stop during the file scanning process (at the beginning of the process), while @mbc9 wants to be able to stop after the current upload (in the middle of the process).
Did I get that right?
I ask because “stop after upload” uses a different stop process than “stop now”.
Well, mostly yes: during file scanning, which takes ages with many files. Typically a scheduled job is running, but I wanted to start another backup job. The queue is quite opaque, as you know. And unstoppable.
EDIT: Though it must be said, “stop after upload” is also totally broken when used while a backup is running. The top progress bar says “stopping after upload”, while it just continues backing up multiple files and gigabytes. This is in a disk-to-disk scenario.
Is there any news here? I’m also missing “stop after upload”; being able to see queued tasks would be nice, as would avoiding duplicate queued runs of the same task, which looks silly.
Cannot stop backup in current Beta on ‘Stop after upload’ has news on an effort that is still in progress.
Direct link to the work area:
I’m curious, is there a general consensus on what the “stop after upload” behavior should be? Do you want the already uploaded files to be immediately restorable? Or, is the intent more of a “long pause”, where the next backup would resume the interrupted one?
If the former, should this “incomplete” version be considered by the retention settings? I’m guessing that if one clicks “stop after upload” during the first of thousands of files, surely they don’t want this incomplete version to be kept in favor of prior complete versions?
“Stop” and “Stop After Upload” should leave the database in a healthy state. In either case, if a file is only partially uploaded when the job is stopped, then the next backup job that runs should consider that file to be not backed up yet and should back it up. But there’s no need to re-upload the chunks that have already been backed up. The important thing is that the job should stop quickly and the local database should do everything it’s supposed to do so that there’s no missing information or any need for a database repair.
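The “no need to re-upload the chunks” part falls out naturally from content-addressed blocks: the next run hashes each block and skips any hash the destination already has. A minimal sketch of that idea (hypothetical; the block size, hashing, and “destination” dict are illustration only, not Duplicati’s actual schema):

```python
import hashlib

BLOCK_SIZE = 4  # tiny for the demo; real block sizes are on the order of 100 KiB

def blocks(data, size=BLOCK_SIZE):
    """Split the source data into fixed-size blocks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def backup(data, uploaded):
    """Upload only blocks whose hash isn't already on the destination."""
    sent = 0
    for blk in blocks(data):
        h = hashlib.sha256(blk).hexdigest()
        if h not in uploaded:
            uploaded[h] = blk   # simulate the upload
            sent += 1
    return sent

uploaded = {}                                # destination state survives the stop
first = backup(b"abcdefgh", uploaded)        # interrupted run: 2 blocks uploaded
second = backup(b"abcdefghijkl", uploaded)   # resumed run: only the new block
print(first, second)  # 2 1
```

The resumed run re-hashes everything but only transfers what is missing, which is exactly the “stop quickly, lose no information, re-upload nothing” behavior described above.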
What about the following situation:
- Source has 10 files selected for backup.
- Backup begins, and blocks for 2 files have been uploaded. A stop (either now or after upload) is requested while processing the third file.
Should a backup version exist for this incomplete backup? Should the 2 files whose blocks have been uploaded be restorable before the next backup is run? Or, is the intent to simply avoid re-uploading blocks that have already been uploaded?
My thought is that a partial backup should not be recorded as a backup version, and should not be restorable. It simply allows the next backup the opportunity to resume the interrupted one and avoid having to re-upload a bunch of blocks. Does anyone imagine a different use-case?
“Expected there to be a temporary fileset for synthetic filelist” #2506 explains the current design intent, which was:
The “synthetic filelist” is created after a backup has been interrupted.
It is synthetic as it uses the previous (successful) backup, and then just adds whatever new/updated files it managed to upload.
I forget what date it will use, but it’s supposed to be posted at the start of the next backup. The post explains the current bug.
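Conceptually, the synthetic filelist is just the last successful version overlaid with whatever the interrupted run managed to finish. A hypothetical sketch (the dict-of-hashes representation is an assumption for illustration, not Duplicati’s actual format):

```python
def synthetic_filelist(previous, interrupted_uploaded):
    """Merge the last complete backup with what the interrupted run finished."""
    merged = dict(previous)              # start from the last successful version
    merged.update(interrupted_uploaded)  # overlay new/updated files that made it up
    return merged

prev = {"a.txt": "hash1", "b.txt": "hash2"}
# The interrupted run updated b.txt and added c.txt; d.txt never got uploaded,
# so it simply doesn't appear in this version.
partial = {"b.txt": "hash2-new", "c.txt": "hash3"}

result = synthetic_filelist(prev, partial)
print(result)
```

Files the interrupted run never reached keep their previous entries, so a restore from the synthetic version is self-consistent even though the run was cut short.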
I’d be happy either way. If the next backup job just continues where the aborted job left off, that sounds ok because it should only be broken for a day or so until the next backup job completes.
But, it seems slightly safer to allow us to restore anything that has been backed up. What if we need to restore one of those files, which is technically backed up and sitting on our backup server, but just wasn’t part of a “complete server backup”? It would be sad if the backup software told us “you’re not allowed to restore that perfectly available data because I said so”. It should probably allow the restore but just warn us that we are restoring from an incomplete backup.
You might be able to get this manually in the short term if we fix the synthetic filelist bug and this stop bug. You’d typically restart the backup and it would check itself, upload the synthetic filelist, and resume the backup. For the hurry-up case, you’d stop the backup just after it makes the synthetic filelist, and restore using that. The question would be how tidy a state the stop leaves things in. We definitely don’t want the DB totally busted, but can it restore? A more ambitious plan might be to get some of the backup code below into other operations, but does it fit?
Enhancement-level operations would allow restores while the backup is still running, but that’s ambitious. Orchestrating things into a stable state (via awkward manual work or something better) might do for now.
I am using 184.108.40.206_beta_2019-07-14, and “stop after upload” does not work (during the initial file upload).
Immediate stop seems to work okay.
Short answer is 220.127.116.11 beta should work just like 18.104.22.168 beta from last November. It’s basically identical.
Fix ‘stop after current file’ #3836 should be in v22.214.171.124-126.96.36.199_canary_2019-09-17 and later, but there’s currently a backup corruption in canary and I’m not recommending it right now. There is no newer beta yet.
Hello, given that there is “currently a backup corruption in canary”, should I use 188.8.131.52 for any production environments? Can I install 184.108.40.206 over the current canary, or do I need to uninstall / reinstall? Should I delete existing backups and start over?
Good news is there have been many backup corruptions fixed after 220.127.116.11/23, but at least one bad bug slipped in. Generally I advise people not to put Duplicati (which is still in beta) in a position where loss of backup causes major issues, e.g. don’t archive-and-delete, and preferably have multiple backups using different software if the data is critical enough. Canary gives you known fixes plus unknown new issues.
Version upgrades also bring database changes sometimes, so you sometimes can’t just downgrade and expect the old version to know a DB format that didn’t exist when it was written. There’s a backup of your database made when the format changes, but the window for going back to it is small, as the backup DB doesn’t get updated with information from newer backups, and you might want some of your recent files.
Start-over is always an available but extreme option if things go wrong, and fits well with the idea of using Duplicati in ways that can forgive less-than-perfect reliability. The question is which bugs are worse? The 18.104.22.168 ones that I can think of mostly broke in obvious ways. The one (new and still not well-investigated) in 22.214.171.124 actually dates to 126.96.36.199 but is subtle enough to not notice with ordinary backups and restores because the database is good, but the backup on the destination isn’t quite right. Fortunately the error is in a dindex file which is one of the things that is less valuable, compared to dblock (data) or dlist info on files.
Disaster recovery (e.g. source drive is lost) or a DB Recreate might be affected. You can watch for news, and I hope there will be a new canary out soon with a fix (may be easy or difficult depending on the cause).
You could check your logs or emails to see if you’re getting retried uploads, e.g. from flaky Internet issues. Backing up to local folder should be fine. Servers on a LAN might be the next best on the few-retries front.
If 188.8.131.52 is working seemingly well for you, and especially if you have some ability to withstand an issue (should one occur), then staying on 184.108.40.206 might be best. Also see the advice above and consider your backup design.