Hello,
I created a backup job to upload 950GB of files from my Windows Server 2012 server to Azure Blob. The backup task started on Friday and was still running on Monday, so to avoid saturating the internet link during office hours, I paused the task. At night I logged back into the server to resume it, but instead of being paused as I had left it, the task had finished. It had only copied 375GB. I started the task again manually so Duplicati would finish copying the 930GB, but it doesn’t; it simply makes incremental backups of the 375GB that were already copied. In other words, my backup is incomplete.
How can I solve this? Can’t it see that it still has a lot of files left to copy?
It is possible that the backup finished properly. Are you expecting all 950GB to be on the back end (in the Azure storage account)? Duplicati uses deduplication and compression, so depending on the nature of your source data it may well take less space on the back end.
In the web UI, click on your job, then click “Show log”, then expand the most recent entry. You should see some statistics for that run.
But in the logs the total size of files examined is well below 930GB, which is the total size of the files stored on the server’s disk, so I found it very strange.
I’m attaching the images with statistics from the first backup.
The warnings indicate that some files could not be processed, for example:
2021-09-30 09:42:47 -03 - [Warning-Duplicati.Library.Main.Operation.Backup.FileBlockProcessor.FileEntry-PathProcessingFailed]: Failed to process path: E:\PUBLIC\DIRECTORY.SIENGE\MIGRATION PROJECT \Units\Limits and Confrontations~$Lotemaneto Life Plan.xlsx
Looks like it can’t back up some files, either due to a permission issue or the file being open. How did you install Duplicati? As a service? If so, you probably just need to enable the snapshot-policy option.
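For reference, a rough sketch of what that option looks like if the same backup were run from the command line (in the web UI the equivalent is adding snapshot-policy under the job’s Advanced options). The install path, storage URL and source path below are placeholders/assumptions, not your real values:

```powershell
# Sketch only: run the backup with VSS snapshots enabled.
& "C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" backup `
    "<storage-url>" "E:\PUBLIC" `
    --snapshot-policy=auto   # "auto" tries a snapshot and continues on failure; "required" fails the backup instead
```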
I didn’t install it as a service; I just downloaded the latest version of Duplicati and installed it.
I’ve really missed running it as a service; I was going to ask about that. When I log off from the server, the app closes. On one occasion when I logged off and logged back in right away, I noticed that a backup had stopped.
Yeah, for a server you definitely want to install it as a service.
The default context for a service is to run it under the LocalSystem account, which stores its profile data in a different location. What this means is you’ll need to do some work to move your Duplicati configuration over (after you switch to a service).
Alternatively, you could configure the service to run as the same user that you installed Duplicati as. This would retain your current configuration. But if this is a user account where you change the password regularly, it may not be a good idea. You’ll have to remember to reconfigure the service logon password each time it’s changed. It’s really up to you.
Try searching the forum for installing Duplicati as a service. There are also some threads on migrating from a user install to a service-based install, if you want to have the service use LocalSystem. If you use my “alternative” idea of running the service as the same user, then you can skip that part.
After you install as a service you should remove the normal Duplicati TrayIcon from the startup folder; otherwise it will start up a second instance of Duplicati.
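One possible way to set it up, assuming the default install path and that the Windows service gets registered under the name “Duplicati” (both are assumptions; run this from an elevated prompt):

```powershell
# Register Duplicati as a Windows service (runs as LocalSystem by default).
& "C:\Program Files\Duplicati 2\Duplicati.WindowsService.exe" install

# Optional: have the service log on as the user you installed Duplicati under,
# so it keeps using your existing configuration instead of LocalSystem's profile.
# That account also needs the "Log on as a service" right.
sc.exe config Duplicati obj= ".\YourUser" password= "YourPassword"

# Start the service.
sc.exe start Duplicati
```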
I deleted the backup job and the files stored in the Azure blob. I created a new task to run at 6pm tonight; I hope it copies everything.
On Monday I will see the evolution.
I made the snapshot-policy adjustment you suggested.
Not yet
As I don’t have a good grasp of the subject, I’m keeping the remote server session open, so I’ll look at this issue later.
At the moment I’m more concerned with backups, as I’m out of copies and I don’t want to take risks.
Ok, well unless Duplicati runs with “elevated” permissions, the snapshot-policy option won’t work (per the documentation). One way to achieve elevation is to configure it as a service. There are other ways to do it, too.
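If you’re unsure whether the process you start Duplicati from is actually elevated, one quick check (a sketch, not Duplicati-specific) is to look for the high integrity level group in the token:

```powershell
# An elevated process carries the "High Mandatory Level" group (SID S-1-16-12288);
# if this prints nothing, the current console is not elevated.
whoami /groups | findstr "S-1-16-12288"
```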
Hello, sorry for the delay. I had given up on the tool, but as there is no better one, I ended up going back to it.
Last week I installed Duplicati again, logged in as Administrator, and ran a backup manually; I saw that it read the 920GB at the source, which is right. So I canceled it and set up a backup scheduled for 8:00PM under another user I created, backup.service, which is the account that I and the other analysts will use when they need to access backups. Even though I made this account a domain administrator, when the backup ran at the scheduled time it read only 280GB at the source.
This is very strange; the tool could be simpler and just run and copy all the data, regardless of the logged-in user.
A Domain Administrator account does not necessarily have the same rights to files as the local Administrator user. It really depends on group memberships and the NTFS ACL. For instance you might have files where the user “Administrator” was granted access and not the group “Administrators”. Even if Domain Admins is a member of the local Administrators group (usually the standard on Windows domains), it still may not have access to the files.
UAC comes into play as well. If UAC is in its default configuration and Duplicati is not run in an elevated state, its administrator token is filtered by Windows. Make sure Duplicati runs elevated and double-check the ACLs wherever Duplicati can’t read data files.
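To inspect the ACLs, something like this is enough (the path here is just the parent folder taken from your warning line; substitute whatever folders Duplicati is failing on):

```powershell
# Lists which users/groups actually have NTFS rights on the folder, so you can
# compare them against the account Duplicati runs as.
icacls "E:\PUBLIC\DIRECTORY.SIENGE"
```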
Maybe you should install it that way then. By default it would run as SYSTEM, get good local file access, and not be killed when you log out of the account that a non-service Duplicati would run under.
To see the details of the warnings, you can watch About → Show log → Live → Warning. Possibly many are access permission errors. Maybe some are locked files, but as a service you could turn snapshot-policy on.
The logged-in user that works the Duplicati web UI in a browser won’t matter once you set up the service. What matters for file access is the user that Duplicati runs as, and Windows is going to insist on that.
Duplicati only really worked with the admin user, so I’ll leave it that way.
But one question: here it shows that it read 966.62GB of data, but the backup shows 651.35GB. Did it fail to read and copy some files, or does it mean it is using compression?
That’s the benefit of “de-duplication”. When Duplicati “looks” at your files it breaks them into little pieces, then it looks at all those pieces and says: I already have one of those pieces, so I’m only going to keep one copy of it and just make a reference to that piece anywhere else it exists. The more copies of the same data you have, the more benefit there is.
And you also get compression, unless a file’s extension is one of the known already-compressed types.
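Just to illustrate the idea (this is not Duplicati’s actual engine, only a toy sketch with a made-up file path), splitting a file into fixed-size blocks and counting the distinct ones shows why the “examined” size can be much larger than what gets stored:

```powershell
# Toy deduplication sketch: split a file into ~100KB blocks (roughly Duplicati's
# default block size), hash each block, and count how many are actually unique.
$path      = "E:\PUBLIC\example.bin"   # hypothetical file
$blockSize = 100KB
$sha       = [System.Security.Cryptography.SHA256]::Create()
$unique    = @{}
$blocks    = 0
$stream    = [System.IO.File]::OpenRead($path)
$buffer    = New-Object byte[] $blockSize
while (($read = $stream.Read($buffer, 0, $buffer.Length)) -gt 0) {
    $hash = [System.BitConverter]::ToString($sha.ComputeHash($buffer, 0, $read))
    $unique[$hash] = $true   # identical blocks collapse to one entry
    $blocks++
}
$stream.Close()
"$blocks blocks read, $($unique.Count) unique blocks would actually be stored"
```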
If you can figure out how many files you expect to have backed up, compare that to the log (see the first reply).
You can also look at the restore tree, or use the find command (which is going to get more involved), for example:
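A sketch of what that might look like from the command line; the install path, storage URL and passphrase are placeholders, not your real values:

```powershell
# List the files contained in the latest backup version; "*" matches everything.
& "C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" find `
    "<storage-url>" "*" `
    --passphrase="your-passphrase"
```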
If you are still not confident that the files really got backed up, the best test is to simulate a disaster recovery, following “Restoring files if your Duplicati installation is lost”, on a different system that has none of the source files.
This test can also be done to a different folder on the original system (if space exists), however you need to use --no-local-blocks in the crude options entry screen; otherwise the restore-to-a-different-folder will pull data content from the original folders, and you won’t get full proof that everything actually came from the backup. A command-line version of that test might look like the sketch below.
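Again the install path, storage URL, restore folder and passphrase here are placeholders:

```powershell
# Disaster-recovery-style test restore into an empty folder.
# --no-local-blocks forces Duplicati to pull all data from the back end instead of
# reusing identical blocks it finds in the original source files.
& "C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" restore `
    "<storage-url>" "*" `
    --restore-path="D:\RestoreTest" `
    --no-local-blocks=true `
    --passphrase="your-passphrase"
```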