Is your Windows share mounted via CIFS/Samba? Apparently there are some known issues with CIFS mounts where client-side caching can return stale or truncated data, which would explain the kind of size and hash mismatches you're seeing.
If you are using CIFS, can you try disabling caching to see if that helps? I don’t think it will fix the hash mismatch you already have, but hopefully it will prevent it from happening again in future backups.
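For reference, disabling the cache is just a mount option. A minimal sketch, where the server, share, mount point, and credentials file are all placeholders for your own setup:

```
# One-off mount with CIFS client-side caching disabled
# (//nas/backup, /mnt/nas/backup and the credentials file are placeholders)
sudo mount -t cifs //nas/backup /mnt/nas/backup \
    -o cache=none,credentials=/etc/cifs-credentials,uid=1000,gid=1000

# Equivalent /etc/fstab entry:
# //nas/backup  /mnt/nas/backup  cifs  cache=none,credentials=/etc/cifs-credentials,uid=1000,gid=1000  0  0
```

Do note the caveat in the edit at the bottom of this post, though: cache=none can cut throughput drastically, so keep an eye on backup times after changing it.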
See below for possibly related issues:
GitHub issue (opened 16 Dec 2018, closed 19 Dec 2018):
- [x] I have searched open and closed issues for duplicates.
## Environment info
- **Duplicati version**: 2.0.4.5_beta_2018-11-28
- **Operating system**: Ubuntu 18.04
- **Backend**: S3 Compatible (Minio)
## Description
I did a backup of numerous files/directories and went to restore a number of them to validate restore capability. I started by removing a directory and then attempting to restore it, but the restore failed with numerous hash-compare issues. Looking for a minimal reproducible sequence, I then tried restoring a single file from that directory, and it came back fine.
I then wiped the directory and restored that same file plus another; the other came out fine, while the first had a hash issue. I repeated the same restore of the two files again (not changing anything) and this time the first file came back with a correct hash. For some reason it appears that only one file is restored successfully at a time, even if multiple are selected.
It seems to do fine restoring the two files if I direct it to restore to a different folder.
This is pretty basic capability, so this seems like a user error of some kind, but I'm not sure where. The setup is quite basic: no special options, just taking the defaults on everything when setting up the job. Any ideas on where to look?
## Steps to reproduce
1. Back up a directory with several files (each ~10MB in my case)
2. Remove the directory that was backed up (prep for the restore test)
3. Attempt to restore 2 files in that directory
4. After the hash failure, perform the same restore operation again
- **Actual result**:
Hash failures when multiple files need to be restored.
- **Expected result**:
A successful restore the first time.
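For what it's worth, the same multi-file restore can be exercised from the command line, which takes the UI out of the equation. A rough sketch, where the backend URL, passphrase, and file paths are made-up placeholders (on Windows the binary is Duplicati.CommandLine.exe rather than duplicati-cli):

```
# Restore two specific files from the latest version into a scratch folder.
# URL, passphrase and file paths are placeholders, not from the report above.
duplicati-cli restore "s3://bucket/backup?auth-username=ID&auth-password=KEY" \
    "/data/docs/file1.bin" "/data/docs/file2.bin" \
    --restore-path=/tmp/restore-test --passphrase=SECRET
```

Restoring into an empty --restore-path also matches the "works when restoring to a different folder" symptom described above, which is a useful data point in itself.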
GitHub issue (opened 15 Sep 2018):
- [x] I have searched open and closed issues for duplicates.
## Environment info
- **Duplicati version**: 2.0.3.3_beta_2018-04-02
- **Operating system**: Ubuntu 18.04.1
- **Backend**: Google Drive
## Description
Errors are generated and the backup does not complete.
## Steps to reproduce
1. Just let Duplicati run on a "twice a day" schedule
- **Actual result**:
```
Duplicati.Library.Interface.UserInformationException: Found 1 files that are missing from the remote storage, please run repair
  at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.String protectedfile) [0x001aa] in <ae134c5a9abb455eb7f06c134d211773>:0
  at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify (Duplicati.Library.Main.BackendManager backend, System.String protectedfile) [0x00066] in <ae134c5a9abb455eb7f06c134d211773>:0
```
- **Expected result**:
No errors and a successful backup.
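The exception message itself says what to try: run a repair. From the command line that would look roughly like the following, with a placeholder backend URL and passphrase:

```
# Reconcile the local database with what is actually on the remote storage.
# The googledrive URL and the passphrase are placeholders.
duplicati-cli repair "googledrive://backup-folder?authid=AUTHID" --passphrase=SECRET
```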
GitHub issue (opened 7 Jan 2017, labeled "bug"):
I have:
- [x] searched open and closed issues for duplicates
### Version info
**Duplicati Version:** 2.0.1.35_experimental_2016-12-13
**Operating System:** Kubuntu 16.04.1 LTS (GNU/Linux)
**Backend:** local folder based on cifs-autofs mount
### Bug description
Installed Duplicati on January 4th (2017). The day after (the 5th), I immediately got some error messages about invalid file sizes. The latest errors from today (they vary sometimes) were:
```
Operation Get with file duplicati-b1dfc153c23894fe6b96d2bf2638abc94.dblock.zip.aes attempt 5 of 5 failed with message: The file duplicati-b1dfc153c23894fe6b96d2bf2638abc94.dblock.zip.aes was downloaded and had size 65536 but the size was expected to be 52367709
System.Exception: The file duplicati-b1dfc153c23894fe6b96d2bf2638abc94.dblock.zip.aes was downloaded and had size 65536 but the size was expected to be 52367709
at Duplicati.Library.Main.BackendManager.GetForTesting (System.String remotename, Int64 size, System.String hash) <0x41a18080 + 0x0011f> in <filename unknown>:0
at Duplicati.Library.Main.Operation.TestHandler.DoRun (Int64 samples, Duplicati.Library.Main.Database.LocalTestDatabase db, Duplicati.Library.Main.BackendManager backend) <0x41a13510 + 0x00f8f> in <filename unknown>:0
```
And finally (after 5 retries):
```
Failed to process file duplicati-b1dfc153c23894fe6b96d2bf2638abc94.dblock.zip.aes
System.Exception: The file duplicati-b1dfc153c23894fe6b96d2bf2638abc94.dblock.zip.aes was downloaded and had size 65536 but the size was expected to be 52367709
at Duplicati.Library.Main.BackendManager.GetForTesting (System.String remotename, Int64 size, System.String hash) <0x41a18080 + 0x0011f> in <filename unknown>:0
at Duplicati.Library.Main.Operation.TestHandler.DoRun (Int64 samples, Duplicati.Library.Main.Database.LocalTestDatabase db, Duplicati.Library.Main.BackendManager backend) <0x41a13510 + 0x00f8f> in <filename unknown>:0
```
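Those errors come from the post-backup verification step; the same check can be run by hand, which helps distinguish a file that is genuinely truncated on the share from one that is merely read back wrong through the CIFS cache. A sketch with a placeholder destination URL and passphrase:

```
# Re-download a sample of backend files and verify sizes/hashes against the
# local database; "all" tests every file. The file:// URL (pointing at the
# CIFS mount) and the passphrase are placeholders.
duplicati-cli test "file:///mnt/nas/backup" all --passphrase=SECRET --full-remote-verification
```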
I have twice "repaired" the database, but after file verification the errors persisted. This might be related to #1583, but since so many different bugs are mentioned in that issue, I thought it better to open a fresh/clean new one. I am not sure whether I should recreate the database; will I lose some information about my recent backups?
More relevant information:
1. Since the backups have only been running for a few days, I am happy to delete all backup files and start over if necessary. The current risk is not that large.
2. The backup files are written to a local folder on my system (`/mnt/nas/backup`). This is a mount point for a CIFS share, mounted with autofs, so the underlying file system is not ext4 or something, but a share over SMB (see the example map entry after this list). Volume size is set to the default 50MB.
3. Currently it is backing up my home directory with various exclude filters: 1.89 GB source size, with a backup of 165.31 MB / 3 versions.
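For setups like the one in point 2, the cache=none option can also go straight into the autofs map. A minimal sketch, where the server name, share, and credentials file are placeholders:

```
# /etc/auto.master — mount CIFS shares on demand under /mnt/nas
/mnt/nas  /etc/auto.nas  --timeout=60

# /etc/auto.nas — map entry with client-side caching disabled
# (fileserver, the share name and the credentials file are placeholders)
backup  -fstype=cifs,cache=none,credentials=/etc/cifs-credentials  ://fileserver/backup
```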
### Steps to reproduce
- Instantiate backups
- Run them for a few days
- Notice the failure messages in the backup log
**Actual result:** After backups, the logs contain failure messages.
**Expected result:** No failures should happen during the backup process.
### Debug log
I do not have a live log available; I can capture one on request. For now I only have the stored backup log.
Important edit: Since marking this as the solution, I’ve discovered that CIFS with cache=none is extremely slow. I noticed my backups were taking longer than normal, so I ran a dd test with one of my shares mounted via NFS and then the same test with CIFS (with cache=none):
NFS (14.3 MB/s, not great but OK):
```
root@home:/media/duplicati_a# dd if=/mnt/nas/Backup/home-backup.tar.gz of=/media/duplicati_a/testfile status=progress
816660992 bytes (817 MB, 779 MiB) copied, 57.0038 s, 14.3 MB/s
```
I d…
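For anyone repeating this comparison, the CIFS side of the test is the same dd invocation pointed at the cache=none mount; the mount point below is a placeholder:

```
# Same copy test, written through a CIFS mount with cache=none.
# /mnt/cifs_backup is a placeholder for your own CIFS mount point.
dd if=/mnt/nas/Backup/home-backup.tar.gz of=/mnt/cifs_backup/testfile status=progress
```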