Came back from a long break to find lots of new releases, but I’m still a little confused by all these channels, so I will stick to Canary for now. Maybe once there is a proper release on the Release channel I’ll revert back to that for most systems, and use Beta for the ones I want to test out.
Looking at the release notes, is the auto stop/start of the service by the installer only done if it finds the service running, and left untouched if the service is already stopped? My scripts stop the service themselves and often perform steps after the installation before they restart the service. If the installer is always going to start it, that’s going to break things for me. One example where this could also be a problem: the script’s service stop fails, the installer performs the stop instead, and therefore still restarts the service afterwards. Maybe an install parameter like “NOSTART” to prevent this?
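For context, the flow my scripts rely on is roughly this (simplified sketch; the service name and installer file name are just placeholders for my environment):
net stop Duplicati
msiexec /i duplicati-2.x.x.x.msi /qn
rem ... post-install steps run here, before the service comes back ...
net start Duplicati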
Yes, the first check is to see if the service is running.
Only if the service is detected as running will it be stopped, with a restart attempted afterwards.
This sounds like it is compatible with your setup?
We could add something like the FORSERVICE=1 parameter to disable this, but it generally should work fine in the automatic mode.
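If we did add it, it would presumably be passed like any other installer property, e.g. (hypothetical, since the parameter does not exist yet):
msiexec /i duplicati-2.x.x.x.msi FORSERVICE=1 /qn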
The experimental+beta release is based on canary 2.0.9.111, with some of the smaller fixes baked in.
Yes, that is generally how it is. The Canary builds are always ahead, except when progress was too slow, so we did not get a canary out before the beta was ready.
I upgraded 5 Windows servers from duplicati-2.0.9.111 and all were successful. Services all restarted and there was no sign of duplicated installer entries, but in the past that issue skipped a release, so we’ll see what happens next time.
I’ve hit an issue trying to recover files. I was using Duplicati on my main Windows server to do a direct restore from the backups of another machine, which happen to be stored in a local folder as this is my backup storage server. When it tries to access the backups it fails with:
Error
The archive entry was compressed using LZMA and is not supported.
I know that at some point before the .NET 8 work I enabled LZMA for my backups. I don’t remember why, but I soon removed it and reverted to the defaults.
I did see it mentioned that LZMA was removed, but now I’m left with backups that I cannot restore.
I don’t recall ever getting any warnings whilst using Duplicati with the LZMA option enabled, and I have 9 servers, 18 locally stored jobs and 9 S3-stored jobs that would have been affected by this. I don’t think I can recompress them all. I have all the emailed backup job reports going back to 23rd February 2023, and none of them mention LZMA until the recent issue while testing 2.0.9 back in August, when an S3 backup started failing.
That comment refers to 7z (I think). I’m thinking the issue here involves the options below. 2.0.9.111 claims:
--zip-compression-method (Enumeration): Set the ZIP compression method
Use this option to set an alternative compressor method, such as LZMA. Note that using another value than Deflate will cause the option --zip-compression-level to be ignored.
* values: None, Deflate, BZip2, LZMA, PPMd, GZip, Xz, Deflate64
* default value: Deflate
but also new:
--zip-compression-library (Enumeration): Toggles the zip library to use
This option changes the compression library used to read and write files. The SharpCompress library has more features and is more resilient where the built-in library is faster. When Auto is chosen, the built-in library will be used unless an option is added that requires SharpCompress.
* values: Auto, SharpCompress, BuiltIn
* default value: Auto
EDIT 1:
I tested this. For direct restore, putting the option shown above into screen 2 makes it work for me.
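To be explicit, that is (with the value taken from the list above):
--zip-compression-library=SharpCompress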
EDIT 2:
How many other zip-compression-method values are going to fall into this? People do choose others.
Yes! I was confusing 7z/LZMA2, which is removed, with zip/LZMA, which is NOT removed.
Fortunately that is not required; I was confusing the two LZMA versions.
The update in 2.0.9 and 2.1.0 is to use the .NET Zip library, as it is much faster but has more limited functionality.
The implementation is intended to automatically detect what you want and choose the best option. If you set any of the compression options not supported by the .NET Zip library, it will automatically choose SharpCompress as the compression library.
In your case, you are not setting any of these options, and the intention is to gracefully fall back to SharpCompress if the file cannot be processed by .NET Zip. This automatic fallback is clearly failing for you, but you can set the library manually.
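That is, set the library option quoted above:
--zip-compression-library=SharpCompress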
This will disable the .NET Zip library and only use SharpCompress, which supports LZMA.
Anything other than --zip-compression-method=Deflate will switch to the SharpCompress library.
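For example, with the values listed above, a job set to --zip-compression-method=LZMA will end up on SharpCompress even with --zip-compression-library left at Auto, while the default --zip-compression-method=Deflate stays on the built-in library.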
The problem with @Taomyn’s setup is that there is existing data with LZMA but the current setup is using deflate, so it uses ZipCompression, and fails to read the files.
Sorry for any confusion, but at least it doesn’t seem so bad now. I did just test some restores of data from back in June/July for a couple of machines and they are working; the issue only happens if I try to use direct restore, which I hadn’t tested in a while. I then tried using “--zip-compression-library=SharpCompress” as mentioned and can confirm that works.
BTW, just a minor point: when you then press Connect (which to me makes no sense, why “connect”?), it shows “Starting backup”, which again makes no sense if I’m trying to restore.
Still, thanks for the help. All my installs are successfully updated to this release and appear to be working fine.
This gets reported from time to time, but surprisingly there is no open GitHub issue.
This was my code-level theory. Regardless, are you seeing any sort of regression?
It’s probably best to have the release note posts focus on newer issues in Duplicati.