Release: 2.1.0.117 (Canary) 2025-04-25

2.1.0.117_canary_2025-04-25

This release is a canary release intended to be used for testing.

Major changes in this version

This update addresses a few edge cases where a crash or other problem could cause the database to become out of sync.

Support for “archived” files makes Duplicati more compatible with lifecycle rules that move files into longer-term storage.
The support is currently added for S3 and Azure Blob Storage, and works by not attempting to test files that are archived.
Compacting still needs to be turned off if using lifecycle rules where files are inaccessible.
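The idea can be sketched roughly like this (names and storage-class values here are illustrative, not Duplicati's actual code): verification simply skips any remote volume whose storage class marks it as archived.

```python
# Hypothetical sketch: skip archived remote volumes during verification.
# The storage-class names are examples of S3 / Azure archive tiers.
ARCHIVED_CLASSES = {"GLACIER", "DEEP_ARCHIVE", "Archive"}

def files_to_test(volumes):
    """Return only volumes that are directly readable (not archived)."""
    return [v for v in volumes if v.get("storage_class") not in ARCHIVED_CLASSES]

volumes = [
    {"name": "duplicati-b1.dblock.zip", "storage_class": "STANDARD"},
    {"name": "duplicati-b2.dblock.zip", "storage_class": "GLACIER"},
]
print([v["name"] for v in files_to_test(volumes)])  # only the STANDARD volume
```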

The version of the local database is updated to v15 to support the archive timestamp.

A new tool, Duplicati.CommandLine.DatabaseTool.exe / duplicati-database-tool, has been added, which can downgrade databases.
The tool can be used prior to installing an older version, and will downgrade the database to the previous version (14 in this case).

The tool can target individual databases, but defaults to processing all databases in the storage folder.

Throttle options were broken by a recent update. The fixed version now shares the throttle between all transfers.
Previously, setting a 10 KB/s limit would cap each individual stream at 10 KB/s; now the setting throttles the combined upload speed (each stream gets a fraction of the limit).
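The difference can be sketched with a minimal shared token bucket (hypothetical names, not Duplicati's actual implementation): all streams draw from one combined budget instead of each owning a full-size bucket.

```python
import threading

class SharedThrottle:
    """Sketch of a throttle shared by all transfer streams (assumed design)."""

    def __init__(self, bytes_per_second):
        self.capacity = bytes_per_second
        self.tokens = bytes_per_second
        self.lock = threading.Lock()

    def try_take(self, n):
        """Consume n bytes of the shared budget; False if it would exceed it."""
        with self.lock:
            if self.tokens >= n:
                self.tokens -= n
                return True
            return False

    def refill(self):
        """Called once per second to restore the combined budget."""
        with self.lock:
            self.tokens = self.capacity

# Two streams share one 10 KB/s budget: together they cannot exceed it.
throttle = SharedThrottle(10_000)
assert throttle.try_take(6_000)      # stream A
assert not throttle.try_take(6_000)  # stream B: combined total would exceed 10 KB
assert throttle.try_take(4_000)      # stream B gets the remainder
```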

Detailed list of changes:

  • Added additional libICU compatibility versions
  • Fixed an issue where dlist files would be uploaded ahead of time, causing failures on crashes
  • Fixed an issue where compacting would sometimes kick off incorrectly
  • File backend now throws the correct exceptions
  • Ignore errors when closing and attempting pragma optimize
  • Support reloading the page when restoring
  • Added support for storage with archival options, like Glacier and Azure Cold storage
  • Updated warning for cases where new files are found
  • Added a database downgrade tool
  • Added option to set SQLite cache size
  • Fixed throttle not working, and made it a shared throttle
  • Fixed an issue where the backup metadata could be cleared on save

Ngclient changes:

  • Continue button now works on editing backup
  • Overlapping display on scrolling fixed
  • Fixed repeated background requests on some pages
  • Database screen now shows the database state
  • Improved support for resetting restore flow
  • Settings page is more reactive to changes
  • Server error messages are now shown
  • Numerous fixes and updates to the source picker page
  • Sort order and time display are now persisted in the browser

I upgraded my Windows machines at the weekend and all seemed well, but then this morning, after rebooting one machine (Win Server 2025), I can no longer connect to the GUI. The ports are being listened on, but in the event log I see this the whole time the service is running:

Log Name:      Application
Source:        .NET Runtime
Date:          28/04/2025 10:23:09
Event ID:      1000
Task Category: None
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      LISA.mydomain.com
Description:
Category: Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware
EventId: 1
SpanId: 8f4bfbad679ca99a
TraceId: f0877dccb301e361f59ec40cb8e097df
ParentId: 0000000000000000
ConnectionId: 0HNC65NKAI0A1
RequestId: 0HNC65NKAI0A1:00000001
RequestPath: /api/v1/auth/signin

An unhandled exception has occurred while executing the request.

Exception: 
Duplicati.WebserverCore.Exceptions.InvalidHostnameException: Invalid hostname: lisa.mydomain.com
   at Duplicati.WebserverCore.Middlewares.HostnameFilter.InvokeAsync(EndpointFilterInvocationContext context, EndpointFilterDelegate next)
   at Duplicati.WebserverCore.Middlewares.LanguageFilter.InvokeAsync(EndpointFilterInvocationContext context, EndpointFilterDelegate next)
   at Microsoft.AspNetCore.Http.RequestDelegateFactory.<ExecuteValueTaskOfObject>g__ExecuteAwaited|129_0(ValueTask`1 valueTask, HttpContext httpContext, JsonTypeInfo`1 jsonTypeInfo)
   at Microsoft.AspNetCore.Http.RequestDelegateFactory.<>c__DisplayClass102_2.<<HandleRequestBodyAndCompileRequestDelegateForJson>b__2>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.WebserverCore.Middlewares.WebsocketExtensions.<>c__DisplayClass0_0.<<UseNotifications>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddlewareImpl.<Invoke>g__Awaited|10_0(ExceptionHandlerMiddlewareImpl middleware, HttpContext context, Task task)

Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name=".NET Runtime" />
    <EventID Qualifiers="0">1000</EventID>
    <Version>0</Version>
    <Level>2</Level>
    <Task>0</Task>
    <Opcode>0</Opcode>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2025-04-28T08:23:09.4122116Z" />
    <EventRecordID>51550</EventRecordID>
    <Correlation />
    <Execution ProcessID="12852" ThreadID="0" />
    <Channel>Application</Channel>
    <Computer>LISA.mydomain.com</Computer>
    <Security />
  </System>
  <EventData>
    <Data>Category: Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware
EventId: 1
SpanId: 8f4bfbad679ca99a
TraceId: f0877dccb301e361f59ec40cb8e097df
ParentId: 0000000000000000
ConnectionId: 0HNC65NKAI0A1
RequestId: 0HNC65NKAI0A1:00000001
RequestPath: /api/v1/auth/signin

An unhandled exception has occurred while executing the request.

Exception: 
Duplicati.WebserverCore.Exceptions.InvalidHostnameException: Invalid hostname: lisa.mydomain.com
   at Duplicati.WebserverCore.Middlewares.HostnameFilter.InvokeAsync(EndpointFilterInvocationContext context, EndpointFilterDelegate next)
   at Duplicati.WebserverCore.Middlewares.LanguageFilter.InvokeAsync(EndpointFilterInvocationContext context, EndpointFilterDelegate next)
   at Microsoft.AspNetCore.Http.RequestDelegateFactory.&lt;ExecuteValueTaskOfObject&gt;g__ExecuteAwaited|129_0(ValueTask`1 valueTask, HttpContext httpContext, JsonTypeInfo`1 jsonTypeInfo)
   at Microsoft.AspNetCore.Http.RequestDelegateFactory.&lt;&gt;c__DisplayClass102_2.&lt;&lt;HandleRequestBodyAndCompileRequestDelegateForJson&gt;b__2&gt;d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.WebserverCore.Middlewares.WebsocketExtensions.&lt;&gt;c__DisplayClass0_0.&lt;&lt;UseNotifications&gt;b__0&gt;d.MoveNext()
--- End of stack trace from previous location ---
   at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddlewareImpl.&lt;Invoke&gt;g__Awaited|10_0(ExceptionHandlerMiddlewareImpl middleware, HttpContext context, Task task)
</Data>
  </EventData>
</Event>


(A second, near-identical event with the same InvalidHostnameException was logged ten seconds later, at 10:23:19.)

No idea why it says the hostname is invalid; I have no issues resolving it, and nothing has changed on the server regarding it. I suppose it’s possible an update was applied by the reboot, but there were none pending when I restarted.

Slight discrepancy between the old and new UI


I don’t think it’s good that you need to hover over the slider to see the current value - this should just be visible.

Seems to send the entire dblock in one full-speed burst. Comments begin as post-fix follow-ups on the issue. Doubling the throttle-upload, this breaks Backblaze B2 below 16 MBytes/sec. asynchronous-concurrent-upload-limit is 1 or 4; read-write-timeout is 5m.

A minor ngclient oddity that might be debug code: the screen 1 footer says hello:

(screenshot)

The hellofalse above is in Add a new backup. If I type a name, it says hellotrue.

Just wanted to add that this eventually crashes the server; more specifically, the CPU goes to 100% and then nothing works, and the machine has to be force shut down. I found this out when I forgot to disable the Duplicati service and rebooted the server for another reason. An hour or so later my monitoring reported the CPU at a constant 100%.

If I reboot any of the other Windows servers also running this version, I’ll have to remember to keep an eye on it.

I’ve seen this a couple of times recently, but got it again today and it reminded me to ask: does Duplicati fix these files, or are they left in the old format so it will repeatedly warn me each time it comes across them again?

I did disable the option for LZMA some time back so there should be no new ones.

What steps do you take to make it valid? If you had it before, maybe it got lost?

--webservice-allowed-hostnames: The hostnames that are accepted, separated with semicolons. If any of the hostnames are "*", all hostnames are allowed and the hostname checking is disabled.
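For example (the launcher name shown is the Linux one; on Windows the same argument goes in the service's additional arguments, and the hostname list is of course your own):

```shell
# Allow a specific hostname plus localhost (semicolon-separated):
duplicati-server --webservice-allowed-hostnames="lisa.mydomain.com;localhost"

# Or disable hostname checking entirely:
duplicati-server --webservice-allowed-hostnames="*"
```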

Settings view of that:

I don’t know if this should be an enhancement request to that warning, but you can set:

--zip-compression-library=SharpCompress

Recompress in RecoveryTool can probably also stop it, but I don’t have an exact recipe.

Thanks for this; it allowed the service to start without the errors, and then I was able to connect to the GUI. I always had * for this, and when I checked that’s what it was, so I removed the parameter, restarted the service, and it was still OK. Or so I thought.

All the settings from “Update channel” down had been reset, with all the ones under “Default options” having been deleted. As I use basically the same settings on all my servers, I was able to copy/paste them back; thank goodness for “Edit as text”.

Running some tests, but I think it’s OK now, so later I will reboot one of the other Windows servers and see if the same happens to them. Still, why this would cause an error and then an eventual crash of the server is a bit worrying.

As for the compression issue, I’ll wait to see if @kenkendk has anything to comment, especially as these files are on my S3 storage, so replacing files isn’t as straightforward especially with the cost involved.

You use ngax terminology, but do you ever edit this in ngclient too? Layout is different:

ngax
Update channel
Usage statistics
Default options

ngclient
Update channel
Pause after startup or hibernation
Usage statistics
Advanced options

What does reset look like in Update channel? Ordinarily it defaults to what was installed:

If you can repro the upgrade you did (several had build bugs) to get failure, it’d be nice.
I saw a bug where ngclient dropped job schedule, but Settings are a different database.

No, I don’t make changes in ngclient for the time being - I didn’t think to check it when my settings got wiped so I don’t know what they were like. If it happens again I’ll be sure to do that.

Repair time check is no longer confused by time zones, but is confused by dlist retries:

The remote files are newer (5/5/2025 11:54:31 AM) than the local database (5/5/2025 11:54:29 AM), this is likely because the database is outdated. Consider deleting the local database and run the repair operation again. If this is expected, set the option “–repair-ignore-outdated-database”

An upload retry is done with a different name: random for dblock and dindex, 1 second higher for dlist. People normally wouldn’t notice a tiny time change, but a Repair notices.
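The dlist retry rename can be sketched like this (a rough reconstruction for illustration, not Duplicati's actual code): parse the UTC timestamp out of the filename and bump it by one second.

```python
from datetime import datetime, timedelta

def bump_dlist_name(name):
    """Sketch: rename a dlist volume for a retry by adding one second to the
    embedded UTC timestamp (dblock/dindex get random names instead)."""
    prefix, rest = name.split("-", 1)    # "duplicati", "20250505T155429Z.dlist.zip"
    stamp, suffix = rest.split("Z", 1)   # "20250505T155429", ".dlist.zip"
    t = datetime.strptime(stamp, "%Y%m%dT%H%M%S") + timedelta(seconds=1)
    return f"{prefix}-{t.strftime('%Y%m%dT%H%M%S')}Z{suffix}"

print(bump_dlist_name("duplicati-20250505T155429Z.dlist.zip"))
# duplicati-20250505T155430Z.dlist.zip
```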

Live log at retry level:

May 5, 2025 11:54 AM: Backend event: Put - Completed: duplicati-20250505T155431Z.dlist.zip (667 bytes)
May 5, 2025 11:54 AM: Backend event: Put - Started: duplicati-20250505T155431Z.dlist.zip (667 bytes)
May 5, 2025 11:54 AM: Renaming "duplicati-20250505T155430Z.dlist.zip" to "duplicati-20250505T155431Z.dlist.zip"
May 5, 2025 11:54 AM: Backend event: Put - Rename: duplicati-20250505T155431Z.dlist.zip (667 bytes)
May 5, 2025 11:54 AM: Backend event: Put - Rename: duplicati-20250505T155430Z.dlist.zip (667 bytes)
May 5, 2025 11:54 AM: Backend event: Put - Retrying: duplicati-20250505T155430Z.dlist.zip ()
May 5, 2025 11:54 AM: Operation Put with file duplicati-20250505T155430Z.dlist.zip attempt 2 of 5 failed with message: No connection could be made because the target machine actively refused it.
May 5, 2025 11:54 AM: Backend event: Put - Started: duplicati-20250505T155430Z.dlist.zip (667 bytes)
May 5, 2025 11:54 AM: Renaming "duplicati-20250505T155429Z.dlist.zip" to "duplicati-20250505T155430Z.dlist.zip"
May 5, 2025 11:54 AM: Backend event: Put - Rename: duplicati-20250505T155430Z.dlist.zip (667 bytes)
May 5, 2025 11:54 AM: Backend event: Put - Rename: duplicati-20250505T155429Z.dlist.zip (667 bytes)
May 5, 2025 11:54 AM: Backend event: Put - Retrying: duplicati-20250505T155429Z.dlist.zip ()
May 5, 2025 11:54 AM: Operation Put with file duplicati-20250505T155429Z.dlist.zip attempt 1 of 5 failed with message: No connection could be made because the target machine actively refused it.
May 5, 2025 11:54 AM: Backend event: Put - Started: duplicati-20250505T155429Z.dlist.zip (667 bytes)

Job → Show log → Remote:

  • May 5, 2025 11:54 AM: put duplicati-20250505T155431Z.dlist.zip
  • May 5, 2025 11:54 AM: put duplicati-20250505T155430Z.dlist.zip
  • May 5, 2025 11:54 AM: put duplicati-20250505T155429Z.dlist.zip

Fileset table Timestamp is 1746460469 3:54:29 PM

Repro:

Backup a short file to something easily turned on and off. I used FileZilla Server.
--no-backend-verification --upload-unchanged-backups
Turn server off then backup again, watching About → Show log → Live → Retry
Turn server on and let it finish uploading the time-incremented renamed dlist file.
Repair

Seeing ParsedVolumes makes me think it uses the retry-incremented dlist filename time.

EDIT 1:

Test for bug inspired by a user report where discussion on the theory might continue here.

EDIT 2:

Filed issue.

ngclient SFTP test fails. Dev tools show that the target URL being tested has an extra colon in it.

ngax ssh://localhost/test_3
ngclient ssh://localhost:/test_3

ngclient create looks fishy too.
ssh://localhost:/test_3

What’s actually in the Folder path is test_3 per general no-leading-slashes design.
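The symptom looks like naive string formatting of an optional port (a guess at the cause, not the actual ngclient code): an empty port value still emits the colon.

```python
def build_url_buggy(scheme, host, port, path):
    # Always emits the colon, so an empty port yields "host:/path".
    return f"{scheme}://{host}:{port}/{path}"

def build_url_fixed(scheme, host, port, path):
    # Only include ":port" when a port was actually given.
    authority = f"{host}:{port}" if port else host
    return f"{scheme}://{authority}/{path}"

print(build_url_buggy("ssh", "localhost", "", "test_3"))  # ssh://localhost:/test_3
print(build_url_fixed("ssh", "localhost", "", "test_3"))  # ssh://localhost/test_3
```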

EDIT 1:

Attempting to press on and recreate a database from files that SFTP actually has fails:

2025-05-06 19:42:57 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()
2025-05-06 19:42:57 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Backend.Handler-RemoteOperationList]: RemoteOperationList took 0:00:00:00.113
2025-05-06 19:42:57 -04 - [Retry-Duplicati.Library.Main.Backend.Handler-RetryList]: Operation List with file  attempt 6 of 5 failed with message: Unable to set folder to /:/test_3/, error message: The file path does not exist or is invalid.
Duplicati.Library.Interface.FolderMissingException: Unable to set folder to /:/test_3/, error message: The file path does not exist or is invalid.
 ---> Renci.SshNet.Common.SftpPathNotFoundException: The file path does not exist or is invalid.
   at Renci.SshNet.SubsystemSession.WaitOnHandleAsync[T](TaskCompletionSource`1 tcs, Int32 millisecondsTimeout, CancellationToken cancellationToken)
   at Renci.SshNet.Sftp.SftpSession.ChangeDirectoryAsync(String path, CancellationToken cancellationToken)
   at Duplicati.Library.Utility.Utility.WithTimeout(TimeSpan timeout, CancellationToken token, Func`2 func)
   at Duplicati.Library.Backend.SSHv2.SetWorkingDirectory(SftpClient connection, CancellationToken cancelToken)
   --- End of inner exception stack trace ---
   at Duplicati.Library.Backend.SSHv2.SetWorkingDirectory(SftpClient connection, CancellationToken cancelToken)
   at Duplicati.Library.Backend.SSHv2.ListAsync(CancellationToken cancelToken)+MoveNext()
   at Duplicati.Library.Backend.SSHv2.ListAsync(CancellationToken cancelToken)+System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult()
   at System.Linq.AsyncEnumerable.<ToListAsync>g__Core|424_0[TSource](IAsyncEnumerable`1 source, CancellationToken cancellationToken) in /_/Ix.NET/Source/System.Linq.Async/System/Linq/Operators/ToList.cs:line 36
   at System.Linq.AsyncEnumerable.<ToListAsync>g__Core|424_0[TSource](IAsyncEnumerable`1 source, CancellationToken cancellationToken) in /_/Ix.NET/Source/System.Linq.Async/System/Linq/Operators/ToList.cs:line 36
   at Duplicati.Library.Main.Backend.BackendManager.ListOperation.ExecuteAsync(IBackend backend, CancellationToken cancelToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Execute[TResult](PendingOperation`1 op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Execute(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)

Filed directly to ngclient; we’ll see how that path goes.

I think I linked one before and didn’t get a reference back to forum like duplicati gets.
I’m not sure if such links help connect, or add useless noise, but I thought I’d mention it.

On a different Destination topic, how is one supposed to know what’s currently chosen?
ngax has a dropdown that shows it and lets one change it.
ngclient has a different way to change, but what is it now?


I was having some issues with one of my backups (to a local share) and ran a delete/rebuild on the database. Nearly 24 hours later it finally completed, but it reported I had 4071 of these badly compressed files. Ignoring that it took a day to recreate the database, I think I need to rebuild all these compressed files, but I can’t think of a good way as the log isn’t telling me the file names. I’m also not looking forward to losing another day to rebuilding the database.

Why does it take so long to rebuild? If this were an urgent recovery I would not be happy; I’m pretty glad I back up the databases in a separate job.

I can reproduce this on Chrome, I have registered an issue.

This has been addressed now and will be part of .118; it only affects a few backends (mainly S3 and B2).

Yes, that was a debug thing that made it into the release; it has been removed.

Is this across the board or on Windows only? Any clues as to what it is doing?

The files are not fixed, as that would require recreating/rewriting them, which Duplicati does not normally do. Should we downgrade this warning to information, so it does not pop up so visibly? As @ts678 suggests, you can also set --zip-compression-library=sharpcompress so it always uses the library that supports it.

Any ideas as to what made it disappear? Was it after you removed the * setting for the hostname?

I guess it would be possible to get Duplicati to re-create new files, but it is potentially error-prone. With the new repair code for the next release, it would be possible to create a replacement, but I’m not sure if it is worth it.

I have added an issue for that.

It is supposed to be the same. This should be fixed along with the other issue with missing advanced options.

To rebuild the database, Duplicati needs to download all dindex and dlist files. From those files it should be possible to rebuild the database. With files present locally, it can take a few minutes to recreate the database, and we are actively working on increasing the recreate performance.

However, in some cases data is missing from the dindex files and this causes Duplicati to look for the missing data in the dblock files. Downloading and scanning these files will take substantially longer, and the duration is somewhat unpredictable, as it will end when all data is located.

Do you know if you entered into the phase where the dblock files were downloaded?

It is a bit decoupled, so the archive routine does not know the source filename, which is why it is not reported.

One way would be to use the recovery-tool and the recompress option. This will download all remote files, decrypt-decompress-compress-encrypt, and upload again.

It will break the database as the file length (and hashes) will be different after the recompression.

If you prefer to keep the local database, you can wipe the knowledge of hash/size, so it will instead gradually build knowledge of the remote storage:

UPDATE "Remotevolume" SET "Hash" = NULL, "Size" = -1, "State" = 'Uploaded' WHERE "State" IN ('Uploaded', 'Verified')

The preferred alternative is to rebuild the database, as that will grab the sizes and hashes from the index files.
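A small demonstration of what that statement does, against a mock Remotevolume table (the real schema has more columns; this only shows the shape of the change):

```python
import sqlite3

# Mock of the relevant Remotevolume columns only; the real schema is larger.
con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE "Remotevolume" ("Name" TEXT, "Hash" TEXT, "Size" INTEGER, "State" TEXT)')
con.executemany('INSERT INTO "Remotevolume" VALUES (?, ?, ?, ?)', [
    ("duplicati-b1.dblock.zip", "abc123", 1024, "Verified"),
    ("duplicati-b2.dblock.zip", "def456", 2048, "Uploaded"),
    ("duplicati-b3.dblock.zip", "ghi789", 4096, "Deleting"),
])

# Wipe hash/size knowledge so Duplicati re-learns it from the remote storage.
con.execute("""UPDATE "Remotevolume"
               SET "Hash" = NULL, "Size" = -1, "State" = 'Uploaded'
               WHERE "State" IN ('Uploaded', 'Verified')""")

rows = con.execute(
    'SELECT "Name", "Hash", "Size", "State" FROM "Remotevolume" ORDER BY "Name"'
).fetchall()
for row in rows:
    print(row)
```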

Thanks

I only saw it with Windows, all my Linux installs were fine after the upgrade.

Will this fallback to SharpCompress ever be deprecated? Just thinking ahead, as I would prefer to fix my mistake of foolishly enabling it in my own time rather than being forced to do it later.

Not really, other than the upgrade, but I could only spot it once I was able to reconnect to the GUI by putting the parameter into the service. Feels like that was a setting that was also reset by the upgrade.

Well as I just discovered a backup with 4071 of these, for myself it would be worth it.

I think it did, as I saw some dblock files appearing in the temp folder; I have this on a separate disk to spread the I/O and was looking in Performance Monitor to see if there was a bottleneck somewhere.

Thanks, I’ll have to think about how to schedule it, it’s just a shame it has to download all the files as there are over 17k of them, and only 4k need fixing.

We don’t have any plans to disable SharpCompress, but libraries do sometimes stop being maintained so I can’t promise “forever”.

And I don’t think it is a mistake. Using LZMA offers better compression at the cost of more processing power. The tradeoff is valid for some uses.

If you do not see any significant slowdowns using SharpCompress, you can just set it to be the default library. The only reason it is not the default is that the .NET zip library is faster, but this speed change can easily be masked by transfer speeds.

That worries me because it should not happen.

Can’t really think of a great way of identifying them without downloading them. Maybe “last modified” can narrow it down?
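A rough sketch of that narrowing (hypothetical helper; you would feed it the backend's file listing and your own boundary dates): keep only the volumes last modified while the LZMA option was enabled.

```python
from datetime import datetime

def candidates(listing, enabled_from, disabled_at):
    """Sketch: filter (name, last_modified) pairs down to the window when
    the LZMA option was enabled. Boundary dates are yours to supply."""
    return [name for name, mtime in listing if enabled_from <= mtime < disabled_at]

listing = [
    ("duplicati-a.dblock.zip", datetime(2024, 1, 10)),
    ("duplicati-b.dblock.zip", datetime(2024, 6, 15)),
    ("duplicati-c.dblock.zip", datetime(2025, 2, 1)),
]
print(candidates(listing, datetime(2024, 5, 1), datetime(2024, 12, 31)))
```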

That’s fair enough, but having the mix of types is probably not great in my case.

Well it didn’t happen for .118 but I’ll update more on that in its thread.

Not going to work, as I switched back at some point and have no idea when that was.