The size of the DLIST files is not my biggest concern. We’re talking about very rare scenarios. I read this on Wikipedia:
> The original .ZIP format had a 4 GiB limit on various things (uncompressed size of a file, compressed size of a file and total size of the archive), as well as a limit of 65535 entries in a ZIP archive. In version 4.5 of the specification (which is not the same as v4.5 of any particular tool), PKWARE introduced the “ZIP64” format extensions to get around these limitations, increasing the limitation to 16 EiB (2^64 bytes). In essence, it uses a “normal” central directory entry for a file, followed by an optional “zip64” directory entry, which has the larger fields.
>
> The File Explorer in Windows XP does not support ZIP64, but the Explorer in Windows Vista does. Likewise, some extension libraries support ZIP64, such as DotNetZip, QuaZIP and IO::Compress::Zip in Perl. Python’s built-in zipfile supports it since 2.5 and defaults to it since 3.4. OpenJDK’s built-in java.util.zip supports ZIP64 from version Java 7. Android Java API support ZIP64 since Android 6.0. Mac OS Sierra contains a broken implementation of creating ZIP64 archives using the graphical Archive Utility.
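For what it’s worth, where a library exposes ZIP64 at all, turning it on is trivial. A minimal sketch using Python’s built-in zipfile (the file names here are made up for illustration):

```python
import zipfile

# Python's zipfile has read ZIP64 since 2.5; since 3.4 the allowZip64
# flag defaults to True, so archives over 4 GiB or 65535 entries work
# without extra effort.
with zipfile.ZipFile("backup.zip", mode="w", allowZip64=True) as zf:
    zf.write("huge-file.bin")  # ZIP64 fields are only written to the
                               # archive when the limits actually require them
```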
So the limitations are:
- Size of a file inside the .ZIP archive (uncompressed and compressed size)
- Number of files inside one .ZIP archive
- Total size of the .ZIP archive.
I guess only limitations 2 and 3 could apply to Duplicati in some rare situations: because the block size varies from a few KB to one or more MB, a single block will never come close to the 4 GiB per-file limit.
Limitation 3 can only apply to the DLIST files; DBLOCK files have a size of 50 MB by default, and in some situations the user defines a size of 1 or 2 GB. More than 4 GB is not advisable, because restore and compact operations would consume far too much bandwidth.
Assume there are a lot of nested subfolders in the backup source and the average source file path is about 150 characters long. To exceed the 4 GiB limit, you would need to back up about 20 million source files. I assume this is a very rare situation (millions of files, very long paths).
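As a quick sanity check on that figure, a back-of-the-envelope calculation (the ~65 bytes of per-entry metadata overhead is my own assumption, not Duplicati’s exact DLIST format):

```python
# Rough estimate: how many file entries fit in a 4 GiB DLIST file?
ZIP_LIMIT = 4 * 1024**3      # classic ZIP limit: 4 GiB in bytes
PATH_LENGTH = 150            # average path length from the example above
METADATA = 65                # assumed per-entry overhead (hash, size, timestamps)

entries = ZIP_LIMIT // (PATH_LENGTH + METADATA)
print(entries)               # 19976592 -> roughly 20 million source files
```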
Limitation 2 can be a problem when using a large archive size and a small block size. If we choose the largest size for a standard ZIP file (4 GiB), use the default block size (100 KB), and assume a compression ratio of 50%, there could be more than 80,000 blocks in a single archive, which exceeds the 65535-entry limit. Also a very rare scenario, but not impossible.
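The same kind of back-of-the-envelope check for the block count (again a sketch; the 50% compression ratio is just the assumption from above):

```python
# How many default-size blocks fit in a maxed-out standard ZIP archive?
ARCHIVE_LIMIT = 4 * 1024**3          # largest standard ZIP archive: 4 GiB
BLOCK_SIZE = 100 * 1000              # default block size: 100 KB
COMPRESSED_BLOCK = BLOCK_SIZE * 0.5  # assume blocks compress to half size

blocks = ARCHIVE_LIMIT / COMPRESSED_BLOCK
print(int(blocks))                   # 85899 -> well past the 65535 entry limit
```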
I thought I remembered that Zip64 files can’t be opened by Windows Explorer, so you would need a third-party tool (WinZip, 7-Zip) to open the files. That would be a drawback of defaulting to Zip64.
But this was fixed in Vista, so all operating systems that can run Duplicati seem to support Zip64 natively.
Long story short, I can’t think of any reason not to use Zip64 by default, but there is no urgent need to do so, because the limitations do not apply to Duplicati backup files except in some very rare situations.