Zip & Compression settings

So I started to look at compression settings and have a few questions.

  • Why are the zip/compression settings not available on the (global) settings page but only on the backup job options page?

  • The compression-extension-file setting: What are the default extensions if I don’t add this?

  • Why are there two seemingly identical settings, compression-level and zip-compression-level? Which one should I change? And what are their default values if I don’t set them?

PS:
Running 2.0.2.10_canary_2017-10-11

PPS:
The question about default values is actually a general question for all the available options in Duplicati: if I don’t set an option, what’s the default? Can this info be found somewhere?

I have often wondered about all the defaults myself. Hopefully someone has them listed.

I think I saw somewhere that deflate 6 or 7 was the default.

“Compression level” is deprecated.

I think the “compression extension file” is loaded by default; you just use this setting to change which file is used.
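
For what it’s worth, if I understand the option right, overriding it would look something like this (the path is just an example; if I remember right the file simply lists one extension per line, e.g. .mp3 or .7z):

> mono Duplicati.CommandLine.exe backup file:///mnt/backups /home/user/data --compression-extension-file=/home/user/my_compressed_extensions.txt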

I’m actually impressed with the option descriptions that pop up for most options when you add them; they are really great! But having the default value added to these option descriptions would be a very, very nice addition :smiley:


They are only shown on the backup job options page because you can choose a different compression module there, in which case the zip options would be invalid.
(Don’t change the compression module; 7z has issues, and we should fix the underlying problem.)

You can look in the file to see what it contains; there is a list here as well:

The compression-level option is a legacy option from 1.3.x. It was renamed to zip-compression-level because other compression modules may not support setting a compression level.
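
For example, a commandline backup would pass the new name like this (the destination URL and source path below are just placeholders):

> mono Duplicati.CommandLine.exe backup file:///mnt/backups /home/user/data --zip-compression-level=9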

It should choose the default value when you add the option. Otherwise you can go to the commandline and run the “help” command with “options” as the argument and it will show all options and their default values.
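
For example (drop the mono prefix on Windows):

> mono Duplicati.CommandLine.exe help options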


Thank you very much!

It didn’t choose the default when I added zip-compression-level, though. Just so you know.

If the default values for most options are printed when using the command line version with the help option, maybe adding this default-value info to the web interface could be automated somehow? Just wishing :wink:

I seem to recall a few other legacy options floating around - perhaps we should look at updating the in-UI descriptions to indicate this. For example:

Compression-level
This option controls the compression level used. A setting of zero gives no compression, and a setting of 9 gives maximum compression.

Could be updated to be:

Compression-level
*Deprecated: see --zip-compression-level instead.
This option controls the compression level used. A setting of zero gives no compression, and a setting of 9 gives maximum compression.

Or, if there is no new equivalent but it’s being kept around for older configs, something like:

Compression-level
*Deprecated: no new parameter, functionality retired.
This option controls the compression level used. A setting of zero gives no compression, and a setting of 9 gives maximum compression.

Hopefully we can come up with something flexible enough to allow for future parameter restructuring / retirement.

That is a problem with the UI; it shows this on the commandline:

> mono Duplicati.CommandLine.exe help compression-level
  --compression-level (Enumeration): Sets the Zip compression level
    [DEPRECATED]: Please use the zip-compression-level option instead
    This option controls the compression level used. A setting of zero gives
    no compression, and a setting of 9 gives maximum compression.
    * values: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
    * default value: BestCompression

Ok, I’d like to put a ticket in at GitHub about it - does the UI dynamically get its contents from the commandline, or are these pre-populated during the app build?

The GUI gets the information dynamically; it is the same as what the commandline outputs, but apparently not all fields are shown.

Hi kenkendk,

Can you tell me which zip compression level is being used by default?
I run this on a Linux machine. Personally I would set it to maximum, since I suppose my network speed (over the internet) will be slower than my compression speed.

Should I enable zip64? I will be using 50 MB chunks, but the data inside could contain very large directories, or is that not an issue? It’s not clear to me whether zip64 is enabled or not.

Can I somehow see how long the backup took and how big it is, so I can maybe test it?

Kind regards

Unless you’ve set up a custom --zip-compression-level it will default to “best compression” (so 9).

Duplicati.CommandLine.exe help --zip-compression-level
--zip-compression-level (Enumeration): Sets the Zip compression level
This option controls the compression level used. A setting of zero gives
no compression, and a setting of 9 gives maximum compression.
* values: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
* default value: BestCompression


Unless you’re going to use a custom dblock (archive file) size larger than 4GB you do NOT need zip64 enabled (the default is DISABLED).

Duplicati.CommandLine.exe help --zip-compression-zip64
--zip-compression-zip64 (Boolean): Toggles Zip64 support
The zip64 format is required for files larger than 4GiB, use this flag to
toggle it
* default value: False
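
So if you did want dblock files larger than 4GiB, I believe you’d raise the dblock size and turn zip64 on together, something like this (the sizes and paths are just an example):

> mono Duplicati.CommandLine.exe backup file:///mnt/backups /home/user/data --dblock-size=8GB --zip-compression-zip64=true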


Once a backup is done, the interface will show how long it took, how big the source is, and how big the destination is.

What is it you are wanting to test?


@kenkendk, we might want to consider adjusting the Zip64 summary to specify that the 4GB limit applies to destination, not source, files - so something like “The zip64 format is required when creating files larger than 4GiB, use this flag to toggle it.”

Unless I’m wrong and it is source files that matter, in which case we should probably still update it (along with my post). :slight_smile:

What I would like to test:
At home I have a Linux machine with a standard processor, so no resource issue there.
But at my work location I would like to make a backup using a Synology NAS. Setting the compression to maximum could use too many resources on that lower-end processor (not sure). You need to find a sweet spot between resource usage and the size of the transferred files. Not sure if I will find time for this, but that would be the idea.

It sounds like your goal is to minimize archive space usage without making your machine lag due to the processing.

What I’d recommend is doing some test backups (even to a local drive) so you get an idea of what will happen and how long it will take. Keep in mind that the actual compression achieved will vary depending on what’s being backed up (text vs. mp3s, for example), so if you can’t back up everything you eventually want to keep, try selecting some representative folders for the test.
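
For example, on Linux something like this would back up one representative folder to a local test drive and time the run (the paths are made up, adjust for your system):

> time mono Duplicati.CommandLine.exe backup file:///mnt/testdrive/duplicati-test /home/user/Documents --zip-compression-level=9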

Personally, I want to be able to use my computer even if it makes a backup take a “long” time, so I set up a Duplicati-level default of --thread-priority="lowest". So far I haven’t really noticed any issues (on a 5+ year old laptop with only 4GB of memory), so I assume it’s doing what I need. :slight_smile:
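
If you want to try that sort of test without hogging the CPU, the same option can be tacked onto any backup command, e.g.:

> mono Duplicati.CommandLine.exe backup file:///mnt/testdrive/duplicati-test /home/user/Documents --thread-priority=lowest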