Choose a dataset

Different codecs can behave very differently with different data. Some are great at compressing text but horrible with binary data, some excel with more repetitive data like logs. Many have long initialization times but are fast once they get started, while others can compress/decompress small buffers almost instantly.

This benchmark is run against many standard datasets. Hopefully one of them is interesting for you, but if not don't worry—you can use Squash to easily run your own benchmark with whatever data you want. That said, if you think you have a somewhat common use case, please let us know—we may be interested in adding the data to this benchmark.


The default dataset is selected randomly.

Name Source Description Size

Choose a machine

If you think certain algorithms are always faster, you've got another thing coming! Different CPUs can behave very differently with the same data.

The Squash benchmark is currently run on many of the machines I have access to—these are mostly fairly recent Intel CPUs and a mix of ARM SBCs. There is an entry in the FAQ with more details.


The default machine is selected randomly.

Name Status CPU/SoC Architecture Clock Speed Memory Platform Distro Kernel Compiler CSV


  1. Compression Ratio vs. Compression Speed
  2. Compression Ratio vs. Decompression Speed
  3. Compression Speed vs. Decompression Speed
  4. Round-Trip vs. Compression Ratio
  5. Transfer + Processing
  6. Optimal Codecs
  7. Results Table

Note that we do provide access to the raw data if you would prefer to generate your own charts.

Compression Ratio vs. Compression Speed

Compression Ratio vs. Decompression Speed

Compression Speed vs. Decompression Speed

Round Trip Speed vs. Compression Ratio

Transfer + Processing

Sometimes all you care about is how long something takes to load or save, and how much disk space or bandwidth is used doesn't really matter. For example, if you have a file that would take 1 second to load if uncompressed and you could cut the file size in half by compressing it, as long as decompressing takes less than half a second the content is available sooner than it would have been without compression.


For the presets I have tried to provide typical real-world speeds, not theoretical peaks. This can be significantly less than the advertised speed.

When entering custom values please keep in mind that this uses bytes per second, not bits per second. Also, it uses binary prefixes (1 MiB is 1024 KiB, not 1000).
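The arithmetic behind this chart can be sketched in a few lines. This is a minimal illustration, not code from the benchmark itself; the function and variable names here are my own.

```python
def total_time(uncompressed_size, ratio, decompress_speed, transfer_speed):
    """Time to transfer a compressed file and then decompress it.

    Sizes are in bytes, speeds in bytes per second (binary prefixes:
    1 MiB/s = 1024 * 1024 B/s, matching the benchmark's conventions).
    """
    compressed_size = uncompressed_size / ratio
    return compressed_size / transfer_speed + uncompressed_size / decompress_speed

# The example from the text: a file that takes 1 second to load uncompressed.
size = 10 * 1024 * 1024                   # 10 MiB
link = size                               # link moves the file in exactly 1 s
without = size / link                     # 1.0 s, no compression
with_codec = total_time(size, 2.0, size * 4, link)
# 2:1 ratio halves the transfer to 0.5 s; a decoder running at 4x the link
# speed adds 0.25 s, so the total is 0.75 s—faster than no compression.
```

As long as the decompression term stays smaller than the transfer time saved, compression wins.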

Optimal Codecs


Plugin Codec Level Compression Ratio Compression Speed Decompression Speed
{{ result.ratio | number:2 }}
{{ result.compression_rate | formatSpeed }}
{{ result.decompression_rate | formatSpeed }}

Codec Details

Library Name Version License Plugin Codec Streaming Flushing Levels


Frequently Asked Questions

Well, okay, I'm writing this before I publish the URL, so not so much "frequently asked questions" as "things I thought you might be wondering", but if I called this section "TITYMBW" the first thing I would have to answer would be "WTF is 'TITYMBW'?"

Can I be informed of updates?

Sure! This benchmark is updated whenever a new version of Squash is released, so you can just subscribe to the squash-announce mailing list. A single message announces the updated versions of both the library and the benchmark.

Will you add «insert compression codec»?

In order to be included in the benchmark, the software must be supported by Squash. If there is a specific codec you're interested in, please file an issue against Squash.

If the codec is reliable (it has to pass Squash's unit tests), works on Linux, is accessible from C or C++, and is open source, the odds are good that I would be willing to write a plugin, or at least merge a pull request.

If the codec doesn't meet all of the above criteria I may still be willing to accept a pull request for a plugin, but keep in mind that all the machines I run this on are running Linux on various architectures, so a plugin for a Windows-only or x86-only library will probably not show up for every machine. For example, I would like to add a Windows API plugin, but it would not show up in any of the Linux machines' results. Ditto for OS X.

How are the values calculated?

The benchmark collects the compressed size, compression time, and decompression time. Those are then used to calculate the values used in the benchmark:


Compression Ratio

uncompressed size ÷ compressed size

Compression Speed

uncompressed size ÷ compression time

Decompression Speed

uncompressed size ÷ decompression time

Round Trip Speed

(2 × uncompressed size) ÷ (compression time + decompression time)

Sizes are presented using binary prefixes—1 KiB is 1024 bytes, 1 MiB is 1024 KiB, and so on.
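The four formulas above can be expressed directly as code. This is a small sketch of the derivation, not the benchmark's actual implementation (which lives in benchmark.c, in C); the function name is my own.

```python
def metrics(uncompressed_size, compressed_size, compression_time, decompression_time):
    """Derive the benchmark's four values from the three raw measurements.

    Sizes are in bytes, times in seconds; the rates come out in bytes
    per second.
    """
    return {
        "ratio": uncompressed_size / compressed_size,
        "compression_speed": uncompressed_size / compression_time,
        "decompression_speed": uncompressed_size / decompression_time,
        "round_trip_speed": (2 * uncompressed_size)
                            / (compression_time + decompression_time),
    }

# 1 MiB compressed to 256 KiB in 0.5 s, decompressed in 0.1 s:
m = metrics(uncompressed_size=1 << 20, compressed_size=1 << 18,
            compression_time=0.5, decompression_time=0.1)
# ratio 4.0; compression at 2 MiB/s; decompression at 10 MiB/s;
# round trip moves 2 MiB in 0.6 s, roughly 3.3 MiB/s.
```

Note that the round-trip speed counts the uncompressed size twice (once per direction), which is why its numerator is doubled.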

What about memory usage?

I would really like to include that data. If you have a good way to capture it on the C side (in benchmark.c) I would be very happy to merge it and integrate it into the results.

Please use quixdb/squash-benchmark#2 to discuss this.

Is time CPU time or wall-clock?

CPU time. Wall-clock data is actually captured but not presented. I'm willing to consider pull requests if you can find a good way to display it, as well as a good reason.

What about multi-threading?

That's a tricky question. For one thing it would explode the number of data points (for example, LZMA would have 4 codecs with 9 levels each, multiplied by the number of cores—that's 288 different configurations per dataset on an 8-core machine). Obviously this would make generating the benchmark even slower, but the main problem is presenting the data in a way that doesn't destroy usability for people who are interested in single-threaded compression/decompression.

Can you switch the graphs to use a logarithmic scale?

You can. Just click on the label for either axis and it will toggle between linear and logarithmic. I know this isn't obvious—I'd be happy to merge a PR if you can improve that.

I don't want to switch the default because I think that linear is probably better for most people. Logarithmic tends to be better if all you care about is compression ratio, not speed.

Can you make it easier to compare machines or datasets (instead of codecs)?

I would like to (see quixdb/squash-benchmark-web#2 and quixdb/squash-benchmark-web#3 for discussion). It is probably going to require downloading all the data from all the machines, which is around {{data_points_per_machine*datasets.length*machines.length*66|formatSize:-1}}, though HTTP compression should help (oh, the irony of being stuck with zlib).

Can you add a feature (chart, table, etc.) to the benchmark?

No promises, but I am looking for ideas—I'm certainly willing to at least listen to the request. Please file an issue.

My library isn't performing as well as I think it should!

Sorry. I am trying to keep conditions as fair and as close to real-world as I can, but I'm certainly willing to discuss any concerns you have with methodology.

The Squash library typically adds very little overhead, but if you have ideas on how to improve the plugin for your library I'm happy to accept patches.

If the issue is performance on an architecture you don't have access to (e.g., ARM), I'm willing to provide SSH access to most of the machines included in this benchmark to people working on open source libraries. If you would like access to one to help you optimize your code just let me know.

Can you add «insert machine, CPU, architecture, OS, etc.»?

Only if I have, or at least have access to, a machine which fits that description.

In general I include what I have available. That tends to be some newish Intel CPUs, some older Intel CPUs that haven't yet found their way to the electronics recycler, and some ARM SBCs. If you would like to donate other hardware I'm willing to add it to the benchmark.

What compiler flags were used?

Flags vary a bit since different plugins require different flags, but as far as performance related flags are concerned, plugins are currently compiled with -O3 -flto -march=native -mtune=native.

If you are the author of one of the libraries and would like for your plugin to use different flags when compiled as part of Squash, please file an issue. As long as the flags are safe, we will respect your wishes.

This doesn't work in my browser.

I'm sorry, but if you're using an old browser (probably Internet Explorer) you'll have to upgrade. I'm willing to consider patches, but I will not be putting in any effort to make this page display in older browsers unless someone wants to pay me to do it—and it's not something I enjoy, so they would have to pay pretty well.

If your browser is up to date, make sure JavaScript is enabled. Cookies, Flash, etc. aren't necessary, but JS definitely is.

Are there any better benchmarks?

There are different benchmarks, which may or may not be better, depending on what you're interested in. Most benchmarks use command-line programs instead of libraries, which is a bit easier, so they tend to include many more codecs but usually not nearly as many different machines.

The only other benchmark I'm aware of that focuses on libraries is fsbench. It includes more compression codecs (though fewer total options), as well as hash functions (cryptographic and non-cryptographic) and some other cryptographic functions. It is run on a few different machines against a tarball of Silesia with all files truncated to 1 MiB.

If I'm missing any modern, up-to-date benchmarks please let me know.

Can I have the raw data?

Of course! The table in the "choose a machine" section includes a link to a CSV which you can import into your favorite spreadsheet application (or at least your least-hated spreadsheet application). If you don't have one yet, LibreOffice Calc is a good choice.

Additionally, you can grab a copy from the data folder of the squash-benchmark-web git repository.

If you do something interesting with it please let us know! Or, even better, submit a pull request so everyone can benefit from your brilliance!

The data itself is CC0 licensed. That said, we would certainly appreciate attribution.

Can I link to a specific configuration?

Some things can be configured by passing parameters in the query string:

dataset — the dataset to show; the default is selected randomly
machine — the machine to show; the default is selected randomly
speed — the transfer speed (in KiB/s) for the Transfer + Processing chart
speed-scale — the default scale for the speed axis of charts (linear or logarithmic)
visible-plugins — a comma-separated list of plugins to show in the scatter plots. All other plugins will be disabled, though they can be re-enabled by clicking on their entry in the legend.
A comma-separated list of plugins to hide in the scatter plots. Note that, if used, this parameter overrides visible-plugins.

For example, your current configuration would be: {{ location }}?dataset={{ dataset }}&machine={{ machine }}&speed={{ calculatedTransferSpeed / 1024 }}&speed-scale={{ speedScale }}.

Note that all fields are optional; you can provide as many or few of them as you like. Also, please be aware that this isn't necessarily stable—we may change the format when adding new features to the benchmark.