I understand the LZ77 and LZ78 algorithms. I read about LZ4 here and here and found code for it.
Those links described the LZ4 block format. But it would be great if someone could explain (or direct me to some resource explaining) how LZ4 differs from algorithms like LZ77/DEFLATE, and what the LZ4HC variant adds on top of LZ4.
LZ4 is built to compress fast, at hundreds of MB/s per core. It's a fit for applications where you want compression that's very cheap: for example, you're trying to make a network or on-disk format more compact but can't afford to spend a bunch of CPU time on compression. It's in a family with, for example, snappy and LZO.
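If it helps to see that in action, here's a minimal round trip with LZ4's simple one-shot API (lz4.h, link with -llz4); the buffer sizes and the sample string are just illustrative:

```c
#include <lz4.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    const char src[] = "yada yada yada yada yada yada yada yada";
    const int src_size = (int)sizeof(src);

    char compressed[256];               /* >= LZ4_compressBound(src_size) */
    int c_size = LZ4_compress_default(src, compressed, src_size,
                                      (int)sizeof(compressed));
    if (c_size <= 0) { fprintf(stderr, "compression failed\n"); return 1; }

    char restored[256];
    int d_size = LZ4_decompress_safe(compressed, restored, c_size,
                                     (int)sizeof(restored));
    if (d_size != src_size || memcmp(src, restored, src_size) != 0) {
        fprintf(stderr, "round trip failed\n");
        return 1;
    }
    printf("%d -> %d bytes\n", src_size, c_size);
    return 0;
}
```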
The natural comparison point is zlib's DEFLATE algorithm, which uses LZ77 and Huffman coding and is used in gzip, the .ZIP and .PNG formats, and too many other places to count.
These fast compressors differ because:

1. They use a quicker but less thorough match finder: for each position they check a single candidate match against past data and take whatever they find, rather than searching several candidates for the best (longest) match.
2. They skip entropy coding: they only exploit repetitions, and don't take advantage of some byte values being more common than others.
By comparison, DEFLATE gets better compression but compresses and decompresses slower, and high-compression algorithms like LZMA, bzip2, LZHAM, or brotli tend to take even more time (though brotli at its faster settings can compete with zlib). There's a lot of variation among the high-compression algorithms, but broadly, they tend to capture redundancies over longer distances, take more advantage of context to determine what bytes are likely, and use more compact but slower ways to express their results in bits.
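To make the match-finder difference (point 1) concrete, here's a toy sketch of my own--not LZ4's actual code, and constants like the table size are arbitrary--of the kind of match finder a fast LZ compressor uses: a hash table that remembers only the most recent position for each hashed 4-byte prefix, so each position checks exactly one candidate:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define HASH_BITS  14
#define TABLE_SIZE (1u << HASH_BITS)
#define MIN_MATCH  4

static uint32_t hash4(const uint8_t *p) {
    uint32_t v;
    memcpy(&v, p, 4);                       /* read 4 bytes at once */
    return (v * 2654435761u) >> (32 - HASH_BITS);
}

/* table[] must be zero-initialized by the caller. Returns the length of the
   match found at `pos` (0 if none) and the match position via *match_pos. */
static size_t find_one_match(const uint8_t *buf, size_t pos, size_t len,
                             size_t table[TABLE_SIZE], size_t *match_pos) {
    if (pos + MIN_MATCH > len) return 0;

    uint32_t h = hash4(buf + pos);
    size_t cand = table[h];                 /* the single stored candidate */
    table[h] = pos;                         /* overwrite: no collision chains */

    if (cand >= pos || memcmp(buf + cand, buf + pos, MIN_MATCH) != 0)
        return 0;                           /* no usable match: emit a literal */

    size_t n = MIN_MATCH;                   /* extend the match forward */
    while (pos + n < len && buf[cand + n] == buf[pos + n]) n++;
    *match_pos = cand;
    return n;
}
```

A slower, stronger compressor would keep several candidate positions per bucket (a chain, say) and pick the longest match, and would also bound the match distance to whatever its format can encode.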
LZ4HC is a "high-compression" variant of LZ4 that, I believe, changes point 1 above--the compressor finds more than one match between current and past data and looks for the best match to ensure the output is small. This improves compression ratio but lowers compression speed compared to LZ4. Decompression speed isn't hurt, though, so if you compress once and decompress many times and mostly want extremely cheap decompression, LZ4HC would make sense.
Note that even a fast compressor might not allow one core to saturate a large amount of bandwidth, like that provided by SSDs or fast in-datacenter links. There are even quicker compressors with lower ratios, sometimes used to temporarily pack data in RAM. WKdm and Density are two such compressors; one trait they share is acting on 4-byte machine words of input at a time rather than individual bytes. Sometimes specialized hardware can achieve very fast compression, like in Samsung's Exynos chips or Intel's QuickAssist technology.
If you're interested in compressing more than LZ4 does but with less CPU time than deflate, the author of LZ4 (Yann Collet) wrote a library called Zstd--here's a blog post from Facebook at its stable release, background on the finite state machines used to compactly encode info about repetitions, and a detailed description in an RFC. Its fast modes could work in some LZ4-ish use cases. (Apple developed lzfse on similar principles, and Google developed gipfeli as a 'medium' packer; neither seemed to get much use in the outside world.) Separately, a couple of projects aim to provide faster/lighter DEFLATE: SLZ, and patches to zlib by CloudFlare and Intel.
Compared to the fastest compressors, those "medium" packers add a form of entropy encoding, which is to say they take advantage of how some bytes are more common than others and (in effect) put fewer bits in the output for the more common byte values.
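A quick way to see what that frequency skew is worth--a sketch of my own, not part of any of these libraries--is to count byte frequencies and compute the zero-order entropy: the average bits per byte an ideal entropy coder would need if it only exploited how common each byte value is, ignoring repetitions:

```c
#include <math.h>
#include <stddef.h>
#include <stdint.h>

double bits_per_byte(const uint8_t *buf, size_t len) {
    if (len == 0) return 0.0;

    size_t counts[256] = {0};
    for (size_t i = 0; i < len; i++) counts[buf[i]]++;

    double bits = 0.0;
    for (int b = 0; b < 256; b++) {
        if (counts[b] == 0) continue;
        double p = (double)counts[b] / (double)len;
        bits -= p * log2(p);    /* common byte values contribute fewer bits */
    }
    return bits;    /* 8.0 for uniformly random data, less for skewed data */
}
```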
If latency rather than total CPU time is your main concern and you're compressing one long stream, there are tools to do compression in parallel, like pigz and the -T threading option to the zstd command-line tool. (There are various experimental packers out there too, but they exist more to push boundaries on speed or density, rather than for use today.)
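If you're calling the library rather than the CLI, here's a rough sketch of the same idea using zstd's advanced API (parameter names from zstd's stable API since v1.4.0; ZSTD_c_nbWorkers only takes effect if libzstd was built with multithreading support, and the worker count and level here are arbitrary):

```c
#include <zstd.h>
#include <stddef.h>

/* dst_capacity should be at least ZSTD_compressBound(src_size).
   Returns the compressed size, or 0 on error. */
size_t compress_mt(const void *src, size_t src_size,
                   void *dst, size_t dst_capacity) {
    ZSTD_CCtx *cctx = ZSTD_createCCtx();
    if (cctx == NULL) return 0;

    ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 3);
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_nbWorkers, 4);   /* like zstd -T4 */

    size_t written = ZSTD_compress2(cctx, dst, dst_capacity, src, src_size);
    ZSTD_freeCCtx(cctx);
    return ZSTD_isError(written) ? 0 : written;
}
```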
So, in general, you have a pretty good spectrum of alternative compressors for different apps: as you move from LZ4 through DEFLATE to brotli, you layer on more effort to predict and encode data, and you get more compression out at the cost of some speed.
As an aside, algorithms like brotli and zstd can generally outperform gzip--compress better at a given speed, or get the same compression faster--but this isn't actually because zlib did anything wrong. The main secret is probably that newer algorithms can use more memory: zlib dates to 1995 (and DEFLATE to 1993). Back then RAM cost >3,000x as much as today, so just keeping 32KB of history made sense. Changes in CPUs over time may also be a factor: tons of arithmetic (as used in finite state machines) is relatively cheaper than it used to be, while unpredictable ifs (branches) are relatively more expensive.