Python decompressing gzip chunk-by-chunk

user291294 · Mar 11, 2010 · Viewed 26.7k times

I have a memory- and disk-limited environment where I need to decompress the contents of a gzip file sent to me in string-based chunks (over xmlrpc binary transfer). However, both zlib.decompress() and zlib.decompressobj()/decompress() barf on the gzip header. I've tried offsetting past the gzip header (documented here), but still haven't managed to avoid the error. The gzip library itself only seems to support decompressing from files.

The following snippet gives a simplified illustration of what I would like to do (except in real life the buffer will be filled from xmlrpc, rather than reading from a local file):

#! /usr/bin/env python

import zlib

CHUNKSIZE = 1000

d = zlib.decompressobj()

f = open('23046-8.txt.gz', 'rb')
buffer = f.read(CHUNKSIZE)

while buffer:
    outstr = d.decompress(buffer)
    print(outstr)
    buffer = f.read(CHUNKSIZE)

outstr = d.flush()
print(outstr)

f.close()

Unfortunately, as I said, this barfs with:

Traceback (most recent call last):
  File "./test.py", line 13, in <module>
    outstr = d.decompress(buffer)
zlib.error: Error -3 while decompressing: incorrect header check 

Theoretically, I could feed my xmlrpc-sourced data into a StringIO and then use that as a fileobj for gzip.GzipFile() (something like the sketch below). However, in real life I don't have enough memory to hold the entire compressed file as well as the decompressed data, so I really do need to process it chunk-by-chunk.
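For concreteness, that rejected whole-buffer approach would look roughly like this (a sketch only; io.BytesIO stands in for StringIO since the payload is bytes, and reading the local file stands in for assembling the xmlrpc chunks):

import gzip
import io

# Whole-buffer approach: the complete gzip stream must sit in memory ...
with open('23046-8.txt.gz', 'rb') as src:
    payload = src.read()          # entire compressed file in memory

# ... and gz.read() then materialises the entire decompressed output too.
with gzip.GzipFile(fileobj=io.BytesIO(payload)) as gz:
    print(gz.read())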

The fallback would be to change the compression of my xmlrpc-sourced data from gzip to plain zlib, but since that would impact other sub-systems, I'd prefer to avoid it if possible.

Any ideas?

Answer

wisty · Mar 11, 2010

gzip and zlib use slightly different headers: a gzip stream wraps the raw deflate data in its own header and trailer, which the default zlib decompressor doesn't recognize.

See How can I decompress a gzip stream with zlib?

Try d = zlib.decompressobj(16+zlib.MAX_WBITS).
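The 16 added to the window-size argument tells zlib to expect a gzip header and trailer around the raw deflate data. Applied to the loop from the question, a minimal sketch of the fix (only the decompressobj() call changes):

import zlib

CHUNKSIZE = 1024

# 16 + MAX_WBITS: accept a gzip wrapper instead of a zlib one
d = zlib.decompressobj(16 + zlib.MAX_WBITS)

with open('23046-8.txt.gz', 'rb') as f:
    buffer = f.read(CHUNKSIZE)
    while buffer:
        print(d.decompress(buffer))
        buffer = f.read(CHUNKSIZE)

print(d.flush())

If the wrapper format might vary, 32 + zlib.MAX_WBITS enables automatic detection of either a zlib or a gzip header.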

And you might try changing your chunk size to a power of 2 (say CHUNKSIZE=1024) for possible performance reasons.