I am ordering a huge pile of Landsat scenes from the USGS, which come as tar.gz archives. I am writing a simple Python script to unpack them. Each archive contains 15 TIFF images from 60-120 MB in size, totalling just over 2 GB. I can easily extract an entire archive with the following code:
import tarfile
fileName = "LT50250232011160-SC20140922132408.tar.gz"
tfile = tarfile.open(fileName, 'r:gz')
tfile.extractall("newfolder/")
I only actually need 6 of those 15 TIFFs, identified by "band" in the filename. These are some of the larger files, so together they account for about half the data. So, I thought I could speed this process up by modifying the code as follows:
import tarfile
fileName = "LT50250232011160-SC20140922132408.tar.gz"
tfile = tarfile.open(fileName, 'r:gz')
# getmembers() scans the entire archive to build the member list
bandsList = [member for member in tfile.getmembers() if "band" in member.name]
print("extracting...")
tfile.extractall("newfolder/", members=bandsList)
However, adding a timer to both scripts reveals no significant efficiency gain for the second script (on my system, both run in about a minute on a single scene). While the extraction itself is somewhat faster, that gain seems to be offset by the time it takes to figure out which files need to be extracted in the first place.
The question is: is this tradeoff inherent in what I am doing, or just the result of my code being inefficient? I'm relatively new to Python and only discovered tarfile today, so it wouldn't surprise me if the latter were true, but I haven't been able to find any recommendations for efficient extraction of only part of an archive.
Thanks!
The problem is that a tar file does not have a central file list, but stores files sequentially with a header before each file. The tar file is then compressed via gzip to give you a tar.gz. With a tar file, if you don't want to extract a certain file, you simply skip the next header->size bytes in the archive and then read the next header. If the archive is additionally compressed, you still have to skip that many bytes, only not within the archive file but within the decompressed data stream - which for some compression formats works, but for others requires you to decompress everything in between.
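To make the skipping concrete, here is a minimal sketch of that header walk on a plain (uncompressed) .tar: read each 512-byte header, take the file size from the header's octal size field, and seek past the data blocks to the next header. It ignores long-name/PAX extended headers, and "scene.tar" is just a hypothetical file name:

import os

BLOCK = 512

def list_tar_names(path):
    # Walk a plain .tar: read each 512-byte header, then seek past the
    # file data (padded to a multiple of 512) to reach the next header.
    names = []
    with open(path, "rb") as f:
        while True:
            header = f.read(BLOCK)
            if len(header) < BLOCK or header == b"\0" * BLOCK:
                break  # end-of-archive marker
            names.append(header[0:100].rstrip(b"\0").decode())
            size = int(header[124:136].rstrip(b" \0") or b"0", 8)  # octal size field
            f.seek(-(-size // BLOCK) * BLOCK, os.SEEK_CUR)  # skip the data blocks
    return names

print(list_tar_names("scene.tar"))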
gzip belongs to the latter class of compression schemes - skipping ahead in the stream still means decompressing everything in between. So while you save some time by not writing the undesired files to disk, your code still decompresses them. You might be able to overcome that problem by overriding the _Stream class for non-gzip archives, but for your gz files, there is nothing you can do about it.
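That said, you can at least make sure the data is only decompressed once. Opening the archive in tarfile's streaming mode ('r|gz') visits each member sequentially in a single pass, whereas getmembers() followed by extractall() can effectively decompress the stream twice (seeking backwards in a gzip stream means re-decompressing from the start). A minimal sketch, using the same file name and "band" filter as your code:

import tarfile

fileName = "LT50250232011160-SC20140922132408.tar.gz"
# 'r|gz' streams the archive: each member is visited once, in order.
# Skipped members are still decompressed in passing, but never hit the disk.
with tarfile.open(fileName, 'r|gz') as tfile:
    for member in tfile:
        if "band" in member.name:
            tfile.extract(member, "newfolder/")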