Using a Python generator to process large text files

user3062260 · Apr 10, 2018

I'm new to using generators and have read around a bit, but I need some help processing large text files in chunks. I know this topic has been covered, but the example code comes with very limited explanation, which makes it difficult to modify if one doesn't understand what is going on.

My problem is fairly simple, I have a series of large text files containing human genome sequencing data in the following format:

chr22   1   0
chr22   2   0
chr22   3   1
chr22   4   1
chr22   5   1
chr22   6   2

The files range between 1 GB and ~20 GB in size, which is too big to read into RAM. So I would like to read the lines in chunks/bins of, say, 10,000 lines at a time so that I can perform calculations on the final column in these bin sizes.
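For example, once I have a block of lines, the kind of per-bin calculation I have in mind is roughly the following (summarise_block and the mean are just placeholders for the real calculation, assuming whitespace-separated columns as in the sample above):

def summarise_block(block):
    """Toy example: average the final (third) column of one bin of lines."""
    values = [int(line.split()[2]) for line in block]
    return sum(values) / len(values)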

Based on this link here I have written the following:

def read_large_file(file_object):
    """A generator function to read a large file lazily."""

    bin_size=5000
    start=0
    end=start+bin_size

    # Read a block from the file: data
    while True:
        data = file_object.readlines(end) 
        if not data:
            break
        start=start+bin_size
        end=end+bin_size
        yield data


def process_file(path):

    try:
        # Open a connection to the file
        with open(path) as file_handler:
            # Create a generator object for the file: gen_file
            for block in read_large_file(file_handler):
                print(block)
                # process block

    except (IOError, OSError):
        print("Error opening / processing file")    
    return    

if __name__ == '__main__':
    path = 'C:/path_to/input.txt'
    process_file(path)

Within process_file I expected the returned 'block' object to be a list 10000 elements long, but it's not: the first list is 843 elements and the second is 2394 elements.

I want to get back N lines per block, but I am very confused by what is happening here.

This solution seems like it could help, but again I don't understand how to modify it to read N lines at a time.

This also looks like a really good solution, but again there isn't enough background explanation for me to understand it well enough to modify the code.

Any help would be really appreciated.

Answer

pawamoy · Apr 10, 2018

The reason your blocks have unexpected sizes is that readlines() does not take a number of lines: its argument is a hint for the approximate number of bytes/characters to read, so the number of lines returned depends on how long the lines are. Instead of playing with offsets in the file, try to build and yield lists of 10000 lines from a loop:

def read_large_file(file_handler, block_size=10000):
    """Yield successive lists of `block_size` lines from an open file."""
    block = []
    for line in file_handler:
        block.append(line)
        if len(block) == block_size:
            yield block
            block = []

    # don't forget to yield the last (possibly shorter) block
    if block:
        yield block

with open(path) as file_handler:
    for block in read_large_file(file_handler):
        print(block)
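
If you then want the per-bin calculation on the final column, you could replace print(block) with something along these lines (the sum is just a placeholder for your real calculation, and it assumes whitespace-separated columns as in your sample data):

with open(path) as file_handler:
    for block in read_large_file(file_handler):
        # e.g. total of the final column for this bin of 10000 lines
        bin_total = sum(int(line.split()[2]) for line in block)
        print(bin_total)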