How should I deal with an XMLSyntaxError in Python's lxml while parsing a large XML file?

damon · Jan 17, 2012 · Viewed 16k times

I'm trying to parse an XML file that's over 2GB with Python's lxml library. Unfortunately, the XML file does not declare its character encoding, so I have to set it manually. Even so, while iterating through the file, some strange characters still come up once in a while.

I'm not sure how to determine the character encoding of the offending line, and worse, lxml raises an XMLSyntaxError from inside the for loop. How can I properly catch this error and deal with it? Here's a simplified code snippet:

from lxml import etree
etparse = etree.iterparse(file("my_file.xml", 'r'), events=("start",), encoding="CP1252")
for event, elem in etparse:
    if elem.tag == "product":
        print "Found the product!"
        elem.clear()

This eventually produces the error:

XMLSyntaxError: PCDATA invalid Char value 31, line 1565367, column 50

That line of the file looks like this:

% sed -n "1565367 p" my_file.xml
<romance_copy>Ravioli Florentine. Tender Ravioli Filled With Creamy Ricotta Cheese And

The 'F' of 'Filled' actually looks like this in my terminal:

[screenshot: xml line causing the error]

Answer

Michael · Jan 17, 2012

The right thing to do here is to make sure that the creator of the XML file:

A) declares the encoding of the file,
B) produces well-formed XML (no invalid control characters, no characters that fall outside the declared encoding, all elements properly closed, etc.), and
C) uses a DTD or an XML Schema if you want to ensure that certain attributes/elements exist, have certain values, or match a certain format (note: this comes with a performance hit).
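For point C, here is a minimal sketch of what schema validation looks like in lxml; the schema file name is just an illustration, not something from the question:

from lxml import etree

# "products.xsd" is a hypothetical schema file -- validating while
# parsing reports structural problems up front (at some performance cost).
schema = etree.XMLSchema(etree.parse("products.xsd"))
parser = etree.XMLParser(schema=schema, encoding="CP1252")
tree = etree.parse("my_file.xml", parser)

iterparse also takes a schema argument, so the same check can run in streaming mode for a file this size.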

So, now to your question. lxml supports a whole bunch of arguments when you use it to parse XML. Check out the documentation. You will want to look at these two arguments:

- recover: try hard to parse through broken XML
- huge_tree: disable security restrictions and support very deep trees and very long text content (only affects libxml2 2.7+)

They will help you to some degree, but certain invalid characters simply cannot be recovered from, so again, making sure the file is written correctly is your best bet for clean, working code.
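As a rough sketch of how those options plug into the code from the question (recent lxml versions accept both keywords directly on iterparse; with older versions you may need to build an etree.XMLParser(recover=True, huge_tree=True) and use etree.parse instead):

from lxml import etree

# recover: keep going past recoverable errors; huge_tree: lift the
# libxml2 limits that a 2GB document can run into.
etparse = etree.iterparse("my_file.xml", events=("start",),
                          encoding="CP1252", recover=True, huge_tree=True)
for event, elem in etparse:
    if elem.tag == "product":
        print "Found the product!"
        elem.clear()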

Ah yeah, and one more thing. 2GB is huge. I assume you have a list of similar elements in this file (for example a list of books). Try to split the file up at the OS level (for example with a regular expression or a streaming tool), then start multiple processes to parse the pieces, as sketched below. That way you will be able to use more of the cores on your box and the processing time will go down. Of course you then have to deal with the complexity of merging the results back together. I cannot make this trade-off for you, but wanted to give it to you as food for thought.
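A minimal sketch of that fan-out, assuming the file has already been split into chunks that are each well-formed XML on their own (the chunk file names are made up for illustration):

from multiprocessing import Pool
from lxml import etree

def count_products(chunk_path):
    # Parse one chunk independently and do the per-element work here.
    found = 0
    for event, elem in etree.iterparse(chunk_path, events=("start",),
                                       encoding="CP1252"):
        if elem.tag == "product":
            found += 1
        elem.clear()
    return found

if __name__ == "__main__":
    chunks = ["chunk_00.xml", "chunk_01.xml"]  # hypothetical file names
    pool = Pool()
    print sum(pool.map(count_products, chunks))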

Addition to the post: If you have no control over the input file and it contains bad characters, I would try to replace/remove those bad characters by iterating over each line before parsing the file. Here is a code sample that removes the control characters you won't need:

import fileinput

# All characters in the range 0x00-0x1F (32 total) are control characters
# and will be removed; print adds the stripped newline back on each line.
for line in fileinput.input(xmlInputFileLocation, inplace=1):
    print ''.join(c for c in line if ord(c) >= 32)