How can encode('ascii', 'ignore') throw a UnicodeDecodeError?

Trindaz · Oct 2, 2011

This line

data = get_url_contents(r[0]).encode('ascii', 'ignore')

produces this error

UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 11450: ordinal not in range(128)

Why? I assumed that, because I'm using 'ignore', it should be impossible to get decode errors when saving the output to a string variable.

Answer

Thomas K · Oct 2, 2011

Due to a quirk of Python 2, you can call encode on a byte string (i.e. text that's already encoded). In that case, Python first tries to convert the byte string to a unicode object by implicitly decoding it with the default ascii codec; the 'ignore' argument only applies to the later encode step, not to that implicit decode, which is why it doesn't suppress the error. So, if get_url_contents is returning a byte string, your line effectively does this:

get_url_contents(r[0]).decode('ascii').encode('ascii', 'ignore')
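For example, here is a minimal Python 2 reproduction (the byte values are just an illustration):

    # Python 2: calling encode() on a byte string first decodes it
    # implicitly with the default ascii codec, ignoring the error handler.
    data = 'caf\xc3\xa9'  # a UTF-8 byte string, not a unicode object

    try:
        data.encode('ascii', 'ignore')
    except UnicodeDecodeError as e:
        # Note it is a *decode* error, raised from a call to encode():
        # 'ascii' codec can't decode byte 0xc3 in position 3: ...
        print e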

In Python 3, byte strings don't have an encode method, so the same problem would just cause an AttributeError.
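For comparison, a quick Python 3 sketch of that failure mode:

    # Python 3: bytes has no encode(), so the mistake surfaces immediately.
    data = b'caf\xc3\xa9'
    data.encode('ascii', 'ignore')
    # AttributeError: 'bytes' object has no attribute 'encode'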

(Of course, I don't know for certain that this is the problem; it could be something inside the get_url_contents function. But what I've described above is my best guess.)
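If that guess is right, one possible fix is to decode explicitly with the page's real encoding before re-encoding. A sketch, assuming the content is UTF-8 (which the 0xc3 lead byte suggests; check the response's charset to be sure):

    raw = get_url_contents(r[0])           # byte string from your helper
    text = raw.decode('utf-8', 'ignore')   # explicit decode -> unicode object
    data = text.encode('ascii', 'ignore')  # now 'ignore' drops non-ASCII chars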