I have a text which contains characters such as "\xaf" and "\xbe", which, as I understand it from this question, are extended-ASCII-encoded characters.
I want to convert them in Python to their UTF-8 equivalents. The usual string.encode("utf-8") throws UnicodeDecodeError. Is there some better way, e.g., with the codecs standard library?
Sample 200 characters here.
.encode is for converting a Unicode string (unicode in 2.x, str in 3.x) to a byte string (str in 2.x, bytes in 3.x).
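In Python 3 terms, that direction looks like this (a minimal sketch; the sample characters are only illustrative):

```python
# .encode: str (Unicode) -> bytes, using the codec you name.
s = '¯ ¾'                # a Unicode string (U+00AF, U+00BE)
b = s.encode('utf-8')    # -> bytes
print(type(b), b)        # <class 'bytes'> b'\xc2\xaf \xc2\xbe'
```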
In 2.x, it's legal to call .encode on a str object. Python implicitly decodes the string to Unicode first: s.encode(e) works as if you had written s.decode(sys.getdefaultencoding()).encode(e).
The problem is that the default encoding is "ascii", and your string contains non-ASCII characters. You can solve this by explicitly specifying the correct encoding.
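To see why that implicit step fails, here is a small Python 3 re-enactment of it (Python 3 never decodes implicitly, so the failing decode is spelled out by hand):

```python
# In 2.x, '\xAF \xBE'.encode('utf-8') implicitly ran the decode below
# with the default 'ascii' codec; \xAF is outside ASCII's 0-127 range.
raw = b'\xaf \xbe'
try:
    raw.decode('ascii').encode('utf-8')
except UnicodeDecodeError as err:
    print(err)  # 'ascii' codec can't decode byte 0xaf ...
```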
>>> '\xAF \xBE'.decode('ISO-8859-1').encode('UTF-8')
'\xc2\xaf \xc2\xbe'
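In Python 3 the same fix is explicit rather than implicit, since bytes objects have no .encode at all (this assumes, as above, that the input bytes are ISO-8859-1):

```python
raw = b'\xaf \xbe'                # ISO-8859-1 encoded input
text = raw.decode('iso-8859-1')   # bytes -> str, using the real encoding
utf8 = text.encode('utf-8')       # str -> bytes, now as UTF-8
print(utf8)                       # b'\xc2\xaf \xc2\xbe'
```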