Looking for an easy way to get the charset/encoding information of an HTTP response using Python urllib2, or any other Python library.
>>> url = 'http://some.url.value'
>>> request = urllib2.Request(url)
>>> conn = urllib2.urlopen(request)
>>> response_encoding = ?
I know that it is sometimes present in the 'Content-Type' header, but that header has other information, and it's embedded in a string that I would need to parse. For example, the Content-Type header returned by Google is
>>> conn.headers.getheader('content-type')
'text/html; charset=utf-8'
I could work with that, but I'm not sure how consistent the format will be. I'm pretty sure it's possible for charset to be missing entirely, so I'd have to handle that edge case. Some kind of string split operation to get the 'utf-8' out of it seems like it has to be the wrong way to do this kind of thing.
>>> content_type_header = conn.headers.getheader('content-type')
>>> if '=' in content_type_header:
...     charset = content_type_header.split('=')[1]
That's the kind of code that feels like it's doing too much work. I'm also not sure if it will work in every case. Does anyone have a better way to do this?
To parse an HTTP header you could use cgi.parse_header():
import cgi

_, params = cgi.parse_header('text/html; charset=utf-8')
print params['charset'] # -> utf-8
Or using the response object:
response = urllib2.urlopen('http://example.com')
response_encoding = response.headers.getparam('charset')
# or in Python 3: response.headers.get_content_charset(default)
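For example, here is a minimal sketch that handles the "charset missing" case the question worries about; the default_encoding fallback value is just an example choice, not something the API provides:
import cgi
import urllib2

def get_charset(response, default_encoding='utf-8'):
    # cgi.parse_header understands the 'type/subtype; key=value' format,
    # including quoting, so no manual string splitting is needed.
    content_type = response.headers.getheader('content-type', '')
    _, params = cgi.parse_header(content_type)
    return params.get('charset', default_encoding)

response = urllib2.urlopen('http://example.com')
print(get_charset(response))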
In general the server may lie about the encoding, or not report it at all (the default depends on the content type), or the encoding might be specified inside the response body, e.g., in a <meta> element in HTML documents or in the XML declaration for XML documents. As a last resort the encoding can be guessed from the content itself.
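A rough sketch of those last two fallbacks, assuming an HTML body and using the third-party chardet package for the final guess (the regex below is deliberately simplistic and only illustrative):
import re
import urllib2
import chardet  # pip install chardet

raw = urllib2.urlopen('http://example.com').read()
# Look for <meta charset="..."> or <meta ... content="text/html; charset=...">
match = re.search(br'<meta[^>]+charset=["\']?([\w-]+)', raw, re.I)
if match:
    encoding = match.group(1).decode('ascii')
else:
    encoding = chardet.detect(raw)['encoding'] or 'utf-8'  # heuristic guess
text = raw.decode(encoding, 'replace')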
You could use requests to get Unicode text:
import requests # pip install requests
r = requests.get(url)
unicode_str = r.text # may use `chardet` to auto-detect encoding
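If you only need the encoding itself rather than the decoded text, the Response object exposes both the declared and the guessed value (a small usage sketch; example.com is just a placeholder):
import requests

r = requests.get('http://example.com')
print(r.encoding)           # charset from the Content-Type header, if declared
print(r.apparent_encoding)  # guessed from the raw body bytes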
Or BeautifulSoup to parse HTML (and convert to Unicode as a side-effect):
from bs4 import BeautifulSoup # pip install beautifulsoup4
soup = BeautifulSoup(urllib2.urlopen(url)) # may use `cchardet` for speed
# ...
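If you also want to know which encoding BeautifulSoup ended up decoding from, it records it on the parsed tree (a small self-contained sketch):
from bs4 import BeautifulSoup
import urllib2

soup = BeautifulSoup(urllib2.urlopen('http://example.com'))
print(soup.original_encoding)  # the encoding detected and used for the document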
Or bs4.UnicodeDammit directly for arbitrary content (not necessarily HTML):
from bs4 import UnicodeDammit
dammit = UnicodeDammit(b"Sacr\xc3\xa9 bleu!")
print(dammit.unicode_markup)
# -> Sacré bleu!
print(dammit.original_encoding)
# -> utf-8
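If you already have a shortlist of likely encodings, the bs4 documentation shows you can pass them in so they are tried before any guessing (a sketch of that usage):
from bs4 import UnicodeDammit

# Suggest candidate encodings; they are tried before autodetection.
dammit = UnicodeDammit(b"Sacr\xe9 bleu!", ["latin-1", "iso-8859-1"])
print(dammit.unicode_markup)    # -> Sacré bleu!
print(dammit.original_encoding) # -> latin-1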