I would like to download a file over HTTP using urllib3.
I have managed to do this using the following code:
url = 'http://url_to_a_file'
connection_pool = urllib3.PoolManager()
resp = connection_pool.request('GET', url)
f = open(filename, 'wb')
f.write(resp.data)
f.close()
resp.release_conn()
But I was wondering what the proper way of doing this is. For example, will it work well for big files, and if not, what should I do to make this code more fault tolerant and scalable?
Note: it is important to me to use the urllib3 library rather than, for example, urllib2, because I want my code to be thread safe.
Your code snippet is close. Two things worth noting:
If you're using resp.data, it will consume the entire response and return the connection (you don't need to call resp.release_conn() manually). This is fine if you're cool with holding the data in memory.
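For illustration, here is a minimal sketch of that in-memory variant. The local http.server thread and the file names are just scaffolding I've added so the example runs without an external URL; they are not part of your original code:

```python
import http.server
import threading
import urllib3

# Spin up a throwaway local server so the sketch is self-contained
# (serves the current directory on an ephemeral port).
server = http.server.HTTPServer(("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Create a small file for the server to hand back.
with open("payload.bin", "wb") as f:
    f.write(b"hello urllib3")

url = f"http://127.0.0.1:{server.server_port}/payload.bin"
resp = urllib3.PoolManager().request("GET", url)  # body fully buffered in resp.data
with open("saved.bin", "wb") as out:
    out.write(resp.data)  # no release_conn() needed: .data consumed the response

server.shutdown()
```

Since the whole body lives in resp.data, this only makes sense for files that comfortably fit in RAM.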
You could use resp.read(amt), which will stream the response, but the connection will need to be returned manually via resp.release_conn().
This would look something like...
import urllib3

chunk_size = 65536  # read the body in 64 KiB pieces
http = urllib3.PoolManager()
r = http.request('GET', url, preload_content=False)
with open(path, 'wb') as out:
    while True:
        data = r.read(chunk_size)
        if not data:
            break
        out.write(data)
r.release_conn()
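Equivalently, the response object exposes a stream() generator that does the chunked read loop for you. A self-contained sketch, again using a local http.server thread as stand-in scaffolding for a real URL (the file names and chunk size are assumptions, not from the question):

```python
import http.server
import threading
import urllib3

# Throwaway local server serving the current directory, so the
# example runs without external network access.
server = http.server.HTTPServer(("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A 100 kB file to download, so the loop actually iterates more than once.
with open("source.bin", "wb") as f:
    f.write(b"x" * 100_000)

url = f"http://127.0.0.1:{server.server_port}/source.bin"
pool = urllib3.PoolManager()
r = pool.request("GET", url, preload_content=False)
with open("copy.bin", "wb") as out:
    # stream() yields chunks of up to 64 KiB until the body is exhausted
    for chunk in r.stream(65536):
        out.write(chunk)
r.release_conn()  # return the connection to the pool

server.shutdown()
```

Because preload_content=False keeps the body on the socket until you read it, memory use stays bounded by the chunk size regardless of how large the file is.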
The documentation might be a bit lacking on this scenario. If anyone is interested in making a pull-request to improve the urllib3 documentation, that would be greatly appreciated. :)