Web Crawler - Ignore Robots.txt file?

Craig Locke · Dec 5, 2011 · Viewed 11.3k times

Some servers have a robots.txt file to stop web crawlers from crawling their websites. Is there a way to make a web crawler ignore the robots.txt file? I am using the mechanize library for Python.

Answer

David Heffernan · Dec 5, 2011

The documentation for mechanize has this sample code:

import mechanize

br = mechanize.Browser()
# ...
# Ignore robots.txt.  Do not do this without thought and consideration.
br.set_handle_robots(False)

That does exactly what you want.
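For context, a minimal end-to-end sketch might look like the following. The URL and User-Agent string here are placeholders, not anything from the mechanize documentation; some sites also block mechanize's default User-Agent, so crawlers often set their own.

import mechanize

br = mechanize.Browser()
# Ignore robots.txt.  Do not do this without thought and consideration.
br.set_handle_robots(False)
# Placeholder User-Agent; the default one is blocked by some servers.
br.addheaders = [("User-Agent", "Mozilla/5.0 (compatible; MyCrawler/1.0)")]
# Fetch a page that robots.txt might otherwise have excluded.
response = br.open("http://example.com/")
print(response.read())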