How to force scrapy to crawl duplicate url?

Alok Singh Mahor · Apr 17, 2014

I am learning Scrapy, a web-crawling framework.
By default it does not crawl duplicate URLs, i.e. URLs that it has already crawled.

How can I make Scrapy crawl duplicate URLs, or URLs that it has already crawled?
I tried to find out on the internet but could not find relevant help.

I found DUPEFILTER_CLASS = RFPDupeFilter and SgmlLinkExtractor in Scrapy - Spider crawls duplicate urls, but that question is the opposite of what I am looking for.

Answer

paul trmbrth · Apr 17, 2014

You're probably looking for the `dont_filter=True` argument on `Request()`. See http://doc.scrapy.org/en/latest/topics/request-response.html#request-objects
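
Here's a minimal sketch of what that looks like in a spider. The spider name, start URL, and `parse_again` callback are hypothetical; the relevant part is passing `dont_filter=True` to `Request` so the scheduler's duplicate filter lets the request through:

```python
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"  # hypothetical spider name
    start_urls = ["http://example.com/"]  # hypothetical start URL

    def parse(self, response):
        # dont_filter=True bypasses Scrapy's duplicate-request filter,
        # so this URL is scheduled again even though it was just crawled.
        yield scrapy.Request(
            response.url,
            callback=self.parse_again,
            dont_filter=True,
        )

    def parse_again(self, response):
        self.logger.info("Re-crawled %s", response.url)
```

Note that `dont_filter` only affects the individual request it is set on; requests without it are still deduplicated as usual.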