I'm using Scrapy to crawl a webpage. Some of the information I need only pops up when you click a certain button (it also appears in the HTML source after clicking, of course).
I found out that Scrapy can handle forms (like logins), as shown here. The problem is that there is no form to fill out, so that's not exactly what I need.
How can I simply click a button, which then shows the information I need?
Do I have to use an external library like mechanize or lxml?
Scrapy cannot interpret JavaScript.
If you absolutely must interact with the JavaScript on the page, you want to be using Selenium.
If using Scrapy, the solution to the problem depends on what the button is doing.
If it's just showing content that was previously hidden, you can scrape the data without a problem; it doesn't matter that it isn't visible in the browser, because the HTML is still there.
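As a minimal sketch of that first case (the URL and the `#details` selector are hypothetical placeholders for whatever the real page uses):

```python
import scrapy


class HiddenContentSpider(scrapy.Spider):
    # Elements hidden with CSS (e.g. display: none) are still present
    # in the HTML that Scrapy downloads, so a plain selector finds them.
    name = "hidden_content"
    start_urls = ["https://example.com/page"]  # hypothetical URL

    def parse(self, response):
        # "#details" stands in for the id of the hidden container;
        # adjust the selector to match the actual page.
        for text in response.css("#details ::text").getall():
            yield {"detail": text.strip()}
```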
If it's fetching the content dynamically via AJAX when the button is pressed, the best thing to do is to inspect the HTTP request that goes out when you press the button, using a tool like Firebug (or your browser's network panel). You can then request the data directly from that URL.
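Once you have copied that URL out of the network panel, a spider can call it directly and skip the button entirely. This is only a sketch; the endpoint, query parameter, and JSON field name are invented for illustration:

```python
import json

import scrapy


class AjaxEndpointSpider(scrapy.Spider):
    name = "ajax_endpoint"

    def start_requests(self):
        # Hypothetical endpoint, copied from the request the browser
        # makes when the button is pressed; it often returns JSON.
        yield scrapy.Request(
            "https://example.com/ajax/details?id=123",
            callback=self.parse_details,
        )

    def parse_details(self, response):
        # Parse the raw JSON body instead of HTML.
        data = json.loads(response.text)
        yield {"detail": data.get("detail")}
```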
Do I have to use an external library like mechanize or lxml?
If you want to interpret JavaScript, then yes, you need a different library, though neither of those two fits the bill: neither of them knows anything about JavaScript. Selenium is the way to go.
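For completeness, the Selenium route looks roughly like this with the current (Selenium 4) API; the button and content selectors are hypothetical, and the rendered HTML is handed to parsel, the selector library Scrapy itself uses:

```python
from parsel import Selector
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()  # any installed browser driver works
try:
    driver.get("https://example.com/page")  # hypothetical URL
    # "#show-details" stands in for the button's real selector.
    driver.find_element(By.CSS_SELECTOR, "#show-details").click()
    # Wait until the content revealed by the click is in the DOM.
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, "#details"))
    )
    # The rendered page source now includes the new content.
    for text in Selector(text=driver.page_source).css("#details ::text").getall():
        print(text.strip())
finally:
    driver.quit()
```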
If you can give the URL of the page you're working on scraping, I can take a look.