Scraping/Parsing Google search results in Ruby

user186645 · Oct 8, 2009 · Viewed 14.7k times

Assume I have the entire HTML of a Google search results page. Does anyone know of any existing code (Ruby?) to scrape/parse the first page of Google search results? Ideally it would handle the Shopping Results and Video Results sections that can spring up anywhere.

If not, what's the best Ruby-based tool for screenscraping in general?

To clarify: I'm aware that it's difficult/impossible to get Google search results programmatically via an API, and that simply cURLing results pages has a lot of issues. There's consensus on both of these points here on Stack Overflow. My question is different.

Answer

khelll · Oct 8, 2009

This should be a fairly simple thing. Have a look at the "Screen Scraping with ScrAPI" screencast by Ryan Bates. You can also do it without a scraping library; just stick to something like Nokogiri.


From Nokogiri's documentation:

require 'nokogiri'
require 'open-uri'

# Get a Nokogiri::HTML::Document for the page we're interested in...

# URI.open is needed on Ruby 3.0+, where open-uri no longer overrides Kernel#open
doc = Nokogiri::HTML(URI.open('http://www.google.com/search?q=tenderlove'))

# Do funky things with it using Nokogiri::XML::Node methods...

####
# Search for nodes by css
doc.css('h3.r a.l').each do |link|
  puts link.content
end

####
# Search for nodes by xpath
doc.xpath('//h3/a[@class="l"]').each do |link|
  puts link.content
end

####
# Or mix and match.
doc.search('h3.r a.l', '//h3/a[@class="l"]').each do |link|
  puts link.content
end
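
Putting it together for the original question, here is a minimal sketch of how the title and URL of each organic result might be collected. The google_results helper is hypothetical, the h3.r a.l selector simply reuses the one from the examples above and assumes Google's markup of the time; Google changes its results page structure frequently and may block automated requests, so treat this as a starting point rather than a working scraper.

require 'nokogiri'
require 'open-uri'

# Hypothetical helper: fetch a Google results page and collect
# title/url pairs for each organic result. The CSS selector is an
# assumption based on the markup used in the examples above.
def google_results(query)
  url = "http://www.google.com/search?q=#{URI.encode_www_form_component(query)}"
  doc = Nokogiri::HTML(URI.open(url))

  doc.css('h3.r a.l').map do |link|
    { title: link.content, url: link['href'] }
  end
end

google_results('tenderlove').each do |result|
  puts "#{result[:title]} -> #{result[:url]}"
end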