How can I scrape website content in PHP from a website that requires a cookie login?

Forest · Nov 3, 2012 · Viewed 13.9k times

My problem is that it doesn't just require a basic cookie, but rather asks for a session cookie, and for randomly generated IDs. I think this means I need to use a web browser emulator with a cookie jar?

I have tried to use Snoopy, Goutte and a couple of other web browser emulators, but as of yet I have not been able to find tutorials on how to receive cookies. I am getting a little desperate!

Can anyone give me an example of how to accept cookies in Snoopy or Goutte?

Thanks in advance!

Answer

LSerni · Nov 3, 2012

You can do that in cURL without needing external 'emulators'.

The code below retrieves a page into a PHP variable to be parsed.

Scenario

There is a page (let's call it HOME) that opens the session. On the server side, if the site is in PHP, that is whichever page (any page, actually) calls session_start() for the first time. In other languages you need a specific page that does all the session setup. From the client side, it is the page that supplies the session ID cookie. In PHP, all sessioned pages do; in other languages the landing page does it, and all the others check whether the cookie is there and, if it isn't, drop you back to HOME instead of creating the session.

There is a page (LOGIN) that generates the login form and adds a critical piece of information to the session: "this user is logged in". In the code below, this is the page asking for the session ID.

And finally there are N pages where the goodies to be scraped reside.

So we want to hit HOME, then LOGIN, then GOODIES one after another. In PHP (and other languages actually), again, HOME and LOGIN might well be the same page. Or all pages might share the same address, for example in Single Page Applications.
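
As a point of reference (this is the server, not the scraper), here is a minimal sketch of the pattern described above; the file names and session key are hypothetical:

    <?php
    // Hypothetical protected page (e.g. goodies.php): it resumes the session
    // via the session ID cookie instead of creating a new one, and bounces
    // visitors who are not logged in back to HOME.
    session_start();

    if (empty($_SESSION['logged_in'])) {   // flag that LOGIN would have set
        header('Location: /home.php');     // hypothetical HOME url
        exit;
    }
    // ... the goodies ...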

The Code

    $url            = "the url generating the session ID";
    $next_url       = "the url asking for session";

    $ch             = curl_init();
    curl_setopt($ch, CURLOPT_URL,    $url);
    // We do not authenticate; we only hit the page to get a session going.
    // Change to False if that is not enough (you'll see that the cookie
    // file remains empty).
    curl_setopt($ch, CURLOPT_NOBODY, True);

    // You may want to change User-Agent here, too
    curl_setopt($ch, CURLOPT_COOKIEFILE, "cookiefile");
    curl_setopt($ch, CURLOPT_COOKIEJAR,  "cookiefile");

    // Just in case
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);

    $ret    = curl_exec($ch);

    // This page we retrieve, and scrape, with GET method
    foreach(array(
            CURLOPT_POST            => False,       // We GET...
            CURLOPT_NOBODY          => False,       // ...the body...
            CURLOPT_URL             => $next_url,   // ...of $next_url...
            CURLOPT_BINARYTRANSFER  => True,        // ...as binary...
            CURLOPT_RETURNTRANSFER  => True,        // ...into $ret...
            CURLOPT_FOLLOWLOCATION  => True,        // ...following redirections...
            CURLOPT_MAXREDIRS       => 5,           // ...reasonably...
            CURLOPT_REFERER         => $url,        // ...as if we came from $url...
            //CURLOPT_COOKIEFILE      => 'cookiefile', // Save these cookies
            //CURLOPT_COOKIEJAR       => 'cookiefile', // (already set above)
            CURLOPT_CONNECTTIMEOUT  => 30,          // Seconds
            CURLOPT_TIMEOUT         => 300,         // Seconds
            CURLOPT_LOW_SPEED_LIMIT => 16384,       // Abort if slower than 16 KB/s...
            CURLOPT_LOW_SPEED_TIME  => 15,          // ...for 15 seconds
            ) as $option => $value)
            if (!curl_setopt($ch, $option, $value))
                    die("could not set $option to " . serialize($value));

    $ret = curl_exec($ch);
    // Done; cleanup.
    curl_close($ch);

Implementation

First of all we have to get the login page.

We use a special User-Agent to introduce ourselves, both to be recognizable (we don't want to antagonize the webmaster) and to coax the server into sending us the browser-tailored version of the site. Ideally, we use the same User-Agent as the browser we're going to use to debug the page, plus a suffix making it clear to whoever checks the logs that they are looking at an automated tool (see comment by Halfer).

    $ua = 'Mozilla/5.0 (Windows NT 5.1; rv:16.0) Gecko/20100101 Firefox/16.0 (ROBOT)';
    $cookiefile = "cookiefile";
    $url1 = "the login url generating the session ID";

    $ch             = curl_init();

    curl_setopt($ch, CURLOPT_URL,            $url1);
    curl_setopt($ch, CURLOPT_USERAGENT,      $ua);
    curl_setopt($ch, CURLOPT_COOKIEFILE,     $cookiefile);
    curl_setopt($ch, CURLOPT_COOKIEJAR,      $cookiefile);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, True);
    curl_setopt($ch, CURLOPT_NOBODY,         False);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, True);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, True);
    $ret    = curl_exec($ch);

This will retrieve the page asking for user/password. By inspecting the page, we find the needed fields (including hidden ones) and can populate them. The FORM tag tells us whether we need to go on with POST or GET.

We might want to inspect the form code to adjust the following operations, so we ask cURL to return the page content as-is into $ret, body included. Sometimes CURLOPT_NOBODY set to True is still enough to trigger session creation and cookie submission, and if so, it's faster. But CURLOPT_NOBODY ("no body") works by issuing a HEAD request instead of a GET, and sometimes that doesn't work because the server will only react to a full GET.
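
As a sketch of that inspection step, assuming the first form on the page is the login form and the markup is tame enough for DOMDocument:

    // Pull the form's action, method and every input field (hidden fields
    // included) out of the login page we just fetched into $ret.
    $dom = new DOMDocument();
    @$dom->loadHTML($ret);                  // @ silences warnings on sloppy HTML
    $xpath = new DOMXPath($dom);

    $form   = $xpath->query('//form')->item(0);           // first form on the page
    $action = $form->getAttribute('action');              // where to submit
    $method = strtoupper($form->getAttribute('method'));  // GET or POST

    $fields = array();
    foreach ($xpath->query('.//input', $form) as $input) {
        $fields[$input->getAttribute('name')] = $input->getAttribute('value');
    }
    // $fields now holds the form's defaults (hidden fields included), ready
    // to be overwritten with our own username and password before submitting.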

Instead of retrieving the body this way, it is also possible to log in using a real Firefox and sniff the form content being posted with Firebug (or with Chrome's Developer Tools); some sites will populate or modify hidden fields with Javascript, so that the form actually submitted is not the one you see in the HTML source.

A webmaster who did not want his site scraped might send a hidden field with a timestamp. A human being (not aided by a too-clever autofilling browser: there are ways to tell browsers not to be clever, and at worst the webmaster can change the names of the user and pass fields every time) takes at least three seconds to fill a form. A cURL script takes zero. Of course, a delay can be simulated. It's all shadowboxing...
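
If a plausible delay is wanted, a one-liner is enough; the bounds below are arbitrary:

    // Wait 3 to 8 seconds before submitting, roughly the time a human
    // would take to type a username and a password.
    usleep(rand(3000000, 8000000));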

We may also want to be on the lookout for form trickery. A webmaster could, for example, build a form asking for name, email, and password, and then use CSS to move the "email" field where you would expect the name to be, and vice versa. So the real form being submitted will have a "@" in a field called username and none in the field called email. The server, which expects this, merely swaps the two fields back. A hand-built "scraper" (or a spambot) would do what seems natural and send an email address in the email field, and by doing so it betrays itself. By working through the form once with a real CSS- and JS-aware browser, sending meaningful data, and sniffing what actually gets sent, we might be able to overcome this particular obstacle. Might, because there are ways of making life difficult. As I said, shadowboxing.

Back to the case at hand: the form contains three fields and has no Javascript overlay. We have cPASS, cUSR, and checkLOGIN with a value of 'Check login'.

So we prepare the form with the proper fields. Note that the form is to be sent as application/x-www-form-urlencoded, which in PHP cURL means two things:

  • we are to use CURLOPT_POST
  • the option CURLOPT_POSTFIELDS must be a string (an array would signal cURL to submit as multipart/form-data, which might work... or might not).

The form fields are, as it says, urlencoded; there's a function for that.

We read the action attribute of the form; that's the URL to which we must submit our credentials (which, of course, we must have).

So everything being ready...

    $fields = array(
        'checkLOGIN' => 'Check Login',
        'cUSR'       => 'jb007',
        'cPASS'      => 'astonmartin',
    );
    $coded = array();
    foreach($fields as $field => $value)
        $coded[] = $field . '=' . urlencode($value);
    $string = implode('&', $coded);

    curl_setopt($ch, CURLOPT_URL,         $url1); //same URL as before, the login url generating the session ID
    curl_setopt($ch, CURLOPT_POST,        True);
    curl_setopt($ch, CURLOPT_POSTFIELDS,  $string);
    $ret    = curl_exec($ch);
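
As an aside, the urlencode loop above can be replaced by PHP's http_build_query(), which produces the same application/x-www-form-urlencoded string:

    // Equivalent to the loop above: build the urlencoded POST body in one
    // call (still a string, so cURL does not switch to multipart/form-data).
    $string = http_build_query($fields);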

We now expect a "Hello, James - how about a nice game of chess?" page. But more than that, we expect that the session associated with the cookie saved in $cookiefile has been supplied with the critical piece of information: "user is authenticated".

So all following page requests made using $ch and the same cookie jar will be granted access, allowing us to 'scrape' pages quite easily - just remember to set request mode back to GET:

    curl_setopt($ch, CURLOPT_POST,        False);

    // Start spidering
    foreach($urls as $url)
    {
        curl_setopt($ch, CURLOPT_URL, $url);
        $HTML = curl_exec($ch);
        if (False === $HTML)
        {
            // Something went wrong; log curl_errno()/curl_error() and move on.
            error_log("cURL error on $url: " . curl_errno($ch) . " " . curl_error($ch));
            continue;
        }
    }
    curl_close($ch);

In the loop, you have access to $HTML -- the HTML code of every single page.

Great the temptation of using regexps is. Resist it you must. To cope better with ever-changing HTML, and to avoid false positives or false negatives when the layout stays the same but the content changes (e.g. you discover that you have the weather forecasts of Nice, Tourrette-Levens, Castagniers, but never Asprémont or Gattières, and isn't that cürious?), the best option is to use DOM:

Grabbing the href attribute of an A element
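
For instance, a minimal DOM/XPath sketch along those lines, run on the $HTML retrieved inside the loop above:

    // Parse the scraped page and collect the href of every A element.
    $dom = new DOMDocument();
    @$dom->loadHTML($HTML);                 // tolerate real-world HTML
    $xpath = new DOMXPath($dom);

    $hrefs = array();
    foreach ($xpath->query('//a[@href]') as $anchor) {
        $hrefs[] = $anchor->getAttribute('href');
    }
    // $hrefs now holds every link on the page, ready to be filtered,
    // resolved against the base URL, or fed back into the spidering loop.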