I'm doing some web scraping with Node.js. I'd like to use XPath, since I can generate it semi-automatically with several kinds of GUI tools. The problem is that I cannot find a way to do this effectively.
1. jsdom is extremely slow. It parses a 500 KiB file in a minute or so, with full CPU load and a heavy memory footprint (a sketch of this approach is shown below).
2. Other popular parsers (e.g. cheerio) neither support XPath nor expose a W3C-compliant DOM.
3. phantom or casper would be an option, but those have to be run in a special way, not just node <script>. I cannot accept the risk implied by this change. For example, it's much more difficult to find out how to run node-inspector with phantom.
4. Spooky is an option, but it's buggy enough that it didn't run at all on my machine.

What's the right way to parse an HTML page with XPath then?
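For context, a minimal jsdom-based version of what I'm after looks something like this (a sketch only, assuming a recent jsdom that exposes the JSDOM constructor and its built-in document.evaluate; the file name is a placeholder):

const fs = require('fs');
const { JSDOM } = require('jsdom');

const html = fs.readFileSync('./test.htm', 'utf8');
const { window } = new JSDOM(html);

// jsdom builds a full W3C DOM, so document.evaluate accepts XPath directly,
// but on large pages this is the slow, memory-hungry path described above.
const result = window.document.evaluate(
    '//a/@href', window.document, null,
    window.XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
for (let i = 0; i < result.snapshotLength; i++) {
    console.log(result.snapshotItem(i).value);
}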
You can do so in several steps.
1. Parse the HTML with parse5. The bad part is that the result is not a DOM, though it's fast enough and W3C-compliant.
2. Serialize it to XHTML with xmlserializer, which accepts the DOM-like structures of parse5 as input.
3. Parse that XHTML again with xmldom. Now you finally have that DOM.
4. The xpath library builds upon xmldom, allowing you to run XPath queries. Be aware that XHTML has its own namespace, so queries like //a won't work.

Finally you get something like this.
const fs = require('mz/fs');
const xpath = require('xpath');
const parse5 = require('parse5');
const xmlser = require('xmlserializer');
const dom = require('xmldom').DOMParser;
(async () => {
    // Step 1: parse the HTML with parse5 (fast and spec-compliant, but the result is not a W3C DOM).
    const html = await fs.readFile('./test.htm');
    const document = parse5.parse(html.toString());
    // Step 2: serialize the parse5 tree to an XHTML string.
    const xhtml = xmlser.serializeToString(document);
    // Step 3: re-parse the XHTML with xmldom to get a real W3C DOM.
    const doc = new dom().parseFromString(xhtml);
    // Step 4: run XPath queries, registering the XHTML namespace under the "x" prefix.
    const select = xpath.useNamespaces({"x": "http://www.w3.org/1999/xhtml"});
    const nodes = select("//x:a/@href", doc);
    console.log(nodes);
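    // If you only need plain href strings rather than attribute nodes,
    // you could map over the result (xmldom attribute nodes expose a value property):
    console.log(nodes.map(node => node.value));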
})();