Is this the best way to get a webpage when scraping?
HttpWebRequest oReq = (HttpWebRequest)WebRequest.Create(url);
using (HttpWebResponse resp = (HttpWebResponse)oReq.GetResponse())
{
    var doc = new HtmlAgilityPack.HtmlDocument();
    doc.Load(resp.GetResponseStream());
    // GetElementbyId expects a plain id value, not an XPath expression
    var element = doc.GetElementbyId("start-left");
    var element2 = doc.DocumentNode.SelectSingleNode("//body");
    string html = doc.DocumentNode.OuterHtml;
}
I've seen HtmlWeb().Load used to fetch a webpage. Is that a better alternative for loading and then scraping the page?
OK, I'll try that instead.
HtmlDocument doc = web.Load(url);
Now that I have my doc, it doesn't expose as many properties; there is no SelectSingleNode, for example. The only one I can use is GetElementById, and that works, but I want to select by class.
Do I need to do it like this?
var htmlBody = doc.DocumentNode.SelectSingleNode("//body");
// note: a search relative to a node needs the ".//" prefix,
// otherwise "//..." searches the whole document again
htmlBody.SelectSingleNode(".//paging");
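If SelectSingleNode is missing on your doc, you are probably not working with an HtmlAgilityPack.HtmlDocument (System.Windows.Forms also defines an HtmlDocument, which only has GetElementById). Assuming it is the HtmlAgilityPack type, selecting by class can be done with an XPath predicate. A minimal sketch; the div tag and the class name "paging" are assumptions for illustration:

```csharp
using HtmlAgilityPack;

class ClassSelectExample
{
    static void Main()
    {
        var web = new HtmlWeb();
        HtmlDocument doc = web.Load("http://something"); // URL placeholder from the question

        // Select the first element whose class attribute contains "paging".
        // contains() also matches elements with several classes, e.g. class="paging active".
        HtmlNode paging = doc.DocumentNode
            .SelectSingleNode("//div[contains(@class, 'paging')]");

        if (paging != null)
        {
            System.Console.WriteLine(paging.OuterHtml);
        }
    }
}
```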
It's much easier to use HtmlWeb:
string Url = "http://something";
HtmlWeb web = new HtmlWeb();
HtmlDocument doc = web.Load(Url);
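A fuller sketch of that approach, with a null check since SelectSingleNode returns null when nothing matches. The URL and the inner ".//a" XPath are placeholders, not from the original page:

```csharp
using HtmlAgilityPack;

class Program
{
    static void Main()
    {
        string url = "http://something"; // placeholder
        var web = new HtmlWeb();

        // web.Load returns an HtmlAgilityPack.HtmlDocument,
        // which does expose SelectSingleNode / SelectNodes.
        HtmlDocument doc = web.Load(url);

        HtmlNode body = doc.DocumentNode.SelectSingleNode("//body");
        if (body != null)
        {
            // Search relative to the body node; note the ".//" prefix.
            HtmlNode first = body.SelectSingleNode(".//a"); // assumed element, for illustration
            System.Console.WriteLine(first != null ? first.OuterHtml : "no match");
        }
    }
}
```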