Jsoup.connect throws 403 error while Apache HttpClient is able to fetch the content

instanceOfObject · Apr 12, 2012 · Viewed 8.2k times

I am trying to parse the HTML dump of a given page. I used HTML Parser and also tried Jsoup for parsing.

I found useful functions in Jsoup, but I am getting a 403 error when calling Document doc = Jsoup.connect(url).get();

I tried HttpClient to get the HTML dump, and it was successful for the same URL.

Why is Jsoup giving a 403 for the same URL that returns content through Apache Commons HttpClient? Am I doing something wrong? Any thoughts? The HttpClient code that did work was roughly along the lines of the sketch below.
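
A simplified sketch of that working fetch, assuming Apache HttpClient 4.x (the exact code isn't important, only that the same URL returns content, which can then be handed to Jsoup for parsing):

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

HttpClient client = new DefaultHttpClient();
HttpResponse response = client.execute(new HttpGet(url)); // returns 200 and the page body
String html = EntityUtils.toString(response.getEntity());

// The fetched HTML can still be parsed with Jsoup
Document doc = Jsoup.parse(html);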

Answer

instanceOfObject · Apr 13, 2012

The working solution is as follows (thanks to Angelo Neuschitzer for the reminder to post it as a solution):

import javax.swing.text.html.HTML;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

Document doc = Jsoup.connect(url).userAgent("Mozilla").get(); // the User-Agent header avoids the 403
Elements links = doc.getElementsByTag(HTML.Tag.CITE.toString()); // tag name "cite"
for (Element link : links) {
    String linkText = link.text();
    System.out.println(linkText);
}

So, userAgent does the trick :)
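
If a bare "Mozilla" string is ever not enough for a stricter server, the same Connection API also lets you send a fuller User-Agent, a referrer, and a timeout. A sketch, where the header values are just example strings:

Document doc = Jsoup.connect(url)
        .userAgent("Mozilla/5.0 (Windows NT 6.1; rv:12.0) Gecko/20100101 Firefox/12.0") // example browser UA
        .referrer("http://www.google.com") // some sites also check the Referer header
        .timeout(10 * 1000)                // 10-second connect/read timeout
        .get();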