
Parsing Webpages To Extract Contents

I want to design a crawler, using Java, that crawls a webpage and extracts certain contents of the page. How should I do this? I am new to this and need guidance to start designing the crawler.

Solution 1:

Suggested readings

Static pages:
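To make the static case concrete, here is a minimal sketch of a crawl loop using only the JDK's `java.net.http.HttpClient`. The seed URL and the page budget are placeholders, and the regex-based link extraction is deliberately naive; a real crawler would use an HTML parser (such as jsoup, below) instead.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TinyCrawler {
    // Naive href extraction for illustration only; use a real HTML parser in practice.
    static final Pattern HREF = Pattern.compile("href=[\"'](http[^\"']+)[\"']");

    static List<String> extractLinks(String html) {
        List<String> links = new ArrayList<>();
        Matcher m = HREF.matcher(html);
        while (m.find()) {
            links.add(m.group(1));
        }
        return links;
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        Deque<String> frontier = new ArrayDeque<>(List.of("https://example.com")); // placeholder seed
        Set<String> visited = new HashSet<>();

        // Breadth-first crawl with a small page budget so the example terminates.
        while (!frontier.isEmpty() && visited.size() < 10) {
            String url = frontier.poll();
            if (!visited.add(url)) continue; // skip pages we have already fetched

            HttpResponse<String> resp = client.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

            // Queue newly discovered links for later visits.
            frontier.addAll(extractLinks(resp.body()));
            System.out.println("Visited " + url);
        }
    }
}
```

A production crawler would also need politeness (robots.txt, rate limiting), URL normalization, and error handling, but the frontier-plus-visited-set structure above is the core of the design.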

Mind you, many pages create their content dynamically with JavaScript after loading. In that case the 'static page' approach won't help; you will need tools in the "web automation" category. Selenium is one such toolset: you can command a common browser to open and navigate pages, and you may even be able to use a 'headless' browser (no UI) such as PhantomJS.

Good luck, there's lots of reading and coding ahead of you.

[edited for examples]

This technique is called web scraping - search Google for that term to find examples. The following turned up in my searches; I offer no warranties or endorsements for them.

For "static webpage scraping" - here's an example using jsoup
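For illustration, here is a minimal sketch of what jsoup scraping looks like. It assumes the jsoup library (https://jsoup.org) is on the classpath; the example parses an inline HTML snippet so it runs without a network, with the live-fetch call shown in a comment.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class StaticScraper {
    public static void main(String[] args) {
        // For a live page you would use:
        //   Document doc = Jsoup.connect("https://example.com").get();
        // Here we parse an inline snippet instead, so the example is self-contained.
        Document doc = Jsoup.parse(
            "<html><head><title>Demo</title></head>"
          + "<body><a href='https://example.com'>a link</a></body></html>");

        System.out.println("Title: " + doc.title());

        // CSS selectors pick out the elements you care about.
        for (Element link : doc.select("a[href]")) {
            System.out.println(link.text() + " -> " + link.attr("href"));
        }
    }
}
```

The selector syntax (`a[href]`, `div.content`, `#id`, etc.) mirrors CSS, which is what makes jsoup convenient for extracting specific contents from a fetched page.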

For "dynamic pages" - here's an example using Selenium
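And here is a minimal sketch of the dynamic case with Selenium WebDriver. It assumes `selenium-java` is on the classpath and a matching ChromeDriver binary is installed; the URL and the `h1` selector are placeholders.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.time.Duration;

public class DynamicScraper {
    public static void main(String[] args) {
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless=new"); // run the browser without a visible UI
        WebDriver driver = new ChromeDriver(options);
        try {
            driver.get("https://example.com"); // placeholder URL

            // Wait until the JavaScript-rendered element we want actually exists.
            new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.presenceOfElementLocated(By.cssSelector("h1")));

            // Now the DOM reflects the dynamically generated content.
            for (WebElement heading : driver.findElements(By.cssSelector("h1"))) {
                System.out.println(heading.getText());
            }
        } finally {
            driver.quit(); // always release the browser process
        }
    }
}
```

The explicit wait is the important part: with dynamic pages you must wait for the script-generated content to appear before extracting it, rather than reading the initial HTML response.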
