Have you ever wanted to get specific data from another website, but there's no API available for it?
That's where web scraping comes in: if the data isn't made available by the website through an API, we can just scrape it from the website itself.
But before we dive in let us first define what web scraping is. According to [Wikipedia](http://en.wikipedia.org/wiki/Web_scraping):
{% blockquote %}
Web scraping (web harvesting or web data extraction) is a computer software technique of extracting information from websites. Usually, such software programs simulate human exploration of the World Wide Web by either implementing low-level Hypertext Transfer Protocol (HTTP), or embedding a fully-fledged web browser, such as Internet Explorer or Mozilla Firefox.
{% endblockquote %}
So yes, web scraping lets us extract information from websites.
But be aware that there are some legal issues around web scraping.
Some consider it an act of trespass against the website you are scraping data from.
That's why it is wise to read the terms of service of the specific website you want to scrape, because you might be doing something illegal without knowing it.
You can read more about it in this [Wikipedia page](http://en.wikipedia.org/wiki/Web_scraping).
## Web Scraping Techniques
There are many web scraping techniques, as mentioned in the Wikipedia page earlier.
But I will only discuss the following:
- Document Parsing
- Regular Expressions
### Document Parsing

Document parsing is the process of converting HTML into a DOM (Document Object Model) that we can traverse.
Here's an example of how we can scrape data from a public website. First we fetch the HTML:

```php
<?php
$html = file_get_contents('http://pokemondb.net/evolution'); // get the HTML returned from the URL
?>
```

Then we declare a new DOMDocument. This is used for converting the HTML string returned by `file_get_contents` into an actual Document Object Model that we can traverse:

```php
<?php
$pokemon_doc = new DOMDocument();
?>
```
Then we disable libxml errors so that they won't be output to the screen; instead they will be buffered and stored.
Next we check whether any HTML was actually returned:
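The call that does this is `libxml_use_internal_errors()`; passing `true` tells libxml to buffer errors internally instead of printing them:

```php
<?php
libxml_use_internal_errors(true); // buffer libxml errors instead of printing them
?>
```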
```php
<?php
if (!empty($html)) { // if any HTML was actually returned
}
?>
```
Next we use the `loadHTML()` method of the `DOMDocument` instance that we created earlier to load the HTML that was returned. Simply pass the returned HTML as the argument:

```php
<?php
$pokemon_doc->loadHTML($html);
?>
```
Then we clear the errors, if any. Most of the time, yucky HTML causes these errors. Examples of yucky HTML are inline styling (style attributes embedded in elements), invalid attributes, and invalid elements. Elements and attributes are considered invalid if they are not part of the HTML specification for the doctype used on the specific page.

```php
<?php
libxml_clear_errors(); // remove errors for yucky HTML
?>
```
Next we declare a new instance of `DOMXPath`. This allows us to run queries against the `DOMDocument` that we created.
It requires the `DOMDocument` instance as its argument.

```php
<?php
$pokemon_xpath = new DOMXPath($pokemon_doc);
?>
```
Finally, we simply write the query for the specific elements that we want to get. If you have used [jQuery](http://jquery.com/) before, this process is similar to selecting elements from the DOM.
What we're selecting here is all the `h2` tags that have an `id`. We make the location of the `h2` unspecific by using a double slash `//` right before the element that we want to select. The value of the `id` doesn't matter; as long as there's an `id`, the element gets selected. The `nodeValue` attribute contains the text inside the `h2` that was selected.
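Putting that description together, the query might look like this sketch (the `//h2[@id]` expression follows directly from the description above):

```php
<?php
// select every h2 element that has an id attribute, anywhere in the document
$headings = $pokemon_xpath->query('//h2[@id]');
foreach ($headings as $heading) {
    echo $heading->nodeValue . "\n"; // the text inside the selected h2
}
?>
```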
As you can see from the screenshot, the information that we want to get is contained within a `span` element with a class of `infocard-tall `. Yes, the trailing space is included. When querying with XPath, spaces in the attribute value must be matched exactly if they are present; otherwise the query won't work.
Converting what we know into an actual query, we come up with this:
```
//span[@class="infocard-tall "]
```
This selects all the `span` elements that have a class of `infocard-tall `. It doesn't matter where in the document the `span` is, because we used the double forward slash before the element.
Once we're inside the `span`, we have to get to the elements that directly contain the data we want: the name and the type of the pokemon. As you can see from the screenshot below, the name of the pokemon is directly contained within an `anchor` element with a class of `ent-name`, and the types are stored within a `small` element with a class of `aside`.
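Based on that markup, the loop might be sketched like this (the class names come from the screenshots; the exact structure of the live page may differ):

```php
<?php
// select every infocard span, then query within each one
$rows = $pokemon_xpath->query('//span[@class="infocard-tall "]');
foreach ($rows as $row) {
    // the second argument to query() limits the search to the current row
    $name = $pokemon_xpath->query('.//a[@class="ent-name"]', $row)->item(0);
    $types = $pokemon_xpath->query('.//small[@class="aside"]', $row)->item(0);
    if ($name && $types) {
        echo $name->nodeValue . ' - ' . $types->nodeValue . "\n";
    }
}
?>
```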
There's nothing new in the code above except for using `query` inside the `foreach` loop.
We use that particular line of code to get the name of the pokemon. You might notice that we specified a second argument when we used the `query` method. The second argument is the current row; we use it to specify the scope of the query. This means we're limiting the scope of the query to the current row.
The syntax is simpler, so there's less code to write, and there are also some convenience functions and attributes that you can use. One example is the `plaintext` attribute, which extracts all the text from a web page: