Web Scraping

Lab Objective: Web scraping is the process of gathering data from websites on the internet. Since almost everything rendered by an internet browser as a web page uses HTML, the first step in web scraping is being able to extract information from HTML. In this lab, we introduce the requests library for scraping web pages, and BeautifulSoup, Python's canonical tool for efficiently and cleanly navigating and parsing HTML.

HTTP and Requests

HTTP stands for Hypertext Transfer Protocol, an application-layer networking protocol. It is a higher-level protocol than TCP, which we used to build a server in the Web Technologies lab, but it uses TCP to manage connections and provide network capabilities. The HTTP protocol is centered around a request-and-response paradigm, in which a client makes a request to a server and the server replies with a response. There are several methods, or requests, defined for HTTP servers, the three most common of which are GET, POST, and PUT. GET requests ask for information from the server, POST requests modify the state of the server, and PUT requests add new pieces of data to the server.

The standard way to get the source code of a website using Python is via the requests library.[1] Calling requests.get() sends an HTTP GET request to a specified website. The website returns a response code, which indicates whether or not the request was received, understood, and accepted. If the response code is good, typically 200, then the response will also include the website source code as an HTML file.

>>> import requests

# Make a request and check the result. A status code of 200 is good.
>>> response = requests.get("http://www.byu.edu")
>>> print(response.status_code, response.ok, response.reason)
200 True OK

[1] Though requests is not part of the standard library, it is recognized as a standard tool in the data science community. See http://docs.python-requests.org/.
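To make the request-and-response cycle concrete, the sketch below spins up a throwaway HTTP server from the standard library's http.server module and sends it a GET request with requests. The handler class, port choice, and page contents are all made up for illustration; raise_for_status() is the usual way to turn a bad status code into an exception.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

class TinyHandler(BaseHTTPRequestHandler):
    """Serve one hard-coded HTML page in response to any GET request."""
    PAGE = b"<html><body><p>Hello from a tiny server.</p></body></html>"

    def do_GET(self):
        self.send_response(200)                         # status line: 200 OK
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(self.PAGE)))
        self.end_headers()
        self.wfile.write(self.PAGE)

    def log_message(self, *args):                       # keep the demo quiet
        pass

# Bind to port 0 so the OS picks a free port, then serve in the background.
server = HTTPServer(("localhost", 0), TinyHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: one GET request, then the usual status checks.
response = requests.get(f"http://localhost:{port}/", timeout=5)
response.raise_for_status()      # raises requests.HTTPError on a bad status
print(response.status_code, response.ok, response.reason)

server.shutdown()
```

Running this prints 200 True OK, just as the byu.edu request above does, because both go through the same GET/response handshake.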
# The HTML of the website is stored in the 'text' attribute.
>>> print(response.text)
<!DOCTYPE html>
<html lang="en" dir="ltr" prefix="content: http://purl.org/rss/1.0/modules/content/
dc: http://purl.org/dc/terms/ foaf: http://xmlns.com/foaf/0.1/
og: http://ogp.me/ns# rdfs: http://www.w3.org/2000/01/rdf-schema#
schema: http://schema.org/ sioc: http://rdfs.org/sioc/ns#
sioct: http://rdfs.org/sioc/types# skos: http://www.w3.org/2004/02/skos/core#
xsd: http://www.w3.org/2001/XMLSchema#" class=" ">
<head>
<meta charset="utf-8"/>
# ...

Note that some websites aren't built to handle large amounts of traffic or many repeated requests. Most are built to identify web scrapers or crawlers that initiate many consecutive GET requests without pauses, and to retaliate against or block them. When web scraping, always make sure to store the data that you receive in a file, and include error checks to prevent retrieving the same data unnecessarily. This is especially important in larger applications.

Problem 1. Use the requests library to get the HTML source for the website http://www.example.com. Save the source as a file called example.html. If the file already exists, make sure not to scrape the website or overwrite the file. You will use this file later in the lab.

Achtung! Scraping copyrighted information without the consent of the copyright owner can have severe legal consequences. Many websites, in their terms and conditions, prohibit scraping parts or all of the site. Websites that do allow scraping usually have a file called robots.txt (for example, www.google.com/robots.txt) that specifies which parts of the website are off-limits and how often requests can be made, according to the robots exclusion standard.[a] Be careful and considerate when doing any sort of scraping, and take care when writing and testing code to avoid unintended behavior.
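The robots.txt rules mentioned above can be checked programmatically with the standard library's urllib.robotparser module. The sketch below feeds the parser a made-up robots.txt inline via parse(); against a real site you would instead call set_url() on the robots.txt URL followed by read().

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: everything under /private/ is off-limits,
# and crawlers are asked to wait 10 seconds between requests.
rules = """\
User-agent: *
Disallow: /private/
Crawl-delay: 10
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("*", "https://example.com/public/index.html"))   # True
print(parser.can_fetch("*", "https://example.com/private/index.html"))  # False
print(parser.crawl_delay("*"))                                          # 10
```

A considerate scraper calls can_fetch() before every request and sleeps for at least the crawl delay between requests.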
It is up to the programmer to create a scraper that respects the rules found in the terms and conditions and in robots.txt.[b]

[a] See www.robotstxt.org/orig.html and en.wikipedia.org/wiki/Robots_exclusion_standard.
[b] Python provides a parsing library called urllib.robotparser for reading robots.txt files. For more information, see https://docs.python.org/3/library/urllib.robotparser.html.

HTML

Hyper Text Markup Language, or HTML, is the standard markup language (a language designed for the processing, definition, and presentation of text) for creating webpages. It structures a document using pairs of tags that surround and define content. Opening tags have a tag name surrounded by angle brackets (<tag-name>). The companion closing tag looks the same, but with a forward slash before the tag name (</tag-name>). A list of all current HTML tags can be found at http://htmldog.com/reference/htmltags. Most tags can be combined with attributes to include more data about the content, help identify individual tags, and make navigating the document much simpler. In the following example, the <a> tag has id and href attributes.

<html>      <!-- Opening tags -->
<body>
<p>
Click <a id='info' href='http://www.example.com'>here</a>
for more information.
</p>
</body>     <!-- Closing tags -->
</html>

In HTML, href stands for hypertext reference, a link to another website. Thus the above example would be rendered by a browser as a single line of text, with "here" being a clickable link to http://www.example.com:

Click here for more information.

Unlike Python, HTML does not enforce indentation (or any whitespace rules), though indentation generally makes HTML more readable. The previous example can be written in a single line:

<html><body><p>Click <a id='info' href='http://www.example.com'>here</a> for more information.</p></body></html>

Special tags, which don't contain any text or other tags, are written without a closing tag and in a single pair of brackets.
A forward slash is included between the name and the closing bracket. Examples include <hr/>, which describes a horizontal line, and <img/>, the tag for representing an image.

Problem 2. Using the output from Problem 1, examine the HTML source code for http://www.example.com. What tags are used? What is the value of the type attribute associated with the style tag? Write a function that returns the set of names of tags used in the website, and the value of the type attribute of the style tag (as a string). (Hint: there are ten unique tag names.)

BeautifulSoup

BeautifulSoup (bs4) is a package[2] that makes it simple to navigate and extract data from HTML documents. See http://www.crummy.com/software/BeautifulSoup/bs4/doc/index.html for the full documentation.

[2] BeautifulSoup is not part of the standard library; install it with conda install beautifulsoup4 or with pip install beautifulsoup4.

The bs4.BeautifulSoup class accepts two parameters to its constructor: a string of HTML code and an HTML parser to use under the hood. The HTML parser is technically a keyword argument, but the constructor prints a warning if one is not specified. The standard choice for the parser is "html.parser", which means the object uses the standard library's html.parser module as the engine behind the scenes.

Note: Depending on project demands, a parser other than "html.parser" may be useful. A couple of other options are "lxml", an extremely fast parser written in C, and "html5lib", a slower parser that treats HTML in much the same way a web browser does, allowing for irregularities. Both must be installed independently; see https://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser for more information.

A BeautifulSoup object represents an HTML document as a tree. In the tree, each tag is a node with nested tags and strings as its children.
The prettify() method returns a string that can be printed to represent the BeautifulSoup object in a readable format that reflects the tree structure.

>>> from bs4 import BeautifulSoup
>>> small_example_html = """
<html><body><p>
Click <a id='info' href='http://www.example.com'>here</a>
for more information.
</p></body></html>
"""
>>> small_soup = BeautifulSoup(small_example_html, 'html.parser')
>>> print(small_soup.prettify())
<html>
 <body>
  <p>
   Click
   <a href="http://www.example.com" id="info">
    here
   </a>
   for more information.
  </p>
 </body>
</html>

Each tag in a BeautifulSoup object's HTML code is stored as a bs4.element.Tag object, with actual text stored as a bs4.element.NavigableString object. Tags are accessible directly through the BeautifulSoup object.

# Get the <p> tag (and everything inside of it).
>>> small_soup.p
<p>
Click <a href="http://www.example.com" id="info">here</a>
for more information.
</p>

# Get the <a> sub-tag of the <p> tag.
>>> a_tag = small_soup.p.a
>>> print(a_tag, type(a_tag), sep='\n')
<a href="http://www.example.com" id="info">here</a>
<class 'bs4.element.Tag'>

# Get just the name, attributes, and text of the <a> tag.
>>> print(a_tag.name, a_tag.attrs, a_tag.string, sep="\n")
a
{'id': 'info', 'href': 'http://www.example.com'}
here

Attribute        | Description
-----------------|--------------------------------------------------------------
name             | The name of the tag
attrs            | A dictionary of the attributes
string           | The single string contained in the tag
strings          | Generator for strings of children tags
stripped_strings | Generator for strings of children tags, stripping whitespace
text             | Concatenation of strings from all children tags

Table 1.1: Data attributes of the bs4.element.Tag class.

Problem 3. The BeautifulSoup class has a find_all() method that, when called with True as the only argument, returns a list of all tags in the HTML source code. Write a function that accepts a string of HTML code as an argument. Use BeautifulSoup to return a list of the names of the tags in the code.
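As a sketch of how find_all() behaves, the example below runs it on a small made-up document (assuming bs4 is installed as described above), first with a tag name and then with True:

```python
from bs4 import BeautifulSoup

html = """
<html><body>
<p>Click <a id='info' href='http://www.example.com'>here</a> for more.</p>
<p>Or <a id='extra' href='http://www.example.com/extra'>this link</a>.</p>
</body></html>
"""
soup = BeautifulSoup(html, "html.parser")

# find_all("a") returns every <a> tag, in document order.
hrefs = [tag.attrs["href"] for tag in soup.find_all("a")]
print(hrefs)        # ['http://www.example.com', 'http://www.example.com/extra']

# find_all(True) matches every tag in the document.
names = [tag.name for tag in soup.find_all(True)]
print(names)        # ['html', 'body', 'p', 'a', 'p', 'a']
```

Each element of the returned list is a bs4.element.Tag, so the attributes from Table 1.1 (name, attrs, string, and so on) are all available on the results.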
