Web Crawler
Michael Schmidt
Governors State University
Governors State University
OPUS Open Portal to University Scholarship, Student Capstone Projects
Fall 2015

Recommended Citation: Schmidt, Michael, "Web Crawler" (2015). All Capstone Projects. 148. http://opus.govst.edu/capstones/148

Web Crawler

By Michael Schmidt
B.S. Computer Science, Governors State University, 2013

GRADUATE CAPSTONE SEMINAR PROJECT
Submitted in partial fulfillment of the requirements for the Degree of Master of Science, with a Major in Computer Science

Governors State University
University Park, IL 60484
2015

Table of Contents

1 Project Description
1.1 Project Abstract
1.2 Competitive Information
1.3 Relationship to Other Applications/Projects
1.4 Assumptions and Dependencies
1.5 Future Enhancements
1.6 Definitions and Acronyms
2 Technical Description
2.1 Project/Application Architecture
2.2 Project/Application Information Flows
2.3 Interactions with Other Projects (if Any)
2.4 Interactions with Other Applications
2.5 Capabilities
2.6 Risk Assessment and Management
3 Project Requirements
3.1 Identification of Requirements
3.2 Operations, Administration, Maintenance and Provisioning (OAM&P)
3.3 Security and Fraud Prevention
3.4 Release and Transition Plan
4 Project Design Description
5 Print Screens
6 Acknowledgements
7 References
8 Appendix

1 Project Description

1.1 Project Abstract

A web crawler is a piece of code that travels the Internet and collects data from web pages, a practice also known as web scraping. Some web crawlers are autonomous and require no instruction once started. This project focuses on a user-driven web crawler, where user input directs where the crawler goes and how the collected data is analyzed. Web scraping replaces the need for manual data entry and more easily reveals trends in the collected data. It can also aggregate information from multiple sources into one central location. While this application provides three specific examples of web crawling/scraping, it could easily be altered to suit additional markets and needs.

1.2 Competitive Information

Web crawlers are currently used in various markets to collect and analyze information.
This information can give a business an edge over its competitors, for example by showing the average price of a competitor's products. Web crawling has been around for over a decade but can still be applied to new and emerging markets: with a few small code changes, most websites can be scraped for information.

1.3 Relationship to Other Applications/Projects

There are many open-source web crawlers available for public use. This one is unique in that it was written specifically for three particular websites; it would need to be modified to work with any other sites.

1.4 Assumptions and Dependencies

• This project depends on the jsoup and MongoDB Java libraries. Should these libraries change or lose functionality, code changes would be required to keep the application running smoothly.
• This project also depends on the websites it scrapes for information. If the HTML of any of those pages changes, the application's code would need to be updated in order to continue working.
• Any site to be supported in the future would need to be examined and coded for individually.

1.5 Future Enhancements

More websites can be supported for data scraping in future releases. Every website uses different HTML id tags, so each must be coded on a case-by-case basis.

1.6 Definitions and Acronyms

Web crawler – an application or piece of code that travels about websites in an attempt to gather information
Web scraper – a web crawler that stores pieces of information taken directly from the websites it has crawled

2 Technical Description

There are two major components that make up this application: a MongoDB database and the application code. The web scraper code is written in Java and is composed of a data access layer and a business layer.
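The jsoup-based extraction step in the business layer can be sketched as below. This is a minimal sketch and not the project's actual code: the `ScraperSketch` class name and the `div.product`/`span.name`/`span.price` selectors are illustrative assumptions, since each real site uses its own HTML tags.

```java
import java.util.ArrayList;
import java.util.List;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class ScraperSketch {

    // Pulls "name: price" pairs out of product markup. The class names in
    // the selectors are hypothetical; each site supported by the crawler
    // needs its own selectors because every site's HTML differs.
    static List<String> extractPrices(String html) {
        Document doc = Jsoup.parse(html);
        List<String> results = new ArrayList<>();
        for (Element item : doc.select("div.product")) {
            results.add(item.select("span.name").text()
                    + ": " + item.select("span.price").text());
        }
        return results;
    }

    public static void main(String[] args) {
        // In the real crawler the page would come from Jsoup.connect(url).get()
        // rather than a literal string.
        String html = "<div class=\"product\"><span class=\"name\">Widget</span>"
                + "<span class=\"price\">$9.99</span></div>";
        System.out.println(extractPrices(html));
    }
}
```

This per-site selector coupling is why, as noted above, each additional website must be examined and coded individually.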
It uses the jsoup library to pull HTML elements out of the web pages it traverses. Several classes in the business layer grab pages from the web. The data access class uses the MongoDB library to connect the application to the MongoDB server, which allows information to be saved to and retrieved from the database.

2.1 Project/Application Architecture

2.2 Project/Application Information Flows

There is no UI at this time, but one could be added for easier use. The command line takes user input and presents the user with the available options. The main menu offers three options: crawl for data, view data, and analyze data. The crawl-for-data menu lets the user choose from three different websites to scrape; once an option is chosen, the site is scraped and the data is saved to the MongoDB server. If the user wants to view the data in the database, they can do so from the main menu. The database menu has three options: print the database names, the collection names, or the collection contents. The application uses one database and three separate collections, one for each of the websites. The last option on the main menu allows the user to perform analytics on the data stored in the MongoDB server.

2.3 Interactions with Other Projects (if Any)

There are no interactions with other projects at this time, but some could be added in future releases.

2.4 Interactions with Other Applications

At this time there are no interactions with other applications. The only outside interactions this application has are with the Internet and the MongoDB server.

2.5 Capabilities
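The analyze-data option in the information flows above is described only at a high level. As one hedged illustration of what such an analytics step could look like, the sketch below averages the price field across scraped records; the `PriceAnalytics` name and the record shape are assumptions, since in the real application the records would be documents read back from a MongoDB collection.

```java
import java.util.List;
import java.util.Map;

public class PriceAnalytics {

    // Computes the average of the "price" field across scraped records.
    // Records are modeled here as plain maps; in the application they
    // would come from the data access layer's MongoDB queries.
    static double averagePrice(List<Map<String, Object>> records) {
        return records.stream()
                .mapToDouble(r -> ((Number) r.get("price")).doubleValue())
                .average()
                .orElse(0.0);
    }

    public static void main(String[] args) {
        List<Map<String, Object>> scraped = List.of(
                Map.of("name", "Widget", "price", 10.0),
                Map.of("name", "Gadget", "price", 4.0));
        System.out.println(averagePrice(scraped)); // prints 7.0
    }
}
```

An aggregate like this is the kind of result that, per the Competitive Information section, could show the average price of a competitor's products.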
This application has a database that is used to store and