Web Scraping versus Twitter API: A Comparison for a Credibility Analysis



Web Scraping versus Twitter API: A Comparison for a Credibility Analysis

Irvin Dongo (Universidad Católica San Pablo, Arequipa, Perú; Univ. Bordeaux, ESTIA Institute of Technology, Bidart, France), Yudith Cardinale (Universidad Católica San Pablo, Arequipa, Perú; Universidad Simón Bolívar, Caracas, Venezuela), Ana Aguilera (Facultad de Ingeniería, Escuela de Ingeniería Informática, Universidad de Valparaíso, Chile), Fabiola Martínez, Yuni Quintero, and Sergio Barrios (Universidad Simón Bolívar, Caracas, Venezuela).

ABSTRACT

Twitter is one of the most popular information sources available on the Web. Thus, there exist many studies focused on analyzing the credibility of the shared information. Most proposals use either the Twitter API or web scraping to extract the data needed to perform such an analysis. Both extraction techniques have advantages and disadvantages. In this work, we present a study to evaluate their performance and behavior. The motivation for this research comes from the necessity of knowing ways to extract online information in order to analyze, in real-time, the credibility of the content posted on the Web. To do so, we develop a framework which offers both alternatives of data extraction and implements a previously proposed credibility model. Our framework is implemented as a Google Chrome extension able to analyze tweets in real-time. Results report that both methods produce identical credibility values when a robust normalization process is applied to the text (i.e., the tweet). Moreover, concerning time performance, web scraping is faster than the Twitter API and is more flexible in terms of obtaining data; however, web scraping is very sensitive to website changes.

CCS CONCEPTS

• General and reference → Reliability; • Information systems → Social networks; Data extraction and integration.

KEYWORDS

API, Web Scraping, Twitter, Credibility

ACM Reference Format:
Irvin Dongo, Yudith Cardinale, Ana Aguilera, Fabiola Martínez, Yuni Quintero, and Sergio Barrios. 2020. Web Scraping versus Twitter API: A Comparison for a Credibility Analysis. In The 22nd International Conference on Information Integration and Web-based Applications & Services (iiWAS '20), November 30-December 2, 2020, Chiang Mai, Thailand. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3428757.3429104

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. iiWAS '20, November 30-December 2, 2020, Chiang Mai, Thailand. © 2020 Association for Computing Machinery. ACM ISBN 978-1-4503-8922-8/20/11...$15.00. https://doi.org/10.1145/3428757.3429104

1 INTRODUCTION

Social network platforms, such as Twitter, Facebook, and Instagram, have considerably increased their number of users in recent years. These platforms share content, opinions, news, and sometimes fake content. In particular, Twitter is a worldwide social network which has more than 600 million users, and it is one of the most widely used platforms during relevant events [13], such as natural disasters [14], brand and product advertising [25], and presidential elections [6]. However, the information shared on Twitter, and on other social networks, is not completely reliable [31]. The information posted on social networks must be reliable, since its content can help people during crisis situations, influence crowds, and even help companies in decision making. Thus, there exist many studies focused on analyzing the credibility of the shared information [1, 3, 9, 13].
In social networks, the credibility study is affected by different factors [9], such as: (i) the veracity of the text, i.e., content with misspellings and bad words; (ii) the users on the network who generated the content; and (iii) the quantity of data related to the information to be validated. As more data is available, more features can be extracted, and thus a better analysis can be performed. In the state of the art, two main extraction methods have been used: (i) web scraping, which consists of parsing the website (HTML) to obtain data by using the tags; and (ii) the API, an interface provided by social media platforms to retrieve specific information.

This work presents a comparison between these two data extraction techniques. To do so, we extend an application that instantiates our credibility analysis model, both presented in [9]. The credibility model is adaptable to various social networks and is based on several features, such as whether the account is verified and the numbers of following and followers, to compute three credibility measures: Text, User, and Social credibility. The application is extended by updating the web scraping method and adding the Twitter API option.
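As an illustration of the API route (a minimal sketch, not the paper's actual implementation), the features such a credibility model needs could be requested in Python from the Twitter REST API v1.1 statuses/show endpoint; the bearer-token handling and the selection of fields shown here are assumptions:

    import requests

    BEARER_TOKEN = "..."  # hypothetical credential from the Twitter developer portal

    def fetch_tweet_features(tweet_id):
        """Retrieve a tweet plus the user features a credibility model might use."""
        resp = requests.get(
            "https://api.twitter.com/1.1/statuses/show.json",
            params={"id": tweet_id, "tweet_mode": "extended"},
            headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        )
        resp.raise_for_status()
        tweet = resp.json()
        user = tweet["user"]
        return {
            "text": tweet["full_text"],            # input to Text credibility
            "verified": user["verified"],          # input to User credibility
            "followers": user["followers_count"],  # input to Social credibility
            "following": user["friends_count"],
        }

Note how the tweet_id must already be known locally before the remote request can be made, which is the extra round trip the evaluation below measures.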
In order to compare both data extraction techniques, an experimental evaluation is presented. Three different languages (Spanish, English, and French) are used to evaluate their impact on the Text credibility. Moreover, different types of text are analyzed: short, long, and with bad words, misspellings, and emoticons. Social credibility is also evaluated by using two types of accounts: common accounts (fewer than one thousand followers) and famous accounts (more than nine hundred thousand followers). Additionally, the execution time to extract the features is also reported.

Experiments show that a robust normalization process on the text obtained by the extraction methods produces identical credibility results. Also, neither the language nor the type of text has an impact on the credibility. Moreover, the numbers of followers obtained by the two extraction methods show a minor difference for famous accounts, since the number of followers is constantly growing in real-time. Additionally, data extraction with web scraping is faster than with the Twitter API: for the former, only a local extraction (in the Twitter website) is needed, while for the API, a local extraction to obtain the user_id and tweet_id is required in order to later perform a remote API request.

This paper is organized as follows. Section 2 describes the topics related to this work, such as data extraction methods and the social network Twitter. Data extraction techniques are explained in Section 3. The credibility model used in this work is explained in Section 4. In Section 6, we present an experimental evaluation and a discussion of both extraction methods. Finally, we conclude in Section 7.

2 SOCIAL NETWORKS AS INFORMATION SOURCES: PRELIMINARIES

Since the creation of social networks, they have become a new channel for communication and socialization [2]. The types of information exchanged on social media platforms range from personal contexts, with the exchange of personal conversations through messaging, expressions of feelings, photographs, and videos, to official and institutional contexts. Hence, social networks have become popular sources of real-time information. In 2020, among the most popular social networks in terms of the number of users are Facebook [...]. Data extraction includes different methods or a combination of them. The next section describes these data extraction methods.

2.1 Data Extraction Methods

Nowadays, data extraction from web sources is a vital task for most business processes, research, studies, and other purposes. The process of data extraction consists of obtaining relevant data or metadata useful for diverse purposes. Three well-known methods have been applied for this: web scraping, APIs, and manual extraction. Web scraping and APIs are automated techniques and the most practical ways of data harvesting [24]. They allow data to be collected from various website pages and repositories at high speed and accurately. The data is then saved and stored for further use and analysis. Manual extraction, in contrast, is more susceptible to human error and is time consuming.

2.1.1 Web Scraping, use and limitations. Web data scraping can be defined as the process of extracting and combining contents of interest from the Web in a systematic way. In such a process, a software agent, a web robot, or a script mimics the browsing interaction between the web servers and the human in a conventional web traversal. Step by step, the robot/script accesses as many websites as needed, parses their contents to find and extract data of interest, and structures those contents as desired [12]. Web scrapers are useful for retrieving and processing large amounts of data quickly from a specific website. Thus, if the information is displayed on a browser, it can be accessed via a robot/script to extract the data and store it in a database for future analysis and use [20].

Web scraping is commonly used on website pages that use markup languages such as HTML or the eXtensible HyperText Markup Language (XHTML). In this case, scraping consists of parsing hypertext tags and retrieving the plain text information embedded in them. The web data scraper establishes communication with the target website through the HTTP protocol and extracts the contents of interest. Some regular expression matching could be necessary, along with additional logic [12]. The network speed may be a limitation or disadvantage for web scraping, since it affects when and how the data is displayed. Another problem with this method is that web page formats change often. Web scraping involves site-specific programming and does not cope with such expectable changes in the HTML source [12].
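To make the scraping mechanics concrete, here is a minimal Python sketch using Requests and Beautiful Soup. The URL, tag names, and class names are placeholders; their site-specific nature is exactly the fragility noted above.

    import re
    import requests
    from bs4 import BeautifulSoup

    def scrape_titles(url):
        """Fetch a page over HTTP and pull data of interest out of its HTML tags."""
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        # Extraction is tied to the site's current markup: any change to the
        # tag structure or class names below silently breaks the scraper.
        titles = []
        for row in soup.find_all("article", class_="item"):  # placeholder tags
            text = row.find("h2").get_text(strip=True)
            # Regular-expression matching supplements tag parsing, e.g. to
            # collapse runs of whitespace in the extracted text.
            titles.append(re.sub(r"\s+", " ", text))
        return titles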
Recommended publications
  • An Overview of the 50 Most Common Web Scraping Tools
    Web scraping is the process of using bots to extract content and data from a website. Unlike screen scraping, which only copies pixels displayed on screen, web scraping extracts the underlying code, and with it the stored data, and outputs that information into a designated file format. While legitimate use cases exist for data harvesting, illegal purposes exist as well, including undercutting prices and theft of copyrighted content. Understanding web scraping bots starts with understanding the diverse and assorted array of web scraping tools and platforms currently available. Following is a high-level overview of the 50 most common web scraping tools and platforms; the excerpt covers the first seven. (A minimal extract-and-export sketch follows this entry.)
    1. Apache Nutch: an extensible and scalable open-source web crawler software project.
    2. A-Parser: a multithreaded parser of search engines, site assessment services, keywords and content.
    3. Apify: a Node.js library similar to Scrapy that can be used for scraping libraries in JavaScript.
    4. Artoo.js: provides a script that can be run from your browser's bookmark bar to scrape a website and return the data in JSON format.
    5. Blockspring: lets users build visualizations from the most innovative blocks developed by engineers within your organization.
    6. BotScraper: a tool for advanced web scraping and data extraction services that helps organizations from small and medium-sized businesses.
    7. Cheerio: a library that parses HTML and XML documents and allows use of jQuery syntax while working with the downloaded data.
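    As a concrete instance of the extract-and-export step described in this entry, a minimal Python sketch that scrapes a listing page and writes the results to a CSV file. The URL, tag names, and class names are placeholders invented for illustration, not taken from any of the tools above.

        import csv
        import requests
        from bs4 import BeautifulSoup

        # Hypothetical listing page; the div/h2/span selectors are placeholders.
        html = requests.get("https://example.org/products", timeout=10).text
        soup = BeautifulSoup(html, "html.parser")

        with open("products.csv", "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(["name", "price"])  # header row of the designated file format
            for item in soup.select("div.product"):
                name = item.find("h2").get_text(strip=True)
                price = item.find("span", class_="price").get_text(strip=True)
                writer.writerow([name, price])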
  • Automatic Retrieval of Updated Information Related to COVID-19 from Web Portals
    Prateek Raj, Chaman Kumar, and Dr. Mukesh Rawat, Department of Computer Science and Engineering, Meerut Institute of Engineering and Technology, Meerut 250005, U.P., India. European Journal of Molecular & Clinical Medicine, ISSN 2515-8260, Volume 07, Issue 3, 2020.

    Abstract: In the world of social media, we are subjected to a constant overload of information. Of all the information we get, not everything is correct. It is advisable to rely only on reliable sources. Even if we stick to reliable sources, we are unable to make head or tail of all the information we get. Data about the number of people infected, the number of active cases, and the number of people dead vary from one source to another. People usually use up a lot of effort and time navigating through different websites to get better and more accurate results; even so, it takes a lot of time and still leaves people skeptical. This study is based on a web-scraping and web-crawling approach to get better and more accurate results from six COVID-19 data web sources. The scraping script is programmed with a Python library, the crawlers operate on HTML tags, and the application architecture is programmed using Cascading Style Sheets (CSS) and Hypertext Markup Language (HTML). The scraped data is stored in a PostgreSQL database on a Heroku server, and the processed data is provided on a dashboard. (A sketch of such a scrape-and-store pipeline follows this entry.)

    Keywords: Web-scraping, Web-crawling, HTML, Data collection.

    I. INTRODUCTION: The internet is widely loaded with data and contents that are informative as well as accessible to anyone around the globe, in various shapes, sizes, and extensions such as video, audio, text, image, and number.
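    A minimal sketch of the scrape-and-store pipeline this abstract describes, in Python. The portal URL, the span.counter markup, the connection string, and the cases table are all assumptions for illustration; the paper's actual sources and schema are not shown in the excerpt.

        import requests
        import psycopg2  # PostgreSQL driver
        from bs4 import BeautifulSoup

        URL = "https://example.org/covid-stats"  # hypothetical data portal

        def scrape_counts():
            """Pull the three counters off the portal's HTML."""
            soup = BeautifulSoup(requests.get(URL, timeout=10).text, "html.parser")
            # Assume the portal renders infected/active/dead in <span class="counter"> tags.
            return [int(s.get_text().replace(",", ""))
                    for s in soup.select("span.counter")[:3]]

        def store(values):
            """Insert one scraped snapshot into PostgreSQL (hypothetical schema)."""
            conn = psycopg2.connect("postgresql://user:secret@localhost/covid")
            with conn, conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO cases (infected, active, dead) VALUES (%s, %s, %s)",
                    values,
                )
            conn.close()

        store(scrape_counts())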
  • Deconstructing Large-Scale Distributed Scraping Attacks
    Radware Research, September 2018. A stepwise analysis of real-time sophisticated attacks on e-commerce businesses.

    Contents: Why Read This E-book; Key Findings; Real-World Case of a Large-Scale Scraping Attack on an E-tailer (Snapshot of the Scraping Attack; Attack Overview; Stages of Attack: Stage 1, Fake Account Creation; Stage 2, Scraping of Product Categories; Stage 3, Price and Product Info Scraping; Topology of the Attack: How the Three Stages Work in Unison); Recommendations: Action Plan for E-commerce Businesses to Combat Scraping; About Radware.

    Why read this e-book: Hypercompetitive online retail is a crucible of technical innovations to win today's business wars. Tracking prices, deals, content and product listings of competitors is a well-known strategy, but the rapid pace at which the sophistication of such attacks is growing makes them difficult to keep up with. This e-book offers you an insider's view of scrapers' techniques and methodologies, with a few takeaways that will help you fortify your web security strategy. If you would like to learn more, email us at [email protected].

    Key findings (the business of bots): Companies like Amazon and Walmart have internal teams dedicated to scraping. Scraping is a tool to gain a competitive advantage: today, many online businesses either employ an in-house team or leverage the expertise of professional web scrapers to gain an edge over their competitors.
  • Legality and Ethics of Web Scraping
    Vlad Krotov and Leigh Johnson (Murray State University) and Leiser Silva (University of Houston). Murray State's Digital Commons, Faculty & Staff Research and Creative Activity, 12-15-2020.

    Recommended citation: Krotov, V., Johnson, L., & Silva, L. (2020). Tutorial: Legality and Ethics of Web Scraping. Communications of the Association for Information Systems, 47. https://doi.org/10.17705/1CAIS.04724

    This peer-reviewed/refereed publication is brought to you for free and open access by Murray State's Digital Commons. It has been accepted for inclusion in Faculty & Staff Research and Creative Activity by an authorized administrator of Murray State's Digital Commons. For more information, please contact [email protected].
  • Webscraper.io: A How-to Guide for Scraping DH Projects
    A how-to guide for scraping DH projects. Association of Research Libraries, 21 Dupont Circle NW, Suite 800, Washington, DC 20036. (202) 296-2296 | ARL.org

    Contents: 1. Introduction to Webscraper.io (1.1 Installing Webscraper.io; 1.2 Navigating to Webscraper.io); 2. Creating a Sitemap (2.1 Sitemap Menu; 2.2 Importing a Sitemap; 2.3 Creating a Blank Sitemap; 2.4 Editing Project Metadata); 3. Selector Graph; 4. Creating a Selector; 5. Scraping a Website; 6. Browsing Scraped Data; 7. Exporting Sitemaps; 8. Exporting Data.

    Introduction: Webscraper.io is a free extension for the Google Chrome web browser with which users can extract information from any public website using HTML and CSS and export the data as a Comma-Separated Values (CSV) file, which can be opened in spreadsheet processing software like Excel or Google Sheets. The scraper uses the developer tools menu built into Chrome (Chrome DevTools) to select the different elements of a website, including links, tables, and the HTML code itself. With developer tools, users can look at a web page to see the code that generates everything that is seen on the page, from text, to images, to the layout. Webscraper.io uses this code either to extract information or to navigate throughout the overall page. This is helpful for users who do not have another way to extract important information from websites. It is important to be aware of any copyright information listed on a website; consult a copyright lawyer or professional to ensure the information can legally be scraped and shared.
  • A Web Scraping Tutorial Using R
    Alex Bradley and Richard J. E. James, School of Psychology, University of Nottingham.

    Author note: AB is the guarantor. AB created the website and videos. Both authors drafted the manuscript. All authors read, provided feedback on, and approved the final version of the manuscript. The authors declare that they have no conflicts of interest pertaining to this manuscript. Correspondence concerning this article should be addressed to Alex Bradley, School of Psychology, University of Nottingham, Nottingham, NG7 2RD. Email [email protected]. Tel: 0115 84 68188.

    Abstract: The ubiquitous use of the internet for all manner of daily tasks means that there are now large reservoirs of data that can provide fresh insights into human behaviour. One of the key barriers preventing more researchers from utilising online data is that they do not have the skills to access the data. This tutorial aims to address this gap by providing a practical guide to web scraping online data using the popular statistical language R. Web scraping is the process of automatically collecting information from websites, which can take the form of numbers, text, images, or videos. In this tutorial, readers will learn how to download web pages, extract information from those pages, store the extracted information, and navigate across multiple pages of a website. A website has been created to assist readers in learning how to web scrape. (A sketch of the multi-page step follows this entry.)
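    The tutorial itself works in R; as a rough illustration of its "navigate across multiple pages" step in this document's running example language, a Python sketch follows. The paginated URL pattern and the h2 a selector are assumptions.

        import requests
        from bs4 import BeautifulSoup

        BASE = "https://example.org/archive?page={}"  # hypothetical paginated listing

        def crawl(pages):
            """Visit each page of the listing and collect the linked titles."""
            titles = []
            for n in range(1, pages + 1):
                html = requests.get(BASE.format(n), timeout=10).text
                soup = BeautifulSoup(html, "html.parser")
                titles.extend(a.get_text(strip=True) for a in soup.select("h2 a"))
            return titles

        print(crawl(3))  # scrape the first three pages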
  • A Study on Web Scraping
    ISSN (Print): 2320-3765; ISSN (Online): 2278-8875. International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering (a high-impact-factor, monthly, peer-reviewed journal), www.ijareeie.com, Vol. 8, Issue 6, June 2019. Rishabh Singh Tomar, Delhi International School, Indore, Madhya Pradesh, India.

    Abstract: Web scraping is the technique which allows users to fetch data from the World Wide Web. This paper gives a brief introduction to web scraping, covering the different technologies available and the methods to prevent a website from being scraped. Currently available software tools for web scraping are also listed, each with a brief description, and web scraping is explained with a practical example.

    Keywords: Web Scraping, Web Mining, Web Crawling, Data Fetching, Data Analysis.

    I. INTRODUCTION: Web scraping (also known as web data extraction, web harvesting, etc.) is a method used for extracting a large amount of data from websites; the extracted data is then saved in a local repository, which can later be used or analysed accordingly. Most websites' data can be viewed in a web browser, but the websites do not provide any explicit method to save this data locally. Manually copying and pasting the data from a website to a local device becomes a tedious job; web scraping makes things a lot easier. It is the automated version of the manual process: the computer takes care of copying and storing the data for further use in the desired format. This paper describes the legal aspects of web scraping along with the process of web scraping, the techniques available for web scraping, the current software tools with their functionalities, and possible applications of it.
  • CSCI 452 (Data Mining) Dr. Schwartz HTML Web Scraping 150 Pts
    Overview: For this assignment, you'll be scraping the White House press briefings for President Obama's terms in the White House to see which countries have been mentioned and how often. We will use a mirrored image (originally so that we wouldn't cause undue load on the White House servers, now because the data is historical). You will use Python 3 with the following libraries:
    • Beautiful Soup 4 (makes it easier to pull data out of HTML and XML documents)
    • Requests (for handling HTTP requests from Python)
    • lxml (XML and HTML parser)
    We will use Wikipedia's list of sovereign states to identify the entities we will be counting, then we will download all of the press briefings and count how many times a country's name has been mentioned in the press briefings during President Obama's terms.

    Specification: There will be a few distinct steps to this assignment.
    1. Write a Python program to scrape the data from Wikipedia's list of sovereign states (link above). Note that there are some oddities in the table. In most cases, the title on the href (what comes up when you hover your mouse over the link) should work well; see "Korea, South" for example: its title is "South Korea". In entries where there are arrows (see "Kosovo"), there is no title, and you will want to use the text from the link. These minor nuisances are very typical in data scraping, and you'll have to deal with them programmatically (see the sketch after this entry).
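    A sketch of the title-versus-link-text handling that step 1 describes, assuming the country links sit inside a wikitable (the table.wikitable selector is a guess at the page's markup, not part of the assignment):

        import bs4
        import requests

        resp = requests.get("https://en.wikipedia.org/wiki/List_of_sovereign_states")
        soup = bs4.BeautifulSoup(resp.text, "lxml")

        countries = []
        for link in soup.select("table.wikitable a"):
            # Prefer the href title ("Korea, South" -> "South Korea"); entries
            # without one (e.g. "Kosovo") fall back to the visible link text.
            name = link.get("title") or link.get_text(strip=True)
            if name:
                countries.append(name)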
  • Lecture 18: HTML and Web Scraping
    November 6, 2018

    Reminders:
    • Project 2 is extended until Thursday at midnight! Turn in your Python script and a .txt file. For extra credit, run your program on two .txt files and compare the sentiment analysis/bigram and unigram counts in a comment; turn in both .txt files.
    • The final project is released Thursday. You can do this with a partner if you want! The end goal is a presentation in front of the class in December on your results. Proposals will be due next Thursday.

    Today's goals: understand what Beautiful Soup is, and be able to download webpages, print webpage titles, and print webpage paragraphs of text (a minimal snippet follows this entry).

    HTML. Hypertext Markup Language is the language that provides a template for web pages. It is made up of tags that represent different elements (links, text, photos, etc.). You can see the HTML when inspecting the source of a webpage.

    HTML tags:
    • <html> indicates the start of an HTML page
    • <body> contains the items on the actual webpage (text, links, images, etc.)
    • <p> is the paragraph tag; it can contain text and links
    • <a> is the link tag; it contains a link URL and possibly a description of the link
    • <input> is a form input tag, used for text boxes and other user input
    • <form> is a form start tag, indicating the start of a form
    • <img> is an image tag containing the link to an image

    Getting webpages online: similar to using an API like last time, but it uses a specific way of requesting, HTTP (Hypertext Transfer Protocol); HTTPS adds an additional layer of security. The client sends a request to the site and downloads it.
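    A minimal Beautiful Soup snippet matching the lecture's three goals (download a page, print its title, print its paragraphs). The URL is a placeholder.

        import requests
        from bs4 import BeautifulSoup

        page = requests.get("https://example.org")      # download the webpage
        soup = BeautifulSoup(page.text, "html.parser")

        print(soup.title.string)                        # print the page title
        for p in soup.find_all("p"):                    # print each paragraph of text
            print(p.get_text(strip=True))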
  • CSCI 452 (Data Mining) Basic HTML Web Scraping 75 Pts
    Overview: For this assignment, you'll write several small Python programs to scrape simple HTML data from several websites. You will use Python 3 with the following libraries:
    • Beautiful Soup 4 (makes it easier to pull data out of HTML and XML documents)
    • Requests (for handling HTTP requests from Python)
    • lxml (XML and HTML parser)

    Here is a fairly simple example for finding out how many datasets can currently be searched/accessed on data.gov. You should make sure you can run this code before going on to the questions you'll be writing (the answer when I last ran this was 195,384).

        import bs4
        import requests

        response = requests.get('http://www.data.gov/')
        soup = bs4.BeautifulSoup(response.text, "lxml")
        link = soup.select("small a")[0]
        print(link.text)
        # Credit to Dan Nguyen at Stanford's Computational Journalism program

    Specification: Write Python programs to answer the following questions. Be sure to follow the specified output format (including prompts) carefully, since it will be autograded. You will need to do some reading/research regarding the Beautiful Soup interface and possibly on Python as well. There are links to the documentation on my website. Do not hardcode any data; everything should be dynamically scraped from the live websites. Points will be awarded on functionality, but there will be a code inspection. If values are hardcoded or if the style/commenting is insufficient, points will be deducted.

    1. (30 pts) Data.gov (relevant url: http://catalog.data.gov/dataset?q=&sort=metadata_created+desc): accept an integer as input and find the name (href text) of the nth "most recent" dataset on data.gov. (One possible sketch follows this entry.)
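    One way question 1 might be approached, assuming dataset names are rendered as h3.dataset-heading links (a guess at the catalog's markup at the time; the prompt text is also an assumption and would need to match the autograder's spec):

        import bs4
        import requests

        n = int(input("Enter n: "))  # hypothetical prompt
        resp = requests.get("http://catalog.data.gov/dataset?q=&sort=metadata_created+desc")
        soup = bs4.BeautifulSoup(resp.text, "lxml")
        headings = soup.select("h3.dataset-heading a")  # selector is an assumption
        print(headings[n - 1].text.strip())             # nth most recent dataset name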
  • Security Assessment Methodologies (SensePost Services)
    Introduction: SensePost is an information security consultancy that provides security assessments, consulting, training, and managed vulnerability scanning services to medium and large enterprises across the world. Through our labs we provide research and tools on emerging threats. As a result, strict methodologies exist to ensure that we remain at our peak and our reputation is protected. An information security assessment, as performed by anyone in our assessment team, is the process of determining how effective a company's security posture is. This takes the form of a number of assessments and reviews, namely:
    • Extended Internet Footprint (ERP) Assessment
    • Infrastructure Assessment
    • Application Assessment
    • Source Code Review
    • Wi-Fi Assessment
    • SCADA Assessment

    Security testing methodologies: A number of security testing methodologies exist. These methodologies ensure that we follow a strict approach when testing. They prevent common vulnerabilities, or steps, from being overlooked, and give clients the confidence that we look at all aspects of their application/network during the assessment phase. While we understand that new techniques do appear, and some approaches might differ among testers, these methodologies should form the basis of all assessments.

    Extended Internet Footprint (ERP) Assessment: The primary footprinting exercise is only as good as the data that is fed into it. For this purpose we have designed a comprehensive and
  • Data Scraping on the Internet (Web Scraping)
    Schuman & Co. Law Offices (שומן ושות' משרד עורכי דין ונוטריון)

    Introduction: Web scraping, or data scraping, is an operation used for extracting data from websites. While web scraping can be done manually, the term typically refers to automated processes implemented using a web crawler or bot. It is a form of electronic copying, in which data is gathered and copied from the web for later analysis and use. There are methods that some websites use to prevent web scraping, such as detecting and disallowing bots from crawling their pages. In response, there are web scraping systems that rely on techniques in DOM parsing, computer vision, and natural language processing to simulate human browsing, enabling the gathering of web page content for offline parsing.

    Potential legal violations of data scraping: In order to evaluate the risks of a data scraping business model, it is essential to recognize the potential legal violations that might transpire.

    Computer Fraud and Abuse Act (CFAA): The CFAA is a federal statute that imposes liability on someone who "intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains…information from any protected computer." A determination of liability will typically focus on whether the data scraper has knowledge that the terms governing access to the website prohibit the data scraping activity.

    Breach of contract: If a user is bound by terms of service that clearly prohibit data scraping, and the user violates such terms, such a breach can be the basis for prohibiting the user's access and ability to scrape data.