Apophanies or Epiphanies? How Crawlers Impact Our Understanding of the Web

Total pages: 16

File type: PDF, size: 1020 KB

Syed Suleman Ahmad (University of Wisconsin - Madison), Muhammad Daniyal Dar (University of Iowa), Muhammad Fareed Zaffar (LUMS), Narseo Vallina-Rodriguez (IMDEA Networks / ICSI), Rishab Nithyanand (University of Iowa)

ABSTRACT

Data generated by web crawlers has formed the basis for much of our current understanding of the Internet. However, not all crawlers are created equal, and crawlers generally find themselves trading off between computational overhead, developer effort, data accuracy, and completeness. Therefore, the choice of crawler has a critical impact on the data generated and the knowledge inferred from it. In this paper, we conduct a systematic study of the trade-offs presented by different crawlers and the impact that these can have on various types of measurement studies. We make the following contributions. First, we conduct a survey of all research published since 2015 in the premier security and Internet measurement venues to identify and verify the repeatability of the crawling methodologies deployed for different problem domains and publication venues. Next, we conduct a qualitative evaluation of a subset of the crawling tools identified in our survey. This evaluation allows us to draw conclusions about the suitability of each tool for specific types of data gathering. Finally, we present a methodology and a measurement framework to empirically highlight the differences between crawlers and how the choice of crawler can impact our understanding of the web.

ACM Reference Format: Syed Suleman Ahmad, Muhammad Daniyal Dar, Muhammad Fareed Zaffar, Narseo Vallina-Rodriguez, and Rishab Nithyanand. 2020. Apophanies or Epiphanies? How Crawlers Impact Our Understanding of the Web. In Proceedings of The Web Conference 2020 (WWW '20), April 20–24, 2020, Taipei, Taiwan. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3366423.3380113

1 INTRODUCTION

Web crawlers and scrapers (there is a subtle distinction between the two, but we refer to both as "crawlers" in the remainder of this paper) have been a core component of the toolkits used by the Internet measurement, security & privacy, information retrieval, and networking communities. In fact, nearly 16% of all publications from the 2015-18 editions of the ACM Internet Measurement Conference (IMC) [23], the Network and Distributed System Security Symposium (NDSS) [27], the ACM Conference on Computer and Communications Security (CCS) [13], the USENIX Security Symposium (SEC) [34], the IEEE Symposium on Security & Privacy (S&P) [22], the International World Wide Web Conference (WWW) [25], the International AAAI Conference on Web and Social Media (ICWSM) [24], and the Privacy Enhancing Technologies Symposium (PETS) [31] relied on Web data obtained through crawlers. However, not all crawlers are created equal or offer the same features, and therefore the choice of crawler is critical. Crawlers generally find themselves trading off between computational overhead, flexibility, data accuracy and completeness, and developer effort. At one extreme, command-line tools such as Wget [35] and cURL [18] are lightweight and easy to use, but the data they provide is rarely accurate or complete; for example, these tools cannot handle dynamically generated content or execute JavaScript, which restricts the completeness of the gathered data [21]. Since JavaScript is used by over 95% of the Alexa Top-100K websites (as of August 2019), their use as crawlers for certain types of research is questionable [37]. At the other extreme, tools like Selenium [10] can deliver extremely accurate and complete data at the cost of high developer effort (needed to build automated web-interaction models that mimic real users) and computational overhead (needed to run a fully-fledged Selenium-driven browser such as Firefox or Chrome). Further complicating matters, in response to a growing variety of attacks, web servers are becoming increasingly sensitive to, and willing to block, traffic that appears anomalous, including traffic that appears to be generated by programmatic crawlers [15, 33].
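To make the trade-off concrete, the sketch below fetches the same page once with a plain HTTP client (no JavaScript execution, as with cURL or Wget) and once with a Selenium-driven headless Chrome that renders the page. It is a minimal illustration only, assuming Python with the requests and selenium packages plus a matching ChromeDriver; the URL is a placeholder.

```python
import requests
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

URL = "https://example.com/"  # placeholder target page

# Lightweight fetch: a single HTTP request, no JavaScript execution,
# so dynamically generated content is missing from the result.
static_html = requests.get(URL, timeout=30).text

# Browser-driven fetch: loads subresources and executes JavaScript,
# at the cost of running a full (headless) Chrome instance.
opts = Options()
opts.add_argument("--headless")
driver = webdriver.Chrome(options=opts)
try:
    driver.get(URL)
    rendered_html = driver.page_source
finally:
    driver.quit()

# On JavaScript-heavy pages the two documents can differ substantially.
print(len(static_html), len(rendered_html))
```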
As a consequence of the above-mentioned trade-offs, it is important to understand how the choice of crawler affects the data gathered by, and the inferences drawn from, a measurement study. What is lacking, however, is a systematic study of the trade-offs presented by different crawlers and the impact that these can have on various types of measurement studies. In this paper, we fill this gap. Our overall objective is to provide researchers with an understanding of the trade-offs presented by different crawlers, and to provide data-driven insight into their impact on Internet measurements and the inferences drawn from them. We accomplish this objective by making the following contributions.

Surveying crawlers used in prior work. (§2) We start our study by surveying research published since 2015 at the following venues: IMC [23], NDSS [27], S&P [22], CCS [13], SEC [34], WWW [25], ICWSM [24], and PETS [31]. In our survey of 2,424 publications, we find 350 papers that rely on data gathered by 19 unique crawlers, which we categorize by their intended purpose. We use the findings of this survey to select the most relevant crawlers in these academic communities for further analysis.

A systematic qualitative evaluation of crawling tools. (§3) We perform a qualitative evaluation of the different crawling tools used by previous research. For this evaluation, we rely on the results of our survey to select crawlers for further analysis. We study the capabilities and flexibility of each of these crawlers with the goal of understanding how they are likely to impact data collection, and what their limitations are. This allows us to draw conclusions about the suitability of each tool for specific types of research problems.
A framework for performing comparative empirical crawler evaluations. (§4) We present a methodology and an accompanying framework for empirically highlighting the differences between crawlers in terms of the network and application data they generate. Further, in an effort to guide future researchers in choosing between different crawlers in specific scenarios, and to make our work repeatable, we make this evaluation framework publicly available at https://sparta.cs.uiowa.edu/projects/methodologies.html.

Illustrating the impact of different crawlers on research results and inferences. (§5) We empirically demonstrate how different crawlers impact measurement results and the inferences drawn from them (and, consequently, our understanding of the web). Based on a large body of prior work, we focus on three major problem domains: privacy, content availability, and security measurements. Through our experiments, we observe the impact on inferences made about (1) the success of traffic-analysis classifiers, (2) the third-party ecosystem, and (3) the incidence rates of server-side blocking. This illustrates that crawler selection does indeed impact measurement and experiment quality, soundness, and completeness.

2 SURVEY OF CRAWLING METHODOLOGIES

We begin our study by surveying the use of crawlers in recent academic research conducted by the Internet measurement, security, and privacy communities. Our goal is to understand how these research communities use web crawlers and to verify the repeatability of the crawling methodologies they deploy.

Crawler identification. Our first goal in this survey is to identify which crawlers were most commonly used by the research community. To accomplish this, we used the following filtering process. First, we randomly sampled 300 publications for manual inspection from the total set of 2,424 papers. During this manual inspection, we extracted keywords that were observed to be commonly associated with crawling (Table 1). Next, we filtered out publications not containing these keywords, on the grounds that they were unlikely to include a crawler as part of their data gathering. This left us with a total of 764 papers. Each of these was then manually inspected to verify the use of a crawler for data gathering; we found a total of 350 publications meeting this criterion. This manual inspection helped us eliminate false positives generated by our keyword-based heuristic, which often arose from mentions of crawling tools and browsers in the related-work sections of papers. During this second round of manual inspection, we identified the use of 19 unique crawling tools across the 350 papers. We labeled papers as having used custom-built crawlers if the crawling tool was developed for the purpose of that specific study. For example, several studies leveraged custom-built extensions or app-store-specific crawlers to overcome the inadequacies of popular off-the-shelf crawlers [38, 43]. Although we include statistics on the occurrences of custom tools, we do not consider them as part of our study. Additionally, we do not include papers relying on data generated using RESTful APIs, since RESTful API responses are defined by a protocol and are not affected by crawler or browser technologies.

Table 1: Keywords obtained from a manual analysis of 300 publications.
  General:  crawl*, scrap*, webkit, webdriver, spider, browser, headless
  Browsers: Firefox, Chrome, Chromium
  Tools:    cURL, Wget, PhantomJS, Selenium, OpenWPM
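As a rough illustration of how such a keyword-based pre-filter can be implemented, the sketch below flags papers whose text matches any of the Table 1 keywords (wildcards treated as suffix matches). The paper's actual tooling is not included in this excerpt, so the structure and names here are illustrative only; flagged papers would still require the manual inspection described above.

```python
import re

# Keyword heuristic from Table 1 ("*" treated as a wildcard suffix).
KEYWORDS = {
    "general": [r"crawl\w*", r"scrap\w*", "webkit", "webdriver", "spider", "browser", "headless"],
    "browsers": ["firefox", "chrome", "chromium"],
    "tools": ["curl", "wget", "phantomjs", "selenium", "openwpm"],
}
PATTERN = re.compile("|".join(p for group in KEYWORDS.values() for p in group), re.IGNORECASE)

def mentions_crawling(paper_text: str) -> bool:
    """First-pass filter: flag papers whose full text matches any crawling keyword."""
    return PATTERN.search(paper_text) is not None

# papers = {"paper-id": "full text ...", ...}   # hypothetical corpus
# candidates = [pid for pid, text in papers.items() if mentions_crawling(text)]
# Candidates still need manual review to drop false positives, e.g. papers that
# only mention crawlers in their related-work sections.
```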
Recommended publications
  • Building a Scalable Index and a Web Search Engine for Music on the Internet Using Open Source Software
    Department of Information Science and Technology. Building a Scalable Index and a Web Search Engine for Music on the Internet using Open Source software. André Parreira Ricardo. Thesis submitted in partial fulfillment of the requirements for the degree of Master in Computer Science and Business Management. Advisor: Professor Carlos Serrão, Assistant Professor, ISCTE-IUL. September 2010. Acknowledgments: I am grateful to have done a thesis linked to music, an art which I love and esteem so much. I would therefore like to take a moment to thank everyone who made this accomplishment possible; it is also partly their achievement. To my family: first, for having instilled in me the curiosity to read, to know, to think and to go further; and second, for allowing me to continue my studies, providing the environment and the financial means to make it possible. To my classmate André Guerreiro, thank you for the invaluable brainstorming, the patience and the help throughout our college years. To my friend Isabel Silva, who gave me precious help in the final revision of this document. To everyone at ADETTI-IUL for the time and attention they gave me, especially the people at Caixa Mágica; I truly value the expertise they passed on, which was useful for my thesis and will, I am sure, also help me in my professional career. To my teacher and MSc advisor, Professor Carlos Serrão, for embracing my wish to pursue a master's in this area and for always being available to help me when I needed advice.
  • Open Search Environments: the Free Alternative to Commercial Search Services
    Open Search Environments: The Free Alternative to Commercial Search Services. Adrian O'Riordan. ABSTRACT: Open search systems present a free and less restricted alternative to commercial search services. This paper explores the space of open search technology, looking in particular at lightweight search protocols and the issue of interoperability. A description of current protocols and formats for engineering open search applications is presented. The suitability of these technologies and issues around their adoption and operation are discussed. This open search approach is especially useful in applications involving the harvesting of resources and information integration. Principal among the technological solutions are OpenSearch, SRU, and OAI-PMH. OpenSearch and SRU realize a federated model that enables content providers and search clients to communicate. Applications that use OpenSearch and SRU are presented. Connections are made with other pertinent technologies such as open-source search software and linking and syndication protocols. The deployment of these freely licensed open standards in web and digital library applications is now a genuine alternative to commercial and proprietary systems. INTRODUCTION: Web search has become a prominent part of the Internet experience for millions of users. Companies such as Google and Microsoft offer comprehensive search services to users for free, with advertisements and sponsored links the only reminder that these are commercial enterprises. Businesses and developers, on the other hand, are restricted in how they can use these search services to add search capabilities to their own websites or to develop applications with a search feature. The closed nature of the leading web search technology places barriers in the way of developers who want to incorporate search functionality into applications.
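As a sketch of the federated model the abstract describes, the snippet below resolves an OpenSearch description document and fills in its URL template to run a query. The endpoint is hypothetical, and only the searchTerms and startPage parameters are handled; real templates may expose more.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical OpenSearch description document (placeholder URL).
DESCRIPTION_URL = "https://example.org/opensearch.xml"
NS = {"os": "http://a9.com/-/spec/opensearch/1.1/"}

def search(terms: str, start_page: int = 1) -> bytes:
    """Resolve the OpenSearch URL template and fetch one page of Atom results."""
    with urllib.request.urlopen(DESCRIPTION_URL) as resp:
        description = ET.parse(resp).getroot()
    # Pick the first Atom result URL advertised by the description document.
    url_element = next(u for u in description.findall("os:Url", NS)
                       if u.get("type") == "application/atom+xml")
    query_url = (url_element.get("template")
                 .replace("{searchTerms}", urllib.parse.quote(terms))
                 .replace("{startPage?}", str(start_page)))
    with urllib.request.urlopen(query_url) as resp:
        return resp.read()

# feed = search("web crawling")  # an Atom feed that any OpenSearch client can parse
```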
  • Efficient Focused Web Crawling Approach for Search Engine
    Ayar Pranav et al., International Journal of Computer Science and Mobile Computing (IJCSMC), A Monthly Journal of Computer Science and Information Technology, ISSN 2320–088X, Vol. 4, Issue 5, May 2015, pp. 545-551. Available online at www.ijcsmc.com. RESEARCH ARTICLE: Efficient Focused Web Crawling Approach for Search Engine. Ayar Pranav, Sandip Chauhan. Computer & Science Engineering, Kalol Institute of Technology and Research Center, Kalol, Gujarat, India. 1 [email protected]; 2 [email protected]. Abstract: A focused crawler traverses the web, selecting pages relevant to a predefined topic and neglecting those outside it. Collecting domain-specific documents using focused crawlers is considered one of the most important strategies for finding relevant information. While crawling, it is difficult to deal with irrelevant pages and to predict which links lead to quality pages. Most focused crawlers use a local search algorithm to traverse the web space, but they can easily become trapped within a limited subgraph of the web surrounding the starting URLs, and relevant pages are missed when no links lead to them from the starting URLs. To address this problem, we design a focused crawler that calculates the frequency of the topic keyword as well as of its synonyms and sub-synonyms. A weight table is constructed according to the user query, the similarity of web pages with respect to the topic keywords is checked, and the priority of each extracted link is calculated.
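A minimal sketch of the kind of weighted keyword-frequency scoring the abstract describes, assuming a hand-made weight table mapping the topic keyword and its synonyms to weights; the paper's actual weight-table construction and similarity check are not reproduced here.

```python
from collections import Counter
import re

# Hypothetical weight table built from the user query: keyword -> weight,
# with synonyms and sub-synonyms weighted lower than the topic keyword itself.
WEIGHTS = {"crawler": 1.0, "spider": 0.7, "robot": 0.5, "harvester": 0.5}

def page_relevance(text: str) -> float:
    """Weighted frequency of the topic keyword and its synonyms in the page text."""
    tokens = Counter(re.findall(r"[a-z]+", text.lower()))
    return sum(weight * tokens[kw] for kw, weight in WEIGHTS.items())

def link_priority(anchor_text: str, parent_score: float) -> float:
    """Priority of an extracted link: its anchor text plus the linking page's relevance."""
    return page_relevance(anchor_text) + 0.5 * parent_score

# Extracted URLs would then be pushed into a frontier ordered by link_priority().
```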
  • Distributed Indexing/Searching Workshop Agenda, Attendee List, and Position Papers
    Distributed Indexing/Searching Workshop: Agenda, Attendee List, and Position Papers. Held May 28-29, 1996 in Cambridge, Massachusetts. Sponsored by the World Wide Web Consortium. Workshop co-chairs: Michael Schwartz (@Home Network) and Mic Bowman (Transarc Corp.). This workshop brings together a cross-section of people concerned with distributed indexing and searching, to explore areas of common concern where standards might be defined. The Call For Participation [1] suggested particular focus on repository interfaces that support efficient and powerful distributed indexing and searching. There was a great deal of interest in this workshop. Because of our desire to limit attendance to a workable size while maximizing the breadth of attendee backgrounds, we limited attendance to one person per position paper, and furthermore to one person per institution. In some cases, attendees submitted multiple position papers, with the intention of discussing each of their projects or ideas that were relevant to the workshop. We had not anticipated this interpretation of the Call For Participation; our intention was to use the position papers to select participants, not to provide a forum for enumerating ideas and projects. As a compromise, we decided to choose among the submitted papers and allow multiple position papers per person and per institution, but to restrict attendance as noted above. Hence, in the paper list below there are some cases where one author or institution has multiple position papers. [1] http://www.w3.org/pub/WWW/Search/960528/cfp.html. Agenda: The Distributed Indexing/Searching Workshop will span two days. The first day's goal is to identify areas for potential standardization through several directed discussion sessions.
  • Natural Language Processing Technique for Information Extraction and Analysis
    International Journal of Research Studies in Computer Science and Engineering (IJRSCSE), Volume 2, Issue 8, August 2015, pp. 32-40, ISSN 2349-4840 (Print) & ISSN 2349-4859 (Online), www.arcjournals.org. Natural Language Processing Technique for Information Extraction and Analysis. T. Sri Sravya, T. Sudha, M. Soumya Harika. 1 M.Tech (C.S.E), Sri Padmavati Mahila Visvavidyalayam (Women's University), School of Engineering and Technology, Tirupati. [email protected] 2 Head (I/C) of C.S.E & IT, Sri Padmavati Mahila Visvavidyalayam (Women's University), School of Engineering and Technology, Tirupati. [email protected] 3 M.Tech C.S.E, Assistant Professor, Sri Padmavati Mahila Visvavidyalayam (Women's University), School of Engineering and Technology, Tirupati. [email protected]. Abstract: In the current internet era, a large number of systems and sensors generate data continuously and inform users about their status and about the status of the devices and surroundings they monitor. Examples include web cameras at traffic intersections and key government installations, seismic activity measurement sensors, tsunami early warning systems, and many others. The current project, a Natural Language Processing based activity, aims to extract entities from data collected from various sources such as social media, internet news articles and other websites, to integrate this data into contextual information, to provide visualization of this data on a map, and to perform co-reference analysis to establish linkage amongst the entities. Keywords: Apache Nutch, Solr, crawling, indexing. 1. INTRODUCTION: In today's harsh global business arena, the pace of events has increased rapidly, with technological innovations occurring at ever-increasing speed and with considerably shorter life cycles.
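The entity-extraction step described above could look something like the sketch below. It uses spaCy purely as an illustration of named-entity recognition over crawled text; the project itself builds on Apache Nutch and Solr, and none of its actual code is shown in this excerpt.

```python
# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_entities(document: str):
    """Return (text, label) pairs for named entities found in a crawled document."""
    doc = nlp(document)
    return [(ent.text, ent.label_) for ent in doc.ents]

print(extract_entities(
    "A tsunami early-warning sensor near Chennai reported unusual activity on Monday."))
# e.g. [('Chennai', 'GPE'), ('Monday', 'DATE')]
```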
  • An Ontology-Based Web Crawling Approach for the Retrieval of Materials in the Educational Domain
    An Ontology-based Web Crawling Approach for the Retrieval of Materials in the Educational Domain. Mohammed Ibrahim (1) and Yanyan Yang (2). 1 School of Engineering, University of Portsmouth, Anglesea Road, PO1 3DJ, Portsmouth, United Kingdom. 2 School of Computing, University of Portsmouth, Anglesea Road, PO1 3DJ, Portsmouth, United Kingdom. Keywords: Web Crawling, Ontology, Education Domain. Abstract: As the web continues to be a huge source of information for various domains, the amount of available information is rapidly increasing. Most of this information is stored in unstructured databases, so searching for relevant information becomes a complex task; the search for pertinent information within a specific domain is time-consuming and, in all probability, results in irrelevant information being retrieved. Crawling and downloading only the pages related to a user's enquiries is a tedious activity. In particular, crawlers focus on converting unstructured data and sorting it into a structured database. In this paper, among other kinds of crawling, we focus on techniques that extract the content of a web page based on the relations between ontology concepts. Ontology is a promising technique for accessing and crawling only related data within specific web pages or a domain. The proposed methodology is a Web Crawler approach based on Ontology (WCO), which defines several relevance computation strategies with increased efficiency, thereby reducing both the number of extracted items and the crawling time. It seeks to select and search out web pages in the education domain that match the user's requirements. In WCO, data is structured based on the hierarchical relationships among the concepts adopted in the ontology domain.
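To illustrate one possible relevance-computation strategy of the kind WCO defines, the toy sketch below weights matches against a small hand-written concept hierarchy; a real system would load a proper ontology (e.g. OWL/RDF) rather than this illustrative dictionary, and this is not the paper's own algorithm.

```python
# Toy ontology for the education domain: concept -> its parent (None = root).
ONTOLOGY = {
    "education": None,
    "course": "education",
    "lecture": "course",
    "assignment": "course",
    "exam": "assignment",
}

def concept_weight(concept: str) -> float:
    """Deeper (more specific) concepts receive a higher weight."""
    depth, node = 0, concept
    while ONTOLOGY.get(node) is not None:
        node = ONTOLOGY[node]
        depth += 1
    return 1.0 + 0.5 * depth

def relevance(page_text: str) -> float:
    """Sum of the weights of ontology concepts that occur in the page text."""
    text = page_text.lower()
    return sum(concept_weight(c) for c in ONTOLOGY if c in text)

# Only pages whose relevance exceeds a threshold would be downloaded and their links followed.
```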
  • A Hadoop Based Platform for Natural Language Processing of Web Pages and Documents
    Journal of Visual Languages and Computing 31 (2015) 130-138. Contents lists available at ScienceDirect. Journal homepage: www.elsevier.com/locate/jvlc. A Hadoop based platform for natural language processing of web pages and documents. Paolo Nesi, Gianni Pantaleo, Gianmarco Sanesi. Distributed Systems and Internet Technology Lab, DISIT Lab, Department of Information Engineering (DINFO), University of Florence, Firenze, Italy. Available online 30 October 2015. Keywords: Natural language processing, Hadoop, Part-of-speech tagging, Text parsing, Web crawling, Big Data mining, Parallel computing, Distributed systems. Abstract: The rapid and extensive pervasion of information through the web has enhanced the diffusion of a huge amount of unstructured natural language textual resources. A great interest has arisen in the last decade in discovering, accessing and sharing such a vast source of knowledge. For this reason, processing very large data volumes in a reasonable time frame is becoming a major challenge and a crucial requirement for many commercial and research fields. Distributed systems, computer clusters and parallel computing paradigms have been increasingly applied in recent years, since they introduce significant improvements in computing performance in data-intensive contexts, such as Big Data mining and analysis. Natural Language Processing, and particularly the tasks of text annotation and key feature extraction, is an application area with high computational requirements; therefore, these tasks can significantly benefit from parallel architectures. This paper presents a distributed framework for crawling web documents and running Natural Language Processing tasks in a parallel fashion. The system is based on the Apache Hadoop ecosystem and its parallel programming paradigm, called MapReduce.
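As a flavor of the MapReduce paradigm the paper builds on, here is a toy Hadoop Streaming job (mapper and reducer in one script) that counts tokens in crawled text; it is illustrative only and unrelated to the authors' actual framework, and the invocation shown in the docstring depends on the cluster setup.

```python
#!/usr/bin/env python3
"""Toy Hadoop Streaming word count over crawled text (illustrative only).

Possible invocation (jar name and paths depend on the cluster setup):
  hadoop jar hadoop-streaming.jar -files wordcount.py \
      -mapper "wordcount.py map" -reducer "wordcount.py reduce" \
      -input crawled_text/ -output token_counts/
"""
import sys

def run_mapper():
    # Emit one "token\t1" pair per token; Hadoop sorts these by key before reducing.
    for line in sys.stdin:
        for token in line.split():
            print(f"{token.lower()}\t1")

def run_reducer():
    current, total = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t", 1)
        if key != current and current is not None:
            print(f"{current}\t{total}")
            total = 0
        current = key
        total += int(value)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    run_mapper() if sys.argv[1] == "map" else run_reducer()
```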
  • Curlcrawler:A Framework of Curlcrawler and Cached Database for Crawling the Web with Thumb
    Ela Kumar et al. / (IJCSIT) International Journal of Computer Science and Information Technologies, Vol. 2 (4), 2011, 1700-1705. CurlCrawler: A framework of CurlCrawler and Cached Database for crawling the web with thumb. Dr Ela Kumar (#), Ashok Kumar ($). # School of Information and Communication Technology, Gautam Buddha University, Greater Noida. $ AIM & ACT, Banasthali University, Banasthali (Rajasthan). Abstract: WWW is a vast source of information, and search engines are the meshwork used to navigate the web for several purposes. It is problematic to identify, with a graphical frame of mind, the desired information amongst the large set of web pages returned by a search engine. With further increases in the size of the Internet, the problem grows exponentially. This paper is an endeavor to develop and implement an efficient framework with multiple search agents that makes search engines resistant to chaffing, using the curl features of the programming language; we call it CurlCrawler. This work is an implementation experience in the use of graphical perception to improve search on the web. The main features of this framework are curl programming, personalization of information, caching and graphical perception. Keywords and Phrases: Indexing Agent, Filtering Agent, Interacting Agent, Seolinktool, Thumb, Whois, CachedDatabase, IECapture, Searchcon, Main_spider, apnasearchdb. 1. INTRODUCTION: Information plays a vital and versatile role, from its availability at the church level to the web, through the evolution of books. WWW is now the prominent and up-to-date huge repository of information available to everyone, everywhere and at every time. Information on the web is navigated using search engines like AltaVista, WebCrawler, HotBot, etc. [1]. The amount of information on the web is growing very fast, and owing to this, optimized functioning of search engines is a constant thrust; this paper is about the basic modules of search engine functioning, and these essential modules are shown in Fig. 1 [9,10]. Fig. 1: Components of an Information Retrieval System. Store: It stores the crawled pages.
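The curl-style fetching at the heart of such a framework can be sketched in Python with the pycurl bindings; this is only an illustration of the idea, not code from CurlCrawler itself, and the user-agent string and URL are placeholders.

```python
from io import BytesIO
import pycurl

def fetch(url: str) -> bytes:
    """Fetch a page body via libcurl, following redirects."""
    buf = BytesIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.FOLLOWLOCATION, True)          # follow HTTP redirects
    c.setopt(pycurl.WRITEDATA, buf)                # collect the response body
    c.setopt(pycurl.USERAGENT, "curl-crawler-demo/0.1")
    c.perform()
    status = c.getinfo(pycurl.RESPONSE_CODE)
    c.close()
    if status != 200:
        raise RuntimeError(f"HTTP {status} for {url}")
    return buf.getvalue()

# html = fetch("http://example.com/")  # the page body could then be parsed and cached
```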
  • Meta Search Engine with an Intelligent Interface for Information Retrieval on Multiple Domains
    International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol. 1, No. 4, October 2011. META SEARCH ENGINE WITH AN INTELLIGENT INTERFACE FOR INFORMATION RETRIEVAL ON MULTIPLE DOMAINS. D. Minnie (1), S. Srinivasan (2). 1 Department of Computer Science, Madras Christian College, Chennai, India. [email protected] 2 Department of Computer Science and Engineering, Anna University of Technology Madurai, Madurai, India. [email protected]. ABSTRACT: This paper analyses the features of Web Search Engines, Vertical Search Engines and Meta Search Engines, and proposes a Meta Search Engine for searching and retrieving documents on multiple domains in the World Wide Web (WWW). A Web Search Engine searches for information in the WWW. A Vertical Search Engine provides the user with results for queries on a specific domain. Meta Search Engines send the user's search queries to various search engines and combine the search results. This paper introduces intelligent user interfaces for selecting the domain, category and search engines for the proposed Multi-Domain Meta Search Engine. An intelligent user interface is also designed to accept the user query and to send it to the appropriate search engines. A few algorithms are designed to combine results from the various search engines and to display the results. KEYWORDS: Information Retrieval, Web Search Engine, Vertical Search Engine, Meta Search Engine. 1. INTRODUCTION AND RELATED WORK: The WWW is a huge repository of information. The complexity of accessing web data has increased tremendously over the years. There is a need for efficient searching techniques to extract appropriate information from the web, as users require correct and complex information from the web. A Web Search Engine is a search engine designed to search the WWW for information about a given search query and return links to various documents in which the search query's keywords are found.
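One simple way to combine result lists from several engines, in the spirit of the meta-search step described above, is round-robin interleaving with de-duplication; the sketch below is illustrative and is not one of the paper's own algorithms (more elaborate rank-fusion schemes, e.g. score-based merging, could be substituted).

```python
from itertools import zip_longest

def merge_results(*ranked_lists, limit=10):
    """Round-robin interleave ranked URL lists from several engines, dropping duplicates."""
    seen, merged = set(), []
    for tier in zip_longest(*ranked_lists):      # one result from each engine per round
        for url in tier:
            if url and url not in seen:
                seen.add(url)
                merged.append(url)
            if len(merged) == limit:
                return merged
    return merged

# merged = merge_results(results_engine_a, results_engine_b, results_engine_c)
```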
  • Effective Focused Crawling Based on Content and Link Structure Analysis
    (IJCSIS) International Journal of Computer Science and Information Security, Vol. 2, No. 1, June 2009. Effective Focused Crawling Based on Content and Link Structure Analysis. Anshika Pal, Deepak Singh Tomar, S.C. Shrivastava. Department of Computer Science & Engineering, Maulana Azad National Institute of Technology, Bhopal, India. Emails: [email protected], [email protected], [email protected]. Abstract: A focused crawler traverses the web, selecting pages relevant to a predefined topic and neglecting those outside it. While surfing the internet it is difficult to deal with irrelevant pages and to predict which links lead to quality pages. In this paper, a technique of effective focused crawling is implemented to improve the quality of web navigation. To check the similarity of web pages with respect to topic keywords, a similarity function is used, and the priorities of the extracted out-links are also calculated based on metadata and the resultant pages generated by the focused crawler. The proposed work also uses a method for traversing the irrelevant pages met during crawling, to improve the coverage of a specific topic. … dependent ontology, which in turn strengthens the metric used for prioritizing the URL queue. The link-structure-based method analyzes the reference information among the pages to evaluate page value; famous algorithms of this kind include the PageRank algorithm [5] and the HITS algorithm [6]. There are some other experiments which measure the similarity of page contents with a specific subject using special metrics and reorder the downloaded URLs for the next crawl [7]. A major problem faced by the above focused crawlers is that it is frequently difficult to learn that some sets of off-topic …
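A toy version of combining a content-similarity function with a link-structure signal, as the abstract and excerpt describe, might look like the sketch below; the blend weights and the PageRank-style parent score are illustrative assumptions, not values from the paper.

```python
import math
import re
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between simple term-frequency vectors of two texts."""
    va, vb = (Counter(re.findall(r"[a-z]+", t.lower())) for t in (text_a, text_b))
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def link_score(topic: str, page_text: str, anchor_text: str, parent_rank: float) -> float:
    """Blend content similarity (page and anchor text vs. topic) with a link-structure signal."""
    return (0.6 * cosine_similarity(topic, page_text)
            + 0.3 * cosine_similarity(topic, anchor_text)
            + 0.1 * parent_rank)   # e.g. a normalized PageRank-style score of the linking page
```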
  • Metadata Statistics for a Large Web Corpus
    Metadata Statistics for a Large Web Corpus. Peter Mika (Yahoo! Research, Diagonal 177, Barcelona, Spain, [email protected]) and Tim Potter (Yahoo! Research, Diagonal 177, Barcelona, Spain, [email protected]). ABSTRACT: We provide an analysis of the adoption of metadata standards on the Web, based on a large crawl of the Web. In particular, we look at what forms of syntax and vocabularies publishers are using to mark up data inside HTML pages. We also describe the process that we have followed and the difficulties involved in web data extraction. 1. INTRODUCTION: Embedding metadata inside HTML pages is one of the ways to publish structured data on the Web, often preferred by publishers and consumers over other methods of exposing structured data, such as publishing data feeds, SPARQL endpoints or RDF/XML documents. … there are a number of factors that complicate the comparison of results. First, different studies use different web corpora. Our earlier study used a corpus collected by Yahoo!'s web crawler, while the current study uses a dataset collected by the Bing crawler. Bizer et al. analyze the data collected by http://www.commoncrawl.org, which has the obvious advantage that it is publicly available. Second, the extraction methods may differ. For example, there are a multitude of microformats (one for each object type), and although most search engines and extraction libraries support the popular ones, different processors may recognize a different subset. Unlike the specifications of microdata and RDFa, the microformat specifications are rather informal, and thus different processors may extract different …
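To give a flavor of the kind of markup the study measures, the sketch below counts microdata itemtype declarations in a crawled HTML page using BeautifulSoup; it is an illustration only and far simpler than the extraction pipeline the paper describes (which also covers RDFa and microformats).

```python
# pip install beautifulsoup4
from collections import Counter
from bs4 import BeautifulSoup

def microdata_types(html: str) -> Counter:
    """Count itemtype URIs declared on elements carrying an itemscope attribute."""
    soup = BeautifulSoup(html, "html.parser")
    counts = Counter()
    for el in soup.find_all(attrs={"itemscope": True}):
        for itemtype in el.get("itemtype", "").split():
            counts[itemtype] += 1
    return counts

sample = '<div itemscope itemtype="http://schema.org/Product"><span itemprop="name">X</span></div>'
print(microdata_types(sample))  # Counter({'http://schema.org/Product': 1})
```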
  • Web Crawling, Analysis and Archiving
    Web Crawling, Analysis and Archiving. Vangelis Banos. Aristotle University of Thessaloniki, Faculty of Sciences, School of Informatics. Doctoral dissertation under the supervision of Professor Yannis Manolopoulos, October 2015. (Greek title page: Ανάκτηση, Ανάλυση και Αρχειοθέτηση του Παγκόσμιου Ιστού, i.e. Retrieval, Analysis and Archiving of the World Wide Web; the author, school and supervisor details are repeated in Greek.) Web Crawling, Analysis and Archiving, PhD Dissertation. © Copyright by Vangelis Banos, 2015. All rights reserved. The doctoral dissertation was submitted to the School of Informatics, Faculty of Sciences, Aristotle University of Thessaloniki. Defence date: 30/10/2015. Examination Committee: Yannis Manolopoulos, Professor, Department of Informatics, Aristotle University of Thessaloniki, Greece (Supervisor). Apostolos Papadopoulos, Assistant Professor, Department of Informatics, Aristotle University of Thessaloniki, Greece (Advisory Committee Member). Dimitrios Katsaros, Assistant Professor, Department of Electrical & Computer Engineering, University of Thessaly, Volos, Greece (Advisory Committee Member). Athena Vakali, Professor, Department of Informatics, Aristotle University of Thessaloniki, Greece. Anastasios Gounaris, Assistant Professor, Department of Informatics, Aristotle University of Thessaloniki, Greece. Georgios Evangelidis, Professor, Department of Applied Informatics, University of Macedonia, Greece. Sarantos Kapidakis, Professor, Department of Archives, Library Science and Museology, Ionian University, Greece. Abstract: The Web is increasingly important for all aspects of our society, culture and economy. Web archiving is the process of gathering digital materials from the Web, ingesting them, ensuring that these materials are preserved in an archive, and making the collected materials available for future use and research. Web archiving is a difficult problem due to organizational and technical reasons. We focus on the technical aspects of Web archiving.
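For a sense of what the gathering-and-ingesting step can look like in practice, the sketch below captures a few fetches into a standard WARC file using the warcio library; this is a generic illustration under the assumption that warcio and requests are installed, not tooling from the dissertation, and the seed URLs are placeholders.

```python
# pip install warcio requests
from warcio.capture_http import capture_http
import requests  # note: imported after capture_http so its HTTP traffic can be recorded

SEEDS = ["https://example.com/", "https://example.org/"]  # placeholder seed URLs

# Every request/response pair issued inside this block is written out as WARC records.
with capture_http("crawl.warc.gz"):
    for url in SEEDS:
        requests.get(url, timeout=30)

# crawl.warc.gz can now be ingested into an archive and replayed by standard tools.
```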