Can a Web Crawler Download Files? doc_crawler 1.2

16 pages, PDF, 1,020 KB

doc_crawler 1.2

doc_crawler - explore a website recursively and download all the wanted documents (PDF, ODT…).

== Synopsis

doc_crawler.py [--accept=jpe?g$] [--download] [--single-page] [--verbose] http://…
doc_crawler.py [--wait=3] [--no-random-wait] --download-files url.lst
doc_crawler.py [--wait=0] --download-file http://…
or
python3 -m doc_crawler […] http://…

== Description

_doc_crawler_ can explore a website recursively from a given URL and retrieve, in the descendant pages, the encountered document files (by default: PDF, ODT, DOC, XLS, ZIP…) based on regular-expression matching (typically against their extension). Documents can be listed on the standard output or downloaded (with the _--download_ argument).

To address real-life situations, activities can be logged (with _--verbose_). The search can also be limited to a single page (with the _--single-page_ argument).

Documents can be downloaded from a given list of URLs, which you may have previously produced using the default options of _doc_crawler_ and an output redirection such as: `./doc_crawler.py http://… > url.lst`

Documents can also be downloaded one by one if necessary (to finish the work), using the _--download-file_ argument, which makes _doc_crawler_ a tool sufficient by itself to assist you at every step.

By default, the program waits a randomly picked number of seconds, between 1 and 5, before each download, to avoid being rude toward the web server it interacts with (and so avoid being blacklisted). This behaviour can be disabled (with a _--no-random-wait_ and/or a _--wait=0_ argument).

_doc_crawler.py_ works great with Tor: `torsocks doc_crawler.py http://…`

== Options

*--accept*=_jpe?g$_:: Optional case-insensitive regular expression that document names must match to be kept. Example: _--accept=jpe?g$_ keeps .JPG, .JPEG, .jpg and .jpeg files.
*--download*:: Directly downloads found documents if set; outputs their URLs if not.
*--single-page*:: Limits the search for documents to the given URL.
*--verbose*:: Creates a log file to keep a trace of what was done.
*--wait*=x:: Changes the default waiting time before each download (page or document). Example: _--wait=3_ waits between 1 and 3 s before each download. Default is 5.
*--no-random-wait*:: Disables the random choice of waiting times; _--wait=_ or its default value is used instead.
*--download-files* url.lst:: Downloads every document whose URL is listed in the given file. Example: _--download-files url.lst_
*--download-file* http://…:: Directly saves the URL-pointed document in the current folder.

== Tests

Around 30 _doctests_ are included in _doc_crawler.py_. You can run them with the following command in the cloned repository root: `python3 -m doctest doc_crawler.py`

Tests can also be launched one by one using the _--test=XXX_ argument: `python3 -m doc_crawler --test=download_file`

Tests pass successfully if nothing is output.

== Requirements

- requests
- yaml

They can be installed under Debian using the following command: `apt install python3-requests python3-yaml`

== Author

Simon Descarpentries - https://s.d12s.fr

== Resources

GitHub repository: https://github.com/Siltaar/doc_crawler.py
PyPI repository: https://pypi.python.org/pypi/doc_crawler

== Support

To support this project, you may consider a donation (even a symbolic one) via: https://liberapay.com/Siltaar

== Licence

GNU General Public License v3.0. See the LICENCE file for more information.
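To make the described workflow concrete, here is a minimal sketch (not doc_crawler's actual code) of the polite-download pattern it documents: read a list of URLs such as the one produced by `./doc_crawler.py http://… > url.lst`, wait a random 1 to 5 seconds before each request, and save each document in the current folder. It assumes only the _requests_ library listed in the requirements; the file names and function names are illustrative.

```python
# polite_download.py - illustrative sketch, not doc_crawler's own implementation
import os
import random
import time
from urllib.parse import urlsplit

import requests

def download_list(list_path, min_wait=1, max_wait=5):
    """Download every URL listed in `list_path` (one per line),
    waiting a random 1-5 s between requests, as doc_crawler does by default."""
    with open(list_path) as url_file:
        urls = [line.strip() for line in url_file if line.strip()]
    for url in urls:
        time.sleep(random.uniform(min_wait, max_wait))  # be polite to the web server
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        # Save under the last path component, mimicking "save in the current folder"
        name = os.path.basename(urlsplit(url).path) or "index.html"
        with open(name, "wb") as out:
            out.write(response.content)

if __name__ == "__main__":
    download_list("url.lst")
```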
Crabler - Web Crawler for Crabs

Crabler is an asynchronous web scraper engine written in Rust. It is fully based on async-std, offers both a derive-macro based API and a struct based API, supports stateful scrapers (structs can hold state), can download files, and can schedule navigation jobs in an async manner.

10 Open Source Web Crawlers: Best List

As you are searching for the best open source web crawlers, you surely know they are a great source of data for analysis and data mining. Internet crawling tools are also called web spiders, web data extraction software, and website scraping tools. The majority of them are written in Java, but there is a good list of free and open-code data extraction solutions in C#, C, Python, PHP, and Ruby. You can download them on Windows, Linux, Mac, or Android. Web content scraping applications can benefit your business in many ways: they collect content from different public websites and deliver the data in a manageable format, and they help you monitor news, social media, images, articles, your competitors, and more. The article also discusses how to choose open source web scraping software (with an infographic in PDF).

1. Scrapy

Scrapy is an open source and collaborative framework for extracting data from websites. It is a fast, simple yet extensible tool written in Python. Scrapy runs on Linux, Windows, Mac, and BSD. It extracts structured data that you can use for many purposes and applications, such as data mining, information processing, or historical archival. Scrapy was originally designed for web scraping; however, it is also used to extract data using APIs or as a general-purpose web crawler. A minimal spider sketch follows this list.

Key features and benefits:
- Built-in support for extracting data from HTML/XML sources using extended CSS selectors and XPath expressions
- Feed exports in multiple formats (JSON, CSV, XML)
- Built on Twisted
- Robust encoding support and auto-detection
- Fast and simple

2. Heritrix

Heritrix is one of the most popular free and open-source web crawlers in Java. It is an extensible, web-scale, archival-quality web scraping project. Heritrix is a very scalable and fast solution: you can crawl/archive a set of websites in no time. In addition, it is designed to respect the robots.txt exclusion directives and META robots tags. It runs on Linux/Unix-like systems and Windows.

Key features and benefits:
- HTTP authentication
- NTLM authentication
- XSL transformation for link extraction
- Search-engine independence
- Mature and stable platform
- Highly configurable
- Runs from any machine

3. WebSphinix

WebSphinix is a great, easy-to-use, personal and customizable web crawler. It is designed for advanced web users and Java programmers, allowing them to crawl over a small part of the web automatically. This web data extraction solution is also a comprehensive Java class library and interactive development environment. WebSphinix includes two parts: the Crawler Workbench and the WebSPHINX class library. The Crawler Workbench is a good graphical user interface that allows you to configure and control a customizable web crawler; the library provides support for writing web crawlers in Java. WebSphinix runs on Windows, Linux, Mac, and Android.

Key features and benefits:
- Visualize a collection of web pages as a graph
- Concatenate pages together for viewing or printing them as a single document
- Extract all text matching a certain pattern
- Tolerant HTML parsing
- Support for the robots exclusion standard
- Common HTML transformations
- Multithreaded web page retrieval

4. Apache Nutch

When it comes to the best open source web crawlers, Apache Nutch definitely has a top place on the list. Apache Nutch is popular as a highly extensible and scalable open source web data extraction project, great for data mining. Nutch can run on a single machine, but much of its strength comes from running in a Hadoop cluster. Many data analysts and scientists, application developers, and web text-mining engineers all over the world use Apache Nutch. It is a cross-platform solution written in Java.

Key features and benefits:
- Fetching and parsing are done separately by default
- Supports a wide variety of document formats: plain text, HTML/XHTML+XML, XML, PDF, ZIP and many others
- Uses XPath and namespaces to do the mapping
- Distributed filesystem (via Hadoop)
- Link-graph database
- NTLM authentication

5. Norconex

A great tool for those who are searching for open source web crawlers for enterprise needs. Norconex allows you to crawl any web content. You can run this full-featured collector on its own, or embed it in your own application. It works on any operating system, can crawl millions of pages on a single server of average capacity, and has many content and metadata manipulation options. It can also extract a page's "featured" image.

Key features and benefits:
- Multi-threaded
- Supports different hit intervals according to different schedules
- Extracts text out of many file formats (HTML, PDF, Word, etc.)
- Extracts metadata associated with documents
- Supports pages rendered with JavaScript
- Language detection
- Translation support
- Configurable crawling speed
- Detects modified and deleted documents
- Supports external commands to parse or manipulate documents
- Many others

6. BUbiNG

BUbiNG will surprise you. It is a next-generation open source web crawler: a fully distributed Java crawler (no central coordination) able to crawl several thousand pages per second and collect really big datasets. The BUbiNG distribution is based on modern high-speed protocols to achieve very high throughput. BUbiNG provides massive crawling for the masses. It is completely configurable, extensible with little effort, and integrated with spam detection.

Key features and benefits:
- High parallelism
- Fully distributed
- Uses JAI4J, a thin layer over JGroups that handles job assignment
- Detects (presently) near-duplicates using a fingerprint of a stripped page
- Fast
- Massive crawling

7. GNU Wget

GNU Wget is a free and open source software tool written in C for retrieving files using HTTP, HTTPS, FTP, and FTPS. Its most distinguishing feature is NLS-based message files for many different languages. In addition, it can optionally convert absolute links in downloaded documents to relative links. It runs on most UNIX-like operating systems as well as Microsoft Windows. GNU Wget is a powerful website scraping tool with a variety of features.
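As the spider sketch promised above, here is a minimal illustration of how little code a basic Scrapy crawl requires. The spider name, domain, start URL, and selectors are invented for the example, not taken from any of the projects above; it simply follows links within one site and yields the URLs of PDF documents it encounters, in the spirit of doc_crawler.

```python
# minimal_spider.py - illustrative Scrapy sketch; names and URLs are made up
import scrapy

class DocumentSpider(scrapy.Spider):
    name = "documents"
    allowed_domains = ["example.com"]          # hypothetical site to stay on
    start_urls = ["http://example.com/"]       # hypothetical starting point

    def parse(self, response):
        for href in response.css("a::attr(href)").getall():
            url = response.urljoin(href)
            if url.lower().endswith(".pdf"):
                # Record every link that looks like a PDF document
                yield {"document_url": url}
            else:
                # Follow other links recursively; the offsite middleware
                # filters requests outside allowed_domains
                yield response.follow(href, callback=self.parse)

# Run with:  scrapy runspider minimal_spider.py -o documents.json
```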
Recommended publications
  • Study of Web Crawler and Its Different Types
    IOSR Journal of Computer Engineering (IOSR-JCE), e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 16, Issue 1, Ver. VI (Feb. 2014), PP 01-05, www.iosrjournals.org. Study of Web Crawler and its Different Types. Trupti V. Udapure (M.E. (Wireless Communication and Computing) student, CSE Department, G.H. Raisoni Institute of Engineering and Technology for Women, Nagpur, India), Ravindra D. Kale (Asst. Prof., CSE Department, G.H. Raisoni Institute of Engineering and Technology for Women, Nagpur, India), Rajesh C. Dharmik (Asso. Prof. & Head, IT Department, Yeshwantrao Chavan College of Engineering, Nagpur, India). Abstract: Due to the current size of the Web and its dynamic nature, building an efficient search mechanism is very important. A vast number of web pages are continually being added every day, and information is constantly changing. Search engines are used to extract valuable information from the internet. A web crawler, the principal part of a search engine, is a computer program or software that browses the World Wide Web in a methodical, automated manner or in an orderly fashion. It is an essential method for collecting data on, and keeping in touch with, the rapidly increasing Internet. This paper briefly reviews the concepts of the web crawler, its architecture and its various types. Keywords: Crawling techniques, Web Crawler, Search engine, WWW. I. Introduction: In modern life, use of the internet is growing rapidly. The World Wide Web provides a vast source of information of almost all types. Nowadays people use search engines every now and then; large volumes of data can be explored easily through search engines to extract valuable information from the web.
  • Distributed Web Crawling Using Network Coordinates
    Distributed Web Crawling Using Network Coordinates. Barnaby Malet, Department of Computing, Imperial College London, [email protected]. Supervisor: Peter Pietzuch. Second Marker: Emil Lupu. June 16, 2009. Abstract: In this report we will outline the relevant background research, the design, the implementation and the evaluation of a distributed web crawler. Our system is innovative in that it assigns Euclidean coordinates to crawlers and web servers such that the distances in the space give an accurate prediction of download times. We will demonstrate that our method gives the crawler the ability to adapt and compensate for changes in the underlying network topology, and in doing so can achieve significant decreases in download times when compared with other approaches. Acknowledgements: Firstly, I would like to thank Peter Pietzuch for the help that he has given me throughout the course of the project as well as showing me support when things did not go to plan. Secondly, I would like to thank Johnathan Ledlie for helping me with some aspects of the implementation involving the Pyxida library. I would also like to thank the PlanetLab support team for giving me extensive help in dealing with complaints from web masters. Finally, I would like to thank Emil Lupu for providing me with feedback about my Outsourcing Report. Contents: 1 Introduction; 2 Background; 2.1 Web Crawling; 2.1.1 Web Crawler Architecture; 2.1.2 Issues in Web Crawling; 2.1.3 Discussion; 2.2 Crawler Assignment Strategies; 2.2.1 Hash Based …
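The core idea, assigning coordinates so that geometric distance predicts download time, can be pictured with a tiny sketch. This is only a generic nearest-crawler assignment under invented 2-D coordinates; it is not the report's actual algorithm and does not use the Pyxida library it mentions.

```python
# Illustrative only: pick, for each web server, the crawler whose network
# coordinates are closest, assuming distance approximates download time.
import math

def nearest_crawler(server_coord, crawler_coords):
    """Return the id of the crawler with the smallest Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(crawler_coords, key=lambda cid: dist(crawler_coords[cid], server_coord))

# Hypothetical coordinates produced by a network coordinate system
crawlers = {"crawler-eu": (0.0, 1.0), "crawler-us": (5.0, 2.0)}
print(nearest_crawler((4.2, 1.5), crawlers))  # -> "crawler-us"
```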
  • Panacea D4.1
    SEVENTH FRAMEWORK PROGRAMME, THEME 3: Information and Communication Technologies. PANACEA Project, Grant Agreement no. 248064. Platform for Automatic, Normalized Annotation and Cost-Effective Acquisition of Language Resources for Human Language Technologies. D4.1: Technologies and tools for corpus creation, normalization and annotation. Dissemination Level: Public. Delivery Date: July 16, 2010. Status – Version: Final. Author(s) and Affiliation: Prokopis Prokopidis, Vassilis Papavassiliou (ILSP), Pavel Pecina (DCU), Laura Rimel, Thierry Poibeau (UCAM), Roberto Bartolini, Tommaso Caselli, Francesca Frontini (ILC-CNR), Vera Aleksic, Gregor Thurmair (Linguatec), Marc Poch Riera, Núria Bel (UPF), Olivier Hamon (ELDA). Table of contents: 1 Introduction; 2 Terminology; 3 Corpus Acquisition Component; 3.1 Task description; 3.2 State of the art; 3.3 Existing tools …
  • Towards a Distributed Web Search Engine
    Towards a Distributed Web Search Engine. Ricardo Baeza-Yates, Yahoo! Research, Barcelona, Spain. Joint work with Barla Cambazoglu, Aristides Gionis, Flavio Junqueira, Mauricio Marín, Vanessa Murdock (Yahoo! Research) and many other people. Web search is one of the most complex data engineering challenges today: it is distributed in nature, involves a large volume of data, must run as a highly concurrent service, and users expect very good and fast answers. The current solution is a replicated centralized system. A typical web search engine relies on caching (result cache, posting-list cache, document cache), replication (multiple clusters to improve throughput), and parallel query processing (a partitioned index, document-based or term-based, with online query processing). Search engine architectures differ in the number of data centers, the assignment of users to data centers, and the assignment of the index to data centers. System size: 20 billion web pages imply at least 100 TB of text, and keeping the index in RAM implies a cluster of at least 10,000 PCs. Assume we can answer 1,000 queries/sec; 350 million queries a day imply 4,000 queries/sec. Deciding that the peak load plus a fault-tolerance margin is 3 implies a replication factor of 12, giving 120,000 PCs and a total deployment cost of over 100 million US$ plus maintenance costs. In 201x, being conservative, we would need over 1 million computers! Questions: Should we use a centralized system? Can we have a (cheaper) distributed search system in spite of network latency?
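The sizing figures above follow from simple back-of-envelope arithmetic. The sketch below merely re-derives the slide's numbers from its own stated assumptions (queries per day, per-cluster throughput, safety margin, cluster size); nothing in it comes from a real deployment.

```python
# Back-of-envelope sizing, reproducing the figures quoted above
queries_per_day = 350e6
queries_per_sec = queries_per_day / 86_400   # ~4,050 q/s, rounded to 4,000 in the slides
cluster_throughput = 1_000                   # queries/sec one cluster can answer
safety_margin = 3                            # peak load plus fault-tolerance margin
pcs_per_cluster = 10_000                     # keeping the index in RAM needs ~10,000 PCs

replication_factor = round(queries_per_sec / cluster_throughput) * safety_margin
total_pcs = replication_factor * pcs_per_cluster

print(replication_factor, total_pcs)         # 12 clusters, 120,000 PCs
```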
  • Distributed Web Crawlers Using Hadoop
    International Journal of Applied Engineering Research, ISSN 0973-4562, Volume 12, Number 24 (2017), pp. 15187-15195. © Research India Publications. http://www.ripublication.com. Distributed Web Crawlers using Hadoop. Pratiba D, Assistant Professor, Department of Computer Science and Engineering, R V College of Engineering, R V Vidyanikethan Post, Mysuru Road, Bengaluru, Karnataka, India, Orcid Id: 0000-0001-9123-8687; Shobha G, Professor, Department of Computer Science and Engineering, R V College of Engineering, Bengaluru, Karnataka, India; LalithKumar H, Student, Department of Computer Science and Engineering, R V College of Engineering, Bengaluru, Karnataka, India; Samrudh J, Student, Department of Information Science and Engineering, R V College of Engineering, Bengaluru, Karnataka, India. Abstract: A web crawler is software which crawls through the WWW to build a database for a search engine. In recent years, web crawling has started facing many challenges. Firstly, web pages are highly unstructured, which makes it difficult to maintain a generic schema for storage. Secondly, the WWW is too huge and it is impossible to index it as it is. With the increasing size of web content, it is very difficult to index the entire WWW, so efforts must be put into limiting the amount of data indexed while at the same time maximizing coverage. To minimize the amount of data indexed, Natural Language Processing techniques need to be applied to filter and summarize each web page and store only the essential details. By doing so, a search can yield the most accurate and …
  • Design and Implementation of Distributed Web Crawler for Drug Website Search Using Hefty Based Enhanced Bandwidth Algorithms
    Turkish Journal of Computer and Mathematics Education, Vol. 12, No. 9 (2021), 123-129. Research Article. Design and Implementation of Distributed Web Crawler for Drug Website Search using Hefty based Enhanced Bandwidth Algorithms. Saran Raj S, Aarif Ahamed S (Vel Tech Rangarajan Dr Sagunthala R & D Institute of Science and Technology), and R. Rajmohan (IFET College of Engineering, Villupuram, Tamil Nadu, India). Article History: Received: 10 January 2021; Revised: 12 February 2021; Accepted: 27 March 2021; Published online: 20 April 2021. Abstract: The development of an expert-system-based search tool needs a superficial structure to satisfy the requirements of the current web scale. Search engines and internet crawlers are used to mine the required information from the web and surf the internet in an efficient fashion. A distributed crawler is one type of web crawler, which is a dispersed computation method. In this paper, we design and implement the concept of an efficient distributed web crawler using enhanced bandwidth and hefty algorithms. Most web crawlers do not have any distributed cluster performance system or any implemented algorithm. In this paper, a novel Hefty algorithm and an enhanced bandwidth algorithm are combined for a better distributed crawling system. The Hefty algorithm is implemented to provide strong and efficient surfing results when applied to drug web search. We also concentrate on the efficiency of the proposed distributed web crawler by implementing the Enhanced Bandwidth algorithm. Keywords: Distributed crawler, Page surfs, Bandwidth. 1. Introduction: An internet page swarmer is a meta-search engine, which combines the top search results from the represented search engines.
  • A Practical Geographically Distributed Web Crawler
    UniCrawl: A Practical Geographically Distributed Web Crawler. Do Le Quoc, Christof Fetzer (Systems Engineering Group, Dresden University of Technology, Germany); Pierre Sutra, Valerio Schiavoni, Étienne Rivière, Pascal Felber (University of Neuchâtel, Switzerland). Abstract: As the wealth of information available on the web keeps growing, being able to harvest massive amounts of data has become a major challenge. Web crawlers are the core components to retrieve such vast collections of publicly available data. The key limiting factor of any crawler architecture is however its large infrastructure cost. To reduce this cost, and in particular the high upfront investments, we present in this paper a geo-distributed crawler solution, UniCrawl. UniCrawl orchestrates several geographically distributed sites. Each site operates an independent crawler and relies on well-established techniques for fetching and parsing the content of the web. UniCrawl splits the crawled domain space across the sites and federates their storage and computing resources, while minimizing the inter-site communication cost. … can be repeated until a given depth, and pages are periodically re-fetched to discover new pages and to detect updated content. Due to the size of the web, it is mandatory to make the crawling process parallel [23] on a large number of machines to achieve a reasonable collection time. This requirement implies provisioning large computing infrastructures. Existing commercial crawlers, such as Google or Bing, rely on big data centers. However, this approach imposes heavy requirements, notably on the cost of the network infrastructure. Furthermore, the high upfront investment necessary to set up appropriate data centers can only be made by a few large Internet companies …
  • UCYMICRA: Distributed Indexing of the Web Using Migrating Crawlers
    UCYMICRA: Distributed Indexing of the Web Using Migrating Crawlers. Odysseas Papapetrou, Stavros Papastavrou, George Samaras, Computer Science Department, University of Cyprus, 75 Kallipoleos Str., P.O. Box 20537. {cs98po1, stavrosp, cssamara}@cs.ucy.ac.cy. Abstract: Due to the tremendous increase rate and the high change frequency of Web documents, maintaining an up-to-date index for searching purposes (search engines) is becoming a challenge. The traditional crawling methods are no longer able to catch up with the constantly updating and growing Web. Realizing the problem, in this paper we suggest an alternative distributed crawling method with the use of mobile agents. Our goal is a scalable crawling scheme that minimizes network utilization, keeps up with document changes, employs time realization, and is easily upgradeable. 1 Introduction: Indexing the Web has become a challenge due to the Web's growing and dynamic nature. A study released in late 2000 reveals that the static and publicly available Web (also mentioned as the surface web) exceeds 2.5 billion documents, while the deep Web (dynamically generated documents, intranet pages, web-connected databases, etc.) is almost three orders of magnitude larger [20]. Another study shows that the Web is growing and changing rapidly [17, 19], while no search engine succeeds in covering more than 16% of the estimated Web size [19]. Web crawling (or traditional crawling) has been the dominant practice for Web indexing by popular search engines and research organizations since 1993, but despite the vast computational and network resources thrown into it, traditional crawling is no longer able to catch up with the dynamic Web.
  • DATA20021 Information Retrieval Lecture 5: Web Crawling Simon J
    DATA20021 Information Retrieval, University of Helsinki, Department of Computer Science. Lecture 5: Web Crawling. Simon J. Puglisi, [email protected], Spring 2020. Course outline: 16.1 Introduction to Indexing (Boolean retrieval model, inverted indexes); 21.1 Index Compression (unary, gamma, variable-byte coding; (partitioned) Elias-Fano coding, used by Google and Facebook); 23.1 Index Construction (preprocessing documents prior to search, building the index efficiently); 28.1 Web Crawling (getting documents off the web at scale, architecture of a large-scale web search engine); 30.1 Query Processing (scoring and ranking search results, vector-space model). Web search engines create web repositories: they cache the Web on their local machines. Web repositories provide fast access to copies of the pages on the Web, allowing faster indexing and better search quality. A search engine aims to minimize the potential differences between its local repository and the Web, since coverage and freshness allow better-quality answers; this is very challenging due to the fast and continuous evolution of the web, with huge changes in pages and content every second. The web repository maintains only the most recently crawled versions of web pages: raw HTML, but compressed, on a filesystem (not a DBMS), plus a catalog containing the location on disk, size, and timestamp. Mechanisms for both bulk and random access to stored pages are provided; bulk access is used, e.g., by the indexing system, and random access for, e.g., query-biased snippet generation.
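The repository-plus-catalog design the lecture describes can be pictured with a small sketch: compressed raw HTML on a filesystem, a catalog recording location, size and timestamp, and both bulk and random access. The class and field names below are invented for illustration, not taken from the course material.

```python
# Illustrative sketch of a web-repository catalog, not taken from the lecture
import time
import zlib
from dataclasses import dataclass
from pathlib import Path

@dataclass
class CatalogEntry:
    url: str
    path: Path        # location of the compressed page on disk
    size: int         # compressed size in bytes
    timestamp: float  # when this version was crawled

class WebRepository:
    """Stores compressed raw HTML on the filesystem and keeps a catalog in memory."""
    def __init__(self, root: Path):
        self.root = root
        self.catalog: dict[str, CatalogEntry] = {}

    def store(self, url: str, html: str) -> None:
        data = zlib.compress(html.encode("utf-8"))
        path = self.root / f"{abs(hash(url))}.z"
        path.write_bytes(data)
        self.catalog[url] = CatalogEntry(url, path, len(data), time.time())

    def get(self, url: str) -> str:
        # Random access: e.g. for query-biased snippet generation
        entry = self.catalog[url]
        return zlib.decompress(entry.path.read_bytes()).decode("utf-8")

    def scan(self):
        # Bulk access: e.g. for the indexing system
        for entry in self.catalog.values():
            yield entry.url, zlib.decompress(entry.path.read_bytes()).decode("utf-8")
```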
  • Analysis, Modeling, and Algorithms for Scalable Web Crawling
    ANALYSIS, MODELING, AND ALGORITHMS FOR SCALABLE WEB CRAWLING. A Dissertation by SARKER TANZIR AHMED, submitted to the Office of Graduate and Professional Studies of Texas A&M University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY. Chair of Committee: Dmitri Loguinov. Committee Members: Riccardo Bettati, James Caverlee, A. L. Narasimha Reddy. Head of Department: Dilma Da Silva. August 2016. Major Subject: Computer Science. Copyright 2016 Sarker Tanzir Ahmed. Abstract: This dissertation presents a modeling framework for the intermediate data generated by external-memory sorting algorithms (e.g., merge sort, bucket sort, hash sort, replacement selection) that are well-known, yet without accurate models of produced data volume. The motivation comes from the IRLbot crawl experience in June 2007, where a collection of scalable and high-performance external sorting methods were used to handle such problems as URL uniqueness checking, real-time frontier ranking, budget allocation, and spam avoidance, all monumental tasks, especially when limited to the resources of a single machine. We discuss this crawl experience in detail, use novel algorithms to collect data from the crawl image, and then advance to a broader problem: sorting arbitrarily large-scale data using limited resources and accurately capturing the required cost (e.g., time and disk usage). To solve these problems, we present an accurate model of uniqueness probability (the probability of encountering previously unseen data) and use that to analyze the amount of intermediate data generated by the above-mentioned sorting methods.
  • Around the Web in Six Weeks: Documenting a Large-Scale Crawl
    Around the Web in Six Weeks: Documenting a Large-Scale Crawl. Sarker Tanzir Ahmed, Clint Sparkman, Hsin-Tsang Lee, Dmitri Loguinov. Department of Computer Science and Engineering, Texas A&M University, College Station, TX 77843, USA. Email: [email protected], [email protected], [email protected]. Abstract: Exponential growth of the web continues to present challenges to the design and scalability of web crawlers. Our previous work on a high-performance platform called IRLbot [28] led to the development of new algorithms for realtime URL manipulation, domain ranking, and budgeting, which were tested in a 6.3B-page crawl. Since very little is known about the crawl itself, our goal in this paper is to undertake an extensive measurement study of the collected dataset and document its crawl dynamics. We also propose a framework for modeling the scaling rate of various data structures as crawl size goes to infinity and offer a methodology for comparing crawl coverage to that of commercial search engines. … there exists no standard methodology for examining web crawls and comparing their performance against one another. With each paper providing different, and often very limited, types of information, little can be said about the relative strengths of various crawling techniques or even their web coverage. Setting aside the financial aspect discussed above, this lack of transparency has helped stifle innovation, allowing industry to take a technological and scientific lead in this area. Finally, the majority of existing web studies have no provisions to handle spam [8], [12], [17], [30], [36], [37], [41]. One technique for tackling the massive scale, infinite script-…
  • Implementation of Efficient Distributed Crawler Through Stepwise Crawling Node Allocation
    Journal of JAITC, Vol. 10, No. 2, pp. 15-31, Dec. 31, 2020. http://dx.doi.org/10.14801/JAITC.2020.10.2.15. Implementation of Efficient Distributed Crawler through Stepwise Crawling Node Allocation. Hyuntae Kim and Yuchul Jung (corresponding author), Cognitive Intelligence Lab., Department of Computer Engineering, Kumoh National Institute of Technology, Gumi, Korea; Junhyung Byun and Yoseph Na, undergraduate students, Department of Computer Engineering, Kumoh National Institute of Technology, Gumi, Korea. [email protected] (orcid: https://orcid.org/0000-0002-9803-8642), [email protected] (orcid: https://orcid.org/0000-0002-6543-805X), [email protected] (orcid: https://orcid.org/0000-0002-5360-7418), [email protected] (orcid: https://orcid.org/0000-0002-8871-1979). Abstract: Various websites have been created due to the increased use of the Internet, and the number of documents distributed through these websites has increased proportionally. However, it is not easy to collect newly updated documents rapidly. Web crawling methods have been used to continuously collect and manage new documents, whereas existing crawling systems applying a single node demonstrate limited performance. Furthermore, crawlers applying distribution methods exhibit a problem related to effective node management for crawling. This study proposes an efficient distributed crawler through stepwise crawling node allocation, which identifies websites' properties and establishes crawling policies based on the properties identified to collect a large number of documents from multiple websites. The proposed crawler can calculate the number of documents included in a website, compare data collection time and the amount of data collected based on the number of nodes allocated to a specific website by repeatedly visiting the website, and automatically allocate the optimal number of nodes to each website for crawling.
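As a purely generic illustration of allocating crawling nodes per website (this is not the paper's stepwise algorithm, and the site names and numbers below are invented), one simple baseline is to hand out nodes in proportion to each site's estimated document count:

```python
# Generic proportional node allocation; not the authors' stepwise method
def allocate_nodes(doc_counts, total_nodes):
    """Give each website at least one node, and distribute the rest
    proportionally to its estimated number of documents.
    Rounding may leave the total slightly off; a real scheduler would rebalance."""
    total_docs = sum(doc_counts.values())
    allocation = {site: 1 for site in doc_counts}   # minimum one node each
    remaining = total_nodes - len(doc_counts)
    for site, count in doc_counts.items():
        allocation[site] += round(remaining * count / total_docs)
    return allocation

# Hypothetical estimates of documents per website
estimates = {"site-a.example": 120_000, "site-b.example": 30_000, "site-c.example": 50_000}
print(allocate_nodes(estimates, 20))  # e.g. {'site-a.example': 11, 'site-b.example': 4, 'site-c.example': 5}
```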