Distributed Training

Scalable and Distributed DNN Training on Modern HPC Systems: Challenges and Solutions
Keynote Talk at SDAS ‘19 by Dhabaleswar K. (DK) Panda, The Ohio State University
E-mail: [email protected]
http://www.cse.ohio-state.edu/~panda

Understanding the Deep Learning Resurgence

• Deep Learning is a subset of Machine Learning
  – But it is perhaps the most radical and revolutionary subset
  – Automatic feature extraction vs. hand-crafted features
• Deep Learning – a renewed interest and a lot of hype!
  – Key success: Deep Neural Networks (DNNs)
  – Everything has been there since the late 80s except the “computability of DNNs”

Courtesy: http://www.deeplearningbook.org/contents/intro.html

Deep Learning Use Cases and Growth Trends

Courtesy: https://www.top500.org/news/market-for-artificial-intelligence-projected-to-hit-36-billion-by-2025/

Increasing Usage of HPC, Big Data and Deep Learning

• Convergence of HPC (MPI, RDMA, etc.), Big Data (Hadoop, Spark, HBase, Memcached, etc.), and Deep Learning (Caffe, TensorFlow, BigDL, etc.)!
• Increasing need to run these applications on the cloud!!

Newer Workflows – Deep Learning over Big Data (DLoBD)

• Deep Learning over Big Data (DLoBD) is one of the most efficient analysis paradigms
• More and more deep learning tools and libraries (e.g., Caffe, TensorFlow) are starting to run over big data stacks such as Apache Hadoop and Spark
• Benefits of the DLoBD approach:
  – Easily build a powerful data analytics pipeline: (1) prepare datasets @scale → (2) deep learning @scale → (3) non-deep-learning analytics @scale → (4) apply ML model @scale
    • E.g., Flickr DL/ML Pipeline, “How Deep Learning Powers Flickr”, http://bit.ly/1KIDfof
  – Better data locality
  – Efficient resource sharing and cost effectiveness

Drivers of Modern HPC Cluster Architectures

• Multi-core/many-core technologies (>1 TFlop DP on a chip)
• High-performance interconnects: Remote Direct Memory Access (RDMA)-enabled networking (InfiniBand and RoCE), <1 µs latency, 200 Gbps bandwidth
• Solid State Drives (SSDs), Non-Volatile Random-Access Memory (NVRAM), NVMe-SSD
• Accelerators / coprocessors (NVIDIA GPGPUs and Intel Xeon Phi): high compute density, high performance/watt
• Available on HPC clouds, e.g., Amazon EC2, NSF Chameleon, Microsoft Azure, etc.
• Example systems: Summit, Sierra, Sunway TaihuLight, K Computer

Key Phases of Deep Learning

• Deep Learning has two major tasks:
  1. Training of the Deep Neural Network
  2. Inference (or deployment) that uses a trained DNN
• DNN training
  – Training is a compute/communication-intensive process – it can take days to weeks
  – Faster training is necessary!
• Faster training can be achieved by
  – Using newer and faster hardware – but there is a limit!
  – Using more GPUs or nodes – hence the need for parallel and distributed training

Scale-up and Scale-out Desired

• Scale-up: intra-node communication
  – Many improvements, e.g., NVIDIA cuDNN, cuBLAS, NCCL (now NCCL2); CUDA 9 cooperative groups; MKL-DNN on CPUs
• Scale-out: inter-node communication
  – Most DL frameworks are optimized for single-node execution only
  – Distributed (parallel) training is an emerging trend:
    • OSU-Caffe – MPI-based
    • Microsoft CNTK – MPI/NCCL2
    • Google TensorFlow – gRPC-based/MPI/NCCL2
    • Facebook Caffe2 – hybrid (NCCL2/Gloo/MPI)

(Figure: libraries such as cuDNN, MKL-DNN, NCCL2, MPI, and gRPC positioned by scale-up performance vs. scale-out performance.)
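How well a run scales out is usually summarized as speedup and parallel efficiency against the single-GPU baseline. A minimal sketch of that arithmetic (the function name and the timing numbers below are hypothetical, not from the talk):

```python
def scaling_efficiency(t_single: float, t_parallel: float, n_workers: int):
    """Speedup and parallel efficiency of one training epoch.

    t_single   -- epoch time on a single GPU/node (seconds)
    t_parallel -- epoch time on n_workers GPUs/nodes (seconds)
    """
    speedup = t_single / t_parallel
    efficiency = speedup / n_workers   # 1.0 would be perfect linear scaling
    return speedup, efficiency

# Hypothetical measurements: 900 s/epoch on 1 GPU, 75 s/epoch on 16 GPUs.
s, e = scaling_efficiency(900.0, 75.0, 16)
print(f"speedup = {s:.1f}x, efficiency = {e:.0%}")   # 12.0x, 75%
```

Communication overhead is what typically pulls efficiency below 100%, which is why the next slide argues for evaluating the whole stack rather than the framework in isolation.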
Holistic Evaluation is Important!!

• “My framework is faster than your framework!”
• Such claims need to be understood in a holistic way: performance depends on the entire execution environment (the full stack, from the DL framework down through libraries such as MKL/MKL-DNN to the hardware)
• An isolated view of performance is not helpful

A. A. Awan, H. Subramoni, and Dhabaleswar K. Panda. “An In-depth Performance Characterization of CPU- and GPU-based DNN Training on Modern Architectures”, In Proceedings of the Machine Learning on HPC Environments (MLHPC’17). ACM, New York, NY, USA, Article 8.

Broad Challenge: Exploiting HPC for Deep Learning

How to efficiently scale out a Deep Learning (DL) framework and take advantage of heterogeneous High Performance Computing (HPC) resources?

Research Challenges to Exploit HPC Technologies

1. What are the fundamental issues in designing DL frameworks?
   – Memory requirements
   – Computation requirements
   – Communication overhead
2. Why do we need to support distributed training?
   – To overcome the limits of single-node training
   – To better utilize hundreds of existing HPC clusters

(Figure: the software stack – deep learning and machine learning frameworks (Caffe/OSU-Caffe, CNTK, Caffe2, TensorFlow, MXNet) over the major computation and communication phases in DL frameworks (forward, backward, gradient aggregation, model propagation) over communication runtimes to support distributed training over HPC platforms (CPU, GPU, InfiniBand).)

Research Challenges to Exploit HPC Technologies (Cont’d)

3. What are the new design challenges brought forward by DL frameworks for communication runtimes?
   – Large-message collective communication and reductions
   – GPU buffers (CUDA-awareness)
4. Can a co-design approach help in achieving scale-up and scale-out efficiently?
   – Co-design the support at the runtime level (communication runtimes such as MPI/NCCL/Gloo/MLSL: point-to-point operations, CUDA-awareness, large-message collectives) and exploit it at the DL framework level
   – What performance benefits can be observed?
   – What needs to be fixed at the communication-runtime layer?

Multiple Approaches Taken Up by OSU

• MPI-driven Deep Learning
  – CPU-based Deep Learning
  – GPU-based Deep Learning
• Co-designing deep learning stacks with high-performance MPI
• Out-of-core DNN training
• Accelerating TensorFlow on HPC systems
• Accelerating big data stacks
• Efficient deep learning over big data

Data Parallel Deep Learning and MPI Collectives

• Major MPI collectives involved in designing distributed frameworks:
  – MPI_Bcast – required for DNN parameter exchange
  – MPI_Reduce – needed for gradient accumulation from multiple solvers
  – MPI_Allreduce – use just one Allreduce instead of a Reduce followed by a Broadcast
• Per-iteration training loop (see the sketch after this slide):
  1. Data propagation: MPI_Bcast of the packed parameter buffer (packed_comm_buff) from GPU 0 to all solvers, filling each replica’s layers L1..Ln
  2. Forward/backward pass (F/B) on every GPU over its own shard of the data
  3. Gradient aggregation: MPI_Reduce of the packed gradient buffers (packed_reduce_buff) to GPU 0 (or a single MPI_Allreduce) to reduce and broadcast gradients, then ApplyUpdates

A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters. In Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP ’17).
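The loop above maps almost one-to-one onto MPI calls. A minimal, framework-free sketch with mpi4py and NumPy (a hedged illustration: the random vector stands in for a real forward/backward pass, and the buffer size, step count, and learning rate are made up; designs such as S-Caffe additionally pipeline and overlap these phases):

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
lr = 0.01                                  # learning rate (hypothetical)

n_params = 1_000_000                       # packed buffer covering all layers
params = np.random.rand(n_params) if rank == 0 else np.empty(n_params)

for step in range(10):
    # (1) Data propagation: MPI_Bcast the packed parameters from rank 0.
    comm.Bcast(params, root=0)

    # (2) Forward/backward pass on this rank's shard of the batch;
    #     a random vector stands in for the real gradient computation.
    local_grad = np.random.rand(n_params)

    # (3) Gradient aggregation: one in-place MPI_Allreduce replaces the
    #     Reduce-then-Bcast pair; every rank ends up with the global sum.
    comm.Allreduce(MPI.IN_PLACE, local_grad, op=MPI.SUM)

    # Apply the averaged update; rank 0's copy seeds the next broadcast.
    if rank == 0:
        params -= lr * (local_grad / size)
```

Launched with, e.g., `mpirun -np 4 python train_sketch.py`, each rank plays the role of one solver (GPU) from the figure.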
Overview of the MVAPICH2 Project

• High-performance open-source MPI library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
  – MVAPICH (MPI-1) and MVAPICH2 (MPI-2.2 and MPI-3.1); started in 2001, first version available in 2002
  – MVAPICH2-X (MPI + PGAS), available since 2011
  – Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), available since 2014
  – Support for virtualization (MVAPICH2-Virt), available since 2015
  – Support for energy-awareness (MVAPICH2-EA), available since 2015
  – Support for InfiniBand network analysis and monitoring (OSU INAM), available since 2015
  – Used by more than 3,000 organizations in 88 countries
  – More than 549,000 (>0.5 million) downloads directly from the OSU site
  – Empowering many TOP500 clusters (June ‘19 ranking):
    • 3rd: 10,649,640-core Sunway TaihuLight at NSC, Wuxi, China
    • 16th: 556,104 cores (Oakforest-PACS) in Japan
    • 19th: 367,024 cores (Stampede2) at TACC
    • 31st: 241,108-core Pleiades at NASA, and many others
  – Available with the software stacks of many vendors and Linux distros (RedHat, SuSE, and OpenHPC)
  – http://mvapich.cse.ohio-state.edu
• Partner in the TACC Frontera system
• Empowering Top500 systems for over a decade

Architecture of MVAPICH2 Software Family

• High-performance parallel programming models: Message Passing Interface (MPI), PGAS (UPC, OpenSHMEM, CAF, UPC++), and hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)
• High-performance and scalable communication runtime with diverse APIs and mechanisms: point-to-point primitives, collectives algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, introspection & analysis
• Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path) and modern multi-/many-core architectures (Intel Xeon, OpenPOWER, Xeon Phi, ARM, NVIDIA GPGPU)

(Figure: supported transport protocols and modern features, including RC, SR-IOV, multi-rail, and shared memory.)
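The CUDA-awareness highlighted in the runtime layer means collectives can operate directly on GPU-resident buffers. A minimal sketch, assuming a CUDA-aware MPI build such as MVAPICH2-GDR together with mpi4py 3.1+ (which accepts CuPy arrays via the CUDA Array Interface) and CuPy; the 256 MB buffer size is an arbitrary stand-in for a large packed gradient:

```python
import cupy as cp
from mpi4py import MPI

comm = MPI.COMM_WORLD

# A "large-message" gradient buffer resident in GPU memory (~256 MB fp32).
grad = cp.random.rand(64 * 1024 * 1024, dtype=cp.float32)

# Ensure the GPU has finished producing the data before MPI touches it.
cp.cuda.get_current_stream().synchronize()

# With a CUDA-aware runtime this reduction runs on the device buffers
# directly, without staging the message through host memory.
comm.Allreduce(MPI.IN_PLACE, grad, op=MPI.SUM)
grad /= comm.Get_size()                      # average across ranks
```

On a non-CUDA-aware build the same exchange would need explicit device-to-host copies around the call, which is exactly the overhead the GDR designs remove.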