An Approach of Web Crawling and Indexing of Nutch


N. Kowsalya
Assistant Professor, Department of Computer Applications
Vivekanandha College of Arts and Sciences for Women (Autonomous), Tiruchengode-05, Tamilnadu

International Journal of Scientific & Engineering Research, Volume 5, Issue 11, November-2014, ISSN 2229-5518

Abstract — Nutch has a highly modular architecture. Crawling is the operation that navigates and retrieves the information in web pages, populating the set of documents that will be searched. Typically, at the centre of any IR system is the inverted index. For each term, the inverted index contains a posting list, which lists the documents that contain the term, represented as integer document-IDs (doc-IDs). The Nutch dedup command eliminates duplicate documents from a set of Lucene indices for Nutch segments. Nutch also adds support for HTML extraction to Lucene, and it includes a link analysis algorithm similar to PageRank. The WebDB contains the web graph of pages and links, and the fetchlists generated from it contain every URL we are interested in downloading. The query engine consists of one or more front-ends and one or more back-ends, with each back-end associated with a segment of the complete data set. The driver represents external users, and it is the point at which query performance is measured, in terms of queries per second (qps).

Index Terms — Crawling; Retrieving; Indexing; Link Analysis; PageRank; WebDB; Web Graph

1 Introduction:

Apache Nutch is an open source web crawler that is used for crawling websites. It is extensible and scalable, and it provides facilities for parsing, indexing, and scoring filters for custom implementations. Nutch is an effort to build an open source search engine based on Lucene Java for the search and index component. It has a highly modular architecture, allowing developers to create plug-ins for media-type parsing, data retrieval, querying, and clustering.

Nutch is a complete open-source Web search engine package that aims to index the World Wide Web as effectively as commercial search services. As a research platform it is also promising at smaller scales, since its flexible architecture enables communities to customize it, and it can even scale down to a personal computer. Its founding goal was to increase the transparency of the Web search process as searching becomes an everyday task. The nonprofit Nutch Organization supports the open-source development effort as it addresses significant technical challenges of operating at the scale of the entire public Web. Nutch server installations have already indexed 100M-page collections while providing state-of-the-art search result quality. At the same time, smaller organizations have adopted Nutch for intranet and campus networks; at this scale, all of its components can run on a single server.

Nutch is an open-source project hosted by the Apache Software Foundation. It provides a complete, high-quality Web search system, as well as a flexible, scalable platform for the development of novel Web search engines. Nutch includes:
• a web crawler;
• a link-graph builder;
• parsers for web content;
• schemas for indexing and search;
• distributed operation, for high scalability;
• an extensible, plugin-based architecture.
Nutch is implemented in Java and its code is open source, so it runs on many operating systems and a wide variety of hardware.

2 Nutch Architecture:

Nutch has a highly modular architecture that uses plug-in APIs for media-type parsing, HTML analysis, data retrieval protocols, and queries. The core has four major components:

Searcher: Given a query, it must quickly find a small relevant subset of a corpus of documents and then present them. Finding the relevant subset is normally done with an inverted index of the corpus; the documents within that set are then ranked to produce the most relevant results, which must be summarized for display.

Indexer: Creates the inverted index from which the searcher extracts results. It uses Lucene to store its indexes (a minimal sketch follows Figure-1).

Database: Stores the document contents for indexing and later summarization by the searcher, along with information such as the link structure of the document space and the time each document was last fetched.

Fetcher: Requests web pages, parses them, and extracts links from them. Nutch's robot has been written entirely from scratch.

Figure-1 outlines the relationships between these elements: elements that depend on each other are placed in the same box, and the things they depend on appear in a lower layer. For example, protocol does not depend on net, because protocol is only an interface point for the plugins that actually provide much of Nutch's functionality.

Figure-1 Nutch Architecture
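To make the roles of the Indexer and Searcher concrete, the following is a minimal sketch using the Lucene API that Nutch builds on. It is illustrative only, not Nutch's actual code: the field names url, anchor, and content are assumptions mirroring the URL, anchor-text, and document-text fields discussed in Section 2.2.1, and a Lucene 5+-style API is assumed.

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.StringField;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.queryparser.classic.QueryParser;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import java.nio.file.Paths;

    public class LuceneSketch {
        public static void main(String[] args) throws Exception {
            StandardAnalyzer analyzer = new StandardAnalyzer();
            Directory dir = FSDirectory.open(Paths.get("nutch-index"));

            // Indexer role: add one fetched page as a multi-field Lucene document.
            try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
                Document doc = new Document();
                doc.add(new StringField("url", "http://example.org/", Field.Store.YES));
                doc.add(new TextField("anchor", "example site", Field.Store.NO));
                doc.add(new TextField("content", "An example page about web crawling.", Field.Store.NO));
                writer.addDocument(doc);
            }

            // Searcher role: find and rank the most relevant documents for a query.
            try (DirectoryReader reader = DirectoryReader.open(dir)) {
                IndexSearcher searcher = new IndexSearcher(reader);
                QueryParser parser = new QueryParser("content", analyzer);
                for (ScoreDoc hit : searcher.search(parser.parse("crawling"), 10).scoreDocs) {
                    System.out.println(searcher.doc(hit.doc).get("url") + "  score=" + hit.score);
                }
            }
        }
    }

In a real Nutch deployment the indexer runs over fetched segments in bulk, but the division of labour is the same: the indexer writes Lucene documents, and the searcher ranks and returns the best-scoring hits.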
2.1 Crawling:

Crawling is the operation that navigates and retrieves the information in web pages, populating the set of documents that will be searched. This set of documents is called the corpus, in search terminology. Crawling can be performed on internal networks (intranets) as well as external networks (the Internet). Crawling, particularly on the Internet, is a complex operation: either intentionally or unintentionally, many web sites are difficult to crawl, and sophisticated technology is required to develop a good crawler.

The performance of crawling is usually limited by the bandwidth of the network between the system doing the crawling and the outside world. In the Commercial Scale Out project, the system setup did not have a high-bandwidth connection to the outside world, and it would have taken a long time to populate the system with enough documents to create an interesting corpus.

An intranet or niche search engine might take only a few hours to crawl on a single machine, while a whole-web crawl might take many machines several weeks or longer. A single crawling cycle consists of generating a fetchlist from the WebDB, fetching those pages, parsing them for links, and then updating the WebDB (a sketch of this cycle follows below). Nutch's crawler supports both a crawl-and-stop policy and a crawl-and-stop-with-threshold policy (which requires feedback from scoring and a specified floor). It also uses a uniform refresh policy: all pages are refetched at the same interval (30 days, by default) regardless of how frequently they change (future versions may set individual recrawl deadlines on every page). The fetching process must also respect bandwidth and other limitations of the target website. Any polite solution requires coordination before fetching; Nutch uses the most straightforward localization of references possible, namely making all fetches from a particular host run on one machine.
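The following skeleton illustrates that generate/fetch/parse/update loop. It is an illustrative sketch only; the WebDb, Fetcher, and Parser interfaces here are assumptions made for the example, not Nutch's actual classes.

    import java.util.List;

    /** Illustrative skeleton of a Nutch-style crawl cycle (assumed types, not Nutch's API). */
    public class CrawlCycle {
        interface WebDb {
            List<String> generateFetchlist(int topN);        // best-scoring URLs due for fetching
            void update(String url, List<String> outlinks);  // record fetch and discovered links
        }
        interface Fetcher {
            String fetch(String url);                        // request the page content
        }
        interface Parser {
            List<String> extractLinks(String html);          // outlinks for the next fetchlist
        }

        static void runCycles(WebDb webDb, Fetcher fetcher, Parser parser, int cycles) {
            for (int i = 0; i < cycles; i++) {               // crawl-and-stop: a fixed number of cycles
                for (String url : webDb.generateFetchlist(1000)) {
                    String html = fetcher.fetch(url);        // politeness: one host handled by one machine
                    webDb.update(url, parser.extractLinks(html));
                }
            }
        }
    }

The crawl-and-stop policy corresponds to running a fixed number of such cycles; crawl-and-stop-with-threshold would instead stop adding URLs to the fetchlist once page scores fall below the configured floor.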
2.2 Indexing:

To allow efficient retrieval of documents from a corpus, suitable data structures must be created, collectively known as an index. Usually a corpus covers many documents, and hence the index will be held on a large storage device, commonly one or more hard disks. Typically, at the centre of any IR system is the inverted index. For each term, the inverted index contains a posting list, which lists the documents that contain the term, represented as integer document-IDs (doc-IDs). Each posting in the posting list also stores sufficient statistical information to score each document, such as the frequency of the term's occurrences and, possibly, positional information (the position of the term within each document, which facilitates phrase or proximity search) or field information (the occurrence of the term in various semi-structured areas of the document, such as the title, enabling these to be weighted higher during retrieval).

The inverted index does not store the textual terms themselves, but instead uses an additional structure known as a lexicon to store them, along with pointers to the corresponding posting lists within the inverted index. A document index may also be created, which stores meta-information about each document within the inverted index, such as an external name for the document (e.g. its URL) and the length of the document. The process of generating these structures is known as indexing.

2.2.1 Indexing Text:

Lucene meets the scalability requirements for text indexing in Nutch. Nutch also takes advantage of Lucene's multi-field, case-folding keyword and phrase search in URLs, anchor text, and document text. The typical definition of Web search does not require some index types which Lucene does not address, such as regular-expression matching (using, say, suffix trees) or document-similarity clustering (using, say, term vectors).

2.2.2 Indexing Hypertext:

Lucene provides an inverted-file full-text index, which suffices for indexing the text but not for the additional tasks required by a web search engine. In addition, Nutch implements a link database to provide efficient access to the Web's link graph, and a page database that stores crawled pages for indexing, summarizing, and serving to users, as well as for supporting other functions such as crawling and link analysis. In the terminology of one web search engine survey, Nutch combines the text and utility databases into its page database. Nutch also adds support for HTML extraction to Lucene: when a page is fetched, any embedded links are considered for addition to the fetchlist, and each link's anchor text is also stored.

Nutch includes a link analysis algorithm similar to PageRank. Link analysis does not use Nutch's homegrown IPC service; like fetching, its work is coordinated through the appearance of files in a shared directory. There are better techniques for distributing and accelerating link analysis, such as MapReduce. Nutch includes a parallel indexing operation written using the MapReduce programming model. MapReduce provides a convenient way of addressing an important class of real-life commercial applications by hiding the parallelization and the fault tolerance from programmers, letting them focus on the problem domain. MapReduce was published by Google in 2004 and quickly became a popular approach for parallelizing commercial applications.
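As an illustration of this MapReduce style of indexing, here is a minimal inverted-index job written against the standard Hadoop MapReduce API (a sketch, not Nutch's actual indexing job): the map phase emits a (term, doc-ID) pair for every term occurrence, and the reduce phase gathers each term's doc-IDs into its posting list, while the framework supplies the parallelization and fault tolerance.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    /** Map: for each term in a page's content, emit (term, docId). */
    class InvertedIndexMapper extends Mapper<Text, Text, Text, Text> {
        @Override
        protected void map(Text docId, Text content, Context ctx)
                throws IOException, InterruptedException {
            StringTokenizer terms = new StringTokenizer(content.toString().toLowerCase());
            while (terms.hasMoreTokens()) {
                ctx.write(new Text(terms.nextToken()), docId);
            }
        }
    }

    /** Reduce: concatenate all doc-IDs seen for a term into its posting list. */
    class InvertedIndexReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text term, Iterable<Text> docIds, Context ctx)
                throws IOException, InterruptedException {
            StringBuilder postings = new StringBuilder();
            for (Text docId : docIds) {
                if (postings.length() > 0) postings.append(',');
                postings.append(docId.toString());
            }
            ctx.write(term, new Text(postings.toString()));  // term -> posting list
        }
    }

Wired into a Hadoop Job with a suitable input format (for example, sequence files of doc-ID/content pairs), these two small classes are all the programmer writes; partitioning, shuffling, and recovery from failed tasks are handled by the framework.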