EPL 660: Lab 6 Introduction to Nutch


University of Cyprus, Department of Computer Science — Andreas Kamilaris

Overview
• Complete Web search engine
  – Nutch = Crawler + Indexer/Searcher (Lucene) + GUI + Plugins (e.g. parsing) + MapReduce & Distributed FS (Hadoop)
• Java-based
• Open source

Reasons to run your own search engine
• Transparency: Nutch is open source, so anyone can see how its ranking algorithms work.
  – Google allows rankings to be based on payments.
  – Nutch can be used by academic and governmental organizations, where fairness of rankings may be very important.
• Understanding: see how a large-scale search engine works.
  – Google's source code is not available.
• Extensibility: Nutch can be customized and incorporated into your own application.

Nutch in Practice
• Nutch installations typically operate at one of three scales:
  – Local filesystem → reliable (no network errors, caching is unnecessary).
  – Intranet level.
  – Whole Web → whole-Web crawling is difficult.
• Building a complete Web search engine raises many crawling-oriented challenges:
  – Which pages do we start with?
  – How do we partition the work between a set of crawlers?
  – How often do we re-crawl?
  – How do we cope with broken links, unresponsive sites, and unintelligible or duplicate content?

Nutch vs. Lucene
• Nutch is built on top of Lucene.
• "Should I use Lucene or Nutch?"
  – Use Lucene if you don't need a web crawler.
    • e.g. you want to make a database searchable.
  – Nutch is a better fit for sites where you don't have direct access to the underlying data, or where the data comes from disparate sources.

Nutch Architecture
• Nutch → crawler + searcher.
• Crawler: fetches pages and creates an inverted index.
• Searcher: uses the inverted index to answer queries.
• Crawler and Searcher are highly decoupled, enabling independent scaling on separate hardware platforms.

Nutch Crawler
• It consists of four main components:
  – WebDB
  – Segments
  – Index
  – Crawl tool

Web Database (WebDB)
• A persistent data structure that mirrors the structure and properties of the Web graph being crawled.
• Used only by the crawler (not used during searching).
• The WebDB stores two types of entities:
  – Pages: pages on the Web.
  – Links: the set of links from one page to other pages.
• In the WebDB's Web graph, the nodes are pages and the edges are links.

Segments
• A segment is a collection of pages fetched and indexed by the crawler in a single run.
  – Limited lifespan (named by the date and time created).
• The fetchlist of a segment is the list of URLs for the crawler to fetch.

Index
• Nutch uses Lucene for indexing.
• The index is an inverted index of all of the pages the system has retrieved.
  – Each segment has its own index.
• A (global) inverted index is created by merging all individual segment indexes.

Crawl tool
• Crawling is a cyclical process:
  1. The crawler generates a set of fetchlists from the WebDB.
  2. A set of fetchers downloads the content from the Web.
  3. The crawler updates the WebDB with new links that were found.
  4. The crawler generates a new set of fetchlists (for links that haven't been fetched for a given period, including the new links found in the previous cycle).
  5. This cycle repeats.

Steps in a Crawl+Index cycle
1. Create a new WebDB (admin db -create).
2. Inject root URLs into the WebDB (inject).
3. Generate a fetchlist from the WebDB in a new segment (generate).
4. Fetch content from URLs in the fetchlist (fetch).
5. Update the WebDB with links from fetched pages (updatedb).
6. Repeat steps 3-5 until the required depth is reached.
7. Update segments with scores and links from the WebDB (updatesegs).
8. Index the fetched pages (index).
9. Eliminate duplicate content (and duplicate URLs) from the indexes (dedup).
10. Merge the indexes into a single index for searching (merge).
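This cycle can also be driven by hand with the individual tools that the crawl command wraps. Below is a minimal sketch, assuming the 0.7-era tool names these slides are based on (several commands were renamed in later releases); the seed file urls, the depth of 3, and the directory names db, segments and index are choices for the example:

  bin/nutch admin db -create            # 1. create a new WebDB
  bin/nutch inject db -urlfile urls     # 2. inject root URLs
  for i in 1 2 3                        # 6. repeat steps 3-5 to the required depth
  do
    bin/nutch generate db segments      # 3. generate a fetchlist in a new segment
    s=`ls -d segments/2* | tail -1`     #    the segment just created
    bin/nutch fetch $s                  # 4. fetch content from the fetchlist
    bin/nutch updatedb db $s            # 5. update the WebDB with new links
  done
  bin/nutch updatesegs db segments      # 7. update segments from the WebDB
  for s in segments/*
  do
    bin/nutch index $s                  # 8. index each fetched segment
  done
  bin/nutch dedup segments dedup.tmp    # 9. eliminate duplicates
  bin/nutch merge index segments/*      # 10. merge into a single index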
Nutch as a Crawler
[Architecture diagram: the Injector seeds the WebDB with the initial URLs; the Crawl tool drives the Generator, which reads the WebDB and generates fetchlists; the Fetcher gets webpages/files from the Web and writes them to Segments; the Parser extracts new links, which update the WebDB.]

Nutch as a complete Web Search Engine
[Architecture diagram: Segments, the WebDB and the LinkDB feed the Indexer (Lucene); the resulting Index is used by the Searcher (Lucene), which sits behind the GUI (Tomcat).]

Running a Crawl
• The site structure for the site we are going to crawl: a small intranet site of interlinked pages (A, B, C and a duplicate of C, with external links to Wikipedia).
• echo 'http://keaton/tinysite/A.html' > urls
  – The file urls contains the root URL from which the initial fetchlist is populated (page A).
• The Crawl tool uses a filter to decide which URLs go into the WebDB.
  – Here it restricts the crawl to the server on the intranet (keaton).
• bin/nutch crawl urls -dir crawl-tinysite -depth 3 >& crawl.log
  – The Crawl tool uses the root URLs in the urls file to start the crawl.
  – The results go to the directory crawl-tinysite.
  – The -depth flag tells the crawler how many generate/fetch/update cycles to carry out to get full page coverage.

Examine Results (File System)
• Directories and files created after running the Crawl tool:
  [Directory listing: the WebDB holds pages A, B, C and C-duplicate; the links to Wikipedia are not in the WebDB because the URL filter was used; alongside the WebDB are the Lucene index and the segments (pages).]
• The crawl created three segments in timestamped subdirectories.
• Each segment has its own index.

Examine Results (Pages & Links)
• (Screenshot of the WebDB page and link dumps; the readdb arguments changed into -stats in release 1.2.)

Examine Results (Segments)
• The Crawl tool created three segments in timestamped subdirectories (the segment-reading command changed into readseg in release 1.2).
• PARSED column:
  – Useful when running fetchers with parsing turned off, to be run later as a separate process.
• STARTED and FINISHED columns indicate the times when fetching started and finished.
  – Invaluable for bigger crawls, when tracking down why crawling is taking a long time.
• COUNT column:
  – Shows the number of fetched pages in the segment.
  – E.g. the last segment has two entries, corresponding to pages C and C-duplicate.
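For reference, a sketch of inspecting these results from the command line using the release 1.2 tool names mentioned above, assuming the crawl-tinysite directory from the earlier example and the crawldb/segments layout that release produces:

  bin/nutch readdb crawl-tinysite/crawldb -stats        # overall page and link statistics
  bin/nutch readseg -list -dir crawl-tinysite/segments  # per-segment fetch-time and page-count table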
Examine Results (Index & Search)
• Command-line searching through NutchBean:
  bin/nutch org.apache.nutch.searcher.NutchBean <keyword>
  where keyword is the search term.
• (Screenshot: search results.)

Examine Results (Index & Search)
• GUI-based searching with Luke.
• Luke is the Lucene Index Toolbox.
• It opens existing Lucene indexes and allows you to display and modify their contents.
• You can browse by document number, view documents, execute searches, analyze search results, retrieve ranked lists, etc.
• Download from: http://code.google.com/p/luke/

Nutch Distributed File System (NDFS)
• NDFS stores the crawl data and the indexes.
• Data is divided into blocks.
• Blocks can be copied and replicated.
• Namenode vs. Datanodes:
  – Datanodes hold and serve blocks.
  – The Namenode holds the metainfo:
    • Filename → block list
    • Block → datanode location
• Datanodes report to the namenode every few seconds.

Nutch & Hadoop
• Hadoop is used in Nutch to manage the data obtained from the crawling process.
• MapReduce is used for indexing, parsing, WebDB construction, even fetching.

Plugins
• Plugins provide extensions to extension points.
• Each extension point defines an interface that must be implemented by the extension.
• Some core extension points:
  – IndexingFilter: add meta-data to indexed fields.
  – Parser: parse a new type of document.
  – NutchAnalyzer: language-specific analyzers.
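As an illustration, here is a minimal sketch of an IndexingFilter extension, assuming the Nutch 1.x plugin API (exact method signatures vary slightly between releases); the package, class name and the "pageLength" field are invented for the example:

  package org.example.nutch; // hypothetical package

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.io.Text;
  import org.apache.nutch.crawl.CrawlDatum;
  import org.apache.nutch.crawl.Inlinks;
  import org.apache.nutch.indexer.IndexingException;
  import org.apache.nutch.indexer.IndexingFilter;
  import org.apache.nutch.indexer.NutchDocument;
  import org.apache.nutch.parse.Parse;

  /** Adds the length of the parsed text as an extra indexed field. */
  public class PageLengthIndexingFilter implements IndexingFilter {
    private Configuration conf;

    public NutchDocument filter(NutchDocument doc, Parse parse, Text url,
        CrawlDatum datum, Inlinks inlinks) throws IndexingException {
      // Attach an illustrative meta-data field to the document being indexed.
      doc.add("pageLength", String.valueOf(parse.getText().length()));
      return doc; // returning null would drop the page from the index
    }

    public void setConf(Configuration conf) { this.conf = conf; }
    public Configuration getConf() { return conf; }
  }

To be picked up at indexing time, such a filter also needs a plugin.xml descriptor declaring the extension point and an entry in the plugin.includes property of the Nutch configuration.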
Get Started with Nutch
1. Download the latest Apache Nutch release (release 1.2) from: http://www.apache.org/dyn/closer.cgi/nutch/
2. Set NUTCH_JAVA_HOME to the root of your JVM installation. (You also need to set JAVA_HOME for this to work.)
3. Open the conf/nutch-default.xml file, search for http.agent.name and give it the value "MYNAME Spider".
4. Create a urls file containing a list of root URLs.
5. You can filter the crawling by editing the file conf/crawl-urlfilter.txt, replacing MY.DOMAIN.NAME with the name of the domain you wish to crawl. (If you don't do this, the crawl will not work!)

Installing in Tomcat
1. You need to put the Nutch war file into your servlet container.
2. Assuming you've unpacked Tomcat as ~/local/tomcat, the Nutch war file may be installed with the commands:
   mkdir ~/local/tomcat/webapps/nutch
   cp nutch*.war ~/local/tomcat/webapps/nutch/
   jar xvf ~/local/tomcat/webapps/nutch/nutch.war
   rm nutch-1.1.war
3. The webapp finds its indexes in ./crawl, relative to where you start Tomcat. Start Tomcat using a command like:
   ~/local/tomcat/bin/catalina.sh start
4. Then visit: http://localhost:8080/nutch/

Crawl Command vs. Whole-Web Crawling
• The crawl command is more appropriate when you intend to crawl up to around one million pages on a handful of Web servers.
• Whole-Web crawling is designed to handle very large crawls which may take weeks to complete, running on multiple machines.
  – More control over the crawl process.
  – Incremental crawling.

References
• Nutch Web site: http://nutch.apache.org/
• Nutch Docs: http://lucene.apache.org/nutch/
• Nutch Wiki: http://wiki.apache.org/nutch/ (support, mailing lists, tutorials, presentations)
• Prasad Pingali, CLIA consortium, Nutch Workshop, 2007.
• Tom White, "Introduction to Nutch", java.net: http://today.java.net/pub/a/today/2006/01/10/introduction-to-nutch-1.html