IBM POWER9 and Cognitive Computing


M. Kumar, W. P. Horn, J. Kepner, J. E. Moreira, P. Pattnaik

Abstract: Cognitive applications are complex and are composed of multiple components exhibiting diverse workload behavior. Efficient execution of these applications requires systems that can effectively handle this diversity. In this paper, we show that IBM POWER9™ shared-memory systems have the compute capacity and memory throughput to efficiently handle the broad spectrum of computing requirements of cognitive workloads. We first review the GraphBLAS interface defined for supporting cognitive applications, particularly whole-graph analytics. We show that this application programming interface effectively separates the concerns of the analytics application developer from those of the system developer, and simultaneously enables good performance by permitting system developers to make platform-specific optimizations. A linear algebra formulation and execution of the betweenness centrality kernel in the High-Performance Computing Scalable Graph Analysis Benchmark, for graphs with 256 million vertices and 2 billion edges, delivers a sixfold reduction in execution time over a reference implementation. Following that, we present the results of benchmarking the forward-propagation step of deep neural networks (DNNs) written in GraphBLAS and executed on POWER9. We present the rationale and evidence for the weight matrices of large DNNs being sparse, and show that for sparse weight matrices, GraphBLAS/POWER® has a two-orders-of-magnitude performance advantage over dense implementations. Applications requiring analysis of graphs larger than several tens of billions of vertices require distributed computing environments such as Apache Spark to provide resilience and parallelism. We show that when linear algebra techniques are implemented in an Apache Spark environment, we are able to leverage the parallelism available in POWER9 servers.
Digital Object Identifier: 10.1147/JRD.2018.2846958

© Copyright 2018 by International Business Machines Corporation. Copying in printed form for private use is permitted without payment of royalty provided that (1) each reproduction is done without alteration and (2) the Journal reference and IBM copyright notice are included on the first page. The title and abstract, but no other portions, of this paper may be copied by any means or distributed royalty free without further permission by computer-based and other information-service systems. Permission to republish any other portion of this paper must be obtained from the Editor. 0018-8646/18 © 2018 IBM

IBM J. RES. & DEV. VOL. 62 NO. 4/5 PAPER 10 JULY/SEPTEMBER 2018

Introduction

Cognitive systems create actionable knowledge from data. The recent growth in cognitive computing is due to the availability of a large volume of relevant data, large amounts of computational power, and the high value of the actionable knowledge to many large businesses [1].

The creation of actionable knowledge by a cognitive system encompasses four processing stages (Figure 1). Stage 1 is primarily intra-record analysis of extremely diverse data sources such as call records, click streams, images, or videos. The output of this stage is data tagged with metadata, the tags enabling fusion or linking of data from the diverse sources into a large graph in Stage 2. Various applications in healthcare, social network analytics, and financial fraud prevention use this large graph representation for modeling and analysis in Stage 3 [2]. Stage 2 also includes the data preparation (e.g., selection, curation, sampling, interpolation) prior to modeling in Stage 3. In this paper, we focus on whole-graph analytics. We do not dwell on queries that retrieve a fraction of the data, which are supported by various NoSQL databases such as Accumulo, Apache Giraph, Cassandra, CouchDB, MongoDB, and Neo4j.

Figure 1: Four pillars of cognitive computing: intra-source analysis, data linking, actionable knowledge (model) extraction, and model deployment.

The modeling phase of cognitive computing, Stage 3 in Figure 1, encompasses two approaches. The first is driven by statistical models, primarily based on Bayesian methods; various regression, classification, and clustering techniques, kernel methods in general, and support vector machines reside in this category [3, 4]. The second approach comprises deep learning, particularly deep neural networks (DNNs) of various flavors [5–8]. These networks currently have hundreds of stages with thousands of neurons in each stage. The success of DNNs is driving consideration of even larger DNNs, and recent research suggests that the weight matrices of large DNNs can be made sparse without sacrificing their prediction accuracy [9–12]. This enables larger DNN models to be evaluated on a given hardware platform capable of taking advantage of sparsity.

In the next section of this paper ("Linear algebra formulation of graph analytics"), we describe the processing and storage requirements of Stage 2 of cognitive applications for creating and storing these large graphs, and we highlight features of IBM POWER exploited in achieving high performance. We summarize GraphBLAS [13, 14], an interface defined for the sparse-matrix linear algebra approach to graph analytics, and describe the implementation of the high-performance Graph Processing Interface (GPI) library, which currently implements an early variant of the GraphBLAS interface [15, 16] and is optimized for POWER processors [17]. Then, we report the performance on POWER9 of representative kernels in graph analytics, performed in Stages 2 and 3 of Figure 1, and illustrate the advantage of the linear algebra approach over conventional methods. In the third section of this paper ("DNN computations on POWER9"), we present the performance of POWER9 on the forward-propagation kernel of artificial neural networks. We show that the sparse-matrix algorithms outperform their dense-matrix counterparts, even with limited sparsity in the weight matrices.

In the linear algebra formulation of graph analytics, a graph is represented as an adjacency matrix, which is usually very sparse. A significant amount of the time in graph analytics is spent in multiplying this adjacency matrix with a vector representing a set of nodes or node properties. As graphs become very large, parallel and distributed computing solutions are necessary to address both storage-capacity and computation-time requirements. In the fourth section of this paper ("Computational performance in the Spark environment"), we discuss the implementation of a GraphBLAS model of computation on the Apache Spark distributed computing framework. We analyze the scalability of an example graph algorithm and show that we can efficiently use the multiple parallel resources in a POWER9 server.

Our key message in this paper is that the computational requirements of the various tasks of cognitive systems, in other words the workload behavior of the tasks outlined in Figure 1, are diverse. This diversity is discussed again in the conclusion of this paper. Specialized systems for narrow tasks such as DNN training exist [18, 19] and perform very well on them. However, POWER systems effectively cover all stages of cognitive computing illustrated in Figure 1 because of their large shared-memory multiprocessing capability and high memory bandwidth. This breadth of coverage includes Stage 4 in Figure 1, where the actionable business knowledge is deployed in business systems.

We refer the reader to companion articles in this issue for an exploration of the features and capabilities of the POWER9 processor. In particular, Le et al. [17] describe the high-performance processing cores of POWER9, Arimilli et al. [20] describe the cache hierarchy that supports the computing capacity of those cores, and Starke et al. [21] describe the memory architecture and connectivity of POWER9 systems.

Figure 2: GraphBLAS primitives to support linear algebra formulation of graph analytics. Uppercase letters are matrices; lowercase b, c, and m are vectors. M and m are masks. Red and blue lettering indicates optional parameters and modifiers.

Linear algebra formulation of graph analytics

Attaining good performance on analysis of the linked data in Stage 2 of cognitive applications is difficult for the analytics application developer because of the need to manage the performance consequences of irregular memory accesses over a large address space and to exploit complex hardware features, as explained first in this section. The GraphBLAS interface, described next, unburdens the application developer from the chores of managing parallelism and platform-specific optimizations. The platform-specific optimizations are factored into a library of select graph operations.

Finally, the graph data is highly non-uniform in terms of the in/out-degree distribution of vertices and the presence of community structures (a very loose definition of cliques). This complicates the exploitation of parallelism on modern multi-core processor-based parallel systems [17, 22], because load balancing, minimization of synchronization overhead, and minimization of inter-task communication become more difficult to manage.

In addition to these idiosyncrasies of graph analytics problems, modern multi-core processors also have their own complexities that need to be factored into application programs to minimize execution time. Programmers must restructure their applications