Modeling Guide for SAP Data Hub

PUBLIC
SAP Data Hub 2.6
Document Version: 2.6.1 – 2019-08-23
© 2019 SAP SE or an SAP affiliate company. All rights reserved.

Content

1 Modeling Guide for SAP Data Hub
2 Introduction to the SAP Data Hub Modeler
2.1 Log in to SAP Data Hub Modeler
2.2 Navigating the SAP Data Hub Modeler
3 Creating Graphs
3.1 Execute Graphs
    Parameterize the Process Execution
3.2 Schedule Graph Executions
    Cron Expression Format
3.3 Maintain Resource Requirements for Graphs
3.4 Groups, Tags, and Dockerfiles
3.5 Execution Model
4 Monitoring the SAP Data Hub Modeler
4.1 Monitor the Graph Execution Status
    Graph Execution
    Graph States
    Process States
4.2 Trace Messages
    Trace Severity Levels
4.3 Using SAP Data Hub Monitoring
    Log in to SAP Data Hub Monitoring
    Monitoring with the SAP Data Hub Monitoring Application
    Schedule Graph Executions with SAP Data Hub Monitoring
4.4 Downloading Diagnostic Information for Graphs
    Diagnostic Information Archive Structure and Contents
5 Tutorials
5.1 Create a Graph to Execute a TensorFlow Application
    Example Graphs for TensorFlow
5.2 Creating a Graph to Perform Text Analysis
    Example Graph for Simple Text Analysis
    Example Graph for Text Analysis of Files
6 Create Operators
6.1 Operators
    Operator Details
6.2 Port Types
    Compatible Port Types
7 Working with the Data Workflow Operators
7.1 Execute an SAP BW Process Chain
7.2 Execute an SAP HANA Flowgraph Operator
7.3 Transform Data
    Configure the Data Source Node
    Configure the Data Target Node
    Configure the Projection Node
    Configure the Aggregation Node
    Configure the Join Node
    Configure the Union Node
    Configure the Case Node
7.4 Execute an SAP Data Hub Pipeline
7.5 Execute an SAP Data Services Job
7.6 Transfer Data
    Transfer Data from SAP BW to SAP Vora or Cloud Storage
    Transfer Data from SAP HANA to SAP Vora or Cloud Storage
7.7 Control Flow of Execution
7.8 Control Start and Shut Down of Data Workflows
7.9 Send E-mail Notifications
8 Integrating SAP Cloud Applications with SAP Data Hub
9 Service Specific Information
9.1 Azure Data Lake (ADL)
9.2 Local File System (File)
9.3 Google Cloud Storage (GCS)
9.4 Hadoop File System (HDFS)
9.5 Amazon S3
9.6 Microsoft Azure Blob Storage (WASB)
9.7 WebHDFS
10 Subengines
10.1 Working with the C++ Subengine to Create Operators
    Getting Started
    Creating an Operator
    Logging and Error Handling
    Port Data
    Setting Values for Configuration Properties
    Process Handlers
    API Reference
10.2 Working with Python2.7 and Python3.6 Subengines to Create Operators
    Normal Usage
    Advanced Usage
10.3 Working with the Node.js Subengine to Create Operators
    Node.js Operators and OS Processes
    Use Cases for the Node.js Subengine
    The Node.js Subengine SDK
    Data Types
    Additional Type Specific Constraints
    Develop a Node.js Operator
    Project Structure
    Project Files and Resources
    Logging
10.4 Working with Flowagent Subengine to Connect to Databases
11 Create Dockerfiles
11.1 Docker Inheritance
12 Create Types
13 Creating Scenes for Graphs (Beta)
13.1 Using the Scene Editor (Beta)
13.2 Developing Controls
    Understanding Scripts

1 Modeling Guide for SAP Data Hub

The Modeling Guide contains information on using the SAP Data Hub Modeler.

The SAP Data Hub Modeler helps create data processing pipelines (graphs), and provides a runtime and design-time environment for data-driven scenarios. The tool reuses existing coding and libraries to orchestrate data processing in distributed landscapes.

This guide is organized as follows:

Introduction to the SAP Data Hub Modeler: Provides an overview of the tool and describes certain business use cases.

Creating Graphs: Explains the steps for creating, configuring, and executing a graph in the modeler.

Monitoring the SAP Data Hub Modeler: Describes the different monitoring options available within the tool to monitor the working of the SAP Data Hub Modeler.

Tutorials: Includes tutorials and example graphs to help users understand and work with the SAP Data Hub Modeler.

Create Operators: Describes operators and explains the steps for creating, configuring, and using operators in a graph (a minimal operator sketch follows this overview).

Working with the Data Workflow Operators: SAP Data Hub Modeler categorizes certain operators as Data Workflow operators. This section describes the data workflow operators and how to model graphs with them.

Subengines: Describes working with the C++, Python2.7, and Python3.6 subengines.

Create Dockerfiles: Explains the steps for creating Dockerfiles.
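
As an illustration of the kind of content the Create Operators and Subengines chapters cover, the following is a minimal sketch of a custom Python operator script. It assumes the Python3.6 subengine interface presented later in this guide, in which the runtime injects an api object into the script; the port names "input" and "output" and the uppercase transformation are purely illustrative and must match the ports actually defined for the operator.

```python
# Minimal sketch of a Python operator body for the SAP Data Hub Modeler
# (Python3.6 subengine). The `api` object is assumed to be injected by the
# subengine at runtime; port names and the transformation are illustrative.

def on_input(data):
    # Transform the incoming message and forward it to the "output" port.
    result = str(data).upper()
    api.send("output", result)

# Register the callback so the subengine invokes on_input for every message
# that arrives at the "input" port.
api.set_port_callback("input", on_input)
```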