Data Mining Standards Initiatives

Lacking standards for statistical and data mining models, applications cannot leverage the benefits of data mining.

BY ROBERT L. GROSSMAN, MARK F. HORNICK, AND GREGOR MEYER

The data mining and statistical models generated by commercial data mining applications are often used as components in other systems, including those in customer relationship management, enterprise resource planning, risk management, and intrusion detection. In the research community, data mining is used in systems processing scientific and engineering data. Employing common data mining standards greatly simplifies the integration, updating, and maintenance of the applications and systems containing the models. Established and emerging standards address various aspects of data mining, including:

Models. For representing data mining and statistical models.

Attributes. For representing the cleaning, transforming, and aggregating of attributes used as input in the models.

Interfaces and APIs. For linking to other languages and systems.

Settings. For representing the internal parameters required for building and using the models.

Process. For producing, deploying, and using the models.

Remote and distributed data. For analyzing and mining remote and distributed data.

The parameters of a parameterized data mining model, such as a neural network, can be represented using the Extensible Markup Language (XML); for example, the fragment

    <Neuron id="10">
      <Con from="0" weight="-2.08148"/>
    </Neuron>

indicates that a neural network node with id 10 has an input from a node with id 0 and a weight of -2.08148. The standards for defining parameterized models using XML are relatively mature. They assume the inputs to the models are given explicitly, as in the example. In practice, however, inputs are generally not explicit; the data must first be cleaned and transformed. But standards for cleaning and transforming data are only beginning to emerge. Standards related to the broader process of using data mining in operational processes and systems are also relatively immature; for example, what is the business implication of a particular credit risk score produced by a credit card fraud model?

XML Standards

The Predictive Model Markup Language (PMML) is an XML standard being developed by the Data Mining Group (www.dmg.org), a vendor-led consortium established in 1998 to develop data mining standards [7]. PMML represents and describes data mining and statistical models, as well as some of the operations required for cleaning and transforming data prior to modeling. PMML aims to provide enough infrastructure for an application to be able to produce a model (the PMML producer) and for another application to consume it (the PMML consumer) simply by reading the PMML XML data file.
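To make the producer/consumer contract concrete, the following is a minimal sketch of what such a PMML file might look like. It is abbreviated for illustration only: the model and field names (CreditRisk, income, risk) are invented, and several elements a complete PMML v.2.0 document requires, such as the header, the neural inputs and outputs, and the activation function, are elided.

    <PMML version="2.0">
      <DataDictionary numberOfFields="2">
        <DataField name="income" optype="continuous"/>
        <DataField name="risk" optype="categorical"/>
      </DataDictionary>
      <NeuralNetwork modelName="CreditRisk" functionName="classification">
        <MiningSchema>
          <MiningField name="income" usageType="active"/>
          <MiningField name="risk" usageType="predicted"/>
        </MiningSchema>
        <!-- NeuralInputs, preceding layers, and NeuralOutputs elided -->
        <NeuralLayer>
          <Neuron id="10">
            <Con from="0" weight="-2.08148"/>
          </Neuron>
        </NeuralLayer>
      </NeuralNetwork>
    </PMML>

Everything a consumer needs in order to score new records travels in the file itself; the consumer requires no access to the producer's internals or to the training data.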
PMML consists of the following components:

Data dictionary. Defines the input attributes to models and specifies each one's type and value range.

Mining schema. Precisely one in each model, listing the schema's attributes and their role in the model; these attributes are a subset of the attributes in the data dictionary. The schema contains information specific to a certain model, while the data dictionary contains data definitions that do not vary by model. It also specifies an attribute's usage type, which can be active (an input of the model), predicted (an output of the model), or supplementary (holding descriptive information and ignored by the model).

Transformation dictionary. Can contain any of the following transformations: normalization (mapping continuous or discrete values to numbers); discretization (mapping continuous values to discrete values); value mapping (mapping discrete values to discrete values); and aggregation (summarizing or collecting groups of values, such as by computing averages).

Model statistics. Univariate statistics about the attributes in the model.

Models. Model parameters specified by tags. PMML v.2.0 includes regression models, cluster models, trees, neural networks, Bayesian models, association rules, and sequence models.

The first major release of PMML (v.1.0 in 1999) focused on defining XML representations for some of the most common statistical and data mining models. The assumption built into PMML v.1.0 was that the inputs to the models (called DataFields) were already defined. In practice, however, defining such inputs can be highly complex. The next major release of PMML (v.2.0 in 2001) introduced a mechanism, the transformation dictionary, to more flexibly define model inputs. In PMML v.2.0, inputs to PMML models can be DataFields defined in a data dictionary or DerivedFields defined in the transformation dictionary. The consensus among Data Mining Group members is that the transformation dictionary is powerful enough for capturing the process of preparing data for statistical and data mining models.
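For instance, a transformation dictionary along the following lines could normalize a continuous income attribute onto the range [0,1] and discretize age into named bins; the resulting DerivedFields can then be referenced as model inputs. This is an illustrative sketch: the field names are invented and the element details are simplified.

    <TransformationDictionary>
      <DerivedField name="income_norm">
        <NormContinuous field="income">
          <LinearNorm orig="0" norm="0"/>
          <LinearNorm orig="100000" norm="1"/>
        </NormContinuous>
      </DerivedField>
      <DerivedField name="age_group">
        <Discretize field="age">
          <DiscretizeBin binValue="young">
            <Interval closure="closedOpen" leftMargin="0" rightMargin="30"/>
          </DiscretizeBin>
          <DiscretizeBin binValue="adult">
            <Interval closure="closedOpen" leftMargin="30" rightMargin="65"/>
          </DiscretizeBin>
        </Discretize>
      </DerivedField>
    </TransformationDictionary>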
Standard APIs

To facilitate the integration of data mining with application software, several data mining APIs have been developed for the following types of application:

SQL. The SQL Multimedia and Application Packages standard (SQL/MM) includes a specification called SQL/MM Part 6: Data Mining, which specifies a SQL interface to data mining applications and services. It provides an API for data mining applications to access data from SQL/MM-compliant relational databases.

Java. The Java Specification Request-73 (JSR-73) defines a pure Java API supporting the building of data mining models and the scoring of data using models, as well as the creation, storage, and maintenance of, and access to, data and metadata supporting data mining results [5].

Microsoft. The Microsoft-supported OLE DB for DM defines an API for data mining for Microsoft-based applications [6]. Released in 2000, OLE DB for DM was especially noteworthy for introducing several new capabilities, variants of which are now part of other standards, including PMML v.2.0; included are taxonomies for data and a mechanism for transforming data. Earlier this year, however, OLE DB for DM was subsumed by Microsoft's Analysis Services for SQL Server 2000 [9]; Analysis Services provides APIs to Microsoft's SQL Server 2000 for data transformations, data mining, and online analytical processing (OLAP).

Other Standards Efforts

Standards have also been developed for defining the software objects used in data mining, the business processes used in data mining, and Web-based services for mining remote and distributed data.

Data mining metadata. In 2000, the Object Management Group defined the Common Warehouse Model for Data Mining (CWM DM) [1] for metadata specifying model-building settings, model representations, and results from model operations, along with other data mining-related objects. Models are defined through the Unified Modeling Language [10] using tools to generate XML Document Type Definitions, which are used to formally specify XML documents.

Process standards. The CRoss-Industry Standard Process for Data Mining (CRISP-DM) was developed in 1997 by two vendors (ISL and NCR) along with two industrial partners. Designed to capture the data mining process, it begins with business problems, then captures and understands data, applies data mining techniques, interprets results, and deploys the knowledge gained in operations [2].

Web standards. The semantic Web includes the open standards being developed by the World Wide Web Consortium (W3C) for defining and working with knowledge through XML, the Resource Description Framework (RDF), and related standards [8]. RDF can be thought of informally as a way to code triples consisting of subjects, verbs, and objects. The semantic Web can in principle be used to store knowledge extracted from data through data mining systems, though this capability is, today, more a goal than an achievement.

The W3C is also standardizing Web services based on XML and a protocol for working with remote objects called the Simple Object Access Protocol (SOAP). The services describe themselves to […]

One reason for this proliferation of standards is that data mining is used in so many different ways and in combination with so many different systems and services, many requiring their own separate, often-incompatible standards. Although some vendor-led efforts have sought to homogenize terminology and concepts among standards, more work is indeed required.

Relatively narrow XML standards, such as PMML, serve as common ground for several emerging standards. For example, SQL/MM Part 6: Data Mining, JSR-73, CWM, and Microsoft's Analysis Services all use PMML in their specifications, providing a base level of compatibility among them all.

Meanwhile, two major challenges top the data mining standards agenda: agreeing on a common standard for cleaning, transforming, and preparing data for data mining (PMML v.2.0 represents a first step in this direction); and agreeing on a common set of Web services for working with remote and distributed data (an effort only just beginning).

References
1. Common Warehouse Metamodel: Data Mining. Object Management Group; see cgi.omg.org/cgi-bin/doclist.pl.
2. Cross Industry Standard Process for Data Mining (CRISP-DM); see www.crisp-dm.org.
3. Data Space Transfer Protocol. National Center for Data Mining; see www.ncdm.uic.edu.
4. Grossman, R. and Mazzucco, M. DataSpace: A data Web for the exploratory analysis and mining of data. IEEE Comput. Sci. Eng. (2002).
5. Java Specification Request 73; see jcp.org/jsr/detail/073.jsp.
6. OLE DB for Data Mining Specification 1.0. Microsoft; see www.microsoft.com/data/oledb/default.htm.
7. Predictive Model Markup Language (PMML). Data Mining Group; see www.dmg.org.
8. Semantic Web. World Wide Web Consortium; see www.w3c.org/2001/sw.