Magic Sets for Data Integration∗


Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)

Wolfgang Faber and Gianluigi Greco and Nicola Leone
Department of Mathematics, University of Calabria, Italy
{faber,ggreco,leone}@mat.unical.it

∗ Supported by M.I.U.R. within the projects "Potenziamento e Applicazioni della Programmazione Logica Disgiuntiva", "Sistemi basati sulla logica per la rappresentazione di conoscenza: estensioni e tecniche di ottimizzazione", and "tocai.it: Tecnologie Orientate alla Conoscenza per Aggregazioni di Imprese in Internet."
Copyright © 2008, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

We present a generalization of the Magic Sets technique to Datalog¬ programs with (possibly unstratified) negation under the stable model semantics, originally defined in (Faber, Greco, & Leone 2005; 2007). The technique optimizes Datalog¬ programs by means of a rewriting algorithm that preserves query equivalence, under the proviso that the original program is consistent. The approach is motivated by recently proposed methods for query answering in data integration and inconsistent databases, which use cautious reasoning over consistent Datalog¬ programs under the stable model semantics.

In order to prove the correctness of our Magic Sets transformation, we have introduced a novel notion of modularity for Datalog¬ under the stable model semantics, which is more suitable for query answering than previous module definitions, and which is also relevant per se. A module under this definition guarantees independent evaluation of queries if the full program is consistent. Otherwise, it guarantees soundness under cautious reasoning and completeness under brave reasoning.

Introduction

Datalog¬ programs are function-free logic programs where negation may occur in the bodies of rules. Datalog¬ with stable model semantics (Gelfond & Lifschitz 1988)¹ is a very expressive query language in a precise mathematical sense: under brave (cautious) reasoning², Datalog¬ allows one to express every query that is decidable in the complexity class NP (co-NP) (Schlipf 1995).

In many recent proposals for data integration and reasoning on inconsistent databases, query answering turned out to be co-NP-complete and, in fact, it was reduced to cautious reasoning on suitable Datalog¬ programs (Arenas, Bertossi, & Chomicki 2000; Greco, Greco, & Zumpano 2001; Lembo 2004; Bravo & Bertossi 2003). An important feature of these programs is that they are consistent, viz., a stable model is guaranteed to exist. However, given the co-NP hardness of their evaluation, the design of optimization techniques is of utmost importance for applications in real scenarios, where the size of the input database may be huge.

In this paper, we focus on the optimization of Datalog¬ programs by discussing an extension of the well-known Magic Set method (Bancilhon et al. 1986; Beeri & Ramakrishnan 1991). This method exploits the fact that, while answering a user query, often only a certain part of the stable models needs to be considered, so there is no need to compute these models in their entirety. In fact, its aim is to focus the instantiation of the program on those ground rules that are really needed to answer the query, by propagating binding information from the query goal into the program rules. Differing from the original method, Datalog¬ also requires body-to-head propagation in the presence of unstratified negation. The key idea is then to identify the rules for which this is necessary, which we term dangerous rules.

The formal properties of the proposed approach have been analyzed in depth. First, we show that the program obtained is query equivalent under brave and cautious reasoning to the original program if the latter is consistent, making it a perfect fit for data integration applications, where consistency is guaranteed. If the original program is not guaranteed to be consistent, we can still show that, on the transformed program, brave reasoning is complete and cautious reasoning is sound with respect to the original program.

In order to establish the above results, we introduce a suitable notion of modularity for query answering over Datalog¬ programs. Previous notions like the splitting sets of (Lifschitz & Turner 1994) and the modules of (Eiter, Gottlob, & Mannila 1997) have been defined for stable model generation, while the new notion is tailored to query answering.

Finally, we analyze the complexity of determining whether a predicate is dangerous in a given program, which is a central notion of our Magic Set method. It turns out that this task is NL-complete and thus tractable.

¹ Unless explicitly specified, Datalog¬ will always denote Datalog with negation under stable model semantics in this paper.
² Note that brave and cautious consequences are also called possible and certain answers, respectively.
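To make the classical starting point concrete, the following sketch (our own illustration in DLV-style syntax, using the textbook parent/ancestor example with hypothetical facts) shows the standard Magic Sets rewriting for positive Datalog; it does not yet cover the Datalog¬ extension developed in the paper. For the query anc(john, Y), the first argument is bound, which yields the binding pattern bf:

    % Sample EDB (hypothetical facts) and original program; query: anc(john, Y)?
    par(john, mary).
    par(mary, alice).
    par(bob, carol).
    anc(X, Y) :- par(X, Y).
    anc(X, Y) :- par(X, Z), anc(Z, Y).

    % Classical Magic Sets rewriting for the binding pattern "bf" (first argument bound).
    magic_anc_bf(john).                                % seed: the constant bound in the query
    magic_anc_bf(Z) :- magic_anc_bf(X), par(X, Z).     % push the binding into the recursive call
    anc_bf(X, Y) :- magic_anc_bf(X), par(X, Y).        % original rules, restricted to relevant bindings
    anc_bf(X, Y) :- magic_anc_bf(X), par(X, Z), anc_bf(Z, Y).

The rewritten query anc_bf(john, Y) returns the same answers as anc(john, Y), but only ground rules reachable from john are instantiated; the bob branch is never touched. In the Datalog¬ setting of the paper, this kind of query-to-body propagation is no longer sufficient: for dangerous rules, information must additionally be propagated from bodies to heads, because unstratified negation elsewhere in the program can eliminate stable models.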
Datalog¬ Programs

An atom p(t1, ..., tk) is composed of a predicate symbol p of arity k and terms t1, ..., tk, each of which is either a constant or a variable. A literal is either an atom a or its negation not a. A (Datalog¬) rule r is of the form

    h :- b1, ..., bm, not bm+1, ..., not bn.        (1)

where h, b1, ..., bn are atoms and 0 ≤ m ≤ n. The atom h is called the head of the rule; B+(r) = {b1, ..., bm} and B−(r) = {bm+1, ..., bn} denote the positive and the negative body of r, respectively. A Datalog¬ program is a set of Datalog¬ rules. If all defining rules of a predicate p are facts (that is, n = m = 0), then p is an EDB predicate; otherwise p is an IDB predicate. A set of facts for EDB predicates of a program P is called an EDB (for P). A query Q is just an atom.

Let the (Herbrand) universe and base for a Datalog¬ program P be denoted by U_P and B_P, respectively. The ground instantiation of P w.r.t. U_P is denoted by Ground(P). An interpretation is a subset of B_P. A ground positive literal A (resp., a negative literal not A) is true w.r.t. an interpretation I if A ∈ I (resp., A ∉ I); otherwise it is false. An interpretation I satisfies a ground rule r ∈ Ground(P) if the head of r is true w.r.t. I whenever the body of r is true w.r.t. I. An interpretation I is a model of a Datalog¬ program P if I satisfies all rules in Ground(P).

Each not-free program P has a least (under subset inclusion) model, which is denoted by LM(P) and is the unique stable model of P. Given a Datalog¬ program P and an interpretation I, the Gelfond-Lifschitz transform P^I is obtained from Ground(P) by deleting all rules containing not b where b ∈ I, and deleting all not literals in the remaining rules. The set of stable models of a Datalog¬ program P, denoted by SM(P), is the set of interpretations I such that I = LM(P^I).

A program P is consistent if SM(P) ≠ ∅, otherwise it is inconsistent. A program P is data consistent if P_F = P ∪ F is consistent for each EDB F. Given a ground atom a and a Datalog¬ program P, a is a cautious (or certain) consequence of P, denoted by P |=c a, if ∀M ∈ SM(P) : a ∈ M; a is a brave (or possible) consequence of P, denoted by P |=b a, if ∃M ∈ SM(P) : a ∈ M. Given a query Q, Ans_c(Q, P) denotes the set of substitutions ϑ such that P |=c Qϑ; Ans_b(Q, P) denotes the set of substitutions ϑ such that P |=b Qϑ.
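As a small illustration of these notions (our own example, not taken from the paper, written in DLV-style syntax with the built-in inequality !=), consider a relation emp whose first argument is a key, together with an EDB that violates this key:

    % Inconsistent EDB: two facts disagree on the key (the first argument) of emp.
    emp(john, 30).
    emp(john, 50).

    % Unstratified negation chooses, for every pair of conflicting tuples, which one to delete.
    del(N, S1) :- emp(N, S1), emp(N, S2), S1 != S2, not del(N, S2).

    % The repaired relation keeps the tuples that have not been deleted.
    empR(N, S) :- emp(N, S), not del(N, S).

This program is consistent: it has exactly two stable models, one containing del(john, 30) and empR(john, 50), the other containing del(john, 50) and empR(john, 30), corresponding to the two repairs of the database. Consequently, for the query empR(john, S) both substitutions are brave (possible) answers, while neither is a cautious (certain) answer, which is the kind of cautious reasoning over consistent Datalog¬ programs used for query answering over inconsistent databases in the Introduction.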
The dependency graph DG_P of a program P has the predicates of P as its nodes and an arc (a, h) for each rule r ∈ P with head predicate h and each predicate a occurring in the body of r; an arc is marked if a appears in B−(r). A cycle of DG_P is a sequence of nodes C = n1, ..., nk, such that each ni (1 < i < k) occurs exactly once in C, n1 = nk, and each (ni, ni+1) (1 ≤ i < k) is an arc in DG_P. An odd cycle in DG_P is a cycle C = n1, ..., nk such that an odd number of the arcs (ni, ni+1) (1 ≤ i < k) is marked. In analogy, one can also define the atom dependency graph DG_P^A of a ground program P, by considering atoms rather than predicates. We now use this notion to define dangerous predicates and rules. The intuition is that dangerous predicates may inhibit a stable model.

Definition 1  Let P be a program (resp., ground program), and d be a predicate (resp., atom) of P. Then, we say that d is dangerous if either
1. d occurs in an odd cycle of DG_P (resp., DG_P^A), or
2. d occurs in the body of a rule with a dangerous head predicate (resp., atom).
A rule r is dangerous if it contains a dangerous predicate (resp., atom) in the head. □

Based on this definition, we define a notion of independence for sets of atoms. These sets must be closed under rules in the head-to-body direction and, for dangerous rules, in the body-to-head direction. The defining rules of these sets then form modules.

Definition 2  An independent atom set of a ground program P is a set S ⊆ B_P such that for each atom a ∈ S the following holds:
1. if a is the head of a rule r ∈ P, then all atoms of r are in S, and
2. if a appears in the body of a dangerous rule r ∈ P, then all atoms of r are in S.
A subset T of a program P is a module if T = {r | the head of r is in S} for some independent atom set S. □

These modules can be used to partially evaluate programs, as the following results show. Let P and P′ be Datalog¬ programs and Q be a query.
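As a small illustration of Definitions 1 and 2 (our own example, not taken from the paper, in DLV-style syntax), consider the following propositional program; the comments describe its dependency graph, with arcs drawn from body predicates to head predicates:

    p :- not q.       % arc q -> p, marked
    q :- not p.       % arc p -> q, marked
    t :- p, not t.    % arcs p -> t (unmarked) and t -> t (marked): an odd cycle on t
    s :- p.           % arc p -> s, unmarked

The cycle through p and q contains two marked arcs and is therefore even, whereas the self-loop on t is an odd cycle, so t is dangerous by condition 1. By condition 2, p is dangerous because it occurs in the body of the rule with head t, and q is dangerous because it occurs in the body of the rule with head p; the first three rules are thus dangerous, while s :- p is not. The set S = {p, q, t} is an independent atom set: every rule whose head is in S has all of its atoms in S, and the only non-dangerous rule, s :- p, imposes no body-to-head closure. The corresponding module consists of the first three rules and can be evaluated on its own for queries about p, q, or t. In contrast, {p, q, s} is not independent, since p occurs in the body of the dangerous rule for t, so any module for a query about s must include that rule. This matters for correctness: without the rule for t, the remaining three rules admit the stable models {p, s} and {q}, whereas the full program has the single stable model {q}, because the odd cycle on t eliminates every stable model in which p is true.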