Annex 1 Technology Architecture - 1 Source Layer


Annex 1 Technology Architecture

The Technology Architecture is the combined set of software, hardware and networks able to develop and support IT services. It is a high-level map or plan of the information assets in an organization, including the physical design of the building that holds the hardware. This annex is intended as an overview of software packages existing on the market or developed on request in NSIs, in order to describe the solutions that would meet NSI needs, implement the S-DWH concept and provide the necessary functionality for each S-DWH level.

1 Source layer

The Source layer is the level in which we locate all the activities related to storing and managing internal or external data sources. Internal data come from direct data capture carried out by CAWI, CAPI or CATI, while external data come from administrative archives, for example from Customs Agencies, Revenue Agencies, Chambers of Commerce and Social Security Institutes. Data from direct surveys are generally well structured, so they can flow directly into the integration layer; this is because NSIs have full control of their own applications. By contrast, data from other institutions' archives must enter the S-DWH together with their metadata in order to be read correctly.

In the early days, extracting data from source systems, transforming it and loading it into the target data warehouse was done by writing complex custom code, which, once efficient tools became available, proved an inefficient way to process large volumes of complex data in a timely manner. Nowadays ETL (Extract, Transform and Load) is an essential component for loading data into data warehouses from external sources. ETL processes are also widely used in data integration and data migration.

The objective of an ETL process is to facilitate data movement and transformation. ETL is the technology that performs three distinct functions of data movement:

o the extraction of data from one or more sources;
o the transformation of the data, e.g. cleansing, reformatting, standardisation, aggregation;
o the loading of the resulting data set into specified target systems or file formats.

ETL processes are reusable components that can be scheduled to perform data movement jobs on a regular basis, and ETL supports massively parallel processing of large data volumes.
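As a minimal illustration of the three functions, the sketch below implements a toy ETL pipeline in Python: it extracts records from a CSV file, cleanses and reformats them, and loads the result into a SQLite table. The file name, column names and target schema are illustrative assumptions, not part of any S-DWH specification.

```python
import csv
import sqlite3

def extract(path):
    """Extract: read raw records from a delimited source file."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Transform: cleanse and standardise the extracted records."""
    cleaned = []
    for row in rows:
        if not row.get("unit_id"):        # drop records missing the key
            continue
        cleaned.append({
            "unit_id": row["unit_id"].strip(),
            "region": row["region"].strip().upper(),  # standardise coding
            "turnover": float(row["turnover"] or 0),  # reformat to numeric
        })
    return cleaned

def load(rows, db_path="warehouse.db"):
    """Load: write the transformed data set into the target table."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS business_units "
                "(unit_id TEXT PRIMARY KEY, region TEXT, turnover REAL)")
    con.executemany("INSERT OR REPLACE INTO business_units VALUES "
                    "(:unit_id, :region, :turnover)", rows)
    con.commit()
    con.close()

def run_etl(source="survey_extract.csv"):
    load(transform(extract(source)))

if __name__ == "__main__":
    run_etl()
```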
ETL tools were created to improve and facilitate data warehousing, and several types exist depending on customers' needs. Some perform and supervise only selected stages of the ETL process, such as data migration tools (EtL tools, "small t" tools) and data transformation tools (eTl tools, "capital T" tools). Others are complete ETL tools with many functions, intended for processing large amounts of data or for more complicated ETL projects. Some, like server engine tools, execute many ETL steps at the same time from more than one developer, while others, like client engine tools, are simpler and execute ETL routines on the same machine on which they are developed. There are two further types: code-based tools, a family of programming tools that allow work with many operating systems and programming languages, and GUI-based tools, which remove the coding layer and (in theory) allow users to work without any knowledge of coding languages.

The first task is data extraction from internal or external sources. After queries are sent to the source system, the data may go directly into the database; usually, however, there is a need to monitor the data or gather more information, so they first pass through a staging area. Some tools automatically extract only new or changed information, so the extract does not have to be refreshed by hand.

The second task is transformation, which is a broad category (a sketch of typical steps follows this list):

o transforming data into the structure required for further processing (extracted data usually have a structure typical of the source);
o sorting data;
o connecting or separating data sets;
o cleansing;
o checking quality.
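The sketch below, again a hedged illustration in Python rather than the behaviour of any particular tool, shows what several of these transformation steps (restructuring, cleansing, deduplication, sorting and a simple quality check) can look like on records produced by the extraction step; all field names and rules are assumptions.

```python
def transform_records(rows):
    """Illustrative transformation stage: restructure, cleanse, deduplicate,
    sort and report a basic quality measure."""
    # Restructure: keep only the fields the target schema expects.
    shaped = [{"unit_id": r.get("unit_id", "").strip(),
               "nace": r.get("activity_code", "").strip().upper(),
               "employees": int(r.get("employees") or 0)}
              for r in rows]

    # Cleanse: drop records that fail basic validity rules.
    valid = [r for r in shaped if r["unit_id"] and r["employees"] >= 0]

    # Connect/separate: deduplicate on the statistical unit key.
    by_key = {r["unit_id"]: r for r in valid}   # last record wins

    # Sort: a deterministic order simplifies loading and comparison.
    result = sorted(by_key.values(), key=lambda r: r["unit_id"])

    # Quality check: report how many records were rejected or merged.
    print(f"{len(rows) - len(result)} of {len(rows)} records rejected or merged")
    return result
```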
The third task is loading the data into the data warehouse. Besides the three main capabilities of extraction, transformation and loading, ETL tools offer many others, for instance sorting, filtering, data profiling, quality control, cleansing, monitoring, synchronisation and consolidation.

The most popular commercial ETL tools are described below.

IBM InfoSphere DataStage

IBM InfoSphere DataStage integrates data on demand with a high-performance parallel framework, extended metadata management and enterprise connectivity. It supports the collection, integration and transformation of large volumes of data, with data structures ranging from simple to highly complex. It also provides support for big data and Hadoop, enabling customers to access big data directly on a distributed file system and thereby address the most challenging data volumes in their systems. In addition, it offers a scalable platform that enables customers to solve large-scale business problems through high-performance processing of massive data volumes, supports real-time data integration, and provides complete connectivity between any data source and any application.

Informatica PowerCenter

Informatica PowerCenter is a widely used extraction, transformation and loading (ETL) tool for building enterprise data warehouses. PowerCenter enables its customers to implement a single approach to accessing, transforming and delivering data without having to resort to hand coding. The software scales to support large data volumes and meets customers' demands for security and performance. PowerCenter serves as the data integration foundation for all enterprise integration initiatives, including data warehousing, data governance, data migration, service-oriented architecture (SOA), B2B data exchange and master data management (MDM). It also enables teams of developers, analysts and administrators to work together faster and better, sharing and reusing work to accelerate project delivery.

Oracle Warehouse Builder (OWB)

Oracle Warehouse Builder (OWB) is a tool that enables the design of custom Business Intelligence applications. It provides dimensional ETL process design, extraction from heterogeneous source systems, and metadata reporting functions, and it allows the creation of both dimensional and relational models as well as star-schema data warehouse architectures. Besides being an ETL tool, Oracle Warehouse Builder also enables users to design and build ETL processes, target data warehouses, intermediate data stores and user access layers. It can read metadata in a wizard-driven form from a data dictionary or Oracle Designer, and it also supports over 40 metadata files from other vendors.

SAS Data Integration Studio

SAS Data Integration Studio is a powerful visual design tool for building, implementing and managing data integration processes regardless of data sources, applications or platforms. An easy-to-manage, multi-user environment enables collaboration on large enterprise projects with repeatable processes that are easily shared. The creation and management of data and metadata are improved by extensive impact analysis of potential changes across all data integration processes. SAS Data Integration Studio enables users to quickly build and edit data integration processes, to automatically capture and manage standardised metadata from any source, and to easily display, visualise and understand enterprise metadata and data integration processes. It is part of the SAS Enterprise Data Integration Server offering.

SAP BusinessObjects Data Services (SAP BODS)

SAP BusinessObjects Data Services (SAP BODS) provides data integration as one of its fundamental capabilities: it is used for extracting, transforming and loading (ETL) data from heterogeneous sources into a target database or data warehouse. Customers create applications (jobs) that specify data mappings and transformations using the Designer. The product handles any type of data, including structured or unstructured data from databases or flat files, which it can process, cleanse and deduplicate. The Data Services RealTime interfaces provide additional support for real-time data movement and access: Data Services RealTime reacts immediately to messages as they are sent, performing predefined operations on the message content, and its components provide services to web applications and other client applications. The Data Services product consists of several components, including the Designer, Job Server, Engine and Repository.

Microsoft SQL Server Integration Services (SSIS)

Microsoft SQL Server Integration Services (SSIS) is a platform for building enterprise-level data integration and data transformation solutions. Integration Services is used to solve complex business problems by copying or downloading files, sending e-mail messages in response to events, updating data warehouses, cleansing and mining data, and managing SQL Server objects and data. Packages can work alone or together with other packages to address complex business needs. Integration Services can extract and transform data from a wide variety of sources, such as XML data files, flat files and relational data sources, and then load the data into one or more destinations. It includes a rich set of built-in tasks and transformations, tools for constructing packages, and the Integration Services service for running and managing packages. You can use the graphical Integration Services tools to create solutions without writing a single line of code.
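Finally, as noted earlier, ETL processes are typically scheduled to run on a regular basis, and the commercial tools above ship their own schedulers for this. As a minimal stand-in for such a scheduler, the sketch below re-runs the hypothetical run_etl() function from the first example at a fixed interval; the module name etl_pipeline is an assumption, and in practice an external scheduler such as cron, or the tool's built-in one, would usually be preferred.

```python
import time
from datetime import datetime

# run_etl is the toy pipeline from the first sketch in this section;
# saving that sketch as etl_pipeline.py is an assumption made here.
from etl_pipeline import run_etl

INTERVAL_SECONDS = 24 * 60 * 60  # one data movement job per day

def main():
    while True:
        print(f"[{datetime.now():%Y-%m-%d %H:%M}] starting ETL run")
        try:
            run_etl()
        except Exception as exc:  # keep the schedule alive if one run fails
            print(f"ETL run failed: {exc}")
        time.sleep(INTERVAL_SECONDS)

if __name__ == "__main__":
    main()
```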