An ETL Metadata Model for Data Warehousing


Journal of Computing and Information Technology - CIT 20, 2012, 2, 95–111. doi:10.2498/cit.1002046

Nayem Rahman¹, Jessica Marz¹ and Shameem Akhter²
¹ Intel Corporation, USA; ² Western Oregon University, USA

Metadata is essential for understanding information stored in data warehouses. It helps increase levels of adoption and usage of data warehouse data by knowledge workers and decision makers. A metadata model is important to the implementation of a data warehouse; the lack of a metadata model can lead to quality concerns about the data warehouse. A highly successful data warehouse implementation depends on consistent metadata. This article proposes adoption of an ETL (extract-transform-load) metadata model for the data warehouse that makes subject area refreshes metadata-driven, loads observation timestamps and other useful parameters, and minimizes consumption of database system resources. The ETL metadata model provides developers with a set of ETL development tools and delivers a user-friendly batch cycle refresh monitoring tool for the production support team.

Keywords: ETL metadata, metadata model, data warehouse, EDW, observation timestamp

1. Introduction

The data warehouse is a collection of decision support technologies aimed at enabling the knowledge worker (executive, manager, and analyst) to make better and faster decisions [5]. A data warehouse is defined as a "subject-oriented, integrated, non-volatile and time-variant collection of data in support of management's decisions" [17]. It is considered a key platform for the integrated management of decision support data in organizations [31]. One of the primary goals in building data warehouses is to improve information quality in order to achieve certain business objectives such as competitive advantage or enhanced decision-making capabilities [2, 3].

An enterprise data warehouse (EDW) receives data from different heterogeneous sources. Since the operational data sources and the target data warehouse reside in separate places, a continuous flow of data from source to target is critical to maintaining data freshness in the data warehouse. Information about the data's journey from source to target needs to be tracked in terms of load timestamps and other load parameters for the sake of data consistency and integrity. This information is captured in a metadata model.

Given the increased frequency of data warehouse refresh cycles, the growing importance of the data warehouse in business organizations, and the increasing complexity of data warehouses, centralized management of metadata is essential for data warehouse administration, maintenance, and usage [33]. From the standpoint of a data warehouse refresh process, metadata support is crucial to the data warehouse maintenance team, including ETL developers, database administrators, and the production support team.

An efficient, flexible, robust, and state-of-the-art data warehousing architecture requires a number of technical advances [36]. A metadata-model-driven cycle refresh is one such important advancement. Metadata is essential in data warehouse environments since it enables activities such as data integration, data transformation, on-line analytical processing (OLAP), and data mining [10]. Lately, batch cycles in data warehouses run several times a day to load data from operational data sources into the data warehouse. A metadata model can serve several purposes, including extract-transform-load processing, cycle runs, and cycle refresh monitoring.

Metadata has been identified as one of the key success factors of data warehousing projects [34]. It captures information about data warehouse data that is useful to business users and back-end support personnel. Metadata helps data warehouse end users understand the various types of information resources available from a data warehouse environment [11], and it enables decision makers to measure data quality [30]. Empirical evidence suggests that end-user metadata quality has a positive impact on end-user attitudes about data warehouse data quality [11]. Metadata is important not only from the end-user standpoint, but also from the standpoint of data acquisition, transformation, load, and the analysis of warehouse data [38]. It is essential in designing, building, and maintaining data warehouses. In a data warehouse there are two main kinds of metadata to be collected: business (or logical) metadata and technical (ETL) metadata [38]. ETL metadata is linked to the back-end processes that extract, transform, and load the data [30], and it is most often used by technical analysts for development and maintenance of the data warehouse [18]. In this article, we focus mainly on the ETL metadata that is critical for ETL development, batch cycle refreshes, and maintenance of a data warehouse.

In data warehouses, data from external sources is first loaded into staging subject areas. Then, analytical subject area tables, built in such a way that they fulfill the needs of reports, are refreshed for use by report users. These tables are refreshed multiple times a day by pulling data from staging area tables. However, not all tables in a data warehouse receive changed data during every cycle refresh: the more frequently the batch cycle runs, the lower the percentage of tables that changes in any given cycle. Refreshing all tables without first checking for source data changes causes unnecessary loads at the expense of database system resources. A metadata model that enables utility stored procedures to identify source data changes means that load jobs can be run only when needed. By controlling batch job runs, the metadata model is also designed to minimize use of database system resources. The model makes analytical subject area loads metadata-driven.

The model is also designed to provide the production support team with a user-friendly tool. This allows them to monitor the cycle refresh and look for issues such as a job failure of a table or a load discrepancy in the error and message log table. The metadata model provides the capability to set up subsequent cycle run behavior following a one-time full refresh, which works towards making tables truly metadata-driven. The model also provides developers with a set of objects to perform ETL development work, enabling them to follow standards in ETL development across the enterprise data warehouse.

In the metadata model architecture, load jobs are skipped when source data has not changed. Metadata provides the information needed to decide whether to run full or delta load stored procedures, and it also offers the capability to force a full load if needed. The model also controls collect-statistics jobs, running them after a certain interval or on an on-demand basis, which helps minimize resource utilization. The metadata model has several audit tables that archive critical metadata for three months to two years or more.

In Section 2 we discuss related work. In Section 3 we give a detailed description of the ETL metadata model and its essence. In Section 4 we cover metadata-driven batch processing, batch cycle flow, and an algorithm for wrapper stored procedures. The main contribution of this work is presented in Sections 3 and 4. In Section 5 we discuss the use of metadata in data warehouse subject area refreshes. We conclude in Section 6 by summarizing the contribution made by this work, reviewing the metadata model's benefits, and proposing future work.

2. Literature Research

Research in the field of data warehousing has focused on data warehouse design issues [13, 15, 7, 25], ETL tools [20, 32, 27, 19], data warehouse maintenance [24, 12], performance optimization and relational view materialization [37, 1, 23], and implementation issues [8, 29]. Limited research has been done on the metadata model aspects of data warehousing. Golfarelli et al. [14] provide a model for multidimensional data which is based on business aspects of OLAP data. Huynh et al. [16] propose the use of metadata to map between object-oriented and relational environments within the metadata layer of an object-oriented data warehouse. Eder et al. [9] propose the COMET model, which registers all changes to the schema and structure of data warehouses; they consider the COMET model the basis for OLAP tools and transformation operations, with the goal of reducing incorrect OLAP results. Stohr et al. [33] introduce a model that uses a uniform representation approach based on the Unified Modeling Language (UML) to integrate technical and semantic metadata and their interdependencies. Katic et al. [21] propose a model that covers the security-relevant aspects of existing OLAP/data warehouse solutions. They assert that this particular aspect of metadata has seen rather little interest from product […]

Under our ETL metadata model, platform-independent, database-specific utility tools are used to load the tables from external sources into the staging areas of the data warehouse. The proposed metadata model also enables database-specific software, such as stored procedures, to perform transformations and load the analytical subject areas of the data warehouse. The intent of this software is not to compete with existing ETL tools. Instead, we focus on exploiting the capabilities of current commercial database engines (given their enormous power to do complex transformations and their scalability) while using this metadata model. We first present the ETL metadata model, followed by detailed descriptions of each table. We also provide experimental results, via Tables 2 to 6 in Section 5, based on our application of the metadata model in a real-world, production data warehouse.
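To make the metadata-driven refresh described above more concrete, the SQL sketch below shows one way such control metadata could look. It only illustrates the idea of skipping loads for unchanged sources and choosing between delta, full, and forced full loads; the table name, columns, and the SALES_FACT example are hypothetical assumptions, not the schema or wrapper stored procedures defined in the paper (those are described in Sections 3 and 4).

```sql
-- Illustrative only: a hypothetical control table for metadata-driven loads.
-- The paper's actual metadata tables and wrapper stored procedures differ.
CREATE TABLE etl_table_control (
    target_table_name    VARCHAR(128) PRIMARY KEY,
    src_observation_ts   TIMESTAMP,             -- last change observed in the staging/source table
    tgt_observation_ts   TIMESTAMP,             -- timestamp of the last successful target load
    force_full_load_flag CHAR(1) DEFAULT 'N'    -- set to 'Y' to override delta processing once
);

-- Decision a wrapper procedure could make for one target table:
-- force a full load if the flag is set, skip the job if the source has not
-- changed since the last refresh, otherwise run the delta load procedure.
SELECT CASE
           WHEN force_full_load_flag = 'Y' THEN 'FULL'
           WHEN src_observation_ts <= tgt_observation_ts THEN 'SKIP'
           ELSE 'DELTA'
       END AS load_action
FROM etl_table_control
WHERE target_table_name = 'SALES_FACT';   -- hypothetical target table
```

After a successful load, such a wrapper would update tgt_observation_ts (and reset the force flag) so the next cycle's comparison reflects the latest refresh; the paper's model additionally records run history in audit tables and drives collect-statistics jobs from similar interval metadata.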