Grappling with the Future Use of Big Data for Translational Medicine and Clinical Care


© 2017 IMIA and Schattauer GmbH

S. Murphy 1,2,4, V. Castro 2, K. Mandl 3,4
1 Massachusetts General Hospital Laboratory of Computer Science, Boston, MA, USA
2 Partners Healthcare Research Information Science and Computing, Somerville, MA, USA
3 Boston Children's Hospital Computational Health Informatics Program, Boston, MA, USA
4 Harvard Medical School Department of Biomedical Informatics, Boston, MA, USA

Summary

Objectives: Although patients may have a wealth of imaging, genomic, monitoring, and personal device data, it has yet to be fully integrated into clinical care.
Methods: We identify three reasons for the lack of integration. The first is that "Big Data" is poorly managed by most Electronic Medical Record Systems (EMRS). The data is mostly available on "cloud-native" platforms that are outside the scope of most EMRS, and even checking if such data is available on a patient often must be done outside the EMRS. The second reason is that extracting features from the Big Data that are relevant to healthcare often requires complex machine learning algorithms, such as determining if a genomic variant is protein-altering. The third reason is that applications that present Big Data need to be modified constantly to reflect the current state of knowledge, such as instructing when to order a new set of genomic tests. In some cases, applications need to be updated nightly.
Results: A new architecture for EMRS is evolving which could unite Big Data, machine learning, and clinical care through a microservice-based architecture which can host applications focused on quite specific aspects of clinical care, such as managing cancer immunotherapy.
Conclusion: Informatics innovation, medical research, and clinical care go hand in hand as we look to infuse science-based practice into healthcare. Innovative methods will lead to a new ecosystem of applications (Apps) interacting with healthcare providers to fulfill a promise that is still to be determined.

Keywords
Big Data, healthcare; decision support systems, clinical; genomics

Yearb Med Inform 2017: 96-102
http://dx.doi.org/10.15265/IY-2017-020
Published online August 18, 2017
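As a concrete illustration of the kind of feature extraction mentioned in the Methods above, the following is a minimal, rule-based sketch of deciding whether a single-nucleotide variant inside a coding sequence is protein-altering. The sequence, positions, and reading frame are hypothetical, and real annotation pipelines (variant effect predictors working over transcript models, strand, splice sites, and indels) are far more involved; this only shows the core codon comparison.

```python
# Minimal, rule-based sketch: is a single-nucleotide variant protein-altering?
# Assumes the variant falls inside a coding sequence with a known reading frame;
# the sequence and positions below are hypothetical.
# Requires: pip install biopython
from Bio.Seq import Seq


def is_protein_altering(coding_seq: str, pos: int, alt_base: str) -> bool:
    """True if substituting alt_base at 0-based position pos changes the amino acid."""
    codon_start = (pos // 3) * 3                         # start of the affected codon
    ref_codon = coding_seq[codon_start:codon_start + 3]
    offset = pos - codon_start
    alt_codon = ref_codon[:offset] + alt_base + ref_codon[offset + 1:]
    return str(Seq(ref_codon).translate()) != str(Seq(alt_codon).translate())


cds = "ATGGAAGGCTGA"                         # hypothetical CDS: Met-Glu-Gly-stop
print(is_protein_altering(cds, 4, "T"))      # GAA -> GTA (Glu -> Val): True
print(is_protein_altering(cds, 5, "G"))      # GAA -> GAG (Glu -> Glu): False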
Introduction

If we are to infuse science-based clinical practice into healthcare, we will need better technology platforms than the current Electronic Medical Record Systems (EMRS), which are not built to support scientific innovation. The functionality and overall goals of EMRS vary between countries, but in the United States (US) the primary purpose of EMRS is to bill for the patient visit [1]. The Epic EMRS, dominant in many US Academic Healthcare Centers and known for the strength of its efficient patient billing [2], is mostly closed to outside software additions as part of the quest to develop the most efficient system possible. Other EMRS in the US are perhaps best summarized as vehicles for inter-clinician communication [3]. This communication is mostly facilitated through the exchange of narrative text, so the information that is not coded for billing purposes is left in the unstructured notes exchanged between clinicians [4]. This situation has greatly reduced the appeal of using Big Data for healthcare in the US, and we will largely focus on the loss of this appeal in this report.

At odds with this picture of an efficient Healthcare Enterprise Information Technology (IT) management system is the data entering the system through channels outside the Healthcare Enterprise: personal health data, patient surveys, patient self-reported outcomes, and patient-initiated genomic testing. Even inside the Healthcare Enterprise, the billing- and communication-oriented EMRS is surprisingly disjointed in its ability to handle imaging, waveform, genomic, and continuous cardiac and cephalic monitoring. Much of these data fall under the label of "Big Data" [5].

One defining aspect of "Big Data" is that the data tend to be organized in a fundamentally different way that does not fit easily and directly into hierarchical and relational databases. Rather, much of the data arrives as (sometimes very large) data objects. In S3 object storage systems (such as https://aws.amazon.com/s3), there is often only minimal metadata describing what information exists in the data. These data "objects" have their own internal structure and often require a great deal of computation to extract healthcare-relevant features. Many tools on the cloud-native platform support the analysis of healthcare "Big Data", such as Hadoop, CouchDB, MongoDB, Matlab, R, SAS, and Docker, but the paradigm of computing on data objects rather than examining the data through structured query languages requires skills and infrastructures different from the database-oriented approaches currently employed in most EMRS.
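The point about minimal object metadata can be made concrete. The sketch below (bucket and key names are hypothetical, and it assumes boto3 with valid AWS credentials) shows everything an S3-style store readily exposes about a stored waveform file: keys, sizes, timestamps, and whatever free-form metadata the uploader happened to attach. Nothing in it tells an enterprise query system what is inside the object.

```python
# Sketch: what an S3-style object store readily exposes about a stored waveform
# file. Bucket and key names are hypothetical; requires boto3 and AWS credentials.
import boto3

s3 = boto3.client("s3")
bucket = "example-hospital-waveforms"          # hypothetical bucket name

# Listing objects returns keys, sizes, and timestamps -- no clinical semantics.
response = s3.list_objects_v2(Bucket=bucket, Prefix="eeg/2017/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"], obj["LastModified"])

# Per-object metadata is only whatever free-form key/value pairs the uploader
# chose to attach; there is no enforced schema or clinical terminology.
head = s3.head_object(Bucket=bucket, Key="eeg/2017/patient-0001/session-03.edf")
print(head.get("ContentType"))        # e.g. "application/octet-stream"
print(head.get("Metadata", {}))       # e.g. {} or {"device": "..."}
```

Extracting anything clinically meaningful from such an object means retrieving it and computing over its internal structure, which is the shift away from structured query languages described above.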
In addition to the barrier of training people to use the new tools, the tools that process Big Data and extract healthcare-relevant features must often reside on "cloud-native" platforms [6], because they need massive parallel processing to scale their file processing environments. Software services and tools on cloud-native platforms enable scalable provisioning of compute resources, interoperability between digital objects, indexing to promote discoverability of digital objects, sharing of digital objects between providers and processes, access to and deployment of scientific analysis tools and pipeline workflows, and connectivity with other repositories, registries, and resources. The idea of a Big Data Commons has been advanced at the National Institutes of Health (NIH) as "The NIH Commons" (https://datascience.nih.gov/commons), illustrating many of these differences from traditional computing environments.

Understanding the use of Big Data within the policy of healthcare remains a work in progress. The cloud-native platform does not need to reside in the public cloud: similar computational infrastructure can be established in private and public clouds, so that parts of the computing infrastructure can move from private to public clouds as needed, especially when considering concerns of data privacy.

Organizing an Approach to Big Data in Healthcare

Big Data faces numerous integration hurdles, both with the EMRS data and with the other Big Data. This is due to several factors, including the relatively unstructured approach to data organization in the cloud-native environment. The approach of S3 file storage is to allow each file to be described with metadata that helps organize the file system, but the internal data structures of the files cannot be directly queried. […]

[…] electroencephalogram waveform data from seizure monitoring is commonly put into files in specialized systems. The metadata of the files are not mapped to standard annotations, or annotations of any kind, that are recognized at the enterprise level. The system itself may use S3 storage, but the internal structures of the data are completely unrecognized by a traditional healthcare enterprise query system. Without a query language to make a semantic link between systems, much of this data is left on the healthcare analytics cutting-room floor.
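One possible pattern for restoring a semantic link, sketched below purely for illustration (the paper does not prescribe it), is to tag each object with coded annotations and mirror those tags into a small index that enterprise tooling can query. The bucket, key, codes, and schema are all hypothetical; boto3 and AWS credentials are assumed.

```python
# Sketch of one possible pattern (not prescribed by the paper): tag each waveform
# object with coded annotations and mirror them into a small index that enterprise
# tools can reach with SQL. Bucket, key, codes, and schema are hypothetical;
# requires boto3 and AWS credentials.
import sqlite3

import boto3

s3 = boto3.client("s3")
bucket = "example-hospital-waveforms"                      # hypothetical
key = "eeg/2017/patient-0001/session-03.edf"               # hypothetical

# 1) Attach coded, enterprise-recognizable tags to the object itself.
s3.put_object_tagging(
    Bucket=bucket,
    Key=key,
    Tagging={"TagSet": [
        # Placeholder codes; a real deployment would use an agreed terminology.
        {"Key": "patient_id", "Value": "0001"},
        {"Key": "study_code", "Value": "EEG-ROUTINE"},
        {"Key": "start_time", "Value": "2017-03-02T14:05:00Z"},
    ]},
)

# 2) Mirror the same annotations into an index the enterprise side can query.
db = sqlite3.connect("waveform_index.db")
db.execute("""CREATE TABLE IF NOT EXISTS waveform_objects
              (patient_id TEXT, study_code TEXT, start_time TEXT, s3_key TEXT)""")
db.execute("INSERT INTO waveform_objects VALUES (?, ?, ?, ?)",
           ("0001", "EEG-ROUTINE", "2017-03-02T14:05:00Z", key))
db.commit()

# An enterprise query can now at least discover that the recording exists.
for (s3_key,) in db.execute(
        "SELECT s3_key FROM waveform_objects WHERE patient_id = ? AND study_code = ?",
        ("0001", "EEG-ROUTINE")):
    print(s3_key)
```

Even a thin index like this lets the enterprise side at least discover that a recording exists for a patient, which speaks to the problem, noted in the Summary, of not being able to check whether such data is available at all.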
A third integration hurdle is the necessity to adopt new policies that deal with the sharing of Big Data, which must address at least four issues: 1) what data to focus upon sharing; 2) how to place boundaries on the way data is shared; 3) how to maintain the cost of sharing data; and 4) how to provide adequate motivation for stakeholders to support the sharing of the data.

To focus on what data to share, it is important to understand the use cases behind selecting the data to be shared. For example, the initial focus in a US healthcare data network named PCORnet was to include standard coded healthcare data only, but when a needs assessment was later undertaken among the network sites, most needs were found to be for more specific data such as cancer registry data, rare laboratory tests, and personal health data [7]. This emphasized that, although "classic" data in the EMRS may be easier to handle than Big Data, use cases are guiding the need to include Big Data, and because this integration […]

[…] feeds from the entire population over the past year where people are identified in a coded form. This presents a challenge to placing boundaries on what data is shared, especially regarding consented and unconsented patients. Generally, one must obtain patient consent to associate the internal data with specific patients, although even Big Data sets that are shared as "de-identified" pose a significant risk of re-identification because of the incredibly detailed data that may be available per patient [8]. This makes Big Data a challenge to integrate into the healthcare enterprise. Most EMRS are prepared neither for the challenge of untangling the patient attributions in the data, nor for placing boundaries around the granular data elements inside the Big Data.

Although the cost of keeping Big Data in the public cloud is comparable to the cost of local S3 storage, the cost of transferring data can be very high (http://www.networkworld.com/article/3164444/cloud-computing/how-to-calculate-the-true-cost-of-migrating-to-the-cloud.html). This reflects the general principle that costs are minimized by operating on the data locally and extracting the results of an operation, rather than transferring the entire Big Data set to a remote site for an operation to be performed. For example, although millions of seconds of waveform data may be available, only particular episodes of cardiac arrhythmia or brain epileptiform discharges may be desired. Supplying metadata can direct analysis programs operating at the location of the data […]
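The principle of operating on the data where it lives and transferring only results can be sketched as follows. The amplitude threshold is a stand-in for a real arrhythmia or epileptiform-discharge detector, and the file layout is hypothetical; the point is that only a compact episode summary, not millions of seconds of waveform, leaves the storage site.

```python
# Sketch: run the analysis next to the data and transfer only episode summaries,
# not the raw waveform. The amplitude threshold is a stand-in for a real
# arrhythmia/epileptiform-discharge detector; the file path is hypothetical.
import json

import numpy as np

SAMPLE_RATE_HZ = 256


def detect_episodes(signal: np.ndarray, threshold: float, min_seconds: float = 2.0):
    """Yield (start_s, end_s) spans where |signal| stays above threshold."""
    above = np.abs(signal) > threshold
    min_len = int(min_seconds * SAMPLE_RATE_HZ)
    start = None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                yield start / SAMPLE_RATE_HZ, i / SAMPLE_RATE_HZ
            start = None
    if start is not None and len(above) - start >= min_len:
        yield start / SAMPLE_RATE_HZ, len(above) / SAMPLE_RATE_HZ


def summarize_recording(path: str, threshold: float = 200.0) -> str:
    """Runs where the waveform lives; returns only a compact JSON summary."""
    signal = np.load(path)                   # hypothetical single-channel array
    episodes = [{"start_s": s, "end_s": e}
                for s, e in detect_episodes(signal, threshold)]
    return json.dumps({"file": path, "episodes": episodes})


# Only this small summary, not the waveform itself, would leave the storage site:
# print(summarize_recording("/data/eeg/patient-0001/session-03.npy"))
```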