Lessons from Biology: Genes, Neurons, Neocortex and the New Computing Model for Cognitive Information Technologies †

Total Pages: 16

File Type: PDF, Size: 1020 KB

Proceedings

Lessons from Biology: Genes, Neurons, Neocortex and the New Computing Model for Cognitive Information Technologies †

Rao Mikkilineni

C3DNA, 7533 Kingsbury Ct, Cupertino, CA 95014, USA; [email protected]; Tel.: +1-408-406-7639

† Presented at the IS4SI 2017 Summit DIGITALISATION FOR A SUSTAINABLE SOCIETY, Gothenburg, Sweden, 12–16 June 2017.

Published: 9 June 2017

Proceedings 2017, 1, 213; doi:10.3390/IS4SI-2017-03989

Abstract: In this paper we analyze our current understanding of genes, neurons and the neocortex and draw a parallel to current implementations of cognitive computing in Silicon. We argue that current information technologies have evolved from the original stored-program control architecture implementing Turing machines, which allowed us to model, configure, monitor and control any physical system using a Universal Turing Machine model. However, the large scale of distributed computations and their required tolerance to fluctuations have created new challenges. We suggest that a new agent-based computing model, an extension of the current Turing machine, meets this challenge and provides a path to cognitive, self-learning and self-managing systems.

Keywords: genes; neurons; neocortex; cognition; DIME network architecture; self-managing computations; computing models

1. Introduction

According to Ray Kurzweil [1], the story of human intelligence starts with a universe that is capable of encoding information. This was the enabling factor that allowed evolution to take place. The story of evolution, he observes, unfolds with increasing levels of abstraction that exploit information processing. From atoms to molecules to a very unique and complex molecule, DNA, increasing levels of abstraction (encoding information in reusable form, using it to define a process intent, and delivering a means to execute it) have resulted in a diversity of self-replicating systems.

The next major evolutionary abstractions, the neuron and the neural network, allowed embedded, embodied, enacted and extended cognition (known as 4E cognition) in a variety of ways [2] to model, configure, monitor and control the environment using a set of information patterns with a self-identity, and to optimize the execution of processes (both of managing its own life and of managing its interactions with the environment). Cognition [3] here is the ability to process information, apply knowledge, and change the circumstance. Cognition is associated with intent and its accomplishment through various information management processes that monitor and control a system and its environment. Cognition is associated with a sense of “self” (the observer) and the systems with which it interacts (the environment, or the “observed”). Cognition extensively uses time and history in executing and regulating the tasks that constitute a cognitive process.

The neocortex takes the neural-network encoding of information to a higher level by creating a hierarchical temporal memory that not only composes complex information processing workflows out of downstream specialized networks of neural processing units but also creates predictive and corrective patterns in real time. The neocortex holds the key to the learning processes that provide its cognitive data analytics and their predictive capabilities.

In this paper we examine the evolution of genes, neurons and the neocortex alongside the evolution of information processing in Silicon-based computing systems. Prof. Mark Burgin [4] from UCLA has argued that the first mathematical model employing agent technology (the inductive Turing machine) serves as a theoretical foundation for a new model, the DIME network architecture (DNA), and allows modelling a network as an extremely large dynamic memory. The DNA agents organize the connections of an information processing network in Silicon to provide cognitive computing abstractions and manage the evolution of the computing network's behavior.
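A DNA agent, in this architecture, wraps a computation with a parallel concern: configuring it, watching it and keeping it alive. The short Python sketch below is only a minimal illustration of that separation, assuming a single managed element with a simple restart policy; the class name ManagedElement, the threading approach and the restart limit are choices made for this example, not the C3DNA implementation.

    # Minimal sketch (assumed names, not the C3DNA implementation): a computing
    # workflow runs in its own thread while a parallel management workflow
    # monitors it and repairs it on failure.
    import threading
    import time


    class ManagedElement:
        def __init__(self, name, task, max_restarts=3):
            self.name = name
            self.task = task              # the computing workflow (a callable)
            self.max_restarts = max_restarts
            self.restarts = 0
            self.healthy = True

        def _run_task(self):
            try:
                self.task()
            except Exception:
                self.healthy = False      # signal a fault to the management loop

        def run(self):
            worker = threading.Thread(target=self._run_task)
            worker.start()
            while True:                   # the management workflow
                if not worker.is_alive():
                    if self.healthy or self.restarts >= self.max_restarts:
                        break
                    self.restarts += 1    # repair: restart the failed workflow
                    self.healthy = True
                    worker = threading.Thread(target=self._run_task)
                    worker.start()
                time.sleep(0.05)          # monitoring interval
            status = "completed" if self.healthy else "gave up"
            print(f"{self.name}: {status} after {self.restarts} repair(s)")


    if __name__ == "__main__":
        attempts = {"n": 0}

        def flaky_workflow():
            attempts["n"] += 1
            if attempts["n"] < 3:         # fail twice, then succeed
                raise RuntimeError("transient fault")

        ManagedElement("example", flaky_workflow).run()

The point of the sketch is the structural split: the task knows nothing about its own recovery, and the management loop knows nothing about the task's domain logic, mirroring the separation between the computing network and the managing agents described above.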
2. Genes

As George Dyson [5] observes in his book ‘Darwin among the Machines’: “The analog of software in the living world is not a self-reproducing organism, but a self-replicating molecule of DNA. Self-replication and self-reproduction have often been confused. Biological organisms, even single-celled organisms, do not replicate themselves; they host the replication of genetic sequences that assist in reproducing an approximate likeness of them. For all but the lowest organisms, there is a lengthy, recursive sequence of nested programs to unfold. An elaborate self-extracting process restores entire directories of compressed genetic programs and reconstructs increasingly complicated levels of hardware on which the operating system runs.” Life, it seems, is an executable directed acyclic graph (DAG), and a managed one at that.

According to Richard Dawkins [6], the DNA molecule is really a set of plans for building a body, and it indirectly supervises the manufacture of a different kind of molecule, the protein. “The coded message of the DNA, written in the four-letter nucleotide alphabet, is translated in a simple mechanical way into another alphabet. This is the alphabet of amino acids which spells out protein molecules.” The indirect supervision task mentioned by Dawkins is the function of the gene, a piece of DNA that contains all the information needed to “build” a specific biological structure.

According to Damasio [7], managing and safekeeping life is the fundamental premise of biological entities. At the base of these features is the homeostasis process. Homeostasis is the property of a system that regulates its internal environment and tends to maintain stable, constant conditions of properties, such as temperature or chemical parameters, that are essential to its survival. System-wide homeostasis goals are accomplished through a representation of the current state, a desired state, a comparison process and control mechanisms. Consciousness plays an important role in all these processes; knowledge about the internal (self) and external (environment) conditions is the key element for understanding which action (or sequence of actions) has to be performed to reach a specific goal given the current situation.

This brings to light the cellular computing model, which (a minimal sketch follows the list):

1. Spells out the computational workflow components as a stable sequence of information and information-processing patterns in executable form that accomplishes a specific purpose or intent;
2. Implements a parallel management workflow, with another sequence of information patterns, that assures the successful execution of the system's purpose (the computing network to assure biological value with management and safekeeping);
3. Uses a signaling mechanism that controls the execution of the workflow for gene expression (the regulatory, i.e., management, network); and
4. Assures real-time monitoring and control (homeostasis) to execute the four genetic transactions of replication, repair, recombination and reconfiguration.
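The homeostasis step in item 4 can be read as a plain control loop: sense the current state, compare it with the desired state, and act on the difference. The Python fragment below illustrates that reading for a single regulated variable; the temperature example, the proportional gain and the tolerance are assumptions made for this illustration, not a biological model.

    # Toy homeostat (illustrative assumptions only): compare the sensed state
    # with the desired state and apply a proportional correction.
    def homeostat(sense, actuate, desired, gain=0.5, steps=20, tolerance=0.01):
        """Drive the sensed value toward `desired`; return the steps used."""
        for step in range(steps):
            current = sense()             # monitor (the signaling mechanism)
            error = desired - current     # compare current and desired state
            if abs(error) < tolerance:
                return step               # goal reached, nothing to repair
            actuate(gain * error)         # control action (repair/reconfigure)
        return steps


    if __name__ == "__main__":
        state = {"temperature": 31.0}     # an out-of-range starting value

        def sense():
            return state["temperature"]

        def actuate(delta):
            state["temperature"] += delta

        used = homeostat(sense, actuate, desired=37.0)
        print(f"settled at {state['temperature']:.2f} after {used} corrective steps")

In this reading, sense corresponds to the signaling mechanism of item 3, the comparison against the desired value to the representation and comparison process described by Damasio, and actuate to the repair and reconfiguration transactions of item 4.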
3. Neurons

Neurons are special-purpose cells, created by genes, that process information received through various sensor cells and control the environment using myriad motor cells. The neuron is the fundamental unit of the nervous system. The basic purpose of a neuron is to receive incoming information and, based upon that information, send a signal to other neurons, muscles, or glands. Neurons are designed to rapidly send signals across physiologically long distances. A group of neurons is thus programmed to perform specific functions (sensory and motor) that help an organism process information, detect patterns and take action based on signals received from within or without.

The Hebbian theory of learning, summarized as “cells that fire together wire together”, has been the premise used to understand “neural nets” and neuronal learning. While the Hebbian unit of learning is a single neuron, Ray Kurzweil proposes instead an assembly (Lego-like) of neurons, whose function and structure are predetermined by genes, as the fundamental unit of learning, executing information processing, pattern detection and communication with other units of learning. He goes on to say that the purpose of these modules is to recognize patterns, to remember them, and to predict them based on partial patterns. In essence, groups of neurons act as basic autonomic computing units with sensors and actuators, processing information based on genetically programmed pattern-recognition mechanisms.

4. Neocortex

The neocortex plays a crucial role in the control of higher perceptual processes, cognitive functions, and intelligent behavior. It acts as a higher-level information processing mechanism that uses re-composable neural subnetworks to create a hierarchical information processing system with predictive and proactive analytics. In other words, it allows us to learn, adapt and create new modes of behavior. Ray Kurzweil [1] succinctly summarizes the nature and function of the neocortex. The basic unit of the neocortex is a module of neurons, which he estimates at around a hundred. The pattern of connections and synaptic strengths within each module is relatively stable. He emphasizes that it is the connections and the synaptic strengths between the modules that represent learning. The physical connections between these modules are created to represent hierarchical information-pattern-processing workflows in a repeated and orderly manner. He asserts that the real strength of
Recommended publications
  • Mind-Crafting: Anticipatory Critique of Transhumanist Mind-Uploading in German High Modernist Novels Nathan Jensen Bates a Disse
    Mind-Crafting: Anticipatory Critique of Transhumanist Mind-Uploading in German High Modernist Novels. Nathan Jensen Bates. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, University of Washington, 2018. Reading Committee: Richard Block (Chair), Sabine Wilke, Ellwood Wiggins. Program Authorized to Offer Degree: Germanics. © Copyright 2018 Nathan Jensen Bates, University of Washington. Abstract: This dissertation explores the question of how German modernist novels anticipate and critique the transhumanist theory of mind-uploading in an attempt to avert binary thinking. German modernist novels simulate the mind and expose the indistinct limits of that simulation. Simulation is understood in this study as defined by Jean Baudrillard in Simulacra and Simulation. The novels discussed in this work include Thomas Mann’s Der Zauberberg; Hermann Broch’s Die Schlafwandler; Alfred Döblin’s Berlin Alexanderplatz: Die Geschichte von Franz Biberkopf; and, in the conclusion, Irmgard Keun’s Das Kunstseidene Mädchen is offered as a field of future inquiry. These primary sources disclose at least three aspects of the mind that are resistant to discrete articulation; that is, the uploading or extraction of the mind into a foreign context. A fourth is proposed, but only provisionally, in the conclusion of this work. The aspects resistant to uploading are defined and discussed as situatedness, plurality, and adaptability to ambiguity. Each of these aspects relates to one of the three steps of mind-uploading summarized in Nick Bostrom’s treatment of the subject.
  • Sustainable Development, Technological Singularity and Ethics
    European Research Studies Journal, Volume XXI, Issue 4, 2018, pp. 714-725. Sustainable Development, Technological Singularity and Ethics. Vyacheslav Mantatov and Vitaly Tutubalin, East Siberia State University of Technology and Management, Ulan-Ude, Russia. Abstract: The development of modern convergent technologies opens the prospect of a new technological order. Its image as a “technological singularity”, i.e. such a “transhuman” stage of scientific and technical progress at which superintelligence will be practically implemented, seems to be quite realistic. The determination of the basic philosophical coordinates of this future reality in the movement along the path of sustainable development of mankind is the most important task of modern science. The article is devoted to the study of the basic ontological, epistemological and moral aspects in the reception of the coming technological singularity. The method of this study integrates the dialectical and system approaches. The authors come to the conclusion that the technological singularity in the form of a “computronium” (superintelligence) opens up broad prospects for the sustainable development of mankind in the cosmic dimension. This superintelligence will become an ally of man in the process of cosmic evolution. Keywords: Technological Singularity, Superintelligence, Convergent Technologies, Cosmocentrism, Human and Universe. JEL codes: Q01, Q56. 1. Introduction. “Intelligence organizes the world by organizing itself.” (J. Piaget) Technological singularity is defined as a certain moment or stage in the development of mankind when scientific and technological progress will become so fast and complex that it will be unpredictable.
  • Generative Models, Structural Similarity, and Mental Representation
    The Mind as a Predictive Modelling Engine: Generative Models, Structural Similarity, and Mental Representation Daniel George Williams Trinity Hall College University of Cambridge This dissertation is submitted for the degree of Doctor of Philosophy August 2018 The Mind as a Predictive Modelling Engine: Generative Models, Structural Similarity, and Mental Representation Daniel Williams Abstract I outline and defend a theory of mental representation based on three ideas that I extract from the work of the mid-twentieth century philosopher, psychologist, and cybernetician Kenneth Craik: first, an account of mental representation in terms of idealised models that capitalize on structural similarity to their targets; second, an appreciation of prediction as the core function of such models; and third, a regulatory understanding of brain function. I clarify and elaborate on each of these ideas, relate them to contemporary advances in neuroscience and machine learning, and favourably contrast a predictive model-based theory of mental representation with other prominent accounts of the nature, importance, and functions of mental representations in cognitive science and philosophy. For Marcella Montagnese Preface Declaration This dissertation is the result of my own work and includes nothing which is the outcome of work done in collaboration except as declared in the Preface and specified in the text. It is not substantially the same as any that I have submitted, or, is being concurrently submitted for a degree or diploma or other qualification at the University of Cambridge or any other University or similar institution except as declared in the Preface and specified in the text. I further state that no substantial part of my dissertation has already been submitted, or, is being concurrently submitted for any such degree, diploma or other qualification at the University of Cambridge or any other University or similar institution except as declared in the Preface and specified in the text.
  • Transhumanism Between Human Enhancement and Technological Innovation*
    Transhumanism Between Human Enhancement and Technological Innovation* Ion Iuga Abstract: Transhumanism introduces from its very beginning a paradigm shift about concepts like human nature, progress and human future. An overview of its ideology reveals a strong belief in the idea of human enhancement through technological means. The theory of technological singularity, which is more or less a radicalisation of the transhumanist discourse, foresees a radical evolutionary change through artificial intelligence. The boundaries between intelligent machines and human beings will be blurred. The consequence is the coming of a post-biological and posthuman future in which intelligent technology becomes autonomous and constantly self-improving. Considering these predictions, I will investigate here the way in which the idea of human enhancement modifies our understanding of technological innovation. I will argue that such change goes in at least two directions. On the one hand, innovation is seen as something that will inevitably lead towards intelligent machines and human enhancement. On the other hand, there is a direction such as “Singularity University,” where innovation is called on to solve human challenges pragmatically. Yet there is a unifying spirit which holds the two directions together, and I think it is the same transhumanist idea. Keywords: transhumanism, technological innovation, human enhancement, singularity. Each of your smartphones is more powerful than the fastest supercomputer in the world of 20 years ago. (Kathryn Myronuk) If you understand the potential of these exponential technologies to transform everything from energy to education, you have a different perspective on how we can solve the grand challenges of humanity. (Ray Kurzweil) We seek to connect a humanitarian community of forward-thinking people in a global movement toward an abundant future (Singularity University, Impact report 2014).
  • Architects of Intelligence
    MARTIN FORD. ARCHITECTS OF INTELLIGENCE: THE TRUTH ABOUT AI FROM THE PEOPLE BUILDING IT. For Xiaoxiao, Elaine, Colin, and Tristan. Copyright © 2018 Packt Publishing. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews. Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book. Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information. Acquisition Editor: Ben Renow-Clarke. Project Editor: Radhika Atitkar. Content Development Editor: Alex Sorrentino. Proofreader: Safis Editing. Presentation Designer: Sandip Tadge. Cover Designer: Clare Bowyer. Production Editor: Amit Ramadas. Marketing Manager: Rajveer Samra. Editorial Director: Dominic Shakeshaft. First published: November 2018. Production reference: 2201118. Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham B3 2PB, UK. ISBN 978-1-78913-151-2. www.packt.com. Contents: Introduction (p. 1); A Brief Introduction to the Vocabulary of Artificial Intelligence (p. 10); How AI Systems Learn (p. 11); Yoshua Bengio (p. 17); Stuart J.
  • The Transhumanist Reader Is an Important, Provocative Compendium Critically Exploring the History, Philosophy, and Ethics of Transhumanism
    The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future. “We are in the process of upgrading the human species, so we might as well do it with deliberation and foresight. A good first step is this book, which collects the smartest thinking available concerning the inevitable conflicts, challenges and opportunities arising as we re-invent ourselves. It’s a core text for anyone making the future.” —Kevin Kelly, Senior Maverick for Wired. “Transhumanism has moved from a fringe concern to a mainstream academic movement with real intellectual credibility. This is a great taster of some of the best emerging work. In the last 10 years, transhumanism has spread not as a religion but as a creative rational endeavor.” —Julian Savulescu, Uehiro Chair in Practical Ethics, University of Oxford. “The Transhumanist Reader is an important, provocative compendium critically exploring the history, philosophy, and ethics of transhumanism. The contributors anticipate crucial biopolitical, ecological and planetary implications of a radically technologically enhanced population.” —Edward Keller, Director, Center for Transformative Media, Parsons The New School for Design. “This important book contains essays by many of the top thinkers in the field of transhumanism. It’s a must-read for anyone interested in the future of humankind.” —Sonia Arrison, best-selling author of 100 Plus: How The Coming Age of Longevity Will Change Everything. The rapid pace of emerging technologies is playing an increasingly important role in overcoming fundamental human limitations. The Transhumanist Reader presents the first authoritative and comprehensive survey of the origins and current state of transhumanist thinking regarding technology’s impact on the future of humanity.
  • The Technological Singularity and the Transhumanist Dream
    ETHICAL CHALLENGES. The technological singularity and the transhumanist dream. Miquel Casas Araya Peralta. In 1997, an AI beat a human world chess champion for the first time in history (it was IBM’s Deep Blue playing Garry Kasparov). Fourteen years later, in 2011, IBM’s Watson beat two winners of Jeopardy! (Jeopardy is a general knowledge quiz that is very popular in the United States; it demands a good command of the language). In late 2017, DeepMind’s AlphaZero reached superhuman levels of play in three board games (chess, go and shogi) in just 24 hours of self-learning without any human intervention, i.e. it just played itself. Some of the people who have played against it say that the creativity of its moves makes it seem more like an alien than a computer program. But despite all that, in 2019 nobody has yet designed anything that can go into a strange kitchen and fry an egg. Are our machines truly intelligent? Successes and failures of weak AI. The fact is that today AI can solve ever more complex specific problems with a level of reliability and speed beyond our reach at an unbeatable cost, but it fails spectacularly in the face of any challenge for which it has not been programmed. On the other hand, human beings have become used to trivialising everything that can be solved by an algorithm and have learnt to value some basic human skills that we used to take for granted, like common sense, because they make us unique. Nevertheless, over the last decade, some influential voices have been warning that our skills may not always be irreplaceable.
  • Artificial Intelligence: How Does It Work, Why Does It Matter, and What Can We Do About It?
    Artificial intelligence: How does it work, why does it matter, and what can we do about it? STUDY, Panel for the Future of Science and Technology, EPRS | European Parliamentary Research Service. Author: Philip Boucher, Scientific Foresight Unit (STOA). PE 641.547, June 2020. Artificial intelligence (AI) is probably the defining technology of the last decade, and perhaps also the next. The aim of this study is to support meaningful reflection and productive debate about AI by providing accessible information about the full range of current and speculative techniques and their associated impacts, and setting out a wide range of regulatory, technological and societal measures that could be mobilised in response. This study has been drawn up by the Scientific Foresight Unit (STOA), within the Directorate-General for Parliamentary Research Services (EPRS) of the Secretariat of the European Parliament. To contact the publisher, please e-mail [email protected]. Linguistic version: Original: EN. Manuscript completed in June 2020. Disclaimer and copyright: This document is prepared for, and addressed to, the Members and staff of the European Parliament as background material to assist them in their parliamentary work. The content of the document is the sole responsibility of its author(s) and any opinions expressed herein should not be taken to represent an official position of the Parliament. Reproduction and translation for non-commercial purposes are authorised, provided the source is acknowledged and the European Parliament is given prior notice and sent a copy.
  • A Traditional Scientific Perspective on the Integrated Information Theory of Consciousness
    Entropy (journal article). A Traditional Scientific Perspective on the Integrated Information Theory of Consciousness. Jon Mallatt, The University of Washington WWAMI Medical Education Program at The University of Idaho, Moscow, ID 83844, USA; [email protected]. Abstract: This paper assesses two different theories for explaining consciousness, a phenomenon that is widely considered amenable to scientific investigation despite its puzzling subjective aspects. I focus on Integrated Information Theory (IIT), which says that consciousness is integrated information (as φMax) and says even simple systems with interacting parts possess some consciousness. First, I evaluate IIT on its own merits. Second, I compare it to a more traditionally derived theory called Neurobiological Naturalism (NN), which says consciousness is an evolved, emergent feature of complex brains. Comparing these theories is informative because it reveals strengths and weaknesses of each, thereby suggesting better ways to study consciousness in the future. IIT’s strengths are the reasonable axioms at its core; its strong logic and mathematical formalism; its creative “experience-first” approach to studying consciousness; the way it avoids the mind-body (“hard”) problem; its consistency with evolutionary theory; and its many scientifically testable predictions. The potential weakness of IIT is that it contains stretches of logic-based reasoning that were not checked against hard evidence when the theory was being constructed, whereas scientific arguments require such supporting evidence to keep the reasoning on course. This is less of a concern for the other theory, NN, because it incorporated evidence much earlier in its construction process. NN is a less mature theory than IIT, less formalized and quantitative, and less well tested.
  • Consciousness: Here, There and Everywhere? Giulio Tononi and Christof Koch
    Consciousness: here, there and everywhere? Giulio Tononi (Department of Psychiatry, University of Wisconsin, Madison, WI, USA) and Christof Koch (Allen Institute for Brain Science, Seattle, WA, USA). Review. Cite this article: Tononi G, Koch C. 2015. Consciousness: here, there and everywhere? Phil. Trans. R. Soc. B 370: 20140167. http://dx.doi.org/10.1098/rstb.2014.0167. Accepted: 6 January 2015. One contribution of 11 to a theme issue ‘Cerebral cartography: a vision of its future’. Subject areas: neuroscience, cognition. The science of consciousness has made great strides by focusing on the behavioural and neuronal correlates of experience. However, while such correlates are important for progress to occur, they are not enough if we are to understand even basic facts, for example, why the cerebral cortex gives rise to consciousness but the cerebellum does not, though it has even more neurons and appears to be just as complicated. Moreover, correlates are of little help in many instances where we would like to know if consciousness is present: patients with a few remaining islands of functioning cortex, preterm infants, non-mammalian species and machines that are rapidly outperforming people at driving, recognizing faces and objects, and answering difficult questions. To address these issues, we need not only more data but also a theory of consciousness—one that says what experience is and what type of physical systems can have it. Integrated information theory (IIT) does so by starting from experience itself via five phenomenological axioms: intrinsic existence, composition, information, integration and exclusion.
  • Artificial Intelligence
    COVER STORY: ARTIFICIAL INTELLIGENCE. Learning About Learning Machines, by Simon Purton, Section Head, Decision Support Division, Headquarters Supreme Allied Commander Transformation. The Three Swords Magazine 34/2019. "When I saw its moves, I wondered whether any of the moves I have ever known were the right ones." These were the words of Lee Sedol after his defeat while playing Go with Google's DeepMind in 2016. AI REVOLUTION, PART I: Human decision-making. Current scientific theory proposes that people make decisions rationally. This idea has its roots in behavioural economic theory [5]. However, "rational" here is different from its more common usage in English; where it would mean "sane" as opposed to "insane". A rational decision-maker (also known as a rational actor) is assumed to rank possible choices by likely benefit and associated cost. They always choose the one that delivers the highest benefit for a cost. Within theory, it is ... "of thumb" to help out when confronted with complex decisions, and such short-cuts enable us to make decisions efficiently and quickly. That we routinely take such short-cuts is really where machines can gain the advantage [7, 8]. As an Englishman it gives me no little pleasure to use an example of non-rational decision-making that illustrates the point and shows the frailty of American Football coaches. In a 16-game regular season, the average NFL [9] coach throws away 1.4 wins. Just think about that: there are 16 games, and in an average season a team only wins eight times. If the right decisions were taken the team would realise 1.4
  • Analytics and Algorithms, Big Data, Cognitive Computing, and Deep Learning in Healthcare and Medicine
    ANALYTICS AND ALGORITHMS, BIG DATA, COGNITIVE COMPUTING, AND DEEP LEARNING IN HEALTHCARE AND MEDICINE. ANTHONY CHANG, MD, MBA, MPH, MS, Chief Intelligence and Innovation Officer; Medical Director, The Sharon Disney Lund Medical Intelligence and Innovation Institute (MI3), Children’s Hospital of Orange County. Dr. Chang attended Johns Hopkins University for his B.A. in molecular biology prior to entering Georgetown University School of Medicine for his M.D. He then completed his pediatric residency at Children’s Hospital National Medical Center and his pediatric cardiology fellowship at the Children’s Hospital of Philadelphia. He then accepted a position as attending cardiologist in the cardiovascular intensive care unit of Boston Children’s Hospital and as assistant professor at Harvard Medical School. He has been the medical director of several pediatric cardiac intensive care programs (including Children’s Hospital of Los Angeles, Miami Children’s Hospital, and Texas Children’s Hospital). He served as the medical director of the Heart Institute at Children’s Hospital of Orange County. He is currently the Chief Intelligence and Innovation Officer and Medical Director of the Heart Failure Program at Children’s Hospital of Orange County. He has also been named a Physician of Excellence by the Orange County Medical Association and Top Cardiologist, Top Doctor for many years. He has completed a Masters in Business Administration (MBA) in Health Care Administration at the University of Miami School of Business and graduated with the McCaw Award of Academic Excellence. He also completed a Masters in Public Health (MPH) in Health Care Policy at the Jonathan Fielding School of Public Health of the University of California, Los Angeles and graduated with the Dean’s Award for Academic Excellence.