Brain-Morphism: Astrocytes As Memory Units


Brain-morphism: Astrocytes as Memory Units
Constantine Michmizos, Computational Brain Lab (combra.cs.rutgers.edu), Rutgers University
6th Neuro Inspired Computational Elements (NICE) Workshop – Intel, Hillsboro, OR

ComBra Lab's goal
ᴥ To understand biological intelligence and translate our knowledge to artificial intelligence, by developing brain-morphic computational methods that integrate with the brain – to mimic, restore, and understand it – from the macro (behavioral) to the micro (synaptic) scale.
• Lab themes over time: MEG studies and modeling of children with autism (2013); computational neural-astrocytic brain networks (2015).
• Scales of brain organization: CNS (1 m), systems (10 cm), maps (1 cm), networks (1 mm), neurons (100 μm), synapses (1 μm), molecules (1 Å). [Adapted from Terry Sejnowski; Grillner et al., Nature Neuroscience 2016]

Computing for Brain Science: understanding the function of the brain.
Brain Science for Computing: using computational principles of the brain for generic data analysis.

Alan Turing, 1948 – a fascinating prelude to today's AI
• Proposed connectionist models that would today be called neural networks: randomly connected networks of artificial neurons.
• Training proceeded by reinforcing successful, useful links and cutting useless ones.
• The proposed learning rule was inspired by the infant's brain.

The Neuron as a Basic Information Processing Unit – McCulloch and Pitts (1943)
[Timeline figure spanning the 1950s–2010s, marking milestones in 1948, 1956, 1957, 1959, 1996, 2013, and Dec 2017, including Skinner's ping-pong-playing pigeons (1950); artificial networks are compared to brains since both contain similar elements (input neurons, hidden layers).]

Capsule Networks (Dec 2017)
• Capsule NNs decreased the error by a whopping 45%.
• A capsule is a subset of neurons within a layer that outputs:
  1. an instantiation parameter: is an entity present within a limited domain?
  2. a vector of pose parameters: the pose of the entity relative to a canonical version.
• A capsule replaces max pooling.

Computing for Brain Science: understanding the function of the brain. [Sejnowski et al., Putting big data to good use in neuroscience, Nature Neuroscience 2014]
Brain Science for Computing: using computational principles of the brain for generic data analysis.

Information = f(electrical activity of neurons)
• 1932: Edgar Adrian, Nobel Prize – single-neuron recordings.
• 1971: David Cohen, MIT – magnetoencephalography.
• 1997: Deep brain stimulation for alleviating Parkinson's disease.
• 2013: Motor neurons control a robotic arm for paraplegic patients (BrainGate).
• 2013: TMS applied to the motor cortex induces hand movement.

Non-neuronal brain cells are electrically silent
• The other brain (glial cells): roughly 10% of brain cells are neurons, 90% are non-neuronal.
• 2000s: Calcium imaging allows studies of calcium signaling in hundreds of neurons and glial cells, within neuronal circuits.
• [Figure: Santiago Ramón y Cajal's drawing of an astrocyte.] Ramón y Cajal S., "Something about the physiological significance of neuroglia," Revista Trimestral Micrografía 1, 3–47, 1897.
Astrocytes: a paradigm shift in Neuroscience
ᴥ Understanding astrocytes by calcium imaging.
• The third part of the synapse introduces an orthogonal dimension to "neural" plasticity.
• Astrocytes respond dynamically and on larger time scales than neurons, mapping neural activity onto the slower behavioral scales.

Neuro-morphic Computing
ᴥ Learning by weight updating.
ᴥ Time in discrete instances.

Computational Astrocytes
• Introduce the astrocyte, a new computational unit, into neural networks.
• Embed astrocytic mechanisms into neural-astrocytic networks (NANs).
• Suggest functions for astrocytes at the network level and at large time scales, where behavior and diseases emerge.

[Figure: the tripartite synapse – presynaptic neuron, postsynaptic neuron, astrocyte, microglia, blood vessel. Colangelo et al., Neuroscience Letters 2014]

The Neural Modeling Paradigm
A physical phenomenon (e.g., neural firing, v(t), I(t)) can be treated in two ways:
• As an emergent phenomenon – biophysics-based models: understand the principles of the underlying phenomena; data drives the details; typically higher model complexity.
• As a target function – phenomenological models: Occam's razor / KISS principle; the level of detail is hypothesis- and interest-driven; typically lower model complexity.
[Adapted from Karlheinz Meier]

The model has three components – neural network, astrocytic network, and tripartite synapse – chosen to capture the dynamics of the interaction between the neuronal and the astrocytic parts of the network.

Neural network: neurons encode information in spike rate; the mapping function between input and output is a leaky integrate-and-fire (LIF) neuron.

Step 1. Input as spike timing:
    V_j(t) = Σ_i δ(t − t_i)

Step 2. Synaptic current response:
    I_j(t, t_i) = w_j [exp(−(t − t_i)/τ) − exp(−(t − t_i)/(4τ))]   if t > t_i,   and 0 if t ≤ t_i

Step 3. Integration of the synaptic current:
    τ dv/dt = −(v(t) − v_reset) + R I(t),   with I(t) = Σ_j I_j(t)

Step 4. Output spike:
    if v(t) ≥ V_th then v(t + 1) ← v_reset;   V_o(t) = LIF(I(t))
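A minimal discrete-time sketch of Steps 1–4 in Python. All parameter values are illustrative placeholders, not values from the talk, and the two exponentials are ordered so that the slower one (4τ) acts as the decay term, which keeps the synaptic kernel positive – an assumption about the intended rise/decay roles:

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron driven by biexponential synaptic
# currents (Steps 1-4 above). All parameter values are illustrative placeholders.
dt, T = 0.1, 200.0                 # time step and duration (ms)
tau_syn, w = 5.0, 2.0              # synaptic time constant (ms) and weight
tau_m, R = 20.0, 1.0               # membrane time constant (ms) and resistance
v_reset, v_th = 0.0, 1.0           # reset and threshold potentials

time = np.arange(0.0, T, dt)
input_spikes = [20.0, 25.0, 30.0, 35.0, 120.0]   # presynaptic spike times (ms)

# Step 2: synaptic current as a sum of biexponential kernels, one per input
# spike. The slower exponential (4*tau_syn) is treated as the decay term so
# that the kernel is positive -- an assumption about the intended ordering.
I = np.zeros_like(time)
for t_i in input_spikes:
    s = time - t_i
    kernel = np.exp(-s / (4.0 * tau_syn)) - np.exp(-s / tau_syn)
    I += w * np.where(s > 0.0, kernel, 0.0)

# Steps 3-4: Euler integration of the membrane equation with threshold/reset
v = np.full_like(time, v_reset)
output_spikes = []
for k in range(1, len(time)):
    dv = (-(v[k - 1] - v_reset) + R * I[k - 1]) / tau_m
    v[k] = v[k - 1] + dt * dv
    if v[k] >= v_th:                 # Step 4: spike and reset
        output_spikes.append(round(float(time[k]), 1))
        v[k] = v_reset

print(f"{len(output_spikes)} output spikes at t = {output_spikes} ms")
```

With these placeholder values, the burst of closely spaced input spikes summates and drives output spikes, while the isolated spike at 120 ms does not reach threshold.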
Astrocytic network: intercellular Ca2+ wave propagation in space and time.

    a_i = a_0 · n · exp(−i/τ_d)
    c_i(r_i, t) = a_i · g_i(r_i, t)
    g_i(r_i, t) = exp(−(r_i − μ·v·t)/τ_decay) − exp(−(r_i − μ·v·t)/τ_rise)

• c_i(r_i, t) is the intracellular Ca2+ level (nM) in astrocyte i at time t, and r_i is the distance of astrocyte i from the origination site (r_0 ≡ 0).
• τ_decay and τ_rise control the fall and rise times of the Ca2+ wave.
• τ_d controls the magnitude of the amplitude fall-off between astrocytes.
• μ captures the permeability of the cells through which the wave propagates with speed v, lumping together all the sub-mechanisms of gap-junctional and extracellular communication.
• n is the normalization constant of the biexponential function, itself a function of the rise and decay parameters.

Tripartite synapse: frequency-dependent (depressing) synapse with astrocytic modulation.

    dx/dt = z/τ_rec − (1 − f)·u·x·δ(t − t_sp)
    dy/dt = −y/τ_in + (1 − f)·u·x·δ(t − t_sp)
    dz/dt = y/τ_in − z/τ_rec
    I_syn = A·y(t)

[M. Tsodyks, A. Uziel, and H. Markram, "Synchrony generation in recurrent networks with frequency-dependent synapses," J. Neurosci., 2000]

• x, y, and z (with x + y + z = 1) are the fractions of synaptic resources in the recovered, active, and inactive states.
• u is the fraction of x released when a spike arrives at the synapse at time t_sp.
• τ_in is the characteristic decay time of the postsynaptic currents (PSCs); τ_rec is the recovery time from synaptic depression.

Astrocytic gating of the synapse, modeling Ca2+-dependent presynaptic inhibition:

    df/dt = −f/τ_Ca + (1 − f)·κ·Θ(c_i(r_i, t) − c_thresh)

• f is the gating variable; τ_Ca is its decay constant and κ controls its rise time.
• Θ(x) is the Heaviside function; c_thresh is the intracellular [Ca2+] needed to activate f.

Slow inward currents (SICs): astrocytes also inject a slow inward current into the postsynaptic neuron. SICs
• are well fit by bi-exponential functions;
• have a rapid rise time (on the order of tens of ms) and a comparatively larger decay time (on the order of hundreds of ms);
• are correlated with Ca2+ wave peaks in both time and amplitude.

Upon reaching a Ca2+ peak, an astrocyte releases gliotransmitters that lead to NMDAR-mediated SICs, with the SIC amplitude being logarithmically proportional to the Ca2+ wave amplitude:

    I_astro = 2.11 μA/cm² · ln(w) · Θ(ln w),   where w = c_i(r_i, t) [nM] − 196.69
    I_SIC(t) = I_astro(c_i(r_i, t) = c_peak) · [exp(−t/τ^SIC_decay) − exp(−t/τ^SIC_rise)]

where I_astro is evaluated at the Ca2+ wave peak c_peak. Putting the tripartite synapse together, the model couples the Tsodyks–Markram synaptic resources (x, y, z), the astrocytic gating variable f driven by the Ca2+ wave, and the SIC injected into the postsynaptic neuron; a simulation sketch follows below.
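A minimal sketch of how these pieces could be coupled in simulation, in Python. The parameter values, the toy Ca2+ waveform, and the peak-detection rule for triggering the SIC are illustrative assumptions; the update rules follow the equations above:

```python
import numpy as np

# Sketch of one tripartite synapse: Tsodyks-Markram resources (x, y, z),
# astrocytic gating f driven by a local Ca2+ level c(t), and the resulting
# slow inward current (SIC). All parameter values are illustrative placeholders.
dt, T = 1.0, 1000.0                              # time step and duration (ms)
tau_rec, tau_in, u, A = 800.0, 3.0, 0.5, 1.0     # Tsodyks-Markram parameters
tau_ca, kappa, c_thresh = 250.0, 0.05, 200.0     # gating of f (c in nM)
tau_sic_rise, tau_sic_decay = 20.0, 200.0        # SIC kinetics (ms)

time = np.arange(0.0, T, dt)
pre_spikes = set(np.arange(100.0, 600.0, 50.0))  # presynaptic spike times (ms)

# Toy intracellular Ca2+ transient seen by the astrocyte (biexponential, nM)
t0 = 300.0
s = np.clip(time - t0, 0.0, None)
ca = 400.0 * (np.exp(-s / 400.0) - np.exp(-s / 50.0))

x, y, f = 1.0, 0.0, 0.0
I_syn, I_sic = np.zeros_like(time), np.zeros_like(time)
sic_onset, I_astro = None, 0.0

for k, t in enumerate(time):
    # Ca2+-dependent presynaptic inhibition: f rises while c exceeds c_thresh
    theta = 1.0 if ca[k] > c_thresh else 0.0
    f += dt * (-f / tau_ca + (1.0 - f) * kappa * theta)

    # Tsodyks-Markram resources; the delta term acts only at spike times
    z = 1.0 - x - y                               # inactive fraction
    release = (1.0 - f) * u * x if t in pre_spikes else 0.0
    x += dt * (z / tau_rec) - release
    y += dt * (-y / tau_in) + release
    I_syn[k] = A * y

    # SIC: triggered when the Ca2+ wave peaks; amplitude 2.11*ln(w)*Theta(ln w)
    if sic_onset is None and k > 0 and ca[k] < ca[k - 1] and ca[k - 1] > c_thresh:
        sic_onset = t
        w_amp = ca[k - 1] - 196.69
        I_astro = 2.11 * np.log(w_amp) if w_amp > 1.0 else 0.0
    if sic_onset is not None:
        ts = t - sic_onset
        I_sic[k] = I_astro * (np.exp(-ts / tau_sic_decay) - np.exp(-ts / tau_sic_rise))

print(f"peak I_syn = {I_syn.max():.3f} (a.u.), peak I_SIC = {I_sic.max():.3f} uA/cm^2")
```

In this sketch, once the toy Ca2+ transient crosses c_thresh, f grows and scales down the released fraction (presynaptic inhibition), while the SIC is injected after the Ca2+ peak with the logarithmic amplitude law above.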
Putting a Neural-Astrocytic Network to Sleep
• Probabilistic neural connections with a Gaussian fall-off (a minimal sampling sketch appears at the end of this section):

    P_μ(r_ij) = P_max · exp(−r_ij² / (2σ²))

  where r_ij is the distance between neurons i and j, and P_max and σ are the peak and width of the probability distribution.
• No function is imposed – just basal neural activity; oscillations emerge as a network property.
[Local Sleep in Awake Rats, Nature 2011; Bryant et al., Nature Reviews, 2004]

"This hypothesis was reinforced by a recent modelling study showing that intercellular Ca2+ signaling potentially can introduce slow oscillation in neurons [our Ref]. Our experimental data strongly supports this hypothesis by demonstrating that increasing astrocytic influence on neurons indeed drives them to join the oscillatory activity (Fig. 5C). In this context it is also important to note, that the ratio of astrocytes involved in the SWA was found to start decreasing right after virtually all neurons joined the simultaneous activity (Figs 4 and 5). This observation further supports the view that astrocytic activity corresponds to the generation or maintenance, rather than termination of SWA."

Brain oscillations still lack a mechanistic origin
• Rhythmic or repetitive electrical activity generated by neurons spontaneously or in response to stimuli.
• Studied for 100 years (EEG, MEG, fMRI, PET, MER).
• Encode and process brain information flow.
• Communication through coherence: interregional communication is established when the oscillatory activity between neuronal pools is coherent (Fries 2005; 2009).
• Related to all aspects of human behavior – cognitive, sensory, and motor tasks.
• All brain diseases are associated with oscillatory imbalances.
[Figure: number of publications on "oscillations" per year, 1889–2009. Source: PubMed]

Introducing function into a Neural-Astrocytic Network: NAN Hopfield Nets
• The idea of memories as energy minima was proposed by I.A.
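As referenced in the sleep-network section above, here is a minimal sketch in Python of sampling the Gaussian distance-dependent connectivity. Neuron positions, P_max, σ, and the sheet size are placeholder assumptions, not values from the talk:

```python
import numpy as np

# Sample a directed connectivity matrix with Gaussian distance-dependent
# probability P(r_ij) = P_max * exp(-r_ij**2 / (2 * sigma**2)).
# Neuron positions, P_max, sigma, and the sheet size are placeholder values.
rng = np.random.default_rng(0)
n_neurons, sheet_size = 200, 1000.0      # neurons scattered on a square sheet (um)
p_max, sigma = 0.8, 100.0                # peak probability and spatial width (um)

pos = rng.uniform(0.0, sheet_size, size=(n_neurons, 2))   # random 2-D positions
diff = pos[:, None, :] - pos[None, :, :]                  # pairwise offsets
r = np.linalg.norm(diff, axis=-1)                         # distances r_ij

p = p_max * np.exp(-r**2 / (2.0 * sigma**2))              # connection probabilities
np.fill_diagonal(p, 0.0)                                  # no self-connections
connected = rng.random(p.shape) < p                       # independent Bernoulli draws

print(f"mean out-degree: {connected.sum(axis=1).mean():.1f} connections per neuron")
```

Each ordered pair is drawn independently here, so the resulting graph is directed; a symmetric variant would draw the upper triangle and mirror it.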