UCLA Electronic Theses and Dissertations

Total Pages: 16

File Type: PDF, Size: 1020 KB

Title: The Role of Short-Term Synaptic Plasticity in Neural Network Spiking Dynamics and in the Learning of Multiple Distal Rewards
Permalink: https://escholarship.org/uc/item/63r8s0br
Author: O'Brien, Michael John
Publication Date: 2013
Peer reviewed | Thesis/dissertation

eScholarship.org | Powered by the California Digital Library, University of California

University of California, Los Angeles

The Role of Short-Term Synaptic Plasticity in Neural Network Spiking Dynamics and in the Learning of Multiple Distal Rewards

A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Mathematics

by Michael John O'Brien

2013

© Copyright by Michael John O'Brien 2013

Abstract of the Dissertation

The Role of Short-Term Synaptic Plasticity in Neural Network Spiking Dynamics and in the Learning of Multiple Distal Rewards

by Michael John O'Brien
Doctor of Philosophy in Mathematics
University of California, Los Angeles, 2013
Professor Chris Anderson, Chair

In this thesis, we assess the role of short-term synaptic plasticity in an artificial neural network constructed to emulate two important brain functions: self-sustained activity and signal propagation. We employ a widely used short-term synaptic plasticity (STP) model in a symbiotic network, in which two subnetworks with differently tuned STP behaviors are weakly coupled. This enables both self-sustained global network activity, generated by one of the subnetworks, and faithful signal propagation within subcircuits of the other subnetwork. Finding the parameters for a properly tuned STP network is difficult. We provide a theoretical argument for a method that boosts the probability of finding the elusive STP parameters by two orders of magnitude, as demonstrated in tests.

We then combine STP with a novel critic-like synaptic learning algorithm, which we call ARG-STDP, for attenuated-reward-gating of STDP. STDP refers to a commonly used long-term synaptic plasticity model called spike-timing-dependent plasticity. With ARG-STDP, we are able to learn multiple distal rewards simultaneously, improving on the previous reward-modulated STDP (R-STDP), which could learn only a single distal reward. However, we also provide a theoretical upper bound on the number of distal rewards that can be learned using ARG-STDP.

We also consider the problem of simulating large spiking neural networks, and we describe an architecture for efficiently simulating such networks. The architecture is suitable for implementation on a cluster of general-purpose graphics processing units (GPGPUs). Novel aspects of the architecture are described, and its performance is benchmarked on a GPGPU cluster. With the advent of inexpensive GPGPU cards and compute power, the described architecture offers an affordable and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks.

The dissertation of Michael John O'Brien is approved.

Dean Buonomano
Joseph Teran
Andrea Bertozzi
Chris Anderson, Committee Chair

University of California, Los Angeles, 2013

Table of Contents

1 Introduction (p. 1)
  1.1 Motivation (p. 1)
  1.2 Historical Context (p. 2)
  1.3 Thesis Overview (p. 4)
  1.4 Chapter Summaries (p. 6)
2 Background: Computational Models for Neural Dynamics and Synaptic Plasticity (p. 8)
  2.1 Neuron Models (p. 8)
    2.1.1 Hodgkin and Huxley Neurons (p. 8)
    2.1.2 Leaky Integrate-and-Fire Neurons (p. 11)
    2.1.3 Izhikevich Neurons (p. 13)
  2.2 Plasticity Models (p. 13)
    2.2.1 Spike Time Dependent Plasticity (p. 14)
    2.2.2 Short Term Plasticity (p. 14)
3 Short Term Plasticity Aided Signal Propagation (p. 16)
  3.1 Introduction (p. 16)
  3.2 RAIN Networks (p. 17)
  3.3 Signal Propagation (p. 19)
    3.3.1 Circuit Design (p. 19)
  3.4 Properties of STP (p. 21)
  3.5 STP Conditioned RAIN (p. 24)
  3.6 Signal Transmission in Coupled STP Networks (p. 25)
    3.6.1 Network Layout (p. 25)
    3.6.2 Coupled RAIN Dynamics (p. 28)
    3.6.3 Coupled Signal Propagation Dynamics (p. 30)
  3.7 Finding Master STP Parameters (p. 36)
  3.8 Analysis (p. 38)
    3.8.1 Analyzing Firing Rate Changes (p. 38)
    3.8.2 Critical Firing Rate (p. 44)
    3.8.3 Assessing Circuit Layer Correlation (p. 45)
  3.9 Conclusion (p. 46)
4 Learning Multiple Signals Through Reinforcement (p. 48)
  4.1 Introduction (p. 48)
  4.2 Distal Reward Problem (p. 49)
  4.3 Methods (p. 50)
    4.3.1 Reward Modulated STDP (p. 51)
    4.3.2 R-STDP with Attenuated Reward Gating (p. 52)
  4.4 Single Synapse Reinforcement Experiment (p. 54)
  4.5 Generalization to Multiple Synapse Learning (p. 57)
    4.5.1 R-STDP with STP Learns Multiple r-Patterns (p. 59)
    4.5.2 ARG-STDP Learns Multiple r-Patterns (p. 60)
    4.5.3 STP Stabilizes ARG-STDP Network Learning Dynamics (p. 61)
  4.6 Properties of ARG-STDP with STP (p. 64)
    4.6.1 Reward Predictive Properties of r-Patterns (p. 64)
    4.6.2 Learning Robustness to Reward Release Probability (p. 66)
    4.6.3 Learning Robustness to Reward Ordering (p. 68)
    4.6.4 Network Scaling (p. 69)
    4.6.5 The Reward Scheduling Problem (p. 69)
    4.6.6 Firing Rate Affects Learning Capacity (p. 72)
    4.6.7 Eligibility Trace Time Constant Affects Learning Capacity (p. 73)
    4.6.8 Interval Learning (p. 75)
  4.7 Analysis (p. 77)
    4.7.1 Defining the Correlation Metric (p. 77)
    4.7.2 Computing the Decaying Eligibility Trace (p. 78)
  4.8 Discussion (p. 81)
5 HRL Simulator (p. 85)
  5.1 Introduction (p. 85)
    5.1.1 GPGPU Programming with CUDA (p. 86)
    5.1.2 Spiking Neural Simulators (p. 87)
  5.2 Simulator Description (p. 89)
    5.2.1 User Network Model Description (p. 90)
    5.2.2 Input (p. 92)
    5.2.3 Analysis (p. 95)
  5.3 Simulator Design (p. 96)
    5.3.1 Modular Design (p. 96)
    5.3.2 Parallelizing Simulation/Communication (p. 98)
    5.3.3 MPI Communication (p. 99)
    5.3.4 Simulation (p. 103)
  5.4 Performance Evaluation (p. 112)
    5.4.1 Large-Scale Neural Model (p. 112)
    5.4.2 GPU Performance (p. 115)
    5.4.3 CPU Performance (p. 118)
    5.4.4 Network Splitting (p. 120)
    5.4.5 Memory Consumption (p. 120)
  5.5 Discussion (p. 121)
  5.6 Conclusion (p. 122)
6 Conclusion (p. 124)
References (p. 126)

List of Figures

3.1 RAIN network configuration. The red arrows indicate inhibitory connections and the blue arrows are excitatory connections. (p. 17)
3.2 The firing rate for the networks tested in the synaptic weight parameter sweep. (p. 19)
3.3 Signal propagation circuit network architecture. A naturally occurring feed-forward circuit is found within a RAIN network. The feed-forward connections are then strengthened, and this is the circuit we observe for signal propagation. (p. 20)
3.4 A) Signal propagation through 5 layers. B) A reverberating signal that is experienced in layer 5, but without inputs to layer 1. C) The average firing rate of the neurons in each layer for the duration of the experiment. (p. 21)
3.5 The dynamic synapses plotted as a function of the presynaptic firing rate. The STP parameters can be chosen to produce a fixed-point firing rate. Here, the fixed point is 10 Hz, at which point µ_mn = W_mn, which was already chosen to produce stable RAIN firing. (p. 23)
3.6 A) RAIN activity for 100 of the network neurons. The network parameters are suboptimal, leading to activity that lasts less than 2 seconds. B) RAIN activity for 100 of the network neurons. STP is employed, enabling the network to overcome the faulty choice of network parameters. The activity lasts more than 10 seconds. (p. 24)
3.7 The coupled signal propagation network architecture. Two circuit networks are weakly coupled together. The two networks have the same general neural parameters and configuration statistics, but the STP parameters for each network can be chosen independently, producing different firing dynamics in each network. The left network is referred to as Master, having STP parameters that yield self-sustained network activity. The right network is referred to as Slave, which has STP parameters that allow short excitatory bursts through, then kill network activity. (p. 25)
3.8 A) Slave and Master are uncoupled. Master continues indefinitely whereas Slave dies. B) Slave has projections onto Master. Here Slave dies, as expected, and Master continues indefinitely. (p. 29)
3.9 A) Master has projections onto Slave. This is sufficient to restart Slave whenever Slave dies. B) Slave and Master are mutually coupled. In this case, only Slave received initial inputs, and Master relied on Slave for a jump-start. This demonstrates that Slave has the ability to start Master in the event Master dies. In this configuration, both networks thrive indefinitely. (p. 31)
3.10 An analysis of the coupling required for the connections between Master and Slave. A & B) The average firing rate of Slave and Master, for one second of elapsed time, for different connectivity probabilities. These were performed with a bridge synapse strength of 30 nS. C & D) The average firing rate of Slave and Master, for one second of elapsed time, for different connectivity strengths. These were performed with a synaptic bridge connection probability of 2e-4. (p. 32)
3.11 For any layer k of interest, we construct a binary projection neuron pair. Layer k projects onto the excitatory indicator neuron (blue). The indicator neuron has an excitatory connection to the inhibitory neuron (red) which, in turn, inhibits the indicator neuron to prevent the indicator from being overwhelmed by the circuit layer during a stimulus. (p. 33)
3.12 A & B) Signal propagation through 5 layers for Master and Slave. C & D) A reverberating signal that is experienced in layer 5 of Master, but not in Slave. E & F) The average firing rate of the neurons in each layer for Slave and Master, respectively. (p. 35)
4.1 System reward R, reward tracker R_k and success signal S_k for reward channel k are plotted.
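The abstract and Figure 3.5 describe a "widely used" STP model whose parameters can be tuned so that, at a fixed-point firing rate, the effective synaptic weight µ_mn equals the static weight W_mn. The preview does not reproduce the model's equations, so the following is a minimal sketch of the Tsodyks-Markram-style facilitation/depression dynamics commonly used for this purpose; the function name and all parameter values (U, tau_f, tau_d) are illustrative assumptions, not the dissertation's.

```python
import numpy as np

def simulate_stp(spike_times, U=0.2, tau_f=0.6, tau_d=0.3, W=1.0):
    """Tsodyks-Markram-style short-term plasticity sketch.

    u: utilization (facilitation variable), x: available resources
    (depression variable). The effective synaptic weight delivered by
    each presynaptic spike is W * u * x. Parameter names and values are
    illustrative, not taken from the dissertation.
    """
    u, x = U, 1.0
    last_t = 0.0
    weights = []
    for t in spike_times:
        dt = t - last_t
        # Between spikes, u relaxes back to U and x recovers toward 1.
        u = U + (u - U) * np.exp(-dt / tau_f)
        x = 1.0 + (x - 1.0) * np.exp(-dt / tau_d)
        # On a spike: facilitation bumps u, then a fraction u of the
        # remaining resources is consumed.
        u = u + U * (1.0 - u)
        weights.append(W * u * x)
        x = x * (1.0 - u)
        last_t = t
    return weights

# Higher presynaptic rates deplete x faster, so the effective weight
# settles at a rate-dependent value: the "fixed point" idea of Figure 3.5.
print(simulate_stp(np.arange(0.0, 1.0, 0.1)))  # a 10 Hz spike train
```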
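The abstract also contrasts ARG-STDP with reward-modulated STDP (R-STDP), and Figure 4.1 refers to a per-channel reward tracker R_k and success signal S_k. As a hedged illustration of the baseline mechanism that ARG-STDP builds on, here is a sketch of a generic R-STDP update with a decaying eligibility trace, in the style of Izhikevich's distal-reward model; the attenuation and gating that distinguish ARG-STDP are the thesis's contribution and are not reproduced here. All names and constants are illustrative.

```python
import numpy as np

def r_stdp_step(w, c, pre_post_dt, reward, A_plus=0.01, A_minus=0.012,
                tau_stdp=0.02, tau_c=1.0, dt=0.001, lr=1.0):
    """One time step of generic reward-modulated STDP (R-STDP).

    c is the eligibility trace: it accumulates STDP events and decays
    with time constant tau_c; the weight only changes when a reward
    arrives, which is how a distal reward can credit an earlier pairing.
    """
    # STDP contribution from a pre/post spike pairing at this step, if any.
    stdp = 0.0
    if pre_post_dt is not None:
        if pre_post_dt > 0:       # pre before post: potentiate
            stdp = A_plus * np.exp(-pre_post_dt / tau_stdp)
        else:                     # post before pre: depress
            stdp = -A_minus * np.exp(pre_post_dt / tau_stdp)
    # Eligibility trace: leaky integration of STDP events.
    c = c + dt * (-c / tau_c) + stdp
    # Weight changes only when gated by the (possibly delayed) reward.
    w = w + lr * c * reward
    return w, c

w, c = 0.5, 0.0
w, c = r_stdp_step(w, c, pre_post_dt=0.005, reward=0.0)  # pairing, no reward yet
w, c = r_stdp_step(w, c, pre_post_dt=None, reward=1.0)   # delayed reward gates the change
```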
Recommended publications
  • The Human Brain Project
The Human Brain Project (HBP). Madrid, June 20th 2013.

Timeline: Proposal Preparation (August 2010); HBP-PS Proposal, FP7-ICT-2011-FET-F (December 2010 to May 2011); HBP Report (April 2012); HBP-PS (CSA) (Jul-Oct 2012); HBP Ramp-Up, FP7-ICT-2013-FET-F, CP-CSA (mid 2013 to end 2015); HBP Full-Scale.

HBP project fiche:
Project title: Human Brain Project
Coordinator: EPFL (Switzerland)
Countries: 23 (EU member states, Switzerland, US, Japan, China); 22 in Ramp-Up Phase
Research laboratories: 256 in whole Flagship; 110 in Ramp-Up Phase
Research institutions (partners): 150 in whole Flagship; 82 in Ramp-Up Phase plus new partners through the Competitive Call Scheme (15.5% of budget); 200 partners expected by Y5 (project participants plus new partners through the Competitive Call Scheme)
Divisions: 11
Subprojects: 13
Total costs: 1,000 M€ (1,160 M€ project total costs as of October 2012); 72.7 M€ in Ramp-Up Phase

Phases: Ramp-Up (A), 2013-2016; Fully Operational (B), 2016-2020; Sustained Operations (C), 2020-2023 (project years Y1-Y10).

Structure: 11 divisions and 13 subprojects (10 scientific and applications subprojects, plus ethics and management). Divisions and subprojects by area of activity:
Molecular & Cellular Neuroscience: SP1 Strategic Mouse Brain Data; SP2 Strategic Human Brain Data (Data)
Cognitive Neuroscience: SP3 Brain Function (Data)
Theoretical Neuroscience: SP4 Theoretical Neuroscience (Theory)
Neuroinformatics: SP5 The Neuroinformatics Platform
Brain Simulation: SP6 Brain Simulation Platform
High Performance Computing: SP7 HPC
  • Machine Learning for Neuroscience Geoffrey E Hinton
Hinton, Neural Systems & Circuits 2011, 1:12. http://www.neuralsystemsandcircuits.com/content/1/1/12. Questions & Answers, Open Access.

Machine learning for neuroscience. Geoffrey E Hinton.

What is machine learning?
Machine learning is a type of statistics that places particular emphasis on the use of advanced computational algorithms. As computers become more powerful, and modern experimental methods in areas such as imaging generate vast bodies of data, machine learning is becoming ever more important for extracting reliable and meaningful relationships and for making accurate predictions. Key strands of modern machine learning grew out of attempts to understand how large numbers of interconnected, more or less neuron-like elements could learn to achieve behaviourally meaningful computations and to extract useful features from images or sound waves.

Could you provide a brief description of the methods of machine learning?
Machine learning can be divided into three parts: 1) in supervised learning, the aim is to predict a class label or a real value from an input (classifying objects in images or predicting the future value of a stock are examples of this type of learning); 2) in unsupervised learning, the aim is to discover good features for representing the input data; and 3) in reinforcement learning, the aim is to discover what action should be performed next in order to maximize the eventual payoff.

What does machine learning have to do with neuroscience?
Machine learning has two very different relationships to neuroscience. As with any other science, modern machine-learning methods can be very helpful in analysing the data. By the 1990s, key approaches had converged on an elegant framework called 'graphical models', explained in Koller and Friedman, in which the nodes of a graph represent variables such as edges and corners in an
  • Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning
Deep Neuroevolution: Genetic Algorithms are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning. Felipe Petroski Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O. Stanley, Jeff Clune. Uber AI Labs.

Abstract. Deep artificial neural networks (DNNs) are typically trained via gradient-based learning algorithms, namely backpropagation. Evolution strategies (ES) can rival backprop-based algorithms such as Q-learning and policy gradients on challenging deep reinforcement learning (RL) problems. However, ES can be considered a gradient-based algorithm because it performs stochastic gradient descent via an operation similar to a finite-difference approximation of the gradient. That raises the question of whether non-gradient-based evolutionary algorithms can work at DNN scales. Here we demonstrate they can: we evolve the weights of a DNN with a simple, gradient-free, population-based genetic algorithm (GA) and it performs well on hard deep RL problems, including Atari and humanoid locomotion.

1. Introduction. A recent trend in machine learning and AI research is that old algorithms work remarkably well when combined with sufficient computing resources and data. That has been the story for (1) backpropagation applied to deep neural networks in supervised learning tasks such as computer vision (Krizhevsky et al., 2012) and voice recognition (Seide et al., 2011), (2) backpropagation for deep neural networks combined with traditional reinforcement learning algorithms, such as Q-learning (Watkins & Dayan, 1992; Mnih et al., 2015) or policy gradient (PG) methods (Sehnke et al., 2010; Mnih et al., 2016), and (3) evolution strategies (ES) applied to reinforcement learning benchmarks (Salimans et al., 2017). One common theme is that all of these methods are gradient-based, including ES, which involves a gradient approximation similar to finite differences (Williams, 1992; Wierstra et al., 2008; Salimans et al., 2017).
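To make the excerpt's central idea concrete, here is a minimal sketch of the kind of gradient-free, population-based GA the paper describes: mutate flat weight vectors with Gaussian noise and keep the top performers (truncation selection plus elitism). The fitness function, population size, and mutation scale below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def simple_ga(fitness, dim, pop_size=50, elite=10, sigma=0.02, generations=100):
    """Truncation-selection GA over flat parameter vectors: no gradients,
    just mutate-and-select. `fitness` (e.g. the episode return of a policy
    network using these weights) is assumed to be supplied by the caller.
    """
    rng = np.random.default_rng(0)
    pop = [rng.normal(0.0, 1.0, dim) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:elite]
        # Elitism: carry the best individual over unmodified.
        pop = [parents[0]]
        while len(pop) < pop_size:
            parent = parents[rng.integers(elite)]
            # Mutation is additive Gaussian noise on the whole weight vector.
            pop.append(parent + sigma * rng.normal(0.0, 1.0, dim))
    return max(pop, key=fitness)

# Toy usage: fitness is the negative distance to a target weight vector.
target = np.ones(8)
best = simple_ga(lambda w: -np.linalg.norm(w - target), dim=8)
```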
  • Acetylcholine in Cortical Inference
Neural Networks 15 (2002) 719–730. www.elsevier.com/locate/neunet. 2002 Special Issue.

Acetylcholine in cortical inference. Angela J. Yu*, Peter Dayan. Gatsby Computational Neuroscience Unit, University College London, 17 Queen Square, London WC1N 3AR, UK. Received 5 October 2001; accepted 2 April 2002.

Abstract. Acetylcholine (ACh) plays an important role in a wide variety of cognitive tasks, such as perception, selective attention, associative learning, and memory. Extensive experimental and theoretical work in tasks involving learning and memory has suggested that ACh reports on unfamiliarity and controls plasticity and effective network connectivity. Based on these computational and implementational insights, we develop a theory of cholinergic modulation in perceptual inference. We propose that ACh levels reflect the uncertainty associated with top-down information, and have the effect of modulating the interaction between top-down and bottom-up processing in determining the appropriate neural representations for inputs. We illustrate our proposal by means of a hierarchical hidden Markov model, showing that cholinergic modulation of contextual information leads to appropriate perceptual inference. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Acetylcholine; Perception; Neuromodulation; Representational inference; Hidden Markov model; Attention

1. Introduction. Neuromodulators such as acetylcholine (ACh), serotonin, dopamine, norepinephrine, and histamine play two characteristic roles. One, most studied in vertebrate systems, concerns the control of plasticity. The other, most studied in invertebrate systems, concerns the control of network

medial septum (MS), diagonal band of Broca (DBB), and nucleus basalis (NBM). Physiological studies on ACh indicate that its neuromodulatory effects at the cellular level are diverse, causing synaptic facilitation and suppression as well as direct hyperpolarization and depolarization, all within the same cortical area (Kimura, Fukuda, & Tsumoto, 1999).
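As a rough illustration of the proposal that ACh levels set the balance between top-down and bottom-up processing, here is a toy sketch in which an "ACh level" (read as top-down uncertainty) discounts a contextual prior when forming a posterior over interpretations. The discount-toward-uniform mixing rule is an illustrative stand-in of ours, not the paper's hidden Markov model formulation.

```python
import numpy as np

def perceptual_posterior(prior_topdown, likelihood_bottomup, ach):
    """Toy cholinergic gating of inference: ach in [0, 1] is treated as
    top-down uncertainty. High ach flattens the contextual prior toward
    uniform, so bottom-up evidence dominates; low ach lets context shape
    the percept. (Illustrative stand-in, not the paper's HMM.)"""
    n = len(prior_topdown)
    effective_prior = (1.0 - ach) * prior_topdown + ach * np.ones(n) / n
    posterior = effective_prior * likelihood_bottomup
    return posterior / posterior.sum()  # normalise to a distribution

prior = np.array([0.8, 0.1, 0.1])        # strong contextual expectation
likelihood = np.array([0.2, 0.5, 0.3])   # ambiguous sensory evidence
print(perceptual_posterior(prior, likelihood, ach=0.1))  # context wins
print(perceptual_posterior(prior, likelihood, ach=0.9))  # evidence wins
```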
  • Model-Free Episodic Control
Model-Free Episodic Control. Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis. Google DeepMind.

Abstract. State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.

1 Introduction. Deep reinforcement learning has recently achieved notable successes in a variety of domains [23, 32]. However, it is very data inefficient. For example, in the domain of Atari games [2], deep Reinforcement Learning (RL) systems typically require tens of millions of interactions with the game emulator, amounting to hundreds of hours of game play, to achieve human-level performance. As pointed out by [13], humans learn to play these games much faster. This paper addresses the question of how to emulate such fast learning abilities in a machine, without any domain-specific prior knowledge.
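The core of the episodic-control idea can be sketched in a few lines: remember the best return ever obtained from each (state, action) pair and act greedily on those memories. This is a minimal sketch under that reading of the paper; the published method additionally uses k-nearest-neighbour lookup over state embeddings for novel states, which is omitted here, and the discount factor below is an illustrative value.

```python
from collections import defaultdict

class EpisodicController:
    """Minimal model-free episodic control sketch: a table mapping
    (state, action) to the highest discounted return observed so far."""

    def __init__(self, actions):
        self.actions = actions
        self.q_ec = defaultdict(float)  # (state, action) -> best return seen

    def act(self, state):
        # Greedy action with respect to the stored episodic values.
        return max(self.actions, key=lambda a: self.q_ec[(state, a)])

    def update(self, episode):
        # episode: list of (state, action, reward); returns computed backwards.
        g = 0.0
        for state, action, reward in reversed(episode):
            g = reward + 0.99 * g  # discount factor is illustrative
            key = (state, action)
            # Keep only the best outcome ever experienced from this pair.
            self.q_ec[key] = max(self.q_ec[key], g)

ec = EpisodicController(actions=[0, 1])
ec.update([("s0", 1, 0.0), ("s1", 1, 1.0)])
print(ec.act("s0"))  # -> 1, the action on the rewarding trajectory
```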
  • When Planning to Survive Goes Wrong: Predicting the Future and Replaying the Past in Anxiety and PTSD
When planning to survive goes wrong: predicting the future and replaying the past in anxiety and PTSD. Christopher Gagne, Peter Dayan and Sonia J Bishop.

We increase our probability of survival and wellbeing by minimizing our exposure to rare, extremely negative events. In this article, we examine the computations used to predict and avoid such events and to update our models of the world and action policies after their occurrence. We also consider how these computations might go wrong in anxiety disorders and Post Traumatic Stress Disorder (PTSD). We review evidence that anxiety is linked to increased simulations of the future occurrence of high cost negative events and to elevated estimates of the probability of occurrence of such events. We also review psychological theories of PTSD in the light of newer, computational models of updating through replay and simulation. We consider whether pathological levels of re-experiencing symptomatology might reflect problems reconciling the traumatic outcome with overly optimistic priors.

We lay out this computational problem as a form of approximate Bayesian decision theory (BDT) [1] and consider how miscalibrated attempts to solve it might contribute to anxiety and stress disorders. According to BDT, we should combine a probability distribution over all relevant states of the world with estimates of the benefits or costs of outcomes associated with each state. We must then calculate the course of action that delivers the largest long-run expected value. Individuals can only possibly approximately solve this problem. To do so, they bring to bear different sources of information (e.g. priors, evidence, models of the world) and apply different methods to calculate the expected long-run values of alternate courses of action.
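The BDT computation the excerpt describes, weighting each action's outcomes by the probability of each world state and picking the maximiser, fits in a few lines. The states, utilities, and probabilities below are toy numbers of ours, chosen to echo the paper's point that a rare, extremely negative event can dominate the choice.

```python
def best_action(p_states, utility, actions):
    """Expected-value action selection under Bayesian decision theory:
    combine a distribution over world states with outcome utilities and
    return the action with the largest expected value."""
    def expected_value(a):
        return sum(p * utility[(s, a)] for s, p in p_states.items())
    return max(actions, key=expected_value)

# A rare threat (p = 0.05) with a large cost still dominates the choice:
# EV(go) = 0.95 * 1.0 + 0.05 * (-100.0) = -4.05, while EV(avoid) = 0.0.
p_states = {"safe": 0.95, "threat": 0.05}
utility = {("safe", "go"): 1.0, ("threat", "go"): -100.0,
           ("safe", "avoid"): 0.0, ("threat", "avoid"): 0.0}
print(best_action(p_states, utility, ["go", "avoid"]))  # -> "avoid"
```

On this reading, the paper's claim that anxiety involves elevated probability and cost estimates corresponds to inflating p_states["threat"] or the magnitude of its negative utility, which shifts ever more choices toward avoidance.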
  • Acknowledgment of Reviewers, 2013 (PNAS)
    Acknowledgment of Reviewers, 2013 The PNAS editors would like to thank all the individuals who dedicated their considerable time and expertise to the journal by serving as reviewers in 2013. Their generous contribution is deeply appreciated. A Harald Ade Takaaki Akaike Heather Allen Ariel Amir Scott Aaronson Karen Adelman Katerina Akassoglou Icarus Allen Ido Amit Stuart Aaronson Zach Adelman Arne Akbar John Allen Angelika Amon Adam Abate Pia Adelroth Erol Akcay Karen Allen Hubert Amrein Abul Abbas David Adelson Mark Akeson Lisa Allen Serge Amselem Tarek Abbas Alan Aderem Anna Akhmanova Nicola Allen Derk Amsen Jonathan Abbatt Neil Adger Shizuo Akira Paul Allen Esther Amstad Shahal Abbo Noam Adir Ramesh Akkina Philip Allen I. Jonathan Amster Patrick Abbot Jess Adkins Klaus Aktories Toby Allen Ronald Amundson Albert Abbott Elizabeth Adkins-Regan Muhammad Alam James Allison Katrin Amunts Geoff Abbott Roee Admon Eric Alani Mead Allison Myron Amusia Larry Abbott Walter Adriani Pietro Alano Isabel Allona Gynheung An Nicholas Abbott Ruedi Aebersold Cedric Alaux Robin Allshire Zhiqiang An Rasha Abdel Rahman Ueli Aebi Maher Alayyoubi Abigail Allwood Ranjit Anand Zalfa Abdel-Malek Martin Aeschlimann Richard Alba Julian Allwood Beau Ances Minori Abe Ruslan Afasizhev Salim Al-Babili Eric Alm David Andelman Kathryn Abel Markus Affolter Salvatore Albani Benjamin Alman John Anderies Asa Abeliovich Dritan Agalliu Silas Alben Steven Almo Gregor Anderluh John Aber David Agard Mark Alber Douglas Almond Bogi Andersen Geoff Abers Aneel Aggarwal Reka Albert Genevieve Almouzni George Andersen Rohan Abeyaratne Anurag Agrawal R. Craig Albertson Noga Alon Gregers Andersen Susan Abmayr Arun Agrawal Roy Alcalay Uri Alon Ken Andersen Ehab Abouheif Paul Agris Antonio Alcami Claudio Alonso Olaf Andersen Soman Abraham H.
  • Technical Note Q-Learning
Machine Learning, 8, 279-292 (1992). © 1992 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.

Technical Note: Q-Learning

Christopher J.C.H. Watkins, 25b Framfield Road, Highbury, London N5 1UU, England
Peter Dayan, Centre for Cognitive Science, University of Edinburgh, 2 Buccleuch Place, Edinburgh EH8 9EH, Scotland

Abstract. Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many Q values can be changed each iteration, rather than just one.

Keywords: Q-learning, reinforcement learning, temporal differences, asynchronous dynamic programming

1. Introduction. Q-learning (Watkins, 1989) is a form of model-free reinforcement learning. It can also be viewed as a method of asynchronous dynamic programming (DP). It provides agents with the capability of learning to act optimally in Markovian domains by experiencing the consequences of actions, without requiring them to build maps of the domains. Learning proceeds similarly to Sutton's (1984; 1988) method of temporal differences (TD): an agent tries an action at a particular state, and evaluates its consequences in terms of the immediate reward or penalty it receives and its estimate of the value of the state to which it is taken.
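The incremental update the note describes, improving the evaluation of one action at one state from the immediate reward plus the estimated value of the successor state, is the familiar tabular rule sketched below. The environment interface `env_step(s, a) -> (s_next, reward, done)` and all hyperparameters are assumptions for the sketch, not part of the original note.

```python
import numpy as np

def q_learning(env_step, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Tabular Q-learning in the spirit of Watkins (1989): incremental,
    model-free asynchronous dynamic programming over discrete states."""
    rng = np.random.default_rng(seed)
    q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = 0, False  # assume episodes start in state 0
        while not done:
            # Epsilon-greedy exploration keeps all actions repeatedly
            # sampled in all states, a condition of the convergence theorem.
            if rng.random() < epsilon:
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmax(q[s]))
            s_next, r, done = env_step(s, a)
            # One-step TD update toward r + gamma * max_a' Q(s', a').
            target = r + (0.0 if done else gamma * np.max(q[s_next]))
            q[s, a] += alpha * (target - q[s, a])
            s = s_next
    return q
```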
  • Human Functional Brain Imaging 1990–2009
Portfolio Review: Human Functional Brain Imaging 1990–2009. September 2011.

Acknowledgements. The Wellcome Trust would like to thank the many people who generously gave up their time to participate in this review. The project was led by Claire Vaughan and Liz Allen. Key input and support was provided by Lynsey Bilsland, Richard Morris, John Williams, Shewly Choudhury, Kathryn Adcock, David Lynn, Kevin Dolby, Beth Thompson, Anna Wade, Suzi Morris, Annie Sanderson, and Jo Scott; and Lois Reynolds and Tilli Tansey (Wellcome Trust Expert Group). The views expressed in this report are those of the Wellcome Trust project team, drawing on the evidence compiled during the review. We are indebted to the independent Expert Group and our industry experts, who were pivotal in providing the assessments of the Trust's role in supporting human functional brain imaging and have informed 'our' speculations for the future. Finally, we would like to thank Professor Randy Buckner, Professor Ray Dolan and Dr Anne-Marie Engel, who provided valuable input to the development of the timelines and report. The Wellcome Trust is a charity registered in England and Wales, no. 210183.

Contents: Acknowledgements (p. 2); Key abbreviations used in the report (p. 4); Overview and key findings (p. 4); Landmarks in human functional brain imaging (p. 10); 1. Introduction and background (p. 12); 2. Human functional brain imaging today: the global research landscape (p. 14); 2.1 The global scene (p. 14); 2.2 The UK (p. 15); 2.3 Europe (p. 17); 2.4 Industry (p. 17); 2.5 Human brain imaging
  • The XXIV Ottorino Rossi Award (CIC Edizioni Internazionali)
Editorial: The XXIV Ottorino Rossi Award

Who was Ottorino Rossi? Ottorino Rossi was born on 17th January 1877 in Solbiate Comasco, a tiny Italian village near Como. In 1895 he enrolled at the medical faculty of the University of Pavia as a student of the Ghislieri College, and during his undergraduate years he was an intern pupil of the Institute of General Pathology and Histology, which was headed by Camillo Golgi. In 1901 Rossi obtained his medical doctor degree with the highest grades and a distinction. In October 1902 he went on to the Clinica Neuropatologica (Hospital for Nervous and Mental Diseases) directed by Casimiro Mondino to learn clinical neurology. In his spare time Rossi continued to frequent the Golgi Institute, which was the leading Italian centre for biological research. Having completed his clinical preparation in Florence with Eugenio Tanzi, and in Munich at the Institute directed by Emil Kraepelin, he taught at the Universities of Siena, Sassari and Pavia. In Pavia he was made Rector of the University and was instrumental in getting the buildings of the new San Matteo Polyclinic completed.

Ottorino Rossi made important contributions to many fields of clinical neurology, neurophysiopathology and neuroanatomy. These include: the identification of glucose as the reducing agent of cerebrospinal fluid, the demonstration that fibres from the spinal ganglia pass into the dorsal branch of the spinal roots, and the description of the cerebellar symptom which he termed "the primary asymmetries of positions". Moreover, he conducted important studies on the immunopathology of the nervous system, the serodiagnosis of neurosyphilis and the regeneration of the nervous system.
  • Human Brain Project: Henry Markram Plans to Spend €1bn Building a Perfect Model of the Human Brain (The Observer)
Human Brain Project: Henry Markram plans to spend €1bn building a perfect model of the human brain. The Observer. http://www.theguardian.com/science/2013/oct/15/human-brain-project-henry-markram

Henry Markram, co-director of the Human Brain Project, based in Lausanne, Switzerland, has been awarded €1bn by the EU. Photograph: Jean-Christophe Bott/EPA

In a spartan office looking across Lake Geneva to the French Alps, Henry Markram is searching for a suitably big metaphor to describe his latest project. "It's going to be the Higgs boson of the brain, a Noah's archive of the mind," he says. "No, it's like a telescope that can span all the way across the universe of the brain from the micro to the macro level."

We are talking about the Human Brain Project, Markram's audacious plan to build a working model of the human brain – from neuron to hemisphere level – and simulate it on a supercomputer within the next 10 years. When Markram first unveiled his idea at a TEDGlobal conference in Oxford four years ago, few of his peers took him seriously. The brain was too complex, they said, and in any case there was no computer fast enough. Even last year when he presented a more detailed plan at a scientific meeting in Bern, showing how the requisite computer power would be available by 2020, many neuroscientists continued to insist it could not be done and dismissed his claims as hype.
  • WIPO Forum 2013 Brochure
WIPO Forum 2013. From inspiration to innovation: The game-changers. September 24, 2013 – 3.30 p.m. International Conference Center Geneva (CICG), Geneva, Switzerland.

What would you change to ensure that future generations see widespread improvements in nutrition, in shelter, and in new therapies to heal troubled minds and bodies?

Important breakthroughs are already on our doorstep, and the WIPO Forum 2013 is bringing together four visionary innovators, each disrupting current paradigms in a quest to improve some of the most basic elements of the human experience: food, shelter, and health. Their individual achievements are astonishing.

That they share so much in common is no less surprising. The work of each of these change-makers has challenged long-established industrial styles, methods or modes of thought, while opening up the possibility of new markets and new solutions, including some that may still lie beyond the limits of our imagination.

For forward-looking policy makers, here's the question: How do we foster a creative environment that promotes the kind of ground-breaking work done by the WIPO Forum 2013 panelists, while ensuring that the improvements lift up our entire communities? What do the innovators of the future need from us, now? The first step in finding the answer is asking the question.

Please join us for an interactive session on September 24 at the WIPO Forum 2013 on From inspiration to innovation: The game-changers and hear our panelists' stories.

Anthony Atala. For years, the needs of organ-transplant patients have far outstripped the number of donors.