Artificial Brain Project of Visual Motion

The Neuromorphic Engineer, Volume 3, Number 2, March 2007

In this issue:
• Editorial: Feeding the senses
• Supervised learning in spiking neural networks
• Dealing with unexpected words
• Embedded vision system for real-time applications
• Can spike-based speech recognition systems outperform conventional approaches?
• Book review: Analog VLSI circuits for the perception of visual motion

Brain-inspired auditory processor and the Artificial Brain project

The Korean Brain Neuroinformatics Research Program has two goals: to understand information-processing mechanisms in biological brains and to develop intelligent machines with human-like functions based on these mechanisms. We are now developing an integrated hardware and software platform for brain-like intelligent systems called the Artificial Brain. It has two microphones, two cameras, and one speaker, looks like a human head, and has the functions of vision, audition, inference, and behavior (see Figure 1).

The sensory modules receive audio and video signals from the environment, and perform source localization, signal enhancement, feature extraction, and user recognition in the forward 'path'. In the backward path, top-down attention is performed, greatly improving the recognition performance on real-world noisy speech and occluded patterns. The fusion of audio and visual signals for lip-reading is also influenced by this path.

The inference module has a recurrent architecture with internal states to implement human-like emotion and self-esteem. We would also like the Artificial Brain eventually to perform user modeling and active learning, and to be able to ask the right questions both of the right people and of other Artificial Brains.

The output module, in addition to head motion, generates human-like behavior with synthesized speech and facial representation for 'machine emotion'. It also provides computer-based services for users.
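The backward path's top-down attention can be illustrated with a toy sketch. This is purely illustrative: the article publishes no code or algorithmic detail, so the templates, the nearest-template classifier, and the gating rule below are all invented here.

```python
import numpy as np

# Purely illustrative sketch of top-down attention in a recognizer.
# The Artificial Brain's actual algorithms are not published in this article;
# every name and rule below is an assumption made for illustration only.

templates = {
    "user_a": np.array([1.0, 1.0, 0.0, 0.0]),
    "user_b": np.array([1.0, 0.0, 0.0, 1.0]),
}

def recognize(x):
    """Forward path: bottom-up nearest-template classification."""
    return min(templates, key=lambda k: np.linalg.norm(x - templates[k]))

def attend(x, expected, gain=0.5):
    """Backward path: pull the input toward a top-down expectation."""
    return x + gain * (templates[expected] - x)

occluded = np.array([1.0, 0.6, 0.0, 0.5])     # partially occluded input
print(recognize(occluded))                     # bottom-up: user_a
print(recognize(attend(occluded, "user_b")))   # with expectation: user_b
```

The point of the sketch is only the shape of the loop: the same input is re-scored after a higher-level expectation re-weights it, which is how a backward path can rescue recognition of noisy or occluded patterns.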
The Artificial Brain may be trained to work on specific applications, and the OfficeMate is our choice of application test-bed. Similar to office secretaries, the OfficeMate… (Lee, continued p. 12)

Figure 1. An Artificial Brain with two eyes (cameras), two ears (microphones), and one mouth (speaker). It can interact with humans via intelligent functions such as sound localization, speech enhancement, user and emotion recognition with speech and face images, user modeling, and active learning of new things.

EDITORIAL: What to feed the senses?

I've been thinking about getting information into the human brain. A feature for Wired magazine I finished recently discusses ways to feed in new (and potentially strange) kinds of information through the periphery: so, for instance, I got to try a 'tongue display' where images acquired by a camera on my forehead were fed into an array of pixels passing small currents through my tongue. I also got to use a haptic vest designed to help pilots understand their spatial orientation. I wanted to understand how much information the senses could handle (no point in supplying more), how much the brain could handle (without confusion), and how signals should be encoded.

I won't go into the details: you can check out the article1 or look at my blog2 if you're interested. But a number of interesting questions came up during my research: questions that the neuromorphic community may either be interested in or be able to help me with.

The first thing I found fascinating was how sketchy our knowledge of sensory bandwidth is, even though it seems like it should be crucial to all engineering in this area. For instance, one recent paper3 says the retina has about the same bandwidth as Ethernet, 10Mb/s. But it doesn't seem to relate the image coming in through the eye to that being passed through to the brain. In particular, nowhere in the paper does it suggest the retina might be doing some kind of compression, which seemed to me like an important issue (even if only to address and dismiss).

Also, I practically begged a researcher in tactile displays at the University of Wisconsin-Madison (who worked on both tongue and fingertip displays) to tell me where I could get figures for tactile bandwidth. In his opinion, it was a meaningless question. I certainly couldn't find any useful literature on the subject myself: not for this or any of the other senses I was looking at.

The other main issue that started to intrigue me was attention. I know this makes me slow on the uptake, since this has been a 'hot topic' for a long time, but I had no particular reason to be deeply interested until now. Specifically, I've become intrigued by two things: how attention is split up within a particular sense, and among the senses. My interest came from the fact that the tongue-display system I used, which was intended to help people with macular degeneration, felt very 'visual'. My memories of using the device (blindfolded) are not of feeling sensations in the tongue, but instead of seeing a black-and-white low-resolution world. I'm told that this 'visual' feeling is probably due to the fact that the information is feeding into the visual part of the extrastriate cortex, the bit involved with mental imagery. Which raises a question: just how much imagery can a person handle, and does it matter where that imagery comes from?

An interesting development that relates a little to this is the work of some military researchers investigating the advisability of feeding different images (say, a close-up of a sniper and a view of the whole building or scene) into the left and right eyes. You can read the work yourself, but the bottom line is that it doesn't work well: performance and reaction times drop because the brain doesn't seem to be able to take it all in.4

There are many theories of attention, of course, but one presented by a colleague of mine here at Imperial College London recently seemed very persuasive.5 (Even though he used the 'C' word, consciousness, when he described it.) He presented new neuromodelling work on something called the global workspace theory to show how different sensory inputs can compete with each other to produce psychological phenomena that we know take place, and which seems to have biological plausibility. Of course, it's still far from answering the engineering question, 'Exactly how much can we usefully put in?'

One last thing I've been wondering about (to no avail so far) is whether the form of an incoming signal matters as much as its source. Specifically, if 'visual' information (images of remote objects, rather than those on the 2D periphery of the body) comes in through the tongue, how is the bit of the brain that deals with the tongue equipped to decipher it? Does it get help from some bit of the visual cortex (perhaps a higher-level part) or does it just figure out how to process images?

Any answers or leads would be much appreciated: and I promise to share them!

Sunny Bains
Editor, The Neuromorphic Engineer
http://www.sunnybains.com

References
1. Sunny Bains, Mixed Feelings, Wired 15.4, April 2007.
2. http://www.sunnybains.com/blog
3. Kristin Koch et al., How Much the Eye Tells the Brain, Current Biology 16 (14), 2006.
4. David Curry et al., Dichoptic Image Fusion in Human Vision System, Proc. SPIE 6224, 2006.
5. Murray Shanahan, A Spiking Neuron Model of Cortical Broadcast and Competition, Consciousness and Cognition, 2007 (in press).

The Neuromorphic Engineer is published by the Institute of Neuromorphic Engineering, Institute for Systems Research, AV Williams Bldg., University of Maryland, College Park, MD 20742 (http://www.ine-web.org). Editor: Sunny Bains, Imperial College London, [email protected]. Editorial Assistant: Dylan Banks. Editorial Board: David Balya, Avis Cohen, Tobi Delbrück, Ralph Etienne-Cummings, Timothy Horiuchi, Auke Ijspeert, Giacomo Indiveri, Shih-Chii Liu, Jonathan Tapson, André van Schaik, Leslie Smith. This material is based upon work supported by the National Science Foundation under Grant No. IBN-0129928. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Supervised learning in spiking neural networks

Spiking neural networks (SNNs)1-5 exhibit interesting properties that make them particularly suitable for applications that require fast and efficient computation, and where the timing of input/output signals carries important information. However, the use of such networks in practical, goal-oriented applications has long been limited by the lack of appropriate supervised-learning methods. Recently, several approaches for learning in SNNs have been proposed.3 Here, we will focus on one called ReSuMe2,6 (remote supervised method), which corresponds to the Widrow-Hoff rule well known from traditional artificial neural networks.

…(unpublished results). This property has very important outcomes for the possible applications of ReSuMe: e.g. in prediction tasks, where SNN-based adaptive models could predict the behaviour of reference objects in on-line mode.

Especially promising applications of SNNs are in neuroprostheses for human patients with dysfunctions of the visual, auditory, or neuro-muscular systems. Our initial simulations in this area point out the suitability of ReSuMe as a training method for SNN-based neurocontrollers in movement generation and control tasks.8,9,11

[Figure: spike rasters of output neurons #1-#10, before training]

References
…Neural Networks, pp. 623-629, Bruges, 2006.
11. F. Ponulak and A. Kasinski, ReSuMe learning method for Spiking Neural Networks dedicated to neuroprostheses control: Dynamical principles for neuroscience and intelligent biomimetic devices, EPFL LATSIS Symp., pp. 119-120, Lausanne, 2006.
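The Widrow-Hoff correspondence mentioned above can be sketched in a few lines. This is an assumed, minimal discrete-time form, not the published rule: the actual learning-window shape, time constants, and non-Hebbian term are defined in the ReSuMe papers; the kernel and parameter values below are illustrative choices.

```python
import numpy as np

# Minimal discrete-time sketch of a ReSuMe-style update. As in Widrow-Hoff,
# the change is driven by the error between desired and actual output spikes,
# here gated by an exponentially decaying trace of recent presynaptic spikes.
# The exact rule is given in the ReSuMe papers; this form is an assumption.

def resume_update(w, pre, out, target, lr=0.05, a=0.01, tau=5.0):
    """w: weights (n_inputs,); pre: 0/1 spikes (n_inputs, T); out, target: (T,)."""
    T = pre.shape[1]
    kernel = np.exp(-np.arange(T) / tau)      # learning window (illustrative)
    for t in range(T):
        err = target[t] - out[t]              # +1: missing spike, -1: extra spike
        if err != 0:
            # Recency-weighted trace of each input's spikes up to time t.
            trace = pre[:, :t + 1] @ kernel[:t + 1][::-1]
            w = w + lr * err * (a + trace)    # 'a' is the non-Hebbian term
    return w

# An input that spikes just before a desired (but missed) output spike is
# strengthened more than an input that stays silent.
pre = np.array([[0, 1, 0, 0],
                [0, 0, 0, 0]])
w = resume_update(np.zeros(2), pre, out=np.zeros(4), target=np.array([0, 0, 1, 0]))
```

The design mirrors what makes the method 'remote': the teacher signal (`target`) never drives the output neuron directly; it only steers the weight change, so the same rule can train neurons embedded in a larger network.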