Creating a PDF Document Using pdflatex
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
- Nonlinear Characteristics of Neural Signals
Z.Zh. Zhanabaev, T.Yu. Grevtseva, Y.T. Kozhagulov — (a) National Nanotechnology Open Laboratory, (b) Scientific-Research Institute of Experimental and Theoretical Physics, (c) Department of Solid State Physics and Nonlinear Physics, Faculty of Physics and Technology; all at Al-Farabi Kazakh National University, al-Farabi Avenue 71, Almaty 050038, Kazakhstan; [email protected].
Abstract: The study is devoted to the definition of generalized metrical and topological (informational entropy) characteristics of neural signals via their well-known theoretical models. We have shown that the time dependence of the action potential of neurons is scale invariant. Information and entropy of neural signals have constant values in the case of self-similarity and self-affinity. Keywords: neural networks, information, entropy, scale invariance, metrical, topological characteristics.
1. Introduction: The study of artificial neural networks is one of the most urgent [...] quantities set are used as quantitative characteristics of chaos [18,19]. A possibility to describe different types of behaviors of neuronal action potentials via fractal measures is described in [16]. So, we can formulate a problem: do any entropic laws describing neuron dynamics exist? Obviously, in the case of self-similarity (similarity coefficients for the different variables equal each other) and self-affinity (similarity coefficients are different) of fractal measures, we expect to find the existence of intervals with constant entropy. Formulation of two additional laws (fractal and entropic) would help us to build more efficient neural networks for classification, recognition, identification, and process control.
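To make the abstract's terms concrete, here is a standard way to write the informational entropy and the scaling relation in question; these are textbook definitions, not formulas taken from the publication above.

```latex
% Shannon (informational) entropy over the bin probabilities p_i of a signal:
\[ S = -\sum_{i} p_i \ln p_i \]
% Scale invariance of a measure f: rescaling time t -> \lambda t rescales
% the measure by a power law,
\[ f(\lambda t) = \lambda^{H} f(t). \]
% In the abstract's terminology: self-similarity is the case where the
% similarity coefficients of all variables coincide (H = 1); self-affinity
% is the case where they differ (H \neq 1).
```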
- Image Completion Using Spiking Neural Networks
International Journal of Innovative Technology and Exploring Engineering (IJITEE), ISSN: 2278-3075, Volume 9, Issue 1, November 2019. Vineet Kumar, A. K. Sinha, A. K. Solanki.
Abstract: In this paper, we show how spiking neural networks are applied to image repainting, with results that are outstanding compared with other machine learning techniques. Spiking neural networks use the shape of patterns and shifting distortion on images and positions to retrieve the original picture. Spiking neural networks are thus one of the advanced, third-generation machine learning techniques, and an extension of the concepts of neural networks and convolutional neural networks. Spiking neural networks (SNNs) are biologically plausible, computationally more powerful, and considerably faster. The proposed algorithm is tested on digital images of different sizes over which free-form masks are applied; its performance is examined in terms of PSNR, QF, and SSIM. The model is effective and fast at completing [...] conduction delays. The network has spike timing-dependent plasticity (STDP). Synaptic connections among neurons have a fixed conduction delay, expected as a random integer that can vary between 1 ms and 20 ms. The advantage of the spiking network is that the output can be represented sparsely in time. SNNs keep energy consumption low, as in biological systems, because spikes carry high information content [24].
II. Literature Survey: It is important to classify the objects in an image so that the process solves many issues related to image processing and computer vision.
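As a rough illustration of the STDP rule and random conduction delays mentioned in the abstract (the paper's exact parameters and network are not reproduced here; the exponential pair-based form and the constants below are generic textbook choices):

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP: weight change as a function of the spike timing
    difference dt = t_post - t_pre (in ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)

# Random integer conduction delays in [1, 20] ms, as described in the abstract:
rng = np.random.default_rng(0)
delays = rng.integers(1, 21, size=100)  # one delay per synapse
print(stdp_dw(+5.0), stdp_dw(-5.0), delays[:5])
```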
- Electromagnetic Field and TGF-β Enhance the Compensatory Plasticity after Sensory Nerve Injury in Cockroach Periplaneta americana
www.nature.com/scientificreports (open access). Milena Jankowska, Angelika Klimek, Chiara Valsecchi, Maria Stankiewicz, Joanna Wyszkowska & Justyna Rogalska.
Recovery of function after sensory nerve injury involves compensatory plasticity, which can be observed in invertebrates. The aim of the study was the evaluation of compensatory plasticity in the cockroach (Periplaneta americana) nervous system after sensory nerve injury, and assessment of the effect of electromagnetic field exposure (EMF, 50 Hz, 7 mT) and TGF-β on this process. The bioelectrical activities of nerves (pre- and post-synaptic parts of the sensory path) were recorded under wind stimulation of the cerci before and after right cercus ablation, and in insects exposed to EMF and treated with TGF-β. Ablation of the right cercus caused an increase in activity of the left presynaptic part of the sensory path. Exposure to EMF and TGF-β induced an increase of activity in both parts of the sensory path. This suggests strengthening effects of EMF and TGF-β on the insect's ability to recognize stimuli after one cercus ablation. Data from locomotor tests confirmed the electrophysiological results. The takeover of the function of one cercus by the second proves the existence of compensatory plasticity in the cockroach escape system, which makes it a good model for studying compensatory plasticity. We recommend further research on EMF as a useful factor in neurorehabilitation. Injuries to the nervous system caused by acute trauma, neurodegenerative diseases or even old age are hard to reverse and represent an enormous challenge for modern medicine.
- 1.2 Computational Neuroscience
Master thesis: Petr Houška, Deep-learning architectures for analysing population neural data. Department of Software and Computer Science Education. Supervisor: Mgr. Ján Antolík, Ph.D. Study programme: Computer Science; study branch: Artificial Intelligence. Prague, 2021.
I declare that I carried out this master thesis independently, and only with the cited sources, literature and other professional sources. It has not been used to obtain another or the same degree. I understand that my work relates to the rights and obligations under Act No. 121/2000 Coll., the Copyright Act, as amended, in particular the fact that Charles University has the right to conclude a license agreement on the use of this work as a school work pursuant to Section 60 subsection 1 of the Copyright Act.
First and foremost, I would like to thank my supervisor Mgr. Ján Antolík, Ph.D. for inviting me to a field that was entirely foreign to me at the beginning, providing thorough and ever-friendly guidance along the (not so short) journey. On a similar note, I want to express my deep gratitude to Prof. Daniel A. Butts for his invaluable and incredibly welcoming consultations regarding both the NDN3 library and the field of computational neuroscience in general. Although I did not end up doing my thesis under his supervision, RNDr. Milan Straka, Ph.D. deserves recognition for patiently lending me his valuable time while I was trying to figure out my eventual thesis topic. Last but not least, I want to acknowledge at least some of the people who were, albeit indirectly, instrumental to me finishing this thesis.
- Spike-Based Bayesian-Hebbian Learning in Cortical and Subcortical Microcircuits
Philip J. Tully. Doctoral thesis, Stockholm, Sweden, 2017. TRITA-CSC-A-2017:11, ISSN 1653-5723, ISRN-KTH/CSC/A-17/11-SE, ISBN 978-91-7729-351-4. KTH School of Computer Science and Communication, SE-100 44 Stockholm, Sweden. Academic dissertation which, by permission of KTH Royal Institute of Technology (Kungl Tekniska högskolan), is submitted for public examination for the degree of Doctor of Technology in Computer Science on Tuesday, 9 May 2017, at 13:00 in F3, Lindstedtsvägen 26, KTH Royal Institute of Technology, Valhallavägen 79, Stockholm. © Philip J. Tully, May 2017. Printed by Universitetsservice US AB.
Abstract: Cortical and subcortical microcircuits are continuously modified throughout life. Despite ongoing changes, these networks stubbornly maintain their functions, which persist although destabilizing synaptic and nonsynaptic mechanisms should ostensibly propel them towards runaway excitation or quiescence. What dynamical phenomena exist to act together to balance such learning with information processing? What types of activity patterns do they underpin, and how do these patterns relate to our perceptual experiences? What enables learning and memory operations to occur despite such massive and constant neural reorganization? Progress towards answering many of these questions can be pursued through large-scale neuronal simulations. Inspiring some of the most seminal neuroscience experiments, theoretical models provide insights that demystify experimental measurements and even inform new experiments. In this thesis, a Hebbian learning rule for spiking neurons inspired by statistical inference is introduced. The spike-based version of the Bayesian Confidence Propagation Neural Network (BCPNN) learning rule involves changes in both synaptic strengths and intrinsic neuronal currents.
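For orientation, the core of the BCPNN rule can be sketched as below: weights are log-odds of co-activation and biases are log priors. This is a minimal non-spiking sketch with illustrative names; the thesis's spike-based version estimates these probabilities online from exponentially decaying spike traces, which is omitted here.

```python
import numpy as np

def bcpnn_weights(p_i, p_j, p_ij, eps=1e-6):
    """BCPNN: the weight measures how much more often the pre/post pair
    is co-active than expected by chance; the bias is the log prior.
    p_i, p_j: mean activation probabilities of pre/post units;
    p_ij: their joint activation probability."""
    w_ij = np.log((p_ij + eps) / ((p_i + eps) * (p_j + eps)))  # synaptic weight
    b_j = np.log(p_j + eps)                                    # intrinsic bias
    return w_ij, b_j

# Units that fire together more often than chance get a positive weight:
print(bcpnn_weights(p_i=0.2, p_j=0.2, p_ij=0.1))   # w > 0
print(bcpnn_weights(p_i=0.2, p_j=0.2, p_ij=0.01))  # w < 0
```

The intrinsic bias term is what connects this rule to the "intrinsic neuronal currents" mentioned in the abstract: it changes with a unit's own activation statistics, independently of any one synapse.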
- Operational Neural Networks
Serkan Kiranyaz (Electrical Engineering, College of Engineering, Qatar University, Qatar; e-mail: [email protected]), Turker Ince (Electrical & Electronics Engineering Department, Izmir University of Economics, Turkey; e-mail: [email protected]), Alexandros Iosifidis (Department of Engineering, Aarhus University, Denmark; e-mail: [email protected]) and Moncef Gabbouj (Department of Signal Processing, Tampere University of Technology, Finland; e-mail: [email protected]).
Abstract: Feed-forward, fully-connected Artificial Neural Networks (ANNs), or the so-called Multi-Layer Perceptrons (MLPs), are well-known universal approximators. However, their learning performance varies significantly depending on the function or the solution space that they attempt to approximate. This is mainly because of their homogeneous configuration based solely on the linear neuron model. Therefore, while they learn very well problems with a monotonous, relatively simple and linearly separable solution space, they may entirely fail to do so when the solution space is highly nonlinear and complex.
[...] network architectures based on the data at hand, either progressively [4], [5] or by following extremely laborious search strategies [6]-[10], the resulting network architectures may still exhibit varying or entirely unsatisfactory performance levels, especially when facing highly complex and nonlinear problems. This is mainly due to the fact that all such traditional neural networks employ a homogeneous network structure consisting of only a crude model of the biological neurons. This neuron model is capable of performing only the linear transformation (i.e., linear weighted sum) [12], while biological neurons or neural systems in general are built from a [...]
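The contrast the abstract draws, a fixed linear weighted sum versus a richer per-synapse transformation, can be sketched as follows. The nodal/pool decomposition and the operator choices below are examples in the spirit of operational neurons, not the paper's exact formulation or operator library.

```python
import numpy as np

def linear_neuron(x, w, b):
    """Classical MLP neuron: linear weighted sum, then activation."""
    return np.tanh(np.dot(w, x) + b)

def operational_neuron(x, w, b, nodal=np.multiply, pool=np.sum):
    """Generalized neuron: a nodal operator is applied to each
    input/weight pair, and a pool operator aggregates the results.
    nodal=multiply with pool=sum recovers the classical neuron."""
    return np.tanh(pool(nodal(w, x)) + b)

x = np.array([0.5, -1.0, 2.0]); w = np.array([0.3, 0.8, -0.1]); b = 0.1
print(linear_neuron(x, w, b))
print(operational_neuron(x, w, b))                       # identical to above
print(operational_neuron(x, w, b,
                         nodal=lambda w, x: np.sin(w * x),  # nonlinear variant
                         pool=np.median))
```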
- Distributed Circuit Plasticity: New Clues for the Cerebellar Mechanisms of Learning
Egidio D'Angelo, Lisa Mapelli, Claudia Casellato, Jesus A. Garrido, Niceto Luque, Jessica Monaco, Francesca Prestori, Alessandra Pedrocchi & Eduardo Ros.
Abstract: The cerebellum is involved in learning and memory of sensorimotor skills. However, the way this process takes place in local microcircuits is still unclear. The initial proposal, cast into the Motor Learning Theory, suggested that learning had to occur at the parallel fiber–Purkinje cell synapse under supervision of climbing fibers. However, the uniqueness of this mechanism has been questioned, and multiple forms of long-term plasticity have been revealed at various locations in the cerebellar circuit, including synapses and neurons in the granular layer, molecular layer and deep cerebellar nuclei. At present, more than 15 forms of plasticity have been reported. There has been a long debate on which plasticity is more relevant to specific aspects of learning, but this question turned out to be [...] it can easily cope with multiple behaviors, thereby endowing the cerebellum with the properties needed to operate as an effective generalized forward controller.
Keywords: cerebellum, distributed plasticity, long-term synaptic plasticity, LTP, LTD, learning, memory.
Introduction: The cerebellum is involved in the acquisition of procedural memory, and several attempts have been made at linking cerebellar learning to the underlying neuronal circuit mechanisms.
- Chapter 02: Fundamentals of Neural Networks
2. Fundamentals of Neural Networks — Ali Zilouchian.
2.1 Introduction: For many decades, it has been a goal of science and engineering to develop intelligent machines with a large number of simple elements. References to this subject can be found in the scientific literature of the 19th century. During the 1940s, researchers desiring to duplicate the function of the human brain developed simple hardware (and later software) models of biological neurons and their interaction systems. McCulloch and Pitts [1] published the first systematic study of the artificial neural network. Four years later, the same authors explored network paradigms for pattern recognition using a single-layer perceptron [2]. In the 1950s and 1960s, a group of researchers combined these biological and psychological insights to produce the first artificial neural network (ANN) [3,4]. Initially implemented as electronic circuits, they were later converted into the more flexible medium of computer simulation. However, researchers such as Minsky and Papert [5] later challenged these works. They strongly believed that intelligent systems are essentially symbol processing of the kind readily modeled on the von Neumann computer. For a variety of reasons, the symbol-processing approach became the dominant method. Moreover, the perceptron as proposed by Rosenblatt turned out to be more limited than first expected [4]. Although further investigations in ANNs continued during the 1970s by several pioneering researchers such as Grossberg, Kohonen, Widrow, and others, their works received relatively little attention. The primary factors for the recent resurgence of interest in the area of neural networks are the extension of Rosenblatt, Widrow and Hoff's works dealing with learning in complex, multi-layer networks, Hopfield's mathematical foundation for understanding the dynamics of an important class of networks, and much faster computers than those of the 1950s and '60s.
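A minimal version of the single-layer perceptron referred to in this introduction, with Rosenblatt's error-driven update (illustrative code, not Zilouchian's notation):

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Rosenblatt perceptron: nudge weights by the prediction error.
    X: (n_samples, n_features); y: labels in {0, 1}."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

# Learns linearly separable functions such as AND...
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(train_perceptron(X, np.array([0, 0, 0, 1])))
# ...but no weight vector can represent XOR (labels 0, 1, 1, 0):
# the limitation stressed by Minsky and Papert.
```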
- Imperfect Synapses in Artificial Spiking Neural Networks
A thesis submitted in partial fulfilment of the requirements for the Degree of Master of Computer Science by Hayden Jackson, University of Canterbury, 2016.
Abstract: The human brain is a complex organ containing about 100 billion neurons, connecting to each other by as many as 1000 trillion synaptic connections. In neuroscience and computer science, spiking neural network models are a conventional way to simulate particular regions of the brain. In this thesis we look at these spiking neural networks, and how we can benefit future works and develop new models. We have two main focuses. Our first focus is on the development of a modular framework for devising new models and unifying terminology when describing artificial spiking neural networks. We investigate models and algorithms proposed in the literature and offer insight on designing spiking neural networks. We propose the Spiking State Machine, a subset of Liquid State Machines, as a standard experimental framework; its design focuses on the modularity of the liquid component with respect to spiking neural networks. We also develop an ontology for describing the liquid component, to distinctly differentiate the terminology describing our simulated spiking neural networks from its anatomical counterpart. We then evaluate the framework, looking for consequences of the design and restrictions of the modular liquid component. For this, we use nonlinear problems which have no biological relevance; that is, issues that are irrelevant for the brain to solve, as it is not likely to encounter them in real-world circumstances.
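For context, the liquid-plus-readout architecture that a Liquid State Machine builds on can be caricatured with a fixed random recurrent "liquid" driven by the input, and a linear readout that is the only trained part. The rate-based sketch below (all names, sizes and the toy task are arbitrary choices of this illustration) shows only that separation of components, not the thesis's spiking implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50                                             # liquid size (arbitrary)
W_rec = rng.normal(0, 0.5 / np.sqrt(N), (N, N))    # fixed random recurrence
W_in = rng.normal(0, 1.0, (N, 1))                  # fixed input projection

def run_liquid(u):
    """Drive the fixed random network with input sequence u and
    collect its states; only the readout below is ever trained."""
    x = np.zeros(N); states = []
    for ut in u:
        x = np.tanh(W_rec @ x + W_in[:, 0] * ut)   # rate-based toy update
        states.append(x.copy())
    return np.array(states)

u = rng.uniform(-1, 1, 200)
S = run_liquid(u)
target = np.roll(u, 3)                  # toy task: recall input 3 steps back
W_out, *_ = np.linalg.lstsq(S, target, rcond=None)  # train linear readout only
print(np.corrcoef(S @ W_out, target)[0, 1])
```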
- Computational Models of Psychiatric Disorders
Applied Neuroscience, Columbia Science Honors Program, Fall 2016.
Objective: the role of computational models in psychiatry. Agenda: 1. Modeling spikes and firing rates (overview; guest lecture by Evan Schaffer and Sean Escola); 2. Psychiatric disorders (epilepsy, schizophrenia); 3. Conclusion.
Why use models? Quantitative models force us to think about and formalize hypotheses and assumptions; models can integrate and summarize observations across experiments and laboratories; and a model done well can lead to non-intuitive experimental predictions. A quantitative model, implemented through simulations, can be useful from an engineering standpoint (e.g., face recognition), and a model can point to important missing data, critical information, and decisive experiments.
Case study: a neuron-glia signaling network in the active brain, i.e. the chemical signaling underlying neuron-glia interactions. Glial cells are believed to be actively involved in information processing and synaptic integration. This opens up new perspectives for understanding the pathogenesis of brain diseases; for instance, inflammation results in known changes in glial cells, especially astrocytes and microglia.
Single-neuron models. Central question: what is the correct level of abstraction? Candidates include filter operations, the integrate-and-fire model, the Hodgkin-Huxley model, multi-compartment models, and models of spines and channels. (Slide image: abstract thought as depicted in Pixar's Inside Out.) The artificial neuron model aims for computational effectiveness.
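Of the single-neuron models listed in these slides, the leaky integrate-and-fire model is the simplest to write down; below is a textbook sketch (all parameter values are illustrative, not from the lecture):

```python
def simulate_lif(current_na, dt_ms=0.1, t_total_ms=100.0,
                 tau_ms=10.0, r_mohm=10.0, v_rest=-70.0,
                 v_thresh=-55.0, v_reset=-75.0):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - V_rest) + R*I.
    Emits a spike and resets whenever V crosses threshold."""
    steps = int(t_total_ms / dt_ms)
    v = v_rest
    spikes = []
    for step in range(steps):
        dv = (-(v - v_rest) + r_mohm * current_na) / tau_ms
        v += dv * dt_ms                 # forward-Euler integration
        if v >= v_thresh:
            spikes.append(step * dt_ms)  # record spike time in ms
            v = v_reset
    return spikes

print(simulate_lif(current_na=2.0))  # constant drive -> regular spiking
```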
- Role of Delayed Nonsynaptic Neuronal Plasticity in Long-Term Associative Memory
Current Biology 16, 1269–1279, July 11, 2006. © 2006 Elsevier Ltd. DOI: 10.1016/j.cub.2006.05.049.
Ildikó Kemenes, Volko A. Straub, Eugeny S. Nikitin, Kevin Staras, Michael O'Shea, György Kemenes, and Paul R. Benjamin — Sussex Centre for Neuroscience, Department of Biological and Environmental Sciences, School of Life Sciences, University of Sussex, Falmer, Brighton BN1 9QG, United Kingdom.
Summary. Background: It is now well established that persistent nonsynaptic neuronal plasticity occurs after learning and, like synaptic plasticity, it can be the substrate for long-term memory. What still remains unclear, though, is how nonsynaptic plasticity contributes to the altered neural network properties on which memory depends. Understanding how nonsynaptic plasticity is translated [...] memory, and we describe how this information can be translated into modified network and behavioral output.
Introduction: It is now widely accepted that nonsynaptic as well as synaptic plasticity are substrates for long-term memory [1–4]. Although information is available on how nonsynaptic plasticity can emerge from cellular processes active during learning [3], far less is understood about how it is translated into persistently modified behavior. There are, for instance, important unanswered questions regarding the timing and persistence of nonsynaptic plasticity and the relationship between nonsynaptic plasticity and changes in synaptic output. In this paper we address these important issues by looking at an example of learning-induced nonsynaptic plasticity (somal depolarization), its onset and persistence, and its effects on synaptically mediated network activation in an experimentally tractable molluscan model system.
- Long-Term Plasticity of Intrinsic Excitability: Learning Rules and Mechanisms
Gaël Daoudal and Dominique Debanne — Institut National de la Santé et de la Recherche Médicale UMR464 Neurobiologie des Canaux Ioniques, Institut Fédératif Jean Roche, Faculté de Médecine Secteur Nord, Université d'Aix-Marseille II, 13916 Marseille, France.
Spatio-temporal configurations of distributed activity in the brain are thought to contribute to the coding of neuronal information, and synaptic contacts between nerve cells could play a central role in the formation of privileged pathways of activity. Synaptic plasticity is not the exclusive mode of regulation of information processing in the brain; persistent regulation of ionic conductances in specialized neuronal compartments such as the dendrites, the cell body, and the axon could also modulate, in the long term, the propagation of neuronal information. Persistent changes in intrinsic excitability have been reported in several brain areas in which activity is elevated during classical conditioning. The role of synaptic activity seems to be determinant in the induction, but the learning rules and the underlying mechanisms remain to be defined. We discuss here the role of synaptic activity in the induction of intrinsic plasticity in cortical, hippocampal, and cerebellar neurons. Activation of glutamate receptors initiates a long-term modification in neuronal excitability that may represent a parallel, synergistic substrate for learning and memory. Similar to synaptic plasticity, long-lasting intrinsic plasticity appears to be bidirectional and to express a certain level of input or cell specificity.