Brain Computation by Assemblies of Neurons
Christos H. Papadimitriou(a,1), Santosh S. Vempala(b), Daniel Mitropolsky(a), Michael Collins(a), and Wolfgang Maass(c)

(a) Department of Computer Science, Columbia University, New York, NY 10027; (b) College of Computing, Georgia Institute of Technology, Atlanta, GA 30332; (c) Institute of Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria

Contributed by Christos H. Papadimitriou, March 27, 2020 (sent for review February 6, 2020; reviewed by Angela D. Friederici and Tomaso A. Poggio)

Assemblies are large populations of neurons believed to imprint memories, concepts, words, and other cognitive information. We identify a repertoire of operations on assemblies. These operations correspond to properties of assemblies observed in experiments, and can be shown, analytically and through simulations, to be realizable by generic, randomly connected populations of neurons with Hebbian plasticity and inhibition. Assemblies and their operations constitute a computational model of the brain which we call the Assembly Calculus, occupying a level of detail intermediate between the level of spiking neurons and synapses and that of the whole brain. The resulting computational system can be shown, under assumptions, to be, in principle, capable of carrying out arbitrary computations. We hypothesize that something like it may underlie higher human cognitive functions such as reasoning, planning, and language. In particular, we propose a plausible brain architecture based on assemblies for implementing the syntactic processing of language in cortex, which is consistent with recent experimental results.

neuronal assemblies | Assembly Calculus | computation | language in the brain | random graph

Significance

Our expanding understanding of the brain at the level of neurons and synapses, and the level of cognitive phenomena such as language, leaves a formidable gap between these two scales. Here we introduce a computational system which promises to bridge this gap: the Assembly Calculus. It encompasses operations on assemblies of neurons, such as project, associate, and merge, which appear to be implicated in cognitive phenomena, and can be shown, analytically as well as through simulations, to be plausibly realizable at the level of neurons and synapses. We demonstrate the reach of this system by proposing a brain architecture for syntactic processing in the production of language, compatible with recent experimental results.

How does the brain beget the mind? How do molecules, cells, and synapses effect cognition, behavior, intelligence, reasoning, and language? The remarkable and accelerating progress in neuroscience, both experimental and theoretical–computational, does not seem to bring us closer to an answer: The gap is formidable, and seems to necessitate the development of new conceptual frameworks. As Richard Axel (1) recently put it, "we do not have a logic for the transformation of neural activity into thought and action. I view discerning [this] logic as the most important future direction of neuroscience." What kind of formal system, embodying and abstracting the realities of neural activity, would qualify as the sought "logic"?

We propose a formal computational model of the brain based on assemblies of neurons; we call this system the Assembly Calculus. In terms of detail and granularity, the Assembly Calculus occupies a position intermediate between the level of individual neurons and synapses and the level of the whole-brain models useful in cognitive science (e.g., refs. 2 and 3).

The basic elementary object of our system is the assembly of excitatory neurons. The idea of assemblies is, of course, not new. They were first hypothesized seven decades ago by Donald O. Hebb (4) to be densely interconnected sets of neurons whose loosely synchronized firing in a pattern is coterminous with the subject thinking of a particular concept or idea. Assembly-like formations have been sought by researchers during the decades following Hebb's prediction (see, e.g., ref. 5), until they were clearly identified more than a decade ago through calcium imaging (6, 7). More recently, assemblies (sometimes called ensembles) and their dynamic behavior have been studied extensively in the animal brain (see, e.g., ref. 8), while assemblies are the central theme of György Buzsáki's recent book (9).

Our calculus outfits assemblies with certain operations that create new assemblies and/or modify existing ones: project, reciprocal project, associate, merge, and a few others. These operations reflect known properties of assemblies observed in experiments, and they can be shown, either analytically or through simulations (often both), to result from the activity of neurons and synapses. In other words, the high-level operations of this system can be "compiled down" to the world of neurons and synapses—a fact reminiscent of the way in which high-level programming languages are translated into machine code.

Model

Our mathematical results, as well as most of our simulation results, employ a simplified and analytically tractable, yet plausible, model of neurons and synapses. We assume a finite number of brain areas denoted A, B, C, etc., intended to stand for an anatomically and functionally meaningful partition of the cortex (Fig. 1). Each area contains a population of n excitatory neurons with random recurrent connections.* By this, we mean that each ordered pair of neurons in an area has the same small probability p of being connected by a synapse, independently of what happens to other pairs—this is a well-studied framework usually referred to as the Erdős–Rényi graph, or G_{n,p} (10). In addition, for certain ordered pairs of areas, say (A, B), there are random afferent synaptic interconnections from A to B; that is, for every neuron in A and every neuron in B, there is a chance p that there is a synaptic connection from the former to the latter.†

We use this model to fathom, quantitatively, the creation and manipulation of assemblies of neurons. Since the model is probabilistic (by virtue of the random synaptic connectivity), our analytical results postulating the effectiveness of the various operations must contain the clause "with high probability," where the event space is implicitly the underlying random graph. We assume that all cells in an assembly x belong to the same brain area, denoted area(x).

Our model also encompasses simplified forms of plasticity and inhibition. We assume multiplicative Hebbian plasticity: If at a time step neuron i fires and, at the next time step, neuron j fires, and there is a synapse from i to j, the weight of that synapse is multiplied by (1 + β), where β is a plasticity constant. We model inhibition by assuming that firing proceeds in discrete time steps and that, at each step, a fixed number k of the n excitatory neurons in any area fire—in particular, those neurons which receive the k highest excitatory inputs.

The four basic parameters of our model are these: n (the number of excitatory neurons in an area, and the basic parameter of our model), p (the probability of recurrent and afferent synaptic connectivity), k (the maximum number of firing neurons in any area), and the plasticity coefficient β. Typical values of these parameters in our simulations are n = 10^6 to 10^7, p = 10^-3, k = 10^2 to 10^3, and β = 0.1.

Fig. 1. A mathematical model of the brain. Our analytical results, as well as most of our simulations, use a formal model of the brain intended to capture cognitive phenomena in the association cortex. Our model encompasses a finite number of brain areas denoted A, B, ..., each containing n excitatory neurons. We assume that any pair of neurons in each area have a recurrent synaptic connection independently with probability p, and that, for certain pairs of areas (A, B), there are also afferent connections from any neuron in A to any in B, also independently and with probability p. We assume that firing of neurons proceeds in discrete steps, and, at each step, k of the neurons in each area fire: namely, those k neurons which have the largest sum of presynaptic inputs from neurons that fired in the previous step. Synaptic weights are modified in a Hebbian fashion, in that synapses that fired (i.e., the presynaptic neuron fired, and, in the next step, the postsynaptic neuron did) have their weight multiplied by (1 + β), where β is a plasticity constant. In our simulations, we typically use n = 10^7, k = 10^4, p = 0.001, and β = 0.1.

*Our assumption that all areas contain the same number of neurons is one of expository, not mathematical, convenience, and provides the basic numerical parameter n of our analysis. A slightly more complex version would have parameters n_A and k_A for each area A.

†Several experimental results (11, 12) suggest deviations of the synaptic connectivity of the animal brain from the uniformity of G_{n,p}. We discuss how such deviations affect—mostly support—our framework in Discussion.

Author contributions: C.H.P., S.S.V., and W.M. designed research; C.H.P., S.S.V., D.M., M.C., and W.M. performed research; and C.H.P., S.S.V., and D.M. wrote the paper.

Reviewers: A.D.F., Max Planck Institute for Human Cognitive and Brain Sciences; and T.A.P., Massachusetts Institute of Technology.

The authors declare no competing interest.

Data deposition: Our simulation system is available online, making it possible to reproduce and extend our simulation results: http://github.com/dmitropolsky/assemblies.

This open access article is distributed under Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND).

1 To whom correspondence may be addressed. Email: [email protected]

This article contains supporting information online at https://www.pnas.org/lookup/suppl/doi:10.1073/pnas.2001893117/-/DCSupplemental.
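The mechanics described above (G_{n,p} connectivity, k-winners-take-all firing, multiplicative Hebbian plasticity) can be sketched in a few lines of NumPy. This is a minimal illustration under the model's assumptions, not the authors' released simulation code (which is available at the repository cited above); the parameter values are scaled well below the paper's typical n = 10^6 to 10^7 so the sketch runs instantly, and all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, p, beta = 1000, 50, 0.05, 0.1  # toy-scale parameters, not the paper's

# G_{n,p} connectivity: each ordered pair of neurons is connected by a
# synapse of weight 1 independently with probability p.
recurrent_B = (rng.random((n, n)) < p).astype(float)  # within area B
afferent_AB = (rng.random((n, n)) < p).astype(float)  # from area A to B

# An assembly-like set of k neurons currently firing in area A.
firing_A = np.zeros(n)
firing_A[rng.choice(n, size=k, replace=False)] = 1.0
firing_B = np.zeros(n)  # nothing firing in B yet

def step(firing_A, firing_B, afferent_AB, recurrent_B):
    """One discrete time step in area B: the k neurons with the largest
    total synaptic input from the previous step's firing neurons fire,
    and the synapses that drove them are potentiated by (1 + beta)."""
    inputs = firing_A @ afferent_AB + firing_B @ recurrent_B
    winners = np.argsort(inputs)[-k:]  # k-winners-take-all (inhibition)
    # Multiplicative Hebbian plasticity: every existing synapse from a
    # neuron that fired at step t to a winner at step t+1 gains (1 + beta).
    pre_A = np.flatnonzero(firing_A)
    pre_B = np.flatnonzero(firing_B)
    afferent_AB[np.ix_(pre_A, winners)] *= 1.0 + beta
    recurrent_B[np.ix_(pre_B, winners)] *= 1.0 + beta
    new_firing_B = np.zeros(n)
    new_firing_B[winners] = 1.0
    return new_firing_B

firing_B = step(firing_A, firing_B, afferent_AB, recurrent_B)
print(int(firing_B.sum()))  # exactly k = 50 neurons fire in B
```

Iterating `step` with `firing_A` held fixed is, in essence, how the calculus's project operation unfolds: repeated firing plus plasticity drives the winners in B to converge, with high probability, to a stable set of k neurons.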