COSYNE 2012 Workshops February 27 & 28, 2012 Snowbird, Utah
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Complex Internal Representations in Sensorimotor Decision Making: A Bayesian Investigation. Doctor of Philosophy, 2014
Complex internal representations in sensorimotor decision making: A Bayesian investigation
Luigi Acerbi
Doctor of Philosophy, Doctoral Training Centre for Computational Neuroscience, Institute of Perception, Action and Behaviour, School of Informatics, University of Edinburgh, 2014
Supervisors: Prof. Sethu Vijayakumar, Ph.D., FRSE; Prof. Daniel M. Wolpert, D.Phil., FRS, FMedSci

Abstract: The past twenty years have seen a successful formalization of the idea that perception is a form of probabilistic inference. Bayesian Decision Theory (BDT) provides a neat mathematical framework for describing how an ideal observer and actor should interpret incoming sensory stimuli and act in the face of uncertainty. The predictions of BDT, however, crucially depend on the observer's internal models, represented in the Bayesian framework by priors, likelihoods, and the loss function. Arguably, only in the simplest scenarios (e.g., with a few Gaussian variables) can we expect a real observer's internal representations to perfectly match the true statistics of the task at hand, and to conform to exact Bayesian computations, but how humans systematically deviate from BDT in more complex cases is yet to be understood. In this thesis we theoretically and experimentally investigate how people represent and perform probabilistic inference with complex (beyond Gaussian) one-dimensional distributions of stimuli in the context of sensorimotor decision making. The goal is to reconstruct the observers' internal representations and details of their decision-making process from the behavioural data, by employing Bayesian inference to uncover properties of a system, the ideal observer, that is believed to perform Bayesian inference itself.
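As a concrete handle on the "priors, likelihoods, and the loss function" the abstract mentions, the following worked equations (an illustration of the standard Gaussian case, not taken from the thesis) show the one setting where exact Bayesian computation has a simple closed form:

```latex
% Minimal worked example (illustrative, not from the thesis):
% Gaussian prior over stimulus s, Gaussian likelihood for a noisy measurement x.
\begin{align}
p(s) &= \mathcal{N}(s;\, \mu_0, \sigma_0^2), \qquad
p(x \mid s) = \mathcal{N}(x;\, s, \sigma^2) \\
p(s \mid x) &\propto p(x \mid s)\, p(s)
  = \mathcal{N}\!\left(s;\; \frac{\sigma_0^2\, x + \sigma^2 \mu_0}{\sigma_0^2 + \sigma^2},\;
    \frac{\sigma_0^2 \sigma^2}{\sigma_0^2 + \sigma^2}\right)
\end{align}
% Under quadratic loss L(\hat{s}, s) = (\hat{s} - s)^2, the optimal estimate is
% the posterior mean; for the beyond-Gaussian stimulus distributions studied in
% the thesis, no such closed form exists in general.
```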
Dynamic Compression and Expansion in a Classifying Recurrent Network
bioRxiv preprint, doi: https://doi.org/10.1101/564476 (posted March 1, 2019; CC-BY-NC-ND 4.0).
Dynamic compression and expansion in a classifying recurrent network
Matthew Farrell (1,2), Stefano Recanatesi (1), Guillaume Lajoie (3,4), and Eric Shea-Brown (1,2)
1 Computational Neuroscience Center, University of Washington; 2 Department of Applied Mathematics, University of Washington; 3 Mila | Québec AI Institute; 4 Dept. of Mathematics and Statistics, Université de Montréal

Abstract: Recordings of neural circuits in the brain reveal extraordinary dynamical richness and high variability. At the same time, dimensionality reduction techniques generally uncover low-dimensional structures underlying these dynamics when tasks are performed. In general, it is still an open question what determines the dimensionality of activity in neural circuits, and what the functional role of this dimensionality in task learning is. In this work we probe these issues using a recurrent artificial neural network (RNN) model trained by stochastic gradient descent to discriminate inputs. The RNN family of models has recently shown promise in revealing principles behind brain function. Through simulations and mathematical analysis, we show how the dimensionality of RNN activity depends on the task parameters and evolves over time and over stages of learning. We find that common solutions produced by the network naturally compress dimensionality, while variability-inducing chaos can expand it. We show how chaotic networks balance these two factors to solve the discrimination task with high accuracy and good generalization properties.
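One widely used way to quantify the "dimensionality of activity" the abstract discusses is the participation ratio of the activity covariance. The sketch below is our own illustration in Python, not the paper's code; the function name participation_ratio and the synthetic data are assumptions for demonstration.

```python
import numpy as np

def participation_ratio(activity):
    """Effective dimensionality of activity (time x neurons):
    PR = (sum of covariance eigenvalues)^2 / sum of squared eigenvalues."""
    cov = np.cov(activity, rowvar=False)    # neurons x neurons covariance
    eig = np.linalg.eigvalsh(cov)           # eigenvalues = PCA variances
    return eig.sum() ** 2 / (eig ** 2).sum()

# Example: 1000 time points of 200-neuron activity confined to ~5 dimensions.
rng = np.random.default_rng(0)
latent = rng.standard_normal((1000, 5))         # 5 latent sources
mixing = rng.standard_normal((5, 200))          # random embedding into 200 neurons
activity = latent @ mixing + 0.1 * rng.standard_normal((1000, 200))
print(participation_ratio(activity))            # roughly 5, despite 200 neurons
```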
Moira Rose (Molly) Dillon
Moira Rose Dillon
Department of Psychology, New York University, 6 Washington Place, New York, NY 10003
Email: [email protected]
Departmental website: http://as.nyu.edu/psychology/people/faculty.Moira-Dillon.html
Lab website: https://www.labdevelopingmind.com

Employment
New York University, New York, NY. Assistant Professor, Department of Psychology, Faculty of Arts and Sciences (July 2017-present)

Concurrent Positions
New York University, New York, NY. Faculty Affiliate, Institute of Human Development and Social Change, Steinhardt School of Culture, Education, and Human Development (May 2019-present)
Massachusetts Institute of Technology, Cambridge, MA. Invited Researcher, Abdul Latif Jameel Poverty Action Lab (J-PAL), Foundations of Learning (April 2021-present)

Education
Harvard University, Cambridge, MA (August 2011-May 2017). Ph.D., Psychology (May 2017); A.M., Psychology (May 2014)
Yale University, New Haven, CT (August 2004-May 2008). B.A., Cognitive Science; Art (May 2008)

Funding
2019-2024 National Science Foundation (PI: $1,718,437). CAREER: Becoming Euclid: Characterizing the geometric intuitions that support formal learning in mathematics; (PI: $24,671) CLB: Career-Life Balance Faculty Early Career Development Program Supplement
2019-2023 DARPA (Co-PI; to NYU: $1,703,553; to Dillon: $871,874). Cognitive milestones for Machine Common Sense. Co-PI: Brenden Lake
2019-2021 Jacobs Foundation (PI: 150,000 CHF). Early Career Research Fellowship
2018-2019 Institute of Human Development and Social Change at NYU (PI: $14,848). The arc of geometric ...
Dynamics of Excitatory-Inhibitory Neuronal Networks With
[Booklet cover art: assorted formulas, e.g. I(X;Y) = S(X) - S(X|Y); V(t) = V_0 + ∫ dτ Z_1(τ) I(t-τ); P(N) = λ^N e^(-λ)/N!; V = RI; www.cosyne.org]
MAIN MEETING, Salt Lake City, UT, Feb 27 - Mar 2

Program Summary

Thursday, 27 February
4:00 pm Registration opens
5:30 pm Welcome reception
6:20 pm Opening remarks
6:30 pm Session 1: Keynote. Invited speaker: Thomas Jessell
7:30 pm Poster Session I

Friday, 28 February
7:30 am Breakfast
8:30 am Session 2: Circuits I: From wiring to function. Invited speaker: Thomas Mrsic-Flogel; 3 accepted talks
10:30 am Session 3: Circuits II: Population recording. Invited speaker: Elad Schneidman; 3 accepted talks
12:00 pm Lunch break
2:00 pm Session 4: Circuits III: Network models. 5 accepted talks
3:45 pm Session 5: Navigation: From phenomenon to mechanism. Invited speakers: Nachum Ulanovsky, Jeffrey Magee; 1 accepted talk
5:30 pm Dinner break
7:30 pm Poster Session II

Saturday, 1 March
7:30 am Breakfast
8:30 am Session 6: Behavior I: Dissecting innate movement. Invited speaker: Hopi Hoekstra; 3 accepted talks
10:30 am Session 7: Behavior II: Motor learning. Invited speaker: Rui Costa; 2 accepted talks
11:45 am Lunch break
2:00 pm Session 8: Behavior III: Motor performance. Invited speaker: John Krakauer; 2 accepted talks
3:45 pm Session 9: Reward: Learning and prediction. Invited speaker: Yael ...
COSYNE 2014 Workshops
[Booklet cover art: assorted formulas as on the main-meeting booklet; www.cosyne.org]
WORKSHOPS, Snowbird, UT, Mar 3 - 4

COSYNE 2014 Workshops, Snowbird, UT, Mar 3-4, 2014. Organizers: Tatyana Sharpee, Robert Froemke
COSYNE 2014 Workshops, March 3 & 4, 2014, Snowbird, Utah

Monday, March 3, 2014 (workshop; organizer(s); location):
1. Computational psychiatry – Day 1; Q. Huys, T. Maia; Wasatch A
2. Information sampling in behavioral optimization – Day 1; B. Averbeck, R. C. Wilson, M. R. Nassar; Wasatch B
3. Rogue states: Altered dynamics of neural activity in brain disorders; C. O'Donnell, T. Sejnowski; Magpie A
4. Scalable models for high dimensional neural data; I. M. Park, E. Archer, J. Pillow; Superior A
5. Homeostasis and self-regulation of developing circuits: From single neurons to networks; J. Gjorgjieva, M. Hennig; White Pine
6. Theories of mammalian perception: Open and closed loop modes of brain-world interactions; E. Ahissar, E. Assa; Magpie B
7. Noise correlations in the cortex: Quantification, origins, and functional significance; J. Fiser, M. Lengyel, A. Pouget; Superior B
8. Excitatory and inhibitory synaptic conductances: Functional roles and inference methods; M. Lankarany, T. Toyoizumi; Maybird

Workshop co-chairs (name; email; cell):
Robert Froemke, NYU; [email protected]; 510-703-5702
Tatyana Sharpee, Salk; [email protected]; 858-610-7424

Maps of Snowbird are at the end of this booklet (page 38).
Why the Bayesian Brain May Need No Noise
Stochasticity from function – why the Bayesian brain may need no noise
Dominik Dold (a,b,1,2), Ilja Bytschok (a,1), Akos F. Kungl (a,b), Andreas Baumbach (a), Oliver Breitwieser (a), Walter Senn (b), Johannes Schemmel (a), Karlheinz Meier (a), Mihai A. Petrovici (a,b,1,2)
a Kirchhoff-Institute for Physics, Heidelberg University. b Department of Physiology, University of Bern. 1 Authors with equal contributions. 2 Corresponding authors: [email protected], [email protected].
August 27, 2019

Abstract: An increasing body of evidence suggests that the trial-to-trial variability of spiking activity in the brain is not mere noise, but rather the reflection of a sampling-based encoding scheme for probabilistic computing. Since the precise statistical properties of neural activity are important in this context, many models assume an ad-hoc source of well-behaved, explicit noise, either on the input or on the output side of single-neuron dynamics, most often assuming an independent Poisson process in either case. However, these assumptions are somewhat problematic: neighboring neurons tend to share receptive fields, rendering both their input and their output correlated; at the same time, neurons are known to behave largely deterministically, as a function of their membrane potential and conductance. We suggest that spiking neural networks may have no need for noise to perform sampling-based Bayesian inference. We study analytically the effect of auto- and cross-correlations in functional Bayesian spiking networks and demonstrate how their effect translates to synaptic interaction strengths, rendering them controllable through synaptic plasticity. This allows even small ensembles of interconnected deterministic spiking networks to simultaneously and co-dependently shape their output activity through learning, enabling them to perform complex Bayesian computation without any need for noise, which we demonstrate in silico, both in classical simulation and in neuromorphic emulation.
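The "sampling-based encoding scheme" the abstract refers to is often formalized as sampling from a Boltzmann distribution. The toy sketch below (our illustration, not the paper's spiking model) shows the underlying idea with plain Gibbs sampling over binary units; the network sizes and weights are arbitrary assumptions.

```python
import numpy as np

# Toy sampling-based probabilistic computing: Gibbs sampling in a small
# Boltzmann machine, the kind of distribution spike-based sampling networks
# are often taken to represent: p(z) ∝ exp(z'Wz/2 + b'z), z in {0,1}^n.
rng = np.random.default_rng(1)
n = 4
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                      # symmetric coupling
np.fill_diagonal(W, 0.0)               # no self-coupling
b = rng.standard_normal(n)

def gibbs_sample(W, b, steps=5000):
    z = rng.integers(0, 2, W.shape[0]).astype(float)
    samples = []
    for _ in range(steps):
        for k in range(len(z)):                    # update each unit in turn
            u = W[k] @ z + b[k]                    # "membrane potential" analogue
            z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))  # logistic rule
        samples.append(z.copy())
    return np.array(samples)

samples = gibbs_sample(W, b)
print(samples.mean(axis=0))  # marginal "firing probabilities" of the 4 units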
Predictive Coding in Balanced Neural Networks with Noise, Chaos and Delays
Predictive coding in balanced neural networks with noise, chaos and delays
Jonathan Kadmon, Department of Applied Physics, Stanford University, CA; Jonathan Timcheck, Department of Physics, Stanford University, CA; Surya Ganguli, Department of Applied Physics, Stanford University, CA. [email protected]

Abstract: Biological neural networks face a formidable task: performing reliable computations in the face of intrinsic stochasticity in individual neurons, imprecisely specified synaptic connectivity, and nonnegligible delays in synaptic transmission. A common approach to combatting such biological heterogeneity involves averaging over large redundant networks of N neurons, resulting in coding errors that decrease classically as 1/√N. Recent work demonstrated a novel mechanism whereby recurrent spiking networks could efficiently encode dynamic stimuli, achieving a superclassical scaling in which coding errors decrease as 1/N. This specific mechanism involved two key ideas: predictive coding, and a tight balance, or cancellation, between strong feedforward inputs and strong recurrent feedback. However, the theoretical principles governing the efficacy of balanced predictive coding and its robustness to noise, synaptic weight heterogeneity and communication delays remain poorly understood. To discover such principles, we introduce an analytically tractable model of balanced predictive coding, in which the degree of balance and the degree of weight disorder can be dissociated, unlike in previous balanced network models, and we develop a mean-field theory of coding accuracy. Overall, our work provides and solves a general theoretical framework for dissecting the differential contributions of neural noise, synaptic disorder, chaos, synaptic delays, and balance to the fidelity of predictive neural codes, reveals the fundamental role that balance plays in achieving superclassical scaling, and unifies previously disparate models in theoretical neuroscience.
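The classical 1/√N scaling that the abstract contrasts with the superclassical 1/N regime is easy to verify numerically. The following is our illustration of the simple averaging argument, not the paper's spiking model; signal value, noise level, and trial counts are arbitrary.

```python
import numpy as np

# Classical averaging: estimate a signal s from N noisy neurons by a simple
# population average; the RMS error shrinks like 1/sqrt(N).
rng = np.random.default_rng(0)
s, trials = 1.0, 2000
for N in [10, 100, 1000]:
    responses = s + rng.standard_normal((trials, N))   # N neurons, unit noise
    estimates = responses.mean(axis=1)                 # population average
    rms_error = np.sqrt(((estimates - s) ** 2).mean())
    print(f"N={N:5d}  RMS error={rms_error:.4f}  (1/sqrt(N)={N ** -0.5:.4f})")
```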
Wilken and Ma 2004
Journal of Vision (2004) 4, 1120-1135, http://journalofvision.org/4/12/11/

A detection theory account of change detection
Patrick Wilken, Division of Biology, California Institute of Technology, Pasadena, CA, USA
Wei Ji Ma, Division of Biology, California Institute of Technology, Pasadena, CA, USA

Previous studies have suggested that visual short-term memory (VSTM) has a storage limit of approximately four items. However, the type of high-threshold (HT) model used to derive this estimate is based on a number of assumptions that have been criticized in other experimental paradigms (e.g., visual search). Here we report findings from nine experiments in which VSTM for color, spatial frequency, and orientation was modeled using a signal detection theory (SDT) approach. In Experiments 1-6, two arrays composed of multiple stimulus elements were presented for 100 ms with a 1500 ms ISI. Observers were asked to report in a yes/no fashion whether there was any difference between the first and second arrays, and to rate their confidence in their response on a 1-4 scale. In Experiments 1-3, only one stimulus element difference could occur (T = 1) while set size was varied. In Experiments 4-6, set size was fixed while the number of stimuli that might change was varied (T = 1, 2, 3, and 4). Three general models were tested against the receiver operating characteristics generated by the six experiments. In addition to the HT model, two SDT models were tried: one assuming summation of signals prior to a decision, the other using a max rule. In Experiments 7-9, observers were asked to directly report the relevant feature attribute of a stimulus presented 1500 ms previously, from an array of varying set size.
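The max rule mentioned in the abstract can be sketched in a few lines: respond "change" whenever the largest noisy measurement across items exceeds a criterion. The simulation below is a simplified illustration of that decision rule, not the authors' fitting code; the change magnitude delta and criterion are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def max_rule_hit_fa(set_size, delta=1.5, criterion=2.0, trials=20000):
    """Simulate hit and false-alarm rates under a max-rule SDT observer."""
    noise = rng.standard_normal((trials, set_size))    # no-change measurements
    change = noise.copy()
    change[:, 0] += delta                              # one item changes (T = 1)
    hits = (np.abs(change).max(axis=1) > criterion).mean()
    false_alarms = (np.abs(noise).max(axis=1) > criterion).mean()
    return hits, false_alarms

# False alarms grow with set size: more items means more chances for noise
# alone to exceed the criterion, a signature of SDT accounts of set-size effects.
for m in [2, 4, 8]:
    h, fa = max_rule_hit_fa(m)
    print(f"set size {m}: hit rate {h:.2f}, false-alarm rate {fa:.2f}")
```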
Using Games to Understand Intelligence
Using Games to Understand Intelligence
Franziska Brändle (1), Kelsey Allen (2), Joshua B. Tenenbaum (2) & Eric Schulz (1)
1 Max Planck Research Group Computational Principles of Intelligence, Max Planck Institute for Biological Cybernetics; 2 Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology

Over the last decades, games have become one of the most popular recreational activities, not only among children but also among adults. Consequently, they have also gained popularity as an avenue for studying cognition. Games offer several advantages, such as the possibility to gather big data sets, engage participants to play for a long time, and better resemblance of real-world complexities. In this workshop, we will bring together leading researchers from across the cognitive sciences to explore how games can be used to study diverse aspects of intelligent behavior, explore their differences compared to classical lab experiments, and discuss the future of game-based cognitive science research.

Keywords: Games; Cognition; Big Data; Computation

... comparisons between human and machine agents, for example in research of human tool use (Allen et al., 2020). Games also offer complex environments for human-agent collaborations (Fan, Dinculescu, & Ha, 2019). Games have gained increasing attention during recent meetings of the Cognitive Science Society, with presentations on multi-agent collaboration (Wu et al., 2020), foraging (Garg & Kello, 2020) and mastery (Anderson, 2020) (all examples taken from the 2020 Conference).

Goal and scope
The aim of this workshop is to bring together scientists who share a joint interest in using games as a tool to research intelligence. We have invited leading academics from cognitive science who apply games in their research.
Wei Ji Ma CV Apr 2021 Center for Neural Science
Wei Ji Ma, CV, April 2021
Center for Neural Science and Department of Psychology, New York University
4 Washington Place, New York, NY 10003, USA, (212) 992 6530, [email protected]
Lab website: http://www.cns.nyu.edu/malab

POSITIONS
2020-present Professor, Center for Neural Science and Department of Psychology, New York University. Affiliate faculty in the Institute for Decision-Making, the Center for Data Science, the Neuroscience Institute, and the Center for Experimental Social Science; collaborating faculty of the NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai
2013-2020 Associate Professor, Center for Neural Science and Department of Psychology, New York University (with affiliations as above)
2008-2013 Assistant Professor, Department of Neuroscience, Baylor College of Medicine; adjunct faculty in the Department of Psychology, Rice University

TRAINING
2004-2008 Postdoc, Department of Brain and Cognitive Sciences, University of Rochester. Advisor: Alexandre Pouget
2002-2004 Postdoc, Division of Biology, California Institute of Technology. Advisor: Christof Koch
1996-2001 PhD in Theoretical Physics, University of Groningen, the Netherlands
Jan-Jun 2000 Visiting PhD student, Department of Physics, Princeton University. Advisor: Erik Verlinde
1994-1997 BS/MS in Mathematics, University of Groningen
1993-1996 BS/MS in Physics, University of Groningen

RESEARCH INTERESTS
Planning, reinforcement learning, social cognition, perceptual decision-making, perceptual organization, visual working memory, comparative cognition, optimality/rationality, approximate inference, probabilistic computation, efficient coding, computational methods.

TEACHING, MENTORING, AND OUTREACH
Improving the quality of undergraduate teaching, improving mentorship, diversity/equity/inclusion, science outreach, science writing, science advocacy, translating science to policy, social justice. Founder of Growing up in science; founding member of the Scientist Action and Advocacy Network (ScAAN); founding member of NeuWrite NYU.

PUBLICATIONS
1. ...
Minian: an Open-Source Miniscope Analysis Pipeline
bioRxiv preprint, doi: https://doi.org/10.1101/2021.05.03.442492 (posted May 4, 2021; CC-BY-NC-ND 4.0).
Title: Minian: An open-source miniscope analysis pipeline
Authors: Zhe Dong (1), William Mau (1), Yu (Susie) Feng (1), Zachary T. Pennington (1), Lingxuan Chen (1), Yosif Zaki (1), Kanaka Rajan (1), Tristan Shuman (1), Daniel Aharoni* (2), Denise J. Cai* (1). *Corresponding authors.
1 Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai; 2 Department of Neurology, David Geffen School of Medicine, University of California

Abstract: Miniature microscopes have gained considerable traction for in vivo calcium imaging in freely behaving animals. However, extracting calcium signals from raw videos is a computationally complex problem and remains a bottleneck for many researchers utilizing single-photon in vivo calcium imaging. Despite the existence of many powerful analysis packages designed to detect and extract calcium dynamics, most have either key parameters that are hard-coded or insufficient step-by-step guidance and validations to help the users choose the best parameters.
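To make the "extracting calcium signals" step concrete, here is a generic ΔF/F normalization of the kind calcium-imaging pipelines apply after source extraction. This is a simplified illustration, not Minian's actual API or algorithm (Minian itself covers motion correction, background removal, and CNMF-style source extraction); the function name and parameters are ours.

```python
import numpy as np

def delta_f_over_f(trace, baseline_percentile=20, eps=1e-9):
    """Normalize a fluorescence trace by a robust low-percentile baseline F0."""
    f0 = np.percentile(trace, baseline_percentile)   # baseline estimate
    return (trace - f0) / (f0 + eps)                 # (F - F0) / F0

# Example: a synthetic trace with one calcium transient on a baseline of 100.
t = np.arange(500)
trace = 100 + 30 * np.exp(-((t - 200) ** 2) / 200.0)  # Gaussian "transient"
dff = delta_f_over_f(trace)
print(f"peak dF/F = {dff.max():.2f}")                 # about 0.3
```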
Bayesian Modeling of Behavior Fall 2017 Wei Ji Ma
Bayesian modeling of behavior, Fall 2017, Wei Ji Ma
This syllabus is subject to change. Changes will be announced in class and by email.

Description: Bayesian inference is the mathematical framework for making optimal decisions and actions when the state of the world is not exactly known. This course will provide an intuitive yet mathematically rigorous introduction to Bayesian models of behavior in perception, memory, decision-making, and cognitive reasoning. While this is primarily a psychology course, we will also discuss connections to economics and neuroscience. This course is not about Bayesian data analysis, but about theories that the brain itself is a Bayesian decision-maker. Nevertheless, we will spend some time on model fitting and model comparison (see the sketch after this entry).

Prerequisites:
• Strong command of Calculus 1 or equivalent
• Introductory course in probability or probability-based statistics
• Ability to code in Matlab. If you have not coded in Matlab before, you will be ok if you have other programming experience and do a tutorial before the course starts. Email Wei Ji if you have any questions about prerequisites.

Lecturer: Prof. Wei Ji Ma, [email protected], 212 992 6530

Weekly schedule:
Lecture: Wednesdays, 4-6 pm, Meyer 815
Recitation: Thursdays, 2-4 pm, Meyer 815
Office hours: By appointment, Meyer 754 (Wei Ji's office)

Materials:
• Bayesian modeling of perception, by Ma, Kording, and Goldreich. Will be distributed in electronic form.
• You will need Matlab. If you have a laptop, please install Matlab on it before the course starts. Instructions if you are on the Meyer building network: http://localweb.cns.nyu.edu/unixadmin/#august10-2015.
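As a taste of the model fitting and model comparison the syllabus mentions, here is a toy example, ours rather than course material, and in Python rather than the course's Matlab: two observer models are fit to simulated estimation data by maximum likelihood and compared with AIC. All data-generating values and grids are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import norm

# Simulated observer whose estimates are "shrunk" toward zero (gain 0.8).
rng = np.random.default_rng(3)
stimuli = rng.uniform(-10, 10, 200)
responses = 0.8 * stimuli + rng.normal(0, 2.0, stimuli.size)

def nll(params, gain_model):
    """Negative log-likelihood of a Gaussian response model."""
    gain, sigma = params if gain_model else (1.0, params[0])
    return -norm.logpdf(responses, gain * stimuli, sigma).sum()

# Crude grid-search maximum likelihood for each model.
sigmas = np.linspace(0.5, 5, 50)
gains = np.linspace(0.5, 1.2, 50)
nll_unbiased = min(nll([s], False) for s in sigmas)              # 1 parameter
nll_gain = min(nll([g, s], True) for g in gains for s in sigmas)  # 2 parameters

aic = lambda nll_val, k: 2 * nll_val + 2 * k   # penalize parameter count
print(f"AIC unbiased: {aic(nll_unbiased, 1):.1f}, gain model: {aic(nll_gain, 2):.1f}")
# The gain model should win here, since the data were generated with gain 0.8.
```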