Computation and Psychophysics of Sensorimotor Integration
Recommended publications
The Optimality of Decision Making During Motor Learning
The Optimality of Decision Making During Motor Learning, by Joshua Brent Moskowitz. A thesis submitted to the Department of Psychology in conformity with the requirements for the degree of Master of Science, Queen's University, Kingston, Ontario, Canada (June 2016). Copyright © Joshua B. Moskowitz, 2016.

Abstract: In our daily lives, we often must predict how well we will perform in the future based on an evaluation of our current performance and an assessment of how much we will improve with practice. Such predictions can be used to decide whether to invest our time and energy in learning and, if we opt to invest, what rewards we may gain. This thesis investigated whether people are capable of tracking their own learning (i.e., current and future motor ability) and exploiting that information to make decisions related to task reward. In experiment one, participants performed a target aiming task under a visuomotor rotation such that they initially missed the target but gradually improved. After briefly practicing the task, they were asked to select rewards for hits and misses applied to subsequent performance in the task, where selecting a higher reward for hits came at the cost of receiving a lower reward for misses. We found that participants made decisions that were in the direction of optimal and therefore demonstrated knowledge of future task performance. In experiment two, participants learned a novel target aiming task in which they were rewarded for target hits. Every five trials, they could choose a target size, which varied inversely with reward value. Although participants' decisions deviated from optimal, a model suggested that they took into account both past performance and predicted future performance when making their decisions.
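The decision problem in experiment one can be framed as expected-value maximization: given a predicted probability of hitting the target, pick the (hit reward, miss reward) pair with the highest expected reward. The menu of reward pairs and the hit probabilities below are hypothetical illustrations, not the thesis's actual parameters.

```python
# Hypothetical menu of (reward if hit, reward if miss) options:
# a higher hit reward comes at the cost of a lower miss reward.
menu = [(10, 10), (14, 7), (18, 4), (22, 1)]

def expected_reward(p_hit, r_hit, r_miss):
    """Expected reward for one option under predicted hit probability."""
    return p_hit * r_hit + (1.0 - p_hit) * r_miss

def best_choice(p_hit):
    """The optimal menu entry: maximize expected reward."""
    return max(menu, key=lambda rm: expected_reward(p_hit, *rm))

print(best_choice(0.3))   # poor predicted aim -> the safe, even split
print(best_choice(0.9))   # confident aim -> the high hit reward
```

An optimal decision maker who anticipates improving with practice would shift toward riskier, higher hit-reward options as `p_hit` rises.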
Risk Bounds for Classification and Regression Rules That Interpolate
Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate. Mikhail Belkin (The Ohio State University), Daniel Hsu (Columbia University), Partha P. Mitra (Cold Spring Harbor Laboratory).

Abstract: Many modern machine learning models are trained to achieve zero or near-zero training error in order to obtain near-optimal (but non-zero) test error. This phenomenon of strong generalization performance for "overfitted" / interpolated classifiers appears to be ubiquitous in high-dimensional data, having been observed in deep networks, kernel machines, boosting and random forests. Their performance is consistently robust even when the data contain large amounts of label noise. Very little theory is available to explain these observations. The vast majority of theoretical analyses of generalization allows for interpolation only when there is little or no label noise. This paper takes a step toward a theoretical foundation for interpolated classifiers by analyzing local interpolating schemes, including a geometric simplicial interpolation algorithm and singularly weighted k-nearest neighbor schemes. Consistency or near-consistency is proved for these schemes in classification and regression problems. Moreover, the nearest neighbor schemes exhibit optimal rates under some standard statistical assumptions. Finally, this paper suggests a way to explain the phenomenon of adversarial examples, which are seemingly ubiquitous in modern machine learning, and also discusses some connections to kernel machines and random forests in the interpolated regime.

1 Introduction: The central problem of supervised inference is to predict labels of unseen data points from a set of labeled training data. The literature on this subject is vast, ranging from classical parametric and non-parametric statistics [48, 49] to more recent machine learning methods, such as kernel machines [39], boosting [36], random forests [15], and deep neural networks [25].
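The "singularly weighted k-nearest neighbor" schemes the abstract mentions can be sketched minimally: weights of the form 1/distance^alpha blow up as the query approaches a training point, so the rule fits the training data exactly while still averaging over neighbors elsewhere. The data and the exponent below are illustrative, and this sketch makes no claim to match the paper's exact scheme.

```python
import numpy as np

def singular_knn_predict(X_train, y_train, x, k=3, alpha=2.0, eps=1e-12):
    """Weighted k-NN regression with singular weights w_i = 1 / d_i**alpha.

    The weight diverges as the query approaches a training point, so the
    rule interpolates the training labels exactly, yet it still averages
    over the k nearest neighbors away from the data.
    """
    dists = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(dists)[:k]            # indices of the k nearest neighbors
    d = dists[idx]
    if d[0] < eps:                          # query sits on a training point:
        return float(y_train[idx[0]])       # return that label exactly
    w = 1.0 / d ** alpha                    # singular weights
    return float(np.dot(w, y_train[idx]) / w.sum())

# Tiny 1-D demo: the rule achieves zero training error...
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 0.0])
print(singular_knn_predict(X, y, np.array([1.0]), k=2))   # -> 1.0
# ...but still averages between training points:
print(singular_knn_predict(X, y, np.array([0.5]), k=2))   # -> 0.5
```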
Probabilistic Circuits: Representations, Inference, Learning and Theory
Probabilistic Circuits: Representations, Inference, Learning and Theory. Antonio Vergari (University of California, Los Angeles), YooJung Choi (University of California, Los Angeles), Robert Peharz (TU Eindhoven), Guy Van den Broeck (University of California, Los Angeles). Tutorial presented January 7th, 2021, at IJCAI-PRICAI 2020.

The opening slides survey the "alphabet soup" of probabilistic models (fully factorized models, NaiveBayes, AndOrGraphs, PDGs, Trees, PSDDs, CNets, LTMs, SPNs, NADEs, Thin Junction Trees, ACs, MADEs, MAFs, VAEs, DPPs, FVSBNs, TACs, IAFs, NAFs, RAEs, Mixtures, BNs, NICE, FGs, GANs, RealNVP, MNs) and set out the tutorial's themes: intractable and tractable models; tractability as a spectrum; expressive models without compromises; and a unifying framework for tractable models. Why tractable inference? Or: expressiveness vs. tractability.
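A minimal sketch (not from the tutorial) of why the simplest model in the list above, a fully factorized distribution, supports tractable inference: because p(x1, ..., xn) = Π p_i(x_i), any marginal query is a product over only the queried variables, with everything else summing out to 1. The probabilities below are illustrative.

```python
# A fully factorized distribution over n binary variables:
#   p(x_1, ..., x_n) = prod_i p_i(x_i)
# Marginals are exact and O(n): unqueried variables sum out to 1,
# so the 2^n joint states never have to be enumerated.
p = [0.9, 0.2, 0.5]                 # illustrative values of p_i(X_i = 1)

def marginal(assignment):
    """P(X_i = v for each (i, v) in assignment); other variables summed out."""
    prob = 1.0
    for i, v in assignment.items():
        prob *= p[i] if v == 1 else 1.0 - p[i]
    return prob

print(marginal({0: 1, 2: 0}))       # P(X0=1, X2=0) = 0.9 * 0.5 = 0.45
```

Richer probabilistic circuits (SPNs, PSDDs, and so on) keep this style of linear-time exact marginalization while being far more expressive than a single product.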
Flexible Corticospinal Control of Muscles
Flexible Corticospinal Control of Muscles. Najja J. Marshall. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy under the Executive Committee of the Graduate School of Arts and Sciences, Columbia University, 2021. © 2021 Najja J. Marshall. All Rights Reserved.

Abstract: The exceptional abilities of top-tier athletes – from Simone Biles' dizzying gymnastics to LeBron James' gravity-defying bounds – can easily lead one to forget to marvel at the exceptional breadth of everyday movements. Whether holding a cup of coffee, reaching out to grab a falling object, or cycling at a quick clip, every motor action requires activating multiple muscles with the appropriate intensity and timing to move each limb or counteract the weight of an object. These actions are planned and executed by the motor cortex, which transmits its intentions to motoneurons in the spinal cord, which ultimately drive muscle contractions. A central problem in neuroscience is precisely how neural activity in cortex and the spinal cord gives rise to this diverse range of behaviors. At the level of the spinal cord, this problem is considered to be well understood. A foundational tenet in motor control asserts that motoneurons are controlled by a single input to which they respond in a reliable and predictable manner to drive muscle activity, akin to the way that depressing a gas pedal by the same degree accelerates a car to a predictable speed. Theories of how motor cortex flexibly generates different behaviors are less firmly developed, but the available evidence indicates that cortical neurons are coordinated in a similarly simplistic, well-preserved manner.
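The "gas pedal" tenet in the abstract – one common drive, each motoneuron responding to it in a fixed, predictable way – can be sketched as a threshold-linear pool. The thresholds and gains below are illustrative inventions, not values from the dissertation.

```python
# One common input drives a pool of motoneurons; each unit responds in a
# fixed, threshold-linear way, so total muscle activity is a deterministic
# function of the single drive. Thresholds and gains are illustrative.
thresholds = [0.1, 0.3, 0.6]   # recruitment threshold of each motoneuron
gains      = [1.0, 1.5, 2.0]   # firing-rate gain of each motoneuron

def pool_rates(drive):
    """Firing rate of each motoneuron for a given common drive."""
    return [g * max(0.0, drive - t) for g, t in zip(gains, thresholds)]

def muscle_activity(drive):
    """Total output of the pool: the sum of its firing rates."""
    return sum(pool_rates(drive))

# The same drive always yields the same output, like the gas pedal:
print(muscle_activity(0.5))   # only the first two units are recruited
```

The dissertation's point is that this single-input picture is the simple, well-preserved account; whether real motoneuron pools and cortical populations obey it so rigidly is exactly what the work interrogates.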
Cold Spring Harbor Symposia on Quantitative Biology, Volume LXXIX: Cognition
Cold Spring Harbor Symposia on Quantitative Biology, Volume LXXIX: Cognition. symposium.cshlp.org. Symposium organizers and Proceedings editors: Cori Bargmann (The Rockefeller University), Daphne Bavelier (University of Geneva, Switzerland, and University of Rochester), Terrence Sejnowski (The Salk Institute for Biological Studies), and David Stewart and Bruce Stillman (Cold Spring Harbor Laboratory). Cold Spring Harbor Laboratory Press, 2014. © 2014 by Cold Spring Harbor Laboratory Press. All rights reserved. International Standard Book Number 978-1-621821-26-7 (cloth); 978-1-621821-27-4 (paper). International Standard Serial Number 0091-7451. Library of Congress Catalog Card Number 34-8174. Printed in the United States of America. The Symposia were founded in 1933 by Reginald G. Harris, Director of the Biological Laboratory 1924 to 1936. Previous Symposia volumes: I (1933) Surface Phenomena; II (1934) Aspects of Growth; III (1935) Photochemical Reactions; XXXIX (1974) Tumor Viruses; XL (1975) The Synapse; XLI (1976) Origins
Toolmaking and the Origin of Normative Cognition
Toolmaking and the Origin of Normative Cognition. Preprint, 14 April 2020. Jonathan Birch, Department of Philosophy, Logic and Scientific Method, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, UK. [email protected]; http://personal.lse.ac.uk/birchj1. 11,947 words.

Abstract: We are all guided by thousands of norms, but how did our capacity for normative cognition evolve? I propose there is a deep but neglected link between normative cognition and practical skill. In modern humans, complex motor skills and craft skills, such as skills related to toolmaking and tool use, are guided by internally represented norms of correct performance. Moreover, it is plausible that core components of human normative cognition evolved in response to the distinctive demands of transmitting complex motor skills and craft skills, especially skills related to toolmaking and tool use, through social learning. If this is correct, the expansion of the normative domain beyond technique to encompass more abstract norms of reciprocity, ritual, kinship and fairness involved the elaboration of a basic platform for the guidance of skilled action by technical norms. This article motivates and defends this "skill hypothesis" for the origin of normative cognition and sets out various ways in which it could be empirically tested.

Key words: normative cognition, skill, cognitive control, norms, evolution

1. The Skill Hypothesis. We are all guided by thousands of norms, often without being able to articulate the norms in question. A "norm", as I will use the term here, is just any standard of correct or appropriate behaviour.
Part I Officers in Institutions Placed Under the Supervision of the General Board
Officers Number – Michaelmas Term 2009 [Special No. 7]

Part I

Chancellor: H.R.H. The Prince PHILIP, Duke of Edinburgh, T
Vice-Chancellor: 2003, Prof. ALISON FETTES RICHARD, N, 2010
Deputy Vice-Chancellors for 2009–2010: Dame SANDRA DAWSON, SID; ATHENE DONALD, R; GORDON JOHNSON, W; STUART LAING, CC; DAVID DUNCAN ROBINSON, M; JEREMY KEITH MORRIS SANDERS, SE; SARAH LAETITIA SQUIRE, HH; the Pro-Vice-Chancellors
Pro-Vice-Chancellors: 2004, ANDREW DAVID CLIFF, CHR, 31 Dec. 2009; 2004, IAN MALCOLM LESLIE, CHR, 31 Dec. 2009; 2008, JOHN MARTIN RALLISON, T, 30 Sept. 2011; 2004, KATHARINE BRIDGET PRETTY, HO, 31 Dec. 2009; 2009, STEPHEN JOHN YOUNG, EM, 31 July 2012
High Steward: 2001, Dame BRIDGET OGILVIE, G
Deputy High Steward: 2009, ANNE MARY LONSDALE, NH
Commissary: 2002, The Rt Hon. Lord MACKAY OF CLASHFERN, T
Proctors for 2009–2010: JEREMY LLOYD CADDICK, EM; LINDSAY ANNE YATES, JN
Deputy Proctors for 2009–2010: MARGARET ANN GUITE, G; PAUL DUNCAN BEATTIE, CC
Orator: 2008, RUPERT THOMPSON, SE
Registrary: 2007, JONATHAN WILLIAM NICHOLLS, EM
Librarian: 2009, ANNE JARVIS, W
Acting Deputy Librarian: 2009, SUSANNE MEHRER
Director of the Fitzwilliam Museum and Marlay Curator: 2008, TIMOTHY FAULKNER POTTS, CL
Director of Development and Alumni Relations: 2002, PETER LAWSON AGAR, SE
Esquire Bedells: 2003, NICOLA HARDY, JE; 2009, ROGER DERRICK GREEVES, CL
University Advocate: 2004, PHILIPPA JANE ROGERSON, CAI, 2010
Deputy University Advocates: 2007, ROSAMUND ELLEN THORNTON, EM, 2010; 2006, CHRISTOPHER FORBES FORSYTH, R, 2010

Officers in Institutions Placed under the Supervision of the General Board

Professors:
Accounting: 2003, GEOFFREY MEEKS, DAR
Active Tectonics: 2002, JAMES ANTHONY JACKSON, Q
Aeronautical Engineering, Francis Mond: 1996, WILLIAM NICHOLAS DAWES, CHU
Aerothermal Technology: 2000, HOWARD PETER HODSON, G
Algebra: 2003, JAN SAXL, CAI
Algebraic Geometry (2000): 2000, NICHOLAS IAN SHEPHERD-BARRON, T
Algebraic Geometry (2001): 2001, PELHAM MARK HEDLEY WILSON, T
American History, Paul Mellon: 1992, ANTHONY JOHN BADGER, CL
American History and Institutions, Pitt: 2009, NANCY A.
Machine Learning Conference Report
Part of the conference series Breakthrough Science and Technologies: Transforming our Future. Machine Learning – Conference Report.

Introduction: On 22 May 2015, the Royal Society hosted a unique, high-level conference on the subject of machine learning. The conference brought together scientists, technologists and experts from across academia, industry and government to learn about the state of the art in machine learning from world and UK experts. The presentations covered four main areas of machine learning: cutting-edge developments in software, the underpinning hardware, current applications, and future social and economic impacts. This conference is the first in a new series organised by the Royal Society, entitled Breakthrough Science and Technologies: Transforming our Future, which will address the major scientific and technical challenges of the next decade. Each conference will focus on one technology and cover key issues including the current state of the UK industry sector, the future direction of research, and the wider social and economic implications. The conference series is being organised through the Royal Society's Science and Industry programme, which demonstrates the Society's commitment to reintegrating science and industry at the Society and to promoting science and its value, building relationships and fostering translation. This report is not a verbatim record, but summarises the discussions that took place during the day and the key points raised. Comments and recommendations reflect the views and opinions of the speakers and not necessarily those of the Royal Society. Full versions of the presentations can be found on the Society's website at: royalsociety.org/events/2015/05/breakthrough-science-technologies-machine-learning
AAAI News
Winter News from the Association for the Advancement of Artificial Intelligence.

AAAI-18 Registration Is Open! AAAI-18 registration information is now available at aaai.org/aaai18, and online registration can be completed at regonline.com/aaai18. The deadline for late registration rates is January 5, 2018. Complete tutorial and workshop information, as well as other special program information, is available at these sites.

Make Your Hotel Reservation Now! AAAI has reserved a block of rooms at the Hilton New Orleans Riverside at reduced conference rates.

Student Activities: As part of its outreach to students, AAAI-18 will continue several special programs specifically for students, including the Doctoral Consortium, the Student Abstract Program, Lunch with a Fellow, and the Volunteer Program, in addition to the following:

Student Reception: AAAI will welcome all students to AAAI-18 by hosting an evening student reception on Friday, February 2. Although the reception is especially beneficial to new students at the conference, all are welcome! Please join us and make all the newcomers welcome!

AAAI/ACM SIGAI Job Fair: … looking for internships or jobs to meet with representatives from over 30 companies and academia in an informal "meet-and-greet" atmosphere. If you are representing a company, research organization or university and would like to participate in the job fair, please send an email with your contact information to [email protected] no later than January 5. The organizers of the AAAI/ACM SIGAI Job Fair are John Dickerson (University of Maryland, USA) and Nicholas Mattei (IBM, USA).

The Winograd Schema Challenge: Nuance Communications, Inc. is sponsoring a competition to encourage
Wellcome Investigators March 2011
Wellcome Trust Investigator Awards: Feedback from Expert Review Group members, 28 March 2011.

Roughly 7 months elapse between application and final outcome.

The Expert Review Groups:
1. Cell and Developmental Biology
2. Cellular and Molecular Neuroscience (Zaf Bashir)
3. Cognitive Neuroscience and Mental Health
4. Genetics, Genomics and Population Research (George Davey Smith)
5. Immune Systems in Health and Disease (David Wraith)
6. Molecular Basis of Cell Function
7. Pathogen Biology and Disease Transmission
8. Physiology in Health and Disease (Paul Martin)
9. Population and Public Health (Rona Campbell)

Summary feedback from the ERG panels:
• The bar is very high across all nine panels.
• Track record led: your CV must demonstrate a substantial impact of your research (e.g. high-impact journals, a record of ground-breaking research, a clear upward trajectory). To paraphrase Walport, the aim is "to support scientists with the best track records, obviously appropriate to the stage in their career".
• Notable esteem factors help (but note that "several FRSs were not shortlisted").
• The novelty of your research vision is crucial. Don't just carry on doing more of the same.
• The Trust is not averse to risk (but what about ERG panel members?).
• The success rate for short-listing for interview is ~15–25% at Senior Investigator level (3–5 proposals shortlisted from each ERG).
• There are fewer New Investigator than Senior Investigator applications – an opportunity?
• There are fewer applications overall for the second round, but "the bar will not be lowered".

The Challenge: UoB has roughly 45 existing
Probabilistic Machine Learning: Foundations and Frontiers
Probabilistic Machine Learning: Foundations and Frontiers. Zoubin Ghahramani (University of Cambridge; Alan Turing Institute; Leverhulme Centre for the Future of Intelligence; Uber AI Labs). [email protected]; http://mlg.eng.cam.ac.uk/zoubin/. Sackler Forum, National Academy of Sciences, Washington DC, 2017.

Many related terms: statistical modelling, artificial intelligence, neural networks, machine learning, data mining, data analytics, deep learning, pattern recognition, data science.

Many related fields: engineering, computer science, statistics, machine learning, computational neuroscience, applied mathematics, economics, cognitive science, physics.

Many, many applications: bioinformatics, computer vision, robotics, scientific data analysis, natural language processing, machine learning, information retrieval, speech recognition, recommender systems, signal processing, machine translation, medical informatics, targeted advertising, finance, data compression.

Machine learning is an interdisciplinary field that develops both the mathematical foundations and practical applications of systems that learn from data. Main conferences and journals: NIPS, ICML, AISTATS, UAI, KDD, JMLR, IEEE TPAMI.

Canonical problems in machine learning – Classification: the task is to predict a discrete class label from input data. Applications: face recognition, image recognition,
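The classification task on the slides – predict a discrete class label from input data – can be illustrated minimally. The toy data and the nearest-centroid rule below are illustrative choices, not material from the talk.

```python
import numpy as np

# Toy classification: two clusters of 2-D inputs with discrete labels.
X = np.array([[1.0, 1.2], [0.8, 1.0], [4.0, 4.2], [4.1, 3.9]])
y = np.array([0, 0, 1, 1])             # discrete class labels

# "Train" a nearest-centroid classifier: one mean vector per class.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    """Assign x the label of the nearest class centroid."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

print(predict(np.array([1.0, 1.1])))   # -> 0
print(predict(np.array([3.8, 4.0])))   # -> 1
```

A probabilistic treatment, in the spirit of the talk, would replace the hard centroid rule with class-conditional densities and return a posterior probability over labels rather than a single label.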