Computation and Psychophysics of Sensorimotor Integration


Computation and Psychophysics of Sensorimotor Integration

by

Zoubin Ghahramani

BSE Computer Science, University of Pennsylvania
BA Cognitive Science, University of Pennsylvania

Submitted to the Department of Brain and Cognitive Sciences in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology, September.

© Massachusetts Institute of Technology. All rights reserved.

Author: Department of Brain and Cognitive Sciences, July
Certified by: Michael I. Jordan, Professor, Thesis Supervisor
Certified by: Tomaso Poggio, Uncas and Helen Whitaker Professor, Thesis Co-Supervisor
Accepted by: Emilio Bizzi, Eugene McDermott Professor, Chairman, Department of Brain and Cognitive Sciences

Computation and Psychophysics of Sensorimotor Integration
by Zoubin Ghahramani

Submitted to the Department of Brain and Cognitive Sciences in July in partial fulfillment of the requirements for the degree of Doctor of Philosophy.

Abstract

All higher organisms are able to integrate information from multiple sensory modalities and use this information to select and guide movements. In order to do this, the central nervous system (CNS) must solve two problems: converting information from distinct sensory representations into a common coordinate system, and integrating this information in a sensible way. This dissertation proposes a computational framework based on statistics and information theory to study these two problems. The framework suggests explicit models for both the coordinate transformation and integration problems, which are tested through human psychophysics.

The experiments in Chapter 2 suggest that spatial information from the visual and auditory systems is integrated so as to minimize the variance in localization, and that when the relation between visual and auditory space is artificially remapped, the spatial pattern of auditory adaptation can be predicted from its localization variance. These studies suggest that multisensory integration and intersensory adaptation are closely related through the principle of minimizing localization variance. This principle is used to model sensorimotor integration of proprioceptive and motor signals during arm movements (Chapter 3). The temporal propagation of errors in estimating the hand's state is captured by the model, providing support for the existence of an internal model in the CNS that simulates the dynamic behavior of the arm. The coordinate transformation problem is examined in the visuomotor system, which mediates reaching to visually perceived objects (Chapter 4). The pattern of changes induced by a local remapping of this transformation suggests a representation based on units with large functional receptive fields. Finally, the problem of converting information from disparate sensory representations into a common coordinate system is addressed computationally (Chapter 5). An unsupervised learning algorithm is proposed based on the principle of maximizing mutual information between two topographic maps. What results is an algorithm that develops multiple mutually aligned topographic maps based purely on correlations between the inputs to the different sensory modalities.

Thesis Supervisor: Michael I. Jordan
Title: Professor

Thesis Co-Supervisor: Tomaso Poggio
Title: Uncas and Helen Whitaker Professor
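The minimum-variance integration described in the abstract corresponds to the standard maximum-likelihood rule for combining independent, unbiased Gaussian estimates: each cue is weighted by its inverse variance, and the fused estimate has lower variance than either cue alone. The sketch below is purely illustrative and is not code from the thesis; the function name and the example numbers are assumptions.

```python
def integrate_cues(x_visual, var_visual, x_auditory, var_auditory):
    """Minimum-variance (maximum-likelihood) fusion of two independent,
    unbiased Gaussian location estimates."""
    # Weights are proportional to the inverse variance of each cue.
    w_visual = (1.0 / var_visual) / (1.0 / var_visual + 1.0 / var_auditory)
    w_auditory = 1.0 - w_visual
    fused = w_visual * x_visual + w_auditory * x_auditory
    # The fused estimate is more precise than either cue on its own.
    fused_var = 1.0 / (1.0 / var_visual + 1.0 / var_auditory)
    return fused, fused_var

# Hypothetical example: vision localizes more precisely than audition,
# so the fused percept is pulled mostly toward the visual location.
location, variance = integrate_cues(x_visual=0.0, var_visual=1.0,
                                    x_auditory=10.0, var_auditory=4.0)
print(location, variance)  # -> approximately 2.0 and 0.8
```

On this reading, adaptation driven by an intersensory remapping should likewise be apportioned according to each modality's localization variance, which is one way to state the link between integration and adaptation suggested in Chapter 2. A companion sketch of the internal-model state estimator of Chapter 3 appears after the table of contents.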
Acknowledgments

I thank Daniel Wolpert for reasons too numerous to mention here. Without his mentorship, encouragement, patience, and creativity there would be no psychophysics in this thesis. He made hard work enjoyable and exciting, and I doubt I will find such a combination of friend and collaborator in the future. I thank Michael Jordan for providing an unparalleled training environment; I learned more in his grueling three-hour lab meetings than in the rest of my graduate coursework combined. Without his mentorship there would be no computation in this thesis. I thank Dick Held, Emilio Bizzi, and Tomaso Poggio for kindly serving as members of my thesis committee. Peter Dayan provided helpful comments on the manuscript, and I benefitted from engaging discussions with Geoff Hinton, who also made it possible for me to write this thesis without worrying about searching for a postdoctoral position.

All the members of the Jordan lab provided an excellent environment for research. I especially thank Flip Sabes for critical comments on the manuscript and friendly moral support during the last weeks of writing, Lawrence Saul for tutorials on statistical mechanics, and Tommi Jaakkola for letting me measure his head. Carlotta Domeniconi provided many hours of assistance in conducting experiments. Adee Matan, who earned Jordan lab membership by always using our computers, deserves a special thanks for keeping track of my progress and mental sanity. I thank David Poeppel for egging me on to defend early and for being just as stressed as I was during our last few days. John Houde was my dietary advisor during the thesis and travel companion throughout graduate school. I thank James Thomas for distracting me late at night with horror stories and good music while I was trying to write my dissertation. Thanks to Gregg Solomon for advice on writing Acknowledgment sections.

I thank Jan Ellertsen for guiding me through the tortuous road of academic requirements. I am grateful to Marney Smyth for dedicating many hours to helping me with slides and figures. Ellie Bonsaint provided superb administrative support throughout my graduate education, and Pat Claffey tracked down many obscure articles for me.

I have enjoyed graduate school immensely, mostly due to the wonderful environment provided by the students in the program, and I thank each of them for their individual gift in making this department unique. I am grateful to the McDonnell-Pew Foundation for supporting my studies in this department. I especially want to thank Azita Ghahramani for letting me crash at her place for years while doing a PhD and for being the most wonderful sister and housemate, and Monica Biagioli for moral support and for making my last few months here very special.

Biographical note

Although my family is originally from Shiraz, Iran, I was born in Moscow in February. After four years in Russia my family moved back to Iran for one year, and then to Madrid, Spain. I lived in Spain and attended the American School of Madrid until my high school graduation. I then went to Philadelphia to study at the University of Pennsylvania, where I obtained a BA in Cognitive Science and a BSE in Computer Science. I then entered the doctoral program in Brain and Cognitive Sciences at MIT.

To my father, for all the joy he brought me.

Contents

Introduction
    Outline of the Thesis

I  Integration

Integration and Adaptation of Visual and Auditory Maps
    Introduction
        Background
            Psychophysics
            Neuroscience
    The Computational Model
        Integration
        Adaptation
        Related Models
        Summary
    Overview of the Experiments
    Experiment: Localization of Visual, Auditory, and Visuo-Auditory Stimuli
        Method
        Results
        Discussion
    Experiment: Adaptation to a Visuo-Auditory Remapping
        Method
        Results
        Discussion
    Experiment: Adaptation to Visuo-Auditory Variance
        Method
        Results
        Discussion
    Experiment: Generalization of the Visuo-Auditory Map
        Method
        Results
        Discussion
    Controls
        Alternative Cues to Auditory Stimulus Location
        Pointing with the Left Hand
    Discussion
        Empirical findings
        Implications
        Directions for future work
    Conclusion

An Internal Model for Sensorimotor Integration
    Introduction
    Experiment: Propagation of Errors in Sensorimotor Integration
    Appendix A: Paradigm
    Appendix B: Simulation

II  Coordinate Transformations

Representation of the Visuomotor Coordinate Transformation
    Introduction
        The Visuomotor Coordinate Transformation
        Spatial Generalization
        Contextual Generalization
        Experimental Aims and Overview
    Experiment: Visuomotor Generalization to a One-Point Displacement
        Method
        Results
        Discussion
    Experiment: Visuomotor Generalization to a Two-Point Displacement
        Method
        Results
        Discussion
    Experiment: Contextual Generalization of the Visuomotor Map A
        Method
        Results
    Experiment: Contextual …
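The chapter "An Internal Model for Sensorimotor Integration" listed above (Chapter 3 in the abstract) treats estimation of the hand's state as the combination of a forward model's prediction, driven by the outgoing motor command, with noisy sensory feedback. A Kalman-filter-style estimator is a standard way to write this idea down; the sketch below is a minimal illustration under assumed linear dynamics, and its matrices, noise covariances, and function names are assumptions, not the simulation described in Appendix B of that chapter.

```python
import numpy as np

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])   # assumed linear hand dynamics (position, velocity)
B = np.array([[0.0], [dt]])             # motor command drives the velocity
C = np.array([[1.0, 0.0]])              # proprioception reports position only
Q = 1e-4 * np.eye(2)                    # process (motor) noise covariance, assumed
R = np.array([[1e-2]])                  # sensory noise covariance, assumed

def estimate_hand_state(x_hat, P, u, y):
    """One predict/correct cycle of the hand-state estimator."""
    # Predict the next state with the internal forward model.
    x_pred = A @ x_hat + B @ u
    P_pred = A @ P @ A.T + Q
    # Correct the prediction with the sensory measurement.
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(2) - K @ C) @ P_pred
    return x_new, P_new

# Hypothetical usage: track the hand over one time step.
x_hat, P = np.zeros(2), 1e-3 * np.eye(2)
u = np.array([0.5])            # motor command (arbitrary units)
y = np.array([0.004])          # noisy proprioceptive position reading
x_hat, P = estimate_hand_state(x_hat, P, u, y)
```

Without the correction step the estimated state drifts and its uncertainty grows over the course of a movement; how such estimation errors propagate in time, and whether their growth matches human behavior, is the question the chapter's experiment and simulation address.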