Bayesian Algorithmic Modeling in Cognitive Science
Julien Diard


To cite this version: Julien Diard. Bayesian Algorithmic Modeling in Cognitive Science. Computer science. Université Grenoble Alpes, 2015. tel-01237127.
HAL Id: tel-01237127, https://hal.archives-ouvertes.fr/tel-01237127. Submitted on 2 Dec 2015.

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

HABILITATION À DIRIGER DES RECHERCHES
École Doctorale Mathématiques, Sciences et Technologies de l'Information, Informatique
Speciality: Applied Mathematics and Computer Science
Presented by Julien DIARD, prepared at the Laboratoire de Psychologie et NeuroCognition

Bayesian Algorithmic Modeling in Cognitive Science

Habilitation thesis defended publicly on 27 November 2015, before a jury composed of:
- Dr Frédéric ALEXANDRE, DR INRIA, INRIA Bordeaux Sud-Ouest (Examiner)
- Pr Laurent BESACIER, PU Université Joseph Fourier, Laboratoire d'Informatique de Grenoble (Examiner)
- Dr Pierre BESSIÈRE, DR CNRS, Institut des Systèmes Intelligents et de Robotique (Examiner)
- Pr Philippe LERAY, PU Polytech'Nantes, Laboratoire d'Informatique de Nantes Atlantique (Reviewer)
- Dr Pierre-Yves OUDEYER, DR INRIA, INRIA Bordeaux Sud-Ouest (Reviewer)
- Pr Mark PITT, Professor, Ohio State University, Language Perception Laboratory (Reviewer)

Contents

1 Introduction
2 Bayesian Programming
  2.1 Preliminary: Probabilities and probabilistic calculus
  2.2 Bayesian Programming methodology
  2.3 Tools for structured Bayesian Programming
  2.4 Relation with other probabilistic frameworks
  2.5 Graphical representation of Bayesian Programs
3 Bayesian modeling of reading and writing
  3.1 Bayesian modeling of letter reading and writing: BAP
  3.2 Bayesian modeling of word recognition and visual attention: BRAID
4 Bayesian modeling of speech perception and production
  4.1 Bayesian modeling of communicating agents: COSMO
  4.2 Bayesian modeling of language universals: COSMO-Emergence
  4.3 Bayesian modeling of speech perception: COSMO-Perception
  4.4 Bayesian modeling of speech production: COSMO-Production
  4.5 Discussion
5 Bayesian modeling of cognition or modeling Bayesian cognition?
  5.1 Marr's hierarchy of models in cognitive science
  5.2 Bayesian models in Marr's hierarchy
  5.3 Is Bayesian modeling a psychological theory?
  5.4 Can Bayesian models be psychological theories?
  5.5 Summary and position
6 Conclusion
  6.1 Main contribution
  6.2 Main perspectives
Bibliography
A Other projects: bullet list summaries
  A.1 Bayesian modeling of robotic navigation
  A.2 Bayesian modeling of human navigation
  A.3 Bayesian modeling of human proxemics
  A.4 Bayesian modeling of eye writing: BAP-EOL
  A.5 Bayesian modeling of performance: PARSEVAL
  A.6 Bayesian modeling of distinguishability of models: DEAD
B Curriculum Vitæ

List of Figures

2.1 Commented structure of the Bayesian Programming methodology.
2.2 Taxonomies of probabilistic frameworks.
3.1 Graphical representation of the structure of the BAP model.
3.2 Example of sample traces and the learned probability distributions in the letter representation model.
3.3 Illustration of between-writer variability when writing a's.
3.4 Illustration of inter-trial variability when writing a's.
3.5 Examples of trajectory and letter copying.
3.6 Example cases where motor knowledge helps identify the correct letter.
3.7 Graphical representation of the structure of the BRAID model.
3.8 Spatial repartition of visual attention in BRAID.
3.9 Study of a portion of BRAID's parameter space.
3.10 Temporal evolution of word recognition in BRAID.
4.1 General models of a communication situation and a communicating agent.
4.2 Bayesian questions and inferences for speech production and perception tasks.
4.3 Emergence of communication codes in societies of COSMO-Emergence agents.
4.4 Emergence of vowel systems in societies of COSMO-Emergence agents.
4.5 Emergence of stop consonant systems in societies of COSMO-Emergence agents.
4.6 Schema of the supervised learning scenario in COSMO.
4.7 Graphical representation of the structure of the COSMO-Perception model.
4.8 Performance of perception processes for syllables presented at various levels of noise.
4.9 Graphical representation of the structure of the COSMO-Production model.
4.10 Acoustic categorization in COSMO-Production.
4.11 Production of the /aki/ phoneme sequence in COSMO-Production.
4.12 Distances between motor commands of /aki/ sequences in GEPPETO and COSMO-Production.
6.1 Graphical representation of the structure of a model integrating BAP, BRAID and COSMO components.
A.1 Illustration of the Sensorimotor Interaction of Bayesian Maps.
A.2 Examples of von Mises probability distributions for various µ, λ parameter values.
A.3 Path integration using von Mises probability distributions.
A.4 Bayesian cue combination in the linear case.
A.5 Bayesian cue combination in the orientation case.
A.6 Bayesian modeling of personal and interaction social spaces.
A.7 Video game demonstrating the PARSEVAL algorithm.
A.8 Bayesian distinguishability of models illustrated on memory retention.
To Éva, who, for her part, knows how to write.

Acknowledgment

Writing a habilitation thesis is an odd exercise. It consists of course of exposing scientific topics and results, in a domain which is hopefully well known by the author. This part of the exercise is rather usual, with the common trade-offs between detail and conciseness, scientific rigor and pedagogical concerns, etc. However, this thesis is also, in part, a biographical account, as it summarizes the scientific journey of the author and appears as a single-author manuscript. Attributing anything found in the remaining pages to me alone would be a gross mischaracterization at best, an agreed-upon lie. There are of course many other actors to consider, from the well-known shoulders of giants, to support and inspiration found in all places, most of them outside the lab. It is also conventional to acknowledge here all lab directors, team leaders, supporting agencies, and many colleagues, friends and family; consider it done.

The silver lining of the biographical nature of this manuscript is that I took it as an opportunity to list and count the supervisors, collaborators and students I directly worked with. Supervisors and students were easy to count; for collaborators, I settled upon the list of those I have co-authored papers with. In this manner, I find that there are N = 48 people I have worked with over the years. Most, but not all of them, are mentioned in the following pages. Out of these, I find that about n = 12 of them are, to my eyes, exceptional scientists. Of course, I will not name names, so as to stay cordial to the others. Also, and more importantly, this estimation is really tongue-in-cheek; I would not presume to judge anyone here, lest they judge me in return, which would evidently be unpleasant. In any case, that number n is certainly larger than expected. Let me offer a back-of-the-envelope estimation of my luck.
Assuming that being "exceptional" means lying beyond the µ + 2σ boundary, each colleague is exceptional with probability roughly p = 0.025. Having n = 12 exceptional colleagues out of N = 48 then has probability

C(N, n) p^n (1 − p)^(N−n) = 1.6691 × 10⁻⁹.

Looser criteria for being exceptional of course increase this number, but even if "exceptional" included ten percent of the population, the probability would still be very small (around 10⁻³). Thus, I find that having such an exceptional collaboration network had a very small probability of occurring. I therefore find myself extremely lucky, and I have demonstrated it. As a consequence, I feel extremely thankful, but you will have to take my word for it, as, of course, you do not have direct access to my cognitive constructs. It will be no surprise that probabilities, and building models of presumed cognitive entities, are themes that will permeate the following pages.

CHAPTER 1
Introduction

My research domain is Bayesian modeling of sensorimotor systems. I am interested both in natural and artificial sensorimotor systems. In other words, my research is multidisciplinary by nature, at the crossroads, on the one hand, of mathematics and computer science, and, on the other hand, of cognitive science and experimental psychology. By "sensorimotor systems" above, I mean systems with sensors and actuators, like animals and robots, that perceive, reason and act, not necessarily in this order. Indeed, a classical, coarse-grained view of information processing in both experimental psychology and robotics considers a three-step process: first, information collection and pre-processing to acquire or update internal representations of external phenomena of interest (perception); second, computation involving these internal representations to prepare future actions (planning); and third, action execution and monitoring (action). This very simple model of information processing is, like most models, both wrong and useful.
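The back-of-the-envelope computation above can be checked directly; a minimal sketch, using only the values stated in the text (N = 48, n = 12, p = 0.025):

```python
from math import comb

# Probability of exactly n "exceptional" colleagues out of N, assuming each
# colleague is independently exceptional with probability p (values from the text).
N, n, p = 48, 12, 0.025

prob = comb(N, n) * p**n * (1 - p)**(N - n)
print(f"{prob:.4e}")  # prints 1.6691e-09, matching the figure in the text
```

With p = 0.1 instead, the same expression evaluates to about 1.6 × 10⁻³, which is the "around 10⁻³" figure quoted for the looser criterion.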
Indeed, it is useful, for instance, as a rough guide to understanding information flow in neurobiology; and at the same time, it is wrong because it hides away some of the complexity of said information flow (sensory prediction, temporal loop effects, top-down tuning of perception, etc.).
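The three-step view above can be sketched as a loop; this is purely illustrative (not a model from the thesis), and all names and behaviors are hypothetical:

```python
# Illustrative sketch of the classical perception-planning-action loop.
def perceive(observation, belief):
    """Perception: update the internal representation from new sensor data."""
    belief = dict(belief)
    belief["last_observation"] = observation
    return belief

def plan(belief):
    """Planning: compute a future action from the internal representation."""
    return "turn" if belief["last_observation"] == "obstacle" else "advance"

def act(action):
    """Action: execute (here, merely report) the selected action."""
    return f"executing: {action}"

belief = {"last_observation": None}
for observation in ["free", "obstacle", "free"]:
    belief = perceive(observation, belief)
    print(act(plan(belief)))
```

The complexities the text mentions (sensory prediction, temporal loops, top-down tuning of perception) are precisely what such a strictly feed-forward loop leaves out.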