
Combining Reinforcement Learning and Causal Models for Robotic Applications

Arquímides Méndez Molina, Eduardo Morales Manzanares, Luis Enrique Sucar Sucar

Technical Report No. CCC-20-005
September, 2020
© Coordinación de Ciencias Computacionales
INAOE
Luis Enrique Erro 1, Sta. Ma. Tonantzintla, 72840, Puebla, México

Abstract

Both reinforcement learning (RL) and causal modeling (CM) are indispensable parts of machine learning, and each plays an essential role in artificial intelligence. However, they are usually treated separately, despite the fact that both are directly relevant to problem-solving processes. On the one hand, reinforcement learning has proven successful in many sequential decision problems (robotics being a prominent field of application); on the other hand, causal modeling using probabilistic graphical models is a novel but relevant and related area with untapped potential for any learning task. In this Ph.D. research proposal, we combine both areas to improve their respective learning processes, especially in the context of our application area (service robotics). The idea is to use observational and interventional data from a reinforcement learning agent to discover the underlying causal structure, and simultaneously to use this structure to learn a policy for a given task better and faster. The preliminary results obtained so far are a good starting point for thinking about the success of our research project, especially for the part of our hypothesis which states that once the causal model is known, the learning time can be improved when compared with traditional reinforcement learning algorithms.

Keywords: Reinforcement Learning, Causal Models, Robotics

Contents

1 Introduction
  1.1 Motivation
  1.2 Justification
  1.3 Problem Statement
  1.4 Research Questions
  1.5 Hypothesis
  1.6 Objectives
    1.6.1 General Objective
    1.6.2 Specific Objectives
  1.7 Scope and Limitations
  1.8 Expected Contributions
  1.9 Outline
2 Background
  2.1 Reinforcement Learning
    2.1.1 Markov Decision Process
    2.1.2 Learning Algorithms
    2.1.3 Model-Based RL vs. Model-Free RL
  2.2 Causality
    2.2.1 Difference between Causal Relations and Associative Relations
    2.2.2 Advantages of Causal Models
    2.2.3 Causal Probabilistic Graphical Models
    2.2.4 Causal Discovery
    2.2.5 Causal Inference
  2.3 Summary
3 Related Work and State-of-the-Art
  3.1 RL and Causal Inference
  3.2 RL for Causal Discovery
  3.3 Causal RL
  3.4 Summary
4 Research Proposal
  4.1 Methodology
  4.2 Work Plan
  4.3 Publications Plan
5 Preliminary Results
  5.1 Using Causal Models to Improve Reinforcement Learning
    5.1.1 Experiments
    5.1.2 Results
    5.1.3 Conclusions of this Experiment
  5.2 Using Reinforcement Learning for Causal Discovery
    5.2.1 The Reinforcement Learning Task
    5.2.2 The Candidate Models
    5.2.3 Causal Discovery Algorithms
    5.2.4 Experiments and Results
  5.3 Combining Reinforcement Learning and Causal Discovery
6 Final Remarks

1 Introduction

Reinforcement Learning (RL) [48] is the study of how an agent (human, animal, or machine) can learn to choose actions that maximize its future rewards. This approach is inspired by the way humans learn: the agent explores the environment and learns a task through the rewards associated with each action taken in each situation (labeled as a state) along the way. Determining what action to take in each state is known as a policy.
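To make this setting concrete, the sketch below shows tabular Q-learning, one of the classic RL algorithms discussed later in this proposal. It is a minimal illustration, not the method developed here; the environment interface (reset, step, actions) is a hypothetical stand-in for any small discrete task.

import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    # Q maps (state, action) pairs to estimated returns; unseen pairs start at 0.
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy policy: explore a random action with probability epsilon.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # One-step temporal-difference update toward the bootstrapped target.
            target = reward if done else reward + gamma * max(
                Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = next_state
    return Q  # the greedy policy reads the argmax over Q[(s, a)]

Note that the agent learns by trial and error alone: nothing in this loop tells it which actions could not possibly have caused a given outcome, which is precisely the gap this proposal aims to fill.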
RL provides a promising technique to solve complex sequential decision-making problems in several domains, such as healthcare, economics, and robotics (our area of interest), among others. However, existing studies apply RL algorithms to discover optimal policies for a targeted problem but ignore the abundant causal relationships present in the target domain.

Causal Modeling (CM) [39] is another learning paradigm, concerned with uncovering the cause-effect relations between a set of variables. It provides the information for an intelligent system to predict what may happen next, so that it can better plan for the future. In this paradigm, it is also possible to reason backwards: if I desire this outcome, what actions should I take? Given the causal structure of a system, it is possible to predict what would happen if some variables were intervened on, estimate the effect of confounding factors that affect both an intervention and its outcome, and predict the outcomes of cases that were never observed before.
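As a toy illustration of this interventional reasoning, consider the following two-variable structural causal model. The variables and the probabilities in it are invented for this example and are not part of the proposal: whether a robot's gripper is calibrated is assumed to causally affect whether a grasp succeeds.

import random

def sample(do=None):
    """Draw one sample from the toy SCM; `do` overrides a variable (an intervention)."""
    do = do or {}
    calibrated = do.get("calibrated", random.random() < 0.7)
    # Mechanism: grasping succeeds 90% of the time when calibrated, 20% otherwise.
    p = 0.9 if calibrated else 0.2
    grasp_success = do.get("grasp_success", random.random() < p)
    return {"calibrated": calibrated, "grasp_success": grasp_success}

# P(grasp_success | do(calibrated=True)), estimated by Monte Carlo:
n = 10_000
hits = sum(sample(do={"calibrated": True})["grasp_success"] for _ in range(n))
print(hits / n)  # ~0.9: the predicted effect of forcing calibration

Because the model encodes mechanisms rather than correlations, the same code answers observational queries (sample with do=None) and interventional ones, which is exactly the distinction exploited throughout this proposal.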
In recent years, the machine learning research community has expressed growing interest in both fields. The interest in reinforcement learning has been fueled by significant achievements in combining deep learning and reinforcement learning to create agents capable of defeating human experts; prominent examples include the ancient strategy game Go [43] and Atari games [34]. Both RL and CM are indispensable parts of machine learning, and each plays an essential role in artificial intelligence; however, they are usually treated separately, despite the fact that both are directly relevant to problem-solving processes. At present, the first works focusing on the relationship between these learning methods are beginning to appear [18, 21, 51, 8]. However, growth in what some are beginning to call Causal RL [31, 32] is to be expected, on its way to becoming an indispensable part of general artificial intelligence. CausalRL seems to mimic human behavior: it learns causal effects from an agent interacting with the environment and then optimizes its policy based on the learned causal relationships. More specifically, humans summarize rules or experience from their interaction with nature and then exploit them to improve their adaptation in the next exploration [31].

One area with great application possibilities is robotics. So far, the use of traditional RL techniques for learning tasks in robotics has been hampered by the following aspects: (i) problems with learning in continuous spaces, (ii) the inability to re-use previously learned policies in new, although related, tasks, (iii) difficulty in incorporating domain knowledge, (iv) long learning times, and (v) the need for many data samples. Our research efforts will focus on the use of causal models in favor of reinforcement learning, mainly to attack the problems raised in points (ii), (iii), and (iv). On the other hand, we believe that it is possible to use the reinforcement learning process, through directed interventions, to improve or discover new relationships of the underlying causal model for the given task or problem.

Throughout this document, a new methodology for learning and using causal models during reinforcement learning is developed and hypothesized to be computationally feasible.

1.1 Motivation

One of the main motivations for this work is that the area of robotics has certain characteristics that facilitate causal discovery, such as the fact that interventions (experiments) can be made, which is prohibitive in other areas. If an agent can know the possible consequences of its actions, it can then make a better selection among them. This is particularly relevant in RL because that knowledge, which can be given by a causal model, can significantly reduce the exploration process and therefore accelerate learning (as will be seen in the preliminary results). On the other hand, trying to learn causal models from observational data presents several problems, which can be reduced if we can make interventions, so more reliable causal models can be learned. This is relevant for robotics because, under certain circumstances, targeted interventions can be made to learn causal models. In this thesis we are going to see how we can link RL and causal discovery to learn policies faster and causal models better.¹

¹ It is worth mentioning that machine learning pioneers such as Yoshua Bengio (a computer scientist at the University of Montreal who shared the 2018 Turing Award for his work on deep learning) have recently suggested that creating algorithms that can infer cause and effect is the key to avoiding another AI winter and unlocking new frontiers in machine intelligence.

In this way, a very attractive set of potential applications arises, such as: explanation (the agent can explain the reasons for its actions using causal models), transfer (the learned causal models can be directly reused between similar tasks), and efficiency (reducing the long learning times of reinforcement learning).

1.2 Justification

This research will address the following open problems reported in the literature.

From the reinforcement learning perspective:

1. Long training times and many training examples.
2. Difficulty in re-using previously learned policies in new, although related, tasks.
3. Difficulty in incorporating causal relationships of the target task.

From the causal modeling perspective:

1. How to combine observational and interventional data from a robotics domain in an efficient way for causal discovery?
2. How to use and guide reinforcement learning experiences for causal discovery?

1.3 Problem Statement

How can we provide an intelligent agent with the ability to simultaneously learn causal relationships and efficiently induce task policies, based on the experiences obtained during the reinforcement learning process?

Formally: let G be a causal graphical model and let M = (S, A, T, R) be a sequential Markov Decision Process (MDP) whose actions (A), states (S), and rewards (R) are causally related and correspond to variables in G. Let RL_{π,Q} denote the process of learning a policy π and a value function Q for M following a reinforcement learning algorithm. Let CL_G denote the learning process of G following a causal discovery algorithm.
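A rough sketch of how RL_{π,Q} and CL_G could be interleaved is shown below. Every collaborator in it (the agent, the discovery routine, the action filter) is a placeholder for a component this proposal leaves open, not an existing implementation.

def causal_rl_loop(env, agent, discover_structure, promising_actions, episodes=100):
    """Interleave policy learning (RL_{π,Q}) with causal discovery (CL_G).

    Placeholder collaborators, to be instantiated by the proposed methodology:
      agent.select / agent.update  -- any RL algorithm (e.g., Q-learning)
      discover_structure(data)     -- any causal discovery algorithm
      promising_actions(G, s, A)   -- an action filter derived from graph G
    """
    data = []   # interventional experiences (s, a, s', r)
    G = None    # current estimate of the causal graph
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Once available, the causal model restricts exploration.
            actions = env.actions if G is None else promising_actions(G, state, env.actions)
            action = agent.select(state, actions)
            next_state, reward, done = env.step(action)
            data.append((state, action, next_state, reward))   # data for CL_G
            agent.update(state, action, reward, next_state)    # RL_{π,Q} step
            state = next_state
        G = discover_structure(data)  # refine G after each episode
    return agent, G

The key design choice this sketch reflects is that every action the agent takes is simultaneously an intervention, so the same experience stream feeds both learning processes at once.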