Published as a conference paper at RL4RealLife 2020

ENABLING SAFE EXPLORATION OF ACTION SPACE IN REAL–WORLD ROBOTS

Shivam Garg∗1, Homayoon Farrahi∗1, and A. Rupam Mahmood1,2

1 Reinforcement Learning and Artificial Intelligence Lab, University of Alberta, Canada
2 Canada CIFAR AI Chair, Amii, Canada
{sgarg2, farrahi, [email protected]}

∗Equal contribution.

ABSTRACT

The use of robots has been steadily increasing, and reinforcement learning (RL) has shown increasingly promising results in controlling physical robots. However, due to physical safety constraints, RL methods are often unable to exploit the full action space of the robots, which can be crucial in achieving good control performance. In this work, we experimentally demonstrate that using higher-magnitude action spaces leads to better policy performance on a physical reaching task. Based on this result, we propose a curriculum technique for safe exploration of the full action space. Our experiments on a physical robot show that this technique, applied to a policy gradient method, allows RL agents to explore safely while utilizing the full range of the action space.

1 INTRODUCTION

Robots are important in automating manual tasks and have already been deployed extensively in manufacturing to reduce costs and improve efficiency. They have been instrumental in manufacturing automation (automobile industry), collaborating with humans in high-precision environments (medical robotics), and working in dangerous situations (mine detection robots). Robots have greatly reduced human physical effort. However, developing robot control algorithms is a painstaking task requiring expert knowledge. Apart from being time consuming to develop, traditional approaches do not scale across tasks, i.e., a human expert needs to separately devise an algorithm for each different robotic task. The need to minimize human intellectual effort motivates the development of methods that scale across tasks and help robots learn effective control without human supervision.

Reinforcement learning (RL) (Sutton & Barto, 2018) promises to alleviate these problems by providing general-purpose control algorithms that can automate the learning process of the robot with minimal human intervention. However, the application of RL methods to real-world robots has been limited due to the large amount of experience required to learn a policy from scratch and the complexity associated with physical hardware systems.

Recent work has shown some promising results in applying RL to real-world robots (Mahmood et al., 2018a; 2018b). RL methods learn control by directly interacting with the robot in real time. They require minimal human intervention (except while setting up the task) and have been shown to consistently learn good control policies. However, in these experiments, RL methods were not able to learn control policies better than those obtained from traditional control engineering (scripted agents such as a PID controller). We hypothesize that there are two major reasons for this observation:

1. The range of the action space (such as motor torques) was restricted during learning, in order to prevent aggressive exploratory behavior (such as jerkiness, motors getting too hot, or violent movements that harm the surroundings and the robot's internal components); and
2. The training time on the robots was limited to short durations.

Figure 1: Dynamixel MX–64AT motor.
Consequently, increasing the range of actions while learning robot control, and training the robots for long durations, may be necessary to learn superior control. The focus of this work is on mitigating the effects arising from the former reason. This area of study, in which RL algorithms are applied to critical systems with safety constraints, is known as Safe RL (García & Fernández, 2015). We propose to solve this problem by developing a method that enables robots to safely explore their complete action space.

In this project, we focus on controlling the Dynamixel MX–64AT motor shown in Figure 1. We try to answer the following questions:

1. Does providing the motor access to a larger action space actually result in superior performance; and
2. How do we safely train a policy on the complete action space of the robot?

To answer the first question, we train the robot on identical control tasks with a sequentially increasing action space (e.g., the torque range increases from [−100, 100] to [−200, 200]). This helps us evaluate whether the policies learned for tasks with a larger action space are superior to those learned on a smaller action space. For each new task, we can either learn the policy from scratch or utilize the policy learned from the previous task, leading to two separate approaches. To answer the second question, we train a policy successively on identical tasks with an increasing action space based on a fixed schedule. We expect the smoothness of the actions generated by the policy learned on the previous task to carry over to subsequent tasks with larger action spaces, effectively enabling safe exploration.

In summary, our method enables robots to safely explore the complete action space. By utilizing the full motor capabilities of robots, RL methods would ideally be able to find control policies superior to those obtained from classical techniques, which would help increase the adoption of robots in different domains.

2 BACKGROUND AND METHODOLOGY

2.1 TASK

We run our experiments on the Dynamixel MX–64AT servo motor manufactured by ROBOTIS. It features a stall torque of 6 N·m and a maximum no-load speed of 63 rpm. We define our task similarly to the DXL–Reacher task of Mahmood et al. (2018b). In this task, the agent can control the motor through position, speed, or torque commands; we use torque control for all of our experiments. The agent's goal is to move the motor to the target position as fast as possible.

2.2 NO SCHEDULING: FIXED TORQUE RANGE

To answer the first research question, we compare the learning curves for four different constant torque ranges. For these experiments, the network's mean output is fed into a tanh() function to map it between -1 and 1 and is then multiplied by a scaling factor t. Runs with different scaling factors are then compared in terms of the final return performance and the speed of learning.

Figure 2: Average returns (across a batch size of 40 episodes) against transition steps for different variants of the learning methods on the Dynamixel Reacher task. The graphs show the mean and standard error of five independent runs of each method. The solid horizontal line corresponds to the average performance of a simple PID controller on the reacher task.
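To make the fixed-range setup of Section 2.2 concrete, the following is a minimal sketch of the action mapping, assuming the squashing and scaling are applied to the policy mean; the function name and the default value of t are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def fixed_range_torque(policy_mean, t=100.0):
    """Squash the policy network's mean output into [-1, 1] with tanh and
    scale it by a fixed factor t, so the commanded torque always lies in
    [-t, t]. The default t = 100 is illustrative; the experiments compare
    four constant torque ranges (e.g. [-100, 100], [-200, 200])."""
    return np.tanh(policy_mean) * t
```

For example, fixed_range_torque(0.3, t=200.0) gives a torque of roughly 58, while the same mean under t=100.0 gives roughly 29, which is what allows larger ranges to produce more forceful actions from the same policy output.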
2.3 SCHEDULING METHODS

Next, we investigate the effect of two scheduling techniques for gradually increasing the action space.

Scaling schedule method: The first method is scaling. The network's output is mapped to the range [-1, 1] using tanh() and then multiplied by a constant scaling factor. This scaling factor can then be increased using different schedules, which differ in their type (e.g., linear or quadratic) and in how often the scaling factor is increased during training. Using the scaling technique results in sudden jumps in the action magnitudes whenever the scaling factor is increased. However, the output of the policy network (the mean of the policy distribution) remains bounded via tanh().

Clipping schedule method: Another technique is to clip the policy action. In this case, the network's output is multiplied by a constant factor and clipped to lie within the range [-clip, clip]. This range can then be increased using the same scheduling methods described above. In the clipping technique, the output mean is not bounded and can go well beyond the clipping range. This makes it harder to learn policies, since small changes to the policy can result in the same output action if the action lies beyond the clipping range and therefore gets clipped to the threshold. However, unlike the scaling method, there are no sudden jumps in the actions.

3 EXPERIMENTAL SETUP

3.1 ENVIRONMENT AND THE LEARNING ALGORITHM

We use our own implementations of the Reacher task and the PPO algorithm (Schulman et al., 2017). For PPO, we use neural networks as function approximators for both the policy and the value function and train them using the Adam optimizer (Kingma & Ba, 2014). Our observation space consists of [motor angle, motor speed, target angle]. Each observation is normalized to lie between 0 and 1. Both neural networks have two fully-connected hidden layers of size 64. The output of the value network is a single number, and the output of the policy network consists of two numbers m and s defining the normal distribution from which we draw the agent's actions: A_t ∼ N(m, s); this way, both m and s are state dependent. We apply a softplus() function to s to ensure that it stays positive. For the fixed torque and scaling schedule variants, we constrain the mean output of the policy network between -1 and 1 by applying a tanh() function.

Table 1: The average values (along with standard deviations) of various metrics across the learning phase of different policy variants on the reacher task. T-value policies refer to fixed torque policies with t = value, S-value policies refer to scaling schedule policies with schedule type equal to value, and the CLIP policy refers to the clipping schedule policy.
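To make the two schedules of Section 2.3 concrete, here is a minimal sketch of the scaling and clipping action transforms together with one possible linear schedule for growing the range over training. The function names, the constant factor in the clipping variant, and the schedule endpoints are illustrative assumptions; the paper specifies only that the range is increased according to a fixed schedule (e.g., linear or quadratic).

```python
import numpy as np

def scaling_schedule_action(raw_mean, scale):
    """Scaling schedule: the network output is squashed into [-1, 1] by tanh
    and multiplied by the current scaling factor, so the action stays in
    [-scale, scale]. Increasing `scale` causes a sudden jump in action
    magnitudes, but the policy mean itself remains bounded."""
    return np.tanh(raw_mean) * scale

def clipping_schedule_action(raw_mean, clip, factor=1.0):
    """Clipping schedule: the (unbounded) network output is multiplied by a
    constant factor and clipped to [-clip, clip]. Outputs far beyond the
    clipping range all map to the same action, but growing `clip` never
    produces sudden jumps in the actions."""
    return np.clip(raw_mean * factor, -clip, clip)

def linear_range_schedule(step, total_steps, start=100.0, end=400.0):
    """One possible fixed schedule: grow the action range linearly from
    `start` to `end` over the course of training (the endpoints here are
    illustrative, not taken from the paper)."""
    frac = min(step / total_steps, 1.0)
    return start + frac * (end - start)
```

The contrast between the two methods is visible here: when `scale` jumps from 100 to 200, scaling_schedule_action can double in magnitude from one step to the next, whereas clipping_schedule_action changes only in how much of the network's output it preserves.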
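The policy network of Section 3.1 can likewise be sketched in code. The PyTorch snippet below follows the stated design (two fully-connected hidden layers of 64 units, a state-dependent mean and standard deviation, softplus on s, and an optional tanh on the mean for the fixed torque and scaling schedule variants); the tanh hidden activations and the one-dimensional torque action are assumptions on our part.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianTorquePolicy(nn.Module):
    """Gaussian policy sketch for the reacher task: outputs a normal
    distribution N(m, s) whose mean and standard deviation both depend
    on the state."""

    def __init__(self, obs_dim=3, bound_mean=True):
        super().__init__()
        # Two fully-connected hidden layers of size 64, as in the paper;
        # the tanh activations are an assumption.
        self.body = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
        )
        self.mean_head = nn.Linear(64, 1)
        self.std_head = nn.Linear(64, 1)
        self.bound_mean = bound_mean

    def forward(self, obs):
        h = self.body(obs)
        m = self.mean_head(h)
        if self.bound_mean:  # fixed torque and scaling schedule variants
            m = torch.tanh(m)
        s = F.softplus(self.std_head(h))  # keeps s strictly positive
        return torch.distributions.Normal(m, s)


# Example: sample a (pre-scaling) action for one normalized observation
# of the form [motor angle, motor speed, target angle].
policy = GaussianTorquePolicy()
obs = torch.tensor([[0.2, 0.5, 0.8]])
action = policy(obs).sample()
```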
