Federated Transfer Reinforcement Learning for Autonomous Driving

Xinle Liang1, Yang Liu1*, Tianjian Chen1, Ming Liu2 and Qiang Yang1

1 Xinle Liang, Yang Liu, Tianjian Chen and Qiang Yang are with the Department of Artificial Intelligence, WeBank, Shenzhen, China. {madawcliang, yangliu, tobychen, qiangyang}@webank.com
2 Ming Liu is with the Robotics and Multi-Perception Laboratory, Robotics Institute, Hong Kong University of Science and Technology, Hong Kong SAR, China. [email protected]
* Corresponding author

Abstract— Reinforcement learning (RL) is widely used in autonomous driving tasks, and training RL models typically involves a multi-step process: pre-training RL models on simulators, uploading the pre-trained model to real-life robots, and fine-tuning the weight parameters on the robot vehicles. This sequential process is extremely time-consuming and, more importantly, knowledge from the fine-tuned model stays local and cannot be re-used or leveraged collaboratively. To tackle this problem, we present an online federated RL transfer process for real-time knowledge extraction, in which every participating agent acts on knowledge learned by the others, even when the agents operate in very different environments. To validate the effectiveness of the proposed approach, we constructed a real-life collision avoidance system with the Microsoft Airsim simulator and NVIDIA JetsonTX2 car agents, which cooperatively learn from scratch to avoid collisions in an indoor environment with obstacle objects. We demonstrate that with the proposed framework, the simulator car agents can transfer knowledge to the RC cars in real time, yielding a 27% increase in the average distance to obstacles and a 42% decrease in collision counts.

I. INTRODUCTION

Recent reinforcement learning (RL) research on autonomous robots has achieved significant performance improvements by employing distributed architectures with decentralized agents [1], [2], an approach termed Distributed Reinforcement Learning (DRL). However, most existing DRL frameworks consider only synchronous learning in a constant environment. In addition, with the fast development of autonomous driving simulators, it is now common to perform pre-training on a simulator and then transfer the pre-trained model to real-life autonomous cars for fine-tuning. One of the main drawbacks of this path is that the model transfer process is conducted offline, which may be very time-consuming, and there is no feedback or collaboration from the fine-tuned models trained in different real-life scenarios.

To overcome these challenges, we propose an end-to-end training process which leverages federated learning (FL, [3]) and transfer learning [4] to enable asynchronous learning of agents from different environments simultaneously. Specifically, we bridge the pre-training on simulators and the real-life fine-tuning processes of various agents with asynchronous updating strategies. Our proposed framework alleviates the time-consuming offline model transfer process in autonomous driving simulations, while the heavy load of training data stays local in the autonomous edge vehicles. The framework can therefore potentially be applied to real-life scenarios in which multiple self-driving technology companies collaborate to train more powerful RL models by pooling their robotic car resources without revealing raw data. We perform extensive real-life experiments on a well-known RL application, i.e., the steering control task for collision avoidance of autonomous driving cars, to evaluate the feasibility of the proposed framework, and demonstrate that it has superior performance compared to the non-federated local training process.
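To make the asynchronous updating strategy concrete, the following minimal sketch shows two workers, standing in for a simulator agent and an RC-car agent, training in parallel and merging weights through a shared server object. All identifiers (FederationServer, agent_loop, sync_every) and the running-average merge rule are our own illustrative assumptions, not details from the paper.

```python
# Minimal, self-contained sketch of asynchronous federated updating:
# each worker trains against its own environment and exchanges weights
# with the server at its own pace, so simulator pre-training and
# real-car fine-tuning proceed in parallel rather than sequentially.
import threading
import numpy as np

class FederationServer:
    """Merges weights pushed by agents; a running average lets agents
    join and synchronize asynchronously."""
    def __init__(self):
        self._lock = threading.Lock()
        self.global_w = None

    def push_and_pull(self, w):
        with self._lock:
            if self.global_w is None:
                self.global_w = w.copy()
            else:
                self.global_w = 0.5 * (self.global_w + w)
            return self.global_w.copy()

def agent_loop(server, steps=1000, sync_every=100):
    w = np.zeros(8)                      # stand-in for network weights
    for step in range(steps):
        w += 0.01 * np.random.randn(8)   # stand-in for one local RL update
        if step % sync_every == 0:
            w = server.push_and_pull(w)  # exchange knowledge with the server

server = FederationServer()
agents = [threading.Thread(target=agent_loop, args=(server,)) for _ in range(2)]
for t in agents:
    t.start()
for t in agents:
    t.join()
```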
A. Related Work

One of the most important tasks in transfer reinforcement learning is to generalize already-learned knowledge to new tasks [5]–[7]. With the fast advance of robotics simulators, many studies have begun to investigate the feasibility and effectiveness of transferring the knowledge of simulators to real-life agents [1], [8]–[11].

[8] proposed a decentralized end-to-end sensor-level collision avoidance policy for multi-robot systems, with the pre-training conducted on the Stage mobile robot simulator (http://rtv.github.io/Stage/). [1] studied the problem of reducing the computationally prohibitive process of anticipating interaction with neighboring agents in a decentralized multi-agent collision avoidance scenario; the pre-trained model is based on training data generated by the simulator. [9] investigated end-to-end nonprehensile rearrangement, which maps raw pixels as visual input to control actions without any form of engineered feature extraction; the authors first trained a suitable rearrangement policy in Gazebo [12], and then adapted the learned policy to real-world input data with their proposed transfer framework.

It can be concluded that for transfer reinforcement learning in robotics, most studies follow the same path: pre-training the RL model on simulators, transferring the model to robots, and fine-tuning the model parameters. These steps are usually executed sequentially, i.e., after the RL models have been pre-trained and transferred to the robots, no further experience or knowledge from the simulators can benefit the final models fine-tuned on the real-life robots. One may then ask: can the transfer and fine-tuning processes be executed in parallel?

The framework proposed in this work places RL tasks in the architecture of federated learning. Note that some recent works also investigate the federated reinforcement learning (FRL) architecture. [13] presents two real-life FRL examples addressing privacy-preserving issues, one in the manufacturing industry and one in medical treatment systems; the authors further investigated multi-agent RL systems that cooperate while respecting privacy-preserving requirements on agent data, gradients and models. [14] studied FRL in autonomous navigation, where the main task is to make the robots fuse and transfer their experience so that they can effectively use prior knowledge and quickly adapt to new environments; the authors presented Lifelong Federated Reinforcement Learning (LFRL), in which robots learn efficiently in a new environment while extending and reusing their prior experience. [15] employed FRL techniques for the personalization of a non-player character, developing a player grouping policy, a communication policy and a federation policy.

B. Our Proposal

Unlike existing FRL research, our motivation originates from the feasibility of conducting online transfer of the knowledge learned in one RL task to another task, combining federated learning with an online transfer model.

In this paper, we present the Federated Transfer Reinforcement Learning (FTRL) framework, which is capable of transferring RL agents' knowledge in real-time on the foundation of federated learning. To the best of our knowledge, this is the first work to combine FRL techniques with an online transfer model. Compared to the existing works above, our proposed framework has the following advantages:

1) Online transfer. The proposed framework is capable of executing the source and the target RL tasks in simulator or real-life environments with non-identical robots, obstacles, sensors and control systems;
2) Knowledge aggregation. Based on the functionality of federated learning, the knowledge learned by the individual agents is aggregated into a shared global model (a minimal aggregation sketch follows this list).
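Since the exact aggregation operator is not spelled out at this point in the paper, the sketch below uses federated averaging in the spirit of [3] as one plausible instantiation of knowledge aggregation: each agent's model is a list of per-layer weight arrays, weighted by the amount of local experience behind it. The function name and the weighting scheme are assumptions.

```python
# FedAvg-style aggregation sketch: a weighted average of per-layer
# weights across agents, where agents backed by more local experience
# (e.g., more environment steps) contribute more to the global model.
import numpy as np

def aggregate(models, num_steps):
    """Weighted average of per-layer weight arrays across agents."""
    total = float(sum(num_steps))
    agg = [np.zeros_like(layer) for layer in models[0]]
    for model, n in zip(models, num_steps):
        for i, layer in enumerate(model):
            agg[i] += (n / total) * layer
    return agg

# Example: two agents, two layers each; agent `a` counts 3x as much.
a = [np.ones((2, 2)), np.zeros(2)]
b = [np.zeros((2, 2)), np.ones(2)]
merged = aggregate([a, b], num_steps=[300, 100])
```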
Fig. 1: Hardware and simulator platforms employed in the FTRL validation experiments: (a) JetsonTX2 RC car; (b) "coastline" map in the Airsim platform; (c) experiment race.

II. HARDWARE PLATFORM AND TASKS

In order to better illustrate and validate the proposed framework, we construct a real-life autonomous system based on three JetsonTX2 RC cars, the Microsoft Airsim autonomous driving simulator and a PC server. Fig. 1 presents the basic hardware and software platforms used in the validation process.

The real-life RL agents run on three RC cars, each of which houses a battery, a JetsonTX2 single-board computer, a USB hub, a LIDAR sensor and an on-board Wi-Fi module. Fig. 1a presents an image of the experiment RC car.

In the collision avoidance experiment, we use a PC both as the model pre-training platform and as the FL server; it is equipped with an 8-core Intel i9-9820X CPU, 32 GB of memory and 4 NVIDIA 2080 Ti GPUs.

Developed by Microsoft, Airsim is a simulator for drones and cars, which serves as a platform for AI research to experiment with ideas on deep reinforcement learning, autonomous driving, etc. The version used in this experiment is v1.2.2-Windows. In the pre-training and federation processes, we use the "coastline" built-in map of the Airsim platform, shown in Fig. 1b.

As can be seen in Fig. 1c, we construct a fence-like experimental race for the collision avoidance tasks in an indoor environment. We regularly change the overall shape of the race and sometimes place obstacles in the race in order to construct different RL environments. However, for a single run of a specific RL task, the race shape and obstacle positions remain unchanged.

III. PROPOSED FRAMEWORK

It is worth noting that the FTRL framework is not designed for any specific RL method. However, in order to thoroughly describe the framework and validate its effectiveness, Deep Deterministic Policy Gradient (DDPG, [16]) is chosen as the RL implementation.
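For concreteness, here is a minimal, generic DDPG update step in PyTorch. It follows the standard formulation of [16] (critic regression toward a bootstrapped target, deterministic policy gradient for the actor, Polyak-averaged target networks); the network sizes and learning rates are our assumptions, not the paper's configuration.

```python
# Generic DDPG update sketch: one gradient step for the critic and the
# actor on a batch of transitions, followed by a soft target update.
import copy
import torch
import torch.nn as nn

obs_dim, act_dim, gamma, tau = 8, 1, 0.99, 0.005

actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                      nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s2, done):
    """One update on a batch of transitions (s, a, r, s2, done)."""
    # Critic: regress Q(s, a) toward the bootstrapped one-step target.
    with torch.no_grad():
        q_next = critic_t(torch.cat([s2, actor_t(s2)], dim=1))
        q_target = r + gamma * (1.0 - done) * q_next
    critic_loss = nn.functional.mse_loss(
        critic(torch.cat([s, a], dim=1)), q_target)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    # Actor: deterministic policy gradient, i.e., maximize Q(s, pi(s)).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    # Slowly track the online networks with Polyak averaging.
    for net, net_t in ((actor, actor_t), (critic, critic_t)):
        for p, p_t in zip(net.parameters(), net_t.parameters()):
            p_t.data.mul_(1.0 - tau).add_(tau * p.data)

# Example call with a random batch of 32 transitions:
# B = 32
# ddpg_update(torch.randn(B, obs_dim), torch.randn(B, act_dim),
#             torch.randn(B, 1), torch.randn(B, obs_dim), torch.zeros(B, 1))
```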
