Reinforcement Learning with Quantitative Verification for Assured Multi-Agent Policies

Joshua Riley 1 (https://orcid.org/0000-0002-9403-3705), Radu Calinescu 1, Colin Paterson 1, Daniel Kudenko 2 and Alec Banks 3
1 Department of Computer Science, University of York, York, U.K.
2 L3S Research Centre, Leibniz University, Hanover, Germany
3 Defence Science and Technology Laboratory, U.K.

Keywords: Reinforcement Learning, Multi-Agent System, Quantitative Verification, Assurance, Multi-Agent Reinforcement Learning.

Abstract: In multi-agent reinforcement learning, several agents converge together towards optimal policies that solve complex decision-making problems. This convergence process is inherently stochastic, meaning that its use in safety-critical domains can be problematic. To address this issue, we introduce a new approach that combines multi-agent reinforcement learning with a formal verification technique termed quantitative verification. Our assured multi-agent reinforcement learning approach constrains agent behaviours in ways that ensure the satisfaction of requirements associated with the safety, reliability, and other non-functional aspects of the decision-making problem being solved. The approach comprises three stages. First, it models the problem as an abstract Markov decision process, allowing quantitative verification to be applied. Next, this abstract model is used to synthesise a policy which satisfies safety, reliability, and performance constraints. Finally, the synthesised policy is used to constrain agent behaviour within the low-level problem with a greatly lowered risk of constraint violations. We demonstrate our approach using a safety-critical multi-agent patrolling problem.

1 INTRODUCTION

Multi-agent systems (MAS) have the potential for use in a range of different industrial, agricultural, and defence domains (Fan et al., 2011). These systems, which allow multiple robots to share responsibilities and work together to achieve goals, can be used in applications where it would not be practical or safe to involve humans. Multiple robotic agents fitted with specialised tools and domain-specific functionality can work together to achieve complex goals which would otherwise require human agents to place themselves at risk. MAS could be particularly beneficial within hazardous work environments, such as search and rescue operations (Gregory et al., 2016), or where tasks need to be completed in irradiated places. Indeed, this has been seen previously with the Fukushima nuclear power plant disaster, where multiple robots were used to complete jobs (Schwager et al., 2017). Many of these complex and hazardous environments require the agents to operate independently of direct human control, and it is these environments which are the focus of our study.

Reinforcement learning (RL) is one promising technique which enables agents to learn how to achieve system objectives efficiently (Patel et al., 2011). MAS with RL has been proposed for work within many scenarios and has become a significant research area, including the use of MAS for nuclear power plant inspections (Bogue, 2011). However, successful deployment of these systems within safety-critical scenarios must consider hazards within the environment which, if not accounted for, can lead to unwanted outcomes and potentially result in damage to the system, resources, or personnel.

Such safety considerations and guarantees are missing from traditional RL, which aims to learn a policy which maximises a reward function without consideration of safety constraints (García and Fernández, 2012). An RL policy defines which action an agent should take when it finds itself in a particular state within the problem space.

Our approach extends previous work on safe single-agent RL (Mason et al., 2017; Mason et al., 2018) by integrating formal verification with multi-agent reinforcement learning (MARL) algorithms to provide policies for use in safety-critical domains.

In this work, we present a 3-stage approach for safe multi-agent reinforcement learning. First, we encode the problem as an abstract Markov decision process (AMDP). Abstracting the problem is a common technique used within safety engineering for reducing complexity (Cizelj et al., 2011). The AMDP must contain all relevant information needed to describe the problem space, including all of the features necessary to capture the mandated safety constraints. Next, we synthesise policies for the abstract model using quantitative verification (QV), a mathematically based technique for the verification (Kwiatkowska, 2007; Calinescu et al., 2012) and synthesis (Calinescu et al., 2017; Gerasimou et al., 2018; Calinescu et al., 2018) of probabilistic models whose properties and safety constraints are expressed formally using probabilistic computation tree logic (PCTL) (Ciesinski and Größer, 2004). Using QV for this stage allows for formal guarantees that properties will be met, such that the policy generated is safe with respect to the defined constraints. Finally, the policies deemed safe by the verification stage are used to constrain a multi-agent reinforcement learning problem in which the agents learn a policy within a ROS simulator which more closely resembles the real-world environment.
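For illustration, constraints of this kind can be written as PCTL properties that a quantitative verification tool (for example, PRISM) checks against the abstract model. The properties below are a hypothetical sketch only: the atomic propositions and probability thresholds are assumptions for a patrolling scenario, not the constraints used in this paper.

    // Illustrative PCTL constraints over a hypothetical patrolling AMDP
    P>=0.95 [ F "mission_complete" ]    // the patrol objective is eventually achieved with probability at least 0.95
    P<=0.05 [ F "robot_damaged" ]       // the probability that a robot is damaged by radiation is at most 0.05
    P>=0.90 [ G "battery_above_min" ]   // with probability at least 0.90, battery never falls below the minimum level

Only abstract policies that satisfy every such property would be passed on to the final stage to constrain the learning agents.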
In order to demonstrate our approach, we introduce a MARL safety domain based on a MAS patrolling problem. In this domain, two robots share the responsibility of performing tasks within the rooms of a nuclear power plant. They must work together to ensure these rooms are visited three times in order to complete their tasks successfully. However, one of these rooms has very high amounts of radiation: enough to damage the robots unless the three visits to this room are partitioned between the robots in a sensible way. Another requirement for these robots is to ensure their battery does not drop below a certain level, and ideally to finish the objective with spare battery above the minimum requirement.

Our research contributes to the areas of safe MARL and safe RL, specifically to constrained RL (García and Fernández, 2012). To our knowledge, this is the first piece of work to apply safe RL methods to MAS in this fashion. Our approach allows for the use of MARL while providing guarantees that all safety requirements are met, without the need to restrict the environment as strictly as previous approaches (Moldovan, 2012).

The remainder of this paper is structured as follows. Section 2 provides an introduction to the relevant tools and techniques used throughout the paper. Section 3 introduces a domain example which we use to demonstrate our approach. Section 4 provides an overview of each stage in our approach. Section 5 evaluates the effectiveness of our approach. Section 6 reflects on related research, and finally, Section 7 gives a summary of the results and future work.

2 BACKGROUND

2.1 Single-agent Reinforcement Learning

Reinforcement learning (RL) is a technique that enables an agent to learn the best action to take depending upon the current state of the system. This learning makes use of past experiences to influence an agent's future behaviour. In this way, rewards are associated with each possible action as the agent explores the problem space.

The problem space is typically represented as a Markov Decision Process (MDP), with an agent able to select from a set of actions in each state. As the agent moves through the environment, it may choose between using an action known to be beneficial (exploitation) and those actions about which little is known (exploration).

When an action is taken, a reward (or penalty) is obtained and the agent updates the reward associated with the state-action pair, Q : (s, a) → ℝ. Q-learning (Patel et al., 2011) is commonly used to find an optimal value for this mapping.

Once the mapping of state-action pairs to rewards is complete, we can extract a policy by selecting the action which returns the maximum reward for the state we are currently in. A policy can be seen as a mapping of which actions should be taken in each state. An optimal policy is the most efficient collection of state-action pairings possible to reach the desired goal. Standard RL is concerned with finding an optimal policy; however, it does not allow for safety constraints to be defined as part of the learning process, which means that an optimal policy may be unsafe.
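The sketch below illustrates tabular Q-learning and greedy policy extraction as outlined above. It is a minimal illustration, not the configuration used in the paper; the environment interface (reset, step, actions), the hyperparameter values, and the epsilon-greedy exploration scheme are assumptions.

    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
        # Tabular Q-learning. `env` is a hypothetical interface exposing reset(),
        # step(action) -> (next_state, reward, done), and actions(state).
        Q = defaultdict(float)  # Q[(state, action)] -> estimated long-term reward
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                # Exploration vs. exploitation (epsilon-greedy)
                if random.random() < epsilon:
                    action = random.choice(env.actions(state))
                else:
                    action = max(env.actions(state), key=lambda a: Q[(state, a)])
                next_state, reward, done = env.step(action)
                # Q-learning update: move Q(s, a) towards r + gamma * max_a' Q(s', a')
                best_next = 0.0 if done else max(Q[(next_state, a)] for a in env.actions(next_state))
                Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
                state = next_state
        # Policy extraction: in each visited state, choose the highest-valued action
        return {s: max(env.actions(s), key=lambda a: Q[(s, a)])
                for s in {s for (s, _) in Q}}

In the assured approach described in this paper, such learning would additionally be constrained so that only actions permitted by the verified abstract policy can be selected.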
2.2 Multi-Agent Reinforcement Learning (MARL)

MARL is an extension of single-agent RL in which multiple agents learn how to navigate and work together towards the desired outcome (Boutilier, 1996). There is a great deal of literature exploring the benefits and challenges of MARL, discussed at length in (Buşoniu et al., 2010). Benefits include efficiency and robustness through the division of labour, while challenges include ensuring reliable communications and increased complexity. A number of algorithms have been created explicitly for learning in MAS; these algorithms are commonly classified as independent learners, joint-action learners, and gradient-descent algorithms (Buşoniu et al., 2010; Bloembergen et al., 2015).

Independent learners employ techniques in which agents learn within a MARL environment but ignore joint actions for reduced complexity. Independent learners are the primary type of algorithm on which this work focuses.

3 DOMAIN EXAMPLE

In order to demonstrate our approach, we have constructed a domain example that takes the form of a patrolling robot system within a nuclear power plant. There have been many situations in which robots have been used within this setting, and new technologies continue to emerge (Bogue, 2011).
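To connect the independent-learner setting of Section 2.2 with a patrolling problem of the kind just introduced, the sketch below gives each agent its own Q-table and applies ordinary single-agent Q-learning updates, treating the other agents simply as part of the environment. The class and its interface are hypothetical and are not taken from the paper.

    import random
    from collections import defaultdict

    class IndependentQLearners:
        # Independent learners: one Q-table per agent, no modelling of joint actions.
        def __init__(self, n_agents, alpha=0.1, gamma=0.95, epsilon=0.1):
            self.q = [defaultdict(float) for _ in range(n_agents)]
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

        def act(self, i, state, actions):
            # Epsilon-greedy action choice using agent i's own Q-table only
            if random.random() < self.epsilon:
                return random.choice(actions)
            return max(actions, key=lambda a: self.q[i][(state, a)])

        def update(self, i, state, action, reward, next_state, next_actions):
            # Standard Q-learning update; the other agents' actions are ignored
            best_next = max((self.q[i][(next_state, a)] for a in next_actions), default=0.0)
            target = reward + self.gamma * best_next
            self.q[i][(state, action)] += self.alpha * (target - self.q[i][(state, action)])

In the assured approach outlined in Section 1, such learners would additionally be restricted to the behaviours permitted by the policy synthesised and verified over the abstract model.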
