Towards Verifying GOAL Agents in Isabelle/HOL

Alexander Birch Jensen (https://orcid.org/0000-0002-7708-667X), DTU Compute, Department of Applied Mathematics and Computer Science, Technical University of Denmark, Richard Petersens Plads, Building 324, DK-2800 Kongens Lyngby, Denmark

Keywords: Agent Programming, Formal Verification, Proof Assistants.

Abstract: The need to ensure reliability of agent systems increases with the applications of multi-agent technology. As we continue to develop tools that make verification more accessible to industrial applications, it becomes an even more critical requirement for the tools themselves to be reliable. We suggest that this reliability ought not to be based on empirical evidence such as testing procedures. Instead we propose using an interactive theorem prover to ensure the reliability of the verification process. Our work aims to verify agent systems by embedding a verification framework in the interactive theorem prover Isabelle/HOL.

1 INTRODUCTION

A key issue in the deployment of multi-agent systems is to ensure their reliability (Dix et al., 2019). We observe that testing does not translate well from traditional to multi-agent technology: we often have agents with complex behavior and the number of possible system configurations may be intractable. Two leading paradigms in ensuring reliability are formal verification and testing. The state-of-the-art approach for formal verification is model-checking (Bordini et al., 2006) where one seeks to verify desired properties over a (generalized) finite state space. However, the process of formal verification is often hard and reserved for specialists. As such, we find that testing has a lower entry-level for ensuring reliable behavior in critical configurations. Metrics such as code coverage help solidify that the system is thoroughly tested. Even so, reliability of testing stands or falls by the knowledge and creativity of the designer.

In our work, we focus on formal verification. We lay out the building blocks for a solid foundation of a verification framework (de Boer et al., 2007) for the GOAL agent programming language (Hindriks, 2009) by formalizing it in Isabelle/HOL (Nipkow et al., 2002), a proof assistant based on higher-order logic. All proofs developed in Isabelle are verified by a small logical kernel that is trusted. This ensures that Isabelle itself as well as any proof developments are trustworthy.

The paper is structured as follows. Section 2 considers related work in the literature. Section 3 introduces the semantics of GOAL and a verification framework for GOAL agents. Section 4 goes into details with our work on formalizing GOAL in Isabelle/HOL. Section 5 discusses some of the future challenges. Finally, Section 6 makes concluding remarks.

2 RELATED WORK

The work in this paper relates to our work on mechanizing a transformation of GOAL program code to an agent logic (Jensen, 2021). Unlike this paper, it does not consider a theorem proving approach. It focuses on closing the gap between program code and logic, which is an essential step towards improving the practicality of the approach. In (Jensen et al., 2021), we argue that a theorem proving approach is interesting to pursue further for cognitive agent-oriented programming, and consequently we ask ourselves why it has played only a miniature role in the literature so far.

(Alechina et al., 2010) apply theorem-proving techniques to successfully verify correctness properties of agents for simpleAPL (a simplified version of 3APL (Hindriks et al., 1999)).

(Dennis and Fisher, 2009) develop techniques for analysis of implemented, BDI-based multi-agent system platforms. Model-checking techniques for AgentSpeak (Bordini et al., 2006) are used as a starting point and then extended to other languages, including the GOAL programming language.




Tools for generating test cases and debugging multi-agent systems have been proposed by (Poutakidis et al., 2009), where the test cases are used to monitor (and debug) the system behavior while running. Another take on the testing approach is by (Cossentino et al., 2008), which simulates a simplified version of the system by executing it in a virtual environment. This especially appeals to systems to be deployed in real-world agent environments where testing can be less accessible or of a high cost.

The verification framework has some notational similarities with the UNITY theory of programs (Misra, 1994). However, while we verify an agent programming language, UNITY is for verification of general parallel programs. (Paulson, 2000) has worked at mechanizing the UNITY framework in Isabelle/HOL.

3 VERIFICATION FRAMEWORK

In this section, we introduce GOAL, its formal semantics and a verification framework for agents.

3.1 GOAL Agents

Agents in GOAL are rational agents: they make decisions based on their current beliefs and goals. The agent's beliefs and goals continuously change to adapt to new situations. Thus, it is useful to describe the agent by its current mental state: its current beliefs and goals.

Based on the agent's mental state, it may select to execute a conditional action: an action that may be executed given the truth of a condition on the current mental state (its enabledness).

GOAL agents are thus formally defined by an initial mental state and a set of conditional actions. We will get into more details with those concepts in the following.

3.1.1 Mental States

A mental state is the agent's beliefs and goals in a given state and consists of a belief and a goal base. The belief and goal base are sets of propositional formulas for which certain restrictions apply:

1. The set of formulas in the belief base is consistent,
2. for every formula φ in the goal base:
   (a) φ is not entailed by the agent's beliefs,
   (b) φ is satisfiable,
   (c) if φ entails φ′, and φ′ is not entailed by the agent's beliefs, then φ′ is also a goal.

A consistent belief base means that it does not entail a contradiction, which in turn ensures that we cannot derive everything (any formula follows from falsity). The goals are declarative and describe a state the agent desires to reach. As such, it does not make sense to have a goal that is already believed to be achieved. Furthermore, we should not allow goals that can never be achieved. The following captures some of the temporal features of goals: consider an agent that has as its goal to "be at home ∧ watch a movie". Then both "be at home" and "watch a movie" are subgoals. On the contrary, from individual goals to "be at home" and "watch a movie", we cannot conclude that it is a goal to "be at home ∧ watch a movie".

Any sort of knowledge we want the agent to have can be packed into the belief base as a logic formula. As we are modelling agents for propositional logic, the expressive power is of course limited to propositional symbols, but this is more of a practical limitation than a theoretical one.

3.1.2 Mental State Formulas

To reason about mental states of agents, we introduce a language for mental state formulas. These formulas are built from the usual logical connectives and the special belief modality BΦ and goal modality GΦ, where Φ is a formula of propositional logic.

The semantics of mental state formulas is defined as usual for the logical connectives. For the belief and goal modality, we have:

• (Σ, Γ) |=M BΦ ⇐⇒ Σ |=C Φ,
• (Σ, Γ) |=M GΦ ⇐⇒ Φ ∈ Γ,

where (Σ, Γ) is a mental state with belief base Σ and goal base Γ, and |=C is classical semantic consequence for propositional logic.

We can think of the modalities as queries to the belief and goal base:

• BΦ: do my current beliefs entail Φ?
• GΦ: do I have a goal to achieve Φ?

Below we state a number of important properties of the goal modality:

• ⊭M G(φ −→ ψ) −→ (Gφ −→ Gψ),
• ⊭M G(φ ∧ (φ −→ ψ)) −→ Gψ,
• ⊭M (Gφ ∧ Gψ) −→ G(φ ∧ ψ),
• |=C (ϕ ←→ γ) =⇒ |=M (Gϕ ←→ Gγ),

where |=M (and its negation ⊭M) stated without a mental state is to be understood as universally quantified over all mental states. The properties show that G does not distribute over implication, subgoals cannot be combined to form a larger goal, and lastly that equivalent formulas are also equivalent goals.
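To make the restrictions concrete, consider (as a small worked example of our own) an agent with an empty belief base and the goal "be at home ∧ watch a movie" in its goal base. Restriction 2(c) then makes both "be at home" and "watch a movie" goals as well, since each is entailed by the conjunction and neither is believed. The converse direction fails: having the two formulas as individual goals does not put their conjunction in the goal base, which is exactly why Gφ ∧ Gψ does not entail G(φ ∧ ψ).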


We further define syntactic consequence ⊢M for mental state formulas as given by the rules R1-R2 and axioms A1-A5 in Tables 1 and 2. These rules are as given in (de Boer et al., 2007). We assume the existence of ⊢C: a proof system for propositional logic.

Mental state formulas are useful for stating the agent's mental state at a given point in time. The idea is that the mental state changes over time, either due to perceived changes in the environment or due to actions taken by the agent. We will assume that any changes in the environment are solely caused by the actions of agents. This allows us to model changes to mental states as effects of performing actions.

3.1.3 Actions

Agents take actions to change the state of affairs, and thus their mental states. Hopefully, a sequence of actions will lead to the agent having a mental state in which it believes its goals are achieved. In general terms, the idea behind our work is that the existence of such a sequence is provable.

In order for the agent to make reasonable choices, actions have conditions that state when they can (and should) be performed. The pair of action and condition form a conditional action. The conditional action ϕ . do(a) of an agent states a condition ϕ for when the agent may execute action a. We may write enabled(a) when referring to the condition of action a. The conditions are mental state formulas.

We now need to specify what it means to execute an action. For this, we need to associate the execution of an action with updates to the belief base. In practical cases, this is given by an action specification stated in a more general language (i.e. using variables). Informally, the result of executing an action is thus to update the belief base of the mental state and to remove any goals that the agent now believes to be achieved.

Note that we do not allow for direct manipulation of the goal base. We have an initial goal base that is only changed as a result of action execution. This is not an inherent limitation of GOAL, but is a useful abstraction for the purpose of this paper.

A trace is an infinite sequence of interleaved mental states and action executions. As such, a trace describes an order of execution. In case the agents have choices, multiple traces may exist for a given program. We assume that we have only fair traces: each action is scheduled infinitely often. This means that there is always a future point in which an action is scheduled (note that a scheduled action need not be enabled).

3.2 Proof Theory

Hoare triples are used to specify actions: {ϕ} ρ . do(a) {ψ} specifies the conditional action ρ . do(a) with mental state formulas ϕ and ψ that state the pre- and postcondition, respectively. These will be the axioms of our proof system and will vary depending on the context, i.e. available actions.

The provable Hoare triples are given by the Hoare system in Table 3. The rule for infeasible actions states the non-effect of actions when not enabled. The rule for conditional actions gives means to reason about the enabledness of a conditional action. The conjunction and disjunction rules allow for combining proofs of Hoare triples. Lastly, with the consequence rule we can strengthen the precondition and weaken the postcondition.

3.3 Proving Properties of Agents

On top of Hoare triples, a temporal logic is used to verify properties of GOAL agents. We extend the language of mental state formulas with the temporal operator until. The formula ϕ until ψ means that ψ eventually comes true and until then ϕ is true, or that ψ never becomes true and ϕ remains true forever.

The temporal logic formulas are evaluated with respect to a given trace and some time point on the trace. This means that conditions are checked in the current mental state as the program is executed.

Proving that a property of the agent is ensured by the execution of the program is done by proving a number of ensures formulas:

ϕ ensures ψ := (ϕ −→ (ϕ until ψ)) ∧ (ϕ −→ ♦ψ)

The formula ϕ ensures ψ states that ϕ guarantees the realization of ψ. We can show that ψn is realized from ψ1 in a finite number of steps:

ψ1 ensures ψ2, ψ2 ensures ψ3, ..., ψn−1 ensures ψn

The existence of such a sequence is stated by the operator ϕ ↦ ψ, which is governed by the following rules: from ϕ ensures ψ infer ϕ ↦ ψ; from ϕ ↦ χ and χ ↦ ψ infer ϕ ↦ ψ; and from ϕ1 ↦ ψ, ..., ϕn ↦ ψ infer (ϕ1 ∨ ... ∨ ϕn) ↦ ψ. As opposed to the ensures operator, here ϕ is not required to remain true until ψ is realized. To briefly lay out how to prove a ↦ property, ϕ ensures ψ is proved by showing that every action a satisfies the Hoare triple {ϕ ∧ ¬ψ} a {ϕ ∨ ψ} and that the Hoare triple {ϕ ∧ ¬ψ} a′ {ψ} is satisfied by at least one action a′.
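Once such ensures facts are available, the rules above combine them. For instance (our own schematic illustration): to establish (ϕ1 ∨ ϕ2) ↦ ψ one proves ϕ1 ensures ψ and ϕ2 ensures ψ as described, lifts each to ϕ1 ↦ ψ and ϕ2 ↦ ψ with the first rule, and combines them with the last rule; the transitivity rule can then chain the result with any χ for which ψ ↦ χ has been established.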


Table 1: ⊢M: Properties of beliefs.

R1  if ϕ is an instance of a classical tautology, then ⊢M ϕ
R2  ⊢C φ =⇒ ⊢M Bφ
A1  ⊢M B(φ −→ ψ) −→ Bφ −→ Bψ
A2  ⊢M ¬B⊥

Table 2: ⊢M: Properties of goals.

A3  ⊢M ¬G⊥
A4  ⊢M Bφ −→ ¬Gφ
A5  ⊢C φ −→ ψ =⇒ ⊢M ¬Bψ −→ (Gφ −→ Gψ)
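For instance (our own instantiation), taking the classical tautology φ −→ φ ∨ ψ as the premise of A5 yields ⊢M ¬B(φ ∨ ψ) −→ (Gφ −→ G(φ ∨ ψ)): an agent with the goal φ that does not yet believe φ ∨ ψ also has φ ∨ ψ as a (sub)goal. This is the proof-theoretic counterpart of restriction 2(c) on goal bases.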

Table 3: The proof rules of our Hoare system.

Infeasible actions:            from ϕ −→ ¬enabled(a), infer {ϕ} a {ϕ}
Rule for conditional actions:  from {ϕ ∧ ψ} a {ϕ′} and (ϕ ∧ ¬ψ) −→ ϕ′, infer {ϕ} ψ . do(a) {ϕ′}
Consequence rule:              from ϕ′ −→ ϕ, {ϕ} a {ψ} and ψ −→ ψ′, infer {ϕ′} a {ψ′}
Conjunction rule:              from {ϕ1} a {ψ1} and {ϕ2} a {ψ2}, infer {ϕ1 ∧ ϕ2} a {ψ1 ∧ ψ2}
Disjunction rule:              from {ϕ1} a {ψ} and {ϕ2} a {ψ}, infer {ϕ1 ∨ ϕ2} a {ψ}
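Read operationally (our own gloss on the rules above), the rule for conditional actions covers both execution cases: if the condition ψ holds in the current mental state, the action a is executed and its effect must establish ϕ′ (the first premise); if ψ does not hold, the action has no effect, so the precondition together with ¬ψ must already imply ϕ′ (the second premise).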

4 ISABELLE FORMALIZATION

This section describes the Isabelle/HOL formalization. So far we have formalized propositional logic, mental states, formulas over mental states and a proof system for mental state formulas. In Section 5, we will discuss future work on the formalization of the verification framework.

The Isabelle files are publicly available online: https://people.compute.dtu.dk/aleje/public/

The file Gvf_PL.thy is a formalization of propositional logic. The file Gvf_GOAL.thy is the presented part of the formalization of GOAL and the verification framework.

4.1 Propositional Logic

Before we can get started on the more interesting parts of our formalization, we need a basis of propositional logic to build on. We formalize the language of propositional formulas, its semantics and a sequent calculus proof system.

We introduce natural numbers as a type for propositional symbols:

type-synonym id = nat

In line with usual textbook presentations it would have been straightforward to use string symbols. In a theorem prover setting, however, working with natural numbers can be a lot smoother.

4.1.1 Syntax

We define the formulas of propositional logic as a datatype with constructors for propositional symbols and the operators ¬, −→, ∨ and ∧:

datatype ΦL =
  Prop id |
  Neg ΦL (‹¬L›) |
  Imp ΦL ΦL (infixr ‹−→L› 60) |
  Dis ΦL ΦL (infixl ‹∨L› 70) |
  Con ΦL ΦL (infixl ‹∧L› 80)

We introduce infix notation for the logical operators with their usual precedence and −→ being right-associative. A subscript is used to avoid conflicts with the built-in Isabelle logical operators.
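As a small usage sketch (our own example, not part of Gvf_PL.thy), a formula such as (p0 ∧ p1) −→ p0 is written with the constructors and infix syntax above as follows:

definition ex-formula :: ‹ΦL› where
  ‹ex-formula ≡ (Prop 0 ∧L Prop 1) −→L Prop 0›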


4.1.2 Semantics

The semantics of formulas is evaluated from a model. Since the language is that of propositional logic, the model is merely an assignment of truth values to propositional symbols:

type-synonym model = ‹id ⇒ bool›

We assume this assignment to be a function over propositional symbols returning a Boolean value.

Given a model and a formula, propositional symbols are simply looked up in the model f. For the operators, we recursively decompose the formula and use Isabelle's built-in logical operators to compute the truth value:

primrec semL :: ‹model ⇒ ΦL ⇒ bool› where
  ‹semL f (Prop x) = f x› |
  ‹semL f (¬L p) = (¬ semL f p)› |
  ‹semL f (p −→L q) = (semL f p −→ semL f q)› |
  ‹semL f (p ∨L q) = (semL f p ∨ semL f q)› |
  ‹semL f (p ∧L q) = (semL f p ∧ semL f q)›

We introduce the notion of entailment (|=C#) as an infix operator that takes two sets of formulas:

abbreviation entails :: ‹ΦL set ⇒ ΦL set ⇒ bool› (infix ‹|=C#› 50) where
  ‹Γ |=C# ∆ ≡ (∀f. (∀p∈Γ. semL f p) −→ (∃p∈∆. semL f p))›

The abbreviation above encodes the usual understanding of entailment: for all models, at least one formula in the succedent should be true if all the formulas in the antecedent are true.

We introduce shorthand syntax for the special case where the right-hand side is a singleton set:

abbreviation entails-singleton :: ‹ΦL set ⇒ ΦL ⇒ bool› (infix ‹|=C› 50) where
  ‹Γ |=C Φ ≡ Γ |=C# { Φ }›

4.1.3 Sequent Calculus

We define a sequent calculus to derive tautologies. The inductive definition below is based on a standard sequent calculus for propositional logic with multisets on both sides of the turnstile:

inductive seq :: ‹ΦL multiset ⇒ ΦL multiset ⇒ bool› (infix ‹⊢C#› 50) where
  ‹{# p #} + Γ ⊢C# ∆ + {# p #}› |
  ‹Γ ⊢C# ∆ + {# p #} =⇒ Γ + {# ¬L p #} ⊢C# ∆› |
  ‹Γ + {# p #} ⊢C# ∆ =⇒ Γ ⊢C# ∆ + {# ¬L p #}› |
  ‹Γ + {# p #} ⊢C# ∆ + {# q #} =⇒ Γ ⊢C# ∆ + {# p −→L q #}› |
  ‹Γ ⊢C# ∆ + {# p, q #} =⇒ Γ ⊢C# ∆ + {# p ∨L q #}› |
  ‹Γ + {# p, q #} ⊢C# ∆ =⇒ Γ + {# p ∧L q #} ⊢C# ∆› |
  ‹Γ ⊢C# ∆ + {# p #} =⇒ Γ ⊢C# ∆ + {# q #} =⇒ Γ ⊢C# ∆ + {# p ∧L q #}› |
  ‹Γ + {# p #} ⊢C# ∆ =⇒ Γ + {# q #} ⊢C# ∆ =⇒ Γ + {# p ∨L q #} ⊢C# ∆› |
  ‹Γ ⊢C# ∆ + {# p #} =⇒ Γ + {# q #} ⊢C# ∆ =⇒ Γ + {# p −→L q #} ⊢C# ∆›

Again, we introduce a shorthand for a singleton set on the right-hand side:

abbreviation seq-st-rhs :: ‹ΦL multiset ⇒ ΦL ⇒ bool› (infix ‹⊢C› 50) where
  ‹Γ ⊢C Φ ≡ Γ ⊢C# {# Φ #}›

4.1.4 Soundness

The soundness theorem for our sequent calculus can be stated nicely with entailment for multisets:

theorem seq-sound:
  ‹Γ ⊢C# ∆ =⇒ set-mset Γ |=C# set-mset ∆›
  by (induct rule: seq.induct) (auto)

The function set-mset turns a multiset into a set. The proof is by induction over the sequent calculus rules and all subgoals (one for each rule) can be solved automatically without further specification.

We skip proving completeness. Firstly, it is much harder to prove and, more importantly, it does not serve a critical purpose in our further work. The soundness theorem is a strong result ensuring that we only prove valid formulas.
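As a quick sanity check (our own example, not in the theory files; the proof method is indicative), the semantic notions defined above can be exercised directly on concrete formulas, e.g. that a conjunction entails its first conjunct:

lemma ‹{ Prop 0 ∧L Prop 1 } |=C Prop 0›
  by auto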


4.2 Mental States

We introduce a type for mental states. A mental state is a pair of sets of propositional logic formulas:

type-synonym mst = ‹(ΦL set × ΦL set)›

Mental states have properties that constrain which sets of formulas qualify as belief or goal bases. Baking these properties into the type is rather complicated. We instead introduce them in a separate definition:

definition is-mst :: ‹mst ⇒ bool› (‹∇›) where
  ‹∇ x ≡ let (Σ, Γ) = x in Σ ¬|=C ⊥L ∧ (∀γ∈Γ. Σ ¬|=C γ ∧ {} ¬|=C ¬L γ)›

Note that one important property of mental states, namely that subgoals of goals should also be goals, is left out of this definition. We will instead bake this into the semantics.

We can use the definition as an assumption when needed. We should keep in mind that if we manipulate a mental state, it will also be required to prove that mental state properties are preserved.

4.3 Mental State Formulas

The language of formulas over mental states is somewhat similar to the language for propositional formulas. However, instead of propositional symbols we may have belief or goal modalities.

4.3.1 Syntax

The syntax for mental state formulas is defined as a new datatype:

datatype ΦM =
  B ΦL |
  G ΦL |
  Neg ΦM (‹¬M›) |
  Imp ΦM ΦM (infixr ‹−→M› 60) |
  Dis ΦM ΦM (infixl ‹∨M› 70) |
  Con ΦM ΦM (infixl ‹∧M› 80)

While the well-known logical operators are as before, the language of mental state formulas is defined on a level above the propositional language.

4.3.2 Semantics

We define the semantics of mental state formulas as a recursive function taking a mental state and a mental state formula:

primrec semantics :: ‹mst ⇒ ΦM ⇒ bool› (infix ‹|=M› 50) where
  ‹M |=M (B Φ) = (let (Σ, -) = M in Σ |=C Φ)› |
  ‹M |=M (G Φ) = (let (Σ, Γ) = M in Φ ∈ Γ ∨ Σ ¬|=C Φ ∧ (∃γ∈Γ. {} |=C γ −→L Φ))› |
  ‹M |=M (¬M Φ) = (¬ M |=M Φ)› |
  ‹M |=M (Φ1 −→M Φ2) = (M |=M Φ1 −→ M |=M Φ2)› |
  ‹M |=M (Φ1 ∨M Φ2) = (M |=M Φ1 ∨ M |=M Φ2)› |
  ‹M |=M (Φ1 ∧M Φ2) = (M |=M Φ1 ∧ M |=M Φ2)›

The belief modality for a given propositional formula is true if the formula is entailed by the belief base. For the goal modality, we require that the goal is either directly in the goal base or that it is a subgoal not entailed by the belief base. The remaining cases for logical operators are trivial.

Let us elaborate on the choice of baking the subgoal property into the semantics. For any given goal, there are an infinite number of subgoals. If we require that these subgoals be part of the goal base set, no finite set will ever constitute a valid goal base. Working with finite goal bases is more convenient.

4.3.3 Properties of the Goal Modality

We state a lemma with the properties of the goal modality from Section 3.1.2:

lemma G-properties:
  shows ‹¬ (∀Σ Γ Φ ψ. ∇ (Σ, Γ) −→ (Σ, Γ) |=M G (Φ −→L ψ) −→M G Φ −→M G ψ)›
    and ‹¬ (∀Σ Γ Φ ψ. ∇ (Σ, Γ) −→ (Σ, Γ) |=M G (Φ ∧L (Φ −→L ψ)) −→M G ψ)›
    and ‹¬ (∀Σ Γ Φ ψ. ∇ (Σ, Γ) −→ (Σ, Γ) |=M G Φ ∧M G ψ −→M G (Φ ∧L ψ))›
    and ‹{} |=C Φ ←→L ψ −→ ∇ (Σ, Γ) −→ (Σ, Γ) |=M G Φ ←→M G ψ›

The first three show that G is a weak logical operator. Each of those is proved by providing a counter-example. Let us consider the first property: G does not distribute over implication. We assume the contrary to hold:

assume ∗:
  ‹∀Σ Γ Φ ψ. ∇ (Σ, Γ) −→ (Σ, Γ) |=M G (Φ −→L ψ) −→M G Φ −→M G ψ›

We then come up with a belief and goal base to use as a counter-example:

let ?Σ = ‹{}› and ?Γ = ‹{ ?Φ −→L ?ψ, ?Φ }›

The automation easily proves that in our example G does not distribute over implication:

have ‹¬ (?Σ, ?Γ) |=M G (?Φ −→L ?ψ) −→M G ?Φ −→M G ?ψ›
  by auto

We further prove that the example belief and goal base in fact constitute a mental state:

moreover have ‹∇ (?Σ, ?Γ)›
  unfolding is-mst-def
  by auto

Combining the last two facts with our assumption, we show a contradiction and the proof is done:

ultimately show False
  using ∗ by blast

The next two subgoals are analogous. The remaining last property is easily proved by Isabelle's simplifier.

4.4 Tautologies of Propositional Logic

The next large part of the formalization concerns the belief property R1 from Table 1. The property states that any instantiation of a classical tautology is a tautology in the logic for GOAL.

To automate the step from a propositional formula to a mental state formula instance, we design an algorithm to substitute propositional symbols for belief or goal modalities:

primrec conv :: ‹(id, ΦM) map ⇒ ΦL ⇒ ΦM› where
  ‹conv T (Prop p) = the (T p)› |
  ‹conv T (¬L Φ) = (¬M (conv T Φ))› |
  ‹conv T (Φ1 −→L Φ2) = (conv T Φ1 −→M conv T Φ2)› |
  ‹conv T (Φ1 ∨L Φ2) = (conv T Φ1 ∨M conv T Φ2)› |
  ‹conv T (Φ1 ∧L Φ2) = (conv T Φ1 ∧M conv T Φ2)›

The algorithm requires some mapping of propositional symbols to mental state formulas. For simplicity, we can assume that each formula is either a belief or goal modality without losing any expressivity.
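As a small sketch of what conv computes (our own example, not part of the theory files), mapping symbol 0 to a belief modality and symbol 1 to a goal modality turns a propositional implication into the corresponding mental state formula:

lemma ‹conv (map-of [(0, B (Prop 5)), (1, G (Prop 6))]) (Prop 0 −→L Prop 1) = (B (Prop 5) −→M G (Prop 6))›
  by simp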


4.4.1 Automatic Substitution

For most practical applications, we will encounter top-down proofs starting from some mental state formula. In that case, the substitution is from belief and goal modalities to propositional symbols. To be able to determine if the mental state formula is an instantiation of a classical tautology, we need a one-to-one substitution. By this we mean that if and only if modality operators and their contents are equal, then they map to the same propositional symbol. The choice of propositional symbol may be arbitrary. We design an algorithm to produce such a one-to-one mapping.

We exploit the built-in enumeration function in Isabelle:

abbreviation bst :: ‹ΦM ⇒ (id × ΦM) list› where
  ‹bst ϕ ≡ enumerate 0 (atoms ϕ)›

The function atoms collects all belief and goal modalities, and enumerate builds a list of pairs of natural numbers and mental state formulas, starting from 0 and incrementing by 1 for each formula.

We substitute the modalities by the enumeration of propositional symbols:

definition to-L :: ‹ΦM ⇒ ΦL› where
  ‹to-L ϕ ≡ to-L′ (map-swap (bst ϕ)) ϕ›

Here, to-L′ is defined similarly to conv except it produces a propositional formula.

4.5 Proof System

The belief and goal properties are collected to form a proof system that is inductively defined:

inductive derive :: ‹ΦM ⇒ bool› (‹⊢M›) where
  R1: ‹conv (map-of T) ϕ′ = ϕ =⇒ {#} ⊢C ϕ′ =⇒ ⊢M ϕ› |
  R2: ‹{#} ⊢C Φ =⇒ ⊢M (B Φ)› |
  A1: ‹⊢M (B (Φ −→L ψ) −→M (B Φ −→M B ψ))› |
  A2: ‹⊢M (¬M (B ⊥L))› |
  A3: ‹⊢M (¬M (G ⊥L))› |
  A4: ‹⊢M ((B Φ) −→M (¬M (G Φ)))› |
  A5: ‹{#} ⊢C (Φ −→L ψ) =⇒ ⊢M (¬M (B ψ) −→M (G Φ) −→M (G ψ))›

The definition of R1 requires some explanation. If there exists a substitution from a propositional formula ϕ′ to a mental state formula ϕ, and if ϕ′ is provable in the sequent calculus, then we are allowed to conclude ϕ.

4.5.1 Correctness of Automatic Substitution

Before we start work on proving soundness for the proof system, let us first come back to our automatic substitution for top-down proofs. The following lemma states that if the generated propositional formula from ϕ is provable, then ϕ is also provable:

lemma to-L-correctness:
  ‹{#} ⊢C to-L ϕ −→ ⊢M ϕ›

We will merely sketch the proof: we apply induction over the structure of ϕ and prove it for each case. The base cases (belief and goal modalities) are proved automatically. The branching cases require some extra work as the function bst will produce overlapping enumerations for subformulas. We assist the automation by providing a lemma showing that the generated mappings for subformulas can be replaced by that of the parent formula, e.g. Φ1 ∧ Φ2 for subformulas Φ1 and Φ2.

4.5.2 Soundness

The soundness theorem for the proof system is:

theorem derive-soundness:
  assumes ‹∇ M›
  shows ‹⊢M Φ =⇒ M |=M Φ›

The proof is by induction over the rules:

proof (induct rule: derive.induct)

We will go into details with the case for R1. We get to assume that ϕ′ is provable, and that some mapping T exists to a mental state formula ϕ:

case (R1 T ϕ′ ϕ)

Due to the soundness result for the sequent calculus, we know that the formula is true for any model. We get the idea to focus on a model in which the truth value of propositional symbols is derived directly from the semantics of the belief or goal modality it maps to in the mental state:

then have ‹semL (λx. semantics M (the (map-of T x))) ϕ′›
  using seq-sound by fastforce

By induction over the structure of ϕ′, we show that this coincides with the semantics of the converted formula for the given mental state:

moreover have ‹semL (λx. semantics M (the (map-of T x))) ϕ′ = (M |=M (conv (map-of T) ϕ′))›
  by (induct ϕ′) simp-all

ultimately show ?case
  using ‹conv (map-of T) ϕ′ = ϕ› by simp

We invite the interested reader to study the available theory files for further details, although some prior experience using Isabelle may be required.
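To indicate how the pieces are meant to fit together in a top-down proof, the following glue lemma is our own sketch (hypothetical name, not in the theory files; it assumes the statements of to-L-correctness and derive-soundness exactly as displayed above, and the proof method is indicative): if the propositional skeleton generated by to-L is derivable in the sequent calculus, then the original mental state formula holds in any mental state satisfying the mental state properties:

lemma top-down-transfer:
  assumes ‹∇ M› and ‹{#} ⊢C to-L ϕ›
  shows ‹M |=M ϕ›
  using assms to-L-correctness derive-soundness by blast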


5 DISCUSSION

The work on the formalization still has a long way to go before we can start concluding on the process as a whole. So far our formalization covers the theory up to the point of mental state formulas and a proof system. What remains is to formalize actions, Hoare triples and a temporal logic for GOAL, and how to prove temporal properties such as correctness.

We have successfully been able to verify some of the results in (de Boer et al., 2007). The progress shows promise and everything points towards it being feasible to verify all results of the paper.

We have not touched much on the limitations of the verification framework in this paper. In order to give the framework more appeal, we need to find ways to deal with its current limitations, namely:

• not being able to model or prove statements quantifying over mental states,
• not modelling the environment by assuming that the agent is the only actor,
• not accounting for multiple agents.

It is clear that none of these limitations has an easy fix. It is not obvious how to quantify over mental states: we need to consider the temporal aspect of the agent system and prove that a property holds across several mental states. We have not encountered any proposals in the literature. Modelling the environment seems less of a challenge but potentially adds little immediate value beyond providing a better level of abstraction. Even if we alleviate assumptions of complete knowledge for agents, we still need to add some domain knowledge into the environment to be able to make any logical reasoning of interest. In a way, it simply pushes part of the problem into another structure, but could potentially be a step towards being able to state richer assumptions about interactions in the environment. Lastly, we want to discuss modelling of multiple agents. This is very much at the core of the issue in the field of verifying multi-agent systems. The nature of multiple parallel processes quickly makes every model explode in complexity. We acknowledge its importance but it is too early to share any thoughts on its role in our continued work, and how to resolve the problems that may arise.

6 CONCLUSIONS

We have argued that the use of theorem proving can be a valuable tool for developing formal verification techniques to ensure the reliability of multi-agent systems. The value comes from assuring that the developed techniques work as intended and do not introduce new levels of possible errors. Such errors may compromise the reliability of the system even further by providing a false sense of security.

We have described a verification framework for the GOAL agent programming language that can be used to prove properties of agents. We have proved some of the results of the original work (de Boer et al., 2007) using the Isabelle/HOL proof assistant, and deemed it feasible to formalize all of the results.

Finally, we have discussed the many challenges ahead and given ideas to address them. Our plans are to continue work on the Isabelle formalization and simultaneously work on extending the theory.

ACKNOWLEDGEMENTS

I would like to thank Asta H. From for Isabelle related discussions and for comments on drafts of this paper. I would also like to thank Jørgen Villadsen for comments on drafts of this paper. I would also like to thank Koen V. Hindriks for helpful insights.

REFERENCES

Alechina, N., Dastani, M., Khan, A. F., Logan, B., and Meyer, J.-J. (2010). Using Theorem Proving to Verify Properties of Agent Programs, pages 1-33. Springer.

Bordini, R. H., Fisher, M., Visser, W., and Wooldridge, M. (2006). Verifying Multi-agent Programs by Model Checking. Autonomous Agents and Multi-Agent Systems, 12:239-256.

Cossentino, M., Fortino, G., Garro, A., Mascillaro, S., and Russo, W. (2008). PASSIM: A simulation-based process for the development of multi-agent systems. International Journal of Agent-Oriented Software Engineering, 2:132-170.

de Boer, F. S., Hindriks, K. V., Hoek, W., and Meyer, J.-J. (2007). A verification framework for agent programming with declarative goals. Journal of Applied Logic, 5:277-302.

Dennis, L. A. and Fisher, M. (2009). Programming Verifiable Heterogeneous Agent Systems. In Programming Multi-Agent Systems, pages 40-55. Springer.

Dix, J., Logan, B., and Winikoff, M. (2019). Engineering Reliable Multiagent Systems (Dagstuhl Seminar 19112). Dagstuhl Reports, 9(3):52-63.

Hindriks, K. V. (2009). Programming Rational Agents in GOAL, pages 119-157. Springer US.

Hindriks, K. V., Boer, F., Hoek, W., and Meyer, J.-J. (1999). Agent programming in 3APL. Autonomous Agents and Multi-Agent Systems, 2:357-401.

Jensen, A. B. (2021). Towards Verifying a Blocks World for Teams GOAL Agent. In ICAART 2021 - Proceedings of the 13th International Conference on Agents and Artificial Intelligence. SciTePress. To appear.

Jensen, A. B., Hindriks, K. V., and Villadsen, J. (2021). On Using Theorem Proving for Cognitive Agent-Oriented Programming. In ICAART 2021 - Proceedings of the 13th International Conference on Agents and Artificial Intelligence. SciTePress. To appear.

Misra, J. (1994). A Logic for Concurrent Programming. Technical report, Formal Aspects of Computing.

Nipkow, T., Paulson, L., and Wenzel, M. (2002). Isabelle/HOL — A Proof Assistant for Higher-Order Logic, volume 2283 of LNCS. Springer.

Paulson, L. C. (2000). Mechanizing UNITY in Isabelle. ACM Trans. Comput. Logic, 1(1):3-32.

Poutakidis, D., Winikoff, M., Padgham, L., and Zhang, Z. (2009). Debugging and Testing of Multi-Agent Systems using Design Artefacts, pages 215-258. Springer.
