Automated Planning and Acting – Acting within BDI Agent Architecture –
O. Boissier
IMT Mines Saint-Etienne, France
Défi IA — Spring 2021

Acting
- How to perform chosen actions
- Acting ≠ Execution
- The agent is situated in a dynamic, unpredictable environment
- Adapt actions to the current context
- React to events
- Relies on:
  - operational models of actions
  - observations of the current state
- Acting ≠ Planning

[Diagram: an actor/agent whose deliberation components (objectives, planning, plans) and acting components exchange queries, messages, commands, percepts, actuations and signals with an execution platform situated in the external world; other actors/agents interact via messages]
Outline
Agent Oriented Programming
BDI Architecture
Agent Oriented Programming with Jason
Comparison with other paradigms
Conclusions and wrap-up
Agent Oriented Programming Features

- Reacting to events vs. pursuing long-term goals
- Course of action depends on circumstances
- Plan failure (dynamic environments)
- Social ability → see course on Integrating and engineering intelligent systems
- Combination of theoretical and practical reasoning:
  - theoretical reasoning is directed towards beliefs and knowledge → see course on Knowledge representation and reasoning
  - practical reasoning is directed towards actions → this course
Agent Oriented Programming Fundamentals
I Use of mentalistic notions and a societal view of computation [Shoham, 1993]
I Heavily influenced by the BDI architecture and reactive planning systems [Bratman et al., 1988]
Literature
Books: [Bordini et al., 2005], [Bordini et al., 2009]
Proceedings: ProMAS, DALT, LADS, EMAS, AGERE, ...
Surveys: [Bordini et al., 2006], [Fisher et al., 2007] ...
Languages of historical importance: Agent0 [Shoham, 1993], AgentSpeak(L) [Rao, 1996], MetateM [Fisher, 2005], 3APL [Hindriks et al., 1997], Golog [Giacomo et al., 2000]
Other prominent languages: Jason [Bordini et al., 2007a], Jadex [Pokahr et al., 2005], 2APL [Dastani, 2008], GOAL [Hindriks, 2009], JACK [Winikoff, 2005], JIAC, AgentFactory
But many other languages and platforms...
Some Languages and Platforms

Jason (Hübner, Bordini, ...); 3APL and 2APL (Dastani, van Riemsdijk, Meyer, Hindriks, ...); Jadex (Braubach, Pokahr); MetateM (Fisher, Guidini, Hirsch, ...); ConGoLog (Lesperance, Levesque, ... / Boutilier – DTGolog); Teamcore/MTDP (Milind Tambe, ...); IMPACT (Subrahmanian, Kraus, Dix, Eiter); CLAIM (Amal El Fallah-Seghrouchni, ...); GOAL (Hindriks); BRAHMS (Sierhuis, ...); SemantiCore (Blois, ...); STAPLE (Kumar, Cohen, Huber); Go! (Clark, McCabe); Bach (John Lloyd, ...); MINERVA (Leite, ...); SOCS (Torroni, Stathis, Toni, ...); FLUX (Thielscher); JIAC (Hirsch, ...); JADE (Agostino Poggi, ...); JACK (AOS); Agentis (Agentis Software); Jackdaw (Calico Jack); simpAL, ALOO (Ricci, ...); ...
Theories, Models, Architectures

- Agents are used to solve problems (e.g. to find solutions, to take decisions, to act on the environment)
- The characteristics of the problem influence the way the agents are built → we then talk about agent architectures
- Some architectures are designed using general principles → we then talk about agent models
- Some of these models have an associated theory that allows the verification of some properties → we then talk about agent theories

Several agent architectures, models and theories exist in the literature!
Agent Architectures

- Module organisation (P: perception, A: action):
  a) horizontal architecture (one path)
  b) modular vertical architecture
  c) layered vertical architecture (two paths)
- Control flow: one / several
- Data flow: broadcast, translation
- Control structure: inhibition, hierarchy, ...
Outline
Agent Oriented Programming
BDI Architecture
Agent Oriented Programming with Jason
Comparison with other paradigms
Conclusions and wrap-up
BDI “Mental” Notions (the mentalistic view)

The behaviour of the agent is explained in terms of its mental state:
- Beliefs (B): information that the agent has about its environment (updated by perception, reasoning, communication)
- Desires (D): states/goals that the agent wishes to be true (aka goals)
- Intentions (I): desires the agent has committed to (based on the current beliefs, desires and other intentions)
BDI architecture [Wooldridge, 2009]

while true do
    B ← brf(B, perception())    // belief revision
    D ← options(B, I)           // desire revision
    I ← filter(B, D, I)         // deliberation
    π ← plan(B, I, A)           // means-end reasoning
    while π ≠ ∅ do
        execute(head(π))
        π ← tail(π)

fine for pro-activity, but not for reactivity (over-commitment)
BDI architecture [Wooldridge, 2009]

while true do
    B ← brf(B, perception())    // belief revision
    D ← options(B, I)           // desire revision
    I ← filter(B, D, I)         // deliberation
    π ← plan(B, I, A)           // means-end reasoning
    while π ≠ ∅ do
        execute(head(π))
        π ← tail(π)
        B ← brf(B, perception())
        if ¬sound(π, I, B) then
            π ← plan(B, I, A)

revise commitment to the plan – re-planning for context adaptation
BDI architecture [Wooldridge, 2009]

while true do
    B ← brf(B, perception())    // belief revision
    D ← options(B, I)           // desire revision
    I ← filter(B, D, I)         // deliberation
    π ← plan(B, I, A)           // means-end reasoning
    while π ≠ ∅ and ¬succeeded(I, B) and ¬impossible(I, B) do
        execute(head(π))
        π ← tail(π)
        B ← brf(B, perception())
        if ¬sound(π, I, B) then
            π ← plan(B, I, A)

revise commitment to intentions – single-minded commitment
BDI architecture [Wooldridge, 2009]

while true do
    B ← brf(B, perception())    // belief revision
    D ← options(B, I)           // desire revision
    I ← filter(B, D, I)         // deliberation
    π ← plan(B, I, A)           // means-end reasoning
    while π ≠ ∅ and ¬succeeded(I, B) and ¬impossible(I, B) do
        execute(head(π))
        π ← tail(π)
        B ← brf(B, perception())
        if reconsider(I, B) then
            D ← options(B, I)
            I ← filter(B, D, I)
        if ¬sound(π, I, B) then
            π ← plan(B, I, A)

reconsider the intentions (not always!)
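The practical-reasoning loop can be rendered executable. The following Python sketch is a toy, single-intention version for a robot moving towards a goal position on a line; brf, options, deliberate and plan are hypothetical stand-ins, not Wooldridge's or Jason's actual functions.

```python
from collections import deque

class ToyBDI:
    """Toy single-intention BDI loop; all methods are stand-ins for
    brf/options/filter/plan, modelling a robot on a line."""

    def __init__(self, start, goal):
        self.pos = start           # the 'environment': a position on a line
        self.goal = goal
        self.trace = []            # positions visited while acting

    def brf(self, B, percept):     # belief revision: record current position
        B = dict(B)
        B["pos"] = percept
        return B

    def options(self, B, I):       # desire generation: a single desire here
        return {self.goal}

    def deliberate(self, B, D, I): # filter: commit to one desire
        return min(D)

    def plan(self, B, I):          # means-end: a queue of unit moves
        step = 1 if I > B["pos"] else -1
        return deque([step] * abs(I - B["pos"]))

    def succeeded(self, I, B):
        return B["pos"] == I

    def sound(self, pi, I, B):     # does the remaining plan still reach I?
        return B["pos"] + sum(pi) == I

    def run(self, max_cycles=20):
        B, I = {}, None
        for _ in range(max_cycles):
            B = self.brf(B, self.pos)
            D = self.options(B, I)
            I = self.deliberate(B, D, I)
            pi = self.plan(B, I)
            while pi and not self.succeeded(I, B):
                self.pos += pi.popleft()      # execute(head(pi))
                self.trace.append(self.pos)
                B = self.brf(B, self.pos)     # perceive again
                if not self.sound(pi, I, B):  # re-plan if needed
                    pi = self.plan(B, I)
            if self.succeeded(I, B):
                return self.trace
        return self.trace
```

For example, `ToyBDI(0, 3).run()` walks the robot through positions 1, 2, 3 and stops once the intention is achieved.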
Intentions

- Intentions pose problems for the agents: they need to determine a way to achieve them (planning and acting)
- Intentions provide a “screen of admissibility” for adopting new intentions
- Agents keep track of their success in attempting to achieve their intentions
- Agents should not spend all their time revising intentions (losing pro-activity and reactivity)
Outline
Agent Oriented Programming
BDI Architecture
Agent Oriented Programming with Jason
    Agent Abstractions in Jason
    Agent Dynamics
    Other language features
Comparison with other paradigms
Conclusions and wrap-up
Overall view

[Diagram: simplified conceptual view of the Jason meta-model [Bordini et al., 2007b] — an Agent is a composition of Beliefs, Goals and Plans; Events relate beliefs and goals to plans, and plans are composed of Actions]

Simple agent programs: example bob.asl, example carl.asl
Jason

The foundational language for Jason is AgentSpeak:
- Originally proposed by Rao [Rao, 1996]
- Programming language for BDI agents
- Elegant notation, based on logic programming
- Inspired by PRS [Georgeff and Lansky, 1987], dMARS [d’Inverno et al., 1997], and BDI Logics [Rao et al., 1995]
- Abstract programming language aimed at theoretical results
Jason – a practical implementation of a variant of AgentSpeak / BDI

- Jason implements the operational semantics of a variant of AgentSpeak
- It has various extensions aimed at a more practical programming language (e.g. definition of the MAS, communication, ...)
- Highly customisable, to simplify extension and experimentation
- Developed by Jomi F. Hübner, Rafael H. Bordini, and others
Outline
Agent Oriented Programming
BDI Architecture
Agent Oriented Programming with Jason
    Agent Abstractions in Jason
    Agent Dynamics
    Other language features
Comparison with other paradigms
Conclusions and wrap-up
Main Language Constructs and Runtime Structures

Language constructs:
- Beliefs: represent the information available to an agent (e.g. about the environment or other agents)
- Goals: represent states of affairs the agent wants to bring about
- Plans: recipes for action, representing the agent’s know-how
- Actions: can be internal, external, communicative or organisational

Runtime structures:
- Events: happen as a consequence of changes in the agent’s beliefs or goals
- Intentions: plans instantiated to achieve some goal

Note: identifiers starting in upper case denote variables
Belief representation

Syntax: beliefs are represented by annotated literals of first-order logic:

  functor(term1, ..., termn)[annot1, ..., annotm]

Example (belief base of agent Tom):

  red(box1)[source(percept)].
  friend(bob,alice)[source(bob)].
  lier(alice)[source(self),source(bob)].
  ~lier(bob)[source(self)].
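As an illustration, annotated literals like those above can be modelled in a few lines of Python. `Belief` and `BeliefBase` here are hypothetical helper classes (not part of Jason): a belief is a (functor, args) literal carrying a set of source annotations, and adding the same literal from a new source merges the annotations.

```python
# Hypothetical model of Jason-style annotated literals (not Jason's API).
class Belief:
    def __init__(self, functor, args, sources=()):
        self.functor = functor
        self.args = tuple(args)
        self.sources = set(sources)

    def matches(self, functor, args):
        return self.functor == functor and self.args == tuple(args)

class BeliefBase:
    def __init__(self):
        self.beliefs = []

    def add(self, functor, args, source):
        for b in self.beliefs:
            if b.matches(functor, args):   # same literal again:
                b.sources.add(source)      # just merge the annotation
                return
        self.beliefs.append(Belief(functor, args, {source}))

    def sources_of(self, functor, args):
        for b in self.beliefs:
            if b.matches(functor, args):
                return sorted(b.sources)
        return []

# Part of Tom's belief base from the example above
bb = BeliefBase()
bb.add("red", ["box1"], "percept")
bb.add("lier", ["alice"], "self")
bb.add("lier", ["alice"], "bob")   # lier(alice)[source(self),source(bob)]
```

After these additions, `lier(alice)` carries both `self` and `bob` as sources, mirroring the double annotation in Tom's belief base.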
Goal representation

Syntax: goals are represented as beliefs with a prefix:
- ! denotes an achievement goal (goal to do)
- ? denotes a test goal (goal to know)

Example (initial goal of agent Tom):

  !write(book).
Plan representation

Syntax: an AgentSpeak plan has the following general structure:

  triggering_event : context <- body.

where:
- triggering_event: the events the plan is meant to handle
- context: the situations in which the plan can be used
- body: the course of action used to handle the event

The course of action is used if the context is believed to be true at the time the plan is chosen to handle the event.
Plan representation – Triggering event

- Events happen as a consequence of changes in the agent’s beliefs or goals
- An agent reacts to events by executing plans

Syntax:
- ‘belief addition’ event: +b
- ‘belief deletion’ event: -b
- ‘achievement-goal addition’ event: +!g
- ‘achievement-goal deletion’ event: -!g (≡ failure of the goal)
- ‘test-goal addition’ event: +?g
- ‘test-goal deletion’ event: -?g (≡ failure of the goal)
Plan representation – Context

The context is a boolean expression built with the following operators:

  Boolean operators           Arithmetic operators
  &      (and)                +    (sum)
  |      (or)                 -    (subtraction)
  not    (not)                *    (multiply)
  =      (unification)        /    (divide)
  >, >=  (relational)         div  (integer division)
  <, <=  (relational)         mod  (remainder)
  ==     (equals)             **   (power)
  \==    (different)
Plan representation – Body

A plan body is a list of expressions, separated by ‘;’, each being:
- a belief operator:
  - + to create a belief (e.g. +b1)
  - - to dispose of a belief (e.g. -b1)
  - -+ to update a belief (e.g. -+b1)
- a goal operator:
  - ! to create an achievement sub-goal (e.g. !g1)
  - ? to create a test sub-goal (e.g. ?g1)
  - !! to create a new achievement goal (e.g. !!g1)
- an external action, to act on the environment (e.g. act1)
- an internal action, executed as part of the agent's reasoning cycle (e.g. .intact1):
  - does not change the environment
  - encapsulates code and may invoke legacy code
- a constraint, to check some condition (e.g. X == 2)
Internal Actions

- Internal actions can be defined by the user in Java:

  libname.action_name(...)

- Standard (pre-defined) internal actions have an empty library name:
  - .print(term1,term2,...)
  - .union(list1,list2,list3)
  - .my_name(var)
  - .send(ag,perf,literal)
  - .intend(literal)
  - .drop_intention(literal)
- Many others are available for: printing, sorting, list/string operations, manipulating the beliefs/annotations/plan library, creating agents, waiting for / generating events, etc.
Plan representation – Example

+rain : time_to_leave(T) & clock.now(H) & H >= T
  <- !g1;                      // new sub-goal
     !!g2;                     // new goal
     ?b(X);                    // new test goal
     +b1(T-H);                 // add mental note
     -b2(T-H);                 // remove mental note
     -+b3(T*H);                // update mental note
     jia.get(X);               // internal action
     X > 10;                   // constraint to carry on
     close(door);              // external action
     !g3[hard_deadline(3000)]. // goal with deadline
Plan representation – Example

+green_patch(Rock)[source(percept)]
  : not battery_charge(low)
  <- ?location(Rock,Coordinates);
     !at(Coordinates);
     !examine(Rock).

+!at(Coords)
  : not at(Coords) & safe_path(Coords)
  <- move_towards(Coords);
     !at(Coords).

+!at(Coords)
  : not at(Coords) & not safe_path(Coords)
  <- ...

+!at(Coords) : at(Coords).
(BDI & Jason) Hello World – agent bob

happy(bob).   // B
!say(hello).  // D

+!say(X) : happy(bob) <- .print(X). // I

- beliefs: Prolog-like (first-order logic)
- desires: Prolog-like, with ! prefix
- plans:
  - define when a desire becomes an intention → deliberation
  - define how it is satisfied → means-end
  - are used for practical reasoning
(BDI & Jason) Hello World – agent bob: desires from perception – options

+happy(bob) <- !say(hello).

+!say(X) : not today(monday) <- .print(X).
(BDI & Jason) Hello World – agent bob: source of beliefs

+happy(bob)[source(A)]
  : someone_who_knows_me_very_well(A)
  <- !say(hello).

+!say(X) : not today(monday) <- .print(X).
(BDI & Jason) Hello World – agent bob: plan selection

+happy(H)[source(A)] : sincere(A) & .my_name(H) <- !say(hello).

+happy(H) : not .my_name(H) <- !say(i_envy(H)).

+!say(X) : not today(monday) <- .print(X).
(BDI & Jason) Hello World – agent bob: intention revision

+happy(H)[source(A)] : sincere(A) & .my_name(H) <- !say(hello).

+happy(H) : not .my_name(H) <- !say(i_envy(H)).

+!say(X) : not today(monday) <- .print(X); !say(X).

-happy(H) : .my_name(H) <- .drop_intention(say(hello)).
(BDI & Jason) Hello World – agent bob: intention revision / features

- we can have several intentions based on the same plans → running concurrently
- a long-term goal keeps running → the agent can still react meanwhile!
Outline
Agent Oriented Programming
BDI Architecture
Agent Oriented Programming with Jason
    Agent Abstractions in Jason
    Agent Dynamics
    Other language features
Comparison with other paradigms
Conclusions and wrap-up
Agent dynamics

[Diagram: the Jason reasoning cycle — percepts enter through perceive/BUF/BRF and update the belief base; messages enter through checkMail/SM, filtered by SocAcc; changes to beliefs and goals generate events; SE selects an event, which is unified against the plan library to obtain the relevant and then the applicable plans; SO selects one of them as the intended means, creating or updating an intention; SI selects an intention and one step of it is executed, producing actions (act) or messages (sendMsg)]
Basic Reasoning Cycle (runtime interpreter)

1. perceive the environment and update the belief base
2. process new messages
3. select an event
4. select relevant plans
5. select applicable plans
6. create/update an intention
7. select an intention to execute
8. execute one step of the selected intention
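The cycle can be sketched as a miniature interpreter. This Python toy keeps only the skeleton — an event queue, a plan library and a set of intentions, with one step executed per cycle — and omits unification and the selection functions SE/SO/SI (it simply takes the first option). All names are illustrative, not Jason's implementation.

```python
from collections import deque

class MiniInterpreter:
    """Teaching skeleton of the reasoning cycle, not Jason itself.
    Note: unlike Jason, a parent intention here does not suspend
    while its sub-goal is being pursued."""

    def __init__(self):
        self.beliefs = set()
        self.events = deque()
        self.plans = []            # (trigger, context, body) triples
        self.intentions = deque()
        self.log = []              # external actions 'executed'

    def add_plan(self, trigger, context, body):
        # body: list of ('!', goal) sub-goals or ('act', name) actions
        self.plans.append((trigger, context, list(body)))

    def add_belief(self, b):       # belief addition generates an event
        if b not in self.beliefs:
            self.beliefs.add(b)
            self.events.append(("+", b))

    def cycle(self):
        if self.events:                            # select an event
            ev = self.events.popleft()
            for trigger, context, body in self.plans:
                # relevant (trigger matches) and applicable (context holds)
                if trigger == ev and context(self.beliefs):
                    self.intentions.append(deque(body))  # intended means
                    break
        if self.intentions:                        # select an intention
            intention = self.intentions.popleft()
            kind, arg = intention.popleft()        # execute one step
            if kind == "!":
                self.events.append(("+!", arg))    # sub-goal -> event
            else:
                self.log.append(arg)               # 'execute' the action
            if intention:
                self.intentions.append(intention)

ag = MiniInterpreter()
ag.add_plan(("+", "happy"), lambda B: True,
            [("!", "say"), ("act", "smile")])
ag.add_plan(("+!", "say"), lambda B: True, [("act", "print_hello")])
ag.add_belief("happy")
for _ in range(6):
    ag.cycle()
```

Adding the belief `happy` triggers the first plan, whose sub-goal `!say` becomes an internal event handled by the second plan in a later cycle, so two intentions end up running interleaved.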
Basic Reasoning Cycle – Jason Reasoning Cycle

[The reasoning-cycle diagram is repeated, annotated with the research areas that plug into each stage:]
- around perception, beliefs and messages: machine perception, belief revision, knowledge representation, communication, argumentation, trust, social power
- around plan selection: planning, reasoning, decision-theoretic techniques, learning (reinforcement)
- around intentions and actions: intention reconsideration, scheduling, action theories
Belief dynamics

Internal reasoning: the belief operators + and - can be used to add and remove beliefs annotated with source(self) (mental notes)

+lier(alice); // adds lier(alice)[source(self)]
-lier(john);  // removes lier(john)[source(self)]

Perception (from the environment): beliefs are automatically updated according to the perception of the agent (annotated with source(percept))
Belief dynamics

Communication (from other agents): when an agent receives a tell message, the content becomes a new belief, annotated with the sender of the message; an untell message deletes the belief corresponding to the content

.send(tom,tell,lier(alice));   // sent by bob
// adds lier(alice)[source(bob)] to Tom's belief base
...
.send(tom,untell,lier(alice)); // sent by bob
// removes lier(alice)[source(bob)] from Tom's belief base
Goal dynamics

Internal reasoning: the goal operators !, !! and ? are used to add a new goal (annotated with source(self))

...
!write(book);  // adds new achievement goal !write(book)[source(self)]
?publisher(P); // adds new test goal ?publisher(P)[source(self)]
...
Goal dynamics

Communication of achievement goals: when an agent receives an achieve message, the content becomes a new achievement goal (annotated with the sender of the message)

.send(tom,achieve,write(book));   // sent by Bob
// adds new goal write(book)[source(bob)] for Tom
.send(tom,unachieve,write(book)); // sent by Bob
// removes goal write(book)[source(bob)] for Tom

Communication of test goals: when an agent receives an askOne or askAll message, the content becomes a new test goal (annotated with the sender of the message)

.send(tom,askOne,published(P),Answer); // sent by Bob
// adds new goal ?publisher(P)[source(bob)] for Tom
// Tom's response will unify with Answer
Plan dynamics

The plans forming the agent's plan library come from:
- plans added (resp. removed) dynamically by intentions, through internal actions: .add_plan (resp. .remove_plan)
- plans added (resp. removed) by communication: tellHow (resp. untellHow)

Example:

.send(bob, askHow, +!goto(_,_)[source(_)], ListOfPlans);
...
.plan_label(Plan,hp);  // get a plan from its label
.send(A,tellHow,Plan);
.send(bob,tellHow,"+!start : true <- .println(hello).").
A note about “Control”

Agents can control (manipulate) their own (and influence the others’):
- beliefs
- goals
- plans

By doing so they control their behaviour.

The developer provides the initial values of these elements, and thus also influences the behaviour of the agent.
Outline
Agent Oriented Programming
BDI Architecture
Agent Oriented Programming with Jason
    Agent Abstractions in Jason
    Agent Dynamics
    Other language features
Comparison with other paradigms
Conclusions and wrap-up
Achievement goal

Example:

+!g : g <- true. // g is a declarative goal

+!g : c1 <- p1; ?g.
+!g : c2 <- p2; ?g.
...
+!g : cn <- pn; ?g.
+g : true <- .succeed_goal(g).
Backtracking Declarative Goal Pattern

Example:

+!g : g <- true. // g is a declarative goal

+!g : c1 <- p1; ?g.
+!g : c2 <- p2; ?g.
...
+!g : cn <- pn; ?g.
+g : true <- .succeed_goal(g).
-!g : true <- !!g.
Exclusive Backtracking Declarative Goal Pattern

Example:

+!g : g <- true. // g is a declarative goal

+!g : not p(1,g) & c1 <- +p(1,g); p1; ?g.
+!g : not p(2,g) & c2 <- +p(2,g); p2; ?g.
...
+!g : not p(n,g) & cn <- +p(n,g); pn; ?g.
-!g : true <- !!g.
+g : true <- .abolish(p(_,g)); .succeed_goal(g).
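Stripped of Jason syntax, the exclusive-backtracking pattern amounts to trying each alternative at most once until the declarative goal holds. In this minimal Python sketch (all names illustrative, not Jason API), the p(i,g) mental notes become a `tried` set and the `-!g <- !!g` retry becomes the loop:

```python
# Toy version of the exclusive-backtracking pattern.
def achieve_exclusive(goal_holds, recipes):
    """Try each recipe at most once until the declarative goal holds."""
    tried = set()                 # plays the role of the p(i,g) mental notes
    while not goal_holds():
        untried = [i for i in range(len(recipes)) if i not in tried]
        if not untried:
            return False          # no applicable alternative left: failure
        i = untried[0]            # pick the first untried recipe
        tried.add(i)
        recipes[i]()              # execute the plan body p_i
    return True                   # +g: the goal has been achieved

state = {"done": False}
def p1(): pass                    # a recipe that fails to bring g about
def p2(): state["done"] = True    # a recipe that achieves g
ok = achieve_exclusive(lambda: state["done"], [p1, p2])
```

Here the first recipe is tried, fails to make the goal true, and is never retried; the second recipe then succeeds, mirroring how the `not p(i,g)` contexts rule out already-attempted plans.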
Failure Handling: Contingency Plans

Example:

!g1. // initial goal

+!g1 : true <- !g2(X); .print("end g1 ",X).
+!g2(X) : true <- !g3(X); .print("end g2 ",X).
+!g3(X) : true <- !g4(X); .print("end g3 ",X).
+!g4(X) : true <- !g5(X); .print("end g4 ",X).
+!g5(X) : true <- .fail.

-!g3(X) : true <- .print("in g3 failure").

Output:
saying: in g3 failure
saying: end g2 failure
saying: end g1 failure
Failure Handling: Contingency Plans

Example (blind commitment to g):

+!g : g. // g is a declarative goal

+!g : ... <- a1; ?g.
+!g : ... <- a2; ?g.
+!g : ... <- a3; ?g.

+!g : true <- !g. // keep trying
-!g : true <- !g. // in case of some failure

+g : true <- .succeed_goal(g).
Failure Handling: Contingency Plans

Example (single-minded commitment):

+!g : g. // g is a declarative goal

+!g : ... <- a1; ?g.
+!g : ... <- a2; ?g.
+!g : ... <- a3; ?g.

+!g : true <- !g. // keep trying
-!g : true <- !g. // in case of some failure

+g : true <- .succeed_goal(g).
+f : true <- .fail_goal(g). // f is the drop condition for g
Failure Handling: Compiler pre-processing – directives

Example (single-minded commitment):

{ begin smc(g,f) }
+!g : ... <- a1.
+!g : ... <- a2.
+!g : ... <- a3.
{ end }
Meta Programming

Example (an agent that asks for plans on demand):

-!G[error(no_relevant)] : teacher(T)
  <- .send(T, askHow, { +!G }, Plans);
     .add_plan(Plans);
     !G.

In the event of a failure to achieve any goal G because there is no relevant plan, the agent asks a teacher for plans to achieve G and then tries G again.

- The failure event is annotated with the error type, line, source, ...; error(no_relevant) means there is no plan in the agent's plan library to achieve G
- { +!G } is the syntax to enclose triggers/plans as terms
Other Language Features: Strong Negation

+!leave(home)
  : ~raining
  <- open(curtains); ...

+!leave(home)
  : not raining & not ~raining
  <- .send(mum,askOne,raining,Answer,3000); ...
Prolog-like Rules in the Belief Base

tall(X) :- woman(X) & height(X,H) & H > 1.70 |
           man(X) & height(X,H) & H > 1.80.

likely_color(Obj,C) :-
    colour(Obj,C)[degOfCert(D1)] &
    not (colour(Obj,_)[degOfCert(D2)] & D2 > D1) &
    not ~colour(Obj,C).
Plan Annotations

- Like beliefs, plans can also have annotations, which go in the plan label
- Annotations contain meta-level information about the plan, which selection functions can take into consideration
- The annotations of an intended plan instance can be changed dynamically (e.g. to change intention priorities)
- There are some pre-defined plan annotations, e.g. to force a breakpoint at that plan or to make the whole plan execute atomically

Example (an annotated plan):

@myPlan[chance_of_success(0.3), usual_payoff(0.9), any_other_property]
+!g(X) : c(t) <- a(X).
Internal Actions

- Unlike (external) actions, internal actions do not change the environment
- Their code is executed as part of the agent's reasoning cycle
- AgentSpeak is meant as a high-level language for the agent's practical reasoning; internal actions can be used to invoke legacy code elegantly
- Internal actions can be defined by the user in Java:

  libname.action_name(...)
Standard Internal Actions

- Standard (pre-defined) internal actions have an empty library name:
  - .print(term1,term2,...)
  - .union(list1,list2,list3)
  - .my_name(var)
  - .send(ag,perf,literal)
  - .intend(literal)
  - .drop_intention(literal)
- Many others are available for: printing, sorting, list/string operations, manipulating the beliefs/annotations/plan library, creating agents, waiting for / generating events, etc.
Namespaces & Modularity

{ include("initiator.asl", pc) }
{ include("initiator.asl", tv) }

!pc::startCNP(fix(pc)).
!tv::startCNP(fix(tv)).

+pc::winner(X) <- .print(X).
Concurrent Plans

+!ga <- ...; !gb; ...
+!gb <- ...; !g1 |&| !g2; a1; ...
// fork-join-and:
// a1 will be executed once both !g1 and !g2 have been achieved

+!ga <- ...; !gb; ...
+!gb <- ...; !g1 ||| !g2; a1; ...
// fork-join-xor:
// a1 will be executed once either !g1 or !g2 has been achieved;
// when one of them is achieved, the other is dropped
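The two operators can be mimicked with ordinary futures. In this hedged Python sketch, "achieving a sub-goal" is just running a function, and "dropping" the losing intention of ||| is approximated by cancelling its future (a real interpreter would remove the intention):

```python
import concurrent.futures as cf
import time

def fork_join_and(goal1, goal2):
    """|&| : continue only when both sub-goals have been achieved."""
    with cf.ThreadPoolExecutor() as ex:
        f1, f2 = ex.submit(goal1), ex.submit(goal2)
        return f1.result(), f2.result()          # join on both

def fork_join_xor(goal1, goal2):
    """||| : continue as soon as either sub-goal is achieved;
    the other is abandoned (best effort here)."""
    with cf.ThreadPoolExecutor() as ex:
        futures = {ex.submit(goal1), ex.submit(goal2)}
        done, not_done = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        for f in not_done:
            f.cancel()                           # 'drop' the loser
        return next(iter(done)).result()

both = fork_join_and(lambda: "g1", lambda: "g2")
first = fork_join_xor(lambda: "quick",
                      lambda: (time.sleep(0.3), "slow")[1])
```

`fork_join_and` returns only after both sub-goals finish, while `fork_join_xor` returns the quick result as soon as it is available.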
Jason Customisations

- Agent class customisation: selectMessage, selectEvent, selectOption, selectIntention, buf, brf, ...
- Agent architecture customisation: perceive, act, sendMsg, checkMail, ...
- Belief base customisation: add, remove, contains, ...
- Examples available with Jason: persistent belief bases (in text files, in databases, ...)
Outline
Agent Oriented Programming
BDI Architecture
Agent Oriented Programming with Jason
Comparison with other paradigms
Conclusions and wrap-up
Jason × Java

Consider a very simple robot with two goals:
- when a piece of gold is seen, go to it
- when the battery is low, go charge it
Java code – go to gold

public class Robot extends Thread {
    boolean seeGold, lowBattery;
    public void run() {
        while (true) {
            while (!seeGold) {
                a = randomDirection();
                doAction(go(a));
            }
            while (seeGold) {
                a = selectDirection();
                doAction(go(a));
            }
        }
    }
}
Java code – charge battery

public class Robot extends Thread {
    boolean seeGold, lowBattery;
    public void run() {
        while (true) {
            while (!seeGold) {
                a = randomDirection();
                doAction(go(a));
                if (lowBattery) charge();
            }
            while (seeGold) {
                a = selectDirection();
                if (lowBattery) charge();
                doAction(go(a));
                if (lowBattery) charge();
            }
        }
    }
}
Jason code

direction(gold)   :- see(gold).
direction(random) :- not see(gold).

+!find(gold)   // long-term goal
  <- ?direction(A);
     go(A);
     !find(gold).

+battery(low)  // reactivity
  <- !charge.

^!charge[state(started)]   // goal meta-events
  <- .suspend(find(gold)).
^!charge[state(finished)]
  <- .resume(find(gold)).
Fibonacci calculator server – “java” version

[Diagram: Bob asks the Fibonaccer server for fib(40), Alice for fib(3)]

while true
    m = receiveMsg()
    if m == fib(N)
        m.answer(fib(m.getArg(0)))

int fib(int n)
    if n <= 2
        return 1
    else
        return fib(n-1) + fib(n-2)

How long will Alice wait?
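The question is not rhetorical: with this naive recursive definition, the single-threaded server performs an exponential number of calls for fib(40) before it can even read Alice's message. A small Python sketch (a counting stand-in, not the server code) makes the blow-up concrete:

```python
def fib_calls(n, counter=None):
    """Naive recursive fib that also counts how many calls it makes:
    the server is busy for every one of them before it can serve
    the next message in its mailbox."""
    counter = counter if counter is not None else [0]
    counter[0] += 1
    if n <= 2:
        return 1, counter[0]
    a, _ = fib_calls(n - 1, counter)
    b, _ = fib_calls(n - 2, counter)
    return a + b, counter[0]

fib3, calls3 = fib_calls(3)     # Alice's request: 3 calls in total
fib25, calls25 = fib_calls(25)  # already about 150,000 calls
```

The call count grows like the Fibonacci numbers themselves (2·fib(n) − 1 calls for fib(n)), so Bob's fib(40) request keeps the server busy for hundreds of millions of calls while Alice's trivial fib(3) waits in the mailbox.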
Fibonacci calculator server – Akka

[Slide shows the corresponding Akka actor implementation]
Fibonacci calculator agent – Jason version

[Diagram: Bob asks the Fibonaccer agent for fib(40), Alice for fib(3)]

+?fib(1,1).
+?fib(2,1).
+?fib(N,F) <- ?fib(N-1,A); ?fib(N-2,B); F = A+B.

How long will Alice wait?
Jason × Prolog

- With the Jason extensions: a nice separation of theoretical and practical reasoning
- The BDI architecture allows:
  - long-term goals (goal-based behaviour)
  - reacting to changes in a dynamic environment
  - handling multiple foci of attention (concurrency)
- Acting on an environment, and a higher-level conception of a distributed system
Outline
Agent Oriented Programming
BDI Architecture
Agent Oriented Programming with Jason
Comparison with other paradigms
Conclusions and wrap-up
Summary

- AgentSpeak:
  - logic + BDI
  - agent programming language
- Jason:
  - AgentSpeak interpreter, implementing the operational semantics of AgentSpeak
  - speech-act based communication
  - highly customisable
  - useful tools
  - open source
  - open issues
Further Resources

- http://jason.sourceforge.net
- R. H. Bordini, J. F. Hübner, and M. Wooldridge. Programming Multi-Agent Systems in AgentSpeak using Jason. John Wiley & Sons, 2007.
Bibliography I
Bordini, R. H., Braubach, L., Dastani, M., Fallah-Seghrouchni, A. E., Gómez-Sanz, J. J., Leite, J., O’Hare, G. M. P., Pokahr, A., and Ricci, A. (2006). A survey of programming languages and platforms for multi-agent systems. Informatica (Slovenia), 30(1):33–44.
Bordini, R. H., Dastani, M., Dix, J., and Fallah-Seghrouchni, A. E., editors (2005). Multi-Agent Programming: Languages, Platforms and Applications, volume 15 of Multiagent Systems, Artificial Societies, and Simulated Organizations. Springer.
Bordini, R. H., Dastani, M., Dix, J., and Fallah-Seghrouchni, A. E., editors (2009). Multi-Agent Programming: Languages, Tools and Applications. Springer.
Bordini, R. H., Hübner, J. F., and Wooldridge, M. (2007a). Programming Multi-Agent Systems in AgentSpeak Using Jason. Wiley Series in Agent Technology. John Wiley & Sons.
Bibliography II

Bordini, R. H., Hübner, J. F., and Wooldridge, M. (2007b). Programming Multi-Agent Systems in AgentSpeak using Jason. Wiley Series in Agent Technology. John Wiley & Sons.
Bratman, M. E., Israel, D. J., and Pollack, M. E. (1988). Plans and resource-bounded practical reasoning. Computational Intelligence, 4:349–355.
Dastani, M. (2008). 2APL: a practical agent programming language. Autonomous Agents and Multi-Agent Systems, 16(3):214–248.
d’Inverno, M., Kinny, D., Luck, M., and Wooldridge, M. (1997). A formal specification of dMARS. In International Workshop on Agent Theories, Architectures, and Languages, pages 155–176. Springer.
Fisher, M. (2005). MetateM: the story so far. In ProMAS, pages 3–22.
Bibliography III
Fisher, M., Bordini, R. H., Hirsch, B., and Torroni, P. (2007). Computational logics and agents: A road map of current technologies and future trends. Computational Intelligence, 23(1):61–91.
Georgeff, M. P. and Lansky, A. L. (1987). Reactive reasoning and planning. In AAAI, volume 87, pages 677–682.
Giacomo, G. D., Lespérance, Y., and Levesque, H. J. (2000). ConGolog, a concurrent programming language based on the situation calculus. Artif. Intell., 121(1-2):109–169.
Hindriks, K. V. (2009). Programming rational agents in GOAL. In [Bordini et al., 2009], pages 119–157.
Hindriks, K. V., de Boer, F. S., van der Hoek, W., and Meyer, J.-J. C. (1997). Formal semantics for an abstract agent programming language. In Singh, M. P., Rao, A. S., and Wooldridge, M., editors, ATAL, volume 1365 of Lecture Notes in Computer Science, pages 215–229. Springer.
Bibliography IV
Pokahr, A., Braubach, L., and Lamersdorf, W. (2005). Jadex: a BDI reasoning engine. In [Bordini et al., 2005], pages 149–174.
Rao, A. S. (1996). AgentSpeak(L): BDI agents speak out in a logical computable language. In de Velde, W. V. and Perram, J. W., editors, MAAMAW, volume 1038 of Lecture Notes in Computer Science, pages 42–55. Springer.
Rao, A. S., Georgeff, M. P., et al. (1995). BDI agents: from theory to practice. In ICMAS, volume 95, pages 312–319.
Shoham, Y. (1993). Agent-oriented programming. Artif. Intell., 60(1):51–92.
Winikoff, M. (2005). JACK intelligent agents: an industrial strength platform. In [Bordini et al., 2005], pages 175–193.
Bibliography V
Wooldridge, M. (2009). An Introduction to MultiAgent Systems. John Wiley and Sons, 2nd edition.