
WORD2HTN: Learning Task Hierarchies Using Statistical Semantics and Goal Reasoning

Sriram Gopalakrishnan and Héctor Muñoz-Avila
Lehigh University, Computer Science and Engineering
19 Memorial Drive West, Bethlehem, PA 18015-3084 USA
{srg315, [email protected]}

Ugur Kuter
SIFT, LLC
9104 Hunting Horn Lane, Potomac, Maryland 20854 USA
[email protected]

Abstract

This paper describes WORD2HTN, an algorithm for learning hierarchical tasks and goals from plan traces in planning domains. WORD2HTN combines semantic text analysis techniques and subgoal learning in order to generate Hierarchical Task Networks (HTNs). Unlike existing HTN learning algorithms, WORD2HTN learns distributed vector representations that capture the similarities and semantics of the components of plan traces. WORD2HTN uses those representations to cluster the components into task and goal hierarchies, which can then be used for automated planning. We describe our algorithm and present our preliminary evaluation, thereby demonstrating the promise of WORD2HTN.

1 Introduction

Hierarchical Task Network (HTN) planning is a problem-solving paradigm for generating plans (i.e., sequences of actions) that achieve some input tasks. Tasks are symbolic representations of activities such as achieve goal g, where g is a standard goal. Tasks can also be more abstract conditions such as "protect the house". HTN planning generates a plan by a recursive procedure in which tasks at a higher level of abstraction are decomposed into simpler, more concrete tasks. The task decomposition process terminates when a sequence of actions is generated that solves the input tasks. HTN planning's theoretical underpinnings are well understood [Erol et al., 1994], and it has been shown to be useful in many practical applications [Nau et al., 2005].

HTN planners require two knowledge sources: operators (generalized definitions of actions) and methods (descriptions of how and when to decompose a task). While operators are generally accepted to be easily elicited in many planning domains, methods require a more involved and time-consuming knowledge acquisition effort. Part of the reason for this effort is that crafting methods requires us to reason about how to combine operators to achieve the tasks. For this reason, automated learning of HTN methods has been the subject of frequent research over the years. Most of the work on learning HTNs uses structural assumptions and additional domain knowledge explicitly relating tasks and goals to make goal-directed inferences that produce task groupings and hierarchies [Choi and Langley, 2005; Hogg et al., 2008; Zhuo et al., 2014].

In this paper, we present a new semantic goal-reasoning approach, called WORD2HTN, for learning tasks and their decompositions by identifying subgoals and learning HTNs. WORD2HTN builds on work involving word embeddings, which are vector representations of words originally developed for Natural Language Processing (NLP). WORD2HTN combines that paradigm with an algorithm that processes the learned representations to generate task hierarchies. Our contributions are the following:

• We describe WORD2HTN, a new semantic learning algorithm for identifying tasks and goals based on semantic groupings automatically elicited from plan traces. WORD2HTN takes as input execution traces as sentences, with the planning components of the trace (operators, atoms, and objects) as words. It then uses the popular semantic text analysis tool WORD2VECTOR [Mikolov et al., 2013], which uses a neural network to learn distributed vector representations for the words in the plan traces. We discuss the WORD2VECTOR tool in Section 4 and give an intuitive explanation for why it learns good representations. To put it succinctly, distributed vector representations are learned based on the co-occurrence of the words in the input traces. From the vector representations, we can generate groupings through agglomerative hierarchical clustering, which repeatedly merges the words that are closest together to build a hierarchy (a brief illustrative sketch of this pipeline is given after this list).

• We describe a new algorithm that extracts HTNs from the clustering. The results of the clustering can also be used to group goals and identify tasks. The similarity of the learned vector representations with the current and goal states (the input to the planner) is used as the method for decomposing a higher-level task in an HTN into the next action to be taken.

• We report and discuss our implementation of the WORD2HTN approach and demonstrate its progress and promise via examples in a logistics planning domain.
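To give a concrete picture of the pipeline outlined in the first contribution, the following is a minimal sketch, not the authors' implementation: it assumes the gensim and SciPy libraries, and the two plan-trace "sentences" and component names are fabricated stand-ins for real logistics traces.

```python
# Illustrative sketch of the WORD2HTN-style pipeline described above.
# Assumptions (not from the paper): gensim >= 4.0 and SciPy are installed,
# and the toy traces below stand in for real plan traces.
import numpy as np
from gensim.models import Word2Vec
from scipy.cluster.hierarchy import linkage

# Each plan trace is one "sentence"; each planning component (object, atom,
# action) is one "word". Both traces here are hypothetical.
plan_traces = [
    ["package1", "package1_in_Location1", "Truck1",
     "Load_package1_Truck1_Location1", "package1_in_Truck1",
     "Move_Truck1_Location1_Location2", "Truck1_in_Location2",
     "Unload_package1_Truck1_Location2", "package1_in_Location2"],
    ["package2", "package2_in_Location1", "Truck1",
     "Load_package2_Truck1_Location1", "package2_in_Truck1",
     "Move_Truck1_Location1_Location2", "Truck1_in_Location2",
     "Unload_package2_Truck1_Location2", "package2_in_Location2"],
]

# Learn N-dimensional distributed representations (N = vector_size, C = window).
model = Word2Vec(plan_traces, vector_size=10, window=3,
                 min_count=1, sg=1, epochs=200)

# Agglomerative hierarchical clustering over the learned vectors: components
# that occur in similar contexts get merged early in the hierarchy.
components = list(model.wv.index_to_key)
vectors = np.vstack([model.wv[c] for c in components])
hierarchy = linkage(vectors, method="average", metric="cosine")
print(components)
print(hierarchy)  # each row records one bottom-up merge step
```

The linkage output is only an unlabeled merge tree; turning such a tree into named tasks and decomposition methods is the part contributed by WORD2HTN itself.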
2 Semantic Word Embeddings and Vector Representations

For semantic reasoning about goals and tasks, we took inspiration from Natural Language Processing (NLP) techniques, specifically word embeddings. Word embeddings represent, or embed, a word in an N-dimensional space as distributed vectors [Mikolov et al., 2013]. This technique for learning word embeddings is popularly called WORD2VECTOR. The number of dimensions N can be chosen by heuristics or experimentally (we discuss this further in Section 4). To illustrate, consider a domain component Location1 in a 5-dimensional space. It may be represented by [0.5, 1.2, −0.6, 0.8, 0.4], while the component Location2 might have the representation [0.4, 1.4, 0.5, 0.9, 0.3]. Notice how all dimensions but one are close in value. This is a simplified example of how related components could be encoded in the vector model. Most of the dimensions would be similar when Location1 and Location2 are used frequently in similar contexts, because their vector representations are learned by adjusting the dimensions from their initial random values in the same directions. It is important to note that, with the WORD2VECTOR approach to learning word embeddings, we do not know what each dimension means. In NLP, the dimensions are referred to as "latent semantic dimensions" because they implicitly encode properties, but what each dimension means is hidden and depends on the training process. See Figure 1 for an example of distributed vector representations for some components of a logistics (transportation) domain.

Figure 1: Sample Distributed Vector Representations of Components from the Logistics Domain

Mikolov et al. showed how quality vector representations of words from a corpus (a group of documents) can be learned using neural networks. The neural network starts with randomized initial weights and corrects them with the training sentences (plan traces in our case). The representations are learned by looking at words within a specified context window. The context window size (C) defines the number of words (planning components) before and after a target word that are used to define its context. The target component's representation is adjusted to be closer to the words in its context. When all the sentences (plan traces) have been iterated over, the final N-dimensional representation (N is an input parameter) of each word in the vocabulary has been learned. The distributed representations are learned for each word to maximize the word's similarity with the words that appear in its contexts. Additionally, words that do not occur within the same contexts will have differently aligned vectors. More details on this process are discussed in Section 4.

There are two network architectures in WORD2VECTOR. One is Continuous-Bag-of-Words (CBOW) and the other is Skip-Gram (SG). These are two ways of comparing a word to its context, and the choice determines the network structure (see Figure 2).
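To make the Location1/Location2 illustration above concrete, the two toy vectors from this section can be compared with cosine similarity, the usual measure of how closely two embeddings are aligned. This is a small sketch assuming only NumPy; the vectors are the made-up 5-dimensional examples from the text, not values learned by WORD2VECTOR.

```python
# Cosine similarity between the toy 5-dimensional embeddings used as an
# example in this section. The numbers are illustrative, not learned values.
import numpy as np

location1 = np.array([0.5, 1.2, -0.6, 0.8, 0.4])
location2 = np.array([0.4, 1.4, 0.5, 0.9, 0.3])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Return the cosine of the angle between vectors a and b."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Despite the mismatch in the third dimension, the two vectors point in
# broadly the same direction, so the similarity is fairly high (about 0.79).
print(cosine_similarity(location1, location2))
```

Components that never share contexts would instead end up with vectors pointing in noticeably different directions, yielding similarities much closer to zero.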
3 Definitions and Terminology

We use standard definitions from the planning literature, such as objects, atoms, actions, and state variables [Ghallab et al., 2004, Chapter 2]. We refer to these constructs as planning components; they correspond to the "words" in the plan traces. We summarize our definitions for planning components below.

An action is a primitive (smallest) task with arguments that are constant components (e.g., load-truck truck1 location1). Note that the entire action component, including the arguments, is treated as one word in the plan traces while learning. An action can change the state of the domain when it is applied. Each action has a precondition and an effect, both of which are sets of atoms. For example, "Move Truck1 Location1 Location2" is an action that has the precondition "Truck1 in Location1", and its effect can be "Truck1 in Location2".

An atom is a positive literal, such as "Truck1 in Location2". All the literals in this initial work are atoms (positive literals) as opposed to negations; "Truck1 not in Location2" is instead covered by "Truck1 in Location3" and the like.

A state is a collection of literals indicating the current values of the properties of the objects in a domain. A goal is a conjunction of literals. A plan trace for a state-goal pair (s, g) is a sequence of actions that, when applied in s, produces a state that satisfies the goal g. We represent a plan trace by a series of preconditions, actions, and their effects. As an example, the plan trace to transport package1 from Location1 to Location2 could be as follows:

Precondition("package1", "package1 in Location1", "Location1", "Truck1", "Truck1 in Location1", "Location1")
Action("Load package1 Truck1 Location1")
Effect("package1", "package1 in Truck1", "Truck1", "Truck1", "Truck1 in Location1", "Location1")

Precondition("Truck1", "Truck1 in Location1", "Location1")
Action("Move Truck1 Location1 Location2")
Effect("Truck1", "Truck1 in Location2", "Location2")

Precondition("package1", "package1 in Truck1", "Truck1", "Truck1", "Truck1 in Location2", "Location2")
Action("Unload package1 Truck1 Location2")
Effect("package1", "package1 in Location2", "Location2", "Truck1", "Truck1 in Location2", "Location2")

In the previous example, the goal consists of a single atom, namely "package1 in Location2". If the goal also requires that Truck1 be in Location2, then the goal would be the conjunction of the two atoms "package1 in Location2" and "Truck1 in Location2".
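Since each plan trace is fed to WORD2VECTOR as a single sentence of planning-component words, the trace above must be flattened into one token sequence. The sketch below shows one plausible way to do so; the (precondition, action, effect) tuples and the flatten_trace helper are our own illustrative choices, not the paper's implementation.

```python
# Flatten the Section 3 plan trace into one "sentence" of planning-component
# words, the input format expected by the embedding step. The data structure
# and helper below are illustrative assumptions, not the paper's code.
from typing import List, Tuple

# (precondition components, action, effect components)
Step = Tuple[List[str], str, List[str]]

plan_trace: List[Step] = [
    (["package1", "package1 in Location1", "Location1",
      "Truck1", "Truck1 in Location1", "Location1"],
     "Load package1 Truck1 Location1",
     ["package1", "package1 in Truck1", "Truck1",
      "Truck1", "Truck1 in Location1", "Location1"]),
    (["Truck1", "Truck1 in Location1", "Location1"],
     "Move Truck1 Location1 Location2",
     ["Truck1", "Truck1 in Location2", "Location2"]),
    (["package1", "package1 in Truck1", "Truck1",
      "Truck1", "Truck1 in Location2", "Location2"],
     "Unload package1 Truck1 Location2",
     ["package1", "package1 in Location2", "Location2",
      "Truck1", "Truck1 in Location2", "Location2"]),
]

def flatten_trace(trace: List[Step]) -> List[str]:
    """Turn a trace into a flat word sequence: precondition words, action word, effect words."""
    sentence: List[str] = []
    for precondition, action, effect in trace:
        sentence.extend(precondition)
        sentence.append(action)  # the whole action, arguments included, is one word
        sentence.extend(effect)
    return sentence

# The resulting word list can be handed to the embedding step as one training sentence.
print(flatten_trace(plan_trace))
```

Keeping each multi-word atom or action as a single token is what lets the embedding treat, say, "Load package1 Truck1 Location1" and "package1 in Truck1" as co-occurring words rather than as bags of argument names.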