ICT Call 7 ROBOHOW.COG FP7-ICT-288533

Deliverable D1.2: Knowledge representation, reasoning, and the transformation of robot skills into executable programs

January 31st, 2013

Project acronym: ROBOHOW.COG
Project full title: Web-enabled and Experience-based Cognitive Robots that Learn Complex Everyday Manipulation Tasks
Work Package: WP 1
Document number: D1.2
Document title: Knowledge representation, reasoning, and the transformation of robot skills into executable programs
Version: 1.0
Delivery date: January 31st, 2013
Nature: Report
Dissemination level: Restricted (RE)
Authors: Moritz Tenorth (UNIHB), Michael Beetz (UNIHB)

The research leading to these results has received funding from the European Union Seventh Framework Programme FP7/2007-2013 under grant agreement n° 288533 ROBOHOW.COG.

Contents

1 System Aspects of Knowledge Representation and Reasoning for Robots
2 Representations for Robot Knowledge

Summary

In this deliverable, we discuss different issues related to the representation of a robot's knowledge and the integration of this knowledge into task execution. The presented methods form the backbone of the representation and reasoning infrastructure in RoboHow, which serves as a kind of "semantic integration layer" that integrates information from the various information sources (visual observation of human activities, object perception, Web instructions, kinesthetic teaching, simulation games, etc.), which come in multiple different modalities.

This deliverable consists of two chapters, each of which is constituted by a journal paper. The first chapter discusses the system aspects: Which kinds of representation formalisms are suitable for robot knowledge processing? How can the formal representations be integrated with lower-level information? Which capabilities does the addition of formally represented knowledge enable? The second chapter focuses on the representational aspects, i.e. how knowledge about actions, objects, the environment, the robot itself, etc. can be described inside the system presented in the first chapter, using the example of complementing underspecified instructions and making them executable.

Chapter 1
System Aspects of Knowledge Representation and Reasoning for Robots

This chapter discusses requirements on knowledge representation and reasoning systems for robots. As robots have to reason about problems in the physical world based on perceived sensor information, the problems to be addressed differ from those of classical, purely symbolic, disembodied knowledge bases. We present different aspects of the KnowRob knowledge processing system that serves as the basis for knowledge-related tasks in RoboHow, including the core ontology, the different inference mechanisms, the integration with the robot's control system and perception methods, tools for acquiring knowledge from online sources and observation, and methods for exchanging information with other robots.

A core concept of KnowRob is the virtual knowledge base, which can be defined over different kinds of (possibly external) knowledge sources. To the reasoning system, virtual knowledge bases provide a query interface in terms of Prolog predicates which are, however, internally computed using various inference methods or even evaluated using queries to the robot's perception system. This allows a very elegant integration of different kinds of information into a coherent semantic representation.
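To make this concrete, the following is a minimal sketch in Prolog of how such a query interface can hide where an answer comes from. The predicate names (storage_location/2, current_pose/2, perception_query/2) are hypothetical illustrations, not KnowRob's actual vocabulary: one predicate is answered from explicitly asserted facts, the other computes its answer on demand through a stub that stands in for a call to the robot's perception system.

```prolog
% Minimal sketch of a "virtual knowledge base" in the spirit of KnowRob.
% All predicate names are hypothetical and only illustrate the idea;
% they are not KnowRob's API.

% Explicitly asserted facts:
storage_location(cereal_box, pantry_shelf).
storage_location(milk_carton, fridge).

% A "computed" predicate: object poses are not stored as facts but are
% obtained at query time from the perception system (stubbed out below).
current_pose(Obj, Pose) :-
    perception_query(Obj, Pose).

% Stub standing in for the bridge to the robot's perception system,
% so that the sketch is self-contained and runnable:
perception_query(cereal_box, pose(1.20, 0.45, 0.90)).

% Example query a control program might pose:
% ?- storage_location(cereal_box, Loc), current_pose(cereal_box, Pose).
% Loc = pantry_shelf, Pose = pose(1.2, 0.45, 0.9).
```

From the caller's perspective the two predicates look identical; whether a binding is retrieved from stored facts or computed at query time stays hidden behind the Prolog interface, which is the core idea behind the virtual knowledge bases.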
The knowledge provided by KnowRob can be accessed by the robot's executive during task execution and is used for taking control decisions and inferring suitable action parameters. This tight coupling between knowledge representation and task execution is important for achieving flexible and adaptable behavior and for translating high-level information into executable robot control programs.

This chapter is based on the following publication that has been published in the International Journal of Robotics Research:

Tenorth, M. and Beetz, M. (2013). KnowRob – A Knowledge Processing Infrastructure for Cognition-enabled Robots. International Journal of Robotics Research (IJRR), 32(5):566–590.

Article

KnowRob: A knowledge processing infrastructure for cognition-enabled robots

Moritz Tenorth and Michael Beetz
Universität Bremen, Bremen, Germany

The International Journal of Robotics Research, 32(5) 566–590
© The Author(s) 2013
Reprints and permissions: sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/0278364913481635
ijr.sagepub.com

Corresponding author: Moritz Tenorth, Universität Bremen, Am Fallturm 1, 28359 Bremen, Germany. Email: [email protected]

Abstract

Autonomous service robots will have to understand vaguely described tasks, such as "set the table" or "clean up". Performing such tasks as intended requires robots to fully, precisely, and appropriately parameterize their low-level control programs. We propose knowledge processing as a computational resource for enabling robots to bridge the gap between vague task descriptions and the detailed information needed to actually perform those tasks in the intended way. In this article, we introduce the KNOWROB knowledge processing system that is specifically designed to provide autonomous robots with the knowledge needed for performing everyday manipulation tasks. The system allows the realization of "virtual knowledge bases": collections of knowledge pieces that are not explicitly represented but computed on demand from the robot's internal data structures, its perception system, or external sources of information. This article gives an overview of the different kinds of knowledge, the different inference mechanisms, and interfaces for acquiring knowledge from external sources, such as the robot's perception system, observations of human activities, Web sites on the Internet, as well as Web-based knowledge bases for information exchange between robots. We evaluate the system's scalability and present different integrated experiments that show its versatility and comprehensiveness.

Keywords
knowledge representation, knowledge bases for robots, grounded reasoning methods

1. Introduction

Future service robots are expected to be robot assistants, companions, and (co-)workers (Bicchi et al., 2007; Bischoff and Guhl, 2009; Hollerbach et al., 2009) that are to perform tasks such as setting the table, cleaning up, and making pancakes. In order to understand and execute informal commands such as "set the table", they need to infer the missing pieces of information that are not spelled out explicitly in these vague instructions. To set a table, for instance, a robot has to determine which items are needed for the meal, where in the kitchen they can be found and where they shall be placed.

We expect that this ability to infer what is meant from what is described will be a ubiquitous prerequisite for intuitively taskable future robotic agents. Most instructions given by humans are incomplete in some sense because humans expect the communication partner to have some amount of commonsense knowledge. Competently detecting these information gaps and deciding on how to obtain the missing information and how to make use of it in the task execution context will require robotic agents to have substantial bodies of knowledge and powerful knowledge processing mechanisms.

To include these reasoning techniques into the task execution, we propose to implement the robot's control programs in a knowledge-enabled manner. This means that decisions are formulated as inference tasks which can be answered by the robot's knowledge base. Figure 1 gives an example of a routine for fetching objects that is compactly specified as: "Find the object at its most likely storage location. If the object is inside a container, open the container in the appropriate manner using the articulation model of the container object". When writing a plan, a programmer usually knows which information is required to take a decision and can therefore decide on the structure of these queries, like the query for the most likely object locations in the previous example. While the structure of the queries and the results is known, the actual result set will be determined based on the robot's knowledge at execution time.
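As an illustration of how such a control decision can be phrased as a query, consider the following self-contained Prolog sketch; the predicates and the example articulation models are hypothetical stand-ins rather than the actual KNOWROB interface. The fetch routine asks for the most likely storage location of an object and, only if that location is a container, for an articulation model describing how to open it.

```prolog
% Hypothetical sketch of the query behind the object-fetching routine
% described above; predicate names are illustrative, not KnowRob's API.

% Likelihoods of storage locations, e.g. learned from observations:
storage_probability(macaroni, pantry_shelf, 0.7).
storage_probability(macaroni, drawer_3,     0.2).
storage_probability(milk,     fridge,       0.9).

% Which locations are containers, and how they articulate:
container(drawer_3).
container(fridge).
articulation_model(drawer_3, prismatic(axis(1,0,0), max_opening(0.4))).
articulation_model(fridge,   revolute(axis(0,0,1), max_angle(1.6))).

% Most likely storage location = the one with maximal probability:
most_likely_location(Obj, Loc) :-
    storage_probability(Obj, Loc, P),
    \+ (storage_probability(Obj, _, P2), P2 > P).

% The decision a fetch routine would query: where to look, and how to
% open the location if it happens to be a container.
fetch_decision(Obj, Loc, OpenWith) :-
    most_likely_location(Obj, Loc),
    (   container(Loc)
    ->  articulation_model(Loc, OpenWith)
    ;   OpenWith = none
    ).

% ?- fetch_decision(milk, Loc, Open).
% Loc = fridge, Open = revolute(axis(0,0,1), max_angle(1.6)).
```

The query's structure is fixed when the plan is written, but the bindings for Loc and Open depend entirely on what the knowledge base contains at execution time.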
Fig. 1. Example of a knowledge-enabled robot control program (left). Control decisions, e.g. where to search for an object, are formulated as inference tasks to be answered by the robot's knowledge base.

Using queries to the knowledge base as interface allows the creation of robot plans that automatically adapt when the robot's knowledge changes. If the robot in the example explores its environment and detects a cupboard with rice and spaghetti, this information shall be considered from that time on when computing where to search for, e.g., macaroni pasta. Additional knowledge can also lead to the selection of different inference techniques: searching for objects at places where similar objects are stored (Schuster et al., 2012) requires information about other objects …

… realize knowledge-enabled control programs. KNOWROB is one of the most comprehensive knowledge processing systems for robots to date, providing the most diverse set of knowledge types, inference methods, and integration with the control program (see Section 11 for a comparison with related systems). We believe that it is this combination of a variety of methods in a coherent framework that brings us closer to equipping robots with all of the knowledge they actually need for accomplishing complex realistic tasks. We …
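The adaptation described above, where newly perceived facts change where the same query sends the robot, can be pictured with another small Prolog fragment; the predicates are again hypothetical stand-ins rather than KNOWROB's actual interface. Asserting facts about a newly detected cupboard changes the answers to an unchanged search query without modifying the plan.

```prolog
% Hypothetical sketch: the same query yields different answers as the
% knowledge base grows; predicate names are illustrative, not KnowRob's API.
:- dynamic stored_at/2.

% Initially known storage facts:
stored_at(rice,      pantry_shelf).
stored_at(spaghetti, pantry_shelf).

% Similarity knowledge (in a real system this would come from an ontology):
similar(macaroni, spaghetti).

% Search heuristic: look for an object where it, or a similar object, is stored.
search_location(Obj, Loc) :- stored_at(Obj, Loc).
search_location(Obj, Loc) :- similar(Obj, Sim), stored_at(Sim, Loc).

% ?- search_location(macaroni, Loc).
% Loc = pantry_shelf.

% After the robot explores the environment and perceives rice and spaghetti
% in a previously unknown cupboard, the new facts are asserted:
% ?- assertz(stored_at(rice, cupboard_7)),
%    assertz(stored_at(spaghetti, cupboard_7)).

% The unchanged query now also proposes the newly discovered location:
% ?- search_location(macaroni, Loc).
% Loc = pantry_shelf ;
% Loc = cupboard_7.
```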
