Semantics for Robotic Mapping, Perception and Interaction: a Survey
Recommended publications
Synergy of IoT and AI in Modern Society: The Robotics and Automation Case
Case Report, Robot Autom Eng J, Volume 3, Issue 5, September 2018. DOI: 10.19080/RAEJ.2018.03.555621. Spyros G. Tzafestas (corresponding author), National Technical University of Athens, Greece. Submission: August 15, 2018; Published: September 12, 2018.
Abstract: The Internet of Things (IoT) is a recent revolution of the Internet which is increasingly adopted with great success in business, industry, enterprise, transportation, healthcare, economic, and other sectors of modern information society. In particular, the IoT supported by artificial intelligence considerably enhances the success in a large repertory of every-day applications, with robotics, industrial, and automation systems applications among the dominant ones. Our aim in this article is to provide a global discussion of the main issues concerning the synergy of IoT and AI, including what is meant by the concept of 'IoT-AI synergy', the factors that drive the development of 'IoT enabled by AI', and the concepts of 'Industrial IoT' (IIoT), 'Internet of Robotic Things' (IoRT), and 'Industrial Automation IoT' (IAIoT). Starting with an overview of the IoT and AI fields, the article summarizes currently running and potential applications of great value for society; then a number of case studies are outlined and, finally, some IoT/AI-aided robotics and industrial applications…
Mapping: Why Mapping?
Why Mapping? Learning maps is one of the fundamental problems in mobile robotics. Maps allow robots to carry out their tasks efficiently and allow localization; successful robot systems rely on maps for localization, path planning, activity planning, etc.
The General Problem of Mapping: What does the environment look like? The problem of robotic mapping is that of acquiring a spatial model of a robot's environment. Formally, mapping means: given the sensor data d = {u_1, z_1, u_2, z_2, ..., u_n, z_n}, compute the most likely map m* = argmax_m P(m | d).
Mapping Challenges: A key challenge arises from the measurement errors, which are statistically dependent; errors accumulate over time and affect future measurements. Other challenges include the high dimensionality of the entities being mapped and the correspondence problem: determining whether sensor measurements taken at different points in time correspond to the same physical object in the world. The environment also changes over time (slower changes such as trees across seasons, faster changes such as people walking by), and robots must choose their way during mapping (the robotic exploration problem).
Factors that Influence Mapping: size (the larger the environment, the more difficult); perceptual ambiguity (the more frequently different places look alike, the more difficult); cycles (when cycles make robots return via different paths, the accumulated odometric error can be huge). The following discussion assumes mapping with known poses.
Mapping vs. Localization: Learning maps is a "chicken-and-egg" problem. First, there is a localization problem: errors accumulate easily in odometry, making the robot less certain about where it is; methods exist to correct the error given a perfect map. Second, there is a mapping problem.
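As a concrete illustration of the mapping-with-known-poses setting assumed in this snippet, the following Python sketch performs a minimal occupancy-grid update in the standard log-odds form. The grid size, log-odds increments, and the simple ray-stepping inverse sensor model are illustrative assumptions rather than anything specified in the slides.

    import numpy as np

    # Minimal occupancy-grid mapping with known poses (log-odds form).
    # All parameter values are illustrative.

    GRID_SIZE = (100, 100)      # cells
    L_OCC, L_FREE = 0.85, -0.4  # log-odds increments for hit / miss
    L_PRIOR = 0.0               # uniform prior

    def inverse_sensor_model(pose, hit_cell):
        """Return (cell, log-odds increment) pairs along one beam.

        Cells before the measured hit are marked free, the hit cell occupied.
        Uses simple integer line stepping for brevity.
        """
        x0, y0 = int(pose[0]), int(pose[1])
        x1, y1 = hit_cell
        n = max(abs(x1 - x0), abs(y1 - y0), 1)
        updates = []
        for i in range(n + 1):
            cx = int(round(x0 + (x1 - x0) * i / n))
            cy = int(round(y0 + (y1 - y0) * i / n))
            updates.append(((cx, cy), L_OCC if (cx, cy) == (x1, y1) else L_FREE))
        return updates

    def update_map(log_odds, pose, hits):
        """Fuse one scan (list of hit cells) into the map; the pose is assumed known."""
        for hit in hits:
            for (cx, cy), delta in inverse_sensor_model(pose, hit):
                if 0 <= cx < GRID_SIZE[0] and 0 <= cy < GRID_SIZE[1]:
                    log_odds[cx, cy] += delta
        return log_odds

    # Usage: two scans taken from known poses.
    m = np.full(GRID_SIZE, L_PRIOR)
    m = update_map(m, pose=(10, 10), hits=[(10, 30), (25, 12)])
    m = update_map(m, pose=(12, 10), hits=[(12, 30)])
    prob = 1.0 - 1.0 / (1.0 + np.exp(m))   # convert log-odds back to P(occupied)

Because the poses are treated as known, each scan only adds evidence to the map; the localization half of the chicken-and-egg problem described above is deliberately left out.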
United States Patent US 8,337,482 B2 (Wood, Jr.): System for Perfusion Management
US 8,337,482 B2, issued Dec. 25, 2012: System for Perfusion Management. Inventor: Lowell L. Wood, Jr., Livermore, CA (US). Assignee: The Invention Science Fund I, LLC, Bellevue, WA (US). Appl. No. 10/827,576, filed Apr. 19, 2004. Prior publication: US 2005/0234399 A1.
The Robot Economy: Here It Comes
Miguel Arduengo · Luis Sentis.
Abstract: Automation is not a new phenomenon, and questions about its effects have long followed its advances. More than a half-century ago, US President Lyndon B. Johnson established a national commission to examine the impact of technology on the economy, declaring that automation "can be the ally of our prosperity if we will just look ahead". In this paper, our premise is that we are at a technological inflection point in which robots are developing the capacity to greatly increase their cognitive and physical capabilities, thus raising questions on labor dynamics. With increasing levels of autonomy and human-robot interaction, intelligent robots could soon accomplish new human-like capabilities such as engaging in social activities. Therefore, an increase in automation and autonomy brings the question of robots directly participating in some economic activities as autonomous agents. In this paper, a technological framework describing a robot economy is outlined and the challenges it might represent in the current socio-economic scenario are pondered.
Fig. 1: Robots are rapidly developing capabilities that could one day allow them to participate as autonomous agents in economic activities, with the potential to change the current socio-economic scenario. Some interesting examples of such activities could eventually involve engaging in agreements with human counterparts, the purchase of goods and services, and the participation in highly unstructured production processes.
Keywords: Intelligent robots · Robot economy · Cloud Robotics · IoRT · Blockchain.
Simultaneous Localization and Mapping in Marine Environments
Chapter 8. Simultaneous Localization and Mapping in Marine Environments. Maurice F. Fallon, Hordur Johannsson, Michael Kaess, John Folkesson, Hunter McClelland, Brendan J. Englot, Franz S. Hover and John J. Leonard.
Abstract: Accurate navigation is a fundamental requirement for robotic systems, marine and terrestrial. For an intelligent autonomous system to interact effectively and safely with its environment, it needs to accurately perceive its surroundings. While traditional dead-reckoning filtering can achieve extremely high performance, the localization accuracy decays monotonically with distance traveled. Other approaches (such as external beacons) can help; nonetheless, the typical prerogative is to remain at a safe distance and to avoid engaging with the environment. In this chapter we discuss alternative approaches which utilize onboard sensors so that the robot can estimate the location of sensed objects and use these observations to improve its own navigation as well as its perception of the environment. This approach allows for meaningful interaction and autonomy. Three motivating autonomous underwater vehicle (AUV) applications are outlined herein. The first fuses external range sensing with relative sonar measurements. The second application localizes relative to a prior map so as to revisit a specific feature, while the third builds an accurate model of an underwater structure which is consistent and complete. In particular, we demonstrate that each approach can be abstracted to a core problem of incremental estimation within a sparse graph of the AUV's trajectory and the locations of features of interest, which can be updated and optimized in real time on board the AUV.
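The core abstraction named at the end of this abstract, estimation over a sparse graph of vehicle poses and sensed features, can be sketched with a toy example. The 2D positions, synthetic measurements, and batch least-squares solver below are illustrative assumptions, not the authors' actual AUV pipeline, which solves the problem incrementally on board.

    import numpy as np

    # Minimal sketch of graph-based estimation: robot poses are 2D positions,
    # factors are relative-position measurements (odometry and one loop closure).
    # With linear factors the MAP estimate is an ordinary least-squares problem.
    # All numbers below are synthetic.

    n_poses = 4                       # x0 .. x3, each a 2D position
    odometry = [                      # (i, j, measured displacement x_j - x_i)
        (0, 1, np.array([1.0, 0.0])),
        (1, 2, np.array([1.1, 0.1])),
        (2, 3, np.array([0.9, -0.1])),
    ]
    loop_closures = [(0, 3, np.array([3.0, 0.0]))]   # e.g. a revisited feature

    def solve_graph(n, factors):
        """Stack each relative measurement as two rows of a sparse linear system."""
        rows, b = [], []
        # Anchor x0 at the origin to remove the global translation ambiguity.
        a0 = np.zeros(2 * n); a0[0] = 1.0
        a1 = np.zeros(2 * n); a1[1] = 1.0
        rows += [a0, a1]; b += [0.0, 0.0]
        for i, j, z in factors:
            for d in range(2):                       # x and y components
                a = np.zeros(2 * n)
                a[2 * j + d] = 1.0
                a[2 * i + d] = -1.0
                rows.append(a)
                b.append(z[d])
        A = np.vstack(rows)
        x, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
        return x.reshape(n, 2)

    poses = solve_graph(n_poses, odometry + loop_closures)
    print(poses)   # drift in the odometry chain is spread out by the loop closure

The loop-closure factor plays the role of the sonar, range, or prior-map observations in the chapter: it ties distant parts of the trajectory together so that accumulated dead-reckoning drift can be corrected.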
2012 IEEE International Conference on Robotics and Automation (ICRA 2012)
2012 IEEE International Conference on Robotics and Automation (ICRA 2012), St. Paul, Minnesota, USA, 14-18 May 2012. Pages 1-798. IEEE Catalog Number: CFP12RAA-PRT. ISBN: 978-1-4673-1403-9.
Technical Program for Tuesday, May 15, 2012. Session TuA01, Meeting Room 1 (Mini-sota): Estimation and Control for UAVs (Regular Session). Chair: John Spletzer (Lehigh Univ.); Co-Chair: Paolo Robuffo Giordano (Max Planck Institute for Biological Cybernetics).
- TuA01.1 (08:30-08:45) State Estimation for Aggressive Flight in GPS-Denied Environments Using Onboard Sensing, pp. 1-8. Adam Bry, Abraham Bachrach, Nicholas Roy (Massachusetts Institute of Technology).
- TuA01.2 (08:45-09:00) Autonomous Indoor 3D Exploration with a Micro-Aerial Vehicle, pp. 9-15. Shaojie Shen, Nathan Michael, Vijay Kumar (Univ. of Pennsylvania).
- TuA01.3 (09:00-09:15) Wind Field Estimation for Autonomous Dynamic Soaring, pp. 16-22. Jack W. Langelaan (Penn State Univ.); John Spletzer, Corey Montella, Joachim Grenestedt (Lehigh Univ.).
- TuA01.4 (09:15-09:30) Decentralized Formation Control with Variable Shapes for Aerial Robots, pp. 23-30. Matthew Turpin, Nathan Michael, Vijay Kumar (Univ. of Pennsylvania).
- TuA01.5 (09:30-09:45) Versatile Distributed Pose Estimation and Sensor Self-Calibration for an Autonomous MAV, pp. 31-38. Stephan Weiss, Markus W. Achtelik, Margarita Chli, Roland Siegwart (ETH Zurich).
- TuA01.6 (09:45-10:00) Probabilistic Velocity Estimation for Autonomous Miniature Airships Using Thermal Air Flow Sensors.
Distributed Robotic Mapping
DISTRIBUTED COLLABORATIVE ROBOTIC MAPPING, by David Barnhard (under the direction of Dr. Walter D. Potter).
Abstract: The utilization of multiple robots to map an unknown environment is a challenging problem within Artificial Intelligence. This thesis first presents previous efforts to develop robotic platforms that have demonstrated incremental progress in coordination for mapping and target acquisition tasks. Next, we present a rewards-based method that could increase the coordination ability of multiple robots in a distributed mapping task. The method presented is a reinforcement-based emergent behavior approach that rewards individual robots for performing desired tasks. It is expected that the use of a reward and taxation system will result in individual robots effectively coordinating their efforts to complete a distributed mapping task.
Index words: Robotics, Artificial Intelligence, Distributed Processing, Collaborative Robotics.
David Barnhard, B.A. Cognitive Science, University of Georgia, 2001. A thesis submitted to the Graduate Faculty of The University of Georgia in partial fulfillment of the requirements for the degree Master of Science, Athens, Georgia, 2005. © 2005 David Barnhard, all rights reserved. Major Professor: Dr. Walter D. Potter; Committee: Dr. Khaled Rasheed, Dr. Suchendra Bhandarkar. Electronic version approved: Maureen Grasso, Dean of the Graduate School, The University of Georgia, August 2005.
Dedication: I dedicate this thesis to my lovely betrothed, Rachel. Every day I wake to hope and wonder because of her. She is my strength and happiness every moment that I am alive. She has continually but gently kept pushing me to get this project done, and I know that without her careful insistence this never would have been completed.
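The abstract only names the reward-and-taxation idea, so the sketch below is a hedged illustration of how such a rule could look: robots are credited for candidate sensing regions that cover unmapped cells and taxed for re-covering cells already in the shared map. The reward values, tax rule, and greedy selection are assumptions made for illustration, not the thesis' actual method.

    # Hedged sketch of a reward/taxation style coordination rule for
    # multi-robot mapping. All values and rules are illustrative assumptions.

    REWARD_NEW_CELL = 1.0     # credit for mapping an unexplored cell
    TAX_REVISIT = 0.5         # charge for re-mapping a cell another robot covered

    def expected_net_reward(candidate_cells, global_map):
        """Score a candidate sensing region: new-cell reward minus revisit tax."""
        score = 0.0
        for cell in candidate_cells:
            if global_map.get(cell, "unknown") == "unknown":
                score += REWARD_NEW_CELL
            else:
                score -= TAX_REVISIT
        return score

    def choose_target(robot_candidates, global_map):
        """Each robot greedily picks the candidate region with the best net reward."""
        return max(robot_candidates,
                   key=lambda cells: expected_net_reward(cells, global_map))

    # Usage: one robot, a shared map, and two candidate regions (cells as (x, y)).
    global_map = {(0, 0): "free", (0, 1): "free"}           # already mapped
    robot_a_options = [[(0, 0), (0, 1)], [(2, 2), (2, 3)]]  # old area vs. new area
    print(choose_target(robot_a_options, global_map))       # -> [(2, 2), (2, 3)]

Under such a rule, coordination emerges indirectly: once one robot has covered an area, the tax makes that area unattractive to the others, pushing the team toward complementary coverage.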
Consumer Cloud Robotics and the Fair Information Practice Principles
Andrew Proia, Drew Simshaw & Dr. Kris Hauser.
Contents: Introduction; I. Cloud Robotics & Tomorrow's Domestic Robots; II. The Backbone of Consumer Privacy Regulations and Best Practices: The Fair Information Practice Principles (A. A Look at the Fair Information Practice Principles; B. The Consumer Privacy Bill of Rights: Scope, Individual Control, Transparency, Respect for Context, Security, Access and Accuracy, Focused Collection, Accountability; C. The 2012 Federal Trade Commission Privacy Framework: Scope, Privacy by Design, Simplified Consumer Choice, Transparency); III. When the Fair Information Practice Principles Meet Cloud Robotics: Privacy in a Home or Domestic Environment (A. The "Sensitivity" of Data Collected by Cloud-Enabled Robots in a Domestic Environment; B. The Context of a Cloud-Enabled Robot Transaction: Data Collection, Use, and Retention Limitations; C. Adequate Disclosures and Meaningful Choices Between a Cloud-Enabled Robot and the User, including when and how meaningful choices are to be provided; D. Transparency & Privacy Notices; E. Security; F. Access & Accuracy; G. Accountability); IV. Approaching the Privacy Challenges Inherent in Consumer Cloud Robotics; Conclusion.
Introduction: At the 2011 Google I/O Conference, Google's Ryan Hickman and Damon Kohler, and Willow Garage's Ken Conley and Brian Gerkey, took the stage to give a rather intriguing presentation…
Benchmarks for Cloud Robotics, by Arjun Kumar Singh (dissertation)
Benchmarks for Cloud Robotics, by Arjun Kumar Singh. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Pieter Abbeel (Chair), Professor Ken Goldberg, Professor Bruno Olshausen. Summer 2016. Copyright 2016 by Arjun Kumar Singh.
Abstract: Several areas of computer science, including computer vision and natural language processing, have witnessed rapid advances in performance, due in part to shared datasets and benchmarks. In a robotics setting, benchmarking is challenging due to the amount of variation in common applications: researchers can use different robots, different objects, different algorithms, different tasks, and different environments. Cloud robotics, in which a robot accesses computation and data over a network, may help address the challenge of benchmarking in robotics. By standardizing the interfaces in which robotic systems access and store data, we can define a common set of tasks and compare the performance of various systems. In this dissertation, we examine two problem settings that are well served by cloud robotics. We also discuss two datasets that facilitate benchmarking of several problems in robotics. Finally, we discuss a framework for defining and using cloud-based robotic services. The first problem setting is object instance recognition. We present an instance recognition system which uses a library of high-fidelity object models of textured household objects. The system can handle occlusions, illumination changes, multiple objects, and multiple instances of the same object.
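The abstract argues that standardizing the interfaces through which robotic systems access and store data makes benchmarking possible; the sketch below shows one way such a standardized task interface could look. The class names, methods, and accuracy-based scoring rule are assumptions for illustration, not the dissertation's actual benchmark API or datasets.

    from abc import ABC, abstractmethod
    from typing import Dict, List

    class RecognitionBenchmark(ABC):
        """A common task definition: map each test image ID to a predicted label."""

        @abstractmethod
        def test_image_ids(self) -> List[str]:
            ...

        @abstractmethod
        def ground_truth(self, image_id: str) -> str:
            ...

        def score(self, predictions: Dict[str, str]) -> float:
            """Fraction of test images whose predicted label matches ground truth."""
            ids = self.test_image_ids()
            correct = sum(1 for i in ids if predictions.get(i) == self.ground_truth(i))
            return correct / len(ids)

    class ToyHouseholdObjects(RecognitionBenchmark):
        """Tiny in-memory stand-in for a shared dataset of textured household objects."""
        _labels = {"img_001": "mug", "img_002": "cereal_box", "img_003": "mug"}

        def test_image_ids(self) -> List[str]:
            return list(self._labels)

        def ground_truth(self, image_id: str) -> str:
            return self._labels[image_id]

    # Any recognition system, local or cloud-based, is compared through the same call:
    benchmark = ToyHouseholdObjects()
    predictions = {"img_001": "mug", "img_002": "bowl", "img_003": "mug"}
    print(benchmark.score(predictions))   # -> 0.666...

Fixing the task definition and scoring in one place is what lets very different systems, onboard or cloud-backed, be compared on equal footing.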
Creativity in Humans, Robots, Humbots
Todd Lubart, Dario Esposito, Alla Gubenko, and Claude Houssemand. Creativity: Theories – Research – Applications, Vol. 8, Issue 1, 2021. ISSN 2354-0036. DOI: 10.2478/ctra-2021-0003.
Affiliations: 1. LaPEA, Université de Paris and Univ. Gustav Eiffel, F-92100 Boulogne-Billancourt, France; 2. HSE University, Moscow, Russia; 3. Department of Education and Social Work, Institute for Lifelong Learning and Guidance, University of Luxembourg, Luxembourg.
Abstract: This paper examines three ways that robots can interface with creativity. In particular, social robots, which are designed to interact with humans, are examined. In the first mode, human creativity can be supported by social robots. In a second mode, social robots can be creative agents and humans serve to support the robot's productions. In the third and final mode, there is complementary action in creative work, which may be collaborative co-creation or a division of labor in creative projects. Illustrative examples are provided and key issues for further discussion are raised.
Keywords: creation process, human-robot interaction, human-robot co-creativity, embodied creativity, social robots.
Note: The article was prepared in the framework of a research grant funded by the Ministry of Science and Higher Education of the Russian Federation (grant ID: 075-15-2020-928). Received: June 20, 2021; received in revised form: July 9, 2021; accepted: July 9, 2021. Corresponding author: Todd Lubart.
Introduction: Robots are agents equipped with sensors, actuators and effectors which enable them to move and perform manipulative tasks (Russell & Norvig, 2010).
United States Patent US 8,092,549 B2 (Hillis et al.): Ciliated Stent-Like-System
US 8,092,549 B2, issued Jan. 10, 2012: Ciliated Stent-Like-System. Inventors: Hillis et al., including Muriel Y. Ishikawa (Livermore, CA), Clarence T. Tegreene (Bellevue, WA), Richa Wilson (San Francisco, CA), Victoria Y. H. Wood (Livermore, CA), and Lowell L. Wood, Jr. (Livermore, CA). Assignee: The Invention Science Fund I, LLC, Bellevue, WA (US). Appl. No. 10/949,186.
Cloud-Based Methods and Architectures for Robot Grasping, by Benjamin Robert Kehoe (dissertation)
Cloud-Based Methods and Architectures for Robot Grasping, by Benjamin Robert Kehoe. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Engineering - Mechanical Engineering in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Ken Goldberg (Co-chair), Professor J. Karl Hedrick (Co-chair), Associate Professor Pieter Abbeel, Associate Professor Francesco Borrelli. Fall 2014. Copyright 2014 by Benjamin Robert Kehoe.
Abstract: The Cloud has the potential to enhance a broad range of robotics and automation systems. Cloud Robotics and Automation systems can be broadly defined as follows: any robotic or automation system that relies on either data or code from a network to support its operation, i.e., where not all sensing, computation, and memory is integrated into a single standalone system. We identify four potential benefits of Cloud Robotics and Automation: 1) Big Data: access to remote libraries of images, maps, trajectories, and object data; 2) Cloud Computing: access to parallel grid computing on demand for statistical analysis, learning, and motion planning; 3) Collective Robot Learning: robots sharing trajectories, control policies, and outcomes; and 4) Human Computation: using crowdsourcing to access remote human expertise for analyzing images, classification, learning, and error recovery. We present four Cloud Robotics and Automation systems in this dissertation.
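To make the definition quoted above concrete ("relies on either data or code from a network to support its operation"), here is a hedged sketch of a robot-side client that asks a remote service for a grasp and falls back to onboard computation when the network is unavailable. The endpoint URL, request format, and response fields are hypothetical and not an API from the dissertation.

    import json
    import urllib.request

    CLOUD_ENDPOINT = "http://example.com/grasp-analysis"   # hypothetical service

    def local_fallback_grasp(object_points):
        """Trivial onboard heuristic: grasp at the centroid of the sensed points."""
        n = len(object_points)
        cx = sum(p[0] for p in object_points) / n
        cy = sum(p[1] for p in object_points) / n
        cz = sum(p[2] for p in object_points) / n
        return {"grasp_center": [cx, cy, cz], "source": "onboard"}

    def request_grasp(object_points, timeout_s=1.0):
        """Prefer the cloud answer; fall back to local computation if unreachable."""
        payload = json.dumps({"points": object_points}).encode("utf-8")
        req = urllib.request.Request(CLOUD_ENDPOINT, data=payload,
                                     headers={"Content-Type": "application/json"})
        try:
            with urllib.request.urlopen(req, timeout=timeout_s) as resp:
                return json.loads(resp.read())   # e.g. {"grasp_center": ..., "source": "cloud"}
        except OSError:
            return local_fallback_grasp(object_points)

    # Usage with a synthetic 4-point "scan" of an object.
    points = [[0.1, 0.0, 0.3], [0.12, 0.01, 0.31], [0.09, -0.01, 0.29], [0.11, 0.0, 0.3]]
    print(request_grasp(points))

The split between a network-backed path and a standalone fallback is exactly the boundary the definition draws: the system still works alone, but gains capability when the Cloud is reachable.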