Navigation of Autonomous Light Vehicles Using an Optimal Trajectory Planning Algorithm
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
Control in Robotics
Mark W. Spong and Masayuki Fujita. Introduction: The interplay between robotics and control theory has a rich history extending back over half a century. We begin this section of the report by briefly reviewing the history of this interplay, focusing on fundamentals: how control theory has enabled solutions to fundamental problems in robotics and how problems in robotics have motivated the development of new control theory. We focus primarily on the early years, as the importance of new results often takes considerable time to be fully appreciated and to have an impact on practical applications. Progress in robotics has been especially rapid in the last decade or two, and the future continues to look bright. Robotics was dominated early on by the machine tool industry. As such, the early philosophy in the design of robots was to design mechanisms to be as stiff as possible, with each axis (joint) controlled independently as a single-input/single-output (SISO) linear system. Point-to-point control enabled simple tasks such as materials transfer and spot welding. Continuous-path tracking enabled more complex tasks such as arc welding and spray painting. Sensing of the external environment was limited or nonexistent. Consideration of more advanced tasks such as assembly required regulation of contact forces and moments. Higher speed operation and higher payload-to-weight ratios required an increased understanding of the complex, interconnected nonlinear dynamics of robots. This requirement motivated the development of new theoretical results in nonlinear, robust, and adaptive control, which in turn enabled more sophisticated applications. Today, robot control systems are highly advanced with integrated force and vision systems.
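The independent-joint, SISO view described above is easy to make concrete. The following is a minimal sketch (not from the report) of a per-joint PD control loop; the gains, time step, and two-joint double-integrator model are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: each joint treated as an independent SISO system with PD control.
# Gains, time step, and the two-joint setup are illustrative, not from the report.
KP = np.array([40.0, 25.0])   # proportional gains per joint
KD = np.array([4.0, 2.5])     # derivative gains per joint
DT = 0.001                    # control period [s]

def pd_joint_torques(q, dq, q_des):
    """Compute one torque per joint from that joint's own error only (SISO view)."""
    return KP * (q_des - q) - KD * dq

# Toy double-integrator joint model, ignoring coupling and gravity for illustration.
q = np.zeros(2)                     # joint positions [rad]
dq = np.zeros(2)                    # joint velocities [rad/s]
q_des = np.array([0.5, -0.3])       # target joint positions [rad]

for _ in range(5000):
    tau = pd_joint_torques(q, dq, q_des)
    ddq = tau                       # unit inertia assumed for the sketch
    dq += ddq * DT
    q += dq * DT

print("final joint angles:", q)
```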
Augmented Deep Learning Techniques for Robotic State Estimation
AN ABSTRACT OF THE DISSERTATION OF Austin Nicolai for the degree of Doctor of Philosophy in Robotics, presented on September 11, 2019. Title: Augmented Deep Learning Techniques for Robotic State Estimation. Abstract approved: Geoffrey A. Hollinger. While robotic systems may have once been relegated to structured environments and automation style tasks, in recent years these boundaries have begun to erode. As robots begin to operate in largely unstructured environments, it becomes more difficult for them to effectively interpret their surroundings. As sensor technology improves, the amount of data these robots must utilize can quickly become intractable. Additional challenges include environmental noise, dynamic obstacles, and inherent sensor nonlinearities. Deep learning techniques have emerged as a way to efficiently deal with these challenges. While end-to-end deep learning can be convenient, challenges such as validation and training requirements can be prohibitive to its use. In order to address these issues, we propose augmenting the power of deep learning techniques with tools such as optimization methods, physics based models, and human expertise. In this work, we present a principled framework for approaching a problem that allows a user to identify the types of augmentation methods and deep learning techniques best suited to their problem. To validate our framework, we consider three different domains: LIDAR based odometry estimation, hybrid soft robotic control, and sonar based underwater mapping. First, we investigate LIDAR based odometry estimation, which can be characterized with both high data precision and availability, ideal for augmenting with optimization methods. We propose using denoising autoencoders (DAEs) to address the challenges presented by modern LIDARs.
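As a rough illustration of the denoising-autoencoder idea mentioned in the abstract, here is a minimal PyTorch sketch; the layer sizes, noise model, and the use of fixed-length range vectors as input are assumptions made for the example, not details from the dissertation.

```python
import torch
import torch.nn as nn

# Minimal denoising autoencoder sketch for fixed-length range scans.
# Layer sizes, noise level, and input format are illustrative assumptions.
class DenoisingAE(nn.Module):
    def __init__(self, n_beams=360, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_beams, 128), nn.ReLU(),
            nn.Linear(128, latent), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 128), nn.ReLU(),
            nn.Linear(128, n_beams),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.rand(64, 360)                      # stand-in for clean scans
noisy = clean + 0.05 * torch.randn_like(clean)   # corrupt the input

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)          # reconstruct clean from noisy
    loss.backward()
    optimizer.step()
```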
Introduction to ROS – Robot Operating System –
Daniel Serrano, Department of Intelligent Systems, ASCAMM Technology Center, Parc Tecnològic del Vallès, Av. Universitat Autònoma 23, 08290 Cerdanyola del Vallès, SPAIN, [email protected]. ABSTRACT: This paper gives an introduction to the open-source Robot Operating System (ROS). ROS has been widely adopted by the research robotics community as it provides a powerful software framework to develop autonomous unmanned systems. In this paper, we discuss some of the components that are available on the ROS repository. 1.0 INTRODUCTION: The Robot Operating System (ROS) [1] is an open source robot operating system. It provides libraries and tools to help software developers create robot applications. It offers hardware abstraction, device drivers, libraries, visualizers, message-passing, package management, and more. ROS is similar in some respects to 'robot frameworks' such as Player, YARP, Orocos, CARMEN, Orca, MOOS, and Microsoft Robotics Studio. The primary goal of ROS is to support code reuse in robotics research and development. It has been quickly adopted by many robotics research institutions and companies as their standard development framework. The list of robots using ROS is quite extensive and includes platforms from diverse domains, ranging from aerial and ground robots to humanoids and underwater vehicles. To name some, the humanoid robots PR-2 and REEM-C, ground robots such as the ClearPath platforms, and aerial platforms such as the ones from Ascending Technologies are fully developed in ROS. Furthermore, an extensive list of popular sensors for robots is already supported in ROS. To name some, sensors such as Inertial Measurement Units, GPS receivers, cameras and lasers can already be accessed from the drivers widely available on the ROS system.
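To make the message-passing model concrete, here is a minimal rospy publisher node sketch (not taken from the paper above); the topic name, message type, and publishing rate are arbitrary illustrative choices.

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String

# Minimal ROS 1 publisher node; topic name and rate are illustrative.
def talker():
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(10)  # 10 Hz
    while not rospy.is_shutdown():
        msg = "hello from ROS at %s" % rospy.get_time()
        pub.publish(msg)   # rospy fills the String message's data field
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```

Running this node and echoing the topic (e.g. with `rostopic echo /chatter`) is a typical first check that the message-passing layer is working.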
Robot Operating System - The Complete Reference (Volume 4), Part of the Studies in Computational Intelligence Book Series (SCI, Volume 831)
Book, several authors, CISTER-TR-190702, 2019. Part of the Studies in Computational Intelligence book series (SCI, volume 831); editor: Anis Koubaa; series editor: Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland. CISTER Research Centre, Rua Dr. António Bernardino de Almeida 431, 4200-072 Porto, Portugal; Tel.: +351.22.8340509, Fax: +351.22.8321159; www.cister-labs.pt. Abstract: This is the fourth volume of the successful series Robot Operating System: The Complete Reference, providing a comprehensive overview of the Robot Operating System (ROS), which is currently the main development framework for robotics applications, as well as the latest trends and contributed systems. The book is divided into four parts: Part 1 features two papers on navigation, discussing SLAM and path planning. Part 2 focuses on the integration of ROS into quadcopters and their control. Part 3 then discusses two emerging applications for robotics: cloud robotics and video stabilization. Part 4 presents tools developed for ROS; the first is a practical alternative to the roslaunch system, and the second is related to penetration testing. This book is a valuable resource for ROS users wanting to learn more about ROS capabilities and features. The series "Studies in Computational Intelligence" (SCI) publishes new developments and advances in the various areas of computational intelligence, quickly and with a high quality. © 2019 CISTER Research Centre.
Curriculum Reinforcement Learning for Goal-Oriented Robot Control
MEng Individual Project, Imperial College London, Department of Computing. CuRL: Curriculum Reinforcement Learning for Goal-Oriented Robot Control. Author: Harry Uglow. Supervisor: Dr. Ed Johns. Second Marker: Dr. Marc Deisenroth. June 17, 2019. Abstract: Deep Reinforcement Learning has risen to prominence over the last few years as a field making strong progress tackling continuous control problems, in particular robotic control, which has numerous potential applications in industry. However, Deep RL algorithms alone struggle on complex robotic control tasks where obstacles need to be avoided in order to complete a task. We present Curriculum Reinforcement Learning (CuRL) as a method to help solve these complex tasks by guided training on a curriculum of simpler tasks. We train in simulation, manipulating a task environment in ways not possible in the real world to create that curriculum, and use domain randomisation in an attempt to train pose estimators and end-to-end controllers for sim-to-real transfer. To the best of our knowledge this work represents the first example of reinforcement learning with a curriculum of simpler tasks on robotic control problems. Acknowledgements: I would like to thank Dr. Ed Johns for his advice and support as supervisor; our discussions helped inform many of the project's key decisions. I would also like to thank my parents, Mike and Lyndsey Uglow, whose love and support has made the last four years possible.
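The core curriculum idea, training on progressively harder variants of the target task, can be sketched as a simple loop. The sketch below is a generic illustration rather than the thesis's CuRL algorithm; the difficulty schedule, success threshold, and the injected environment/training/evaluation callables are assumptions, with toy stand-ins so it runs end to end.

```python
# Generic curriculum-training sketch (not the thesis's CuRL algorithm).
# make_env, train_for, and evaluate are injected so the loop is self-contained;
# the lambdas at the bottom are toy stand-ins.

def curriculum_train(policy, make_env, train_for, evaluate,
                     difficulties=(0.1, 0.3, 0.6, 1.0),
                     success_threshold=0.8, max_rounds=50):
    for d in difficulties:                  # simpler task variants first
        env = make_env(d)                   # e.g. fewer or smaller obstacles
        for _ in range(max_rounds):
            train_for(policy, env)
            if evaluate(policy, env) >= success_threshold:
                break                       # advance to the next curriculum stage
    return policy

# Toy stand-ins so the sketch executes.
policy = {"skill": 0.0}
curriculum_train(
    policy,
    make_env=lambda d: {"difficulty": d},
    train_for=lambda p, e: p.update(skill=p["skill"] + 0.05),
    evaluate=lambda p, e: min(1.0, p["skill"] / max(e["difficulty"], 1e-6)),
)
print(policy)
```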
Final Program of CCC2020
The 39th Chinese Control Conference (CCC2020): Final Program. Sponsoring organizations: Technical Committee on Control Theory, Chinese Association of Automation; Chinese Association of Automation; Systems Engineering Society of China. Host organization: Northeastern University, China. July 27-29, 2020, Shenyang, China. Proceedings of CCC2020: IEEE Catalog Number CFP2040A-USB, ISBN 978-988-15639-9-6. Copyright and reprint permission: this material is permitted for personal use; for any other copying, reprint, republication or redistribution permission, please contact the TCCT Secretariat, No. 55 Zhongguancun East Road, Beijing 100190, P. R. China. All rights reserved. Copyright 2020 by TCCT. Contents: Welcome Address; Conference Committees; Important Information; Instructions for Oral and Poster Presentations; Plenary Lectures.
Real-Time Vision, Tracking and Control
Proceedings of the 2000 IEEE International Conference on Robotics & Automation, San Francisco, CA, April 2000. Peter I. Corke (CSIRO Manufacturing Science & Technology, Pinjarra Hills, AUSTRALIA 4069, [email protected]) and Seth A. Hutchinson (Beckman Institute for Advanced Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA 61801, [email protected]). Abstract: This paper, which serves as an introduction to the mini-symposium on Real-Time Vision, Tracking and Control, provides a broad sketch of visual servoing, the application of real-time vision, tracking and control for robot guidance. It outlines the basic theoretical approaches to the problem, describes a typical architecture, and discusses major milestones, applications and the significant vision sub-problems that must be solved. 1 Introduction: Visual servoing is a maturing approach to the control of robots in which tasks are defined visually, rather than in terms of previously taught Cartesian coordinates. It is considered the fusion of computer vision, robotics and control and has been a distinct field for over 10 years, though the earliest work dates back close to 20 years. Over this period several major, and well understood, approaches have evolved and been demonstrated in many laboratories around the world. Fairly comprehensive overviews of the basic approaches, current applications, and open research issues can be found in a number of recent sources, including [1-4]. The next section, Section 2, describes three basic approaches to visual servoing. Section 3 provides a 'walk around' the main functional blocks in a typical visual servoing system. Some major milestones and proposed applications are discussed in Section 4. Section 5 then expands on the various vision sub-problems that must be solved.
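A common starting point for the approaches surveyed in this paper is the classic image-based visual servoing control law, which maps image-feature error to a camera velocity through the pseudo-inverse of an interaction matrix. The sketch below shows that computation in isolation; the feature values and the interaction matrix are made-up stand-ins for illustration and are not taken from the paper.

```python
import numpy as np

# Image-based visual servoing sketch: v = -gain * pinv(L) @ (s - s_star).
# The interaction matrix L and feature vectors are illustrative stand-ins.
def ibvs_velocity(s, s_star, L, gain=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) driving features s toward s_star."""
    error = s - s_star
    return -gain * np.linalg.pinv(L) @ error

# Four point features (8 image coordinates) tracked toward desired positions.
s = np.array([0.10, 0.05, -0.12, 0.04, 0.11, -0.06, -0.09, -0.08])
s_star = np.zeros(8)
L = np.random.default_rng(0).normal(size=(8, 6))  # stand-in interaction matrix

v = ibvs_velocity(s, s_star, L)
print("commanded camera velocity:", np.round(v, 3))
```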
Lab Lecture 1: Introduction to ROS
16-311-Q Introduction to Robotics, Lab Lecture 1: Introduction to ROS. Instructor: Gianni A. Di Caro.
Problem(s) in robotics development. In robotics, before ROS:
- Lack of standards
- Little code reusability
- Repeatedly reinventing (or rewriting) device drivers, access to robot's interfaces, management of on-board processes, inter-process communication protocols, …
- Repeatedly re-coding standard algorithms
- New robot in the lab (or in the factory): start re-coding (mostly) from scratch
Robot Operating System (ROS), http://www.ros.org
What is ROS?
- ROS is an open-source robot operating system
- A set of software libraries and tools that help you build robot applications that work across a wide variety of robotic platforms
- Originally developed in 2007 at the Stanford Artificial Intelligence Laboratory, with development continued at Willow Garage
- Since 2013 managed by OSRF (Open Source Robotics Foundation)
Note: some of the following slides are adapted from Roi Yehoshua.
ROS main features. ROS has two "sides":
- The operating system side, which provides standard operating system services such as hardware abstraction, low-level device control, implementation of commonly used functionality, message-passing between processes, and package management
- A suite of user-contributed packages that implement common robot functionality such as SLAM, planning, perception, vision, manipulation, etc.
ROS philosophy:
- Peer to peer: ROS systems consist of many small programs (nodes) which connect to each other and continuously exchange messages (a minimal subscriber sketch follows this excerpt)
- Tools-based: there are many small, generic programs that perform tasks such as visualization, logging, plotting data streams, etc.
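As a minimal counterpart to the peer-to-peer message exchange described above, here is a rospy subscriber sketch; the topic name and message type are illustrative and match the publisher sketch given earlier in this listing.

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String

# Minimal ROS 1 subscriber node; topic name matches the earlier publisher sketch.
def callback(msg):
    rospy.loginfo("heard: %s", msg.data)

def listener():
    rospy.init_node('listener', anonymous=True)
    rospy.Subscriber('chatter', String, callback)
    rospy.spin()  # keep the node alive, processing incoming messages

if __name__ == '__main__':
    listener()
```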
Architecture of a Software System for Robotics Control
Authors: Aleksey V. Shevchenko, Oksana S. Mezentseva, Dmitriy V. Mezentsev, and Konstantin Y. Ganshin, Information Systems Technologies Dept., NCFU, Stavropol City ([email protected], [email protected], [email protected], [email protected]). Abstract: This paper describes the architecture and features of RoboStudio, a software system for robotics control that allows complete and simultaneous monitoring of all electronic components of a robotic system. This increases the safety of operating expensive equipment and allows for flexible configuration of different robotics components without changes to the source code. 1 Introduction: Today, there are no clear standards for robotics programming. Each manufacturer creates their own software and hardware architecture based on their own ideas for optimizing the development process. Large manufacturers often buy their software systems from third parties, while small ones create theirs independently. As a result, the software often targets a specific platform (more often, a specific modification of a given platform). Thus, even minor upgrades of an existing robotic system often require a complete software rewrite. The lack of universal software products imposes large time and resource costs on developers, and often leads to the curtailment of promising projects. Some developers of robotic systems software partially solve the problem of universal software using the free Robot Operating System (ROS), which also has limitations and is not always able to satisfy all the requirements of the manufacturers. 2 Equipment: The research was conducted using two robots produced by the SPA "Android Technics": the "Mechatronics" stand and the full-size anthropomorphic robot AR-601E (Figure 1).
Design and Implementation of an Autonomous Robotics Simulator
By Adam Carlton Harris. A thesis submitted to the faculty of The University of North Carolina at Charlotte in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering, Charlotte, 2011. Approved by: Dr. James M. Conrad, Dr. Ronald R. Sass, Dr. Bharat Joshi. © 2011 Adam Carlton Harris, all rights reserved. ABSTRACT: ADAM CARLTON HARRIS. Design and implementation of an autonomous robotics simulator. (Under the direction of DR. JAMES M. CONRAD.) Robotics simulators are important tools that can save both time and money for developers. Being able to accurately and easily simulate robotic vehicles is invaluable. In the past two decades, corporations, robotics labs, and software development groups have released many robotics simulators to developers. Commercial simulators have proven to be very accurate and many are easy to use; however, they are closed source and generally expensive. Open source simulators have recently had an explosion of popularity, but most are not easy to use. This thesis describes the design criteria and implementation of an easy-to-use open source robotics simulator. SEAR (Simulation Environment for Autonomous Robots) is designed to be an open source cross-platform 3D (3 dimensional) robotics simulator written in Java using jMonkeyEngine3 and the Bullet Physics engine. Users can import custom-designed 3D models of robotic vehicles and terrains to be used in testing their own robotics control code. Several sensor types (GPS, triple-axis accelerometer, triple-axis gyroscope, and a compass) have been simulated, and early work on infrared and ultrasonic distance sensors as well as LIDAR simulators has been undertaken.
ROS History and Basics
Computer Vision Laboratory, Robot Vision Systems, Lecture 9: ROS History and Basics. Michael Felsberg, [email protected].
Goals:
- System design and programming in OpenCV 3.0 rc1 and ROS (Robot Operating System) Indigo (Jade?)
- Focus of part 2: robot vision systems, distributed computing with ROS, and efficient use of OpenCV in ROS
- Access to robotic hardware is not part of the course: make use of available resources from your lab, with simulation as fallback
Organization:
- Lectures: systems basics in ROS Jade; Python introduction given by Hannes
- Seminars: participants who only take the ROS part (and did not read the OpenCV course) give one seminar presentation, required for credits
- Exercises: installation of ROS, going through essential first steps
- Project: an example application
- Credits: 9 hp with project work, 80% presence, and one seminar presentation; 6 hp without the project work. Note: if you have participated in the course 'Visual Computing with OpenCV', you can only get 6 hp (3 hp without project); 'listen-only' gives 0 hp
What is ROS?
- An open-source library for robotics middleware
- Started at Stanford AI, now supported by Willow Garage / Open Source Robotics Foundation
- Free for use under the open source BSD license (most parts)
- Runs on Linux (Ubuntu); experimental support for Windows, Mac, and other Linux; rosjava (platform independent), Android, Matlab
History of ROS:
- 2007: "Switchyard" by the Stanford Artificial Intelligence Laboratory
- 2008-2013: development primarily at Willow Garage
- Since 2013: Open Source Robotics Foundation
Versions:
- 2010:
Can a Robot Trust You? A DRL-Based Approach to Trust-Driven Human-Guided Navigation
arXiv:2011.00554v1 [cs.RO], 1 Nov 2020. Vishnu Sashank Dorbala, Arjun Srinivasan, and Aniket Bera, University of Maryland, College Park, USA. Supplemental version including code, video, and datasets at https://gamma.umd.edu/robotrust/. Abstract: Humans are known to construct cognitive maps of their everyday surroundings using a variety of perceptual inputs. As such, when a human is asked for directions to a particular location, their wayfinding capability in converting this cognitive map into directional instructions is challenged. Owing to spatial anxiety, the language used in the spoken instructions can be vague and often unclear. To account for this unreliability in navigational guidance, we propose a novel Deep Reinforcement Learning (DRL) based trust-driven robot navigation algorithm that learns humans' trustworthiness to perform a language-guided navigation task. Our approach seeks to answer the question as to whether a robot can trust a human's navigational guidance or not. To this end, we look at training a policy that learns to navigate towards a goal location using only trustworthy human guidance, driven by its own robot trust metric. We look at quantifying various affective features from language-based instructions and incorporate them into our policy's observation space in the form of a human trust metric. We utilize both these trust metrics in an optimal cognitive reasoning scheme that decides when and when not to trust the given guidance. Our results show that the learned policy can navigate the environment in an optimal,
Fig. 1: We look at whether humans can be trusted on the navigational guidance they give to a robot.
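As a rough illustration of folding a trust signal into a policy's observation space, the sketch below concatenates a scalar trust estimate with the robot's state and gates the guidance on it; the feature names, the trust heuristic, and the gating rule are invented for this example and are not the paper's method.

```python
import numpy as np

# Illustrative only: a made-up trust score from simple instruction features,
# appended to the robot's state so a policy could condition on it.
def trust_score(hedge_words, contradictions, instruction_length):
    """Heuristic trust value in [0, 1]; the weights are arbitrary stand-ins."""
    penalty = 0.15 * hedge_words + 0.30 * contradictions + 0.01 * instruction_length
    return float(np.clip(1.0 - penalty, 0.0, 1.0))

def build_observation(robot_state, guidance_vector, trust):
    """Concatenate state, encoded guidance, and the scalar trust metric."""
    return np.concatenate([robot_state, guidance_vector, [trust]])

robot_state = np.array([1.2, -0.4, 0.78])   # x, y, heading (made up)
guidance = np.array([0.0, 1.0, 0.0, 0.0])   # e.g. one-hot "go left"
t = trust_score(hedge_words=2, contradictions=0, instruction_length=14)

obs = build_observation(robot_state, guidance, t)
action_source = "follow guidance" if t > 0.5 else "fall back to own exploration"
print(obs, "->", action_source)
```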