Collaborative Multi-Robot Systems for Search and Rescue: Coordination and Perception


Jorge Peña Queralta¹, Jussi Taipalmaa², Bilge Can Pullinen², Victor Kathan Sarker¹, Tuan Nguyen Gia¹, Hannu Tenhunen¹, Moncef Gabbouj², Jenni Raitoharju², Tomi Westerlund¹

¹Turku Intelligent Embedded and Robotic Systems, University of Turku, Finland. Email: {jopequ, vikasar, tunggi, tovewe}@utu.fi
²Department of Computing Sciences, Tampere University, Finland. Email: {jussi.taipalmaa, bilge.canpullinen, moncef.gabbouj, jenni.raitoharju}@tuni.fi

Abstract—Autonomous or teleoperated robots have been playing increasingly important roles in civil applications in recent years. Across the different civil domains where robots can support human operators, one of the areas where they can have the most impact is in search and rescue (SAR) operations. In particular, multi-robot systems have the potential to significantly improve the efficiency of SAR personnel with faster search of victims, initial assessment and mapping of the environment, real-time monitoring and surveillance of SAR operations, or establishing emergency communication networks, among other possibilities. SAR operations encompass a wide variety of environments and situations, and therefore heterogeneous and collaborative multi-robot systems can provide the most advantages. In this paper, we review and analyze the existing approaches to multi-robot SAR support from an algorithmic perspective, putting an emphasis on the methods enabling collaboration among the robots as well as advanced perception through machine vision and multi-agent active perception.
Furthermore, we put these algorithms in the context of the different challenges and constraints that various types of robots (ground, aerial, surface, or underwater) encounter in different SAR environments (maritime, urban, wilderness, or other post-disaster scenarios). This is, to the best of our knowledge, the first review considering heterogeneous SAR robots across different environments, while giving two complementary points of view: control mechanisms and machine perception. Based on our review of the state of the art, we discuss the main open research questions, and outline our insights on the current approaches that have potential to improve the real-world performance of multi-robot SAR systems.

Index Terms—Robotics, search and rescue (SAR), multi-robot systems (MRS), machine learning (ML), active perception, active vision, multi-agent perception, autonomous robots.

arXiv:2008.12610v1 [cs.RO] 28 Aug 2020

I. INTRODUCTION

Autonomous robots have seen an increasing penetration across multiple domains in the last decade. In industrial environments, collaborative robots are being utilized in the manufacturing sector, and fleets of mobile robots are swarming in logistics warehouses. Nonetheless, their utilization within civil applications presents additional challenges owing to the interaction with humans and their deployment in potentially unknown environments [1]–[3]. Among civil applications, search and rescue (SAR) operations present a key scenario where autonomous robots have the potential to save lives by enabling faster response times [4], [5], supporting in hazardous environments [6]–[8], or providing real-time mapping and monitoring of the area where an incident has occurred [9], [10], among other possibilities.

Fig. 1: Different search and rescue scenarios where heterogeneous multi-robot systems can assist SAR taskforces: (a) maritime search and rescue with UAVs and USVs; (b) urban search and rescue with UAVs and UGVs; (c) wilderness search and rescue with support UAVs.

In this paper, we perform a literature review of multi-robot systems for SAR scenarios. These systems involve SAR operations in a variety of environments, some of which are illustrated in Fig. 1. With the wide variability of SAR scenarios, different situations require robots to be able to operate in different environments. In this document, we utilize the following standard notation to refer to the different types of robots: unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), unmanned surface vehicles (USVs), and unmanned underwater vehicles (UUVs). These can be either autonomous or teleoperated, and very often a combination of both modalities exists when considering heterogeneous multi-robot systems. In maritime SAR, autonomous UAVs and USVs can support in finding victims (Fig. 1a). In urban scenarios, UAVs can provide real-time information for assessing the situation and UGVs can access hazardous areas (Fig. 1b). In mountain scenarios, UAVs can help in monitoring and getting closer to the victims that are later rescued by a helicopter (Fig. 1c).

Fig. 2: Summary of the different aspects of multi-robot SAR systems considered in this survey, separated into (a) a system-level perspective and (b) a planning and perception algorithmic perspective. (a) System-level aspects discussed in Section III: equipment and sensors (Section III-A), operational environments (Section III-B), human detection (Section III-C), shared autonomy (Section III-D), and communication (Section III-E). (b) Algorithmic components: coordination algorithms, comprising formation control and area coverage (Sections IV-A to IV-C) and multi-agent decision making and planning (Sections IV-D to IV-H), are described in Section IV, while perception algorithms, comprising segmentation and object detection (Sections V-A to V-C) and multi-modal sensor fusion (Sections V-D to V-E), are reviewed from a machine learning perspective in Section V. Section VI then puts these two views together by reviewing the works in single and multi-agent active perception.

In recent years, multiple survey papers addressing the utilization of multi-UAV systems for civil applications have been published. In [11], the authors perform an exhaustive review of multi-UAV systems from the point of view of communication, and for a wide range of applications from construction or delivery to SAR missions. An extensive classification of previous works is done taking into account the mission and network requirements in terms of data type, frequency, throughput, and quality of service (latency and reliability). In comparison to [11], our focus is on multi-robot systems including also ground, surface, or underwater robots. Another recent review related to civil applications for UAVs was carried out in [3]. In [3], the authors provide a classification in terms of technological trends and algorithm modalities utilized in research papers: collision avoidance, mmWave communication and radars, cloud-based offloading, machine learning, image processing, and software-defined networking, among others. A recent survey [12] focused on UAVs for SAR operations, with an extensive classification of research papers based on (i) sensors utilized onboard the UAVs, (ii) robot systems (single or multi-robot systems, and operational mediums), and (iii) environment where the system is meant to be deployed. In a study from Grayson et al.
[13], the focus is on using multi-robot systems for SAR operations, with an emphasis on task allocation algorithms, communication modalities, and human-robot interaction for both homogeneous and heterogeneous multi-robot systems.

In this work, we review also heterogeneous multi-robot systems. However, rather than focusing on describing the existing solutions at a system level, we put an emphasis on the algorithms that are being used for multi-robot coordination and perception. Moreover, we describe the role of machine learning in single and multi-agent perception, and discuss how active perception can play a key role towards the development of more intelligent robots supporting SAR operations. The survey is further divided into three main sub-categories: 1) planning and area coverage algorithms, 2) machine perception, and 3) active perception algorithms combining the previous two concepts (Fig. 2).

While autonomous robots are being increasingly adopted for SAR missions, current levels of autonomy and safety of robotic systems only allow for full autonomy in the search part, but not for rescue, where human operators need to intervene [14]. This leads to the design of shared autonomy interfaces and

In summary, our contribution focuses on reviewing the different aspects of multi-robot SAR operations with
1) a system-level perspective for designing autonomous SAR robots considering the operational environment, communication, level of autonomy, and the interaction with human operators,
2) an algorithmic point of view of multi-robot coordination, multi-robot search and area coverage, and distributed task allocation and planning applied to SAR operations,
3) a deep learning viewpoint to single- and multi-agent perception, with a focus on object detection and tracking and segmentation, and a description of the challenges and opportunities of active perception in multi-robot systems for SAR scenarios.

The remainder of this paper is organized as follows: Section II describes some of the most relevant projects in SAR robotics, with an emphasis on those considering multi-robot systems. In Section III, we present a system view on SAR robotic systems, describing the different types of robots being utilized, particularities of SAR environments, and different
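The contributions above mention distributed task allocation for multi-robot search and area coverage. As a minimal sketch of the kind of greedy baseline such allocation methods are commonly compared against (this is an illustration, not an algorithm from the survey; the robot names, cell coordinates, and cost model are assumptions of the example), search cells can be assigned one by one to whichever robot reaches them at the lowest marginal travel cost:

```python
# Illustrative sketch (not from the survey): greedy allocation of search
# cells to robots, minimizing each assignment's marginal travel cost.
import math


def greedy_allocate(robots, cells):
    """Assign each search cell to one robot, greedily minimizing travel cost.

    robots: dict name -> (x, y) start position
    cells:  list of (x, y) cell centers to be searched
    Returns dict name -> ordered list of cells assigned to that robot.
    """
    plans = {name: [] for name in robots}
    pos = dict(robots)  # each robot's current planned endpoint
    remaining = list(cells)
    while remaining:
        # Pick the (robot, cell) pair with the smallest travel cost.
        name, cell = min(
            ((n, c) for n in robots for c in remaining),
            key=lambda nc: math.dist(pos[nc[0]], nc[1]),
        )
        plans[name].append(cell)
        pos[name] = cell  # the robot continues from the newly assigned cell
        remaining.remove(cell)
    return plans


if __name__ == "__main__":
    robots = {"uav1": (0.0, 0.0), "ugv1": (10.0, 10.0)}
    cells = [(1.0, 0.0), (2.0, 1.0), (9.0, 9.0), (10.0, 8.0)]
    print(greedy_allocate(robots, cells))
```

Greedy assignment like this is myopic (it can produce globally suboptimal routes), which is precisely why the market-based and planning approaches reviewed later in the survey exist; the sketch only fixes the intuition of what "task allocation" means in this context.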