Acknowledgements


B.21 Actuators for Soft Robotics
by Alin Albu-Schäffer, Antonio Bicchi
The authors of this chapter have drawn liberally on work done by a group of collaborators involved in the EU projects PHRIENDS, VIACTORS, and SAPHARI. We want to particularly thank Etienne Burdet, Federico Carpi, Manuel Catalano, Manolo Garabini, Giorgio Grioli, Sami Haddadin, Dominic Lacatos, Can Özparpucu, Florian Petit, Joshua Schultz, Nikos Tsagarakis, Bram Vanderborght, and Sebastian Wolf for their substantial contributions to this chapter and the work behind it.

C.29 Inertial Sensing, GPS and Odometry
by Gregory Dudek, Michael Jenkin
We would like to thank Sarah Jenkin for her help with the figures.

D.36 Motion for Manipulation Tasks
by James Kuffner, Jing Xiao
We acknowledge the contribution that the authors of the first edition made to this chapter revision, particularly to Sect. 36.1, Sect. 36.2, and Sect. 36.5.

E.52 Modeling and Control of Aerial Robots
by Robert Mahony, Randal Beard, Vijay Kumar
The authors would like to acknowledge the contributions of Peter Corke to the background material on quadrotor vehicles (R. Mahony, V. Kumar, P. Corke: Multirotor aerial vehicles: Modeling, estimation, and control of quadrotor, Robotics Autom. Mag. 19(3), 20–32 (2013)) and of Eric Feron and Eric Johnson (E. Feron, E. Johnson: Aerial robotics. In: Springer Handbook of Robotics, ed. by B. Siciliano, O. Khatib (Springer, Berlin, Heidelberg 2008)) to the introductory material in this chapter.

F.57 Robotics in Construction
by Kamel Saidi, Thomas Bock, Christos Georgoulas
Disclaimer: Although certain commercial construction equipment is included in this chapter, the inclusion of such information should in no way be construed as indicating that such products are endorsed by the National Institute of Standards and Technology (NIST) or are recommended by NIST, or that they are necessarily the best equipment for the purposes described.

F.58 Robotics in Hazardous Applications
by James Trevelyan, William Hamel, Sung-Chul Kang
James Trevelyan acknowledges Surya Singh for detailed suggestions on the original draft, and would also like to thank the many unnamed mine clearance experts who have provided guidance and comments over many years, as well as Prof. S. Hirose, Scanjack, Way Industry, Japan Atomic Energy Agency, and Total Marine Systems for providing photographs.
William R. Hamel would like to acknowledge the US Department of Energy's Robotics Crosscutting Program and all of his colleagues at the national laboratories and universities for many years of dealing with remote hazardous operations, and all of his collaborators at the Field Robotics Center at Carnegie Mellon University, particularly James Osborn, who were pivotal in developing ideas for future telerobots.
Sungchul Kang acknowledges Changhyun Cho, Woosub Lee, and Dongsuk Ryu at KIST (Korean Institute for Science and Technology), Korea, for providing valuable documents and pictures. He also thanks Munsang Kim for leading the projects that produced many of the research achievements related to the enabling technologies of Sect. 58.3.
All three authors acknowledge comments from Prof. Cédric Pradalier of Georgia Tech Lorraine and Mark Noakes of the Oak Ridge National Laboratory in the USA.

F.59 Robotics in Mining
by Joshua Marshall, Adrian Bonchis, Eduardo Nebot, Steven Scheding
The authors would like to thank Elliot Duff at CSIRO for his work in mustering the team for this chapter project. Thanks to Andrew Crose, Michael Lewis, and Modular Mining Systems for help in obtaining information about the Komatsu AHS. Thanks also to Johan Larsson, Ola Pettersson, and Oscar Lundhede for facilitating the use of some Atlas Copco AB images and video.

F.62 Intelligent Vehicles
by Alberto Broggi, Alex Zelinsky, Ümit Özgüner, Christian Laugier
The authors acknowledge the contributions of Chuck Thorpe and Michel Parent, who made significant contributions to the first edition of the handbook chapter; part of their contributions remain in the revised second edition. We also acknowledge the helpful contributions of Dizan Alejandro Vasquez.

G.70 Human–Robot Augmentation
by Massimo Bergamasco, Hugh Herr
The authors wish to thank Jared Markowitz, Elliott Rouse, Carlo Alberto Avizzano, Marco Fontana, and Rocco Vertechy for their helpful assistance and suggestions in writing this manuscript.

G.74 Learning from Humans
by Aude Billard, Sylvain Calinon, Rüdiger Dillmann
We warmly thank Daniel Grollman and Stefan Schaal for their participation in drafting earlier versions of this chapter.

About the Authors

Markus W. Achtelik (Chapter B.26)
ETH Zurich, Autonomous Systems Laboratory, Zurich, Switzerland
[email protected]
Markus Wilhelm Achtelik received his Diploma degree in Electrical Engineering from the TU München in 2009. He finished his PhD in 2014 at the Autonomous Systems Lab (ASL) at ETH Zurich and currently works as a Postdoc at ASL on control, state estimation, and planning, with the goal of enabling autonomous manoeuvres for MAVs using an IMU and onboard camera(s) as main sensors.

Alin Albu-Schäffer (Chapter B.21)
DLR Institute of Robotics and Mechatronics, Wessling, Germany
[email protected]
Alin Albu-Schäffer graduated in Electrical Engineering from the Technical University of Timisoara in 1993 and received his PhD from the TU Munich in 2002. Since 2012 he has been Head of the Institute of Robotics and Mechatronics at the German Aerospace Center, which he joined in 1995 as a PhD candidate. He is also a Professor at the TU Munich's Computer Science Department. His research interests include robot design, modeling and control, flexible joints, and bio-inspired robot design.

Kostas Alexis (Chapter B.26)
ETH Zurich, Institute of Robotics and Intelligent Systems, Zurich, Switzerland
[email protected]
Kostas Alexis obtained his PhD in the field of aerial robotics control and collaboration from the University of Patras, Greece, in 2011. Since then he has held a senior researcher position at ASL, ETH Zurich. His research interests lie in control and optimization, focusing on aerial systems navigation. He is the co-author of more than 40 technical publications.

Jorge Angeles (Chapter B.16)
McGill University, Department of Mechanical Engineering and Centre for Intelligent Machines, Montreal, Canada
[email protected]
Jorge Angeles graduated as an Electromechanical Engineer and obtained the MEng degree in Mechanical Engineering, both at Universidad Nacional Autónoma de México (UNAM), then received the PhD degree in Applied Mechanics from Stanford University. His research activities include robot kinematics, dynamics, design, and control, as well as design theory and methodology. Angeles is a Fellow of ASME, CSME, IEEE, and RSC, The Academies of Arts, Humanities, and Sciences of Canada, and an Honorary Member of IFToMM, the International Federation for the Promotion of Mechanism and Machine Science.
Gianluca Antonelli (Chapter E.51)
For biographical details see "About the Multimedia Editors".

Fumihito Arai (Chapter B.27)
Nagoya University, Department of Micro-Nano Systems Engineering, Nagoya, Japan
[email protected]
Fumihito Arai received the Master of Eng. degree from Tokyo University of Science in 1988 and the Dr. of Eng. degree from Nagoya University in 1993. From 1994 he was an Assistant Professor at Nagoya University, and from 2005 a Professor at Tohoku University. Since 2010 he has been a Professor at Nagoya University, mainly engaged in micro- and nano-robotics.

Michael A. Arbib (Chapter G.77)
University of Southern California, Computer Science, Neuroscience and ABLE Project, Los Angeles, USA
[email protected]
Michael Arbib's group at the University of Massachusetts Amherst (UMass) introduced the notions of opposition space and affordances to the study of brain mechanisms for the primate hand and the control of robot hands. At the University of Southern California, his group developed new models of primate visuomotor coordination, including the first computational model of mirror neurons. He continues to develop the mirror system hypothesis for the evolution of the language-ready brain.

J. Andrew Bagnell (Chapter A.15)
Carnegie Mellon University, Robotics Institute, Pittsburgh, USA
[email protected]
J. Andrew Bagnell is an Associate Research Professor at Carnegie Mellon University's Robotics Institute, the National Robotics Engineering Center (NREC), and the Machine Learning Department. His research focuses on machine learning and perception, adaptive control, optimization, and planning under uncertainty. He received a BSc in Electrical Engineering from the University of Florida in 1998. He joined the Robotics Institute at CMU in 2000, receiving an MS and a PhD in Robotics in 2002 and 2004, respectively.

Randal W. Beard (Chapter E.52)
Brigham Young University, Electrical and Computer Engineering, Provo, USA
[email protected]
Randal Beard is a Professor in the Department of Electrical and Computer Engineering at Brigham Young University. He received his PhD from Rensselaer Polytechnic Institute. His research interests include guidance and control of teams of unmanned air vehicles. He is a Fellow of the IEEE and an Associate Fellow of the AIAA.

Michael Beetz (Chapter A.14)
University Bremen, Institute for Artificial Intelligence, Bremen, Germany
Michael Beetz is a Professor for Computer Science at the Faculty for Informatics of the University Bremen and Head of the Institute for Artificial Intelligence. He received his diploma degree in Computer Science from