Recommended publications
-
Artificial Intelligence and the Ethics of Self-Learning Robots
Santa Clara University Scholar Commons, Philosophy, College of Arts & Sciences, October 3, 2017.
Recommended citation: Vallor, S., & Bekey, G. A. (2017). Artificial Intelligence and the Ethics of Self-Learning Robots. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot Ethics 2.0 (pp. 338–353). Oxford University Press. This material was originally published in Robot Ethics 2.0, edited by Patrick Lin, Keith Abney, and Ryan Jenkins, and is reproduced by permission of Oxford University Press (http://www.oup.co.uk/academic/rights/permissions).
Chapter 22: Artificial Intelligence and the Ethics of Self-Learning Robots, by Shannon Vallor and George A. Bekey. The convergence of robotics technology with the science of artificial intelligence (AI) is rapidly enabling the development of robots that emulate a wide range of intelligent human behaviors. Recent advances in machine learning techniques have produced significant gains in the ability of artificial agents to perform or even excel in activities formerly thought to be the exclusive province of human intelligence, including abstract problem-solving, perceptual recognition, social interaction, and natural language use.
-
Modeling and Control of Underwater Robotic Systems
Dr.ing. thesis, Ingrid Schjølberg, Department of Engineering Cybernetics, Norwegian University of Science and Technology, March 1996. Report 96-21-W, Department of Engineering Cybernetics, Norwegian University of Science and Technology, N-7034 Trondheim, Norway.
Preface: This thesis is submitted for the Doktor ingeniør degree at the Norwegian University of Science and Technology (NTNU). The research was carried out at the Norwegian Institute of Technology (NTH), Department of Engineering Cybernetics, in the period from January 1991 to March 1996. The work was funded by The Research Council of Norway (NFR) through the MOBATEL project. Professor Dr.ing. Olav Egeland was my supervisor. During the academic year 1992-1993 I worked at the European Organization for Nuclear Research (CERN); this stay was funded by TOTAL Norge, and the work performed during it has not been included in this thesis. I am indebted to Professor Egeland for giving me the opportunity to take on this study. I am also grateful for advice and comments during the writing of scientific articles, and for the introduction to the fields of force control and vibration damping. I am thankful to the administrator of the MOBATEL project, Professor Jens G. Balchen, for letting me take part in this project. I am grateful to Professor Dr.ing. Thor I. Fossen for useful discussions on hydrodynamics. I want to express my gratitude to Dr.ing.
-
A Retrospective of the AAAI Robot Competitions
AI Magazine, Volume 18, Number 1 (1997). © AAAI. Pete Bonasso and Thomas Dean.
This article is the content of an invited talk given by the authors at the Thirteenth National Conference on Artificial Intelligence (AAAI-96). The piece begins with a short history of the competition, then discusses the technical challenges and the political and cultural issues associated with bringing it off every year. We also cover the science and engineering involved with the robot tasks and the educational and commercial aspects of the competition. We finish with a discussion of the community formed by the organizers, participants, and the conference attendees. The original talk made liberal use of video clips and slide photographs, so we have expanded the text and added photographs to make up for the lack of such media.
... this development. We found it helpful to draw comparisons with developments in aviation. The comparisons allow us to make useful parallels with regard to aspirations, motivations, successes, and failures.
Dreams: Flying has always seemed like a particularly elegant way of getting from one place to another. The dream for would-be aviators is to soar like a bird; for many researchers in AI, the dream is to emulate a human. There are many variations on these dreams, and some resulted in several fanciful manifestations. In the case of aviation, some believed that it should be possible for humans to fly by simply strapping on wings.
-
Mobile Robot Kinematics
We're going to start talking about our mobile robots now. These robots differ from our arms in two ways: they have sensors, and they can move themselves around. Because their movement is so different from the arms, we will need to talk about a new style of kinematics: differential drive.
1. Differential drive is how many mobile wheeled robots locomote.
2. Differential drive robots typically have two powered wheels, one on each side of the robot. Sometimes there are other passive wheels that keep the robot from tipping over.
3. When both wheels turn at the same speed in the same direction, the robot moves straight in that direction.
4. When one wheel turns faster than the other, the robot turns in an arc toward the slower wheel.
5. When the wheels turn in opposite directions, the robot turns in place.
6. We can formally describe the robot behavior as follows:
(a) If the robot is moving in a curve, there is a center of that curve at that moment, known as the Instantaneous Center of Curvature (or ICC). We talk about the instantaneous center because we'll analyze this at each instant; the curve may, and probably will, change in the next moment.
(b) If $r$ is the radius of the curve (measured to the middle of the robot) and $l$ is the distance between the wheels, then the rate of rotation $\omega$ around the ICC is related to the velocities of the wheels by:
$\omega (r + l/2) = v_r$
$\omega (r - l/2) = v_l$
Why? The angular velocity is defined as the positional velocity divided by the radius: $\omega = \frac{d\theta}{dt} = \frac{V}{r}$. This should make some intuitive sense: the farther you are from the center of rotation, the faster you need to move to get the same angular velocity.
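To make the differential-drive relations concrete, here is a minimal forward-kinematics sketch in Python. It solves the two wheel-speed equations above for the turn radius and rotation rate, then rotates the robot pose about the ICC. The function name and the numpy dependency are illustrative assumptions, not part of the original notes.

```python
import numpy as np

def diff_drive_step(x, y, theta, v_l, v_r, l, dt):
    """One forward-kinematics step for a differential-drive robot.

    x, y, theta : current pose (theta in radians)
    v_l, v_r    : left/right wheel linear velocities
    l           : distance between the wheels
    dt          : time step
    """
    if np.isclose(v_l, v_r):
        # Equal speeds: straight-line motion, no rotation (ICC at infinity).
        return x + v_l * np.cos(theta) * dt, y + v_l * np.sin(theta) * dt, theta

    # Radius to the ICC and rotation rate, solved from
    # w(r + l/2) = v_r and w(r - l/2) = v_l.
    r = (l / 2.0) * (v_r + v_l) / (v_r - v_l)
    w = (v_r - v_l) / l

    # The ICC lies perpendicular to the heading, at distance r from the robot.
    icc_x = x - r * np.sin(theta)
    icc_y = y + r * np.cos(theta)

    # Rotate the robot about the ICC by w*dt.
    dtheta = w * dt
    nx = np.cos(dtheta) * (x - icc_x) - np.sin(dtheta) * (y - icc_y) + icc_x
    ny = np.sin(dtheta) * (x - icc_x) + np.cos(dtheta) * (y - icc_y) + icc_y
    return nx, ny, theta + dtheta
```

Turning the two wheel equations into code this way also makes the special cases obvious: equal wheel speeds put the ICC at infinity, and opposite speeds give r = 0, a turn in place.
-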
6D Image-Based Visual Servoing for Robot Manipulators with Uncalibrated Stereo Cameras
Caixia Cai, Emmanuel Dean-León, Nikhil Somani, Alois Knoll.
Abstract: This paper introduces 6 new image features to provide a solution to the open problem of uncalibrated 6D image-based visual servoing for robot manipulators, where the goal is to control the 3D position and orientation of the robot end-effector using visual feedback. One of the main contributions of this article is a novel stereo camera model which employs virtual orthogonal cameras to map 6D Cartesian poses defined in the Task space to 6D visual poses defined in a Virtual Visual space (Image space). This new model is used to compute a full-rank square Image Jacobian matrix ($J_{img}$), which solves several common problems exhibited by the classical image Jacobians, e.g., Image space singularities and local minima. This Jacobian is a fundamental key for the image-based controller design, where a chattering-free adaptive second-order sliding mode is employed to track 6D visual motions for ...
A. Related work: An IBVS usually employs the image Jacobian matrix ($J_{img}$) to relate end-effector velocities in the manipulator's Task space to the feature parameter velocities in the feature (image) space. A full and comprehensive survey on Visual Servoing and image Jacobian definitions can be found in [1], [3], [4] and more recently in [5]. In general, the classical image Jacobian is defined using a set of image feature measurements (usually denoted by $s$), and it describes how image features change when the robot manipulator pose changes: $\dot{s} = J_{img}\, v$. In Visual Servoing the image Jacobian needs to be calculated or estimated.
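As a concrete illustration of how the relation $\dot{s} = J_{img} v$ is inverted for control, below is a minimal sketch of the classical IBVS velocity law described in the survey literature the authors cite; it is not the paper's specific 6D feature construction or sliding-mode controller, and the names and gain are illustrative.

```python
import numpy as np

def ibvs_velocity(s, s_star, J_img, gain=0.5):
    """Classical image-based visual servoing step.

    Drives the image features s toward the desired features s_star by
    commanding a Cartesian end-effector velocity v = -gain * J^+ (s - s*).
    """
    error = s - s_star
    # Pseudo-inverse of the image Jacobian maps feature-space error
    # back into a Task-space velocity command.
    v = -gain * np.linalg.pinv(J_img) @ error
    return v
```

With a full-rank square Jacobian of the kind the paper constructs, the pseudo-inverse reduces to an ordinary matrix inverse, which is precisely what avoids the singularity and local-minimum issues of the classical rectangular image Jacobians.
-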
Abbreviations and Glossary
Appendix A. Abbreviations are defined and the mathematical symbols and notations used in this book are specified. Furthermore, the random number generator used in this book is referenced.
A.1 Abbreviations: arccos: arccosine; BiRRT: bidirectional rapidly-exploring random tree; C-space: configuration space; DH: Denavit-Hartenberg; DLR: German Aerospace Center; DOF: degree of freedom; FFT: fast Fourier transformation; IK: inverse kinematics; HRI: human-robot interface; LWR: light weight robot; MMI: Institute of Man-Machine Interaction; PCA: principal component analysis; PRM: probabilistic road map; RRT: rapidly-exploring random tree; rulaCapMap: RULA-restricted capability map; RULA: rapid upper limb assessment; SFE: shape fit error; TCP: tool center point; OV: workspace overlap.
A.2 Mathematical Symbols: $C$: configuration space; $K(q)$: direct kinematics; $H$: set of all homogeneous matrices; $W_R$: reachable workspace; $W_D$: dexterous workspace; $W_V$: versatile workspace; $F(R,x)$: function that maps to a homogeneous matrix; $V_{Robot}$: voxel space for the robot arm; $V_{Human}$: voxel space for the human arm; $P$: set of points on the sphere; $N_p$: set of point indices for the points on the sphere; $N_o$: set of orientation indices; $O_S$: set of all homogeneous frames distributed on a sphere; $M_S$: capability map.
A.3 Mathematical Notations: $a$: scalar value; $\mathbf{a}$: vector; $\mathbf{a}^T$: vector transposed; $A$: matrix; $A^T$: matrix transposed; $\langle a, b \rangle$: inner product; $SO(3)$: group of rotation matrices in $\mathbb{R}^{3 \times 3}$, $SO(3) := \{ R \in \mathbb{R}^{3 \times 3} \mid R R^T = I, \det R = +1 \}$; $SE(3)$: $\mathbb{R}^3 \times SO(3)$; $^{A}T_{B}$: reference frame $B$ given in coordinates of reference frame $A$; $\lceil a \rceil$: ceiling function; $\lfloor a \rfloor$: floor function.
A.4 Random Sampling: In this book, the drawing of random samples is often used.
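As a small illustration of the $SO(3)$ definition in A.3, the following sketch checks both membership conditions numerically. It assumes numpy and is not taken from the book.

```python
import numpy as np

def is_rotation_matrix(R, tol=1e-8):
    """Check the SO(3) conditions: R R^T = I and det(R) = +1."""
    R = np.asarray(R, dtype=float)
    if R.shape != (3, 3):
        return False
    orthonormal = np.allclose(R @ R.T, np.eye(3), atol=tol)  # R R^T = I
    proper = np.isclose(np.linalg.det(R), 1.0, atol=tol)     # det R = +1
    return orthonormal and proper
```
-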
Pipeline Following by Visual Servoing for Autonomous Underwater Vehicles
Guillaume Allibert, Minh-Duc Hua, Szymon Krupiński, Tarek Hamel. University of Côte d'Azur, CNRS, I3S, France (emails: allibert(thamel; hua)@i3s.unice.fr); Cybernetix, Marseille, France (email: szymon.krupinski@cybernetix.fr); Institut Universitaire de France, France.
Abstract: A nonlinear image-based visual servo control approach for pipeline following of fully-actuated Autonomous Underwater Vehicles (AUV) is proposed. It makes use of the binormalized Plücker coordinates of the pipeline borders detected in the image plane as feedback information, while the system dynamics are exploited in a cascade manner in the control design. Unlike conventional solutions that consider only the system kinematics, the proposed control scheme accounts for the full system dynamics in order to obtain an enlarged provable stability domain. Control robustness with respect to model uncertainties and external disturbances is reinforced using integral corrections. Robustness and efficiency of the proposed approach are illustrated via both realistic simulations and experimental results on a real AUV.
Keywords: AUV, pipeline following, visual servoing, nonlinear control.
1. Introduction: Underwater pipelines are widely used for transportation of oil, gas or other fluids from production sites to distribution sites. Laid down on the ocean floor, they are often subject to extreme conditions (temperature, pressure, humidity, sea current, vibration ...). ... control (Repoulias and Papadopoulos, 2007; Aguiar and Pascoal, 2007; Antonelli, 2007) and Lyapunov model-based control (Refsnes et al., 2008; Smallwood and Whitcomb, 2004) mostly concern the pre-programmed trajectory tracking problem, with little regard to the local topography of the environment.
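For readers unfamiliar with Plücker coordinates, the sketch below computes the unit direction and unit-normalized moment of a 3D line, the kind of normalized line representation such feature constructions build on. This is an illustrative assumption, not the paper's exact binormalized feature; it also assumes the line does not pass through the origin (otherwise the moment is zero).

```python
import numpy as np

def plucker_line(p1, p2):
    """Plücker coordinates of the 3D line through points p1 and p2.

    Returns the unit direction u and the unit-normalized moment h,
    so both feature vectors have norm 1 regardless of scene scale.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    u = (p2 - p1) / np.linalg.norm(p2 - p1)  # unit direction of the line
    m = np.cross(p1, u)                      # moment of the line about the origin
    h = m / np.linalg.norm(m)                # normalized moment (line not through origin)
    return u, h
```
-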
Design and Evaluation of Modular Robots for Maintenance in Large Scientific Facilities
UNIVERSIDAD POLITÉCNICA DE MADRID, ESCUELA TÉCNICA SUPERIOR DE INGENIEROS INDUSTRIALES. Design and Evaluation of Modular Robots for Maintenance in Large Scientific Facilities. PhD thesis, Departamento de Automática, Ingeniería Electrónica e Informática Industrial. Author: Prithvi Sekhar Pagala, MSc. Advisors: Manuel Ferre Perez, PhD; Manuel Armada, PhD. 2014.
Thesis committee (original in Spanish): President: Dr. Fernando Matía Espada. Secretary: Dr. Claudio Rossi. Members: Dr. Antonio Giménez Fernández, Dr. Juan Antonio Escalera Piña, Dr. Concepción Alicia Monje Micharet. Substitutes: Dr. Mohamed Abderrahim Fichouche, Dr. José María Azorín Poveda. They agree to award the grade of: ____. Madrid, ____ 2014.
Acknowledgements: The duration of the thesis development led to inspiring conversations, exchange of ideas and expanding knowledge with amazing individuals. I would like to thank my advisers Manuel Ferre and Manuel Armada for their constant mentorship, support and motivation to pursue different research ideas and collaborations during the course of the entire thesis. The team at the lab has enriched my life not only professionally but also in Spanish ways. Thank you, all the members of ROMIN: Francisco, Alex, Jose, Jordi, Ignacio, Javi, Chema, Luis .... This research project has been supported by a Marie Curie Early Stage Initial Training Network Fellowship of the European Community's Seventh Framework Program "PURESAFE". I wish to thank the supervisors and fellow research members of the project for the amazing support, fruitful interactions and training events.
-
Learning for Microrobot Exploration: Model-Based Locomotion, Sparse-Robust Navigation, and Low-Power Deep Classification
Nathan O. Lambert, Farhan Toddywala, Brian Liao, Eric Zhu, Lydia Lee, and Kristofer S. J. Pister.
Abstract: Building intelligent autonomous systems at any scale is challenging. The sensing and computation constraints of a microrobot platform make the problems harder. We present improvements to learning-based methods for on-board learning of locomotion, classification, and navigation of microrobots. We show how simulated locomotion can be achieved with model-based reinforcement learning via on-board sensor data distilled into control. Next, we introduce a sparse, linear detector and a Dynamic Thresholding method to FAST Visual Odometry for improved navigation in the noisy regime of mm-scale imagery. We end with a new image classifier capable of classification with fewer than one million multiply-and-accumulate (MAC) operations by combining fast downsampling, efficient layer structures and hard activation functions. These are promising steps toward using state-of-the-art algorithms in the power-limited world of edge-intelligence and microrobots.
[Fig. 1: Our vision for microrobot exploration based on three contributions: 1) improving data-efficiency of learning control, 2) a more noise-robust and novel approach to visual ...]
I. INTRODUCTION: Microrobots have been touted as a coming revolution for many tasks, such as search and rescue, agriculture, or distributed sensing [1], [2]. Microrobotics is a synthesis of Microelectromechanical systems (MEMs), actuators, power electronics, and computation.
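The paper's Dynamic Thresholding method is not specified in this excerpt; the sketch below shows one plausible per-frame threshold adaptation around OpenCV's FAST detector, keeping the keypoint count near a target so noisy mm-scale frames neither flood nor starve the visual-odometry front end. The target, bounds, and step size are illustrative assumptions, not the authors' values.

```python
import cv2

def fast_dynamic_threshold(gray, target=200, t=20, t_min=5, t_max=120, step=5):
    """Adapt the FAST corner threshold toward a target keypoint count.

    gray: single-channel image; t: current threshold, fed back per frame.
    Returns the detected keypoints and the updated threshold.
    """
    fast = cv2.FastFeatureDetector_create(threshold=t)
    kps = fast.detect(gray, None)
    if len(kps) > 1.2 * target:
        t = min(t + step, t_max)   # too many corners: raise the threshold
    elif len(kps) < 0.8 * target:
        t = max(t - step, t_min)   # too few corners: lower the threshold
    return kps, t
```

In use, the updated threshold is simply carried from frame to frame, so detection stays stable as lighting and sensor noise drift.
-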
Robot Learning
15-494 Cognitive Robotics, David S. Touretzky & Ethan Tira-Thompson, Carnegie Mellon, Spring 2009.
What Can Robots Learn?
● Parameter tuning, e.g., for a faster walk
● Perceptual learning: ALVINN driving the Navlab
● Map learning, e.g., SLAM algorithms
● Behavior learning; plans and macro-operators
– Shakey the Robot (SRI)
– Robo-Soar
● Learning from human teachers
– Operant conditioning: Skinnerbots
– Imitation learning
Lots of Work on Robot Learning
● IEEE Robotics and Automation Society
– Technical Committee on Robot Learning
– http://www.learning-robots.de
● Robot Learning Summer School
– Lisbon, Portugal; July 20-24, 2009
● Workshops at major robotics conferences
– ICRA 2009 workshop: Approaches to Sensorimotor Learning on Humanoid Robots
– Kobe, Japan; May 17, 2009
Parameter Optimization
● How fast can an AIBO walk? Figures from Kohl & Stone (ICRA 2004) for the ERS-210 model:
– Hand-tuned gaits: CMU (2002) 200 mm/s; German Team 230 mm/s; UT Austin Villa 245 mm/s; UNSW 254 mm/s
– Learned gaits: Hornby (1999) 170 mm/s; UNSW 270 mm/s; UT Austin Villa 291 mm/s
Walk Parameters
12 parameters to optimize (from Kohl & Stone, ICRA 2004):
● Front locus (height, x pos, y pos)
● Rear locus (height, x pos, y pos)
● Locus length
● Locus skew multiplier (in the x-y plane, for turning)
● Height of front of body
● Height of rear of body
● Foot travel time
● Fraction of time foot is on ground
Optimization Strategy
● "Policy gradient reinforcement learning":
– Walk parameter assignment = "policy"
– Estimate the gradient along each dimension by trying combinations of slight perturbations in all parameters
– Measure walking speed on the actual robot
– Optimize all 12 parameters simultaneously
– Adjust parameters according to the estimated gradient.
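A minimal sketch of the finite-difference policy-gradient loop the last slide outlines, in the spirit of Kohl & Stone (2004): perturb all 12 walk parameters at once, score each candidate by measured walking speed, estimate a per-dimension gradient, and step along it. The measure_speed callback and the specific constants are illustrative assumptions, not the published values.

```python
import random

def policy_gradient_step(params, measure_speed, eps, lr=2.0, n_policies=15):
    """One iteration of finite-difference policy-gradient gait tuning.

    params:        current walk-parameter vector (the "policy")
    measure_speed: evaluates a parameter vector on the robot -> walking speed
    eps:           per-dimension perturbation sizes
    """
    d = len(params)
    # Evaluate randomly perturbed copies of the current policy on the robot.
    candidates = []
    for _ in range(n_policies):
        signs = [random.choice((-1, 0, +1)) for _ in range(d)]
        speed = measure_speed([p + s * e for p, s, e in zip(params, signs, eps)])
        candidates.append((signs, speed))

    # Per dimension, compare average speed of the +eps, 0, and -eps groups.
    grad = []
    for i in range(d):
        groups = {+1: [], 0: [], -1: []}
        for signs, speed in candidates:
            groups[signs[i]].append(speed)
        avg = {k: sum(v) / len(v) if v else 0.0 for k, v in groups.items()}
        # Zero step if leaving the dimension alone looks best.
        grad.append(0.0 if avg[0] >= max(avg[+1], avg[-1])
                    else avg[+1] - avg[-1])

    # Normalize the gradient estimate and take a fixed-size step along it.
    norm = sum(g * g for g in grad) ** 0.5 or 1.0
    return [p + lr * (g / norm) * e for p, g, e in zip(params, grad, eps)]
```

Because every candidate is evaluated on the physical robot, the loop optimizes exactly what matters (real walking speed) rather than a simulator's approximation of it.
-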
An Ethical Framework for Smart Robots Mika Westerlund
Mika Westerlund.
"Never underestimate a droid." (Leia Organa, Star Wars: The Rise of Skywalker)
This article focuses on "roboethics" in the age of growing adoption of smart robots, which can now be seen as a new robotic "species". As autonomous AI systems, they can collaborate with humans and are capable of learning from their operating environment, experiences, and human behaviour feedback in human-machine interaction. This enables smart robots to improve their performance and capabilities. This conceptual article reviews key perspectives on roboethics, as well as establishes a framework to illustrate its main ideas and features. Building on previous literature, roboethics has four major types of implications for smart robots: 1) smart robots as amoral and passive tools, 2) smart robots as recipients of ethical behaviour in society, 3) smart robots as moral and active agents, and 4) smart robots as ethical impact-makers in society. The study contributes to current literature by suggesting that there are two underlying ethical and moral dimensions behind these perspectives, namely the "ethical agency of smart robots" and "object of moral judgment", as well as what this could look like as smart robots become more widespread in society. The article concludes by suggesting how scientists and smart robot designers can benefit from a framework, discussing the limitations of the present study, and proposing avenues for future research.
Introduction: Robots are becoming increasingly prevalent in our daily, social, and professional lives, performing various work and household tasks, as well as operating driverless vehicles and public transportation systems (Leenes et al., 2017). ... capabilities (Lichocki et al., 2011; Petersen, 2007). Hence, Lin et al. (2011) define a "robot" as an engineered machine that senses, thinks, and acts, thus being able to process information from sensors and other sources, such as an internal set of rules, either programmed or learned, that enables the machine to ...
-
Nudging for Good: Robots and the Ethical Appropriateness of Nurturing Empathy and Charitable Behavior
Nudging for Good: Robots and the Ethical Appropriateness of Nurturing Empathy and Charitable Behavior Jason Borenstein* and Ron Arkin** Predictions are being commonly voiced about how robots are going to become an increasingly prominent feature of our day-to-day lives. Beyond the military and industrial sectors, they are in the process of being created to function as butlers, nannies, housekeepers, and even as companions (Wallach and Allen 2009). The design of these robotic technologies and their use in these roles raises numerous ethical issues. Indeed, entire conferences and books are now devoted to the subject (Lin et al. 2014).1 One particular under-examined aspect of human-robot interaction that requires systematic analysis is whether to allow robots to influence a user’s behavior for that person’s own good. However, an even more controversial practice is on the horizon and warrants attention, which is the ethical acceptability of allowing a robot to “nudge” a user’s behavior for the good of society. For the purposes of this paper, we will examine the feasibility of creating robots that would seek to nurture a user’s empathy towards other human beings. We specifically draw attention to whether it would be ethically appropriate for roboticists to pursue this type of design pathway. In our prior work, we examined the ethical aspects of encoding Rawls’ Theory of Social Justice into robots in order to encourage users to act more socially just towards other humans (Borenstein and Arkin 2016). Here, we primarily limit the focus to the performance of charitable acts, which could shed light on a range of socially just actions that a robot could potentially elicit from a user and what the associated ethical concerns may be.