Dynamic Visual Servo Control of Robots: An Adaptive Image-Based Approach

Total Pages: 16

File Type: PDF, Size: 1020 KB

Dynamic Visual Servo Control of Robots: An Adaptive Image-Based Approach
Lee Elliot Weiss
CMU-RI-TR-84-16
Department of Electrical and Computer Engineering, The Robotics Institute
Carnegie-Mellon University, Pittsburgh, Pennsylvania 15213
April 1984
Copyright © 1984 Carnegie-Mellon University

Table of Contents

1. Introduction
  1.1. Overview
  1.2. Introduction to Visual Servo Control
  1.3. Outline of Dissertation
    1.3.1. Dissertation Goals
    1.3.2. Design Approach
    1.3.3. Outline of Dissertation
2. Visual Servo Control
  2.1. Overview
  2.2. Robot Kinematics and Dynamics
    2.2.1. Kinematics
    2.2.2. Rigid Body Dynamics
  2.3. Control
    2.3.1. Joint Level Control
      2.3.1.1. Feed-Forward Open-Loop Control
      2.3.1.2. Fixed Linear Servo Control
      2.3.1.3. Adaptive Feedback Control
    2.3.2. Kinematic Control
      2.3.2.1. Incremental Motion Control
  2.4. Visual Servo Control Structures
    2.4.1. Computer Vision
    2.4.2. Position-Based Visual Control Structures
      2.4.2.1. Static Position-Based "Look and Move"
      2.4.2.2. Dynamic Position-Based "Look and Move"
      2.4.2.3. Position-Based Visual Servoing
    2.4.3. Image-Based Visual Control Structures
      2.4.3.1. Dynamic Image-Based "Look and Move"
      2.4.3.2. Image-Based Visual Servoing
  2.5. Control of Image-Based Systems
    2.5.1. Control Approach
    2.5.2. Feature Selection and Assignment
      2.5.2.1. Feature Assignment Using Diagonal Dominance
      2.5.2.2. Feature Selection
    2.5.3. Path Constraint
  2.6. Summary
3. Model Reference Adaptive Control
  3.1. Overview
  3.2. MRAC for Systems Without Measurement Delay
  3.3. MRAC Control With Measurement Delay
    3.3.1. MRAC Controller with Delay
    3.3.2. Control Penalty
    3.3.3. Reference Model
  3.4. Signal Biasing and Gravity Compensation
    3.4.1. Signal Biasing
    3.4.2. Gravity Compensation
  3.5. Linear Control Interpretation
    3.5.1. FP MRAC
    3.5.2. SP MRAC
  3.6. Parameter Specification
    3.6.1. Model Time Constant Specification
    3.6.2. Sampling Period Selection
      3.6.2.1. Fundamental Upper Bound
      3.6.2.2. Performance Margin
      3.6.2.3. MRAC Identifier Requirements
    3.6.3. Adaptive Gain Matrix Initialization and Limiting
    3.6.4. Parameter Vector Initialization
    3.6.5. Weighting Factor
    3.6.6. Control Penalty
  3.7. Summary
4. Task Modeling and Simulation
  4.1. Overview
  4.2. Task Configurations
    4.2.1. Configuration 1: 2 DOF with linear kinematics and dynamics
    4.2.2. Configuration 2: 2 DOF with nonlinear kinematics and dynamics
    4.2.3. Configuration 3: 3 DOF with nonlinear kinematics, and linear and uncoupled dynamics
  4.3. Plant Modeling for Translational and Rotational Stages
    4.3.1. Rotational Stage
    4.3.2. X-Y Stages
    4.3.3. X-Y-Theta Stage Parameters
    4.3.4. Computer Simulation of Uncoupled Linear Models
  4.4. Plant Modeling for 2 DOF Robot Arm
    4.4.1. Kinematic Equations
    4.4.2. Rigid Body Dynamics
    4.4.3. Actuator Dynamics
    4.4.4. Robot Arm Parameters
      4.4.4.1. Motor 2 Selection
      4.4.4.2. Motor 1 Selection
    4.4.5. Computer Simulation of the Robot Arm
      4.4.5.1. Linear Model
  4.5. Modeling Picture Taking and Feature Measurement
    4.5.1. Image Transformations
    4.5.2. Feature Transformations
      4.5.2.1. Two-Dimensional Line
      4.5.2.2. Polyhedral Objects
      4.5.2.3. Noise Signal Modeling
  4.6. Summary
5. Evaluation: One DOF Systems
  5.1. Overview
  5.2. Baseline MRAC
    5.2.1. MRAC Without Measurement Delay
    5.2.2. MRAC With Measurement Delay
  5.3. Linear IBVS
    5.3.1. Noise Performance: Fixed vs. Adaptive Control
  5.4. Nonlinear IBVS
    5.4.1. Adaptive Control vs. Fixed
  5.5. Summary
6. Evaluation: Two DOF Systems
  6.1. Overview
  6.2. Linear Kinematics and Dynamics
    6.2.1. Sensitivity Analysis
      6.2.1.1. Predicted Path
      6.2.1.2. Feature/Joint Assignment
      6.2.1.3. Path Constraint
    6.2.2. Control Evaluation
    6.2.3. Independent Adaptive Control Problems
      6.2.3.1. Sign-Sensitivity
      6.2.3.2. Feature/Joint Reassignment
  6.3. Nonlinear Kinematics and Dynamics
    6.3.1. Control of Nonlinear Dynamics
      6.3.1.1. Joint-Level Control Examples
    6.3.2. Sensitivity Analysis
      6.3.2.1. Feature/Joint Assignment
      6.3.2.2. Path Constraint
    6.3.3. IBVS Evaluation
      6.3.3.1. Small Motion Trajectories
      6.3.3.2. Large Motion Travel
      6.3.3.3. Feature/Joint Reassignment
  6.4. Summary
7. Evaluation: Three DOF
  7.1. Overview
  7.2. Candidate Control Features
    7.2.1. Sensitivity Measurements
    7.2.2. Feature/Joint Assignment
  7.3. Small Signal Evaluation
  7.4. Large Motion Evaluation
  7.5. Summary
8. Conclusions
  8.1. Overview
  8.2. Conclusions
  8.3. Suggestions for Future Research
    8.3.1. MIMO vs. SISO Adaptive Control
      8.3.1.1. On-Line Sensitivity Estimation
    8.3.2. Implementation
      8.3.2.1. Robot Mechanical Structure
      8.3.2.2. Sensor Resolution
      8.3.2.3. Extension to Higher DOF
      8.3.2.4. Position vs. Image-Based Control
Appendix A. MRAC Without Measurement Delay
  A.1. Reference Model
  A.2. Plant Model Structure
  A.3. Controller
  A.4. Adjustment Mechanism
  A.5. Enhancements
Appendix B. Dynamic Equations for 2 DOF Arm
Appendix C. 2 DOF Controller Parameters
Appendix D. 3 DOF Controller Parameters

List of Figures

Figure 1-1: Robot Acquiring Object Using Computer Vision
Figure 1-2: Robot Assembling PC Boards Using Computer Vision
Figure 1-3: Basic Robot Positioning Control
Figure 1-4: Computer Vision
Figure 1-5: Static "Look and Move" Control
Figure 1-6: Dynamic Visual Servo Control
Figure 1-7: Features Change with Motion
Figure 2-1: Robot Block Diagram
Figure 2-2: Robot Control Block Diagram
Figure 2-3: Self-Tuning Regulator
Figure 2-4: Model Reference Adaptive Control
Figure 2-5: Open-Loop Kinematic Control
Figure 2-6: Incremental Robot System
Figure 2-7: Robotic Visual Servo Control
Figure 2-8: Computer Based Vision System
Figure 2-9: Static Position-Based "Look & Move"
Figure 2-10: Dynamic Position-Based "Look & Move"
Figure 2-11: Position-Based Visual Servoing
Figure 2-12: Dynamic Image-Based "Look & Move"
Figure 2-13: Image-Based Visual Servoing
Figure 2-14: Small-Signal Image-Based "Look and Move"
Figure 2-15: Small-Signal Image-Based Visual Servoing
Figure 2-16: MRAC Control of an IBVS System
Figure 2-17: Equivalent IBVS Control
Figure 2-18: Control Hierarchy
Figure 2-19: Hierarchical Control Example
Figure 2-20: Visual Based Control Structures
Figure 3-1: Analog Process Under MRAC Control
Figure 3-2: MRAC with Measurement Delay
Figure 3-3: Reference Model With Delays
Figure 3-4: Reference Model Root-Locus with One Delay
Figure 3-5: FP LMFC Control
Figure 3-6: FP Root Locus vs. n
Figure 3-7: SP LMFC Control
Figure 3-8: SP Root Locus vs. n
Figure 4-1: Configuration 1
Figure 4-2: Configuration 2
Figure 4-3: Configuration 3
Figure 4-4: Rotational Stage Model
Figure 4-5: Translational Stage Model
Figure 4-6: Rotational Plant Block Diagram
Figure 4-7: Translational Plant Block Diagram
Figure 4-8: 2 DOF Robot Arm
Figure 4-9: Arm Coordinate Frames
Figure 4-10: Robot Actuator
Figure 4-11: Arm Simulation Block Diagram
Figure 4-12: Line-Image
Figure 4-13: Objects
Figure 4-14: Image Planes
Figure 4-15: Scaled Distribution
Figure 5-1: SP MRAC Without Measurement Delay
Figure 5-2: SP MRAC With Measurement Delay
Figure 5-3: Linear Time-Invariant Tasks
Figure 5-4: Linear Autonomous IBVS
Figure 5-5: IBVS with Control Signal Saturation
Figure 5-6: MRAC vs. LMFC Noise Performance (SNR = 44 dB)
Figure 5-7: MRAC vs. LMFC Noise Performance (SNR = 30 dB)
Figure 5-8: Error Index vs. SNR
Figure 5-9: 1 DOF Nonlinear Tasks
Figure 5-10: Task D Feature Characteristics
Figure 5-11: IBVS With Nonlinear Sensitivity
Figure 5-12: Sensitivity Change for Task C
Figure 5-13: IBVS With Large Gain Changes
Figure 5-14: IBVS With Varying Sampling-to-Bandwidth Ratios
Figure 5-15: Fixed vs. …
Recommended publications
  • 6D Image-Based Visual Servoing for Robot Manipulators with Uncalibrated Stereo Cameras
6D Image-based Visual Servoing for Robot Manipulators with Uncalibrated Stereo Cameras
Caixia Cai, Emmanuel Dean-León, Nikhil Somani, Alois Knoll
Abstract: This paper introduces 6 new image features to provide a solution to the open problem of uncalibrated 6D image-based visual servoing for robot manipulators, where the goal is to control the 3D position and orientation of the robot end-effector using visual feedback. One of the main contributions of this article is a novel stereo camera model which employs virtual orthogonal cameras to map 6D Cartesian poses defined in the Task space to 6D visual poses defined in a Virtual Visual space (Image space). This new model is used to compute a full-rank square Image Jacobian matrix (J_img), which solves several common problems exhibited by the classical image Jacobians, e.g., Image space singularities and local minima. This Jacobian is a fundamental key for the image-based controller design, where a chattering-free adaptive second-order sliding mode is employed to track 6D visual motions for …
A. Related work
An IBVS usually employs the image Jacobian matrix (J_img) to relate end-effector velocities in the manipulator's Task space to the feature parameter velocities in the feature (image) space. A full and comprehensive survey on Visual Servoing and image Jacobian definitions can be found in [1], [3], [4] and more recently in [5]. In general, the classical image Jacobian is defined using a set of image feature measurements (usually denoted by s) and it describes how image features change when the robot manipulator pose changes: ṡ = J_img · v. In Visual Servoing the image Jacobian needs to be calculated or estimated.
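The closing relation, ṡ = J_img · v, is the workhorse of every image-based scheme. As a point of reference, here is a minimal numpy sketch of the classical point-feature interaction matrix and the plain pseudo-inverse law v = -λ J⁺ (s - s*), assuming normalized pinhole coordinates and known point depths Z. It illustrates the classical Jacobian this paper improves upon, not the authors' virtual-orthogonal-camera model.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Classical 2x6 interaction (image Jacobian) matrix for one
    normalized image point (x, y) at depth Z, mapping the camera
    twist [vx, vy, vz, wx, wy, wz] to the feature velocity."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,     -(1 + x * x),  y],
        [0.0,     -1.0 / Z,  y / Z, 1 + y * y, -x * y,       -x],
    ])

def ibvs_velocity(s, s_star, Z, lam=0.5):
    """One IBVS step: v = -lam * pinv(J_img) @ (s - s_star), where
    s and s_star are lists of (x, y) current/desired point features."""
    J = np.vstack([point_interaction_matrix(x, y, z)
                   for (x, y), z in zip(s, Z)])
    e = (np.asarray(s) - np.asarray(s_star)).ravel()
    return -lam * np.linalg.pinv(J) @ e

# Example: four points at 1 m depth, goal slightly shifted in x.
s      = [(0.10, 0.1), (-0.10, 0.1), (-0.10, -0.1), (0.10, -0.1)]
s_star = [(0.12, 0.1), (-0.08, 0.1), (-0.08, -0.1), (0.12, -0.1)]
print(ibvs_velocity(s, s_star, Z=[1.0] * 4))  # 6-vector camera twist
```

With four points the stacked Jacobian is 8x6 and only pseudo-invertible; the paper's contribution is precisely a square, full-rank alternative.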
  • Pipeline Following by Visual Servoing for Autonomous Underwater Vehicles
Pipeline following by visual servoing for Autonomous Underwater Vehicles
Guillaume Allibert (a, d), Minh-Duc Hua (a), Szymon Krupínski (b), Tarek Hamel (a, c)
(a) University of Côte d'Azur, CNRS, I3S, France. Emails: allibert(thamel; hua)@i3s.unice.fr. (b) Cybernetix, Marseille, France. Email: szymon.krupinski@cybernetix.fr. (c) Institut Universitaire de France, France. (d) Corresponding author.
Abstract: A nonlinear image-based visual servo control approach for pipeline following of fully-actuated Autonomous Underwater Vehicles (AUV) is proposed. It makes use of the binormalized Plücker coordinates of the pipeline borders detected in the image plane as feedback information, while the system dynamics are exploited in a cascade manner in the control design. Unlike conventional solutions that consider only the system kinematics, the proposed control scheme accounts for the full system dynamics in order to obtain an enlarged provable stability domain. Control robustness with respect to model uncertainties and external disturbances is reinforced using integral corrections. Robustness and efficiency of the proposed approach are illustrated via both realistic simulations and experimental results on a real AUV.
Keywords: AUV, pipeline following, visual servoing, nonlinear control
1. Introduction
Underwater pipelines are widely used for transportation of oil, gas or other fluids from production sites to distribution sites. Laid down on the ocean floor, they are often subject to extreme conditions (temperature, pressure, humidity, sea current, vibrations, …). […] control (Repoulias and Papadopoulos, 2007; Aguiar and Pascoal, 2007; Antonelli, 2007) and Lyapunov model-based control (Refsnes et al., 2008; Smallwood and Whitcomb, 2004) mostly concern the pre-programmed trajectory tracking problem with little regard to the local topography of the environment.
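A note on the feature used here: the binormalized Plücker coordinates of a 3D line are its unit direction plus the unit moment (the normal of the plane through the line and the camera center), and that unit moment is exactly what a detected image line constrains. A small sketch under a strong simplifying assumption: two ground-truth 3D points on the pipeline border, given in the camera frame, stand in for the image-plane detection.

```python
import numpy as np

def binormalized_plucker(p1, p2):
    """Binormalized Plucker coordinates of the 3D line through points
    p1 and p2 (camera frame): unit direction u, unit moment h_bar
    (normal of the interpretation plane), and distance d of the line
    to the origin. Assumes the line does not pass through the origin."""
    u = (p2 - p1) / np.linalg.norm(p2 - p1)
    h = np.cross(p1, u)      # moment vector; its norm is the distance
    d = np.linalg.norm(h)
    return u, h / d, d

p1 = np.array([0.5, -0.2, 3.0])  # hypothetical border points (meters)
p2 = np.array([0.5,  0.4, 6.0])
u, h_bar, d = binormalized_plucker(p1, p2)
print(u, h_bar, d)
```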
• arXiv:1902.05947v1 [cs.CV] 18 Feb 2019
DIViS: Domain Invariant Visual Servoing for Collision-Free Goal Reaching
Fereshteh Sadeghi, University of Washington
Figure 1: Domain Invariant Visual Servoing (DIViS) learns collision-free goal reaching entirely in simulation using dense multi-step rollouts and a recurrent fully convolutional neural network (bottom). DIViS can directly be deployed on real physical robots with RGB cameras for servoing to visually indicated goals as well as semantic object categories (top).
Figure 2: (a) The classic 1995 visual servoing robot [46, 15]. The image at final position (b) was given as the goal and the robot was started from an initial view of (c).
Abstract: Robots should understand both semantics and physics to be functional in the real world. While robot platforms provide means for interacting with the physical world, they cannot autonomously acquire object-level semantics without needing a human. In this paper, we investigate how to minimize human effort and intervention to teach robots to perform real-world tasks that incorporate semantics. We study this question in the context of visual servoing of mobile robots and propose DIViS, a Domain Invariant policy learning approach for collision-free Visual Servoing. DIViS incorporates high-level semantics from previously collected static human-labeled datasets and learns collision-free servoing entirely in simulation and without any real robot data.
1. Introduction
Perception and mobility are the two key capabilities that enable animals and humans to perform complex tasks such …
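To make the Figure 1 description concrete, the following toy policy mirrors its ingredients: the current frame stacked with a goal image, convolutional feature extraction, and a recurrent cell carried across servoing steps. Every layer size, the action count, and the class name are invented for illustration; this is not the DIViS architecture or its training setup.

```python
import torch
import torch.nn as nn

class TinyServoPolicy(nn.Module):
    """Toy recurrent convolutional servo policy: stacks the current
    RGB frame with a goal image along channels, extracts convolutional
    features, and carries an LSTM state across time steps."""
    def __init__(self, n_actions=7, hidden=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global spatial pooling
        )
        self.rnn = nn.LSTMCell(32, hidden)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, frame, goal, state=None):
        x = torch.cat([frame, goal], dim=1)   # channel-wise stack
        z = self.features(x).flatten(1)
        h, c = self.rnn(z, state)
        return self.head(h), (h, c)           # action logits, new state

policy = TinyServoPolicy()
frame = torch.zeros(1, 3, 64, 64)             # placeholder observation
goal = torch.zeros(1, 3, 64, 64)               # placeholder goal image
logits, state = policy(frame, goal)
```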
  • Acknowledgements Acknowl
B.21 Actuators for Soft Robotics, by Alin Albu-Schäffer, Antonio Bicchi: The authors of this chapter have used liberally of work done by a group of collaborators involved in the EU projects PHRIENDS, VIACTORS, and SAPHARI. We want to particularly thank Etienne Burdet, Federico Carpi, Manuel Catalano, Manolo Garabini, Giorgio Grioli, Sami Haddadin, Dominic Lacatos, Can Özparpucu, Florian Petit, Joshua Schultz, Nikos Tsagarakis, Bram Vanderborght, and Sebastian Wolf for their substantial contributions to this chapter and the work behind it.
C.29 Inertial Sensing, GPS and Odometry, by Gregory Dudek, Michael Jenkin: We would like to thank Sarah Jenkin for her help with the figures.
D.36 Motion for Manipulation Tasks, by James Kuffner, Jing Xiao: We acknowledge the contribution that the authors of the first edition made to this chapter revision, particularly Sect. …
F.58 Robotics in Hazardous Applications, by James Trevelyan, William Hamel, Sung-Chul Kang: James Trevelyan acknowledges Surya Singh for detailed suggestions on the original draft, and would also like to thank the many unnamed mine clearance experts who have provided guidance and comments over many years, as well as Prof. S. Hirose, Scanjack, Way Industry, Japan Atomic Energy Agency, and Total Marine Systems for providing photographs. William R. Hamel would like to acknowledge the US Department of Energy's Robotics Crosscutting Program and all of his colleagues at the national laboratories and universities for many years of dealing with remote hazardous operations, and all of his collaborators at the Field Robotics Center at Carnegie Mellon University, particularly James Osborn, who were pivotal in developing ideas for future telerobots. Sungchul Kang acknowledges Changhyun Cho, Woosub Lee, Dongsuk Ryu at KIST (Korean Institute for Science and Technology), Korea for their provid…
  • Real-Time Vision, Tracking and Control
Proceedings of the 2000 IEEE International Conference on Robotics & Automation, San Francisco, CA, April 2000
Real-Time Vision, Tracking and Control
Peter I. Corke (CSIRO Manufacturing Science & Technology, Pinjarra Hills, Australia 4069) and Seth A. Hutchinson (Beckman Institute for Advanced Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA 61801)
Abstract: This paper, which serves as an introduction to the mini-symposium on Real-Time Vision, Tracking and Control, provides a broad sketch of visual servoing, the application of real-time vision, tracking and control for robot guidance. It outlines the basic theoretical approaches to the problem, describes a typical architecture, and discusses major milestones, applications and the significant vision sub-problems that must be solved.
1. Introduction
Visual servoing is a maturing approach to the control of robots in which tasks are defined visually, rather than in terms of previously taught Cartesian coordinates. It can be considered the fusion of computer vision, robotics and control and has been a distinct field for over 10 years, though the earliest work dates back close to 20 years. Over this period several major, and well understood, approaches have evolved and been demonstrated in many laboratories around the world. Fairly comprehensive overviews of the basic approaches, current applications, and open research issues can be found in a number of recent sources, including [1-4]. The next section, Section 2, describes three basic approaches to visual servoing. Section 3 provides a 'walk around' the main functional blocks in a typical visual servoing system. Some major milestones and proposed applications are discussed in Section 4. Section 5 then expands on the various vision sub-problems that must be solved for the different approaches to visual servoing.
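The basic approaches the paper walks through differ mainly in which space the error signal lives in. A schematic sketch of the two pure cases, with pose estimation and feature tracking stubbed out (gains, shapes, and stub values are illustrative only):

```python
import numpy as np

lam = 0.5  # proportional servo gain shared by both schemes

def pbvs_step(pose_error):
    """Position-based: the error lives in Cartesian/pose space, so a
    3D pose must first be estimated from the image (camera calibration
    and usually an object model are required)."""
    t_err, rot_err = pose_error          # translation and axis-angle errors
    return -lam * np.hstack([t_err, rot_err])

def ibvs_step(s, s_star, J_img):
    """Image-based: the error lives directly in image-feature space;
    only an (approximate) image Jacobian is needed, no explicit pose."""
    return -lam * np.linalg.pinv(J_img) @ (s - s_star)

# Stub values just to show the calling conventions.
twist_pbvs = pbvs_step((np.array([0.0, 0.0, 0.1]), np.zeros(3)))
twist_ibvs = ibvs_step(np.array([0.10, 0.1]), np.array([0.12, 0.1]),
                       np.random.default_rng(0).normal(size=(2, 6)))
print(twist_pbvs, twist_ibvs)
```

Hybrid schemes combine the two error types, trading calibration sensitivity against predictable image trajectories.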
  • A Self-Triggered Position Based Visual Servoing Model Predictive Control Scheme for Underwater Robotic Vehicles †
machines (Article)
A Self-triggered Position Based Visual Servoing Model Predictive Control Scheme for Underwater Robotic Vehicles†
Shahab Heshmati-alamdari (1), Alina Eqtami (2), George C. Karras (3, 4), Dimos V. Dimarogonas (1) and Kostas J. Kyriakopoulos (4, *)
1 Division of Decision and Control Systems, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden; [email protected] (S.H.-a.); [email protected] (D.V.D.)
2 Laboratoire des Signaux et Systèmes (L2S) CNRS, CentraleSupélec, Université Paris-Sud, Université Paris-Saclay, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette cedex, France; [email protected]
3 Dept. of Computer Science and Telecommunications, University of Thessaly, 3rd Km Old National Road Lamia-Athens, 35100 Lamia, Greece; [email protected]
4 Control Systems Laboratory, School of Mechanical Engineering, National Technical University of Athens, 15780 Athens, Greece
* Correspondence: [email protected]
† This paper is an extended version of our paper published in: Shahab Heshmati-Alamdari, Alina Eqtami, George C. Karras, Dimos V. Dimarogonas, and Kostas J. Kyriakopoulos, "A Self-triggered Visual Servoing Model Predictive Control Scheme for Under-actuated Underwater Robotic Vehicles," in Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May-7 June 2014.
Received: 1 May 2020; Accepted: 7 June 2020; Published: 11 June 2020
Abstract: An efficient position based visual servoing control approach for Autonomous Underwater Vehicles (AUVs) employing Non-linear Model Predictive Control (N-MPC) is designed and presented in this work. In the proposed scheme, a mechanism is incorporated within the vision-based controller that determines when the Visual Tracking Algorithm (VTA) should be activated and new control inputs should be calculated.
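The self-triggering idea can be shown in a few lines: run the expensive vision update only at triggering instants and hold the last input in between. The triggering rule below, where the hold time grows as the error shrinks, is a crude stand-in for the paper's N-MPC condition, and the first-order error dynamics are a toy plant; none of this is the authors' actual scheme.

```python
import numpy as np

def self_triggered_servo(err0=np.array([0.4, -0.3, 0.2]),
                         lam=0.8, dt=0.05, steps=200, tol=1e-3):
    """Toy self-triggered servo loop: the vision pipeline (pose
    estimation via the VTA) runs only at triggering instants; between
    triggers the last control input is held."""
    e, triggers, k = err0.astype(float), 0, 0
    while k < steps and np.linalg.norm(e) > tol:
        triggers += 1                     # <-- VTA activated here
        u = -lam * e                      # new control input
        hold = max(1, int(0.5 / (np.linalg.norm(e) + 0.05)))
        for _ in range(hold):             # hold u, no new measurements
            e = e + dt * u                # first-order error dynamics
            k += 1
            if k >= steps:
                break
    return triggers, k

triggers, total = self_triggered_servo()
print(f"{triggers} vision updates over {total} control steps")
```

The payoff is visible in the printed counts: far fewer vision updates than control steps, which is the point of self-triggering on a computationally constrained vehicle.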
  • Robot Systems Integration
Transformative Research and Robotics
Kazuhiro Kosuge, Distinguished Professor, Department of Robotics, Tohoku University
2020 IEEE Vice President-elect for Technical Activities; IEEE Fellow, JSME Fellow, SICE Fellow, RSJ Fellow, JSAE Fellow
My brief history
• March 1978: Bachelor of Engineering, Department of Control Engineering, Tokyo Institute of Technology
• March 1980: Master of Engineering, Department of Control Engineering, Tokyo Institute of Technology
• April 1980: Research staff, Department of Production Engineering, Nippondenso (Denso) Corporation
• October 1982: Research Associate, Tokyo Institute of Technology
• July 1988: Dr. of Engineering, Tokyo Institute of Technology
• September 1989 - August 1990: Visiting Research Scientist, Department of Mechanical Engineering, Massachusetts Institute of Technology
• September 1990: Associate Professor, Faculty of Engineering, Nagoya University
• March 1995: Professor, School of Engineering, Tohoku University
• April 1997: Professor, Graduate School of Engineering, Tohoku University
• December 2018: Tohoku University Distinguished Professor
My another …
  • IEEE Transactions on Robotics (T-RO) Editorial Board Listing
IEEE Transactions on Robotics (T-RO) Editorial Board Listing
Status as of February 1, 2006

Senior Editors

Dr. Hirohiko Arai, Editor
Intelligent Systems Institute, National Institute of Advanced Industrial Science & Technology (AIST), Tsukuba East, 1-2-1 Namiki, Tsukuba, Ibaraki 305-8564, Japan
P: +81 298 61 7088; F: +81 298 61 7201; E: [email protected]
Primary areas: Robot Control, Manipulators, Dynamics, Kinematics, Force/Impedance/Compliance, Underactuated Mechanisms, Nonholonomic Systems
Secondary areas: Human-Robot Cooperation, Humanoids, Walking Robots, Telerobotics, Industrial Robots

Professor George A. Bekey, Founding Editor
Department of Computer Science, University of Southern California, Los Angeles, CA 90089-0782, USA
P: +1 213 740 4501; F: +1 213 740 7512; E: [email protected]
Primary areas: Mobile Robots, Autonomy, Walking Machines, Robot Intelligence, Multi-Robot Cooperation
Secondary areas: Medical Robotics and Personal Robots

Professor Alessandro De Luca, Editor-in-Chief
Dipartimento di Informatica e Sistemistica, Università di Roma "La Sapienza", Via Eudossiana 18, 00184 Roma, Italy
P: +39 06 44585 371; F: +39 06 44585 367; E: [email protected]; U: http://www.dis.uniroma1.it/labrob
Primary areas: Learning, Force/Impedance/Impact Control, Flexible Manipulators, Nonholonomic Robots, Nonlinear Control, Redundant Manipulators, Underactuated Mechanisms
Secondary areas: Kinematics, Dynamics, Mobile Robots, Motion/Path Planning, Obstacle Avoidance, Space Robots

Professor Peter B. Luh, Editor Emeritus
Department …
• Visual Servoing for Robotic Assembly, by Brad Nelson, N.P. Papanikolopoulos, and P.K. Khosla
In Visual Servoing-Real-Time Control of Robot Manipulators Based on Visual Sensory Feedback, ed. K. Hashimoto, World Scientific Publishing Co. Pte. Ltd., River Edge, NJ, pp. 139-164, 1993.
Visual Servoing for Robotic Assembly
Brad Nelson, N.P. Papanikolopoulos, and P.K. Khosla
The Robotics Institute, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213
Abstract: Visual feedback has traditionally been used in the assembly process to a very limited extent. With the advent of effective visual servoing techniques, visual feedback can become an integral part of the assembly process by complementing the use of force feedback to accomplish precision assemblies in imprecisely calibrated robotic assembly workcells. In this paper we present some of the issues pertaining to the introduction of visual servoing techniques into the assembly process and solutions we have demonstrated to these problems.
1. Introduction
Geometric modeling systems are rapidly replacing manual drafting techniques for defining the geometry of mechanical parts and assemblies during the design process. This makes possible the development of software tools to aid the manufacturing engineer in setting up assembly workcells, thus integrating design and manufacturing. Robotic assembly workcells have been created which can be automatically programmed, including the automatic generation of real-time code, to perform mechanical assemblies based on geometric models of the assembly [9, 15]. An important benefit of these systems is that by automatically generating assembly plans and real-time code for performing mechanical assemblies, assembly workcell setup times can be drastically reduced. When programming robotic assembly workcells automatically, however, the accuracy as well as the repeatability of the robots' motion must be extremely high, so that parts that are to be assembled can be placed at positions and orientations within thousandths of an inch of their desired position and orientation.
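The vision/force complement described in the abstract is often realized as a guarded move: vision drives the approach while the part is free of contact, and force feedback takes over once contact is made. A generic sketch of that pattern, not taken from this paper; the threshold, gains, and desired contact force are invented for illustration.

```python
import numpy as np

def assembly_step(s_err, f_meas, contact_thresh=0.5,
                  k_v=0.4, k_f=0.02, f_des=2.0):
    """Guarded-move sketch: image-space correction in free space,
    force regulation once measured contact force exceeds a threshold.
    s_err: image-feature error; f_meas: measured force (N)."""
    if np.linalg.norm(f_meas) < contact_thresh:
        return -k_v * s_err                           # vision-guided approach
    return -k_f * (f_meas - np.array([0.0, 0.0, f_des]))  # force-guided insertion

print(assembly_step(np.array([0.02, -0.01, 0.0]), np.zeros(3)))
print(assembly_step(np.zeros(3), np.array([0.1, 0.0, 4.0])))
```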
• ViSP for Visual Servoing: A Generic Software Platform with a Wide Class of Robot Control Skills
ViSP for visual servoing: a generic software platform with a wide class of robot control skills
E. Marchand, F. Spindler, François Chaumette
IEEE Robotics and Automation Magazine, Institute of Electrical and Electronics Engineers, 2005, 12 (4), pp. 40-52. HAL Id: inria-00351899, https://hal.inria.fr/inria-00351899, submitted on 12 Jan 2009.
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
ViSP for Visual Servoing: A Generic Software Platform with a Wide Class of Robot Control Skills
Several software packages or toolboxes written in various languages have been proposed in order to simulate robotic manipulator control. The MATLAB Robotics Toolbox [8] allows for the simple manipulation of serial-link manipulators; Roboop [14] is a manipulator simulation package (written in C++). […] the same philosophy as the XVision system [16] and would have been independent from the robot and the tracking algorithms. Following these precedents, ViSP (that is, Visual Servoing Platform), the software environment we present in this article, features all these capabilities: independence with respect to the …
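The independence claim is essentially an interface contract: the control law only needs current features, desired features, and an interaction matrix, so any tracker and any robot can be plugged in. A Python sketch of that contract with dummy components; all class and method names here are invented for illustration, and ViSP's actual C++ classes and signatures differ.

```python
import numpy as np

class ServoTask:
    """Servo law decoupled from both sensing and actuation: it sees
    only features, a desired feature vector, and an interaction
    matrix supplied by whatever tracker is plugged in."""
    def __init__(self, tracker, robot, s_star, lam=0.5):
        self.tracker, self.robot = tracker, robot
        self.s_star, self.lam = np.asarray(s_star, float), lam

    def step(self):
        s, J = self.tracker.features_and_jacobian()
        v = -self.lam * np.linalg.pinv(J) @ (s - self.s_star)
        self.robot.set_camera_velocity(v)   # any robot backend
        return np.linalg.norm(s - self.s_star)

class DummyTracker:                          # stands in for any feature tracker
    def features_and_jacobian(self):
        return np.array([0.1, 0.2]), np.eye(2)

class DummyRobot:                            # stands in for any robot interface
    def set_camera_velocity(self, v):
        print("commanded twist:", v)

ServoTask(DummyTracker(), DummyRobot(), s_star=[0.0, 0.0]).step()
```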
  • Project-Team LAGADIC Visual Servoing in Robotics, Computer Vision, and Augmented Reality
INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE
Project-Team LAGADIC: Visual servoing in robotics, computer vision, and augmented reality
Rennes - Bretagne-Atlantique. Theme: Robotics. Activity Report 2009
Table of contents
1. Team
2. Overall Objectives
3. Scientific Foundations
  3.1. Visual servoing
  3.2. Visual tracking
4. Application Domains
5. Software
  5.1. ViSP: a visual servoing platform
  5.2. Marker: Marker-based augmented reality kernel
  5.3. MarkerLess: MarkerLess-based augmented reality kernel
  5.4. Development work: Robot vision platforms
  5.5. Development work: Medical robotics platforms
  5.6. Development work: Cycab
6. New Results
  6.1. Visual servoing
    6.1.1. Visual features from a spherical projection model
    6.1.2. Photometric visual servoing
    6.1.3. Mutual information-based visual servoing
    6.1.4. Design of new control schemes
    6.1.5. Visual servoing for aircrafts
    6.1.6. Multi sensor-based control
    6.1.7. Visual servoing of non-holonomic mobile robots
    6.1.8. MEMS micro-assembly
  6.2. Medical robotics
    6.2.1. Ultrasound image-based visual servoing
    6.2.2. 3D robot registration from ultrasound images
    6.2.3. Autonomous control modes for ultrasound probe guidance
  6.3. Active vision
  6.4. Visual tracking
    6.4.1. Localization for augmented reality
    6.4.2. Robust model-based tracking for aircraft localization
    6.4.3. Robust tracking for controlling small helicopters
    6.4.4. Omnidirectional stereovision
    6.4.5. …
  • Assessment of Visual Servoing Techniques for Target Localization
Proceedings of the 6th International Conference of Control, Dynamic Systems, and Robotics (CDSR'19), Ottawa, Canada, June 6-7, 2019. Paper No. CDSR 110. DOI: 10.11159/cdsr19.110
Assessment of Visual Servoing Techniques for Target Localization
Khushali Saraiya, Dippal Israni (U & P. U. Patel Department of Computer Engineering, CSPIT, CHARUSAT, Changa, India; [email protected]; [email protected]) and Parth Goel (Computer Engineering Department, DEPSTAR, CHARUSAT, Changa, India; [email protected])
Abstract: Robotics is advancing rapidly in the research area of designing, controlling and building new robots for domestic, commercial and military applications. In addition to these, it also becomes easy to control objects in dangerous locations like bomb defusal, mines and exploring shipwrecks. Visual Servoing (VS) is a technique that controls a robot's motion from visual feedback, with applications in fields such as bionomics and security. In robotics, the most important part is to place cameras in the workspace such that each and every place is visible during the whole task. In Visual Servoing, the challenge is to achieve kinematics that makes it easy to pick and place objects. This paper presents an overall idea, state-of-the-art techniques and architecture for camera control techniques and robot control techniques. It also gives worked examples of forward and inverse kinematics using Matlab.
Keywords: Visual Servoing, Position Based Visual Servoing, Image-Based Visual Servoing, Kinematics.
1. Introduction
Robotics is gradually upgrading the manufacturing industry to engineer the environment so that it will simplify the detection and retrieval of items. Vision, used either for absolute position control or for visual servoing, is used as a solution in robot organization.
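Since the paper closes with Matlab kinematics examples, here is the equivalent planar two-link computation in Python; the link lengths are arbitrary, and the inverse solution shown is one of the two elbow configurations.

```python
import numpy as np

def fk_2link(q1, q2, l1=1.0, l2=0.8):
    """Forward kinematics of a planar 2-link arm: joint angles to
    end-effector position."""
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return x, y

def ik_2link(x, y, l1=1.0, l2=0.8):
    """Closed-form inverse kinematics (one of the two elbow
    configurations), via the law of cosines."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    q2 = np.arccos(np.clip(c2, -1.0, 1.0))
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2),
                                       l1 + l2 * np.cos(q2))
    return q1, q2

q = ik_2link(*fk_2link(0.3, 0.7))
print(np.round(q, 3))  # recovers the original joint angles (0.3, 0.7)
```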