
Visual Servoing for Mobile Robots Navigation with Collision Avoidance and Field-of-View Constraints

Wenhao Fu

To cite this version: Wenhao Fu. Visual servoing for mobile robots navigation with collision avoidance and field-of-view constraints. Automatic. Université Evry Val d'Essonne, 2014. English. tel-01413584

HAL Id: tel-01413584
https://hal.archives-ouvertes.fr/tel-01413584
Submitted on 9 Dec 2016

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Université Evry Val d'Essonne

THESIS
presented to obtain the degree of Doctor of the Université Evry Val d'Essonne
by Wenhao Fu

Host team: IRA2
Doctoral school: Sciences & Ingénierie
University component: Informatique, Biologie Intégrative et Systèmes Complexes

Thesis title: Visual Servoing for Mobile Robots Navigation with Collision Avoidance and Field-of-View Constraints

Defended on 18 April 2014 before the jury composed of:

Reviewers: Alain Pruski, Cédric Demonceaux
Examiners: Malik Mallem, Omar Ait Aider
Supervisor: Hicham Hadj-Abdelkader
Thesis director: Etienne Colle

Acknowledgements

I want to thank all the people who have accompanied me during my PhD research. Firstly, I am grateful to my advisor Etienne Colle for his help and support throughout my PhD. I also want to thank my supervisor Hicham Hadj-Abdelkader for advising my research work. I would also like to thank all the members of the IBISC laboratory, especially the IRA2 team: Malik, Samir, Frédéric, Jean-Yves, Christophe, Bruno, Paul, Antonio, Maxime Boucher, Jean-Clément, Maxime Jubert. I am also grateful to Professor Saïd Mammar for his help and support of my academic research activities throughout my PhD. I would also like to acknowledge Florant and Sabine, who assist our research work in the laboratory. Finally, thanks to my parents and friends for their care and support.

Abstract

This thesis is concerned with the problem of vision-based navigation for mobile robots in indoor environments. In recent years, many works have addressed navigation along a visual path, an approach known as appearance-based navigation. The visual path is generated during a training step. During navigation, the mobile robot follows the trained visual path by visual servoing from key image to key image. Compared with metric map-based navigation approaches, the appearance-based scheme avoids environment modeling, the loop-closing problem, and the computational cost of planning algorithms. However, with this scheme the robot motion is limited to the trained visual path. For safe navigation, the robot should be able to avoid obstacles during the navigation process. Avoiding a potential collision can make the robot deviate from the current visual path, so that the visual landmarks may be lost from the current field of view. To the best of our knowledge, few works consider collision avoidance and landmark loss within the framework of appearance-based navigation.
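Purely as an illustration of the appearance-based scheme summarized above, and not of the implementation developed in this thesis, the navigation loop can be sketched as follows; all helper functions (detect_and_match, visual_servoing_control, camera.grab, robot.send_velocity) are hypothetical placeholders.

```python
# Illustrative sketch only: follow a trained visual path by visual servoing
# from key image to key image. All helper functions are hypothetical.
def follow_visual_path(key_images, camera, robot, error_threshold=0.05):
    for key_image in key_images:
        while True:
            current = camera.grab()
            # Match visual landmarks between the current view and the key image
            # and compute the image-space error to be regulated towards zero.
            matched_features, error = detect_and_match(current, key_image)
            if error < error_threshold:
                break  # close enough: switch to the next key image of the path
            # Image-based visual servoing: map the feature error to the robot
            # linear and angular velocities (v, w).
            v, w = visual_servoing_control(matched_features, key_image)
            robot.send_velocity(v, w)
    robot.send_velocity(0.0, 0.0)  # stop at the end of the visual path
```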
We propose a mobile robot navigation framework that enhances the capability of the appearance-based method, in particular with respect to collision avoidance and field-of-view constraints. Our framework introduces several technical contributions. First, motion constraints are incorporated into the visual landmark detection in order to improve the detection performance. Then, we model the obstacle boundary with a B-Spline obtained by interpolating the convex polygonal chain of the boundary. Because it is built on the convex polygonal chain, the B-Spline representation contains no rugged regions and allows a smooth motion to be generated for the collision avoidance task. Additionally, we propose a vision-based control strategy that can deal with complete target loss. Finally, we use the spherical image to handle the ambiguities and projections at infinity caused by perspective projection. Real experiments demonstrate the feasibility and effectiveness of our framework and methods.

Contents

List of Figures
List of Tables

Synthèse en Français
  Introduction
  Objectif de la thèse
  Contributions
  Chapitre 1 Fonction de Sélection pour la Détection d'Objets
    1.1 Expérience pour la sélection d'entité
    1.2 Contrainte RANSAC pour Homographie Estimation
  Chapitre 2 Evitement Réactif Temps Réel d'Obstacles
    2.1 Introduction
    2.2 Détection d'Obstacles et Représentation
      2.2.1 Détection d'Obstacles
      2.2.2 La Représentation
    2.3 Approches d'évitement d'obstacles réactif
    2.4 Résultats Expérimentaux
  Chapitre 3 Contrôle basé sur la Vision pour les Robots Mobiles
    3.1 Asservissement Visuel pour Robot Mobile
    3.2 Navigation Basée sur l'Asservissement Visuel avec Perte Complète de Cible
      3.2.1 Enoncé du problème
      3.2.2 Cadre de Notre Système de Navigation
      3.2.3 Algorithme de génération de comportements
      3.2.4 Algorithme de génération de mouvements
      3.2.5 L'estimation des données visuelles en cas de perte de cible
      3.2.6 Résultats Expérimentaux
  Conclusion

Introduction
  Autonomous Mobile Robot Navigation
  Objective of the Thesis
  Contributions
  Overview

1 Feature Selection for Object Recognition and Visual Tracking
  1.1 Introduction
  1.2 Theoretical Bases
    1.2.1 Homogeneous Transformation
    1.2.2 Image Formation
    1.2.3 Homography Transformation
  1.3 Local Feature Detection and Description
    1.3.1 Local Feature Detectors
      1.3.1.1 Overview of the State of the Art
      1.3.1.2 Representative Feature Detectors
    1.3.2 Local Feature Descriptors
      1.3.2.1 Overview of the State of the Art
      1.3.2.2 Representative Feature Descriptors
  1.4 Matching
    1.4.1 Overview of the State of the Art
    1.4.2 Outlier Removal using RANSAC
  1.5 Feature Selection for Object Recognition
    1.5.1 Experimental Framework
    1.5.2 Experimental Results
    1.5.3 Constraint RANSAC for Homography Estimation
    1.5.4 Discussion
  1.6 Planar Object Tracking
    1.6.1 Overview of Tracking Approaches in the Literature
    1.6.2 Template-Based Tracking
    1.6.3 Our Visual Tracking System
  1.7 Conclusion

2 Real-time Reactive Obstacle Avoidance
  2.1 Introduction
  2.2 Overview of the State of the Art
    2.2.1 Obstacle Detection and Representation
    2.2.2 Obstacle Avoidance Approaches
  2.3 Obstacle Detection and Representation
    2.3.1 Obstacle Detection
    2.3.2 Obstacle Representation
      2.3.2.1 Polygonal Chain Representation
      2.3.2.2 Convex Polygonal Chain Representation
      2.3.2.3 B-Spline Representation
    2.3.3 Discussion
  2.4 Reactive Obstacle Avoidance Approaches
    2.4.1 Motion Modeling
    2.4.2 Obstacle Avoidance based on Path Following
      2.4.2.1 Formalism
      2.4.2.2 Simulation Results
    2.4.3 Potential Field Method (PFM)
      2.4.3.1 Formulation
      2.4.3.2 Simulation
      2.4.3.3 Discussion
  2.5 Experimental Results
  2.6 Conclusion

3 Visual Servo Control for Mobile Robots
  3.1 Introduction
  3.2 State of the Art
  3.3 Robot-Vision System Configuration
  3.4 Visual Servoing for Mobile Robot
    3.4.1 General Formulation of Visual Servoing
    3.4.2 Image-Based Visual Servoing (IBVS)
    3.4.3 Position-Based Visual Servoing (PBVS)
    3.4.4 2D 1/2 Visual Servoing (HVS)
  3.5 Visual Servoing Based Navigation with Complete Target Loss
    3.5.1 Problem Statement
    3.5.2 Navigation System
    3.5.3 Behavior Generation Algorithm
    3.5.4 Motion Generation Algorithm
    3.5.5 Visual Data Estimation in Case of Target Loss
      3.5.5.1 Using a Known Target
      3.5.5.2 Using an Unknown Planar Target through Homography Recovery
      3.5.5.3 Navigation Using Straight Line
    3.5.6 Experimental Results
  3.6 Spherical Image-Based Visual Servoing (SIBVS)
    3.6.1 Why Spherical Projection
    3.6.2 Spherical Projection Formulation
    3.6.3 Adjustment of SIBVS
      3.6.3.1 Coordinate Selection
      3.6.3.2 Symmetric versus Nonsymmetric Visual Point Position
    3.6.4 Spherical Visual Servoing for Mobile Robot
      3.6.4.1 System Modeling
      3.6.4.2 Control Law Using Constant Linear Velocity
      3.6.4.3 Control Law Scaling the Error with Different Values
      3.6.4.4 Control Law Scaling the Velocity with Different Values
      3.6.4.5 Switching Schemes
    3.6.5 Discussion
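The abstract mentions representing an obstacle boundary by a B-Spline interpolated through the convex polygonal chain of the boundary (Section 2.3.2 in the contents above). The following minimal sketch illustrates that idea under stated assumptions; it is not the implementation used in the thesis, it assumes NumPy and SciPy are available, and the vertex coordinates are invented for the example.

```python
# Minimal sketch: interpolate a closed cubic B-spline through the vertices of a
# convex polygonal chain approximating an obstacle boundary. Illustrative only;
# the vertex coordinates below are invented for the example.
import numpy as np
from scipy import interpolate

# Hypothetical convex polygonal chain (x, y), e.g. extracted from range data.
vertices = np.array([[0.8, 0.2], [1.0, 0.6], [0.9, 1.1],
                     [0.5, 1.3], [0.2, 0.9], [0.3, 0.4]])

# Close the chain by repeating the first vertex, then fit a periodic cubic
# B-spline passing exactly through the vertices (s=0 means interpolation).
closed = np.vstack([vertices, vertices[:1]])
tck, _ = interpolate.splprep([closed[:, 0], closed[:, 1]], s=0, per=1)

# Sample the smooth boundary and its tangent; such derivatives can feed a
# path-following controller for the collision avoidance task.
u = np.linspace(0.0, 1.0, 200)
bx, by = interpolate.splev(u, tck)
tx, ty = interpolate.splev(u, tck, der=1)
print(list(zip(bx[:3], by[:3])))
```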