ROSSINI State of the Art Analysis

Total Pages: 16

File Type: PDF, Size: 1020 KB

ROSSINI – RObot enhanced SenSing, INtelligence and actuation to Improve productivity and job quality in manufacturing

Deliverable D2.1 – ROSSINI State of the Art Analysis
Deliverable Lead: CRIT
Deliverable due date: 31/12/2018
Actual submission date: 31/12/2018
Version: v3.0

This project receives funding from the European Commission's Horizon 2020 Research Programme under Grant Agreement Number 818087.

Document Control Page

Title: D1.2 ROSSINI State of the Art Analysis
Creator: CRIT
Description: This deliverable reports on the analysis of the State of the Art of both existing and past projects, and preliminarily discusses opportunities and threats for the ROSSINI consortium. First, the State of the Art presented in the proposal is recalled in each section. Then, a further investigation and update of this State of the Art is provided by analysing new, existing and past projects and initiatives, and by examining whether new products/technologies have been introduced into the market. An analysis of existing solutions or platforms and their structure is provided; ROSSINI's specificity against existing solutions is also presented, with a special focus on new potential opportunities and possible threats.
Contributors: CRIT, DATALOGIC, FRAUNHOFER IFF, IMA, IMA MACHINEBOUW, IRIS, PILZ, SUPSI, TNO, UNIMORE, WHIRLPOOL, SCHINDLER SUPPLY CHAIN EUROPE
Creation date: 06/11/2018
Type: Report
Language: English
Audience: public / confidential
Review status: Draft / WP leader accepted / Coordinator accepted / to be revised by Partners
Action requested: for approval by the WP leader / for approval by the Project Coordinator / for acknowledgement by Partners

Table of Contents

Table of Contents
List of Acronyms
List of Figures
List of Tables
1 Deliverable Description
2 Introduction
3 Detection and tracking of the working environment
  3.1 ROSSINI Envisioned Sensing Technology
    3.1.1 Architectural elements of a sensing system and available technologies
  3.2 Data fusion and processing techniques
    3.2.1 General data fusion algorithms
    3.2.2 Data fusion and processing techniques for safety applications
4 Safety robot architecture
  4.1 Robot Control Technology
  4.2 Robot Actuation Technology
  4.3 Reference architectural solutions
  4.4 Example of real-world use cases
    4.4.1 Valeri European Project
    4.4.2 Robo-partner Project
  4.5 Collaborative robotic arm range
  4.6 Discussion on the presented market solutions
  4.7 Mobile Robots
5 Human-robot mutual understanding
  5.1 Human-robot teaming
    5.1.1 Common challenges in human-robot teaming
    5.1.2 The robotic teammate
    5.1.3 Forms of co-work
    5.1.4 Levels of autonomy
    5.1.5 Interaction Roles
    5.1.6 Team structure
    5.1.7 Task allocation
    5.1.8 Coactive design
  5.2 Trust
  5.3 IO-modalities
    5.3.1 Robot to human
    5.3.2 Human to robot
  5.4 Job quality
    5.4.1 OECD Job Quality Framework
    5.4.2 Measuring job quality
    5.4.3 Impact of Human-Robot Teaming on Job Quality
  5.5 Ergonomics
    5.5.1 How to apply ergonomics?
    5.5.2 Task analysis as basis
    5.5.3 Laws and regulation
    5.5.4 Guidelines for design and evaluation of work
6 Safety system architecture
  6.1 Reference System Solutions
  6.2 Design tools for HRC and industrial robot applications
    6.2.1 Comparison of Open Source and Non-Open-Source robotics simulator software
    6.2.2 Comparison of five Open Source robotics simulators
    6.2.3 Quick overview on Gazebo
    6.2.4 Gazebo simulation examples
7 Market Overview
  7.1 Human-Robot Collaboration: a key driver for manufacturing sustainability in Europe
    7.1.1 HRC for Production Reshoring
    7.1.2 HRC for Flexible