A Camera-Based Position Correction System for Autonomous Production Line Inspection
Article

A Camera-Based Position Correction System for Autonomous Production Line Inspection

Amit Kumar Bedaka 1,*, Shao-Chun Lee 1, Alaa M. Mahmoud 1, Yong-Sheng Cheng 1 and Chyi-Yeu Lin 1,2,3,*

1 Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan; [email protected] (S.-C.L.); [email protected] (A.M.M.); [email protected] (Y.-S.C.)
2 Center for Cyber-Physical System Innovation, National Taiwan University of Science and Technology, Taipei 106, Taiwan
3 Taiwan Building Technology Center, National Taiwan University of Science and Technology, Taipei 106, Taiwan
* Correspondence: [email protected] (A.K.B.); [email protected] (C.-Y.L.)

Abstract: Visual inspection is an important task in manufacturing industries, used to evaluate the completeness and quality of manufactured products. An autonomous robot-guided inspection system was recently developed based on an offline programming (OLP) and RGB-D model system. This system allows a non-expert automatic optical inspection (AOI) engineer to easily perform inspections using scanned data. However, if there is a positioning error due to displacement or rotation of the object, the system cannot be used on a production line. In this study, we developed an automated position correction module that locates an object's position and corrects the robot's pose and position based on the detected displacement and rotation errors. The proposed module comprises an automatic hand–eye calibration and the PnP algorithm. The automatic hand–eye calibration is performed using a calibration board to reduce manual error. After calibration, the PnP algorithm calculates the object position error from artificial marker images and compensates for the error on a new object on the production line. The position correction module then automatically maps the defined AOI target positions onto the new object, unless the target positions themselves change. Experiments showed that the robot-guided inspection system with the position correction module effectively performed the desired task. This innovative system automates the AOI process on a production line and thereby increases productivity.

Keywords: automatic optical inspection (AOI); offline programming; position correction; image processing; production line; manipulator; camera-based system

Citation: Bedaka, A.K.; Lee, S.-C.; Mahmoud, A.M.; Cheng, Y.-S.; Lin, C.-Y. A Camera-Based Position Correction System for Autonomous Production Line Inspection. Sensors 2021, 21, 4071. https://doi.org/10.3390/s21124071

Academic Editors: Abir Hussain, Dhiya Al-Jumeily, Hissam Tawfik and Panos Liatsis

Received: 30 May 2021; Accepted: 12 June 2021; Published: 13 June 2021

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

The development of industrial automation was already mature by the 1970s, and many companies have since gradually introduced automated production technology to assist production lines [1]. Production line automation has made manufacturing processes more efficient, data-driven, and unified. Automation technologies have heavily influenced many processes, from loading and unloading objects to sorting, assembly, and packaging [2].

In 2012, the German government proposed the concept of Industry 4.0 [3]. Many companies have started to develop automation technologies, from upper-layer software and hardware providers, through middle-layer companies that store, analyze, manage, and provide solutions, to lower-layer companies that apply these technologies in their factories [4]. In addition, smarter integrated sensing and control systems built on existing automation technologies have begun to evolve as part of the development of highly or even fully automated production models [5].

In modern automation technology, development is mainly based on a variety of algorithms and robot arm control systems [6]. The robotic arm offers high precision, fast motion, and versatile movement. In advanced applications, extended sensing modules, such as vision modules, distance sensors, or force sensors, can be integrated with the robotic arm to give human-like senses to robotic systems [7]. Robot systems in general gain the ability to "see" through various vision systems [8]. These help robots detect and locate objects, which speeds up the technical process; they also improve processing accuracy and expand the range of applications for automated systems. Vision modules can be roughly divided into 2D vision systems (RGB/mono cameras), which capture planar images [9], and 3D vision systems (RGB-D cameras), which capture depth images [10]. Each has its own advantages, disadvantages, and suitable application scenarios. In industrial manufacturing automation, the 2D vision system is commonly used because it offers high precision and is easy to install and use. A robotic arm with a 2D vision camera is the most common setup in today's industrial automation applications; pick and place [11] and random bin picking [12] are among the most frequent applications combining an industrial camera with a manipulator.
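To make the 2D-vision case concrete, the fragment below back-projects a detected marker pixel onto a known working plane, the basic step by which a calibrated overhead 2D camera localizes an object on a conveyor. The intrinsics, image size, and camera height are hypothetical illustration values, not parameters from this paper.

```python
# Back-project a detected marker pixel to a point on the conveyor plane,
# assuming a pinhole camera with known intrinsics looking straight down
# from a known height. All numbers here are illustrative assumptions.

def pixel_to_plane(u, v, fx, fy, cx, cy, depth):
    """Convert pixel (u, v) to camera-frame X/Y on a plane `depth` metres away."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y

# Example: camera 0.8 m above the line, principal point at the image centre.
dx, dy = pixel_to_plane(u=700, v=512, fx=1000.0, fy=1000.0,
                        cx=640.0, cy=512.0, depth=0.8)
print(dx, dy)  # marker offset from the optical axis, in metres
```

This inverse of the pinhole projection is why a single 2D camera suffices for planar positioning tasks: once the working plane's distance is fixed, each pixel maps to a unique metric position.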
Several open-source and commercial toolboxes, such as OpenCV [13] and Halcon [14], are already available for vision systems. However, regardless of how mature the software support is, considerable hardware setup and human intervention are still needed to complete the process. Camera calibration is required before operation and feeds various image processing algorithms, supplying important parameters such as the focal length, image center, intrinsic and extrinsic parameters, and lens distortion [15,16]. Furthermore, camera calibration is a challenging and time-consuming process for an operator unfamiliar with the characteristics of the camera on the production line. It remains a major issue that is not easily solved in the automation industry and complicates the introduction of production line automation.

Traditional vision-based methods [17–20] require 3D fixtures corresponding to a reference coordinate system to calibrate a robot. These methods are time-consuming, inconvenient, and may not be feasible in some applications. Depending on whether the camera is mounted on the robotic arm or installed in the working environment, the calibration can be categorized as eye-in-hand (camera-in-hand) or eye-to-hand (stand-alone) calibration [21]. The purpose of hand–eye calibration is similar to that of robot arm end-point tool calibration (TCP calibration), which obtains a homogeneous transformation matrix between the robot end-effector and the tool end-point [22]. However, unlike TCP calibration, which can be performed by touching a fixed point with the tool from different poses, the camera's optical center cannot be physically touched. To obtain the hand–eye transformation matrix, vision systems therefore use methods such as the parametrization of a stochastic model [23] and dual-quaternion parameterization [24]. In self-calibration methods, the camera is rigidly linked to the robot end-effector [25].
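How a hand–eye matrix is used downstream can be sketched with plain homogeneous transforms: chaining the reported end-effector pose, the calibrated camera-to-flange transform, and a PnP-estimated object pose yields the object's pose in the robot base frame, and its deviation from a reference pose remaps the taught targets. All frames and numeric values below are illustrative assumptions, not values measured in this system.

```python
# Sketch of combining a hand-eye calibration result with a PnP pose estimate
# to correct taught robot targets. Frames and numbers are illustrative only.
import numpy as np

def transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hand-eye calibration gives T_ee_cam (camera pose in the end-effector frame);
# the robot controller reports T_base_ee; PnP on a marker gives T_cam_obj.
T_ee_cam  = transform(np.eye(3), [0.0, 0.0, 0.10])   # camera 10 cm past the flange
T_base_ee = transform(np.eye(3), [0.50, 0.00, 0.60])
T_cam_obj = transform(rot_z(np.deg2rad(5)), [0.02, -0.01, 0.55])

# Chain the transforms: object pose in the robot base frame.
T_base_obj = T_base_ee @ T_ee_cam @ T_cam_obj

# If T_base_ref is the object's pose when the AOI targets were taught, the
# error transform remaps every taught target onto the displaced/rotated object.
T_base_ref = transform(np.eye(3), [0.52, -0.01, 1.25])
T_err = T_base_obj @ np.linalg.inv(T_base_ref)

T_target_ref = transform(np.eye(3), [0.60, 0.10, 1.30])  # one taught AOI pose
T_target_new = T_err @ T_target_ref                      # corrected AOI pose
```

The key property is that a single error transform, estimated once per incoming object, corrects every taught inspection pose without re-teaching.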
A vision-based measurement device and a posture measuring device have been used in systems that capture robot position data to model manipulator stiffness [26] and estimate kinematic parameters [27–29]. The optimization technique is based on the end-effector's measured positions. However, these methods require offline calibration, which is a limitation. In such systems, accurate camera model calibration and robot kinematics model calibration are required for accurate positioning, and the camera calibration procedure needed to achieve high accuracy is therefore time-consuming and expensive [30].

Positioning of the manufactured object is an important factor in industrial arm applications. If the object is not correctly positioned, assembly may fail or the object may be damaged. Consequently, object positioning accuracy often indirectly influences the processing accuracy of the automated system. Although numerous studies have addressed accurate vision-based object positioning [31–33], no system has been reported that couples an offline programming platform with AOI inspection on a production line. Table 1 compares the performance of the system proposed here with existing vision-based position correction systems.

The proposed system expedites the development of a vision-based object position correction module for a robot-guided inspection system. This allows the robot-guided inspection system to complete AOI tasks automatically on a production line regardless of the object's position. In the proposed system, the AOI targets can be automatically mapped onto a new object for the inspection task, whereas existing vision systems locate the object's position but are not designed for the production line. To operate these systems, the operator