

Integration of a low-cost vision system with the Fanuc M-1iA for sorting and assembly operations

L. VAN DER WESTHUIZEN, I.A. GORLACH
Department of Mechatronics
Nelson Mandela University
Gardham Avenue, North Campus, Summerstrand, Port Elizabeth
SOUTH AFRICA
[email protected], [email protected], https://www.mandela.ac.za

Abstract: - Machine vision gives robots the advantage of gathering more information from the workspace around them. The application of machine vision to improve robotic performance is becoming an essential component in the integration of robot technology into manufacturing processes. Machine vision increases a robot's interaction with its work setting and provides considerable system flexibility. This paper describes a low-cost robotic vision system for sorting and assembling workpieces transported along a conveyor. The vision system identifies workpieces of different profiles and determines their coordinates and orientation. Corresponding sorting algorithms are implemented in order to simulate a manufacturing environment. The paper discusses the theory of image processing, the equipment selected, the experimental procedures and the conclusions drawn, and presents the vision algorithm. A low-cost vision sensor performs object tracking as well as object detection to determine both the shape of the workpiece and its position relative to the camera. The cell controller therefore performs image processing and data acquisition, and coordinates and data are transferred between the controller and the robot in real time.

Key-Words: Machine vision, robotics

1 Introduction
Machine vision, or vision tracking, is an application in industry in which workpieces on any given surface are detected. The vision system is able to communicate with robots, rotary tables and presses to form a smarter, more advanced manufacturing process. Vision tracking makes use of various forms of image processing, comprising object detection and object tracking, whereby video imaging is received via an industrial webcam.
Machine vision has given robots the advantage of gathering more information from the workspace around them. Vision systems allow robots to detect parts travelling on a conveyor and perform the required movements in order to select those parts, or to determine whether certain parts have faults, as in quality control.
Computer vision is a science that develops algorithms for automatically extracting and analyzing useful information from observed images using a computer [1]. Object detection is used to recognize certain objects within a video sequence. Frame matching is a form of object detection in which frames are captured by a stationary camera at constant intervals. The difference image of the captured frames is converted to binary, producing distinctive differences between the object and the background so that edge detection can be performed effectively [2].
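As a rough illustration of the frame-matching idea above (not code from the paper), a stored background frame can be subtracted from the current frame and the result thresholded to a binary image. The sketch below uses AForge.NET filter classes; the threshold value of 40 is an arbitrary assumption.

using System.Drawing;
using AForge.Imaging.Filters;

// Sketch of background-frame matching: subtract a stored background frame
// from the current frame and threshold the difference to a binary image
// in which the moving object stands out from the background.
class MotionDetector
{
    public static Bitmap DetectMotion(Bitmap background, Bitmap currentFrame)
    {
        Grayscale toGray = Grayscale.CommonAlgorithms.BT709;
        Bitmap grayBackground = toGray.Apply(background);
        Bitmap grayCurrent = toGray.Apply(currentFrame);

        // Difference image between the two frames
        Difference difference = new Difference(grayBackground);
        Bitmap diff = difference.Apply(grayCurrent);

        // Convert the difference image to binary (threshold value is an assumption)
        Threshold binarize = new Threshold(40);
        return binarize.Apply(diff);
    }
}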


Edge detection on an image represents changes in textures, lines and colours. One form of edge detection is the left-right search [3]. Assuming a pointer moves from one pixel to another, whenever the pointer reaches a pixel containing segments of the blob it turns left; conversely, if the pointer reaches a blank pixel it turns right. The process continues until a line drawing of the blob object has been traced.
Optical flow arises from the relative motion of objects and the observer, thus giving important information about the spatial arrangement of the observed objects and the rate of change of this arrangement [4]. It involves calculating the image optical flow field as well as performing clustering according to the optical flow distribution characteristics [5].

2 Development of Vision System
A personal computer (PC) is used as the cell controller and is able to communicate with the Fanuc R-30iB controller using the Fanuc Robotics Controller Interface, which transfers data by means of an Ethernet connection. The controller interface allows the PC to monitor and switch I/Os on the CRMA15 peripheral interface as well as update the robot position registers. The PC receives a video image from a webcam connected via a universal serial bus (USB). AForge.NET is an open-source framework used in the field of computer vision to perform essential tasks in image processing. AForge.NET is used to perform object detection, determining the workpiece profile as well as the position of the workpiece relative to the camera.

Fig 1. System block diagram

2.1 Camera Selection
Three cameras were compared and tested in order to produce the best-quality video feed for an efficient vision system. The cameras were compared on the basis of image resolution, cost and lens distortion. The three cameras were an industrial CMOS web camera, an IMX179 camera and the Microsoft LifeCam HD-3000.
The Microsoft LifeCam HD-3000 performed optimally: it produces a 720p video image, which is sufficient for edge detection and reduces the risk posed by poor lighting, so the camera can perform in any adequately lit condition. The camera has a low-distortion lens, which eliminates additional computation required for calibration. It is also the most affordable choice, costing R495 [6]. The Microsoft LifeCam HD-3000 was therefore selected as the camera for the vision system.

2.2 Image Processing
Image processing is performed on a computer using Visual Studio, a Microsoft integrated development environment used to develop programs, web apps and web sites. Visual Studio is a suitable development environment in which to create a program for the vision system, using C# as the default development language. Visual Studio itself does not support forms of image processing such as object detection, object recognition and object classification; however, library links are available to perform such tasks. One in particular is AForge.NET, used by developers and researchers in the fields of artificial intelligence and computer vision to perform essential tasks in image processing [7]. AForge.NET is therefore a suitable framework with which to perform accurate and precise object recognition, to determine the shape of the part, and object tracking, to record the coordinates of the workpiece relative to the camera frame.
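The sketch below (not the authors' code) indicates how AForge.NET's BlobCounter and SimpleShapeChecker classes can be combined to find a workpiece in a captured frame, classify its profile as triangular or quadrilateral and report its centre in pixels. It assumes the frame has already been converted to a binary image, and the minimum blob size is an illustrative value.

using System.Collections.Generic;
using System.Drawing;
using AForge;
using AForge.Imaging;
using AForge.Math.Geometry;

// Sketch: locate blobs in a binarised frame and classify each workpiece profile.
class WorkpieceDetector
{
    public static void Detect(Bitmap binaryImage)
    {
        BlobCounter blobCounter = new BlobCounter();
        blobCounter.FilterBlobs = true;   // ignore small noise blobs
        blobCounter.MinWidth = 50;        // assumed minimum workpiece size in pixels
        blobCounter.MinHeight = 50;
        blobCounter.ProcessImage(binaryImage);

        SimpleShapeChecker shapeChecker = new SimpleShapeChecker();

        foreach (Blob blob in blobCounter.GetObjectsInformation())
        {
            List<IntPoint> edgePoints = blobCounter.GetBlobsEdgePoints(blob);
            List<IntPoint> corners;

            // Classify the profile and report the blob centre in pixel coordinates
            if (shapeChecker.IsTriangle(edgePoints, out corners))
                System.Console.WriteLine("Triangle at " + blob.CenterOfGravity);
            else if (shapeChecker.IsQuadrilateral(edgePoints, out corners))
                System.Console.WriteLine("Quadrilateral at " + blob.CenterOfGravity);
        }
    }
}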


2.3 Calibration Procedure
Camera calibration is a vital step in securing both the accuracy and the ability of the vision system to perform at an optimal level. When performing object tracking through image processing, an object's coordinates relative to the camera frame are given in pixels. A measurement in pixels cannot be used directly as robot coordinates, so a calibration must be performed. Calibration is implemented as a scaling tool to convert the measurement in pixels to a measurement in millimeters, which is used by the robot coordinate system.
The vision system is calibrated using a linear-based calibration technique, in which three or more collinear points are used to measure world coordinates on a camera frame. The technique involves one snapshot of a set of collinear points of known orientation relative to the camera frame and of known spacing in world coordinates. Linear regression between the coordinates in the world frame and the camera frame produces an equation for determining the position of various points on the camera frame in world coordinates. Figure 2 illustrates the calibration template used to calibrate the vision system with this technique.

Fig 2. Calibration template

2.4 Communication between PC and Robot
Fanuc Robotics offers a direct library link which allows communication between a Windows-based PC and the robot controller using the C# programming language [8]. The library file is easily accessible in Visual Studio and can be added as a reference.
Although the Fanuc M-1iA controller also provides an RS-232-C port, communication between the PC and the robot is performed via an Ethernet connection in order to update position registers and switch selected input and output ports.
Using the built-in methods provided by the Fanuc Robotics Controller Interface, a program can be written in Visual Studio in which the values of position registers on the robot controller are manipulated. The status of digital inputs and outputs can be monitored in real time through Visual Studio using the controller interface, and the library offers programming objects which can be used to simulate the status of digital inputs and outputs. In addition to updating position registers on the robot controller, the library is capable of monitoring the real-time position of the robot through Visual Studio.
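A minimal sketch of this communication path is given below. It assumes the object model shown in the PCDK examples referenced in [8] (FRCRobot, RegPositions, IOTypes); the controller address, position register number and signal numbers are placeholders, and the exact member names should be verified against the installed FRRobot library.

using FRRobot;   // FANUC Robotics Controller Interface (PC Developer's Kit)

// Sketch of the PC-to-robot link described in Section 2.4.
class RobotLink
{
    public static void SendWorkpiecePose(double x, double y, double r)
    {
        FRCRobot robot = new FRCRobot();
        robot.ConnectEx("192.168.0.10", false, 10, 1);   // connect to the R-30iB (address assumed)

        // Write the workpiece coordinates and orientation into position register PR[1]
        FRCSysPositions positions = robot.RegPositions;
        FRCSysPosition position = positions[1];
        FRCSysGroupPosition group = position.Group[1];
        FRCXyzWpr pose = (FRCXyzWpr)group.Formats[FRETypeCodeConstants.frXyzWpr];
        pose.X = x;                                      // mm, robot base frame
        pose.Y = y;
        pose.R = r;                                      // orientation about the Z-axis
        group.Update();                                  // push the new value to the controller

        // Read the part-detection input and drive a sorting output (signal numbers assumed)
        FRCDigitalIOType inputs =
            (FRCDigitalIOType)robot.IOTypes[FREIOTypeConstants.frDInType];
        bool partPresent = ((FRCDigitalIOSignal)inputs.Signals[101]).Value;

        FRCDigitalIOType outputs =
            (FRCDigitalIOType)robot.IOTypes[FREIOTypeConstants.frDOutType];
        ((FRCDigitalIOSignal)outputs.Signals[102]).Value = partPresent;
    }
}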


2.5 Part Detection
In order for the vision system to perform object detection of the workpiece and determine its coordinates, it must establish exactly when a workpiece moves into the camera field of view. A laser and a photoelectric sensor are fixed on either side of the conveyor to detect when the workpiece is about to enter the field of view. When the workpiece breaks the laser beam, the photoelectric sensor no longer receives light and switches a digital input on the robot controller; the status of this digital input is monitored using the Fanuc Robotics Controller Interface library in Visual Studio. Once the digital input has been set high, the camera must capture an image of the part on the conveyor while it is still moving at constant velocity. An average delay of 0.15 seconds, from the moment the photoelectric sensor is activated until the image is captured, must be accounted for.

Fig 3. Setup of robotic cell

2.6 Graphic User Interface
The user interface is a Windows Forms application, designed and programmed in Visual Studio. It is designed to be effective and easy to use.
The calibration procedure involves placing a calibration template in the camera field of view. Vertical and horizontal guidelines on the interface allow the calibration template to be placed directly in the middle of the field of view. Once calibration has commenced, linear regression is performed to relate the positions of the circles on the calibration template to world coordinates in millimeters.

Fig 4. Calibration user interface

Once calibration is complete, an additional window is opened and the vision system is fully operational. The picture box on the left-hand side of the window shows the live video feed from the webcam. The picture box on the right-hand side shows the captured image of the workpiece on the conveyor once the laser beam has been broken, signaling the presence of the workpiece in the camera field of view. The coordinates of the captured workpiece are displayed on the panel below the captured image; both the coordinates of the workpiece in the camera frame and in the robot frame are shown. The camera coordinates are displayed in pixels and represent the position of the workpiece relative to the center of the camera frame, while the robot coordinates are given in millimeters.

Fig 5. Operational user interface
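A minimal sketch of the pixel-to-millimeter scaling produced by this calibration (Sections 2.3 and 2.6) is given below: a least-squares line is fitted through the pixel coordinates of the calibration circles and their known positions in millimeters, and the resulting slope and intercept convert subsequent pixel measurements to world coordinates. This is an illustrative sketch, not the authors' code.

// Sketch of the linear-based calibration: fit world = a1 * pixel + a0 by least squares.
class Calibration
{
    public static double[] Fit(double[] pixel, double[] world)
    {
        int n = pixel.Length;
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < n; i++)
        {
            sumX += pixel[i];
            sumY += world[i];
            sumXY += pixel[i] * world[i];
            sumXX += pixel[i] * pixel[i];
        }
        double a1 = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX); // slope (mm per pixel)
        double a0 = (sumY - a1 * sumX) / n;                                // intercept (mm)
        return new double[] { a1, a0 };
    }
}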

3 Overall Design Description


The camera used is a Microsoft LifeCam HD-3000, which returns a 720p video feed over the USB 2.0 interface, thereby eliminating the need for additional lighting. The video feed is processed using AForge.NET, which is used to determine both the workpiece profile and the coordinates of the workpiece. Once the coordinates of the workpiece have been determined, the Fanuc Robotics Controller Interface is used to send them to the robot by updating position registers.
Workpieces are manually loaded onto the conveyor at any position and orientation. Once a workpiece moves into the line of sight of the laser beam fitted to the conveyor, the status of digital input 101 on the robot controller is changed. After a short delay, which allows the workpiece to fall completely within the camera field of view, an image of the workpiece is captured.
Once the image has been captured, image processing and object detection take place. The shape, position and orientation of the workpiece relative to the camera are determined using AForge.NET.
The position of the workpiece relative to the camera field of view is transferred to the robot coordinate system by means of matrix transformations. The equation below shows the matrices used to transfer coordinates relative to the robot base:

T^{Robot\,Base}_{Workpiece} = T^{Robot\,Base}_{Camera} \cdot T^{Camera}_{Workpiece}

Therefore, the matrix that denotes the position (X, Y, Z) and the orientation about the Z-axis of the workpiece relative to the robot base is:

T^{Robot\,Base}_{Workpiece\,pose} = \begin{bmatrix} C\theta & -S\theta & 0 & p_x \\ S\theta & C\theta & 0 & p_y \\ 0 & 0 & 1 & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix}

However, while the camera captures an image of the workpiece and image processing takes place to determine the relative position and orientation, the workpiece is still moving along the conveyor. The vision system therefore measures the exact time taken for image processing and the matrix transformation to occur. Once this time is known and the conveyor speed is fixed, the position of the workpiece is updated as it moves along the conveyor.
After all processing has been completed, the coordinates and orientation of the workpiece are written to a position register on the robot controller.
The robot waits at a home position until the status of a digital input changes, signaling that the robot may move to the updated position register and pick the moving workpiece from the conveyor. Additionally, the status of digital outputs is changed to notify the robot whether the workpiece has a quadrilateral or a triangular profile, so that each profile is sorted accordingly.
The subroutines of both the cell controller and the robot controller are represented below. Figure 6 illustrates the routine followed by the cell controller while the vision system is operational.


Fig 6. Cell controller block diagram

Similarly, Figure 7 illustrates the subroutine followed by the robot controller once activated by the cell controller.

Fig 7. Robot controller block diagram
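As a rough illustration of the coordinate transfer and conveyor compensation described in Section 3 (not the authors' code), the sketch below maps a workpiece position measured in the camera frame into the robot base frame with a fixed camera-to-base transform, then advances the position along the conveyor direction to compensate for the time spent on capture and processing. The mounting offsets, rotation angle and conveyor speed are placeholder assumptions.

using System;

// Sketch of T(base<-workpiece) = T(base<-camera) * T(camera<-workpiece) for a
// rotation about Z plus a translation, followed by conveyor-speed compensation.
class PoseTransfer
{
    public static double[] WorkpieceInBase(double camXmm, double camYmm, double thetaDeg,
                                           double conveyorSpeed, double elapsedTime)
    {
        double mountTheta = Math.PI;   // assumed: camera frame rotated 180 deg about Z
        double mountX = 350.0;         // assumed camera position in the base frame (mm)
        double mountY = -120.0;

        // Rotate the camera-frame measurement and add the camera mounting offset
        double baseX = mountX + camXmm * Math.Cos(mountTheta) - camYmm * Math.Sin(mountTheta);
        double baseY = mountY + camXmm * Math.Sin(mountTheta) + camYmm * Math.Cos(mountTheta);
        double baseTheta = thetaDeg + mountTheta * 180.0 / Math.PI;

        // The part keeps moving (assumed along base X) during the ~0.15 s capture delay
        // and the image-processing time, so its X coordinate is advanced accordingly.
        baseX += conveyorSpeed * elapsedTime;

        return new double[] { baseX, baseY, baseTheta };
    }
}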


4 Statistical Analysis

4.1 Repeatability
The repeatability test was performed using circles in order to avoid any variation in the offset due to the orientation of the workpiece. The accuracy with which the robot picks a workpiece off the conveyor is determined by the distance from the tool center point to the center of the workpiece. This distance can be measured once the part has been placed, since the position of the robot tool center point at placement is known. The position of the tool center point forms the center of the 1 mm grid on which the part is placed, and the offset is determined by measuring the distance from the workpiece center to the center of the grid.
Ten measurements were taken of the offset between the center of the part and the point at which the part should be placed, and a repeatability test was performed on the collected data to determine the accuracy of the vision system.

Table 1. Repeatability results

Mean value of the repeatability test:
\bar{x} = \frac{\sum_{i=1}^{N} x_i}{N} = 1.753

Standard deviation of the repeatability test:
s_x = \sqrt{\frac{\sum_{i=1}^{N}(\bar{x} - x_i)^2}{N - 1}} = 0.9831

Standard deviation of the means of the repeatability test:
s_{\bar{x}} = \frac{s_x}{\sqrt{N}} = 0.31

True value:
x' = \bar{x} \pm t_{v,P}\, s_{\bar{x}}

such that v = N - 1 = 9 and P = 95%. From the t-table, t_{9,95} = 2.262. Thus the true value is:
x' = 1.753 \pm 0.7

Since x'^2 = x^2 + y^2 and x = y,
x = y = 1.24 \pm 0.495 mm

To conclude, for the given set of results the accuracy of the vision system in both the X and Y directions is 1.24 mm.

4.2 Calibration
As previously mentioned, the vision system is calibrated using a linear-based calibration technique, in which three or more collinear points are used to measure world coordinates on a camera frame. The technique involves one snapshot of a set of collinear points of known orientation relative to the camera frame and of known spacing in world coordinates [9]. Linear regression between coordinates in the world frame and the camera frame produces an equation for determining the position of various points on the camera frame in world coordinates.
A table is drawn up containing the distances between each circle on the calibration template and the coordinates of these circles measured in pixels by the vision system. Linear regression analysis is performed on both the X and Y coordinates of the camera frame.

Fig 8. Regression analysis for X coordinates
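The standard error of the fit and the 95% confidence band reported in the following paragraphs can be reproduced with a short routine along these lines (a sketch, not the authors' code; the data points, fitted coefficients and t-value are supplied by the caller).

using System;

// Sketch of the fit-uncertainty calculation used in Section 4.2:
// s_yx = sqrt( sum (y_i - y_ci)^2 / v ) with v = N - (m + 1), and a
// 95% confidence half-width of t(v,95%) * s_yx / sqrt(N).
class FitError
{
    public static double ConfidenceHalfWidth(double[] x, double[] y,
                                             double a1, double a0, double tValue)
    {
        int n = x.Length;
        double ss = 0;
        for (int i = 0; i < n; i++)
        {
            double yc = a1 * x[i] + a0;        // fitted value
            ss += (y[i] - yc) * (y[i] - yc);   // squared residual
        }
        int v = n - 2;                         // m = 1 for a straight-line fit
        double syx = Math.Sqrt(ss / v);        // standard error of the fit
        return tValue * syx / Math.Sqrt(n);    // 95% confidence half-width
    }
}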


The line of best fit is represented by the equation:
y_c = 5.5613x + 9.0251

The coefficient of determination is R^2 = 0.9993.
The coefficient of determination is a key product of regression analysis. It is interpreted as the proportion of the variance in the dependent variable that is predicted from the independent variable [10]. However, the coefficient of determination may lack internal validity, as no direct cause and effect can be inferred; the association of two variables could be explained by a third variable [11]. The correlation coefficient is therefore not an effective estimator of the random error present in the equation represented by y_c; instead, a precise estimate of the uncertainty of the fit can be calculated from its standard error.
The standard error of the fit is given by:
s_{yx} = \sqrt{\frac{\sum_{i=1}^{N}(y_i - y_{ci})^2}{v}}, with v = N - (m + 1) = 7
s_{yx} = 798.05

The best-fit equation with 95% confidence is:
y_c = a_1 x + a_0 \pm t_{v,P}\,\frac{s_{yx}}{\sqrt{N}}, with P = 95% and t_{7,95} = 2.365
y_c = 5.5613x + 9.0251 \pm 629.13 (95%)

Similarly, for the Y coordinates, the statistical measurements are as follows:

Fig 9. Regression analysis for Y coordinates

The best-fit equation with 95% confidence is:
y_c = a_1 x + a_0 \pm t_{v,P}\,\frac{s_{yx}}{\sqrt{N}}, with P = 95% and t_{7,95} = 2.365
y_c = 5.5613x + 9.0251 \pm 655.93 (95%)

5 Conclusion
The vision system is successfully able to identify two profiles of workpiece, namely squares and triangles. It is able to determine the X and Y position as well as the orientation of moving workpieces on the conveyor. After calibration and repeatability tests, an accuracy of 1.24 mm in both the X and Y directions was achieved. The velocity of workpieces moving along the conveyor is determined, so the vision system can adjust accordingly to assemble workpieces moving at various velocities. All of the listed requirements of the vision system have therefore been met, creating a low-cost vision system capable of selecting and correctly sorting moving workpieces from a conveyor.

References:
[1] M. Haralick and G. Shapiro, "Computer and Robot Vision", Journal on Image and Video Processing, vol. 1, no. 1, p. 1-10, 1992.
[2] S. Devi, N. Malmurugan and M. Manikandan, "Object Motion Detection in Video Frames Using Background Frame Matching", International Journal of Computer Trends and Technology, vol. 4, no. 6, p. 1928-1931, April 2013.

[3] S. Niku, Introduction to Robotics, 2nd ed., Hoboken, N.J.: Wiley, 2011.
[4] H. Ali, F. Hafiz and A. Shafie, "Motion Detection Techniques using Optical Flow", World Academy of Science, Engineering and Technology, vol. 3, no. 1, p. 1561-1563, 2009.
[5] C. M. Sukanya, R. Gokul and V. Paul, "A Survey on Object Recognition Methods", International Journal of Computer Science and Engineering Technology, vol. 6, no. 1, p. 48-52, 2016.


[6] PC Mag, "Microsoft LifeCam HD-3000", PC Mag Digital Group, 2017. [Online]. Available: https://www.pcmag.com [Accessed 22 May 2017].
[7] AForge, "AForge Image Processing", AForge.NET, 2012. [Online]. Available: http://www.aforgenet.com/framework/ [Accessed 17 April 2017].
[8] DMC Inc., "Fanuc Robotics Controller Interface", DMC, 2017. [Online]. Available: https://www.dmcinfo.com/latest-thinking/blog/id/9196/how-to-use-fanuc-pc-developers-kit-pcdk [Accessed 15 March 2017].
[9] Z. Zhang, "Camera Calibration with One-Dimensional Objects", Microsoft Research, vol. 1, no. 1, p. 1-13, December 2001.
[10] Stat Trek, "Regression Analysis", StatTrek, 2017. [Online]. Available: http://stattrek.com/statistics/dictionary.aspx?definition=coefficient_of_determination [Accessed 14 April 2017].
[11] Get Revising, "Correlation Coefficient", GetRevising, 2017. [Online]. Available: https://getrevising.co.uk/grids/correlational_research [Accessed 19 April 2017].
