A VISUAL SERVOING ARCHITECTURE FOR CONTROLLING ELECTROMECHANICAL SYSTEMS

R. Garrido, A. Soria, P. Castillo, I. Vásquez
CINVESTAV - IPN, Departamento de Control Automático
Av. IPN No. 2508, C.P. 07360, México, D.F., MEXICO
fax (52)57 47 70 89
e-mail: [email protected]

Keywords.- Direct visual servoing. Low-cost visual architecture. Electromechanical control system. Planar robot control.

Abstract.- Most of the architectures employed in visual servoing research use specialised hardware and software. The high cost of the specialised hardware and the engineering skills required to develop the software complicate the set-up of visually controlled systems. At the same time, the costs of the key elements of computer vision, such as cameras, frame grabbers and computers, continue to fall, making low-cost visual architectures practical. In this paper we present a visual servoing architecture for controlling electromechanical systems based on standard off-the-shelf hardware and software. The proposed scheme makes it possible to control a wide class of systems, including robot manipulators. The programming environment is based on MATLAB/Simulink, which lets the user take advantage of the graphical programming facilities of Simulink. Two experimental evaluations are presented: a linear motion cart controlled using direct visual servoing, and a planar robot arm controlled in a look-and-move framework.

I.- INTRODUCTION

The sense of sight provides most of the information a human being receives, allowing him to interact with his environment and to survive. In the case of a robot manipulator, computer vision is a useful sensor since it mimics the human sense of sight and allows non-contact measurement of the environment, a feature which could give it more versatility than robots endowed only with optical encoders and limit switches. This feature could potentially allow a robot to deal with problems related to flexibility and backlash in its transmission system and with partial knowledge of its kinematics. The task in visual control of robots is to use visual information to control the pose of the robot end-effector relative to a target object or a set of target features.

The first works dealing with computer vision applied to robot manipulators appeared in the seventies [1], [5]. However, technical limitations at that time prevented significant development of the subject. During the eighties and nineties, the availability of more powerful processors and the development of digital electronics gave a strong impulse to the visual control of robots and mechanical systems. Examples of works involving visual control of robot manipulators are [3], [6], [7], [8], [10], [11], [12], [13]. For other mechanical systems see, for example, [2], where visual control is applied to an inverted pendulum, and [9], where a flexible arm is controlled using a vision system.

In most of the references cited above, the experiments were executed using specialised hardware and software, which may increase the time required to set up an experiment. One of the first computer vision architectures was the BVV1 [2], which contained up to 15 Intel 8085A 8-bit microprocessors. In the same paper another architecture, the BVV2, was also proposed; the main difference between the two is that the latter employs 16-bit Intel 8086 processors. Almost all the software was written in machine code, and the architectures were tailored to the computing needs of each experiment.
Both systems work at a visual sampling period of 17 ms. Unfortunately, the authors do not mention whether other kinds of interfaces, e.g. analog-to-digital converters or interfaces for optical encoders, are available with these systems. In the case of visual control of robot manipulators, Corke [1] proposed an architecture in which image processing is done using a Datacube card and visual control is performed through a VME-based computer. The visual sampling period was 20 ms, and custom-made software, ARCL, was employed for programming. In some experiments the robot controller shared the execution of the control law. In Hashimoto and Kimura [4], vision processing was done using a Parsytec card mounted in a personal computer, and another personal computer hosting a transputer network was employed for control. The vision sampling period was 85 ms and the period for joint robot control was 1 ms. Papanikolopoulos and Khosla [8] used an architecture in which an IDAS/150 board performs the image processing and six Texas Instruments TMS320 DSP processors control the joints of a CMU DDArm II robot. All the cards were connected through a VME bus hosted in a Sun workstation. The vision sampling period was 100 ms and the robot controller ran with a period of 3.33 ms. The whole system runs under the Chimera-II real-time software environment. Another interesting architecture is proposed in [13], where a custom-made board based on a Texas Instruments TMS320C15 fixed-point processor is employed for image processing. The sampling period for image acquisition and processing was 16.7 ms. Control at the visual level is executed by a network of RISC processors with a sampling period of 7.3 ms, while joint control was left to the robot controller. A serial protocol connects the robot to the personal computer hosting the RISC and image processors. In [10], a personal computer hosts a Texas Instruments TMS320C31 DSP processor for joint control of a direct-drive two-degrees-of-freedom robot and a Data Translation DT3851-4 card for image processing. The sampling period was 2.5 ms for joint control and 50 ms for the visual feedback part.

From this non-exhaustive review, it can be concluded that in most cases data processing is executed on specialised and sometimes high-cost boards, and that it is not always possible to modify the image processing algorithms, since on some boards these algorithms are implemented in hardware. The control part also relies on specialised hardware such as transputers and DSPs. It is also worth remarking that programming is done in machine code or in C. This may be adequate for researchers with good programming skills, but other users would need time to become familiar with the system before setting up an experiment. Motivated by these remarks, in this work we propose a visual servoing architecture for controlling electromechanical systems based on personal computers and standard off-the-shelf hardware and software.

This paper is organised as follows. Section II describes the visual servoing architecture. In Section III the proposed architecture is tested through two experiments, namely direct visual servoing of a linear motion cart and look-and-move control of a two-degrees-of-freedom robot arm. Finally, Section IV gives some concluding remarks and discusses future work.

II.- VISUAL SERVOING ARCHITECTURE

a. Overview.
From the review presented in the introduction, it is clear that in most visual servoing architectures the image processing and control parts are executed on separate processors. This philosophy is reasonable given the computing burden associated with image processing; in some cases the visual servoing algorithms in the robot control part may also require significant computing resources. However, as pointed out above, specialised hardware is usually employed for these tasks. In order to benefit from the advantages of this philosophy while avoiding the excessive costs associated with highly specialised components, it is attractive to integrate off-the-shelf hardware and software into a single architecture. This allows, on the one hand, a user-friendly environment for programming control algorithms and, on the other hand, performance comparable with the architectures proposed in the visual servoing literature. The proposed architecture achieves these goals by separating the visual servoing task into three components, each with a specific function (see Figure 1):

• A programming, algorithm development and data logging environment.
• A control component that interacts with the vision component to fulfil the control goals.
• A vision component capable of perceiving the environment.

Figure 1 : Block diagram of the proposed visual servoing architecture. The electromechanical system (A/D, D/A, optical encoders) connects through a Servotogo S8 data acquisition card on the ISA bus of the real-time control computer (Wincon Client); an RS-170 camera feeds a National Instruments PCI-1408 frame grabber on the PCI bus of the image acquisition and processing computer (MS Visual C++, Borland C), which is linked to the control computer via RS-232; the programming and data logging computer (Wincon Server, MATLAB/Simulink) communicates with the control computer over Ethernet.

b. Programming, algorithm development and data logging environment.

The computer devoted to programming, development and data logging, which we will call the Server in the sequel, hosts MATLAB/Simulink from The MathWorks Inc., Wincon from Quanser Consulting Inc. and MS Visual C++, all running under Windows 95. The control algorithms are programmed graphically in Simulink, and the resulting diagrams are compiled into executable code. Wincon (Server part) downloads this code to the real-time control computer, which we will call the Client. Once the code has been downloaded, it is possible from the Server to start and stop the experiment, to change controller parameters on the fly and to log data from the Client. Interconnection between the Server and the Client is made through an Ethernet network. Further details can be found in [15]. In our set-up, the Server is a Pentium computer running at 200 MHz.

c. Real-time control.

For the Client, we use a computer with a Pentium processor running at 350 MHz under Windows 95. Wincon is employed to run the code generated at the Server. Data acquisition is performed using a Servotogo S8 card, which handles optical encoders as well as analog voltage inputs and outputs. The achievable sampling time depends on the processing power of the computer.
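To illustrate what the code generated from a Simulink diagram must accomplish on the Client, the following C sketch shows a bare-bones fixed-period position loop built around the Servotogo S8 inputs and outputs: read the optical encoder, compute a control voltage, and write it to the D/A converter. This is only a sketch under stated assumptions; the helpers s8_read_encoder, s8_write_dac and wait_for_period, the encoder scale and the proportional-derivative (PD) gains are hypothetical stand-ins for the card's driver and the Wincon real-time scheduler, and in the actual architecture this loop is produced automatically from the Simulink diagram rather than written by hand.

/* Minimal sketch of one servo loop on the real-time Client.
   The three helpers below are hypothetical: they stand in for the
   Servotogo S8 driver and the Wincon real-time scheduler. */
extern long  s8_read_encoder(int channel);        /* encoder counts   */
extern void  s8_write_dac(int channel, double v); /* volts to D/A     */
extern void  wait_for_period(double seconds);     /* fixed-rate tick  */

#define COUNTS_PER_RAD  (2000.0 / 6.2832)  /* assumed encoder scale  */

void servo_loop(double q_des, double kp, double kd, double T)
{
    double q_prev = s8_read_encoder(0) / COUNTS_PER_RAD;
    for (;;) {
        wait_for_period(T);  /* block until the next sampling instant */

        /* Read the joint position from the optical encoder. */
        double q  = s8_read_encoder(0) / COUNTS_PER_RAD;
        double qd = (q - q_prev) / T;   /* backward-difference velocity */
        q_prev = q;

        /* PD control law: u = kp*(q_des - q) - kd*qd */
        double u = kp * (q_des - q) - kd * qd;

        /* Saturate to the +/-10 V range typical of D/A stages. */
        if (u >  10.0) u =  10.0;
        if (u < -10.0) u = -10.0;
        s8_write_dac(0, u);
    }
}

The same loop structure applies to direct visual servoing, with the encoder reading replaced by the image feature supplied by the vision component.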
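The format of the messages exchanged over the RS-232 link between the image acquisition and processing computer and the Client is not specified here, but a simple framed packet is one plausible choice. The sketch below, again in C, shows a hypothetical packet carrying one target centroid in pixel coordinates, protected by a start-of-frame byte and an additive checksum; the layout, field names and the serial_write routine are assumptions for illustration, not part of the architecture described in this paper.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical packet carrying one target centroid (image coordinates
   in pixels) from the vision computer to the real-time control computer. */
/* serial_write() is an assumed platform routine that pushes raw bytes
   out of the RS-232 port; it is not taken from the paper. */
extern void serial_write(const uint8_t *buf, size_t len);

static uint8_t checksum8(const uint8_t *buf, size_t len)
{
    uint8_t sum = 0;
    while (len--)
        sum += *buf++;
    return sum;
}

void send_centroid(int16_t u, int16_t v)
{
    uint8_t buf[6];
    buf[0] = 0xAA;                       /* start-of-frame marker      */
    buf[1] = (uint8_t)(u & 0xFF);        /* centroid column, low byte  */
    buf[2] = (uint8_t)((u >> 8) & 0xFF); /* centroid column, high byte */
    buf[3] = (uint8_t)(v & 0xFF);        /* centroid row, low byte     */
    buf[4] = (uint8_t)((v >> 8) & 0xFF); /* centroid row, high byte    */
    buf[5] = checksum8(&buf[1], 4);      /* protect the payload        */
    serial_write(buf, sizeof buf);
}

The checksum lets the Client discard frames corrupted on the serial line and hold the last valid measurement until the next one arrives, a sensible policy given the vision sampling period is much longer than the control period.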