
Survey of vision-based robot control

Ezio Malis, INRIA, Sophia Antipolis, France, [email protected]

Abstract

In this paper, a short survey of vision-based robot control (generally called visual servoing) is presented. Visual servoing concerns several fields of research, including vision systems, robotics and automatic control. It is useful for a wide range of applications and can be used to control many different dynamic systems (manipulator arms, mobile robots, aircraft, etc.). Visual servoing systems are generally classified according to the number of cameras, the position of the camera(s) with respect to the robot, and the design of the error function that is minimized in order to reposition the robot. In this paper, we describe the main visual servoing approaches proposed in the literature. For simplicity, the examples in the survey focus on manipulator arms with a single camera mounted on the end-effector. Examples are taken from work done at the University of Cambridge for the European Long Term Research Project VIGOR (Visually guided robots using uncalibrated cameras).

1 Introduction

Vision feedback control loops have been introduced in order to increase the flexibility and the accuracy of robotic systems. The aim of the visual servoing approach is to control a robot using the information provided by a vision system. More generally, vision can be used to control very different dynamic systems, for example vehicles, aircraft and submarines.

Vision systems are generally classified according to the number of cameras and to their positions. Single-camera vision systems are generally used since they are cheaper and easier to build than multi-camera vision systems. On the other hand, using two cameras in a stereo configuration [26, 22, 27] (i.e. the two cameras have a common field of view) makes several computer vision problems easier. If the camera(s) are mounted on the robot, we call the system "in-hand". In contrast, if the camera observes the robot from the outside, we can call the system "out-hand" (the term "stand-alone" is generally used in the literature). Hybrid systems also exist, where one camera is in-hand and another stand-alone camera observes the scene [19].

A fundamental classification of visual servoing approaches depends on the design of the control scheme. Two different control schemes are generally used for the visual servoing of a dynamic system [48, 30]. The first control scheme is called "direct visual servoing" since the vision-based controller directly computes the input of the dynamic system (see Figure 1) [33, 47]. The visual servoing is carried out at a very fast rate (at least 100 Hz, i.e. a sampling period of at most 10 ms).

[Figure 1: Direct visual servoing system. Block diagram: reference → vision-based control → dynamic system → camera, with the camera output fed back to the controller.]

The second control scheme can be called, in contrast to the first one, "indirect visual servoing" since the vision-based controller computes a reference control law which is sent to the low-level controller of the dynamic system (see Figure 2). Most of the visual servoing schemes proposed in the literature follow an indirect control scheme, which is called "dynamic look-and-move" [30]. In this case, the servoing of the inner loop (generally with a period of 10 ms) must be faster than the visual servoing (generally with a period of 50 ms) [8].

[Figure 2: Indirect visual servoing system. Block diagram: reference → vision-based control → low-level control → dynamic system → camera, with the camera output fed back to the controller.]

For simplicity, in this paper we consider examples of positioning tasks using a manipulator with a single camera mounted on the end-effector.
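To make the structure of the indirect scheme concrete, the following minimal sketch shows a dynamic look-and-move loop in Python. All functions (grab_image, extract_features, control_law, send_velocity_reference) are hypothetical placeholders standing in for a real camera driver, feature extractor, control law and robot interface; the simple proportional law and the 50 ms outer period are illustrative assumptions, not a prescription from the survey.

```python
import time

def grab_image():
    """Acquire the current camera image (hypothetical placeholder)."""
    return None

def extract_features(image):
    """Extract image features, e.g. interest points (hypothetical placeholder)."""
    return [0.0, 0.0]

def control_law(features, reference_features, gain=0.5):
    """Compute a camera velocity reference from the feature error.
    A simple proportional law, v = -gain * (s - s*), shown for illustration."""
    return [-gain * (s - s_ref) for s, s_ref in zip(features, reference_features)]

def send_velocity_reference(velocity):
    """Pass the velocity reference to the robot's low-level controller,
    which runs its own faster inner loop (hypothetical placeholder)."""
    pass

reference_features = [10.0, 20.0]  # desired feature values s* (illustrative)

# Outer vision loop at ~20 Hz (50 ms period); the inner joint-level loop of
# the low-level controller runs faster (e.g. every 10 ms) and is not shown.
for _ in range(100):
    image = grab_image()
    features = extract_features(image)
    velocity = control_law(features, reference_features)
    send_velocity_reference(velocity)
    time.sleep(0.05)  # 50 ms visual servoing period
```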
Examples are taken from work done at the University of Cambridge for the European Long Term Research Project VIGOR (Visually guided robots using uncalibrated cameras). The VIGOR project, led by INRIA Rhône-Alpes, produced a successful application of visual servoing to a welding task for the shipbuilding industry at Odense Steel Shipyard. Several other examples of visual servoing systems can be found in [24] and in the special issues on visual servoing which have been published in international journals. The first one, published in the IEEE Transactions on Robotics and Automation, October 1996, contains a detailed tutorial [30]. The second one was published in the International Journal of Computer Vision, June 2000. A third special issue on visual servoing should appear in fall 2002 in the International Journal of Robotics Research.

2 Vision systems

A "pinhole" camera performs the perspective projection of a 3D point onto the image plane. The image plane is a matrix of light-sensitive cells; the resolution of the image is the size of this matrix, and a single cell is called a "pixel". For each pixel of coordinates (u, v), the camera measures the intensity of the light. For example, a 3D point with homogeneous coordinates $\mathbf{X} = (X, Y, Z, 1)$ projects to an image point with homogeneous coordinates $\mathbf{p} = (u, v, 1)$ (see Figure 3):

$$\mathbf{p} \propto \begin{bmatrix} \mathbf{K} & \mathbf{0} \end{bmatrix} \mathbf{X} \qquad (1)$$

where $\mathbf{K}$ is the matrix containing the intrinsic parameters of the camera:

$$\mathbf{K} = \begin{bmatrix} f k_u & f k_u \cot\phi & u_0 \\ 0 & \dfrac{f k_v}{\sin\phi} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (2)$$

where $u_0$ and $v_0$ are the pixel coordinates of the principal point, $k_u$ and $k_v$ are the scaling factors along the $\vec{u}$ and $\vec{v}$ axes (in pixels/meter), $\phi$ is the angle between these axes and $f$ is the focal length. For most commercial cameras, it is a reasonable approximation to assume square pixels (i.e. $\phi = \pi/2$ and $k_u = k_v$).

[Figure 3: Camera model]

The intrinsic parameters of the camera are often only roughly known. Precise calibration of the parameters is a tedious procedure which requires a specific calibration grid [17]. It is thus preferable to estimate the intrinsic parameters without knowing the model of the observed object. If several images of any rigid object are available, it is possible to use a self-calibration algorithm [16] to estimate the camera intrinsic parameters.

2.1 Features extraction

Vision-based control approaches generally use points as visual features. One of the best-known algorithms used to extract interest points is the Harris detector [23]. However, several other features (straight lines, ellipses, contours, etc.) can be extracted from the images and used in the control scheme. One of the best-known algorithms used to extract contours from an image was proposed by Canny [3].

2.2 Matching features

The problem of matching features, common to all visual servoing techniques, has been investigated in the literature but it is not yet solved. For the model-based approach, we need to match the model to the current image [31]. With the model-free approach, we need to match feature points [52] or curves [49] between the initial and reference views. Finally, when the camera is zooming, we need to match images with different resolutions [12]. Figure 4 shows an example of matching features between two views of the same object. The matching problem consists in finding the features in the left image which correspond to the features in the right image. The problem is particularly difficult when the displacement of the camera between the two images is large and when the lighting conditions change.

[Figure 4: Matching features between two images]
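As a concrete illustration of point matching between two views, the sketch below uses OpenCV. Note that it relies on the ORB detector/descriptor rather than the Harris detector cited above, since Harris provides interest points but no descriptor to match; the image file names are placeholders.

```python
import cv2

# Load two views of the same scene (file names are placeholders).
img1 = cv2.imread("view_left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_right.png", cv2.IMREAD_GRAYSCALE)

# Detect interest points and compute descriptors in each view.
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two views with a brute-force matcher and
# cross-checking, then sort the candidate correspondences by distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate correspondences found")
```

In practice the candidate matches are usually filtered further, for example with a robust geometric consistency check such as RANSAC, precisely because large camera displacements and lighting changes produce outliers.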
2.3 Tracking features

Tracking features is a problem similar to matching; however, in this case the displacement between the two images is generally smaller. Several tracking algorithms have been proposed in the literature. They can be classified according to the a priori knowledge of the target used by the algorithm: see [18, 44, 45] if the model of the target is known, and [21, 11, 50, 41] if it is unknown.

2.4 Motion estimation

The use of geometric features in visual servoing supposes the presence of these features on the target. Often, textured objects do not have any evident geometric features. In this case, a different approach to tracking and visual servoing can be obtained by estimating the motion of the target between two consecutive images [43]. The velocity of the target in the image (see Figure 5) can be measured without any a priori knowledge of the model of the target.

[Figure 5: Motion estimation between two consecutive images]

3 Visual servoing approaches

Visual servoing schemes can be classified on the basis of the knowledge we have of the target and of the camera parameters. If the camera parameters are known, we can use a "calibrated visual servoing" approach, while if they are only roughly known, we must use an "uncalibrated visual servoing" approach. If a 3D model of the target is available, we can use a "model-based visual servoing" approach, while if the 3D model of the target is unknown, we must use a "model-free visual servoing" approach.

3.1 Model-based visual servoing

Let $F_0$ be the coordinate frame attached to the target, and let $F^*$ and $F$ be the coordinate frames attached to the camera in its desired and current positions respectively. Knowing the coordinates, expressed in $F_0$, of at least four points of the target [10] (i.e. the 3D model of the target is supposed to be perfectly known), it is possible from their projections to compute the desired camera pose and the current camera pose (thus, the robot can be servoed to the reference pose). In this case, the camera parameters must be perfectly known, and we are in the presence of a calibrated visual servoing scheme. If more than four points of the target are available, it is possible to compute the pose of the camera without knowing the camera parameters [15], and we are in the presence of an uncalibrated visual servoing scheme.
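As a minimal sketch of the calibrated, model-based pose computation described above, the following Python/OpenCV example recovers the camera pose from four known target points and their image projections. OpenCV's solvePnP is used here as a stand-in for the specific method of [10], and the intrinsic matrix and all point coordinates are illustrative values, not data from the survey.

```python
import numpy as np
import cv2

# Known 3D points of the target, expressed in the target frame F0
# (coordinates in meters; values are illustrative only).
model_points = np.array([
    [0.0, 0.0, 0.0],
    [0.1, 0.0, 0.0],
    [0.1, 0.1, 0.0],
    [0.0, 0.1, 0.0],
], dtype=np.float64)

# Their projections in the current image (pixel coordinates; illustrative).
image_points = np.array([
    [320.0, 240.0],
    [420.0, 245.0],
    [415.0, 340.0],
    [325.0, 335.0],
], dtype=np.float64)

# Intrinsic parameter matrix K, assumed perfectly known; square pixels
# (phi = pi/2, so no skew term) and illustrative values.
f_ku = 800.0
K = np.array([[f_ku, 0.0, 320.0],
              [0.0, f_ku, 240.0],
              [0.0, 0.0, 1.0]])

# Solve the perspective-from-n-points problem: recover the rotation and
# translation of the target frame F0 with respect to the camera frame.
ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
print("pose found:", ok)
print("rotation:\n", R)
print("translation:\n", tvec.ravel())
```

Computing this pose once for the current image and once for the desired image gives the two poses from which the positioning error, and hence the servo command, can be derived.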