
APPLICATION OF COMPUTER VISION METHODS AND ALGORITHMS IN DOCUMENTATION OF CULTURAL HERITAGE

David KÁŇA 1, Vlastimil HANZL 2

1 Geodis Brno, ltd., Lazaretní 11a, Brno, Czech Republic, [email protected]
2 Brno University of Technology, Faculty of Civil Engineering, Veveří 331/98, Brno, Czech Republic, [email protected]

Keywords: computer vision, interest operator, matching

Abstract: The main task of this paper is to describe methods and algorithms used in computer vision for the fully automatic reconstruction of exterior orientation in ordered and unordered sets of images captured by calibrated digital cameras, without prior information about camera positions or scene structure. Attention is paid to the SIFT interest operator, which finds key points that describe image areas unambiguously with respect to scale and rotation, so that these areas can be compared with regions in other images. Methods for matching key points, computing the relative orientation, and linking sub-models to estimate the parameters entering the complex bundle adjustment are also discussed. Finally, the paper compares the results achieved with the above system to the results obtained by standard photogrammetric methods during the processing of the project documentation for the reconstruction of the Žinkovy castle.

1. INTRODUCTION

Images captured by digital cameras are one of the most important forms of information in the documentation of cultural heritage. Effective determination of the camera pose in space is necessary for subsequent use of the images for measuring purposes. The automatic recovery of exterior orientation can be divided into three main tasks: key point detection and matching, relative orientation, and bundle adjustment. This paper presents a practical experiment with such a procedure.

2. KEY POINT EXTRACTION

2.1 SIFT – scale invariant feature transform

The initial phase in comparing and relatively orienting two images is the choice of characteristic (key) points in the images. A key point should characterize an image area unambiguously, so that the area can be reliably found and compared with the same area in a different image. Finding the corresponding points in both images defines a corresponding pair (a correspondence). The SIFT operator was chosen as the most convenient detector for detecting and comparing significant points in the image. Unlike simple correlation between two image areas of, for example, 21 x 21 pixels, this detector is partly invariant to changes of viewing geometry, that is to rotation (circa 15 degrees) and to scale change, and it is partly invariant to noise as well. The SIFT detector is based on searching for extrema in difference images obtained by convolving the image function I(x, y) with Gaussian filters G(x, y, σ) with variable values of σ:

D(x, y, σ) = L(x, y, kσ) − L(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y),   (1)

where L(x, y, σ) = G(x, y, σ) * I(x, y) is the image blurred with a Gaussian of width σ. The exact procedure is described, for example, in [2]. A minimal code sketch of this difference-of-Gaussians construction is given below.

Figure 1: Blurred images and their differences
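The following minimal Python sketch illustrates the construction in equation (1) using OpenCV and NumPy; the values of sigma and k and the number of levels are illustrative assumptions, not parameters taken from this paper.

import cv2
import numpy as np

def dog_stack(image_gray, sigma=1.6, k=2 ** 0.5, levels=5):
    # Blur the image with Gaussians of increasing width: L(x, y, sigma * k**i).
    img = image_gray.astype(np.float32)
    blurred = [cv2.GaussianBlur(img, (0, 0), sigma * k ** i) for i in range(levels)]
    # D(x, y, sigma) = L(x, y, k * sigma) - L(x, y, sigma), equation (1).
    return [blurred[i + 1] - blurred[i] for i in range(levels - 1)]

Each returned array corresponds to one of the difference images of Figure 1, in which local extrema are then sought.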
2.2 Finding extrema

The blurred images are subtracted from each other, producing difference images. These differences are an approximation of the second derivative of the image function and serve to detect local extrema. After the difference images (Fig. 1) have been created, each pixel value is compared with its eight neighbouring pixels at the same level of blurring and with the nine neighbouring pixels each at the levels of blurring one step higher and one step lower. If the value of the tested pixel is the lowest, or alternatively the highest, of all the neighbouring pixels, the pixel is chosen as a possible key point. Once a candidate key point has been found by comparison with its neighbours, it is necessary to decide about its stability, and therefore about its possible rejection, on the basis of information about its location, scale and ratio of principal curvatures. This information enables effective removal of undesired points in low-contrast areas. The 3 x 3 pixel surrounding of each point is approximated by a three-dimensional quadratic function, and the maximum, or alternatively the minimum, of this function defines the exact location of the key point with subpixel accuracy. Points along edges are removed as well, according to the ratio of the curvatures in two perpendicular directions.

2.3 Orientation assignment

A certain orientation can be assigned to each key point. This step makes the key point descriptor invariant to image rotation, because the descriptor is expressed relative to the key point orientation. The orientation is computed at the level of smoothing of the given key point and therefore does not depend on scale. In the blurred image, the gradient magnitudes m(x, y) and orientations θ(x, y) of the function L(x, y, σ) are computed. The orientations in the surrounding of the key point are then accumulated into an orientation histogram of the neighbouring pixels, containing 36 bins for the 360 degree range with 10 degree intervals. The peak of this histogram, the bin with the largest count, is found, and this dominant rotation is assigned to the key point. It is further tested whether other bins reach 80 percent of the largest count. If so, a new key point is created with the same coordinates but with a new orientation given by that direction. As a result, a series of key points with the same coordinates but various orientations can arise in some places.

2.4 Key point descriptor

The parameters defining the location, scale and orientation of a key point also define an auxiliary coordinate system, in which a descriptor can be expressed that clearly describes the area around the key point location. The descriptor is based on the image gradients and their orientations in a region surrounding the key point: an area of 16 x 16 pixels is divided into 16 blocks of 4 x 4 pixels. For every block a histogram of eight principal orientations is created, weighted by the gradient magnitudes using a Gaussian function. By ordering and normalizing these 16 partial histograms containing 8 bins each, the descriptor, a vector of dimension 128, is formed.

Figure 2: Key points and their orientations

For key point detection the implementation [1] was used; for further acceleration of the computation, an implementation using hardware acceleration on the CUDA platform [4] will be tested. A short usage sketch follows below.
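As a rough functional equivalent of sections 2.1 to 2.4, the SIFT implementation shipped with OpenCV returns key points carrying location, scale and orientation together with the 128-dimensional descriptors. This is only a hedged sketch, not the implementation [1] used in this work, and the file name is hypothetical.

import cv2

img = cv2.imread("castle_facade.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
# Each key point stores location, scale and orientation (pt, size, angle);
# descriptors is an (n, 128) float array, one row per key point.
for kp in keypoints[:3]:
    print(kp.pt, kp.size, kp.angle)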
3. FINDING CORRESPONDENCES

Once key points with their descriptors have been detected in each image, we can proceed to pairing, that is to finding corresponding point couples (correspondences) that arise when a point in three-dimensional space is projected into both images. The degree of agreement of two key points is determined from the Euclidean distance of their SIFT descriptor vectors in two ways. Because no information about the image sequence is available, it is necessary to compare the images each to each.

3.1 Symmetric pairing

For every key point in the first image, the point with the nearest descriptor distance in the second image is found. Likewise, for each key point in the second image the nearest key point in the first image is found. A pair of points is labelled as a potential correspondence when the two comparisons agree mutually.

3.2 Distance ratio test

Lowe [2] recommends testing the agreement of key points by the ratio of the Euclidean descriptor distances to the first and the second nearest point, and proposes a limiting distance ratio of 0.6. Obviously, more than one key point can fulfil this criterion; for the stability and reliability of the correspondences it is convenient not to use such multiple correspondences in further calculations. This test proved not very suitable for markedly repetitive textures (for example, images of the facade of a panel house). Both pairing tests are sketched in code after section 4.1.

4. FUNDAMENTAL AND ESSENTIAL MATRIX

The correspondence set obtained by the above procedures is, however, contaminated by errors and false correspondences, which arise from changes of camera location, changes of lighting, digital image noise and so on. These false correspondences can be eliminated by using a geometric criterion, the epipolar condition.

Figure 3: Illustration of epipolar geometry

The point X together with the projection centers C and C′ forms the epipolar plane in 3D space. The intersections of the epipolar plane with the projection planes form the epipolar lines (epipolars), and the points x and x′, the projections of the point X into the projection planes, lie on these epipolar lines. The epipolar lines also pass through the epipoles e and e′, where an epipole is the projection of the projection center of one camera into the projection plane of the other camera. The algebraic formulation of the epipolar condition is equation (2):

(x′ y′ 1) F (x y 1)ᵀ = 0,   (2)

where F is the fundamental matrix, of size 3 x 3 and rank 2. The fundamental matrix defines the relation between the two cameras without any dependency on the scene structure, and its calculation does not require knowledge of the interior orientation parameters of the cameras.

4.1 Fundamental matrix computation using correspondences

The matrix F has seven degrees of freedom, so the minimal number of correspondences is seven; the number of solutions can then vary from one to three. For numerical stability it is preferable to use the eight-point algorithm and to find the best solution using the SVD decomposition. Formula (2) defines the relation between two corresponding points. Writing F as

        | f11 f12 f13 |
    F = | f21 f22 f23 |   (3)
        | f31 f32 f33 |

we obtain one linear equation for each correspondence:

x′x f11 + x′y f12 + x′ f13 + y′x f21 + y′y f22 + y′ f23 + x f31 + y f32 + f33 = 0.   (4)

For n correspondences we obtain a homogeneous system of linear equations

Af = 0,   (5)

where the i-th row of A is (x′ᵢxᵢ, x′ᵢyᵢ, x′ᵢ, y′ᵢxᵢ, y′ᵢyᵢ, y′ᵢ, xᵢ, yᵢ, 1). The least-squares solution is the vector f for which ‖Af‖ is minimal subject to the constraint ‖f‖ = 1; this approximately minimizes the distances of the points from their epipolar lines. A minimal sketch of this solver is given below.
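The following minimal sketch implements equations (3) to (5) directly, under the assumption of unnormalised pixel coordinates; the usual coordinate normalisation, which the text does not discuss, is omitted for brevity, and the function name is ours.

import numpy as np

def fundamental_eight_point(pts1, pts2):
    # pts1, pts2: (n, 2) arrays of corresponding points x and x', n >= 8.
    x, y = pts1[:, 0], pts1[:, 1]
    xp, yp = pts2[:, 0], pts2[:, 1]
    # One row per correspondence, with the column order of equation (4).
    A = np.column_stack([xp * x, xp * y, xp, yp * x, yp * y, yp,
                         x, y, np.ones(len(x))])
    # f is the right singular vector of the smallest singular value:
    # it minimises ||Af|| subject to ||f|| = 1, equation (5).
    _, _, vt = np.linalg.svd(A)
    F = vt[-1].reshape(3, 3)
    # Enforce rank 2 by zeroing the smallest singular value of F.
    u, s, v = np.linalg.svd(F)
    s[2] = 0.0
    return u @ np.diag(s) @ v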
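For completeness, the symmetric pairing of section 3.1 combined with the distance ratio test of section 3.2, which together produce the correspondences entering the solver above, might be sketched as follows; the brute-force matcher and the helper name are illustrative choices, and the 0.6 threshold is the limit mentioned in section 3.2.

import cv2

def match_symmetric_ratio(desc1, desc2, ratio=0.6):
    bf = cv2.BFMatcher(cv2.NORM_L2)
    # Two nearest neighbours in each direction, needed for the ratio test.
    m12 = bf.knnMatch(desc1, desc2, k=2)
    m21 = bf.knnMatch(desc2, desc1, k=2)
    best21 = {m[0].queryIdx: m[0].trainIdx for m in m21 if len(m) == 2}
    pairs = []
    for m in m12:
        if len(m) < 2:
            continue
        first, second = m
        # Keep the pair only if the ratio test passes and the nearest-neighbour
        # relation holds in both directions (symmetric pairing).
        if (first.distance < ratio * second.distance
                and best21.get(first.trainIdx) == first.queryIdx):
            pairs.append((first.queryIdx, first.trainIdx))
    return pairs

In practice a robust estimator such as RANSAC (for example cv2.findFundamentalMat with the cv2.FM_RANSAC flag in OpenCV) is typically wrapped around the eight-point solver to reject the false correspondences discussed in section 4.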