

Photogrammetric and Lidar Data Registration Using Linear Features

Ayman Habib, Mwafag Ghanma, Michel Morgan, and Rami Al-Ruzouq

Department of Geomatics Engineering, University of Calgary, 2500 University Drive NW, Calgary, Alberta, Canada T2N 1N4 ([email protected]; [email protected]; [email protected]; [email protected])

Photogrammetric Engineering & Remote Sensing, Vol. 71, No. 6, June 2005, pp. 699–707. © 2005 American Society for Photogrammetry and Remote Sensing

Abstract

The enormous increase in the volume of datasets acquired by lidar systems is leading towards their extensive exploitation in a variety of applications, such as surface reconstruction, city modeling, and generation of perspective views. Though being a fairly new technology, lidar has been influenced by and had a significant impact on photogrammetry. Such an influence or impact can be attributed to the complementary nature of the information provided by the two systems. For example, photogrammetric processing of imagery produces accurate information regarding object space break lines (discontinuities). On the other hand, lidar provides accurate information describing homogeneous physical surfaces. Hence, it proves logical to combine data from the two sensors to arrive at a more robust and complete reconstruction of 3D objects. This paper introduces alternative approaches for the registration of data captured by photogrammetric and lidar systems to a common reference frame. The first approach incorporates lidar features as control for establishing the datum in the photogrammetric bundle adjustment. The second approach starts by manipulating the photogrammetric imagery to produce a 3D model, including a set of linear features along object space discontinuities, relative to an arbitrarily chosen coordinate system. Afterwards, conjugate photogrammetric and lidar straight-line features are used to establish the transformation between the arbitrarily chosen photogrammetric coordinate system and the lidar reference frame. The second approach (bundle adjustment followed by similarity transformation) is general enough to be applied for the co-registration of multiple three-dimensional datasets regardless of their origin (e.g., adjacent lidar strips, surfaces in GIS databases, and temporal elevation data). The registration procedure would allow for the identification of inconsistencies between the surfaces in question. Such inconsistencies might arise from changes taking place within the object space or inaccurate calibration of the internal characteristics of the lidar and photogrammetric systems. Therefore, the proposed methodology is useful for change detection and system calibration applications. Experimental results from aerial and terrestrial datasets proved the feasibility of the suggested methodologies.

Introduction

Currently, a variety of applications demand fast and reliable collection of data about physical objects (e.g., automatic DEM generation, city modeling, and object recognition). Such applications require the availability of information pertaining to the geometric and semantic characteristics of such objects, in which surfaces play an important role (Habib and Schenk, 1999). Photogrammetry is the conventional method for surface reconstruction. However, lidar systems, whether ground based, airborne, or space borne, have recently emerged as a new technology with a promising potential towards dense and accurate data capture on physical surfaces (Schenk and Csathó, 2002). Photogrammetry and lidar have unique characteristics that make either technology preferable in specific applications. For example, photogrammetry is more suited for mapping heavily populated areas, while lidar is preferable in mapping polar regions. However, one can observe that a negative aspect in one technology is contrasted by a complementary strength in the other. Therefore, integrating the two systems would prove extremely beneficial (Schenk and Csathó, 2002).

Photogrammetric object space reconstruction starts with identifying features of interest in overlapping images. Conjugate features and the exterior orientation parameters of the involved images are then used in an intersection procedure yielding the corresponding object features. Surfaces derived from terrestrial and aerial imagery possess a rich body of scene information. Moreover, derived object space features are very accurate, especially if they appear in more than two images, as a result of the high redundancy. The weakness of photogrammetry is the "matching problem" (i.e., finding corresponding features in overlapping images). The success of automatic surface reconstruction from imagery is contingent on the reliability of the matching process. Manual or automatic matching is only possible when features with a unique gray-scale value distribution are used. As a result, implemented features usually correspond to locations along discontinuities in one or more directions within the images (e.g., edges and interest points). Such features usually pertain to discontinuities and break lines in the object space. Therefore, photogrammetric surfaces provide a rich set of information along object space break lines and almost no information along homogeneous surfaces with uniform texture.

Lidar has been conceived as a method to directly and accurately capture digital elevation data. However, in order to reach the high accuracy potential, the lidar system must be well calibrated and equipped with a high-end DGPS/INS navigation unit (Filin and Csathó, 1999). An appealing feature of the lidar output is the direct availability of 3D coordinates of points in the object space. The surface reconstruction process is simply formulated as a three-dimensional rigid body transformation of points from scanner space to object space. One should note that there is no inherent redundancy in the computation of lidar points.



Moreover, lidar surfaces are mainly positional, and there is no additional semantic or scene information available except when the intensity of the reflected signal is recorded. Since lidar provides a discrete set of irregularly distributed object points, the acquired surfaces possess rich information along homogeneous physical surfaces and almost no information along object space discontinuities.

It should be clear from the previous discussion that the integration of photogrammetric and lidar data would be extremely beneficial. For example, lidar surfaces could be used to constrain and resolve ambiguities in the photogrammetric matching process. Moreover, photogrammetric data will enrich lidar surfaces by providing more semantic attributes. Also, photogrammetric and lidar surfaces can be inspected for inconsistencies, which then have to be justified (e.g., changes in the object space or inaccurate calibration of the internal characteristics of either system). Therefore, successful integration will facilitate subsequent processing activities such as system calibration, object recognition, and generation of 3D textured models. However, achieving the full potential of the synergism between the two technologies is contingent on accurate and reliable co-registration of the respective surfaces relative to the same reference frame (Habib and Schenk, 1999; Schenk, 1999; Postolov et al., 1999). This should not be surprising, since any registration process aims at combining data and information from multiple sensors in order to achieve improved accuracies and better inference about the environment than could be attained through the use of a single sensor (Brown, 1992).

The most common methods for solving the registration problem between two datasets are based on the identification of common points. Such methods are not applicable when dealing with lidar surfaces, since they correspond to laser footprints rather than distinct points that could be identified in the imagery (Baltsavias, 1999). Traditionally, surface-to-surface registration or comparison has been achieved by interpolating both datasets into a regular grid. The comparison is then reduced to estimating the necessary shifts by analyzing the elevations at corresponding grid posts (Ebner and Strunz, 1988; Ebner and Ohlhof, 1994; Kilian et al., 1996). There are several problems with this approach. First, the interpolation to a grid will introduce errors, especially when dealing with surfaces captured over urban areas. Moreover, minimizing the differences between the surfaces along the z-direction is only valid when dealing with horizontal planar surfaces (Habib and Schenk, 1999). Postolov et al. (1999) introduced another approach, which does not require initial interpolation of the data. However, the implementation procedure involves an interpolation of one surface at the location of conjugate points on the other surface. Furthermore, the registration is based on minimizing the differences between the two surfaces along the z-direction. Schenk (1999) devised an alternative approach, where distances between points of one surface along surface normals to locally interpolated patches of the other surface are minimized. Habib and Schenk (1999) and Habib et al. (2001b) implemented this methodology within a comprehensive automatic registration procedure. Such an approach is based on the manipulation of photogrammetric data to produce object space planar patches. This might not always be possible, since photogrammetric surfaces provide accurate information along object space discontinuities while supplying almost no information along homogeneous surfaces with uniform texture.

This paper introduces alternative methodologies for the registration of photogrammetric and lidar data using three-dimensional, straight-line features. The following section outlines the main components of an effective registration paradigm. Afterwards, the methodology for extracting the registration primitives from photogrammetric and lidar data is explained. Then, the details of the mathematical model for establishing the transformation parameters between the two datasets are introduced. The last two sections cover experimental results (using terrestrial and airborne datasets) as well as conclusions and recommendations for future work.

Registration Paradigm

In general, any registration process aims at combining data and information from multiple sensors in order to achieve improved accuracy and better inference about the environment than could be attained through the use of a single sensor. Due to the enormous increase in the volume of spatial data that is being acquired by an ever-growing number of sensors, there is a pressing need for the development of accurate and robust registration procedures that can handle spatial data with varying formats. An effective registration procedure must address the following issues (Brown, 1992):

Registration Primitives

The first step in the registration procedure is to decide upon the primitives to use for establishing the transformation between the datasets in question. The primitive choice influences subsequent registration steps. In this research, straight-line features have been used as the registration primitives. This choice is motivated by the fact that such primitives can be reliably, accurately, and automatically extracted from photogrammetric and lidar datasets.

Similarity Measure

The next step in the registration paradigm is the selection of the similarity measure, which mathematically expresses the relationship between the attributes of conjugate primitives in overlapping surfaces. The similarity measure formulation depends on the selected registration primitives and their respective attributes. In this work, the similarity measure formulation has been incorporated in mathematical constraints ensuring the coincidence of conjugate linear features after establishing the proper co-registration between the involved surfaces.

Registration Transformation Function

The most fundamental characteristic of any registration technique is the type of spatial transformation or mapping function needed to properly overlay the two datasets. In this research, a 3D similarity transformation is used as the registration transformation function, Equation 1. Such a transformation assumes the absence of systematic biases in both photogrammetric and lidar surfaces (Filin, 2001). However, the quality of fit between conjugate primitives can be analyzed to investigate the presence of such behavior.

\[
\begin{bmatrix} X_A \\ Y_A \\ Z_A \end{bmatrix}
=
\begin{bmatrix} X_T \\ Y_T \\ Z_T \end{bmatrix}
+ S\,R(\Omega, \Phi, K)
\begin{bmatrix} X_a \\ Y_a \\ Z_a \end{bmatrix}
\tag{1}
\]

where \(S\) is the scale factor, \((X_T, Y_T, Z_T)^T\) is the translation vector between the origins of the photogrammetric and lidar coordinate systems, \(R(\Omega, \Phi, K)\) is the 3D orthogonal rotation matrix between the two coordinate systems, \((X_a, Y_a, Z_a)^T\) are the photogrammetric point coordinates, and \((X_A, Y_A, Z_A)^T\) are the coordinates of the corresponding point relative to the lidar reference frame.
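To make Equation 1 concrete, the following minimal Python/NumPy sketch (our own illustration, not code from the paper; the rotation-angle convention is an assumption standing in for the paper's definition of Ω, Φ, and K) maps a photogrammetric point into the lidar reference frame:

import numpy as np

def rotation_matrix(omega, phi, kappa):
    # 3D orthogonal rotation matrix R(Omega, Phi, Kappa), composed here of
    # sequential rotations about the x-, y-, and z-axes (angles in radians);
    # the exact convention is illustrative.
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    r_x = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    r_y = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    r_z = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return r_x @ r_y @ r_z

def to_lidar_frame(p_photo, shift, scale, omega, phi, kappa):
    # Equation 1: [X_A, Y_A, Z_A] = [X_T, Y_T, Z_T] + S * R @ [X_a, Y_a, Z_a]
    return shift + scale * rotation_matrix(omega, phi, kappa) @ p_photo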


Matching Strategy

To automate the solution of the registration problem, a controlling framework that utilizes the primitives, the similarity measure, and the transformation function must be established. Such a framework is usually referred to as the matching strategy. In the current research, the correspondence between conjugate entities has been solved manually; a comprehensive matching strategy will be the focus of future work.

The research presented in this paper is concerned with the first two issues of the abovementioned registration paradigm. The following two sections describe the extraction of the registration primitives from photogrammetric and lidar datasets, as well as the manipulation of these primitives to derive an estimate of the transformation parameters between the involved datasets.

Extraction of Registration Primitives

In this research, overlapping surfaces are co-registered by virtue of conjugate linear features extracted from photogrammetric and lidar data. The next subsection explains the manipulation of photogrammetric data for the purpose of producing a 3D model of the object space, relative to an arbitrarily chosen reference frame, including a set of linear features along objects' discontinuities. Afterwards, the generation of corresponding linear features from the lidar dataset will be discussed. Photogrammetric and lidar discontinuities will then be used to establish the transformation between the arbitrarily chosen photogrammetric coordinate system and the lidar reference frame.

Photogrammetric Linear Features

The methodology for producing 3D straight-line features from photogrammetric datasets depends on the representation scheme of such features in the object and image space. Prior research in this area concluded that representing object space straight lines using two points along the line is the most convenient representation from a photogrammetric point of view, since it yields well-defined line segments (Habib, 1999; Habib et al., 2002). On the other hand, image space lines will be represented by a sequence of 2D coordinates of intermediate points along the feature. This representation is attractive since it can handle image space linear features in the presence of distortions, as they will cause deviations from straightness. Moreover, it will allow for the incorporation of linear features in scenes captured by line cameras, since perturbations in the flight trajectory would lead to deviations from straightness in image space linear features corresponding to object space straight lines (Habib et al., 2001a).

The suggested procedure starts by identifying two points in one (Figure 1a) or two images (Figure 1b) along the line under consideration. These points will be used to define the corresponding object space line segment. One should note that these points can be selected in any of the images within which this line appears. Moreover, they need not be identifiable or even visible in other images. Intermediate points along the line are measured in all the overlapping images. Similar to the end points, the intermediate points need not be conjugate (Figure 1).

Figure 1. End points defining the object line are either measured in (a) one image or (b) two images.

For the end points, the relationship between the measured image coordinates {(x1, y1), (x2, y2)} and the corresponding ground coordinates {(X1, Y1, Z1), (X2, Y2, Z2)} is established through the collinearity equations. Only four equations will be written for each line. The incorporation of intermediate points into the adjustment procedure is achieved through a mathematical constraint. The underlying principle of this constraint is that the vector from the perspective center to any intermediate image point along the line is contained within the plane defined by the perspective center of that image and the two points defining the straight line in the object space (Figure 2). In other words, for the intermediate points a constraint is introduced which indicates that the points (X1, Y1, Z1), (X2, Y2, Z2), (XOi, YOi, ZOi) (the perspective center of image i), and (xi, yi, 0) are coplanar. This can be mathematically described through Equation 2:

\[
(\vec{V}_1 \times \vec{V}_2) \cdot \vec{V}_3 = 0
\tag{2}
\]

In the above equation, \(\vec{V}_1\) is the vector connecting the perspective center to the first end point along the object space line, \(\vec{V}_2\) is the vector connecting the perspective center to the second end point along the object space line, and \(\vec{V}_3\) is the vector connecting the perspective center to an intermediate point along the corresponding image line. It should be noted that the three vectors should be represented relative to a common coordinate system (e.g., the ground coordinate system).

Figure 2. Perspective transformation between image and object space straight lines and the coplanarity constraint for intermediate points along the line.


The constraint in Equation 2 incorporates the image coordinates of the intermediate point, the Exterior Orientation Parameters (EOP), the Interior Orientation Parameters (IOP) including distortion parameters, as well as the ground coordinates of the points defining the object space line. Such a constraint does not introduce any new parameters and can be written for all intermediate points along the line in the imagery. The number of constraints is equal to the number of intermediate points measured along the image line. In general, Equation 2 and the four collinearity equations are used to estimate the EOP as well as the object coordinates of the end points along the linear feature in question. However, for self-calibration exercises, the IOP can be solved for as well.

The manipulation of tie straight lines appearing in a group of overlapping images can be summarized as follows: first, two points that define the object line are measured in one or two images. For these points, four collinearity equations are formulated. Then, intermediate points are measured along the image line in the overlapping images. Each intermediate point provides one additional constraint of the form in Equation 2. Such a methodology can be easily incorporated in an existing point-based bundle adjustment procedure. The treatment of a control linear feature (with known object coordinates of its end points) is slightly different. Since the control line already provides the end points, they need not be measured in any of the images. Therefore, the image space linear features will be represented by a group of intermediate points in all the images. Each intermediate point provides one constraint of the form in Equation 2. For single photo resection, a minimum of three control lines per image is required. One should note that the above discussion outlines one approach that can be utilized for the extraction of linear features from photogrammetric data. However, other approaches might be used to achieve this objective (e.g., Baillard et al., 1999; Habib et al., 2003).
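Since Equation 2 is the workhorse of the proposed adjustment, a minimal sketch of its residual evaluation is given below (Python/NumPy; the names and the ground-to-image rotation convention are our assumptions, not the authors' code). A well-adjusted solution drives this residual towards zero for every intermediate point:

import numpy as np

def coplanarity_residual(pc, m_rot, xp, yp, c, p1, p2, img_pt):
    # Equation 2: (V1 x V2) . V3 = 0 for one intermediate image point.
    # pc      : perspective center in the ground coordinate system (EOP)
    # m_rot   : rotation matrix from ground to image coordinates (EOP)
    # xp, yp, c : principal point offsets and principal distance (IOP),
    #             assuming distortions have already been removed
    # p1, p2  : object space end points defining the line
    # img_pt  : measured image coordinates (x_i, y_i)
    v1 = p1 - pc                             # to the first end point
    v2 = p2 - pc                             # to the second end point
    ray = np.array([img_pt[0] - xp, img_pt[1] - yp, -c])
    v3 = m_rot.T @ ray                       # image ray in the ground frame
    return np.dot(np.cross(v1, v2), v3)      # zero when the points are coplanar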

Lidar Linear Features

The growing acceptance of lidar as an efficient data acquisition system by researchers in the photogrammetric community has led to a number of studies aiming at preprocessing lidar data. The purpose of such studies ranges from simple primitive detection and extraction to more complicated tasks such as segmentation and perceptual organization (Maas and Vosselman, 1999; Csathó et al., 1999; Lee and Schenk, 2001; Vosselman and Dijkman, 2001; Filin, 2002; Rottensteiner and Briese, 2003). Lidar data segmentation into a group of homogeneous patches can be carried out using an accumulator array that keeps track of the frequency of points along predefined analytical surfaces involving unknown parameters (Maas and Vosselman, 1999). Another alternative for surface segmentation starts by identifying seed points, which can be augmented by neighboring points that fit the behavior of the sought-after surfaces (Lee and Schenk, 2001). Identified point clouds belonging to the selected patches from either approach are then used within a least squares adjustment to determine the encompassing plane. The quality of fit should be analyzed to check whether the segmented patch is planar or not. Also, a blunder detection procedure has to be implemented to remove points outside the segmented planar patch. Neighboring planar patches are finally intersected to produce lidar straight-line features, which are defined by their end points. Filin (2002) introduced an alternative surface clustering technique, which starts by identifying patterns in the data based on a set of attributes that categorize the sought-after information and produce the best separation among classes. A grouping process is then applied to classify areas with homogeneous attributes.

This research aims at investigating the possibility of using linear features for the registration of photogrammetric and lidar data. Therefore, suspected planar patches in the lidar dataset are manually identified with the help of corresponding optical imagery (Figure 3). The selected patches are then checked using a least squares adjustment to determine whether they are planar or not and to remove blunders. Finally, neighboring planar patches with different orientations are intersected to determine the end points along object space discontinuities between the patches under consideration.

Figure 3. Manually identified planar patches within the (a) lidar data guided by the (b) corresponding optical image.

The extraction of linear features from ranging data can be simplified by using the intensity image provided by newly available lidar systems. In such a case, image processing or edge detection techniques can be applied to the intensity image to identify the linear features, which could then be related to photogrammetric linear features.
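The patch-fitting and intersection steps lend themselves to a compact sketch (Python/NumPy; our own illustrative code, with the delineation of the segment end points from the patch extents omitted). The plane normal is taken as the smallest singular vector of the centered patch, and intersecting two fitted planes yields a point and direction for the lidar line:

import numpy as np

def fit_plane(points):
    # Least squares plane through a segmented lidar patch (N x 3 array).
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                           # direction of least variance
    resid = (points - centroid) @ normal
    rms = np.sqrt((resid ** 2).mean())        # planarity / quality-of-fit check
    return normal, centroid, rms

def intersect_planes(n1, c1, n2, c2, eps=1e-8):
    # Intersection line of two neighboring patches with different
    # orientations: direction n1 x n2, plus a point solving both plane
    # equations (a third, gauge equation pins down the point).
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < eps:
        raise ValueError("patches are (nearly) parallel")
    a = np.vstack([n1, n2, direction])
    b = np.array([n1 @ c1, n2 @ c2, 0.0])
    point = np.linalg.solve(a, b)
    return point, direction / np.linalg.norm(direction)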


Photogrammetric to Lidar Data Registration Using Linear Features: Similarity Measure

This section discusses the manipulation of conjugate linear features for the purpose of determining the parameters of the registration transformation function relating the surfaces in question. Lidar linear features can be integrated with the photogrammetric data in two different ways. The first and most straightforward alternative is to use the lidar linear features as control for the photogrammetric bundle adjustment. In this scenario, the lidar features will establish the datum for the photogrammetric model. This alternative is restrictive, since it is not general enough to allow for any surface-to-surface registration exercise regardless of the surfaces' origin. For example, it cannot be used to establish the registration between two overlapping lidar surfaces. Moreover, direct incorporation of the lidar features in the photogrammetric adjustment will not allow for an explicit inspection of the compatibility or the discrepancy between the involved surfaces. Such a discrepancy might take place due to improper system calibration, measurement blunders, or physical changes in the object space.

The other alternative is to separately process the photogrammetric and lidar datasets to generate the linear features. It has to be mentioned that the datum for the photogrammetric bundle adjustment will be established by choosing an arbitrary reference frame. For example, seven out of the nine coordinates of three well-distributed tie points can be arbitrarily fixed. Afterwards, conjugate features will be manipulated to establish the parameters of the 3D similarity transformation relating the photogrammetric coordinate system to the lidar reference frame. Determining the parameters of the registration transformation function will be carried out using a similarity measure that involves the attributes of the linear features. The derivation of the similarity measure is based on the fact that the photogrammetric line segment should coincide with the corresponding lidar segment after applying the registration transformation function (Figure 4).

Figure 4. Similarity measure between photogrammetric and lidar linear features.

The formulation of the similarity measure depends on the choice of the selected parameters to represent the registration primitives (3D straight lines). As mentioned before, representing the line segments using two points along the line is the most convenient representation alternative. In this regard, it is worth mentioning that the end points representing corresponding lidar and photogrammetric line segments need not be conjugate. Other means for representing straight-line features will entail several problems, such as failure to represent finite line segments, singularities, variant error measures, and complicated models relating corresponding lines (Schwermann, 1994; Habib, 1999; Habib et al., 2002).

Referring to Figure 4, one can see that one of the end points of the photogrammetric line segment (e.g., point 1) should lie along the vector connecting the end points defining the lidar line segment (e.g., points A and B). Such coincidence will only take place after applying the necessary registration transformation function relating the two surfaces (assumed to be a 3D similarity transformation in this application). Such a constraint can be mathematically described by Equation 3:

\[
\begin{bmatrix} X_T \\ Y_T \\ Z_T \end{bmatrix}
+ S\,R(\Omega, \Phi, K)
\begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix}
=
\begin{bmatrix} X_A \\ Y_A \\ Z_A \end{bmatrix}
+ \lambda_1
\begin{bmatrix} X_B - X_A \\ Y_B - Y_A \\ Z_B - Z_A \end{bmatrix}
\tag{3}
\]

where \((X_T, Y_T, Z_T)^T\) is the translation vector between the origins of the photogrammetric and lidar coordinate systems, \(R(\Omega, \Phi, K)\) is the required rotation matrix to make the photogrammetric coordinate system parallel to the lidar reference frame, and \(S\) is the scale factor.

Another constraint of the form in Equation 3 can be introduced for the second point along the photogrammetric model (e.g., point 2), Equation 4:

\[
\begin{bmatrix} X_T \\ Y_T \\ Z_T \end{bmatrix}
+ S\,R(\Omega, \Phi, K)
\begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix}
=
\begin{bmatrix} X_A \\ Y_A \\ Z_A \end{bmatrix}
+ \lambda_2
\begin{bmatrix} X_B - X_A \\ Y_B - Y_A \\ Z_B - Z_A \end{bmatrix}.
\tag{4}
\]

Subtracting Equation 3 from Equation 4 yields the following mathematical expression:

\[
(\lambda_2 - \lambda_1)
\begin{bmatrix} X_B - X_A \\ Y_B - Y_A \\ Z_B - Z_A \end{bmatrix}
= S\,R(\Omega, \Phi, K)
\begin{bmatrix} X_2 - X_1 \\ Y_2 - Y_1 \\ Z_2 - Z_1 \end{bmatrix}.
\tag{5}
\]

Substituting \(\lambda = S/(\lambda_2 - \lambda_1)\), Equation 5 can be rewritten as follows:

\[
\begin{bmatrix} X_B - X_A \\ Y_B - Y_A \\ Z_B - Z_A \end{bmatrix}
= \lambda\,R(\Omega, \Phi, K)
\begin{bmatrix} X_2 - X_1 \\ Y_2 - Y_1 \\ Z_2 - Z_1 \end{bmatrix}.
\tag{6}
\]

Equation 6 should come as no surprise, since it mathematically formulates the concept that the photogrammetric line segment (1–2) should be parallel to the lidar line segment (A–B) after applying the rotation matrix (see Figure 4).

Dividing the first two rows of Equation 6 by the third one leads to the elimination of the scale factor \(\lambda\), Equation 7:

\[
\frac{X_B - X_A}{Z_B - Z_A} =
\frac{R_{11}(X_2 - X_1) + R_{12}(Y_2 - Y_1) + R_{13}(Z_2 - Z_1)}
     {R_{31}(X_2 - X_1) + R_{32}(Y_2 - Y_1) + R_{33}(Z_2 - Z_1)},
\qquad
\frac{Y_B - Y_A}{Z_B - Z_A} =
\frac{R_{21}(X_2 - X_1) + R_{22}(Y_2 - Y_1) + R_{23}(Z_2 - Z_1)}
     {R_{31}(X_2 - X_1) + R_{32}(Y_2 - Y_1) + R_{33}(Z_2 - Z_1)}.
\tag{7}
\]

Equation 7 can be written for each pair of conjugate line segments, yielding two equations which contribute towards the estimation of two rotation angles: the azimuth and the pitch angle along the line. On the other hand, the roll angle across the line cannot be estimated. Hence, a minimum of two non-parallel lines is needed to recover the three elements of the rotation matrix \(R(\Omega, \Phi, K)\).

Having estimated the rotation angles, one can proceed with the recovery of the scale factor and the shift components of the registration transformation function. To derive these parameters, the rotation matrix is first applied to the coordinates of the points defining the photogrammetric line segment, Equation 8:

\[
\begin{bmatrix} X_T \\ Y_T \\ Z_T \end{bmatrix}
+ S
\begin{bmatrix} U_1 \\ V_1 \\ W_1 \end{bmatrix}
=
\begin{bmatrix} X_A \\ Y_A \\ Z_A \end{bmatrix}
+ \lambda_1
\begin{bmatrix} X_B - X_A \\ Y_B - Y_A \\ Z_B - Z_A \end{bmatrix}
\tag{8}
\]


where

\[
\begin{bmatrix} U_1 \\ V_1 \\ W_1 \end{bmatrix}
= R(\Omega, \Phi, K)
\begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix}.
\]

Rearranging the terms in Equation 8 yields the following equation:

\[
\lambda_1
\begin{bmatrix} X_B - X_A \\ Y_B - Y_A \\ Z_B - Z_A \end{bmatrix}
=
\begin{bmatrix} X_T + S\,U_1 - X_A \\ Y_T + S\,V_1 - Y_A \\ Z_T + S\,W_1 - Z_A \end{bmatrix}.
\tag{9}
\]

In Equation 9, one can eliminate \(\lambda_1\) by dividing the first and second rows by the third one, Equation 10:

\[
\frac{X_B - X_A}{Z_B - Z_A} = \frac{X_T + S\,U_1 - X_A}{Z_T + S\,W_1 - Z_A},
\qquad
\frac{Y_B - Y_A}{Z_B - Z_A} = \frac{Y_T + S\,V_1 - Y_A}{Z_T + S\,W_1 - Z_A}.
\tag{10}
\]

A similar set of equations of the form of Equation 10 can be written for the second point along the photogrammetric line segment. The resulting four equations from both points are then used to derive two independent constraints, which could be utilized to estimate the shift components and the scale factor of the registration transformation function, Equation 11:

\[
\frac{X_T + S\,U_1 - X_A}{Z_T + S\,W_1 - Z_A} = \frac{X_T + S\,U_2 - X_A}{Z_T + S\,W_2 - Z_A},
\qquad
\frac{Y_T + S\,V_1 - Y_A}{Z_T + S\,W_1 - Z_A} = \frac{Y_T + S\,V_2 - Y_A}{Z_T + S\,W_2 - Z_A}.
\tag{11}
\]

Thus, a single pair of conjugate line segments yields two constraints of the form in Equation 11 towards the estimation of the scale and shift components. Therefore, at least two lines are needed to estimate the involved four parameters (XT, YT, ZT, and S). Through eigenvalue analysis, it has been established that these lines should not be coplanar. In summary, a minimum of two non-coplanar line segments is needed to recover the seven elements of the 3D similarity transformation. One should note that the parameters can be solved for in a sequential manner (i.e., using Equation 7 for estimating the rotation angles, followed by the estimation of the shift components and the scale factor through Equation 11) or simultaneously through concurrent consideration of the constraints in Equations 7 and 11.
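A compact sketch of the sequential solution follows (Python/NumPy; our own formulation, not the authors' implementation). The rotation is recovered from the direction-alignment condition of Equation 6, solved here in closed form with the SVD-based Kabsch method rather than by iterating the ratio form of Equation 7; the shifts and scale then follow from the condition underlying Equations 9 through 11, restated linearly by requiring the cross product of the transformed end point offset with the lidar direction to vanish:

import numpy as np

def skew(v):
    # Matrix form of the cross product: skew(v) @ w == np.cross(v, w).
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_rotation(photo_segments, lidar_segments):
    # Equation 6: lidar directions equal rotated photogrammetric directions.
    # Assumes consistently ordered end points and at least two
    # non-parallel lines; each segment is a pair of 3D end points.
    p = np.array([(b - a) / np.linalg.norm(b - a) for a, b in photo_segments])
    q = np.array([(b - a) / np.linalg.norm(b - a) for a, b in lidar_segments])
    u, _, vt = np.linalg.svd(p.T @ q)                  # direction cross-covariance
    d = np.diag([1.0, 1.0, np.linalg.det(vt.T @ u.T)]) # guard against reflection
    return vt.T @ d @ u.T                              # q_i ~ R @ p_i

def estimate_shift_and_scale(photo_segments, lidar_segments, rot):
    # Equations 9-11: each rotated photogrammetric end point must fall on
    # its lidar line, i.e. (T + S * rot @ p - A) x (B - A) = 0, which is
    # linear in T and S; at least two non-coplanar lines are required.
    rows, rhs = [], []
    for (p1, p2), (a, b) in zip(photo_segments, lidar_segments):
        d = b - a
        for p in (p1, p2):
            u = rot @ p
            rows.append(np.hstack([-skew(d), np.cross(u, d)[:, None]]))
            rhs.append(np.cross(a, d))
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.hstack(rhs), rcond=None)
    return x[:3], x[3]                                 # (XT, YT, ZT), S

With the seven parameters recovered, photogrammetric points are mapped into the lidar frame exactly as in Equation 1.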

Experimental Results

To illustrate the feasibility of using linear features to establish the registration between two three-dimensional datasets, two experiments involving data captured by terrestrial as well as aerial platforms were conducted. For the terrestrial dataset, a scene rich with planar surfaces and linear features was prepared, as depicted in Figure 5a. A SONY DSC-F707 camera with a five mega-pixel grid and a maximum resolution of 2560 × 1960 pixels was used to capture twelve overlapping images from different locations. A bundle adjustment procedure incorporating tie points as well as linear features was carried out according to the methodology outlined earlier. The datum of the photogrammetric model had been arbitrarily chosen by randomly fixing seven coordinates of three well-distributed tie points. The output of the bundle adjustment procedure included the ground coordinates of tie points in addition to the ground coordinates of points defining the object space line segments. Figure 5b shows the arrangement of linear features extracted from the imagery.

Figure 5. (a) Terrestrial photogrammetric dataset, and (b) extracted linear features.

A Cyrax 2400 ground-based laser scanner was used to capture a 3D point cloud over the same scene (Figure 6a). Homogeneous patches have been manually identified in the lidar data and then used in a least squares adjustment to fit planar surfaces. Neighboring planar surfaces were finally intersected to produce object space straight-line segments (Figure 6b). Using special targets, the Cyrax system provided 3D coordinates of selected points, which have been used as tie points within the photogrammetric bundle adjustment procedure. The lidar coordinates of these points will be used to check the quality of the registration by comparing them to the corresponding photogrammetric coordinates after applying the estimated transformation function.


Figure 6. (a) Terrestrial lidar dataset, and (b) extracted linear features.

Least squares adjustment was then used to solve for the parameters of the 3D similarity transformation function using the similarity measure described by Equations 7 and 11. As mentioned before, a minimum of two non-coplanar line segments is needed for the estimation of the seven transformation parameters. The estimated parameters are shown in Table 1. Afterwards, the lidar coordinates of the special targets were used to verify the compatibility between the photogrammetric and lidar coordinates. The estimated transformation parameters were applied to the derived photogrammetric coordinates of the special targets relative to the arbitrarily chosen reference frame. The transformed photogrammetric coordinates were finally compared to the corresponding lidar measurements. The comparison results are summarized in Table 2. The comparison shows that the two sets of coordinates are compatible within the range of 2.65 mm to 4.47 mm, which is commensurate with the specifications of the Cyrax scanner and the implemented digital camera. Therefore, it is concluded that the photogrammetric and lidar surfaces are compatible, and there are no systematic biases in either system that have not been accounted for.

Another experiment was conducted with a dataset captured using an aerial platform over a heavily populated area (Figure 3b). In this experiment, twenty-three overlapping images in three flight lines were used in a bundle adjustment, which incorporated linear features, mainly building boundaries, as well as some tie points. Similar to the terrestrial dataset, the datum had been arbitrarily chosen by fixing seven coordinates of three well-distributed tie points. The output of the bundle adjustment procedure included the ground coordinates of the tie points as well as the ground coordinates of points defining the object space line segments. In the lidar data, homogeneous patches had been manually identified to correspond to selected features in the imagery. Planar surfaces were then fitted through the selected patches, from which neighboring planar surfaces were intersected to produce object space line segments. A total of twenty-three 3D edges had been identified along ten buildings from three lidar strips (well-distributed within the area of interest).

Once again, the developed similarity measures (Equations 7 and 11) were utilized in a least squares adjustment procedure, involving the identified linear features in the photogrammetric and lidar datasets, to solve for the transformation parameters relating the two surfaces. The estimated parameters of the 3D similarity transformation function are shown in Table 1. These parameters were then used to superimpose the lidar features onto the transformed photogrammetric ones (Figure 7).

Figure 7. (a) Top view, and (b) side view of the deviation between the lidar and aerial photogrammetric features.

The mean normal distance between conjugate lidar and transformed photogrammetric line segments turned out to be 3.27 m (mainly in the Z-direction). This was a surprising result considering the camera, flight mission, and lidar specifications; the compatibility between the two surfaces was expected to be in the sub-meter range. A closer look at the discrepancy between the registered surfaces revealed a pattern similar to deformations arising from ignored radial lens distortion (Figure 8).
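Because the reported mean normal distance is central to the following discussion, one plausible reading of this metric is sketched below (an assumption on our part: the perpendicular distance from each transformed photogrammetric end point to the infinite line through the conjugate lidar segment, averaged over all end points):

import numpy as np

def normal_distance(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    d = (b - a) / np.linalg.norm(b - a)
    v = p - a
    return np.linalg.norm(v - (v @ d) * d)

def mean_normal_distance(photo_segments, lidar_segments):
    dists = [normal_distance(p, a, b)
             for (p1, p2), (a, b) in zip(photo_segments, lidar_segments)
             for p in (p1, p2)]
    return np.mean(dists)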

TABLE 1. ESTIMATED 3D SIMILARITY TRANSFORMATION PARAMETERS
(each entry: estimate ± standard deviation)

Experiment     Terrestrial Dataset   Aerial Dataset (before      Aerial Dataset (after
                                     Distortion Compensation)    Distortion Compensation)

Scale Factor   0.899 ± 0.0012        1.009 ± 0.0023              1.018 ± 0.0007
XT (m)         1.472 ± 0.0023        4.805 ± 0.7368              7.044 ± 0.1830
YT (m)         0.619 ± 0.0017        1.242 ± 0.4498              2.417 ± 0.1349
ZT (m)         2.463 ± 0.0013        30.051 ± 0.4421             24.266 ± 0.1132
Ω (°)          108.044 ± 0.0651      1.892 ± 0.1328              4.927 ± 0.0345
Φ (°)          2.486 ± 0.0968        1.315 ± 0.3548              0.603 ± 0.0921
K (°)          8.631 ± 0.1824        0.320 ± 0.0946              0.215 ± 0.0295


TABLE 2. COMPARISON BETWEEN PHOTOGRAMMETRIC AND LIDAR COORDINATES OF THE LASER TARGETS (TERRESTRIAL DATASET)

Point     dX (m)     dY (m)     dZ (m)

1         0.0026     0.0077     0.0001
2         0.0030     0.0045     0.0016
3         0.0046     0.0046     0.0011
4         0.0027     0.0018     0.0023
6         0.0014     0.0019     0.0007
7         0.0012     0.0004     0.0045
8         0.0035     0.0057     0.0044
RMS (m)   0.00292    0.00447    0.00265
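As a quick arithmetic check, the RMS row of Table 2 can be reproduced from the listed residuals (the RMS is insensitive to the signs of the individual entries):

import numpy as np

d = np.array([[0.0026, 0.0077, 0.0001],
              [0.0030, 0.0045, 0.0016],
              [0.0046, 0.0046, 0.0011],
              [0.0027, 0.0018, 0.0023],
              [0.0014, 0.0019, 0.0007],
              [0.0012, 0.0004, 0.0045],
              [0.0035, 0.0057, 0.0044]])
print(np.sqrt((d ** 2).mean(axis=0)))  # -> approx. [0.00292 0.00447 0.00265]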

Figure 8. A typical pattern of the effect of uncompensated radial lens distortion on the reconstructed object space.
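The self-calibration analysis that follows estimates a single radial lens distortion coefficient (in mm⁻²). A minimal sketch of the corresponding one-term correction, Δr = k₁r³, is given below (Python; the exact form and sign convention are our assumptions, since the paper does not state them):

import numpy as np

def remove_radial_distortion(x, y, k1, xp=0.0, yp=0.0):
    # One-term radial distortion, dr = k1 * r**3, applied componentwise;
    # (xp, yp) is the principal point and coordinates are in mm.
    xb, yb = x - xp, y - yp
    r2 = xb ** 2 + yb ** 2
    return x - k1 * r2 * xb, y - k1 * r2 * yb

# e.g., with the coefficient estimated from the lidar control features:
# x_c, y_c = remove_radial_distortion(x, y, k1=6.828e-5)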

To determine the radial lens distortion of the implemented camera, two alternatives were followed. The first alternative implemented the lidar features as control information within the bundle adjustment procedure in a self-calibration mode (allowing for the derivation of an estimate for the radial lens distortion). The estimated radial lens distortion coefficient turned out to be 6.828 × 10⁻⁵ mm⁻². The second alternative determined an estimate of the radial lens distortion through a bundle adjustment with self-calibration involving imagery captured from a test field with numerous control points, which had been surveyed earlier. The estimated radial lens distortion coefficient turned out to be 6.913 × 10⁻⁵ mm⁻², which is almost identical to the value determined by implementing the lidar features as control within the photogrammetric bundle adjustment. Afterwards, the registration procedure was repeated after considering the radial lens distortion. The estimated parameters from the repeated procedure are shown in Table 1.

The mean normal distance between conjugate lidar and transformed photogrammetric line segments turned out to be 0.58 m, which lies within the expected accuracy range. One should also note the significant improvement in the quality of the estimated registration transformation parameters, as depicted by the respective standard deviations. This improvement in the spatial discrepancies and in the quality of the transformation parameters after introducing the radial lens distortion verifies its existence. It is worth mentioning that the independent processing of the photogrammetric and lidar data allowed for: (a) the identification of unexpected discrepancies between the extracted features (in the range of 3.27 m), and (b) the justification of the origin of such a discrepancy (i.e., a pattern resembling that of uncompensated radial lens distortion). These findings would not be possible with simultaneous incorporation of the lidar features in the photogrammetric adjustment, where one would only observe a poor quality of fit.

Conclusions and Recommendations for Future Research

This paper introduced two methodologies for establishing the co-registration between three-dimensional datasets, such as those generated from lidar and photogrammetric data, using conjugate straight-line segments. The first approach directly incorporates the lidar lines in the photogrammetric bundle adjustment procedure (i.e., a one-step procedure). The second approach is characterized by a two-step procedure: it starts with deriving a photogrammetric model relative to an arbitrary datum; then, the lidar lines are used to establish the datum for the photogrammetric model through absolute orientation. The second approach is general enough for dealing with photogrammetric-to-lidar as well as lidar-to-lidar registration problems.

The involved line segments were represented by their end points, while assuming that the end points of corresponding line segments might not be conjugate. Analyzing the foregoing results from the terrestrial and aerial experiments, the proposed methodology proved the feasibility of using linear features to establish a common reference frame for datasets acquired by photogrammetric and lidar systems. The limited number of primitives required to implement the procedure (a minimum of two non-coplanar line segments) added to its practicality and convenience. The registration process allows for the detection, as well as the justification of the origin, of any discrepancy between the involved datasets (e.g., sensor calibration errors as explained by the experimental results from the aerial dataset, measurement errors, and physical changes) through a closer look at the behavior of the discrepancies.


The datasets involved in the experimental section illustrate the compatibility between lidar and photogrammetric surfaces. However, such compatibility will only be realized after precise calibration of both systems to guarantee the absence of systematic biases.

It is of equal importance to clarify that having the two datasets registered relative to the same reference frame is a prerequisite for any further integration between them. For example, optical imagery can be rendered onto the lidar data to provide a realistic 3D textured model of the area of interest. Furthermore, registered datasets can be locally inspected for changes that took place in the object space between the data capture epochs. It is imperative to mention that applying the proposed methodology is contingent on the availability of straight linear features, natural or man-made.

Current research is focusing on complementing and extending the proposed methodology through the development of an automatic matching strategy for the identified features. Another worthy extension is exploring the possibility of automatic primitive extraction from the datasets in question. In this regard, lidar intensity images will be used as a source of linear features. The utilization of the intensity image is expected to facilitate an easier extraction mechanism of linear features from the range data. Ongoing research has proven the possibility of deriving linear features along road networks as well as building boundaries and roof features through the combined utilization of range and intensity data. Also, the validity of using a 3D similarity transformation as the registration transformation function will be investigated, especially in the presence of unaccounted-for GPS/INS/lidar spatial and rotational offsets or biases in the interior orientation parameters of the involved camera. Moreover, the proposed registration strategy is being used for automatic registration of multi-source imagery with varying radiometric and geometric resolutions using linear features.

References

Baillard, C., C. Schmid, A. Zisserman, and A. Fitzgibbon, 1999. Automatic line matching and 3D reconstruction of buildings from multiple views, International Archives of Photogrammetry and Remote Sensing, 32(3-2W5):69–80.

Baltsavias, E., 1999. A comparison between photogrammetry and laser scanning, ISPRS Journal of Photogrammetry and Remote Sensing, 54(1):83–94.

Brown, L.G., 1992. A survey of image registration techniques, ACM Computing Surveys, 24(4):325–376.

Csathó, B.M., K. Boyer, and S. Filin, 1999. Segmentation of laser surfaces, International Archives of Photogrammetry and Remote Sensing, 32(3W14):73–80.

Ebner, H., and G. Strunz, 1988. Combined point determination using digital terrain models as control information, International Archives of Photogrammetry and Remote Sensing, 27(B11/3):578–587.

Ebner, H., and T. Ohlhof, 1994. Utilization of ground control points for image orientation without point identification in image space, International Archives of Photogrammetry and Remote Sensing, 30(3/1):206–211.

Filin, S., 2001. Recovery of systematic biases in laser altimeters using natural surfaces, International Archives of Photogrammetry and Remote Sensing, 34(3-W4):85–91.

Filin, S., 2002. Surface clustering from airborne data, International Archives of Photogrammetry and Remote Sensing, 32(3A):119–124.

Filin, S., and B. Csathó, 1999. A novel approach for calibrating satellite laser altimeter systems, International Archives of Photogrammetry and Remote Sensing, 32(3W14):47–54.

Habib, A., 1999. Aerial triangulation using point and linear features, International Archives of Photogrammetry and Remote Sensing, 32(3-2W5):137–141.

Habib, A., and T. Schenk, 1999. New approach for matching surfaces from laser scanners and optical sensors, International Archives of Photogrammetry and Remote Sensing, 32(3W14):55–61.

Habib, A., Y. Lee, and M. Morgan, 2001a. Bundle adjustment with self-calibration of line cameras using straight lines, Joint Workshop of ISPRS WG I/2, I/5 and IV/7: High Resolution Mapping from Space 2001, University of Hanover, 19–21 September, unpaginated CD-ROM.

Habib, A., Y. Lee, and M. Morgan, 2001b. Surface matching and change detection using the modified Hough transform for robust parameter estimation, The Photogrammetric Record, 17(98):303–315.

Habib, A., M. Morgan, and Y. Lee, 2002. Bundle adjustment with self-calibration using linear features, The Photogrammetric Record, 17(100):635–650.

Habib, A., Y. Lee, and M. Morgan, 2003. Automatic matching and three-dimensional reconstruction of free-form linear features from stereo images, Photogrammetric Engineering & Remote Sensing, 69(2):189–197.

Kilian, J., N. Haala, and M. Englich, 1996. Capture and evaluation of airborne laser scanner data, International Archives of Photogrammetry and Remote Sensing, 31(B3):383–388.

Lee, I., and T. Schenk, 2001. 3D perceptual organization of laser altimetry data, International Archives of Photogrammetry and Remote Sensing, 34(3W4):57–65.

Maas, H., and G. Vosselman, 1999. Two algorithms for extracting building models from raw laser altimetry data, ISPRS Journal of Photogrammetry and Remote Sensing, 54(2–3):153–163.

Postolov, Y., A. Krupnik, and K. McIntosh, 1999. Registration of airborne laser data to surfaces generated by photogrammetric means, International Archives of Photogrammetry and Remote Sensing, 32(3W14):95–99.

Rottensteiner, F., and C. Briese, 2003. Automatic generation of building models from lidar data and the integration of aerial images, International Archives of Photogrammetry and Remote Sensing, 34(3W13):174–180.

Schenk, T., 1999. Determining transformation parameters between surfaces without identical points, Technical Report Photogrammetry No. 15, Department of Civil and Environmental Engineering and Geodetic Science, Ohio State University, 22 p.

Schenk, T., and B. Csathó, 2002. Fusion of lidar data and aerial imagery for a more complete surface description, International Archives of Photogrammetry and Remote Sensing, 34(3A):310–317.

Schwermann, R., 1994. Automatic image orientation and object reconstruction using straight lines in close range photogrammetry, International Archives of Photogrammetry and Remote Sensing, Vol. 30, Commission V Symposium, Melbourne, pp. 349–356.

Vosselman, G., and S. Dijkman, 2001. 3D building model reconstruction from point clouds and ground plans, International Archives of Photogrammetry and Remote Sensing, 34(3W4):37–43.

(Received 02 December 2003; accepted 08 March 2004; revised 31 March 2004)
