
Homography estimation on the Special Linear group based on direct point correspondence

Tarek Hamel, Robert Mahony, Jochen Trumpf, Pascal Morin and Minh-Duc Hua

Abstract— This paper considers the question of obtaining a high quality estimate of a time-varying sequence of image homographies using point correspondences from an image sequence, without requiring explicit computation of the individual homographies between any two given images. The approach uses the representation of a homography as an element of the Special Linear group and defines a nonlinear observer directly on this structure. We assume either that the group velocity of the homography sequence is known, or, more realistically, that the homographies are generated by rigid-body motion of a camera viewing a planar surface and that the angular velocity of the camera is known.

T. Hamel and M.-D. Hua are with I3S UNSA-CNRS, Nice-Sophia Antipolis, France. e-mails: thamel(hua)@i3s.unice.fr. R. Mahony and J. Trumpf are with the School of Engineering, Australian National University, ACT, 0200, Australia. e-mails: Robert.Mahony(Jochen.Trumpf)@anu.edu.au. P. Morin is with ISIR UPMC, Paris, France. e-mail: [email protected].

I. INTRODUCTION

A homography is an invertible mapping relating two images of the same planar scene. Homographies play a key role in many computer vision and robotics problems, especially those that involve man-made environments typically constructed of planar surfaces, and those where the camera is sufficiently far from the scene viewed that the relief of surface features is negligible, such as the situation encountered in vision sequences of the ground taken from a flying vehicle. Computing homographies from point correspondences has been extensively studied in the last fifteen years, and different techniques have been proposed in the literature that provide an estimate of the homography matrix [5]. The quality of the homography estimate depends strongly on the algorithm used and on the size of the set of points considered. For a well textured scene, state-of-the-art methods provide high quality homography estimates but require significant computational effort (see [17] and references therein). For a scene with poor texture, the existing homography estimation algorithms perform poorly.

In a recent paper by the authors [13], [14], a nonlinear observer for homography estimation was proposed based on the group structure of the set of homographies, the Special Linear group SL(3) [2]. This observer uses velocity information to interpolate across a sequence of images and to improve the overall homography estimate between any two given images. Although this earlier approach partly addresses the problem by using temporal information to improve instantaneous estimates, the observer still requires individual image homographies to be computed for each image in the sequence in order to compute the observer innovation.

In this paper, we consider the question of deriving an observer for a sequence of image homographies that directly takes point feature correspondences as input. The proposed approach is only valid in the case where the sequence of images used as data is associated with a continuous variation of the reference image. The most common case encountered is where the images are derived from a moving camera viewing a planar scene. The proposed nonlinear observer is posed on the Special Linear group SL(3), which is in one-to-one correspondence with the group of homographies [2], and uses velocity measurements to propagate the homography estimate and fuse this with new data as it becomes available [13], [14]. A key advance on prior work by the authors is the formulation of a point feature innovation for the observer that incorporates point correspondences directly in the observer without requiring reconstruction of individual image homographies. The proposed approach has a number of advantages. Firstly, it avoids the computation associated with the full homography construction. This saves considerable computational resources and makes the proposed algorithm suitable for embedded systems with simple point tracking software. Secondly, the algorithm is well posed even when there is insufficient data for a full reconstruction of a homography. For example, if the number of corresponding points between two images drops below four, it is impossible to algebraically reconstruct an image homography and the existing algorithms fail. In such situations, the proposed observer will continue to operate, incorporating what information is available and relying on propagation of prior estimates where necessary. Finally, even if a homography can be reconstructed from a small set of feature correspondences, the estimate is often unreliable and the associated error is difficult to characterize. The proposed algorithm integrates information from a sequence of images, and noise in the individual feature correspondences is filtered through the natural low-pass response of the observer, resulting in a highly robust estimate. As a result, the authors believe that the proposed observer is ideally suited for poorly textured scenes and real-time implementation. We initially consider the case where the group velocity is known, a situation that is rarely true in practice but provides considerable insight. The main result of the paper considers the case of a moving camera where the angular velocity of the camera is measured. This is a practical scenario where a camera is equipped with gyrometers. The primary focus of the paper is on the presentation of the observers and the analysis of their stability properties; however, we do provide simulations to indicate the performance of the proposed scheme.
The paper is organized into five sections, including the introduction and the conclusion. Section II presents a brief recap of the Lie group structure of the set of homographies and relates it to the rigid-body motion of the camera. Section III provides an initial lemma in the case where it is assumed that the group velocity is known, and then considers the case of a moving camera where the angular velocity of the camera is known. Simulation results are provided in Section IV to verify the performance of the proposed algorithms.

II. PRELIMINARY MATERIAL

A.

Visual data is obtained via a projection of observed images onto the camera image surface. The projection is parameterised by two sets of parameters: intrinsic ("internal" parameters of the camera such as the focal length, the pixel aspect ratio, etc.) and extrinsic (the pose, i.e. the position and orientation of the camera). Let Å (resp. A) denote projective coordinates for the image plane of a camera Å (resp. A), and let {Å} (resp. {A}) denote its (right-hand) frame of reference. Let ξ ∈ R³ denote the position of the frame {A} with respect to {Å}, expressed in {Å}. The orientation of the frame {A} with respect to {Å} is given by an element of the Special Orthogonal group, R ∈ SO(3) : {A} → {Å}. The pose of the camera determines a rigid-body transformation from {A} to {Å} (and vice versa). One has

    P̊ = R P + ξ                                                  (1)

as a relation between the coordinates of the same point in the reference frame (P̊ ∈ {Å}) and in the current frame (P ∈ {A}). The camera internal parameters, in the commonly used approximation, define a 3 × 3 matrix K so that we can write¹

    p̊ ≅ K P̊ ,   p ≅ K P,                                         (2)

where p ∈ A is the image of a point when the camera is aligned with frame {A}, and can be written as (x, y, w)ᵀ using the homogeneous coordinate representation for that 2D image point. Likewise, p̊ ∈ Å is the image of the same point viewed when the camera is aligned with frame {Å}. If the camera is calibrated (the intrinsic parameters are known), then all quantities can be appropriately scaled and the equations written in the simple form

    p̊ ≅ P̊ ,   p ≅ P.                                             (3)

¹Most statements involve equality up to a multiplicative constant, denoted by ≅.

B. Homography

Assumption 2.1: Assume a calibrated camera and that there is a planar surface π containing a set of n target points (n ≥ 4), so that

    π = { P̊ ∈ R³ : η̊ᵀ P̊ − d̊ = 0 },

where η̊ is the normal to the plane expressed in {Å} and d̊ is the distance of the plane to the origin of {Å}.

From the rigid-body relationship (1), one has P = Rᵀ P̊ − Rᵀ ξ. Define ζ = −Rᵀ ξ. Since all target points lie in a single planar surface, one has

    Pᵢ = ( Rᵀ + ζ η̊ᵀ / d̊ ) P̊ᵢ ,   i = {1, . . . , n},             (4)

and thus, using (3), the projected points obey

    pᵢ ≅ ( Rᵀ + ζ η̊ᵀ / d̊ ) p̊ᵢ ,   i = {1, . . . , n}.             (5)

The projective mapping H : A → Å, H :≅ ( Rᵀ + ζ η̊ᵀ / d̊ )⁻¹, is termed a homography, and it relates the images of points on the plane π when viewed from the two poses defined by the coordinate systems A and Å. It is straightforward to verify that the homography H can be written as follows:

    H ≅ R + ξ ηᵀ / d                                              (6)

where η is the normal to the observed planar surface expressed in the frame {A} and d is the orthogonal distance of the plane to the origin of {A}. One can verify that [2]:

    η = Rᵀ η̊                                                      (7)
    d = d̊ − η̊ᵀ ξ = d̊ + ηᵀ ζ.                                      (8)

The homography matrix contains the pose information (R, ξ) of the camera from the frame {A} (termed the current frame) to the frame {Å} (termed the reference frame). However, since the relationship between the image points and the homography is projective, it is only possible to determine H up to a scale factor (using the image point relationships alone).
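As a quick numerical check of the construction above (this sketch is not part of the paper; the pose, plane parameters and sample points are arbitrary choices for illustration), the following Python code builds the homography (6) from a known pose (R, ξ) and plane (η̊, d̊), and verifies that it maps current-frame coordinates of plane points exactly onto their reference-frame coordinates, since ηᵀP = d for points on the plane implies H P = R P + ξ = P̊ by (1):

```python
import numpy as np

def skew(w):
    # Skew-symmetric matrix: skew(w) @ y == np.cross(w, y).
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rodrigues(w):
    # Rotation matrix exp(skew(w)) via the Rodrigues formula.
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = skew(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

# Arbitrary pose of {A} relative to {Å}: orientation R and position xi.
R = rodrigues(np.array([0.2, -0.1, 0.3]))
xi = np.array([0.3, -0.2, 0.5])

# Plane in the reference frame: eta_ring' P_ring = d_ring (Assumption 2.1).
eta_ring = np.array([0.1, -0.2, 1.0])
eta_ring = eta_ring / np.linalg.norm(eta_ring)
d_ring = 5.0

# Normal and distance expressed in the current frame, eqs. (7)-(8).
eta = R.T @ eta_ring
d = d_ring - eta_ring @ xi

# Unscaled homography of eq. (6).
H = R + np.outer(xi, eta) / d

# Sample points on the plane using an orthonormal basis (u, v) of the plane.
u = np.cross(eta_ring, np.array([1.0, 0.0, 0.0]))
u = u / np.linalg.norm(u)
v = np.cross(eta_ring, u)

max_err = 0.0
for t1, t2 in [(0.4, -0.7), (-1.2, 0.3), (0.9, 1.1)]:
    P_ring = d_ring * eta_ring + t1 * u + t2 * v   # point on the plane, in {Å}
    P = R.T @ (P_ring - xi)                        # same point in {A}, from (1)
    # For plane points eta' P = d, hence H P = R P + xi = P_ring exactly.
    max_err = max(max_err, np.linalg.norm(H @ P - P_ring))

print(max_err)  # numerically zero
```

The exactness of the transfer (rather than agreement only up to scale) is special to points on the plane π; for general scene points only the projective relation holds.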
C. Homography versus element of the Special Linear Group SL(3)

Recall that the Special Linear Lie group SL(3) is defined as the set of all real-valued 3 × 3 matrices with unit determinant,

    SL(3) = { S ∈ R³ˣ³ | det S = 1 }.

Since a homography matrix H is only defined up to scale, any homography matrix is associated with a unique matrix H̄ ∈ SL(3) by the re-scaling

    H̄ = ( 1 / det(H)^{1/3} ) H                                   (9)

such that det(H̄) = 1. Moreover, the map

    w : SL(3) × P² → P² ,   (H, p) ↦ w(H, p) ≅ H p / |H p|

is a group action of SL(3) on P², since

    w(H₁, w(H₂, p)) = w(H₁ H₂, p) ,   w(I, p) = p,

where H₁, H₂ and H₁H₂ ∈ SL(3) and I is the identity matrix, the unit element of SL(3). The geometrical meaning of this property is that the 3D motion of the camera between views A₀ and A₁, followed by the 3D motion between views A₁ and A₂, is the same as the 3D motion between views A₀ and A₂. As a consequence, we can think of homographies as described by elements of SL(3).

The Lie algebra sl(3) of SL(3) is the set of matrices with trace equal to zero: sl(3) = { X ∈ R³ˣ³ | tr(X) = 0 }. The adjoint operator is a mapping Ad : SL(3) × sl(3) → sl(3) defined by

    Ad_H X = H X H⁻¹ ,   H ∈ SL(3), X ∈ sl(3).

For any two matrices A, B ∈ R³ˣ³, the Euclidean matrix inner product and Frobenius norm are defined as

    ⟨⟨A, B⟩⟩ = tr(AᵀB) ,   ‖A‖ = √⟨⟨A, A⟩⟩.

Let ℙ denote the unique orthogonal projection of R³ˣ³ onto sl(3) with respect to the inner product ⟨⟨·, ·⟩⟩. It is easily verified that

    ℙ(H) := H − ( tr(H) / 3 ) I ∈ sl(3).                         (10)

For any matrices G ∈ SL(3) and B ∈ sl(3), one has ⟨⟨B, G⟩⟩ = ⟨⟨B, ℙ(G)⟩⟩ and hence

    tr(BᵀG) = tr(Bᵀ ℙ(G)).                                       (11)

Since any homography is defined up to a scale factor, we assume from now on that H ∈ SL(3):

    H = γ ( R + ξ ηᵀ / d ).                                      (12)

There are numerous approaches for determining H, up to this scale factor, cf. for example [6]. Note that direct computation of the determinant of H, in combination with the expression (8) for d and the fact that det(H) = 1, shows that

    γ = ( d / d̊ )^{1/3}.

Extracting R and ξ/d from H is in general quite complex [2], [20], [19], [4] and is beyond the scope of this paper.

D. Homography kinematics from a camera moving with rigid-body motion

Assume that a sequence of homographies is generated by a moving camera viewing a stationary planar surface. Thus, any group velocity (infinitesimal variation of the homography) must be associated with an instantaneous variation in the measurement of the current image A, and not with a variation in the reference image Å. This imposes constraints on two degrees of freedom in the homography velocity, namely those associated with variation of the normal to the reference image, and leaves the remaining six degrees of freedom in the group velocity depending on the rigid-body velocities of the camera.

Denote the rigid-body angular velocity and linear velocity of {A} with respect to {Å}, expressed in {A}, by Ω and V, respectively. The rigid-body kinematics of (R, ξ) are given by

    Ṙ = R Ω×                                                     (13)
    ξ̇ = R V                                                      (14)

where Ω× is the skew-symmetric matrix associated with the vector cross-product, i.e. Ω× y = Ω × y for all y. Recalling (8), it is easily verified that

    ḋ = −ηᵀ V   and   (d/dt) d̊ = 0.

This constraint on the variation of η and d̊ is precisely the velocity constraint associated with the fact that the reference image is stationary.

Lemma 2.2: Consider a camera attached to the moving frame {A}, moving with kinematics (13) and (14) and viewing a stationary planar scene. Let H : A → Å denote the calibrated homography (12). The group velocity U ∈ sl(3) induced by the rigid-body motion, such that

    Ḣ = H U,                                                     (15)

is given by

    U = Ω× + V ηᵀ / d − ( ηᵀ V / (3 d) ) I.                      (16)

Proof: Consider the time derivative of (12). One has

    Ḣ = γ ( Ṙ + ( ξ̇ ηᵀ + ξ η̇ᵀ ) / d − ḋ ξ ηᵀ / d² ) + ( γ̇ / γ ) H.   (17)

Recalling equations (13) and (14), one has

    Ḣ = γ ( R Ω× + ( R V ηᵀ + ξ ηᵀ Ω× ) / d + ( ηᵀ V ) ξ ηᵀ / d² ) + ( γ̇ / γ ) H
      = γ ( R + ξ ηᵀ / d ) Ω× + γ ( R + ξ ηᵀ / d ) V ηᵀ / d + ( γ̇ / γ ) H
      = H ( Ω× + V ηᵀ / d + ( γ̇ / γ ) I ).

Applying the constraint that tr(U) = 0 for any element of sl(3), one obtains

    0 = tr( Ω× + V ηᵀ / d + ( γ̇ / γ ) I ) = ηᵀ V / d + 3 γ̇ / γ.

The result follows by substitution.

Note that the group velocity U induced by the camera motion depends on the additional variables η and d that define the scene geometry at time t, as well as on the scale factor γ. Since these variables are unmeasurable and cannot be extracted directly from the measurements, in the sequel we rewrite

    U := Ω× + Γ ,   with Γ = V ηᵀ / d − ( ηᵀ V / (3 d) ) I.      (18)

Since {Å} is stationary by assumption, the vector Ω can be directly obtained from a set of embedded gyros. The term Γ is related to the translational motion expressed in the current frame {A}. If we assume that ξ̇ / d is constant (e.g. the situation in which the camera moves with a constant velocity parallel to the scene, or converges exponentially towards it), then, using the fact that V = Rᵀ ξ̇, it is straightforward to verify that

    Γ̇ = [Γ, Ω×]                                                  (19)

where [Γ, Ω×] = Γ Ω× − Ω× Γ is the Lie bracket. However, if we assume that V / d is constant (the situation in which the camera follows a circular trajectory over the scene, or performs an exponential convergence towards it), it follows that

    Γ̇₁ = Γ₁ Ω× ,   with Γ₁ = ( V / d ) ηᵀ.                       (20)
III. NONLINEAR OBSERVER ON SL(3) BASED ON DIRECT MEASUREMENTS

The system considered is the kinematics of an element of SL(3) given by (15) along with (18). A general framework for nonlinear filtering on the Special Linear group is introduced for the case where the group velocity is known. The theory is then extended to the situation in which part of the group velocity is not available. In particular, we extend the result to the case where the camera is attached to the current frame {A} while the reference frame {Å} is assumed to be a Galilean frame, such that Ω represents the measurements of embarked gyrometers. In this case, it is also assumed that the matrix Γ is slowly time varying in the inertial frame and that its time derivative obeys either (19) or (20).

A. Observer with known group velocity

The goal of the estimation of H(t) ∈ SL(3) is to provide an estimate Ĥ ∈ SL(3) and to drive the error term H̃ = Ĥ H⁻¹ to the identity matrix I, while assuming that U ∈ sl(3) is known. The estimator equation is posed directly as a kinematic filter system on SL(3) with state Ĥ, based on a collection of n measurements pᵢ ∈ S²,

    pᵢ = H⁻¹ p̊ᵢ / |H⁻¹ p̊ᵢ| ,   i = {1 . . . n}.                  (21)

The observer includes a correction term derived from the measurements and depends implicitly on the error H̃. The general form of the estimator filter is

    dĤ/dt = Ĥ U + k_P ω Ĥ.                                       (22)

The innovation or correction term ω ∈ sl(3) is thought of as an error between the measurements pᵢ and their estimates p̂ᵢ (or as an error function of the measured points p̊ᵢ and their estimates eᵢ). It depends implicitly on H̃. The estimates p̂ᵢ of pᵢ are defined as follows,

    p̂ᵢ = Ĥ⁻¹ p̊ᵢ / |Ĥ⁻¹ p̊ᵢ|.                                     (23)

Equivalently, the estimates eᵢ of p̊ᵢ are defined as follows,

    eᵢ = Ĥ pᵢ / |Ĥ pᵢ| = H̃ p̊ᵢ / |H̃ p̊ᵢ| ,   H̃ = Ĥ H⁻¹.           (24)

Definition 3.1: A set Mₙ of n ≥ 4 vector directions p̊ᵢ ∈ S² (i = {1 . . . n}) is called consistent if it contains a subset M₄ ⊂ Mₙ of 4 constant vector directions such that all of its vector triplets are linearly independent.

This definition implies that if the set Mₙ is consistent then, for all p̊ᵢ ∈ M₄, there exists a unique set of three non-vanishing scalars b_j ≠ 0 (j ≠ i) such that

    p̊ᵢ = yᵢ / |yᵢ| ,   where yᵢ = Σ_{j=1 (j≠i)}^{4} b_j p̊_j.

Theorem 3.2: Let H : A → Å denote the calibrated homography (12) and consider the kinematics (15) along with (18). Assume that the group velocity U ∈ sl(3) is known. Consider the nonlinear estimator filter defined by (22) along with the innovation ω ∈ sl(3) given by

    ω = Σ_{i=1}^{n} π_{eᵢ} p̊ᵢ eᵢᵀ                                 (25)

where π_x = (I − x xᵀ) for all x ∈ S². Then, if the set Mₙ of the measured directions p̊ᵢ is consistent, the equilibrium H̃ = I is asymptotically stable.

Proof: Based on equations (15) and (22), it is straightforward to show that the derivatives of (21) and (23) fulfil

    ṗᵢ = −π_{pᵢ} U pᵢ   and   dp̂ᵢ/dt = −π_{p̂ᵢ} ( U + k_P Ad_{Ĥ⁻¹} ω ) p̂ᵢ.

Differentiating H̃ yields

    dH̃/dt = Ĥ ( U + k_P Ad_{Ĥ⁻¹} ω ) H⁻¹ − Ĥ U H⁻¹ = k_P ω H̃.

Differentiating eᵢ defined by (24), we get

    ėᵢ = k_P π_{eᵢ} ω eᵢ.

Define the following candidate Lyapunov function:

    L₀ = Σ_{i=1}^{n} (1/2) |eᵢ − p̊ᵢ|².                           (26)

Using the consistency of the set Mₙ, one can ensure that L₀ is locally a positive definite function of H̃. Differentiating L₀ yields

    dL₀/dt = Σ_{i=1}^{n} (eᵢ − p̊ᵢ)ᵀ ėᵢ.

Introducing the above expression of ėᵢ, it follows that

    dL₀/dt = k_P Σ_{i=1}^{n} (eᵢ − p̊ᵢ)ᵀ π_{eᵢ} ω eᵢ = −k_P tr( Σ_{i=1}^{n} eᵢ p̊ᵢᵀ π_{eᵢ} ω ).

Introducing the expression (25) of ω, we get

    dL₀/dt = −k_P ‖ Σ_{i=1}^{n} eᵢ p̊ᵢᵀ π_{eᵢ} ‖².

The derivative of the Lyapunov function is negative and equal to zero when ω = 0, and therefore one can ensure that H̃ is locally bounded. From the definitions of ω (25) and eᵢ (24), one deduces that

    ω H̃⁻ᵀ = Σ_{i=1}^{n} ( I₃ − H̃ p̊ᵢ p̊ᵢᵀ H̃ᵀ / |H̃ p̊ᵢ|² ) p̊ᵢ p̊ᵢᵀ / |H̃ p̊ᵢ|.

Computing the trace of ω H̃⁻ᵀ, it follows that

    tr( ω H̃⁻ᵀ ) = Σ_{i=1}^{n} ( 1 / |H̃ p̊ᵢ|³ ) ( |H̃ p̊ᵢ|² |p̊ᵢ|² − ((H̃ p̊ᵢ)ᵀ p̊ᵢ)² ).

Defining Xᵢ = H̃ p̊ᵢ and Yᵢ = p̊ᵢ, it is straightforward to verify that

    tr( ω H̃⁻ᵀ ) = Σ_{i=1}^{n} ( 1 / |Xᵢ|³ ) ( |Xᵢ|² |Yᵢ|² − (Xᵢᵀ Yᵢ)² ) ≥ 0.

Using the fact that ω = 0 at the equilibrium, and therefore tr( ω H̃⁻ᵀ ) = 0, as well as the Cauchy-Schwarz inequality, it follows that Xᵢᵀ Yᵢ = ±|Xᵢ| |Yᵢ|, and consequently one has

    (H̃ p̊ᵢ)ᵀ p̊ᵢ = ±|H̃ p̊ᵢ| |p̊ᵢ| ,   ∀ i = {1 . . . n},

which in turn implies the existence of some non-null constants λᵢ = ±|H̃ p̊ᵢ| such that

    H̃ p̊ᵢ = λᵢ p̊ᵢ.                                                (27)

Note that the fact that λᵢ ≠ 0 is easily verified: if λᵢ = 0, then p̊ᵢ = λᵢ H̃⁻¹ p̊ᵢ = 0, which contradicts the fact that p̊ᵢ ∈ S². Relation (27) indicates that all λᵢ are eigenvalues of H̃ and all p̊ᵢ ∈ S² are the associated eigenvectors of H̃.

Since Mₙ is a consistent set, it follows at the limit (and without loss of generality) that (p̊₁, p̊₂, p̊₃) are three independent vectors and therefore represent three non-collinear eigenvectors of H̃, associated with the eigenvalues λᵢ for i = {1, 2, 3} such that H̃ p̊ᵢ = λᵢ p̊ᵢ. Exploiting again the consistency of the set Mₙ, it follows that there exists a constant direction p̊_k from the set {p̊₄, . . . , p̊ₙ} such that

    p̊_k = y_k / |y_k| ,   where y_k = Σ_{i=1}^{3} bᵢ p̊ᵢ ,   bᵢ ∈ R∖{0}, i = {1, 2, 3}.

Since p̊_k can be seen as a fourth eigenvector of H̃, associated with the eigenvalue λ_k = ±|H̃ p̊_k|, this yields

    λ_k p̊_k = H̃ p̊_k = (1/|y_k|) H̃ ( Σ_{i=1}^{3} bᵢ p̊ᵢ ) = (1/|y_k|) Σ_{i=1}^{3} bᵢ H̃ p̊ᵢ

and hence

    λ_k Σ_{i=1}^{3} bᵢ p̊ᵢ = Σ_{i=1}^{3} bᵢ λᵢ p̊ᵢ.

Using the fact that the measured directions form a consistent set, it follows that bᵢ ≠ 0 for i = {1, 2, 3}, and invoking the fact that det(H̃) = 1, a straightforward identification shows that λ_k = λ₁ = λ₂ = λ₃ = 1. Consequently, H̃ converges asymptotically to the identity I.

Remark 3.3: Note that the characterization of the stability domain remains an open problem and is not addressed in this paper. Although the simulation results that we have performed tend to indicate that the stability domain is sufficiently large, this issue, along with the convergence properties towards the equilibrium, should be thoroughly analysed.
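A minimal simulation of the filter of Theorem 3.2 can be sketched as follows. This is illustrative only and is not the simulation of Section IV: the group velocity U, the gain, the initial error and the four reference directions are arbitrary choices, a simple Euler integration is used, and each step re-scales the state by (9) to absorb the integration drift off SL(3):

```python
import numpy as np

rng = np.random.default_rng(2)

def project_sl3(H):
    # Re-scaling (9), used here to absorb Euler-integration drift off SL(3).
    return H / np.cbrt(np.linalg.det(H))

def innovation(H_hat, p_ring, p_meas):
    # Innovation (25): omega = sum_i pi_{e_i} p_ring_i e_i', e_i = H_hat p_i / |H_hat p_i|.
    omega = np.zeros((3, 3))
    for pr, p in zip(p_ring, p_meas):
        e = H_hat @ p
        e = e / np.linalg.norm(e)
        omega += (np.eye(3) - np.outer(e, e)) @ np.outer(pr, e)
    return omega

# Four consistent reference directions (every triplet is linearly independent).
p_ring = [np.array(v) / np.linalg.norm(v) for v in
          [(1.0, 0.0, 1.0), (-1.0, 0.0, 1.0), (0.0, 1.0, 1.0), (0.0, -1.0, 1.0)]]

# Known, constant group velocity U in sl(3) (arbitrary small traceless matrix).
A = 0.1 * rng.standard_normal((3, 3))
U = A - (np.trace(A) / 3.0) * np.eye(3)

H = project_sl3(np.eye(3) + 0.1 * rng.standard_normal((3, 3)))  # true homography
H_hat = np.eye(3)                                               # initial estimate

kP, dt, steps = 2.0, 0.002, 10000
errs = []
for _ in range(steps):
    # Measured directions (21), obtained from the current image only.
    p_meas = [q / np.linalg.norm(q) for q in
              (np.linalg.solve(H, pr) for pr in p_ring)]
    omega = innovation(H_hat, p_ring, p_meas)
    H = project_sl3(H + dt * (H @ U))                                   # (15)
    H_hat = project_sl3(H_hat + dt * (H_hat @ U + kP * omega @ H_hat))  # (22)
    errs.append(np.linalg.norm(H_hat @ np.linalg.inv(H) - np.eye(3)))

print(errs[0], errs[-1])  # the error |H_tilde - I| decays
```

Note that the observer never inverts a homography reconstructed from the data: it only uses the measured directions pᵢ and the reference directions p̊ᵢ, as intended.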
B. Observer with partially known velocity of the rigid body

In this section we assume that U (18) is not available, and the goal consists in providing an estimate Ĥ(t) ∈ SL(3) that drives the error term H̃ = Ĥ H⁻¹ to the identity matrix I and the error term Γ̃ = Γ − Γ̂ (resp. Γ̃₁ = Γ₁ − Γ̂₁) to 0, if Γ (resp. Γ₁) is constant or slowly time varying.

The observer, when Γ is constant in {Å}, is chosen as follows:

    dĤ/dt = Ĥ ( Ω× + Γ̂ ) + k_P ω Ĥ,                              (28)
    dΓ̂/dt = [Γ̂, Ω×] + k_I Ad_{Ĥᵀ} ω.                             (29)

The observer, when V/d is constant in {A}, is defined as follows:

    dĤ/dt = Ĥ ( Ω× + Γ̂₁ − (1/3) tr(Γ̂₁) I ) + k_P ω Ĥ,            (30)
    dΓ̂₁/dt = Γ̂₁ Ω× + k_I Ad_{Ĥᵀ} ω.                              (31)

Proposition 3.4: Consider a camera moving with kinematics (13) and (14) viewing a planar scene. Assume that {Å} is stationary and that the angular velocity Ω ∈ {A} is measured and bounded. Let H : A → Å denote the calibrated homography (12) and consider the kinematics (15) along with (18). Assume that H is bounded and that Γ (resp. Γ₁) is constant in {Å} (resp. in {A}), such that it obeys (19) (resp. (20)). Consider the nonlinear estimator filter defined by (28)-(29) (resp. (30)-(31)) along with the innovation ω ∈ sl(3) given by (25). Then, if the set Mₙ of the measured directions p̊ᵢ is consistent, the equilibrium (H̃, Γ̃) = (I, 0) (resp. (H̃, Γ̃₁) = (I, 0)) is asymptotically stable.

Sketch of the Proof 3.5: We consider only the situation where the estimate of Γ is used; the same arguments apply to the case where the estimate of Γ₁ is considered. Differentiating eᵢ (24) and using (28) yields

    ėᵢ = π_{eᵢ} ( k_P ω − Ad_Ĥ Γ̃ ) eᵢ.

Define the following candidate Lyapunov function:

    L = Σ_{i=1}^{n} (1/2) |eᵢ − p̊ᵢ|² + ( 1 / (2 k_I) ) ‖Γ̃‖².     (32)

Differentiating L and using the fact that tr( Γ̃ᵀ [Γ̃, Ω×] ) = 0, it follows that

    dL/dt = Σ_{i=1}^{n} (eᵢ − p̊ᵢ)ᵀ ėᵢ − tr( Γ̃ᵀ Ad_{Ĥᵀ} ω ).

Introducing the above expression of ėᵢ, and using the fact that tr(AB) = tr(BᵀAᵀ) so that tr( Γ̃ᵀ Ad_{Ĥᵀ} ω ) = tr( ωᵀ Ad_Ĥ Γ̃ ), it follows that

    dL/dt = Σ_{i=1}^{n} (eᵢ − p̊ᵢ)ᵀ π_{eᵢ} ( k_P ω − Ad_Ĥ Γ̃ ) eᵢ − tr( ωᵀ Ad_Ĥ Γ̃ )
          = −Σ_{i=1}^{n} p̊ᵢᵀ π_{eᵢ} ( k_P ω − Ad_Ĥ Γ̃ ) eᵢ − tr( ωᵀ Ad_Ĥ Γ̃ )
          = −tr( k_P Σ_{i=1}^{n} eᵢ p̊ᵢᵀ π_{eᵢ} ω ) + tr( ( Σ_{i=1}^{n} eᵢ p̊ᵢᵀ π_{eᵢ} − ωᵀ ) Ad_Ĥ Γ̃ ).

Finally, introducing the expression (25) of ω, for which ωᵀ = Σ_{i=1}^{n} eᵢ p̊ᵢᵀ π_{eᵢ}, the terms in Γ̃ cancel and we get

    dL/dt = −k_P ‖ Σ_{i=1}^{n} eᵢ p̊ᵢᵀ π_{eᵢ} ‖².

The derivative of the Lyapunov function is negative semi-definite, and equal to zero when ω = 0. Given that Ω is bounded, it is easily verified that dL/dt is uniformly continuous, and Barbalat's Lemma can be used to prove asymptotic convergence of ω → 0. Using the same arguments as in the proof of Theorem 3.2, it is straightforward to verify that H̃ → I. Consequently, the first term of the Lyapunov function (32) converges to zero and ‖Γ̃‖² converges to a constant. Computing the time derivative of H̃, and using the fact that ω converges to zero and H̃ converges to I, it is straightforward to show that

    lim_{t→∞} dH̃/dt = −Ad_Ĥ Γ̃ = 0.

Using the boundedness of H, one can ensure that lim_{t→∞} Γ̃ = 0.
IV. SIMULATION RESULTS

In this section, we illustrate the performance and robustness of the proposed observers through simulation results. The camera is assumed to be attached to an aerial vehicle moving in a circular trajectory which stays in a plane parallel to the ground. The reference camera frame {Å} is chosen as the NED (North-East-Down) frame situated above the four observed points on the ground. The four observed points form a square whose center lies on the Z-axis of the NED frame {Å}. The vehicle's trajectory is chosen such that the term Γ₁ defined by (20) remains constant, and the observer (30)-(31) is applied with the following gains: k_P = 4, k_I = 1. Distributed noise of variance 0.01 is added to the measurement of the angular velocity Ω. The chosen initial estimated homography Ĥ(0) corresponds to i) an error of π/2 in both the pitch and yaw angles of the attitude, and ii) an estimated translation equal to zero. The initial value of Γ̂₁ is set to zero. From 40 s to 45 s, we assume that the measurements of two of the observed points are lost. Then, from 45 s, we regain the measurements of all four points as previously.

The results reported in Fig. 1 show a good convergence rate of the estimated homography to the real homography (see from 0 to 40 s and from 45 s). The loss of point measurements marginally affects the global performance of the proposed observer. Note that in this situation, no existing method for extracting the homography from measurements of only two points is available.

V. CONCLUDING REMARKS

In this paper we developed a nonlinear observer for a sequence of homographies represented as elements of the Special Linear group SL(3). More precisely, the observer directly uses point correspondences from an image sequence without requiring explicit computation of the individual homographies between any two given images. The stability of the observer has been proved for both cases of known full group velocity and of known rigid-body velocities only. Even if the characterization of the stability domain still remains an open issue, simulation results have been provided as a complement to the theoretical approach to demonstrate a large domain of stability.

REFERENCES

[1] O. Faugeras and F. Lustman. Motion and structure from motion in a piecewise planar environment. International Journal of Pattern Recognition and Artificial Intelligence, 2(3): 485-508, 1988.
[2] S. Benhimane and E. Malis. Homography-based 2D visual tracking and servoing. International Journal of Robotics Research, 26(7): 661-676, 2007.
[3] S. Bonnabel and P. Rouchon. Control and Observer Design for Nonlinear Finite and Infinite Dimensional Systems, volume 322 of Lecture Notes in Control and Information Sciences, chapter On Observers, pages 53-67. Springer-Verlag, 2005.
[4] O. Faugeras and F. Lustman. Motion and structure from motion in a piecewise planar environment. International Journal of Pattern Recognition and Artificial Intelligence, 2(3): 485-508, 1988.
[5] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, second edition, 2003.
[6] E. Malis and M. Vargas. Deeper understanding of the homography decomposition for vision-based control. INRIA, Tech. Rep. 6303, 2007. Available at http://hal.inria.fr/inria-00174036/fr/.
[7] C. Lageman, R. Mahony, and J. Trumpf. State observers for invariant dynamics on a Lie group. Conf. on the Mathematical Theory of Networks and Systems, 2008.
[8] C. Lageman, J. Trumpf, and R. Mahony. Gradient-like observers for invariant dynamics on a Lie group. IEEE Trans. on Automatic Control, to appear.
[9] R. Mahony, T. Hamel, and J.-M. Pflimlin. Non-linear complementary filters on the special orthogonal group. IEEE Trans. on Automatic Control, 53(5): 1203-1218, 2008.
[10] E. Malis and F. Chaumette. Theoretical improvements in the stability analysis of a new class of model-free visual servoing methods. IEEE Trans. on Robotics and Automation, 18(2): 176-186, 2002.
[11] E. Malis, F. Chaumette, and S. Boudet. 2 1/2 D visual servoing. IEEE Trans. on Robotics and Automation, 15(2): 234-246, 1999.
[12] E. Malis and M. Vargas. Deeper understanding of the homography decomposition for vision-based control. Research Report 6303, INRIA, 2007.
[13] E. Malis, T. Hamel, R. Mahony, and P. Morin. Dynamic estimation of homography transformations on the special linear group for visual servo control. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2009.
[14] E. Malis, T. Hamel, R. Mahony, and P. Morin. Estimation of homography dynamics on the special linear group. In Visual Servoing via Advanced Numerical Methods, G. Chesi and K. Hashimoto (Eds.), Springer, chapter 8, pages 139-158, 2010.
[15] P. Martin and E. Salaün. Invariant observers for attitude and heading estimation from low-cost inertial and magnetic sensors. IEEE Conf. on Decision and Control, pages 1039-1045, 2007.
[16] P. Martin and E. Salaün. An invariant observer for earth-velocity-aided attitude heading reference systems. IFAC World Congress, pages 9857-9864, 2008.
[17] C. Mei, S. Benhimane, E. Malis, and P. Rives. Efficient homography-based tracking and 3-D reconstruction for single-viewpoint sensors. IEEE Trans. on Robotics, 24(6), December 2008.
[18] J. Thienel and R. M. Sanner. A coupled nonlinear spacecraft attitude controller and observer with an unknown constant gyro bias and gyro noise. IEEE Trans. on Automatic Control, 48(11): 2011-2015, 2003.
[19] J. Weng, N. Ahuja, and T. S. Huang. Motion and structure from point correspondences with error estimation: planar surfaces. IEEE Trans. on Pattern Analysis and Machine Intelligence, 39(12): 2691-2717, 1991.
[20] Z. Zhang and A. R. Hanson. 3D reconstruction based on homography mapping. Proc. Image Understanding Workshop, Morgan Kaufmann, pages 1007-1012, 1996.

Fig. 1. Estimated homography (solid line) and true homography (dashed line) vs. time.