Interest Points Outline

• Correspondence
• Interest point detection
• Scale space
• Orientation selection

Core visual understanding task: finding correspondences between images.

H2to1 = computeH(p1, p2)

• Inputs: p1 and p2 should be 2×N matrices of corresponding (x, y)^T coordinates between two images.
• Outputs: H2to1 should be a 3×3 matrix encoding the homography that best matches the linear equation derived above for Equation 8 (in the least-squares sense). Hint: remember that a homography is only determined up to scale. The Matlab functions eig() or svd() will be useful. Note that this function can be written without an explicit for-loop over the data points.

6 Stitching it together: Panoramas (30 pts)

We can also use homographies to create a panorama image from multiple views of the same scene.

(a) incline_L.jpg (img1)  (b) incline_R.jpg (img2)  (c) img2 warped to img1's frame
Figure 5: Example output for Q6.1: original images img1 and img2 (left and center) and img2 warped to fit img1 (right). Notice that the warped image clips out of the frame; we will fix this in Q6.2.
Figure 6: Final panorama view, with the homography estimated via RANSAC.

Submission: include a folder matlab containing all the .m and .mat files you were asked to write and generate, and a pdf named writeup.pdf containing the results, explanations and images asked for in the assignment, along with the answers to the questions on homographies. Submit all the code needed to make your panorama generator run. Make sure all the .m files that need to run are accessible from the matlab folder without any editing of the path variable. If you downloaded and used a feature detector for the extra credit, include the code with your submission and mention it in your writeup. You may leave the data folder in your submission, but it is not needed. Please zip your homework as usual and submit it using Blackboard.
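The computeH interface above is specified for Matlab. As an illustration of the same least-squares construction — the standard direct linear transform, with the SVD supplying the up-to-scale solution and no explicit loop over points — a NumPy sketch might look like this (function and variable names here are illustrative, not part of the assignment):

```python
import numpy as np

def compute_h(p1, p2):
    """Estimate H such that p1 ~ H * p2 in homogeneous coordinates.

    p1, p2: 2xN arrays of corresponding (x, y) coordinates.
    Returns a 3x3 homography, defined only up to scale.
    """
    x1, y1 = p1
    x2, y2 = p2
    n = x1.size
    zero, one = np.zeros(n), np.ones(n)
    # Each correspondence contributes two rows to the DLT system A h = 0.
    rows_x = np.stack([x2, y2, one, zero, zero, zero,
                       -x1 * x2, -x1 * y2, -x1], axis=1)
    rows_y = np.stack([zero, zero, zero, x2, y2, one,
                       -y1 * x2, -y1 * y2, -y1], axis=1)
    A = np.concatenate([rows_x, rows_y], axis=0)
    # The least-squares h (up to scale) is the right singular vector
    # associated with the smallest singular value of A.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)
```

Because the homography is only determined up to scale, compare estimates after normalizing, e.g. dividing by the bottom-right entry.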
This is possible, for example, when there is no camera translation between the views (e.g., only rotation about the camera center), as we saw in Q4.2. First, you will generate panoramas using matched point correspondences between images, obtained with the BRIEF matching you implemented in Q2.4. We will assume that there is no error in your matched point correspondences between images (although in practice there may be some errors). In the next section you will extend the technique to use (potentially noisy) keypoint matches.

You will need to use the provided function warp_im = warpH(im, H, out_size), which warps image im using the homography transform H. The pixels in warp_im are sampled at coordinates in the rectangle (1, 1) to (out_size(2), out_size(1)). The coordinates of the pixels in the source image are taken to be (1, 1) to (size(im,2), size(im,1)) and transformed according to H.

Q6.1 (15 pts) In this problem you will implement and use the function (stub provided in matlab/imageStitching.m):

[panoImg] = imageStitching(img1, img2, H2to1)

on two images from the Duquesne incline. This function accepts two images and the output from the homography estimation function.

Appendix: Image Blending

Note: This section is not for credit and is for informational purposes only.

For overlapping pixels, it is common to blend the values of both images. You can simply average the values, but that will leave a seam at the edges of the overlapping images. Alternatively, you can obtain a blending value for each image that fades one image into the other. To do this, first create a mask like this for each image you wish to blend:

mask = zeros(size(im,1), size(im,2));
mask(1,:) = 1; mask(end,:) = 1; mask(:,1) = 1; mask(:,end) = 1;
mask = bwdist(mask, 'city');
mask = mask/max(mask(:));

The function bwdist computes the distance transform of the binarized input image, so this mask will be zero at the borders and 1 at the center of the image. You can warp this mask just as you warped your images.
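As an illustration of the mask idea (not part of the assignment, which uses Matlab), here is a Python/NumPy sketch with scipy.ndimage.distance_transform_cdt standing in for bwdist(·, 'city'); the weighted average falls back to a single image's pixels wherever the other image's weight is zero:

```python
import numpy as np
from scipy.ndimage import distance_transform_cdt

def border_weight(h, w):
    """Cityblock distance to the image border, scaled to [0, 1]."""
    inner = np.ones((h, w), dtype=np.uint8)
    inner[0, :] = inner[-1, :] = inner[:, 0] = inner[:, -1] = 0
    # distance_transform_cdt measures distance to the nearest zero pixel,
    # i.e. to the border we just marked.
    d = distance_transform_cdt(inner, metric='taxicab').astype(float)
    m = d.max()
    return d / m if m > 0 else d

def blend(im1, w1, im2, w2, eps=1e-10):
    """Per-pixel linear combination (im1*w1 + im2*w2) / (w1 + w2).

    Where only one weight is nonzero the other image drops out; where
    both are zero (no image covers the pixel) the output is zero.
    """
    total = np.maximum(w1 + w2, eps)
    return (im1 * w1 + im2 * w2) / total
```

In a panorama, each weight mask would be warped with the same homography as its image before blending.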
How would you use the mask weights to compute a linear combination of the pixels in the overlap region? Your function should behave well where one or both of the blending weights are zero.

Example: license plate recognition
Example: product recognition (Google Glass)
Example: image matching of landmarks

Reference: David G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," Computer Science Department, University of British Columbia, Vancouver, B.C., Canada ([email protected]). January 5, 2004. Accepted for publication in the International Journal of Computer Vision (IJCV), 2004.

Abstract (from the paper): This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through a least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.

Motivation: Which of these patches are easier to match? Why? How can we mathematically operationalize this?
Corner Detector: Basic Idea

"flat" region: no change in any direction
"edge": no change along the edge direction
"corner": significant change in all directions

Defn: points are "matchable" if small shifts always produce a large SSD error.

The math

$$\mathrm{corner}(x_0, y_0) = \min_{u^2 + v^2 = 1} E_{x_0,y_0}(u, v)$$

where

$$E_{x_0,y_0}(u, v) = \sum_{(x,y) \in W(x_0,y_0)} [I(x + u, y + v) - I(x, y)]^2$$

and $W(x_0, y_0)$ is a window centered at $(x_0, y_0)$.

Background: Taylor series expansion

$$f(x + u) = f(x) + \frac{\partial f(x)}{\partial x} u + \frac{1}{2} \frac{\partial^2 f(x)}{\partial x^2} u^2 + \text{higher-order terms}$$

[Figure: approximation of f(x) = e^x at x = 0.]

Why are low-order expansions reasonable? The underlying smoothness of real-world signals.

Multivariate Taylor series:

$$I(x + u, y + v) = I(x, y) + \begin{bmatrix} \frac{\partial I}{\partial x} & \frac{\partial I}{\partial y} \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} + \frac{1}{2} \begin{bmatrix} u & v \end{bmatrix} \begin{bmatrix} \frac{\partial^2 I}{\partial x^2} & \frac{\partial^2 I}{\partial x \partial y} \\ \frac{\partial^2 I}{\partial x \partial y} & \frac{\partial^2 I}{\partial y^2} \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} + \text{higher-order terms}$$

where the row vector of first derivatives is the gradient and the matrix of second derivatives is the Hessian. To first order,

$$I(x + u, y + v) \approx I + I_x u + I_y v, \quad \text{where } I_x = \frac{\partial I(x, y)}{\partial x}.$$

Feature detection: the math

Consider shifting the window W by (u, v):
• how do the pixels in W change?
• compare each pixel before and after by summing up the squared differences
• this defines an "error" E(u, v):

$$E(u, v) = \sum_{(x,y) \in W} [I(x + u, y + v) - I(x, y)]^2 \approx \sum_{(x,y) \in W} [I + I_x u + I_y v - I]^2 = \sum_{(x,y) \in W} [I_x^2 u^2 + I_y^2 v^2 + 2 I_x I_y u v]$$

$$= \begin{bmatrix} u & v \end{bmatrix} A \begin{bmatrix} u \\ v \end{bmatrix}, \quad A = \sum_{(x,y) \in W} \begin{bmatrix} I_x^2 & I_x I_y \\ I_y I_x & I_y^2 \end{bmatrix}$$

The math (cont'd)

$$\mathrm{Corner}(x_0, y_0) = \min_{u^2 + v^2 = 1} E(u, v), \quad A = \sum_{(x,y) \in W(x_0,y_0)} \begin{bmatrix} I_x^2 & I_x I_y \\ I_y I_x & I_y^2 \end{bmatrix}$$

Note that the entries of A are simple products of $I_x$ and $I_y$.

Claim 1: A is symmetric ($A^T = A$) and positive semidefinite (PSD).
Claim 2: Corner-ness is given by the minimum eigenvalue of A.

SVD

Deva Ramanan
February 1, 2016

Let us represent a linear transformation as follows:

$$y = Ax, \quad A \in \mathbb{R}^{m \times n} \tag{1}$$

where A is a matrix with n columns and m rows. This document uses the singular value decomposition (SVD) to decompose A into a series of geometric transformations, focusing on intuition rather than a precise formulation.
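Returning to the corner score above: it can be sketched in a few lines of NumPy as a Shi–Tomasi-style min-eigenvalue detector (function name and window size are illustrative, not the course's reference implementation):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def min_eig_corner_score(im, window=5):
    """Per-pixel corner score: the smaller eigenvalue of
    A = sum over a window of [[Ix^2, Ix*Iy], [Ix*Iy, Iy^2]]."""
    iy, ix = np.gradient(im.astype(float))
    # Window sums of the gradient products (uniform_filter averages, but a
    # constant positive scale does not change the eigenvalue ordering).
    a = uniform_filter(ix * ix, window)
    b = uniform_filter(ix * iy, window)
    c = uniform_filter(iy * iy, window)
    # Closed-form smaller eigenvalue of the symmetric 2x2 matrix [[a,b],[b,c]].
    return 0.5 * ((a + c) - np.sqrt((a - c) ** 2 + 4.0 * b ** 2))
```

Flat regions and pure edges both score near zero (at an edge one eigenvalue is large but the minimum is still ~0); only corners, where the gradient varies in direction within the window, score high.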
For simplicity, let n = 2 and m = 3, such that A transforms points in 2D to 3D.

[Figure 1: Visualizing a matrix $A \in \mathbb{R}^{3 \times 2}$ as a transformation of points from $\mathbb{R}^2$ to $\mathbb{R}^3$.]

Revisiting spectral decomposition

Orthonormal basis: First, let us recall that the projection of a vector $x \in \mathbb{R}^n$ along a unit vector $v$ (e.g., $v^T v = 1$) can be written as $v^T x$. Let us construct a set of n unit vectors and write them as a matrix

$$V = \begin{bmatrix} v_1 & v_2 & \ldots & v_n \end{bmatrix}.$$

We can then compute the projection, or coordinates, of vector x along the unit vectors with a matrix multiplication $p = V^T x$. If all the unit vectors are orthogonal to each other ($v_i^T v_j = 0$ for $i \neq j$), then $V^T V = I$. This implies that V can be thought of as a rotation matrix (whose inverse is $V^T$), making it easy to undo the projection. The set of vectors in V forms an orthonormal basis for $\mathbb{R}^n$. Let us similarly construct an orthonormal basis for the output space, $U = \begin{bmatrix} u_1 & u_2 & \ldots \end{bmatrix}$.

For a symmetric matrix, the spectral (eigen)decomposition is

$$A = V \Lambda V^T, \quad V^T V = I, \quad \Lambda = \begin{bmatrix} \lambda_1 & 0 & 0 & \ldots \\ 0 & \lambda_2 & 0 & \ldots \\ 0 & 0 & \lambda_3 & \ldots \\ \vdots & & & \ddots \end{bmatrix}$$

It turns out that the above holds for any symmetric matrix.

[Figure 2: Visualizing the spectral eigendecomposition of a symmetric PSD matrix.]

This makes intuitive sense geometrically; taking the k largest singular values and vectors produces a transformation $A'$ that uses as much of the output space as possible. The sketch of the proof relies on the fact that U and V act as rotations and so do not affect the rank of A.
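A quick NumPy check of the two facts above — the spectral decomposition of a symmetric PSD matrix, and the rank-k approximation built from the k largest singular values and vectors (purely illustrative, not part of the original note):

```python
import numpy as np

rng = np.random.default_rng(0)

# A symmetric PSD matrix: A = B^T B is always symmetric and PSD.
B = rng.standard_normal((5, 3))
A = B.T @ B

# Spectral decomposition A = V diag(lam) V^T with orthonormal V (V^T V = I).
lam, V = np.linalg.eigh(A)
A_rebuilt = V @ np.diag(lam) @ V.T

# Rank-k approximation: keep only the k largest singular values/vectors.
U, s, Vt = np.linalg.svd(A)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
```

For a symmetric PSD matrix the eigenvalues are nonnegative and the spectral and singular value decompositions coincide up to ordering.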
