Image Sequence (or Motion) Analysis


Image Sequence (or Motion) Analysis is also referred to as Dynamic Scene Analysis.
Input: a sequence of images (or frames).
Output: 3-D motion parameters of the object(s) in motion and/or 3-D structure information of the moving objects.
Assumption: the time interval ΔT between two consecutive (successive) frames is small.

Typical steps in Dynamic Scene Analysis:
• Image filtering and enhancement
• Tracking
• Feature detection
• Establish feature correspondence
• Motion parameter estimation (local and global)
• Structure estimation (optional)
• Predict occurrence in the next frame

TRACKING in a Dynamic Scene
(Figure: a stack of frames F1, F2, F3, …, FN.)
Since ΔT is small, the amount of motion/displacement of objects between two successive frames is also small.
At 25 frames per second (fps): ΔT = 40 msec. At 50 fps: ΔT = 20 msec.

Image Motion
Image change by the difference equation:
f_d(x_1, x_2, t_i, t_j) = f(x_1, x_2, t_i) − f(x_1, x_2, t_j) = f(t_i) − f(t_j) = f_i − f_j
Accumulated difference image:
f_T(X, t_n) = f_d(X, t_n−1, t_n) − f_T(X, t_n−1); n ≥ 3, where f_T(X, t_2) = f_d(X, t_2, t_1)
Moving edge (or feature) detector:
F_mov_feat(X, t_1, t_2) = (∂f/∂X) · f_d(X, t_1, t_2)
Recent methods include background and foreground modeling.
(Figure: frames F1 and F2, their difference F1 − F2 and abs(F1 − F2); an input video frame, the segmented video frame, and the moving object extracted using an alpha matte.)

Two categories of visual tracking algorithms: Target Representation and Localization; Filtering and Data Association.
A. Some common Target Representation and Localization algorithms:
Blob tracking: segmentation of the object interior (for example blob detection, block-based correlation or optical flow).
Kernel-based tracking (mean-shift tracking): an iterative localization procedure based on the maximization of a similarity measure (Bhattacharyya coefficient).
Contour tracking: detection of the object boundary (e.g. active contours or the Condensation algorithm).
Visual feature matching: registration.
B. Some common Filtering and Data Association algorithms:
Kalman filter: an optimal recursive Bayesian filter for linear functions and Gaussian noise.
Particle filter: useful for sampling the underlying state-space distribution of non-linear and non-Gaussian processes.
Also see: match moving; motion capture; SwisTrack.

BLOB Detection – approaches used:
• Corner detectors (Harris, Shi & Tomasi, SUSAN, level-curve curvature, FAST, etc.)
• Ridge detection, scale-space pyramids
• LoG, DoG, DoH (Determinant of Hessian)
• Hessian affine, SIFT (Scale-Invariant Feature Transform)
• SURF (Speeded-Up Robust Features)
• GLOH (Gradient Location and Orientation Histogram)
• LESH (Local Energy based Shape Histogram)

Complexities and issues in tracking: difficulties that arise from noise, occlusion, clutter, moving cameras, multiple moving objects, and changes in the foreground objects or in the background environment must be overcome.

DoH – the scale-normalized determinant of the Hessian, also referred to as the Monge–Ampère operator:
det H_norm L = t^2 (L_xx L_yy − L_xy^2),
where H L denotes the Hessian matrix of L. By detecting the scale-space maxima/minima of this operator, one obtains another straightforward differential blob detector with automatic scale selection, which also responds to saddles.
Hessian affine: an affine-adapted version of the Hessian detector.
SURF is based on a set of 2-D Haar wavelets and implements the DoH.

SIFT: four major steps (a usage sketch follows this list):
1. Scale-space extrema detection (at various scales)
2. Keypoint localization
3. Orientation assignment
4. Keypoint descriptor
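As a minimal, illustrative sketch (not part of the original slides), the four SIFT steps can be run end-to-end with OpenCV; this assumes opencv-python (version 4.4 or later, where SIFT lives in the main module) and a hypothetical frame file name:

```python
# Minimal sketch: SIFT keypoints and 128-D descriptors with OpenCV.
# Assumes opencv-python >= 4.4 and numpy are installed.
import cv2

def sift_features(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    sift = cv2.SIFT_create()                      # steps 1-4 are handled internally
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    # Each keypoint carries location, scale and orientation (steps 1-3);
    # each descriptor row is the 4 x 4 x 8 = 128-element vector (step 4).
    return keypoints, descriptors

if __name__ == "__main__":
    kps, desc = sift_features("frame_0001.png")   # hypothetical frame file
    print(len(kps), "keypoints; descriptor array shape:", desc.shape)
```

Matching such descriptors between successive frames is one way of establishing the feature correspondences needed for the motion-parameter estimation discussed below.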
Only three of these steps (1, 3 and 4) are illustrated in the original slides. The histograms contain 8 bins each, and each descriptor contains an array of 4 × 4 histograms computed around the keypoint. This leads to a SIFT feature vector with 4 × 4 × 8 = 128 elements (the illustration uses a reduced 2 × 2 × 8 example).

GLOH (Gradient Location and Orientation Histogram)
GLOH is an extension of the SIFT descriptor designed to increase its robustness and distinctiveness. The SIFT descriptor is computed for a log-polar location grid with 3 bins in the radial direction (the radii set to 6, 11 and 15) and 8 bins in the angular direction, which results in 17 location bins; note that the central bin is not divided in angular directions. The gradient orientations are quantized into 16 bins, giving a 272-bin histogram. The size of this descriptor is reduced with PCA: the covariance matrix for PCA is estimated on 47,000 image patches collected from various images, and the 128 largest eigenvectors are used for the description. In short, GLOH extends SIFT by changing the location grid and using PCA to reduce the descriptor size.

Motion Equations
Models are derived from mechanics (Euler's or Newton's equations). A 3-D point P(x, y, z) at time t_1 moves to P'(x', y', z') at time t_2:
P' = R·P + T,
where R = [m_ij]_3×3 is a rotation by angle θ about a unit axis (n_1, n_2, n_3), with n_1^2 + n_2^2 + n_3^2 = 1, and T = [δx δy δz]^T:
m_11 = n_1^2 + (1 − n_1^2) cos θ
m_12 = n_1 n_2 (1 − cos θ) − n_3 sin θ
m_13 = n_1 n_3 (1 − cos θ) + n_2 sin θ
m_21 = n_1 n_2 (1 − cos θ) + n_3 sin θ
m_22 = n_2^2 + (1 − n_2^2) cos θ
m_23 = n_2 n_3 (1 − cos θ) − n_1 sin θ
m_31 = n_1 n_3 (1 − cos θ) − n_2 sin θ
m_32 = n_2 n_3 (1 − cos θ) + n_1 sin θ
m_33 = n_3^2 + (1 − n_3^2) cos θ

Observation in the image plane (focal length F):
X = Fx/z;  Y = Fy/z;  X' = Fx'/z';  Y' = Fy'/z';
U = X' − X;  V = Y' − Y.

Mathematically (for any two successive frames), the problem is:
Input: given (X, Y) and (X', Y');
Output: estimate n_1, n_2, n_3, θ, δx, δy, δz.

First look at the problem of estimating the motion parameters using 3-D knowledge only. Given only three non-linear equations, you have to obtain seven parameters, so a few more constraints, and maybe assumptions too, are needed. Since ΔT is small, θ must also be small (in radians). Thus, as θ → 0, R simplifies (reduces) to:
R = [[1, −n_3 θ, n_2 θ], [n_3 θ, 1, −n_1 θ], [−n_2 θ, n_1 θ, 1]] = [[1, −φ_3, φ_2], [φ_3, 1, −φ_1], [−φ_2, φ_1, 1]],
where φ_1^2 + φ_2^2 + φ_3^2 = θ^2.

Now (problem linearized), given three linear equations, you have to obtain six parameters. Solution? Take two point correspondences:
P_1' = R·P_1 + T;  P_2' = R·P_2 + T.
Subtracting one from the other eliminates the translation component:
(P_1' − P_2') = R·(P_1 − P_2), i.e.
[Δx'_12, Δy'_12, Δz'_12]^T = [[1, −φ_3, φ_2], [φ_3, 1, −φ_1], [−φ_2, φ_1, 1]] [Δx_12, Δy_12, Δz_12]^T,
and solve for Φ = [φ_1, φ_2, φ_3]^T, where Δx_12 = x_1 − x_2, Δx'_12 = x'_1 − x'_2, and so on for y and z.

Re-arrange this into the form [∇_12] Φ = Δ²_12, with
∇_12 = [[0, Δz_12, −Δy_12], [−Δz_12, 0, Δx_12], [Δy_12, −Δx_12, 0]]
and
Δ²_12 = [Δx'_12 − Δx_12, Δy'_12 − Δy_12, Δz'_12 − Δz_12]^T.

∇_12 is a skew-symmetric matrix, so |∇_12| = 0 (interpret why this is so). So what to do? Contact a mathematician?
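To make the linearization and the rank problem concrete, here is a small numeric sketch (an illustrative check, not part of the original slides; numpy only, all values hypothetical). It builds R from (n_1, n_2, n_3, θ), compares it with the linearized form I + [φ]×, and shows that ∇_12 is singular, so one pair of correspondences cannot determine Φ:

```python
# Numeric check of the small-angle motion model and the one-pair system.
import numpy as np

def skew(v):
    # [v]x such that skew(v) @ w == np.cross(v, w)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rotation(n, theta):
    # Rodrigues form; expanding it gives the m_ij listed above.
    n = n / np.linalg.norm(n)
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(n, n)
            + np.sin(theta) * skew(n))

n = np.array([0.3, 0.5, 0.8])
theta = 0.02                           # small angle (radians), since dT is small
R = rotation(n, theta)
phi = theta * n / np.linalg.norm(n)    # (phi_1, phi_2, phi_3)
R_lin = np.eye(3) + skew(phi)          # linearized R used in the slides
print(np.abs(R - R_lin).max())         # tiny (order theta^2): small-angle model is adequate

# One pair of correspondences: nabla_12 * Phi = Delta2_12
P1, P2 = np.array([1.0, 2.0, 5.0]), np.array([-1.0, 0.5, 6.0])
d = P1 - P2                            # (dx_12, dy_12, dz_12)
nabla_12 = -skew(d)                    # [[0, dz, -dy], [-dz, 0, dx], [dy, -dx, 0]]
print(np.allclose(nabla_12 @ phi, (R_lin - np.eye(3)) @ d))  # True: the re-arranged system
print(np.linalg.det(nabla_12))         # ~0: a 3x3 skew-symmetric matrix is singular,
                                       # so one pair of points cannot determine Phi
```

The singularity of ∇_12 is exactly why the derivation continues below with a second pair of correspondences.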
Take one more pair of point correspondences, P_3 and P_4, and write the same three equations with the rows in a cyclically permuted order (so that the sum of the two coefficient matrices is no longer skew-symmetric): [∇_34] Φ = Δ²_34, with
∇_34 = [[−Δz_34, 0, Δx_34], [Δy_34, −Δx_34, 0], [0, Δz_34, −Δy_34]]
and
Δ²_34 = [Δy'_34 − Δy_34, Δz'_34 − Δz_34, Δx'_34 − Δx_34]^T.

Using the two pairs (4 points) of correspondences, [∇_12] Φ = Δ²_12 and [∇_34] Φ = Δ²_34, and adding:
[∇_12 + ∇_34] Φ = [Δ²_12 + Δ²_34], i.e. [∇_1234][Φ] = [Δ²_1234],
where
∇_1234 = [[−Δz_34, Δz_12, Δx_34 − Δy_12], [Δy_34 − Δz_12, −Δx_34, Δx_12], [Δy_12, Δz_34 − Δx_12, −Δy_34]]
and
Δ²_1234 = [Δy'_34 − Δy_34 + Δx'_12 − Δx_12, Δz'_34 − Δz_34 + Δy'_12 − Δy_12, Δx'_34 − Δx_34 + Δz'_12 − Δz_12]^T.

The condition for the existence of a unique solution is based on a geometrical relationship of the coordinates of the four points in space at time t_1. Find the geometric condition under which |∇_1234| = 0. This solution is often used as an initial guess for the final estimate of the motion parameters.

OPTICAL FLOW
A point in 3-D space (homogeneous coordinates):
X_0 = [k x_o, k y_o, k z_o, k]^T; k ≠ 0, an arbitrary constant.
Image point: X_i = [w x_i, w y_i, w]^T, where X_i = P X_0.
Assuming normalized focal length, f = 1:
x_i = x_o / z_o,  y_i = y_o / z_o.   ....(1)
Assuming a linear motion model (no acceleration) between successive frames:
x_o(t) = x_o + u t,  y_o(t) = y_o + v t,  z_o(t) = z_o + w t.   ....(2)
Combining equations (1) and (2):
x_i(t) = (x_o + u t)/(z_o + w t),  y_i(t) = (y_o + v t)/(z_o + w t).   ....(3)
Assume in equation (3) that w (= dz/dt) < 0.
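A short numeric sketch (illustrative only; numpy, hypothetical values) shows what equation (3) implies when w < 0: the projected point drifts outward from the focus of expansion and its image motion grows as the object approaches the camera.

```python
# Sketch of equation (3): image trajectory of a 3-D point moving with
# constant velocity (u, v, w), normalized focal length f = 1.
import numpy as np

xo, yo, zo = 0.5, -0.3, 10.0       # 3-D position at t = 0 (hypothetical)
u, v, w = 0.2, 0.1, -1.0           # constant 3-D velocity; w < 0: approaching the camera

t = np.linspace(0.0, 5.0, 6)
xi = (xo + u * t) / (zo + w * t)   # equation (3)
yi = (yo + v * t) / (zo + w * t)

for ti, x, y in zip(t, xi, yi):
    print(f"t={ti:.1f}  image point=({x:.3f}, {y:.3f})")
# Because w < 0, the denominator (zo + w t) shrinks over time, so successive
# image displacements grow: the projected object expands in the image.
```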