Iris Segmentation Using the Generalized Structure Tensor


Fernando Alonso-Fernandez and Josef Bigun
Halmstad University. Box 823. SE 301-18 Halmstad, Sweden
feralo, [email protected]

Abstract—We present a new iris segmentation algorithm based on the Generalized Structure Tensor (GST). We compare this approach with traditional iris segmentation systems based on Hough transform and integro-differential operators. Results are given using the CASIA-IrisV3-Interval database with respect to a segmentation made manually by a human expert. The proposed algorithm outperforms the baseline approaches, pointing out the validity of the GST as an alternative to classic iris segmentation systems. We also detect the cross positions between the eyelids and the outer iris boundary. Verification results using a publicly available iris recognition system based on 1D Log-Gabor wavelets are also given, showing the benefits of the eyelid removal step.

I. INTRODUCTION

Biometrics has been receiving considerable attention over the last years due to the increasing demand for automatic person recognition. It refers to the automatic recognition of an individual based on behavioral and/or physiological characteristics (e.g., fingerprints, face, iris, voice, etc.), which cannot be stolen, lost, or copied [1]. Among all biometric techniques, the iris has traditionally been regarded as one of the most reliable and accurate biometric identification systems available [2], [3].

The iris is a ring around the pupil, surrounded by a white region called the sclera (Figure 1). The pupil is generally darker than the iris and may have specular reflections due to the light sources used in commercial acquisition systems. Iris analysis begins with the detection of the inner and outer iris boundaries. Early works make use of integro-differential operators, proposed by Daugman [4], and edge detection plus circular Hough transform, proposed by Wildes [5]. They assume that iris boundaries can be modeled as circles. Much of the subsequent research has tried to improve on the Wildes idea, addressing issues such as the inherent computational burden of the Hough transform or the lack of enough edge points to define a circle [2]. Some works also deal with the problem of detecting eyelid occlusions or specular reflections.

Current issues include the development of acquisition systems that allow larger distances between the subject and the sensor (typically meters), in which the apparent size of the iris and pupil shows much more variability, with an important presence of motion blur, inconsistent illumination, or variation in subject gaze angle [6].

In this paper, we present an iris segmentation algorithm based on the Generalized Structure Tensor. By applying circular filters sequentially, we first detect the inner (pupil) boundary; its center is then used to refine the search for the outer (sclera) boundary. Since the pupil is generally darker than the iris, there is a sharper transition at the pupil boundary, which in principle is easier to detect than the softer transition between the iris and the sclera. We compare our algorithm with traditional iris segmentation systems based on Hough transform and integro-differential operators. We also implement an eyelid detection step. We have used for our experiments the CASIA-IrisV3 Interval database, with 2,639 iris images from 249 contributors acquired in two sessions [7]. Reported results show the effectiveness of the proposed algorithm, which outperforms the baseline systems. In addition, verification results of the proposed system using 1D Log-Gabor wavelets are given, showing the benefits of incorporating the eyelid detection step.

Fig. 1. Iris image with typical elements labeled: eyelashes, specular reflections, sclera, iris, pupil, eyelid.

II. THE 2D STRUCTURE TENSOR

The Ordinary Structure Tensor (OST)

Given a gaussian-smoothed image $I[\mathbf{p}]$, the 2D Ordinary Structure Tensor [8] at a given pixel $\mathbf{p} = [x, y]^T$ is the $2 \times 2$ matrix

$$S_w[\mathbf{p}] = \sum_{\mathbf{r}} w[\mathbf{r}]\, S_o[\mathbf{p} - \mathbf{r}], \qquad S_o[\mathbf{p}] = \begin{bmatrix} (f_x[\mathbf{p}])^2 & f_x[\mathbf{p}]\, f_y[\mathbf{p}] \\ f_x[\mathbf{p}]\, f_y[\mathbf{p}] & (f_y[\mathbf{p}])^2 \end{bmatrix}$$

where the index $\mathbf{r}$ ranges over the coordinate pairs $\{-m, \dots, m\} \times \{-m, \dots, m\}$ and $w[\mathbf{r}]$ is a weighting window centered at $\mathbf{p}$ (typically gaussian). The values $f_x[\mathbf{p}]$, $f_y[\mathbf{p}]$ are the estimated partial derivatives of $I[\mathbf{p}]$ at pixel $\mathbf{p}$, commonly implemented via convolutions with sampled gaussian-derivative filters.

It can be shown that the following (complex) linear combinations of the real moments $m_{pq}$ of the local power spectrum, which are the matrix elements of $S_w$, directly relate to the eigenvalues $\lambda_1, \lambda_2$ (and their corresponding eigenvectors $e_1, e_2$) of the matrix $S_w[\mathbf{p}]$ [9]:

$$I_{20} = (m_{11} - m_{22}) + i\, 2 m_{12} = (\lambda_1 - \lambda_2)\, e^{i 2 \varphi_{e_1}}$$
$$I_{11} = (m_{11} + m_{22}) = \lambda_1 + \lambda_2$$

Here $i$ is the complex $\sqrt{-1}$. Hence, $I_{20}$ (complex) and $I_{11}$ (real) are the second-order complex moments of the power spectrum. They are also referred to as the (complex representation of the) structure tensor, because they represent the ordinary structure tensor fully, with the advantage that the eigenvalues and eigenvectors of $S_w$ can be read out directly from $I_{20}$, $I_{11}$. This representation is increasingly preferred in fingerprint analysis, as it conveniently allows a generalization of the structure tensor, offering efficient detection of singularities such as core and delta [10], [11]. It can also be demonstrated that $|I_{20}| \leq I_{11}$ (Schwartz inequality).
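To make the definitions concrete, here is a minimal per-pixel sketch (not from the paper; the function name, the gaussian-derivative gradient estimate, and the parameter values are assumptions) of how $I_{20}$ and $I_{11}$ can be computed from an image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def complex_structure_tensor(image, grad_sigma=1.0, window_sigma=3.0):
    """Per-pixel I20 (complex) and I11 (real) of the ordinary structure
    tensor; a sketch with illustrative parameter values."""
    img = image.astype(float)
    # Gaussian-derivative estimates of the partial derivatives f_x, f_y.
    fx = gaussian_filter(img, grad_sigma, order=(0, 1))  # d/dx (columns)
    fy = gaussian_filter(img, grad_sigma, order=(1, 0))  # d/dy (rows)
    # Squared complex gradient: (f_x + i*f_y)^2 = (fx^2 - fy^2) + i*2*fx*fy,
    # whose windowed average is exactly (m11 - m22) + i*2*m12 = I20.
    g2 = (fx + 1j * fy) ** 2
    i20 = gaussian_filter(g2.real, window_sigma) \
        + 1j * gaussian_filter(g2.imag, window_sigma)
    i11 = gaussian_filter(np.abs(g2), window_sigma)  # m11 + m22
    return i20, i11  # |I20| <= I11 holds up to numerical error
```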
The Generalized Structure Tensor (GST)

Assuming that the inner/outer boundaries of the iris can be modeled as circles, they can be detected by means of the Generalized Structure Tensor [9], which is essentially template matching in the tensor domain and can be conveniently expressed using the complex version of the structure tensor, i.e.

$$I_{20} = (\lambda_1 - \lambda_2)\, e^{i 2 \varphi_{e_1}} = \sum_{\mathbf{p}} c(\mathbf{p}) \left( f_x(\mathbf{p}) + i f_y(\mathbf{p}) \right)^2$$
$$I_{11} = \lambda_1 + \lambda_2 = \sum_{\mathbf{p}} |c(\mathbf{p})|\, \left| \left( f_x(\mathbf{p}) + i f_y(\mathbf{p}) \right)^2 \right|$$

where $c(\mathbf{p})$ is defined as the complex version of the structure tensor response of a circle¹:

$$c(\mathbf{p}) = e^{-i 2 \varphi} \left( x^2 + y^2 \right)^{\gamma} e^{-(x^2 + y^2)/(2 \sigma^2)}$$

Here $\gamma$ and $\sigma^2$ determine the radius of the circle and the precision of the filter (the width of the circular boundary region). It can be shown that a high response in $|I_{20}|$ and a zero argument of $I_{20}$ are obtained at a point if there are edges at the prescribed (same) distance from that point and their local orientations (structure tensors) agree with those of a circle. It can also be shown that when this happens, the Schwartz inequality holds with equality, i.e. $|I_{20}| = I_{11}$.

¹In the ordinary structure tensor, $c(\mathbf{p})$ is a gaussian defining the extent of the local window.
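The template-matching reading of these formulas can be sketched as follows (this is my interpretation, not the authors' code: the filter-support size, the choice of $\gamma$ that places the radial peak at the target radius, and the log-domain evaluation that avoids overflow are all assumptions):

```python
import numpy as np
from scipy.signal import fftconvolve

def circular_gst_filter(radius, sigma=2.0):
    """Complex circle template c(p) = exp(-i*2*phi) (x^2+y^2)^gamma
    * exp(-(x^2+y^2)/(2*sigma^2)), with gamma chosen so the radial
    magnitude r^(2*gamma) exp(-r^2/(2*sigma^2)) peaks at r = radius."""
    gamma = radius ** 2 / (2.0 * sigma ** 2)
    half = int(radius + 4 * sigma)
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x ** 2 + y ** 2
    # Evaluate the magnitude in the log domain to avoid overflow.
    log_mag = gamma * np.log(np.maximum(r2, 1e-12)) - r2 / (2.0 * sigma ** 2)
    mag = np.exp(log_mag - log_mag.max())
    c = np.exp(-1j * 2.0 * np.arctan2(y, x)) * mag
    return c / np.abs(c).sum()

def gst_response(fx, fy, c):
    """I20 and I11 of the GST for every candidate circle centre.
    c(-p) = c(p), so convolution coincides with the correlation sums."""
    g2 = (fx + 1j * fy) ** 2
    i20 = fftconvolve(g2, c, mode='same')
    i11 = fftconvolve(np.abs(g2), np.abs(c), mode='same')
    return i20, i11
```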
III. PROPOSED SYSTEM

We propose the use of the GST for iris segmentation, following the process described next (summarized in Figure 2).

Fig. 2. System model for iris segmentation using the Generalized Structure Tensor. Pupil detection: apply a circular filter $c(\mathbf{p})$ of variable radius across the whole image and find the radius for which the maximum of $|I_{20}|$ occurs. Sclera detection: apply a circular filter $c(\mathbf{p})$ of variable radius around the detected pupil centre only and find the radius for which the maximum of $|I_{20}|$ occurs. Eyelid occlusion detection: extract low-standard-deviation regions in $I_{20}/I_{11}$ across the detected sclera circle to find the positions of the cross-points between the eyelids and the sclera.

We first search for the pupil boundary with a circular filter of variable radius. The range of radius values is set manually, depending on the database (15-70 pixels for the CASIA-IrisV3 Interval). A peak in the response of $|I_{20}|$ is obtained when the radius of the circular filter fits that of the pupil boundary, as can be seen in Figure 2, left. To improve detection and to discard spurious peaks, a threshold on the argument of $I_{20}$ is also imposed (±3 degrees in this work), so peaks with an argument higher than the threshold are discarded.

After detecting the pupil boundary, another circular filter is used to search for the sclera boundary. The minimum radius of this filter is dynamically adjusted depending on the pupil radius, and the maximum radius is set to 140 pixels. For better accuracy, we search for the maximum value of $|I_{20}|$ only within a small region around the center of the pupil, since the two circles are assumed to be concentric to a certain extent [4].

Fig. 3. Pupil and sclera segmentation accuracy (test set): CDFs of the normalized distance (nd) for the difference in radius and for the distance between centres, comparing GST, Hough, and integro-differential segmentation of the pupil and sclera boundaries.
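The pupil-detection sweep might then look like the sketch below (the 15-70 pixel radius range and the ±3° argument test follow the text above; the function names and all other details are assumptions carried over from the previous sketches):

```python
import numpy as np

def detect_pupil(fx, fy, radii=range(15, 71), max_arg_deg=3.0):
    """Sweep circular-filter radii and keep the strongest |I20| peak
    whose argument lies within +/- max_arg_deg degrees (sketch)."""
    best = None  # (|I20| peak, (row, col) centre, radius)
    for radius in radii:
        c = circular_gst_filter(radius)       # from the previous sketch
        i20, _ = gst_response(fx, fy, c)
        mag = np.abs(i20)
        arg = np.degrees(np.angle(i20))
        mag[np.abs(arg) > max_arg_deg] = 0.0  # discard spurious peaks
        idx = np.unravel_index(np.argmax(mag), mag.shape)
        if best is None or mag[idx] > best[0]:
            best = (mag[idx], idx, radius)
    return best
```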
Recommended publications
  • Hough Transform, Descriptors – Tammy Riklin Raviv, Electrical and Computer Engineering, Ben-Gurion University of the Negev
    DIGITAL IMAGE PROCESSING, Lecture 7: Hough transform, descriptors. Tammy Riklin Raviv, Electrical and Computer Engineering, Ben-Gurion University of the Negev.

    Hough transform: a line y = mx + b in image space maps to a point in the (m, b) parameter space, and each image point votes for the whole family of lines passing through it (slides from S. Savarese). Issues: the parameter space (m, b) is unbounded, and vertical lines have infinite gradient. The remedy is a polar representation of the parameter space, x cos θ + y sin θ = r: each image point then sweeps out a sinusoid in (θ, r), and the intersection of these sinusoids provides the desired line equation. Experiments: with noisy data the grid size must be adjusted or the accumulator smoothed; uniform noise produces spurious peaks in the model-parameter histogram.

    Hough Transform Algorithm: 1. Image → Canny. 2. Canny → Hough votes. 3. Hough votes → edges (find peaks and post-process). Example: http://ostatic.com/files/images/ss_hough.jpg

    Incorporating image gradients: when we detect an edge point, we also know its gradient direction, and this means the line is uniquely determined. Modified Hough transform: for each edge point (x, y), set θ = gradient orientation at (x, y), ρ = x cos θ + y sin θ, and increment H(θ, ρ) by 1.
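    A compact sketch of the gradient-guided voting loop described in this excerpt (the Sobel-based gradient estimate, the accumulator resolution, and all names are assumptions, not from the slides):

```python
import numpy as np
from scipy.ndimage import sobel

def gradient_hough_lines(edges, image, n_theta=180, n_rho=400):
    """Modified Hough transform: each edge point casts a single vote at
    the (theta, rho) fixed by its gradient orientation (sketch)."""
    img = image.astype(float)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    rho_max = np.hypot(*img.shape)
    H = np.zeros((n_theta, n_rho))
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        theta = np.arctan2(gy[y, x], gx[y, x]) % np.pi  # fold to [0, pi)
        rho = x * np.cos(theta) + y * np.sin(theta)     # in [-rho_max, rho_max]
        t = int(theta / np.pi * (n_theta - 1))
        r = int((rho + rho_max) / (2 * rho_max) * (n_rho - 1))
        H[t, r] += 1
    return H  # peaks of H correspond to detected lines
```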
  • Scale Invariant Feature Transform (SIFT): Why Do We Care About Matching Features?
    Scale Invariant Feature Transform (SIFT). Why do we care about matching features? Camera calibration, stereo, tracking/SfM, image mosaicing, object/activity recognition, and more. Object representation and recognition: image content is transformed into local feature coordinates that are invariant to translation, rotation, scale, and other imaging parameters. Automatic mosaicing: http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html

    We want invariance to illumination, scale, rotation, affine distortion (viewpoint change), and full perspective projection. How to achieve illumination invariance: the easy way (normalization), or difference-based metrics (random trees, Haar, SIFT, gradients). How to achieve scale invariance: pyramids, or scale space (the DoG method). Pyramids: divide width and height by 2, taking the average of 4 pixels for each pixel (or a Gaussian blur with varying σ); repeat until the image is tiny; then run the filter over each image size and hope it is robust. Scale space, Difference of Gaussians (DoG): take DoG features from the differences of these images, producing the gradient image at different scales; if a feature is repeatedly present between Differences of Gaussians, it is scale invariant and should be kept.
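    A minimal sketch of the Difference-of-Gaussians scale space this excerpt describes (the σ schedule, scale count, and names are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(image, n_scales=4, sigma0=1.6, k=2 ** 0.5):
    """Differences of successively blurred images; extrema that persist
    across neighbouring DoG levels are candidate scale-invariant features."""
    img = image.astype(float)
    blurred = [gaussian_filter(img, sigma0 * k ** i) for i in range(n_scales + 1)]
    return [blurred[i + 1] - blurred[i] for i in range(n_scales)]
```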
  • Exploiting Information Theory for Filtering the Kadir Scale-Saliency Detector
    Exploiting Information Theory for Filtering the Kadir Scale-Saliency Detector. P. Suau and F. Escolano ({pablo,sco}@dccia.ua.es), Robot Vision Group, University of Alicante, Spain. IBPRIA 2007, June 7th, 2007.

    Outline: 1. Introduction. 2. Method: entropy analysis through scale space; Bayesian filtering; Chernoff information and threshold estimation; the Bayesian scale-saliency filtering algorithm. 3. Experiments: Visual Geometry Group database. 4. Conclusions.

    Local feature detectors: feature extraction is a basic step in many computer vision tasks. The Kadir and Brady scale-saliency detector (2001) finds salient features over a narrow range of scales, but has a computational bottleneck (all pixels, all scales); applied to robot global localization, real-time feature extraction is needed. Salient features are scored by the entropy $H_D(s, x) = -\sum_{d \in D} P_{d,s,x} \log_2 P_{d,s,x}$, computed between scales $s_{min}$ and $s_{max}$.
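    A sketch of the per-point, per-scale entropy above (the square window standing in for Kadir-Brady's circular one and the histogram binning are my assumptions):

```python
import numpy as np

def local_entropy(image, x, y, scale, n_bins=16):
    """Shannon entropy of the grey-level histogram in a (2*scale+1)^2
    patch around (x, y); assumes an 8-bit image (sketch)."""
    patch = image[max(y - scale, 0):y + scale + 1,
                  max(x - scale, 0):x + scale + 1]
    hist, _ = np.histogram(patch, bins=n_bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```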
  • Hough Transform
    The Hough transform (/ˈhʌf/) is a feature extraction technique used in image analysis, computer vision, and digital image processing.[1] The purpose of the technique is to find imperfect instances of objects within a certain class of shapes by a voting procedure. This voting procedure is carried out in a parameter space, from which object candidates are obtained as local maxima in a so-called accumulator space that is explicitly constructed by the algorithm for computing the Hough transform. The classical Hough transform was concerned with the identification of lines in the image, but later the Hough transform has been extended to identifying positions of arbitrary shapes, most commonly circles or ellipses. The Hough transform as it is universally used today was invented by Richard Duda and Peter Hart in 1972, who called it a "generalized Hough transform"[2] after the related 1962 patent of Paul Hough.[3] The transform was popularized in the computer vision community by Dana H. Ballard through a 1981 journal article titled "Generalizing the Hough transform to detect arbitrary shapes".

    Theory. In automated analysis of digital images, a subproblem often arises of detecting simple shapes, such as straight lines, circles or ellipses. In many cases an edge detector can be used as a pre-processing stage to obtain image points or image pixels that are on the desired curve in the image space. Due to imperfections in either the image data or the edge detector, however, there may be missing points or pixels on the desired curves as well as spatial deviations between the ideal line/circle/ellipse and the noisy edge points as they are obtained from the edge detector.
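    Since the excerpt notes the extension from lines to circles (the baseline the iris paper above compares against), here is a minimal circle-Hough accumulator for a single known radius (a sketch; a full detector would also sweep the radius):

```python
import numpy as np

def circle_hough(edges, radius, n_angles=360):
    """Each edge pixel votes for every centre lying at distance `radius`
    from it; the accumulator maximum is the best circle centre (sketch)."""
    acc = np.zeros(edges.shape)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        cx = np.round(x - radius * np.cos(angles)).astype(int)
        cy = np.round(y - radius * np.sin(angles)).astype(int)
        ok = (cx >= 0) & (cx < acc.shape[1]) & (cy >= 0) & (cy < acc.shape[0])
        acc[cy[ok], cx[ok]] += 1
    return acc  # np.unravel_index(acc.argmax(), acc.shape) is the centre
```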
  • Histogram of Directions by the Structure Tensor
    Histogram of Directions by the Structure Tensor. Josef Bigun and Stefan M. Karlsson, Halmstad University, IDE SE-30118, Halmstad, Sweden. [email protected]; [email protected]

    ABSTRACT. Many low-level features, as well as varying methods of extraction and interpretation, rely on directionality analysis (for example the Hough transform, Gabor filters, SIFT descriptors and the structure tensor). The theory of the gradient-based structure tensor (a.k.a. the second moment matrix) is a very well suited theoretical platform in which to analyze and explain the similarities and connections (indeed often equivalence) of supposedly different methods and features that deal with image directionality. Of special interest to this study is the SIFT descriptors (histogram of oriented gradients, HOGs). Our analysis of interrelationships of prominent directionality analysis tools offers the possibility of computation of HOGs without binning, in an algo- […] entity). Also, by using the approach of trying to reduce directionality measures to the structure tensor, insights are to be gained. This is especially true for the study of the histogram of oriented gradient (HOG) features (the descriptor of the SIFT algorithm [12]). We will present both how these are very similar to the structure tensor, but also detail how they differ, and in the process present a different algorithm for computing them without binning. In this paper, we will limit ourselves to the study of 3 kinds of definitions of directionality, and their associated features: 1) the structure tensor, 2) HOGs, and 3) Gabor filters. The results of relating the Gabor filters to the tensor have been studied earlier [3], [9], and so for brevity, more attention will be given to the HOGs.
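    For context, a bare-bones binned gradient-orientation histogram (the classic HOG-style computation the paper contrasts with its binning-free tensor method; all parameters are illustrative):

```python
import numpy as np
from scipy.ndimage import sobel

def orientation_histogram(image, n_bins=9):
    """Magnitude-weighted histogram of gradient directions over the whole
    image, in effect a single HOG cell (sketch)."""
    img = image.astype(float)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi  # orientations folded to [0, pi)
    bins = (ang / np.pi * n_bins).astype(int).clip(0, n_bins - 1)
    return np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
```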
  • The Hough Transform As a Tool for Image Analysis
    THE HOUGH TRANSFORM AS A TOOL FOR IMAGE ANALYSIS. Josep Lladós, Computer Vision Center - Dept. Informàtica, Universitat Autònoma de Barcelona. Computer Vision Master, March 2003.

    Abstract. The Hough transform is a widespread technique in image analysis. Its main idea is to transform the image to a parameter space where clusters or particular configurations identify instances of a shape under detection. In this chapter we overview some meaningful Hough-based techniques for shape detection, either parametrized or generalized shapes. We also analyze some approaches based on the straight line Hough transform able to detect particular structural properties in images. Some of the ideas of these approaches will be used in the following chapter to solve one of the goals of the present work.

    1. Introduction. The Hough transform was first introduced by Paul Hough in 1962 [4] with the aim of detecting alignments in T.V. lines. It later became the basis of a great number of image analysis applications. The Hough transform is mainly used to detect parametric shapes in images. It was first used to detect straight lines and later extended to other parametric models such as circumferences or ellipses, being finally generalized to any parametric shape [1]. The key idea of the Hough transform is that spatially extended patterns are transformed into a parameter space where they can be represented in a spatially compact way. Thus, a difficult global detection problem in the image space is reduced to an easier problem of peak detection in a parameter space. A set of collinear image points $(x, y)$ can be represented by the equation $y = mx + n$ (1), where $m$ and $n$ are two parameters, the slope and intercept, which characterize the line.
  • Line Detection by Hough Transformation
    Line Detection by Hough transformation. 09gr820, April 20, 2009.

    1. Introduction. When images are to be used in different areas of image analysis such as object recognition, it is important to reduce the amount of data in the image while preserving the important, characteristic, structural information. Edge detection makes it possible to reduce the amount of data in an image considerably. However, the output from an edge detector is still an image described by its pixels. If lines, ellipses and so forth could be defined by their characteristic equations, the amount of data would be reduced even more. The Hough transform was originally developed to recognize lines [5], and has later been generalized to cover arbitrary shapes [3] [1]. This worksheet explains how the Hough transform is able to detect (imperfect) straight lines.

    2. The Hough Space. 2.1 Representation of Lines in the Hough Space. Lines can be represented uniquely by two parameters. Often the form in Equation 1 is used, with parameters $a$ and $b$:
    $$y = a \cdot x + b \qquad (1)$$
    This form is, however, not able to represent vertical lines. Therefore, the Hough transform uses the form in Equation 2, which can be rewritten as Equation 3 to be similar to Equation 1. The parameters $\theta$ and $r$ are the angle of the line and the distance from the line to the origin, respectively:
    $$r = x \cdot \cos\theta + y \cdot \sin\theta \qquad (2)$$
    $$y = -\frac{\cos\theta}{\sin\theta} \cdot x + \frac{r}{\sin\theta} \qquad (3)$$
    All lines can be represented in this form when $\theta \in [0, 180[$ and $r \in \mathbb{R}$ (or $\theta \in [0, 360[$ and $r \geq 0$).
  • Low-Level Vision Tutorial 1
    Low-level vision – a tutorial. Professor Roy Davies. Overview and further reading.

    Abstract. This tutorial aims to help those with some experience of vision to obtain a more in-depth understanding of the problems of low-level vision. As it is not possible to cover everything in the space of 90 minutes, a carefully chosen selection of topics is treated: feature detection and sensitivity; image filters and morphology; robustness of object location; validity and accuracy in shape analysis; scale and affine invariance. In what follows, references are given for the topics falling under each of these categories, and for other topics that could not be covered in depth in the tutorial.

    […] complexity and sophistication – so much so that the problems of low-level vision are often felt to be unimportant and are forgotten. Yet it remains the case that information that is lost at low level is never regained, while distortions that are introduced at low level can cause undue trouble at higher levels [1]. Furthermore, image acquisition is equally important: simple measures to arrange suitable lighting can help to make the input images easier and less ambiguous to interpret, and can result in much greater reliability and accuracy in applications such as automated inspection [1]. Nevertheless, in applications such as surveillance, algorithms should be made as robust as possible so that the vagaries of ambient illumination are rendered relatively unimportant.

    3. Feature detection and sensitivity: general [1]; edge detection [2, 3, 4]; line segment detection [5, 6, 7]; corner and interest point detection [1, 8, 9]; general feature mask design [10, 11]; the value of thresholding [1, 12, 13].
  • Distinctive Image Features from Scale-Invariant Keypoints
    Distinctive Image Features from Scale-Invariant Keypoints. David G. Lowe, Computer Science Department, University of British Columbia, Vancouver, B.C., Canada. [email protected]. January 5, 2004.

    Abstract. This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance. Accepted for publication in the International Journal of Computer Vision, 2004.

    1. Introduction. Image matching is a fundamental aspect of many problems in computer vision, including object or scene recognition, solving for 3D structure from multiple images, stereo correspondence, and motion tracking. This paper describes image features that have many properties that make them suitable for matching differing images of an object or scene.
  • Lecture #06: Edge Detection
    Lecture #06: Edge Detection. Winston Wang, Antonio Tan-Torres, Hesam Hamledari. Department of Computer Science, Stanford University, Stanford, CA 94305. {wwang13, tantonio}@cs.stanford.edu; [email protected]

    1. Introduction. This lecture covers edge detection, Hough transformations, and RANSAC. The detection of edges provides meaningful semantic information that facilitates the understanding of an image. This can help in analyzing the shape of elements, extracting image features, and understanding changes in the properties of depicted scenes such as discontinuity in depth, type of material, and illumination, to name a few. We will explore the application of Sobel and Canny edge detection techniques. The next section introduces the Hough transform, used for the detection of parametric models in images; for example, the detection of linear lines, defined by two parameters, is made possible by the Hough transform. Furthermore, this technique can be generalized to detect other shapes (e.g., circles). However, as we will see, the use of the Hough transform is not effective in fitting models with a high number of parameters. To address this model-fitting problem, the random sampling consensus (RANSAC) is introduced in the last section; this non-deterministic approach repeatedly samples subsets of data, uses them to fit the model, and classifies the remaining data points as "inliers" or "outliers" depending on how well they can be explained by the fitted model (i.e., their proximity to the model). The result is used for a final selection of data points used in achieving the final model fit. A general comparison of RANSAC and the Hough transform is also provided in the last section.
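    A compact sketch of the RANSAC loop this summary describes, applied to line fitting (the iteration count, inlier threshold, and names are illustrative assumptions):

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=2.0, seed=0):
    """Repeatedly fit a line to 2 random points and keep the model that
    explains the most inliers (sketch). `points` is an (N, 2) array."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = p2 - p1
        norm = np.hypot(*d)
        if norm == 0:
            continue
        n = np.array([-d[1], d[0]]) / norm    # unit normal of the line
        dist = np.abs((points - p1) @ n)      # point-to-line distances
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[best_inliers]               # refit the final model on these
```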
  • Model Fitting: The Hough Transform I – Guido Gerig, CS6640 Image Processing, Utah
    Model Fitting: The Hough transform I. Guido Gerig, CS6640 Image Processing, Utah. Credit: Svetlana Lazebnik (Computer Vision, UNC Chapel Hill, 2008).

    Fitting parametric models beyond lines: choose a parametric model to represent a set of features, whether a simple model (lines, circles) or a complicated model (a car). (Source: K. Grauman.) The membership criterion is not local: one can't tell whether a point belongs to a given model just by looking at that point. Three main questions: What model represents this set of features best? Which of several model instances gets which feature? How many model instances are there? Computational complexity is important: it is infeasible to examine every possible set of parameters and every possible combination of features.

    Fitting issues: noise in the measured feature locations; extraneous data, i.e. clutter (outliers) and multiple lines; missing data due to occlusions. Case study: line detection in an image-processing lane departure system with edge detection using the Hough transform, A. Rajya Lakshmi & J. Mounika (link).

    Voting schemes: let each feature vote for all the models that are compatible with it; hopefully the noise features will not vote consistently for any single model, and missing data doesn't matter as long as there are enough features remaining to agree on a good model. The Hough transform is an early type of voting scheme. General outline: discretize the parameter space into bins; for each feature point in the image, put a vote in every bin in the parameter space that could have generated this point; find the bins that have the most votes (image space vs. Hough parameter space). P.V.C. […]
  • Analysing Arbitrary Curves from the Line Hough Transform
    Journal of Imaging, Article: Analysing Arbitrary Curves from the Line Hough Transform. Donald Bailey*, Yuan Chang and Steven Le Moan. Centre for Research in Image and Signal Processing, Massey University, Palmerston North 4442, New Zealand; [email protected] (Y.C.); [email protected] (S.L.M.). *Correspondence: [email protected]. Received: 24 March 2020; Accepted: 17 April 2020; Published: 23 April 2020.

    Abstract: The Hough transform is commonly used for detecting linear features within an image. A line is mapped to a peak within parameter space corresponding to the parameters of the line. By analysing the shape of the peak, or peak locus, within parameter space, it is possible to also use the line Hough transform to detect or analyse arbitrary (non-parametric) curves. It is shown that there is a one-to-one relationship between the curve in image space, and the peak locus in parameter space, enabling the complete curve to be reconstructed from its peak locus. In this paper, we determine the patterns of the peak locus for closed curves (including circles and ellipses), linear segments, inflection points, and corners. It is demonstrated that the curve shape can be simplified by ignoring parts of the peak locus. One such simplification is to derive the convex hull of shapes directly from the representation within the Hough transform. It is also demonstrated that the parameters of elliptical blobs can be measured directly from the Hough transform.

    Keywords: Hough transform; curve detection; features; edgelets; convex hull; ellipse parameters.

    1. Introduction to the Hough Transform. A common task in many image processing applications is the detection of objects or features (for example lines, circles, etc.).