From Single Image to List of Objects Based on Edge and Blob Detection


Rafal Grycuk (1), Marcin Gabryel (1), Marcin Korytkowski (1), Rafal Scherer (1), and Sviatoslav Voloshynovskiy (2)

(1) Institute of Computational Intelligence, Częstochowa University of Technology, Al. Armii Krajowej 36, 42-200 Częstochowa, Poland, {rafal.grycuk,marcin.gabryel,marcin.korytkowski,rafal.scherer}@iisi.pcz.pl, http://iisi.pcz.pl
(2) University of Geneva, Computer Science Department, 7 Route de Drize, Geneva, Switzerland, http://sip.unige.ch

In: L. Rutkowski et al. (Eds.): ICAISC 2014, Part II, LNAI 8468, pp. 617–627, 2014. © Springer International Publishing Switzerland 2014.

Abstract. In this paper we present a new method for obtaining a list of interest objects from a single image. Our object extraction method builds on two well-known algorithms: Canny edge detection and quadrilateral detection. Our approach makes it possible to select only the significant elements of the image. In addition, the method filters out, in a simple way, unnecessary key points (for example, those obtained by the SIFT algorithm) belonging to the image background. The effectiveness of the method is confirmed by experimental research.

1 Introduction

Content-based image retrieval is one of the greatest challenges of present-day computer science. Effective browsing and retrieval of images is required in many fields, e.g. medicine, architecture, forensics, publishing, fashion, and archives. In the process of image recognition, users search through databases which consist of thousands, even millions, of images. The aim can be the retrieval of a similar image or of images containing certain objects [4][6]. Retrieval mechanisms use image recognition methods. This is a sophisticated process which requires algorithms from many different areas, such as computational intelligence [7][8][12][20][21][22][24][27][29][32], machine learning [23], mathematics [2][13] and image processing [11]. An interesting approach to CBIR, based on nonparametric estimates (see e.g. [14][25][26]), was presented in [10].

One important aspect of content-based image retrieval is the ability to find specific objects in the image. This is particularly important in the case of large image databases which are searched for specific objects. Many ways to detect objects can be found in the literature, and one of them is image segmentation. The goal of image segmentation is to cluster pixels into salient image regions, i.e. regions corresponding to individual surfaces, objects, or natural parts of objects. There are many methods for image segmentation: thresholding [3], clustering methods [1], histogram-based methods [30], edge detection [28], stereo vision based methods [16], and many others. We propose a new method for extracting objects from images. It consists of two parts: the Canny edge detection method [5] and a method for quadrilateral detection [17]. An important element of our algorithm is an additional step which filters out the key points belonging to the background. The result is an image with key points only in the most interesting areas of the image.

Our method uses the SIFT (Scale-Invariant Feature Transform) algorithm to detect and describe local features of an image. It was presented for the first time in [18] and is now patented by the University of British Columbia. For each key point, which describes a local image feature, we generate a feature vector that can be used for further processing. SIFT consists of four main steps [15][19]: extraction of potential key points, selection of stable key points (resistant to changes of scale and rotation), finding a key point orientation immune to image transformations, and generating a vector describing the key point. SIFT key points contain 128 values.
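The authors' own implementation is in .NET C# with the Emgu CV bindings (see Section 4). Purely as an illustrative sketch of this step, the fragment below extracts SIFT key points and their 128-value descriptors with OpenCV in Python; the image path is a placeholder and the detector defaults are assumptions, not values taken from the paper.

```python
import cv2

# Load the image in grayscale (path is a placeholder).
img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# SIFT detector/descriptor; available in opencv-python >= 4.4
# (earlier versions need the contrib package).
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# One 128-dimensional vector per key point, as described above.
print(len(keypoints), descriptors.shape)  # N and (N, 128)
```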
The paper consists of several sections. The next sections present the methods used in the proposed approach and the proposed image segmentation algorithm. The last section presents experimental results obtained with original software written in .NET.

2 Canny Edge Detection

The Canny edge detector [5] is one of the most commonly used image processing methods for detecting edges. It takes a gray scale image as input and produces as output an image showing the positions of tracked intensity discontinuities. The algorithm runs in five separate steps:

1. Noise reduction. The image is smoothed by applying an appropriate Gaussian filter.
2. Finding the intensity gradient of the image. During this step the edges are marked where the gradients of the image have large magnitudes.
3. Non-maxima suppression. If the gradient magnitude at a pixel is larger than those at its two neighbors in the gradient direction, mark the pixel as an edge. Otherwise, mark the pixel as background.
4. Double thresholding. Potential edge pixels are determined by thresholding the remaining gradient image with two thresholds: pixels above the high threshold are marked as strong edges, pixels between the two thresholds as weak ones.
5. Edge tracking by hysteresis. Final edges are determined by suppressing all edges that are not connected to genuine (strong) edges.

The effect of the Canny operator is determined by the following parameters:
– The width of the Gaussian filter used in the first stage, which directly affects the results of the Canny algorithm.
– The thresholds used during double thresholding and edge tracking by hysteresis. It is difficult to give a generic threshold that works well on all images.
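For illustration, these two parameter groups map directly onto a short OpenCV call sequence. This is a minimal sketch in Python; the Gaussian kernel size/sigma and the two hysteresis thresholds below are arbitrary example values, not values used by the authors.

```python
import cv2

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Step 1: noise reduction with a Gaussian filter; the kernel width/sigma
# is the first parameter group discussed above.
smoothed = cv2.GaussianBlur(img, (5, 5), 1.4)

# Steps 2-5 (intensity gradient, non-maxima suppression, double
# thresholding and hysteresis tracking) run inside cv2.Canny;
# its two arguments are the hysteresis thresholds.
edges = cv2.Canny(smoothed, 50, 150)
cv2.imwrite("edges.png", edges)
```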
3 Blob Extraction

Blob extraction is one of the basic methods of image processing. It allows detecting and extracting a list of blobs (objects) in the image. Unfortunately, obtaining homogeneous objects from an image as a list of pixels is extremely complicated, especially when we deal with a heterogeneous background, i.e. objects lying on a multicolored background. There are many methods for extracting objects (blobs) from images [9]. In this paper we use methods implemented in the AForge.NET library. These algorithms are described by Andrew Kirillov [17]. There are four types of algorithms: Convex hull, Left/Right Edges, Top/Bottom Edges, and Quadrilateral. Fig. 1 compares these blob detection methods. Figure 1A illustrates the Quadrilateral detection method; as can be seen, round edges of the objects are not detected correctly. Much better results are obtained by the Top/Bottom Edges algorithm (Fig. 1C): edges of objects are detected mostly correctly, with individual exceptions. The Left/Right Edges method behaves similarly (Fig. 1B). The last method has a problem with the detection of vertices inside figures, e.g. star-type objects (Fig. 1D).

Fig. 1. Comparison of methods for blob detection used in the AForge.NET library [17]

The Quadrilateral detection method can be described by the following steps:

1. Locate each separate object in the input image,
2. Find object edge pixels (methods: Top/Bottom, Left/Right),
3. Detect the four corners of the quadrilateral,
4. Set distortionLimit. This value determines how much the shape may deviate from an ideal quadrilateral; set it to 0 to detect perfect quadrilaterals only,
5. Check how well the analyzed shape fits into the quadrilateral with the assumed parameter (see Fig. 1D),
6. Compute the mean distance between the given edge pixels and the edges of the assumed quadrilateral,
7. If the mean distance is not greater than distortionLimit, then we can assume that the shape is a quadrilateral.

4 Proposed Method for Segmentation

In this section we present a new approach to segmentation. Given a single image described by a pixel matrix (e.g. RGB), we can segment it into many objects. The algorithm can be divided into several steps. First of all, we need to create a mathematical description of the input image. In this step we use the SIFT algorithm and obtain a list of key points for the whole image. The next step creates an edge-detected image using the Canny algorithm described in Section 2. Another step detects blobs with all their properties, such as center of gravity, area, mean color, blob rectangle, and position. When we have detected these object contours as blobs, we can extract rectangles for these objects. The next step detects and saves the key points in each rectangle; in other words, we check whether a key point's position lies within the rectangle area, and if so, it is assigned to that rectangle. The last step extracts the objects from the original (input) image and saves them as separate objects. This method allows extracting a list of objects from the image, not only as raw parts of the image but also with a list of key points assigned to each object [31]. Figure 2 describes the steps of the algorithm, which can be represented by the following pseudocode.

Fig. 2. Block diagram of the proposed method

INPUT: Single image, singleInputImage
OUTPUT: Segmented image; a list of objects with a descriptor assigned to each object: descriptorsList, objectsList

allDescriptors := GenerateDescriptor(singleInputImage);
edgedImage := CannyEdgeDetector(singleInputImage, thresh, threshLinking);
blobs := DetectBlobs(edgedImage);
foreach blob in blobs do
  if blob.Area >= minBlobArea then
    extractedBlob := ExtractBlob(singleInputImage, startPosition, blob.Width, blob.Height);
    objectsList.AddToList(extractedBlob);
    extractedDescriptor := ExtractDescriptor(allDescriptors, startPosition, blob.Width, blob.Height);
    descriptorsList.AddToList(extractedDescriptor);
  end
end

Algorithm 1. Segmentation steps

The blob detection step generates lots of small blobs, so minBlobArea describes the minimum area for extracted objects; all blobs with an area equal to or greater than this value will be extracted. For the blob detection process we use the Quadrilateral detection method (see Section 3). The extraction process (blob and descriptor extraction) is realized simply by copying the entire blob rectangle into a newly created image (pixel by pixel, key point by key point). The algorithm stages were implemented in .NET C# using the AForge and Emgu CV libraries.
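The paper's implementation uses AForge and Emgu CV in C#; the sketch below only approximates Algorithm 1 with OpenCV in Python, using contour bounding rectangles as a stand-in for AForge's quadrilateral blob detection. The function name, parameter defaults, and the area test on the bounding rectangle are assumptions for illustration, not the authors' exact code.

```python
import cv2

def segment(path, thresh=50, thresh_linking=150, min_blob_area=500):
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # allDescriptors: SIFT key points and descriptors for the whole image.
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    pairs = list(zip(keypoints, descriptors)) if descriptors is not None else []

    # edgedImage: Canny edge detection with hysteresis thresholds.
    edged = cv2.Canny(gray, thresh, thresh_linking)

    # blobs: external contours of the edge image, one per candidate object.
    contours, _ = cv2.findContours(edged, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    objects_list, descriptors_list = [], []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h < min_blob_area:  # minBlobArea filter on the rectangle area
            continue
        # ExtractBlob: copy the blob rectangle out of the input image.
        objects_list.append(image[y:y + h, x:x + w].copy())
        # ExtractDescriptor: keep key points whose position lies in the rectangle.
        descriptors_list.append([
            (kp, vec) for kp, vec in pairs
            if x <= kp.pt[0] < x + w and y <= kp.pt[1] < y + h
        ])
    return objects_list, descriptors_list

objects, descriptors = segment("input.jpg")  # placeholder image path
```

The rectangle test mirrors the key point assignment step of the method: a key point is attached to an object exactly when its position falls inside that object's bounding rectangle.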
Recommended publications
  • Scale Invariant Interest Points with Shearlets
    Scale Invariant Interest Points with Shearlets. Miguel A. Duval-Poo, Nicoletta Noceti, Francesca Odone (DIBRIS, Università degli Studi di Genova, Italy) and Ernesto De Vito (DIMA, Università degli Studi di Genova, Italy).
    Abstract: Shearlets are a relatively new directional multi-scale framework for signal analysis, which has been shown effective in enhancing signal discontinuities such as edges and corners at multiple scales. In this work we address the problem of detecting and describing blob-like features in the shearlet framework. We derive a measure which is very effective for blob detection and closely related to the Laplacian of Gaussian. We demonstrate that the measure satisfies the perfect scale invariance property in the continuous case. In the discrete setting, we derive algorithms for blob detection and keypoint description. Finally, we provide qualitative justifications of our findings as well as a quantitative evaluation on benchmark data. We also report experimental evidence that our method is very suitable for dealing with compressed and noisy images, thanks to the sparsity property of shearlets.
    1 Introduction: Feature detection consists of the extraction of perceptually interesting low-level features over an image, in preparation for higher-level processing tasks. In the last decade a considerable amount of work has been devoted to the design of effective and efficient local feature detectors able to associate scale and orientation information with a given interest point. Scale-space theory has been one of the main sources of inspiration for this line of research, providing an effective framework for detecting features at multiple scales and, to some extent, for devising scale invariant image descriptors.
  • View Matching with Blob Features
    View Matching with Blob Features. Per-Erik Forssén and Anders Moe, Computer Vision Laboratory, Department of Electrical Engineering, Linköping University, Sweden.
    Abstract: This paper introduces a new region based feature for object recognition and image matching. In contrast to many other region based features, this one makes use of colour in the feature extraction stage. We perform experiments on the repeatability rate of the features across scale and inclination angle changes, and show that avoiding merging regions connected by only a few pixels improves the repeatability. Finally we introduce two voting schemes that allow us to find correspondences automatically, and compare them with respect to the number of valid correspondences they give, and their inlier ratios.
    1. Introduction (excerpt): ... homographic transformation that is not part of the affine transformation. This means that our results are relevant for both wide baseline matching and 3D object recognition. A somewhat related approach to region based matching is presented in [1], where tangents to regions are used to define linear constraints on the homography transformation, which can then be found through linear programming. The connection is that a line conic describes the set of all tangents to a region. By matching conics, we thus implicitly match the tangents of the ellipse-approximated regions.
    2. Blob features: We will make use of blob features extracted using a clustering pyramid built using robust estimation in local image regions [2]. Each extracted blob is represented by its average colour pk, area ak, centroid mk, and inertia matrix Ik.
  • Feature Detection Florian Stimberg
    Feature Detection. Florian Stimberg.
    Outline: ● Introduction ● Types of Image Features (Edges, Corners, Ridges, Blobs) ● SIFT (scale-space extrema detection, keypoint localization, orientation assignment, keypoint descriptor) ● Resources.
    Introduction: ● image features are distinct image parts ● feature detection is often the first operation, defining the image parts to process later ● finding corresponding features in pictures is necessary for object recognition ● multi-image panoramas need to be "stitched" together at corresponding image features ● the description of a feature is as important as its extraction.
    Edges: ● sharp changes in brightness ● most algorithms use the first derivative of the intensity ● different methods (one or two thresholds etc.).
    Corners: ● a point with two dominant and different edge directions in its neighbourhood ● Moravec detector: the similarity between the patch around a pixel and overlapping patches in the neighbourhood is measured ● low similarity in all directions indicates a corner.
    Ridges: ● curves whose points are local maxima or minima in at least one dimension ● quality depends highly on the scale ● used to detect roads in aerial images or veins in 3D magnetic resonance images.
    Blobs: ● points or regions brighter or darker than their surroundings ● SIFT uses a method for blob detection.
    SIFT: ● SIFT = Scale-invariant feature transform ● first published in 1999 by David Lowe ● a method to extract and describe distinctive image features.
  • EECS 442 Computer Vision: Homework 2
    EECS 442 Computer Vision: Homework 2.
    Instructions: • This homework is due at 11:59:59 p.m. on Friday October 18th, 2019. • The submission includes two parts: 1. To Gradescope: a pdf file as your write-up, including your answers to all the questions and key choices you made. You might like to combine several files to make a submission. Here is an example online link for combining multiple PDF files: https://combinepdf.com/. 2. To Canvas: a zip file including all your code. The zip file should follow the HW formatting instructions on the class website. • The write-up must be an electronic version. No handwriting, including plotting questions. LaTeX is recommended but not mandatory.
    Python Environment: We are using Python 3.7 for this course. You can find references for the Python standard library here: https://docs.python.org/3.7/library/index.html. To make your life easier, we recommend you install Anaconda 5.2 for Python 3.7.x (https://www.anaconda.com/download/). This is a Python package manager that includes most of the modules you need for this course. We will make use of the following packages extensively in this course: • Numpy (https://docs.scipy.org/doc/numpy-dev/user/quickstart.html) • OpenCV (https://opencv.org/) • SciPy (https://scipy.org/) • Matplotlib (http://matplotlib.org/users/pyplot_tutorial.html)
    1 Image Filtering [50 pts]: In this first section, you will explore different ways to filter images. Through these tasks you will build up a toolkit of image filtering techniques. By the end of this problem, you should understand how the development of image filtering techniques has led to convolution.
  • Image Analysis Feature Extraction: Corners and Blobs
    Image Analysis: Feature extraction, corners and blobs. Christophoros Nikou ([email protected]), University of Ioannina - Department of Computer Science. Images taken from: the Computer Vision course by Svetlana Lazebnik, University of North Carolina at Chapel Hill (http://www.cs.unc.edu/~lazebnik/spring10/); D. Forsyth and J. Ponce, Computer Vision: A Modern Approach, Prentice Hall, 2003; M. Nixon and A. Aguado, Feature Extraction and Image Processing, Academic Press, 2010.
    Motivation, panorama stitching: Step 1: feature extraction. Step 2: feature matching. Step 3: image alignment.
    Characteristics of good features: • Repeatability: the same feature can be found in several images despite geometric and photometric transformations. • Saliency: each feature has a distinctive description. • Compactness and efficiency: many fewer features than image pixels. • Locality: a feature occupies a relatively small area of the image; robust to clutter and occlusion.
    Applications: feature points are used for motion tracking, image alignment, 3D reconstruction, object recognition, image indexing and retrieval, and robot navigation.
    Image curvature: extends the notion of edges; the rate of change in edge direction. Points where the edge direction changes rapidly are characterized as corners. We need some elements from differential geometry. Parametric form of a planar curve: C(t) = [x(t), y(t)] describes the points in a continuous curve as the endpoints of a position vector.
  • Edges and Binary Image Analysis
    Edges and Binary Image Analysis. Thursday, Jan 26, 2017. Kristen Grauman, UT Austin.
    Today: • Edge detection and matching: process the image gradient to find curves/contours; comparing contours. • Binary image analysis: blobs and regions.
    Gradients -> edges. Primary edge detection steps: 1. Smoothing: suppress noise. 2. Edge enhancement: filter for contrast. 3. Edge localization: determine which local maxima from the filter output are actually edges vs. noise (threshold, thin).
    Thresholding: • Choose a threshold value t. • Set any pixels less than t to zero (off). • Set any pixels greater than or equal to t to one (on). (Slides: original image; gradient magnitude image; thresholding the gradient with a lower and with a higher threshold.)
    Canny edge detector: • Filter the image with a derivative of Gaussian. • Find the magnitude and orientation of the gradient. • Non-maximum suppression: thin wide "ridges" down to single pixel width. • Linking and thresholding (hysteresis): define two thresholds, low and high; use the high threshold to start edge curves and the low threshold to continue them. MATLAB: edge(image, 'canny'); >> help edge. (Source: D. Lowe, L. Fei-Fei)
    Non-maximum suppression: check if a pixel is a local maximum along the gradient direction and select the single maximum across the width of the edge; this requires checking the interpolated pixels p and r. Problem: pixels along an edge may not survive the thresholding after thinning (non-maximum suppression). (Credit: James Hays)
    Hysteresis thresholding: use a high threshold to start edge curves, and a low threshold to continue them.
  • Lecture 10 Detectors and Descriptors
    Lecture 10: Detectors and Descriptors. Silvio Savarese, 16-Feb-15.
    This lecture is about detectors and descriptors, which are the basic building blocks for many tasks in 3D vision and recognition. We'll discuss some of the properties of detectors and descriptors and walk through examples. Topics: • Properties of detectors • Edge detectors • Harris • DoG • Properties of descriptors • SIFT • HOG • Shape context.
    From 3D to 2D and vice versa: previous lectures have been dedicated to characterizing the mapping between 2D images and the 3D world (P = [x,y,z] in the 3D world; p = [x,y] in the 2D image). Now we're going to put more focus on inferring the visual content in images.
    How to represent images? There are many basic ways to do this. We can just characterize them as a collection of pixels with their intensity values. Another, more practical, option is to describe them as a collection of components or features which correspond to "interesting" regions in the image such as corners, edges, and blobs. Each of these regions is characterized by a descriptor which captures the local distribution of certain photometric properties such as intensities, gradients, etc. Feature extraction and description is the first step in many recent vision algorithms. They can be considered building blocks in many scenarios where it is critical to: 1) fit or estimate a model that describes an image or a portion of it; 2) match or index images; 3) detect objects or actions from images.
  • Interest Point Detectors and Descriptors for IR Images
    Degree project in Computer Science, second level, Stockholm, Sweden 2015. Interest Point Detectors and Descriptors for IR Images: An Evaluation of Common Detectors and Descriptors on IR Images. Johan Johansson. Master's Thesis at KTH Royal Institute of Technology, School of Computer Science and Communication (CSC). Supervisors: Martin Solli (FLIR Systems), Atsuto Maki. Examiner: Stefan Carlsson.
    Abstract: Interest point detectors and descriptors are the basis of many applications within computer vision. In the selection of which methods to use in an application, it is of great interest to know their performance against possible changes to the appearance of the content in an image. Many studies have been completed in the field on visual images while the performance on infrared images is not as charted. This degree project, conducted at FLIR Systems, provides a performance evaluation of detectors and descriptors on infrared images. Three evaluation steps are performed. The first evaluates the performance of detectors; the second, descriptors; and the third, combinations of detectors and descriptors. We find that the best performance is obtained by Hessian-Affine with LIOP, and that the binary combination of the ORB detector and the BRISK descriptor is a good alternative with comparable results but with computational efficiency increased by two orders of magnitude.
    Referat (Swedish abstract, translated): Detectors and descriptors for interest points in IR images. Detectors and descriptors are cornerstones of many applications within computer vision. When selecting a method for a specific application, it is of great interest to know how the methods perform against possible changes in how the content of an image appears.
  • A Survey of Feature Extraction Techniques in Content-Based Illicit Image Detection
    A Survey of Feature Extraction Techniques in Content-Based Illicit Image Detection. S. Hadi Yaghoubyan (1,2), Mohd Aizaini Maarof (1), Anazida Zainal (1), Mahdi Maktabdar Oghaz (1). (1) Faculty of Computing, Universiti Teknologi Malaysia (UTM), Malaysia; (2) Department of Computer Engineering, Islamic Azad University, Yasooj Branch, Yasooj, Iran. E-mail: [email protected], [email protected], [email protected]. Journal of Theoretical and Applied Information Technology, 10th May 2016, Vol. 87, No. 1. ISSN: 1992-8645, E-ISSN: 1817-3195, www.jatit.org.
    Abstract: For many of today's youngsters and children, the Internet, mobile phones and digital devices in general are an integral part of life, and they can barely imagine life without social networking systems. Despite the many advantages of the Internet, it is hard to neglect its side effects on people's lives. Exposure to illicit images is very common among adolescents and children, with a variety of significant and often upsetting effects on their growth and thoughts. Thus, detecting and filtering illicit images is a hot and fast-evolving topic in computer vision. In this research we summarize the existing visual feature extraction techniques used for illicit image detection. Feature extraction can be separated into two sub-techniques: feature detection and description. This research presents the state-of-the-art techniques in each group. The evaluation measurements and metrics used in other research are summarized at the end of the paper.
  • Image Feature Detection and Matching in Underwater Conditions
    Image feature detection and matching in underwater conditions. Kenton Oliver (a), Weilin Hou (b), and Song Wang (a). (a) University of South Carolina, 201 Main Street, Columbia, South Carolina, USA; (b) Naval Research Lab, Code 7333, 1009 Balch Blvd., Stennis Space Center, Mississippi, USA.
    Abstract: The main challenge in underwater imaging and image analysis is to overcome the effects of blurring due to the strong scattering of light by the water and its constituents. This blurring adds complexity to already challenging problems like object detection and localization. The current state-of-the-art approaches for object detection and localization normally involve two components: (a) a feature detector that extracts a set of feature points from an image, and (b) a feature matching algorithm that tries to match the feature points detected in a target image to a set of template features corresponding to the object of interest. A successful feature matching indicates that the target image also contains the object of interest. For underwater images, the target image is taken in underwater conditions while the template features are usually extracted from one or more training images that are taken out-of-water or in different underwater conditions. In addition, the objects in the target image and the training images may show different poses, including rotation, scaling, translation transformations, and perspective changes. In this paper we investigate the effects of various underwater point spread functions on the detection of image features using many different feature detectors, and how these functions affect the capability of these features when they are used for matching and object detection. This research provides insight to further develop robust feature detectors and matching algorithms that are suitable for detecting and localizing objects in underwater images.
  • CMPSCI 370 Intro to Computer Vision: Scale Invariant Feature Transform (SIFT)
    Administrivia: • Exam review session in next class. • Midterm in class (Thursday): all topics covered till the Feb 25 lecture (corner detection); closed book. • Grading issues: include all the information needed to grade the homework; keep the grader happy :-). • Candy wrapper extra credit for participation (5%).
    CMPSCI 370: Intro to Computer Vision. Image processing: Scale Invariant Feature Transform (SIFT). University of Massachusetts, Amherst, March 03, 2015. Instructor: Subhransu Maji.
    Scale invariant features: "blob detection". Why extract features? Motivation: panorama stitching. We have two images; how do we combine them? Step 1: extract features. Step 2: match features. Step 3: align images. (Slide credit: L. Lazebnik)
    Feature detection with scale selection: we want to extract features with a characteristic scale that matches the image transformation, such as scaling and translation (a.k.a. covariance). Corner detection is sensitive to the image scale: a corner at one scale may have all its points classified as edges at another, so regions must be matched across scales. (Source: L. Lazebnik)
    Blob detection, basic idea: convolve the image with a "blob filter" at multiple scales and look for extrema (maxima or minima) of the filter response in the resulting scale space. This gives a scale- and space-covariant detector. (Source: L. Lazebnik)
  • Edge Detection and Ridge Detection with Automatic Scale Selection
    Edge detection and ridge detection with automatic scale selection. Tony Lindeberg, Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, KTH (Royal Institute of Technology), S-100 44 Stockholm, Sweden. http://www.nada.kth.se/~tony, Email: [email protected]. Technical report ISRN KTH/NA/P–96/06–SE, May 1996, revised August 1998. Int. J. of Computer Vision, vol. 30, number 2, 1998 (in press). Shortened version in Proc. CVPR'96, San Francisco, June 1996.
    Abstract: When computing descriptors of image data, the type of information that can be extracted may be strongly dependent on the scales at which the image operators are applied. This article presents a systematic methodology for addressing this problem. A mechanism is presented for automatic selection of scale levels when detecting one-dimensional image features, such as edges and ridges. A novel concept of a scale-space edge is introduced, defined as a connected set of points in scale-space at which: (i) the gradient magnitude assumes a local maximum in the gradient direction, and (ii) a normalized measure of the strength of the edge response is locally maximal over scales. An important consequence of this definition is that it allows the scale levels to vary along the edge. Two specific measures of edge strength are analysed in detail: the gradient magnitude, and a differential expression derived from the third-order derivative in the gradient direction. For a certain way of normalizing these differential descriptors, by expressing them in terms of so-called γ-normalized derivatives, an immediate consequence of this definition is that the edge detector will adapt its scale levels to the local image structure.