An Analysis of the Scale Saliency Algorithm


Timor Kadir, Djamal Boukerroui, Michael Brady
Robotics Research Laboratory, Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, U.K.

August 8, 2003

Abstract

In this paper, we present an analysis of the theoretical underpinnings of the Scale Saliency algorithm recently introduced in (Kadir and Brady, 2001). Scale Saliency considers image regions salient if they are simultaneously unpredictable in some feature-space and over scale. The algorithm possesses a number of attractive properties: invariance to planar rotation, scaling, intensity shifts and translation; robustness to noise, changes in viewpoint, and intensity scalings. Moreover, the approach offers a more general model of feature saliency compared with conventional ones, such as those based on kernel convolution, for example wavelet analysis. Typically, such techniques define saliency and scale with respect to a particular set of basis morphologies. The aim of this paper is to make explicit the nature of this generality. Specifically, we aim to answer the following questions: What exactly constitutes a 'salient feature'? How does this differ from other feature selection methods?

The main result of our analysis is that Scale Saliency defines saliency and scale independently of any particular feature morphology. Instead, a more general notion is used, namely spatial unpredictability. This is determined within the geometric constraint imposed by the sampling window and its parameterisation. In (Kadir and Brady, 2001) this window was a circle parameterised by a single parameter controlling the radius. Under such a scheme, features are considered salient if their feature-space properties vary rapidly with an incremental change in radius. In other words, the method favours isotropically unpredictable features. We also present a number of variations of the original algorithm: a modification for colour images (or more generally any vector-valued image) and a generalisation of the isotropic scale constraint to the anisotropic case.

Keywords: Visual Saliency, Scale Selection, Salient Features, Entropy, Scale-space.

1 Introduction

Computer vision algorithms are, in general, information reduction processes. Brute-force approaches to image or image sequence analysis can quickly overwhelm most computing resources at our disposal. Fortunately, images are a redundant data source. The same set of inferences may be drawn from a variety of image characteristics. This becomes self-evident considering the array of different methodologies available for solving any particular vision task. Hence, the selection of a sufficient set of image regions and properties, or salient features, forms the first step in many computer vision algorithms. Two key issues face the vision algorithm designer: the subset of image properties selected for subsequent analysis and the model used to represent those properties. For example, many image matching algorithms begin with a set of 'landmark' points which serve as a basis for estimating the image transformation that defines the match. In this case, well-localised and unique image regions are desirable to minimise the likelihood of false matches. For many tasks, geometric and photometric invariance properties are also beneficial.
Finally, there is often an implicit, but difficult to quantify, requirement that the salient regions be relevant to the task of interest; in other words, the regions or descriptions subsequently extracted from them are somehow characteristic of the scene contents they are intended to signify.

Many definitions for saliency have been proposed. Perhaps the most popular have arisen out of the application of local surface differential geometry techniques to imaging. Such methods consider the image to be a discrete approximation to a surface and categorise it by application of differential operators. Closely related to these are basis projection and filtering methods. Common to both is the development of one or two dimensional features; one dimensional features include edges, lines and ridges (Bergholm, 1986, Canny, 1986); two dimensional features are often referred to as interest points or 'corners' (Deriche and Giraudon, 1993, Harris and Stephens, 1988, Mokhtarian and Suomela, 1998). Much effort within the Scale-Space and Wavelet communities has been devoted to providing a mathematically sound basis for the application of such techniques to what are essentially discrete sets (Koenderink, 1984, Lindeberg and ter Haar Romeny, 1994, Mallat, 1998, Witkin, 1983). In general, these methods share one assumption: that saliency is a direct property of the geometry or morphology of the image surface. While it is certainly the case that there are many useful image features that can be defined in such a manner, efforts to generalise such methods to capture a broader range of salient image regions have had limited success. We contend that one of the major factors for this is that such methods typically define both saliency and scale with respect to a small set of basis functions or geometric properties. Perhaps then, it is for this reason, and the lack of a satisfactory definition of what constitutes a salient feature in the broader sense, that the term 'feature selection' has acquired this restricted interpretation.

There are a number of exceptions to this. Phase Congruency and the related Local Energy approach (Kovesi, 1999) define features in terms of the phase coherence of Fourier components. For example, at a step-edge all Fourier components are maximally in phase, at an angle of 0° or 180° for positive or negative transitions respectively. One of the benefits of such an approach is that several feature types may be detected simultaneously. Yet despite the novelty of the model, Kovesi was primarily interested in the simple one or two dimensional features typical of geometric methods. There was no effort to broaden the definition of saliency.

An alternative strategy is to define saliency in terms of the probabilistic or statistical properties of the image. This approach has been most popular for region segmentation tasks (Besag, 1986, Leclerc, 1989, Li, 1995, Paragios and Deriche, 2002, Zhu and Yuille, 1996). There have also been several attempts at feature detection using statistical measures; it is well known, for example, that local variance can be employed as a basic edge detector. Other methods have attempted to estimate saliency by measuring the rarity of feature properties.

In (Kadir and Brady, 2001), we proposed a novel model of feature saliency. In our approach, termed Scale Saliency, regions are deemed salient if they exhibit unpredictable behaviour (in a probabilistic sense) simultaneously in feature-space and over scale.
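Anticipating the notation of Section 2, this two-part definition can be sketched as a product of an entropy term and an inter-scale term. The form below is our paraphrase of (Kadir and Brady, 2001), not a verbatim reproduction: p_{d,s,x} denotes the local PDF of descriptor values within the circular window of radius s centred at x, and the exact normalisation of W_D is given in the original paper.

Y_D(s, x) = H_D(s, x) \, W_D(s, x),
where
H_D(s, x) = - \sum_{d \in D} p_{d,s,x} \log_2 p_{d,s,x}
and
W_D(s, x) \propto s \sum_{d \in D} \left| \partial p_{d,s,x} / \partial s \right|.

Scales at which H_D attains a local maximum are selected, and the entropy at each such scale is weighted by W_D to give the final saliency.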
Scale Saliency possesses a number of attractive properties. First, it offers a more general model of feature saliency compared to conventional methods. Second, it incorporates an intrinsic notion of scale and a method for its selection locally. Third, it makes explicit the link between the definition of saliency and the method of description. In short, it offers a coherent methodology incorporating three intimately related concepts: scale, saliency and image description. The implementation presented in (Kadir and Brady, 2001) possesses a number of other beneficial qualities: invariance to planar rotation, scaling, intensity shifts and translation; and robustness to noise, changes in viewpoint, and intensity scalings.

In this paper, we present an in-depth analysis of the theoretical underpinnings of the Scale Saliency model. The aim here is to make explicit the definition of saliency in this model. Specifically, we aim to answer the following questions: What exactly constitutes a salient feature? How is this different from other feature selection methods?

This paper is organised as follows. In Section 2 we provide a brief overview of the Scale Saliency algorithm. The Scale Saliency measure is a product of two terms measuring the unpredictability of the local PDF in feature-space and over scale respectively. Detailed analyses of these two terms are presented in Sections 3 and 4, where we derive expressions for the conditions under which Scale Saliency is maximised and discuss the underlying model. In Section 5, we present generalisations of the method to colour images and anisotropic scale. In Section 6, we discuss the relationship between the Scale Saliency algorithm and transform-based methods for feature detection. Finally, in Section 7 we conclude our analysis and outline a number of remaining open issues.

2 Scale Saliency

In this section, we briefly describe the Scale Saliency algorithm. A more detailed discussion of the technique may be found in (Kadir and Brady, 2001).

2.1 Saliency as local unpredictability

Gilles (1998) investigated the use of salient local image patches or 'icons' for matching and registering two images. He defined saliency in terms of local signal complexity or unpredictability. More specifically, he estimated saliency using the Shannon entropy of local attributes. Figure 1 shows local intensity histograms from a number of image segments. Areas corresponding to high signal complexity tend to have flatter distributions, hence higher entropy. More generally, high complexity of any suitable descriptor can be used as a measure of local saliency. Local attributes, such as colour or edge strength, direction or phase, may be used. Given a point x, a local neighbourhood R_X, and a descriptor d that takes values from D = {d_1, ..., d_r} (e.g. in an 8 bit grey level image D would range from 0 to 255), local entropy (in the discrete form) is defined as:

H_{D,R_X} = - \sum_{d \in D} p_{d,R_X} \log_2 p_{d,R_X}    (1)

where p_{d,R_X} is the probability of the descriptor taking the value d in the local region R_X.

Gilles' method has a number of limitations. It requires the specification of a window size, or scale, over which an estimate of the local PDF may be obtained. Underlying this definition of saliency is the assumption that complexity is rare in real images. This is generally true, except in the case of pure noise or self-similar images (e.g. fractals), where complexity is independent of scale and position, and textured regions, where, in general, complexity is more prevalent.
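As a concrete illustration of Eq. (1) and of the two-term score previewed in Section 1, the following is a minimal sketch, not the authors' implementation. The function names, the 16-bin quantisation, the scale range, and the s^2/(2s-1) inter-scale normalisation are our own illustrative choices under the assumptions stated in the comments.

```python
# Minimal sketch of Gilles-style local entropy (Eq. 1) and a Scale
# Saliency-like score Y = H * W. Illustrative assumptions (ours): 8-bit
# grey-level input quantised to 16 bins, scales given as circle radii in
# pixels, and (x, y) at least max(scales) pixels from the image border.
import numpy as np

def circular_mask(radius):
    """Boolean mask selecting the circular sampling window R_X."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def local_pdf(image, x, y, radius, bins=16):
    """Histogram estimate of the local PDF p_{d,R_X} inside the window."""
    patch = image[y - radius:y + radius + 1, x - radius:x + radius + 1]
    counts, _ = np.histogram(patch[circular_mask(radius)],
                             bins=bins, range=(0, 256))
    return counts / counts.sum()

def entropy(p):
    """Shannon entropy of a discrete PDF, Eq. (1); 0 log 0 is taken as 0."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def scale_saliency(image, x, y, scales=range(3, 20)):
    """H_D(s), an inter-scale term W_D(s) and their product Y_D(s) at (x, y).
    W_D uses the L1 change of the PDF between neighbouring scales with an
    s^2/(2s-1) normalisation -- our paraphrase of the isotropic weighting,
    not a verbatim reproduction of the paper's equation."""
    scales = np.array(list(scales))
    pdfs = [local_pdf(image, x, y, int(s)) for s in scales]
    H = np.array([entropy(p) for p in pdfs])
    W = np.zeros_like(H)
    for i in range(1, len(scales)):
        s = float(scales[i])
        W[i] = (s * s / (2.0 * s - 1.0)) * np.abs(pdfs[i] - pdfs[i - 1]).sum()
    return scales, H, W, H * W
```

On a uint8 image img, for example, scales, H, W, Y = scale_saliency(img, 120, 80) yields the entropy profile at pixel (120, 80); in the spirit of the algorithm, one would keep the scales at which H attains a local maximum and rank candidate points by Y at those scales.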