Fuzzy Edge Filter-Edge Detection and Feature Extraction Technique for Any JPEG Image


Fuzzy Edge Filter-Edge Detection and Feature Extraction Technique for Any JPEG Image
International Journal of Computer Science & Communication, Vol. 1, No. 2, July-December 2010, pp. 387-390

Neha Agrawal1, Rishabh Agrawal2 & Sushil Kumar1
1Technocrats Institute of Technology (T.I.T.), Bhopal, India
2National Productivity Council (NPC), New Delhi, India
Email: [email protected], [email protected], [email protected]

ABSTRACT

Edge detection is a term used in image processing and computer vision, particularly in the areas of feature detection and feature extraction, to refer to algorithms which aim at identifying points in an image at which the gray level changes sharply. This paper highlights ways to detect edges at five different levels, which helps in pattern recognition and feature extraction, using fuzzy logic, a method well known in artificial intelligence. The fuzzy edge filter technique is an operator introduced in order to simulate, at a mathematical level, the compensatory behaviour in the process of decision making or subjective evaluation. The paper introduces such operators by means of a computer vision application, supported with MATLAB 7.5. Our fuzzy edge filter detects different classes of image pixels corresponding to gray-level variation in the various directions. The goal is to determine at which points in the image the first derivative of the gray level, as a function of position, is of high magnitude. By applying a variable threshold value to obtain the new output image, edges are formed in all arbitrary directions.

Keywords: Edge Detection, Fuzzy Logic, Fuzzy Edge Filter, Feature Extraction, Threshold

1. INTRODUCTION

Edges play quite an important role in many applications of image processing, in particular for machine vision systems that analyze scenes of man-made objects under controlled illumination conditions. During recent years, however, substantial research has also been made on computer vision methods that do not explicitly rely on edge detection as a pre-processing step. Most of the traditional edge-detection algorithms in image processing convolve a filter operator with the input image and then map overlapping input image regions to output signals, which leads to considerable loss in edge detection; there is no such loss in the fuzzy-based method described here.

The method described here uses a fuzzy logic based model [1] with the help of which high performance is achieved along with simplicity of the resulting model. Fuzzy logic helps to deal with problems involving imprecise and vague information and thus helps to create a model for image edge detection, as presented here, demonstrating the accuracy of fuzzy methods in image processing. The edge-detection output of an image consists of white lines, the edges, drawn on a black background. A grayscale image generally lies in the range 0-255, where 0 represents black, moving toward white at 255. The result images contain the contours and the black and white areas. On the fuzzy-construction side, the input gray level ranges over the 0-255 intensities and, according to the desired rules, is converted to the values of the membership functions. The output is mapped back to values from 0-255, and the black, white and edge pixels are then detected. From the experience with the images tested in this study, it is found that the best result is achieved when gray values from zero to 126 are treated as black and values from 126 to 255 are traced as white.

2. METHODOLOGY

For edge detection, a set of nine pixels, part of a 3x3 (or 5x5) window of the image, is subjected to a set of fuzzy conditions which help to highlight all the edges associated with the image. The fuzzy conditions test the relative values of the pixels that indicate the presence of an edge in a grayscale image [3]. The 3x3 neighborhood of pixels about the center pixel p5 is considered, together with the four directions in which edges may appear. The bi-directional summed magnitude differences in gray level between p5 and the other pixels are computed according to the fuzzy edge filter algorithm.

The proposed approach begins by segmenting the images into regions using floating 3x3 binary matrices as shown below. A direct fuzzy inference system maps a range of values, distinct from each other in the floating matrix, to detect the edges at all the different levels. The algorithm for edge detection of a JPEG image using fuzzy logic is as follows; a sketch of the core filtering step (Step 3) is given after the algorithm.

Algorithm:
Step 1: Select an input image.
Step 2: Convert the input image into a grayscale image.
Step 3: Apply a 3x3 pixel mask window to the image, i.e. a set of fuzzy logic conditions at a threshold value.
Step 4: Get the different features and patterns of edge detection on all five different levels at the specified threshold.
Step 5: Enter any one choice from 1-5 and also enter the threshold value.
Step 6: Exit (level 6).
Step 7: Apply the same procedure on any image to get its edges detected.
Step 8: End.

Fig. 1: Gray Scale Image
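The paper does not reproduce its MATLAB source, so the following is a minimal MATLAB sketch of the core filtering step as we read it: for each interior pixel p5 of the 3x3 window, the bi-directional summed magnitude differences are computed along the horizontal, vertical, 45-degree and -45-degree directions, and the pixel is marked as an edge when the strongest directional difference exceeds the specified threshold. The function name fuzzyEdgeFilter and the example threshold are illustrative choices, not details taken from the paper.

% Minimal sketch of the fuzzy edge filter step (assumed implementation,
% not the authors' original code). I = grayscale image, T = specified threshold.
function E = fuzzyEdgeFilter(I, T)
    G = double(I);
    [rows, cols] = size(G);
    E = zeros(rows, cols, 'uint8');            % output edge map: white edges on black
    for r = 2:rows-1
        for c = 2:cols-1
            p5 = G(r, c);                      % center pixel of the 3x3 window
            % Bi-directional summed magnitude differences around p5
            dh   = abs(p5 - G(r, c-1)) + abs(p5 - G(r, c+1));     % horizontal
            dv   = abs(p5 - G(r-1, c)) + abs(p5 - G(r+1, c));     % vertical
            d45  = abs(p5 - G(r-1, c+1)) + abs(p5 - G(r+1, c-1)); % +45 degree
            dm45 = abs(p5 - G(r-1, c-1)) + abs(p5 - G(r+1, c+1)); % -45 degree
            % Rule: the pixel belongs to the "edge" class when the strongest
            % directional difference exceeds the specified threshold T.
            if max([dh dv d45 dm45]) > T
                E(r, c) = 255;                 % edge -> white
            end
        end
    end
end

Calling E = fuzzyEdgeFilter(grayImage, 30) on the grayscale test image would correspond to the "specified threshold" mode; restricting the rule to dh and dv, or to d45 or dm45 alone, gives the directional variants listed in Section 3.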
3. EXPERIMENTAL ANALYSIS

This paper gives the choice of obtaining different edges at different threshold values, with five different types of fuzzy edge filters applied to a gray scale image (Fig. 1); the filters consider the gradient as well as the second-derivative option. The fuzzy edge filters applied to the gray scale image are as follows:
1. Fuzzy Edge Filter at Automatic Threshold (Fig. 2).
2. Fuzzy Edge Filter at Specified Threshold (Fig. 3).
3. Fuzzy Edge Filter (Horizontal and Vertical) at Specified Threshold (Fig. 4).
4. Fuzzy Edge Filter (45 degree) at Specified Threshold (Fig. 5).
5. Fuzzy Edge Filter (-45 degree) at Specified Threshold (Fig. 6).
6. Exit.

Fig. 2: Fuzzy Edge Filter at Automatic Threshold
Fig. 3: Fuzzy Edge Filter at Specified Threshold
Fig. 4: Fuzzy Edge Filter (Horizontal and Vertical) at Specified Threshold
Fig. 5: Fuzzy Edge Filter (45 degree) at Specified Threshold
Fig. 6: Fuzzy Edge Filter (-45 degree) at Specified Threshold

4. COMPARISON OF FUZZY EDGE FILTER WITH OTHER EDGE DETECTION ALGORITHMS

Different gradient operators can be applied to estimate image gradients from the input image. Some well-known edge detection algorithms are the Sobel and Roberts cross edge detectors, both based on edge filters. For result analysis, the fuzzy edge filter at all its levels, used for edge detection and feature extraction, is compared with the Sobel and Roberts edge detection algorithms [2]; these are gradient methods. The Sobel operator consists of a pair of 3x3 convolution kernels designed to respond to edges running horizontally and vertically relative to the pixel grid; its output, compared with the fuzzy edge filter at minimum threshold, is shown in Fig. 7. The Roberts Cross operator (Fig. 8) is a simple, quick-to-compute 2D spatial gradient measurement of an image; it consists of 2x2 convolution kernels that respond to edges at 45 and -45 degrees. A sketch of this gradient-based comparison is given below.

Fig. 7: Sobel Edge Detection
Fig. 8: Roberts Cross Edge Detection
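As a point of reference for the comparison above, the classical gradient operators can be reproduced in a few lines of MATLAB. The kernels below are the standard Sobel and Roberts Cross masks; the input file name, the use of conv2, and the threshold value are illustrative assumptions rather than details given in the paper.

% Standard Sobel (3x3) and Roberts Cross (2x2) kernels used in the comparison.
G = double(rgb2gray(imread('input.jpg')));   % assumed colour JPEG test image
T = 100;                                     % assumed threshold for the binary edge map

Sx = [-1 0 1; -2 0 2; -1 0 1];               % Sobel kernel, responds to vertical edges
Sy = [-1 -2 -1; 0 0 0; 1 2 1];               % Sobel kernel, responds to horizontal edges
sobelMag   = sqrt(conv2(G, Sx, 'same').^2 + conv2(G, Sy, 'same').^2);
sobelEdges = uint8(255 * (sobelMag > T));    % white edges on black, as in Fig. 7

R1 = [1 0; 0 -1];                            % Roberts Cross kernel, +45 degree diagonal
R2 = [0 1; -1 0];                            % Roberts Cross kernel, -45 degree diagonal
robertsMag   = sqrt(conv2(G, R1, 'same').^2 + conv2(G, R2, 'same').^2);
robertsEdges = uint8(255 * (robertsMag > T));% white edges on black, as in Fig. 8

These binary edge maps are the kind of outputs the paper compares against the fuzzy edge filter in Figures 7 and 8.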
5. RESULT ANALYSIS

Edge detection is a very important step in image processing, as it is used in a variety of machine vision and medical applications. The usual edge detection techniques, gradient methods (Sobel, Prewitt, ...), second-derivative methods (LoG) and the Canny operator, are either noise sensitive or computationally expensive and do not always give satisfactory results, whereas the fuzzy edge filter at a specified threshold is a better technique, since it can act as an all-in-one method whose results can be tuned according to the need. The fuzzy edge filter algorithm for image edge detection was tested on various images and the outputs were compared with the existing edge detection algorithms. It was observed that the outputs of this algorithm provide much more distinctly marked edges at 45 and -45 degrees, which are useful for feature extraction, and thus have a better visual appearance than those of the methods currently in use. The sample output in Fig. 2(a-c) compares the Sobel and Roberts edge detection algorithms with the fuzzy edge filter; the fuzzy edge filter pixel-value algorithm performs much better. It can be observed that the output generated by the fuzzy method finds the edges of the image more distinctly than those found by the Sobel and Roberts edge detectors. Thus the fuzzy edge filter algorithm provides better edge detection and, with its exhaustive set of fuzzy conditions at the specified threshold, helps to extract edges with very high feature-extraction quality.

6. CONCLUSION

The fuzzy edge filter is a powerful tool for image processing, bringing expert knowledge into edge detection, and this combination gives it an advantage over the other algorithms, with tremendous scope of application in various areas of image processing. Image edge detection using the fuzzy edge filter algorithm provides an alternative method that has been successful in obtaining the edges present in an image, as shown by implementing and executing the algorithm on various sets of images. Features of an image are extracted particularly well by the fuzzy edge filter at 45 and -45 degrees at the specified threshold.

REFERENCES
[1] Abdallah A. Alshennawy and Ayman A. Aly, "Edge Detection in Digital Images Using Fuzzy Logic Techniques", IEEE Trans.
[2] Shashank Mathur and Anil Ahlawat, "Application of Fuzzy Logic in Edge Detection".
[3] Lily Rui Liang and Carl G. Looney, "Competitive Fuzzy Edge Detection", available at Computer Science Web.
[4] H.G. Barrow and J.M. Tenenbaum (1981), "Interpreting Line Drawings as Three-dimensional Surfaces", Artificial Intelligence, 17, Issues 1-3, pp. 75-116.
[5] T. Lindeberg (2001), "Edge Detection", in M.