A New Approach for Feature Extraction of an Image
3rd International Conference on Intelligent Computational Systems (ICICS'2013) January 26-27, 2013 Hong Kong (China)

Mamta Juneja and Parvinder S. Sandhu

Abstract—This paper presents a new approach for feature (line, edge, shape) extraction from a given image. The proposed technique is based on the Canny edge detection filter for the detection of lines and edges, and on an enhanced Hough transform for shape (circle, rhombus, rectangle, triangle) detection. In the proposed approach, an input image is first converted into grayscale. For detecting lines in images, we use a histogram for the intensity-related information and a threshold. Then, an edge recognition procedure is implemented using the Canny edge detector to determine the edges and edge directions. In the end, an enhanced Hough transform algorithm is implemented to provide detection of shapes such as triangles, rectangles, rhombuses and circles, in addition to line linking as per the requirements of this research.

Keywords—Edge detection, Image segmentation, Canny edge detector, Hough transform, Feature extraction.

I. INTRODUCTION

1.1 Segmentation
In computer vision, segmentation refers to the process of partitioning a digital image into multiple regions (sets of pixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images [1].
For intensity images (i.e. those represented by point-wise intensity levels), four popular approaches are [2, 3]: threshold techniques, edge-based methods, region-based techniques, and connectivity-preserving relaxation methods.
Threshold techniques, which make decisions based on local pixel information, are effective when the intensity levels of the objects fall squarely outside the range of levels in the background. Because spatial information is ignored, however, blurred region boundaries can create havoc.
Edge-based methods center around contour detection: their weakness in connecting broken contour lines makes them, too, prone to failure in the presence of blurring.
A region-based method usually proceeds as follows: the image is partitioned into connected regions by grouping neighboring pixels of similar intensity levels. Adjacent regions are then merged under some criterion involving, perhaps, homogeneity or sharpness of region boundaries.
A connectivity-preserving relaxation-based segmentation method, usually referred to as the active contour model, was proposed recently. The main idea is to start with some initial boundary shape, represented in the form of spline curves, and to iteratively modify it by applying various shrink/expansion operations according to some energy function.

1.2 Histogram [2]
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function h(r_k) = n_k, where r_k is the k-th gray level and n_k is the number of pixels in the image having gray level r_k. It is common practice to normalize a histogram by dividing each of its values by the total number of pixels in the image, denoted by n. Thus, a normalized histogram is given by p(r_k) = n_k / n, for k = 0, 1, ..., L-1. Loosely speaking, p(r_k) gives an estimate of the probability of occurrence of gray level r_k. Note that the sum of all components of a normalized histogram is equal to 1.
We can consider the histograms of our images. For the noise-free image, the histogram is simply two spikes, at i = 100 and i = 150. For the low-noise image, there are two clear peaks centered on i = 100 and i = 150. For the high-noise image, there is a single peak: the two grey-level populations corresponding to object and background have merged. We can define the input image signal-to-noise ratio in terms of the mean grey-level values of the object pixels and background pixels and the additive noise standard deviation:

S/N = |μ_b − μ_o| / σ_N                                   (2.1)

Fig 1.1 Signal-to-noise ratio of a signal

1.3 Thresholding [3]
Thresholding is the simplest method of image segmentation. During the thresholding process, individual pixels in an image are marked as "object" pixels if their value is greater than some threshold value T (assuming the object to be brighter than the background) and as "background" pixels otherwise: if the grey level of pixel p is greater than T, then p is an object pixel; otherwise p is a background pixel.

Fig 1.2 Thresholding of a signal

Mamta Juneja is Assistant Professor in University Institute of Engineering and Technology, Panjab University, Chandigarh; e-mail: [email protected]. Prof. Dr. Parvinder S. Sandhu is with Rayat and Bahara Institute of Engineering and Bio-Technology, Sahauran, Punjab.

1.4 Edge Detection [4-6]
Edges define the boundaries between regions in an image, which helps with segmentation and object recognition. Edge detection is a fundamental part of low-level image processing, and good edges are necessary for higher-level processing.

1.4.1 Edge Detection Techniques
There are many ways to perform edge detection.
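The histogram normalization of Section 1.2 (p(r_k) = n_k / n) can be sketched in a few lines of pure Python; the 8-pixel image and the choice of L = 4 gray levels below are illustrative, not data from the paper:

```python
def normalized_histogram(image, levels):
    """Return p(r_k) = n_k / n for gray levels 0 .. levels-1."""
    n = len(image)                 # total number of pixels
    counts = [0] * levels          # n_k for each gray level r_k
    for pixel in image:
        counts[pixel] += 1
    return [c / n for c in counts]

image = [0, 1, 1, 2, 2, 2, 3, 3]   # 8 pixels, L = 4 gray levels
p = normalized_histogram(image, levels=4)
print(p)        # [0.125, 0.25, 0.375, 0.25]
print(sum(p))   # 1.0 -- the components of a normalized histogram sum to 1
```

As the text notes, the normalized values can be read as estimates of the probability of each gray level occurring.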
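Eq. (2.1) can likewise be evaluated directly from the two pixel populations. This sketch reuses the i = 100 and i = 150 gray levels mentioned in the text; the noise standard deviation of 10 is an assumed value for illustration only:

```python
def snr(object_pixels, background_pixels, noise_std):
    """S/N = |mu_b - mu_o| / sigma_N, as in Eq. (2.1)."""
    mu_o = sum(object_pixels) / len(object_pixels)
    mu_b = sum(background_pixels) / len(background_pixels)
    return abs(mu_b - mu_o) / noise_std

# object pixels near i = 100, background pixels near i = 150,
# assumed additive noise standard deviation of 10
print(snr([100] * 10, [150] * 10, noise_std=10.0))   # 5.0
```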
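The thresholding rule of Section 1.3 reduces to one comparison per pixel. A minimal sketch, assuming the object is brighter than the background and using a made-up 2x3 image:

```python
def threshold_segment(image, t):
    """Mark pixels with value > t as object (1), the rest as background (0)."""
    return [[1 if p > t else 0 for p in row] for row in image]

image = [[100, 100, 150],
         [100, 150, 150]]
print(threshold_segment(image, t=125))   # [[0, 0, 1], [0, 1, 1]]
```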
However, the majority of the different methods may be grouped into two categories:
Gradient: The gradient method detects the edges by looking for the maximum and minimum in the first derivative of the image.
Laplacian: The Laplacian method searches for zero crossings in the second derivative of the image to find edges.
An edge has the one-dimensional shape of a ramp, and calculating the derivative of the image can highlight its location. Suppose we have the signal below; if we take the gradient and Laplacian of this signal, we get the resultant signals shown below.

Fig 1.3 (a) Intensity graph of a signal (b) Gradient (c) Laplace

1.4.1.1 Sobel Operator:
The Sobel edge detector uses a simple convolution kernel to create a series of gradient magnitudes. For those of you mathematically inclined, applying the convolution can be represented as:

N(x, y) = Σ_{k=−1}^{1} Σ_{j=−1}^{1} K(j, k) p(x − j, y − k)          (2.2)

So, the Sobel edge detector uses two convolution kernels, one to detect changes in vertical contrast (hx) and another to detect horizontal contrast (hy):

Gx = | −1  0  +1 |        Gy = | +1  +2  +1 |
     | −2  0  +2 |             |  0   0   0 |
     | −1  0  +1 |             | −1  −2  −1 |

Fig 1.4 Sobel edge detector mask

Therefore, we have a gradient vector, a gradient magnitude and a gradient direction:

g = (g_x, g_y)^T                                          (2.3)
|g| = √(g_x² + g_y²)                                      (2.4)
θ = tan⁻¹(g_y / g_x)                                      (2.5)

where g is the gradient vector, |g| is the gradient magnitude and θ is the gradient direction.

1.4.1.2 Robert's Cross Operator:
The Roberts Cross operator performs a simple, quick-to-compute, 2-D spatial gradient measurement on an image. The operator consists of a pair of 2×2 convolution kernels, as shown in the figure; one kernel is simply the other rotated by 90°. This is very similar to the Sobel operator.

Gx = | +1   0 |        Gy = |  0  +1 |
     |  0  −1 |             | −1   0 |

Fig 1.5 Roberts edge detector mask

The kernels can be applied separately to the input image, to produce separate measurements of the gradient component in each orientation (call these Gx and Gy). These can then be combined to find the absolute magnitude of the gradient at each point and the orientation of that gradient. The gradient magnitude is given by:

|G| = √(Gx² + Gy²)                                        (2.6)

although typically an approximate magnitude is computed using:

|G| = |Gx| + |Gy|                                         (2.7)

which is much faster to compute. The angle of orientation of the edge giving rise to the spatial gradient (relative to the pixel grid orientation) is given by:

θ = arctan(Gy / Gx) − 3π/4                                (2.8)

1.4.1.3 Prewitt's Operator:
The Prewitt operator is similar to the Sobel operator and is used for detecting vertical and horizontal edges in images.

h1 = |  1   1   1 |        h3 = | −1  0  1 |
     |  0   0   0 |             | −1  0  1 |
     | −1  −1  −1 |             | −1  0  1 |

Fig 1.6 Prewitt edge detector mask

1.4.1.4 Laplacian of Gaussian
The Laplacian is a 2-D isotropic measure of the 2nd spatial derivative of an image. The Laplacian L(x, y) of an image with pixel intensity values I(x, y) is given by:

L(x, y) = ∂²I/∂x² + ∂²I/∂y²                               (2.9)

Commonly used small kernels are shown in the figure:

|  0  −1   0 |        | −1  −1  −1 |
| −1   4  −1 |        | −1   8  −1 |
|  0  −1   0 |        | −1  −1  −1 |

Fig 1.7 Laplacian edge detector mask

The 2-D LoG function centered on zero and with Gaussian standard deviation σ has the form:

LoG(x, y) = −(1 / (πσ⁴)) [1 − (x² + y²) / (2σ²)] e^(−(x² + y²) / (2σ²))          (2.10)

1.4.1.5 Canny Edge Detection Algorithm
The Canny operator is the result of solving an optimization problem with constraints.
Step 3: Whenever the gradient in the x-direction is equal to zero, the edge direction has to be equal to 90 degrees or 0 degrees, depending on what the value of the gradient in the y-direction is equal to. If Gy has a value of zero, the edge direction will equal 0 degrees; otherwise the edge direction will equal 90 degrees. The formula for finding the edge direction is just:

θ = tan⁻¹(Gy / Gx)

Step 4: Once the edge direction is known, the next step is to relate the edge direction to a direction that can be traced in an image. So if the pixels of a 5×5 image are aligned as follows:

x x x x x
x x x x x
x x a x x
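As a sketch of Eqs. (2.6) and (2.7), the two Roberts 2×2 kernels can be applied at a single pixel and combined into the exact and approximate gradient magnitudes. The 2×2 test image below is an illustrative step edge, not data from the paper:

```python
import math

GX = [[1, 0],
      [0, -1]]       # first Roberts kernel
GY = [[0, 1],
      [-1, 0]]       # the same kernel rotated by 90 degrees

def roberts_at(img, y, x):
    """Gradient components at pixel (y, x), combined per Eqs. (2.6)-(2.7)."""
    gx = sum(GX[j][i] * img[y + j][x + i] for j in range(2) for i in range(2))
    gy = sum(GY[j][i] * img[y + j][x + i] for j in range(2) for i in range(2))
    exact = math.sqrt(gx * gx + gy * gy)     # Eq. (2.6)
    approx = abs(gx) + abs(gy)               # Eq. (2.7), faster to compute
    return exact, approx

# vertical step edge: dark left column, bright right column
img = [[0, 10],
       [0, 10]]
exact, approx = roberts_at(img, 0, 0)
print(round(exact, 3), approx)   # 14.142 20
```

The |Gx| + |Gy| approximation overestimates the Euclidean magnitude here, which is the usual trade-off for avoiding the square root.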
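The Gx = 0 special case described in step 3 of the Canny algorithm can be sketched as a small helper function; this is a simplified illustration, not the paper's implementation:

```python
import math

def edge_direction(gx, gy):
    """theta = arctan(gy / gx) in degrees, with the gx == 0 cases resolved
    as step 3 describes: 0 degrees if gy is zero, 90 degrees otherwise."""
    if gx == 0:
        return 0.0 if gy == 0 else 90.0
    return math.degrees(math.atan(gy / gx))

print(edge_direction(0, 5))    # 90.0  (vertical gradient only)
print(edge_direction(0, 0))    # 0.0   (no gradient at all)
print(edge_direction(1, 1))    # 45.0
```

Step 4 would then snap these continuous angles to one of the four traceable directions (0, 45, 90, 135 degrees) in the pixel grid.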