A New Approach for Feature Extraction of an Image


3rd International Conference on Intelligent Computational Systems (ICICS'2013), January 26-27, 2013, Hong Kong (China)

Mamta Juneja and Parvinder S. Sandhu

Abstract—This paper presents a new approach for feature (lines, edges, shapes) extraction of a given image. The proposed technique is based on the Canny edge detection filter for detection of lines and edges, and on an enhanced Hough transform for shape (circle, rhombus, rectangle, triangle) detection. In the proposed approach, an input image is first converted into grayscale. For detecting lines in images, we use the histogram for intensity-related information together with a threshold. Then, an edge recognition procedure is implemented using the Canny edge detector to determine the edges and edge directions. Finally, an enhanced Hough transform algorithm is implemented to provide shape detection (triangle, rectangle, rhombus and circle) in addition to line linking, as required by this research.

Keywords—Edge detection, Image segmentation, Canny edge detector, Hough transform, Feature extraction.

Mamta Juneja is Assistant Professor in the University Institute of Engineering and Technology, Panjab University, Chandigarh; e-mail: [email protected]. Prof. Dr. Parvinder S. Sandhu is with Rayat and Bahra Institute of Engineering and Bio-Technology, Sahauran, Punjab.

I. INTRODUCTION

1.1 Segmentation

In computer vision, segmentation refers to the process of partitioning a digital image into multiple regions (sets of pixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images [1].

For intensity images (i.e. those represented by point-wise intensity levels), four popular approaches are [2, 3]: threshold techniques, edge-based methods, region-based techniques, and connectivity-preserving relaxation methods.

Threshold techniques, which make decisions based on local pixel information, are effective when the intensity levels of the objects fall squarely outside the range of levels in the background. Because spatial information is ignored, however, blurred region boundaries can create havoc. Edge-based methods center around contour detection: their weakness in connecting broken contour lines makes them, too, prone to failure in the presence of blurring. A region-based method usually proceeds as follows: the image is partitioned into connected regions by grouping neighboring pixels of similar intensity levels; adjacent regions are then merged under some criterion involving, perhaps, homogeneity or sharpness of region boundaries. A connectivity-preserving, relaxation-based segmentation method, usually referred to as the active contour model, was proposed recently. The main idea is to start with some initial boundary shape, represented in the form of spline curves, and to modify it iteratively by applying various shrink/expansion operations according to some energy function.

1.2 Histogram [2]

The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function h(rk) = nk, where rk is the kth gray level and nk is the number of pixels in the image having gray level rk. It is common practice to normalize a histogram by dividing each of its values by the total number of pixels in the image, denoted by n. Thus, a normalized histogram is given by p(rk) = nk/n, for k = 0, 1, ..., L-1. Loosely speaking, p(rk) gives an estimate of the probability of occurrence of gray level rk. Note that the sum of all components of a normalized histogram is equal to 1.

We can consider the histograms of our images. For the noise-free image, the histogram is simply two spikes, at i = 100 and i = 150. For the low-noise image, there are two clear peaks centered on i = 100 and i = 150. For the high-noise image, there is a single peak: the two grey-level populations corresponding to object and background have merged. We can define the input image signal-to-noise ratio in terms of the mean grey level of the object pixels (μo), the mean grey level of the background pixels (μb), and the standard deviation σ of the additive noise:

S/N = |μb − μo| / σ ……… (2.1)

Fig 1.1 Signal-to-noise ratio of a signal

1.3 Thresholding [3]

Thresholding is the simplest method of image segmentation. During the thresholding process, individual pixels in an image are marked as "object" pixels if their value is greater than some threshold value T (assuming an object to be brighter than the background) and as "background" pixels otherwise: if the grey level of pixel p is greater than T, then p is an object pixel; else p is a background pixel.

Fig 1.2 Thresholding of a signal

1.4 Edge Detection [4-6]

Edges define the boundaries between regions in an image, which helps with segmentation and object recognition. Edge detection is a fundamental step in low-level image processing, and good edges are necessary for higher-level processing.

1.4.1 Edge Detection Techniques

There are many ways to perform edge detection. However, the majority of methods may be grouped into two categories:

Gradient: The gradient method detects the edges by looking for the maximum and minimum in the first derivative of the image.

Laplacian: The Laplacian method searches for zero crossings in the second derivative of the image to find edges.

An edge has the one-dimensional shape of a ramp, and calculating the derivative of the image can highlight its location. Taking the gradient and the Laplacian of such a signal gives the resultant signals shown in Fig 1.3.

Fig 1.3 (a) Intensity graph of a signal (b) Gradient (c) Laplace

1.4.1.1 Sobel Operator

The Sobel edge detector uses a simple convolution kernel to create a series of gradient magnitudes. For the mathematically inclined, applying a convolution kernel K to a pixel neighborhood p can be represented as:

N(x, y) = Σk=−1..1 Σj=−1..1 K(j, k) p(x − j, y − k) ……… (2.2)

So the Sobel edge detector uses two convolution kernels, one to detect changes in vertical contrast (hx) and another to detect changes in horizontal contrast (hy):

Gx:             Gy:
−1   0  +1      +1  +2  +1
−2   0  +2       0   0   0
−1   0  +1      −1  −2  −1

Fig 1.4 Sobel edge detector masks

Therefore, we have a gradient vector, a gradient magnitude and a gradient direction:

g = (gx, gy)ᵀ ……… (2.3)
|g| = √(gx² + gy²) ……… (2.4)
θ = tan⁻¹(gy / gx) ……… (2.5)

where g is the gradient vector, |g| is the gradient magnitude and θ is the gradient direction.

1.4.1.2 Roberts Cross Operator

The Roberts Cross operator performs a simple, quick-to-compute, 2-D spatial gradient measurement on an image. The operator consists of a pair of 2×2 convolution kernels; one kernel is simply the other rotated by 90°. This is very similar to the Sobel operator.

Gx:         Gy:
+1   0       0  +1
 0  −1      −1   0

Fig 1.5 Roberts edge detector masks

The kernels can be applied separately to the input image to produce separate measurements of the gradient component in each orientation (call these Gx and Gy). These can then be combined to find the absolute magnitude of the gradient at each point and the orientation of that gradient. The gradient magnitude is given by:

|G| = √(Gx² + Gy²) ……… (2.6)

although, typically, an approximate magnitude is computed using:

|G| = |Gx| + |Gy| ……… (2.7)

which is much faster to compute. The angle of orientation of the edge giving rise to the spatial gradient (relative to the pixel grid orientation) is given by:

θ = arctan(Gy / Gx) − 3π/4 ……… (2.8)

1.4.1.3 Prewitt's Operator

The Prewitt operator is similar to the Sobel operator and is used for detecting vertical and horizontal edges in images:

h1:             h3:
 1   1   1      −1   0   1
 0   0   0      −1   0   1
−1  −1  −1      −1   0   1

Fig 1.6 Prewitt edge detector masks

1.4.1.4 Laplacian of Gaussian

The Laplacian is a 2-D isotropic measure of the second spatial derivative of an image. The Laplacian L(x, y) of an image with pixel intensity values I(x, y) is given by:

L(x, y) = ∂²I/∂x² + ∂²I/∂y² ……… (2.9)

Two commonly used small kernels are:

 0  −1   0      −1  −1  −1
−1   4  −1      −1   8  −1
 0  −1   0      −1  −1  −1

Fig 1.7 Laplacian edge detector masks

The 2-D LoG function centered on zero and with Gaussian standard deviation σ has the form:

LoG(x, y) = −(1 / (πσ⁴)) [1 − (x² + y²) / (2σ²)] e^(−(x² + y²) / (2σ²)) ……… (2.10)

1.4.1.5 Canny Edge Detection Algorithm

The Canny operator is the result of solving an optimization problem with constraints.

Step 3: The edge direction is computed. Whenever the gradient in the x-direction is equal to zero, the edge direction has to be equal to 90 degrees or 0 degrees, depending on the value of the gradient in the y-direction: if Gy has a value of zero, the edge direction will equal 0 degrees; otherwise the edge direction will equal 90 degrees. The formula for finding the edge direction is just:

θ = tan⁻¹(Gy / Gx)

Step 4: Once the edge direction is known, the next step is to relate the edge direction to a direction that can be traced in an image. So if the pixels of a 5×5 image are aligned as follows (with a the pixel in question):

x x x x x
x x x x x
x x a x x
x x x x x
x x x x x
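The histogram definition of Section 1.2 can be sketched in a few lines of plain Python. The 4×4 image and L = 4 gray levels below are made-up values for illustration; the point is that h(rk) counts pixels per level and the normalized p(rk) = nk/n sums to 1.

```python
# Normalized histogram of a grayscale image (Section 1.2).
# The 4x4 image and L = 4 gray levels are illustrative assumptions.
L = 4
image = [
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [3, 2, 2, 1],
    [3, 3, 2, 0],
]

n = sum(len(row) for row in image)   # total number of pixels
h = [0] * L                          # h(r_k) = n_k
for row in image:
    for pixel in row:
        h[pixel] += 1

p = [nk / n for nk in h]             # p(r_k) = n_k / n

print(h)        # pixel count per gray level
print(sum(p))   # components of a normalized histogram sum to 1
```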
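Equation 2.1 is a one-line computation. Here the object and background means are taken from the histogram discussion (100 and 150); the noise standard deviation of 10 is an assumed value for illustration only.

```python
# Signal-to-noise ratio of Eq. 2.1: S/N = |mu_b - mu_o| / sigma.
# Means follow the histogram example; sigma = 10 is assumed.
mu_o, mu_b, sigma = 100.0, 150.0, 10.0

snr = abs(mu_b - mu_o) / sigma
print(snr)   # 5.0
```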
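The object/background rule of Section 1.3 can be sketched as follows; the image values and the threshold T = 120 are made-up, and pixels brighter than T are marked as object per the convention in the text.

```python
# Threshold segmentation (Section 1.3): pixels brighter than T are
# "object" (1), the rest "background" (0). Values are illustrative.
T = 120
image = [
    [100, 101,  99, 150],
    [102, 148, 151, 149],
    [ 98, 152, 150, 100],
]

mask = [[1 if pixel > T else 0 for pixel in row] for row in image]

print(mask)   # binary object/background map
```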
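Equations 2.2, 2.4 and 2.7 combine into a short sketch: convolve with the two Sobel masks of Fig 1.4 and form the exact and approximate gradient magnitudes. The 5×5 image with a vertical step edge is an assumption for illustration; note that true convolution (Eq. 2.2) flips the kernel, so the sign of gx differs from a correlation, while the magnitude is unaffected.

```python
import math

# Sobel masks of Fig 1.4, stored row-major.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1]]

# Discrete 3x3 convolution at (x, y), per Eq. 2.2:
# N(x, y) = sum_k sum_j K(j, k) p(x - j, y - k), with K[1+k][1+j].
def conv3(img, K, x, y):
    return sum(K[1 + k][1 + j] * img[y - k][x - j]
               for k in (-1, 0, 1) for j in (-1, 0, 1))

# A made-up 5x5 image with a vertical step edge (10 -> 90).
img = [[10, 10, 90, 90, 90]] * 5

gx = conv3(img, GX, 2, 2)
gy = conv3(img, GY, 2, 2)

mag_exact  = math.sqrt(gx * gx + gy * gy)   # Eq. 2.4 / 2.6
mag_approx = abs(gx) + abs(gy)              # Eq. 2.7 (faster)

print(gx, gy, mag_exact, mag_approx)   # -320 0 320.0 320
```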
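A quick property check of the Fig 1.7 Laplacian kernels: their entries sum to zero, so convolving them over a flat (constant-intensity) region yields 0, and they respond only where the second derivative is non-zero, which is what the zero-crossing method of Section 1.4.1 relies on.

```python
# The two Laplacian kernels of Fig 1.7; entries sum to zero, so the
# response on constant regions is exactly 0.
K4 = [[ 0, -1,  0], [-1,  4, -1], [ 0, -1,  0]]
K8 = [[-1, -1, -1], [-1,  8, -1], [-1, -1, -1]]

totals = [sum(sum(row) for row in K) for K in (K4, K8)]
print(totals)   # [0, 0]
```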
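The direction-tracing discussion of steps 3 and 4 in Section 1.4.1.5 amounts to snapping θ = tan⁻¹(Gy/Gx) to one of the four directions that can actually be traced on a pixel grid: 0°, 45°, 90° or 135°. A minimal sketch, including the Gx = 0 special case described in the text (the function name and bin boundaries are our own, not the paper's):

```python
import math

# Quantize the edge direction (Canny steps 3-4) to a traceable grid
# direction: 0, 45, 90 or 135 degrees.
def edge_direction(gx, gy):
    if gx == 0:                      # special case from the text
        return 0.0 if gy == 0 else 90.0
    theta = math.degrees(math.atan(gy / gx)) % 180.0
    if theta < 22.5 or theta >= 157.5:   # snap to nearest direction,
        return 0.0                       # wrapping around at 180
    if theta < 67.5:
        return 45.0
    if theta < 112.5:
        return 90.0
    return 135.0

print(edge_direction(0, 5))    # 90.0 (vertical gradient)
print(edge_direction(3, 3))    # 45.0
print(edge_direction(5, 0))    # 0.0
```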
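The abstract's shape detection rests on the Hough transform, whose details fall outside this excerpt. As a minimal sketch of the underlying idea, the following accumulator votes in the line normal form ρ = x·cosθ + y·sinθ; the point set and the integer ρ binning are illustrative assumptions, and a real detector would vote only for edge pixels.

```python
import math

# Minimal Hough accumulator for straight lines in normal form
# rho = x*cos(theta) + y*sin(theta), theta in whole degrees.
def hough_lines(points, theta_steps=180):
    acc = {}
    for (x, y) in points:
        for t in range(theta_steps):
            theta = math.radians(t)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc

# Five collinear points on the vertical line x = 10.
points = [(10, y) for y in range(5)]
acc = hough_lines(points)

(rho, t), votes = max(acc.items(), key=lambda kv: kv[1])
print(rho, t, votes)   # winning bin: rho = 10, theta = 0 degrees
```

All five points fall into the (ρ = 10, θ = 0°) bin, so the peak of the accumulator recovers the line's parameters.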