Design and Implementation of a Vascular Pattern Recognition System

Pages: 16 | File type: PDF | Size: 1020 KB

A Thesis submitted to the Graduate School of the University of Cincinnati in partial fulfillment of the requirements for the degree of Master of Science in the Department of Electrical and Computer Engineering of the College of Engineering and Applied Sciences, July 2014, by Srikkanth Govindaraajan, B.E. Electronics and Communication Engineering, Anna University, 2011. Committee Chair: Dr. Carla Purdy

Abstract

Biometric technology plays a vital role today, as rapidly developing secure systems and home automation have made our lives easier. But the question arises as to how secure these systems really are. With advances in hacking, the traditional username-and-password protocol is no longer adequate for all security-based systems. Though fingerprint identification systems provided a path-breaking solution, there are many ways to forge fingerprints. Other technologies such as voice recognition and iris recognition coexist, yet their security and safety are also open to question. The major objective of this thesis is to provide enhanced security through a biometrics-based embedded system using the technique of Vascular Pattern Recognition, also called Vein Pattern Recognition (VPR). Further objectives are to enhance the vascular pattern image through various image processing techniques, to reduce the Comparison for Result (CFR) time by a significant factor, and to implement this VPR-based embedded system in a real-time software environment. For the system we implemented, our experiments achieved a false accept rate of 0% and a false reject rate of 6.34%. Furthermore, our research demonstrates that the Speeded Up Robust Features (SURF) algorithm is faster than its predecessor, the Scale Invariant Feature Transform (SIFT).
The principal conclusion of the thesis is that a safe and secure system can be developed on a small scale with precise results. Given the resources, this system could be extended to a larger scale and customized for a wide range of applications.

Keywords: Vascular Pattern Recognition, Vein Pattern Recognition, image processing techniques, SURF algorithm, OpenCV.

Acknowledgements

I would never have been able to complete my thesis without the guidance of my advisor and my committee members, help from my friends, and support from my entire family. I would like to express my heartfelt gratitude and thanks to my advisor Dr. Carla Purdy for her patience, her guidance, and for providing me with an excellent atmosphere for doing my research. Right from the beginning she has been very supportive and patient in answering my queries. I would also like to specially thank Dr. George Purdy and Dr. Wen-Ben Jone for agreeing to be a part of my defense committee despite their busy schedules. I would like to thank all my friends who gave their vein patterns for use in this research, and for their input and moral support throughout my Master's at the University of Cincinnati. I would like to thank the University of Cincinnati, and the Department of Electrical and Computer Engineering in particular, for providing me with a great environment to learn and develop myself as an engineer over the last few years. I would like to thank my parents Usha and Govindaraajan, my brother, and my sister-in-law for their love, affection, and support in all my endeavors. I would like to specially thank my wife's family for their love and support. Finally, I would like to thank my wife, Sudha Badrinathan, who has been very patient and caring throughout this arduous journey. She has stayed up late nights with me during this research and has motivated me throughout my Master's degree. I thank God for giving me this life, for the opportunity to associate with wonderful people, and for blessing me with what I am today.
— Srikkanth Govindaraajan

TABLE OF CONTENTS

Abstract
Acknowledgements
List of Figures
List of Tables
1 INTRODUCTION
  1.1 PROBLEM
  1.2 OBJECTIVE
  1.3 APPROACH
  1.4 ORGANIZATION
2 BACKGROUND AND RELATED WORK
  2.1 EMBEDDED SYSTEMS
  2.2 BIOMETRICS AND ITS ROLE IN IDENTITY MANAGEMENT
    2.2.1 CHARACTERISTICS OF A BIOMETRIC SYSTEM
    2.2.2 CONSIDERATIONS FOR A BIOMETRIC SYSTEM
    2.2.3 GENERAL WORKING OF A BIOMETRIC SYSTEM
    2.2.4 PERFORMANCE METRICS OF A BIOMETRIC SYSTEM
  2.3 VASCULAR PATTERN RECOGNITION
    2.3.1 VASCULAR PATTERN IN FINGERS AND HANDS
  2.4 THE OPENCV LIBRARY
    2.4.1 WHAT IS OPENCV?
    2.4.2 WHAT IS COMPUTER VISION?
    2.4.3 HARDWARE FOR COMPUTER VISION SYSTEMS
  2.5 PATTERN MATCHING IN BIOMETRICS
    2.5.1 FEATURE DETECTION ALGORITHMS
    2.5.2 TYPES OF IMAGE FEATURES
    2.5.3 FEATURE DESCRIPTION
    2.5.4 SCALE INVARIANT FEATURE TRANSFORM (SIFT) ALGORITHM
    2.5.5 SPEEDED UP ROBUST FEATURES (SURF) ALGORITHM
  2.6 RELATED WORK
3 APPROACH
  3.1 BASIC CONCEPTS
    3.1.1 VEIN DETECTION PROCESS
    3.1.2 IMAGE DATABASE
  3.2 HARDWARE SELECTION AND SETUP
  3.3 SOFTWARE REQUIREMENTS AND IMPLEMENTATION
    3.3.1 THE OPEN SOURCE WORLD
    3.3.2 OPERATING SYSTEM
    3.3.3 UBUNTU OPERATING SYSTEM
      3.3.3.1 SYSTEM REQUIREMENTS FOR UBUNTU
    3.3.4 THE OPENCV LIBRARY
  3.4 OVERALL WORKING OF THE VASCULAR PATTERN RECOGNITION SYSTEM
  3.5 FUNCTIONALITIES AND WORKING OF THE SYSTEM'S PROGRAM CODE
  SYSTEM DIAGRAM AND FLOWCHARTS
4 RESULTS AND ANALYSIS
  4.1 ALGORITHM ANALYSIS
    4.1.1 ONE TO ONE MATCHING
    4.1.2 ONE TO MANY MATCHING
    4.1.3 PERFORMANCE MEASURES
  4.2 AUXILIARY FUNCTIONS OF THE SYSTEM
    4.2.1 VIEWING IMAGE ON THE DATABASE
    4.2.2 PROCESS THE IMAGE
    4.2.3 ACCESSING THE USER MANUAL
    4.2.4 REMOVE IMAGE FROM DATABASE
    4.2.5 CHANGING PASSWORD
    4.2.6 ADDING AN IMAGE TO THE DATABASE
  4.3 DRAWBACKS AND LIMITATIONS
  4.4 COMPARISON TO OTHER RELATED WORK
5 CONCLUSION AND FUTURE WORK
  5.1 SUMMARY OF WORK
  5.2 FUTURE WORK
Appendix A – Sample calculations for false reject rate
Appendix B – Sample calculations for average time taken to find an image in the database
REFERENCES

LIST OF FIGURES

Figure 1.1: Images of Hitachi's, Fujitsu's and M2SYS hand vein scanners [38][39][40]
Figure 2.1: Basic structure of an embedded system [75]
Figure 2.2: Biometric system model [5]
Figure 2.3: Arrangement for acquiring vascular pattern [74]
Figure 2.4: Experimental setup and raw image obtained from the IR camera
Figure 2.5: OpenCV's basic structure [22]
Figure 2.6: Input and output of a corner detection algorithm [29]
Figure 2.7: Output of a blob detection algorithm [42]
Figure 2.8: Sample image and its ridge detection output [28]
Figure 2.9: Sample image (a) and its output (b) of a Canny edge detector [44]
Figure 2.10: Stages of SIFT algorithm key points filtering [32]
Figure 2.11: Keypoints / feature detectors using SURF [35]
Figure 2.12: Object detection using SURF algorithm [35]
Figure 3.1: Optical window for vein detection process using near infrared light [46]
Figure 3.2: Interaction of near infrared light on human skin [48]
Figure 3.3: Sample finger vein database images [49]
Figure 3.4: Images of different LED illuminators [55]
Figure 3.5: Image of Sabrent night vision webcam used in experiments
Figure 3.6: Experimental hardware setup with Sabrent night vision webcam
Figure 3.7: Interrupt sources and handling [59]
Recommended publications
  • Scale Invariant Interest Points with Shearlets
    Scale Invariant Interest Points with Shearlets
    Miguel A. Duval-Poo¹, Nicoletta Noceti¹, Francesca Odone¹, and Ernesto De Vito²
    ¹Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi (DIBRIS), Università degli Studi di Genova, Italy
    ²Dipartimento di Matematica (DIMA), Università degli Studi di Genova, Italy

    Abstract: Shearlets are a relatively new directional multi-scale framework for signal analysis, which have been shown effective at enhancing signal discontinuities such as edges and corners at multiple scales. In this work we address the problem of detecting and describing blob-like features in the shearlet framework. We derive a measure which is very effective for blob detection and closely related to the Laplacian of Gaussian. We demonstrate that the measure satisfies the perfect scale invariance property in the continuous case. In the discrete setting, we derive algorithms for blob detection and keypoint description. Finally, we provide qualitative justifications of our findings as well as a quantitative evaluation on benchmark data. We also report experimental evidence that our method is well suited to compressed and noisy images, thanks to the sparsity property of shearlets.

    1 Introduction
    Feature detection consists of the extraction of perceptually interesting low-level features over an image, in preparation for higher-level processing tasks. In the last decade a considerable amount of work has been devoted to the design of effective and efficient local feature detectors able to associate with a given interesting point also scale and orientation information. Scale-space theory has been one of the main sources of inspiration for this line of research, providing an effective framework for detecting features at multiple scales and, to some extent, for devising scale invariant image descriptors.
  • Hough Transform, Descriptors — Tammy Riklin Raviv, Electrical and Computer Engineering, Ben-Gurion University of the Negev
    DIGITAL IMAGE PROCESSING, Lecture 7: Hough transform, descriptors. Tammy Riklin Raviv, Electrical and Computer Engineering, Ben-Gurion University of the Negev. (Slides adapted from S. Savarese.)

    Hough transform: each point (x, y) in image space corresponds to a line in (m, b) parameter space under the parameterization y = mx + b, so collinear image points vote into a common accumulator cell.

    Issues:
    • The parameter space [m, b] is unbounded.
    • Vertical lines have infinite gradient.

    Use a polar representation for the parameter space instead: x cos θ + y sin θ = ρ. Each point then votes for a complete family of potential lines, sweeping out a sinusoid in (ρ, θ) Hough space; the intersection of the sinusoids provides the desired line equation.

    Experiments: image features vote into a (ρ, θ) model parameter histogram. With noisy data the grid size must be adjusted or the histogram smoothed; uniform noise produces spurious peaks.

    Hough Transform Algorithm:
    1. Image → Canny edges
    2. Canny edges → Hough votes
    3. Hough votes → edges (find peaks and post-process)

    Hough transform example: http://ostatic.com/files/images/ss_hough.jpg

    Incorporating image gradients: recall that when we detect an edge point, we also know its gradient direction, and this means the line is uniquely determined. Modified Hough transform: for each edge point (x, y), set θ = gradient orientation at (x, y) and ρ = x cos θ + y sin θ, then increment H(θ, ρ) by 1. This finds lines using one vote per edge point.
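The polar voting scheme above can be sketched in a few lines of Python. This is a minimal illustration, not the lecture's code; the 180-bin θ discretization and the function name are illustrative choices.

```python
import math

def hough_lines(points, width, height, theta_steps=180):
    """Vote each point into a (rho, theta) accumulator using
    x*cos(theta) + y*sin(theta) = rho."""
    diag = int(math.hypot(width, height))
    # rho can be negative, so shift indices by diag
    acc = [[0] * theta_steps for _ in range(2 * diag + 1)]
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[rho + diag][t] += 1
    return acc, diag

# Five collinear points on the line y = x all vote into the same
# cell at theta = 135 degrees, rho = 0
points = [(k, k) for k in range(5)]
acc, diag = hough_lines(points, 10, 10)
```

The accumulator peak (5 votes at θ = 135°, ρ = 0) recovers the line; with noisy data the peak spreads over neighbouring cells, which is why the slides recommend adjusting the grid size or smoothing.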
  • Fast Image Registration Using Pyramid Edge Images
    Fast Image Registration Using Pyramid Edge Images
    Kee-Baek Kim, Jong-Su Kim, Sangkeun Lee, and Jong-Soo Choi*
    The Graduate School of Advanced Imaging Science, Multimedia and Film, Chung-Ang University, Seoul, Korea (R.O.K.)
    *Corresponding author's e-mail address: [email protected]

    Abstract: Image registration has been widely used in many image processing-related works. However, existing research has some problems: first, it is difficult to find accurate information such as translation, rotation, and scaling factors between images; and second, it requires high computational cost. To resolve these problems, this work proposes a Fourier-based image registration using pyramid edge images and a simple line fitting. The main advantages of the proposed approach are that it can compute the accurate information at sub-pixel precision and can be carried out fast. Experimental results show that the proposed scheme is more efficient than other baseline algorithms, particularly for large images such as aerial images. Therefore, the proposed algorithm can be used as a useful tool for image registration that requires high efficiency in many fields including GIS, MRI, CT, image mosaicing, and weather forecasting.

    Keywords: Image registration; FFT; Pyramid image; Aerial image; Canny operation; Image mosaicing.

    1. Introduction
    Image registration is a basic task in image processing to combine two or more images that are partially overlapped. Existing methods take longer to compute registration information as image size increases [3]. To solve these problems, this work employs a pyramid-based image decomposition scheme. Specifically, first, we use the Gaussian filter to remove the noise, because it causes many undesired
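The pyramid decomposition the authors rely on can be sketched as follows. This is a minimal stand-in, assuming simple 2x2 box averaging in place of the paper's Gaussian filtering; function names are illustrative.

```python
def pyramid_level(img):
    """Halve an image: average each 2x2 block (a box-filter stand-in
    for the Gaussian smoothing used before downsampling)."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, w - 1, 2)]
            for y in range(0, h - 1, 2)]

def build_pyramid(img, levels):
    """Return [full resolution, half, quarter, ...]; coarse levels make
    registration cheap, fine levels refine the estimate."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(pyramid_level(pyr[-1]))
    return pyr

flat = [[8.0] * 4 for _ in range(4)]
pyr = build_pyramid(flat, 3)  # 4x4 -> 2x2 -> 1x1
```

Registration is then run coarse-to-fine: an estimate found on the small top level costs little and seeds the search at the next finer level, which is why the scheme scales well to large aerial images.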
  • Edge Detection of Noisy Images Using 2-D Discrete Wavelet Transform Venkata Ravikiran Chaganti
    Florida State University Libraries, Electronic Theses, Treatises and Dissertations, The Graduate School, 2005
    Edge Detection of Noisy Images Using 2-D Discrete Wavelet Transform
    Venkata Ravikiran Chaganti
    Follow this and additional works at the FSU Digital Library. For more information, please contact [email protected]

    THE FLORIDA STATE UNIVERSITY, FAMU-FSU COLLEGE OF ENGINEERING. A thesis submitted to the Department of Electrical Engineering in partial fulfillment of the requirements for the degree of Master of Science. Degree awarded: Spring Semester, 2005. The members of the committee approve the thesis of Venkata R. Chaganti, defended on April 11th, 2005: Simon Y. Foo, Professor Directing Thesis; Anke Meyer-Baese, Committee Member; Rodney Roberts, Committee Member. Approved: Leonard J. Tung, Chair, Department of Electrical and Computer Engineering; Ching-Jen Chen, Dean, FAMU-FSU College of Engineering. The Office of Graduate Studies has verified and approved the above named committee members.

    Dedicated to my father, the late Dr. Rama Rao, my mother, my brother, and my sister-in-law, without whom this would never have been possible.

    ACKNOWLEDGEMENTS
    I thank my thesis advisor, Dr. Simon Foo, for his help, advice and guidance during my M.S. and my thesis. I also thank Dr. Anke Meyer-Baese and Dr. Rodney Roberts for serving on my thesis committee. I would like to thank my family for their constant support and encouragement during the course of my studies. I would like to acknowledge support from the Department of Electrical Engineering, FAMU-FSU College of Engineering.
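The wavelet-based edge detection idea in this thesis rests on the fact that edges concentrate in the detail bands of a 2-D discrete wavelet transform. A minimal one-level Haar analysis sketch (normalization and names are illustrative assumptions, not the thesis's code):

```python
def haar2d_level(img):
    """One analysis level of the 2-D Haar transform: approximation (LL)
    plus horizontal-difference (HL), vertical-difference (LH) and
    diagonal (HH) detail bands."""
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]
    LH = [[0.0] * w for _ in range(h)]
    HH = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a, b = img[2 * i][2 * j], img[2 * i][2 * j + 1]
            c, d = img[2 * i + 1][2 * j], img[2 * i + 1][2 * j + 1]
            LL[i][j] = (a + b + c + d) / 4.0  # local average
            HL[i][j] = (a - b + c - d) / 4.0  # responds to vertical edges
            LH[i][j] = (a + b - c - d) / 4.0  # responds to horizontal edges
            HH[i][j] = (a - b - c + d) / 4.0  # diagonal detail
    return LL, HL, LH, HH

# A vertical intensity step inside a 2x2 block appears in the HL band
img = [[0.0, 10.0, 10.0, 10.0],
       [0.0, 10.0, 10.0, 10.0]]
LL, HL, LH, HH = haar2d_level(img)
```

Thresholding the detail bands then marks edges while the smooth LL band absorbs much of the noise, which is the motivation for using wavelets on noisy images.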
  • Exploiting Information Theory for Filtering the Kadir Scale-Saliency Detector
    Exploiting Information Theory for Filtering the Kadir Scale-Saliency Detector
    P. Suau and F. Escolano ({pablo,sco}@dccia.ua.es), Robot Vision Group, University of Alicante, Spain. IBPRIA 2007, June 7th, 2007.

    Outline:
    1 Introduction
    2 Method: entropy analysis through scale space; Bayesian filtering; Chernoff information and threshold estimation; the Bayesian scale-saliency filtering algorithm
    3 Experiments: Visual Geometry Group database
    4 Conclusions

    Introduction — local feature detectors: feature extraction is a basic step in many computer vision tasks. The Kadir and Brady scale-saliency detector finds salient features over a narrow range of scales, but it is a computational bottleneck (all pixels, all scales). Applied to robot global localization, we need real-time feature extraction.

    Salient features: the Kadir and Brady algorithm (2001) measures the entropy of the local descriptor distribution at scale s and position x,

        H_D(s, x) = − Σ_{d ∈ D} p_{d,s,x} log₂ p_{d,s,x}

    and selects the most salient features between scales s_min and s_max.
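The entropy measure H_D(s, x) above is just the Shannon entropy of a local histogram. A minimal sketch (histogram-of-grey-levels descriptor assumed for illustration):

```python
import math
from collections import Counter

def entropy_bits(values):
    """Shannon entropy of the grey-level histogram of a patch, in bits:
    H = -sum_d p_d * log2(p_d)."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

# A flat patch carries no information; a balanced two-level patch
# carries one bit, so the saliency measure prefers the latter
flat = [7, 7, 7, 7]
mixed = [0, 1, 0, 1]
```

The detector's cost comes from evaluating this at every pixel and every scale; the slides' contribution is a Bayesian filter that discards low-entropy candidates early instead of scoring them all.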
  • A PERFORMANCE EVALUATION of LOCAL DESCRIPTORS 1 a Performance Evaluation of Local Descriptors
    A Performance Evaluation of Local Descriptors
    Krystian Mikolajczyk (Dept. of Engineering Science, University of Oxford, Oxford OX1 3PJ, United Kingdom; [email protected]) and Cordelia Schmid (INRIA Rhône-Alpes, 655 av. de l'Europe, 38330 Montbonnot, France; [email protected])

    Abstract — In this paper we compare the performance of descriptors computed for local interest regions, as for example extracted by the Harris-Affine detector [32]. Many different descriptors have been proposed in the literature. However, it is unclear which descriptors are more appropriate and how their performance depends on the interest region detector. The descriptors should be distinctive and at the same time robust to changes in viewing conditions as well as to errors of the detector. Our evaluation uses recall with respect to precision as its criterion and is carried out for different image transformations. We compare shape context [3], steerable filters [12], PCA-SIFT [19], differential invariants [20], spin images [21], SIFT [26], complex filters [37], moment invariants [43], and cross-correlation for different types of interest regions. We also propose an extension of the SIFT descriptor, and show that it outperforms the original method. Furthermore, we observe that the ranking of the descriptors is mostly independent of the interest region detector and that the SIFT-based descriptors perform best. Moments and steerable filters show the best performance among the low-dimensional descriptors.

    Index Terms — Local descriptors, interest points, interest regions, invariance, matching, recognition.

    I. INTRODUCTION
    Local photometric descriptors computed for interest regions have proved to be very successful in applications such as wide baseline matching [37, 42], object recognition [10, 25], texture
    Corresponding author is K.
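The evaluation criterion named in the abstract, recall with respect to precision, reduces to two counts over the matches a descriptor returns. A minimal sketch (the function name and the convention of plotting 1 − precision are stated here as assumptions about the paper's protocol):

```python
def recall_and_fpr(tp, fp, total_correct_matches):
    """Recall = correct matches retrieved / correct matches possible;
    the paired axis is 1 - precision = false matches / all matches."""
    recall = tp / total_correct_matches
    one_minus_precision = fp / (tp + fp)
    return recall, one_minus_precision

# e.g. a descriptor that retrieves 80 of 100 true correspondences
# while also returning 20 false matches
r, omp = recall_and_fpr(tp=80, fp=20, total_correct_matches=100)
```

Sweeping the matching threshold traces out the recall vs. 1 − precision curve on which the descriptors are ranked.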
  • Sobel Operator and Canny Edge Detector ECE 480 Fall 2013 Team 4 Daniel Kim
    Sobel Operator and Canny Edge Detector
    ECE 480, Fall 2013, Team 4 — Daniel Kim

    Executive Summary
    In digital image processing (DIP), edge detection is an important subject. There are numerous edge detection methods such as Prewitt, Kirsch, and Roberts cross. Two different types of edge detector are explored and analyzed in this paper: the Sobel operator and the Canny edge detector. The performance of the two edge detection methods is then compared on several input images.
    Keywords: Digital Image Processing, Edge Detection, Sobel Operator, Canny Edge Detector, Computer Vision, OpenCV

    Table of Contents
    1 Introduction
    2 Sobel Operator (2.1 Background, 2.2 Example)
    3 Canny Edge Detector (3.1 Background, 3.2 Example)
    4 Comparison of Sobel and Canny (4.1 Discussion, 4.2 Example)
    5 Conclusion
    6 Appendix (6.1 OpenCV Sobel tutorial code, 6.2 OpenCV Canny tutorial code)
    7 References

    1) Introduction
    Edge detection is a crucial step in object recognition. It is the process of finding sharp discontinuities in an image. The discontinuities are abrupt changes in pixel intensity which characterize the boundaries of objects in a scene. In short, the goal of edge detection is to produce a line drawing of the input image. The extracted features are then used by computer vision algorithms, e.g. recognition and tracking. A classical method of edge detection involves the use of operators, two-dimensional filters. An edge in an image occurs where the gradient is greatest.
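The Sobel operator the report analyzes is a pair of fixed 3x3 masks. A minimal pure-Python sketch (the report's appendix uses OpenCV's tutorial code instead; this hand-rolled version is only for illustration):

```python
# Standard Sobel masks for horizontal (x) and vertical (y) gradients
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def apply_mask(img, mask, y, x):
    """Correlate a 3x3 mask with the neighbourhood centred at (y, x)."""
    return sum(img[y + i - 1][x + j - 1] * mask[i][j]
               for i in range(3) for j in range(3))

def sobel_magnitude(img, y, x):
    """Gradient magnitude sqrt(gx^2 + gy^2) at one pixel."""
    gx = apply_mask(img, SOBEL_X, y, x)
    gy = apply_mask(img, SOBEL_Y, y, x)
    return (gx * gx + gy * gy) ** 0.5

# A vertical intensity step produces a strong horizontal gradient
step = [[0, 0, 10],
        [0, 0, 10],
        [0, 0, 10]]
```

Thresholding the magnitude gives the Sobel edge map; Canny adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of the same gradient computation, which is why it produces thinner, better-connected edges in the report's comparison.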
  • Context-Aware Features and Robust Image Representations 5 6 ∗ 7 P
    Context-Aware Features and Robust Image Representations
    P. Martins*, P. Carvalho (Center for Informatics and Systems, University of Coimbra, Coimbra, Portugal), C. Gatta (Computer Vision Center, Autonomous University of Barcelona, Barcelona, Spain)

    Abstract
    Local image features are often used to efficiently represent image content. The limited number of types of features that a local feature extractor responds to might be insufficient to provide a robust image representation. To overcome this limitation, we propose a context-aware feature extraction formulated under an information-theoretic framework. The algorithm does not respond to a specific type of features; the idea is to retrieve complementary features which are relevant within the image context. We empirically validate the method by investigating the repeatability, the completeness, and the complementarity of context-aware features on standard benchmarks. In a comparison with strictly local features, we show that our context-aware features produce more robust image representations. Furthermore, we study the complementarity between strictly local features and context-aware ones to produce an even more robust representation.
    Keywords: Local features, Keypoint extraction, Image content descriptors, Image representation, Visual saliency, Information theory.

    1. Introduction
    Local feature detection (or extraction, if we want to use a more semantically correct term [1]) is a central and extremely active research topic in the fields of computer vision and image analysis. While it is widely accepted that a good local feature extractor should retrieve distinctive, accurate, and repeatable features against a wide variety of photometric and geometric transformations, it is equally valid to claim that these requirements
  • Topic: 9 Edge and Line Detection
    Topic 9: Edge and Line Detection (DIA/TOIP)

    Contents:
    • First order differentials
    • Post-processing of edge images
    • Second order differentials
    • LoG and DoG filters
    • Models in images
    • Least squares line fitting
    • Cartesian and polar Hough transform
    • Mathematics of the Hough transform
    • Implementation and use of the Hough transform

    Edge Detection
    The aim of all edge detection techniques is to enhance or mark edges and then detect them. All need some type of high-pass filter, which can be viewed as either a first or second order differential.

    First order differentials: in one dimension we have f(x) and its derivative df(x)/dx. We can then detect the edge by a simple threshold:

        |df(x)/dx| > T  ⇒  edge

    But in two dimensions things are more difficult: ∂f(x,y)/∂x responds to vertical edges and ∂f(x,y)/∂y to horizontal edges, but we really want to detect edges in all directions.
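The one-dimensional threshold rule above can be sketched directly. This is a minimal illustration (the first difference f[x] − f[x−1] stands in for df/dx; names are illustrative):

```python
def edge_points_1d(f, T):
    """Approximate f'(x) by the first difference and mark positions
    where |f'(x)| > T as edges."""
    return [x for x in range(1, len(f)) if abs(f[x] - f[x - 1]) > T]

# A rising step at x = 3 and a falling step at x = 6 both exceed T = 5
signal = [0, 0, 0, 10, 10, 10, 2, 2]
```

The two-dimensional case applies the same idea along both axes and combines the responses, which is what the gradient-based detectors later in the topic formalize.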
  • Robust Corner and Tangent Point Detection for Strokes with Deep
    Robust corner and tangent point detection for strokes with a deep learning approach
    Long Zeng*, Zhi-kai Dong, Yi-fan Xu
    Graduate School at Shenzhen, Tsinghua University

    Abstract: A robust corner and tangent point detection (CTPD) tool is critical for sketch-based engineering modeling. This paper proposes a robust CTPD approach for hand-drawn strokes using a deep learning approach, denoted CTPD-DL. Its robustness across users, stroke shapes, and biased datasets is improved thanks to multi-scaled point contexts and a vote scheme. First, all stroke points are classified into segments by two deep learning networks, based on scaled point contexts which mimic human perception. Then, a vote scheme is adopted to analyze the merge conditions and operations for adjacent segments: if most points agree with a stroke's type, this type is accepted. Finally, new corners and tangent points are inserted at transition points. The algorithm's performance is evaluated on 1500 strokes of 20 shapes. Results show that our algorithm achieves 95.3% all-or-nothing accuracy and 88.6% accuracy on biased datasets, compared to 84.6% and 71% for the state-of-the-art CTPD technique, which is heuristic and empirical.
    Keywords: corner detection, tangent point detection, stroke segmentation, deep learning, ResNet.

    1. Introduction
    This work originates from our sketch-based engineering modeling (SBEM) system [1], which converts a conceptual sketch into a detailed part directly. A user-friendly SBEM should impose as few constraints on the sketching process as possible [2]. Such a system usually starts from hand-drawn strokes, which may contain more than one primitive (e.g.
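The vote scheme in the abstract — accept a segment type when most of its points agree — can be sketched as a majority vote. This is only an illustration of the idea; the paper's actual merge conditions and threshold are not specified here, so the names and the 0.5 agreement level are assumptions.

```python
from collections import Counter

def vote_segment_type(point_labels, agreement=0.5):
    """Accept the majority per-point classification for a stroke segment
    only when enough points agree; otherwise flag it for re-analysis."""
    label, count = Counter(point_labels).most_common(1)[0]
    return label if count / len(point_labels) > agreement else "ambiguous"
```

For example, a segment whose points are classified mostly as "line" keeps that type, while an even split stays ambiguous and would trigger the paper's merge analysis for adjacent segments.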
  • View Matching with Blob Features
    View Matching with Blob Features
    Per-Erik Forssén and Anders Moe
    Computer Vision Laboratory, Department of Electrical Engineering, Linköping University, Sweden

    Abstract
    This paper introduces a new region-based feature for object recognition and image matching. In contrast to many other region-based features, this one makes use of colour in the feature extraction stage. We perform experiments on the repeatability rate of the features across scale and inclination angle changes, and show that avoiding merging regions connected by only a few pixels improves the repeatability. Finally we introduce two voting schemes that allow us to find correspondences automatically, and compare them with respect to the number of valid correspondences they give, and their inlier ratios.

    1. Introduction
    …homographic transformation that is not part of the affine transformation. This means that our results are relevant for both wide baseline matching and 3D object recognition. A somewhat related approach to region-based matching is presented in [1], where tangents to regions are used to define linear constraints on the homography transformation, which can then be found through linear programming. The connection is that a line conic describes the set of all tangents to a region. By matching conics, we thus implicitly match the tangents of the ellipse-approximated regions.

    2. Blob features
    We will make use of blob features extracted using a clustering pyramid built using robust estimation in local image regions [2]. Each extracted blob is represented by its average colour p_k, area a_k, centroid m_k, and inertia matrix I_k. I.e.
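The blob representation named in Section 2 — area a_k, centroid m_k, and inertia matrix I_k — comes down to zeroth, first, and second central moments of a blob's pixel set. A minimal sketch (colour averaging omitted; names are illustrative, not the paper's code):

```python
def blob_moments(pixels):
    """Area, centroid and inertia (second central moment) matrix of a
    blob given as a list of (x, y) pixel positions."""
    a = len(pixels)
    mx = sum(x for x, _ in pixels) / a
    my = sum(y for _, y in pixels) / a
    ixx = sum((x - mx) ** 2 for x, _ in pixels) / a
    iyy = sum((y - my) ** 2 for _, y in pixels) / a
    ixy = sum((x - mx) * (y - my) for x, y in pixels) / a
    return a, (mx, my), ((ixx, ixy), (ixy, iyy))

# A 2x2 square blob: area 4, centroid at its middle, isotropic inertia
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
area, centroid, inertia = blob_moments(square)
```

The inertia matrix is what lets each blob be summarized as an approximating ellipse, whose tangents are then matched as described in the introduction.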
  • Edge Detection Low Level
    Image Processing - Lesson 10: Edge Detection (Low Level)
    • Edge detection masks
    • Gradient detectors
    • Compass detectors
    • Second derivative - Laplace detectors
    • Edge linking
    • Hough transform

    (Context: low-level image processing covers representation, compression and transmission, and image enhancement; edge/feature finding feeds image "understanding", i.e. high-level computer vision.)

    Point Detection
    Convolution with the mask

        -1 -1 -1
        -1  8 -1
        -1 -1 -1

    gives large positive values for a light point on a dark surround and large negative values for a dark point on a light surround. Example: a 5x5 image of value 5 with a single 100 at the centre,

        5 5   5 5 5
        5 5   5 5 5
        5 5 100 5 5
        5 5   5 5 5
        5 5   5 5 5

    convolved with the mask yields (over the valid interior)

        -95 -95 -95
        -95 760 -95
        -95 -95 -95

    Edge Definition and Edge Detection
    Line edge detectors use masks with a band of 2s between rows of -1s, in horizontal, vertical, and diagonal orientations, e.g.

        -1 -1 -1        -1  2 -1
         2  2  2        -1  2 -1
        -1 -1 -1        -1  2 -1

    Step edge detectors use masks with -1s on one side of the edge and 1s on the other, again in horizontal, vertical, and diagonal orientations.

    Edge Detection by Differentiation
    For a 1-D image f(x), step edges are detected by differentiation: compute the first derivative f'(x), then threshold |f'(x)|; pixels that pass the threshold are edge pixels.

    Gradient Edge Detection
    The gradient is

        ∇f(x,y) = ( ∂f/∂x , ∂f/∂y )

    with gradient magnitude sqrt( (∂f/∂x)² + (∂f/∂y)² ) and gradient direction tan⁻¹( (∂f/∂y) / (∂f/∂x) ).

    Differentiation in Digital Images
    Horizontal differentiation approximation: FA = ∂f(x,y)/∂x ≈ f(x,y) - f(x-1,y), i.e. convolution with [1 -1]. Vertical differentiation approximation: FB = ∂f(x,y)/∂y ≈ f(x,y) - f(x,y-1). The gradient is (FA, FB) with magnitude (FA² + FB²)^(1/2).
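The point-detection example above can be checked mechanically. A minimal sketch applying the 3x3 mask over the valid interior of the image (names are illustrative):

```python
POINT_MASK = [[-1, -1, -1],
              [-1,  8, -1],
              [-1, -1, -1]]

def correlate_valid(img, mask):
    """Slide a 3x3 mask over the image, keeping only positions where
    the mask fits entirely inside (the 'valid' region)."""
    h, w = len(img), len(img[0])
    return [[sum(img[y + i][x + j] * mask[i][j]
                 for i in range(3) for j in range(3))
             for x in range(w - 2)]
            for y in range(h - 2)]

img = [[5] * 5 for _ in range(5)]
img[2][2] = 100  # one bright point on a flat background
out = correlate_valid(img, POINT_MASK)
```

At the bright pixel the response is 8·100 − 8·5 = 760, and at each neighbour the lone 100 enters with weight −1, giving 8·5 − 7·5 − 100 = −95, matching the lesson's worked example.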