Introduction to Computer Vision & Robotics


Introduction to Computer Vision & Robotics
16 December 2020, Lecture 7
[email protected], Georg-August University, Göttingen, Germany

Last time we talked about:

1. Morphological image processing
• Erosion, dilation, opening, closing
• Detection of isolated pixels or certain (simple) shapes
• Convex hull
• Hole filling
• Morphological gradients

2. Image features
• Gradients and edges
• Corners

What is a feature descriptor?
• useful information
• simplifies an image (or image patch)
• much smaller in size than a whole image
• should not depend on rotation, scale, illumination, and so on
Descriptors are then used for matching!

Example of a naive descriptor
How can we describe these images in a compressed form? We can use only circle and color information.

Hough Transform
Basic shape descriptors: the Hough transform
• Invented and patented in the 1960s, then extended to arbitrary shapes in the 1970s
• The idea is to find imperfect instances of objects within a certain class of shapes by a voting procedure
• The simplest case of the Hough transform is detecting straight lines
• Here we'll consider lines and circles

Hough Lines
Different line parameterizations are possible (wiki/houghtransform):
• Cartesian coordinates: parameters (a, b), equation y = ax + b. Problem: this representation cannot handle vertical lines.
• Polar coordinates: parameters (r, θ), equation y = (−cos θ / sin θ) x + r / sin θ, or equivalently r = x cos θ + y sin θ.

• Idea of the Hough algorithm: vote for possibilities
• We need to discretize the voting space (called the accumulator)

Hough Algorithm
• Find feature locations in the image
• For each feature point x_i:
  • For each possibility p_j in the accumulator that passes through the feature point: increment the accumulator at this position
• Find the maximum element of the accumulator; its location reflects the corresponding parameter values
• Map back to image space

Example: Hough Lines Algorithm (figures: wiki/houghtransform)

OpenCV Functions

import cv2
import numpy as np

img = cv2.imread('dave.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)
lines = cv2.HoughLines(edges, 1, np.pi/180, 200)
for rho, theta in lines[0]:
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a * rho
    y0 = b * rho
    x1 = int(x0 + 1000 * (-b))
    y1 = int(y0 + 1000 * (a))
    x2 = int(x0 - 1000 * (-b))
    y2 = int(y0 - 1000 * (a))
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)

Example (Classical Hough): http://opencvexamples.blogspot.com
Example (Probabilistic Hough): docs.opencv.org

Circle Hough Transform
• We can also detect circles!
• A circle with radius R and center (a, b) has the parametric equations
  x = a + R cos θ
  y = b + R sin θ
• What if circles of different radii should be detected?
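The voting procedure for lines can be sketched in plain Python. This is a toy accumulator over (r, θ), not OpenCV's implementation; the function name and bin counts are illustrative:

```python
import math

def hough_lines(points, width, height, n_theta=180, n_r=180):
    # Accumulator over (theta, r), using r = x cos(theta) + y sin(theta)
    r_max = math.hypot(width, height)
    acc = [[0] * n_r for _ in range(n_theta)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            r = x * math.cos(theta) + y * math.sin(theta)
            # map r in [-r_max, r_max] onto a bin index
            ri = int(round((r + r_max) / (2 * r_max) * (n_r - 1)))
            acc[t][ri] += 1
    # the accumulator maximum gives the dominant line's parameters
    t_best, r_best = max(((t, ri) for t in range(n_theta) for ri in range(n_r)),
                         key=lambda tr: acc[tr[0]][tr[1]])
    r = r_best / (n_r - 1) * 2 * r_max - r_max
    return r, math.pi * t_best / n_theta

# edge points on the horizontal line y = 20 -> expect r near 20, theta near pi/2
pts = [(x, 20) for x in range(0, 100, 5)]
r, theta = hough_lines(pts, 100, 100)
```

All collinear points vote into the same (r, θ) cell, so the accumulator maximum recovers the line; the recovered r is only as accurate as the bin width.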
Hough Circles: Example (docs.opencv.org)

OpenCV Circle Detector

import numpy as np
import cv2 as cv

img = cv.imread('opencv-logo-white.png', 0)
img = cv.medianBlur(img, 5)
cimg = cv.cvtColor(img, cv.COLOR_GRAY2BGR)
circles = cv.HoughCircles(img, cv.HOUGH_GRADIENT, 1, 20,
                          param1=50, param2=30, minRadius=0, maxRadius=0)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    # draw the outer circle
    cv.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
    # draw the center of the circle
    cv.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)

Practical Problem
Industrial vision: check the status of a part

Why Features?
• Features are used for matching
• Template matching: find/locate a specific object inside an image (object recognition)
• Find corresponding features in multiple images to reconstruct the (3D) scene (navigation, mapping, tracking)

Template Matching, Panorama Creation, Matching (example figures)

Feature requirements
• Rotation invariant (e.g. Harris corner detection)
• Invariant under different lighting
• Scale invariant

Outline: SIFT (David Lowe)
• Identify scale-invariant keypoint candidates in the image
• Filter out bad candidates
• Create one feature vector for each valid candidate that describes the region around the keypoint

Feature descriptors
We now move beyond basic shapes, edges, and corners to more advanced feature descriptors. As before, a descriptor should carry useful information, simplify an image (or image patch), be much smaller than a whole image, and not depend on rotation, scale, illumination, and so on. Descriptors are then used for matching.
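For a fixed, known radius R, the circle variant of the voting scheme reduces to a 2-D accumulator over the center (a, b): each edge point votes for every center at distance R from it. A toy sketch in plain Python (names are illustrative, and this is not how cv.HoughCircles is implemented internally):

```python
import math

def hough_circle_centers(points, width, height, radius):
    # Accumulator over centers (a, b), using x = a + R cos(t), y = b + R sin(t)
    acc = [[0] * width for _ in range(height)]
    for x, y in points:
        # every candidate center at distance R from the edge point gets a vote
        for t in range(360):
            th = math.radians(t)
            a = int(round(x - radius * math.cos(th)))
            b = int(round(y - radius * math.sin(th)))
            if 0 <= a < width and 0 <= b < height:
                acc[b][a] += 1
    # the accumulator maximum is the detected center
    b, a = max(((r, c) for r in range(height) for c in range(width)),
               key=lambda rc: acc[rc[0]][rc[1]])
    return a, b

# synthetic edge points on a circle centered at (30, 40) with R = 12
pts = [(30 + 12 * math.cos(math.radians(t)), 40 + 12 * math.sin(math.radians(t)))
       for t in range(0, 360, 6)]
a, b = hough_circle_centers(pts, 64, 64, 12)
```

When the radius is also unknown, the accumulator gains a third dimension (a, b, R), which is exactly the "different radii" question raised above.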
SIFT: Scale-Invariant Feature Transform
• Invented (and patented) by David Lowe (1999, 2004)
• Invariant to scale, rotation, illumination, and viewpoint
• It gave rise to a whole family of advanced feature descriptors and has many extensions, e.g., SURF and PCA-SIFT
• It can robustly identify objects even among clutter and under partial occlusion
• Over the last decades it has found many applications in computer vision, robotics, etc.

Basic Idea
Image content is transformed into local features that are invariant to translation, rotation, scale, and other imaging parameters (courses.cs.washington.edu)

Algorithm Overview
1. Detector
• Construct a scale space
• Detect scale-space extrema
• Localize keypoints
2. Descriptor
• Assign an orientation to each keypoint
• Compute descriptor vectors
• Normalize the vectors

1. Scale Space Construction
• Goal: identify locations and scales that can be repeatably assigned under different views of the same scene or object
• Method: search for stable features across multiple scales using a continuous function of scale
• L(x, y, σ) = G(x, y, σ) ∗ I(x, y), where G(x, y, σ) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))
• The scale space of an image L(x, y, σ) is produced by progressively convolving the input image with Gaussian kernels at different scales

Blob Detection
• How do we detect blobs of a specific size? Use the Difference of Gaussians
• Convolving an image with a Gaussian kernel blurs it (suppresses the higher frequencies)
• Subtracting two images that were convolved with Gaussian kernels of different sizes preserves a band of frequencies (courses.cs.washington.edu)

Variable-Size Blob Detection
• Use filters of different sizes

Image Pyramids
• The bottom level is the original image
• Each next level is derived from the previous one according to some function (e.g., convolution with a Gaussian kernel and downsampling)
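The Difference-of-Gaussians idea can be illustrated in one dimension: blur a signal with two Gaussians whose widths differ by a factor of about √2 and subtract; the response peaks at the blob's center. A toy sketch (signal, sizes, and helper names are made up for the demo):

```python
import math

def gauss_kernel(sigma):
    # sampled 1-D Gaussian, normalized so the weights sum to 1
    half = int(3 * sigma) + 1
    k = [math.exp(-i * i / (2 * sigma * sigma)) for i in range(-half, half + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - half, 0), len(signal) - 1)  # clamp at borders
            acc += w * signal[idx]
        out.append(acc)
    return out

# a 1-D "blob": a bright plateau of width 5 centered at index 30
signal = [1.0 if 28 <= i <= 32 else 0.0 for i in range(60)]
fine = convolve(signal, gauss_kernel(2.0))
coarse = convolve(signal, gauss_kernel(2.0 * math.sqrt(2)))
dog = [f - c for f, c in zip(fine, coarse)]        # Difference of Gaussians
peak = max(range(len(dog)), key=lambda i: dog[i])  # strongest blob response
```

The fine blur keeps more of the plateau than the coarse blur, so their difference is largest where a blob of matching size sits.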
Scale Space Construction (continued)
• In SIFT, the scale-space image representation consists of N octaves, defined by two parameters s and σ
• Each octave is an ordered set of s + 3 images such that L(x, y, kᵐσ) = G(x, y, kᵐσ) ∗ fᵢ(x, y), where k = 2^(1/s), fᵢ is the i-th sub-sample of I, m = 0, 1, …, s + 2, and i = 1, …, N
• Example: for s = 2, k = √2 and there are five images in each octave (David Lowe)

2. Extrema Detection
• Apply the Difference of Gaussians (DoG) to approximate the Laplacian of Gaussian (LoG, a well-known blob detector)
• Detect maxima and minima of the DoG in scale space
• Each point is compared to its 8 neighbors at the same scale and to the 9 neighbors each at the scale above and the scale below within the same octave (David Lowe)

3. Keypoint Localization
• Local extrema are refined
• Reject points with bad contrast (thresholding)
• Reject points with a strong edge response in one direction only; such edge points are detected using the Hessian matrix (similar to Harris)
• Basically, we want to keep only corner-like keypoints; now we can compute the descriptors

The Aperture Problem
• Why do we want corner-like patches only and not edge-like features?

4. Orientation Assignment
• For each keypoint we have coordinates and a scale σ
• Use the scale of the point to choose the closest image L(x, y, σ) = G(x, y, σ) ∗ I(x, y)
• Compute the gradient magnitude and orientation at each keypoint:
  m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )
  θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )
• For the region around the keypoint (www.inf.fu-berlin.de):
  • Create a histogram with 36 orientation bins
  • Weight each sample with a Gaussian window of 1.5σ
  • Create a keypoint for every peak with value ≥ 0.8 of the maximum bin
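The gradient formulas and the 36-bin orientation histogram can be sketched in plain Python. This uses atan2 to avoid the division by zero hidden in arctan(dy/dx), and omits the Gaussian weighting for brevity; all names are illustrative:

```python
import math

def grad(L, x, y):
    # central differences, as in the slide's formulas for m and theta
    dx = L[y][x + 1] - L[y][x - 1]
    dy = L[y + 1][x] - L[y - 1][x]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def orientation_histogram(L, n_bins=36):
    # magnitude-weighted votes over all interior pixels of the patch
    hist = [0.0] * n_bins
    for y in range(1, len(L) - 1):
        for x in range(1, len(L[0]) - 1):
            m, theta = grad(L, x, y)
            b = int((theta + math.pi) / (2 * math.pi) * n_bins) % n_bins
            hist[b] += m
    return hist

# a ramp increasing to the right: every gradient points along +x (theta = 0)
patch = [[float(x) for x in range(8)] for _ in range(8)]
hist = orientation_histogram(patch)
dominant = max(range(36), key=lambda b: hist[b])  # bin 18 corresponds to theta = 0
```

The dominant bin then defines the keypoint's orientation, which is what makes the later descriptor rotation invariant.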
5. Keypoint Descriptors
• Each keypoint now has a location, a scale, and an orientation
• A descriptor for the local image region around each keypoint is computed
• It must be highly distinctive and invariant to changes in viewpoint and illumination

Computing the descriptor (http://aishack.in):
• Find the blurred image of the closest scale
• Take a 16 × 16 neighborhood around the keypoint
• Rotate the gradients and coordinates according to the previously computed orientation (for rotation invariance)
• Divide the region into 4 × 4 subregions
• Create a histogram with 8 orientation bins for each subregion
• The descriptor is 4 × 4 × 8 = 128 numbers
• Normalize the 128-vector for illumination invariance

Example matching (docs.opencv.org)

OpenCV Usage

import numpy as np
import cv2
from matplotlib import pyplot as plt

img1 = cv2.imread('box.png', 0)           # queryImage
img2 = cv2.imread('box_in_scene.png', 0)  # trainImage
sift = cv2.SIFT()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
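After knnMatch, the standard next step is Lowe's ratio test: keep a match only when the nearest neighbor is clearly closer than the second nearest. A toy version with made-up 4-D descriptors (real SIFT descriptors have 128 dimensions, and the 0.8 threshold is commonly chosen in the 0.7–0.8 range):

```python
import math

def euclid(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def ratio_test_matches(query, train, ratio=0.8):
    # keep (query index, train index) pairs that pass Lowe's ratio test
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((euclid(q, t), ti) for ti, t in enumerate(train))
        (d1, t1), (d2, _) = dists[0], dists[1]
        if d1 < ratio * d2:  # best match must be clearly better than the runner-up
            matches.append((qi, t1))
    return matches

# query 0 has one clear partner; query 1 is ambiguous between two train vectors
query = [[1.0, 0.0, 0.0, 0.0], [0.5, 0.5, 0.0, 0.0]]
train = [[0.95, 0.05, 0.0, 0.0], [0.52, 0.48, 0.0, 0.0], [0.48, 0.52, 0.0, 0.0]]
good = ratio_test_matches(query, train)
```

The ambiguous query is rejected because its two nearest neighbors are almost equally close; this is what suppresses unreliable matches in cluttered scenes.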