Administrivia · Scale Invariant Features · Why Extract Features?


CMPSCI 370: Intro to Computer Vision (Image processing), Scale Invariant Feature Transform (SIFT)
University of Massachusetts, Amherst, March 03, 2015. Instructor: Subhransu Maji

Administrivia
• Exam review session in next class; midterm in class (Thursday)
• All topics covered till the Feb 25 lecture (corner detection); closed book
• Grading issues: include all the information needed to grade the homework, and keep the grader happy :-)
• Candy wrapper extra credit for participation (5%)

Why extract features?
• Motivation: panorama stitching. We have two images; how do we combine them?
• Step 1: extract features. Step 2: match features. Step 3: align images. (Slide credit: L. Lazebnik)

Feature detection with scale selection
• We want to extract features with a characteristic scale that matches the image transformation, such as scaling and translation (a.k.a. covariance).
• Corner detection is sensitive to the image scale: a pattern detected as a corner at one scale may have all of its points classified as edges at another, so we need a way to match regions across scales. (Source: L. Lazebnik)

Scale invariant features: blob detection, basic idea
• Convolve the image with a "blob filter" at multiple scales.
• Look for extrema (maxima and minima) of the filter response in the resulting scale space, over both space and scale.
• This gives us a scale- and space-covariant detector. (Source: L. Lazebnik, N. Snavely)

Blob filter
• Laplacian of Gaussian: a circularly symmetric operator for blob detection in 2D:
  ∇²g = ∂²g/∂x² + ∂²g/∂y²

Recall: sharpening filter
• I = blurry(I) + sharp(I), so sharp(I) = I − blurry(I) = I∗e − I∗g_σ = I∗(e − g_σ), where e is the unit impulse and g_σ a Gaussian.

Recall: edge detection
• For a signal f, convolving with the derivative of a Gaussian, f ∗ (d/dx)g, gives an edge response that peaks at the edge.
• Convolving with the second derivative of a Gaussian (the Laplacian), f ∗ (d²/dx²)g, gives a response in which the edge corresponds to a zero crossing. (Source: S. Seitz)

From edges to blobs
• edge = ripple; blob = superposition of two ripples
• Spatial selection: the magnitude of the Laplacian response achieves a maximum at the center of the blob, provided the scale of the Laplacian is "matched" to the scale of the blob. (Source: L. Lazebnik)

Scale selection
• We want to find the characteristic scale of the blob by convolving it with Laplacians at several scales and looking for the maximum response.
• However, the Laplacian response decays as scale increases: for the original signal (radius = 8), the response shrinks with increasing σ. (Source: L. Lazebnik)

Scale normalization
• The response of a derivative-of-Gaussian filter to a perfect step edge decreases as σ increases; its peak value is 1/(σ√(2π)).
• To keep the response the same (scale-invariant), we must multiply the Gaussian derivative by σ.
• The Laplacian is the second Gaussian derivative, so it must be multiplied by σ². (Source: L. Lazebnik)
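To see this numerically, here is a minimal sketch (my own illustration, assuming NumPy and SciPy): it measures the Laplacian-of-Gaussian response at the center of a synthetic disc of radius 8. The unnormalized response favors small scales and dies out as σ grows, while the σ²-normalized response peaks near σ = r/√2 ≈ 5.7.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

r = 8                                              # blob radius, as in the slide's example
yy, xx = np.mgrid[-64:64, -64:64]
img = ((xx**2 + yy**2) <= r**2).astype(float)      # bright disc on a dark background

for sigma in [2.0, 4.0, r / np.sqrt(2), 8.0, 12.0]:
    raw = -gaussian_laplace(img, sigma)[64, 64]    # LoG response at the blob center
    print(f"sigma={sigma:5.2f}  raw={raw:7.4f}  normalized={sigma**2 * raw:7.4f}")
```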
Effect of scale normalization
• For the original signal, the unnormalized Laplacian response decays with scale, while the scale-normalized Laplacian response attains a genuine maximum at the matching scale. (Source: L. Lazebnik)

Blob detection in 2D
• Scale-normalized Laplacian of Gaussian, a circularly symmetric operator for blob detection in 2D:
  ∇²_norm g = σ² (∂²g/∂x² + ∂²g/∂y²)

Scale selection
• At what scale does the Laplacian achieve a maximum response to a binary circle of radius r?
• To get the maximum response, the zeros of the Laplacian have to be aligned with the circle.
• The Laplacian is given by (up to scale): (x² + y² − 2σ²) e^(−(x² + y²)/(2σ²))
• Therefore, the maximum response occurs at σ = r/√2. (Source: L. Lazebnik)

Characteristic scale
• We define the characteristic scale of a blob as the scale that produces the peak of the Laplacian response at the blob center.
• T. Lindeberg (1998). "Feature detection with automatic scale selection." International Journal of Computer Vision 30 (2): 77-116. (Source: L. Lazebnik)

Scale-space blob detector
1. Convolve the image with the scale-normalized Laplacian at several scales.
2. Find maxima of the squared Laplacian response in scale space. (Source: L. Lazebnik)

Efficient implementation
• Approximate the Laplacian with a difference of Gaussians:
  L = σ² (G_xx(x, y, σ) + G_yy(x, y, σ)) (Laplacian)
  DoG = G(x, y, kσ) − G(x, y, σ) (difference of Gaussians)
• (Exercise: is the Laplacian separable?)
• David G. Lowe. "Distinctive image features from scale-invariant keypoints." IJCV 60 (2), pp. 91-110, 2004. (Source: L. Lazebnik)

From feature detection to description
• Detected features come with characteristic scales and orientations: how should we represent the patches?
• Scaled and rotated versions of the same neighborhood give rise to blobs that are related by the same transformation.
• What to do if we want to compare the appearance of these image regions? Normalization: transform the regions into same-size circles.
• Problem: rotational ambiguity remains. (Source: L. Lazebnik)

Eliminating rotation ambiguity
• To assign a unique orientation to circular image windows: create a histogram of local gradient directions in the patch (over 0 to 2π) and assign the canonical orientation at the peak of the smoothed histogram. (Source: L. Lazebnik)

SIFT features
• David G. Lowe. "Distinctive image features from scale-invariant keypoints." IJCV 60 (2), pp. 91-110, 2004. (Source: L. Lazebnik)
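The two-step recipe above translates almost line for line into code. Below is a minimal sketch of the scale-space blob detector (my own illustration using a LoG stack rather than Lowe's DoG pyramid): build scale-normalized LoG responses at several scales, then keep points that are local maxima over both space and scale.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def detect_blobs(img, sigmas, thresh=0.1):
    # Step 1: scale-normalized (negative) LoG response at each scale.
    stack = np.stack([-(s**2) * gaussian_laplace(img.astype(float), s) for s in sigmas])
    # Step 2: keep points that are maxima in their 3x3x3 (scale, y, x) neighborhood.
    peaks = (stack == maximum_filter(stack, size=3)) & (stack > thresh)
    return [(y, x, sigmas[s]) for s, y, x in zip(*np.nonzero(peaks))]

# A disc of radius 8 should be detected with sigma near 8/sqrt(2) ~ 5.66.
yy, xx = np.mgrid[0:128, 0:128]
disc = (((xx - 64)**2 + (yy - 64)**2) <= 8**2).astype(float)
print(detect_blobs(disc, sigmas=np.linspace(2.0, 12.0, 21)))
```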
Recommended publications
  • Scale Invariant Interest Points with Shearlets
    Scale Invariant Interest Points with Shearlets. Miguel A. Duval-Poo1, Nicoletta Noceti1, Francesca Odone1, and Ernesto De Vito2. 1 Dipartimento di Informatica Bioingegneria Robotica e Ingegneria dei Sistemi (DIBRIS), Università degli Studi di Genova, Italy. 2 Dipartimento di Matematica (DIMA), Università degli Studi di Genova, Italy. Abstract: Shearlets are a relatively new directional multi-scale framework for signal analysis, which have been shown effective to enhance signal discontinuities such as edges and corners at multiple scales. In this work we address the problem of detecting and describing blob-like features in the shearlets framework. We derive a measure which is very effective for blob detection and closely related to the Laplacian of Gaussian. We demonstrate the measure satisfies the perfect scale invariance property in the continuous case. In the discrete setting, we derive algorithms for blob detection and keypoint description. Finally, we provide qualitative justifications of our findings as well as a quantitative evaluation on benchmark data. We also report experimental evidence that our method is very suitable to deal with compressed and noisy images, thanks to the sparsity property of shearlets. 1 Introduction: Feature detection consists in the extraction of perceptually interesting low-level features over an image, in preparation for higher level processing tasks. In the last decade a considerable amount of work has been devoted to the design of effective and efficient local feature detectors able to associate with a given interesting point also scale and orientation information. Scale-space theory has been one of the main sources of inspiration for this line of research, providing an effective framework for detecting features at multiple scales and, to some extent, to devise scale invariant image descriptors.
  • View Matching with Blob Features
    View Matching with Blob Features. Per-Erik Forssén and Anders Moe, Computer Vision Laboratory, Department of Electrical Engineering, Linköping University, Sweden. Abstract: This paper introduces a new region based feature for object recognition and image matching. In contrast to many other region based features, this one makes use of colour in the feature extraction stage. We perform experiments on the repeatability rate of the features across scale and inclination angle changes, and show that avoiding merging regions connected by only a few pixels improves the repeatability. Finally we introduce two voting schemes that allow us to find correspondences automatically, and compare them with respect to the number of valid correspondences they give, and their inlier ratios. 1. Introduction: … homographic transformation that is not part of the affine transformation. This means that our results are relevant for both wide baseline matching and 3D object recognition. A somewhat related approach to region based matching is presented in [1], where tangents to regions are used to define linear constraints on the homography transformation, which can then be found through linear programming. The connection is that a line conic describes the set of all tangents to a region. By matching conics, we thus implicitly match the tangents of the ellipse-approximated regions. 2. Blob features: We will make use of blob features extracted using a clustering pyramid built using robust estimation in local image regions [2]. Each extracted blob is represented by its average colour p_k, area a_k, centroid m_k, and inertia matrix I_k. I.e.
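As a concrete reading of that blob representation, the sketch below computes p_k, a_k, m_k and I_k for one binary region with NumPy. The helper name and the use of the pixel-coordinate covariance as the inertia matrix are my own illustrative choices; the paper's actual blobs come from the robust clustering pyramid of its reference [2].

```python
import numpy as np

def blob_stats(mask, image):
    """Summarize one blob: average colour p_k, area a_k, centroid m_k, inertia I_k.
    `mask` is a boolean H x W array selecting the blob; `image` is H x W x 3."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    p_k = image[mask].mean(axis=0)       # average colour over the blob's pixels
    a_k = len(pts)                       # area in pixels
    m_k = pts.mean(axis=0)               # centroid
    I_k = np.cov(pts.T)                  # 2x2 second-moment (inertia) matrix
    return p_k, a_k, m_k, I_k
```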
  • Feature Detection Florian Stimberg
    Feature Detection. Florian Stimberg. Outline: introduction; types of image features (edges, corners, ridges, blobs); SIFT (scale-space extrema detection, keypoint localization, orientation assignment, keypoint descriptor); resources. Introduction ● Image features are distinct image parts. ● Feature detection is often the first operation, defining which image parts to process later. ● Finding corresponding features in pictures is necessary for object recognition. ● Multi-image panoramas need to be "stitched" together at corresponding image features. ● The description of a feature is as important as the extraction. Types of image features ● Edges: sharp changes in brightness; most algorithms use the first derivative of the intensity; methods differ (one or two thresholds, etc.). ● Corners: points with two dominant and different edge directions in their neighbourhood. Moravec detector: the similarity between the patch around a pixel and overlapping patches in the neighbourhood is measured; low similarity in all directions indicates a corner (see the sketch after this entry). ● Ridges: curves whose points are local maxima or minima in at least one dimension; quality depends highly on the scale; used to detect roads in aerial images or veins in 3D magnetic resonance images. ● Blobs: points or regions brighter or darker than their surroundings; SIFT uses a method for blob detection. SIFT ● SIFT = Scale-invariant feature transform, first published in 1999 by David Lowe: a method to extract and describe distinctive image features. ● feature
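Since the outline only names the Moravec detector, here is a small sketch of the idea as I read it (my own unoptimized implementation, grayscale input assumed): compare each pixel's patch with patches shifted in a few directions and keep the minimum SSD, which is large only at corners.

```python
import numpy as np

def moravec(img, r=1, shifts=((1, 0), (0, 1), (1, 1), (1, -1))):
    img = img.astype(float)
    h, w = img.shape
    response = np.zeros((h, w))
    for y in range(r + 1, h - r - 1):
        for x in range(r + 1, w - r - 1):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            # SSD against patches shifted in each direction; a corner is
            # dissimilar in *all* directions, so its minimum SSD is large.
            ssd = [((img[y + dy - r:y + dy + r + 1, x + dx - r:x + dx + r + 1]
                     - patch) ** 2).sum() for dy, dx in shifts]
            response[y, x] = min(ssd)
    return response
```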
  • EECS 442 Computer Vision: Homework 2
    EECS 442 Computer Vision: Homework 2. Instructions • This homework is due at 11:59:59 p.m. on Friday October 18th, 2019. • The submission includes two parts: 1. To Gradescope: a pdf file as your write-up, including your answers to all the questions and key choices you made. You might like to combine several files to make a submission. Here is an example online link for combining multiple PDF files: https://combinepdf.com/. 2. To Canvas: a zip file including all your code. The zip file should follow the HW formatting instructions on the class website. • The write-up must be an electronic version. No handwriting, including plotting questions. LaTeX is recommended but not mandatory. Python Environment: We are using Python 3.7 for this course. You can find references for the Python standard library here: https://docs.python.org/3.7/library/index.html. To make your life easier, we recommend you install Anaconda 5.2 for Python 3.7.x (https://www.anaconda.com/download/). This is a Python package manager that includes most of the modules you need for this course. We will make use of the following packages extensively in this course: • Numpy (https://docs.scipy.org/doc/numpy-dev/user/quickstart.html) • OpenCV (https://opencv.org/) • SciPy (https://scipy.org/) • Matplotlib (http://matplotlib.org/users/pyplot_tutorial.html) 1 Image Filtering [50 pts]: In this first section, you will explore different ways to filter images. Through these tasks you will build up a toolkit of image filtering techniques. By the end of this problem, you should understand how the development of image filtering techniques has led to convolution.
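As a warm-up for that filtering section, here is a minimal reference convolution in NumPy, the kind of from-scratch implementation such homeworks typically ask for (my own sketch, not the official solution):

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 2D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(float), ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]            # convolution flips the kernel
    out = np.zeros(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = (padded[y:y + kh, x:x + kw] * flipped).sum()
    return out

blurred = convolve2d(np.random.rand(32, 32), np.ones((3, 3)) / 9.0)  # 3x3 box blur
```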
  • Image Analysis Feature Extraction: Corners and Blobs Corners And
    Image Analysis: Feature extraction, corners and blobs. Christophoros Nikou ([email protected]), University of Ioannina, Department of Computer Science. Images taken from: the Computer Vision course by Svetlana Lazebnik, University of North Carolina at Chapel Hill (http://www.cs.unc.edu/~lazebnik/spring10/); D. Forsyth and J. Ponce, Computer Vision: A Modern Approach, Prentice Hall, 2003; M. Nixon and A. Aguado, Feature Extraction and Image Processing, Academic Press, 2010. Motivation: panorama stitching. Step 1: feature extraction. Step 2: feature matching. Step 3: image alignment. Characteristics of good features • Repeatability: the same feature can be found in several images despite geometric and photometric transformations. • Saliency: each feature has a distinctive description. • Compactness and efficiency: many fewer features than image pixels. • Locality: a feature occupies a relatively small area of the image; robust to clutter and occlusion. Applications: feature points are used for motion tracking, image alignment, 3D reconstruction, object recognition, image indexing and retrieval, and robot navigation. Image curvature • Extends the notion of edges: the rate of change in edge direction. Points where the edge direction changes rapidly are characterized as corners. • We need some elements from differential geometry. Parametric form of a planar curve: C(t) = [x(t), y(t)] describes the points in a continuous curve as the endpoints of a position vector.
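A short numerical illustration of that parametric definition (assuming NumPy; the sampling and finite-difference scheme are my own choices): for a sampled curve C(t) = [x(t), y(t)], the curvature is kappa = (x' y'' − y' x'') / (x'^2 + y'^2)^(3/2), and a circle of radius R should give the constant value 1/R.

```python
import numpy as np

def curvature(x, y):
    dx, dy = np.gradient(x), np.gradient(y)        # first derivatives along t
    ddx, ddy = np.gradient(dx), np.gradient(dy)    # second derivatives
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

t = np.linspace(0, 2 * np.pi, 400)
kappa = curvature(5 * np.cos(t), 5 * np.sin(t))    # circle of radius 5
print(kappa[200])                                  # ~ 0.2 = 1/5
```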
  • Edges and Binary Image Analysis
    Edges and Binary Image Analysis. Thurs Jan 26 (1/25/2017), Kristen Grauman, UT Austin. Today • Edge detection and matching: process the image gradient to find curves/contours; comparing contours. • Binary image analysis: blobs and regions. Gradients -> edges. Primary edge detection steps: 1. Smoothing: suppress noise. 2. Edge enhancement: filter for contrast. 3. Edge localization: determine which local maxima from the filter output are actually edges vs. noise (threshold, then thin). Thresholding • Choose a threshold value t; set any pixels less than t to zero (off) and any pixels greater than or equal to t to one (on). • Thresholding the gradient magnitude image with a lower threshold keeps more edge pixels; a higher threshold keeps fewer. Canny edge detector • Filter the image with a derivative of Gaussian; find the magnitude and orientation of the gradient. • Non-maximum suppression: thin wide "ridges" down to single-pixel width. Check if a pixel is a local maximum along the gradient direction, selecting a single maximum across the width of the edge; this requires checking the interpolated pixels p and r. This turns thick regions of the gradient into curves. • Problem: pixels along an edge may not survive thresholding plus thinning (non-maximum suppression). • Hysteresis thresholding: define two thresholds, low and high; use the high threshold to start edge curves and the low threshold to continue them. • MATLAB: edge(image, 'canny'); see >> help edge. (Source: D. Lowe, L. Fei-Fei; slide credits: Steve Seitz, James Hays)
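For reference alongside the MATLAB call, the OpenCV equivalent looks like this (a sketch; the filename and the two hysteresis thresholds are placeholders to tune per image):

```python
import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # placeholder filename
img = cv2.GaussianBlur(img, (5, 5), 1.4)              # smoothing: suppress noise
# Hysteresis: the high threshold starts edge curves, the low one continues them.
edges = cv2.Canny(img, threshold1=50, threshold2=150)
cv2.imwrite("edges.png", edges)
```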
  • Lecture 10 Detectors and Descriptors
    This lecture is about detectors and descriptors, which are the basic building blocks for many tasks in 3D vision and recognition. We'll discuss some of the properties of detectors and descriptors and walk through examples. Lecture 10: Detectors and descriptors (Silvio Savarese, 16-Feb-15) • Properties of detectors: edge detectors, Harris, DoG. • Properties of descriptors: SIFT, HOG, shape context. Previous lectures have been dedicated to characterizing the mapping between 2D images and the 3D world (a point P = [x, y, z] in the 3D world projects to p = [x, y] in the 2D image). Now, we're going to put more focus on inferring the visual content in images. The question we will be asking in this lecture is: how do we represent images? There are many basic ways to do this. We can just characterize them as a collection of pixels with their intensity values. Another, more practical, option is to describe them as a collection of components or features which correspond to "interesting" regions in the image such as corners, edges, and blobs. Each of these regions is characterized by a descriptor which captures the local distribution of certain photometric properties such as intensities, gradients, etc. Feature extraction and description is the first step in many recent vision algorithms. The big picture: features can be considered as building blocks in many scenarios where it is critical to: 1) fit or estimate a model that describes an image or a portion of it; 2) match or index images; 3) detect objects or actions from images.
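To make the detector/descriptor pairing concrete, here is how the DoG detector plus SIFT descriptor discussed in this lecture is typically invoked in OpenCV (a sketch; the image path is a placeholder, and cv2.SIFT_create requires OpenCV 4.4 or later):

```python
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder image
sift = cv2.SIFT_create()                  # DoG keypoints + SIFT descriptors
keypoints, descriptors = sift.detectAndCompute(img, None)
# Each keypoint carries a location (kp.pt), scale (kp.size) and orientation
# (kp.angle); descriptors is an N x 128 matrix, one row per keypoint.
print(len(keypoints), descriptors.shape)
```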
  • Interest Point Detectors and Descriptors for IR Images
    DEGREE PROJECT IN COMPUTER SCIENCE, SECOND LEVEL, STOCKHOLM, SWEDEN 2015. Interest Point Detectors and Descriptors for IR Images: An Evaluation of Common Detectors and Descriptors on IR Images. JOHAN JOHANSSON, KTH ROYAL INSTITUTE OF TECHNOLOGY, SCHOOL OF COMPUTER SCIENCE AND COMMUNICATION (CSC). Master's Thesis at CSC. Supervisors: Martin Solli (FLIR Systems), Atsuto Maki. Examiner: Stefan Carlsson. Abstract: Interest point detectors and descriptors are the basis of many applications within computer vision. In the selection of which methods to use in an application, it is of great interest to know their performance against possible changes to the appearance of the content in an image. Many studies have been completed in the field on visual images while the performance on infrared images is not as charted. This degree project, conducted at FLIR Systems, provides a performance evaluation of detectors and descriptors on infrared images. Three evaluation steps are performed: the first evaluates the performance of detectors; the second, descriptors; and the third, combinations of detectors and descriptors. We find that the best performance is obtained by Hessian-Affine with LIOP, and that the binary combination of the ORB detector and the BRISK descriptor is a good alternative with comparable results but with computational efficiency increased by two orders of magnitude. Referat (Swedish abstract, translated): Detectors and descriptors for interest points in IR images. Detectors and descriptors are cornerstones of many applications within computer vision. When selecting a method for a specific application, it is of great interest to know how they perform against possible changes in how the content of an image appears.
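The thesis's recommended budget combination, ORB keypoints with BRISK descriptors, can be mixed and matched in OpenCV roughly as follows (my own sketch; the file name is hypothetical):

```python
import cv2

img = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical IR image
keypoints = cv2.ORB_create(nfeatures=1000).detect(img, None)          # ORB detector
keypoints, descriptors = cv2.BRISK_create().compute(img, keypoints)   # BRISK descriptor
# Binary descriptors are compared with Hamming distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
```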
  • A Survey of Feature Extraction Techniques in Content-Based Illicit Image Detection
    Journal of Theoretical and Applied Information Technology, 10th May 2016, Vol. 87, No. 1. © 2005-2016 JATIT & LLS. ISSN: 1992-8645, E-ISSN: 1817-3195, www.jatit.org. A SURVEY OF FEATURE EXTRACTION TECHNIQUES IN CONTENT-BASED ILLICIT IMAGE DETECTION. 1,2 S. HADI YAGHOUBYAN, 1 MOHD AIZAINI MAAROF, 1 ANAZIDA ZAINAL, 1 MAHDI MAKTABDAR OGHAZ. 1 Faculty of Computing, Universiti Teknologi Malaysia (UTM), Malaysia. 2 Department of Computer Engineering, Islamic Azad University, Yasooj Branch, Yasooj, Iran. E-mail: [email protected], [email protected], [email protected]. ABSTRACT: For many of today's youngsters and children, the Internet, mobile phones and digital devices generally are an integral part of life, and they can barely imagine life without social networking systems. Despite the many advantages of the Internet, it is hard to neglect its side effects on people's lives. Exposure to illicit images is very common among adolescents and children, with a variety of significant and often upsetting effects on their growth and thoughts. Thus, detecting and filtering illicit images is a hot and fast-evolving topic in computer vision. In this research we summarize the existing visual feature extraction techniques used for illicit image detection. Feature extraction can be separated into two sub-techniques, feature detection and description. This research presents the state-of-the-art techniques in each group. The evaluation measurements and metrics used in other research are summarized at the end of the paper.
  • Image Feature Detection and Matching in Underwater Conditions
    Image feature detection and matching in underwater conditions. Kenton Oliver(a), Weilin Hou(b), and Song Wang(a). (a) University of South Carolina, 201 Main Street, Columbia, South Carolina, USA; (b) Naval Research Lab, Code 7333, 1009 Balch Blvd., Stennis Space Center, Mississippi, USA. ABSTRACT: The main challenge in underwater imaging and image analysis is to overcome the effects of blurring due to the strong scattering of light by the water and its constituents. This blurring adds complexity to already challenging problems like object detection and localization. The current state-of-the-art approaches for object detection and localization normally involve two components: (a) a feature detector that extracts a set of feature points from an image, and (b) a feature matching algorithm that tries to match the feature points detected from a target image to a set of template features corresponding to the object of interest. A successful feature matching indicates that the target image also contains the object of interest. For underwater images, the target image is taken in underwater conditions while the template features are usually extracted from one or more training images that are taken out-of-water or in different underwater conditions. In addition, the objects in the target image and the training images may show different poses, including rotation, scaling, translation transformations, and perspective changes. In this paper we investigate the effects of various underwater point spread functions on the detection of image features using many different feature detectors, and how these functions affect the capability of these features when they are used for matching and object detection. This research provides insight to further develop robust feature detectors and matching algorithms that are suitable for detecting and localizing objects from underwater images.
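The template-to-target pipeline the paper describes can be mocked up as follows (a sketch only: a Gaussian blur plus noise is a crude stand-in for the real underwater point spread functions the paper studies, and the file name is a placeholder):

```python
import cv2
import numpy as np

template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # out-of-water image
# Crude stand-in for an underwater PSF: strong Gaussian blur plus sensor noise.
target = cv2.GaussianBlur(template, (0, 0), sigmaX=3.0).astype(float)
target = np.clip(target + np.random.normal(0, 5, target.shape), 0, 255).astype(np.uint8)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(template, None)
kp2, des2 = sift.detectAndCompute(target, None)
# Lowe's ratio test keeps only distinctive template-to-target matches.
good = [m for m, n in cv2.BFMatcher().knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} matches survive the simulated blur")
```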
  • Edge Detection and Ridge Detection with Automatic Scale Selection
    Edge detection and ridge detection with automatic scale selection. Tony Lindeberg, Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, KTH (Royal Institute of Technology), S-100 44 Stockholm, Sweden. http://www.nada.kth.se/~tony Email: [email protected]. Technical report ISRN KTH/NA/P-96/06-SE, May 1996, revised August 1998. Int. J. of Computer Vision, vol 30, number 2, 1998 (in press). Shortened version in Proc. CVPR'96, San Francisco, June 1996. Abstract: When computing descriptors of image data, the type of information that can be extracted may be strongly dependent on the scales at which the image operators are applied. This article presents a systematic methodology for addressing this problem. A mechanism is presented for automatic selection of scale levels when detecting one-dimensional image features, such as edges and ridges. A novel concept of a scale-space edge is introduced, defined as a connected set of points in scale-space at which: (i) the gradient magnitude assumes a local maximum in the gradient direction, and (ii) a normalized measure of the strength of the edge response is locally maximal over scales. An important consequence of this definition is that it allows the scale levels to vary along the edge. Two specific measures of edge strength are analysed in detail: the gradient magnitude, and a differential expression derived from the third-order derivative in the gradient direction. For a certain way of normalizing these differential descriptors, by expressing them in terms of so-called γ-normalized derivatives, an immediate consequence of this definition is that the edge detector will adapt its scale levels to the local image structure.
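In one dimension the scheme reduces to a few lines (a sketch under my own discretization; γ = 1/2 is one of the normalizations analysed in the article): compute the γ-normalized gradient magnitude σ^γ |dL/dx| over a range of scales and, at each position, select the scale where it is maximal. For a diffuse edge, the selected scale roughly tracks the edge's own width.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def selected_scales(signal, sigmas, gamma=0.5):
    # sigma^gamma * |dL/dx| at every scale: the gamma-normalized edge strength.
    strength = np.stack([s**gamma * np.abs(gaussian_filter1d(signal, s, order=1))
                         for s in sigmas])
    return sigmas[np.argmax(strength, axis=0)]     # per-position scale selection

x = np.arange(400, dtype=float)
diffuse_edge = 1.0 / (1.0 + np.exp(-(x - 200) / 5.0))   # step of width ~5 samples
print(selected_scales(diffuse_edge, np.linspace(0.5, 10, 20))[200])
```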
  • Machine Learning for Blob Detection in High-Resolution 3D Microscopy Images
    DEGREE PROJECT IN COMPUTER SCIENCE AND ENGINEERING, SECOND CYCLE, 30 CREDITS, STOCKHOLM, SWEDEN 2018. Machine learning for blob detection in high-resolution 3D microscopy images. MARTIN TER HAAK, KTH ROYAL INSTITUTE OF TECHNOLOGY, SCHOOL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE. EIT Digital Data Science. Date: June 6, 2018. Supervisor: Vladimir Vlassov. Examiner: Anne Håkansson. Electrical Engineering and Computer Science (EECS). Abstract: The aim of blob detection is to find regions in a digital image that differ from their surroundings with respect to properties like intensity or shape. Bio-image analysis is a common application where blobs can denote regions of interest that have been stained with a fluorescent dye. In image-based in situ sequencing for ribonucleic acid (RNA), for example, the blobs are local intensity maxima (i.e. bright spots) corresponding to the locations of specific RNA nucleobases in cells. Traditional methods of blob detection rely on simple image processing steps that must be guided by the user. The problem is that the user must seek the optimal parameters for each step, which are often specific to that image and cannot be generalised to other images. Moreover, some of the existing tools are not suitable for the scale of the microscopy images, which are often in very high resolution and 3D. Machine learning (ML) is a collection of techniques that give computers the ability to "learn" from data. To eliminate the dependence on user parameters, the idea is to apply ML to learn the definition of a blob from labelled images.
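The kind of classical, parameter-driven pipeline the thesis wants to replace looks like this in scikit-image (a sketch; the file name is a placeholder, and every keyword shown is exactly the sort of per-image tuning knob at issue):

```python
from skimage import io
from skimage.feature import blob_log

img = io.imread("stained_section.tif", as_gray=True)   # placeholder microscopy image
# min_sigma, max_sigma, num_sigma and threshold all need hand-tuning per image.
blobs = blob_log(img, min_sigma=1, max_sigma=10, num_sigma=10, threshold=0.1)
for y, x, sigma in blobs:
    print(f"blob at ({y:.0f}, {x:.0f}), radius ~ {sigma * 2**0.5:.1f}px")
```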