

Master Thesis Electrical Engineering with emphasis on Signal Processing February 2018

Estimation and Correction of the Distortion in a Forensic Image due to Rotation of the Photo Camera

Sathwika Bavikadi Venkata Bharath Botta

Department of Applied Signal Processing Blekinge Institute of Technology SE–371 79 Karlskrona, Sweden

This thesis is submitted to the Department of Applied Signal Processing at Blekinge Institute of Technology in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering with Emphasis on Signal Processing.

Contact Information: Author(s):

Sathwika Bavikadi E-mail: [email protected]

Venkata Bharath Botta E-mail: [email protected]

Supervisor: Irina Gertsovich

University Examiner: Dr. Sven Johansson

Department of Applied Signal Processing
Blekinge Institute of Technology
SE–371 79 Karlskrona, Sweden

Internet: www.bth.se
Phone: +46 455 38 50 00
Fax: +46 455 38 50 57

Abstract

Images, unlike text, represent an effective and natural communication medium for humans, due to their immediacy and the ease of understanding the image content. Shape recognition and pattern recognition are among the most important tasks in image processing. Crime scene photographs should always be in focus and a ruler should always be present; this gives the investigators the ability to resize the image to accurately reconstruct the scene. Therefore, the camera must be on a grounded platform such as a tripod. Due to the rotation of the camera around the camera center, a distortion exists in the image which must be minimized. The distorted image shall be corrected using a transformation method. This task is quite challenging and crucial, because any change in the images may cause investigators to misidentify an object. We used the Hough transform (HT) technique to resolve the distortion in the image caused by rotation of the camera around the camera center.

Forensic image processing can help the analyst extract information from low-quality, noisy, or geometrically distorted images. Obviously, the desired information must be present in the image, although it may not be apparent or visible. Considering the challenges in complex forensic investigations, we understand the importance and sensitivity of the data in forensic images. The HT is an effective technique for detecting and locating shapes within noisy images. It is a typical method to detect or segment geometric objects in images. Specifically, the straight-line detection case has been ingeniously exploited in several applications. The main advantage of the HT technique is that it is tolerant of gaps in feature boundary descriptions and is relatively unaffected by image noise. The HT and its extensions constitute a popular and robust method for extracting analytic curves, and the HT has attracted a lot of research effort over the decades. The main motivations behind such interest are the noise immunity, the ability to deal with occlusion, and the expandability of the transform. Many variations of it have evolved, covering a whole spectrum of shape detection from lines to irregular shapes.

This master thesis presents a contribution within the field of forensic image processing. Two different approaches, the Hough Line Transformation (HLT) and the Hough Circular Transformation (HCT), are followed to address this problem. Error estimation and validation are done with the help of the root mean square error method. The performance of the two methods is evaluated by comparing them. We present our solution as an application in the MATLAB environment, specifically designed to be used as a forensic tool for forensic images.

Keywords: Hough Transformation, Hough Circular Transformation, Image Rotation, Forensic Scales.


Acknowledgement

First and foremost, we would like to express our deep and sincere gratitude to our supervisor Irina Gertsovich for her continuous support and motivation throughout our thesis study and research. We would like to express our deepest gratitude to the entire Department of Applied Signal Processing for helping us throughout our research and our master's education at Blekinge Institute of Technology (BTH). Also, we would like to thank our teachers Dr. Benny Lövström, Dr. Josef Ström Bartunek, and Irina Gertsovich, who taught us a lot during our studies at BTH. A special thanks to our fellow students and friends who have been a great moral support to us during our time at BTH. Finally, a huge thanks to our family, mainly our parents, for being very supportive, believing in us throughout our studies, and encouraging us to travel to Sweden to do research in the area of our interest.

Sathwika Bavikadi

Venkata Bharath Botta

Contents

Abstract

1 Introduction
  1.1 Motivation
  1.2 Aim and Objectives
  1.3 Problem Statement
  1.4 Research Questions
  1.5 Survey of Related Works
  1.6 Proposed Solution Based on Related Work
  1.7 Outline of the Thesis

2 Background
  2.1 Perspective
  2.2 Rotation in a Photo Camera
  2.3 Possible Issues During Acquisition of Scale Images
    2.3.1 Angle of View of the Capture of the Image
    2.3.2 Photograph Viewing Distance
  2.4 Forensic Scales

3 Method
  3.1 Hough Transformation
  3.2 Hough Linear Transformation
  3.3 Hough Circular Transformation
  3.4 Image Rotation
  3.5 Edge Detection

4 Implementation
  4.1 Analysis
  4.2 Proposed Method
    4.2.1 Equipment used to Acquire the Images
    4.2.2 Flow Charts
    4.2.3 Limitations
  4.3 Correcting Image Distortion using HCT
    4.3.1 Input Image
    4.3.2 Gray Scale
    4.3.3 Rotation of Image about Origin
    4.3.4 Edge Detection
    4.3.5 Measuring Rotated Angle in the Distorted Image
    4.3.6 Correction of Rotated Image
    4.3.7 Comparison between Reference Image and Restored Image
  4.4 Correcting Image Distortion using HLT
    4.4.1 Input Image
    4.4.2 Gray Scale
    4.4.3 Rotation of Image about Origin
    4.4.4 Edge Detection
    4.4.5 Line Detection using HLT
    4.4.6 Estimating the Angle of Rotation in Distorted Image
    4.4.7 Correction of Rotated Image
    4.4.8 Comparison between Reference Image and Restored Image
  4.5 Correcting Image Distortion Using HCT Without Reference Image
    4.5.1 Error Analysis
  4.6 Correcting Image Distortion using HLT Without Reference Image
    4.6.1 Error Analysis

5 Results And Discussion
  5.1 Comparison of HCT with HLT
  5.2 Discussion

6 Conclusions and Future Work
  6.1 Conclusion
  6.2 Future Works

References

List of Figures

2.1 Forensic Scales

3.1 Polar Representation of a Straight Line
3.2 Image Rotation

4.1 Flow Chart of Method With Reference Image
4.2 Flow Chart of Method Without Reference Image
4.3 Input Image Without any Rotation
4.4 Gray Scaling of Input Image
4.5 Input Image Rotated to 30° about Origin
4.6 Canny Edge Detection
4.7 Measuring Angle in Distorted Image
4.8 Inverse Rotation of Image about Origin (Restored Image)
4.9 Comparing Reference Image and Restored Image
4.10 Overlap of Reference Image and Restored Image
4.11 Input Image Without any Rotation
4.12 Gray Scaling of Input Image
4.13 Input Image Rotated to 45° about Origin
4.14 Canny Edge Detection
4.15 Line Detection using HLT
4.16 Estimating the Angle of Rotation
4.17 Inverse Rotation of Image about Origin (Restored Image)
4.18 Comparing Reference Image and Restored Image
4.19 Overlap of Reference Image and Restored Image
4.20 Input Image with Unknown Angle of Rotation
4.21 Gray-Scaling of Input Image
4.22 Edge Detection
4.23 Locating the Circles in the Image
4.24 Estimation of Angle of Rotation
4.25 Inverse Rotation of Image about Origin (Restored Image)
4.26 Error Analysis
4.27 Error Analysis
4.28 Input Image with Unknown Angle of Rotation
4.29 Gray Scaling of Input Image
4.30 Edge Detection of the Given Image
4.31 Line Detection of the Longest Leg of the Scale
4.32 Estimation of Angle of Rotation
4.33 Inverse Rotation of Image about Origin (Restored Image)
4.34 Error Analysis
4.35 Error Analysis

5.1 HCT for Image having L-shaped Forensic Scale with 5 Circles
5.2 HLT for Image having L-shaped Forensic Scale with 5 Circles
5.3 HCT for Image having Forensic Scale with 2 Circles
5.4 HLT for Image having Forensic Scale with 2 Circles
5.5 HCT for Image having Forensic Scale with 3 Circles
5.6 HLT for Image having Forensic Scale with 3 Circles

List of Tables

5.1 Comparing HCT and HLT for Scale with 5 Circles
5.2 Comparing HCT and HLT for Scale with 3 Circles
5.3 Comparing HCT and HLT for Scale with 2 Circles
5.4 Comparing Different Scales

List of Abbreviations

ABFO Scale American Board of Forensic Odontology Scale

HCT Hough Circular Transformation

HLT Hough Line Transformation

HT Hough Transformation

RCD Randomized Circle Detection

RMSE Root Mean Square Error


Chapter 1 Introduction

1.1 Motivation
Image analysis is one of the fundamental components of a variety of activities. It can be as simple as reading bar-coded tags or as sophisticated as identifying a person from their face. In the field of forensics, a crime scene can be a major source of evidence that is used to associate suspects with scenes. Crime scene photography differs from other variations of photography; usually, there is a very specific purpose for capturing each image. These images are later analyzed by analysts and experts from a forensic laboratory.

Crime scene photographs should always be in focus and a ruler should always be present; this gives the investigators the ability to resize the image to accurately reconstruct the scene. Therefore, the camera must be on a grounded platform such as a tripod. This task is quite challenging, because any distortion of the images may cause investigators to misidentify an object.

1.2 Aim and Objectives
The main aim of this thesis is to reduce the distortion in the image caused by rotation of the camera around the camera center. The objectives of the thesis are as follows:
a) Investigating various methods proposed to address the problem of rotation.
b) Estimating the angle by which the camera is rotated.
c) Correcting the distorted image using a transformation method.
d) Minimizing the error in the corrected image.
e) Implementing the correction of the distorted image and the error validation in MATLAB.


1.3 Problem Statement
When photographs are taken without due care by a photographer, the camera may not be correctly oriented with respect to the horizontal. This results in a tilted, or rotated, image. The rotation can be corrected by rotating the electronic image by an angle equal to the angle of rotation.

So, post-processing images distorted by the photo camera to improve their visual appearance has been an active research area, which could serve the purpose of several forensic image applications.

1.4 Research Questions
• How to estimate whether the image has distortion?

• Why is it significant to choose a transformation method for correcting the rotation?

• How to validate the minimization of error?

1.5 Survey of Related Works
Research in image processing has become very popular nowadays. Various approaches have been developed for distortion correction in a photo camera.

Irina Gertsovich [9] proposed a new approach for automatic detection of the resolution of a scale or a ruler in forensic images. The proposed method could be used to automatically rescale images to an equal scale, thus allowing the images to be compared digitally.

Chee-Woo Kang [15] proposed a rotation transformation method which extracts straight line segments from an edge image by rotating the edge image and searching the rotated image. It is also shown that the rotation transform is a generalized Hough transform.

Chen [6] proposes a different approach for detecting circles using a Randomized Circle Detection (RCD) algorithm. The main concept is to first select four edge points randomly in the image and define a distance criterion to determine whether there is a possible circle that passes through the edge points; after a possible circle has been determined, an evidence-collecting process investigates whether the possible circle is a true one.

1.6 Proposed Solution Based on Related Work
Forensic scales generally have a unique pattern on them; most of them are L-shaped, and they also have a few circles on the scale. The pattern and the circles play a crucial role in finding out whether the scale is rotated or not with respect to the photo camera.

The Hough Transformation (HT) is one of the important methods in the field of image processing for efficiently finding lines or curves in an image. In 1962, Hough earned the patent for the method [17]. It remains an important tool even after its golden jubilee year of existence, as evidenced by more than 2500 research papers dealing with its variants, generalizations, properties and applications in diverse fields. The Hough transform is most commonly used for the detection of regular curves such as lines, circles, ellipses, etc.

Detecting a single circle in a given image is simple, but it becomes complicated when there are multiple circles in the image. Rudolf Scitovski [20] addressed the problem of detecting multiple circles in a given image. The multiple-circle detection problem has been solved by applying center-based clustering to reconstruct or detect the circles.

1.7 Outline of the Thesis
The report has been structured as follows:
Chapter 1 introduces the topic of the thesis work.
Chapter 2 provides the conceptual background on distortion correction and the transformation methods used to address the problem of rotation.
Chapter 3 describes the theory related to the transformation method proposed in the thesis.
Chapter 4 explains the implementation of the proposed method.
Chapter 5 discusses the results obtained.
Chapter 6 concludes the work and provides a path for continuing the work.

Chapter 2 Background

In this chapter a detailed explanation of all the selected methods used in the thesis is given. The knowledge obtained from this chapter is helpful for understanding the subsequent chapters of this thesis document.

2.1 Perspective
The term perspective can be described in many ways, but in the context of photogrammetry it means “the relative size of objects within an image as captured by the camera system”. In simple terms, the appearance of an object becomes smaller as the camera moves away from it. This relativity allows measurements within the image to correlate with real-world data [3].

The relationship between the captured angle of an image and the angle at which the image is viewed has an influence on perspective distortion.

2.2 Rotation in a Photo Camera
In some instances, even when the camera is placed perpendicularly to the object, the image obtained can have some rotation to it. The current thesis is based on forensic images, where every detail in the image is of high importance. Any kind of distortion can lead to loss of information, so if the image of the object is rotated, the information obtained from the image might be misunderstood. A measurable image is obtained by properly restoring the image through image processing. The focus of this research is to obtain the transformation model of the images.


2.3 Possible Issues During Acquisition of Scale Images

2.3.1 Angle of View of the Capture of the Image
The point of view from which the camera captures the image accounts for the perspective distortion in the image when the photograph is viewed from a normal viewing distance. Perspective distortion is especially observable in images captured at a short camera-to-subject distance with wide-angle lenses. For instance, when a person is photographed with an ultra-wide-angle lens up close, their nose, eyes and lips can appear unrealistically large, while their ears can look extremely small or even completely disappear from the image. Photographing the same subject identically while using a moderate telephoto or long-focus lens flattens the image to a more flattering perspective [1].

2.3.2 Photograph Viewing Distance
The normal viewing distance of an image is approximately equal to the diagonal of the image. When viewing at this distance, the perspective distortion in the image is apparent. However, theoretically, if one views an image having extension distortion (wide angle) at a closer distance, enlarging the angle of view abates the phenomenon. Similarly, viewing an image having compression distortion at a greater distance, narrowing the angle of view, abates the phenomenon. In both cases the apparent distortion completely disappears at a certain distance. According to Nikki Turner, “Sometimes two people could look at the same picture and see different images. Position determines perspective.” [19]

2.4 Forensic Scales
Forensic scales [8] act as a geometrical reference for a crime scene object during the investigation. Using these scales while capturing the image allows the investigators to rebuild the dimensional context of the crime scene and gives a way to reproduce photos of physical evidence. The most common ruler used in crime scene photography is the L-shaped plastic ruler. Figure 2.1(a) shows different scales used in crime scene photography. The community recognized the American Board of Forensic Odontology (ABFO) No. 2 standard reference scale as a reliable and precise reference scale, see Figure 2.1(b). In a dimensional review of forensic photography, the authors surveyed the commercially available scales and found a lack of consistency and an absence of strict adherence to standards in the manufacturing process [8]. The authors' study also seeks to assess the quality of forensic scales, document the manufacturing process, and recommend pathways for establishing standards for forensic scales that will serve as a method for guaranteeing accuracy and user confidence.

(a) Commonly used Forensic Scales

(b) ABFO Scales

Figure 2.1: Forensic Scales

Chapter 3 Method

This chapter discusses the method implemented for detection, estimation and correction of tilt in an image. Many methods exist for this task, but the main motive for selecting this approach is that it is more reliable, and the evolution of the HT is an ongoing process.

3.1 Hough Transformation
The Hough Transform was introduced and patented by Paul V.C. Hough in 1962 [13] for detecting lines in a picture. Later, the HT was extended by Richard O. Duda and Peter E. Hart to fit curves such as circles [7] in the image processing and pattern recognition area. The HT is applied only after the image has undergone some pre-processing techniques, such as gray-scale conversion and edge detection; the pre-processing increases the accuracy of shape detection [2]. The concept is that each edge pixel in the image is converted to a parametric form, and candidate peaks in an accumulator array are selected through a voting process. The votes are accumulated in the accumulator array, and the maxima are recognized as the desired pattern [5]. The HT is an effective technique for detecting and finding lines within noisy images. It is a typical method to detect or segment geometric shapes in images. Specifically, the straight-line detection case has been ingeniously exploited in several applications. The main advantage of the HT technique is that it is tolerant of gaps in feature boundary descriptions and is relatively unaffected by image noise. The HT and its extensions constitute a popular and robust method for extracting analytic curves. It was initially applied to the recognition of straight lines (by P.V.C. Hough, 1962) [13], later extended to circles and general curve fitting (by Richard O. Duda and Peter E. Hart, 1972) [7] and to arbitrarily shaped objects (by D.H. Ballard, 1980) [5]. The HT technique is particularly used to identify features of a particular shape within an image, such as straight lines, curves and circles [18] [12].

The evolution of the transform will keep going. As the transform has had a fruitful history, it has a good chance of a bright future for decades to come.


3.2 Hough Linear Transformation
The most basic HT is the Hough Line Transform (HLT). To detect lines [10], edge points in the given image are found by edge detection techniques, and then the HLT is applied to the edge points, where each edge point is considered to be a point on a possible set of lines. A line segment can be described analytically in numerous forms. However, a convenient equation for describing a set of lines uses the polar, or normal, notation

x cos(θ) + y sin(θ) = ρ, (3.1)

where (x, y) indicates the position of an edge pixel on the line in image space, ρ is the distance from the origin to the line, and θ is the orientation of ρ with respect to the x-axis, as shown in Figure 3.1.

Figure 3.1: Polar Representation of a Straight Line

Algorithm for line detection:

• Partition the ρθ-plane into an accumulator A with parameters ρ ∈ [ρmin, ρmax] and θ ∈ [θmin, θmax].
• The range of θ is 90° (horizontal lines have θ = 0° and vertical lines have θ = 90°).
• The range of ρ is ±√(M² + N²) if the size of the image is M × N.
• The discretization of θ and ρ must use step sizes δθ and δρ giving acceptable precision and sizes of the parameter space.
• The accumulator entry A(i, j) is associated with the parameter values (θj, ρi).
• Fix one parameter, loop over the others, and increment the corresponding entry in the accumulator, i.e., A(ρ, θ) = A(ρ, θ) + 1.
• Peaks in the accumulator are chosen as lines in the image.
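The thesis implementation is in MATLAB; purely as an illustration, the voting procedure above can be sketched in NumPy. The function name and the discretization steps below are our own choices, not taken from the thesis:

```python
import numpy as np

def hough_lines(edges, d_theta=1.0, d_rho=1.0):
    """Vote the edge pixels of a binary image into a (rho, theta) accumulator."""
    M, N = edges.shape
    rho_max = np.hypot(M, N)                      # rho spans +/- sqrt(M^2 + N^2)
    thetas = np.deg2rad(np.arange(-90.0, 90.0, d_theta))
    rhos = np.arange(-rho_max, rho_max + d_rho, d_rho)
    A = np.zeros((len(rhos), len(thetas)), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # each edge point casts one vote per theta: rho = x cos(theta) + y sin(theta)
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((r + rho_max) / d_rho).astype(int)
        A[idx, np.arange(len(thetas))] += 1
    return A, rhos, thetas

# A horizontal row of 10 edge pixels should produce one dominant accumulator peak.
edges = np.zeros((5, 10), dtype=bool)
edges[2, :] = True
A, rhos, thetas = hough_lines(edges)
i, j = np.unravel_index(np.argmax(A), A.shape)
```

Note that with this common parameterization a horizontal row of pixels peaks at θ = ±90°; the θ convention (where horizontal lines fall) differs between implementations.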

3.3 Hough Circular Transformation
The Hough Circular Transform (HCT) is a specialization of the HT [14]; the purpose of the technique is to find the circles in images. Each edge pixel in the image is transformed into a 3D Hough space, the so-called accumulator. The first two dimensions of the accumulator correspond to the coordinates of the circle center and the third dimension is its radius. The circle candidates are produced by voting all possible circles from the edge points into the accumulator. The local maxima of the accumulator give the circle candidates, and the accumulator cells with the greatest number of votes are taken as the centers.

The HT can also be used to estimate the parameters of a circle when many points that fall on its perimeter are known. The characteristic equation of a circle with radius r and center (a, b) is given by

(x − a)2 + (y − b)2 = r2 (3.2)

Algorithm for detection of circle:

• Preprocess the image: conversion to grayscale, edge detection, filtering.

• Initialize a 3D accumulator ‘A’ with parameters a, b, r (A(a, b, r)=0).

• Fix one parameter and loop for the others.

• Increment corresponding entry in accumulator i.e. A(a, b, r) = A(a, b, r)+1 and vote all possible circles in A.

• The local maximum voted circle of accumulator gives circle Hough space.

• The maximum voted circle of the accumulator gives the candidate circle.
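As with the line transform, the 3D voting scheme above can be sketched in NumPy for illustration (the thesis itself uses MATLAB; the function name, the sampled vote directions, and the synthetic test image below are our own assumptions):

```python
import numpy as np

def hough_circles(edges, radii):
    """Vote edge pixels into a 3-D accumulator A(b, a, r) for candidate circles."""
    M, N = edges.shape
    A = np.zeros((M, N, len(radii)), dtype=int)
    ys, xs = np.nonzero(edges)
    phis = np.deg2rad(np.arange(0, 360, 4))       # sampled vote directions
    for k, r in enumerate(radii):
        for x, y in zip(xs, ys):
            # every centre at distance r from the edge point receives a vote
            a = np.round(x - r * np.cos(phis)).astype(int)
            b = np.round(y - r * np.sin(phis)).astype(int)
            ok = (a >= 0) & (a < N) & (b >= 0) & (b < M)
            np.add.at(A, (b[ok], a[ok], k), 1)
    return A

# Synthetic test image: a circle of radius 5 centred at (10, 10).
img = np.zeros((21, 21), dtype=bool)
t = np.deg2rad(np.arange(0, 360, 2))
img[np.round(10 + 5 * np.sin(t)).astype(int),
    np.round(10 + 5 * np.cos(t)).astype(int)] = True
A = hough_circles(img, radii=[4, 5, 6])
b, a, k = np.unravel_index(np.argmax(A), A.shape)
```

The highest-voted cell recovers both the centre (within rounding) and the correct radius, which is exactly the "candidate circle" selection described above.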

3.4 Image Rotation
Image rotation performs a geometric transformation [4] which maps the position P(x, y) in the original image onto a position P1(x1, y1) in the output image by rotating it through a specified angle θ about the origin. Assume that the distance of any point P(x, y) in the original image to the center is r, and the angle to the x-axis is α, as shown in Figure 3.2.

Figure 3.2: Image Rotation

From Figure 3.2: x = r cos(α) and y = r sin(α). After rotation by an angle θ:

x1 = r cos(α + θ) = r cos(α) cos(θ) − r sin(α) sin(θ) = x cos(θ) − y sin(θ)
y1 = r sin(α + θ) = r sin(α) cos(θ) + r cos(α) sin(θ) = y cos(θ) + x sin(θ)

The matrix expression for the above equations, in homogeneous coordinates, is:

[x1]   [cos(θ)  −sin(θ)  0] [x]
[y1] = [sin(θ)   cos(θ)  0] [y]
[1 ]   [0        0       1] [1]

Then, the inverse rotation expression is:

[x]   [ cos(θ)  sin(θ)  0] [x1]
[y] = [−sin(θ)  cos(θ)  0] [y1]
[1]   [ 0       0       1] [1 ]
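The forward and inverse rotations can be checked numerically. The small NumPy sketch below (illustrative only; the thesis implementation is in MATLAB) also confirms that the inverse rotation matrix is simply the transpose of the forward one:

```python
import numpy as np

def rotation_matrix(theta_deg):
    """Homogeneous 2-D rotation about the origin by theta degrees."""
    t = np.deg2rad(theta_deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

p = np.array([3.0, 4.0, 1.0])        # point P(x, y) in homogeneous form
p1 = rotation_matrix(30) @ p         # rotated point P1(x1, y1)
p_back = rotation_matrix(-30) @ p1   # inverse rotation recovers P
```

Applying the rotation by θ and then by −θ returns the original point, which is the basis of the image-correction step in Chapter 4.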

3.5 Edge Detection
Significant intensity transitions in an image are called edges. Most of the shape information of an image is enclosed in its edges [11]. So we first detect these edges in an image using filters; then, by enhancing those areas of the image which contain edges, the sharpness of the image increases and the image becomes clearer.

Edge detection is an image processing technique for finding the boundaries of objects within images. It works by detecting discontinuities in brightness. Edge detection is used for image segmentation and data extraction in areas such as image processing, computer vision, and machine vision. Common edge detection algorithms include Sobel, Canny, Prewitt, Laplacian, Roberts, and fuzzy-logic methods.
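The idea of thresholding a gradient magnitude can be illustrated with a minimal NumPy sketch. This uses plain Sobel kernels, not the full Canny detector used in the thesis (which adds smoothing, non-maximum suppression and hysteresis); the function name and threshold fraction are our own:

```python
import numpy as np

def sobel_edges(img, frac=0.5):
    """Mark pixels whose Sobel gradient magnitude exceeds frac * max as edges.
    A simplified stand-in for the Canny detector used in the thesis."""
    kx = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.T
    M, N = img.shape
    gx = np.zeros((M, N))
    gy = np.zeros((M, N))
    for i in range(1, M - 1):
        for j in range(1, N - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)        # horizontal intensity change
            gy[i, j] = np.sum(ky * patch)        # vertical intensity change
    mag = np.hypot(gx, gy)
    return mag > frac * mag.max() if mag.max() > 0 else mag.astype(bool)

# A step from dark to bright produces edges along the vertical boundary.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
e = sobel_edges(img)
```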

In this thesis we use Canny edge detection, since it detects many edges and is less sensitive to noise [16].

Chapter 4 Implementation

This chapter focuses on the implementation and detailed analysis of the method- ologies followed in the thesis to correct the distortion in the forensic scale images.

4.1 Analysis
Perspective distortions are a natural effect when projecting a three-dimensional scene onto a two-dimensional plane photographically. Perspective distortions cause objects close to the viewer to appear much larger than objects further away in the background; the closer you get to a subject, the stronger the effect. As lenses with a short focal length force you to get closer to your subject, photos taken with wide-angle lenses are more prone to show strong perspective distortions than those taken with telephoto lenses. Once again, this is not an effect of the lens but an effect of the perspective (distance) of the viewer or camera to the scene. Any image can in general have perspective distortion, and the image can also have some tilt and rotation to it.

4.2 Proposed Method
This section explains the flow of implementation of the two methodologies proposed to correct the distortion in forensic images; both are implemented in MATLAB. The input images show different kinds of forensic scales captured by a photo camera. The implementation is carried out under the assumption that the reference image is captured without perspective distortion.

4.2.1 Equipment used to Acquire the Images
Camera: Canon EOS 700D
Lens: Canon EF-S 18-55mm f/3.5-5.6 IS STM
Tripod: Manfrotto 190XPRO Aluminium 4-Section with Manfrotto 804RC2 Basic Pan & Tilt Head
Software: Canon EOS Utility (remote shooting)

4.2.2 Flow Charts
In the thesis work we have used two methods, HLT and HCT, for correcting the error in the image caused by the position of the camera. We considered two cases. The first uses a reference image; though it is not a real-case scenario, we have used it for analysis purposes, and it is shown in Figure 4.1. The second case, without a reference image, is the real-case scenario and is shown in Figure 4.2.

Figure 4.1: Flow Chart of Method With Reference Image.

Figure 4.2: Flow Chart of Method Without Reference Image.

4.2.3 Limitations
• The scale should have a certain pattern, like the ABFO scale, see Figure 2.1(b).

• The whole scale is visible in the image.

• The scale is the single object in the image, that is no other patterns such as fingerprints, shoe prints etc. are present.

4.3 Correcting Image Distortion using HCT

4.3.1 Input Image
The input image is a white forensic ruler on a black background with resolution 3456 × 5184 × 3 (three color channels, therefore ×3), captured using the camera and tripod. We assume the captured image is without any rotation, as shown in Figure 4.3.

Figure 4.3: Input Image Without any Rotation.

4.3.2 Gray Scale
The input image, Figure 4.3, is converted to a gray-scale image of resolution 3456 × 5184, which will be used in the next step to save computation time, as time is an important factor in image processing. Figure 4.4 shows the gray-scale image.

Figure 4.4: Gray Scaling of Input Image

4.3.3 Rotation of Image about Origin
For the evaluation of the proposed method the input image is now rotated [4] by an angle, here 30° in the anti-clockwise direction, about the origin (center of the image) using the rotation matrix shown in equation 4.1 (see Figure 4.5).

[x1]   [cos(30°)  −sin(30°)  0] [x]
[y1] = [sin(30°)   cos(30°)  0] [y]    (4.1)
[1 ]   [0          0         1] [1]

where (x, y) are coordinates of pixels in the input image as shown in Figure 4.3 and (x1, y1) are the new coordinates of pixels in the rotated image as shown in Figure 4.5. For our convenience the rotated image is maintained with the same resolution as the input image.

Figure 4.5: Input Image Rotated to 30° about Origin.

4.3.4 Edge Detection
The Canny edge detection is applied to the gray-scale image to compute the edges of sharp intensity change, as shown in Figure 4.6. These edges are further used in locating the circles using the Hough circular transform.

Figure 4.6: Canny Edge Detection

4.3.5 Measuring Rotated Angle in the Distorted Image
The major task is to find the angle of rotation in the rotated image, as shown in Figure 4.7. The angle of the scale rotation estimated from the image is expected to be equal to 30 degrees. For this estimation we have considered the circle centers on the short leg of the scale. Let D1 and D2 be the first and second circle centers. A straight line is drawn between points D1 and D2, and another line is drawn vertically down from D1 to E, as shown in Figure 4.7. If D1 = (c1, c2) and D2 = (c3, c4), the rotation angle is calculated using equation 4.2.

θ = tan⁻¹((c4 − c2)/(c3 − c1)). (4.2)

Therefore the estimated angle of distortion (rotation) is θ = −29.4591 degrees.
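Equation 4.2 can be illustrated with a short NumPy sketch (illustrative only, not the thesis MATLAB code; note that `arctan2` is used here instead of a plain arctangent so the sign of the angle follows the quadrant of the centre difference):

```python
import numpy as np

def rotation_angle(d1, d2):
    """Angle (degrees) of the line through circle centres D1 and D2, eq. (4.2)."""
    (c1, c2), (c3, c4) = d1, d2
    return np.degrees(np.arctan2(c4 - c2, c3 - c1))

# Two centres lying on a line rotated 30 degrees from the horizontal
# (the coordinates are made up for the example).
theta = np.deg2rad(30.0)
d1 = (100.0, 200.0)
d2 = (100.0 + 50.0 * np.cos(theta), 200.0 + 50.0 * np.sin(theta))
```

Feeding the detected centre pair into this helper returns the rotation to undo in the correction step.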

Figure 4.7: Measuring Angle in Distorted Image

4.3.6 Correction of Rotated Image
The rotated image is corrected with the restored angle using the inverse rotation matrix shown in equation 4.3 (see Figure 4.8).

[x]   [ cos(−29.5007°)  sin(−29.5007°)  0] [x1]
[y] = [−sin(−29.5007°)  cos(−29.5007°)  0] [y1]    (4.3)
[1]   [ 0               0               1] [1 ]

where (x, y) are coordinates of pixels of the restored image and (x1, y1) are coordinates of pixels in the distorted image.

Figure 4.8: Inverse Rotation of Image about Origin (Restored Image).

4.3.7 Comparison between Reference Image and Restored Image
Comparing the coordinates of the centers of the circles of the restored image, see Figure 4.9(b), with respect to the reference image, see Figure 4.9(a), gives the error, if there is any.

(a) Reference Image (b) Restored Image Figure 4.9: Comparing Reference Image and Restored Image

RMSE was adopted as a measure of the error after image restoration, according to equation 4.4:

e_cir = (1/γ) Σ_{i=1}^{γ} √((y_i − y'_i)² + (x_i − x'_i)²), (4.4)

Here γ is the number of circles in the scale, (x_i, y_i) are the centers of the circles in the original image, detected as described in section 4.3.5, and (x'_i, y'_i) are the centers of the corresponding circles in the restored image. The comparison of the original image with the restored image is shown in Figure 4.9. The overlap of the reference image and the restored image is shown in Figure 4.10.
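Equation 4.4 amounts to the mean Euclidean distance between matched centre pairs. A NumPy sketch (the helper name and example coordinates are our own, not from the thesis):

```python
import numpy as np

def centre_error(ref_centres, restored_centres):
    """Mean distance between matched circle centres, as in equation (4.4)."""
    d = np.asarray(ref_centres, float) - np.asarray(restored_centres, float)
    return np.mean(np.hypot(d[:, 0], d[:, 1]))

# Two circles: one centre restored exactly, the other off by (3, 4) pixels,
# giving distances 0 and 5, i.e. a mean error of 2.5 pixels.
err = centre_error([(0.0, 0.0), (10.0, 10.0)], [(0.0, 0.0), (7.0, 6.0)])
```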

Figure 4.10: Overlap of Reference Image and Restored Image.

4.4 Correcting Image Distortion using HLT

4.4.1 Input Image
The input image is a white forensic ruler on a black background with resolution 3456 × 5184 × 3 pixels (three color channels, therefore ×3), captured using the camera and tripod. We assume the captured image is without any rotation, as shown in Figure 4.11. This image is used as a reference to compare with the output (the image after correction of the distortion).

Figure 4.11: Input Image Without any Rotation.

4.4.2 Gray Scale

The obtained input image is converted to a gray-scale image of resolution 3456 × 5184, which is used in the next step. This saves computation time, an important factor in image processing. Figure 4.12 shows the gray-scale image.
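The gray-scale conversion can be sketched as a weighted sum of the three color channels. The weights below are the common ITU-R BT.601 luminance coefficients (used, for example, by MATLAB's rgb2gray); this is an assumption, since the thesis does not state which weights were used:

```python
import numpy as np

def to_gray(rgb):
    """Collapse an H x W x 3 color image to an H x W gray-scale image
    using luminance weights (assumed BT.601 coefficients)."""
    weights = np.array([0.2989, 0.5870, 0.1140])
    return np.asarray(rgb, dtype=float) @ weights
```

The result has one third of the data of the color image, which is what reduces the computation time of the later Hough transform steps.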

Figure 4.12: Gray Scaling of Input Image

4.4.3 Rotation of Image about Origin

For evaluation of the proposed method, the input image is now rotated by 45° in the anti-clockwise direction about the origin (the center of the image) using the rotation matrix shown in equation 4.5:

\begin{pmatrix} x_1 \\ y_1 \\ 1 \end{pmatrix} = \begin{pmatrix} \cos(45^\circ) & -\sin(45^\circ) & 0 \\ \sin(45^\circ) & \cos(45^\circ) & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}, \qquad (4.5)

where (x, y) are the coordinates of pixels in the input image and (x_1, y_1) are the new coordinates of pixels in the rotated image. For convenience, the rotated image is kept at the same resolution as the input image (see Figure 4.13).

Figure 4.13: Input Image Rotated to 45° about Origin.

4.4.4 Edge Detection

The Canny edge detector is applied to the gray-scale image to compute the edges of sharp intensity change, as shown in Figure 4.14. These edges are further used in detecting the lines using HLT.

Figure 4.14: Canny Edge Detection

4.4.5 Line Detection using HLT

In contrast to the previously described method, here the edges obtained from the edge detection (see Figure 4.14) are passed through the HLT algorithm, as explained earlier in Chapter 3. The set of edge points on one line is represented by a set of sinusoidal curves in Hough space with parameters ρ and θ. The maximum-voted peaks in the Hough space, see Figure 4.15(a), represent the lines in the corresponding image, see Figure 4.15(b).
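The ρ–θ voting scheme described above can be sketched in a few lines. This is a deliberately minimal toy accumulator for the parameterisation ρ = x cos θ + y sin θ; the array sizes and quantisation are arbitrary illustrative choices, not the thesis settings:

```python
import numpy as np

def hough_peak_angle(edge_points, n_theta=180, n_rho=200):
    """Minimal Hough line transform: each edge point votes for every
    (rho, theta) pair it could lie on; the strongest accumulator peak
    is the dominant line, and its theta is returned (radians)."""
    pts = np.asarray(edge_points, dtype=float)
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    # rho = x*cos(theta) + y*sin(theta) for every point/angle pair
    rhos = pts[:, :1] * np.cos(thetas) + pts[:, 1:2] * np.sin(thetas)
    rho_max = np.abs(rhos).max() + 1.0
    acc = np.zeros((n_rho, n_theta), dtype=int)
    idx = ((rhos + rho_max) / (2.0 * rho_max) * (n_rho - 1)).astype(int)
    for j in range(n_theta):
        np.add.at(acc[:, j], idx[:, j], 1)  # cast the votes
    _, theta_bin = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[theta_bin]
```

Collinear edge points all vote into the same (ρ, θ) bin, which is why the method tolerates gaps and noise: stray points scatter their votes while the line's points pile up in one cell.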

(a) Hough Space for Lines (b) Line Detection using HLT Figure 4.15: Line Detection using HLT

4.4.6 Estimating the Angle of Rotation in Distorted Image

For calculation purposes, the longest line from Figure 4.15(b) is considered. Using the start and end points of this line, a horizontal line is drawn from E1 to D, as shown in Figure 4.16. Measuring the angle between these lines gives the scale rotation angle. If E1 = (c_1, c_2) and E2 = (c_3, c_4), the rotation angle is calculated using equation 4.6:

\theta = \tan^{-1}\left(\frac{c_4 - c_2}{c_3 - c_1}\right). \qquad (4.6)

Therefore, the estimated angle of distortion (rotation) is θ = -44.35 degrees.
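Equation 4.6 in code; arctan2 is used instead of a plain arctangent so that the quadrant is resolved correctly (a minor robustness addition, not part of the thesis formulation):

```python
import numpy as np

def rotation_angle(e1, e2):
    """Equation 4.6: angle (in degrees) of the line through
    E1 = (c1, c2) and E2 = (c3, c4) w.r.t. the horizontal."""
    (c1, c2), (c3, c4) = e1, e2
    return np.degrees(np.arctan2(c4 - c2, c3 - c1))
```

The sign of the result indicates the direction of the correction to be applied by the inverse rotation.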

Figure 4.16: Estimating the Angle of Rotation

4.4.7 Correction of Rotated Image

The rotated image is corrected with the estimated angle using the inverse rotation matrix, as shown in equation 4.7:

\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} \cos(-44.35^\circ) & \sin(-44.35^\circ) & 0 \\ -\sin(-44.35^\circ) & \cos(-44.35^\circ) & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ y_1 \\ 1 \end{pmatrix}, \qquad (4.7)

where (x, y) are the coordinates of pixels in the restored image and (x_1, y_1) are the coordinates of pixels in the distorted image (see Figure 4.17).

Figure 4.17: Inverse Rotation of Image about Origin (Restored Image).

4.4.8 Comparison between Reference Image and Restored Image

Comparing the centers of the circles of the restored image, Figure 4.18(b), with respect to the reference image, Figure 4.18(a), gives the error, if there is any.

(a) Reference Image (b) Restored Image Figure 4.18: Comparing Reference Image and Restored Image

The RMSE was adopted as a measure of the error after image restoration, according to equation 4.8:

e_{\mathrm{line}} = \frac{1}{\gamma} \sum_{i=1}^{\gamma} \sqrt{(y_i - y_i')^2 + (x_i - x_i')^2}\,. \qquad (4.8)

Here γ is the number of circles in the scale, (x_i, y_i) are the centers of the circles in the original image, detected as described in Section 4.3.3, and (x_i', y_i') are the centers of the corresponding circles in the restored image. The comparison of the original image with the restored image is shown in Figure 4.18. The overlap of the reference image and the restored image is shown in Figure 4.19.

Figure 4.19: Overlap of Reference Image and Restored Image

4.5 Correcting Image Distortion Using HCT Without Reference Image

The input image is captured with an unknown angle of rotation with respect to the horizontal line, as shown in Figure 4.20. The resolution of the image is 3456 × 5184 × 3 pixels (three color channels, hence ×3). The obtained input image is converted to a gray-scale image of resolution 3456 × 5184, as shown in Figure 4.21. This gray-scale image is then used to detect edges with the help of Canny edge detection, see Figure 4.22. These edges are passed through the HCT algorithm and the circles are drawn, as shown in Figure 4.23. Using the circle centers on the short scale, the rotation angle of the scale is calculated, as shown in Figure 4.24. Now the distortion in the input image is corrected with the obtained angle using the rotation matrix. Figure 4.25 shows the corrected image.
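One way to estimate the unknown angle from the circle centers alone is to fit their principal direction and measure its deviation from the vertical. This is a hedged sketch: the thesis uses the centers on the short scale, but the exact fitting step is an assumption here (an SVD-based line fit, with the function name illustrative):

```python
import numpy as np

def angle_from_short_scale(centers):
    """Estimate the scale rotation from circle centers that should,
    ideally, lie on a vertical line: fit their principal direction by
    SVD and return its deviation from the vertical, in degrees."""
    c = np.asarray(centers, dtype=float)
    d = c - c.mean(axis=0)
    _, _, vt = np.linalg.svd(d)      # vt[0] is the dominant direction
    vx, vy = vt[0]
    if vy < 0:                       # resolve the SVD sign ambiguity
        vx, vy = -vx, -vy
    return np.degrees(np.arctan2(vx, vy))
```

With this sign convention an anti-clockwise rotation of the scale yields a negative estimate, matching the negative angles reported in Tables 5.1 to 5.3.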

Figure 4.20: Input Image with Unknown Angle of Rotation

Figure 4.21: Gray-Scaling of Input Image

Figure 4.22: Edge Detection

Figure 4.23: Locating the Circles in the Image

Figure 4.24: Estimation of Angle of Rotation

Figure 4.25: Inverse Rotation of Image about Origin (Restored Image).

4.5.1 Error Analysis

In an ideal situation (i.e., the camera sensor is parallel to the scale surface), the three circle center coordinates on the part of the short scale should lie on a strictly vertical line, and the circle center coordinates on the part of the long scale should lie on a strictly horizontal line. Let us consider the part of the short scale. From the restored image, see Figure 4.25, a dotted line (yellow) is drawn using the circle center coordinates on the part of the short scale, as shown in Figure 4.26.

Figure 4.26: Error Analysis

Using these circle center coordinates and the distances between them as a reference, a vertical line (red) is drawn, as shown in Figure 4.26, and with the help of the distances between circle centers in the restored image, new circle centers are marked on the vertical line (assuming these are the ideal-case circle centers), as shown in Figure 4.27(a),(b). Calculating the RMSE between these circle center coordinates now gives the actual error after correcting the distortion:

e_{\mathrm{cir}} = \frac{1}{\gamma} \sum_{i=1}^{\gamma} \sqrt{(y_i - y_i')^2 + (x_i - x_i')^2}\,. \qquad (4.9)

Here γ is the number of circles on the part of the short scale in Figure 4.27(a), (x_i, y_i) are the circle center coordinates of the restored image, and (x_i', y_i') are the ideal-case circle center coordinates. The calculated error is 0.7459 pixels.
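The construction of the ideal-case centers described above can be sketched as follows. The assumptions (labelled here, since the thesis does not spell them out) are that the first restored center anchors the vertical line and that the measured inter-center distances are preserved along it:

```python
import numpy as np

def ideal_vertical_centers(centers):
    """Project restored circle centers onto a vertical line through
    the first center, preserving the measured inter-center distances
    (the ideal-case centers of Section 4.5.1)."""
    c = np.asarray(centers, dtype=float)
    # Euclidean gap between consecutive centers
    gaps = np.sqrt((np.diff(c, axis=0) ** 2).sum(axis=1))
    ys = c[0, 1] + np.concatenate(([0.0], np.cumsum(gaps)))
    xs = np.full_like(ys, c[0, 0])
    return np.stack([xs, ys], axis=1)
```

The RMSE of equation 4.9 is then computed between the restored centers and these ideal centers.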

(a) (b) Figure 4.27: Error Analysis

4.6 Correcting Image Distortion using HLT Without Reference Image

The input image is captured with an unknown angle of rotation with respect to a horizontal line, as shown in Figure 4.28. The resolution of the image is 3456 × 5184 × 3 pixels (three color channels, hence ×3). We convert the input image to a gray-scale image of resolution 3456 × 5184, as shown in Figure 4.29. This gray-scale image is then used to detect edges with the help of Canny edge detection, see Figure 4.30. These edges are passed through the HLT algorithm and lines are drawn. For convenience, only one line is drawn on the image (the red line), as shown in Figure 4.31. Using the start and end points of the line, the rotation angle of the scale is calculated, as shown in Figure 4.32. Now the distortion in the input image is corrected with the obtained angle using the rotation matrix. Figure 4.33 shows the corrected image.

Figure 4.28: Input Image with Unknown Angle of Rotation

Figure 4.29: Gray Scaling of Input Image

Figure 4.30: Edge Detection of the Given Image

Figure 4.31: Line Detection of the Longest Leg of the Scale

Figure 4.32: Estimation of Angle of Rotation

Figure 4.33: Inverse Rotation of Image about Origin (Restored Image).

4.6.1 Error Analysis

In an ideal situation (i.e., the camera sensor is parallel to the scale surface), the three circle center coordinates on the part of the short scale should lie on a strictly vertical line, and the circle center coordinates on the part of the long scale should lie on a strictly horizontal line. Let us consider the part of the short scale. From the restored image, see Figure 4.34, a line (yellow) is drawn using the circle center coordinates on the part of the short scale, as shown in Figure 4.35(a). Using these circle center coordinates as a reference, the ideal-case circle centers are marked and a vertical line (red) is drawn between them, as shown in Figure 4.35(b). Calculating the RMSE between these circle center coordinates now gives the actual error after correcting the distortion:

e_{\mathrm{line}} = \frac{1}{\gamma} \sum_{i=1}^{\gamma} \sqrt{(y_i - y_i')^2 + (x_i - x_i')^2}\,. \qquad (4.10)

Here γ is the number of circles on the part of the short scale, (x_i, y_i) are the circle center coordinates of the restored image, and (x_i', y_i') are the ideal-case circle center coordinates. The calculated error is 0.6596 pixels.

Figure 4.34: Error Analysis

(a) (b) Figure 4.35: Error Analysis

Chapter 5 Results and Discussion

This chapter presents the results obtained by implementing the approaches discussed in the earlier chapters. It also discusses the evaluation of the performance of the proposed methods.

5.1 Comparison of HCT with HLT

A detailed description of both approaches and the corresponding outputs of HCT and HLT for different input images containing forensic scales is given in the subsections below.

Table 5.1: Comparing HCT and HLT for Scale with 5-circles

Angle (degree) | HCT e_cir (pixels) | HCT α_rotc (degree) | HLT e_line (pixels) | HLT α_rotl (degree)
45             | 1.6492             | -44.35              | 1.2028              | -44.92
30             | 1.4560             | -29.4591            | 1.0828              | -29.794
20             | 1.5232             | -19.345             | 1.3536              | -19.82
10             | 0.9428             | -9.332              | 0.9271              | -9.903


(a) Input Image locating the circles

(b) Estimating angle using HCT

(c) Output image after doing HCT Figure 5.1: HCT for Image having L-shaped Forensic Scale with 5-circles

(a) Input Image locating the circles

(b) Estimating Angle Using HLT

(c) Output image after doing HLT Figure 5.2: HLT for Image having L-shaped Forensic Scale with 5-circles

(a) Input Image locating the circles

(b) Estimating Angle using HCT

(c) Output image after doing HCT Figure 5.3: HCT for Image having Forensic Scale with 2-Circles

(a) Input Image locating the Circles

(b) Estimating angle using HLT

(c) Output image after doing HLT Figure 5.4: HLT for Image having Forensic Scale with 2-Circles

(a) Input Image locating the Circles

(b) Estimating Angle using HCT (angle rotated is 45 degrees clockwise)

(c) Output image after doing HCT Figure 5.5: HCT for Image having Forensic Scale with 3-Circles

(a) Input Image Locating the Circles

(b) Estimating Angle using HLT (angle rotated is 45 degrees clockwise)

(c) Output Image after doing HLT Figure 5.6: HLT for Image having Forensic Scale with 3-Circles

Table 5.2: Comparing HCT and HLT for Scale with 3-Circles

Angle (degree) | HCT e_cir (pixels) | HCT α_rotc (degree) | HLT e_line (pixels) | HLT α_rotl (degree)
45             | 1.4120             | -44.8252            | 0.8047              | -45
30             | 1.6667             | -30.0164            | 0.9428              | -29.8614
20             | 1.0828             | -19.347             | 0.9571              | -19.555
10             | 1.3536             | -9.066              | 0.9571              | -9.826

Table 5.3: Comparing HCT and HLT for Scale with 2-Circles

Angle (degree) | HCT e_cir (pixels) | HCT α_rotc (degree) | HLT e_line (pixels) | HLT α_rotl (degree)
45             | 1.6492             | -43.128             | 1.2761              | -44.951
30             | 1.4560             | -29.3572            | 0.6714              | -29.98
20             | 1.5232             | -18.1925            | 1.6882              | -19.64
10             | 0.9428             | -9.259              | 0.8471              | -9.75

5.2 Discussion

HLT is more suitable than HCT: HLT uses the many edge points found by the edge-detection process (the Canny method), and is therefore based on far more points. In the case of HCT, the points of interest are limited (the centers of the circles), although the points on each circle's circumference are also found by the edge-detection process. We use only the circle center points to find the angle of rotation, making HCT based on fewer points than HLT.

Considering different scales: The no-reference error analysis is now performed on the output image (i.e., the restored image) of the reference method, assuming no reference image is available. These results are again compared for HLT and HCT, as shown in Table 5.4. The calculated errors are very low compared to the image resolution of 3456 × 5184 pixels.

From Table 5.4, we observe not only that HLT has a smaller error than HCT, but also that the distortion of the smaller scale (with three circles) was estimated and restored with higher precision than that of the larger scale (with five circles). This is because the image of the larger scale has a resolution of 3456 × 5184 pixels while that of the smaller scale has 1944 × 2896 pixels; as the resolution decreases, the calculated error decreases.

Table 5.4: Comparing Different Scales

Type of scale        | Angle (degree) | HCT e_cir (pixels) | HLT e_line (pixels)
Scale with 5 circles | 30             | 1.5560             | 1.2094
                     | 45             | 1.9852             | 1.7595
                     | 20             | 1.581              | 1.4711
Scale with 3 circles | 30             | 0.5950             | 0.5030
                     | 45             | 1.0214             | 0.9476
                     | 20             | 0.9373             | 0.8472

Chapter 6 Conclusions and Future Work

6.1 Conclusion

In summary, forensic image processing can help the analyst extract information from a low-quality, noisy, or geometrically distorted image. Obviously, the desired information must be present in the image, although it may not be apparent or visible. Considering the challenges of complex forensic investigations, we now understand the importance and sensitivity of data in forensic images. This thesis proposes two different methods for addressing the problem of rotation caused by the photo camera in the case of forensic images.

Advantages:
• A method for object recognition
• Robust to partial deformation in shape
• Tolerant to noise
• Can detect multiple occurrences of a shape in the same pass

Disadvantages:
• A lot of memory and computation is required
• The Hough transform sometimes fails to detect the end points of the detected lines, since localized information is lost during the HT
• In the presence of noise or parallel edges, peak points in the accumulator can be difficult to identify
• As the image resolution increases, the efficiency of the algorithm decreases

The advantages of the HT include robustness to noise, to shape distortions, and to occlusions or missing parts of an object. Its main disadvantage is that the computational and storage requirements of the algorithm increase as a power of the dimensionality of the curve. This means that for straight lines the computational complexity and storage requirements are lower than for circles.


6.2 Future Work

The future work following this thesis research includes the following. Since the proposed algorithms (HCT and HLT) become less efficient as the image resolution increases, their efficiency could be improved for high-resolution images. It would also be interesting to consider other objects along with the scale, such as shoe prints or other crime scene objects.

In general, a given image contains geometrical distortions, such as perspective distortion due to a tilt of the camera. Using Hough transforms, the problem of tilt can also be addressed.

These methods can also be applied to images recorded by a video camera: because moving images are continuously recorded onto a recording medium, it is advantageous to correct the distortion caused by the orientation of the camera before the moving images are stored.

[1] Perspective distortion. https://en.wikipedia.org/wiki/Perspective_ distortion_%28photography%29, Accessed: April 2017.

[2] Line detection by hough transformation. http://web.ipac.caltech.edu/ staff/fmasci/home/astro_refs/HoughTrans_lines_09.pdf, Accessed: October 2017.

[3] Analysis - single image photogrammetry | perspective (graphical) | closed circuit television, Accessed: September 2017.

[4] Three-dimensional rotation matrices. https://scipp.ucsc.edu/~haber/ ph216/rotation_12.pdf, Accessed: Spring 2017. [5] D H Ballard. Generalizing the hough transform to detect arbitrary shapes. Pattern Recognition, 13:111 – 122, 1981. [6] Teh-Chuan Chen and Kuo-Liang Chung. An efficient randomized algorithm for detecting circles. Computer Vision and Image Understanding, 83(2):172 – 191, 2001.

[7] Richard O. Duda and Peter E. Hart. Use of the hough trasformtion to detect lines and curves in pictures. Stanford Research Institute, Menlo Park, California, 1971. [8] Massimiliano Ferrucci, Theodore D. Doiron, Robert M. Thompson, John P. Jones, Adam J. Freeman, and Janice A. Neiman. Dimensional review of scales for forensic photography. Journal of Forensic Sciences, 61(2):509–519, 2016.

[9] I. Gertsovich, M. Nilsson, J.S. Bartůněk, and I. Claesson. Automatic estima- tion of a scale resolution in forensic images. Forensic Science International, 283:58 – 71, 2018.

[10] Karin Althoff Rafeef Ghassan Hamarneh and Abu-Gharbieh. Project re- port for the computer vision course automatic line detection. Department of Signals and Systems Chalmers University of Technology, 2, September 1999.

46 References 47

[11] Rafael C. Gonzalez and Richard E. Woods. Processing (2nd Ed). Prentice Hall, 2002. [12] Allam Shehata Hassanein, Sherien Mohammad, Mohamed Sameer, and Mo- hammad Ehab Ragab. A survey on hough transform, theory, techniques and applications. CoRR, abs/1502.02160, 2015. [13] Paul C. Hough V. Method and means for recognizing complex patterns. (3069654), December 1962.

[14] Kjeldgaard Just, Simon Pedersen. Circular hough transform. Aalborg Uni- versity, Vision, Graphics, and Interactive Systems, November 2007. [15] Chee-Woo Kang, Rae-Hong Park, and Kwae-Hi Lee. Extraction of straight line segments using rotation transformation: generalized hough transforma- tion. Pattern Recognition, 24(7):633 – 641, 1991. [16] Raman Maini and Himanshu Aggarwal. Study and comparison of various image edge detection techniques. International Journal of Image Processing (IJIP), 3:1–11, Febraury 2009. [17] Priyanka Mukhopadhyay and Bidyut B. Chaudhuri. A survey of hough trans- form. Pattern Recognition, 48(3):993 – 1010, 2015. [18] Srikanta Murthy K. Nandini N. and G. Hemantha Kumar. Estimation of skew angle in binary document images using hough transform. World Academy of Science, Engineering and Technology, Vol:18 2008-06-23. [19] B. Peng, W. Wang, J. Dong, and T. Tan. Position determines perspective: Investigating perspective distortion for image forensics of faces. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1813–1821, July 2017. [20] Rudolf Scitovski and Tomislav Marošević. Multiple circle detection based on center-based clustering. Pattern Recognition Letters, 52:9 – 16, 2015.