
Implementation of Distance Transformation using Processing

By

Rama Prasada Reddy Peddireddy

A project report submitted to the Graduate Faculty of the

University of Colorado at Colorado Springs

in partial fulfillment of the

requirements for the degree of

Master of Science in Computer Science Engineering

Department of Computer Science

2017


This report for the Master of Science degree

by

Rama Prasada Reddy Peddireddy has been approved for the

Department of Computer Science

by

______

Advisor: Dr. Sudhanshu Semwal

______

Committee member: Dr. Edward Chow

______

Committee member: Dr. T.S. Kalkur

______

Date


Acknowledgements

First and foremost, I thank my academic advisor, Professor Sudhanshu Semwal, for accepting me into his group and for his continuous support of my Master’s study and related research. My sincere gratitude to him for his patience, motivation, and immense knowledge. His guidance helped me throughout this project work. I could not have imagined having a better advisor and mentor for my Master’s study.

Additionally, I would like to thank my committee members Professor Edward Chow and Professor T.S. Kalkur for their interest in my work. My sincere gratitude for their insightful suggestions and encouragement.

Besides my committee, I would like to thank my parents, my sister and my brother-in-law for their continuous support and encouragement throughout my Master’s.

I would like to thank all the people who contributed in some way to the work described in this project.


Abstract

In this project, we describe three different approaches to computing a distance map for a binary image. The algorithms that solve such problems are known as distance transformations. These algorithms operate on binary images but can be extended to accept any type of digital image, provided a conversion algorithm that produces a binary image is executed first. Therefore, we also examine how to transform a regular digital image into a binary image, that is, into a black and white image. A distance transformation algorithm operates on a binary image consisting of featured pixels and non-featured pixels. It outputs a distance map or distance matrix in which each cell corresponds to a pixel of the input image and contains a value indicating the distance to the nearest featured pixel. Distance transforms represent a natural way to blur feature locations geometrically, and they enable other image operations such as skeletonizing, image matching, object recognition, path planning and navigation. They can be applied to 2D data grids; in this case we are dealing with pixels. In this project, we implemented sequential algorithms for three different distance transformation techniques and compared them using execution time as a benchmark. We found that the chessboard and city-block distance transformations are faster than the Euclidean distance transformation.


Contents

1.1 Introduction
1.2 Fundamentals
1.2.1 Binary image
1.2.2 Grayscale image
1.2.3 Pixel
1.2.4 Pixel value
1.2.5 Pixel connectivity
1.2.5.1 4-connected pixel connectivity
1.2.5.2 Diagonal pixel connectivity
1.2.5.3 8-connected pixel connectivity
2.1 Processing
3.1 Distance Transformation
4.1 Distance transformation types
4.2 Chessboard distance transformation
4.3 City-block distance transformation
4.4 Euclidean distance transformation
5.1 Implementation
5.2 A function that converts a digital image into a binary image
5.3 Minimum distance calculation
5.4 Main Transformation Process
5.5 Making the output image
5.6 Platform for the Implementation
6.1 Results and comparison
6.1 Test 1
6.2 Test 2
6.3 Test 3
6.4 Test 4
6.5 Test 5
6.6 Execution Time Comparison Between Three Distance Transformation Techniques
7.1 Conclusion
7.2 Lessons Learned
7.3 Future Work
References
Appendix: A


Appendix: B


List of figures

Fig. 1. Forward mask propagation

Fig. 2. Backward mask propagation

Fig. 3. Example of Chess board distance transformation

Fig. 4. Result of Chess board distance transform input and output

Fig. 5. Example of City block distance transformation

Fig. 6. Result of City block distance transform input and output

Fig. 7. Example of the Euclidean distance transformation

Fig. 8. Result of the Euclidean distance transformation

Fig. 9. Digital color image to binary digital conversion

Fig. 10. Input image for Test1

Fig. 11. Result of the Chess board distance transformation for Test1

Fig. 12. Result of the City block distance transformation for Test1

Fig. 13. Result of the Euclidean distance transformation for Test1

Fig. 14. Input image for Test2

Fig. 15. Result of the Chess board distance transformation for Test2

Fig. 16. Result of the City block distance transformation for Test2

Fig. 17. Result of the Euclidean distance transformation for Test2

Fig. 18. Input image for Test3

Fig. 19. Result of the Chess board distance transformation for Test3

Fig. 20. Result of the City block distance transformation for Test3

Fig. 21. Result of the Euclidean distance transformation for Test3

Fig. 22. Input image for Test4

Fig. 23. Result of the Chess board distance transformation for Test4

Fig. 24. Result of the City block distance transformation for Test4

Fig. 25. Result of the Euclidean distance transformation for Test4

Fig. 26. Input image for Test5

Fig. 27. Result of the Chess board distance transformation for Test5

Fig. 28. Result of the City block distance transformation for Test5

Fig. 29. Result of the Euclidean distance transformation for Test5

Fig. 30. Execution times for three different techniques on different input images

Fig. 31. Execution time comparison

Fig. 32. Processing exe file

Fig. 33. Processing sketch environment

Fig. 34. Processing output windows


Chapter 1

1.1 Introduction

Image processing is a widely used technology with many applications. It is usually related to computer graphics and computer vision. In computer graphics, images are computationally generated from physical models of objects, environments and lighting instead of being acquired from natural scenes. Computer vision, on the other hand, is often considered high-level image processing, in which a machine, computer or piece of software attempts to decipher the physical contents of an image or a sequence of images.

The objective of this project is to implement distance transformations using the Processing programming language, with a binary image as input. To understand distance transformations, we first need to understand the fundamentals of image processing, such as image types, pixels, pixel values and pixel connectivity.

1.2 Fundamentals

In general, an image is a visible representation of an object or a scene. Two types of images are essential for this project: the binary image and the grayscale image.

1.2.1 Binary image

A binary image is a digital image with only two pixel values, 0 or 255, and therefore only two possible colors, black and white [2].

1.2.2 Grayscale image

A grayscale digital image is an image whose pixel values range between 0 and 255. Across an image these values vary from pixel to pixel and represent different shades of gray [2].

1.2.3 Pixel

A pixel is the smallest addressable element of a digital image, much as an atom is to matter. Many pixels together form an image [2].


1.2.4 Pixel value

A pixel value defines the intensity and color of a pixel in an image. In a binary image, a pixel value is either 0 or 255; in a grayscale image, it varies between 0 and 255. Typically, zero is taken to be black and 255 to be white, and the values in between make up the different intensities of gray [17].

1.2.5 Pixel connectivity

Pixel connectivity is a relation between neighboring pixels. There are several types of pixel connectivity; for 2-dimensional images, three basic types are explained below [17].

1.2.5.1 4-connected pixel connectivity

In 4-connected pixel connectivity, pixels are connected by their sides. This means that a pair of pixels is connected horizontally or vertically [17]. Every pixel with coordinates (x ± 1, y) or (x, y ± 1) is connected to the pixel at (x, y).

1.2.5.2 Diagonal pixel connectivity

In diagonal pixel connectivity, pixels are connected by their corners. This means that a pair of pixels is connected diagonally [17]. Every pixel with coordinates (x − 1, y ± 1) or (x + 1, y ± 1) is connected to the pixel at (x, y).

1.2.5.3 8-connected pixel connectivity

This connectivity is the union of 4-connected pixel connectivity and diagonal pixel connectivity [17]. Every pixel whose coordinates differ from (x, y) by at most 1 in each direction is connected to the pixel at (x, y).
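The three connectivities can be made concrete as sets of neighbor offsets. The following plain-Java sketch (the class and field names are my own, not part of the project code) lists the offsets relative to a pixel at (x, y):

```java
// Illustrative sketch: neighbor offsets for the three pixel
// connectivities described above.
class Connectivity {
    // 4-connected: neighbors sharing an edge with (x, y)
    static final int[][] FOUR = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
    // Diagonal: neighbors sharing only a corner with (x, y)
    static final int[][] DIAGONAL = {{1, 1}, {1, -1}, {-1, 1}, {-1, -1}};
    // 8-connected: union of the two sets above
    static final int[][] EIGHT = {
        {1, 0}, {-1, 0}, {0, 1}, {0, -1},
        {1, 1}, {1, -1}, {-1, 1}, {-1, -1}
    };
}
```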

Beyond these fundamentals, the main objective of this project is the distance transformation itself, its principal techniques and their implementation. Distance transformations are a well-known technique for computing the distance from each non-featured pixel to the nearest featured pixel. Various distance transformations have been proposed; among them, three are commonly used for 2-dimensional image spaces. The main purpose of this project is to implement these distance transformations using the Processing language. The upcoming chapters give a detailed explanation of all the work that was done.


Chapter 2

2.1 Processing

Processing is an open source software environment and programming language for artists who want to program images, animation and sound. The objective behind its creation is to teach the fundamentals of computer programming within a visual context and to serve programmers as a software sketchbook and professional production tool. Processing was developed by artists and designers as a tool for designing sketches within that same domain, and it includes the same principles, structures and concepts as other programming languages [19]. Its main advantages are that it is free to download, it runs on the Mac, Windows, and Linux platforms, and its development environment is flexible and user-friendly to code in. Processing is a text programming language and, as mentioned above, its main aim is to generate new images as well as alterations of existing images [19].


Chapter 3

3.1 Distance Transformation

Distance transformations are a well-known technique for computing the distance from each non-featured pixel to the nearest featured pixel. Various distance transformations have been proposed; among them, three are commonly used in 2 dimensions [2, 3].

A distance transformation is an operation that converts a binary picture, consisting of feature and non-feature elements, into a picture where each element has a value that approximates the distance to the nearest feature element [3]. The distance values of the featured and non-featured pixels are computed from their neighbors. Computing the distance between a non-feature pixel and a feature pixel is essentially a global operation, and such operations are computationally costly unless the images are very small [2].

Consider a binary image consisting of featured pixels and non-featured pixels. The featured pixels can be points, edges or objects, and the non-featured pixels are unoccupied (blank). The challenge is to calculate the distance from each non-featured pixel to the nearest featured pixel. A naive approach that starts at every non-featured pixel and scans the image until it finds a featured pixel is computationally costly [2]. In this project, we concentrated on implementing the distance transformation algorithms proposed by Borgefors. There are several distance metrics for measuring the distance between a non-featured pixel and a featured pixel. In 2 dimensions, three main distance transformations have been proposed: the city-block, chessboard and Euclidean distances [1]. These distance transformations can be implemented using both sequential and parallel algorithms [2]; a brief explanation of the techniques follows in the next chapter. The same basic idea underlies all three distance transformations: the global Euclidean distance is approximated using only local operations.

Several issues arise when dealing with distance transform algorithms. One of these, perhaps the most significant, is accuracy. Some distance transform algorithms produce results with minimal errors; others are theoretically proven to be error free. From a scientific point of view, it is important to be able to validate the implementation of the algorithm. A second important issue is the computational time needed to produce an output. It is relatively straightforward to develop an exhaustive method that computes the previously mentioned distance map but requires a great deal of processing time [5]. In general, distance transform algorithms exhibit varying degrees of result accuracy, computational complexity, hardware requirements (such as parallel processors) and conceptual complexity.

Distance transform algorithms rely on the idea of propagating distances between pixels. As will be explained later in this document, calculating the distance map can have a high computational cost, so the strategy applied by distance transform algorithms is to approximate the global (real) distances by propagating local distances, that is, the distances between neighboring pixels [2]. The propagation can be accomplished sequentially or in parallel; the algorithms presented in this document were developed using a sequential approach.

In distance transform algorithms, the distance map has the same dimensions as the input image and is initialized as follows [3, 5]:

If x = (i, j) is a featured pixel then dist_map[i, j] = 0, else dist_map[i, j] = Infinity

Note that every featured pixel will have a value of 0 in the distance map and every non-featured pixel a value of Infinity. Setting these initial values this way will prove to be necessary when propagating local distances and calculating the rest of the distance map.

So far, the calculation of the distance matrix has been said to use a distance measure, but no description of such a measure has been provided. The distance measures applied in the algorithms described throughout this report are the Manhattan or city-block distance, the chessboard distance and the Euclidean distance.

After the initialization stage, a distance transform algorithm executes two stages [3] over the image: forward propagation and backward propagation. In these stages, the algorithm typically uses a mask that is distance-measure dependent and is divided into two masks, one for the forward propagation (the forward mask) and one for the backward propagation (the backward mask). The forward propagation consists of a sweep of the image from left to right and top to bottom, applying the forward mask at every pixel and updating the values of cells in the distance map as follows:


dist_map[i, j] ← min(dist_map[i, j], dist_map[i−1, j] + 1, dist_map[i, j−1] + 1)

In the forward propagation, Figure 1, the distance map is updated by taking the minimum of the current cell's value and the value of an already-visited neighbor cell plus the neighbor distance (adjacent cells are always at distance 1). Note that the formula is applied at every pixel position (i, j).

In the backward propagation, Figure 2, the image is swept from right to left and bottom to top, applying the backward mask and updating values according to the following formula:

dist_map[i, j] ← min(dist_map[i, j], dist_map[i+1, j] + 1, dist_map[i, j+1] + 1)

As one can see, both formulas are similar; the only essential change is the direction of the update. Assuming a distance transform using the Manhattan distance measure (explained shortly) that receives as input an image with a single featured pixel in the middle, the results obtained from the forward and backward passes would be the ones illustrated in the following figures.

0 1 2
1 2 3
2 3 4

Fig. 1. Forward mask Propagation


4 3 2 3 4

3 2 1 2 3

2 1 0 1 2

3 2 1 2 3

4 3 2 3 4

Fig. 2. Backward mask Propagation
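The forward and backward passes illustrated above can be sketched as a minimal sequential two-pass algorithm for the city-block metric. The following plain-Java sketch is illustrative only; the class and variable names are my own, not the report's Processing code:

```java
// Minimal two-pass city-block distance transform (sequential),
// following the forward/backward update rules described above.
class CityBlockDT {
    static final int INF = Integer.MAX_VALUE / 2; // avoids overflow on +1

    // feature[i][j] == true marks a featured pixel
    static int[][] transform(boolean[][] feature) {
        int h = feature.length, w = feature[0].length;
        int[][] d = new int[h][w];
        // Initialization: 0 on featured pixels, "infinity" elsewhere
        for (int i = 0; i < h; i++)
            for (int j = 0; j < w; j++)
                d[i][j] = feature[i][j] ? 0 : INF;
        // Forward pass: left-to-right, top-to-bottom
        for (int i = 0; i < h; i++)
            for (int j = 0; j < w; j++) {
                if (i > 0) d[i][j] = Math.min(d[i][j], d[i - 1][j] + 1);
                if (j > 0) d[i][j] = Math.min(d[i][j], d[i][j - 1] + 1);
            }
        // Backward pass: right-to-left, bottom-to-top
        for (int i = h - 1; i >= 0; i--)
            for (int j = w - 1; j >= 0; j--) {
                if (i < h - 1) d[i][j] = Math.min(d[i][j], d[i + 1][j] + 1);
                if (j < w - 1) d[i][j] = Math.min(d[i][j], d[i][j + 1] + 1);
            }
        return d;
    }
}
```

For a 5×5 grid with a single featured pixel in the center, the two passes together reproduce the Manhattan distances shown in Figure 2.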

Having explored the details of distance transform algorithms, a description of each distance measure and its associated distance mask is provided in the next sections.


Chapter 4

4.1 Distance transformation types

Having introduced distance transformations in general, the three distance transformation techniques are explained next: the chessboard, city-block and Euclidean distance transformations, named after their distance metrics.

4.2 Chessboard distance transformation

The chessboard distance metric measures the path between pixels based on an 8-connected neighborhood: pixels are connected if their edges or corners touch. This means that if two adjoining pixels are on, they are part of the same object regardless of whether they are connected along the horizontal, vertical, or diagonal direction [2]; pixels whose edges or corners touch are one unit apart. In other words, this metric assumes you can move on the pixel grid the way a king moves in a chess game, as shown in Figure 3, with results as shown in Figure 4.

Suppose that two points (x1, y1), (x2, y2) are given.

The chessboard distance is then defined as: D = max{|x1 − x2|, |y1 − y2|}

* * * * *    2 2 2 2 2
* * * * *    2 1 1 1 2
* * 0 * *    2 1 0 1 2
* * * * *    2 1 1 1 2
* * * * *    2 2 2 2 2

Fig. 3. Example of chess board distance transformation.


Fig. 4. Result of Chess board distance transform input and output

4.3 City-block distance transformation

The city block distance metric measures the path between pixels based on a 4-connected neighborhood. Pixels whose edges touch are one unit apart; pixels touching diagonally are two units apart. It is also known as the Manhattan distance. This metric assumes that, in going from one pixel to another, it is only possible to travel along pixel gridlines; diagonal moves are not allowed [2, 3].

Suppose that two points (x1, y1), (x2, y2) are given.

The city block distance is then defined as: D = |x1 − x2| + |y1 − y2|


* * * * *    4 3 2 3 4
* * * * *    3 2 1 2 3
* * 0 * *    2 1 0 1 2
* * * * *    3 2 1 2 3
* * * * *    4 3 2 3 4

Fig. 5. Example of City block distance transformation

In Figure 5, the “0” pixel is the feature pixel and the “*” pixels are non-feature pixels; the left grid is the original image and the right grid is the result after the city block transform is applied. Figure 6 shows the corresponding input and output images.

Fig. 6. Result of City Block Distance Transform Input and Output


4.4 Euclidean distance transformation

The Euclidean distance transformation uses the familiar straight-line distance between two points [2], as shown in Figures 7 and 8.

Suppose that two points (x1, y1), (x2, y2) are given.

The Euclidean distance is then defined as:

D = √((x1 − x2)² + (y1 − y2)²)

* * * * *    1.4142 1      1.4142 2.2361 3.1623
* 0 * * *    1      0      1      2      2.2361
* * * * *    1.4142 1      1.4142 1      1.4142
* * * 0 *    2.2361 2      1      0      1
* * * * *    3.1623 2.2361 1.4142 1      1.4142

Fig. 7. Example of the Euclidean distance transformation.


Fig. 8. Result of the Euclidean distance transformation

The left image is the original and the right image is the result after the Euclidean distance transform is applied.
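The three point-to-point metrics can be written directly from the formulas given in this chapter. The following plain-Java sketch (names are my own) computes each distance:

```java
// The three point-to-point distance metrics used in this project,
// written out directly from the formulas above (illustrative names).
class DistanceMetrics {
    static double chessboard(double x1, double y1, double x2, double y2) {
        return Math.max(Math.abs(x1 - x2), Math.abs(y1 - y2));
    }
    static double cityBlock(double x1, double y1, double x2, double y2) {
        return Math.abs(x1 - x2) + Math.abs(y1 - y2);
    }
    static double euclidean(double x1, double y1, double x2, double y2) {
        double dx = x1 - x2, dy = y1 - y2;
        return Math.sqrt(dx * dx + dy * dy);
    }
}
```

For the points (0, 0) and (3, 4), these metrics give 4, 7 and 5 respectively.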

After analyzing these three distance metrics, it is possible to detail pseudo code for the distance transform algorithm. Note that regardless of the distance measure applied, the skeleton of the algorithm is basically the same; it is presented in the forthcoming chapters.


Chapter 5

5.1 Implementation

The three distance transformation techniques mentioned previously were implemented in this project using Processing.

Various challenges were faced during this project, such as converting a digital image to a binary image and then converting the binary image to a gray-scale distance map. The analysis of each algorithm is presented as pseudo code for image conversion, distance calculation and mask propagation over the image.

5.2 A function that converts a digital image into a binary image

Since this project deals with distance transformations, one important challenge is converting a digital color image into a binary image, because by definition a distance transformation is a process that converts a binary image into a gray-scale image [15].

To calculate a distance transformation, a color or gray-scale image must first be converted to a binary image containing only white and black pixels.

PImage function digitalToBinary(filePath, threshold)
    PImage srcImage, dstImage
    srcImage = loadImage(filePath)    // read image from file path
    dstImage = createImage(srcImage.width, srcImage.height, RGB)    // create empty image
    srcImage.loadPixels()
    dstImage.loadPixels()
    For x = 0 To srcImage.width - 1
        For y = 0 To srcImage.height - 1
            c = srcImage.get(x, y)
            brightness = (red(c) + green(c) + blue(c)) / 3    // brightness of the pixel at (x, y)
            If brightness < threshold Then
                tempPixel = color(0, 0, 0)
            Else
                tempPixel = color(255, 255, 255)
            dstImage.set(x, y, tempPixel)    // set pixel value in the destination image
    return dstImage

In converting a color (digital) image into a binary image, we must deal with each pixel's R, G, B values. For a binary image these values must be equal, that is, R = G = B. In this pseudo code, we threshold the processed pixels according to their brightness: if a pixel's brightness is lower than the threshold, we set the pixel value to R = G = B = 0 and it is considered a black pixel; if the brightness is higher than the threshold, we set R = G = B = 255 and it is considered a white pixel.

The algorithm to convert a digital image to a binary image is very simple. First, we visit the digital image pixel by pixel and obtain each pixel's red, green and blue components. If a linear combination of these components surpasses a given threshold, then the position corresponding to that pixel in the resulting black and white image is colored white; otherwise it is colored black. The formula is the following:

0.3*r + 0.59*g + 0.11*b > 127

In this report, 127 is the threshold for deciding whether a pixel becomes white or black; the weighting coefficients were adjusted through experimentation.
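The thresholding decision can be sketched as a single predicate. The weights 0.3/0.59/0.11 and the threshold 127 are the values given above, while the class and method names are my own:

```java
// Sketch of the per-pixel thresholding step: a pixel whose weighted
// luminance exceeds 127 becomes white, otherwise black.
class Binarize {
    static boolean isWhite(int r, int g, int b) {
        return 0.3 * r + 0.59 * g + 0.11 * b > 127;
    }
}
```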

Figure 9 shows the binary image obtained after applying this algorithm to a digital image.


Fig. 9. Digital color image to binary digital conversion.

5.3 Minimum distance calculation

The following pseudo code explains how the minimum distance from a non-featured pixel to the nearest featured pixel is calculated using the mask. The first part of the pseudo code handles backward mask propagation and the second part forward mask propagation. The two propagations are very similar, differing only in the direction of the mask sweep. To calculate this minimum distance, pixel values are initialized to Infinity for non-featured pixels and zero for featured pixels. From the mask size, a bandwidth (the half-width of the mask) is calculated for both the forward and backward propagations. Nested for loops locate the pixels in the image; the current state parameter gives the distance value of a pixel once its location is identified, and the pixel coordinates are used to compute the distance according to the chosen distance transformation technique [2, 3].

function minimumBackward(posX, posY, maskSize)    // backward propagation
    tempParam = Infinity
    For i = -bandWidth + 1 To bandWidth - 1
        For j = -bandWidth + 1 To bandWidth - 1
            If (i == 0 && j <= 0) Then continue
            If (i != 0 || j != 0) Then    // skip the pixel under consideration
                curState = distMatrixFromMask[posX + i][posY + j] + abs(i) + abs(j)    // city block distance
                If tempParam > curState Then tempParam = curState
    If tempParam == Infinity Then return Infinity
    Else return tempParam

function minimumForward(posX, posY, maskSize)    // forward propagation
    tempParam = Infinity
    For i = -bandWidth + 1 To bandWidth - 1
        For j = -bandWidth + 1 To bandWidth - 1
            If (i == 0 && j >= 0) Then continue
            If (i != 0 || j != 0) Then    // skip the pixel under consideration
                curState = distMatrixFromMask[posX + i][posY + j] + abs(i) + abs(j)    // city block distance
                If tempParam > curState Then tempParam = curState
    If tempParam == Infinity Then return Infinity
    Else return tempParam
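One possible Java rendering of these two helpers is sketched below. The distance matrix is passed explicitly instead of being a global, and each scan is restricted to the half of the mask its pass has already visited (a common tightening; the report's pseudo code scans a slightly larger window). Names are illustrative:

```java
// Mask-scan helpers using the city-block increment |i| + |j|.
class MaskScan {
    static final double INF = Double.POSITIVE_INFINITY;

    // Forward propagation: minimum over neighbors above / to the left.
    static double minimumForward(double[][] dist, int posX, int posY, int maskSize) {
        int bandWidth = (maskSize + 1) / 2;
        double best = INF;
        for (int i = -bandWidth + 1; i <= 0; i++)
            for (int j = -bandWidth + 1; j < bandWidth; j++) {
                if (i == 0 && j >= 0) continue; // not yet visited (or self)
                best = Math.min(best, dist[posX + i][posY + j] + Math.abs(i) + Math.abs(j));
            }
        return best;
    }

    // Backward propagation: minimum over neighbors below / to the right.
    static double minimumBackward(double[][] dist, int posX, int posY, int maskSize) {
        int bandWidth = (maskSize + 1) / 2;
        double best = INF;
        for (int i = 0; i < bandWidth; i++)
            for (int j = -bandWidth + 1; j < bandWidth; j++) {
                if (i == 0 && j <= 0) continue; // not yet visited (or self)
                best = Math.min(best, dist[posX + i][posY + j] + Math.abs(i) + Math.abs(j));
            }
        return best;
    }
}
```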


5.4 Main Transformation Process

In this section, we discuss the main process of the project: the distance transformation of a binary image using a sequential algorithm with a forward scan and a backward scan [2]. To produce the output image, we first create an empty image along with an empty distance matrix. The next step is to get the pixel value at each position (i, j) of the image; the distance matrix mirrors these pixel positions according to the image height and width. To perform the distance transformation, we need to find each pixel's state at its position, which is achieved by comparing against the temporary distance transformation array (the distance matrix). In the sequential algorithm, this process takes place twice: once as a forward scan and once as a backward scan. The forward scan sweeps the image from left to right and top to bottom to find the distance to the nearest featured pixel, and the backward scan propagates from right to left and bottom to top [3]. In this project, I implemented sequential algorithms for all the mentioned distance transformations; the next chapter presents comparisons between the three methods. The following pseudo code is common to all three techniques except for the distance metric used to compute the distance between featured and non-featured pixels.

maskSize = size of the scan mask
bandWidth = (maskSize + 1) / 2    // half-width of the mask
imgW = width of the input image
imgH = height of the input image
outImage = createImage(imgW, imgH, RGB)    // create empty image for output

// Initialize the data matrix for the transform.
// distMatrixFromMask is in 1:1 correspondence with the input image.
For i = 0 To imgH - 1
    For j = 0 To imgW - 1
        If the pixel value at position (j, i) is 0 Then
            distMatrixFromMask[i][j] = INFINITY    // infinitely large, or simply very large
        Else
            distMatrixFromMask[i][j] = 0
        End If

// Main transform processing

/* Forward scan: forward propagation */
For i = bandWidth To imgH - bandWidth
    For j = bandWidth To imgW - bandWidth
        distMatrixFromMask[i][j] = minimumForward(i, j, maskSize)

/* Backward scan: backward propagation */
For i = imgH - bandWidth DownTo bandWidth - 1
    For j = imgW - bandWidth - 1 DownTo bandWidth - 1
        prevSt = distMatrixFromMask[i][j]    // previous value
        curSt = minimumBackward(i, j, maskSize)    // expected distance value via nearest neighbors
        If prevSt == INFINITY Then
            distMatrixFromMask[i][j] = curSt
        Else If prevSt > curSt Then
            distMatrixFromMask[i][j] = curSt

5.5 Making the output image

Once the main process, the transformation algorithm itself, has been performed, the output image must be produced. The following pseudo code resets the pixel value of each pixel in the image after the distance transformation, using the distance matrix.


For i = 0 To imgH - 1
    For j = 0 To imgW - 1
        // set the image pixel value according to distMatrixFromMask
        pixelInfo = distMatrixFromMask[i][j]
        If pixelInfo == INFINITY Then
            set pixel of outImage at (j, i) to 0
        Else
            set pixel of outImage at (j, i) to pixelInfo
return outImage
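The pseudo code above maps each distance value directly to a gray level. A small plain-Java sketch of that mapping follows; the clamp to 255 is my addition so that large distances still fit in a gray-scale pixel, and the names are my own:

```java
// Sketch of the output stage: each distance value becomes the gray
// level of the corresponding output pixel, clamped to 0-255.
class OutputImage {
    static int toGray(double distance) {
        if (Double.isInfinite(distance)) return 0; // unreachable pixels stay black
        return (int) Math.min(255L, Math.round(distance));
    }
}
```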

5.6 Platform for the Implementation

The implementation was done on my personal laptop. I downloaded the Processing software, which is open source and free of cost, from www.processing.org, and used Processing version 3.2.3 with the Java library to implement the three distance transformation techniques.

I used a Dell Inspiron 5447 laptop running Windows 10 Home edition, version 1607, OS build 14393.1066. The processor is an Intel(R) Core(TM) i5-4210U CPU @ 1.70GHz 2.40GHz, with 6.00 GB of installed RAM on a 64-bit operating system (x64-based processor). Execution of the code on this system went smoothly.


Chapter 6

6.1 Results and comparison

To test the algorithms, a set of images was used and the results, images created from the distance maps output by each technique, were compared. The test input images were chosen with increasing complexity, from simple to complicated. Test input images 1 and 2 were created by me in Paint; the remaining three were downloaded from Google Images [18]. All the tests in this section run the three distance transformation techniques on each input and show the output of each technique.

6.1 Test 1

For this test I chose a binary image that I drew in Paint. While discussing the project, Dr. Semwal explained distance transformations to me on paper with an image much like this one, as we wanted to observe the distances from the edges of an image. Even though the input is already binary, the conversion algorithm is still applied to it; its output is identical to the input.

Input binary image for Test1

Fig. 10. Input Image for Test1


Chessboard

Fig. 11. Result of the Chess board distance transformation for Test1

Figure 11 above shows the resulting image after applying the chess board distance transformation to the Test1 binary image. In the output, the left image is the input image as displayed after the digital-to-binary conversion. On the right-hand side is the final output after the chessboard distance transformation, showing the different distances from non-featured pixels to the nearest featured pixel.

City block

Figure 12 below shows the output after applying the city block distance transformation algorithm: the input binary image for Test1 on the left-hand side and the final city block distance transformation on the right-hand side.


Fig. 12. Result of the City block distance transformation for Test1

Euclidean

Fig. 13. Result of the Euclidean distance transformation for Test1

Figure 13 shows the resulting image after the Euclidean distance transformation is applied. As we can see, the output of this method is very smooth compared to the two methods mentioned previously.
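One reason for the smoothness, and for the longer execution times reported later, is that the Euclidean metric measures the true straight-line distance. A brute-force plain-Java sketch (purely illustrative; the project's Processing code need not use this exact scan) computes it by checking every featured pixel for every grid cell, which is far more work per pixel than a chamfer pass:

```java
// Brute-force exact Euclidean distance transform: for each cell, scan all
// featured pixels and keep the smallest straight-line distance.
// Illustrative only; shown to explain why this metric costs more.
class EuclideanDT {
    static double[][] transform(int[][] img) {
        int h = img.length, w = img[0].length;
        double[][] d = new double[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                double best = Double.MAX_VALUE;
                // Inner scan over every featured pixel: O(h*w) per cell.
                for (int fy = 0; fy < h; fy++)
                    for (int fx = 0; fx < w; fx++)
                        if (img[fy][fx] == 1)
                            best = Math.min(best, Math.hypot(fy - y, fx - x));
                d[y][x] = best;
            }
        return d;
    }

    public static void main(String[] args) {
        int[][] img = new int[3][3];
        img[1][1] = 1; // featured pixel at the centre
        System.out.println(EuclideanDT.transform(img)[0][0]); // sqrt(2)
    }
}
```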


6.3 Test 2

For Test 2 I also chose a binary image, shown in Figure 14. The rhombus, square, and circle are the shapes normally used to represent the city block, chessboard, and Euclidean distance metrics respectively. In this test I want to observe the resulting images when the distance transformations are applied to these three shapes.
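The correspondence between the shapes and the metrics follows directly from the distance formulas: the set of points equidistant from a centre forms a rhombus under city block distance, a square under chessboard distance, and a circle under Euclidean distance. A small illustrative comparison of the three formulas (class and method names are ours, not from the project code):

```java
// The three distance metrics used in this report, compared side by side
// for a single pair of pixel coordinates.
class Metrics {
    static int cityBlock(int x1, int y1, int x2, int y2) {
        return Math.abs(x1 - x2) + Math.abs(y1 - y2);     // |dx| + |dy|
    }
    static int chessboard(int x1, int y1, int x2, int y2) {
        return Math.max(Math.abs(x1 - x2), Math.abs(y1 - y2)); // max(|dx|, |dy|)
    }
    static double euclidean(int x1, int y1, int x2, int y2) {
        return Math.hypot(x1 - x2, y1 - y2);              // sqrt(dx^2 + dy^2)
    }

    public static void main(String[] args) {
        System.out.println(cityBlock(0, 0, 3, 4));  // prints 7
        System.out.println(chessboard(0, 0, 3, 4)); // prints 4
        System.out.println(euclidean(0, 0, 3, 4));  // prints 5.0
    }
}
```

For the same pair of pixels, city block always gives the largest value and chessboard the smallest, with Euclidean in between; this is the approximation gap the later timing discussion refers to.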

Input

Fig. 14. Input Image for Test2

Chessboard

The below Figure 15 shows, on the left-hand side, the input binary image after the conversion algorithm is applied, and on the right-hand side, the chessboard distance transformation output.

As we can see in Fig. 15, the final output image displays chessboard-like patterns, which can be observed clearly around the outer part of the three shapes.


Fig. 15. Result of the Chess board distance transformation for Test2

City block

The below Figure 16 shows the result of the city block distance transformation on the Test 2 input image. The resulting distance map shows the city block distances in the output image.

Fig. 16. Result of the City block distance transformation for Test2


Euclidean

Fig. 17. Result of the Euclidean distance transformation for Test2

The above Figure 17 shows the result of the Euclidean distance transformation on the Test 2 image, and we can see the nearly noise-free output here. This is because its distance metric is the straight-line distance between pixels.

6.4 Test 3

Input

Fig. 18. Input Image for Test3


The above Figure 18 is the input image for Test 3; I downloaded this picture from the Freepik website [18]. With this binary image we want to observe the outputs of the three distance transformation techniques, and in particular the medial axis, which this image approximates well in the output.

Chessboard

Fig. 19. Result of the Chess board distance transformation for Test3

The above Figure 19 shows the result of the chessboard distance transformation for the Test 3 input binary image. With the chessboard distance transformation we see plenty of noise in the output image, and we can see the medial axis inside the human shape.

City block

The below Figure 20 shows the result of the city block distance transformation for the Test 3 input binary image. In this output, produced by the city block distance metric, we can see the medial axis inside the human body.


Fig. 20. Result of the City block distance transformation for Test3

Euclidean

Fig. 21. Result of the Euclidean distance transformation for Test3


The above Figure 21 shows the Euclidean distance transformation output for the Test 3 input binary image. In the result we can clearly observe the medial axis, and the transformed gray-scale output image has less noise than the other two methods, the chessboard and city block distance transformations.

6.5 Test 4

In Test 4 we take a digital image as input, as shown in Figure 22; we can observe different colors in the image. Here the digital-to-binary conversion algorithm converts the input digital image into a binary image with only black and white pixels. We chose this image in order to observe its complicated structure, as you can see from the lines and the different tile shapes in the image.

Input

Fig. 22. Input Image for Test4

The below Figure 23 shows the output of the digital-to-binary conversion as well as the chessboard distance transformation. Because of its R, G, B values, the binary-converted image is not complex to perform the distance transformation on.
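The digital-to-binary conversion referred to here can be sketched as a simple luminance threshold over the R, G, B values. The cutoff value, the choice of dark pixels as featured, and the names below are illustrative assumptions, not necessarily the exact rule used in the project's Processing sketches:

```java
// Hypothetical digital-to-binary conversion by luminance thresholding.
// rgb[y][x] holds {R, G, B} in 0..255; pixels darker than the threshold
// become featured (1), the rest non-featured (0). Illustrative only.
class BinaryThreshold {
    static int[][] toBinary(int[][][] rgb, int threshold) {
        int h = rgb.length, w = rgb[0].length;
        int[][] bin = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                // Average the three channels as a crude luminance value.
                int lum = (rgb[y][x][0] + rgb[y][x][1] + rgb[y][x][2]) / 3;
                bin[y][x] = (lum < threshold) ? 1 : 0;
            }
        return bin;
    }

    public static void main(String[] args) {
        int[][][] rgb = {{{0, 0, 0}, {255, 255, 255}}}; // one black, one white pixel
        int[][] bin = toBinary(rgb, 128);
        System.out.println(bin[0][0] + " " + bin[0][1]); // prints "1 0"
    }
}
```

As Test 5 later shows, a fixed threshold like this is sensitive to image brightness, which is why a poorly separated image produces a poor binary conversion and therefore a poor distance map.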


Chessboard

Fig. 23. Result of the Chess board distance transformation for Test4

City block

Fig. 24. Result of the City block distance transformation for Test4


Euclidean

Fig. 25. Result of the Euclidean distance transformation for Test4

The above Figure 24 and Figure 25 show the results of the city block and Euclidean distance transformations, respectively, for Test 4. The left-hand side image shows the output of the digital-to-binary conversion; on the right-hand side are the resulting gray-scale images for the city block and Euclidean distance transformations.

6.6 Test 5

In this section we conduct Test 5. The input digital image is shown in Figure 26. I chose this image because of its complexity, and in order to check the conversion algorithm against its brightness.


Input

Fig. 26. Input Image for Test5

Chessboard

Fig .27. Result of the Chess board distance transformation for Test5

The above Figure 27 shows the result of the chessboard distance transformation on the Test 5 input image. Because of poor thresholding during the binary conversion, the output result is poor.


City block

Fig. 28. Result of the City block distance transformation for Test5

Euclidean

Fig. 29. Result of the Euclidean distance transformation for Test5


Figure 28 and Figure 29 show the results of the city block and Euclidean distance transformations. As the image does not have well-separated R, G, B values, it is difficult to observe meaningful output for this image.

The following table shows the execution times of each distance transformation technique for the tests described above. In all five tests the chessboard distance transformation is the fastest and the Euclidean distance transformation is the slowest. The execution time varies between the three techniques according to how their distance metrics are calculated.

Test Number   Chessboard (ms)   City block (ms)   Euclidean (ms)
Test 1        143               176               233
Test 2        137               196               265
Test 3        115               142               237
Test 4        127               165               252
Test 5        122               154               238

Fig. 30. Execution Times for three different techniques on different input images.

In the above Figure 30 we presented the execution time of each distance transformation technique for Tests 1 to 5. From these values we can say that there is not much difference in execution time, since we are measuring in milliseconds; still, in ray-tracing applications even small reductions in computation time matter [3]. Please refer to section 5.6 for the configuration of the computer used for all the coding and execution-time measurements.
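The millisecond timings in Fig. 30 can be taken by reading a clock before and after each transform run. In the Processing sketches, millis() serves the purpose that System.nanoTime() serves in this illustrative plain-Java wrapper (the class name and the dummy workload are ours):

```java
// Minimal sketch of how per-run execution times can be measured in
// milliseconds. The Runnable stands in for one run of a distance
// transformation; illustrative only, not the project's actual harness.
class Timing {
    static long timeMs(Runnable work) {
        long t0 = System.nanoTime();                 // clock read before the run
        work.run();                                  // the transform under test
        return (System.nanoTime() - t0) / 1_000_000; // elapsed milliseconds
    }

    public static void main(String[] args) {
        long ms = timeMs(() -> {
            double s = 0;                            // dummy workload as a placeholder
            for (int i = 0; i < 1_000_000; i++) s += Math.sqrt(i);
        });
        System.out.println("Execution time: " + ms + " ms");
    }
}
```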


6.7 Execution Time Comparison Between Three Distance Transformation Techniques

[Figure: grouped bar chart of execution time in milliseconds (y-axis, 0 to 300) for Test 1 through Test 5 (x-axis), with one bar per technique: chessboard, city block, and Euclidean.]

Fig. 31. Execution Time Comparison

In the above Figure 31 we compare the execution times of the three distance transformation techniques on the five test input images. The comparison shows that the Euclidean distance transformation is slower than the other two techniques, while the chessboard distance transformation is the fastest of the three, because its distance metric is the crudest approximation.

As a future prospect, we could also compare the accuracy of the output images, which we were unable to present here for the time being.


Chapter 7

7.1 Conclusion

We described three distance transform algorithms along with the three different distance measures they use, namely the city block, chessboard, and Euclidean distances. Based on execution time, we conclude that the chessboard and city block techniques are faster than the Euclidean distance transformation. We achieved the implementation of the three distance transformation techniques and compared them by their execution times.

During the implementation and comparison on different images, we could see that distance transforms using the Euclidean distance were the most time consuming, whereas the city block and chessboard transforms were faster.

7.2 Lessons Learned

In this project, we implemented three different distance transformation techniques based on three distance metrics, the chessboard, city block, and Euclidean distances, which requires a strong fundamental knowledge of computer graphics and computer vision. The lessons learned from this project are an understanding of distance transformations as well as the fundamentals of image processing. As a newcomer to the image processing field, this project gave me an opportunity to widen my skills in image processing and to learn various skills in the field of computer science. For better results, we would need to work on standard datasets, which would allow better observations for different applications.

7.3 Future Work

As future work, we recommend implementing new methods that could provide better results, and incorporating into these algorithms masks of larger sizes. We could implement more distance transformation techniques, as well as perform more benchmark calculations such as error comparisons, so that we can find out which algorithm is more accurate. On the other hand, there are parallel algorithms to implement, and we could benchmark the different distance transformation techniques against each other in that setting. I focused my efforts on implementing the distance transformation techniques; my suggestion is that more work is needed in evaluating them.


References

[1]. G. Borgefors, “Distance transformations in arbitrary dimensions”. Comput. Vision, Graphics Image Process. 27, 1984, pp. 321-345.

[2]. G. Borgefors, “Distance transformations in digital images”, Computer Vision, Graphics, Image Processing, 34, 1986, pp. 344-371.

[3] S. K. Semwal and H. Kvarnstrom. Directed Safe Zones and the Dual Extend Algorithms for Efficient Grid Tracing during Ray Tracing. In Proceedings of Graphics Interface '97, pp. 76–87, May 1997.

[4]. G Borgefors, "Hierarchical chamfer matching: a parametric edge matching algorithm", IEEE Trans PAMI, vol. 10, no. 6, pp. 849-865, 1988

[5]. Grevera, G.J.: Distance transform algorithms and their implementation and evaluation. In: Deformable Models, pp 33–60, 2007.

[6]. Meijster, A., Roerdink, J. and Hesselink, W. “A General Algorithm for Computing Distance Transforms in Linear Time”. Math. Morph. and Its Appls. to Image and Signal Proc., pp. 331–340. Dordrecht: Kluwer Academic Publishers, 2000.

[7]. E. L. van den Broek, T. E. Schouten, "Distance transforms: Academics versus industry", Recent Patents Comput. Sci., vol. 4, no. 1, pp. 1-15, 2011.

[8]. Mark W. Jones, J. Andreas Baerentzen, Milos Sramek, 2006. “3D Distance Fields: A Survey of Techniques and Applications”. IEEE Transactions on Visualization and Computer Graphics, v.12 n.4, p.581-599, July 2006.

[9]. H. Breu, J. Gil, D. Kirkpatrick, M. Werman, "Linear time Euclidean Distance Transform Algorithms", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 17, pp. 529-533, 1995

[10]. Calvin R. Maurer, Jr., Rensheng Qi, Vijay Raghavan, A Linear Time Algorithm for Computing Exact Euclidean Distance Transforms of Binary Images in Arbitrary Dimensions, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.2, p.265-270, February 2003.


[11]. Pedro F. Felzenszwalb, Daniel P. Huttenlocher. “Distance Transform of Sampled Functions”. Tech. Rep., Cornell Computing and Information Science.

[12]. R. Satherley, M.W. Jones, "Hybrid Distance Field Computation", Proc. Conf. Volume Graphics, pp. 195-209, 2001.

[14]. R. Satherley, M.W. Jones, "Vector-City Vector Distance Transform", Computer Vision and Image Understanding, vol. 82, no. 3, pp. 238-254, 2001.

[15]. Morse, Bryan S., "Lecture 4: Thresholding," Brigham Young University, 1998-2000, pp. 1- 5.

[16]. P. K. Sahoo, S. Soltani, A. K. C. Wong, "A survey of thresholding techniques", Comput. Vis. Graph. Image Process., vol. 41, pp. 233-260, 1988.

[17]. Bryan S. Morse, “Lecture 2- Image Processing Review, Neighbors, Connected Components, and Distance “, Brigham Young University, 1998-2000, pp. 1-5.

[18]. Casey Reas, Ben Fry, John Maeda, Processing: A Programming Handbook for Visual Designers and Artists, The MIT Press, 2007.


Appendix: A

This is a short description of this project's software usage. All of the source code was written in Processing, version 3.2.3. For the complete source code, please see Project_DT.rar in the rpeddire folder on the graduate school committee walrus server, at http://walrus.uccs.edu/~gsc/pub/master/rpeddire/.

All of the source code includes explanations, and all the variables defined are easy to understand. Execution time is calculated in milliseconds. All the work done for this project is stored at the above-mentioned link.

Processing can be downloaded from www.processing.org; install it from the exe setup file. Once Processing is installed on your system, you can download the source code from the walrus link. After downloading, you can open the source code and run it very easily.


Appendix: B

A small demonstration of how to run the source code is explained here. As explained in Appendix A, the installation procedure is very simple, just like a normal exe file installation.

Once you are done with the installation, you can run the exe file shown in the figure below.

Fig. 32. Processing exe file

The folders shown in the above Fig. 32 come with the Processing software package and provide support for Processing.


As shown in the below Fig. 33, the sketch environment is where you write your source code.

Fig. 33. Processing sketch environment

Press the run button (the play button) at the top left corner of the Processing window shown in the above figure. Once you execute the code, it displays the output window with the distance transformation image on the right-hand side and the digital-to-binary converted image on the left-hand side. Make sure the input images are in the right folder.

The below Fig. 34 displays the output image in the selected area. The output window size depends on the size declaration in the void setup() function, i.e., the size() call.


Fig. 34. Processing output windows
