A ROBUST LINE DETECTION METHOD USING UNIT GRADIENT VECTORS

BY

VEYHONG CHHOR

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE (ENGINEERING AND TECHNOLOGY) SIRINDHORN INTERNATIONAL INSTITUTE OF TECHNOLOGY THAMMASAT UNIVERSITY ACADEMIC YEAR 2014

Abstract

A ROBUST LINE DETECTION METHOD USING UNIT GRADIENT VECTORS

by

VEYHONG CHHOR

Bachelor of Engineering in Computer Science, Institute of Technology of Cambodia, 2012

In this thesis, we present a robust line detection method using unit gradient vectors (UGVs). The proposed method comprises two parts. In the first part, robust edge detection is performed using UGVs. Since UGVs are a feature that is essentially invariant to varying lighting conditions, edges can be detected regardless of varying image contrast, even within the same image. In the second part, we employ a gray-scale version of the Hough transform (GHT). The benefit of using the GHT is that we do not need to adjust a threshold value to detect the edges. With UGV-based edge detection, peaks in the Hough space depend largely on the lengths of the lines. Thus, we can easily set a proper threshold value in the Hough space. Simulation results show that the proposed method can detect lines successfully even in an image captured under non-uniform illumination.

Keywords: Line detection, Illumination-invariant, Hough transform, Gray-scale Hough transform


Acknowledgements

Foremost, I would like to express my sincere gratitude to my supervisor, Assoc. Prof. Dr. Toshiaki Kondo, for his encouragement, useful critiques, and insightful guidance throughout my study and research, which motivated me to devote myself to this research. Without his guidance, this thesis would not have been a success. I truly express my gratitude to the committee members, Assoc. Prof. Dr. Waree Kongprawechnon, Asst. Prof. Dr. Itthisek Nilkhamhang, and Asst. Prof. Dr. Supatana Auethavekiat, for their inspiration and enlightened suggestions. I am also grateful to Assoc. Prof. Dr. Kazunori Kotani for sponsoring an internship opportunity in his laboratory at the Japan Advanced Institute of Science and Technology (JAIST). My hearty thanks to all the Sirindhorn International Institute of Technology (SIIT) faculty members and staff for their kind help during my study and research at SIIT. I also thank my friends at SIIT for their love and company. My wholehearted thanks to all of my family members, especially my beloved mother, for her love, affection, and both financial and moral support during my studies at the Sirindhorn International Institute of Technology, Thammasat University, Thailand.


Table of Contents

Chapter Title

Signature Page
Abstract
Acknowledgements
Table of Contents
List of Tables
List of Figures
List of Acronyms

1 Introduction
  1.1 General Information
  1.2 Statement of Problem
  1.3 Purpose of Study

2 Review of the Literature
  2.1 Development of the Hough Transform
    2.1.1 Robust Hough Transform
    2.1.2 Development of Applications Using the HT
    2.1.3 Development of the HT in the Hough Space (HT Accumulator Space)
  2.2 Gradient Orientation Information
  2.3 Gray-scale Hough Transform

3 Design Method and Procedures
  3.1 Traditional Hough Transform with Gradient Orientation Information
  3.2 Gray-scale Hough Transform with Gradient Orientation Information

4 Experimental Results and Discussion
  4.1 Comparison between traditional and unit-gradient-vector-based edge detection methods
    4.1.1 Experiment on artificial images
    4.1.2 Experiment on real images
  4.2 Comparison between the standard HT and the gray-scale HT
    4.2.1 Experiment on artificial images
    4.2.2 Experiment on real images

5 Conclusion

References

List of Tables

Table

4.1 Comparison between the edge detection results of the HT and the GT
4.2 Comparison of the proposed method and the GT
4.3 Comparison of the HT and the proposed method

List of Figures

Figure

2.1 Hough Transform

3.1 First proposed method
3.2 Second proposed method

4.1 Experimental results on high- and low-contrast images
4.2 Lines detected in the low-contrast image by varying the threshold value
4.3 Experimental results on shaded and non-shaded images using the HT
4.4 Experimental results on shaded and non-shaded images using UGVs
4.5 Lane image with virtual shading
4.6 An edge image obtained by the traditional HT
4.7 An edge image obtained by the proposed method
4.8 Experimental results of the traditional HT and the proposed method
4.9 Lines detected by optimizing the threshold value
4.10 Original artificial image without a shaded area
4.11 The edges extracted from all of the artificial images
4.12 A high-contrast shaded image with the HT and the proposed method

List of Acronyms

ADAS  Advanced Driver Assistance System
BG    Background
BW    Black and White
FN    False Negative
FP    False Positive
GSHT  Gray-Scale Hough Transform
GT    Ground Truth
GOSTs Gradient Orientation Structure Tensors
GOI   Gradient Orientation Information
HP    Hough Peak
HS    Hough Space
HT    Hough Transform
LPF   Low-Pass Filter
ROI   Region of Interest
TN    True Negative
TP    True Positive
UGVs  Unit Gradient Vectors

Chapter 1 Introduction

1.1 General Information

Line detection is a very important and essential task in digital image processing, because most objects are constructed from line segments, and most regions of interest in an image are also bounded by lines. For example, to detect lane marks for an advanced driver assistance system (ADAS), or streets, buildings, and other objects of interest in images from satellites or autopilot airplanes, we must first extract the line segments and only then identify the objects. Line detection involves two main tasks: first, we extract edges from a gray-scale image, and then we use the Hough transform to extract the lines. Edge extraction converts a gray-scale image into a black-and-white (BW), or binary, image. After edges are extracted we obtain a binary image, but some information in the original image may be lost; edge detection is therefore a critical task in digital image processing. To extract lines successfully from a gray-scale image with the Hough transform, we use gradient orientation information, or unit gradient vectors (UGVs), instead of raw pixel intensities, because UGVs are known to be robust to illumination variation within an image. In computer vision and digital image processing, the Hough transform is one of the most widely used algorithms and a robust method for detecting unconnected straight lines in noisy images [2, 7]. Edges are first detected in an image before performing the Hough transform. The detected edges are then transformed into sinusoidal curves in the parameter space. The sinusoidal curves of collinear points intersect repeatedly at the same spot, producing a prominent peak in the parameter space. The lines can be determined by reading the coordinates of the peaks in the parameter space.

1.2 Statement of Problem

In practice, images are often noisy when they are captured at night or with a low-quality camera, and some images are not taken properly (blurred by hand shake). All of these are practical problems that we face every day, and they often produce shaded images. Hence, the shaded image is a practical obstacle to extracting edges from an image. For example, in the image shown in Figure 4.5, a shadow lies across the middle of the lane. After edge detection, the image is completely binary (black and white) and ready for the Hough transform, but much information from the original image, including the region of interest, is lost, and after edge detection this lost information cannot be recovered. The line detection result is therefore incorrect. In the worst case, the Hough transform cannot find any lines at all, and even if we manually optimize the Hough transform, we still cannot detect all the lines.

In short, all procedures start with edge detection, which is equivalent to detecting large gradient magnitudes in an image. This indicates that line detection becomes more difficult in a low-contrast image unless the threshold value for detecting edges is adaptively adjusted. Selecting a proper threshold value is not always an easy task, and it is far more difficult when contrast varies within the image. In this thesis, we propose a preprocessing step for the Hough transform to overcome illumination variation within an image. Since gradient orientation information is known to be robust to varying illumination within an image [16], we aim to add one more essential feature to the Hough transform: illumination invariance.

1.3 Purpose of Study

Our purpose of study is to develop a new method which:

• detect edges robustly under low and high contrast and under varying illumination within an image,

• extend the proposed method to line detection by combining it with the Hough transform, and

• apply these methods in real applications such as robust lane-mark detection for advanced driver assistance systems (ADAS).

Chapter 2 Review of the Literature

In Chapter 2, we describe the literature related to our proposed method. We divide the literature review into three main parts. First, we cover the development history of the Hough transform (HT) and the standard HT that we use in the experiments. Second, we describe the robust feature that is added into the proposed method to make it invariant to varying illumination within an image. In the last section, we describe the gray-scale Hough transform, an improved version of the HT that eliminates the threshold-selection phase.

2.1 Development of the Hough Transform

The HT is a feature extraction technique used in image analysis, computer vision, and digital image processing [2]. The purpose of the technique is to find imperfect instances of objects within a certain class of shapes by a voting procedure. This voting procedure is carried out in a parameter space, from which object candidates are obtained as local maxima in a so-called accumulator space that is explicitly constructed by the algorithm for computing the HT.

The classical HT was concerned with the identification of lines in the image, and it was later extended to identifying positions of arbitrary shapes, most commonly circles or ellipses. The HT as it is universally used today was invented by Richard Duda and Peter Hart in 1972 [1, 7], who called it the "generalized Hough transform" after the related 1962 patent of Paul Hough [2]. The standard line-extraction HT [2] has remained the most popular variant ever since. The transform was popularized in the computer vision community by Dana H. Ballard through a 1981 journal article titled "Generalizing the HT to detect arbitrary shapes".

In 1987, J. Illingworth and J. Kittler [10] published a survey collecting the research papers related to the HT and similar work on line extraction. The survey indicates that, at that time, the HT was an important method for many computer vision tasks: it is very robust in the presence of extra data (extra data here motivates limiting the area of interest and eliminating unimportant regions such as the background) and can cope well with situations where some items of data are missing. Nevertheless, there were traditionally difficulties that prevented it from being enthusiastically adopted for specific problems. Given the limited computing power of the time, the major problem of the HT was that it required heavy computation and large storage for high-dimensional accumulator arrays. However, the survey notes that there was already much research focusing on very fast and/or space-efficient digital implementations, and the relationship to other transforms had also led to real-time analog implementations. Significant initial work had started on topics such as peak detection and enhancement methods, the detailed analysis of accumulator distributions for both noise and real image features, low-dimensional parameterizations of curves, and new specialized computer architectures for the HT.

In the 1990s, many researchers focused on the randomized Hough transform (RHT). A paper named "A new curve detection method: Randomized Hough transform (RHT)", published by L. Xu, E. Oja, and P. Kultanen, is representative of the randomized HT. According to [17] and [24], the randomized HT is an improved algorithm that can detect various analytically defined geometric shapes in a binary image by using a probabilistic method. Furthermore, an extended algorithm combining the RHT and the generalized Hough transform, called the randomized generalized HT, was used in the experiments of Ping Fu Fung, Wing Sze Lee, and I. King for 2-D gray-scale object detection [8]. This combination can detect arbitrary objects of various scales and orientations in gray-level images with higher speed, lower storage requirements, higher accuracy, and arbitrary resolution compared to the generalized HT and the RHT alone. Another extended version of the RHT, called the probabilistic HT, votes into the Hough space in a similar probabilistic manner [14]: a random subset of points is first selected, and the HT is then performed on that subset. Later, the Progressive Probabilistic HT (PPHT) was proposed by J. Matas, C. Galambos, and J. Kittler in 1999 [11], in which the HT is performed on a pre-selected fraction of input points. The PPHT minimizes the amount of computation needed to detect lines by exploiting the difference in the fraction of votes needed to detect reliable lines with different numbers of supporting points. The fraction of points used for voting need not be specified ad hoc or using prior knowledge, as in the probabilistic HT; it is a function of the inherent complexity of the input data.

2.1.1 Robust Hough Transform

In 1998, J. Kim and R. Krishnapuram [13] proposed a robust Hough transform that addresses many of the problems associated with the conventional HT. They solved the bin-splitting problem by using robust clustering for peak detection; the accuracy problem by means of an analog Hough space (HS); the bias problem by the multiple-point method combined with random sampling; and the spurious-peak problem by using a validity measure. In their paper, most improvements are made in the Hough space.

2.1.2 Development of Applications Using the HT

Since the HT is one of the most popular methods for extracting lines in an image, it has been used in many important areas of digital image processing. In this section, we present the use of the HT in medical image processing, robust iris localization using the circle HT, object detection, and robust lane detection.

The research paper of R. Okada [21] on object detection using the HT presented a part-based approach for detecting objects with large variations in appearance. Local image patches are extracted as local features, both from the object and from the background in training images, to learn an object part model discriminatively. The object part model discriminates whether each local feature is an object part or not. Based on the discrimination results, each local feature casts probabilistic votes for the object location and size, which are learned from the training images. The object part model also requires regression performance for predicting the object location and size through the voting procedure. The experimental results on hand detection with large pose variations show that this approach outperforms the conventional generalized Hough transform.

In 2010, G. Liu, F. Worgotter, and I. Markelic proposed a method combining the Statistical Hough Transform and a Particle Filter for robust lane detection and tracking [18]. To exploit the benefits of both for robust lane tracking, the paper introduced the Statistical Hough Transform and the Particle Filter and showed their application to lane detection and tracking on Inverse Perspective Mapping images.

The circle Hough transform is another version of the HT, extended for use in medical image processing applications. In 2010, Jaehan Koh, V. Govindaraju, and V. Chaudhary developed a robust iris localization method using an active contour model and the Hough transform [15]. Iris segmentation is one of the crucial steps in building an iris recognition system, since it significantly affects the accuracy of iris matching. This segmentation should accurately extract the iris region despite the presence of noise such as varying pupil sizes, shadows, specular reflections, and highlights. Considering these obstacles, several attempts have been made at robust iris localization and segmentation.

In 2013, D. Herumurti, K. Uchimura, G. Koutaki, and T. Uemura [9] presented an approach to road extraction in urban areas that combines the Hough transform and region growing. They used Digital Surface Model (DSM) data, which is based on the elevation of the land surface, buildings, and so on, to overcome the disadvantages of aerial photo images. The main problem in extracting roads in urban areas from an aerial photo is the shadows cast by buildings, which lead to inappropriate road segments. Another benefit of using DSM data in urban areas is the significant difference in elevation between roads and buildings: a simple thresholding of this data can already extract some of the road. The HT is used in this paper to improve the detection and recognition of the road as a line and to derive a better threshold from this information. Furthermore, they used a seeded region growing method to expand the road network; the seeds are obtained from the perimeter of the threshold segmentation produced by the Hough transform. Finally, post-processing removes false roads by employing morphological operations. The experimental results show that the proposed method improves the quality of the result with very good performance.

2.1.3 Development of the HT in the Hough Space (HT Accumulator Space)

In 2010, D. G. Duan and M. Xie [6] proposed improved versions of the HT named the modified Hough transform (MHT) and the windowed random Hough transform (WRHT). These approaches are designed to solve the uncertainty problem caused by noise. The paper introduced an improved Hough transform for line detection, based on the MHT and the windowed RHT, that can resolve the uncertain information caused by noise. The paper's experimental results indicate that the algorithm improves the precision of line detection and is better suited to real-time requirements. Although there are many major improvements over the traditional HT, their computational costs are intensive.

Finally, after reviewing much of the literature on improvements to the standard HT, we decided to keep using the standard HT, since most research focuses on improving the complexity of the Hough space to extract the lines [3, 4, 6, 13, 19, 20, 22]. On the other hand, we know that a binary image is obtained by applying a specific threshold value to a gray-scale image, which causes the loss of some important features of the original gray-scale image. Therefore, in this thesis, our motivation is to improve the preprocessing of the HT, also called the edge extraction process. The standard HT that we use in the experiments can be considered a point-to-curve transformation, and it is used to detect the parameters of straight lines in binary images. A straight line is described by its polar representation as

r = x_i cos θ + y_i sin θ        (2.1)

where (x_i, y_i) are the coordinates of the pixels of the lines. In a binary image, all pixels (x_i, y_i) of a line correspond to a single point (r, θ) in the HT space. Additionally, any point (x_i, y_i) in the image space is mapped into a sinusoidal curve in the HT space. Thus, the HT can be considered as a point-to-curve transformation. In the discrete case, the Hough space is implemented through an accumulator array C. If 1/sf_θ is the step for the variable θ, then θ ∈ [−90°, −90° + 1/sf_θ, ..., 180°],

θ_C = θ · sf_θ        (2.2)

and

θ̃ = Round(θ_C)        (2.3)

where the Round function gives the nearest integer value. Similarly, r ∈ [r_1, r_1 + 1/sf_r, ..., r_2], r_C = r · sf_r, and r̃ = Round(r_C), where 1/sf_r is the step for the variable r, and r_1 and r_2 denote the minimum and maximum values of r, respectively. Using the above definitions, it is easy to show that in the conventional HT (CHT) each pixel of the image is mapped into a set of points in the accumulator array C. These points belong to a sinusoidal curve and increase the contents of the mapped accumulator cells by one. Obviously, if the image includes a straight line, then the points of the straight line constitute a local maximum in C. Using the coordinates (r, θ) of this local maximum, we can detect the exact polar parameters of the desired straight line, but unfortunately not the exact positions of its pixels, which is important for many applications. This procedure becomes more complex when the image contains many lines.
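As a concrete illustration, the voting and quantization of Eqs. (2.1)-(2.3) can be sketched in NumPy as follows. This is a minimal sketch with our own function and parameter names (not a library API), assuming a 1-pixel r step and a θ range of [−90°, 90°):

```python
import numpy as np

def hough_accumulator(edge_points, rho_max, n_theta=180):
    """Vote edge pixels into an accumulator array C over (r, theta).

    edge_points: iterable of (x, y) pixel coordinates.
    rho_max:     largest |r| to represent (e.g. the image diagonal).
    """
    n_rho = 2 * int(rho_max) + 1                      # 1-pixel r steps
    thetas = np.deg2rad(np.linspace(-90.0, 89.0, n_theta))
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    C = np.zeros((n_rho, n_theta), dtype=np.int64)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    cols = np.arange(n_theta)
    for x, y in edge_points:
        rho = x * cos_t + y * sin_t                   # Eq. (2.1): one sinusoid
        # Quantize r to the nearest accumulator row (the Round step).
        rows = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        C[rows, cols] += 1                            # one vote per cell
    return C, rhos, thetas
```

For a set of collinear pixels, the cell at the line's polar parameters receives one vote per pixel, so a peak search over C recovers (r, θ).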

2.2 Gradient Orientation Information

Regarding the main concern of the proposed method, robust line detection, robustness to intensity variation within an image is very important. In 2007, T. Kondo [16] proposed a motion estimation method using gradient orientation, which showed that unit gradient vectors are invariant to global changes of image intensity. The technique differs from conventional motion estimation techniques using gradient structure tensors (GSTs) in that gradient orientation structure tensors (GOSTs) are based only on gradient orientation information and are independent of the gradient magnitude. Since gradient orientation is invariant to global changes of image intensities, the method performs motion estimation robustly regardless of time-varying image intensities, often caused by irregular lighting conditions. Below is a description of how gradient orientation information is extracted from an image.

Figure 2.1: (a) (r, θ) parameterization of a line in the xy-plane. (b) Sinusoidal curves in the rθ-plane; the point of intersection (r_0, θ_0) corresponds to the line passing through points (x_i, y_i) and (x_j, y_j) in the xy-plane. (c) Division of the rθ-plane into accumulator cells.

Let I(x,y) be the image intensities at pixel coordinates (x,y). The gradient vectors of I(x,y) are approximated by partial derivatives:

I_x(x, y) = I(x, y) * k_x
I_y(x, y) = I(x, y) * k_y        (2.4)

where the symbol * denotes convolution, and k_x and k_y are first-derivative operators in the x and y directions, respectively. The Sobel operators may be used as the first-derivative operators. Gradient orientation information can be obtained as unit gradient vectors, computed by dividing the gradient vectors by their magnitudes:

n_x(x, y) = I_x(x, y) / √(I_x²(x, y) + I_y²(x, y))
n_y(x, y) = I_y(x, y) / √(I_x²(x, y) + I_y²(x, y))        (2.5)

where we assign zeros to n_x and n_y if the denominator is very small, to exclude plain regions in an image.
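Equations (2.4) and (2.5) can be sketched as follows; this is a minimal NumPy-only sketch with our own helper names (in practice one would use a library convolution such as scipy.ndimage):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv3(img, k):
    """3x3 'same' convolution with zero padding (Eq. (2.4))."""
    p, kf = np.pad(img.astype(float), 1), k[::-1, ::-1]
    return sum(kf[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3))

def unit_gradient_vectors(img, eps=1e-6):
    """Eq. (2.5): gradients divided by their magnitude; zero in flat regions."""
    ix, iy = conv3(img, SOBEL_X), conv3(img, SOBEL_Y)
    mag = np.sqrt(ix ** 2 + iy ** 2)
    nx = np.where(mag > eps, ix / np.maximum(mag, eps), 0.0)
    ny = np.where(mag > eps, iy / np.maximum(mag, eps), 0.0)
    return nx, ny
```

Scaling the image intensities by any positive constant leaves n_x and n_y unchanged, which is exactly the illumination invariance exploited by the proposed method.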

2.3 Gray scale Hough Transform

Very few researchers have worked on the gray-scale Hough transform. In short, the gray-scale Hough transform is an extended version of the Hough transform. The difference is that the standard Hough transform takes a binary, black-and-white (BW), image as input and maps it into the Hough space; since the image is first converted from color to gray scale, converting the gray-scale image to BW requires selecting a threshold value. A gray-scale Hough transform, fortunately, lets us extract lines directly from the gray-scale image.

During the literature review, we separated the gray-scale Hough transform into two main types. The first type was developed in 1996 by Lo and Tsai [5], who proposed a method that extracts the parameters of gray-scale lines directly from the gray-scale image into a gray Hough space without using any threshold value, unlike the traditional HT. However, it is expensive in terms of storage space, since it needs a higher-dimensional HT space. This approach to gray linear band detection uses a new extension of the conventional HT (CHT), namely the GHT, without the preprocessing steps of thresholding and edge detection (or thinning). The approach is robust to gray scales, noise, and discontinuity of lines in an image, although the space complexity is one dimension higher than that of the CHT. The CHT can be viewed as a special case of the GHT with input shapes being one pixel wide with two gray levels. We can conclude that in this method every pixel in the image casts one vote in the Hough space. The method is significantly slower, since the third dimension of the Hough space is very large [5].

In 1998, B. Uma Shankar, C. A. Murthy, and S. K. Pal proposed a new gray-scale Hough transform technique for detecting homogeneous line segments directly from gray-level images. The algorithm is able to extract gray-level regions irrespective of their shape and size. The "region" in this paper refers to line segments, with constraints on their length and variance. The effectiveness of the method is demonstrated on Indian Remote-sensing Satellite (IRS) images. There is no restriction on the shape of the "region" thus obtained; the only restriction, on the size of the region, is a weak one. The method does not need any prior representation of the shape of the region to be detected and can therefore extract regions of arbitrary shape and size [23].

Second, in 2000, A. L. Kesidis and N. Papamarkos proposed a method that maps the data in the gray Hough space back into a gray-scale image. The proposed gray-scale inverse Hough transform (GIHT) algorithm is suitable for detecting and filtering straight lines. The line filtering procedure allows the detection of gray-scale lines according to conditions on the polar parameters, the gray-scale value, and the size of the lines. The method neither splits the original gray-scale image into bi-level images nor uses a halftoned version of the original; it uses the gray-scale distribution information stored in the HT space. Due to the inversion algorithm, the filtered lines are detected exactly as they appear in the original image. The GIHT was extensively tested on many gray-scale images, and the experimental results confirm its efficiency. In short, this method inverts the information in the gray Hough space back to the original gray-scale image. Since this technique is significantly faster than the previous one, we decided to use part of this paper in our experiments. The gray-scale Hough transform in this method votes in the gray Hough space according to the gray-scale value of each pixel. Hence, both performance and accuracy are better, since a longer line produces a higher peak than in the first method, by Lo and Tsai [12].

Chapter 3 Design Method and Procedures

This chapter describes the methodology and procedure of the proposed method, a robust line detection using unit gradient vectors (UGVs).

3.1 Traditional Hough transform with Gradient Orientation Information

Below is a step-by-step description of the proposed method, as shown in Figure 3.1.

Step 1: An input color image is converted to a gray-scale image, I(x, y).

Step 2: We compute the vertical (x-axis) and horizontal (y-axis) gradients of I(x, y) by convolution with the Sobel operators (k_x and k_y), as shown below.

I_x(x, y) = I(x, y) * k_x
I_y(x, y) = I(x, y) * k_y        (3.1)

Step 3: We divide the gradients by their norms to normalize them, as shown below.

n_x(x, y) = I_x(x, y) / √(I_x²(x, y) + I_y²(x, y))
n_y(x, y) = I_y(x, y) / √(I_x²(x, y) + I_y²(x, y))        (3.2)

where we assign zeros to n_x and n_y if the denominator is very small, to exclude plain regions in an image.

Step 4: We apply a low-pass filter (LPF) to both n_x and n_y separately to extract regions where the UGVs are uniform. We use a mean filter of size 3 by 3 pixels as the LPF.

Step 5: We compute the magnitude of the output of the LPF.

Step 6: We apply a threshold to the magnitude to obtain a binary edge map.

Step 7: Finally, we feed the binary edge map to the traditional HT for detecting lines.

Step 7 consists of three individual parts: the conversion of image pixels into sinusoidal curves in the Hough space, the detection of the peaks in the Hough space, and the line detection itself.

Figure 3.1: First proposed method.
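Steps 2-6 above can be sketched in NumPy as follows. This is a minimal, NumPy-only illustration with our own helper names (conv3, ugv_edge_map) and an illustrative threshold value, not the exact implementation used in the experiments:

```python
import numpy as np

def conv3(img, k):
    """3x3 'same' convolution with zero padding."""
    p, kf = np.pad(img.astype(float), 1), k[::-1, ::-1]
    return sum(kf[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3))

def ugv_edge_map(gray, thresh=0.5, eps=1e-6):
    """Steps 2-6: Sobel -> UGVs -> 3x3 mean filter -> magnitude -> threshold."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ix, iy = conv3(gray, kx), conv3(gray, kx.T)               # Step 2
    mag = np.sqrt(ix ** 2 + iy ** 2)
    nx = np.where(mag > eps, ix / np.maximum(mag, eps), 0.0)  # Step 3: UGVs
    ny = np.where(mag > eps, iy / np.maximum(mag, eps), 0.0)
    lpf = np.full((3, 3), 1.0 / 9.0)
    sx, sy = conv3(nx, lpf), conv3(ny, lpf)                   # Step 4: mean filter
    return np.hypot(sx, sy) > thresh                          # Steps 5-6
```

The resulting binary map can be fed to any standard HT implementation (Step 7). Because the UGVs have unit length, the same threshold works for both high- and low-contrast versions of the same image.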

3.2 Gray-scale Hough Transform with Gradient Orientation Information

Below is a step-by-step description of the proposed method, as shown in Figure 3.2.

Step 1: An input color image is converted to a gray-scale image, I(x, y).

Step 2: We compute the vertical (x-axis) and horizontal (y-axis) gradients of I(x, y) by convolution with the Sobel operators (k_x and k_y), as shown below.

I_x(x, y) = I(x, y) * k_x
I_y(x, y) = I(x, y) * k_y        (3.3)

Step 3: We divide the gradients by their norms to normalize them, as shown below.

n_x(x, y) = I_x(x, y) / √(I_x²(x, y) + I_y²(x, y))
n_y(x, y) = I_y(x, y) / √(I_x²(x, y) + I_y²(x, y))        (3.4)

where we assign zeros to n_x and n_y if the denominator is very small, to exclude plain regions in an image.

Step 4: We apply a low-pass filter (LPF) to both n_x and n_y separately to extract regions where the UGVs are uniform. We use a mean filter of size 3 by 3 pixels as the LPF.

Step 5: We compute the magnitude of the output of the LPF.

Step 6: Finally, we feed the magnitude map to the gray-scale HT for detecting lines.

After Step 5, we skip the threshold value selection (Step 6 of Section 3.1) and feed the output of the unit gradient vectors directly to the gray-scale Hough transform in Step 6, which consists of three individual parts: the conversion of image pixels into sinusoidal curves in the Hough space, the detection of the peaks in the Hough space, and the line detection itself.

Figure 3.2: Second proposed method.
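The weighted voting of Step 6 can be sketched as follows. This is our own minimal sketch of the idea (a real implementation would follow Kesidis and Papamarkos [12] more closely): each pixel votes with its smoothed-UGV magnitude instead of a binary 0/1, so no threshold is needed before the transform.

```python
import numpy as np

def grayscale_hough(weight_map, n_theta=180):
    """Vote every nonzero pixel into C, weighted by its gray value
    (here: the smoothed-UGV magnitude from Step 5)."""
    h, w = weight_map.shape
    rho_max = float(int(np.ceil(np.hypot(h, w))))
    n_rho = 2 * int(rho_max) + 1
    thetas = np.deg2rad(np.linspace(-90.0, 89.0, n_theta))
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    C = np.zeros((n_rho, n_theta))
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    cols = np.arange(n_theta)
    ys, xs = np.nonzero(weight_map)
    for x, y, wgt in zip(xs, ys, weight_map[ys, xs]):
        rho = x * cos_t + y * sin_t
        rows = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        C[rows, cols] += wgt          # weighted vote instead of +1
    return C, rhos, thetas
```

A peak's height is now roughly the line length times its average edge strength, so a single global threshold in the Hough space can separate real lines from clutter.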

Chapter 4 Experimental Results and Discussion

This chapter contains two separate experiments: a comparison between traditional edge detection and unit gradient vectors (UGVs), and a comparison between the standard Hough transform and the gray-scale Hough transform. The Sobel operator is considered the traditional edge detection method in these experiments. We divide the first experiment into two parts: experiments on artificial images and on real images. We also divide the second experiment into the same two parts, but this time we report quantitative comparison metrics such as the F1-score, accuracy, and precision.

4.1 Comparison between traditional and unit-gradient-vector-based edge detection methods

Figure 4.1 shows the lines detected by the traditional Hough transform (HT) and the proposed method. Figures 4.1(a) and 4.1(b) show a high-contrast road image where lines are detected by the traditional HT and the proposed method, respectively. Both techniques successfully detect the lines in the image. Figures 4.1(c) and 4.1(d), on the other hand, show a low-contrast road image with the lines detected by the traditional HT and the proposed method. We use the same parameters irrespective of the different image contrasts. It is obvious that the traditional approach fails to detect lines in such a low-contrast image, while the proposed method performs line detection much better. A traditional edge detection method such as the Canny edge detector can, however, be optimized by selecting the threshold value based on a calculation over the image intensities (i.e., choosing a different threshold for low-intensity and high-intensity images). Thus, in the second experiment, shown in Figure 4.2, we try to obtain the best edge detection result by selecting a different threshold value for the low-contrast image. The result of this experiment shows that lines are detected in a lower-intensity image when we lower the threshold value.

Figure 4.1: (a) A high-contrast image with the lines detected by the traditional Hough transform (HT), (b) the same high-contrast image as (a) with the lines detected by the proposed method, (c) a low-contrast image with the lines detected by the traditional HT, and (d) the same low-contrast image as (c) with the lines detected by the proposed method.

Figure 4.2: Lines detected in the low-contrast image by optimizing the threshold value of the HT for the low-contrast image.

4.1.1 Experiment on artificial images

Figure 4.3 demonstrates the line detection performance of the traditional Hough transform. The first row of Figure 4.3 shows three artificial test images with different types of shading: Figure 4.3(a) is a non-shaded image, while Figures 4.3(b) and 4.3(c) are shaded images. In the second row, Figures 4.3(d), (e), and (f) are the edge images of Figures 4.3(a), (b), and (c), obtained by using the Sobel operator with automatic threshold adjustment. Figure 4.3 shows clearly that the Sobel operators fail to extract the line segments in the shaded regions. We can expect that the traditional Hough transform will probably fail to detect these lines, since most of the line information is lost and cannot be recovered. In other words, Figure 4.3 clearly shows that the traditional approach is susceptible to varying contrast within the image. Conversely, Figure 4.4 demonstrates that the proposed method performs line detection robustly regardless of varying image contrast. Therefore, we can expect that every line can be detected with the proposed method.

Figure 4.3: First row: image (a) is a non-shaded image, and (b) and (c) are shaded images. Second row: images (d), (e), and (f) are the edge images of (a), (b), and (c), obtained by using the Sobel operator with automatic threshold adjustment.

Figure 4.4: First row: image (a) is a non-shaded image, and (b) and (c) are shaded images. Second row: images (d), (e), and (f) are the edge images of (a), (b), and (c), obtained by using the proposed method.

4.1.2 Experiment on real images

Figure 4.5 shows a real image with an artificial shadow cast on the road. The red rectangular box shows the region of interest (ROI) for comparing a traditional edge detection method and the proposed method. The edges detected by the traditional approach and by the proposed method are shown in Figure 4.6 and Figure 4.7, respectively. As is obvious in Figure 4.6, it is not an easy task to detect edges in the shadow because the image contrast is low, resulting in low image gradients. By contrast, Figure 4.7 shows that the proposed technique can detect edges both inside and outside the shadow. There is virtually no negative impact of the shadow on the proposed method. The comparison clearly highlights the advantage of the proposed method over the traditional approach.

Figure 4.5: A lane image with a virtual shadow added as a transparent black overlay. The red rectangle marks the region used in the experiment.

Figure 4.6: An edge image obtained by the Sobel operator.

Figure 4.7: An edge image obtained by using the proposed method.

4.2 Comparison between the standard HT and the gray-scale HT

Figure 4.8 shows the lines detected by the traditional Hough transform (HT) and the proposed method. Figures 4.8(a) and 4.8(b) show a high-contrast road image together with the lines detected by the traditional HT and the proposed method, respectively. Both techniques successfully detect the lines in the image. Figures 4.8(c) and 4.8(d), on the other hand, show a low-contrast road image with the lines detected by the traditional HT and the proposed method. We use the same parameters irrespective of the different image contrasts. It is obvious that the traditional approach fails to detect lines in such a low-contrast image, while the proposed method performs line detection much better.

In the second experiment, we optimized the edge detection of the Sobel operator by using a different threshold value depending on the intensity of the image. As the result in Figure 4.9 shows, lines are then detected in the low-contrast image just as in the high-contrast image. The lines are detected successfully in the low-contrast image with the optimized threshold, but the problem is that we must define a specific threshold value for each image; the optimization algorithm sums the intensity values of all the pixels in the image and derives the threshold value from that sum. This means that the optimization will not work if the illumination is not uniform. This time we separate the experiment into two main parts: first we experiment with artificial images, and second with real images.
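The intensity-based threshold optimization described above can be sketched as follows. The scale factor `k` is a hypothetical parameter; the thesis only states that the threshold is derived from the summed pixel intensities (equivalently, here, their average).

```python
import numpy as np

def adaptive_edge_threshold(image, k=0.5):
    """A sketch of intensity-dependent threshold selection: derive the
    edge threshold from the overall image intensity, so low-contrast
    (dark) images get a lower threshold.  The factor k is a
    hypothetical choice, not a value from the thesis."""
    mean_intensity = image.astype(float).mean()
    return k * mean_intensity
```

Because a single global statistic drives the threshold, this scheme breaks down under non-uniform illumination: a shadowed region and a bright region in the same image would need different thresholds, which is exactly the failure mode the UGV-based approach avoids.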

Figure 4.8: (a) A high-contrast image with the lines detected by the traditional Hough transform (HT), (b) the same high-contrast image as (a) with the lines detected by the proposed method, (c) a low-contrast image with the lines detected by the traditional HT, and (d) the same low-contrast image as (c) with the lines detected by the proposed method.

Figure 4.9: Lines detected in a low-contrast image by optimizing the threshold value for the HT.

4.2.1 Experiment on artificial images

Figure 4.10 shows the artificial images used in this experiment. Figure 4.10(a) is the original image, Figure 4.10(b) is the original image with artificial shade added, and Figure 4.10(c) is the edge image, extracted from the original image, that serves as the ground truth (GT). Figure 4.11 demonstrates the line detection performance of the traditional HT and the proposed method. Figure 4.11(a) shows the edges detected using the Sobel operators, and Figure 4.11(d) shows the edges detected using the UGVs map. Figures 4.11(b) and (e) represent the edges extracted from Figures 4.11(a) and (d) in Hough space and gray-scale Hough space, respectively. Figure 4.11(c) shows that lines are detected in the high-contrast area only, whereas Figure 4.11(f) shows that the proposed method can detect lines in both the high- and low-contrast areas. Figures 4.11(a), (b), and (c) clearly show that the traditional approach is susceptible to varying contrast within the image. Conversely, Figures 4.11(d), (e), and (f) demonstrate that the proposed method performs line detection robustly regardless of varying image contrast.

Table 4.1 compares the line detection result of the traditional HT (Figure 4.11(c)) with the GT image (Figure 4.10(c)), and Table 4.2 compares the line detection result of the proposed method (Figure 4.11(f)) with the GT image. Table 4.1 shows that the number of true positives (TP) is 1149, false negatives (FN) 2521, false positives (FP) 0, and true negatives (TN) 86330, while Table 4.2 shows a TP count of 3630, FN 40, FP 0, and TN 86330. From the values in both tables, we can see that the TP count in Table 4.2 is much higher than in Table 4.1, and the FN count in Table 4.2 is much lower than in Table 4.1. This means that the traditional HT fails to detect lines in the shaded area. In Table 4.3, the sensitivity of the proposed method is 98.91 percent, which is much better than that of the traditional approach. We can therefore conclude that the traditional HT detects lines with partial success in the high-intensity area only, whereas the proposed method detects lines successfully in both the high-intensity and shaded areas. Since the results in Table 4.3 indicate that the proposed method outperforms the traditional HT on artificial images, the next experiment is carried out on a real image.

Figure 4.10: (a) The original image, 300 x 300 pixels, (b) the shaded version of the original image (a), and (c) the edge image extracted from the original image (a), which serves as the GT for line detection.

Figure 4.11: (a) The edges extracted from the shaded image in Figure 4.10(b), (b) a 3-D plot of the parameter space of (a), (c) the result of line detection from image (a) by the traditional HT, (d) the UGVs map of Figure 4.10(b), (e) a 3-D plot of the parameter space of (d), and (f) the line detection result from (d) by the proposed method.

Table 4.1: Comparison between the line detection result of the HT and the GT

                    Line detection result of HT
  GT              Edge (1)         BG (0)
  Edge (1)        1149 (TP)        2521 (FN)
  BG (0)          0 (FP)           86330 (TN)

Table 4.2: Comparison between the line detection result of the proposed method and the GT

                    Line detection result of proposed method
  GT              Edge (1)         BG (0)
  Edge (1)        3630 (TP)        40 (FN)
  BG (0)          0 (FP)           86330 (TN)

4.2.2 Experiment on real images

Figure 4.5 shows a real road image with an artificial shadow cast on the road. The rectangular box shows the region of interest (ROI) for comparing the traditional edge detection method and the proposed method. The edges detected by the traditional approach and by the proposed method are shown in Figure 4.12. As is obvious in Figure 4.12(a), it is not an easy task to detect edges in the shadow because the image contrast is low, resulting in low image gradients. By contrast, Figure 4.12(b) shows that the proposed technique can detect edges both inside and outside the shadow. There is virtually no negative impact from the shadow on the proposed method. The comparison clearly highlights the advantage of the proposed method over the traditional approach.

Table 4.3: Comparison of results between the traditional HT and the proposed method

  Line detection result    HT (%)    Proposed method (%)
  Sensitivity              31.31     98.91
  Specificity              100       100
  Precision                100       100
  F1 score                 47.69     99.45
  Accuracy                 97.19     99.95
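The scores in Table 4.3 follow directly from the confusion-matrix counts in Tables 4.1 and 4.2. A small sketch of the computation:

```python
def detection_metrics(tp, fn, fp, tn):
    """Compute the scores reported in Table 4.3 from confusion-matrix
    counts: true/false positives and true/false negatives."""
    sensitivity = tp / (tp + fn)                   # recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, precision, f1, accuracy
```

Plugging in the counts from Table 4.1 (1149, 2521, 0, 86330) reproduces the HT column (sensitivity 31.31%, F1 47.69%), and the counts from Table 4.2 (3630, 40, 0, 86330) reproduce the proposed-method column (sensitivity 98.91%, F1 99.45%). Since FP = 0 in both cases, specificity and precision are 100% for both methods.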

Figure 4.12: (a) A high-contrast shaded image with the lines detected in the high-contrast area by the traditional Hough transform (HT), (b) the same high-contrast shaded image as (a) with the lines detected in the shaded area by the proposed method.

Chapter 5 Conclusion

This thesis presents a novel illumination-invariant line detection method that combines UGV-based edge detection with the gray-scale Hough transform (GHT). The edge detection method proposed in this thesis is based on gradient orientations, or unit gradient vectors (UGVs). We use gradient orientations instead of pixel intensities since the literature shows them to be remarkably insensitive to varying lighting conditions. The method can be considered as a preprocessing step for the HT and the GHT.

In this thesis, we have divided the experiments into two parts. The first part of the experiments is done with the HT. The experimental results in this part show that the proposed method performs line detection robustly on both low-contrast and high-contrast images, irrespective of the threshold value. Its performance is also robust on non-uniformly illuminated images. On the other hand, there is a disadvantage: the UGV values are not binary, while the HT requires a binary input. Therefore, we need to apply a threshold adjustment technique to convert the UGV information into binary information. In the second part, to eliminate the threshold adjustment process, we use the GHT instead of the HT, since the GHT accepts the gray-scale values extracted from the UGV information directly. The results show that the proposed method is robust to illumination variation and successfully detects lines in both high- and low-contrast images. In addition, the second part of the experiment, with the GHT, does not require any threshold value adjustment. We believe that this proposed method is suitable for real applications such as robust lane mark detection. We also expect to extend our research by improving the speed, accuracy, and complexity of lane mark detection with more experimental images.


List of publications:

1. V.H. Chhor and T. Kondo, An illumination-invariant Hough transform, The International Conference on Information and Communication Technology for Embedded Systems (ICICTES2014), Ayutthaya, Thailand, January 2014.

2. V.H. Chhor and T. Kondo, Illumination-invariant line detection with the Gray-scale Hough transform, The 7th IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and Robotics, Automation and Mechatronics (RAM), 15-17 July 2015, Angkor Wat, Cambodia.
