
TEXTURE BASED ROAD SURFACE DETECTION

by

GUANGYU CHEN

Submitted in partial fulfillment of the requirements for the degree of Master of Science

Thesis Advisor: Dr. Guo-Qiang Zhang

Department of Electrical Engineering and Computer Science
CASE WESTERN RESERVE UNIVERSITY
August, 2008


CASE WESTERN RESERVE UNIVERSITY
SCHOOL OF GRADUATE STUDIES

We hereby approve the thesis of Guangyu Chen, candidate for the Master of Science degree*.

(signed) Guo-Qiang Zhang (chair of the committee)
         H. Andy Podgurski
         Francis Merat

(date) 06/16/2008

*We also certify that written approval has been obtained for any proprietary material contained therein.


To my parents with Love and Gratitude


Table of Contents

1. Introduction
   1.1 Background
   1.2 DEXTER
   1.3 Motivation
   1.4 Thesis Outline
2. Image Sequence Time Filter
   2.1 Image Sequence Time Filter (ISTF)
   2.2 ISTF Effect Number One: Connecting the Dashed Lane Marks
   2.3 ISTF Effect Number Two: Smoothing the Road Images
3. Color Texture Analysis
   3.1 Texture Analysis of Road Images
   3.2 Covariance Matrix of Color Value Differential
   3.3 The Strength of Texture Anisotropy
   3.4 Texture Analysis Results and Benefits
       3.4.1 Result Images
       3.4.2 Benefits of Color Texture Analysis Compared to Other Approaches
4. Image Pixel Scale Problem
   4.1 Vision Perceptual Distortion
   4.2 Vision Perceptual Distortion Correction
5. Implementation
   5.1 Organization of the Algorithm
   5.2 Segmentation Algorithm
6. Results and Discussion
   6.1 Results of Road Segmentation Algorithm
   6.2 Future Work
7. Appendix A: Road Image Samples from NASA Plum Brook and Case Farm
8. Appendix B: Documentation of Road Segmentation Software
9. Appendix C: C Code of DLL File Embedded in Road Segmentation Software
   9.1 Time Closing Function
   9.2 Texture Analysis
   9.3 Segmentation Algorithm: To Draw Horizontal White Lines and Output Road Edge Points
10. Bibliography


List of Figures

1.1 DEXTER with road detection sensors highlighted: blue = color cameras; green = LIDAR; red = IR camera (22)
1.2 DEXTER's Software Architecture (22)
2.1 15 continuous road images from a front-mounted vehicle camera
2.2 Time Dilation Results
2.3 Time Erosion Results
2.4 Time Opening Results
2.5 Time Closing Results
2.6 Original Input Image
2.7 Noise-Reduced Image by ISTF
3.1 Road Images
3.2 Texture Results
3.3 Input Image (1)
3.4 Color Texture Results of Input Image (1)
3.5 Gray Texture Results of Input Image (1)
3.6 Edge Detection Results of Input Image (1)
3.7 Input Image (2)
3.8 Color Texture Results of Input Image (2)
3.9 Gray Texture Results of Input Image (2)
3.10 Edge Detection Results of Input Image (2)
4.1 Image of Road
4.2 Part of Calibration Matrix (meters)
4.3 Horizontal (left) and Vertical (right) Distance to Center Line (X=0) and Bottom Line (Y=0), respectively
4.4 Coordinates of an Image
4.5 Relationship between a Pixel's Real-World Vertical Size and Its Vertical Position
4.6 Relationship between a Pixel's Real-World Horizontal Size and Its Vertical Position
4.7 Distribution of a Pixel's Real-World Vertical Size with Respect to Its Horizontal Position
4.8 Distribution of a Pixel's Real-World Horizontal Size with Respect to Its Horizontal Position
5.1 The Front Panel (a) and Program Blocks (b) of the Road Segmentation System
5.2 Organization Chart of the Algorithm
5.3 Color Texture Image for Segmentation (left) and Segmented Road Image (right)
6.1 Results of Road Segmentation
6.2 Results of Road Segmentation
7.1 Road Image Samples of NASA Plum Brook
7.2 Road Image Samples of Case Farm
8.1 User Interface of Road Segmentation Software


Acknowledgement

I would like to thank my supervisors, Dr. Guo-Qiang Zhang and Dr. Frank Merat, for their support and review of this thesis. Their wide knowledge and rigorous way of thinking have been of great value to me. Their understanding and guidance have provided a good basis for this thesis.

My appreciation as well to Scott McMichael for his help in understanding the problems and in using LabVIEW at the beginning of the project, and for his help in debugging the code. Special thanks to Fangping Huang, who shared his experience with LaTeX.


Texture Based Road Surface Detection

Abstract

by GUANGYU CHEN

Using computer vision techniques to identify drivable areas for autonomous vehicles is a challenging problem due to unpredictable road conditions, varied road types, rough road surfaces, and image noise. This thesis describes a system to recognize and segment road surfaces in real-world road images. The system uses a new technique called ISTF (Image Sequence Time Filtering) to reduce the noise in road images. To calculate the strength of anisotropy at each pixel, texture analysis based upon the covariance matrix of color changes in the image is used. A threshold segmentation algorithm is then applied to segment the road surface in the resulting texture image. The system was developed in LabVIEW and C. It operates in real time and has been extensively tested with real road image sequences. It was deployed on DEXTER, the autonomous vehicle built by Team Case for the 2007 DARPA Urban Challenge. Empirical comparative evaluations were also performed against similar approaches; the results of color texture analysis are significantly better than those obtained using gray-scale texture analysis and edge detection methods.


1. INTRODUCTION

This thesis addresses vision-based road detection. The road scenes were obtained during development of DEXTER, an autonomous vehicle built by Team Case for the 2007 DARPA Urban Challenge (26).

1.1 Background

There are more than 200 million vehicles on the roads of North America, and many people have been killed in car accidents caused by the failure to follow a road or stay on a road. Driving long distances is also a challenge for individuals.
As a result, driver assistance systems and even unmanned vehicles have been researched for many years to help drivers drive safely and comfortably. The first requirement of such a system is to detect other vehicles and obstacles. Many driver assistance systems developed in research are gradually appearing in new vehicles. Recently, luxury cars have been equipped (23) with front- and rear-facing radars for detecting objects in front of and behind the vehicle. The new Volvo S80 has Blind Spot Detectors (24), which use cameras positioned in both side rear-view mirrors. This system can detect vehicles in the blind spots drivers encounter and is crucial to safe lane changes. The next step in driving-system development is to track and follow other vehicles and the road. The ultimate goal is autonomous vehicles which can operate in all environments without drivers.

Transportation is crucial to the development and defense of societies and nations. This is why many countries are developing unmanned systems, such as unmanned fighter jets to replace piloted fighter jets, robots to replace human soldiers, and autonomous vehicles to transport materials.

Carnegie Mellon University has been a leader in autonomous vehicle research. Its most famous unmanned vehicle is ALVINN, a vision-only autonomous driving vehicle. ALVINN (25) used relatively primitive vision technology (30×32 pixel images) processed by a neural network to control vehicle steering. This system worked very well in spite of the low-resolution images used.

In November 2007, DARPA (Defense Advanced Research Projects Agency) held a competition called the DARPA Urban Challenge to select the autonomous vehicle which could best navigate urban streets. Unlike the DARPA Grand Challenge, which was set in the desert, this competition