
Article

Space and Obstacle Detection Based on a Vision Sensor and Checkerboard Grid Laser

Shidian Ma 1, Zhongxu Jiang 2, Haobin Jiang 2, Mu Han 3,* and Chenxu Li 2

1 Automotive Engineering Research Institute, Jiangsu University, Zhenjiang 212013, China; [email protected]
2 School of Automotive and Traffic Engineering, Jiangsu University, Zhenjiang 212013, China; [email protected] (Z.J.); [email protected] (H.J.); [email protected] (C.L.)
3 School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
* Correspondence: [email protected]

Received: 18 February 2020; Accepted: 3 April 2020; Published: 9 April 2020

Abstract: The accuracy of automated parking systems that use ultrasonic sensors or camera vision to identify obstacles and parking spaces is easily affected by the surrounding environment, especially when the color of an obstacle is similar to that of the ground. In addition, such systems cannot recognize the size of the obstacles they detect. This paper proposes a method to identify parking spaces and obstacles that combines a vision sensor with a laser device installed on the car. The laser transmitter projects a checkerboard-shaped laser grid (mesh) onto the ground, which deforms according to the conditions it encounters; the camera captures this grid, and the deformed region is taken as the region of interest for the subsequent image processing. The experimental results show that, compared with using ultrasonic sensors or cameras alone, this method can effectively identify parking spaces as well as obstacles and their size, even when the obstacles and the background are similar in color.

Keywords: parking space recognition; obstacle recognition; machine vision; checkerboard-shaped laser grid

1. Introduction

The advancement of science and technology, coupled with the rapid improvement in living standards, has resulted in a higher demand for intelligent vehicles. One such demand is the Automatic Parking System (APS). As a result of rapid development in most cities around the world, parking spaces are becoming narrower, and parking a vehicle without the guidance of a third party is becoming problematic and time-consuming. With the rapid development of APS technology, this task can be accomplished with ease and in minimal time without a parking guide. Such a system uses sensors or cameras to assess the parking situation and automatically adjusts the steering angle to accomplish a successful parking process [1,2].

APS solutions that use sensors such as ultrasonic radar to find parking spaces have the advantages of high-speed data processing and long detection distance, free from the influence of light. However, ultrasonic radar cannot distinguish the types of obstacles, and it easily misjudges when it scans low barriers or potholes on the ground [3,4]. In addition to ultrasonic radar, the parking space can also be recognized by camera vision. The advantage of the camera is that it can acquire a large amount of data about the surrounding environment and can detect inclined objects. However, it is greatly influenced by the surrounding environment, and the accuracy of recognizing obstacles on the ground is low when the color of the obstacles is similar to the color of the ground. Additionally, monocular cameras cannot recognize the size of objects, while binocular cameras are more expensive and their algorithms more complex [5–8]. Another technology that APS uses for parking space recognition is the LiDAR sensing system. This technology was originally developed for the military. It measures the distance to the surrounding environment by emitting a multi-beam pulsed laser rotating 360 degrees, and it can also build a 3D map. However, as a result of the high cost of this technology, LiDAR with a low number of beams is used in vehicles, which comes with a decrease in resolution, making it susceptible to blind spots and resulting in safety concerns [9–11]. The multi-sensor fusion technique for APS, as proposed by many researchers, aims to overcome the above shortcomings of the other technologies. However, its high cost, coupled with the fact that it has not sufficiently matured, hinders its adoption [12–14]. Additionally, the theory of parking space recognition rarely considers the general situation of the obstacles in the parking space, which can lead to inaccurate obstacle identification [15].

In order to solve the above challenges, a new parking space recognition scheme is proposed in this paper. A laser transmitter is added to the vision sensor system. A chessboard-effect laser grid is projected onto the ground by a laser emitter mounted on the vehicle, and the shape of the laser grid changes when there are obstacles on the ground. After these changes are captured by the camera, the laser mesh region is taken as the region of interest through image processing. This method can effectively identify parking spaces and obstacles in parking spaces and significantly improve the recognition rate of adequate parking spaces.

2. System Structure and Principle

The system structure of the parking space and obstacle detection based on the vision sensors and the laser device is shown in Figure 1. In addition to the original 360° camera sensors installed on the body of the vehicle, the system is also equipped with a checkerboard-grid-effect laser transmitter. The images of the parking space and the surrounding environment are captured by the 360° cameras. Then, the visual processor carries out the subsequent image processing and connects with the parking controller through the communication serial port.

Figure 1. System construction.

Figure 2. System schematic.

The principle of the system is shown in Figure 2. The laser emitting devices illuminate the ground to present the shape of the checkerboard grid. When obstacles exist on the ground, such as vehicles in the parking space, the shape of the laser mesh will change. Similarly, when there are obstacles such as stones, potholes, walls, and floor locks on the ground, the laser mesh will produce different changes. The cameras capture the changes of the laser mesh, making it easier to recognize the parking spaces and the obstacles from the images. As a result, the recognition efficiency and success rate will be improved. Furthermore, when parking at night, the laser emitting devices emit light autonomously, which reduces the requirements on lighting conditions. In addition, the safety and applicability of the parking system will be improved.

After acquiring the environment image of the parking space, the system first preprocesses the image, which mainly includes gamma-conversion image enhancement, image graying, mean filtering for smoothing and denoising, and binarized edge detection. Then, the region of interest is extracted. After that, contour detection and convex hull detection are performed in the region of interest. Finally, the contour and convex hull are drawn in the images. The algorithm flow is shown in Figure 3.

Figure 3. Algorithm flow: Start → Image acquisition → Gamma enhancement → Grey processing → Image smoothing and denoising → Binarization edge detection → ROI extraction → Contour detection and convex hull detection → Contour and convex hull drawing → Finish.

2.1. Realization of Checkerboard Laser Grid

The effect of the grid laser emitter on the ground is shown in Figure 4. The laser emitter illuminates the ground to present a checkerboard grid effect, and the laser lines in the grid change shape when they illuminate obstacles on the ground. The cameras can capture these shape changes for further image processing or machine learning to identify obstacles. Owing to the high brightness, high directivity, and high monochromaticity of the laser, when the camera captures the laser mesh, an appropriate optical filter or a corresponding color-detection step in the image processing can improve the detection of the mesh and reduce the amount of image processing in the later stage.
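As a minimal illustration of such a color-detection step, the following sketch isolates the laser mesh by thresholding in HSV space. A green laser is assumed; the HSV bounds and kernel size are placeholder values that would have to be tuned to the actual emitter and camera.

```cpp
#include <opencv2/opencv.hpp>

// Isolate the laser mesh by color before any further processing.
// A green 532 nm laser is assumed here; the HSV bounds are illustrative.
cv::Mat extractLaserMask(const cv::Mat& bgr) {
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    // Hue ~40-90 covers green; high saturation/value match the bright,
    // monochromatic laser lines and reject most of the background.
    cv::inRange(hsv, cv::Scalar(40, 80, 150), cv::Scalar(90, 255, 255), mask);
    // Close small gaps where a laser line crosses a dark seam.
    cv::morphologyEx(mask, mask, cv::MORPH_CLOSE,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5)));
    return mask;
}
```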

Figure 4. Rendering of the laser grid on the ground.

As shown in Figure 4, the laser emitter is obliquely oriented toward the ground; the center line of the laser emitter is at α degrees to the horizontal plane; the laser emitter is at height h2 above the horizontal ground; and the distance between the laser emitter and the checkerboard laser grid is l. As shown in Figure 5, in the ideal case, the laser emitter illuminates level ground, presenting a uniformly distributed laser grid with a single grid length of x0 and width of y0. In reality, the laser grid is deformed because the laser emitter is tilted relative to the ground.


Figure 5. The ideal laser grid image.

As shown in Figure 6, the laser network is an 8 × 8 grid; the laser line closest to the laser transmitter is the X-axis, and the central axis of the laser network is the Y-axis. The vertical distances between the horizontal laser lines and the origin are 0, y1, y2, ..., y8, respectively. Due to the deformation of the vertical laser lines, the endpoints move from −x4, −x3, ..., 0, ..., x3, x4 to −x4', −x3', ..., 0, ..., x3', x4'. Therefore, the width of each grid cell in the X-direction changes to:

\[ x_{ci} = \frac{\left| x_i' - x_i \right|}{8}, \quad (i = 1, 2, 3, 4) \tag{1} \]

Figure 6. The actual laser grid image.

Furthermore, the coordinates of each grid intersection can be calculated when an obstacle appears in the laser grid, and the coordinate range of the obstacle can be evaluated according to the position of the deformation. As shown in Figure 7, the coordinates of the rectangular obstacle in the figure are (−x12, y14), (−x19, y33), (x11, y42), and (x18, y25).
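The deformation test implied by Equation (1) can be sketched as follows; the endpoint values and the tolerance are illustrative, not measured data.

```cpp
#include <array>
#include <cmath>
#include <cstdio>

// Per-grid X-direction change from Equation (1): x_ci = |x_i' - x_i| / 8.
int main() {
    std::array<double, 4> x  = {10.0, 20.0, 30.0, 40.0};  // ideal endpoints x_i
    std::array<double, 4> xp = {10.4, 20.0, 33.2, 40.0};  // observed endpoints x_i'
    const double tol = 0.1;  // assumed deformation threshold

    for (int i = 0; i < 4; ++i) {
        double xc = std::fabs(xp[i] - x[i]) / 8.0;
        if (xc > tol)
            std::printf("vertical line %d deformed: x_c%d = %.3f\n", i + 1, i + 1, xc);
    }
    return 0;
}
```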


Figure 7. Obstacle area testing.

2.2. Image Acquisition

The image is captured by 360° cameras. The 360° camera system is equipped with four camera components mounted on the front, rear, left, and right sides of the car. When installing the cameras, the front camera is installed on the logo or the grille, and the rear camera is installed near the license plate light or trunk handle, as close as possible to the central axis. The left and right cameras are installed under the left and right rearview mirrors. The number and position of the cameras can be adjusted according to the actual situation. Wide-angle cameras can be used, which can capture complete environmental images around the vehicle at the same time. Images of the surrounding environment are simultaneously collected through the 360° cameras. After image correction and image mosaicking, a 360° image is formed [16].

Camera Calibration

The calibration of the cameras mainly involves the transformation of four coordinate systems: the world coordinate system, the camera coordinate system, the image plane coordinate system, and the image pixel coordinate system. The calibration methods mainly include the Zhengyou Zhang calibration method [17], the RAC (Radial Alignment Constraint) two-stage method of Tsai, and the DLT (Direct Linear Transform) method [18–20]. Using a given known calibration target, the extrinsic rotation matrix R and translation matrix T of the camera, the principal point coordinates of the image (Cx, Cy), the height and width of a single pixel Sx, Sy, the focal length f, and the distortion factor k can be calculated.
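The paper lists several calibration methods; the sketch below uses OpenCV's built-in routine (an implementation in the spirit of Zhang's method) to recover the intrinsic matrix, the distortion factors, and the per-view R and T from images of a known chessboard target. The 9 × 6 pattern and 25 mm square size are assumed example values.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Zhang-style calibration from several views of a known chessboard target.
bool calibrateFromViews(const std::vector<cv::Mat>& views,
                        cv::Mat& K, cv::Mat& distCoeffs) {
    const cv::Size pattern(9, 6);    // assumed inner-corner count
    const float squareSize = 25.0f;  // assumed square size in mm

    // Object points: the board corners in the target's own coordinate frame.
    std::vector<cv::Point3f> object;
    for (int r = 0; r < pattern.height; ++r)
        for (int c = 0; c < pattern.width; ++c)
            object.emplace_back(c * squareSize, r * squareSize, 0.0f);

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize;

    for (const cv::Mat& view : views) {
        std::vector<cv::Point2f> corners;
        if (!cv::findChessboardCorners(view, pattern, corners)) continue;
        cv::Mat gray;
        cv::cvtColor(view, gray, cv::COLOR_BGR2GRAY);
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                          30, 0.01));
        imagePoints.push_back(corners);
        objectPoints.push_back(object);
        imageSize = view.size();
    }
    if (imagePoints.empty()) return false;

    // K holds fx, fy, cx, cy; distCoeffs holds the distortion factors;
    // rvecs/tvecs are the per-view extrinsic R and T.
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                        K, distCoeffs, rvecs, tvecs);
    return true;
}
```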

3. Image Preprocessing

The image acquired by the camera is affected by noise and the surrounding environment, so the target object to be recognized is not highly distinguishable from the surrounding environment in the image and cannot be used directly. Therefore, the original image needs preprocessing.


3.1. Grayscale Processing

The process of transforming a color image into a gray image is called grayscale processing. According to the principle of the three primary colors, red, green, and blue, any color F can be mixed from different ratios of R, G, and B:

\[ F = r[R] + g[G] + b[B] \tag{2} \]

The color depth of a pixel in a gray image is called the gray value. The difference between a grayscale image and a black-and-white image is that the grayscale image contains the concept of color depth, which is the gray level. Since the color image has three channels and the gray image has only one channel, the processing speed for the color image is slow compared to the gray image. The grayscale processed image can greatly reduce the amount of subsequent calculation, and the information contained in the grayscale image is enough for calculation and analysis. The image can be converted into a grayscale image by decomposing the color of color pixels into R, G, and B components using the following formula.

\[ F(x, y) = 0.299 \times R(x, y) + 0.587 \times G(x, y) + 0.114 \times B(x, y) \tag{3} \]

When all the pixels in the color image are transformed by the above formula, the color image is converted into a grayscale image [21].

3.2. Smoothing and Denoising of Images

In the process of acquisition and transmission, the image can be affected by noise. This noise degrades the image quality and can mask some of the characteristics of the image, which can make analysis of the image difficult. While removing noise from the target image, it is also necessary to retain the details of the original image as much as possible, and the quality of this processing directly affects the effectiveness and reliability of subsequent image processing and analysis. Commonly used methods for smoothing and denoising include mean filtering, median filtering, bilateral filtering, and Gaussian filtering [22].

Mean Filtering

In this paper, mean filtering is adopted for smoothing and denoising. Mean filtering is a typical linear filtering algorithm, which helps eliminate sharp noise in images and realizes image smoothing and blurring.

The mean filtering algorithm slides an appropriate template operator over the image and replaces the gray value of each pixel with the average of the gray values in its neighborhood. After smoothing and denoising by the mean filter, image f(i, j) is transformed into image g(x, y), and the equation is:

\[ g(x, y) = \frac{1}{M} \sum_{(i, j) \in S} f(i, j) \tag{4} \]

In the formula, M is the total number of pixels in the template, including the current pixel, and S is the neighborhood defined by the template. Generally, the template operators are m × m. If the template operator is 3 × 3, the total number of template pixels M is nine. The central pixel value is calculated using the following formula:

\[ S = \frac{1}{9} \sum_{i=1}^{9} a_i \tag{5} \]

As shown in Figure 8, after the mean filtering, the value of the center pixel becomes:

\[ S = \frac{3 + 4 + 1 + 5 + 2 + 3 + 4 + 2 + 3}{9} = 3 \tag{6} \]

Figure 8. The conversion of a pixel by the mean filter.

3.3. Image Enhancement Technology

Image enhancement is used to improve the image quality or the image interpretation and recognition effect by emphasizing the overall or local characteristics of the image. Commonly used image enhancement techniques include histogram equalization, the Laplacian, the log transform, the gamma transform, and image enhancement based on fuzzy techniques [23].

Gamma Transform

The gamma transform is the enhancement technique adopted in this paper. It is mainly used for image correction: an image with too high or too low a gray level is corrected to enhance contrast and image detail [24]. The formula is as follows:

\[ s_{out} = c \, r_{in}^{\gamma} \tag{7} \]

where r_in is the pixel value of the image, which is a non-negative real number, and c is the grayscale scaling coefficient used to stretch the overall image grayscale; it is a constant whose value ranges from zero to one and is usually taken as one.

The correction effect of the gamma transform is achieved by enhancing the details of the low or high gray scale, as shown in Figure 9. It can be understood intuitively from the gamma curve: γ = 1 is the dividing line. When γ < 1, the apparent light intensity increases and the stretching effect on the low-gray part of the image is enhanced, which is called gamma compression; when γ > 1, the stretching effect on the high-gray portion of the image is enhanced and the apparent illumination intensity is weakened, which is called gamma expansion. Therefore, by taking different gamma values, the details of the low or high gray levels can be enhanced. In general, the image enhancement effect of the gamma transform is evident when the image contrast is low and the overall brightness value is high.


Figure 9. Gamma transformation curve.

As shown in Figure 10a,b, since the original image is too bright, it is difficult to separate the laser grid from the background. After the gamma conversion, the contrast of the image is greatly improved, and the features of the laser grid are shown more clearly, which reduces the amount of processing in the next step.

((aa)) ((bb))

Figure 10. (a) The original image; (b) gamma transform corrected image.

3.4. Image Binarization

The representation of an image with only black and white visual effects is known as image binarization. This process reduces the amount of data in the image in order to highlight the contour of the target object and separate it from the background. This is accomplished by applying a threshold to the image, which can be adjusted to observe specific features of the target object.

4. Feature Extraction and Recognition of Obstacles

Changes in the laser grid indicate a potential obstacle on the ground. The deformation region is taken as the region of interest (ROI), and the necessary image processing is then applied to filter out the ROI for obstacle recognition.

4.1. Contour Detection

Processing the image yields pixel gray values with similarity and discontinuity, which makes contour boundary detection easy [25,26]. The effects of this are shown in Figure 11 below.


Figure 11. The contour of images.

4.2. Convex Hull Detection

The convex hull is a concept in computational geometry or graphics. It may be defined as the smallest convex set containing the given points, which is the intersection of all convex sets that contain the given points in the Euclidean plane or space. It can be imagined as a rubber band that just wraps all the points.

A useful method for understanding the shape contour of an object is to calculate the convex hull of the object and then calculate the convex defects [27]. Many complex shapes can be characterized by these defects.

As shown in Figure 12, convex defects can be illustrated by a human hand. The dark contour line is the convex hull around the hand, and the regions (A–F) between the convex hull and the hand contour are the convex defects.

Figure 12. The convex hull and convex defects.
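A brief sketch of this hull-and-defect analysis follows, using cv::convexityDefects (OpenCV's name for the convex defects); the defect-depth threshold is an assumed value for ignoring negligible dents.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Convex hull and convex defects for one contour, as in Figure 12:
// the defects are the gaps between the hull and the actual contour.
void analyzeContour(const std::vector<cv::Point>& contour, cv::Mat& canvas) {
    // Hull as point indices, required by convexityDefects.
    std::vector<int> hullIdx;
    cv::convexHull(contour, hullIdx, /*clockwise=*/false, /*returnPoints=*/false);

    std::vector<cv::Vec4i> defects;  // (start_idx, end_idx, farthest_idx, depth*256)
    if (hullIdx.size() > 3)
        cv::convexityDefects(contour, hullIdx, defects);

    for (const cv::Vec4i& d : defects) {
        float depth = d[3] / 256.0f;  // fixed-point depth converted to pixels
        if (depth > 10.0f)            // assumed threshold: ignore tiny dents
            cv::circle(canvas, contour[d[2]], 4, cv::Scalar(0, 0, 255), -1);
    }
}
```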

4.3. Division of Obstacle Areas

Changes in the laser grids on the ground indicate that there is a potential obstacle on the ground, which results in different pixel gray values for the background and the obstacles. The obstacles and the background are separated through the discontinuity of the boundary regions between them: by adjusting the contour threshold of the ground laser grid area, the ground laser grid layer and the obstacle layer are segmented, and the coordinate region where the obstacle is located is found through the laser mesh deformation area, as shown in Figure 13.


Figure 13. Division of obstacle areas. (a) The area of the obstacle; (b) the area of the ground.

4.4. Parking Space Identification

The recognition of the parking space mainly involves acquiring the type of parking space and judging whether the identified parking space can meet the parking conditions. The types of parking spaces are shown in Table 1. We can classify them according to the parking modes, the existence of parking lines, and the existence of vehicles around. Because most research mainly addresses parking spaces with parking lines and surrounding vehicles, this part only covers the situation without parking lines and with vehicles on both sides. Because the parallel parking mode and the vertical parking mode are similar in terms of identification methods, this part takes the latter as an example.

Table 1. Different types of parking spaces.

Parking Modes | Parking Lines | Vehicles Around
Vertical parking | Standard parking lines | One side
Parallel parking | Only parking angles | Both sides
Oblique parking | No parking line | None

As shown in Figures 14 and 15, the laser emitting devices illuminate the ground to present the shape of the checkerboard grid. We divide the region of vehicles and parking space by acquiring the contours of the vehicles when processing the captured image of the parking space. According to the contours of the A, B, D, and E cars, we can judge the posture of the vehicle bodies and thereby acquire the type of parking space, such as a vertical parking space or an oblique parking space. The laser grid area between two cars' contours can be considered the area of the parking space. If there are no obstacles in this area, then the parking space can be considered valid (a minimal check along these lines is sketched after Figure 15).

Figure 14. Vertical parking mode.

Figure 15. Oblique parking mode.
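As a rough illustration of the validity check described above, the sketch below measures the gap between the bounding boxes of two neighboring car contours and rejects the space if any detected obstacle falls inside it. The pixel-per-meter scale and required width are assumed values; a real system would work in calibrated ground-plane coordinates.

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// A minimal validity check for a vertical parking space, assuming the
// contours of the two neighboring cars have already been found.
bool isSpaceValid(const std::vector<cv::Point>& carA,
                  const std::vector<cv::Point>& carB,
                  const std::vector<std::vector<cv::Point>>& obstacles) {
    const double pxPerMeter = 40.0;  // assumed ground-plane scale
    const double neededWidth = 2.5;  // meters, assumed vehicle envelope

    cv::Rect a = cv::boundingRect(carA);
    cv::Rect b = cv::boundingRect(carB);
    if (a.x > b.x) std::swap(a, b);

    // Gap between the two car contours along the ground.
    cv::Rect gap(a.x + a.width, std::min(a.y, b.y),
                 b.x - (a.x + a.width), std::max(a.height, b.height));
    if (gap.width < neededWidth * pxPerMeter) return false;

    // The space is invalid if any detected obstacle falls inside the gap.
    for (const auto& obs : obstacles)
        if ((cv::boundingRect(obs) & gap).area() > 0) return false;
    return true;
}
```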

5. Experiments

The experiment was made up of hardware, as shown in Figure 16, and software. The hardware consisted of a grid laser emitter producing a 51 × 51 laser grid on the ground and a camera, while the software consisted of the image processing and obstacle detection running on the on-board computer. The software was implemented in C++ with Visual Studio 2017 as the development environment, using OpenCV library Version 3.4.4 for image processing and obstacle detection.


Figure 16. Experiment platform. (a) front view; (b) side view.

In order to verify the effectiveness of the proposed method for identifying parking spaces and obstacles based on vision sensors and laser devices, various obstacles and scenes encountered in real-life parking environments were simulated and tested on the experimental platform in Figure 16. The laser transmitter and the camera mounted on the vehicle were simulated on the experimental platform; the laser emitter and camera were adjusted to the appropriate angle so that the laser grid was evenly distributed on the ground to render a checkerboard effect, while the parking space scenes were simulated with model cars. As shown in Figures 17 and 18, after collecting the images of the simulated scenes, code was written that called functions of the OpenCV library to preprocess the images and detect the contours and convex hulls. In the image processing, only the laser grid on the ground was used as the region of interest, and the image threshold was adjusted to reveal the contour of the laser mesh. The effect diagram is shown in Figure 19.


Figure 17. The image of the parking space.

Figure 18. Gamma-enhanced image.

Figure 19. The contour of the laser grid.

The gray value of the background in the scenes, such as the wall and the parking line, was different from the gray value of the ground laser line; therefore, it was filtered out. The deformation region of the laser grid on the ground was then taken as the region of interest by adjusting the threshold of the image, revealing the contours of the obstacles, as shown in Figure 20.

Figure 20. The contour of the cars.

As shown in Figures 21 and 22, when there were obstacles and parking locks in the parking space, the contours could be obtained by taking the laser grid and the deformation region of the laser grid on the ground as the regions of interest, respectively. This revealed the area where the obstacle was located and the size of the obstacle, making it convenient to identify the parking space area and judge whether the parking space was adequate.

Figure 21. An obstacle exists in the parking space. (a) The image of the parking space; (b) the contour of the laser grid; (c) the contour of the cars and the obstacle.

Figure 22. A parking lock exists in the parking space. (a) The image of the parking space; (b) the contour of the laser grid; (c) the contour of the cars and the parking lock.

For the classification experiment on the different types of obstacles that may exist in parking spaces in actual parking environments, tests with regular obstacles, potholes, and obstacles of a color similar to the background were carried out, as shown in Figures 23 and 24, respectively. When the color of the obstacle was similar to the background, it was challenging to identify the outline of the obstacle area through image processing alone. However, with the proposed method of combining camera vision and the laser grid, the area and contour of the obstacles could easily be obtained even when the obstacle was similar in color to the background, as indicated in Figures 25 and 26. Table 2 shows the test results of the recognition accuracy for different types of obstacles.


Figure 23. Obstacle and pothole. (a) The regular block obstacle; (b) the contour of the ground; (c) the contour of the block obstacle; (d) the regular pothole; (e) the contour of the ground; (f) the contour of the regular pothole.

Figure 24. (a) The irregular obstacle that was similar in color to the background; (b) the irregular contour of the obstacle is very vague.

Figure 25. The irregular obstacle that was similar in color to the background. (a) The irregular obstacle; (b) the contour of the ground; (c) the contour of the irregular obstacle.

Figure 26. The irregular pothole that was similar in color to the background. (a) The irregular pothole; (b) the contour of the ground; (c) the contour of the irregular pothole.

Table 2 shows the test results of the recognition accuracy for different types of obstacles.

Table 2. The results of recognizing the contours of different types of obstacles in parking spaces.

Model Types     Regular          Irregular        Similar to the Ground Color
Vehicle         63/64 = 98.4%    -                -
Wall/Pillar     20/20 = 100%     -                20/20 = 100%
Parking lock    43/46 = 93.5%    -                -
Stone           37/38 = 97.4%    37/38 = 97.4%    36/38 = 94.7%
Pothole         38/40 = 95.0%    39/40 = 97.5%    38/40 = 95.0%
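Each percentage in Table 2 is simply the number of correctly recognized contours divided by the number of test samples. The following minimal Python check reproduces the regular-obstacle column from the counts in the table:

# Recognition accuracy = correctly recognized contours / test samples.
# Counts are taken directly from the "Regular" column of Table 2.
counts = {"Vehicle": (63, 64), "Wall/Pillar": (20, 20),
          "Parking lock": (43, 46), "Stone": (37, 38), "Pothole": (38, 40)}
for name, (hit, total) in counts.items():
    print(f"{name}: {hit}/{total} = {100 * hit / total:.1f}%")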

6. Conclusions and Future Research

Current automatic parking schemes based on ultrasonic radar and cameras are not efficient at identifying adequate parking spaces and cannot identify obstacles in parking paths or parking spaces. To overcome these problems, this paper presents a method to identify parking spaces and obstacles based on visual sensor and laser device recognition. The method only requires installing a laser transmitter on the body of the car. The camera captures the changing images of the laser mesh, and image processing technology then identifies the changing area of the laser grid to realize the identification of parking spaces and obstacles. The experimental results showed that the method can effectively identify obstacles and parking spaces, and it is expected to become a practical solution.

For future research, more obstacle samples will be taken into consideration, and methods for the automatic classification of obstacles need to be developed. In addition, experiments on identifying the parking space and acquiring the size of the parking space, as well as of the obstacles, still need to be conducted. Overall, this research provides a new solution for recognizing obstacles in automatic parking, and future research will further advance the development of this technology.

Author Contributions: S.M. and Z.J. designed the scheme. H.J., M.H., and C.L. checked the feasibility of the scheme. S.M. and H.J. provided the resources; Z.J. performed the software simulation and experiments; Z.J. wrote the paper with the help of S.M. All authors have read and agreed to the published version of the manuscript.

Funding: This research was supported by the Natural Science Fund for Colleges and Universities in Jiangsu Province under Grant 12KJD580002 and Grant 16KJA580001, in part by the Innovation Plan for Postgraduate Research of Jiangsu Province in 2014 under Grant KYLX1057, and in part by the National Natural Science Foundation of China under Grant 51675235.
Conflicts of Interest: The authors declare no conflict of interest.


