
Road Weather Condition Estimation Using Fixed and Mobile Based Cameras

Koray Ozcan1, Anuj Sharma1, Skylar Knickerbocker2, Jennifer Merickel3, Neal Hawkins2, and Matthew Rizzo3

1 Institute for Transportation, Iowa State University, Ames, IA, USA {koray6,anujs}@iastate.edu
2 Center for Transportation Research and Education, Ames, IA, USA {sknick,hawkins}@iastate.edu
3 University of Nebraska Medical Center, Omaha, NE, USA {jennifer.merickel,matthew.rizzo}@unmc.edu

Abstract. Automated interpretation and understanding of the driving environment using image processing is a challenging task, as most current vision-based systems are not designed to work in dynamically-changing and naturalistic real-world settings. For instance, road weather condition classification using a camera is a challenge due to high variance in weather, road layout, and illumination conditions. Most transportation agencies within the U.S. have deployed some cameras for operational awareness. Given that weather-related crashes constitute 22% of all crashes and 16% of crash fatalities, this study proposes using these same cameras as a source for estimating roadway surface condition. The developed model is focused on three conditions resulting from weather: Clear (clear/dry), RainyWet (rainy/slushy/wet), and Snow (snow-covered/partially snow-covered). The camera sources evaluated are both fixed closed-circuit television (CCTV) and mobile (snow plow dash-cam). The results are promising, with an achieved 98.57% and 77.32% road weather classification accuracy for CCTV and mobile cameras, respectively. The proposed classification method is suitable for autonomous selection of snow plow routes and verification of extreme road conditions on roadways.

Keywords: Road weather classification · Scene classification · VGG16 · Neural networks · CCTV · Mobile camera

1 Introduction

© Springer Nature Switzerland AG 2020. K. Arai and S. Kapoor (Eds.): CVC 2019, AISC 943, pp. 192–204, 2020. https://doi.org/10.1007/978-3-030-17795-9_14

According to the Federal Highway Administration, there were 1.2 million weather-related crashes from 2005 to 2014 [1]. As a result, 445,303 individuals were injured and 5,897 people lost their lives. Weather-related crashes constitute 22% of all vehicle crashes and 16% of crash fatalities. It has also been shown that traffic flow rate and vehicle speeds are significantly reduced under inclement weather conditions [2]. Therefore, accurate and comprehensive weather condition monitoring has a key role in road safety, effective winter maintenance, and traveler information and advisories. Embedded road sensors have been used for automated traffic control via variable traffic message signs [3]. However, road sensors are highly dependent on thresholds to estimate road surface conditions from sensor measurements such as wetness, temperature, etc. Lasers and other electro-optical technologies have been used to measure road surface grip, especially for snow and ice [4]. Surveillance and mobile cameras can also provide automated monitoring of road weather conditions. Recently, the Minnesota Department of Transportation established a snowplow camera network to share frequently captured images with the public [5]. This research paper considers providing road weather condition estimates from both snowplow and highway surveillance camera networks.

1.1 Mobile Sourced Images

With the increasing popularity of mobile cameras for driver assistance and autonomous driving applications, there has been an increasing demand for in-vehicle camera systems. Son and Baek [6] proposed a design and implementation taxonomy using real-time in-vehicle camera data for driver assistance and traffic congestion estimation. A camera and other sensors were used to estimate the road's condition by capturing the vehicle's response to the roadway. Rajamohan et al. [7] developed road condition and texture estimation algorithms for the classes smooth, rough, and bumpy by combining in-vehicle camera, GPS, and accelerometer sensors. Kutila et al. [8] proposed an algorithm for road condition monitoring by combining laser scanners and stereo cameras. They reported 95% accuracy with the help of sensor fusion methods for the road surface types dry, wet, snow, and ice; the laser scanners helped estimate the depth of the road surface layer in various weather scenarios. Support vector machine (SVM) based classifiers have also been used to classify road surfaces in videos recorded with in-vehicle cameras; for three major roadway condition classes (bare, snow-covered, and snow-covered with tracks), they provided overall accuracies of 81% to 89%. For distinguishing dry and wet road conditions, Yamada et al. performed a multi-variate analysis of images captured by vehicle-mounted cameras. Nguyen et al. [9] employed polarized stereo cameras for 3D tracking of hazards on the road. Moreover, Abdic et al. [10] developed a deep learning based approach to detect road surface wetness from audio. Kuehnle and Burghout [11] developed a neural network with three or four input features, such as the mean and standard deviation of color levels, to classify dry, wet, and snowy conditions from video cameras. They achieved 40% to 50% correct classification accuracy with this network architecture. Pan et al. [12] estimated road conditions for clear and various snow-covered scenarios using a pre-trained deep convolutional neural network (CNN). They showed that CNN-based algorithms outperform traditional random tree and random forest based models for estimating whether the road is snow covered, while generally omitting the rainy/wet scenario. Finally, Qian et al. [13] tested various features and classifiers for road weather condition analysis on a challenging dataset collected from an uncalibrated dashboard camera. They achieved 80% accuracy for road images of the classes clear vs. snow/ice-covered and 68% accuracy for the classes clear/dry, wet, and snow/ice-covered. Our dataset consists of the classes clear/dry, wet, and snow/ice-covered, annotated manually for this project.

1.2 Fixed Source Images from Road Weather Information Stations (RWIS)

Another approach to estimating road conditions involves using a camera mounted above the roadway and looking directly down on the road surface. Jonsson [14] used weather data and camera images to group and classify weather conditions. The study used grayscale images and weather sensor features to estimate road condition groupings across dry, ice, snow, track (snowy with wheel tracks), and wet. The same author also provided a classification algorithm for the road condition classes dry, wet, snowy, and icy [15]. Similarly, using the road surface's color and texture characteristics provided suitable classification with k-nearest neighbor, neural network, and SVM classifiers for dry, mild snow coverage, and heavy snow coverage [16]. However, these approaches are limited to the locations where cameras are installed and to a confined field of view. In other words, the cameras only look down on the road from poles near the road verge, and it would be too costly to monitor every segment of the road with such implementations.

1.3 Fixed Source Images from Closed-Circuit Television (CCTV)

CCTV cameras provide surveillance of road intersections and highways. They can be utilized as observation sensors for estimating features such as traffic density and road weather conditions. Lee et al. [17] proposed a method to extract weather information from road regions of CCTV videos. They analyzed CCTV video edge patterns and colors to observe how they correlate with weather conditions. However, rather than estimating road weather conditions, they developed algorithms that estimated overall weather conditions across the scene, reporting 85.7% accuracy with three weather classes: sunny, rainy, and cloudy. Moreover, a snowfall detection algorithm, particularly for low visibility scenes, was developed in [18] using CCTV camera images. For modeling various road conditions, we selected VGGNet since it has proven effective for object detection and recognition in stationary images [19]. Recently, Wang et al. implemented VGGNet models on the large-scale Places365 dataset [20]. The model was trained with approximately 1.8 million images from 365 scene classes and achieved 55.19% top-1 and 85.01% top-5 class accuracy on the test dataset. With the model developed by Zhou et al. [21], it has been shown that place recognition can be achieved with high accuracy. As explained in the next sections, features learned from this network are shown to be useful for differentiating road image features across the defined weather classes. This paper adapts place recognition models by fine-tuning the last three layers of the network for road condition estimation.

1.4 Objective

In this paper, we propose estimating road surface weather conditions. To the authors' best knowledge, this is the first application of road weather condition estimation using CCTV cameras observing road intersections and highways. Our experimental results show feasibility and utility on CCTV datasets that monitor the road surface. Further algorithm development is also presented to improve road weather condition classification from forward-facing snow plow mobile cameras. We have developed a model for classifying the road condition into three major weather classes: clear/dry (marked as Clear), rainy/slushy/wet (marked as RainyWet), and snow-covered/partially snow-covered (marked as Snow). The model provides 77.32% accuracy for the mobile camera solution, which improves upon the prior method of Qian et al. [13], which reported 68% classification accuracy.

2 Proposed Method

2.1 Model Description

The proposed model was derived from the promising VGG16 implementation on the Places365 dataset. Figure 1 demonstrates the final classification layers of the network as modified for our application of weather condition classification. The last three network layers were replaced to adapt the classification, with a fully connected layer followed by softmax and output layers. These were then connected to the last pooling layer of the VGG16 network trained on the Places365 dataset. A useful Keras implementation, from which the model was adapted, is available and has previously been used to classify places [22]. The model consists of five convolution blocks through which image batches pass after being input to the learning architecture. Features learned from the Places365 dataset are fine-tuned for classification of the defined weather conditions. Learned features are transformed into fully connected layers after the pooling operation. The softmax layer then applies an activation function so that the output is a categorical probability distribution over the final classification classes. The image dataset was augmented with flipped images along with rotations of up to ±20°. Furthermore, the classes were balanced so that each weather class has an equal number of images.

Fig. 1. Final layers of the network after pooling was modified for 3-class classification.
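The layer replacement described above can be sketched as follows in Keras. This is a minimal illustration, not the authors' code: the paper fine-tunes weights pre-trained on Places365 (available from the Keras-VGG16-places365 repository [22]), whereas here the network is built with `weights=None` so the snippet is self-contained; the layer names are ours.

```python
# Sketch: replace the final layers of VGG16 with a fully connected layer
# and a 3-class softmax, attached after the last pooling layer.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

NUM_CLASSES = 3  # Clear, RainyWet, Snow

# Keep the convolutional base up to the last pooling layer; drop the
# original classifier head. In practice, load Places365 weights here.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

# New fully connected + softmax layers for weather classification.
x = Flatten()(base.output)
x = Dense(4096, activation="relu", name="fc_weather")(x)
out = Dense(NUM_CLASSES, activation="softmax", name="weather_softmax")(x)

model = Model(inputs=base.input, outputs=out)
```

During transfer learning, only these replaced layers (and optionally the last convolutional block) would be trained, which matches the paper's strategy of fine-tuning the final three layers.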

To test the model, the dataset was divided randomly into training (80%), validation (10%), and testing (10%) subsets. The training and validation parts of the dataset were used during the transfer learning stage. In the next sections, we explain how training was performed for (i) CCTV surveillance camera images and (ii) snow plow camera images. The road conditions were separated into the three major classes snowy, rainy/wet, and clear. Validation accuracy is the percentage of images in the validation dataset that are correctly classified. The epoch number corresponds to how many times the model has processed all of the images in the training dataset. Loss is a summation of the errors from each example in the training or validation dataset; when the model's predictions are ideal, it approaches zero, and otherwise it is greater. Over the training period, the loss is expected to approach zero as long as the model is learning features useful for predicting the defined classes more accurately.
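The two quantities tracked above (accuracy and loss) can be made concrete with a small numpy sketch. The function names and example values are ours, not from the paper's code; for a softmax classifier the loss is categorical cross-entropy, the mean negative log-probability assigned to the true class.

```python
import numpy as np

def accuracy(probs, labels):
    """Fraction of samples whose highest-probability class matches the label."""
    return float(np.mean(np.argmax(probs, axis=1) == labels))

def cross_entropy(probs, labels, eps=1e-12):
    """Mean negative log-probability of the true class; 0 for a perfect model."""
    true_p = probs[np.arange(len(labels)), labels]
    return float(-np.mean(np.log(true_p + eps)))

# Illustrative softmax outputs for three images (classes 0/1/2 =
# snowy, rainy/wet, clear in some fixed ordering).
probs = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.3, 0.3, 0.4]])
labels = np.array([0, 1, 2])  # all three predictions are correct here
```

All three argmax predictions match the labels, so accuracy is 1.0, yet the loss is still positive because the model is not fully confident; training drives this residual loss toward zero.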

2.2 Road Weather Condition Estimation Using CCTV

Images from CCTV surveillance cameras were collected over a one-year period from cameras operating on the roads of Iowa. The total number of images used for training was 8,391, with a balanced number of training examples per class (2,797 per class). Data augmentation was applied to the training data in the form of random pixel shifts (±20 pixels) in the horizontal direction. For VGG16, we used a 4,096-dimensional feature vector from the activation of the fully connected layer. Training, validation, and test images were cropped to 60% of the original width and 80% of the original height and resized to 224 × 224 pixels. Stochastic Gradient Descent (SGD) was started at a learning rate of 0.0001, which allowed fine-tuning to make progress while not destroying the initialization. Since road regions are mostly concentrated in the middle of the image for CCTV cameras, cropping the image allowed us to focus more on the road regions. Final validation accuracy reached 98.92% after five epochs of training. It took about 68 min to complete the training on a single NVIDIA™ GTX 1080 Ti GPU. Figure 2 shows the training and validation accuracy along with the loss while training the model for 5 epochs.
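The preprocessing steps above (random ±20-pixel horizontal shift for augmentation, and a centered crop to 60% of the width and 80% of the height before resizing to the 224 × 224 network input) can be sketched with numpy. The function names and the exact crop placement are our assumptions; the paper does not publish its pipeline.

```python
import numpy as np

def random_horizontal_shift(img, max_shift=20, rng=None):
    """Shift the image by a random number of pixels along the width."""
    rng = rng or np.random.default_rng()
    shift = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(img, shift, axis=1)  # axis 1 = width

def center_crop(img, width_frac=0.6, height_frac=0.8):
    """Keep the central 60%-width, 80%-height region of the image."""
    h, w = img.shape[:2]
    ch, cw = int(h * height_frac), int(w * width_frac)
    top, left = (h - ch) // 2, (w - cw) // 2
    return img[top:top + ch, left:left + cw]

img = np.zeros((480, 640, 3), dtype=np.uint8)     # illustrative frame size
cropped = center_crop(random_horizontal_shift(img))
# A resize of `cropped` to (224, 224, 3) would follow before the network.
```

For a 480 × 640 frame this crop keeps a 384 × 384 central region, which concentrates the input on the road area in the middle of the CCTV view.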

2.3 Road Weather Condition Estimation Using Mobile Cameras

Images from snow plow vehicles were also collected over a one-year period, gathered from cameras as the vehicles operated on state of Iowa roadways. The total number of images used for training was 4,443, divided evenly across classes (1,481 per class). Each image in the dataset was manually annotated with the selected roadway weather condition. Training data were augmented to introduce more variety into the dataset by applying (i) a ±20 pixel shift in the horizontal direction and (ii) a flipped version of each image. Training, validation, and test images were resized to 224 × 224 pixels with red, green, and blue color channels before being fed into the network. We started with SGD at a learning rate of 0.0001. Final validation accuracy reached 71.16% after training for 10 epochs. It took about 53 min to complete the training on a single GPU. The trained model fit the training data well over time, despite having difficulty generalizing to the validation and test data. Figure 3 shows the training and validation accuracy along with the loss while training the model for 10 epochs.

Fig. 2. Training accuracy and loss while training CCTV weather model.

Fig. 3. Training accuracy and loss while training mobile camera weather model.

3 Experimental Results

After training the proposed model via transfer learning, we used 10% of the dataset for testing. Figure 4 presents the confusion matrix for road weather condition estimation using CCTV camera feeds. Each row of the confusion matrix represents the instances in a target class, while each column represents the instances in an output class. The top row of each box shows the percent accuracy for the correct classification, and the bottom row shows the number of images for each target-output combination. All of the road weather condition classes had more than 96% correct classification accuracy. The trained model generalized well to the validation and test datasets.
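The confusion-matrix layout described above (rows index the target class, columns the model's output class) can be computed in a few lines. This is a generic illustration with made-up counts, not the paper's results.

```python
import numpy as np

CLASSES = ["Clear", "RainyWet", "Snow"]

def confusion_matrix(targets, outputs, n=len(CLASSES)):
    """cm[t, o] = number of images of target class t classified as o."""
    cm = np.zeros((n, n), dtype=int)
    for t, o in zip(targets, outputs):
        cm[t, o] += 1
    return cm

# Six illustrative test images: one RainyWet image misclassified as Snow.
targets = [0, 0, 1, 1, 2, 2]
outputs = [0, 0, 1, 2, 2, 2]
cm = confusion_matrix(targets, outputs)

# Per-class accuracy (the top-row percentages in Figs. 4 and 6) is the
# diagonal divided by the row sums.
per_class_acc = cm.diagonal() / cm.sum(axis=1)
```

The overall accuracy reported in the paper corresponds to the diagonal sum divided by the total image count.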

Example correctly classified images from the CCTV dataset are presented in Fig. 5. The resulting images and the highest probability for their estimated classification are presented with examples from each class in the dataset. The proposed model is capable of predicting each class with a high degree of accuracy on the test dataset, in accordance with the overall accuracy on the validation set during training. An overall accuracy of 98.57% is achieved for correctly classifying the three classes Clear, RainyWet, and Snow.

Fig. 4. Confusion matrix for CCTV camera weather estimations.

Figure 6 presents the confusion matrix for road weather condition estimation using snow plow vehicle camera feeds. The overall accuracy for correct classification was 77.32% across the three major weather classes. While the accuracy is high for snowy and clear roads, the model was less accurate on rainy/wet road condition classification. As shown during the training process, the model fit the training data well but failed to improve its accuracy on the validation dataset over the training epochs. While the snow plow camera feed images are challenging to classify, increasing the training data may help the model's generalizability and reduce the overall error on the validation dataset. Example images with the highest probability of correct classification are shown in Figs. 5 and 7. It should be observed that the model estimates the road class correctly both when the road is snow covered and when the surface is wet. Moreover, the model is able to classify a clear road condition even when snow accumulation is present by the side of the road. Overall, when the frontal view of the camera clearly observes the road, the model classifies road conditions with high accuracy. Figure 8 presents example images that were misclassified as a result of an incorrect highest-probability condition estimate. For the snow plow camera views in the first row, the road layout and the camera location were significantly different from the straight roads in the frontal view. We also observed misclassifications when the camera view was blurry due to wetness, and light road color was confused with snow-covered areas, resulting in misclassifications.

Fig. 5. Example images showing the classification results with highest probability.

Fig. 6. Confusion matrix for Snow Plow camera weather estimations.

Fig. 7. Example images showing the classification results with highest probability.

Fig. 8. Example images that are misclassified.

3.1 Experimental Results on Naturalistic Driving Images

With the trained model developed for snow plow vehicle images, we tested the developed model on a sample naturalistic dataset of video footage from in-vehicle cameras facing the roadway. Cameras were installed in personal vehicles and captured individuals' daily driving in their typical environment. Initial results show promising weather condition estimation on the road. Example frames from the naturalistic video recordings are presented in Fig. 9. To concentrate on the actual road regions while testing the developed method, we masked the road regions in front of the vehicle with a trapezoid, as can be observed in the last row of Fig. 9. Since the other environmental regions change significantly and do not contribute much detail to the road weather estimation task, they are not included for images from the naturalistic driving recordings. Although the model classifies rainy/wet and clear conditions, there is still room for improvement in correctly classifying snow-covered regions; the current dataset does not contain any snow-covered road regions in front of the vehicle. Also, the maximum confidence score for a correct classification is comparatively lower, since these naturalistic driving images are quite different from the initial training dataset.
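The trapezoid road-region mask described above can be sketched with numpy alone. The paper does not specify its implementation, so the function name, the trapezoid coordinates, and the frame size below are illustrative assumptions.

```python
import numpy as np

def trapezoid_mask(h, w, top_y=0.55, top_w=0.3, bottom_w=0.9):
    """Boolean mask that is True inside a road-shaped trapezoid occupying
    the lower part of an h-by-w frame: narrow near the horizon (top_y),
    widening linearly toward the bottom edge."""
    mask = np.zeros((h, w), dtype=bool)
    y0 = int(h * top_y)
    for y in range(y0, h):
        frac = (y - y0) / max(h - 1 - y0, 1)          # 0 at top edge, 1 at bottom
        half = 0.5 * (top_w + frac * (bottom_w - top_w)) * w
        cx = w / 2
        mask[y, int(cx - half):int(cx + half)] = True
    return mask

img = np.full((480, 640, 3), 200, dtype=np.uint8)     # illustrative frame
mask = trapezoid_mask(480, 640)
masked = np.where(mask[..., None], img, 0)            # zero out non-road pixels
```

The masked frame keeps only the road region ahead of the vehicle, which is what the last row of Fig. 9 shows being fed to the classifier.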

Fig. 9. Example images showing the classification results on naturalistic driving video.

4 Conclusions

This study developed a customized model for estimating road weather conditions using both CCTV cameras and snow plow vehicle mobile cameras. The model is adapted from the Places205-VGGNet model for scene recognition. The developed model has been trained and tested on two new datasets prepared for road weather condition estimation. Both datasets included a large number of images collected from camera feeds at different locations across the state of Iowa. The developed model achieved 98.57% and 77.32% accuracy on the images from CCTV cameras and snow plow cameras, respectively. While there is still room for improvement, the model provides promising results for weather condition estimation using CCTV surveillance cameras. It should be noted that snow plow camera images are highly variable depending on the vehicle's location, environmental brightness, and road layout. Still, the proposed model provides an 11% accuracy improvement over previous models for road weather classification datasets using dashcam images. The model for mobile images was tested on a naturalistic driving dataset and provided promising results for differentiating clear and wet road conditions. All in all, the developed model is shown to be suitable for CCTV camera feeds as well as mobile camera feeds from snow plow vehicles.

5 Future Work

To increase the overall accuracy of the weather classification results, the team is planning to extract the road regions in a robust manner, especially for the mobile images. Currently, the model suffers from the variety of challenging images under extreme weather, illumination, and road layout conditions. It would be worthwhile to have a less noisy dataset with more images for model training. Model results based on the CCTV images are encouraging, with hopes that these can be supplemented and integrated with images from mobile sources. As the image datasets expand, further improvements could increase the number of weather conditions estimated, such as icy, slushy, foggy, etc. As a future implementation direction, we plan to run the developed models to estimate the road condition with feeds from CCTV surveillance cameras as well as snow plow vehicles on the road. The developed model is beneficial for autonomous selection of snow plow routes and verification of extreme road conditions on roadways.

References

1. U.S. DOT Federal Highway Administration: How do weather events impact roads? https://ops.fhwa.dot.gov/weather/q1_roadimpact.htm. Accessed 01 Aug 2018
2. Rakha, H., Arafeh, M., Park, S.: Modeling inclement weather impacts on traffic stream behavior. Int. J. Transp. Sci. Technol. 1(1), 25–47 (2012)
3. Haug, A., Grosanic, S.: Usage of road weather sensors for automatic traffic control on motorways. Transp. Res. Procedia 15, 537–547 (2016)
4. Ogura, T., Kageyama, I., Nasukawa, K., Miyashita, Y., Kitagawa, H., Imada, Y.: Study on a road surface sensing system for snow and ice road. JSAE Rev. 23(3), 333–339 (2002)

5. Hirt, B.: Installing snowplow cameras and integrating images into MnDOT's traveler information system. National Transportation Library (2017)
6. Son, S., Baek, Y.: Design and implementation of real-time vehicular camera for driver assistance and traffic congestion estimation. Sensors 15(8), 20204–20231 (2015)
7. Rajamohan, D., Gannu, B., Rajan, K.S.: MAARGHA: a prototype system for road condition and surface type estimation by fusing multi-sensor data. ISPRS Int. J. Geo-Inf. 4(3), 1225–1245 (2015)
8. Kutila, M., Pyykönen, P., Ritter, W., Sawade, O., Schäufele, B.: Automotive LIDAR sensor development scenarios for harsh weather conditions. In: 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro (2016)
9. Nguyen, C.V., Milford, M., Mahony, R.: 3D tracking of water hazards with polarized stereo cameras. In: IEEE International Conference on Robotics and Automation (ICRA) (2017)
10. Abdic, I., Fridman, L., Brown, D.E., Angell, W., Reimer, B., Marchi, E., Schuller, B.: Detecting road surface wetness from audio: a deep learning approach. In: 23rd International Conference on Pattern Recognition (ICPR) (2016)
11. Kuehnle, A., Burghout, W.: Winter road condition recognition using video image classification. Transp. Res. Rec. J. Transp. Res. Board 1627, 29–33 (1998)
12. Pan, G., Fu, L., Yu, R., Muresan, M.I.: Winter road surface condition recognition using a pre-trained deep convolutional neural network. In: Transportation Research Board 97th Annual Meeting, Washington, DC (2018)
13. Qian, Y., Almazan, E.J., Elder, J.H.: Evaluating features and classifiers for road weather condition analysis. In: IEEE International Conference on Image Processing (ICIP), September 2016
14. Jonsson, P.: Road condition discrimination using weather data and camera images. In: 14th International IEEE Conference on Intelligent Transportation Systems (ITSC) (2011)
15. Jonsson, P.: Classification of road conditions: from camera images and weather data. In: IEEE International Conference on Computational Intelligence for Measurement Systems and Applications (CIMSA) Proceedings (2011)
16. Sun, Z., Jia, K.: Road surface condition classification based on color and texture information. In: Ninth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (2013)
17. Lee, J., Hong, B., Shin, Y., Jang, Y.-J.: Extraction of weather information on road using CCTV video. In: International Conference on Big Data and Smart Computing (BigComp), Hong Kong (2016)
18. Kawarabuki, H., Onoguchi, K.: Snowfall detection under bad visibility scene. In: 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), October 2014
19. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
20. Wang, L., Lee, C.-Y., Tu, Z., Lazebnik, S.: Training deeper convolutional networks with deep supervision. arXiv preprint arXiv:1505.02496 (2015)
21. Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., Torralba, A.: Places: a 10 million image database for scene recognition. IEEE Trans. Pattern Anal. Mach. Intell. 40(6), 1452–1464 (2018)
22. Kalliatakis, G.: Keras-places. https://github.com/GKalliatakis/Keras-VGG16-places365. Accessed 15 Nov 2018
Jonsson, P.: Classification of road conditions: from camera images and weather data. In: IEEE International Conference on Computational Intelligence for Measurement Systems and Applications (CIMSA) Proceedings (2011) 16. Sun, Z., Jia, K.: Road surface condition classification based on color and texture information. In: Ninth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (2013) 17. Lee, J., Hong, B., Shin, Y., Jang, Y.-J.: Extraction of weather information on road using CCTV video. In: International Conference on Big Data and Smart Computing (BigComp), Hong Kong (2016) 18. Kawarabuki, H., Onoguchi, K.: Snowfall detection under bad visibility scene. In: 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), October 2014 19. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014) 20. Wang, L., Lee, C.-Y., Tu, Z., Lazebnik, S.: Training deeper convolutional networks with deep supervision. arXiv preprint arXiv:1505.02496 (2015) 21. Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., Torralba, A.: Places: a 10 million image database for scene recognition. IEEE Trans. Pattern Anal. Mach. Intell. 40 (6), 1452–1464 (2018) 22. Kalliatakis, G.: Keras-places. https://github.com/GKalliatakis/Keras-VGG16-places365. Accessed 15 Nov 2018