Car Make and Model Recognition Under Limited Lighting Conditions at Night
Noppakun Boonsim (1) · Simant Prakoonwit (2)

Pattern Analysis and Applications (2017) 20:1195–1207. DOI 10.1007/s10044-016-0559-6
Theoretical Advances. Received: 24 October 2015 / Accepted: 4 May 2016 / Published online: 23 May 2016
© The Author(s) 2016. This article is published with open access at Springerlink.com

(1) Department of Computer Science and Technology, University of Bedfordshire, Luton, UK
(2) Department of Creative Technology, Bournemouth University, Bournemouth, UK

Abstract

Car make and model recognition (CMMR) has become an important part of intelligent transport systems. Information provided by CMMR can be utilized when license plate numbers cannot be identified or fake number plates are used. CMMR can also be used when a certain model of a vehicle is required to be automatically identified by cameras. The majority of existing CMMR methods are designed to be used only in daytime, when most of the car features can be easily seen. Few methods have been developed to cope with limited lighting conditions at night, where many vehicle features cannot be detected. The aim of this work was to identify car make and model at night by using available rear view features. This paper presents a one-class classifier ensemble designed to identify a particular car model of interest from other models. A combination of salient geographical and shape features of taillights and license plates from the rear view is extracted and used in the recognition process. The majority vote from support vector machine, decision tree, and k-nearest neighbors is applied to verify a target model in the classification process. Experiments on 421 car makes and models captured under limited lighting conditions at night show a classification accuracy rate of about 93 %.

Keywords: Car make and model recognition · Night · One-class classifier ensemble

1 Introduction

During the past decade, car make and model recognition (CMMR) has been an interesting research topic in intelligent transport systems (ITS). CMMR could be a robust method to significantly improve the accuracy and reliability of car identification. More information can be drawn from a CMMR system, in terms of manufacturers, models, shapes, colors, etc., to help specify cars. This leads to more accurate results than using only a license plate. In addition, CMMR can assist the police to detect suspected or blacklisted cars, or unknown license plates, via CCTV cameras in surveillance systems.

Many techniques have been presented in the past decade. The majority of them are feature based. Low-level features such as edge contours, geographical parameters, and corner features are used in the process, as well as high-level features, for example, scale invariant feature transform (SIFT), speeded up robust features (SURF), pyramid histogram of gradient (PHOG), and Gabor features. In some techniques, both low-level and high-level features are combined. In addition, not only are many kinds of feature used to classify CMM, but several classifiers have also been explored in order to increase the recognition rate.

The majority of previous studies have developed solutions to the problem during daytime, when most vehicle features are visible. A few methods have been designed to identify car makes and models at night; however, their accuracies are rather low. This paper presents a new CMMR method for scenes under limited lighting conditions or at night. The proposed method uses new feature selection methods and a one-class classifier ensemble to recognize a vehicle of interest.

At night or under limited lighting conditions, cars' front headlights tend to be on. Due to the brightness and glare of headlights, most of the important features captured from the front view are blurred or incomplete, resulting in serious recognition inaccuracy. Therefore, in this work, the authors propose that a vehicle's features, to be used in the CMMR process, should be captured from the rear view, where features are less prone to brightness and glare. Moreover, the distinctive shapes of a vehicle's back lights and the license plate position contain information which can be utilized in the recognition process. A genetic algorithm is then applied to select the optimal subset of features for the recognition process. To increase the vehicle identification accuracy, the majority vote of three classifiers is employed to classify the captured salient features.
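To make the feature selection step above concrete, the following is a minimal sketch of a genetic-algorithm wrapper for feature-subset selection: binary masks over the feature vector are evolved, and each mask is scored by the cross-validated accuracy of a small classifier trained on the selected columns. The data, the kNN-based fitness function, and all parameter values are illustrative assumptions, not the authors' implementation.

# Illustrative sketch (not the authors' code): genetic-algorithm feature-subset
# selection. Each individual is a binary mask over the feature columns and its
# fitness is the cross-validated accuracy of a small kNN classifier trained on
# the selected columns. Data, fitness choice and parameters are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Cross-validated accuracy of kNN on the selected feature columns."""
    if not mask.any():                          # an empty subset is invalid
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def select_features(X, y, pop_size=20, generations=15, p_mut=0.05):
    n_features = X.shape[1]
    # random initial population of masks (True = keep the feature)
    pop = rng.integers(0, 2, size=(pop_size, n_features)).astype(bool)
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        # tournament selection: keep the better of two random individuals
        parents = []
        for _ in range(pop_size):
            i, j = rng.integers(0, pop_size, size=2)
            parents.append(pop[i] if scores[i] >= scores[j] else pop[j])
        # single-point crossover followed by bit-flip mutation
        children = []
        for k in range(0, pop_size, 2):
            a, b = parents[k], parents[(k + 1) % pop_size]
            cut = rng.integers(1, n_features)
            children.append(np.concatenate([a[:cut], b[cut:]]))
            children.append(np.concatenate([b[:cut], a[cut:]]))
        pop = np.array(children[:pop_size])
        pop ^= rng.random(pop.shape) < p_mut
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()]                 # best mask of the final population

# Example with synthetic data standing in for rear-view feature vectors.
if __name__ == "__main__":
    X = rng.normal(size=(120, 12))
    y = (X[:, 0] + X[:, 3] > 0).astype(int)     # only features 0 and 3 matter here
    print("selected feature indices:", np.flatnonzero(select_features(X, y)))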
The following section describes related work, including background knowledge, past research techniques, and contributions. The proposed system architecture and methodology are presented in Sect. 3. The experiments and results are shown in Sect. 4, followed by the conclusions in the last section.

2 Related work and contributions

2.1 Related work

Over the past decade, many studies have been proposed for CMMR. In previous techniques, feature-based approaches have been commonly used, such as geographical features [1], edge-based features [2, 3], histogram of gradient (HoG) features [4, 5], contour point features [6], curvelet transform features [7], and contourlet transform features [8, 9]. In addition, some works used combinations of two features in order to gain better results, for example, the integration of wavelet and contourlet features [10], and the combination of PHOG and Gabor features [11].

Other feature-based methods (called sub-feature based), such as SIFT [12] and SURF [13, 14], are insensitive to changes in object scale. They have demonstrated inexpensive computation time and high accuracy rates for object recognition. A variety of classifiers have also been proposed to classify those features, such as k-nearest neighbor (kNN) [15], support vector machines (SVM) [16], neural networks [17], Bayesian methods [18], and ensemble methods [11].

Baran et al. [13] presented a new recognition technique to deal with real-time and non-real-time situations. SURF features and SVM are implemented to identify car models in real time. The accuracy rate of the recognition was reported at 91.7 %. To increase the accuracy, they used a combination of edge histogram, SIFT, and SURF features to recognize CMM in off-line conditions, and reported a recognition accuracy rate of 97.2 %. Another method was proposed by Zhang [11], who presented a cascade classifier ensemble consisting of two main stages. The first is an ensemble of four classifiers, SVM, kNN, random forest, and multiple-layer perceptrons (MLP), accepting Gabor and PHOG features. The outputs of the first stage are an accepted class and a rejected class. The rejected class from the first stage is sent for re-verification in the second stage, which is implemented by a rotation forest (RF) of MLPs as components to predict the unclassified subject. This method is reported with a 98 % accuracy rate over 21 classes. Finally, Kafai and Bhanu [19] presented a novel MMR method using dynamic Bayesian networks. They use geographical parameters of the car's rear view, such as taillight shape, the angle between taillight and license plate, and the region of interest. The reported recognition results were better than those of kNN, LDA, and SVM.
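For concreteness, the sketch below computes a few rear-view geometric descriptors of the kind discussed above (taillight aspect ratio, distance between taillights, and the angles between each taillight and the license plate) from hypothetical bounding-box detections. The box format, the particular descriptors, and the normalisation are illustrative assumptions, not the exact parameters used in [19] or in this paper.

# Illustrative sketch (not from [19] or this paper): a few rear-view geometric
# descriptors computed from axis-aligned bounding boxes of the two taillights
# and the license plate. Box format and descriptor choices are assumptions.
import math
from dataclasses import dataclass

@dataclass
class Box:
    x: float   # top-left corner, in pixels
    y: float
    w: float   # width and height, in pixels
    h: float

    @property
    def center(self):
        return (self.x + self.w / 2.0, self.y + self.h / 2.0)

def angle_deg(p, q):
    """Angle of the line from p to q, relative to the horizontal image axis."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

def rear_view_features(left_light, right_light, plate):
    lc, rc, pc = left_light.center, right_light.center, plate.center
    lights_distance = math.dist(lc, rc)          # distance between taillights
    return {
        "taillight_aspect_ratio": left_light.w / left_light.h,
        "plate_aspect_ratio": plate.w / plate.h,
        "lights_distance": lights_distance,
        # angles between each taillight and the license plate
        "angle_left_plate": angle_deg(lc, pc),
        "angle_right_plate": angle_deg(rc, pc),
        # horizontal plate offset, normalised by the taillight distance to
        # reduce sensitivity to image scale
        "plate_offset_norm": (pc[0] - (lc[0] + rc[0]) / 2.0) / lights_distance,
    }

# Example with made-up detections (pixel coordinates).
print(rear_view_features(Box(50, 200, 60, 30), Box(350, 200, 60, 30),
                         Box(180, 240, 100, 40)))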
As mentioned earlier, CMMR offers a valuable enhancement by supplying additional information to car identification systems. Even though the recognition rates of existing methods are impressive, at more than 90 % accuracy, there are some serious drawbacks. Most algorithms only work well under good lighting conditions, where most vehicle features are visible, and without occlusions. As stated in [20], most CMMR systems have difficulties in limited lighting conditions, e.g., at night, and when all of a vehicle's features are not fully visible in the scene. In those conditions, cameras cannot clearly capture the full appearance of vehicles. Vehicles' shapes, colors, headlight shapes, grills, and logos may not correctly or fully appear in the captured images or video, resulting in recognition inaccuracy.

To date, car model classification in limited lighting remains challenging and requires more research to determine a better solution. Therefore, this work aims to present a novel method to address the problem.

A number of works have presented solutions to vehicle recognition in nighttime lighting conditions, for example, driver assistance systems (DAS) [21–24] and vehicle type classification at night, such as distinguishing cars, buses, and trucks [25, 26]. However, none of these works could fully perform CMMR effectively. Under limited lighting conditions, accurate image recognition is challenging due to the reduced number of visible features. Kim et al. [22] presented a multi-level threshold to detect head and tail light spots for DAS and applied an SVM classifier to distinguish the characteristics of the spots, such as the area of and the distance between each blob. Moreover, time-to-collision (TTC) estimation is implemented in [23] to estimate the time before a collision once the spotlights have been detected. Furthermore, rear lamp detection and tracking by a standard low-cost camera were proposed in [24]. The author of that work optimized the camera configuration for rear taillight detection and used Kalman filtering to track the rear lamp pair. In addition, several works dealt with vehicle recognition at night by discriminating vehicle types. Gritsch et al. [25] proposed a nighttime vehicle classification system using a smart eye traffic data sensor (TDS) for vehicle headlight detection, and used the distance between the headlight pair to categorize vehicles as car-like or truck-like.

In this research, an ensemble of three classifiers, one-class SVM (OCSVM), decision tree (DT), and kNN, is employed in the classification stage. We choose three diverse classifiers which differ in their decision making and can complement each other to increase classification accuracy.
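As a rough illustration of the majority-vote ensemble described above, the sketch below trains a one-class SVM on samples of the target model only, and a decision tree and a kNN as target-versus-rest classifiers, then combines their predictions by majority vote. How the individual classifiers are trained, their parameters, and the synthetic data are assumptions for demonstration, not the authors' implementation.

# Illustrative sketch (an assumption about how such an ensemble could be wired,
# not the authors' implementation): a one-class SVM trained on the target model
# only, plus a decision tree and a kNN trained as target-versus-rest
# classifiers, combined by majority vote.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

class TargetModelEnsemble:
    def __init__(self):
        self.ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
        self.tree = DecisionTreeClassifier(max_depth=5, random_state=0)
        self.knn = KNeighborsClassifier(n_neighbors=3)

    def fit(self, X, y):
        """y is 1 for the target make/model and 0 for every other model."""
        self.ocsvm.fit(X[y == 1])       # the one-class SVM sees only the target
        self.tree.fit(X, y)
        self.knn.fit(X, y)
        return self

    def predict(self, X):
        votes = np.stack([
            (self.ocsvm.predict(X) == 1).astype(int),   # OCSVM: +1 means "target"
            self.tree.predict(X),
            self.knn.predict(X),
        ])
        return (votes.sum(axis=0) >= 2).astype(int)     # majority of the 3 votes

# Example with synthetic feature vectors standing in for rear-view features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (60, 6)),    # target model samples
               rng.normal(3.0, 1.0, (120, 6))])  # other models
y = np.array([1] * 60 + [0] * 120)
clf = TargetModelEnsemble().fit(X, y)
print("training accuracy:", (clf.predict(X) == y).mean())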