
EXPLORATION OF DETECTION AND VISIBILITY ESTIMATION FROM CAMERA IMAGES

Wiel Wauben and Martin Roth R&D Observations and Data Technology Royal Netherlands Meteorological Institute (KNMI) P.O. Box 201, 3730 AE De Bilt, The Netherlands email: [email protected] and [email protected]

ABSTRACT In this report several methods to determine the presence of fog or to estimate the visibility from camera images during daytime are implemented and evaluated. Both the landmark discrimination method using edge detection and visibility estimation from contrast attenuation require the selection of objects at known distances. Results of the landmark discrimination method during fog conditions are good, but are affected by inhomogeneous visibility conditions, which can however be identified as such. The contrast reduction method is very sensitive to the availability of suitable objects. The visibility range covered by these methods is limited by the distances of the available objects. Global image features can also be used to obtain information on the visibility. A decision tree using the mean number of edges and the variation of the horizontal averages of the transmission has been constructed to determine the presence of dense fog. A linear regression model that also includes the mean brightness of an image has been considered to estimate the visibility. Both these methods show promising results. However, further refinement of the methods and more extensive validation against other image data sets are required.

1. INTRODUCTION Visibility is a meteorological variable which is important for the safety and capacity of road and air traffic. Visibility is defined as the greatest distance at which a black object of suitable dimensions, situated near the ground, can be seen and recognized when observed against a background (WMO, 2014). Traditionally, visibility was estimated manually by observers using visibility markers at known distances. In these situations the reported visibility is partly determined by the availability of suitable visibility markers, such as buildings and other objects during daytime and lamps at night. Physically, visibility can be defined in terms of the transparency of the atmosphere, which is reduced due to scattering and absorption by molecules and particles. The transparency or transmittance of the atmosphere is related to the atmospheric extinction. The transmittance can be measured using so-called transmissometers that measure the attenuation of a light source with known intensity over a fixed distance. By definition the visibility (Meteorological Optical Range, MOR) is the distance required to reduce the intensity of a light source to 5 % of its original value. The relationship between the visibility (MOR) and the atmospheric extinction coefficient σ is given by MOR = -ln(0.05)/σ ≈ 3/σ. KNMI uses a transmissometer as a reference instrument for visibilities up to 2 km at the test field in De Bilt. So-called forward scatter sensors are employed in the meteorological network of KNMI to determine the atmospheric extinction from the amount of scattering, assuming extinction is mainly caused by scattering. Visibility can be accurately measured by transmissometers and forward scatter visibility sensors, but these sensors are rather costly and their sample volume is small. Therefore the measurements are representative for a small area only, whereas visibility can vary largely on a small spatial scale.
Hence a meteorological measurement network is generally not dense enough to detect all occurrences of fog. Nowadays cameras are widely used for various applications such as security, supervision, traffic, construction and tourism, and images are readily available on the internet. Furthermore, image processing software is available for the interpretation of these images. Visibility extracted from camera images therefore has the potential to provide useful information. KNMI operates visibility sensors at about 25 automatic stations throughout The Netherlands, while Rijkswaterstaat (the Dutch road authority) has about 5000 traffic cameras along the motorways and near tunnels, bridges, etc. (see Figure 1). Using existing cameras, even if they are not evenly distributed around the country, is an efficient means to

Wauben and Roth Exploration of fog detection and visibility estimation from camera images TECO-2016

obtain additional information on visibility or fog. It is a so-called big data application and is of mutual interest to meteorological institutes and road authorities. In this paper an exploration of applying image processing techniques to camera images has been performed in order to study the feasibility of deriving visibility. The exploration considers fog detection as well as quantitative estimation of the visibility. Several methods have been considered to derive the visibility: (i) edge detection; (ii) contrast reduction and (iii) dehazing techniques. The techniques have been applied to camera images that are available at some KNMI measurement sites and compared to the visibility measurements. Preliminary results of this exploration are presented below.
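The MOR definition given in the introduction is easy to express in code. The sketch below (in Python; the function name and the example extinction value are ours, not from the paper) applies MOR = -ln(0.05)/σ:

```python
import math

def mor_from_extinction(sigma):
    """MOR: the distance over which the intensity of a light source is
    reduced to 5 % of its original value, for extinction sigma in 1/m."""
    return -math.log(0.05) / sigma  # ln(20)/sigma, approximately 3/sigma

# An extinction coefficient of 0.003 per metre corresponds to a MOR of ~1 km
print(round(mor_from_extinction(0.003)))  # → 999
```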

Figure 1: Overview of the automatic weather stations in The Netherlands measuring visibility (left) and the locations of traffic cameras of Rijkswaterstaat around Amsterdam (right).

2. SITES AND CAMERA IMAGES Although camera images are readily available on the internet, there is generally no archive of these images that can be used for analysis. The exploration also required that reference visibility values are available for the evaluation of the results. Therefore the exploration is restricted to KNMI sites equipped with visibility sensors and cameras. The sites considered are the test field at the main premises of KNMI in De Bilt (WMO number 06261), the automatic weather station (AWS) Twente (06290) and Amsterdam Schiphol Airport (06240). The test field in De Bilt is equipped with an AXIS 214 PTZ network camera that takes images of several sensors under test every 10 minutes. An overview of the test field is obtained at 1x zoom looking north with a horizontal viewing angle of about 50°. The image with a 768x576 resolution is stored in JPEG format. The overview images of the test field (see Figure 2, left) show several sensors in the foreground, including the forward scatter visibility sensor indicated by the red rectangle. In the background the radiosonde shed, the main building of KNMI and the radar tower are visible. The automatic weather station Twente is equipped with a fixed AXIS P3364 network camera with a horizontal viewing angle of about 80° and a resolution of 1280x960. The camera at AWS Twente (see Figure 2, right) is intended for monitoring the AWS site and its sensors and is mounted on the mast looking downwards to the southeast. Hence the image shows only a small fraction of the sky and the horizon is distorted. Again an image is available every 10 minutes. The camera at Schiphol is a Sony SNC-VB630 fixed network camera. It is located east of runway 18R looking northwest towards touchdown. These images are available every second through a Milestone video server. Unfortunately images could not be exported from the server, so that in this paper only some screenshots are used for the exploration (see Figure 5).


Figure 2: Example of an image of the test field of KNMI in De Bilt (left) and the automatic weather station at Twente (right). The forward scatter visibility sensor is denoted by the red rectangle.

3. LANDMARK DISCRIMINATION - EDGE DETECTION METHOD The first method considered is the landmark discrimination method using edge detection. The method resembles the way the visibility is estimated by human observers, i.e. by using a set of landmarks at known distances. The method is illustrated in Figure 3 and requires some preparation. For each camera, objects need to be identified on a clear image and the corresponding distances of these objects from the camera need to be determined. In the images of the test field in De Bilt the radiation sensors in the lower left at 17 m, the radiosonde shed at 108 m, the KNMI building at 170 m and the radar tower at 310 m have been selected for this purpose. Note that the tree line at the horizon could also be used. Unfortunately there are no objects at large distances visible on this image. Next, areas are defined around the selected objects. These areas, the yellow rectangles in Figure 3, are not closely fitted around the objects, so that small misalignments of the camera have no effect. In these areas the number of edges is determined (lower left panel). The edges are computed by convolution of the image with a Gaussian derivative with a standard deviation of 1 (a higher standard deviation notably decreases the number of edge pixels detected, while a lower standard deviation makes the image too noisy). Subsequently, a threshold is set to determine the number of strong edge pixels, which generally correspond to objects. The threshold was determined manually to correspond to the value of edge pixels on an image where a target is just visible from the operator's perspective. The threshold thus obtained was an intensity of 5. Note that contamination on the dome covering the camera can also trigger strong edges. An object is considered visible if a sufficient number of edges is contained in the selected area.
Here a fraction of strong edge pixels inside the areas of 10 % or more proved to be a good value. The results can be presented as shown in the lower right panel of Figure 3: a transparent red area indicates that the object is not visible, whereas green denotes a visible object. The distance of the object is reported above the rectangle.
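The visibility test described above can be sketched in a few lines. The Python fragment below is a minimal illustration, not the operational implementation: the Gaussian-derivative convolution is replaced by a simple finite-difference gradient, and the synthetic image and bounding box are ours; the edge-strength threshold of 5 and the 10 % fraction follow the text.

```python
import numpy as np

def strong_edge_fraction(gray, threshold=5):
    # Gradient magnitude; a finite-difference stand-in for the
    # Gaussian-derivative convolution used in the paper
    gy, gx = np.gradient(gray.astype(float))
    return float(np.mean(np.hypot(gx, gy) > threshold))

def object_visible(gray, box, min_fraction=0.10):
    """An object counts as visible if at least 10 % of the pixels in its
    surrounding area (row0, row1, col0, col1) are strong edge pixels."""
    r0, r1, c0, c1 = box
    return strong_edge_fraction(gray[r0:r1, c0:c1]) >= min_fraction

# Synthetic scene: a bright square object on a dark background
img = np.zeros((100, 100))
img[40:60, 40:60] = 200.0
print(object_visible(img, (35, 65, 35, 65)))  # → True (sharp edges present)
```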


Figure 3: Illustration of the landmark discrimination method using edge detection. The panels show the selected objects (top left), the image under evaluation (top right), the strong edges detected for that image (bottom left) and an indication of whether the objects are visible (bottom right). The visibility reported by the forward scatter sensor is 272 m. The landmark discrimination method using edge detection described above has been applied to images for De Bilt Test. Since there are no objects at large distance and the discrimination method only determines whether objects are visible or not, the method can only be applied to situations with low visibility. Otherwise the method would only report that the visibility is good (meaning above 310 m). Some results are given in Figure 4. The upper panels show examples where the visibility is good (left) or poor (right), i.e. when all objects are visible or none are. When the visibility drops, a situation as in the lower left panel can occur, indicating that the visibility is between 17 and 108 m. However, situations often occur where the estimated visibility is inconsistent (lower right): the objects on the field indicate a visibility between 17 and 108 m, but the radar tower at 310 m is visible. Such situations can occur during shallow fog or in case of fog banks or patches of fog. In such situations the landmark discrimination method still works and indicates that the situation is inhomogeneous. The requirements of the user should then determine what visibility is reported in such conditions, e.g. the lowest visibility. Comparing the visibility range obtained from the images with the visibility reported by the sensor often gives inconsistent results. The visibility reported by the visibility sensor might not be representative for the situation at the test field. Hence a manual evaluation by visual inspection of images around fog events has been performed.
For De Bilt Test 37 images have been manually evaluated and on 26 out of 37 images (70 %) all objects were correctly identified as visible or non-visible. Fog occurred in De Bilt Test on the mornings of July 3, 2015 and October 4, 2015 and the fog lifted at about 4:10 and 7:40 UT, respectively. This corresponds with the times at which all objects become visible.


Figure 4: Examples of results obtained with the landmark discrimination method using edge detection for the camera at De Bilt Test. The visibility reported by the forward scatter sensor is 866 m (top left), 61 m (top right), 339 m (bottom left) and 122 m (bottom right). The landmark discrimination method using edge detection has also been applied to images obtained at Schiphol using the same thresholds as derived for De Bilt Test. At Schiphol a lamppost and a viaduct (both at 300 m) and two groups of trees (at about 1 km) serve as objects (see Figure 5). On the left image the atmosphere is clear and all the landmarks are visible, in agreement with the results of the landmark discrimination method. The visibility sensor at 18R touchdown reported a MOR of 9340 m. The right image shows a situation with reduced visibility (the sensor reported a MOR of 3130 m). By visual inspection the group of trees on the left is still barely visible, while the method classifies this object as non-visible. On inspection of the change in intensity with position on the image, it turns out that in this region the change is too small in comparison with the threshold for edge strength. The object becomes visible if the threshold is lowered to 2. The recognition of the landmarks on the other images obtained on November 2, 2015 was in agreement with visual inspection of the images. For Schiphol 7 images have been manually evaluated and on 6 out of 7 images (86 %) all objects were correctly identified as visible or non-visible. The landmark discrimination results obtained at Schiphol indicate that the thresholds are quite good, although there is room for improvement. Unfortunately, also at Schiphol no landmarks at large distances are available in the camera images to extend and test the discrimination method for larger visibility values. Note that fog generally forms at night or in the morning and disappears shortly after sunrise. Hence the number of fog cases is limited, since only daytime is considered here.
Fog can also be detected by studying the number of edges in the whole image, without detailed knowledge of objects and their distances. The number and strength of the edges may also give an estimate of the visibility. This is considered in section 5.1.


Figure 5: Examples of results obtained with the landmark discrimination method using edge detection for the camera at Schiphol. The visibility reported by the forward scatter sensor is 9340 m (left) and 3130 m (right).

4. VISIBILITY ESTIMATION FROM CONTRAST REDUCTION BETWEEN TWO TARGETS The second method that was considered estimates the visibility from contrast reduction. Koschmieder’s law gives the relationship between the apparent contrast C(x) of an object at a distance x, seen against the horizon sky by an observer, and its inherent contrast C0. The latter is the contrast that the object would have against the horizon when seen from very short range. Koschmieder’s relationship can be written as C(x) = C0 exp[-σx], with σ the extinction coefficient. C0, a constant depending on scene illumination and the properties of the object itself, can be eliminated by comparing the contrasts of two objects with the same inherent contrast at different distances x1 and x2. This results in C(x2)/C(x1) = exp[-σ(x2-x1)]. From this expression the extinction coefficient σ and thus the MOR can be estimated. Contrast quantifies the degree to which objects are visible based upon differences in brightness or colour. There are several ways to define contrast, but the most suitable one for visibility estimation is the so-called Weber contrast C = (I-IB)/IB, where I represents the intensity of the object under consideration and IB the intensity of the background. The contrast of a nearby black object is high (C = -1) whereas in poor visibility conditions the object fades into the background (C = 0). Note that the contrast can also be determined for individual RGB channels and that the contrast is a ratio, so that changes in illumination are, to first order, taken into account. To illustrate this method the images obtained at De Bilt Test on August 1, 2015 between 12 and 18 UT are considered. During this period the visibility is good and conditions are homogeneous. For these images objects were chosen that have similar optical properties and that are sufficiently spaced. In the pictures below (Figure 6) two patches of trees were chosen and for the background a patch of the sky.
On the same image two other objects, namely the bright radar tower and a dark patch of trees, together with another patch of sky, were also chosen. The pixel values were averaged over the selected areas to reduce spatial variations. From these pictures the visibility is obtained from the contrast reduction for each of the 3 colour channels (RGB) and for each of the 2 object pairs and is plotted in the right panel of Figure 6. The object pair using trees is indicated by the ‘trees’ curves in the legend and the pair using the radar tower and a patch of trees as targets is indicated by the ‘tower’ curves. For comparison the values of a forward scatter sensor, located in the same scene, are shown in the same figure. From the graphs it can be seen that there is a significant underestimation of the visibility obtained from the ‘tower’. When objects are not black or sufficiently dark, the derived visibility is frequently significantly underestimated, since the added reflected light from these objects results in a lower contrast and thus a lower estimated visibility. Note that for the ‘trees’ the visibility is lowest for the blue channel, while this colour has the highest estimated visibility for the ‘tower’ curves. When using similar dark objects (trees), the visibility on this clear day is lowest for blue due to Rayleigh scattering, since shorter wavelengths are scattered more. Another point worth mentioning is that the green and red channels fluctuate more than the blue channel. This is mainly caused by the large fluctuations in the background sky in these channels. There is a clear dominance of red over the whole of each image of the test field at De Bilt. This is possibly due to the automatic white balance setting of the camera. Other objects that have been considered are buildings and grass. The results for the first are often affected by reflections, and for the latter by shadows and the position of the sun. Hence one has to be very careful in selecting suitable objects for the contrast reduction method.
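The two-target estimate follows directly from Koschmieder's relationship. The Python sketch below illustrates the computation; the pixel intensities are hypothetical, while the distances correspond to the radiosonde shed (108 m) and radar tower (310 m) mentioned earlier, and the 5 % MOR definition is as in the introduction.

```python
import math

def weber_contrast(intensity, background):
    # C = (I - I_B) / I_B; a nearby black object gives C = -1
    return (intensity - background) / background

def mor_from_two_targets(c_near, c_far, x_near, x_far):
    """Eliminate the inherent contrast C0 via the ratio of the apparent
    contrasts of two similar targets: C(x2)/C(x1) = exp(-sigma*(x2-x1))."""
    sigma = -math.log(c_far / c_near) / (x_far - x_near)
    return -math.log(0.05) / sigma  # MOR, approximately 3/sigma

# Hypothetical mean pixel values for two tree patches against a sky patch
c1 = weber_contrast(40.0, 200.0)   # -0.8
c2 = weber_contrast(120.0, 200.0)  # -0.4
print(round(mor_from_two_targets(c1, c2, 108.0, 310.0)))  # → 873
```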

Figure 6: Illustration of results obtained with the contrast reduction method using images obtained with the camera at the De Bilt Test field during good visibility conditions (left). The brown, blue and yellow rectangles denote the nearby object, the farther object and the background, respectively. The derived visibility between 12 and 18 UT and the MOR reported by the forward scatter sensor are given (right). The contrast reduction method was also used to estimate the visibility on a day with fog (October 4, 2015). The chosen objects now include part of a nearby black pole and trees. The visibility graph clearly shows a large overestimation, by about a factor of 6, in the early morning (5-7 UT) during fog. The main cause (aside from the general ones listed below) is the non-suitability of the objects. The trees were not visible in the morning, which makes them unsuitable as targets. However, no sufficiently dark objects were available that were still distinguishable at these low visibilities. Also notice the dip at around 10 UT for all three colours. This is caused by the sun casting shadows on the closest target. Note that the temporal and spectral variation of the visibility derived by the contrast reduction method is in some situations an indicator of possible problems under specific conditions.

Figure 7: As Figure 6, but during a period with fog. Again, the availability of suitable objects is essential for the contrast reduction method. A target is suitable if it is sufficiently dark and large enough to be detectable by the camera. It is advantageous to choose the targets along a straight line. This reduces the influence of stray background light and atmospheric inhomogeneities across the image. Furthermore, the distance to the near target and the distance between the targets should be large enough for sufficient attenuation of the target contrast.


These two distances determine the visibility regime which can be detected within reasonable error. The error in the visibility increases when the visibility is much larger or much smaller than the distance between the targets. In the first case the contrast reduction is too small and in the latter the objects fade into the background, making them unsuitable for deriving the visibility. Besides the availability of suitable objects, the objects in an image should be similarly illuminated by the sun. Inhomogeneous lighting is mostly caused by the change in position of the sun during the day and by changes in cloudiness. A good approximation to this requirement is to use images on which the sun is behind the camera on a relatively clear day. Overexposure also introduces errors and should either be compensated by the camera, or such images should not be considered, as was done in this report.

5. GLOBAL IMAGE FEATURES The landmark discrimination and contrast reduction methods presented above both require that specific targets be selected and their distances determined. An approach that overcomes this by focusing on global image features is presented below. First, the two features that are considered here are introduced. The feature computation is done in R and the source code can be obtained at https://github.com/MartinRoth/visDec. Section 6 presents two different ways to use these features, for fog classification and visibility estimation, respectively. 5.1. Mean edges Edge detection was already used in the landmark discrimination method. Instead of focusing on specific targets, the mean number of edges in a given image is now calculated. The banner with the title of the image, i.e. the top 16 lines for the images of De Bilt Test, is not considered. Under daylight conditions the mean number of edges can be viewed as a relative indication of the visibility. In a clear situation (upper panel of Figure 9) the image contains 2.03 % edges and in a foggy situation this reduces to 0.72 % edges. The correlation is, however, rather poor when the fraction of edges of the image is plotted against the MOR measured at the test field, see Figure 8. Hence, further refinement of the method is required, as presented in the next section. Note that in night conditions fog can result in an increase of the fraction of edges, because of the scattering of light from light sources by the water droplets.

Figure 8: Scatter plot of the mean fraction of edges of an image versus the natural logarithm of the MOR for De Bilt Test.
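The mean-edge feature can be sketched as follows. This is a rough Python stand-in for the authors' R implementation in the visDec repository: a finite-difference gradient replaces the edge detector, and the threshold value is an assumption; the 16-line banner exclusion follows the text.

```python
import numpy as np

def mean_edge_fraction(gray, banner_rows=16, threshold=5):
    """Global image feature: fraction of strong edge pixels over the whole
    image, skipping the 16-line title banner of the De Bilt Test images."""
    body = gray[banner_rows:, :].astype(float)
    gy, gx = np.gradient(body)  # finite-difference stand-in for a Gaussian derivative
    return float(np.mean(np.hypot(gx, gy) > threshold))

# An image with many strong intensity transitions scores high,
# a featureless grey image scores zero.
stripes = np.tile(np.repeat(np.array([0.0, 200.0]), 2), (66, 16))
print(mean_edge_fraction(stripes) > mean_edge_fraction(np.zeros((66, 64))))  # → True
```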


5.2. Transmission estimate using the Dark Channel prior He et al. (2011) present an approach for haze removal from (single) images based on the so-called dark channel prior. The dark channel prior is based on the observation “that most patches (small sub-images) in a haze-free outdoor image contain some pixels which have very low intensities in at least one color channel”. This approach uses haze removal to improve the quality of an image. Instead of focusing on the haze removal, the focus here is on the haze itself. Therefore, the transmission in local patches is estimated with the technique developed by He et al. (2011). Horizontal averages are calculated from the transmission to infer the transition between sky and non-sky regions. In clear situations this transition is relatively sharp, whereas in a foggy situation the transition is less clear (see Figure 9).

Figure 9: Illustration of the transmission and the horizontal averages in a clear (top) and a foggy (bottom) situation. The idea is to use the horizontal averages and compute the change point, or a measure of the variation of the curve, and use this as the feature of the image. Different features could obviously be used as well, for instance the mean brightness of the image.
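A minimal Python sketch of the transmission estimate and the row-average feature is given below (the authors' code is in R). The patch size of 15 and ω = 0.95 are the values suggested by He et al. (2011); `row_average_variation` is our illustrative name for the variation measure, using the standard deviation as one possible choice.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def transmission_map(rgb, atmospheric_light, patch=15, omega=0.95):
    """Transmission estimate t = 1 - omega * darkchannel(I / A) after
    He et al. (2011); rgb is an HxWx3 array scaled to [0, 1]."""
    normalized = rgb / atmospheric_light          # divide each channel by A
    channel_min = normalized.min(axis=2)          # min over the colour channels
    padded = np.pad(channel_min, patch // 2, mode="edge")
    dark = sliding_window_view(padded, (patch, patch)).min(axis=(2, 3))  # min over patch
    return 1.0 - omega * dark

def row_average_variation(transmission):
    """Variation of the horizontal (row) averages of the transmission:
    small for a washed-out foggy scene, larger when the sky/non-sky
    transition is sharp."""
    return float(np.std(transmission.mean(axis=1)))

# A uniformly bright (hazy) image yields a low transmission everywhere
hazy = np.ones((20, 20, 3))
print(np.allclose(transmission_map(hazy, 1.0), 0.05))  # → True
```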

6. FOG DETECTION AND VISIBILITY ESTIMATION FROM IMAGE FEATURES 6.1. Fog classification using a decision tree Using the above-mentioned features (and in future possibly others) a classification (and eventually a clustering) scheme has been developed to predict whether an image is foggy, in this case showing dense fog with MOR < 250 m, or not. As training set the images and MOR data obtained at De Bilt from June 1, 2015 until December 31, 2015 (16001 images) have been used. The evaluation set consists of the images of the same camera for the period January 1 till June 30, 2016 (13883 images). Based on the training set the preliminary decision tree shown in Figure 10 is obtained. Under each node of the tree stands its criterion. When this criterion is fulfilled one goes to the left branch and otherwise to the right. The tree is thus read as follows for an image with meanEdge = 0.0039 and changePoint = 295. From the top node the lower right node, number 3, is reached because meanEdge ≥ 0.0041 is false. In the next step the lower left node, number 6, is reached because changePoint < 296 is true. Node number 6 tells us that 24 instances in the training set (0 % of the training set) ended up in this node and that none of these were foggy, i.e. showed dense fog with MOR < 250 m. Therefore, the chance that an image in this class is foggy is set to zero. Via this approach a decision tree is obtained that predicts the probability of fog from the features of the images in the test set, where node number 7, and to a lesser extent node number 11, indicate the possibility of fog. Nodes 4 and 10 have a negligible probability of fog, so images ending up in these nodes are considered not foggy. The training set used to determine the decision tree included 25 situations with dense fog (MOR < 250 m). The decision tree correctly classified 16 images as foggy (Probability of Detection, POD = 64 %), whereas 3 faulty classifications of fog occurred (False Alarm Rate, FAR = 16 %). Of the 9 situations with dense fog that were not classified correctly, 7 occurred in the morning of November 2, 2015. Visual inspection shows that these images are quite clear (there was shallow fog in the preceding hour). In the other two situations it is already dark. Therefore, the POD is actually much higher than the 64 % reported above. The FAR on the other hand is lower, as the three faulty classifications correspond to dark situations.
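Reading the tree amounts to walking through nested conditions. The Python sketch below encodes only the two splits quoted in the text; the remaining structure and the probability returned for the foggy branch are illustrative placeholders, not the fitted values.

```python
def fog_probability(mean_edge, change_point):
    """Walk through the decision tree of Figure 10. Only the two splits
    quoted in the text are encoded; the value returned for the remaining
    branch is an illustrative placeholder."""
    if mean_edge >= 0.0041:
        # criterion fulfilled: left branch, many edges, considered clear
        return 0.0
    if change_point < 296:
        # node 6: 24 training images ended here, none of them foggy
        return 0.0
    return 0.75  # placeholder for the branch containing the foggy nodes

# The worked example from the text: meanEdge = 0.0039 and changePoint = 295
print(fog_probability(0.0039, 295))  # → 0.0
```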

Figure 10: Decision tree for classification of foggy situations (in green) using two image features. 47 images in the evaluation set of 2016 are situations with dense fog, i.e. a MOR below 250 meters. All these images were classified correctly by the decision tree (POD = 100 %). The decision tree approach leads to three faulty classifications of foggy situations (FAR = 6 %). In two of the three cases it is already dark and the MOR is around 750 meters, which corresponds only to light fog. In the last case the MOR is 1390 meters. However, the radar tower, which is only 310 meters from the camera, is not visible, so here the camera-based approach gives correct results. The situation in Twente is different for two reasons. First, the wide angle of the camera makes the horizontal averaging of the transmission less appropriate. Moreover, even on a clear day there are only a few edges in the image (most of which are very close to the camera, i.e. from the equipment of the automatic weather station). Nevertheless, it was quite simple to detect situations where the visibility reported by the sensor was not representative or, for no obvious reason, differed from the information extracted from the image using the decision tree. For example, during the afternoon of August 23, 2015 the sensor reported MOR < 250 m, although the images show no sign of reduced visibility.

Wauben and Roth Exploration of fog detection and visibility estimation from camera images TECO-2016

6.2. Visibility estimation using regression A regression model can be used to get a quantitative estimate of the visibility. In a first exploratory approach log(MOR) has been modelled as a linear function of meanEdge, changePoint and meanBrightness. The best linear fit to log(MOR) of the training set was 13.3 + 282*meanEdge – 0.02*changePoint – 479*meanBrightness. The modelled response versus the observed MOR is shown in Figure 11.
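The fitted model is a one-liner; the Python sketch below evaluates it and converts the (natural) log back to metres. The feature values in the example are hypothetical.

```python
import math

def modelled_log_mor(mean_edge, change_point, mean_brightness):
    # Best linear fit to log(MOR) on the 2015 De Bilt Test training set
    return 13.3 + 282 * mean_edge - 0.02 * change_point - 479 * mean_brightness

log_mor = modelled_log_mor(0.02, 300, 0.01)  # hypothetical feature values
print(math.exp(log_mor))  # the corresponding MOR estimate in metres
```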

Figure 11: Modelled versus observed log(MOR) using all training data of De Bilt Test of 2015. The dotted vertical lines indicate MOR values of 250, 1000, 3000 and 5000 meters. There are some apparent features in this plot. For instance, the points with very low observed MOR and a modelled value of about 10 are mostly due to situations with shallow fog, when the sensor reports low MOR values but the image correctly suggests higher MOR values. The largest modelled values, exceeding 12, on the other hand correspond to different sceneries in the picture (i.e. the alignment of the camera was incorrect) or to situations where it is dark. The point in the lower right corner with a modelled value below 7 again corresponds to a different scenery. The faulty values mentioned above, together with some other data points that correspond mostly to dark situations (a total of 509 images), have been removed from the training data set. After this cleaning, 15492 images remain. In the left panel of Figure 12 the modelled log(MOR) values for the cleaned data set are given versus the observed values. The cleaning improves the correlation between the modelled and observed MOR data. It is clear that the observed values have a larger spread than the modelled ones. Therefore, a smooth regression line was plotted on top of the scatter plot. The right panel of Figure 12 shows the situation for 2016 using the evaluation data set. Again, too dark images and a few occasions where the alignment of the camera was incorrect have been removed. The fit to the training data appears to apply to the evaluation data as well. The cluster of high modelled log(MOR) values corresponds to situations with many drops on the dome protecting the camera. This needs to be researched in more detail. Table 1 shows the POD and FAR for the training and evaluation set for situations with visibility below 250 and 1000 meters and above 3000 and 5000 meters.
For that purpose the threshold has been applied to the observed MOR (vertical lines in Figure 12). The threshold for the modelled MOR is given by the value of the smooth regression for the observed MOR at the threshold (horizontal lines in Figure 12). Moreover, Table 1 shows the number of hits, i.e. number of situations where both the observed and the modelled MOR meet the threshold (the number of points in the red boxes of Figure 12). The POD for the 250 and 1000 meter threshold is higher for the evaluation set than for the training set. The opposite is the case for the 3000 and 5000 meter lower threshold. For the 250 and 1000 meter thresholds FAR is

11

Wauben and Roth Exploration of fog detection and visibility estimation from camera images TECO-2016

worse for the evaluation set, but there is hardly any difference for the 3000 and 5000 meter thresholds. For even higher thresholds the POD and FAR obtained with the current regression fit seem less useful, owing to the large variation of the residuals at higher MOR.

Table 1: POD, FAR and number of hits for the training and evaluation data sets of De Bilt using different visibility thresholds, based on the regression fit shown in Figure 12.

                  < 250 m             < 1000 m            ≥ 3000 m            ≥ 5000 m
           Training  Evaluation  Training  Evaluation  Training  Evaluation  Training  Evaluation
POD (%)        89        98          88        89          97        96          90        83
FAR (%)         0        18          13        30           1         1           1         2
Hits (#)       16        46          28        81       14795     12095       13475     10195
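The POD, FAR and hit counts in Table 1 follow from simple contingency counts of threshold exceedances. A minimal Python/NumPy sketch (the paper's analysis was carried out in R; here, for simplicity, the same nominal threshold is applied to both series, whereas Table 1 maps the threshold for the modelled MOR through the smooth regression):

```python
import numpy as np

def skill_scores(observed, modelled, threshold, below=True):
    """POD, FAR and hit count for an event defined by a MOR threshold.

    An event is MOR < threshold (below=True, e.g. fog) or
    MOR >= threshold (below=False, e.g. good visibility).
    """
    observed = np.asarray(observed, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    if below:
        obs_event, mod_event = observed < threshold, modelled < threshold
    else:
        obs_event, mod_event = observed >= threshold, modelled >= threshold
    hits = np.sum(obs_event & mod_event)           # both report the event
    misses = np.sum(obs_event & ~mod_event)        # observed but not modelled
    false_alarms = np.sum(~obs_event & mod_event)  # modelled but not observed
    pod = 100.0 * hits / (hits + misses) if hits + misses else float("nan")
    far = 100.0 * false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return pod, far, int(hits)

# Tiny illustrative sample (invented values, not the paper's data)
obs = [100, 200, 800, 4000, 6000, 150]
mod = [120, 300, 1200, 3500, 5500, 140]
pod, far, hits = skill_scores(obs, mod, 250, below=True)
```

POD is the fraction of observed events that the model also reports; FAR is the fraction of modelled events that were not observed.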

Figure 12: As Figure 11, but after cleaning of the training data set of 2015 (left) and for the evaluation set of 2016 (right). The blue curve shows a smooth regression based, in both plots, on the training data set of 2015. The red lines indicate the corresponding values of the modelled log(MOR) for MOR values of 250, 1000, 3000 and 5000 meters.
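The modelled log(MOR) underlying Figures 11 and 12 is a linear regression on image features. A self-contained Python/NumPy sketch with synthetic data (the feature values and coefficients below are invented for illustration; the actual model was fitted in R to the De Bilt data):

```python
import numpy as np

# Synthetic image features per image (invented for illustration):
# mean number of edges, mean brightness, and the variation of the
# horizontal averages of the transmission.
rng = np.random.default_rng(0)
n = 200
edges = rng.uniform(0.0, 1.0, n)
brightness = rng.uniform(0.2, 0.9, n)
trans_var = rng.uniform(0.0, 0.5, n)

# Synthetic target: log(MOR) increasing with each feature plus noise
# (toy coefficients, not the fitted KNMI model).
log_mor = 6.0 + 4.0 * edges + 1.5 * brightness + 2.0 * trans_var \
          + rng.normal(0.0, 0.1, n)

# Ordinary least-squares fit of log(MOR) on the three features plus intercept.
X = np.column_stack([np.ones(n), edges, brightness, trans_var])
coef, *_ = np.linalg.lstsq(X, log_mor, rcond=None)
predicted = X @ coef
residual_std = float(np.std(log_mor - predicted))
```

Fitting in log(MOR) rather than MOR itself reflects that the residual spread grows with visibility, which is also why the skill scores in Table 1 deteriorate at higher thresholds.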

7. SUMMARY, CONCLUSIONS AND OUTLOOK
In this report some methods to assess the presence of fog or the visibility from camera images during daytime are implemented and evaluated.

- The landmark discrimination method using edge detection estimates visibility by determining whether selected objects at known distances can be recognized against the background. Since only objects at relatively short distances from the camera were available, the evaluation was limited to situations with low visibility. Two limited sets of images under varying visibility conditions were used. The sets were obtained at two sites using different cameras. A manual evaluation by visual inspection of images during fog events showed that on 26 out of 37 images (70 %) and on 6 out of 7 images (86 %) all objects were correctly identified as visible or non-visible. The same thresholds were used for both data sets. The main cause of faulty recognitions is the threshold used to determine strong edges. Situations often occur in which the estimated visibility is inconsistent. Such situations can occur during
shallow fog or in the case of fog banks or patches of fog. In such situations the method still works and indicates that the situation is inhomogeneous.

- Visibility can be estimated from the reduction in contrast between two targets. Again, the targets (dark objects with identical inherent contrast) need to be selected and their distances known. This method yields results that are in fair agreement with data from local visibility sensors on a clear day. The availability of suitable objects is, however, essential for the contrast reduction method. Results are often affected by reflections and shadows, and when objects are not black or sufficiently dark the derived visibility is frequently significantly underestimated, since the added reflected light from these objects results in a lower contrast. The error in the visibility increases when the visibility is much larger or much smaller than the distance between the targets. In the first case the contrast reduction is too small; in the latter the objects fade into the background, making them unsuitable for deriving the visibility.

- The presence of fog can be determined using a decision tree based on the mean number of edges of an image and the variation of the horizontal averages of the transmission. The latter is used to infer the transition between sky and non-sky regions. The decision tree was determined using a training set that included 25 situations with dense fog (MOR < 250 m). The decision tree correctly classified 16 images as foggy (POD = 64 %), whereas 3 faulty classifications of fog occurred (FAR = 16 %). For the evaluation set 50 images were classified as foggy, of which 47 were correct, and no cases of dense fog were missed by the decision tree (POD = 100 % and FAR = 6 %). In cases of disagreement the MOR value is either close to 250 m or the image feature values are close to the thresholds used in the decision tree. In some cases it was already dark.
Also, non-representative or faulty MOR values reported by the visibility sensor can be the cause of the disagreement.

- Visibility can be estimated from a linear regression model including the mean number of edges, the mean brightness and the variation of the horizontal averages of the transmission obtained from the image. The results showed several outliers that could be traced to specific sensor problems (see decision tree) or situations where the scenery of the image was incorrect. After cleaning of the data set, the linear regression model showed a good relationship with the observed MOR. Applying a threshold for dense fog (MOR < 250 m) to the observed MOR, with the corresponding threshold for the modelled MOR obtained from a smooth regression, resulted in a POD of 89 % (98 %) and a FAR of 0 % (18 %) for the cleaned training (evaluation) set. For larger MOR thresholds the scores deteriorate.

It should be noted that the results presented in this paper are work in progress. The methods considered are still in development and thresholds can be further optimized. Other image features and temporal changes should be considered in the future. The images and reference MOR data available for this study also need to be expanded, so that more situations with fog become available and the sensitivity and robustness of the methods to location and image scenery, as well as camera settings and image quality, can be investigated. The results presented here look promising and justify further study, especially considering the high number of potential camera sites that could give information on visibility. Note that the visibility information derived from camera images could also be used for validation of the measurements of visibility sensors and as an awareness tool for forecasters, for example showing colour-coded results of the presence of fog on a map, with the functionality that the user can zoom in on an icon to get details or view the original image for verification.
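The contrast reduction method summarized above rests on Koschmieder's law: the apparent contrast of a dark target decays as C(d) = C0·exp(−σd), so two targets with identical inherent contrast at known distances determine σ, and MOR = −ln(0.05)/σ ≈ 3/σ. A minimal Python sketch (the contrasts and distances in the example are hypothetical):

```python
import math

def mor_from_contrast(c_near, c_far, d_near, d_far):
    """Estimate MOR from the apparent contrasts of two dark targets with
    identical inherent contrast at known distances (Koschmieder's law).

    C(d) = C0 * exp(-sigma * d), hence
    sigma = ln(c_near / c_far) / (d_far - d_near) and
    MOR = -ln(0.05) / sigma (approximately 3 / sigma).
    """
    if not (c_near > c_far > 0 and d_far > d_near):
        raise ValueError("need c_near > c_far > 0 and d_far > d_near")
    sigma = math.log(c_near / c_far) / (d_far - d_near)
    return -math.log(0.05) / sigma

# Example: hypothetical targets at 500 m and 1500 m
mor = mor_from_contrast(0.6, 0.2, 500.0, 1500.0)
```

As noted above, the estimate degrades when the visibility is far from the target distances: a nearly unattenuated contrast ratio or a target that has faded into the background makes σ poorly determined.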

ACKNOWLEDGEMENTS
It is our pleasure to thank:
- Zubin Ramlakhan, who during his traineeship at KNMI performed an exploration that is presented in sections 3 and 4 (Ramlakhan, 2015);
- Alexander Los and Frans Vos, from Dexa Solar - Opening New Resources For Solar Energy and TU Delft - Applied Sciences: Imaging Physics, respectively, for the supervision of the traineeship;
- the Study Group Mathematics with Industry 2016, which initiated the exploration presented in sections 5 and 6 (SWI, 2016; Castelli et al., 2016); and
- Andrea Pagani, for implementing the software at KNMI in R and performing further analysis.

The methods in sections 3 and 4 are implemented in MATLAB using the DIPimage toolbox (http://www.diplib.org). The methods in sections 5 and 6 were implemented in R, using in particular the imager package (https://github.com/dahtah/imager), based on the CImg library by David Tschumperlé.
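As an illustration of the edge-based landmark check of sections 3 and 4, the following Python/NumPy sketch classifies regions of interest as visible or obscured from the fraction of strong-gradient pixels (the original implementation used MATLAB with DIPimage; the ROI coordinates and thresholds here are invented):

```python
import numpy as np

def strong_edge_fraction(gray, threshold=0.1):
    """Fraction of pixels whose finite-difference gradient magnitude
    exceeds a threshold (a simple stand-in for an edge detector)."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.mean(np.hypot(gx, gy) > threshold))

def visible_landmarks(gray, rois, edge_threshold=0.1, min_fraction=0.02):
    """Classify each region of interest (row0, row1, col0, col1) as
    visible (True) or obscured by fog (False) from its edge content."""
    return [strong_edge_fraction(gray[r0:r1, c0:c1], edge_threshold) >= min_fraction
            for r0, r1, c0, c1 in rois]

# Synthetic scene: a sharp checkerboard 'landmark' on the left half,
# featureless grey ('fog') on the right half.
img = np.full((64, 128), 0.5)
yy, xx = np.mgrid[0:64, 0:64]
img[:, :64] = ((yy // 8 + xx // 8) % 2).astype(float)

flags = visible_landmarks(img, [(0, 64, 0, 64), (0, 64, 64, 128)])
```

With ROIs at known distances, the farthest ROI still classified as visible bounds the visibility from below, which is the principle behind the landmark discrimination method.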

REFERENCES
Castelli, R., Frolkovič, P., Reinhardt, C., Stolk, C.C., Tomczyk, J. and A. Vromans: Fog detection from camera images, to appear in the proceedings of SWI-2016, May 13, 2016.
He, K., Sun, J. and X. Tang: Single image haze removal using dark channel prior, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, 2341-2353, December 2011.
Ramlakhan, Z.: Feasibility study of fog detection and visibility estimation using camera images, KNMI internal report, De Bilt, December 2015.
SWI-2016: Studiegroep Wiskunde met de Industrie (Study Group Mathematics with Industry), Radboud University Nijmegen, 25-29 January 2016 (http://www.ru.nl/math/research/vmconferences/swi-2016).
WMO: Guide to Meteorological Instruments and Methods of Observation, World Meteorological Organization No. 8, Geneva, Switzerland, 2014 Edition (https://www.wmo.int/pages/prog/www/IMOP/CIMO-Guide.html).