
Lightning Detection from Weather Radar

Da Fan ([email protected]), Drew Polasky ([email protected])
Department of Meteorology and Atmospheric Science, Pennsylvania State University, State College, PA

Sree Sai Teja Lanka ([email protected]), Sumedha Prathipati ([email protected])
Department of Computer Science and Engineering, Pennsylvania State University, State College, PA

ACM Reference Format:
Da Fan, Drew Polasky, Sree Sai Teja Lanka, and Sumedha Prathipati. 2019. Lightning Detection from Weather Radar. In IST 597 Fall'19: Deep Learning, December 16, 2019, State College, PA. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/1122445.1122456

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
IST 597 Fall'19, December 16, 2019, State College, PA
© 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-9999-9/18/06...$15.00
https://doi.org/10.1145/1122445.1122456

1 ABSTRACT
In this study, we use radar images in a deep learning algorithm to detect lightning. Radar reflectivity represents the quantity and size of water and ice particles in the atmosphere. This value does not directly relate to the presence of lightning, but similar processes that produce high reflectivity values also lead to a greater probability of lightning. We use radar data along with lightning labels from the Geostationary Lightning Mapper to train deep learning models for lightning detection. The radar image was captured once every 5 minutes; the lightning strikes were captured once every 20 seconds and combined into one lightning label every 5 minutes. These data are available from March 2018 to October 2019, giving around 150,000 images in total. We use data augmentation and downsampling to overcome the unbalanced nature of the data and to reduce the memory demands of the model. We initially test UNet, Google Inception V3, and ResNet architectures; in this initial testing, UNet performed the best. Training a new UNet model from scratch, we find that it can reasonably predict lightning locations from radar data, with an F1 score of 0.29.

Keywords: Geostationary Lightning Mapper, Next Generation Weather Radar, UNet, ResNet, Inception V3, Data Augmentation, Data Downsampling

2 INTRODUCTION
Lightning is a significant and difficult-to-predict weather hazard, causing an average of about 50 deaths and 9000 wildland fires annually in the United States (https://www.weather.gov/safety/lightning-victims). Lightning is often accompanied by heavy rainfall, hail, and strong winds. It is important to predict lightning and provide timely alerts about possible lightning strikes. However, it is still a hard task to give precise information about their timing and location. Current methods for predicting lightning in operational settings rely on simple thresholds from radar images [13].

Artificial intelligence can be used to predict various weather phenomena [5]. One possible way to predict lightning is to use machine learning algorithms, which have already been applied to weather prediction. Logistic regression and random forest models were employed by Ruiz and Villa (2007) [8] to distinguish convective and non-convective systems based on features extracted from satellite images. Similarly, Veillette et al. (2013) [12] applied decision trees and neural networks to predict convection initiation from various features, including satellite imagery and radar data.

Deep learning methods [4] offer the ability to encode spatial features at multiple scales and levels of abstraction, with the explicit goal of encoding the features that maximize predictive skill. Schon et al. (2018) [10] trained tree classifiers and neural networks with optical-flow error based on satellite images to predict lightning, achieving high accuracy but also a high false alarm rate. Yunish et al. (2019) [14] likewise applied artificial neural networks with storm parameters from polarimetric radar to predict and nowcast lightning.

In this work, we used three different Convolutional Neural Network (CNN) models, UNet, ResNet, and Inception V3, to predict lightning based on NEXRAD radar images and lightning labels from Geostationary Lightning Mapper (GLM) data.

3 DATA

Dataset
In this analysis, the following datasets were used: 1) the lightning data from the Geostationary Lightning Mapper (GLM), and 2) the composite radar reflectivity fields from the National Severe Storms Laboratory (NSSL) 3D mosaic Next Generation Weather Radar (NEXRAD). Our analysis covers the period from 1 March 2018 until 31 October 2019, during which GLM data are available.

Radar reflectivity images come from the NSSL 3D mosaic NEXRAD. The radar mosaic data have a 1-km horizontal resolution and a 5-min temporal resolution, with 2600x6000 pixels per image covering the Continental US (CONUS). A sample radar image is shown in Figure 1.

Figure 1: Sample radar image. The shading in the radar image indicates radar reflectivity.

The lightning labels utilized in this work were detected by the GLM instrument aboard the GOES-16 satellite. GLM camera pixels detect lightning flashes day and night with a horizontal resolution ranging between 8 and 12 km and an average detection efficiency nearing 90 percent (Goodman et al. 2013 [2]). The lightning data centroids, initially stored in 20-s intervals, were binned into 5-minute intervals and then projected onto a uniform 10-km grid with 260x600 data points within the Contiguous United States. A sample lightning label is shown in Figure 2. The resolution of the lightning labels is one tenth that of the radar data.

Figure 2: Sample lightning label. The green dot indicates a lightning strike at the grid box.

Unbalanced Data
Lightning is relatively rare, present in about 0.000072 of the pixels in our dataset. This presents a challenge for training models, as a model that never predicts lightning will still be highly accurate. To overcome this issue, we tune the data and select training cases with more lightning present. For the same reason, we do not use accuracy as the primary metric for evaluating these models.

Data Downsampling
Downsampling is used to reduce the total size of the images, in order to more easily train the models. Although shrinking an image does not require filling in new space as upsampling does, care must still be taken to ensure that minimal useful information is lost. For example, consider an image made up of alternating black and white pixels. If you shrink this image to half its size by directly sampling the values of every other pixel, you end up with a completely white or black image. We downsample the radar images to match the resolution of the lightning images, reducing the demands on the model while losing as little useful information as possible.
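The class-imbalance pitfall from the Unbalanced Data section can be made concrete with a small sketch; only the 0.000072 lightning fraction comes from our dataset, while the array size is illustrative. A model that never predicts lightning scores near-perfect accuracy yet an F1 score of zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative label vector with roughly the paper's lightning fraction.
n_pixels = 1_000_000
labels = (rng.random(n_pixels) < 0.000072).astype(int)

# A "model" that never predicts lightning.
preds = np.zeros_like(labels)

# Accuracy looks excellent despite the model having no skill at all.
accuracy = (preds == labels).mean()
print(f"accuracy = {accuracy:.6f}")  # ~0.9999

# F1 exposes the failure: with no true positives, F1 is zero.
tp = int(((preds == 1) & (labels == 1)).sum())
fp = int(((preds == 1) & (labels == 0)).sum())
fn = int(((preds == 0) & (labels == 1)).sum())
denom = 2 * tp + fp + fn
f1 = 2 * tp / denom if denom else 0.0
print(f"F1 = {f1}")  # 0.0
```

This is why the UNet result in the abstract is reported as an F1 score rather than an accuracy.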
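The alternating-pixel example in the Data Downsampling section can be sketched as follows. Strided sampling collapses the checkerboard to a constant image, while block averaging (one simple alternative, chosen here for illustration) preserves the image mean:

```python
import numpy as np

# Checkerboard of alternating 0/1 pixels.
img = np.indices((8, 8)).sum(axis=0) % 2

# Naive downsampling: keep every other pixel -> a constant image,
# because every kept pixel sits at an even row and even column.
strided = img[::2, ::2]

# Block-mean downsampling: average each 2x2 block -> uniform gray,
# and the overall image mean is preserved exactly.
blocks = img.reshape(4, 2, 4, 2).mean(axis=(1, 3))

print(strided)  # all zeros
print(blocks)   # all 0.5, matching the original mean
```

The same block-style reduction extends to matching the 2600x6000 radar grid to the 260x600 label grid, a factor of 10 in each dimension.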
Data Augmentation
Lightning pixels are vastly outnumbered by non-lightning pixels, so we use augmentation techniques such as resizing, rotation, and zooming to create additional views of the lightning regions. We add label noise to increase the proportion of lightning (1) pixels relative to non-lightning (0) pixels: we resize, zoom, and scale the images so as to crop them to the portions containing more lightning, a procedure we refer to as adding adversarial label noise.

Adding Adversarial Label Noise. We resize the image and zoom in to raise the fraction of lightning to non-lightning pixels toward 3:4. Figure 3 shows a sample original image and Figure 4 the image after cropping; the dark portion of the original can be compared with the dark portion of the cropped image.

Figure 3: Original radar image

Figure 4: Image obtained after adding adversarial label noise

4 TRANSFER LEARNING AND MODELS
Three image deep learning models (UNet, ResNet, and Google Inception V3) are evaluated for use on radar data.

UNet
This architecture [7] looks like a 'U', which justifies its name. The architecture consists of three sections: the contraction, the bottleneck, and the expansion. The final layer is a 3X3 CNN layer with the number of feature maps equal to the number of segments desired [9]. To detect lightning in the radar images, the UNet model is trained from scratch on the GLM data to produce segmented images as the output. The yellow segmented regions in the output denote the presence of lightning in the image.

Figure 5: Architecture of UNet Model

ResNet
Training deep neural networks with gradient-based optimizers and learning methods can cause vanishing and exploding gradients during backpropagation. With the help of residual blocks, we can increase the number of hidden layers without worrying about this problem. Residual blocks enable the network to preserve what it has learnt previously through an identity mapping, a weight function whose output is equal to its input, so that no diminishing transformation is applied to earlier features.
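The identity-mapping behaviour described above can be illustrated with a minimal NumPy residual block; the two-layer transform, the dimensions, and the ReLU activation are illustrative assumptions, not our trained configuration:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(x + F(x)), with F a small two-layer transform.

    The identity shortcut means that if F collapses to zero, the block
    passes its (non-negative) input through unchanged, so adding depth
    cannot erase what earlier layers learned.
    """
    f = relu(x @ w1) @ w2
    return relu(x + f)

rng = np.random.default_rng(0)
x = relu(rng.standard_normal((4, 8)))  # non-negative activations
w_zero = np.zeros((8, 8))

# With zero residual weights the block is an identity mapping.
y = residual_block(x, w_zero, w_zero)
print(np.allclose(y, x))  # True
```

During backpropagation the shortcut likewise carries the gradient of `x` through unchanged, which is what keeps gradients from vanishing as layers are stacked.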
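One way to realize the lightning-focused cropping described in the Data Augmentation section, sampling a window around a lightning pixel so that positives occupy a larger share of the example, is sketched below; the grid size, window size, and the helper `crop_around` are hypothetical, not our pipeline:

```python
import numpy as np

def crop_around(label, center, size):
    """Extract a size x size window, clipped to stay inside the grid."""
    r, c = center
    h, w = label.shape
    r0 = min(max(r - size // 2, 0), h - size)
    c0 = min(max(c - size // 2, 0), w - size)
    return label[r0:r0 + size, c0:c0 + size]

# Sparse illustrative label grid: one lightning pixel in 100x100.
label = np.zeros((100, 100), dtype=int)
label[40, 60] = 1

full_fraction = label.mean()                    # 0.0001
crop = crop_around(label, (40, 60), size=10)
crop_fraction = crop.mean()                     # 0.01
print(full_fraction, crop_fraction)
```

Cropping this way raises the lightning fraction per training example by two orders of magnitude in this toy case, which is the effect the augmentation aims for.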