
The Pennsylvania State University

The Graduate School

College of Engineering

IMPROVING THE DETECTION OF FEATURES IN

BACKSCATTER LIDAR SCANS USING FEATURE

EXTRACTION

A Thesis in

Electrical Engineering

by

Eric S. Rotthoff

© 2012 Eric S. Rotthoff

Submitted in Partial Fulfillment of the Requirements for the Degree of

Master of Science

December 2012

The thesis of Eric S. Rotthoff was reviewed and approved∗ by the following:

Timothy J. Kane, Professor of Electrical Engineering, Thesis Adviser

Julio Urbina, Associate Professor of Electrical Engineering

Kultegin Aydin, Professor of Electrical Engineering, Head of the Department of Electrical Engineering

∗Signatures are on file in the Graduate School.

Abstract

This thesis presents the results of applying image segmentation techniques to incoherent LIDAR data to improve the detection of wind features. Improving the detection and analysis of wind information from incoherent LIDAR systems will allow for the adoption of these relatively low cost instruments in environments where the size, complexity, and cost of other options is prohibitive. By applying filtering and segmentation techniques to major features in each scan, the detection and isolation of trackable features was accomplished. The same process was applied to NEXRAD reflectivity data to confirm that the process described is instrument agnostic. The NEXRAD data also provides an estimate of radial particle motion, allowing for a comparison with independent measurements. These techniques continue the development of a robust and accurate method of wind estimation using non-coherent LIDAR systems.

Table of Contents

List of Tables ...... vi

List of Figures ...... vii

Acknowledgments ...... x

Chapter 1. Introduction ...... 1
1.1 Background ...... 1
1.2 Motivation ...... 1
1.3 REAL LIDAR ...... 4
1.4 NEXRAD ...... 5

Chapter 2. Data Format and Conditioning ...... 7
2.1 Running Median High Pass Filter ...... 8
2.2 Cartesian Mapping with Linear Interpolation ...... 9
2.3 Temporal-Mean Image Creation ...... 13

Chapter 3. Segmentation ...... 14
3.1 Segmentation ...... 14
3.2 Segmentation Results ...... 17

Chapter 4. Results ...... 19
4.1 LIDAR Results ...... 19
4.1.1 Neighboring ...... 20
4.1.2 Tracking A Feature ...... 31
4.1.3 Errant Results ...... 41
4.1.4 Autocorrelation ...... 48
4.1.5 Comparison ...... 57
4.2 NEXRAD Results ...... 59
4.2.1 Example 1 ...... 59
4.2.2 Autocorrelation ...... 75
4.2.3 Comparison ...... 81

Chapter 5. Conclusions ...... 82
5.1 Summary ...... 82
5.2 Future Work ...... 83

Appendix A. REAL LIDAR Data Extraction ...... 85

Appendix B. Code Description ...... 87
B.1 read_translate.py ...... 87
B.2 avg_filt_make_movie.py ...... 88
B.3 segment_image.py ...... 88
B.4 analyze_log.py ...... 90

References ...... 91

List of Tables

4.1 Peak Shape Moments for Correlations of REAL Scan Features (N=291) ...... 58
4.2 Peak Shape Moments for Correlations of NEXRAD Features (N=128) ...... 81
A.1 Record Specification for REAL's Data ...... 86

List of Figures

1.1 Raman-shifted Eye-safe LIDAR (REAL) in Operation ...... 5
1.2 WSR-88D KCCX Located in State College, Pennsylvania ...... 6
2.1 Bilinear interpolation example ...... 11
2.2 Bilinear interpolation in Radial Coordinate System ...... 12
3.1 Contour Output ...... 17
3.2 Contour Output Detail ...... 18
3.3 Feature in Frame ...... 18
3.4 Feature Detail ...... 18
4.1 Cloud 1 Frame 1 ...... 21
4.2 Cloud 1 Isolated ...... 22
4.3 Cloud 1 Search Space ...... 22
4.4 Cloud 1 Raw Correlation ...... 23
4.5 Cloud 1 Filtered Correlation ...... 24
4.6 Cloud 1 Motion Estimate ...... 24
4.7 Cloud 2 Frame 1 ...... 25
4.8 Cloud 2 Isolated ...... 25
4.9 Cloud 2 Search Space ...... 26
4.10 Cloud 2 Raw Correlation ...... 26
4.11 Cloud 2 Filtered Correlation ...... 27
4.12 Cloud 2 Motion Estimate ...... 27
4.13 Cloud 3 Frame 1 ...... 28
4.14 Cloud 3 Isolated ...... 28
4.15 Cloud 3 Search Space ...... 29
4.16 Cloud 3 Raw Correlation ...... 29
4.17 Cloud 3 Filtered Correlation ...... 30
4.18 Cloud 3 Motion Estimate ...... 30
4.19 Cloud in Frame 1 ...... 31
4.20 Cloud Isolated in Frame 1 ...... 32
4.21 Cloud Search Space in Frame 2 ...... 32
4.22 Cloud Raw Correlation Frame 1 ...... 33
4.23 Cloud Filtered Correlation Frame 1 ...... 34
4.24 Cloud Motion Estimate From Frame 1 to Frame 2 ...... 34
4.25 Cloud in Frame 2 ...... 35
4.26 Cloud Isolated in Frame 2 ...... 35
4.27 Cloud Search Space in Frame 3 ...... 36
4.28 Cloud Raw Correlation Frame 2 ...... 36
4.29 Cloud Filtered Correlation Frame 2 ...... 37
4.30 Cloud Motion Estimate From Frame 2 to Frame 3 ...... 37
4.31 Cloud in Frame 3 ...... 38
4.32 Cloud Isolated in Frame 3 ...... 38
4.33 Cloud Search Space in Frame 4 ...... 39
4.34 Cloud Raw Correlation Frame 3 ...... 39
4.35 Cloud Filtered Correlation Frame 3 ...... 40
4.36 Cloud Motion Estimate From Frame 3 to Frame 4 ...... 40
4.37 Cloud 2 Frame 1 ...... 42
4.38 Cloud 2 Isolated ...... 42
4.39 Cloud 2 Search Space ...... 43
4.40 Cloud 2 Raw Correlation ...... 43
4.41 Cloud 2 Filtered Correlation ...... 44
4.42 Cloud 2 Motion Estimate ...... 44
4.43 Cloud in Error 2 ...... 45
4.44 Cloud Isolated in Error 2 ...... 45
4.45 Cloud Search Space in Error 2 ...... 45
4.46 Cloud Raw Correlation Error 2 ...... 46
4.47 Cloud Filtered Correlation Error 2 ...... 47
4.48 Cloud Motion Estimate Error 2 ...... 47
4.49 Cloud 1 Frame 1 ...... 48
4.50 Segmented Autocorrelation of Cloud 1 ...... 49
4.51 Raw Autocorrelation of Cloud 1 ...... 50
4.52 Cloud 2 Frame 1 ...... 51
4.53 Segmented Autocorrelation of Cloud 2 ...... 52
4.54 Raw Autocorrelation of Cloud 2 ...... 53
4.55 Cloud 3 Frame 1 ...... 54
4.56 Segmented Autocorrelation of Cloud 3 ...... 55
4.57 Raw Autocorrelation of Cloud 3 ...... 56
4.58 Radar Cloud 1 ...... 61
4.59 Radar Cloud 1 Isolated ...... 61
4.60 Radar Cloud 1 Search Space ...... 62
4.61 Radar Cloud 1 Correlation ...... 63
4.62 Radar Cloud 1 Motion ...... 64
4.63 Radar Cloud 2 ...... 65
4.64 Radar Cloud 2 Isolated ...... 65
4.65 Radar Cloud 2 Search Space ...... 66
4.66 Radar Cloud 2 Correlation ...... 67
4.67 Radar Cloud 2 Motion ...... 68
4.68 Radar Cloud 3 ...... 69
4.69 Radar Cloud 3 Isolated ...... 69
4.70 Radar Cloud 3 Search Space ...... 70
4.71 Radar Cloud 3 Correlation ...... 71
4.72 Radar Cloud 3 Motion ...... 72
4.73 NEXRAD Radial Velocity Measurement of Cloud 1 ...... 73
4.74 NEXRAD Radial Velocity Measurement of Cloud 2 ...... 73
4.75 NEXRAD Radial Velocity Measurement of Cloud 3 ...... 74
4.76 Cloud 1 Frame 1 ...... 75
4.77 Segmented Autocorrelation Cloud 1 ...... 76
4.78 Raw Autocorrelation Cloud 1 ...... 77
4.79 Cloud 2 Frame 1 ...... 77
4.80 Segmented Autocorrelation Cloud 2 ...... 78
4.81 Raw Autocorrelation Cloud 2 ...... 78
4.82 Cloud 3 Frame 1 ...... 79
4.83 Segmented Autocorrelation Cloud 3 ...... 79
4.84 Raw Autocorrelation Cloud 3 ...... 80

Acknowledgments

I would like to acknowledge Dr. Tim Kane for his guidance, insight, and support during this research and the preparation of this thesis. I also thank Dr. Shane Mayor for providing the basis of this research and the LIDAR data that made this work possible.

I am indebted to Brady Bickel for serving as a sounding board for ideas and a source of motivation during moments of doubt. I am grateful to the Raytheon Company for financing this endeavor as well as for providing a flexible schedule and the stimulating environment that made this effort possible. This document would not exist without the encouragement of Dr. Gary Greene, whose philosophy on thesis writing got me started.

Finally, I could not have done this without the love, support, and patience shown by Jessica Rotthoff. Her care of Sara and Kyle, neither of whom were present at the start of this journey, is much appreciated and under-rewarded. Thank you!

Chapter 1

Introduction

1.1 Background

Light Detection and Ranging (LIDAR) has many essential applications in atmospheric research. The basic principle is identical to the more familiar Radio Detection and Ranging (radar). Rather than use electromagnetic radiation from the radio portion of the spectrum, LIDAR uses radiation in and near the visible wavelengths. The difference in wavelengths allows LIDAR to provide insight into different physical regimes than radar. Whereas radar, with wavelengths of a few millimeters to tens of meters, typically interacts with macroscopic objects, LIDAR, with wavelengths of about 250 nanometers to 10 micrometers, is sensitive to microscopic objects. This allows LIDAR instruments to sense the reflection of light from airborne particles such as water vapor, air pollution, and smoke.[1]

1.2 Motivation

The parallels between radar and LIDAR also hold for the capability to sense the motion of the objects they are tracking. For radars, the motion is determined by the phase change of the received energy relative to an adjacent pulse due to the Doppler effect. For LIDARs, any change in frequency is indicative of relative motion between the target and the receiver. This information can be used to compute the radial motion of each return. The instruments that are capable of this measurement are known as Doppler LIDAR systems. While the techniques to perform the necessary measurements for motion estimation are well known, the instruments are more expensive than their incoherent relatives due to the added complexity needed to measure the exact frequency shift.[1][2]

In other domains, such as fluid mechanics, particle tracking has been an active research topic, as there is a need to study the turbulent flow of objects as they pass through the water.[3] The application of these techniques to the simple location information present in the returns of an incoherent scanning LIDAR instrument could provide a similar estimate of motion. Deployment of these simpler and cheaper instruments capable of performing a similar function would be welcome where cost and complexity are driving factors.

Other active areas of research which could benefit from simpler and cheaper instruments include detecting aerosol threats in enclosed venues and complex environments, which also requires eye-safe operation,[4] and measuring wind fields for sporting events and wind energy optimization.[5] Spaceborne applications include mapping desert dust layers, biomass burning, tracking volcano aerosols, and measuring the flow of pollution.[6] Potential mobile applications include the airborne tracking of airplane wake vortices and detection of clear air turbulence.[7]

Initial work was done in this area by Dr. Shane Mayor and Dr. Edwin Eloranta. In 2001 they published a study of estimates of wind velocity over Lake Michigan near Sheboygan, Wisconsin using data from the Volume Imaging LIDAR (VIL) system. The VIL system produced data by scanning over either a fixed elevation angle in a 30° to 60° arc or at a fixed azimuth in a 20° arc. The scans with a constant elevation angle produce images that are analogous to Plan Position Indicator (PPI) plots. Their study produced a spatially resolved wind field. The basic procedure for producing the wind field is to filter the return of each pulse, map the result to Cartesian coordinates, remove the mean image from each frame, and compute the cross-correlation for squares about 500 meters on each side.[8]

Looking at the sequence of images it is easy to see clouds of aerosols moving through the scanned area. These clouds are not the same as a cloud in the sky. Meteorological clouds are visible masses of water droplets or ice crystals. The definition of a cloud used in this thesis is any collection of aerosols which reflects the energy emitted by the LIDAR. To improve the calculation of wind estimates, especially in areas of high aerosol density, isolating the clouds and only performing the cross-correlation on the isolated cloud in the cloud's neighborhood should produce better results than simply correlating chunks of data without regard to each chunk's content. The proposed method for isolating a cloud is to use an image segmentation technique to establish each cloud's extent. The region immediately around the cloud will then be used as one input to the cross-correlation, with the other input being an enlarged version of the same box from the subsequent frame.

In this process we assume that the motion of the cloud is due solely to wind effects, that the cloud's displacement over a given time span, divided by that time, is roughly equal to the wind speed, and that the change in elevation across the frame is minimal.

1.3 REAL LIDAR

The LIDAR data used in this thesis was provided by Dr. Shane Mayor. It was collected by the Raman-shifted Eye-safe Aerosol LIDAR (REAL), which is shown in Figure 1.1. This instrument was designed to be eye-safe at an operational wavelength of 1.54 micrometers. This wavelength is created by using stimulated Raman scattering; however, it is not a Raman LIDAR. It was built by the National Center for Atmospheric Research (NCAR). When operating at a nearly horizontal elevation it has a useful range of approximately 10 kilometers. The instrument is mounted so that it can operate in a scanning mode. This allows the formation of data frames that resemble a PPI display common to radar systems. It can scan in both horizontal and vertical directions. The REAL system has a 4 nanosecond pulse duration, allowing for a theoretical range resolution of 1.2 meters. The system employs a data digitizer that operates at 50 mega-samples per second, which gives it an effective range resolution of approximately 3 meters.[9] Unfortunately, at the resolution of the data from the REAL instrument, this means feature tracking is useless for processing localized turbulence. The data in this thesis was collected on April 1, 2007 around midnight at Dixon, CA during a National Science Foundation program called the Canopy Horizontal Array Turbulence Study.[10] The REAL data format is described in Appendix A.

Fig. 1.1 Raman-shifted Eye-safe Aerosol LIDAR (REAL) in Operation, from http://physics.csuchico.edu/lidar, accessed July 19, 2012.

1.4 NEXRAD Radar

Also present in this thesis is data from the Next Generation Weather Radar (NEXRAD) program. The radars used in NEXRAD are the Weather Surveillance Radar-1988 Doppler (WSR-88D) radars. The NEXRAD program consists of 136 systems that provide almost complete coverage of the continental United States. The radars are Doppler systems that operate in the S-band with a frequency range of 2.7 to 3.0 GHz.[11] As a Doppler system, in addition to the usual reflectivity data, the system can produce measurements of mean radial velocity and the spectrum width of objects. The resolution of the velocity measurements is 0.25 km radially and 0.5 m/s out to a range of 230 km, while reflectivity's resolution is 1 km radially out to 460 km. Both types of measurements have an angular resolution of 1 degree. Each radar can scan at 14 preset elevation angles. The scan strategy for precipitation mode covers 9 elevations every 6 minutes, while the severe weather mode covers all 14 elevations every 5 minutes.[12] The data included here is from radar station KCCX located in State College, Pennsylvania, as depicted in Figure 1.2.[13] The data is from a randomly chosen day while the instrument was collecting in precipitation mode. It was collected on 05/09/2010 from 00:01 to 00:22 GMT.

Fig. 1.2 WSR-88D KCCX Located in State College, Pennsylvania, from http://www.erh.noaa.gov/ctp/pictures/KCCX_RDA.jpg, accessed July 19, 2012.

Chapter 2

Data Format and Conditioning

The first step to determining wind information from LIDAR data is to prepare the data for analysis. The data format for the REAL instrument is described in Appendix A. The procedure for preparing the data is the same as used by Mayor and Eloranta in [8]. The first step is to perform a running median high-pass filter on the raw data. The data is then transformed from a polar coordinate system to a Cartesian space to ease the application of image processing and visualization methods from existing toolkits. The final step is to perform mean image removal from each data frame. This will remove any features that are static over the time-span of the data and aid in the detection of motion from frame to frame. Mayor and Eloranta also perform a histogram normalization. This is not necessary here, as the plotting tools automatically perform this procedure when generating a plot and the internal representation of the data is double precision floating point.

One aspect of the data that is not addressed here is the fact that the image formed from a single scan is not a snapshot; that is, not all of the data in one frame is cotemporal. Because the LIDAR can only perform a one-dimensional measurement, several individual scans must be combined to form a two-dimensional measurement. Completing a scan requires the instrument to slew across elevation or, as in these data, azimuth. The effect on the estimate of winds here is small, as the measured wind speeds are under 6 m/s and the time between adjacent scans is 10.6 seconds. If each measurement took longer to make, the winds under consideration were significantly faster, or the number of scans per image much larger, the impact of the temporal difference between samples in each frame would either need to be accounted for or mitigated.

2.1 Running Median High Pass Filter

A median filter is a nonlinear filter with the primary purpose of noise reduction while keeping the impact on high frequency edge components to a minimum. The only parameter to the filter is a length over which the median is computed. The length is specified in meters and should be chosen based on some characteristic length of the features expected in the data. This application used a length of 450 meters. The running median high-pass is performed independently on each LIDAR data series of fixed azimuth in the radial direction, resulting in a one-dimensional filter.

The usual implementation of a median filter replaces the input value with the median of the window centered at that point. The filter used here instead subtracts the median value from the input value. For half-filter length k and number of input data samples N, the filtered output is

x̂(n) = x(n) − median(x(n − k), . . . , x(n + k)) for n = k, . . . , N − k    (2.1)

This change enhances the filter's ability to highlight outliers, which are presumably due to the aerosol backscatter, and preserves the removal of background drift.[14]
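The following is a minimal NumPy sketch of this running median high-pass filter. The 450 m window is converted to samples using the range resolution, and edge samples without a full window are left at zero; the function name and these edge-handling choices are illustrative assumptions, not taken from the thesis code.

```python
import numpy as np

def running_median_highpass(x, window_m=450.0, range_res_m=3.0):
    """Subtract a running median from a single fixed-azimuth LIDAR return.

    x            : 1-D array of range-gate intensities for one shot
    window_m     : median window length in meters (450 m in this work)
    range_res_m  : spacing between range gates in meters
    """
    x = np.asarray(x, dtype=float)
    k = int(round(window_m / range_res_m)) // 2       # half-filter length in samples
    out = np.zeros_like(x)
    # Only samples with a complete window are filtered, matching Eq. (2.1).
    for n in range(k, len(x) - k):
        out[n] = x[n] - np.median(x[n - k:n + k + 1])
    return out

# Example: a slow background drift plus two narrow aerosol-like returns.
rng = np.random.default_rng(0)
gates = np.linspace(0.0, 1.0, 1000)
signal = 5.0 * gates + rng.normal(0.0, 0.1, gates.size)
signal[400] += 3.0
signal[700] += 2.0
filtered = running_median_highpass(signal)
print(filtered[400], filtered[700])   # the outliers stand out near +3 and +2
```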

2.2 Cartesian Mapping with Linear Interpolation

To allow ease of plotting and the application of image processing techniques contained in popular toolkits, it is necessary to convert the data from the native polar coordinate system to the Cartesian coordinate system. Because the REAL instrument functions by performing an azimuth sweep over a fixed elevation, the output of these measurements is not a horizontal plane. Any wind speeds and directions detected will have to be projected onto the horizontal and vertical dimensions. The simple conversion from polar to Cartesian coordinates is

x = r ∗ cos(θ) and y = r ∗ sin(θ)

As this is a digital system, the straight transformation of coordinates will not always give a sample on the desired integer boundaries. To get meaningful data at each point, values must be estimated from the transformed data using a bilinear interpolation.

Bilinear interpolation is a reasonable approach to filling in samples using the four nearest actual data points. The usual method of performing the bilinear interpolation in Cartesian coordinates is to find the four surrounding points f(x1, y1), f(x1, y2), f(x2, y1), and f(x2, y2) of (x, y), as shown in Figure 2.1. A linear interpolation along the x-axis is performed at each of y1 and y2, giving the intermediate results

f(x, y1) = ((x2 − x)/(x2 − x1)) f(x1, y1) + ((x − x1)/(x2 − x1)) f(x2, y1)    (2.2)

and

f(x, y2) = ((x2 − x)/(x2 − x1)) f(x1, y2) + ((x − x1)/(x2 − x1)) f(x2, y2)    (2.3)

The final step in the bilinear interpolation is to interpolate along the y-axis to get to the point of interest.[15]

f(x, y) = ((y2 − y)/(y2 − y1)) f(x, y1) + ((y − y1)/(y2 − y1)) f(x, y2)    (2.4)

Fig. 2.1 Bilinear interpolation example.

Because the REAL data is natively in polar coordinates, it is evident the bilinear interpolation should occur in the polar coordinate system.[16] Each intermediate interpolation occurs along the data of a single LIDAR shot. This has the added advantage of interpolating data that are from the same time instant, as there is an inherent delay between the measurements of differing azimuths. The surrounding points are determined by computing the polar representation of the desired Cartesian coordinates (x, y), which yields r = √(x² + y²) and θ = atan2(y, x). The data necessary for the interpolation are found by comparing the list of angles and distances present in the frame. The closest angles larger and smaller than necessary are designated θ+ and θ− respectively. Identically, the ranges immediately larger and smaller are r+ and r−. The intermediate interpolation produces

f(r, θ+) = ((r+ − r)/(r+ − r−)) f(r−, θ+) + ((r − r−)/(r+ − r−)) f(r+, θ+)    (2.5)

and

f(r, θ−) = ((r+ − r)/(r+ − r−)) f(r−, θ−) + ((r − r−)/(r+ − r−)) f(r+, θ−)    (2.6)

Interpolating between these intermediate results gives the desired value, as seen in Figure 2.2:

f(x, y) = f(r, θ) = ((θ+ − θ)/(θ+ − θ−)) f(r, θ−) + ((θ − θ−)/(θ+ − θ−)) f(r, θ+)    (2.7)

Fig. 2.2 Bilinear interpolation in Radial Coordinate System.

Applying this interpolation to each Cartesian point in each frame gives the natural data representation desired for using image processing libraries and display toolkits.
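As a concrete illustration, below is a small NumPy sketch of this polar-to-Cartesian resampling for a single frame, assuming a regular grid of ranges and azimuths. The function name and the choice to return zero outside the scanned sector are assumptions of this illustration, not details taken from the thesis code.

```python
import numpy as np

def polar_bilinear_sample(frame, ranges, azimuths, x, y):
    """Estimate the value at Cartesian point (x, y) from a polar frame.

    frame    : 2-D array indexed as frame[azimuth_index, range_index]
    ranges   : 1-D increasing array of range-gate centers (m)
    azimuths : 1-D increasing array of shot azimuths (radians)
    """
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    # Points outside the scanned sector have no surrounding samples.
    if not (ranges[0] <= r <= ranges[-1] and azimuths[0] <= theta <= azimuths[-1]):
        return 0.0
    ir = np.clip(np.searchsorted(ranges, r) - 1, 0, len(ranges) - 2)      # index of r_minus
    it = np.clip(np.searchsorted(azimuths, theta) - 1, 0, len(azimuths) - 2)  # index of theta_minus
    r_m, r_p = ranges[ir], ranges[ir + 1]
    t_m, t_p = azimuths[it], azimuths[it + 1]
    wr = (r - r_m) / (r_p - r_m)            # weight toward r_plus
    wt = (theta - t_m) / (t_p - t_m)        # weight toward theta_plus
    # Interpolate along range within each of the two neighboring shots (Eqs. 2.5 and 2.6)...
    f_tm = (1.0 - wr) * frame[it, ir] + wr * frame[it, ir + 1]
    f_tp = (1.0 - wr) * frame[it + 1, ir] + wr * frame[it + 1, ir + 1]
    # ...then interpolate across azimuth (Eq. 2.7).
    return (1.0 - wt) * f_tm + wt * f_tp

# Example: resample a small synthetic polar frame onto a 2 m Cartesian grid.
ranges = np.arange(100.0, 400.0, 3.0)
azimuths = np.deg2rad(np.arange(30.0, 60.0, 0.5))
frame = np.random.rand(azimuths.size, ranges.size)
xs = ys = np.arange(0.0, 400.0, 2.0)
cart = np.array([[polar_bilinear_sample(frame, ranges, azimuths, x, y) for x in xs] for y in ys])
```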

2.3 Temporal-Mean Image Creation

To reduce the possibility of finding false correlation points it is necessary to remove instrument artifacts or static object returns from each frame. This is accomplished by forming a mean image and subtracting it from each data frame. The mean image is formed by doing an element-by-element sum of matrices representing the data of each frame and simply dividing by the total number of frames. Because each data set contains many frames spread over a rather long time period and the algorithm is looking for motion, this should not materially impact any points of interest in the individual frames.

The contribution to the correlation of any point that is part of a dynamic feature but is present in each frame will be reduced. This is desirable as such points do not contribute any information to our results.[8]
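A sketch of this temporal-mean removal with NumPy is shown below; it assumes the Cartesian frames have already been stacked into a single array, which is an assumption of this illustration rather than a detail of the thesis implementation.

```python
import numpy as np

def remove_temporal_mean(frames):
    """Subtract the per-pixel temporal mean from every frame.

    frames : array of shape (num_frames, height, width)
    Returns an array of the same shape with static features suppressed.
    """
    mean_image = frames.mean(axis=0)      # element-by-element average over all frames
    return frames - mean_image            # broadcasting subtracts it from each frame

# Example with a stack of ten random 64 x 64 frames.
stack = np.random.rand(10, 64, 64)
filtered = remove_temporal_mean(stack)
print(np.allclose(filtered.mean(axis=0), 0.0))   # True: the mean image has been removed
```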

Chapter 3

Segmentation

Segmentation can be used to reduce the complexity of an image’s content.[17]

Humans can easily see a particle cloud move from frame to frame. This thesis tries to leverage that human perception using an automated technique.

3.1 Cloud Segmentation

Isolating clouds of particles is the first step to identifying the motion present between frames. Here a cloud is simply a spatially adjacent group of aerosols visible in the LIDAR return. It is visually intuitive that the frame-to-frame motion of the clouds of particles causing the scattering of the LIDAR's signal will be proportional to the wind speed and follow the direction of the wind. By isolating a cloud from one frame prior to performing a correlation with a corresponding area in a subsequent frame, the strongest correlation will represent the cloud in the next frame. As each frame is composed of measurements made sequentially in time, this method is prone to some error. For the sake of simplicity this complication is assumed to be negligible here.

A simple method of segmenting the image in the desired manner is to form a binary image and implement a border following method. One such method is described by Suzuki and Abe.[17] To prepare the image for the algorithm, the values in the gray-scale image are used to form a histogram. Any values above the 90th percentile are preserved; values below the threshold are zeroed out. The 90th percentile threshold was picked due to performance considerations, in an effort to limit the number of segments identified during this step. This process was performed once on the first data frame of the sequence, and the resulting fixed value of 255 was used for all frames present in the data. Next the border following method is applied. Any resulting borders comprised of fewer than 10 points are filtered out. This removes any small clouds, as they do not contribute to the intuitive understanding achieved by a human.

The segmentation is performed by finding the outermost borders of each particle cloud. The method to accomplish this starts by scanning along each row. Considering a point at (i, j): if f(i, j − 1) = 0 and f(i, j) = 1, or f(i, j) = 1 and f(i, j + 1) = 0, interrupt the scan to follow the border. If f(i, j) = 1 and f(i, j + 1) = 0, set f(i, j) to −BID, else set f(i, j) to BID, where BID is a one-up border id number. The border following is performed by setting (i2, j2) to (i, j − 1) and looking clockwise around (i, j) until a non-zero pixel is found. Assign the location of the non-zero pixel to (i1, j1). If no pixel is found, the border is complete and scanning is resumed; otherwise set (i2, j2) to (i1, j1) and (i3, j3) to (i, j). Starting at (i2, j2), look counter-clockwise around (i3, j3) and assign the first non-zero pixel to (i4, j4). Set f(i3, j3) to −BID if f(i3, j3 + 1) = 0 and it was examined in the previous step, else set f(i3, j3) to BID. If (i4, j4) = (i, j) and (i3, j3) = (i1, j1), the border is complete and scanning is resumed. Otherwise, set (i2, j2) to (i3, j3) and (i3, j3) to (i4, j4) and repeat the steps starting at the counter-clockwise scan.

The resulting borders form the clouds that need to be identified in the subsequent frame. To isolate the clouds, the first step is to determine the minimum and maximum indices of the border in the horizontal and vertical directions. This cloud-containing subset of the image is then cross-correlated with the corresponding region of the subsequent frame. The region of the subsequent frame is enlarged by an appropriate number of samples to allow for the expected wind speed. If the maximum wind speed in the area of measurement is 50 m/s, the resolution of samples is 10 m, and the time between frames is 20 s, the search region of the next frame will be enlarged by (50 m/s ÷ 10 m) × 20 s = 100 samples.
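The sketch below illustrates this isolation step using OpenCV's implementation of the Suzuki-Abe border following algorithm (the routine named in Appendix B). The 90th-percentile threshold, the minimum border length, and the padding arithmetic follow the description above, while the function name and the OpenCV 4.x return convention of findContours are assumptions of this illustration.

```python
import numpy as np
import cv2

def isolate_clouds(frame, next_frame, min_border_points=10,
                   max_wind_mps=50.0, pixel_res_m=10.0, frame_dt_s=20.0):
    """Segment bright features in `frame` and pair each with a padded search
    region from `next_frame`, following the procedure described above."""
    threshold = np.percentile(frame, 90.0)          # keep only the top decile
    binary = (frame >= threshold).astype(np.uint8)
    # Outermost borders only, every border pixel retained (OpenCV 4.x signature).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pad = int(round(max_wind_mps / pixel_res_m * frame_dt_s))   # e.g. 100 samples
    pairs = []
    for c in contours:
        if len(c) < min_border_points:              # drop small clouds
            continue
        x, y, w, h = cv2.boundingRect(c)            # min/max indices of the border
        template = frame[y:y + h, x:x + w]
        y0, y1 = max(0, y - pad), min(next_frame.shape[0], y + h + pad)
        x0, x1 = max(0, x - pad), min(next_frame.shape[1], x + w + pad)
        search = next_frame[y0:y1, x0:x1]
        pairs.append((template, search, (x, y)))
    return pairs

# Example with two synthetic frames.
a = np.random.rand(256, 256).astype(np.float32)
b = np.roll(a, (5, 3), axis=(0, 1))
for template, search, origin in isolate_clouds(a, b):
    print(origin, template.shape, search.shape)
```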

Due to the large number of borders found in the data corresponding to the area closest to the instrument, any borders from this region are removed. The instrument's sensitivity to these closest clouds is highest due to the received power's 1/r² range proportionality. This is also in line with the goal of being able to reproduce a human's intuitive motion estimation from frame to frame; a section that is too busy to easily segment is of no use. The results of correlations in this area are more suspect for several reasons: the noise level is higher in this portion of the data; the spatially denser sampling, with each pulse's measurement so close to the previous pulse, may cause measurements to be non-independent; and the narrow range of space in which the cloud can be searched reduces the chance of finding the same feature in the next frame.

The wind measurement for this cloud is determined by finding the maximum magnitude of the cross-correlation surface, computing the cloud's measured distance traveled, and dividing by the time elapsed between frames:

W = √(xmax² + ymax²) / t    (3.1)
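A minimal sketch of turning a correlation surface into the wind estimate of Equation 3.1 is given below. It uses SciPy's correlate2d for the cross-correlation; the thesis code's exact correlation routine is not specified here, so the choice of routine and the zero-mean normalization are assumptions of this illustration.

```python
import numpy as np
from scipy.signal import correlate2d

def wind_from_correlation(template, search, pixel_res_m, frame_dt_s):
    """Estimate speed (m/s) and displacement (pixels) of a feature.

    template : isolated cloud from the first frame
    search   : padded region from the second frame
    """
    # Zero-mean both inputs so bright backgrounds do not dominate the surface.
    corr = correlate2d(search - search.mean(), template - template.mean(), mode="valid")
    peak_row, peak_col = np.unravel_index(np.argmax(corr), corr.shape)
    # With a symmetric pad, zero displacement corresponds to the surface center.
    dy = peak_row - (corr.shape[0] - 1) / 2.0
    dx = peak_col - (corr.shape[1] - 1) / 2.0
    speed = np.hypot(dx, dy) * pixel_res_m / frame_dt_s     # Equation (3.1)
    return speed, (dx, dy)

# Example: a blob shifted by (3, 5) pixels between frames.
frame1 = np.zeros((120, 120)); frame1[40:60, 40:60] = 1.0
frame2 = np.roll(frame1, (3, 5), axis=(0, 1))
template = frame1[40:60, 40:60]
search = frame2[20:80, 20:80]
print(wind_from_correlation(template, search, pixel_res_m=2.0, frame_dt_s=10.6))
```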

3.2 Segmentation Results

This section presents the typical results of the segmentation technique. The implementation used here is included in the OpenCV toolkit.[18] Figure 3.1 shows the position of the segmented feature in the image frame and Figure 3.2 presents an enlarged picture of the same results, with extents determined by the minimum and maximum indices of each dimension. This region is highlighted in Figure 3.3, which shows the original input to the segmentation routine. The isolated cloud is displayed in Figure 3.4. The decision not to make the boundary a mask is evident from the inclusion of a non-zero component in the upper right corner of the image. The inclusion of neighboring clouds should help determine where the segmented feature is in a subsequent frame when the search space includes multiple clouds, as this preserves the spatial relationship between the features.

Fig. 3.1 Contour Output

Fig. 3.2 Contour Output Detail

Fig. 3.3 Feature in Frame

Fig. 3.4 Feature Detail

Chapter 4

Results

This chapter describes the output of the above processing technique. Section 4.1 includes the results of the algorithm on scanning LIDAR data, while Section 4.2 presents the results of the same technique on NEXRAD data to show the applicability of the method to other types of data. A comparison of the results to actual wind measurements is not possible with the LIDAR data, as no wind information is available. Such a comparison was made using LIDAR data in [19], while NEXRAD measurements can be obtained directly from radial velocity estimates and are mentioned in Section 4.2.1. In any case, the assumption that the motion detected corresponds to the intuitive motion of the feature from one frame to the next can be verified by seeing that the results of the correlation predict a plausible new location in the next frame. This is presented in the following sections as an overlay on the later frame of the translation determined by correlation to have occurred from the earlier to the later frame. If the feature appears to have moved to the location indicated by the overlay, the algorithm has succeeded.

4.1 LIDAR Results

The following sections describe selected results from the processing described above. The data used here is from the REAL instrument. Sections 4.1.1 and 4.1.2 represent successful processing, while Section 4.1.3 shows some limitations of the system. Section 4.1.4 includes the results of performing an autocorrelation on a feature, while Section 4.1.5 determines whether the data should be thresholded before the correlation is performed.

4.1.1 Neighboring Clouds

This section presents the results for neighboring particle clouds in a single frame. Given the localized nature of these features, one would expect the calculated wind motion to be very similar for each. Each feature is isolated and then correlated to an enlarged region in the subsequent frame as described above in Chapters 2 and 3. The results show the intermediate products of each processing step: first the feature is highlighted in the frame, it is then isolated, the search space in the next frame is computed, the results of the correlation of thresholded and non-thresholded data are shown, and the resulting motion estimate is depicted for each case, as in Figures 4.1 to 4.6 respectively.

The measured translations for these three examples, in pixels, are (8.5, 21), (8, 11.5), and (8, 19.5) respectively. The resolution for this data is 2 meters per pixel. The time delta between frames is 10.6 seconds. The features are spaced roughly perpendicular to the wind field, along the horizontal direction in the images, and are spread out over roughly 45 pixels or 90 meters. The estimate of the horizontal wind for each feature is 4.6 m/s oriented 112.0° counter-clockwise from horizontal, 2.8 m/s oriented 126.0° counter-clockwise from horizontal, and 4.2 m/s oriented 112.3° counter-clockwise from horizontal, respectively. Visually the features do appear in the subsequent frames at these locations, as would be intuitively expected. The non-negligible deformation, such as the divergence and curl of the wind field, that occurred between frames does not seem to have a serious impact on the processing.

The results of the correlation are presented in Figures 4.6, 4.12, and 4.18. These figures show the data from the second frame in the correlation. The colored lines depict the extents of the correlation peak, drawn from the corners of the isolated feature as identified in the original image. The correlation extents are found by determining the half-maximum edge of the correlation peak in the vertical and horizontal directions. Thus, if there is a cloud within the lines in the image, it is the estimated position of the feature isolated from the initial frame in the following data. These correlation extents do appear to describe the deformation of the peak from frame to frame on a qualitative basis.
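The following sketch shows one way to measure such half-maximum extents along a row or column through the correlation peak. The walk-outward search and the handling of peaks that never fall below half maximum are illustrative choices, since the exact procedure used in the thesis code is not spelled out here.

```python
import numpy as np

def half_max_extent(profile, peak_index):
    """Return (left, right) indices where a 1-D slice through the correlation
    peak first drops to or below half of the peak value."""
    half = profile[peak_index] / 2.0
    left = peak_index
    while left > 0 and profile[left] > half:
        left -= 1
    right = peak_index
    while right < len(profile) - 1 and profile[right] > half:
        right += 1
    return left, right

def peak_extents(corr):
    """Half-maximum extents of a correlation surface in both directions."""
    row, col = np.unravel_index(np.argmax(corr), corr.shape)
    h_extent = half_max_extent(corr[row, :], col)     # horizontal direction
    v_extent = half_max_extent(corr[:, col], row)     # vertical direction
    return h_extent, v_extent

# Example on a smooth synthetic peak.
y, x = np.mgrid[-20:21, -20:21]
corr = np.exp(-(x**2 + y**2) / 50.0)
print(peak_extents(corr))   # roughly symmetric extents around the center
```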

Fig. 4.1 Cloud 1 Frame 1 22

Fig. 4.2 Cloud 1 Isolated

Fig. 4.3 Cloud 1 Search Space 23

Fig. 4.4 Cloud 1 Raw Correlation 24

Fig. 4.5 Cloud 1 Filtered Correlation

Fig. 4.6 Cloud 1 Motion Estimate 25

Fig. 4.7 Cloud 2 Frame 1

Fig. 4.8 Cloud 2 Isolated 26

Fig. 4.9 Cloud 2 Search Space

Fig. 4.10 Cloud 2 Raw Correlation 27

Fig. 4.11 Cloud 2 Filtered Correlation

Fig. 4.12 Cloud 2 Motion Estimate 28

Fig. 4.13 Cloud 3 Frame 1

Fig. 4.14 Cloud 3 Isolated 29

Fig. 4.15 Cloud 3 Search Space

Fig. 4.16 Cloud 3 Raw Correlation 30

Fig. 4.17 Cloud 3 Filtered Correlation

Fig. 4.18 Cloud 3 Motion Estimate 31

4.1.2 Tracking A Feature

In this section are the results of what may be the most attractive feature of segmenting the frame prior to producing a correlation: the tracking of a particle cloud as it progresses through a sequence of frames. Here a cloud is tracked through three consecutive LIDAR scans. The results are presented in the same format as Section 4.1.1. Here the displacement between frames is measured as (3.5, 23), (18, 20.5), and (12.5, 24.5) pixels from the earliest estimate to the latest. These measurements correspond to winds of 4.7 m/s, 5.5 m/s, and 5.5 m/s, similar to those in the previous section in both magnitude and direction. The data in this section is from approximately two minutes earlier than that of Section 4.1.1. Given this temporal proximity, the similarity of the measurements in the two sections is expected.

Fig. 4.19 Cloud in Frame 1 32

Fig. 4.20 Cloud Isolated in Frame 1

Fig. 4.21 Cloud Search Space in Frame 2 33

Fig. 4.22 Cloud Raw Correlation Frame 1 34

Fig. 4.23 Cloud Filtered Correlation Frame 1

Fig. 4.24 Cloud Motion Estimate From Frame 1 to Frame 2 35

Fig. 4.25 Cloud in Frame 2

Fig. 4.26 Cloud Isolated in Frame 2 36

Fig. 4.27 Cloud Search Space in Frame 3

Fig. 4.28 Cloud 2 Raw Correlation Frame 2 37

Fig. 4.29 Cloud Filtered Correlation Frame 2

Fig. 4.30 Cloud Motion Estimate From Frame 2 to Frame 3 38

Fig. 4.31 Cloud in Frame 3

Fig. 4.32 Cloud Isolated in Frame 3 39

Fig. 4.33 Cloud Search Space in Frame 4

Fig. 4.34 Cloud Raw Correlation Frame 3 40

Fig. 4.35 Cloud Filtered Correlation Frame 3

Fig. 4.36 Cloud Motion Estimate From Frame 3 to Frame 4 41

4.1.3 Errant Results

The processing presented in this thesis is not without its pitfalls. This section includes a couple of examples where the automatic processing produces obviously incorrect results. The same processing products are included here as in the previous Sections 4.1.1 and 4.1.2. The problem with both examples is that the feature being searched for is relatively small. In the search space for both there are several features present, and the larger features of the search space dominate the correlation surface. One possible solution to this problem would be to intelligently decide what space to search, especially if prior results are available to the current processing. Another possibility would be to segment the search space and only allow features which are of similar size to the original feature to be present in the search space. Here the first example's displacement is estimated as (−16.5, −7), which is against the apparent motion of the data. This corresponds to a wind speed of 3.6 m/s. The second example's displacement is measured as (−3, −1), which is also oriented against the apparent motion of the rest of the data. This displacement is equivalent to a wind speed of 0.6 m/s. Given that this feature is from the same spatial region and data frame as the data in Section 4.1.1, it is obvious this measurement is in error.

Fig. 4.37 Cloud 2 Frame 1

Fig. 4.38 Cloud 2 Isolated 43

Fig. 4.39 Cloud 2 Search Space

Fig. 4.40 Cloud 2 Raw Correlation 44

Fig. 4.41 Cloud 2 Filtered Correlation

Fig. 4.42 Cloud 2 Motion Estimate 45

Fig. 4.43 Cloud in Error 2

Fig. 4.44 Cloud Isolated in Error 2

Fig. 4.45 Cloud Search Space in Error 2 46

Fig. 4.46 Cloud Raw Correlation Error 2 47

Fig. 4.47 Cloud Filtered Correlation Error 2

Fig. 4.48 Cloud Motion Estimate Error 2 48

4.1.4 Autocorrelation

By comparing the autocorrelation of some cloud features we can get a sense of the expected width of a correlation peak. This provides a point of comparison with the processing correlation and gives us insight into the deformation of a feature from one frame to the next. The features selected for this section match those in Section 4.1.1. Both the thresholded and non-thresholded autocorrelations are presented here. All these examples highlight the fact that the autocorrelation is not a simple peak. The nonsymmetric components of the correlation surface are due to neighboring clouds present in the search space.

Fig. 4.49 Cloud 1 Frame 1 49

Fig. 4.50 Segmented Autocorrelation of Cloud 1 50

Fig. 4.51 Raw Autocorrelation of Cloud 1 51

Fig. 4.52 Cloud 2 Frame 1 52

Fig. 4.53 Segmented Autocorrelation of Cloud 2 53

Fig. 4.54 Raw Autocorrelation of Cloud 2 54

Fig. 4.55 Cloud 3 Frame 1 55

Fig. 4.56 Segmented Autocorrelation of Cloud 3 56

Fig. 4.57 Raw Autocorrelation of Cloud 3 57

4.1.5 Comparison

A threshold is applied to the data prior to performing the correlation in order to allow the segmentation algorithm to isolate a feature, as a binary image is necessary for the border following technique. To determine whether the thresholding should be preserved through the correlation computation, a comparison of results using thresholded and non-thresholded data is warranted. The criteria presented in this section are based on the shape statistics of a probability density function. The values were computed using a symmetric number of samples from the correlation surface around the peak in both the horizontal and vertical directions. The computations are the mean,

μ = Σx px · x    (4.1)

standard deviation,

σ = √( Σx px · (x − μ)² )    (4.2)

skewness,

γ1 = ( Σx px · (x − μ)³ ) / σ³    (4.3)

and kurtosis,

γ2 = ( Σx px · (x − μ)⁴ ) / σ⁴ − 3    (4.4)

where px is the value of the correlation surface at x, normalized by the sum of all px in the range so the total is 1, and the range of x is symmetric about the maximum peak value. While these quantities have meaning for the distribution of a random variable's values, those meanings are not applicable here. These calculations were determined to be useful in comparing the thresholded and non-thresholded correlation peaks in a quantified manner.
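A small sketch of these shape computations on a one-dimensional slice through a correlation peak is given below; the half-width of the window around the peak is an illustrative parameter, not a value taken from the thesis.

```python
import numpy as np

def peak_shape_moments(profile, peak_index, half_width=10):
    """Mean, standard deviation, skewness, and excess kurtosis of a
    correlation-peak slice, treating the normalized values as weights p_x."""
    lo = max(0, peak_index - half_width)
    hi = min(len(profile), peak_index + half_width + 1)
    x = np.arange(lo, hi) - peak_index            # offsets symmetric about the peak
    p = np.asarray(profile[lo:hi], dtype=float)
    p = p / p.sum()                               # normalize so the weights sum to 1
    mu = np.sum(p * x)                            # Eq. (4.1)
    sigma = np.sqrt(np.sum(p * (x - mu) ** 2))    # Eq. (4.2)
    skew = np.sum(p * (x - mu) ** 3) / sigma**3   # Eq. (4.3)
    kurt = np.sum(p * (x - mu) ** 4) / sigma**4 - 3.0   # Eq. (4.4)
    return mu, sigma, skew, kurt

# Example on a Gaussian-shaped slice: skewness near 0 and excess kurtosis
# near 0 for this symmetric bell.
xs = np.arange(-10, 11)
slice_ = np.exp(-xs**2 / 18.0)
print(peak_shape_moments(slice_, peak_index=10))
```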

Table 4.1 summarizes the results of the shape computations for each correlation type in the horizontal and vertical directions. The table lists the mean and standard deviations of 291 peaks for all four shape statistics. The values are essentially the same for the thresholded and non-thresholded correlations.

Thresholded Correlations
Horizontal Results
                      Mean    Standard Deviation
Mean                  0.49    2.51
Standard Deviation    6.56    3.43
Skewness              0.03    0.59
Kurtosis              0.18    1.66
Vertical Results
                      Mean    Standard Deviation
Mean                 -0.53    2.09
Standard Deviation    6.56    3.48
Skewness             -0.14    0.43
Kurtosis              0.11    1.43

Non-Thresholded Correlations
Horizontal Results
                      Mean    Standard Deviation
Mean                  0.40    2.42
Standard Deviation    6.55    3.31
Skewness              0.02    0.54
Kurtosis              0.09    1.33
Vertical Results
                      Mean    Standard Deviation
Mean                 -0.57    2.02
Standard Deviation    6.79    3.55
Skewness             -0.14    0.38
Kurtosis              0.03    1.29

Table 4.1 Peak Shape Moments for Correlations of REAL Scan Features (N=291)

4.2 NEXRAD Results

This section details the results of applying the processing technique to reflectivity data from a WSR-88D weather radar. Section 4.2.1 includes selected results from processing.

4.2.1 Example 1

The NEXRAD example included in this section represents the same concept as that in Section 4.1.1, that neighboring features should give similar estimates of the wind due to their spatial proximity. The intermediate products and processing on the NEXRAD data are identical to the LIDAR procedures described above. The only exception is that the computation for the search space was modified for the different spatial and temporal resolutions of the NEXRAD data. The data for the NEXRAD reflectivity file was exported from its native format using NOAA's Weather and Climate Toolkit into a GeoTIFF image 797 pixels wide by 646 pixels high.[20] The extent of the data covered a range specified in latitude and longitude from 38.6839 North to 43.0885 North, and 75.3692 West to 80.6388 West. As there are 60 nautical miles per degree of latitude, the vertical resolution is 755.3 meters per pixel. The spacing of the meridians of longitude depends on the cosine of the latitude in question. In our case a factor of cos(lat) = 0.75 is appropriate since it is roughly in the middle of the vertical range. Using this information we arrive at a horizontal resolution of 570.7 meters per pixel. The time elapsed between each frame is 4 minutes and 15 seconds.
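The per-pixel resolution arithmetic above can be reproduced with a few lines of Python; the conversion constants (60 nautical miles per degree and 1852 meters per nautical mile) are standard values, while rounding and the choice of reference latitude may make the results differ slightly from the numbers quoted above.

```python
import math

NM_PER_DEG = 60.0      # nautical miles per degree of latitude
M_PER_NM = 1852.0      # meters per nautical mile

lat_min, lat_max = 38.6839, 43.0885    # degrees North
lon_min, lon_max = 75.3692, 80.6388    # degrees West
width_px, height_px = 797, 646

# Vertical (north-south) meters per pixel.
vertical_m = (lat_max - lat_min) * NM_PER_DEG * M_PER_NM / height_px

# Horizontal (east-west) meters per pixel, scaled by the cosine of a
# reference latitude near the middle of the image (roughly 0.75 here).
mid_lat = 0.5 * (lat_min + lat_max)
horizontal_m = (lon_max - lon_min) * NM_PER_DEG * M_PER_NM * math.cos(math.radians(mid_lat)) / width_px

# Compare with the 755.3 m and 570.7 m per pixel quoted above; the horizontal
# value is sensitive to the reference latitude chosen for the cosine factor.
print(round(vertical_m, 1), round(horizontal_m, 1))
```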

Here the measured offsets between frames are (−2, −9), (−3, −8.5), and (−2.5, −10) respectively. The average wind speed over the time period is computed to be 21.0 m/s, 21.0 m/s, and 23.6 m/s. As in Section 4.1.1, these neighboring features are moving at nearly the same speed as each other as well as in basically the same directions.

One of the products offered by the NEXRAD instrument is a measurement of radial velocity. The features from the three examples in this section are located at (59, 83.5), (−55, 87.5), and (32, 115.5) relative to the radar at the origin. Calculating the projections of the offsets onto the radial directions yields 11.7 m/s, −14.5 m/s, and 2.5 m/s respectively. The measurements in NOAA's Weather and Climate Toolkit for these points are 10 knots, −26 knots, and −1 knots, which in identical units are 5.1 m/s, −13.4 m/s, and −0.5 m/s, as seen in Figures 4.73, 4.74, and 4.75. Of note, the highest magnitude radial velocity measured in this data set is 36 knots or 20.1 m/s.

The radial velocities are hard to compare for each result. Unfortunately, the relatively low wind speed resolution of the radar data, which has measurement levels of magnitude 0, 10, 20, 26, 36, 50, and 64 knots, coupled with the low spatial resolution of each pixel being separated by more than a half kilometer, makes validating this technique using NEXRAD radar data difficult. Interpolating the peak of the correlation to give a sub-sample estimate of deflection has the potential to impact the radial velocity estimate by as much as 3 m/s. This, coupled with the lack of resolution, makes each measurement plausibly within range of the radar's estimate. On a more positive note, the maximum radial velocity measured by the NEXRAD radar matches the absolute wind speeds estimated by this technique. This shows that the correlation technique can produce accurate wind speed estimates even if the heading estimates are in question.

Fig. 4.58 Radar Cloud 1

Fig. 4.59 Radar Cloud 1 Isolated 62

Fig. 4.60 Radar Cloud 1 Search Space 63

Fig. 4.61 Radar Cloud 1 Correlation 64

Fig. 4.62 Radar Cloud 1 Motion 65

Fig. 4.63 Radar Cloud 2

Fig. 4.64 Radar Cloud 2 Isolated 66

Fig. 4.65 Radar Cloud 2 Search Space 67

Fig. 4.66 Radar Cloud 2 Correlation 68

Fig. 4.67 Radar Cloud 2 Motion 69

Fig. 4.68 Radar Cloud 3

Fig. 4.69 Radar Cloud 3 Isolated 70

Fig. 4.70 Radar Cloud 3 Search Space 71

Fig. 4.71 Radar Cloud 3 Correlation 72

Fig. 4.72 Radar Cloud 3 Motion 73

Fig. 4.73 Screenshot of NOAA WCT Tool Depicting the NEXRAD Measurement of the Radial Velocity for Cloud 1

Fig. 4.74 Screenshot of NOAA WCT Tool Depicting the NEXRAD Measurement of the Radial Velocity for Cloud 2

Fig. 4.75 Screenshot of NOAA WCT Tool Depicting the NEXRAD Measurement of the Radial Velocity for Cloud 3

4.2.2 Autocorrelation

The autocorrelation data for the NEXRAD processing is presented in this section.

As with the LIDAR autocorrelations in Section 4.1.4, comparing the NEXRAD autocorrelations with the processing correlations gives an idea of the effect of the presence of neighboring clouds in the search space. The features included here match those in Section 4.2.1.

Fig. 4.76 Cloud 1 Frame 1 76

Fig. 4.77 Segmented Autocorrelation Cloud 1 77

Fig. 4.78 Raw Autocorrelation Cloud 1

Fig. 4.79 Cloud 2 Frame 1 78

Fig. 4.80 Segmented Autocorrelation Cloud 2

Fig. 4.81 Raw Autocorrelation Cloud 2 79

Fig. 4.82 Cloud 3 Frame 1

Fig. 4.83 Segmented Autocorrelation Cloud 3 80

Fig. 4.84 Raw Autocorrelation Cloud 3 81

4.2.3 Comparison

This section contains the same analysis as was done for the LIDAR data in Section 4.1.5, performed on thresholded and non-thresholded correlations of NEXRAD features. The number of radar peaks considered was 128. The results, summarized in Table 4.2, match those of the LIDAR data in that there is no material difference in correlation shape between thresholded and non-thresholded data.

Thresholded Correlations
Horizontal Results
                      Mean    Standard Deviation
Mean                 -0.07    2.32
Standard Deviation    8.15    6.21
Skewness             -0.02    0.70
Kurtosis              0.42    3.28
Vertical Results
                      Mean    Standard Deviation
Mean                  0.42    4.07
Standard Deviation   15.39   15.59
Skewness              0.03    0.28
Kurtosis             -0.38    0.63

Non-Thresholded Correlations
Horizontal Results
                      Mean    Standard Deviation
Mean                 -0.08    2.28
Standard Deviation    8.25    6.30
Skewness             -0.01    0.71
Kurtosis              0.43    3.17
Vertical Results
                      Mean    Standard Deviation
Mean                  0.51    3.79
Standard Deviation   15.68   15.89
Skewness              0.02    0.24
Kurtosis             -0.42    0.56

Table 4.2 Peak Shape Moments for Correlations of NEXRAD Features (N=128)

Chapter 5

Conclusions

This thesis presents a process for isolating features in the return of a scanning incoherent LIDAR using image processing techniques. An application of the technique to data from a NEXRAD reflectivity scan is also explored. Section 5.1 summarizes the findings presented above. Section 5.2 notes potential avenues of research that could extend these results beyond the scope presented here.

5.1 Summary

In this work the processing described in [8] is reproduced to establish a starting point. This includes reading in the data, filtering each radial measurement, projecting the data from polar to Cartesian coordinates, and subtracting the mean image to remove stationary features. Next an image processing concept, binary segmentation using border following, is applied to each frame to isolate features of interest. Finally, the region containing the feature is correlated with the subsequent frame to estimate the motion of the feature, and thus achieve an estimate of the wind, from frame to frame.

As thresholding the image is necessary to perform the segmentation, the results of correlating the thresholded data with the subsequent frame were compared to those of correlating the same region without the thresholding applied. I found that the shapes of the peaks, based on probability distribution shape metrics, were substantially equivalent in each case. One extension to the results presented in [8] that is included here is the consideration of the correlation peak's width at half the peak's maximum value as a meaningful estimator of a feature's deformation between frames.

The new processing results are comparable to those of a straight correlation technique. The segmentation of the image provides useful wind information in the regions where features are found. The extents of the correlation peaks appear to have a physical basis.

5.2 Future Work

After performing this research, many ideas for other avenues of investigation regarding the processing of LIDAR scans using image processing techniques seem interesting. Once a frame has been segmented, there are many feature tracking approaches other than cross-correlation available. Utilizing optical flow algorithms from the computer vision realm, such as image registration methods or the phase correlation technique, may yield superior results for these easily deforming features.

Some improvements to the methods included here are:

• Varying minimum feature size

• Changing pixel thresholds prior to segmentation

• Determining the feature search space in the second frame

• Optimizing parameter selection based on performance

• Considering spatial relationships by performing a joint correlation on neighboring features

• Analyzing the shape of the correlation peaks to estimate deformation

• Filtering features to reduce the impact of deformation on the correlation

Comparing the results of the NEXRAD data sets with radial velocity estimates nearer to the resolution limit of the instrument is another potentially useful investigation. There are pre-processing techniques that would enhance the fidelity of the images produced from the LIDAR data. The scanning nature of the REAL instrument requires tens of seconds to produce a full image frame. This means not all parts of the image are time-coincident. A temporal interpolation, perhaps based on an initial correlation, could be used to create an image that represents a single time. Comparing the output of various image segmentation methods may also be instructive. There are many algorithms of higher sophistication that do not require a binary image, thus eliminating the thresholding step.

Finally, the most interesting potential is to perform processing or forecasting based on the types of features in the frame. Classifying features based on past observations could make it possible to automatically detect events or conditions of interest. Building a database of interesting features or interesting motions of features would be one way to implement such a system.

Appendix A

REAL LIDAR Data Extraction

The REAL system's data was graciously provided by Dr. Shane Mayor. The format of the data is a binary representation of data records as described in Table A.1. The majority of the fields in the record are of fixed length, but the length of the received energy array of each channel is specified by a field in the header. The Data Size entry in the record specifies how many floats are present in the Channel 1 and Channel 2 arrays. The data is already range corrected in this format. The information from the two channels is used for phase decorrelation measurements, which are useful when the aerosols are non-spherical. This feature is not utilized in the processing presented here.

Field Number    Field Description
1               Hour
2               Minute
3               Second
4               Empty
5               Empty
6               Empty
7               Azimuth
8               Elevation
9               Empty
10              Empty
11              Empty
12              Month
13              Day
14              Year
15              Empty
16              Empty
17              Empty
18              Empty
19              Latitude
20              Longitude
21              Range
22              Range Resolution
23              Empty
24              Empty
25              Empty
26              Empty
27              Channel 1
28              Channel 2
29              Data Size
30              Scan Number

Table A.1 Record Specification for REAL's Data

Appendix B

Code Description

The code produced for this thesis includes four Python source files:

• analyze_log.py

• avg_filt_make_movie.py

• read_translate.py

• segment_image.py

B.1 read_translate.py

The read_translate.py file requires the LIDAR data file as a command line argument. It then opens the file for reading, picks x- and y-axis resolutions of 2 meters, and reads the data format described in Appendix A using the Python struct package.[21] It applies the median high-pass filter described in Section 2.1 to each line as it reads it in, and once a complete scan has been read it performs the polar-to-Cartesian coordinate transformation as in Section 2.2, which is then written out in Portable Network Graphics (PNG) format using matplotlib[22] and as a binary data file in the numpy format.[23] As it processes each frame it accumulates the data, and after every frame has been read an average image is produced. Finally it writes the average image in PNG format and as a binary data file.

B.2 avg_filt_make_movie.py

The avg_filt_make_movie.py file loads the average image data and then reads the binary data for each frame as written by read_translate.py. After each frame is read, the average image is subtracted and the newly filtered frames are stored as PNG images and binary data on disk. After all frames have been produced in this fashion, a system call to mencoder is executed to produce an Audio Video Interleave (AVI) movie.[24]

B.3 segment_image.py

The segmentation and correlation of frames is implemented in segment_image.py. The file takes as command line arguments the binary data file names of the frame to be segmented and of the frame against which the segments will be correlated, as well as a number which is included in the filenames of all intermediate outputs. The first processing completed is computing a histogram of the input file. This information can be used to compute a threshold automatically, but currently it was just used once for finding a fixed threshold. Next the data is thresholded. For LIDAR data a fixed threshold is applied, while for NEXRAD data a threshold is computed as in Equation B.1:

threshold = intensity + 0.2 ∗ (intensity_max − intensity)    (B.1)

The data is then binned for conversion to 8-bit integers as required by the OpenCV implementation of the segmentation algorithm, FindContours.[18] Next, small features are removed from consideration by only selecting results from the segmentation routine which include more than 10 or 20 points in the boundaries for LIDAR and NEXRAD data respectively.

The data for the second frame is then read into memory. For each feature selected the region of the feature is found by looking over all of the indices included in the boundary and selecting the minimum and maximum extents in both the vertical and horizontal directions. The subset of the original frame enclosed by these extents is then correlated with a subset of the second frame. The second frame’s subset is defined by the same extents as the feature but padded by an offset corresponding to the maximum displacement expected. In this implementation the offset was 20 pixels which corresponds to winds of 40m/s. Next, intermediate results are saved in PNG format including the feature and search spaces highlighted in the full frame of both data sets, as well as the subset of the data actually used for the correlations.

Using the results of the correlation, the feature's motion from the first to the second frame is then computed, using the half-maximum along each axis of the correlation peak to estimate the feature's deformation. The figures of merit for each peak, including the mean, standard deviation, skewness, and kurtosis, are also computed in the vertical and horizontal directions and displayed. The results of the motion estimate are saved as images and are displayed as (xmin, ymin), (xmin, ymax), (xmax, ymin), and (xmax, ymax) from the corners of the box that encloses the feature. The correlation surface generated by the feature is also saved as an image.

For the sake of comparison, the whole process for each frame is performed on the same data again, with the only difference being that the feature's subset is not thresholded. All of the same computations are performed and all of the same intermediate results are produced.

B.4 analyze_log.py

The last source file produced for this thesis is analyze_log.py. If the messages displayed during the running of segment_image.py are stored in a text file, the peak figures of merit can be accumulated and processed to find the mean and standard deviation of each. Currently the processing expects blocks of 4 results: those in the horizontal and vertical directions for both the thresholded and non-thresholded correlation peaks.

References

[1] U. Wandinger, Introduction to Lidar, ch. 1. Springer Science+Business Media Inc., 2005.

[2] A. Ansmann and D. Müller, Lidar and Atmospheric Aerosol Particles, ch. 4. Springer Science+Business Media Inc., 2005.

[3] A. Doshi and A. Bors, "Robust processing of optical flow of fluids," Image Processing, IEEE Transactions on, vol. 19, pp. 2332-2344, Sept. 2010.

[4] N. Ho, F. Emond, F. Babin, D. Healy, J.-R. Simard, S. Buteau, and J. E. McFee, "Short-range lidar for bioagent detection and classification," vol. 7665, p. 76650A, SPIE, 2010.

[5] R. Frehlich and N. Kelley, "Measurements of wind and turbulence profiles with scanning doppler lidar for wind energy applications," Selected Topics in Applied Earth Observations and Remote Sensing, IEEE Journal of, vol. 1, pp. 42-47, March 2008.

[6] M. P. McCormick, Airborne and Spaceborne Lidar, ch. 13. Springer Science+Business Media Inc., 2005.

[7] C. Werner, Doppler Wind Lidar, ch. 12. Springer Science+Business Media Inc., 2005.

[8] S. D. Mayor and E. W. Eloranta, "Two-dimensional vector wind fields from volume imaging lidar data," Journal of Applied Meteorology, vol. 40, pp. 1331-1346, 2001.

[9] S. D. Mayor, "Raman-shifted eye-safe aerosol lidar (REAL) in 2010: Instrument status and two-component wind measurements," in Proceedings of the 16th International School for Quantum Electronics, vol. 7747, 2010.

[10] S. Mayor, " research group." [Online]. Available: http://physics.csuchico.edu/lidar.

[11] L. Martin, "NEXRAD WSR-88D." [Online]. Available: http://www.qsl.net/n9zia/pdf/wsr-88d.pdf.

[12] G. E. Klazura and D. A. Imy, Bull. Amer. Meteor. Soc., vol. 74, pp. 1293-1311, 1993.

[13] National Climatic Data Center, "NCDC data inventory search." [Online]. Available: http://www.ncdc.noaa.gov/nexradinv/chooseday.jsp?id=KCCX.

[14] A. W. Moore, Jr. and J. W. Jorgenson, "Median filtering for removal of low-frequency background drift," Anal. Chem., vol. 65, pp. 188-191, 1993.

[15] K. Gribbon and D. Bailey, "A novel approach to real-time bilinear interpolation," in Electronic Design, Test and Applications, 2004. DELTA 2004. Second IEEE International Workshop on, pp. 126-131, Jan. 2004.

[16] I. Dragan, B. Iavors'kyi, and L. Chorna, "Bilinear interpolation from polar to rectangular point raster for inverse problem solving," in Mathematical Methods in Electromagnetic Theory, 1996, 6th International Conference on, pp. 429-431, Sept. 1996.

[17] S. Suzuki and K. Abe, "Topological structural analysis of digitized binary images by border following," Computer Vision, Graphics, and Image Processing, vol. 30, pp. 32-46, 1985.

[18] "OpenCV." [Online]. Available: http://opencv.willowgarage.com/wiki/.

[19] D. H. Lenschow, M. Lothon, S. D. Mayor, P. P. Sullivan, and G. Canut, "A comparison of higher-order vertical velocity moments in the convective boundary layer from lidar with in situ measurements and large-eddy simulation," Boundary-Layer Meteorology, vol. 143, no. 1, pp. 107-123, 2012.

[20] NOAA, "NOAA's Weather and Climate Toolkit." [Online]. Available: http://www.ncdc.noaa.gov/oa/wct/.

[21] Python Software Foundation, "Python programming language official website." [Online]. Available: http://www.python.org.

[22] J. Hunter, D. Dale, and M. Droettboom, "matplotlib." [Online]. Available: http://matplotlib.sourceforge.net/.

[23] "NumPy." [Online]. Available: http://numpy.scipy.org/.

[24] "Chapter 6. Basic usage of MEncoder." [Online]. Available: http://www.mplayerhq.hu/DOCS/HTML/en/mencoder.html.