Integrated analysis of Light Detection and Ranging (LiDAR) and Hyperspectral Imagery (HSI) data

Angela M. Kim, Fred A. Kruse, and Richard C. Olsen
Naval Postgraduate School, Remote Sensing Center and Physics Department, 833 Dyer Road, Monterey, CA, USA

ABSTRACT

LiDAR and hyperspectral data provide rich and complementary information about the content of a scene. In this work, we examine methods of data fusion, with the goal of minimizing information loss due to point-cloud rasterization and spatial-spectral resampling. Two approaches are investigated and compared: 1) a point-cloud approach in which spectral indices such as the Normalized Difference Vegetation Index (NDVI) and principal components of the hyperspectral image are calculated and appended as attributes to each LiDAR point falling within the spatial extent of the pixel, and a supervised machine learning approach is used to classify the resulting fused point cloud; and 2) a raster-based approach in which LiDAR raster products (DEMs, DSMs, slope, height, aspect, etc.) are created and appended to the hyperspectral image cube, and traditional spectral classification techniques are then used to classify the fused image cube. The methods are compared in terms of classification accuracy. LiDAR data and associated orthophotos of the NPS campus collected during 2012–2014 and hyperspectral Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data collected during 2011 are used for this work.

Keywords: LiDAR, Hyperspectral, HSI, Raster Fusion, Point Cloud Fusion

1. INTRODUCTION

Identification and mapping of materials for applications such as target detection, trafficability, land use, environmental monitoring, and many others requires accurate characterization of both surface composition and morphology. Typical remote sensing approaches to these problems, however, usually concentrate on one specific data type (modality) and thus do not provide a complete description of the materials of interest.
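As a concrete illustration of the point-cloud fusion summarized in the abstract, the sketch below computes NDVI from red and near-infrared bands and assigns each LiDAR point the value of the HSI pixel it falls within (the one-to-many mapping). The band values, pixel origin, and point coordinates are toy assumptions for illustration, not the actual dataset or the authors' pipeline.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index; small epsilon guards against /0."""
    return (nir - red) / (nir + red + 1e-9)

def attribute_points(points_xy, ndvi_img, origin_xy=(0.0, 0.0), pixel_size=2.4):
    """Assign each (x, y) LiDAR point the NDVI of its enclosing HSI pixel.

    Assumes y grows with row index (a simplification; real rasters often flip y).
    """
    cols = ((points_xy[:, 0] - origin_xy[0]) // pixel_size).astype(int)
    rows = ((points_xy[:, 1] - origin_xy[1]) // pixel_size).astype(int)
    return ndvi_img[rows, cols]

# Toy 2x2 NDVI raster at 2.4 m pixels and three LiDAR points.
img = ndvi(np.array([[0.1, 0.2], [0.1, 0.3]]),
           np.array([[0.5, 0.2], [0.4, 0.3]]))
pts = np.array([[1.0, 1.0], [3.0, 1.0], [3.0, 3.0]])
fused = np.column_stack([pts, attribute_points(pts, img)])  # x, y, ndvi per point
print(fused.shape)  # (3, 3)
```

In the full pipeline these attribute vectors (together with local neighborhood statistics) would feed the supervised classifier; only the attribution step is sketched here.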
The research described here combines hyperspectral imaging (HSI) and Light Detection and Ranging (LiDAR) in an attempt to enhance remote sensing mapping capabilities. Imaging spectrometers or "Hyperspectral Imaging (HSI)" sensors are passive systems that rely on reflected sunlight to simultaneously collect spectral data in up to hundreds of image bands, with a complete spectrum at each pixel.1 This allows identification of materials based on their specific spectral signatures and production of maps showing the surface distribution of these materials. The solar spectral range, 0.4 to 2.5 micrometers, provides abundant information about many important Earth-surface materials such as minerals, man-made objects, and vegetation.2,3 HSI data provide useful spectral information for detecting targets and/or mapping surface ground cover; however, this is based solely on the surface spectral signature and does not incorporate surface morphology.

LiDAR is an active remote sensing method in which pulses of laser energy are emitted and the time-of-flight is recorded for echoes returning to the sensor. The time-of-flight information, along with the precise location and pointing direction of the sensor in space, is used to determine the exact location from which laser pulses have been reflected. The transmission of many laser pulses enables building up a point cloud dataset representing the 3D arrangement of objects in the scene. Most current LiDAR systems operate at a single wavelength, and the intensity of the reflected pulses is recorded for each of the returned points. This intensity value is typically not calibrated, however, and so provides only a relative measure of reflectivity.

Further author information: A.M.K.: E-mail: [email protected], Telephone: 1 401 647 3536

Laser Radar Technology and Applications XXI, edited by Monte D. Turner, Gary W. Kamerman, Proc. of SPIE Vol. 9832, 98320W · © 2016 SPIE · CCC code: 0277-786X/16/$18 · doi: 10.1117/12.2223041
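The time-of-flight ranging described above reduces to a one-line calculation; as a hedged illustration, the sketch below converts an assumed 3 µs round-trip echo delay to a range (the delay value is hypothetical, chosen to be on the order of the flying height used in this study).

```python
# Sketch of the time-of-flight-to-range conversion described in the text.
C = 299_792_458.0  # speed of light, m/s

def tof_to_range(tof_s):
    """One-way range: half the round-trip distance traveled at light speed."""
    return C * tof_s / 2.0

# An echo returning 3 microseconds after emission corresponds to ~450 m.
print(round(tof_to_range(3e-6), 1))  # 449.7
```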
The strengths of hyperspectral imaging and LiDAR are complementary, and successful fusion of the two data types has significant potential, but multiple challenges to fusing HSI and LiDAR exist. HSI sensors produce 2D raster-based images that have relatively low spatial resolution compared to the LiDAR data. LiDAR sensors produce 3D, irregularly gridded point clouds. There is no direct one-to-one correspondence between the two datasets. A successful fusion approach will maintain the spectral detail of the HSI data as well as the spatially detailed 3D information of the LiDAR point cloud. In the work presented in this paper, two fusion approaches are examined.

In the first approach, data are fused into an integrated 3D point cloud and processed in the 3D environment. To incorporate the 2D HSI information into the 3D LiDAR point cloud, relevant information from the HSI data is extracted and attributed to each LiDAR point as a vector. This is a one-to-many relationship, in which one HSI raster attribute is mapped to many irregularly spaced LiDAR points. Additional attributes derived from statistics of local neighborhoods of LiDAR points are used to quantify their spatial arrangement. A supervised machine learning decision tree classifier is then used to classify the point cloud attribute vectors. Working with irregularly gridded point cloud data is more challenging than traditional raster-based approaches, but this approach minimizes information loss due to rasterization of the point cloud data.

The second fusion approach rasterizes relevant information from the LiDAR point cloud data (including height, intensity, and the statistical measures of local neighborhoods of points) and fuses these with the spectral data by combining them into a raster datacube.
This is a many-to-one relationship, in which many LiDAR attributes are averaged and mapped to one raster pixel corresponding to the spatial resolution of the HSI data. The fused raster datacube is classified using standard supervised classification approaches. The raster approach is more computationally feasible than the point cloud approach.

Individual LiDAR and HSI analyses along with fusion results are presented in Section 5. An evaluation of the success of the fusion processes is accomplished by examining the classification accuracy of the fused data products. A discussion of the relative success of working within the point-cloud domain as compared to the raster domain is given in Section 6.

2. BACKGROUND

Multiple studies have demonstrated the utility of combining hyperspectral imaging and LiDAR data, particularly in a raster-based fashion. An overview of previous work, as well as results of an IEEE GRSS data fusion competition for classification of an urban scene, is given in Debes et al., 2013.4 Previous work fusing LiDAR and spectral information in the 3D domain is not as common, but a series of papers by researchers at the Finnish Geodetic Institute introduce prototype terrestrial laser scanning systems for collecting hyperspectral LiDAR point clouds. In these prototype instruments, a supercontinuum "white laser" source is combined with a hyperspectral time-of-flight sensor to actively measure reflectance over wavelengths ranging from 480–2200 nm. Results show improved classification of various tree species using the fused data product over either the HSI or the LiDAR data individually.
The authors also demonstrate extraction of spectral indices in 3D geometries.5–7 Buckley 2013 concurrently collected terrestrial laser scanner data (Riegl LMS-Z420i) and terrestrial hyperspectral camera (HySpex SWIR-320m) data with the goal of integrated geological outcrop mapping.8 Products from the hyperspectral camera data are projected into 3D space for improved visual analysis.

3. STUDY AREA AND DATA

The area chosen for this study covers a small portion of the city of Monterey, California, which includes the campus of the Naval Postgraduate School (NPS). This area was selected because of the diversity of materials and morphologies at the site and the availability of both hyperspectral and LiDAR data. Figure 1 shows the study area and a small subarea selected for ground validation.

Figure 1: (Left) AVIRIS true color composite image of the study area. The red box outlines the small subarea shown in greater spatial detail at right. (Right) An aerial photograph with 0.15 m spatial resolution showing the small subarea used for ground validation and accuracy assessment.

3.1 Light Detection and Ranging (LiDAR) data

LiDAR data for this study were collected in October and November of 2012 with an Optech Orion C-200 system flown onboard a Bell 206L helicopter from approximately 450 m above ground level, with a 1541 nm laser having a spot size of approximately 50 cm on the ground. The point density averages approximately 60 points/m2 across the study area. A scan angle field of view of 30 degrees, as well as multiple overlapping flightlines, gives a point cloud dataset with excellent canopy penetration and some coverage of the sides of vertical objects.
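As a quick sanity check on the collection geometry just described, a 30-degree full field of view flown at roughly 450 m above ground level sweeps a swath on the order of 240 m. This is a back-of-the-envelope figure derived here for illustration, not one reported by the authors.

```python
import math

def swath_width(altitude_m, fov_deg):
    """Ground swath swept by a scanner with a full field of view of fov_deg."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg / 2.0))

print(round(swath_width(450.0, 30.0), 1))  # ~241.2 m
```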
3.2 Hyperspectral Imagery (HSI) data

HSI data for this study area were collected at 2.4 m spatial resolution using NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), an airborne hyperspectral sensor with 224 bands at 10 nm spectral resolution over the range 0.4–2.5 micrometers.9 The AVIRIS data used for this investigation were acquired 30 September 2011. A total of 80 visible and near infrared (VNIR) bands from 0.4–1.2 micrometers and 51 shortwave infrared (SWIR) bands from 2.0–2.5 micrometers were used in these analyses. The SWIR data between 1.2–2.0 micrometers were not used.

4. APPROACH AND METHODS

Two approaches are presented in this study. The point cloud processing approach is discussed first, followed by a description of the raster-based processing approach.
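To make the raster-based approach concrete before its detailed description, the sketch below subsets a band list to the VNIR and SWIR ranges quoted above, averages LiDAR point heights into HSI-resolution pixels (the many-to-one step), and stacks the result onto the image cube. The evenly spaced band centers and toy points are assumptions; real AVIRIS band centers differ slightly, so the retained-band count will not exactly equal the 131 bands used in the paper.

```python
import numpy as np

# --- Band subsetting: keep 0.4–1.2 and 2.0–2.5 micrometers (drop 1.2–2.0) ---
centers_um = np.linspace(0.4, 2.5, 224)          # assumed evenly spaced centers
keep = (centers_um <= 1.2) | (centers_um >= 2.0)
n_bands = int(keep.sum())

# --- Many-to-one rasterization: mean LiDAR height per HSI-resolution pixel ---
def rasterize_mean(points_xy, values, shape, pixel_size=2.4):
    rows = (points_xy[:, 1] // pixel_size).astype(int)
    cols = (points_xy[:, 0] // pixel_size).astype(int)
    flat = rows * shape[1] + cols
    sums = np.bincount(flat, weights=values, minlength=shape[0] * shape[1])
    counts = np.bincount(flat, minlength=shape[0] * shape[1])
    with np.errstate(divide="ignore", invalid="ignore"):
        mean = sums / counts                     # NaN where no points fell
    return mean.reshape(shape)

hsi = np.zeros((2, 2, n_bands))                  # stand-in HSI cube
pts = np.array([[1.0, 1.0], [2.0, 1.0], [4.0, 4.0]])
hts = np.array([10.0, 20.0, 5.0])
height = rasterize_mean(pts, hts, (2, 2))
cube = np.dstack([hsi, height])                  # fused raster datacube
print(cube.shape[2] == n_bands + 1)              # height appended as one band
```

In practice intensity and the local neighborhood statistics would each be rasterized the same way and appended as further bands before classification.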