
Article

Quantifying the Spatial Variability of Annual and Seasonal Changes in Riverscape Vegetation Using Drone Laser Scanning

Jonathan P. Resop 1,*, Laura Lehmann 2 and W. Cully Hession 2

1 Department of Geographical Sciences, University of Maryland, College Park, MD 20740, USA
2 Department of Biological Systems Engineering, Virginia Tech, Blacksburg, VA 24060, USA; [email protected] (L.L.); [email protected] (W.C.H.)
* Correspondence: [email protected]

Abstract: Riverscapes are complex ecosystems consisting of dynamic processes influenced by spatially heterogeneous physical features. A critical component of riverscapes is vegetation in the stream and floodplain, which influences flooding and provides habitat. Riverscape vegetation can be highly variable in size and structure, including wetland plants, grasses, shrubs, and trees. This vegetation variability is difficult to precisely measure over large extents with traditional surveying tools. Drone laser scanning (DLS), or UAV-based lidar, has shown potential for measuring topography and vegetation over large extents at a high resolution but has yet to be used to quantify both the temporal and spatial variability of riverscape vegetation. Scans were performed on a reach of Stroubles Creek in Blacksburg, VA, USA six times between 2017 and 2019. Change was calculated both annually and seasonally over the two-year period. Metrics were derived from the lidar scans to represent different aspects of riverscape vegetation: height, roughness, and density. Vegetation was classified as scrub or tree based on the height above ground and 604 trees were manually identified in the riverscape, which grew on average by 0.74 m annually. Trees had greater annual growth and scrub had greater seasonal variability. Height and roughness were better measures of annual growth and density was a better measure of seasonal variability. The results demonstrate the advantage of repeat surveys with high-resolution DLS for detecting seasonal variability in the riverscape environment, including the growth and decay of floodplain vegetation, which is critical information for various hydraulic and ecological applications.

Citation: Resop, J.P.; Lehmann, L.; Hession, W.C. Quantifying the Spatial Variability of Annual and Seasonal Changes in Riverscape Vegetation Using Drone Laser Scanning. Drones 2021, 5, 91. https://doi.org/10.3390/drones5030091

Keywords: UAVs; lidar; streams; canopy height; roughness; vegetation density; change detection

Academic Editors: Diego González-Aguilera and Pablo Rodríguez-Gonzálvez

Received: 27 July 2021; Accepted: 5 September 2021; Published: 7 September 2021

1. Introduction

A riverscape is a spatially heterogeneous landscape representing a complex ecosystem that comprises the stream beds, banks, channels, riparian zones, floodplains, and basins spanning the many interconnected reaches of a system [1–3]. Fausch et al. [1] proposed that a “continuous view” of this riverscape environment is necessary at multiple spatial and temporal scales to properly study ecological processes. This “continuous view” is possible through state-of-the-art remote-sensing technologies, such as digital imagery and laser scanning, which can measure the physical properties of the riverscape at both fine and coarse scales [2]. Carbonneau et al. [2] combined a 0.03 m aerial image and a 5 m digital elevation model (DEM) over a 16 km stream reach to extract physical riverscape measures over multiple scales, such as channel width and depth, particle size, and slope, which were used to estimate the spatial distribution of salmon habitat patches. Dietrich et al. [3] used helicopter-based imagery and structure-from-motion (SfM) to derive a 0.1 m DEM for a 32 km stream reach, calculated metrics such as channel width, slope, and sinuosity along 3 m cross-sections, and explored their relationships with various land class and geomorphic variables. These studies demonstrated the need for multi-scale analyses for ecological studies involving riverscape environments and the benefit of applying remotely-sensed data over not just a large extent but also at a high resolution.


Remote sensing technologies, such as imagery and lidar, rely on a range of platforms, including satellite, aerial, terrestrial, mobile, and, more recently, drone systems, for measuring physical riverscape properties. Farid et al. [4] classified vegetation using 0.5 m resolution canopy elevation models and intensity rasters derived from aerial laser scanning (ALS) data. Heritage and Hetherington [5] scanned a 150 m reach with terrestrial laser scanning (TLS) to create a high-resolution (0.01 m) DEM. Resop et al. [6] applied TLS to a 100 m reach to classify cobble and boulders and calculate percent in-stream rock cover at 0.02 m resolution. Woodget et al. [7] used drone-based imagery and SfM to survey a 120 m reach and classify riverscape features, such as cobble, boulders, gravel, grass, and trees. Yang et al. [8] compared 0.25 m drone-based and 10 m satellite-based multispectral imagery to classify vegetation in a coastal area and found the higher-resolution drone data resulted in improved delineation of features, such as mangroves.

Drone laser scanning (DLS), or unmanned aerial vehicle (UAV)-based lidar, is particularly well suited for measuring fine details in riverscapes due to its resolution and survey extent. Resop et al. [9] found that DLS (at 0.1 m resolution) was more accurate than ALS for measuring physical features in the riverscape and detecting micro-changes in the environment, such as streambank profiles and small herbaceous vegetation. While these studies demonstrated the ability of remote sensing for surveying and classifying riverscapes at multiple scales, they all took place over a single survey representing a single moment in time. More studies are needed that take advantage of repeat surveys to better measure the temporal variability of riverscape features.

A number of studies have used lidar to detect physical changes in riverscapes. Huang et al.
[10] combined 1 m ALS-derived inundation maps from 2007 and 2009 with 30 m Landsat images to monitor inundated area change. Anders et al. [11] estimated geomorphological changes based on 2 m ALS-derived digital terrain models (DTMs) from 2003 and 2011. While these studies demonstrate the benefits of lidar for annual change detection, many riverscape processes occur seasonally (in particular, those involving vegetation), which requires multiple surveys per year to properly monitor. The lack of widespread and continuous ALS data is a significant limitation for lidar-based change detection studies [12]. More often than not, locations only have a single ALS dataset at best, which means other remotely-sensed data, such as SfM or coarser-resolution satellite imagery, are required for change detection. In addition, ALS from high-altitude aircraft has limited resolution, typically around 0.5 or 1 m, which is not fine enough to detect small changes in vegetation [9]. A more flexible lidar platform than ALS, such as terrestrial, mobile, or drone, is required for more frequent surveys at higher resolution.

Lidar platforms such as TLS and mobile laser scanning (MLS) have been shown to be successful for measuring change over repeat surveys and have been used to measure streambank retreat, bluff erosion, and riparian vegetation change [13–18]; however, these studies typically scan a limited extent, such as a single streambank. For example, Resop and Hession [13] scanned an 11 m streambank six times over two years with TLS (at 0.02 m resolution) and observed seasonal variations in streambank retreat rates, demonstrating the advantage of repeat surveys. More research is needed to study the potential of DLS for detecting vegetation change along entire riverscapes at high resolution.

There are many metrics available to quantify riverscape vegetation, such as height, roughness, and density. These metrics have a history of being derived using lidar data, most commonly with ALS, for a range of ecological applications. Vegetation height, represented by a canopy height model (CHM), has been used to estimate tree height, above-ground biomass, bulk density, and canopy fuel weight over large extents [19,20]. Vegetation roughness, a measure of the smoothness or complexity of the vegetation surface [21], is related to a commonly used parameter in flood modeling, the hydraulic roughness, which is estimated over the channel and floodplain based on a combination of topographic and vegetative complexity [22]. Vegetation density, represented by metrics such as the laser penetration index (LPI), has been used to estimate percent canopy cover or leaf area index (LAI) [20,23].

These vegetation metrics could be quantified at high resolution using DLS, and vegetation change could be measured seasonally with repeat surveys.

Many approaches have been used to estimate canopy height: (1) vector-based normalization of point clouds, (2) raster-based differences between digital surface models (DSMs) representing surface features and DTMs representing ground, and (3) full waveform lidar approaches. Vector-based methods normalize point cloud elevations to derive height above ground and are commonly used for tree detection [24–26]. Raster-based methods classify lidar points as vegetation or ground returns and rasterize them into DSMs representing maximum vegetation elevation and DTMs representing ground elevation, with the difference resulting in a normalized digital surface model (nDSM) or CHM [9,20,27]. If full waveform lidar data are available, which is less common and generally has larger footprints, then canopy height can be estimated through decomposition of the waveform peaks [28,29]. Raster-based methods are most often used due to the flexibility of fixed-grid data formats for a variety of applications and because they are most practical for DLS due to the high resolution and discrete format of the point clouds.

Two approaches have been used to estimate vegetative roughness: (1) calculating roughness based on the variability of lidar point elevation within a moving window and (2) classifying lidar-derived rasters based on observed or calibrated roughness values. Mundt et al. [21] estimated roughness as a way to classify sagebrush distribution using the standard deviation of normalized ALS point heights within 4.6 m raster pixels. Dorn et al. [27] applied a supervised classification to ALS-derived CHMs to classify vegetation (i.e., grass, shrub, forest) corresponding to Manning’s roughness (n) values from Chow [30]. Both methods have been effective at estimating roughness but have only recently been explored with DLS. Prior et al.
[31] estimated Manning’s n from both DLS- and SfM-derived CHMs through empirical methods as well as by classifying vegetation (i.e., grass, scrub, small trees, and large trees) and calibrating roughness with observed stream velocity data and 2D HEC-RAS.

Multiple metrics have been used to measure vegetation density, including canopy cover, crown closure, gap fraction, and LAI. Lidar metrics estimating density take advantage of the physics of laser pulses passing through gaps in the canopy and include both vector-based and raster-based approaches. Vector-based methods classify lidar points as vegetation or ground and then calculate the LPI, defined as the ratio of ground points to total points in a given area [23,32]. Alternatively, raster-based methods derive a CHM at high resolution to define locations of canopy cover and then calculate the percent of canopy pixels within a coarser-resolution grid or sample area [20,26]. Both approaches have a similar conceptual background and result in a raster representing the percent vegetation density or cover but have not yet been applied to DLS data.

The objectives of this study were: (1) to scan a reach of Stroubles Creek six times over two years (between April 2017 and March 2019) at high resolution with DLS; (2) to produce three lidar-based metrics of riverscape vegetation (i.e., height, roughness, and density); (3) to classify distinct vegetation classes in the riverscape (i.e., scrub and tree) and locate them with respect to the stream channel; and (4) to calculate and compare annual and seasonal changes in vegetation metrics spatially over the riverscape.

2. Materials and Methods

2.1. Study Area

The study area was a 0.65 km reach of Stroubles Creek located in Blacksburg, VA, USA (Figure 1). This reach has a history of agricultural use and underwent a stream restoration that was completed in May 2010 [33]. The restoration consisted of best management practices such as livestock exclusion, reshaping, and riparian vegetation planting [33]. The Stream Research, Education, and Management (StREAM) Lab at Virginia Tech continuously monitors water quality, stream flow, and weather at sampling stations located along the reach as part of ongoing research efforts [34]. Resop and Hession [13] monitored streambank retreat on an 11 m bank of this reach using TLS between 2007 and 2009 and found that TLS was more accurate than traditional surveying methods for measuring topographic change; however, the stationary nature of TLS limited the study to a single streambank and was not ideal for larger extents. Resop et al. [9] scanned multiple streambanks and the floodplain using DLS in 2017, demonstrating the potential of DLS to perform change detection over the entire riverscape. Prior et al. [31] used DLS and SfM data for this reach from 2018 to estimate hydraulic roughness based on vegetation height and velocity data.

Figure 1. The study area, a 0.65 km reach of Stroubles Creek, showing the approximate extent of the drone laser scanning (DLS) surveys. The actual extent varied slightly between scans.

2.2. Lidar Data Collection

The drone used for this study was an AeroVironment Vapor35 (Arlington, VA, USA). The lidar system on board was a YellowScan Surveyor Core (Saint-Clément-de-Rivière, France). The drone weighed approximately 13.6 kg with a payload (i.e., the lidar system) of about 2.3 kg. The battery provides a practical maximum flight time of 40 min at an altitude of 20 m above ground level (AGL). The lidar operated with a pulse rate of 300 kHz, used a near-infrared (NIR) wavelength of 905 nm, and recorded up to two returns per pulse. Additional technical details for the drone and lidar systems can be found in Resop et al. [9], which used the same system to compare DLS and ALS for the same study area.

The reach was scanned six times over two years: 5 April 2017, 2 August 2017, 9 November 2017, 3 April 2018, 9 October 2018, and 20 March 2019. Three scans occurred during the leaf-off dormant vegetation season (April 2017, April 2018, and March 2019) and three scans occurred during the leaf-on season at varying stages of foliage (August 2017, November 2017, and October 2018; Figure 2). The extent was mostly similar for all six scans (Figure 1). Each flight consisted of six flightlines and accounted for overlap between swaths. All scans flew at a consistent altitude (about 20 m AGL), launch location, and speed to produce similar point densities, with the exception of the March 2019 scan, which tested a higher pulse density.

Figure 2. Photos of the study area, Stroubles Creek, taken downstream of the concrete bridge from an on-site tower camera around the same dates as the drone scans: (a) April 2017, (b) August 2017, (c) January 2018 (Note: due to a malfunction in the tower camera, this is the closest photo to the November 2017 scan), (d) April 2018, (e) October 2018, (f) March 2019.

2.3. Lidar Data Preprocessing

The lidar data were georeferenced with the on-board GPS and local National Geodetic Survey CORS base station data [9]. For each scan, the DLS data was processed into six LAS files, one for each swath. The LAS files for each scan were added to a LAS dataset, projected to WGS 1984 UTM Zone 17N, and further classified and rasterized with ArcGIS 10.6 (Redlands, CA, USA) and LAStools (Gilching, Germany). Data analysis and visualization were performed with Python 3.7.7 and various open-source modules (e.g., numpy, pandas, matplotlib, sklearn).

The study area contained three prominent structures that served as control points to compare the DLS point clouds between scans: one concrete bridge used for vehicles at the north end of the reach and two wooden sampling bridges in the middle and south end of the reach (Figure 1). For each scan, lidar points representing bridges were manually classified as building with ArcGIS. Points representing other anthropogenic features (e.g., cars and people conducting the flights) were also classified as building so they would be ignored in further analysis.

During point cloud post-processing and georeferencing, an issue of concern occurred: some LAS files were not properly aligned with respect to the others. Two misalignment phenomena were observed: (1) within each scan, one of the six LAS files might be misaligned with respect to the other five, and (2) between each of the six scans, there might be an elevation bias. This issue was easily observed at bridges, which made it appear as if there were multiple bridges at the same location (Figure 3).

Figure 3. An example misaligned point cloud representing one of the wooden sampling bridges that cross the stream, showing an elevation bias in the November 2017 scan (i.e., the green points) relative to the other scans before correction with CloudCompare.

Misaligned LAS files were corrected with the open-source software CloudCompare 2.10 (https://www.cloudcompare.org/, accessed on 2 July 2021). A two-stage process was used: (1) within each scan to correct misaligned LAS files and (2) between each scan and the April 2017 baseline scan to correct bias. The three bridges were used as control points during alignment. Points were finely registered with the “Iterative Closest Point (ICP)” tool to a root mean squared difference (RMSD) of 0.00001 m and a transformation matrix was calculated. The transformation matrix was then applied to the entire point cloud. After correcting all six scans, the elevation bias was calculated between all scans and the April 2017 baseline scan as a measure of relative accuracy.

The extent of each lidar scan was delineated with ArcGIS [9].
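The second stage of this correction amounts to applying the calculated 4×4 homogeneous transformation matrix to every point in a cloud and then measuring the residual elevation bias against the baseline. A minimal numpy sketch, where the matrix and the three "bridge-deck" points are hypothetical stand-ins for a CloudCompare-exported transformation and a real scan:

```python
import numpy as np

def apply_transform(points, T):
    """Apply a 4x4 homogeneous transformation matrix (such as the one
    calculated by an ICP registration) to an (N, 3) array of points."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
    return (homo @ T.T)[:, :3]

# Hypothetical transformation: a pure vertical shift of -0.05 m,
# i.e., an elevation bias like the one visible in Figure 3
T = np.eye(4)
T[2, 3] = -0.05

scan = np.array([[0.0, 0.0, 10.00],   # hypothetical bridge-deck points
                 [1.0, 0.0, 10.02],
                 [0.0, 1.0, 10.01]])
aligned = apply_transform(scan, T)

# Residual elevation bias relative to the baseline scan's deck elevations
baseline_z = np.array([10.00, 10.02, 10.01])
bias = np.mean(aligned[:, 2] - baseline_z)
print(round(bias, 2))  # -0.05
```

In practice the ICP tool estimates rotation as well as translation; the identity-plus-shift matrix here simply isolates the elevation-bias case described above.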
The “LAS Point Statistics as Raster” tool created a 0.1 m raster representing all pixels with at least one point. The “Expand” and “Shrink” tools performed a morphological closing and removed small data gaps. The final raster was converted to a polygon representing the scan extent. Key differences in extents include the November 2017 scan, which had a smaller extent south of the concrete bridge, and the March 2019 scan, which had a larger extent expanding out into the floodplain (Figure 4). The extent intersection was calculated with the “Intersect” tool to normalize comparisons between scans. The “annual extent intersection” (between April 2017, April 2018, and March 2019) was 11.02 ha and the “seasonal extent intersection” (between all scans) was 8.29 ha (Figure 4).
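The extent-delineation steps above (rasterize occupancy, close small gaps, intersect extents) can be sketched with numpy and scipy's ndimage module, used here as an illustrative stand-in for the ArcGIS “Expand”/“Shrink” and “Intersect” tools; the toy rasters and their sizes are hypothetical:

```python
import numpy as np
from scipy import ndimage  # stand-in for the ArcGIS raster tools

PIXEL = 0.1  # raster resolution in meters

def scan_extent(occupancy):
    """Morphological closing (the 'Expand' then 'Shrink' steps) fills
    small data gaps in a boolean occupancy raster."""
    return ndimage.binary_closing(occupancy)

# Hypothetical toy rasters: two scans with slightly different coverage
a = np.zeros((50, 60), dtype=bool)
a[1:-1, 1:-1] = True   # scan a covers most of the grid...
a[20, 30] = False      # ...with one small gap, filled by the closing
b = np.zeros((50, 60), dtype=bool)
b[1:-1, 1:50] = True   # scan b covers a smaller area

ext_a, ext_b = scan_extent(a), scan_extent(b)

# The extent intersection normalizes comparisons between scans
intersection = ext_a & ext_b
area_ha = intersection.sum() * PIXEL**2 / 10_000  # pixel count -> hectares
print(intersection.sum())
```

The same pixel-count-to-hectares conversion is how raster extents like the 11.02 ha and 8.29 ha intersections would be tallied, just at the full survey scale.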

Figure 4. The extent of each drone laser scanning (DLS) survey as well as the extent intersection of all six scans (“Seasonal Extent”) and of only the annual leaf-off season scans (April 2017, April 2018, and March 2019; “Annual Extent”).

2.4. Lidar Data Classification

With respect to the lidar data collected for this study, noise was rare and most likely represented data artifacts, such as “bird hits.” The “Classify LAS Noise” tool in ArcGIS identified outlier points based on absolute minimum and maximum elevation thresholds. Afterwards, the point clouds from each scan were inspected with the “Profile View” tool and any remaining outliers were manually classified as noise.

After manually classifying building and noise, automated classification tools in ArcGIS were used to classify ground and vegetation points [9]. The “Classify LAS Ground” tool identified ground and “Classify LAS by Height” classified the remaining points as unassigned or vegetation based on a height threshold of 0.1 m (Table 1). Points with a height greater than 0.1 m were classified as vegetation and points less than 0.1 m were classified as unassigned. The unassigned class served as a measure of uncertainty, as these points were within the precision of DLS [35] and could represent ground or vegetation.

Table 1. Lidar point data classes used in this study.

Class        Definition
Ground       Points most likely representing bare earth topography
Unassigned   Points with 0 m < height above ground < 0.1 m
Vegetation   Points with 0.1 m < height above ground < 15 m
Building     Points identified as human-made or built structures (e.g., bridges or cars)
Noise        Points identified as noise (e.g., bird hits or lidar artifacts)
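The height-threshold rule in Table 1 can be sketched as follows, assuming heights above ground have already been computed for the non-ground points; the function name and the class codes (in the style of the ASPRS LAS specification) are illustrative, not the ArcGIS implementation:

```python
import numpy as np

# Illustrative class codes in the style of the ASPRS LAS specification
UNASSIGNED, VEGETATION = 1, 5

def classify_by_height(height_above_ground, threshold=0.1, ceiling=15.0):
    """Mimic the 'Classify LAS by Height' step on non-ground points:
    heights (m) below the 0.1 m threshold become unassigned; heights
    between 0.1 m and 15 m become vegetation (Table 1)."""
    h = np.asarray(height_above_ground)
    classes = np.full(h.shape, UNASSIGNED, dtype=int)
    classes[(h >= threshold) & (h <= ceiling)] = VEGETATION
    return classes

heights = np.array([0.02, 0.09, 0.10, 1.3, 12.7])
print(classify_by_height(heights))  # [1 1 5 5 5]
```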

Researchers have observed that point clouds from a YellowScan lidar, which is based on the Velodyne Puck (San Jose, CA, USA), can exhibit a “thickness” of points of a few centimeters when scanning flat ground, producing surfaces that are not as “crisp” as they should be [36]. This phenomenon was observed in the point clouds produced for this study. Upon investigation, a cross-section of points along the concrete bridge at the northern end of the study area showed an approximate 0.05 m thickness of what should have been a solid surface. To account for this thickness, the unassigned class, representing points with a height above ground less than 0.1 m, was used in addition to the ground class to define terrain for bare earth models (i.e., DTMs).

The automated classification algorithms produced two common misclassifications: (1) ground points misclassified as vegetation on high-gradient stream banks and (2) vegetation points misclassified as ground under dense canopy [9]. These misclassifications needed to be corrected manually due to limitations in current lidar point classification algorithms. Unfortunately, the manual correction of large point clouds, especially from DLS, is very time-consuming and labor-intensive [9]. The April 2017 scan was corrected previously [9], but the other five scans were not fully verified. An investigation into a more efficient classification correction process was outside the scope of this study.

2.5. Lidar Vegetation Metrics

Three lidar metrics were selected to represent riverscape vegetation: (1) height (canopy height model; CHM), (2) roughness (vegetative roughness index; VRI), and (3) density (lidar vegetation index; LVI; Table 2). All three metrics were calculated in raster format for each DLS survey using data processing pipelines created with Model Builder in ArcGIS. All outputs had a pixel size of 0.1 m. Based on an average point spacing of 0.047 m over the six scans, this is about five lidar points per pixel.
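The stated density follows directly from the point spacing; a quick check of the "about five points per pixel" figure:

```python
# Points per pixel implied by the reported average point spacing
pixel_size = 0.1       # raster resolution (m)
point_spacing = 0.047  # average point spacing over the six scans (m)

points_per_pixel = (pixel_size / point_spacing) ** 2
print(round(points_per_pixel, 1))  # 4.5
```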

Table 2. Metrics produced from drone laser scanning (DLS) data to represent different components of riverscape vegetation.

Lidar Vegetation Metric             Definition                                            Value Range   Units
Canopy height model (CHM)           Vegetation height: (veg. elev.) − (ground elev.)      0 to ~13      Meters
Vegetative roughness index (VRI)    Vegetative roughness: st. dev. of vegetation height   0 to ~9       Meters
Lidar vegetation index (LVI)        Vegetation density: (count veg.)/(count all points)   0 to 1        Decimal percent
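As a concrete reading of the LVI row in Table 2, the per-pixel ratio of vegetation returns to all returns can be sketched as follows; this is a simplified stand-in for the raster pipeline, with hypothetical point data:

```python
import numpy as np

def lidar_vegetation_index(rows, cols, is_veg, shape):
    """Sketch of the LVI from Table 2: per pixel, the ratio of vegetation
    returns to all returns. rows/cols give each point's pixel indices;
    is_veg is a boolean vegetation-class flag per point."""
    total = np.zeros(shape)
    veg = np.zeros(shape)
    np.add.at(total, (rows, cols), 1.0)
    np.add.at(veg, (rows, cols), is_veg.astype(float))
    # Pixels with no returns (data gaps) keep an LVI of zero
    return veg / np.maximum(total, 1.0)

# Hypothetical points: five returns in pixel (0, 0), two of them vegetation
rows = np.array([0, 0, 0, 0, 0])
cols = np.array([0, 0, 0, 0, 0])
veg_flag = np.array([True, True, False, False, False])
lvi = lidar_vegetation_index(rows, cols, veg_flag, (2, 2))
print(lvi[0, 0])  # 0.4
```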

The CHM was derived from two rasters: the DTM (i.e., the minimum ground and unassigned point elevation per pixel) and the DSM (i.e., the maximum vegetation point elevation per pixel) [9]. The DTM and DSM were calculated with the “LAS Dataset to Raster” tool (Figure 5). The difference between the DSM and DTM resulted in a normalized digital surface model (nDSM) [9]. Data gaps (e.g., water surfaces and lidar shadows) and pixels with a height less than 0.1 m (i.e., the smallest height of vegetation points) were assigned zero values to produce the final CHM. The CHM ranged from 0 m (i.e., bare earth) to about 13 m (i.e., the height of the tallest tree).
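The DTM/DSM differencing described above can be sketched with numpy as a simplified stand-in for the "LAS Dataset to Raster" workflow; the point coordinates, elevations, and single-pixel example are hypothetical:

```python
import numpy as np

def canopy_height_model(rows, cols, z, is_terrain, shape, min_height=0.1):
    """Sketch of the CHM workflow: DTM = minimum ground/unassigned
    elevation per pixel, DSM = maximum vegetation elevation per pixel,
    CHM = DSM - DTM, with data gaps and heights below 0.1 m zeroed."""
    dtm = np.full(shape, np.inf)
    dsm = np.full(shape, -np.inf)
    np.minimum.at(dtm, (rows[is_terrain], cols[is_terrain]), z[is_terrain])
    np.maximum.at(dsm, (rows[~is_terrain], cols[~is_terrain]), z[~is_terrain])
    chm = dsm - dtm
    chm[~np.isfinite(chm)] = 0.0   # data gaps (water surfaces, lidar shadows)
    chm[chm < min_height] = 0.0    # below the smallest vegetation height
    return chm

# Hypothetical pixel with ground at 610.0 m and a canopy return at 612.5 m;
# the second pixel has no returns and becomes a zeroed data gap
rows = np.array([0, 0])
cols = np.array([0, 0])
z = np.array([610.0, 612.5])
is_terrain = np.array([True, False])
chm = canopy_height_model(rows, cols, z, is_terrain, (1, 2))
print(chm[0, 0])  # 2.5
```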



Figure 5. The workflow deriving the canopy height model (CHM) based on the maximum height above ground of lidar vegetation points in each raster pixel.

The vegetative roughness index (VRI) estimated the local variability of vegetation height. The VRI was calculated with the “LAS Height Metrics” tool as the standard deviation of vegetation point height per pixel (Figure 6). Points with a height less than 0.1 m were ignored. Low standard deviations corresponded to smooth surfaces while high standard deviations corresponded to rough surfaces. Data gaps were assigned a VRI of zero. The VRI ranged from about 0 m, representing relatively uniform vegetation, to 9 m, representing more complex vegetation.
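The per-pixel statistic underlying the VRI can be sketched as follows (a minimal Python illustration with made-up point heights; the original workflow used the ArcGIS "LAS Height Metrics" tool):

```python
import numpy as np

# Heights (m) of lidar points falling in one 0.1 m pixel; points below
# 0.1 m are ignored before computing the statistic.
heights = np.array([0.05, 1.2, 1.5, 3.8, 0.9])
veg = heights[heights >= 0.1]

vri = float(np.std(veg)) if veg.size else 0.0  # data gaps -> VRI of 0
print(round(vri, 2))
```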

Figure 6. The workflow estimating the vegetative roughness index (VRI) based on the standard deviation of vegetation height within each raster pixel.

The lidar vegetation index (LVI) represented the percentage of vegetation points per pixel. While CHM and VRI were both absolute measures, the LVI was the only relative measure. The number of vegetation points per pixel was calculated with the “LAS Point Statistics as Raster” tool and then the number of total points per pixel was calculated (Figure 7). The vegetation point count was divided by the total point count on a pixel-by-pixel basis, resulting in a decimal percentage. Data gaps were assigned an LVI of zero. The LVI ranged from zero, representing no vegetation or open canopy, to one, representing heavy vegetation or dense canopy.
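The LVI ratio with its gap handling can be sketched as follows (an illustrative NumPy stand-in for the ArcGIS tools; here a total count of zero marks a data gap):

```python
import numpy as np

# Per-pixel point counts (illustrative 2 x 2 rasters).
veg_count = np.array([[12.0, 0.0],
                      [ 3.0, 5.0]])
all_count = np.array([[20.0, 8.0],
                      [ 3.0, 0.0]])  # zero total points marks a data gap

with np.errstate(invalid="ignore", divide="ignore"):
    lvi = veg_count / all_count
lvi[~np.isfinite(lvi)] = 0.0  # data gaps assigned an LVI of zero

print(lvi)
```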


Figure 7. The workflow calculating the lidar vegetation index (LVI) based on the percentage of vegetation points within each raster pixel as a measure of density.

2.6. Vegetation Classification, Distance to Water, and Tree Identification

The CHM for each scan was classified into the following land classes: ground, scrub, and tree. These classes were defined with a simple threshold method based on a manual investigation of vegetation in the riverscape. Pixels with a CHM value (i.e., height above ground) less than 0.1 m were classified as ground, pixels between 0.1 and 2 m were classified as scrub, and pixels greater than 2 m were classified as tree.

The CHM for the April 2017 baseline was used to establish the stream location. A land class for water was defined by taking advantage of the physics of NIR lidar pulses, which are absorbed by water surfaces and result in “No Data” pixels. The “LAS Point Statistics as Raster” tool counted the lidar points in each 0.1 m pixel and those with more than three points represented ground. The “Expand” and “Shrink” tools were used to close small data gaps. At this point, “No Data” gaps represented the water surface. Once the water class was defined, the “Euclidean Distance” tool was used to determine the distance between each pixel in the riverscape and the stream.

To quantify the annual growth of vegetation, trees were manually identified from the April 2017 CHM based on pixels classified with a height greater than 2 m. The riverscape was inspected, crowns were identified, and stems were marked. The environment for this study had a fairly open canopy, so it was easy to manually identify trees. Each identified tree was verified by inspecting the point cloud. The “Buffer” tool created 0.5 m buffers around each tree stem and the maximum tree height was calculated from the CHMs for April 2017, April 2018, and March 2019.

2.7. Annual and Seasonal Change Detection

Annual and seasonal changes were calculated for each vegetation metric (Table 2). The overall two-year period was represented by the April 2017 to March 2019 scans. Annual change was measured between leaf-off scans (April 2017 to April 2018 and April 2018 to March 2019) within the annual extent intersection (Figure 4). Seasonal change was measured between all six DLS scans within the seasonal extent intersection (Figure 4).

An effective method of identifying pixel-level change is to calculate DEMs of difference (DoDs) by subtracting rasters representing two moments in time: the newer raster minus the older raster [12,37,38]. For CHMs, change represented vegetation growth or decay [15]. For VRIs, change represented increased roughness or smoothness. For LVIs, change represented increased or decreased vegetation density. Change was determined to be significant, as opposed to noise, by applying a minimum level of detection (LoD), representing the uncertainty of the lidar sensor [37,38]. An LoD of 0.05 m was used for the CHM and VRI DoDs [35], which is consistent with LoDs used by change detection studies involving lidar at a similar range [17]. For the LVI DoD, the unassigned class (Table 1) represented the point classification uncertainty.
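The DoD-with-LoD step can be sketched as a raster subtraction followed by masking sub-threshold change (a minimal NumPy illustration; the 0.05 m LoD is from the text, the CHM values are made up):

```python
import numpy as np

LOD = 0.05  # m, minimum level of detection for the CHM and VRI DoDs

chm_2017 = np.array([4.10, 0.50, 2.00, 0.00])
chm_2018 = np.array([4.80, 0.52, 1.20, 0.00])

dod = chm_2018 - chm_2017     # newer raster minus older raster
dod[np.abs(dod) < LOD] = 0.0  # change below the LoD treated as noise

print(dod)  # growth (+), decay (-), or no significant change (0)
```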

3. Results

3.1. Point Density Comparison and Elevation Bias between the Six Lidar Scans

The point count varied significantly between the scans due to extent differences (Figure 4), ranging from 42.6 to 70.1 million points. When considering only the seasonal extent intersection, the point counts were more consistent (Table 3). Most scans contained between 41.7 and 43.0 million points with the exception of the March 2019 scan, which contained 51.2 million points due to a higher pulse density. While most scans were consistent (491 to 502 pulses/m2), the March 2019 scan had a pulse density of 590 pulses/m2.

Table 3. Drone laser scanning (DLS) statistics for each scan within the extent intersection.

| Scan Date | Point Count | Point Density (All Returns/m2) | Pulse Density (First Returns/m2) |
|---|---|---|---|
| April 2017 | 41,661,008 | 502.43 | 492.39 |
| August 2017 | 42,148,141 | 508.30 | 501.89 |
| November 2017 | 42,389,739 | 511.21 | 490.65 |
| April 2018 | 42,994,259 | 518.51 | 501.61 |
| October 2018 | 42,584,883 | 513.57 | 500.07 |
| March 2019 | 51,135,652 | 616.69 | 590.30 |

Agreement between the six DLS scans was determined based on the average elevation (Z) bias between DEMs produced from each scan and the April 2017 baseline. Bridge DEMs were created using points representing the concrete bridge and two sampling bridges. Before aligning the data in CloudCompare, the average Z bias was as large as 1.15 m (March 2019) with smaller biases of 0.23 m (November 2017) and −0.11 m (October 2018; Figure 3). After alignment, the average Z biases ranged from −0.03 m to 0.01 m over all six scans (Table 4), which is well within the precision of YellowScan lidar systems [35].
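The bias computation can be sketched as the mean pixelwise difference between a scan's DEM and the baseline DEM, ignoring pixels missing in either surface (illustrative elevations, not the study's data):

```python
import numpy as np

# Illustrative bridge DEMs (m); np.nan where a surface was not measured.
dem_scan = np.array([634.12, 634.30, np.nan, 635.01])
dem_base = np.array([634.10, 634.33, 634.90, 635.00])  # April 2017 baseline

diff = dem_scan - dem_base
mean_z_bias = float(np.nanmean(diff))  # ignore pixels missing in either DEM
print(round(mean_z_bias, 3))
```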

Table 4. The average elevation (Z) bias between each drone laser scanning (DLS) survey and the April 2017 scan after all point clouds were aligned with CloudCompare.

| Scan Date | Mean Z Bias (m), Bridge DEMs | Mean Z Bias (m), Ground DTMs |
|---|---|---|
| April 2017 | Baseline | Baseline |
| August 2017 | −0.02 | 0.25 |
| November 2017 | −0.01 | 0.06 |
| April 2018 | 0.01 | −0.01 |
| October 2018 | −0.02 | 0.17 |
| March 2019 | −0.03 | −0.03 |

Between DTMs, the average Z biases had a wider range, −0.03 m to 0.25 m (Table 4). The scans with a higher bias included August 2017 (0.25 m), October 2018 (0.17 m), and November 2017 (0.06 m), which occurred during leaf-on seasons. The Z bias was negatively correlated to the time of year. Going forward in time from August to October to November, as the amount of vegetation decreased there was a lower likelihood of vegetation understory misclassified as ground, which decreased the bias. For the other leaf-off scans (April 2018 and March 2019), the bias was within the precision of the lidar system (−0.01 m and −0.03 m, respectively) [35].

3.2. Annual and Seasonal Change of Lidar Point Classifications

The three primary data classes were ground, unassigned, and vegetation (Table 1). Vegetation represented objects with a height above ground greater than 0.1 m that were not identified as a built structure, which allowed DLS to detect not just trees and bushes but also small herbaceous vegetation [9]. The scans were clipped to the seasonal extent intersection and the point statistics (percent terrain vs. percent vegetation) were compared. As expected, the percent vegetation varied periodically throughout the seasons as associated with the growth and decay of vegetation in the riverscape, with a maximum of 56% in August 2017 (leaf-on) and a minimum of 12% in April 2017 (leaf-off; Figure 8). During the leaf-on scans, there was a clear negative trend in the percent vegetation going forward in time from August (56%) to October (45%) to November (38%), likely due to the decay of leaves over the autumn season.

Figure 8. Drone laser scanning (DLS) point classifications within the extent intersection over all six scans showing the seasonal variation between terrain points and vegetation points.

There was a gradual increase in percent vegetation over time between the leaf-off scans going forward in time from April 2017 (12%) to April 2018 (15%) to March 2019 (23%; Figure 8). This positive trend was likely reflecting the annual growth of vegetation in the riverscape. The higher pulse density during the March 2019 scan (Table 3) may have contributed to the higher percent vegetation observed for this scan, but the level of influence is not clear.

3.3. Annual Change of Maximum Tree Height

A total of 604 trees were identified in the April 2017 CHM based on a manual inspection of areas classified as tree (CHM > 2 m; Figure 9). The average tree growth over the two-year period was 1.48 m. A majority of trees had positive growth (n = 570; mean = 1.62 m) while a few had negative growth (n = 34; mean = −0.96 m). Most trees with negative growth fell due to natural causes over the two-year period. Tree growth was greatest from 2018 to 2019 (mean = 0.83 m) compared to the period 2017 to 2018 (mean = 0.65 m). Based on each tree’s distance to stream (i.e., pixels classified as water), trees within 20 m grew faster (n = 544; mean = 1.56 m) over the two-year period compared to trees farther than 20 m (n = 60; mean = 0.70 m).
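The buffer-based height extraction can be sketched as taking the maximum CHM value within a fixed radius of each stem (a simplified raster stand-in for the ArcGIS "Buffer" step; the grid and stem location are hypothetical):

```python
import numpy as np

def max_height_in_buffer(chm, stem_rc, radius_px):
    """Maximum CHM value within a circular buffer around a tree stem.

    chm: 2-D canopy height model array; stem_rc: (row, col) of the stem;
    radius_px: buffer radius in pixels (0.5 m buffer / 0.1 m pixels = 5).
    """
    rows, cols = np.indices(chm.shape)
    mask = (rows - stem_rc[0])**2 + (cols - stem_rc[1])**2 <= radius_px**2
    return float(chm[mask].max())

chm = np.zeros((20, 20))
chm[10, 10] = 4.2   # stem pixel
chm[10, 12] = 5.1   # crown edge, inside the 5-pixel buffer
chm[10, 17] = 9.9   # a different tree, outside the buffer

print(max_height_in_buffer(chm, (10, 10), 5))
```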



Figure 9. Trees taller than 2 m were identified from the April 2017 canopy height model (CHM) and maximum tree height was measured for: (a) April 2017; (b) April 2018; (c) March 2019.

Not only did the trees show steady average growth (April 2017 = 4.37 m, April 2018 = 5.02 m, March 2019 = 5.84 m), but the variability or standard deviation of height increased over time as well (April 2017 = 1.28 m, April 2018 = 1.58 m, March 2019 = 1.90 m). This positive trend in standard deviation demonstrates not just an increase in tree height over time but also an increase in tree height diversity and variability (Figure 10). Overall, the annual scans showed a consistent, steady growth of woody vegetation over the riverscape.

Figure 10. Maximum tree height statistics of the 604 trees identified in the riverscape for each of the three leaf-off scans. The average and standard deviation of maximum tree height increased over the two-year period.

3.4. Baseline Lidar Vegetation Metrics

The vegetation metrics (CHM, VRI, and LVI) were derived for the April 2017 scan as a baseline (Figure 11). The CHM ranged from 0 m (bare ground) to 12.97 m (the height of the tallest tree). The VRI ranged from 0 m (smooth surfaces) to 8.41 m (rough surfaces). The LVI ranged from 0 (open terrain) to 1 (dense vegetation). The April 2017 scan was classified as: ground (CHM < 0.1 m), scrub (0.1 m < CHM < 2 m), and tree (CHM > 2 m; Figure 9). As expected, pixels classified as ground had a value close to zero for all three metrics. For pixels classified as vegetation, tree areas consistently had higher vegetation metrics than scrub areas. The biggest difference was in average height (CHM; scrub = 0.48 m and tree = 3.52 m), which is to be expected since the classes were determined based on height. The differences in roughness (VRI; scrub = 0.08 m and tree = 0.67 m) and density (LVI; scrub = 0.45 and tree = 0.61) were much smaller.
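The threshold classification and per-class averaging described above can be sketched as follows (illustrative CHM values; the 0.1 m and 2 m thresholds come from the text):

```python
import numpy as np

# Classify an illustrative CHM (m) with the thresholds from the text.
chm = np.array([0.05, 0.48, 1.90, 3.52, 12.97])
classes = np.where(chm < 0.1, "ground",
                   np.where(chm <= 2.0, "scrub", "tree"))

# Average height per class, as compared for the baseline scan.
for cls in ("scrub", "tree"):
    print(cls, round(float(chm[classes == cls].mean()), 2))
```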

Figure 11. Lidar vegetation metrics for April 2017 showing: (a) height (CHM); (b) roughness (VRI); (c) density (LVI).




Very weak negative correlations were observed between CHM (R2 = 0.12) and VRI (R2 = 0.07) with respect to the distance to the stream channel, but overall the taller and rougher vegetation tended to be located closer to the stream (Figure 12). There was no correlation between LVI (R2 = 0.01) and distance to stream; however, it is important to note that within the stream channel itself the LVI ≈ 1 because the water surface absorbed the lidar pulses, resulting in a lack of ground points measured by the lidar system (Figure 11).
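The reported R2 values correspond to the coefficient of determination of a simple linear fit of each metric against distance to stream, which can be sketched as (illustrative samples, not the study's pixels):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((y - y.mean())**2)
    return 1.0 - ss_res / ss_tot

# Illustrative samples: canopy height (m) vs. distance to stream (m).
dist = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
chm = np.array([6.5, 5.0, 4.8, 2.0, 3.1])
print(round(r_squared(dist, chm), 2))
```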

Figure 12. The relationship between vegetation metrics and distance to stream for April 2017: (a) height (CHM); (b) roughness (VRI); (c) density (LVI).

3.5. Annual Change of Lidar Vegetation Metrics

The vegetation metrics for the annual leaf-off scans (April 2017, April 2018, and March 2019) were overlaid with each vegetation class (scrub and tree) and clipped to the annual extent intersection (total area = 11.02 ha). Both scrub and tree areas increased steadily over the two-year period (Figure 13). Scrub, as a percent of the total area, increased from 15.28% for April 2017 to 22.02% for April 2018 to 36.04% for March 2019 while tree area increased from 2.83% to 4.02% to 5.31%. In tree areas, all three metrics increased over the two-year period (Figure 13). An increase in height (CHM) was driven by annual tree growth (Figure 10). However, increases in roughness (VRI) and density (LVI) were likely also influenced by the growth of understory. On the other hand, scrub areas did not show any significant annual trends with respect to any metric (Figure 13). Annual variations in scrub vegetation were likely impacted by the growth of new scrub and decay of old scrub, while trees had more consistent growth without much change in the overall tree population.

Figure 13. Trends in annual vegetation change over the riverscape: (a) total land class area; (b) average height (CHM); (c) average roughness (VRI); (d) average density (LVI).

There was a clear increase in total scrub and tree area over each year (Figure 13). By looking at the 0.1 m pixel-level change, one can investigate how land class changed from one year to the next. From April 2017 to April 2018, scrub area increased from 1.68 ha to 2.43 ha; however, a majority (75%) of the April 2018 scrub area was previously ground (Figure 14) and a majority (62%) of the April 2017 scrub area became ground in April 2018. This same trend was observed between April 2018 and March 2019 when the scrub area increased from 2.43 ha to 3.97 ha and again a majority (69%) was previously ground (Figure 14) and a near majority (48%) of the previous scrub became ground. This demonstrates the volatility of scrub areas and could have dramatic effects on physical riverscape properties such as hydraulic roughness from year to year. In contrast, tree areas showed more consistency. From April 2017 to April 2018 to March 2019 the tree area increased from 0.31 ha to 0.44 ha to 0.59 ha. Unlike scrub area, a majority of the tree area in both 2018 and 2019 was previously classified as tree, 54% and 65%, respectively (Figure 14).


Figure 14. Change matrices representing the pixel-level change in land class between: (a) April 2017 and April 2018; (b) April 2018 and March 2019.

Looking at the DoDs, one can observe the pixel-level growth and decay of vegetation spatially over the riverscape and temporally over the two-year study (Figure 15). The DoDs for height (CHM) and roughness (VRI) showed similar trends, although the CHM had a greater overall magnitude of change compared to the VRI. Most of the positive change in vegetation occurred near the stream channel and was primarily due to the growth of trees. Farther into the floodplain, there was more spatial variability in the growth and decay of scrub. From year to year, some sections of scrub showed positive change and some showed negative change. Overall, tree areas showed greater growth in CHM and VRI over the two-year period compared to scrub areas (Figure 16). On the other hand, vegetation density (LVI) consistently had positive change over the entire riverscape (Figure 15) and the change was similar between scrub and trees (Figure 16). This consistent growth in vegetation density suggests that while scrub and trees are growing at different rates, the canopy density and vegetation understory are increasing more uniformly.
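A DoD is simply the pixel-wise difference between two co-registered rasters from successive scans, with sub-threshold differences masked as no-change. A minimal sketch; the 0.1 m level of detection is a hypothetical value for illustration (thresholding repeat-survey DEMs is common practice, e.g., [37]):

```python
import numpy as np

def dem_of_difference(raster_t2, raster_t1, lod=0.1):
    """Pixel-level change between two co-registered rasters
    (e.g., CHMs from successive scans). Differences smaller than
    the level of detection (lod, in metres) are treated as
    no-change; the 0.1 m default here is illustrative."""
    dod = raster_t2 - raster_t1
    dod[np.abs(dod) < lod] = 0.0
    return dod

# Toy 2x2 CHM rasters (metres above ground)
chm_2017 = np.array([[2.0, 5.0], [0.5, 10.0]])
chm_2018 = np.array([[2.05, 5.8], [0.2, 11.0]])
dod = dem_of_difference(chm_2018, chm_2017)
```

Positive cells represent growth, negative cells decay; summing positive and negative cells separately gives the net growth and decay volumes per land class.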


Figure 15. DEMs of Difference (DoDs) over the two-year study from April 2017 to March 2019 representing pixel-level change in: (a) height (CHM); (b) roughness (VRI); (c) density (LVI).

Figure 16. The average pixel-level annual change in each metric, (a) height (CHM), (b) roughness (VRI), and (c) density (LVI), over each land class and over each year in the two-year study.

The vegetation metrics did not show much correlation with distance to stream. However, when selecting only tree areas, some annual trends were observed. There was a negative correlation for tree height and roughness with distance to stream (Figure 17). The negative correlation increased in magnitude over time from April 2017 to April 2018 to March 2019 (Figure 17). Generally, trees grew faster within 20 m of the stream compared to trees farther than 20 m. The average CHM growth from 2017 to 2019 was 1.03 m within 20 m and 0.07 m farther than 20 m. The average increase in VRI over the same period was 0.40 m within 20 m and 0.12 m farther than 20 m. While tree height and roughness showed a negative correlation with distance to stream, vegetation density showed a positive correlation over the two-year period (Figure 17). This positive correlation in tree areas decreased over time from 2017 to 2019 as the average LVI increased by +0.05 within 20 m but decreased by −0.09 farther than 20 m. Overall, the annual trend for all three metrics showed a greater increase in vegetation closer to the stream. While these annual trends were observed in tree areas, there were no significant correlations or trends observed in scrub areas. This is likely a result of the volatility of the scrub area over the riverscape that was previously described.
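The trends in Figure 17 come from simple linear regressions of each vegetation metric against distance to stream. A sketch with hypothetical pixel samples, including the 20 m near/far split used in the text (the values below are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical tree-area pixel samples: distance to stream (m)
# and change in CHM (m) over the study period.
distance = np.array([5.0, 10.0, 15.0, 25.0, 40.0, 60.0])
chm_change = np.array([1.2, 1.0, 0.9, 0.3, 0.1, 0.05])

# Least-squares line: chm_change ~ slope * distance + intercept.
# A negative slope indicates faster growth closer to the stream.
slope, intercept = np.polyfit(distance, chm_change, 1)

# Average growth near (<= 20 m) vs. far (> 20 m) from the stream,
# mirroring the 20 m split reported in the text.
near = chm_change[distance <= 20].mean()
far = chm_change[distance > 20].mean()
```

The same fit, repeated per scan date, would show the slope magnitude increasing over time as the near-stream trees outgrow the rest of the floodplain.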


Figure 17. Linear regression models relating vegetation metrics and distance to stream annually over time for tree areas: (a) height (CHM); (b) roughness (VRI); (c) density (LVI).

3.6. Seasonal Change of Lidar Vegetation Metrics

All six scans were clipped to the seasonal extent intersection (total area = 8.29 ha). Based on the change in vegetation area, two periods were associated with growth (April 2017 to August 2017 and April 2018 to October 2018) and three periods were associated with decay (August 2017 to November 2017, November 2017 to April 2018, and October 2018 to March 2019; Figure 18). These trends were similar to those previously observed in the classified lidar point clouds (Figure 8). Much like the observations of annual change for individual trees and the pixel-level change of the vegetation metrics, tree areas gradually increased in height (CHM) and roughness (VRI) over time (Figure 18). On the other hand, scrub areas were more constant for both metrics (Figure 18). Density (LVI) was highly variable for both scrub and trees (Figure 18).
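The growth/decay labeling of scan-to-scan periods follows directly from the sign of the change in vegetated area between consecutive scans. A sketch in which the dates match the study's six scans but the area values are hypothetical:

```python
def label_periods(scan_dates, veg_area_ha):
    """Label each interval between consecutive scans as growth or
    decay based on the sign of the change in vegetated area
    (illustrative areas, not the study's measurements)."""
    labels = []
    for i in range(1, len(scan_dates)):
        delta = veg_area_ha[i] - veg_area_ha[i - 1]
        kind = "growth" if delta > 0 else "decay"
        labels.append((scan_dates[i - 1], scan_dates[i], kind))
    return labels

dates = ["2017-04", "2017-08", "2017-11", "2018-04", "2018-10", "2019-03"]
areas = [5.0, 6.2, 4.8, 4.1, 6.0, 4.5]  # hypothetical vegetated area (ha)
periods = label_periods(dates, areas)
```

With these illustrative areas the labeling reproduces the pattern described in the text: two growth periods (spring to late summer) and three decay periods (autumn through winter).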

Figure 18. Trends in seasonal vegetation change over the riverscape: (a) total land class area; (b) average height (CHM); (c) average roughness (VRI); (d) average density (LVI).

Temporal linear regression was used to remove the annual trend of each metric for scrub and tree areas and isolate the seasonal variability, which was represented by the normalized root mean squared error (NRMSE) of the regression residuals (Table 5). Out of all the combinations of lidar metric and land class, only height and roughness in tree areas had significant annual trends (both had R2 = 0.70). Both scrub and tree areas demonstrated seasonal variability for height and roughness; however, the variability was greater for scrub (CHM NRMSE = 17.8%; VRI NRMSE = 25.6%) compared to trees (CHM NRMSE = 5.1%; VRI NRMSE = 9.2%). Vegetation density did not have any significant annual trends. However, it was highly seasonally variable for both scrub areas (LVI NRMSE = 23.1%) and tree areas (LVI NRMSE = 17.8%).


Table 5. Regression results showing the annual trend (based on R2) and seasonal variability (based on normalized root mean squared error [NRMSE]) for each metric and land class.

Vegetation Metric   Scrub Annual Trend (R2)   Scrub Seasonal Variability (NRMSE)   Tree Annual Trend (R2)   Tree Seasonal Variability (NRMSE)
Height (CHM)        0.16                      17.8%                                0.70                     5.1%
Roughness (VRI)     0.08                      25.6%                                0.70                     9.2%
Density (LVI)       0.08                      23.1%                                0.00                     17.8%

No temporal trends could be observed for the average vegetation density (LVI) when viewing the data sequentially (Figure 18). However, by reordering the scans as they would fall in a single growing season, a trend emerges representing vegetation decay (Figure 19). The scans were reordered August (2017) to October (2018) to November (2017) to April (2018). This time shift does not work well for height and roughness since these are absolute measures and are affected by annual vegetation growth. On the other hand, density is a relative measure based on the local percentage of vegetation points. By reordering the scans seasonally, one can observe a negative trend in LVI over both scrub and tree areas (Figure 19). This trend likely represents the seasonal decay of vegetation in the riverscape as foliage falls and DLS is able to penetrate further through the canopy to record ground points.
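The detrending behind Table 5 can be sketched as a linear fit of each metric over time, with the residual RMSE normalized to express the seasonal variability. Normalizing by the series mean is an assumption here; the paper does not state which NRMSE convention it uses:

```python
import numpy as np

def trend_and_nrmse(t, y):
    """Fit a linear temporal trend to a metric series; report the
    trend strength as R2 and the seasonal variability as the RMSE
    of the residuals divided by the series mean (one plausible
    NRMSE convention, assumed for illustration)."""
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    nrmse = np.sqrt(np.mean(resid ** 2)) / y.mean()
    return r2, nrmse

# Hypothetical mean tree CHM (m) at six scan times (years since start).
t = np.array([0.0, 0.33, 0.58, 1.0, 1.5, 1.92])
chm = np.array([10.0, 10.4, 10.3, 10.8, 11.2, 11.5])
r2, nrmse = trend_and_nrmse(t, chm)
```

A series dominated by annual growth (like tree CHM) yields a high R2 and small NRMSE; a seasonally oscillating series (like LVI) yields a low R2 and large NRMSE, which is the separation Table 5 reports.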

Figure 19. Trends in seasonal vegetation decay over the riverscape going from August (2017) to October (2018) to November (2017) to April (2018): (a) total land class area; (b) average height (CHM); (c) average roughness (VRI); (d) average density (LVI).

4. Discussion

Limitations with the Current Study and Future Research

One of the limitations of raster-based analysis is that only a single Z value is allowed per pixel. When quantifying height or classifying different types of vegetation, this limitation does not allow for an accurate representation of complex environments, in particular when multiple vegetation types exist at the same location, for example, when grass or scrub grows under a tree canopy. With traditional image-based remote sensing, these distinctions are often impossible to make. However, since lidar can penetrate through the tree canopy, it is possible to derive lidar measures that represent this vegetation diversity. Many applications take advantage of this aspect of lidar data by generating metrics such as average height, standard deviation, point density, and height percentiles to represent forest structure [20]. The lidar metrics generated in this study, representing height, roughness, and density, allowed for different perspectives of vegetation structure. These metrics could allow for more complex vegetation classes, such as combinations of grass, scrub, and trees. This could be accomplished using machine learning algorithms [20], but would require additional field data for model training.

Another area of uncertainty is the impact of pixel size or resolution on the calculation of the vegetation metrics. It is expected that two of the metrics, VRI (which deals with roughness, defined by the standard deviation of height) and LVI (which deals with density, defined by the percentage of vegetation points), are both heavily influenced by the pixel size, as these metrics are dependent on the number of lidar points in each pixel. As the size of the pixel increases, more lidar points are included, which results in a more generalized rather than localized value for each pixel. A sensitivity analysis is needed to determine the effect of pixel size on each of these metrics; this is an area of future study.
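Under the definitions given above (roughness as the standard deviation of height, density as the percentage of vegetation points), the per-pixel metrics can be sketched from the normalized point heights falling within one pixel. Taking CHM as the maximum height and using a 0.5 m vegetation threshold are illustrative assumptions, not the paper's stated parameters:

```python
import numpy as np

def pixel_metrics(heights, veg_threshold=0.5):
    """Per-pixel vegetation metrics from normalized lidar point
    heights (metres above ground) falling in one pixel:
    CHM as the maximum height (a common convention), VRI as the
    standard deviation of height, and LVI as the fraction of
    points above a vegetation height threshold (0.5 m here is
    illustrative). Larger pixels pool more points, which smooths
    VRI and LVI toward neighborhood averages."""
    h = np.asarray(heights, dtype=float)
    chm = h.max()                      # height
    vri = h.std()                      # roughness
    lvi = np.mean(h > veg_threshold)   # density
    return chm, vri, lvi

# Points in one pixel: two ground returns, scrub, and a tree crown.
chm, vri, lvi = pixel_metrics([0.0, 0.1, 2.0, 3.9])
```

Rerunning this at several pixel sizes on the same point cloud would be the core of the sensitivity analysis proposed above.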

In this study, we quantified and classified riverscape vegetation at 0.1-m resolution using an active system (i.e., DLS). Other studies have classified vegetation using passive systems, such as drone-based imagery [7,8,31]. Woodget et al. [7] used drone-based imagery and SfM to create 0.02-m DEMs and classified images with a mean elevation error of 0.05 m, similar to the elevation errors observed in our study. Yang et al. [8] used multispectral drone-based imagery to create 0.25 m classified images. Prior et al. [31] created 0.1 m classified images from DLS and SfM, but the datasets were collected in different years. Both remote sensing technologies have advantages and disadvantages. Lidar is better suited for producing 3D point clouds and penetrating vegetation canopy. Imagery is better suited for collecting a range of spectral information. There is a long history of data fusion applications that have combined active and passive remote sensing datasets to take advantage of their respective strengths [21]. The results of this study could theoretically be improved by combining the DLS data with SfM data. However, such data fusion would require collecting both lidar and imagery data at each survey, presenting additional field management challenges, and was outside the scope of this study.

5. Conclusions

Comparing the annual and seasonal change of all three lidar vegetation metrics (height [CHM], roughness [VRI], and density [LVI]) over both vegetation classes (scrub and tree), the following trends and patterns emerge over the two-year period:

1. Trees were defined by annual change, while seasonal variability dominated scrub.
2. Trees closer to the stream (within 20 m) grew faster than trees farther from the stream (greater than 20 m), although this trend was not observed with scrub.
3. The trends observed in height (CHM) and roughness (VRI) were very similar; the differences were mainly in the scale of each metric between scrub and trees.
4. Height (CHM) and roughness (VRI) were more influenced by annual change.
5. Density (LVI) was more influenced by seasonal variability.

Based on these results, it is clear that all three metrics represent different aspects of the riverscape vegetation as it changes both annually and seasonally. For any fluvial application that can utilize high-resolution survey data such as DLS, whether for ecological or hydraulic modeling, we recommend integrating all three metrics as possible explanatory variables to describe the dynamic physical changes occurring to vegetation over the riverscape.

Author Contributions: Conceptualization, J.P.R.; data curation, L.L.; formal analysis, J.P.R.; funding acquisition, W.C.H.; methodology, J.P.R. and L.L.; supervision, W.C.H.; writing—original draft, J.P.R.; writing—review and editing, W.C.H. and L.L. All authors have read and agreed to the published version of the manuscript.

Funding: This research was supported by an Instrumentation Discovery Travel Grant (IDTG) from the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI), sponsored by the National Science Foundation (NSF). This work was also supported by the Virginia Agricultural Experiment Station (Blacksburg, VA, USA) and the U.S. Department of Agriculture (USDA) National Institute of Food and Agriculture (Washington, DC, USA).

Acknowledgments: Thanks to everyone with the Virginia Tech StREAM Lab (vtstreamlab.weebly.com/ accessed on 6 July 2021), including Charles Aquilina for performing field measurements and Alexa Reed for extracting photos from the on-site tower camera.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Fausch, K.D.; Torgersen, C.E.; Baxter, C.V.; Li, H.W. Landscapes to Riverscapes: Bridging the Gap between Research and Conservation of Stream Fishes. BioScience 2002, 52, 483–498. [CrossRef]
2. Carbonneau, P.; Fonstad, M.A.; Marcus, W.A.; Dugdale, S.J. Making Riverscapes Real. Geomorphology 2012, 137, 74–86. [CrossRef]
3. Dietrich, J.T. Riverscape Mapping with Helicopter-Based Structure-from-Motion Photogrammetry. Geomorphology 2016, 252, 144–157. [CrossRef]

4. Farid, A.; Rautenkranz, D.; Goodrich, D.C.; Marsh, S.E.; Sorooshian, S. Riparian Vegetation Classification from Airborne Laser Scanning Data with an Emphasis on Cottonwood Trees. Can. J. Remote Sens. 2006, 32, 15–18. [CrossRef]
5. Heritage, G.; Hetherington, D. Towards a Protocol for Laser Scanning in Fluvial Geomorphology. Earth Surf. Process. Landf. 2007, 32, 66–74. [CrossRef]
6. Resop, J.P.; Kozarek, J.L.; Hession, W.C. Terrestrial Laser Scanning for Delineating In-Stream Boulders and Quantifying Habitat Complexity Measures. Photogramm. Eng. Remote Sens. 2012, 78, 363–371. [CrossRef]
7. Woodget, A.S.; Austrums, R.; Maddock, I.P.; Habit, E. Drones and Digital Photogrammetry: From Classifications to Continuums for Monitoring River Habitat and Hydromorphology. Wiley Interdiscip. Rev. Water 2017, 4, 1–20. [CrossRef]
8. Yang, B.; Hawthorne, T.L.; Torres, H.; Feinman, M. Using Object-Oriented Classification for Coastal Management in the East Central Coast of Florida: A Quantitative Comparison between UAV, Satellite, and Aerial Data. Drones 2019, 3, 60. [CrossRef]
9. Resop, J.P.; Lehmann, L.; Hession, W.C. Drone Laser Scanning for Modeling Riverscape Topography and Vegetation: Comparison with Traditional Aerial Lidar. Drones 2019, 3, 35. [CrossRef]
10. Huang, C.; Peng, Y.; Lang, M.; Yeo, I.-Y.; McCarty, G. Wetland Inundation Mapping and Change Monitoring Using Landsat and Airborne LiDAR Data. Remote Sens. Environ. 2014, 141, 231–242. [CrossRef]
11. Anders, N.S.; Seijmonsbergen, A.C.; Bouten, W. Geomorphological Change Detection Using Object-Based Feature Extraction from Multi-Temporal LiDAR Data. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1587–1591. [CrossRef]
12. Okyay, U.; Telling, J.; Glennie, C.L.; Dietrich, W.E. Airborne Lidar Change Detection: An Overview of Earth Sciences Applications. Earth-Sci. Rev. 2019, 198, 102929. [CrossRef]
13. Resop, J.P.; Hession, W.C. Terrestrial Laser Scanning for Monitoring Streambank Retreat: Comparison with Traditional Surveying Techniques. J. Hydraul. Eng. 2010, 136, 794–798. [CrossRef]
14. O’Neal, M.A.; Pizzuto, J.E. The Rates and Spatial Patterns of Annual Riverbank Erosion Revealed through Terrestrial Laser-Scanner Surveys of the South River, Virginia. Earth Surf. Process. Landf. 2011, 36, 695–701. [CrossRef]
15. Saarinen, N.; Vastaranta, M.; Vaaja, M.; Lotsari, E.; Jaakkola, A.; Kukko, A.; Kaartinen, H.; Holopainen, M.; Hyyppä, H.; Alho, P. Area-Based Approach for Mapping and Monitoring Riverine Vegetation Using Mobile Laser Scanning. Remote Sens. 2013, 5, 5285–5303. [CrossRef]
16. Day, S.S.; Gran, K.B.; Belmont, P.; Wawrzyniec, T. Measuring Bluff Erosion Part 1: Terrestrial Laser Scanning Methods for Change Detection. Earth Surf. Process. Landf. 2013, 38, 1055–1067. [CrossRef]
17. Flener, C.; Vaaja, M.; Jaakkola, A.; Krooks, A.; Kaartinen, H.; Kukko, A.; Kasvi, E.; Hyyppä, H.; Hyyppä, J.; Alho, P. Seamless Mapping of River Channels at High Resolution Using Mobile LiDAR and UAV-Photography. Remote Sens. 2013, 5, 6382–6407. [CrossRef]
18. Leyland, J.; Hackney, C.R.; Darby, S.E.; Parsons, D.R.; Best, J.L.; Nicholas, A.P.; Aalto, R.; Lague, D. Extreme Flood-Driven Fluvial Bank Erosion and Sediment Loads: Direct Process Measurements Using Integrated Mobile Laser Scanning (MLS) and Hydro-Acoustic Techniques. Earth Surf. Process. Landf. 2017, 42, 334–346. [CrossRef]
19. Andersen, H.-E.; McGaughey, R.J.; Reutebuch, S.E. Estimating Forest Canopy Fuel Parameters Using LIDAR Data. Remote Sens. Environ. 2005, 94, 441–449. [CrossRef]
20. Huang, W.; Dolan, K.; Swatantran, A.; Johnson, K.; Tang, H.; O’Neil-Dunne, J.; Dubayah, R.; Hurtt, G. High-Resolution Mapping of Aboveground Biomass for Forest Carbon Monitoring System in the Tri-State Region of Maryland, Pennsylvania and Delaware, USA. Environ. Res. Lett. 2019, 14, 095002. [CrossRef]
21. Mundt, J.T.; Streutker, D.R.; Glenn, N.F. Mapping Sagebrush Distribution Using Fusion of Hyperspectral and Lidar Classifications. Photogramm. Eng. Remote Sens. 2006, 72, 47–54. [CrossRef]
22. Arcement, G.J.; Schneider, V.R. Guide for Selecting Manning’s Roughness Coefficients for Natural Channels and Flood Plains; United States Geological Survey: Denver, CO, USA, 1989.
23. You, H.; Wang, T.; Skidmore, A.K.; Xing, Y. Quantifying the Effects of Normalisation of Airborne LiDAR Intensity on Coniferous Forest Leaf Area Index Estimations. Remote Sens. 2017, 9, 163. [CrossRef]
24. Kato, A.; Moskal, L.M.; Schiess, P.; Swanson, M.E.; Calhoun, D.; Stuetzle, W. Capturing Tree Crown Formation through Implicit Surface Reconstruction Using Airborne Lidar Data. Remote Sens. Environ. 2009, 113, 1148–1162. [CrossRef]
25. Jakubowski, M.K.; Li, W.; Guo, Q.; Kelly, M. Delineating Individual Trees from Lidar Data: A Comparison of Vector- and Raster-Based Segmentation Approaches. Remote Sens. 2013, 5, 4163–4186. [CrossRef]
26. Wu, X.; Shen, X.; Cao, L.; Wang, G.; Cao, F. Assessment of Individual Tree Detection and Canopy Cover Estimation Using Unmanned Aerial Vehicle Based Light Detection and Ranging (UAV-LiDAR) Data in Planted Forests. Remote Sens. 2019, 11, 908. [CrossRef]
27. Dorn, H.; Vetter, M.; Höfle, B. GIS-Based Roughness Derivation for Flood Simulations: A Comparison of Orthophotos, LiDAR and Crowdsourced Geodata. Remote Sens. 2014, 6, 1739–1759. [CrossRef]
28. Lefsky, M.A.; Harding, D.J.; Keller, M.; Cohen, W.B.; Carabajal, C.C.; Espirito-Santo, F.D.B.; Hunter, M.O.; de Oliveira, R. Estimates of Forest Canopy Height and Aboveground Biomass Using ICESat. Geophys. Res. Lett. 2005, 32. [CrossRef]
29. Zhou, T.; Popescu, S. Waveformlidar: An R Package for Waveform LiDAR Processing and Analysis. Remote Sens. 2019, 11, 2552. [CrossRef]
30. Chow, V.T. Open-Channel Hydraulics; McGraw-Hill: New York, NY, USA, 1959; ISBN 978-0-07-085906-7.

31. Prior, E.M.; Aquilina, C.A.; Czuba, J.A.; Pingel, T.J.; Hession, W.C. Estimating Floodplain Vegetative Roughness Using Drone-Based Laser Scanning and Structure from Motion Photogrammetry. Remote Sens. 2021, 13, 2616. [CrossRef]
32. Barilotti, A.; Sepic, F.; Abramo, E.; Crosilla, F. Improving the Morphological Analysis for Tree Extraction: A Dynamic Approach to Lidar Data. In Proceedings of the ISPRS Workshop on Laser Scanning 2007 and SilviLaser 2007, Espoo, Finland, 12–14 September 2007.
33. Wynn, T.; Hession, W.C.; Yagow, G. Stroubles Creek Stream Restoration; Virginia Department of Conservation and Recreation: Richmond, VA, USA, 2010.
34. Wynn-Thompson, T.; Hession, W.C.; Scott, D. StREAM Lab at Virginia Tech. Resour. Mag. 2012, 19, 8–9. [CrossRef]
35. YellowScan. YellowScan Surveyor: The Lightest and Most Versatile UAV LiDAR Solution. Available online: https://www.yellowscan-lidar.com/products/surveyor/ (accessed on 21 March 2020).
36. Isenburg, M. Processing Drone LiDAR from YellowScan’s Surveyor, a Velodyne Puck Based System. Available online: https://rapidlasso.com/2017/10/29/processing-drone-lidar-from-yellowscans-surveyor-a-velodyne-puck-based-system/ (accessed on 4 September 2021).
37. Wheaton, J.M.; Brasington, J.; Darby, S.E.; Sear, D.A. Accounting for Uncertainty in DEMs from Repeat Topographic Surveys: Improved Sediment Budgets. Earth Surf. Process. Landf. 2010, 35, 136–156. [CrossRef]
38. Williams, R. DEMs of Difference. Geomorphol. Tech. 2012, 2, 1–17.