MAPPING INVASIVE PLANTS IN THE OLD WOMAN CREEK ESTUARY USING REMOTE SENSING

Tharindu Hasantha Abeysinghe

A Thesis

Submitted to the Graduate College of Bowling Green State University in partial fulfillment of the requirements for the degree of

MASTER OF SCIENCE

May 2019

Committee:

Anita Simic Milas, Advisor

Andrew Gregory

Angélica Vázquez-Ortega

© 2019

Tharindu Abeysinghe

All Rights Reserved

ABSTRACT

Anita Simic Milas, Advisor

The application of remote sensing techniques to mapping, classifying, and monitoring land cover, land use, and vegetation has been popular among researchers and scientists for several decades, and it has become more productive and economical in recent years with the advancement of information technology. Currently, remote sensing is a widely used, effective technique that provides spatial and temporal information about vegetation and invasive species in wetlands. The first objective of this study was to assess the effectiveness of data obtained via Unmanned Aerial Vehicles (UAVs) to identify invasive Phragmites australis in the Old Woman Creek (OWC) estuary in Ohio. Secondly, the study aimed to determine the most suitable algorithm to distinguish between Phragmites australis and other vegetation types using pixel based and object based classification methods and a combination of feature layers derived from the UAV images.

Pixel based classification was found to perform better than object based classification. The pixel based Neural Network (NN) was identified as the best classifier to map Phragmites in OWC, with the lowest error of omission of 1.59% and an overall accuracy of 94.80%, based on the Sequoia image acquired in August that was stacked with the Canopy Height Model (CHM) from August and the NDVI derived from UAV data acquired in October (NDVIOct). The study emphasizes the necessity of a suitable sampling method and the use of optimum parameters for non-parametric classifiers. The study provides future directions for data acquisition, recommending that Phragmites be mapped in early and mid-summer to support its effective eradication in the OWC estuary.

This thesis is dedicated to
all my teachers, who have always been a source of support and encouragement
in building my career and shaping my life.

ACKNOWLEDGEMENTS

I would like to thank everyone who supported me professionally and personally in this endeavor. First and foremost, I am indebted to my advisor and the committee for their expertise and thoughtful critiques, which made this thesis a success. I extend my gratitude to my advisor Dr. Anita Simic Milas for her guidance, encouragement, advice, generosity, and patience throughout this research project. I also appreciate the guidance, advice, and insights received from my committee members, Dr. Andrew Gregory and Dr. Angélica Vázquez-Ortega, to improve the outcome of the research. It has been a privilege to work with them.

Secondly, I express my sincere thanks to Dr. Kristin Arend, Research Coordinator, Old Woman Creek National Estuarine Research Reserve, for granting me permission to collect data and for her support during the fieldwork. I also appreciate the support of Ms. Breann Hohman, Firelands Coastal Tributaries (FCT) Watershed Coordinator and Assistant District Director, who shared her ecological knowledge of and experience with the estuary. I am grateful to Mr. Alex Fried for sharing his knowledge and experience with UAVs and for the time he spent teaching me how to fly a UAV. I also extend my thanks to the staff members of the Old Woman Creek Estuarine Research Reserve for supporting me in numerous ways.

Thirdly, I would like to thank the faculty of the Departments of Geology and Geography, Bowling Green State University (BGSU), for providing me with a positive educational and leadership experience during my time as a graduate student, research assistant, and teaching assistant. I want to make a special mention of Dr. Yu Zhou of the Department of Geography for his valuable support and understanding while I was working as a teaching assistant under his supervision. I am grateful to the Department of Geology, BGSU, for providing me with financial support through the BGSU Geology Foundation Fund during my fieldwork. Further, I express my gratitude to Mr. William A. Butcher of the Department of Geology, BGSU, for providing technical support. I also sincerely thank the Graduate Coordinator, Dr. Kurt Panter, the administrative staff of the Department of Geology, and the staff of the BGSU library.

Further, I wish to thank my friends at BGSU and elsewhere. Special thanks go to Patrick Reil for his enormous support during the fieldwork. Prabha Rupasinghe, Dayal Wijayarathne, and Nayani Illangakoon helped me to find relevant literature. Katerina Konstantinidis and Nicholas Faust assisted me with data processing. Anuruddha Marambe and Asanga Ramanayake shared their experience and views about the analytical methods. Sureni Sumathipala, Suthakaran Ratnasingam, and Amila Weerasinghe offered their friendly support during the completion of this project. I appreciate the generous support I received from each of them.

Moreover, I would like to express my appreciation for the support and encouragement received from my cousins Samudra and Tissa to finalize this thesis. I am especially grateful to my parents, my two sisters Chathuri and Darshi, and my brother-in-law Kenath, who gave me emotional strength all the time from the other side of the world. Finally, my special thanks go to Nathasha for her continued support and, most importantly, for her love, understanding, and patience while I have been away.

TABLE OF CONTENTS

Page

CHAPTER I: INTRODUCTION………………………………………………………...... 1

1.1 Background…………………………………………………………………….. 1

1.2 Problem Statement……………………………………………………………... 4

1.3 Goal and Objectives of the Study……………………………………………… 4

CHAPTER II: LITERATURE REVIEW………………………………………………….. 6

2.1 Remote Sensing………………………..………………………………………. 6

2.2 Remote Sensing Platforms and Satellites………………………………………. 7

2.3 Importance of Wetlands………………………………………………………... 8

2.3.1 Coastal Wetlands of the Great Lakes………………………………… 10

2.3.2 Invasive Plants in Wetlands………………………………………….. 11

2.4 Remote Sensing for Land Cover and Wetland Mapping………………… 12

2.4.1 Mapping Invasive Plants Using Remote Sensing……………………. 14

2.5 UAV Applications in Vegetation Classification………………………………... 18

2.6 Classification Methods and Algorithms (Classifiers)…………………………... 19

2.6.1 Pixel and Object Based Image Classification………………………… 22

2.6.2 Image Classification Algorithms……………………………………… 24

2.7 Layer Features – Derived Image Layers to Enhance Classification……………. 27

2.7.1 Vegetation Indices…………………………………………………….. 27

2.7.2 Texture Features………………………………………………………. 28

2.7.3 Data Fusion of Vertical and Horizontal Data…………………………. 28


CHAPTER III: DATA AND METHODS…………………………………………………. 30

3.1 Study Area……..……………………………………………………………….. 30

3.2 Field Data Collection…………………………………………………………… 34

3.2.1 UAV Imagery Acquisition……………………………………………. 34

3.2.2 Spectroradiometer and GPS Measurements………………………….. 35

3.3 UAV Data Processing…..………………………………………………………. 37

3.3.1 UAV Image Pre-processing…………………………………………… 37

3.3.2 Derived UAV Products……………………………………………….. 38

3.3.2.1 Band Indices………………………………………………… 39

3.3.2.2 Image Texture………………………………………………. 40

3.3.2.3 Principal Components………………………………………. 40

3.3.2.4 Canopy Height Model………………………………………. 40

3.4 Intermediate Steps in the Analysis – Masking Process to Enhance the

Classification…………………………………………………………………… 41

3.4.1 Sampling Design……………………………………………………… 41

3.4.2 Workflow of the Study……………………………………………….. 42

3.5 Image Classification Algorithms and Accuracy Assessment..…………………. 44

3.5.1 Image Classification Algorithms……………………………………… 45

3.5.2 K-fold Cross-Validation and Accuracy Assessment………………….. 47

CHAPTER IV: RESULTS…………………………………….……………………………. 49

4.1 Image Pre-processing and Masking…………………………………………….. 49

4.2 Spectral Library (Spectral Signatures)….………………………………………. 52

4.3 ANOVA Results: NDVI for August and October………………………………. 54

4.4 Parametrization Using Four Original Bands with No Additional Feature

Layers (4sq).…………………………………………………………………….. 55

4.5 Classified Maps and Accuracy Assessments Using Different Combinations

of Feature Layers for each Proposed Classifier………………………………… 56

4.5.1 Pixel Based Classification.…………………………………………… 56

4.5.2 Object Based Classification.…………………………………………. 64

CHAPTER V: DISCUSSION……………………………………………………………… 69

5.1 UAVs in Identifying Invasive Plants…………………………………………… 69

5.2 Importance of Sampling Methods in Classification and Validation……………. 72

5.3 Classifier Algorithms for Identifying Invasive Phragmites……………………. 74

5.4 Recommendations for Phragmites Eradication………………………………… 78

5.5 Uncertainties in the Study………………………………………………………. 79

CHAPTER VI: CONCLUSION.....………………………………………………………… 81

REFERENCES……………………………………………………………………………… 83

APPENDIX A: PARAMETER OPTIMIZATION TABLES….…………………………… 108

APPENDIX B: CLASSIFIED IMAGES…………………………………………………… 110

LIST OF FIGURES

Figure Page

1 Types of remote sensors...... 7

2 Study area located on the southern shoreline of Lake Erie ...... 31

3 The common plants in the study site...... 33

4 Workflow of the study ...... 43

5 The sample splitting method for three-fold cross-validation ...... 48

6 Sequoia and RGB camera images ...... 49

7 RGB camera images of Lotus, Lily, Phragmites, and Cattails ...... 50

8 The Sequoia mosaic taken on August 8, 2017 ...... 51

9 Enlarged image subset ...... 51

10 Image with tall trees on the estuary bank and image masked with the CHM ...... 52

11 Spectral signatures of the plant types in the estuary ...... 53

12 Pixel based NN classification results ...... 61

13 Zoomed Phragmites patch classified with NN classifier ...... 62

14 Pixel based ML classification results ...... 63

15 Object based SVM classification results...... 66

16 A Phragmites patch observed in different images ...... 67


LIST OF TABLES

Table Page

1 Attributes of urban/suburban and natural landscapes with their corresponding

minimum spatial and spectral resolution requirements ...... 13

2 Summary of recent studies that have classified invasive species using remote

sensing data ………………………………………………………………………… 17

3 Summary of remote sensing classification techniques ...... 21

4 Wavelength ranges of Sequoia camera ...... 35

5 Normalized and simple band indices used in the study ...... 39

6 Mean NDVI values of 10 pixels for each Phragmites, Cattails, and Lotus from

the August 8 and October 18 images ...... 54

7 Results of ANOVAs performed among NDVI values of Phragmites, Cattails,

and Lotus generated for the August 8 and October 18 images ...... 54

8 Results of Tukey Kramer test among NDVI values of Phragmites, Cattails,

and Lotus for October 18 ...... 55

9 Optimum parameter values selected for each pixel and object based classifier ...... 56

10 Confusion matrix of 4sq using three-fold cross-validation ...... 56

11 Errors of commission and omission for each class of the 4sq image averaged

after the three-fold cross-validation iterations ...... 57

12 Overall accuracy (O.A.), Kappa value, errors of commission and omission for

Phragmites class for each layer separately stacked to 4sq ...... 58

13 Overall accuracy (O.A.) and Kappa value for classification with stacking feature

layers to 4sq image - pixel based ...... 60

14 Errors of commission and omission for Phragmites class - pixel based ...... 60

15 Phragmites classification accuracy (%) with stacking feature layers - pixel based 63

16 Overall classification accuracy and Kappa value with stacking feature layers

to 4sq image - object based ...... 64

17 Errors of commission and omission for Phragmites class - object based...... 65

18 Phragmites classification accuracy (%) with stacking feature layers - object based 65


ABBREVIATIONS

BRDF – Bidirectional Reflectance Distribution Function
CHM – Canopy Height Model
DSM – Digital Surface Model
DTM – Digital Terrain Model
GIS – Geographical Information System
GLCM – Gray Level Co-occurrence Matrix
GPS – Global Positioning System
kNN – k Nearest Neighbor
LiDAR – Light Detection and Ranging
MAP – Morphological Attribute Profile
ML – Maximum Likelihood
NDGI – Normalized Difference Green Index
NDRE – Normalized Difference Red Edge Index
NDVI – Normalized Difference Vegetation Index
NERRS – National Estuarine Research Reserve System
NIR – Near-Infrared
NN – Neural Network
NOAA – National Oceanic and Atmospheric Administration
OWC – Old Woman Creek
RADAR – Radio Detection and Ranging
RBF – Radial Basis Function
RGB – Red, Green, Blue
ROI – Region of Interest
SAVI – Soil Adjusted Vegetation Index
SVM – Support Vector Machine
UAV – Unmanned Aerial Vehicle
USGS – United States Geological Survey


CHAPTER I: INTRODUCTION

1.1 Background

In recent decades, scientific data collection and analysis techniques have been changing with technological innovations, paving the way to detailed information and more profound insights into natural phenomena. The increased use of satellite and airborne imagery and the availability of digital databases, together with the development of computer technology, especially processing speed, data capacity, and software, have advanced research at multiple scales over relatively large geographic areas in a cost-effective way (Jones, 2008; Rogan & Chen, 2004). Today, with a wide range of remote sensing instruments of finer spatial resolution, more accurate data are acquired, making natural phenomena easier to understand.

Remote sensors collect information about the conditions of an object or area from ground-based, airborne, or orbital platforms over a particular time period. Different materials absorb, transmit, and reflect different amounts of energy in different wavelength regions of the electromagnetic spectrum (Richards, 2013; Sabins, 2007). Optical remote sensors detect the energy reflected from objects, providing spectral information across the spectral range (a.k.a. the spectral signature) over a spatial unit of land area. This unique pattern of variation in reflected energy with respect to wavelength is commonly used to differentiate an object from its surrounding area.

Remote sensing techniques have been used for mapping, classifying, and monitoring land cover, vegetation, and land use for many years (Anderson et al., 1976). In recent years, with the increasing demand for quality of information and technology (Rogan & Chen, 2004), better linkages of remote sensing and spatial data to Global Positioning System (GPS) and Geographic Information System (GIS) data, as well as better modeling capabilities, this technology presents a practical and economical way to study land cover over large areas (Franklin, 2001; Nordberg & Evertson, 2003).

Remote sensing has been widely used in different wetland related studies, including land cover changes, the carbon cycle and the release of carbon by peatland fires, the impact of climate warming on wetland flora and fauna, and hydrology (Guo et al., 2017). Mapping wetlands is critical to acquire information related to the impact of global climate change and to understand the human impact on natural wetland ecosystems in order to plan, manage, and protect the resources (Lam, 2008; Xie et al., 2008). The use of remote sensing in monitoring wetlands has advantages over field-based methods due to its cost effectiveness and its capability of collecting data in inaccessible areas, commonly with user-defined time intervals (Gallant, 2015; Mahdavi et al., 2018).

Although advantageous over large areas, remote sensing data collected over wetlands face several challenges due to the high variability of wetland vegetation, the similarity of spectral responses from different vegetation types and species, and frequent water level fluctuations, which make it difficult to identify wetland boundaries (Lane et al., 2014; Mahdavi et al., 2017). Besides, water between plants and soil, observed in shallow areas, commonly affects the reflectance spectra of wetland plants (Adam et al., 2010).

Many commercially available satellite and airborne sensors provide images with an insufficient spatial resolution for wetland mapping, where small patches of vegetation are commonly not captured. The ultra-high resolution images obtained from Unmanned Aerial Vehicles (UAVs) have become more popular in mapping vegetation in recent years as they provide solutions for those drawbacks. The main reasons for this popularity are that 1) revisit periods can be user controlled; 2) sensors can observe the ground from more proximal positions due to low altitude; 3) finer spatial resolution data can be collected; and 4) the technology is cost-effective (Anderson & Gaston, 2013). This sophisticated and revolutionary technology enhances vegetation related research because of the availability of detailed spectral information derived at local scales (Mesev & Walrath, 2007; Salami et al., 2014).

Moreno-Mateos et al. (2012) described wetlands as one of the most productive and economically valuable ecosystems in the world. Wetlands are vital ecosystems that support diverse habitats for flora and fauna, enhance water quality in lakes and rivers, and maintain the stability of the global environment (Verhoeven & Meuleman, 1999). Wetlands also absorb excess rainwater during high precipitation periods and slow down water flows through trees and roots, reducing flood hazards (Mitsch & Gosselink, 2000; Postel, 2003). Furthermore, wetlands filter sediments and toxic substances and store carbon, which reduces carbon emissions (Kayranli et al., 2010). Many wetlands also offer numerous recreational opportunities (Ozesmi & Bauer, 2002). There were an estimated 110.1 million acres of wetlands in the United States of America (USA) in 2009; 95% of them were freshwater, whereas the rest belonged to marine or estuarine saltwater systems (Dahl, 2011).

Wetlands have been degraded throughout history, being altered or removed from the landscape due to climate factors and direct or indirect human influence. Due to their vital ecological role, wetland conservation and restoration have been a focus of research and management strategies since the 1970s. Protection of water quality, fisheries, wildlife, and overall coastal ecosystem health has been a concern of federal and state agencies for some time, including the National Oceanic and Atmospheric Administration (NOAA) and the National Estuarine Research Reserve System (NERRS).

1.2 Problem Statement

Mapping, identification, and classification of plant types and species are vital in planning, restoring, and managing wetlands. In particular, capturing the distribution of non-native plant species and controlling invasive plant species is a significant challenge that wetland managers and policy makers face (Zedler & Kercher, 2004). There are large-scale efforts, including rapid identification, response, and control actions, being implemented in response to the spread of invasive plant species in wetlands. Early identification and accurate information about the distribution of invasive species are necessary to assess and control the negative impacts of invasive plants. Remote sensing is a widely used, effective technique, which provides spatial and temporal information about vegetation and invasive species in wetlands.

Many of the studies on wetland remote sensing have used aerial photographs of various resolutions, hyperspectral imagery, RAdio Detection And Ranging (RADAR), and Light Detection And Ranging (LiDAR) data. However, the use of UAVs for wetland vegetation classification is still an emerging area of study. This study uses several classification methods to map vegetation, primarily the invasive Phragmites australis, in the Old Woman Creek (OWC) estuary, located in Ohio. The study focuses specifically on the optimum use of sample design, feature layers derived from UAV data, and classification algorithms (i.e., classifiers).

1.3 Goal and Objectives of the Study

The goal of this research project is to explore the effectiveness of UAVs in mapping invasive plants in the OWC National Estuarine Research Reserve, Ohio. The finer spatial resolution (13 cm) of the UAV multispectral imagery, combinations of different feature layers derived from UAV data, and the use of two images from the summer and fall may improve the classification methods and the extraction of invasive Phragmites australis in the estuary.

The objectives of this study are:

1. Assess the effectiveness of the UAV data to identify invasive Phragmites australis in the OWC estuary;

2. Identify the most suitable algorithm to distinguish between invasive Phragmites australis and other vegetation types using 1) pixel based and 2) object based classification methods and various combinations of feature layers derived from the UAV images.


CHAPTER II: LITERATURE REVIEW

2.1 Remote Sensing

Remote sensing is a method that acquires data about an object, area, or phenomenon with a sensor that is not in direct contact with it (Lillesand et al., 2013), using ground-based instruments or airborne or satellite (Earth orbiting) sensors and platforms. The technology has diverse applications and is widely used in fields such as agriculture, ecology, hydrology, geology, and urban studies, owing to several advantages, including broad coverage of data acquisition and temporal repetition, minimizing the necessity of field work.

The development of remote sensing technology has been driven by three interrelated factors: advances in sensor technology and data quality, improved and standardized remote sensing methods, and research applications (Franklin, 2001). The data often contain distortions and incorrect pixel values due to the uneven shape of the earth's surface, irregular motions of the sensors, the wavelength dependence of light, and adverse atmospheric conditions (Richards, 1999). Geometric, radiometric, and atmospheric corrections are applied during the image processing stage to overcome these distortions. Only corrected images are used for further processing, analysis, and interpretation.

Interpretations are made with methods such as image classification and modeling, where various enhancement methods are used to improve the image interpretation process. Data fusion of images, which includes combining images acquired through time, the inclusion of elevation measures, and the extraction of features such as band indices, textural measurements, and principal components, are additional techniques used to improve interpretation and classification. As remote sensing data analysis advances, data integration methods such as multi-sensor and temporal data fusion are becoming popular to enhance the extraction of information from raw remote sensing data.

2.2 Remote Sensing Platforms and Satellites

Remote sensing instruments collect information on a broader range of the electromagnetic spectrum than the human eye can observe. They are classified into several categories based on the aspects of data acquisition. Based on the use of energy, remote sensors are divided into two categories: active sensors and passive sensors (Kerle et al., 2004).

Active sensors operate in the microwave portion of the electromagnetic spectrum, while passive sensors commonly operate in the visible, infrared, and thermal infrared portions (Figure 1).

Figure 1: Types of remote sensors. Note: non-imaging (i.e., point) sensors are typically ground based sensors. Source: Zhu et al., 2017


Earth observation satellites such as Landsat, SPOT, Pleiades, EROS, GeoEye, and WorldView (Zhu et al., 2018), as well as UAVs' optical cameras, are passive sensors; they collect data by receiving naturally available reflected energy (e.g., reflected sunlight) from objects on the earth's surface (Turner et al., 2003). Optical sensors are commonly classified as multispectral or hyperspectral, depending on the number of spectral bands in the datasets (Lillesand et al., 2013).

Based on the distance from the sensor to the studied object, remote sensors are grouped into different platforms, namely, ground based, spaceborne, and airborne, including UAVs. Ground based sensors record detailed information about a point or small surface area, whereas aerial/airborne sensors acquire data over local or regional areas at a desired time. Comparatively, satellites provide continuous, repetitive coverage of the earth's surface.

Spaceborne and airborne manned sensors provide data with a broader field of view compared to the near-surface operations carried out by UAVs; however, UAVs provide data with high spatial resolution at a low cost and with flexible acquisition times. UAVs are mostly used in studies with specific objectives in small areas of interest (De Castro et al., 2017; Salami et al., 2014; Samiappan et al., 2017; Simic Milas et al., 2017; Simic Milas et al., 2018).

2.3 Importance of Wetlands

Although there is no well-established definition of wetlands due to their variety of landscape settings, water regimes, and geomorphology (Sharitz & Batzer, 1999), wetlands are commonly defined as areas that are saturated or inundated with water for a period of time long enough to maintain specific physical, chemical, and biological properties and tolerate flood effects (Keddy, 2010). The boundary between a wetland and the surrounding area cannot be delineated precisely because of constantly changing water levels, where seasonal changes are the key factor for maintaining the wetland ecosystem (Sharitz & Batzer, 1999).

The importance of wetlands includes carbon storage, water purification, flood mitigation, shoreline surge protection, and biodiversity conservation (Moreno-Mateos et al., 2012; Sghair & Goma, 2013). Although wetlands cover a significantly small percentage of the earth's land surface, they contain a large quantity of the world's carbon stored in their water bodies. The accumulated peat stores a significant amount of carbon, reducing the global carbon budget and the greenhouse effect (Kayranli et al., 2010).

Secondly, wetlands act as water purification filters in which sediments are trapped, organic matter dissolved in the water is broken down, and nutrients are absorbed by the tall emergent vegetation (Verhoeven & Meuleman, 1999). Through this filtering system, wetlands remove sediments, pollutants, and toxic substances that would harm aquatic life in lakes and rivers.

Thirdly, wetlands play an important role in flood control and shoreline surge protection. Wetland plants slow the excess water of heavy rains and store it in low-lying areas, which helps to prevent overflowing water from damaging lives and property (Mitsch & Gosselink, 2000). Also, during high winds, wetlands work as a storm buffer: the vegetation cover buffers the storms and protects land from erosion.

Most importantly, wetlands host greater species diversity and nutrient cycling than any other ecosystem in the world. They provide habitats and food for highly diverse flora and fauna that may change throughout a season due to varying water regimes (Bergkamp & Orlando, 1999). This includes a wide range of species such as fish, waterfowl, mammals, aquatic insects, invertebrates, reptiles, and plants, among them many endangered and threatened species. Wetlands are fish production nurseries and wildlife pantries, and migratory birds rest and feed in wetlands during their seasonal journeys. In addition, wetlands offer numerous recreational opportunities and services.

2.3.1 Coastal Wetlands of the Great Lakes

Water regime is the main determinant of wetland vegetation. In the Great Lakes region, fluctuations of the water levels of wetlands along the shoreline occur at different scales: short-term, seasonal, and inter-annual fluctuations influence vegetation in different ways (Herdendorf, 1992). There are nine types of vegetation zonation and key species identified in the Great Lakes:

Lake Superior Poor Fen, Northern Rich Fen, Northern Great Lakes Marsh, Green Bay Disturbed Marsh, Lake Michigan Lacustrine Estuaries, Saginaw Bay Lakeplain Marsh, Lake Erie-St. Clair Lakeplain Marsh, Lake Ontario Lagoon Marshes, and St. Lawrence River Estuaries. Herdendorf (1992) classified the coastal wetlands of Lake Erie into several categories, in terms of the protection type available to the wetland vegetation: coastal lagoons, behind-barrier beaches, managed marshes protected by earthen and rip-rap dikes, and estuarine tributary mouths.

Estuaries are a unique type of coastal wetland: they are bodies of water surrounded by wetlands. They mainly consist of aquatic habitats and are found in areas where rivers meet sea or lake water (McLusky et al., 2004). Freshwater estuaries occur where river or stream water drains into a massive freshwater body, such as the Great Lakes (Herdendorf, 1990).

Estuaries contain diverse habitats for terrestrial and aquatic wetland plant and animal communities; thus, estuaries are known for high biodiversity, where some species are adapted specifically to the wetland environment (Day et al., 2013). Estuaries are commonly protected from the extreme water level fluctuations and wave actions of the adjacent water bodies by barrier features such as sand islands and bars. In that sense, estuaries are naturally isolated from lake storms. Being directly connected to surrounding wetland areas, they are under the same environmental threat from the impacts of human activities, climatic changes, and invasive species (Day, 1989; Herdendorf, 1990; Kennish, 2002).

2.3.2 Invasive Plants in Wetlands

Invasive plants in a given region can be defined as non-native plants that tend to spread over large areas, causing harm to the existing ecosystem's health (Boresma et al., 2006). These alien plants affect the composition and function of both natural and managed ecosystems, with substantial economic costs from lost or degraded land use and from eradication efforts (Gray et al., 2011; Vilá et al., 2011).

These plants spread aggressively once introduced to a new habitat, and some of them become dominant in only a few years (Callaway & Aschehoug, 2000). A powerful reproductive system and a high survival rate allow invasive species to spread and grow rapidly in a variety of soil types and weather conditions (Blossey & Notzold, 1995). Their impact on native plants is mainly observed through ecosystem changes in which soil structure and nutrient cycles are altered (Weidenhamer & Callaway, 2010). For example, the dense mats formed by some invasive plant types block shallow waterways, increase sedimentation, and create habitats favorable to mosquito breeding (Sainty et al., 1997).

The Laurentian Great Lakes have been subjected to invasions of exotic species since the early 1800s, when European settlement started (Mills et al., 1993). Mills et al. (1993) show that 86% of the 139 established non-native aquatic organisms documented (42% of which are plants) are native to Eurasia. According to NOAA (2011), over 180 documented invasive species have entered the Great Lakes, many of them since the opening of the St. Lawrence Seaway to the Atlantic Ocean in 1959. However, there is some confusion in identification due to morphological similarities between native and invasive types. Phragmites australis (Common Reed), in particular, was earlier considered a native plant.

Phragmites australis is a tall perennial grass that has aggressively dispersed over eastern North America during the last two decades (Chambers et al., 1999; Hudon et al., 2005). The Phragmites haplotype M, which was introduced from Eurasia, has been rapidly replacing the native types and other local plants in most North American wetlands (Samiappan et al., 2017b). Phragmites australis forms homogenous, dense stands, which reduce plant and faunal diversity in ecosystems (Hudon et al., 2005). It disperses to new areas predominantly by seed germination and spreads asexually by stolons or rhizomes around existing patches (Mal & Narine, 2004; Mauchamp et al., 2001). Dense Phragmites australis patches reduce the quality of habitats for fish and other species, especially by drying out the littoral zones of marshes through increased sedimentation (Leonard et al., 2002; Meyer et al., 2001).

2.4 Remote Sensing for Land Cover and Wetland Mapping

Land cover is the description of the natural or man-made cover of the Earth's surface (Coffey, 2013). Changes in land cover have localized impacts that can scale up to global atmospheric and climatic changes (Meyer & Turner, 1992). Understanding land cover is essential in the planning and decision making process for a given area (Coffey, 2013).

The use of remote sensing for land cover mapping in the USA goes back to the 1970s (Anderson et al., 1976). Since then, remote sensing applications developed around moderate spatial resolution imagery suited to studying large vegetation patches rather than mapping details (Wallace et al., 2016). Today remote sensing is used in many different environments, for purposes such as mapping vegetation canopy and shrub cover, change in urban/suburban cover, wetland monitoring, and crop mapping and monitoring (Rogan & Chen, 2004).

Table 1: Attributes of urban/suburban and natural landscapes with their corresponding minimum spatial and spectral resolution requirements

Attribute | Minimum spatial resolution | Minimum spectral resolution
Urban/suburban attribute – Land cover/use: | |
Level I: USGS | 20-100 m | V-NIR-MIR-Radar
Level II: USGS | 5-20 m | V-NIR-MIR-Radar
Level III: USGS | 1-5 m | Panchromatic-V-NIR-MIR
Level IV: USGS | 0.25-1 m | Panchromatic
Natural attribute – Forest class: | |
Level I: Land cover | 20 m-1 km | V-NIR-MIR-Radar
Level II: Cover types | 10-100 m | V-NIR-MIR-Radar
Level III: Species dominance | 1.0-30 m | Panchromatic-V-NIR-MIR
Level IV: Species identification | 0.1-2.0 m | Panchromatic

Source: Rogan & Chen (2004)

As Table 1 shows, the specifications of the remote sensors selected to map land cover depend on the characteristics of the study area. As a study becomes more specific, the sensor specifications also need to become more demanding to accomplish the objectives accurately.

The tradeoffs among the spatial extent, pixel size (spatial resolution), the number of spectral bands available and their ranges (spectral resolution), and the frequency of data collection (temporal resolution) of remote sensors affect the quality, quantity, and timeliness of remote sensing imagery (Bradley, 2014; Rogan & Chen, 2004). In other words, the level of detail in the images is limited by the combination of these properties. Therefore, remote sensing data for land cover mapping should be chosen considering the optimum combination of spatial, spectral, and temporal resolution, because the characteristics of plants must be well represented by the remote sensing images in order to classify the plant types accurately (Müllerová et al., 2016). For example, invasive plant species should exhibit a unique spectral, textural, or phenological signal that differs from the neighboring native plants (Bradley, 2014; Müllerová et al., 2017; Shouse et al., 2013).

Wetland landscapes are complex; therefore, high spatial resolution (e.g., <1 m) images should be used. For example, Landsat TM was found to be more effective than the SPOT multispectral sensor at distinguishing shrubs from other wetland vegetation in Northern California (Basham May et al., 1997). However, neither sensor distinguished effectively among different wetland vegetation types because of their spatial resolutions, suggesting that finer spatial resolution is needed for species detection (Basham May et al., 1997). Dronova (2015) stated the necessity of high spatial resolution imagery to map wetlands to compensate for the spectral similarity among plant types and the possible effects introduced by soil moisture variability. Further, the boundaries of fine scale wetland features and class boundaries can be delineated more accurately with high spatial resolution images than with low and medium resolution satellite images (Bradley, 2014; Dronova, 2015; Laliberte et al., 2010; Müllerová et al., 2017).

2.4.1 Mapping Invasive Plants Using Remote Sensing

Hyperspectral data provide more detail on the spectral characteristics of plants than multispectral sensors, owing to their continuous spectral band configuration (Lawrence et al., 2006).

For example, the CASI-1500 and AHS airborne hyperspectral sensors, used to identify the invasive plant Spartina densiflora in a wetland, showed promising results with four spectral target detection algorithms (Bustamante et al., 2016).

The hyperspectral imagery of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), with 224 bands (4-20 m spatial resolution), is capable of mapping invasive plants with high overall accuracy on a large scale (Underwood et al., 2003). However, AVIRIS hyperspectral data are not sufficient to map small and highly heterogeneous areas of invasive plants due to their inadequate spatial resolution (Müllerová et al., 2017).

The commercially available high resolution (sub-meter spatial resolution, e.g., WorldView) satellites provide more spatially detailed images with small geometric distortion (Zhou & Li, 2000). A classification study conducted to distinguish emergent invasive plants in a diked wetland in the western basin of Lake Erie using QuickBird (2.4 m spatial resolution) images resulted in an overall accuracy of 64% (Ghioca-Robrecht et al., 2008). Several studies suggest that spatial resolution may be more important than spectral resolution in detecting invasive species. QuickBird has shown its ability to distinguish long, narrow patches of invasive plants (Phragmites australis and Typha) with its finer spatial resolution (Ghioca-Robrecht et al., 2008), while the hyperspectral Hyperion imagery (30 m spatial resolution) was not successful in identifying the small, linear arrangements of Phragmites australis on the west coast of the Green Bay shoreline (Pengra et al., 2007).

Distribution maps of the invasive plant Hakea sericea produced with WorldView 2 (WV2) images showed an overall accuracy of 80.98% and a Kappa value of 0.77 (Alvarez-Taboada et al., 2017). However, the authors noted that the maps were not suitable for detecting Hakea sericea at early stages of invasion due to the insufficient spatial resolution of WV2 images (~0.5 m) (Alvarez-Taboada et al., 2017).

The somewhat higher spectral resolution of Sentinel 2 data has also been explored for invasive species detection in several studies, which noted the need to investigate the capability of the four Sentinel 2 red edge bands for fine scale vegetation mapping. The Sentinel 2 satellites comprise red, green, blue, and near-infrared bands (10 m spatial resolution) and four narrow red edge bands (20 m spatial resolution). Ng et al. (2017) pointed out that the freely available images and processing software and the temporal resolution of 5 days encourage researchers to explore the scope of Sentinel 2 data in invasive plant studies. However, the capability of monitoring invasive plants with the recently launched Sentinel 2 satellites is not yet well explored. Table 2 summarizes recent studies on invasive plants using different remote sensing sensors.

A major drawback of using commercially available high resolution satellite data is the high cost of the images and the pre-ordering process related to data acquisition; thus, the use of UAVs for detecting invasive species is seen as an economical way of obtaining remote sensing images at any desired time. The ability to manipulate the temporal and spatial resolutions and the availability of a range of payloads for particular tasks are major reasons to choose UAVs over satellite images for remote sensing applications (Pande-Chhetri et al., 2017; Salami et al., 2014). UAVs are capable of acquiring very high spatial resolution data (<10 cm) with a user defined flight plan and flexible revisit time, in contrast to the fixed revisit periods of satellites (Lechner et al., 2012).


Table 2: Summary of recent studies that have classified invasive species using remote sensing data

Invasive Species | Study Area | Type | Remote Sensing Sensors | Spatial Resolution | Spectral Resolution | Source
Hoary Cress | Idaho | 0.5 m to 1 m tall rhizomatous perennial | HyMap (hyperspectral) | 3 m | 450-2500 nm, channels at 15-20 nm each | Mundt et al. (2005)
Iceplant, jubata grass | California | Iceplant – mat-forming succulent; jubata grass – grass | AVIRIS (hyperspectral) | 4.3 m | 174 channels: 374-394 nm, 1133-1464 nm, 1773-2051 nm | Underwood et al. (2003)
Cheatgrass (Bromus tectorum) | North Dakota | Grass | Landsat MSS, TM, ETM | 30 m | 1 thermal band at 10400 nm, 3-7 spectral bands from 400 nm to 2550 nm | Bradley & Mustard (2006)
Water caltrop, Phragmites australis, Purple loosestrife | New York | Water caltrop – submerged stem, leaves floating on the surface; Phragmites – large perennial grass; Purple loosestrife – herbaceous | Quickbird, photo interpretation of aerial photos (multispectral) | Quickbird = 2.4 m | 450-890 nm (non-continuous), 4 channels | Laba et al. (2008)
Late goldenrod | Japan | Clonal growth, seed dispersal, highly competitive | AISA Eagle (hyperspectral) | 1.5 m | 67 channels, 397 nm to 983 nm, 8.9 nm each | Ishi & Washitani (2013)
Chinese privet | North Carolina | Understory shrub | Optech ALTM Gemini 3100 LiDAR | 1 point/m2 | N/A | Singh et al. (2015)
Leafy spurge, spotted knapweed | Montana | Leafy spurge – herbaceous perennial; Spotted knapweed – herb | Probe-1 (hyperspectral) | 3 m | 128 channels, 440 nm to 2507 nm | Lawrence et al. (2006)

AVIRIS = Airborne Visible/Infrared Imaging Spectrometer, MSS = Multispectral Scanner, TM = Thematic Mapper, ETM = Enhanced Thematic Mapper, AISA = Advanced Imaging Spectrometer for Applications

Source: Liu (2018)

2.5 UAV Applications in Vegetation Classification

A UAV is defined as an aircraft without a human pilot on board and is a component of an unmanned aircraft system. The development and use of UAVs for civilian and research purposes began to become popular among smaller research organizations in the late 1990s (Laliberte et al., 2010; Moguel et al., 2017; Watts et al., 2012). Vegetation classification is one of the most popular recent applications of UAV data. Several recent studies (Berni et al., 2009; Komarek et al., 2018; Müllerová et al., 2017; Nipadhkar et al., 2017; Pande-Chhetri et al., 2017; Samiappan et al., 2017; Tóth, 2018) have shown that UAV borne remote sensing is an effective method for classifying vegetation using different types of features.

Pande-Chhetri et al. (2017) used UAV data to classify wetland vegetation with pixel based and hierarchical object based classification approaches. The object based classification with the Support Vector Machine (SVM) classifier yielded the highest overall accuracy (70.8%) in the study. However, the two invasive plants showed high errors of omission, underestimating the invasive plant classes.

While searching for the optimum method to discriminate invasive Lantana camara from the forested landscape, Nipadhkar et al. (2017) found that object based classification provided a better visual organization of the plant classification and performed satisfactorily. The authors reported a minimal difference in classification accuracies (0 to 0.04%) between the object and pixel based classification methods.

Müllerová et al. (2017) highlighted the advantage of the temporal flexibility of data collection with UAVs over Pleiades satellite images in monitoring invasive plants. The best classification accuracy (100%) for invasive Giant Hogweed was reached during the flowering period using the object based classification approach, whereas the classification accuracy dropped to 60% for the same plant later in the vegetation season. This study highlighted the importance of collecting data at the correct time of the growing season.

(NDVI), Soil Adjusted Vegetation Index (SAVI) and Morphological Attribute Profiles (MAPs).

Further, Canopy Height Model (CHM) generated using Digital Terrain Model (DTM) and the Digital Surface Model (DSM) derived from UAV data becomes popular among remote sensing researchers (Puliti et al., 2015; Tang & Shao, 2015). Use of UAV and LiDAR derived

CHM was identified as an important feature to improve the accuracy in vegetation classification

(Lisein et al., 2013). Furthermore, Zhang et al. (2015) reported a significant improvement of classification accuracy with the use of DSM derived from UAV data.

Besides, UAV allows flying at different heights which can be utilized to adjust the spatial resolution of the images (Tóth, 2018). Consequently, very high spatial resolution imagery captured by UAVs became popular in natural resource management to monitor invasive plant species in several different ecosystems (Niphadkar et al., 2017; Samiappan et al., 2017; Thomas et al., 2018; Tóth, 2018).

2.6 Classification Methods and Algorithms (Classifiers)

Mapping and identifying land cover using remote sensing has been used in physical models to derive biophysical and biochemical variables such as vegetation indices, biomass, and carbon content, as well as to collect land cover information for the planning and management of natural and artificial resources. It is therefore essential to have accurate image classification methods to map land cover accurately and efficiently. Various algorithms (i.e., classifiers) exist within each method (Table 3). It is a common perception that supervised methods perform better, as shown in numerous studies. Thus, land cover mapping is commonly performed using supervised classification methods by training the classifiers on field samples to classify the whole image into the classes that those samples represent (Richards, 1999). These sophisticated methods and techniques require extensive training and human supervision (Lam, 2008).

Image classification extracts information into separate classes, and its accuracy depends on many factors, such as the complexity of the study area, the type of data available, and the classifier used in the process. Three main steps are involved in the classification process: 1) pre-processing the data, 2) the actual image classification, and 3) accuracy assessment (Al-doski et al., 2013). There are two types of supervised classification methods, namely, pixel based image classification and object based image classification.


Table 3: Summary of remote sensing classification techniques. Note: The classifiers used in this study are supervised classifiers (parametric or non-parametric).

Method | Examples | Characteristics
Parametric | Maximum Likelihood classification and Unsupervised classification, etc. | Assumptions: data are normally distributed; prior knowledge of class density functions
Non-parametric | Nearest Neighbor classification, Fuzzy classification, Neural Networks, and Support Vector Machines, etc. | No prior assumptions are made
Non-metric | Rule-based Decision Tree classification | Can operate on both real-valued and nominal scaled data
Supervised | Maximum Likelihood, Minimum Distance, and Parallelepiped classification, etc. | Analyst identifies training sites to represent the classes, and each pixel is classified based on statistical analysis
Unsupervised | ISODATA and K-means, etc. | Prior ground information not known; pixels with similar spectral characteristics are grouped according to specific statistical criteria
Hard (parametric) | Supervised and unsupervised classifications | Classification using discrete categories
Soft (non-parametric) | Fuzzy Set Classification logic | Considers the heterogeneous nature of the real world; each pixel is assigned a proportion of the land cover type found within the pixel
Per-pixel | - | Classification of the image pixel by pixel
Object based | - | Image regenerated into homogeneous objects; classification performed on each object
Hybrid approaches | - | Includes expert systems and artificial intelligence

Source: Jensen (2005)


2.6.1 Pixel and Object Based Image Classification

Pixel based classification is the most commonly used supervised image classification method. Typically, pixel based classification is used to map vegetation using low to moderate resolution satellite imagery (Guo et al., 2017). It is favored in vegetation classification for its ease of automation, repeatability, and applicability over large areas (Moffett & Gorelick, 2013; Tuxen et al., 2011). In this method, the pixels of an image are separated into classes by an algorithm according to the spectral signature of each pixel, such that pixels within a class are spectrally similar while the separation between classes is maximized (Matinfar et al., 2007; Moffett & Gorelick, 2013). The use of the pixel based method for classifying wetland plants introduces several additional challenges due to the small sizes of vegetation patches, low spectral distinction among plant types, and the effects of soil, water regime, and atmospheric vapor on the reflectance spectra (Mutunga et al., 2012; Ozesmi & Bauer, 2002).

Object based image classification provides a framework to avoid the challenges experienced with pixel based classification methods (Dronova, 2015). Object based classification is generally used to classify images with high or very high spatial resolution. The analysis consists of two steps: 1) segmentation and 2) the actual classification. Segmentation divides the image into objects by aggregating pixels with similar features such as shape, color, and texture (Guo et al., 2017), and classification produces a meaningful map by assigning the objects to pre-defined classes (Moffett & Gorelick, 2013; Wan et al., 2014). Segmentation and classification are usually performed at multiple scales to achieve advanced research goals such as mapping invasive plants in wetlands (Mallinis et al., 2008; Moffett & Gorelick, 2013; Niphadkar et al., 2017; Pande-Chhetri et al., 2017).

Several parameters must be defined for segmentation to create meaningful objects (segments) before classification. The widely used multiresolution segmentation creates segments based on user defined relative spectral band weights and a tradeoff between the geometry of features on the ground and pixel values (Dronova, 2015). The size of the objects created in this process is determined by the scale parameter: the higher the value of the scale parameter, the larger the objects created. The shape and compactness parameters determine the weight assigned to spectral values and the smoothness of the object boundaries (Yan et al., 2006). Each object carries a spectral value created by smoothing the values of the aggregated pixels. Classification is then performed on these objects as a single or multi-step process. These segmentation and classification steps are usually iterated to identify the best set of segmentation parameters for a particular classification (Pande-Chhetri et al., 2017).
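The same object based logic can be prototyped with open source tools. The sketch below uses SLIC superpixels from scikit-image as a stand-in for multiresolution segmentation (n_segments and compactness are only loose analogues of the scale, shape, and compactness parameters described above, and the array names are illustrative), and then computes the per-object mean spectra on which a classifier would operate:

```python
import numpy as np
from skimage.segmentation import slic

def segment_and_average(image, n_segments=500, compactness=10.0):
    """Segment a (rows, cols, bands) image into objects and return
    the label image plus per-object mean spectra."""
    # SLIC groups neighboring, spectrally similar pixels into objects;
    # larger n_segments -> smaller objects (inverse of a scale parameter).
    labels = slic(image, n_segments=n_segments, compactness=compactness,
                  channel_axis=-1, start_label=0)
    n_objects = labels.max() + 1
    counts = np.bincount(labels.ravel(), minlength=n_objects)
    means = np.zeros((n_objects, image.shape[-1]), dtype="float32")
    for b in range(image.shape[-1]):
        # Mean reflectance of each object in band b
        sums = np.bincount(labels.ravel(), weights=image[..., b].ravel(),
                           minlength=n_objects)
        means[:, b] = sums / counts
    return labels, means
```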

Previous work has emphasized several advantages of object based classification over pixel based classification. Object based classification reduces the variation within objects by clustering neighboring pixels, which helps in heterogeneous vegetation environments like wetlands (Blaschke, 2010; Dronova, 2015; Guo et al., 2017). Object based classification is also capable of incorporating textural, thematic, and other ancillary data into the process, in addition to the spectral information of the pixels (Pande-Chhetri et al., 2017) (see Section 2.7). Furthermore, the ability to capture the hierarchy of different sizes of vegetation patches by nesting objects at different segmentation levels is a unique advantage of object based classification (Dronova, 2015). Dronova (2015) described object based classification as particularly successful with high and very high resolution UAV images due to the reduction of local noise and the increase of spectral contrast achieved by creating objects. However, the long time required to determine the optimum segmentation scale parameter and the expert knowledge required at the post classification refinement stage are drawbacks of object based classification (Pande-Chhetri et al., 2017). Further, the reduction of spectral variability among pixels during segmentation (Addink et al., 2007) can introduce misclassification at the boundaries between patches of different plant types.

2.6.2 Image Classification Algorithms

Widely used classifiers that have produced high classification accuracy in recent remote sensing image analyses are Maximum Likelihood (ML), k Nearest Neighbor (kNN), Neural Networks (NN), and Support Vector Machines (SVM) (Pande-Chhetri et al., 2017; Xie et al., 2008). Classification algorithms assign the pixels or objects in an image to user defined classes based on different statistical methods (Maxwell et al., 2018).

The ML classifier is identified as one of the most accurate algorithms among the parametric classifiers (Erdas Field Guide, 1999). It assumes a normal distribution of the pixel values of each image layer (Yang et al., 2011). Pixels are assigned to the class for which they have the maximum probability of membership (Yang et al., 2011). Erinjery et al. (2018) demonstrated that ML performed with higher overall accuracy than the Random Forest classifier for vegetation using high-resolution satellite imagery. Further, the SVM and ML classifiers produced similar classification accuracies in the classifications carried out by Otukei and Blaschke (2010). The main advantage of ML is that it considers both the variance and the covariance of the data within a class (Erdas Field Guide, 1999). However, its heavy reliance on the assumption that the input data values are normally distributed is a major disadvantage of the ML classifier.
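To make the decision rule concrete, the following is a minimal sketch of Gaussian maximum likelihood classification (not the implementation in ENVI or other remote sensing packages): each class is modeled by the mean vector and covariance matrix of its training pixels, and a pixel is assigned to the class with the highest log-likelihood.

```python
import numpy as np
from scipy.stats import multivariate_normal

def ml_classify(train_X, train_y, pixels):
    """Gaussian maximum likelihood classification.
    train_X: (n, bands) training spectra; train_y: (n,) class labels;
    pixels: (m, bands) spectra to classify."""
    classes = np.unique(train_y)
    log_likes = np.empty((pixels.shape[0], classes.size))
    for j, c in enumerate(classes):
        Xc = train_X[train_y == c]
        mean = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False)  # per-class variance and covariance
        log_likes[:, j] = multivariate_normal.logpdf(pixels, mean, cov,
                                                     allow_singular=True)
    # Assign each pixel to the class with the maximum likelihood
    return classes[np.argmax(log_likes, axis=1)]
```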

The kNN classifier is a relatively simple, non-parametric algorithm. Each pixel of the image to be classified is labeled with the class that is most common among its k nearest training samples in the feature space (Maxwell et al., 2017). This classifier outperformed ML in the comparison studies by Yan et al. (2006) and Yu et al. (2006). Yu et al. (2006) performed an object based vegetation classification using kNN and concluded that the selection of k = 1 resulted in the highest classification accuracy. kNN also produced higher overall and producer accuracies than the SVM and Decision Tree classifiers in an object based land cover classification (Tehrany et al., 2014). The significantly lower speed of this classifier makes it less suitable for pixel based classification (Hardin & Thompson, 1992).
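A compact way to experiment with this classifier is scikit-learn's implementation; the snippet below is an illustrative sketch with made-up array names rather than the software used in this study, and it uses k = 1 following the Yu et al. (2006) finding cited above:

```python
from sklearn.neighbors import KNeighborsClassifier

# train_X: (n_samples, n_features) spectra (e.g., four bands plus indices)
# train_y: (n_samples,) class labels such as "Phragmites" or "Cattail"
knn = KNeighborsClassifier(n_neighbors=1)  # k = 1 per Yu et al. (2006)
knn.fit(train_X, train_y)

# Classify all image pixels; flat_pixels is the image reshaped
# to (n_pixels, n_features), and the labels can be reshaped back to a map.
predicted = knn.predict(flat_pixels)
```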

The NN classifier is a black box model that recognizes patterns in a manner similar to the human brain, without making assumptions about the data distribution (Candade & Dixon, 2004; Ndehedehe et al., 2013; Shao & Lunetta, 2012). Mas and Flores (2008) reported that NN outperforms conventional classifiers in many classification studies. However, this classifier has some limitations, such as a lack of user friendliness, the difficulty of setting the correct parameters, and long processing times (Mas & Flores, 2008). Ndehedehe et al. (2013) classified land cover with NN at an overall accuracy of 87% and concluded that the image classification produced accurate outputs when the number of hidden layers was restricted to 1 in the ENVI software package. Candade and Dixon (2004) found that Landsat images classified with SVM had higher accuracy than those classified using NN; the sensitivity of the NN classifier to increasing dimensionality was considered the cause of its lower accuracy.
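As an illustration of such a network, a sketch using scikit-learn's multilayer perceptron (not ENVI's neural network module) can specify a single hidden layer directly, mirroring the one-hidden-layer configuration reported above; the hidden layer size shown is a hypothetical choice:

```python
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Neural networks are sensitive to feature scales, so standardize first.
scaler = StandardScaler().fit(train_X)

# One hidden layer with, e.g., 16 neurons (hypothetical size).
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
nn.fit(scaler.transform(train_X), train_y)

predicted = nn.predict(scaler.transform(flat_pixels))
```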

SVM is a non-parametric classification method. It aims to find an optimal boundary between the closest training samples of two different classes (Pal & Foody, 2010); these closest samples from the two classes are called the support vectors. Since SVM creates the boundary solely from the support vectors, it is essential to select the training samples cautiously (Maxwell et al., 2018). This property makes the SVM classifier applicable even when the training samples of some classes are limited in number, as shown in a study on classifying herbaceous vegetation in which SVM performed better than ML: reducing the training samples decreased the accuracy of ML, while SVM was not affected significantly (Burai et al., 2015). Several studies have demonstrated the better performance of SVM over the ML and Random Forest classifiers (Kulkarni & Lowe, 2016; Pal & Mather, 2005).
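A minimal SVM sketch follows; the Radial Basis Function (RBF) kernel and the C and gamma values shown are illustrative placeholders for the kind of parameters that would be tuned, not the optimized values reported in this study:

```python
from sklearn.svm import SVC

# RBF-kernel SVM; C controls margin softness, gamma the kernel width.
svm = SVC(kernel="rbf", C=10.0, gamma="scale")
svm.fit(train_X, train_y)

predicted = svm.predict(flat_pixels)
```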

The accuracy of the resulting land cover maps is assessed by comparing them with reference samples of the true land cover (Stehman, 2009). A confusion (error) matrix displays the portions of the vegetation classes that are classified or misclassified (Stehman, 2009), and the overall accuracy and per-class accuracies are generated from it (Stehman, 2009). Foody (2002) described the error matrix as the core feature of accuracy assessment studies; it represents the allocation of the classes by the classification with respect to the reference data. Errors of omission and commission are two further important measures in an accuracy assessment. The error of commission for a class is the fraction of samples assigned to that class that actually belong to other classes, whereas the error of omission is the fraction of reference samples of that class that the classification assigned elsewhere (Congalton, 2001). The Kappa value varies between 0 and 1 and represents the agreement between the classification and the reference data beyond chance, based on the confusion matrix (Congalton, 2001; Harris Geospatial Solutions, 2018). Landis and Koch (1977) assigned Kappa values into three categories: values over 0.80 indicate strong agreement, values between 0.40 and 0.80 moderate agreement, and values below 0.40 poor agreement.
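All of these measures follow directly from the confusion matrix. A minimal Python sketch (assuming a matrix cm whose rows are mapped classes and whose columns are reference classes, the layout used in the confusion matrices of Chapter IV):

    import numpy as np

    def accuracy_report(cm):
        """Overall accuracy, Kappa, and per-class omission/commission errors.
        cm[i, j] = samples mapped to class i that belong to reference class j."""
        cm = cm.astype(float)
        n = cm.sum()
        oa = np.trace(cm) / n
        pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
        kappa = (oa - pe) / (1 - pe)
        omission = 1 - np.diag(cm) / cm.sum(axis=0)    # reference samples missed
        commission = 1 - np.diag(cm) / cm.sum(axis=1)  # mapped samples that are wrong
        return oa, kappa, omission, commission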

The development of classification techniques has resulted not only in new algorithms but also in new data fusion approaches, in which various intermediate feature layers generated from UAV data are combined and used in the classification process. The accuracy of a land cover classification can be improved by incorporating additional features such as spectral indices and texture, extensive field knowledge, auxiliary data, and expert knowledge (Xie et al., 2008).

2.7 Layer Features - Derived Image Layers to Enhance Classification

2.7.1 Vegetation Indices

Vegetation indices are transformed spectral responses of two or more bands that are used to improve the identification of vegetation types and conditions by reducing the effects of plant structure, surrounding water, and soil reflectance (Moulin, 1999). The applicability of a vegetation index depends on the peaks and troughs in the spectral signatures of the plant species and the surrounding features (Xue & Su, 2017). Vegetation indices can be added to the procedure before, during, or after the classification of the images to numerically separate or stretch pixel values in order to increase the classification accuracy (Mwakapuja et al., 2013).

Most vegetation indices are based on the marked difference between the NIR and red reflectance of vegetation (Teillet et al., 1997). Jackson and Huete (1991) suggested that the Ratio Vegetation Index (RVI = NIR / Red), also known as the Simple Ratio (SR), may be the first and most commonly used band index; it highlights the higher NIR reflectance of vegetation compared to the red reflectance (Jordan, 1969). Xu (2006) emphasized the ability of this approach to distinguish between water and vegetation areas in remote sensing imagery.

Plants in different environments exhibit their own characteristics, and therefore the vegetation indices for a particular analysis should be selected with insight into the capabilities and limits of each index (Xue & Su, 2017). The Normalized Difference Vegetation Index, NDVI = (NIR - Red) / (NIR + Red), is formulated based on the strong absorption of electromagnetic energy in the red region and the strong reflectance in the NIR region exhibited by green vegetation (Moran et al., 1997). NDVI is therefore widely used in vegetation monitoring to capture spectral variations among plant types and the variation of reflectance with time (Gandhi et al., 2015; Tucker et al., 2005; Zhang et al., 2003). Gitelson (2004) derived the Wide Dynamic Range Vegetation Index, WDRVI = (a*NIR - Red) / (a*NIR + Red), by modifying NDVI to improve its performance across different phenological stages.
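The two indices differ only in the weighting coefficient a. A minimal Python sketch (nir and red are hypothetical reflectance arrays; Gitelson (2004) proposed small values of a, commonly in the range 0.1-0.2):

    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index."""
        return (nir - red) / (nir + red)

    def wdrvi(nir, red, a=0.2):
        """Wide Dynamic Range Vegetation Index: a < 1 damps the NIR term
        so the index saturates less over dense canopies."""
        return (a * nir - red) / (a * nir + red)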

2.7.2 Texture Features

Textural features describe visual characteristics of physical objects, such as brightness, color, size, slope, and smoothness (Materka & Strzelecki, 1998). Image texture is identified as a distinctive pattern of pixels in a specific direction relative to the surrounding pixels (Bradley, 2014). Textural measurements are calculated within a moving window (kernel) consisting of a user-specified number of pixels; for example, the Mean function assigns the average pixel value within the kernel, while the Variance function assigns the dispersion of the pixel values around that mean (Harris Geospatial Solutions, 2018). Image texture has been a useful feature for vegetation classification in many studies. For example, a texture based method was developed to successfully map Phragmites australis in a wetland in the lower Pearl River basin using UAV data and ground reference maps; the average accuracy and the Kappa coefficient of the classification were 85% and 0.70, respectively (Samiappan et al., 2017).
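The kernel-based Mean and Variance measures described above can be sketched in a few lines of Python (a simplified illustration, not ENVI's implementation; band is a hypothetical image array):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def texture_mean_variance(band, size=3):
        """Moving-window mean and variance over a size x size kernel."""
        band = band.astype(float)
        mean = uniform_filter(band, size=size)
        mean_sq = uniform_filter(band ** 2, size=size)
        variance = np.maximum(mean_sq - mean ** 2, 0.0)  # clip numerical negatives
        return mean, variance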

2.7.3 Data Fusion of Vertical and Horizontal Data

Schmitt and Zhu (2016) described data fusion in remote sensing as the integration of measurements, signals, or observations from multiple sources, or of derived products, to achieve a better classification outcome than could have been obtained from any original data set alone. Improving spatial resolution through techniques such as pan-sharpening, or combining data from, for instance, SPOT optical and SAR imagery, are cited as examples of feature-level data fusion (Ranchin, 2001; Zhang, 2010).

Huang and Asner (2009) stressed the importance of multi-temporal data fusion to account for the differences in phenological characteristics between invasive and native plants. The integration of canopy height data, generated from a Digital Surface Model (DSM) and a Digital Terrain Model (DTM) derived from LiDAR data, with multispectral and hyperspectral data has demonstrated promising improvements in classification accuracy (Dalponte et al., 2012; Lisein et al., 2013). Commonly, the DSM and DTM are combined to generate a canopy height model that serves as a source of vertical information about vegetation, which is then combined with spectral reflectance measurements and various feature layers, as explained in the previous paragraph.


CHAPTER III: DATA AND METHODS

3.1 Study Area

The study took place at Old Woman Creek (OWC), a natural estuary located at the southernmost point (41º 22ʹ N, 82º 30ʹ W) of the Lake Erie shoreline near the town of Huron, Ohio (Figure 2). OWC extends over approximately 2.1 km² from the southern shore of Lake Erie (Klarer & Millie, 1992). It is one of the 28 areas protected under the NERRS and is known for its high biodiversity and unique water regime (Whyte et al., 2008). Scientists, students, and citizens have used OWC as a field laboratory, or simply to enjoy nature, for more than 30 years (Herdendorf et al., 2006).

The estuary and wetlands at OWC consist of a shallow open water area, woodlands, an island, and a barrier beach. The barrier beach controls the connection between the creek and Lake Erie: it closes the mouth of the creek during periods of high rainfall and opens it during summer, when the water level changes with the seiches and storm surges of Lake Erie (Herdendorf et al., 2006; Klarer & Millie, 1992). The extent and duration of the water level fluctuations directly influence the variety of vegetation in the estuary (Whyte et al., 2008).


Figure 2: Study area located on the southern shoreline of Lake Erie. (Source: Esri base maps, 2018)


The plant types in OWC range from terrestrial plants that tolerate occasional submergence to plants completely adapted to survive only in an aquatic environment (Herdendorf et al., 2006). Over 800 terrestrial and aquatic species of vascular plants have been identified in its watershed (NOAA, 2011). Whyte (2009) classified 37 common wetland plant types in OWC into four categories: submerged macrophytes, floating-leaved macrophytes, emergent macrophytes, and floodplain macrophytes.

Nutrient concentrations and distributions, as well as the distribution of flora and fauna, have changed over the last several decades with prolonged high and low water cycles (Herdendorf et al., 2006). While increased water levels transformed several former wetlands into shallow open-water areas with little aquatic vegetation (Klarer & Millie, 1992), reduced water levels, on the other hand, resulted in an increase of invasive plants in OWC (Whyte et al., 2008). Phragmites australis spread rapidly southward along the banks of the creek during low water level periods (Whyte et al., 2008), and other species, e.g., Eurasian water-milfoil, were introduced from Lake Erie by wind and wave activity (Whyte & Franko, 2001). Currently, Phragmites australis is the dominant invasive plant type in OWC (Aday, 2007), and purple loosestrife (Lythrum salicaria), Eurasian water-milfoil (Myriophyllum spicatum), frogbit (Limnobium laevigatum), and garlic mustard (Alliaria petiolata) are also observed.


Figure 3: The common plants in the study site: (a) Phragmites, (b) Cattails, (c) Lotus, (d) Lily, (e) Duckweed


Figure 3 shows the dominant plant types in the estuary: American water lotus (Nelumbo lutea), white water lily (Nymphaea odorata), lesser duckweed (Lemna minor), common reed (Phragmites australis), and cattails (Typha). These plants are referred to as Lotus, Lily, Duckweed, Phragmites, and Cattails hereafter.

This study site was chosen for two reasons: i) Phragmites patches had been observed in the area, and ii) it is more accessible than the other parts of the estuary. Dense patches of invasive Phragmites were observed near the estuary mouth and on both sides of the road running through the site. In addition, Phragmites is often found within several Cattails patches on the site.

The study site is surrounded by tall trees on the banks of the estuary. The star shaped island covers a large portion of the study site, and the rest is covered with aquatic plants and water. The aquatic area is mostly covered with floating and emergent plants. Access to most of the southern, western, and southwestern sections of the estuary is restricted by densely grown submerged and floating plants.

3.2 Field Data Collection

3.2.1 UAV Imagery Acquisition

UAV imagery over the study site was acquired on two days: August 8, 2017, and October 18, 2017. The UAV used in this study was a senseFly eBee Ag model (senseFly, Cheseaux-sur-Lausanne, Switzerland). The fixed wing eBee provides a convenient flight preparation procedure, and its flight plans can be adjusted during the flights. The eBee weighs approximately 700 g, and the flight time was between 25 and 30 minutes under optimum flying conditions.

Flight planning was performed with the eMotion 2 software package (senseFly, Cheseaux-sur-Lausanne, Switzerland). The ceiling of the flights was set at 120 m. The lateral and longitudinal overlaps of the flight plans were set between 65% and 75%, and the flight radius was set to 880 m. The flights were carried out in clear weather conditions with wind speeds between 5 and 10 knots.

A Parrot Sequoia camera attached to the UAV, with Green, Red, Red edge, and Near-Infrared spectral bands, was used to acquire the images of the estuary. The spectral ranges of the Sequoia camera are specified in Table 4. The spatial resolution of the images taken with the Parrot Sequoia camera was 13.90 cm. One flight with a SONY DSC WX 220 camera was performed after the Sequoia flight on August 8. The purpose of acquiring images with this RGB camera was to use them as a visual aid to identify the plant types in the estuary, which were clearly distinguishable at the camera's 3.43 cm spatial resolution. This camera has no NIR band and was not used in the classification.

Table 4: Wavelength ranges of the Sequoia camera

Band Number   Band Name   Wavelength Range (nm)
1             Green       530-570
2             Red         640-680
3             Red Edge    730-740
4             NIR         770-810

Source: MicaSense Knowledge Base

In addition, vegetation reflectance data were collected in the field using a spectroradiometer and a GPS instrument.

3.2.2 Spectroradiometer and GPS Measurements

A handheld PSR 3000 Spectral Evolution spectroradiometer was used to collect in-situ spectral measurements of the five plant types found in the study site (Lotus, Lily, Duckweed, Phragmites, and Cattails) in the period from July 31, 2017, to August 21, 2017. The spectroradiometer covers the wavelength range from 350 nm to 2500 nm. The spectral resolutions of the instrument are 3 nm from 350 nm to 700 nm, 8 nm from 700 nm to 1500 nm, and 6 nm from 1500 nm to 2100 nm (Spectral Evolution, Lawrence, Massachusetts).

Reflectance measurements were taken under sufficient sunlight to minimize uncertainties, and the spectroradiometer was calibrated using a white reflectance panel.

Thirty spectral measurements were taken over five to six spatial clusters for each selected plant type. All measurements were made by holding the spectroradiometer tip approximately 0.5 inches above the leaves. Only one measurement was taken per individual plant, computed as the average of three reflectance readings from each selected leaf, and each spectral measurement was itself averaged over ten scans. The purpose of these measurements was to recognize critical regions in the hyperspectral signatures of the plants and to select, based on these regions, vegetation indices that could enhance the classification process.

The spectroradiometer measurements of each plant type were imported into DARWin SP (Spectral Evolution, Lawrence, Massachusetts). This software provides the capability to visualize multiple spectral measurements at once and to calculate spectral indices within user defined spectral ranges. The spectral signatures were checked carefully, and measurements with significant deviations were removed. The remaining spectral measurement files were imported into Microsoft Excel to calculate the mean spectrum for each plant type by averaging the reflectance values at each wavelength. The wavelength range from 350 nm to 850 nm was selected for each averaged spectral signature to match the spectral range of the Sequoia-generated images and to create the spectral library for the plants (according to the spectral ranges provided in Table 4).

Field identification of wetland plants was made by an expert in wetland plants (Breann Hohman, Firelands Coastal Tributaries Watershed Coordinator and Assistant District Director). The locations of Phragmites patches were defined and recorded with a handheld Garmin Etrex 10 GPS, in addition to the built-in GPS in the spectroradiometer, and were also visually verified the following spring, in April 2018.

3.3 UAV Data Processing

3.3.1 UAV Image Pre-processing

Pre-processing of the UAV images included geotagging, mosaicking of the raw images, and generation of the Digital Surface Model (DSM) and Digital Terrain Model (DTM). The UAV images were geotagged with the eMotion 2 software using the log files generated by the eBee during each flight. The geotagged images were imported into Pix4Dmapper Pro (Pix4D, San Francisco, California) for further processing, where they were orthorectified and mosaicked to create reflectance images. The projected coordinate system was set to WGS 1984 UTM Zone 17 N. The DSM represents the elevation above mean sea level (AMSL) of the objects on the ground, such as vegetation and built-up areas, while the DTM represents the AMSL elevation of the bare ground surface.

The four bands (Green, Red, Red edge, NIR) acquired by the camera were stacked to create a single multispectral image. The pixel values of the image were rescaled to the range from 0.0 to 1.0 using the Band Math tool of the ENVI 5.4 software package (Harris Geospatial Solutions, Broomfield, Colorado). The same steps were followed to generate the image from the RGB camera, which provided three spectral bands (Red, Green, Blue) for the data acquired on August 8, 2017. As mentioned, the RGB camera was not used in the analysis, but it served to visually enhance the georegistration of the Sequoia-generated images and to precisely select training areas in the process of classification and validation (i.e., the regions of interest; see the Classification section). All other intermediate images and all analyses were based on the images collected by the Sequoia camera.

In order to use data from different dates, an additional georegistration of the images was performed to ensure that the data overlapped accurately. This step is particularly crucial for UAV images, as they have a fine spatial resolution and any small shift would cause errors in the calculations. The Sequoia multispectral image taken on October 18 was georegistered to the image taken on August 8. The registration was performed with the image registration workflow tool of ENVI 5.4 using six reference points on each image; the corresponding reference points were carefully selected by examining the two images. The RMSE between the two images was 1.32 pixels.

3.3.2 Derived UAV Products

The derived UAV products used as feature layers (band indices, texture, CHM, and PC bands) were added to the classification process one by one to explore if and how each layer might enhance the classification. First, each layer was stacked, one at a time, with the original four bands (G, R, RE, and NIR), and an accuracy assessment was conducted for each combination. The purpose of this sensitivity analysis was to select, within each feature group (vegetation indices, texture properties, PCA, and CHM, as explained below), the layer that produced the lowest error of omission for Phragmites, then to combine the selected layers and monitor the trends and possible improvements in the accuracy assessments. The goal was not only to increase the overall accuracy but also to reduce the omission errors for Phragmites, which would allow a more conservative removal process within the management plan.

3.3.2.1 Band Indices

To possibly enhance the classification, six simple band ratios and three normalized band indices were used in the study (Table 5). These band indices were calculated with the Band Math tool in ENVI 5.4. The simple band ratios included NIR/Green, NIR/Red, NIR/Red edge, Red edge/Red, Green/Red edge, and Red/Green. The normalized band indices included the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Red Edge (NDRE), and the Normalized Difference Green Index (NDGI). The band indices were calculated for the images taken on August 8 and on October 18.

Table 5: Normalized and simple band indices used in the study

Band Index   Equation                              Reference
NDVI         (NIR - Red) / (NIR + Red)             Gamon et al. (1995)
NDRE         (NIR - Red edge) / (NIR + Red edge)   Barnes et al. (2000)
NDGI         (NIR - Green) / (NIR + Green)         Han et al. (2007)
SR1          NIR / Red                             Jordan (1969)
SR2          NIR / Green                           Buschmann & Nagel (1993)
SR3          NIR / Red edge                        Xue & Su (2017)
SR4          Red edge / Red                        Gitelson & Merzlyak (1996)
SR5          Red / Green                           Xue & Su (2017)
SR6          Green / Red edge                      Gitelson & Merzlyak (1996)

Although the classification was performed on the image from August and its feature layers, it was also explored whether the band indices from October played a role in the classification. The different phenological states of the plants in summer and fall were expected to enhance the process through better separability of the spectral signatures of the plants. Bradley (2014) highlighted the importance of using differences in phenological patterns between invasive and native plants to better identify the invasive plants.

3.3.2.2 Image Texture

Textural features were used as additional attributes to introduce variations among plant types arising from the differences in the shapes of the plant leaves. Chavez (1992) showed that the spatial distribution of the spectral variability of vegetation over an area is higher within the NIR band than within the visible (RGB) spectral region. Thus, the NIR band of the August 8 image was used to calculate the texture values. The Gray Level Co-occurrence Matrix (GLCM) tool within the ENVI 5.4 software was used; ENVI calculates 8 GLCM measures: Mean, Contrast, Homogeneity, Second moment, Correlation, Dissimilarity, Entropy, and Variance.

The GLCM is a measure of the angular relationship and the distance between a pair of adjacent pixels. The GLCM pixel values represent the number of occurrences of a particular property between a pixel and selected neighboring pixels within the kernel (Harris Geospatial Solutions, 2018). Since the main focus was to map Phragmites, which is a thin, erect plant, the kernel size was kept at 3 × 3 pixels (the smallest kernel size available).
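For illustration, GLCM measures can be computed with scikit-image (a simplified sketch, not ENVI's per-kernel implementation, since it builds one co-occurrence matrix for an entire image chip; the 8-bit NIR chip here is synthetic):

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(0)
    nir8 = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # placeholder quantized NIR chip

    # Co-occurrence of gray levels at distance 1 in four directions,
    # symmetric and normalized so that the entries are joint probabilities.
    glcm = graycomatrix(nir8, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, 'contrast').mean()
    homogeneity = graycoprops(glcm, 'homogeneity').mean()
    dissimilarity = graycoprops(glcm, 'dissimilarity').mean()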

3.3.2.3 Principal Components

The Principal Component transformation was performed with the Forward Principal Component Analysis (PCA) Rotation tool in ENVI 5.4. PCA converts the original multispectral bands into a new set of bands, concentrating the highest variance in the first PCA band. Band 1 has the highest variance among the PCA bands, and the variance decreases sequentially from band 1 to band 4 (Harris Geospatial Solutions, 2018).
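An equivalent transformation can be sketched with scikit-learn (the four-band stack here is synthetic; a real stack would be the masked Sequoia bands):

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    stack = rng.random((4, 128, 128))              # placeholder (bands, rows, cols) image

    pixels = stack.reshape(stack.shape[0], -1).T   # (n_pixels, n_bands)
    pca = PCA(n_components=stack.shape[0])
    pc = pca.fit_transform(pixels)                 # PC1 carries the most variance
    pc_bands = pc.T.reshape(stack.shape)           # back to (bands, rows, cols)
    print(pca.explained_variance_ratio_)           # variance explained per PC band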

3.3.2.4 Canopy Height Model

The Canopy Height Model (CHM) indicates the absolute heights of the objects within the image extent and is computed as the difference between the DSM and the DTM. The CHM was created for the August 8 data using the Band Math tool in ENVI 5.4. The spatial resolutions of the DSM and DTM created with Pix4D were 13.04 cm and 65.02 cm, respectively; both were therefore resampled to the 13.90 cm spatial resolution to keep the pixel size consistent between images.
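A minimal sketch of the same operation with the rasterio library (the file names and target grid dimensions are hypothetical; both models are resampled to a common grid before differencing):

    import rasterio
    from rasterio.enums import Resampling

    HEIGHT, WIDTH = 7200, 7200  # example target grid matching the 13.90 cm pixel size

    def read_resampled(path):
        """Read band 1 of a raster, bilinearly resampled to the target grid."""
        with rasterio.open(path) as src:
            return src.read(1, out_shape=(HEIGHT, WIDTH),
                            resampling=Resampling.bilinear)

    dsm = read_resampled('dsm_aug08.tif')  # hypothetical file names
    dtm = read_resampled('dtm_aug08.tif')
    chm = dsm - dtm                        # canopy height above the bare ground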

3.4 Intermediate Steps in the Analysis - Masking Process to Enhance the Classification

The wetland vegetation needed to be isolated from non-vegetation objects to reduce the errors that affected the classification. The area with wetland vegetation was isolated in the Sequoia image taken in August using two masking steps.

First, the areas consisting of water and built-up features were removed from the image layers by applying a mask created from the August 8 NDVI layer with a threshold of 0.22. Healthy vegetation shows high NDVI values, whereas both built-up and water areas show low NDVI values compared to vegetation (USGS, 2018). Consequently, with the aid of the vegetation spectroradiometer measurements and the RGB image, the areas with NDVI below 0.22 were masked.

Second, the areas with tall trees were masked. Among the selected wetland plant types, Phragmites is the tallest, growing to maximum heights of 3-3.5 m in the estuary. Thus, the vegetated areas with heights over 4 m were masked to remove vegetation taller than 4 m.
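Both masks reduce to simple array comparisons. A minimal numpy sketch (ndvi_aug, chm, and stack are hypothetical co-registered arrays; the thresholds are those stated above):

    import numpy as np

    NDVI_THRESHOLD = 0.22   # below this: water and built-up features
    HEIGHT_THRESHOLD = 4.0  # m; above this: trees taller than Phragmites

    keep = (ndvi_aug >= NDVI_THRESHOLD) & (chm <= HEIGHT_THRESHOLD)
    # Broadcast the (rows, cols) mask over every band of the (bands, rows, cols) stack.
    masked_stack = np.where(keep, stack, np.nan)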

3.4.1 Sampling Design

As supervised classification methods were used in the study (Section 3.5), training areas had to be selected from the images. A stratified random sampling design was used to create sampling and validation sub-areas. Several patches were chosen for each plant type, and regions of interest (ROIs) were then randomly selected within each patch. The selected patches for each plant type were taken as sampling strata, and the selected ROIs for each stratum were distributed spatially across the images. Each ROI was created based on the GPS coordinates recorded by the spectroradiometer during the spectral measurements. Additional ROIs were selected with the help of the RGB image, as mentioned above.

In ENVI 5.4, the selection of each ROI started with one pixel, and the remaining pixels were determined by the grow pixel function. The one-pixel ROIs were allowed to grow, using the criterion that pixel values lie within two standard deviations, up to a maximum of 8 neighboring pixels (Harris Geospatial Solutions, 2018). As Stehman and Czaplewski (1998) described, the grow pixels method is a spatially smooth technique.

The amount of training data can heavily impact the classification accuracy (Foody et al., 2006). In this study, the number of ROIs for each plant type was selected according to the proportion of the area covered by that plant type. The field observations and data collection, as well as the RGB images, were used to carefully select a sufficient number of ROIs per class.

Forty-eight ROIs were selected for each of the Lotus, Lily, and Cattails classes, as these plants were the most abundant. Fewer ROIs were selected for Duckweed and Phragmites, as they were less abundant: 39 and 33, respectively. The ROIs consisted of at least 4 pixels after applying the grow region function. ROIs were not taken from shaded areas, to reduce the uncertainty in the classification. The use of the ROIs for classification and validation is explained later, in the image classification section.

3.4.2 Workflow of the Study

Figure 4 shows the workflow of the study.

Figure 4: Workflow of the study


3.5 Image Classification Algorithms and Accuracy Assessment

Image classification was performed with both pixel based and object based classifiers. The ENVI 5.4 software package was used for pixel based classification, and Trimble eCognition Developer 9 (Trimble, Westminster, Colorado) was used to perform the object based classification. All the image bands were scaled to the pixel value range between 0 and 100 before classification, in order to generate meaningful objects in the object based classification (as Dr. Jarlath O’Neil-Dunne explained in the eCognition user community, 2018).

Images (after the masking processes) were classified into five classes (plant types), namely Phragmites, Cattails, Lotus, Lily, and Duckweed. These classes were selected by examining the field data and the RGB images captured with the eBee.

While each pixel of the image was assigned to one of the predefined classes in the pixel based classification method, pre-classification segmentation was required prior to the object based classification. The images were separated into segments (objects) of user-defined sizes containing statistically similar adjacent pixels; the segments were then classified into the user-defined classes with the classification algorithms explained below (Whiteside et al., 2011). Several parameters were tuned before performing the object based classification. First, the image was segmented using the multiresolution segmentation tool with scale parameters of 10, 20, 40, 50, 75, and 100, and then classified into the five classes. The boundaries of the segments were examined at each segmentation level to make sure that the segments did not mix multiple plant types. The best scale parameter for segmentation of the image was identified as 40.

The shape and compactness parameters were set to 0.1 and 0.5, respectively. The shape parameter was set to 0.1 to maximize the consideration of the spectral information of the bands, since a higher shape value decreases the weight of spectral information during segmentation. Several segmentations were performed to find the best compactness parameter; with the scale and shape parameters fixed at 40 and 0.1, respectively, the best results were reached with a compactness of 0.5, which was therefore selected. The weights for all image layers were set to 1, so that all bands were considered equally important in the classification.

3.5.1 Image Classification Algorithms

In this study, the ML, SVM, and NN classifiers were used in the pixel based classification, and two classifiers, SVM and kNN, were used in the object based classification. All algorithms used in the study, except the ML classifier, make no assumptions about the data distribution (Table 3).

The parameters of the non-parametric classifiers were optimized before the classification. Optimum parameter values define the best classification boundaries in the feature space (Maxwell, 2018). The best possible boundary between each pair of classes is defined by adjusting the tradeoff of misclassification of the pixels near the boundary between the two classes (Foody & Mathur, 2004).

The image was classified with different combinations of parameter values for each classifier, and the overall classification accuracy, Kappa value, and errors of omission and commission for the Phragmites class were recorded after each classification. The optimum set of parameter values for each classifier was the set that resulted in the least error of omission for the Phragmites class; the highest overall classification accuracy and the highest Kappa value were also taken into account in the selection.
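This selection criterion can be expressed compactly in code. The following Python sketch (synthetic placeholder data; the parameter grid is illustrative, not the grid actually searched) runs an SVM over combinations of C and gamma and keeps the pair with the lowest Phragmites omission error:

    import numpy as np
    from itertools import product
    from sklearn.svm import SVC
    from sklearn.metrics import confusion_matrix

    CLASSES = np.array(['Phragmites', 'Cattails', 'Lotus', 'Lily', 'Duckweed'])
    rng = np.random.default_rng(0)
    X_train, y_train = rng.random((200, 6)), rng.choice(CLASSES, 200)  # placeholders
    X_val, y_val = rng.random((100, 6)), rng.choice(CLASSES, 100)

    def omission_error(y_true, y_pred, target='Phragmites'):
        """Fraction of reference samples of the target class missed by the map."""
        cm = confusion_matrix(y_true, y_pred, labels=CLASSES)  # rows = reference
        i = list(CLASSES).index(target)
        return 1 - cm[i, i] / cm[i, :].sum()

    best = None
    for C, gamma in product([1, 10, 50, 100], [0.01, 0.1, 0.25, 1.0]):
        pred = SVC(kernel='rbf', C=C, gamma=gamma).fit(X_train, y_train).predict(X_val)
        oe = omission_error(y_val, pred)
        if best is None or oe < best[0]:
            best = (oe, C, gamma)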

The ML classifier was selected because of its ease of implementation, short processing time, and wide availability in different software packages (Maxwell et al., 2018). The probability threshold was kept at 0.95 in all ML classification runs to reduce the number of misclassified pixels in the classified images. The ML classifier served as the baseline classifier against which the performance of the other classifiers was compared.

The NN classification was performed with the Neural Net function implemented in ENVI 5.4, which classifies images using a layered, feed-forward neural network with back propagation (Harris Geospatial Solutions, 2018). The best parameter values were selected as Training Threshold Contribution = 0.2, Training Rate = 0.2, Training Momentum = 0.9, Root Mean Square Exit Criteria = 0.01, Number of Hidden Layers = 1, and Number of Iterations = 1000; these values were selected by running all the parameter values used in Ndehedehe (2013) at the parameter optimization step. The NN classifier was selected because it can learn complex patterns in training data, generalize in the presence of noisy data, and perform classification with fewer samples than the ML classifier (Mas & Flores, 2008).
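ENVI's Neural Net tool has no exact open-source counterpart, but a rough scikit-learn analogue of a one-hidden-layer, feed-forward network trained by back propagation could look as follows (the hidden layer width of 16 neurons and the placeholder data are assumptions, and the Training Threshold Contribution has no direct equivalent):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X_train, y_train = rng.random((200, 6)), rng.integers(0, 5, 200)  # placeholders

    nn = MLPClassifier(hidden_layer_sizes=(16,),  # one hidden layer (width assumed)
                       solver='sgd',              # gradient descent with momentum
                       learning_rate_init=0.2,    # cf. Training Rate = 0.2
                       momentum=0.9,              # cf. Training Momentum = 0.9
                       max_iter=1000,             # cf. Number of Iterations = 1000
                       random_state=0)
    nn.fit(X_train, y_train)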

The Radial Basis Function (RBF) kernel was used with the SVM classifier, as it performed better than other kernels in previous, similar classification studies (Duro et al., 2012; Qian et al., 2014). The RBF kernel parameters were selected through parameter optimization: the C and Gamma parameters were set to 50 and 0.25, respectively, for pixel based classification in ENVI 5.4, and to 100 and 0.001, in the same order, for object based classification. SVM was selected because previous studies have shown that it can classify images accurately with a small number of training samples (Foody & Mathur, 2004; Mountrakis et al., 2011; Rupasinghe et al., 2018).

The kNN classifier was used in the object based classification with eCognition Developer 9; its k parameter was set to 2 during the parameter optimization. The kNN classifier is a widely used object based classification algorithm, and it was selected for the study because of its ease of implementation and to serve as the baseline algorithm for object based classification.

3.5.2 K-fold Cross-Validation and Accuracy Assessment

As no surplus of ROIs was available for the study, the collected ROIs were shared between the classification process and the post-classification validation process. In order to minimize the uncertainties due to the sampling design and the number of ROIs, a three-fold cross-validation approach was used to assign ROIs to the classification and validation processes. Although k-fold cross-validation is computationally intensive, it is most suitable when the study aims to find precise error rates of the classifiers (Kotsiantis, 2007). Further, this method compensates for the errors that could arise from the smaller number of samples used in the classification and validation. Figure 5 illustrates the sample splitting method in the three-fold cross-validation.

The ROIs were split into three equal sets, allocating an approximately equal number of ROIs from each plant patch, and the spatial distribution of the ROIs was kept balanced across the three sets. Two sets of ROIs were used as classification data (Figure 5: orange), and the remaining set (Figure 5: blue) was used for validation. This process was repeated three times, so that all the ROIs were used as both classification and validation data, and the classification accuracy results were averaged over the three cross-validation steps.
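A minimal sketch of this scheme (synthetic placeholder data; splitting is done over ROI identifiers rather than individual pixels, so all pixels of an ROI stay in the same fold, as in the study design):

    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    roi_id = np.repeat(np.arange(120), 5)   # 120 placeholder ROIs, 5 pixels each
    X = rng.random((roi_id.size, 6))        # placeholder pixel features
    y = rng.integers(0, 5, 120)[roi_id]     # one class label per ROI

    roi_ids = np.unique(roi_id)
    scores = []
    for train_idx, val_idx in KFold(n_splits=3, shuffle=True,
                                    random_state=0).split(roi_ids):
        train_mask = np.isin(roi_id, roi_ids[train_idx])
        clf = SVC(kernel='rbf', C=50, gamma=0.25).fit(X[train_mask], y[train_mask])
        scores.append(clf.score(X[~train_mask], y[~train_mask]))
    print(np.mean(scores))  # accuracy averaged over the three folds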

Figure 5: The sample splitting method for three-fold cross-validation

The accuracy assessment was performed with the confusion matrix, using the ground truth ROIs tool in ENVI 5.4. The classification accuracy for the pixel based classification was obtained through the three-fold cross-validation process. The three-fold cross-validation could not be applied to the object based classification: the selection of ROIs in this study was kept constant for all classifications, and most of the segments covered the locations of the pixel based ROIs from only two sets. Therefore, only one round of classification and validation was performed for the object based classification. The training samples for the object based classification were selected at the sample points of classification ROI set 3, which had resulted in the highest classification accuracy among the three steps of the pixel based cross-validation. The images classified with the object based approach were exported from eCognition Developer 9 to ENVI 5.4 to perform the accuracy assessment. The overall accuracy and the errors of omission and commission for the Phragmites class, in particular, were the main focus of the study, as a low error of omission in the detection of invasive plant species such as Phragmites is desirable for invasive species management plans.


CHAPTER IV: RESULTS

4.1 Image Pre-processing and Masking

The image mosaics created for the study area are shown in Figure 6. The spatial resolution of the Sequoia camera images (Figures 6(a) and (b)) was 13.9 cm. The disappearance of most of the plants in the middle of the open water areas can be observed in October (Figure 6(b)). In the Sequoia images, the plants are shown in pink and water in green; false colors are used due to the unavailability of a blue band. The RGB camera image has a spatial resolution of 3.4 cm. The finer spatial resolution of the RGB camera images made it possible to identify the leaves of several floating plant types (Figures 6(c) and 7), and the images were thus used to enhance the co-registration of the images and the selection of training areas for the classification.

Figure 6: Sequoia and RGB camera images: (a) taken with the Sequoia on August 8, 2017; (b) taken with the Sequoia on October 18, 2017; (c) RGB image taken on August 8, 2017.

The zoomed RGB image shows an area with Lotus and Lily on the water (Figure 7).

Figure 7: RGB camera images of Lotus, Lily, Phragmites, and Cattails: (a) area with Lotus (large) and Lily (small) leaves; (b) small Phragmites patch (light green, at right) next to a large Cattails patch (dark green).

The Sequoia image taken on August 8, before and after applying the two masks (one based on the NDVI threshold of 0.22 and one based on the 4.0 m elevation threshold in the CHM layer), is shown in Figure 8. Most of the shaded areas observed in the original Sequoia image were successfully removed by the masking process, as a large portion of the shade covered the water areas. Some small shaded areas can still be observed in the image, along the western side of the island and the eastern margin of the image. The masked image in Figure 8(b) is referred to as 4sq hereafter.

Figure 9 shows the shaded areas on the water; only the plants in the shaded areas are left in the image after masking.

Figure 8: The Sequoia mosaic taken on August 8, 2017: (a) the original mosaic; (b) the image with the water, built-up area, and tree masks applied.

Figure 9: Enlarged image subset: (a) original image; (b) NDVI mask applied to the image.

The application of the CHM mask removed most of the tall trees around the estuary and the trees on the island in the middle of the estuary. Some of the treetops at the edges were masked manually using knowledge gained during the fieldwork (Figure 10).

Figure 10: (a) Image with tall trees on the estuary bank; (b) image masked with the CHM mask.

4.2 Spectral Library (Spectral Signatures)

Figure 11 shows the spectral signatures of the plant types, which were considered to help decide on the band indices used for classification. The color bars represent the wavelength ranges of the Sequoia camera multispectral bands. The reflectance values of all the plant types lie between 5% and 15% in the green region: Cattails, Lotus, and Lily show very similar values, compared to the lower reflectance of Phragmites and the higher reflectance of Duckweed. Similarly, the reflectance values of all the plants in the red region vary between 5% and 10%, except for Duckweed, whose reflectance is higher, at about 10%. Although the reflectance values of the other plants decrease by about 10% from the green to the red region, Phragmites shows almost the same values in both regions. The red-edge band is the narrowest spectral band of the Sequoia camera, with a wavelength range of only 10 nm compared to the 40 nm ranges of the three other multispectral bands. The maximum reflectance value within the red-edge region (before reaching the NIR plateau) is approximately 15% for Phragmites, while it ranges from 30% to approximately 65% for Lotus.

Figure 11: Spectral signatures of the plant types in the estuary. The color bars show the spectral ranges of the Green (G), Red (R), Red Edge (RE), and Near-Infrared (NIR) bands of the Sequoia camera.

The highest variation in reflectance among the plant types is observed in the NIR region, where Phragmites shows the lowest reflectance, about 20%, and Duckweed the second lowest, approximately 30%. Higher reflectances are observed for the Cattails, Lily, and Lotus classes, with approximate values of 50%, 60%, and 70%, respectively. The similarity of the reflectance in the red and green spectral regions and the apparent differences in the NIR region between Phragmites and the other plants could be used to discriminate Phragmites from the other plant types in the study. The use of the reflectance measurements to discriminate Phragmites from the other plants is discussed in Section 4.3.

4.3 ANOVA Results: NDVI for August and October

The previous section on the spectral library emphasizes the lower reflectance of Phragmites in the NIR region compared to the Cattails and Lotus classes. The mean NDVI values of 10 pixels for each of Phragmites, Cattails, and Lotus, extracted from the images taken on August 8 and October 18, are shown in Table 6.

Table 6: Mean NDVI values of 10 pixels for each of Phragmites, Cattails, and Lotus from the August 8 and October 18 images

                  Mean NDVI value
Plant type    August 8    October 18
Phragmites    0.76        0.45
Cattails      0.77        0.23
Lotus         0.81        0.12

The reduction in NDVI values from August to October is greater for Cattails and Lotus than for Phragmites. The significance level of the ANOVA test was set to 0.05, and the results of the ANOVA are shown in Table 7.

Table 7: Results of the ANOVAs performed among the NDVI values of Phragmites, Cattails, and Lotus for the August 8 and October 18 images

Date         α      p-value   Conclusion
August 8     0.05   0.26      No significant difference between the means of each pair of classes
October 18   0.05   0.00      Significant difference between the means of at least one pair of classes

The result suggests that there is no significant difference in the mean NDVI values among Phragmites, Cattails, and Lotus for August 8 (p-value = 0.26).

ANOVA performed with NDVI values from the October image shows a significant difference in at least one mean NDVI value.

The result of the Tukey-Kramer test for the October 18 data is shown in Table 8. The highest absolute difference is observed between Phragmites and Cattails; the pairs Cattails-Lotus and Phragmites-Lotus produce lower absolute differences of 19.61 and 14.78, respectively. All the absolute differences exceed the critical value of 5.84, implying that there are significant differences among the mean NDVI values of all three classes extracted from the image acquired on October 18.

Table 8: Results of the Tukey-Kramer test among the NDVI values of Phragmites, Cattails, and Lotus for October 18

Comparison               Absolute Difference   Critical Range   Result
Phragmites vs Cattails   34.39                 5.84             Significant
Phragmites vs Lotus      14.78                 5.84             Significant
Cattails vs Lotus        19.61                 5.84             Significant
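For reference, the same test sequence can be sketched in Python with scipy and statsmodels (the NDVI samples are synthetic, drawn around the October means of Table 6; statsmodels' Tukey HSD applies the Kramer adjustment for unequal group sizes):

    import numpy as np
    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(0)
    ndvi_phrag = rng.normal(0.45, 0.02, 10)     # placeholder 10-pixel samples
    ndvi_cattails = rng.normal(0.23, 0.02, 10)
    ndvi_lotus = rng.normal(0.12, 0.02, 10)

    F, p = f_oneway(ndvi_phrag, ndvi_cattails, ndvi_lotus)
    if p < 0.05:  # significance level of the ANOVA
        values = np.concatenate([ndvi_phrag, ndvi_cattails, ndvi_lotus])
        groups = ['Phragmites'] * 10 + ['Cattails'] * 10 + ['Lotus'] * 10
        print(pairwise_tukeyhsd(values, groups, alpha=0.05))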

4.4 Parametrization Using Four Original Bands with No Additional Feature Layers (4sq)

The optimum parameter values for each classifier are shown in Table 9. The 4sq image was first classified with several combinations of parameters using each classifier. The combination of parameters that resulted in the lowest errors of omission and commission for the Phragmites class, as well as the highest classification accuracy and Kappa value, was selected as the optimum parameter set for each classifier (the complete results of the parameter optimization are given in Appendix A, Tables 1.1, 1.2, 1.3, and 1.4).


Table 9: Optimum parameter values selected for each pixel and object based classifier

Type           Classifier   Parameter   Value
Pixel based    SVM          C           50
                            Gamma       0.25
               NN           TTC         0.20
                            TR          0.20
                            TM          0.90
                            RMSEC       0.01
                            NHL         1
                            NI          1000
Object based   SVM          C           100
                            Gamma       0.001
               kNN          k           2

TTC - Training Threshold Contribution; TR - Training Rate; TM - Training Momentum; RMSEC - Root Mean Square Exit Criteria; NHL - Number of Hidden Layers; NI - Number of Iterations

4.5 Classified Maps and Accuracy Assessments Using Different Combinations of Feature Layers for each Proposed Classifier

4.5.1 Pixel Based Classification

To remind the reader, the image acquired on August 8, 2017, with four bands (G, R, RE, NIR) (4sq) was classified with the pixel based SVM classifier into five classes (Phragmites, Cattails, Lotus, Lily, and Duckweed), and the validation was performed using all the samples within the three-fold cross-validation method. The results of the classification are shown in Table 10 and Table 11.

Table 10: Confusion matrix of 4sq using three-fold cross-validation (pixel based classification using SVM)

                              Ground Truth %
Class        Phragmites   Cattails   Lotus    Lily     Duckweed
Phragmites   82.42        1.15       8.89     0.00     0.00
Cattails     1.59         98.28      1.67     0.00     0.00
Lotus        15.99        0.57       82.35    5.71     0.00
Lily         0.00         0.00       7.08     90.51    2.17
Duckweed     0.00         0.00       0.00     3.79     97.83
Total        100.00       100.00     100.00   100.00   100.00

The confusion matrix shows that 82.42% of Phragmites is classified correctly; the rest is misclassified as Cattails or Lotus (1.59% and 15.99%, respectively). The errors of commission and omission are shown in Table 11.

Table 11: Errors of commission and omission for each class of the 4sq image, averaged over the three-fold cross-validation iterations

Class        Commission %   Omission %
Phragmites   15.68          17.58
Cattails     2.70           1.72
Lotus        16.95          17.65
Lily         8.90           9.49
Duckweed     4.64           2.17

According to the values shown in Table 11, the error of commission for the Phragmites class indicates that 15.68% of the area mapped as Phragmites is not Phragmites on the ground; Table 10 shows that 1.15% of Cattails and 8.89% of Lotus are classified as Phragmites.

The error of omission for the Phragmites class indicates that 17.58% of the Phragmites on the ground is not labeled as Phragmites in the classified image. The confusion (error) matrix (Table 10) further shows that there is no misclassification between Phragmites and the Lily or Duckweed classes, meaning that the classifier separates these classes using only the four original bands (without additional feature layers).

The next step of the classification included the feature layers generated from the UAV data, as explained in the methods section (band indices, texture, CHM, and PC bands), which were added, one by one, to the 4sq image, with SVM conducted for each combination. An accuracy assessment was recorded at each step, and the results are reported in Table 12.


Table 12: Overall accuracy (O.A.), Kappa value, and errors of commission (C.E.) and omission (O.E.) for the Phragmites class, for each layer separately stacked to 4sq (pixel based classification using SVM)

Feature type            Layer            O.A. %   Kappa   C.E. % (Ph)   O.E. % (Ph)
Band Indices            NIR/Green        92.29    0.90    4.00          17.24
(October 18)            NIR/Red          90.86    0.86    8.16          22.41
                        NDVI             93.43    0.92    1.92          12.07
                        NDGI             91.14    0.89    9.80          20.69
                        NDRE             92.57    0.91    8.16          22.41
                        NIR/Red-edge     91.71    0.90    5.88          17.24
                        Red-edge/Red     90.86    0.89    8.00          20.69
                        Green/Red-edge   90.86    0.89    8.16          22.41
                        Red/Green        93.14    0.91    0.00          17.24
Texture                 Mean             91.71    0.90    10.20         24.14
(August 8)              Variance         91.71    0.90    7.69          17.24
                        Homogeneity      93.14    0.91    2.08          18.97
                        Contrast         90.57    0.88    8.16          22.41
                        Dissimilarity    93.14    0.91    2.08          17.97
                        Entropy          91.71    0.90    2.13          20.69
                        Second moment    92.28    0.90    2.08          18.97
                        Correlation      92.57    0.91    9.80          20.69
Elevation (August 8)    CHM              93.14    0.91    0.00          13.79
Principal Components    PC1              91.14    0.89    0.00          18.97
(August 8)              PC2              93.71    0.92    0.00          15.52
                        PC3              90.00    0.87    15.09         22.41
                        PC4              90.28    0.88    8.16          22.41


The band indices from August, including NDVI, were found not to contribute to the classification accuracy. However, consistent with the ANOVA showing significant differences in NDVI values among Phragmites, Cattails, and Lotus in October (Table 8), the classification is improved by the October NDVI layer.

As shown in Table 12, the lowest error of omission for Phragmites is achieved with the NDVI layer calculated from the October 18 image. The second lowest error of omission for the Phragmites class is achieved with the CHM layer, and the third and fourth lowest with the PC2 and Variance (Var) layers, respectively.

To remind the reader, after the accuracies were calculated for each step, the layers with the minimum errors of omission for Phragmites within each feature group (band indices, texture, CHM, and PC) were used in the subsequent analysis to further reduce the error of omission for Phragmites.

The classification accuracies improved by adding feature layers are shown in Table 13. With only 4sq, the SVM classifier yields the highest overall classification accuracy (90.47%) and the highest Kappa value (0.88), while the NN classifier yields the lowest overall accuracy (84.58%) and the lowest Kappa value (0.81). However, among the three pixel based classifiers, when both the NDVIOct and CHM layers are stacked, the highest overall classification accuracy (94.80%) and the highest Kappa value (0.93) are achieved with the NN classifier, and the lowest overall accuracy (92.92%) and lowest Kappa value (0.91) with the ML classifier. The overall accuracy and Kappa values decrease when more layers are added to the NN and SVM classifiers, while the ML classifier fails to produce meaningful results with the additional layers (the classified images are included in Appendix B, Figures 3.1, 3.2, and 3.3).

Table 13: Overall accuracy (O.A.) and Kappa value for classification with feature layers stacked to the 4sq image - pixel based

             4sq             4sq + NDVIOct   4sq + NDVIOct   4sq + NDVIOct   4sq + NDVIOct
                                             + CHM           + CHM + PC2     + CHM + PC2 + Var
Classifier   O.A.%   Kappa   O.A.%   Kappa   O.A.%   Kappa   O.A.%   Kappa   O.A.%   Kappa
NN           84.58   0.81    91.26   0.89    94.80   0.93    92.96   0.91    94.68   0.93
SVM          90.47   0.88    95.73   0.95    94.58   0.93    94.33   0.93    93.12   0.91
ML           88.23   0.85    93.46   0.92    92.92   0.91    N.A.    N.A.    N.A.    N.A.

As the focus of the study is to map Phragmites with higher accuracy, the achievement of lower errors of commission and omission is more important than the overall accuracy. Table 14 shows the errors of commission (C.E.) and omission (O.E.) in the classifications.

Table 14: Errors of commission (C.E.) and omission (O.E.) for the Phragmites class - pixel based

             4sq             4sq + NDVIOct   4sq + NDVIOct   4sq + NDVIOct   4sq + NDVIOct
                                             + CHM           + CHM + PC2     + CHM + PC2 + Var
Classifier   C.E.%   O.E.%   C.E.%   O.E.%   C.E.%   O.E.%   C.E.%   O.E.%   C.E.%   O.E.%
NN           22.53   17.70   6.06    3.97    4.51    1.59    3.13    3.97    2.96    2.42
SVM          15.68   17.58   4.86    5.56    3.03    4.76    3.03    5.55    3.03    5.55
ML           20.58   12.74   3.62    4.76    2.22    3.17    N.A.    N.A.    N.A.    N.A.

With the inclusion of the NDVIOct layer, the error of commission for Phragmites decreases most for ML (from 20.58% to 3.62%), suggesting that ML performs best if only the NDVIOct layer is added. The error of omission, on the other hand, indicates that NN performs best when only the NDVIOct layer is added (from 17.70% to 3.97%). Both measures, the error of commission for ML and the error of omission for NN, continue to decrease with the addition of the CHM layer, after which no further clear improvement is observed. The least errors of omission and commission reach 1.59% and 2.22% with the NN and ML classifiers, respectively, with the inclusion of the NDVIOct and CHM feature layers.

Figures 12(a) and (b) show the images classified with the pixel based NN classifier using 4sq and 4sq + NDVIOct + CHM, respectively.

Figure 12: Pixel based NN classification results: (a) 4Sq, (b) 4Sq + NDVIOct + CHM


Figure 13: Zoomed Phragmites patch classified with the NN classifier: (a) 4Sq, (b) 4Sq + NDVIOct + CHM

The area in the black boxes in Figure 12, which represents a Phragmites patch, is shown enlarged in Figure 13. The reduction of the Phragmites patches from Figure 12(a) to Figure 12(b) reflects the reduction of the error of commission for the Phragmites class from 22.53% to 4.51%. In Figure 13(a), the western boundary of the Phragmites patch, which lies between the Phragmites and Lily patches, is misclassified as Lotus; this area is classified correctly as Phragmites in Figure 13(b).

If the effect of adding layers is monitored for each classifier separately (e.g., ML), the change in the Phragmites class from Figure 14(a) to (c) shows the improvement of the Phragmites classification accuracy from 87.26% to 96.83% (Table 15) with the ML classifier. This coincides with the results in Table 14 and the conclusion mentioned above that ML generates its least errors of commission and omission, 2.22% and 3.17%, respectively, with the band combination 4Sq + NDVIOct + CHM (Figure 14(c)).


Figure 14: Pixel based ML classification results: (a) 4Sq, (b) 4Sq + NDVIOct, (c) 4Sq + NDVIOct + CHM

Table 15: Phragmites classification accuracy (%) with feature layers stacked - pixel based (values extracted from the confusion matrices)

Classifier   4sq     4sq +     4sq + NDVIOct   4sq + NDVIOct   4sq + NDVIOct
                     NDVIOct   + CHM           + CHM + PC2     + CHM + PC2 + Var
NN           82.30   96.03     98.41           96.03           97.58
SVM          82.42   94.44     95.24           94.45           94.45
ML           87.26   95.24     96.83           N.A.            N.A.

From the accuracy (rather than the error) point of view, the finding coincides with Table 14, suggesting that the highest classification accuracy is achieved with the NN classifier using the 4Sq + NDVIOct + CHM band combination (Table 15). The main conclusion of this section is that all three classifiers exhibit their highest Phragmites classification accuracy with the same band combination (4Sq + NDVIOct + CHM); the addition of more bands reduces the accuracy.

4.5.2 Object Based Classification

The segmented images were classified following the same band stacking sequence used with the pixel based classifiers. The results of the classifications are shown in Tables 16 and 17 (the classified images are included in Appendix B, Figures 3.4 and 3.5).

Table 16: Overall classification accuracy and Kappa value with feature layers stacked to the 4sq image - object based

             4sq             4sq + NDVIOct   4sq + NDVIOct   4sq + NDVIOct   4sq + NDVIOct
                                             + CHM           + CHM + PC2     + CHM + PC2 + Var
Classifier   O.A.%   Kappa   O.A.%   Kappa   O.A.%   Kappa   O.A.%   Kappa   O.A.%   Kappa
SVM          87.69   0.84    89.23   0.86    92.31   0.90    92.30   0.90    91.54   0.89
kNN          86.92   0.84    84.62   0.81    89.23   0.86    90.77   0.88    84.62   0.81

With the 4sq image, SVM performs better than kNN, yielding the highest overall accuracy of 87.69% and the highest Kappa value of 0.84. Adding NDVIOct improves the overall classification accuracy of SVM by approximately 2%, while for kNN the overall accuracy is reduced by 2%. The highest overall classification accuracy is achieved with SVM for the band combination 4Sq + NDVIOct + CHM. Stacking more bands does not improve the overall accuracy and Kappa values, with the exception of PC2 added to kNN.

The changes in the errors of commission and omission for the Phragmites class with the stacking of the feature layers are shown in Table 17. Stacking the NDVIOct layer reduces the error of commission for the Phragmites class from 28.57% to 0.00% with SVM but increases the same error for kNN. However, similarly to the pixel based classification approach (see Section 4.5.1), the minimum errors of omission are observed for the 4sq + NDVIOct + CHM combination, and no further reduction is observed by adding more layers.

Table 17: Errors of commission (C.E.) and omission (O.E.) for the Phragmites class - object based

             4sq             4sq + NDVIOct   4sq + NDVIOct   4sq + NDVIOct   4sq + NDVIOct
                                             + CHM           + CHM + PC2     + CHM + PC2 + Var
Classifier   C.E.%   O.E.%   C.E.%   O.E.%   C.E.%   O.E.%   C.E.%   O.E.%   C.E.%   O.E.%
SVM          28.57   25.00   0.00    20.00   10.00   10.00   0.00    10.00   0.00    10.00
kNN          2.13    20.69   0.00    40.00   0.00    10.00   0.00    10.00   0.00    10.00

From the accuracy (rather than the error) point of view, the same can be observed: stacking the PC2 and Variance layers does not change the Phragmites classification accuracy for either classifier (Table 18).

Table 18: Phragmites classification accuracy (%) with feature layers stacked - object based (values extracted from the confusion matrices)

Classifier   4sq     4sq +     4sq + NDVIOct   4sq + NDVIOct   4sq + NDVIOct
                     NDVIOct   + CHM           + CHM + PC2     + CHM + PC2 + Var
SVM          75.00   80.00     90.00           90.00           90.00
kNN          75.00   60.00     90.00           90.00           90.00

Figure 15 shows the improvement of the classification accuracy for SVM as the feature layers are stacked. The remaining error of omission reflects the exclusion of 10% of the Phragmites from the classified image.

Figure 15: Object based SVM classification results: (a) 4Sq, (b) 4Sq + NDVIOct, (c) 4Sq + NDVIOct + CHM

The main conclusion of this section is that 4Sq + NDVIOct + CHM is the combination representing the cutoff point beyond which the errors of omission and commission do not improve with additional layers. kNN exhibits lower errors of commission, while SVM is more consistent and predictable in reducing the errors of omission.

Comparing the pixel based (Section 4.5.1) and object based (this section) classifiers, there is a noticeable trend that NN (pixel based) is the best classifier, among those examined in this study, for detecting Phragmites. Figure 16 compares a Phragmites patch observed in the RGB image with the same patch classified using 4Sq + NDVIOct + CHM and three of the classification methods used in the study. The grayish patch in the middle of Figure 16(a) is Phragmites, and the darker green plants around the patch are Cattails.

Figure 16: A Phragmites patch observed in different images: (a) RGB image, (b) pixel based NN, (c) pixel based SVM, (d) object based SVM (b, c, and d classified using 4Sq + NDVIOct + CHM)

Overall, the lowest error of omission for the Phragmites class results from the pixel based NN classification (Table 14). Figure 16(b), generated with the pixel based NN classifier, corresponds to the highest classification accuracy for Phragmites and the lowest misclassification of Phragmites into Cattails.

The object based SVM (Figure 16(d)) shows the lowest Phragmites classification accuracy among the classifications with 4Sq + NDVIOct + CHM, mostly due to the misclassification of Phragmites into the Cattails class.


CHAPTER V: DISCUSSION

5.1 UAVs in Identifying Invasive Plants

This study demonstrates the advantage of using UAVs to map invasive Phragmites in the OWC estuary, which would not be possible with coarser airborne and satellite images. This is well demonstrated by the results, as all the classifiers achieve high overall accuracies (from O.A. = 92.92% to O.A. = 94.80%) and relatively low errors of commission and omission for the plant types of interest (from C.E. = 2.22% to C.E. = 4.51%, and from O.E. = 1.59% to O.E. = 4.76%, respectively). To satisfy Objective 1, several classifiers and different input combinations were used in each classification method to show the effectiveness of UAV data in identifying invasive Phragmites australis in the OWC estuary. The classifiers were chosen from the most common methods in the literature so that their effects could be explored in a single study.

Each classifier has a different statistical background (see Table 3 and Section 2.6.2), which allows the various statistical approaches to be explored and compared at the same time. As emphasized in several studies, UAVs are preferable for land cover mapping at the local scale because of the very high spatial resolution of UAV images and their flexibility to collect data at a user defined temporal resolution at low cost (Michez et al., 2016; Müllerová et al., 2017; Niphadkar et al., 2017; Samiappan et al., 2017).

To satisfy Objective 2 of this study, the NN pixel based classifier has been identified as the most suitable algorithm to distinguish between invasive Phragmites australis and other vegetation types in the OWC estuary (see below for more discussion). Furthermore, it has been demonstrated that the combination of the raw images (4sq) with the UAV-derived feature layers NDVIOct and CHM produces a high overall accuracy (O.A. = 94.80%), the lowest error of omission (O.E. = 1.59%), and a relatively low error of commission (C.E. = 4.51%) for Phragmites. Low errors of omission are particularly important for the management of invasive species, and one of the goals of this study has been to find the classifier that exhibits the lowest omission error for Phragmites. It was observed during the field work that Phragmites spreads faster in the estuary than other plant types. Therefore, it is important not to exclude any Phragmites plants in eradication processes.

The impact of the series of different feature layers derived from the UAV images suggests that the user has to be careful when selecting the optimal number of layers, and that the selection has to be based on the requirements, which may relate to the overall accuracy or to a more specific requirement targeting one plant type/species. In other words, different classifiers produce different results, and different feature layers affect each of the accuracies (the overall accuracy or the error of commission/omission for a specific plant) differently. However, as demonstrated in this study, it should be emphasized that the use of too many features can decrease the classification accuracy (Price et al., 2002) as well as increase the processing time. Consequently, the feature layers should be selected so that they are most effective in differentiating the class of interest (Lu & Weng, 2007).

The findings in this study suggest that the classification of the original bands alone is not sufficient to reach the best accuracy. Adding layers to the original data, through time (August vs. October) and through the proposed data fusion approach, improves all the classifications used in this study. Since the layers were added to the original data one at a time, the contribution of each additional feature layer could be assessed; the results suggest that the band (vegetation) indices have a small impact on the accuracy unless they are acquired at different times of the growing season. Although the main idea of the study has been to concentrate on the August image, the integration of data acquired at different times, in the late summer and the mid-fall, has advantages with respect to the band indices. The NDVI layer reduces the errors of omission and commission of the original data more than the other simple and normalized band indices. Using the 4sq data alone, the classifiers misclassified Phragmites as Lotus and Cattails.

Compared to Duckweed and Lily, these three plant types are darker in the green spectral region, and their remote sensing (reflectance) information is similar. Differences in the phenology and vigor between invasive Phragmites and other (native) plants are more prominent at the end of the growing season than in the middle of the growing season (Bradley, 2014). Thus, the image captured by the Sequoia camera on October 18 improves the overall accuracy as well as the accuracy of Phragmites identification. A similar observation was reported by Lantz and Wang (2013), who noted that the similarity in NDVI between the plants during the growing season could result in a lower classification accuracy for Phragmites. The higher NIR reflectance values of Phragmites at the end of the growing season were used to differentiate Phragmites from the rest of the plants, including Cattails (Gilmore et al., 2008). Several other studies used images from the end of the summer to successfully detect invasive Phragmites (Laba et al., 2010; Lantz & Wang, 2013; Samiappan et al., 2017a). The same observation occurs in the current study, as the NDVI layer from October 18 reduces the misclassification of Phragmites into the Cattails and Lotus classes with all classifiers, especially with the SVM classifier.

Several previous studies (De Castro et al., 2018; Martin et al., 2018; Sankey et al., 2017) showed the importance of incorporating CHM data to locate invasive species using UAV images. For example, a reduction of the errors of omission and commission for Phragmites was observed in the study of Samiappan et al. (2017a), which used the SVM classifier. Similarly, the results of this study show the importance of using CHM to identify Phragmites, as it reduces both errors of omission and commission for Phragmites.
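
A CHM derived from UAV photogrammetry is simply the difference between the surface model and the terrain model. A simplified sketch, assuming a co-registered digital surface model (DSM) and digital terrain model (DTM) with hypothetical values:

```python
import numpy as np

def canopy_height_model(dsm, dtm):
    """CHM = DSM - DTM: the photogrammetric surface minus the bare-earth
    terrain; negative values (noise where the surfaces cross) are clipped."""
    chm = dsm.astype("float32") - dtm.astype("float32")
    return np.clip(chm, 0.0, None)

# Hypothetical co-registered elevation grids in metres.
dtm = np.full((512, 512), 100.0, dtype="float32")            # bare earth
dsm = dtm + np.random.rand(512, 512).astype("float32") * 3.0  # vegetated surface
chm = canopy_height_model(dsm, dtm)
```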

In the literature, textural measurements have been successfully used for Phragmites identification with UAV and high resolution imagery (Laba et al., 2010; Samiappan et al., 2017b). However, the use of textural measurements does not show a considerable reduction of the errors of omission and commission for Phragmites in this study. Similarly, Bradley (2014) mentioned that texture did not produce better results in invasive plant identification when the target plant density was small. In this study, the Phragmites patches on the study site are not dense, except the patch situated just south of the road. Liu (2018) used Mean, Variance, and Entropy for invasive plant detection, as these three features were the least correlated in that study. The use of Variance in this study does not make a considerable impact on any of the classifiers; there is no further decrease in the errors of commission and omission. A similar trend is observed in both the object based and pixel based classification methods.
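
The texture layers in this study were calculated with ENVI 5.4; the following is only an illustrative sketch of the underlying occurrence-based measure (a moving-window variance), not the ENVI implementation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(band, size=3):
    """Moving-window variance texture: Var = E[x^2] - (E[x])^2."""
    band = band.astype("float32")
    mean = uniform_filter(band, size=size)
    mean_of_squares = uniform_filter(band * band, size=size)
    return mean_of_squares - mean * mean

nir = np.random.rand(512, 512).astype("float32")  # hypothetical NIR band
variance_layer = local_variance(nir, size=3)
```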

Combinations of different layers were used in other studies as well, although it is often not clear how the addition of each layer affected the classification methods; this is exactly what this study emphasizes. The highest classification accuracies were achieved when NDVIOct and CHM were used as additional feature layers to the original 4Sq image with all the classification approaches. It should be noted that this study assumed that all four original bands (G, R, RE, and NIR) are important in the classification process, although the use of multiple vegetation indices instead of all the original data could be an option to further reduce the number of input layers in an optimal classification approach.

5.2 Importance of Sampling Methods in Classification and Validation

The sampling design for collecting training and validation ROIs is an important factor for supervised image classification and validation, respectively. Stehman and Czaplewski (1998) defined the sampling design as the protocol that should be followed during the selection of sampling units as training and validation ROIs. Lu and Weng (2007) highlighted that the complexity of the study site, the characteristics of the remote sensing data, the image pre-processing methods, and the classification approach dictate the sampling design.

The sampling design comprises a collection of basic sampling units. Janssen and Van der Wel (1994) claimed that a single pixel is the most suitable sampling unit in a raster image for a pixel based classification. A sample should include a sufficient number of basic sampling units to represent all spectral properties of each class (Foody & Mathur, 2004; Lu & Weng, 2007). However, if there are limitations such as poor accessibility, a sampling unit can consist of multiple pixels with an applied spatial smoothing technique (Lucas et al., 1994).

As described by Stehman (2009), an ideal sampling design should be cost-effective, provide meaningful results that achieve the classification objectives, and accommodate any sampling data errors. However, it is often not practical to create a perfect sampling design due to limited resources and field accessibility. Consequently, the sampling design should be adequate to address the particular research objectives rather than perfect (Stehman, 2009). Foody et al. (2006) studied possible methods to reduce the required training sample size without losing classification accuracy. The study concluded that it is possible to use a small dataset when the mechanism of the classifier is known and the objective of the study is to map a specific class.

Further, a good sampling design is vital to achieving high classification accuracy; conversely, a biased field sampling design can result in low classification accuracies (Laba et al., 2010). However, selecting a sufficient number of training samples becomes a challenge in classification studies if the landscape contains few patches of a particular plant type or if the landscape is complex (Lu & Weng, 2007). Therefore, a stratified random sampling design has been used in this study to reduce uncertainties related to the sampling design. For sample stratification, both the geographic distribution and the abundance of the five plant types in the estuary were considered (Stehman & Czaplewski, 1998). The reduced number of training and validation samples for the Phragmites and Duckweed classes may have affected the classification accuracies to a certain level in the study, but it is believed that the stratified random sampling design decreases this possible negative effect.
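
As an illustrative sketch of this design (not the exact field procedure used in the study; the labels and sampling fraction are assumptions), a stratified random sample draws the same fraction independently from each class, so rare classes such as Phragmites are always represented:

```python
import numpy as np

rng = np.random.default_rng(42)

def stratified_sample(labels, fraction=0.3):
    """Draw the same fraction independently from each class (stratum)."""
    picked = []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        n = max(1, round(fraction * idx.size))
        picked.append(rng.choice(idx, size=n, replace=False))
    return np.concatenate(picked)

# Hypothetical per-pixel labels (0 = Phragmites, ..., 4 = Lily).
labels = rng.integers(0, 5, size=10_000)
sample_idx = stratified_sample(labels, fraction=0.3)
```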

In addition, a cross validation method has been applied to compensate for the relatively low (but still sufficient) number of samples per class. The grown ROIs include 4 or 5 pixels on average for each sampling point. Growing the ROIs to include more pixels captured more spectral variability within each class. This approach could increase the probability of classifying more pixels into the Phragmites class; in other words, the method can potentially reduce the errors of omission. The inclusion of more heterogeneous pixels in the training and validation data has been found to improve the classification accuracy (Shao & Lunetta, 2012).
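
The three-fold rotation of ROI sets can be sketched with scikit-learn's StratifiedKFold as a stand-in for the workflow used here; the features, labels, and SVM settings below are placeholders:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((300, 6))          # hypothetical per-pixel feature vectors
y = rng.integers(0, 5, size=300)  # hypothetical class labels

skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in skf.split(X, y):
    clf = SVC(kernel="rbf")  # placeholder parameters
    clf.fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print(f"mean accuracy over 3 folds: {np.mean(scores):.3f}")
```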

5.3 Classifier Algorithms for Identifying Invasive Phragmites

The use of optimum parameters for the non-parametric classifiers (SVM, NN, and kNN) has resulted in the higher classification accuracies in this study. High classification accuracies with the same classifiers were also achieved in the previous studies of Foody and Mathur (2004), Ndehedehe et al. (2013), and Qian et al. (2015). Parameter optimization for the NN classifier requires more effort than for the other classifiers because six parameters must be set. The parameters used by Ndehedehe et al. (2013) for the NN classifier have also provided the highest classification accuracies in this study.
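
A single-hidden-layer network of the kind discussed in this section can be sketched with scikit-learn's MLPClassifier as a stand-in for the ENVI neural network used in the thesis; every numeric setting below is a placeholder rather than a reported thesis parameter:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.random((300, 6))          # hypothetical stacked features per pixel
y = rng.integers(0, 5, size=300)  # hypothetical class labels

X_std = StandardScaler().fit_transform(X)

# One hidden layer, mirroring the restriction discussed in this section;
# layer width, activation, learning rate, and iteration count are placeholders.
nn = MLPClassifier(hidden_layer_sizes=(16,), activation="logistic",
                   learning_rate_init=0.01, max_iter=1000, random_state=1)
nn.fit(X_std, y)
print(nn.score(X_std, y))
```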

Many studies have suggested that non-parametric classifiers result in higher classification accuracies than parametric classifiers (Ghimire et al., 2012; Hansen & Reed, 2000; Huang et al., 2002; Shao & Lunetta, 2012). In this study, both the non-parametric SVM and NN classifiers have resulted in slightly higher overall classification accuracies than the parametric ML classifier.

However, all three pixel based classifiers have achieved relatively high overall classification accuracies, between 92.9% and 94.8%, and the error of omission for Phragmites is between 1.59% and 4.76%. In contrast to previous studies, which suggested that SVM outperformed NN in overall classification accuracy (Mas & Flores, 2008; Omer et al., 2015; Shao & Lunetta, 2012), the current study shows a similar trend for all pixel based classifiers, with each reaching relatively high overall accuracy. As mentioned above, all classifiers have produced meaningful classification results for the 4Sq+NDVIOct+CHM combination. Therefore, as stated in the objectives, the best feature layers to stack with 4sq for successfully mapping Phragmites are NDVIOct and CHM.

In this study, the best classification results are achieved with the pixel based NN classifier. A previous study recommended using only one hidden layer to achieve high classification accuracy with the NN classifier (Ndehedehe et al., 2013). Both the highest classification accuracy and the lowest error of omission for Phragmites are reached with an NN classifier restricted to one hidden layer. The higher flexibility of the NN classifier, owing to the large number of adjustable weights between pairs of nodes (Maxwell et al., 2018), allowed more pixels situated at the boundaries between Phragmites and Cattails patches to be classified as Phragmites. Therefore, the NN classifier produces a lower error of omission than SVM or ML.

Several studies concluded that object based classification performs better than pixel based classification because it creates uniform objects by merging similar pixels into one object (Pande-Chhetri et al., 2017; Whiteside et al., 2011; Yu et al., 2006). Specifically, Lantz and Wang (2013) emphasized that the object based classification method resulted in higher accuracy than pixel based classification for identifying invasive Phragmites, while Bradley (2014) and Pande-Chhetri et al. (2017) reported opposite results when identifying other invasive plants. Interestingly, the current study results in a higher error of omission with the object based methods than with the pixel based classification, similar to the study of Pande-Chhetri et al. (2017). The error of omission for Phragmites in this study is 10% using the object based classification, while for the pixel based classification it ranges between 1.59% and 4.76%.

The segmentation of images, generated prior to object based classification, created pixel aggregates consisting of roughly equal values, and the segments were classified based on the single mean values of the segments. Thus, the segmentation could be slightly erroneous to the extent that Phragmites pixels are confused with Cattails pixels due to similar pixel values. This confusion could lead Phragmites pixels to be aggregated into segments that include Cattails. This situation was observed during the segmentation process in this study, especially at the boundaries between Cattails and Phragmites. Therefore, the segmentation was an initial step where some uncertainties were generated, suggesting that this step is critical for object based classifications. An edge enhancement technique can be used to overcome the errors at the boundaries of the segments and improve the classification accuracy (Ali & Clausi, 2001), which will be considered in a future study.
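
The segment-then-classify-by-mean logic described above can be sketched as follows; SLIC from scikit-image is an illustrative stand-in for the segmentation algorithm actually used, and the image array is hypothetical:

```python
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(2)
image = rng.random((256, 256, 5)).astype("float32")  # hypothetical 5-band stack

# Aggregate pixels with roughly equal values into segments.
segments = slic(image, n_segments=400, compactness=10.0, channel_axis=-1)

# Each segment is then represented by its mean band values, which is what
# an object based classifier sees instead of the individual pixels.
labels = np.unique(segments)
segment_means = np.array([image[segments == s].mean(axis=0) for s in labels])
print(segment_means.shape)  # (number of segments, 5)
```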

Applying the three-fold cross-validation to the pixel based classifications while performing the object based classifications with only one ROI set could be one of the reasons for the difference in accuracies between the two classification methods and for the best performance of the pixel based classifier. As mentioned, the selection of ROIs was kept constant for all classifications in this study. However, due to the small study area and the small patches of plants, especially Phragmites, the segmentation process within the object based classifications could not include all three ROI sets, and the three-fold cross-validation could not be applied to the object based classification. This observation may suggest that the three-fold cross validation method is critical for any classification. Although a solid conclusion could be drawn only if the study site and plant patches were large enough to apply the three-fold cross validation to the object based classifications as well, one can surmise that the selection of ROIs may be more important than the classification method for UAV images.

Several advantages and disadvantages of classification methods are observed in the study.

According to the results, the pixel based approach is preferred over object based classification to map Phragmites in this study. Pixel based classification is much easier to perform than object based classification. In the object based method, a considerable amount of time is needed to set the segmentation and classification parameters. Further, repeating the pixel based methods is much easier than repeating the object based methods when feature layers are modified or added.

Although the ability to incorporate features such as texture during segmentation is a key benefit in object based classifications (Dronova, 2015), it increases the processing time significantly. To reduce the processing time, texture layers were calculated only with ENVI 5.4.

Although the pixel based classification methods involve fewer steps than the object based methods, it is important to mention that the NN classifier has the longest processing time among the pixel based methods. In other words, the highest classification accuracy was achieved with NN in this study; however, its long processing time should be considered when a tradeoff between accuracy and processing time is acceptable.

Overall, the parameter optimization of the SVM and NN classifiers needs more time and effort compared to ML, which can be implemented easily. However, a disadvantage encountered with the ML classifier in this study is that it does not provide meaningful results when the number of bands exceeds six; this was also demonstrated in the study of Cheeseman et al. (1988). In contrast, the non-parametric classifiers (NN and SVM) are not considerably affected when the number of feature layers increases, apart from a slight decrease in accuracy. In the object based classifications, the parameters of the kNN classifier are much easier to set than those of SVM; however, its classification accuracy responds less consistently to the addition of feature layers.

5.4 Recommendations for Phragmites Eradication

Phragmites spreads in wetlands sexually by seed germination and asexually through stolons and rhizomes (Kettenring & Mock, 2012). Rhizomes and stolons spread Phragmites around areas where it is already established, while seeds predominantly drive spread to new, distant areas. Among these modes of reproduction, Phragmites disperses mostly by stolons and rhizomes rather than by seed, as the seeds are often sterile and slow to develop new Phragmites patches (Hudon et al., 2005; Mal & Narine, 2004; Mauchamp et al., 2001). However, when the existing Phragmites patches are large, seed production increases; consequently, the spread of Phragmites through seed germination becomes more prominent (Hudon et al., 2005). Since the study site has a few patches of Phragmites that are not well established, it is critical to limit the seed dispersion, which happens in the fall. Therefore, it is essential to eradicate the existing Phragmites patches before the start of the fall to restrict the expansion of Phragmites from existing patches to new areas.

The difference in phenology between Phragmites and the other plants in the estuary is identified as useful in this study. As discussed in the previous paragraphs, it is vital to detect Phragmites before the beginning of fall. Bradley (2014) notes that invasive species have phenological patterns with longer growing seasons than native plants; therefore, the NDVI value of Phragmites should be higher than that of native plants at the beginning of summer. If UAV data are acquired at the beginning of summer instead of mid-fall, it is expected that the method used in this study could locate Phragmites before the end of summer. It is anticipated that the estuary managers can use UAVs and the results of this study to eradicate the dense Phragmites patches and substantially reduce the dispersion of Phragmites in the estuary.

Our efforts to collect UAV data over the study site at the beginning of summer 2018 were not successful due to frequent disturbance of the UAV flights by eagles nesting in the vicinity.

5.5 Uncertainties in the Study

Similar to other studies, a few limitations related to the use of UAVs are involved in this study. The main limitation is the challenge of identifying hidden, thin patches of Phragmites located under the dense tree canopies near the estuary bank. This means that some Phragmites found under densely grown trees cannot be identified with fixed-wing UAVs; it is surmised that multi-rotor UAVs may be more practical for these hidden areas. Also, a few sparse Phragmites patches and a few areas with individually standing Phragmites plants may not have been captured by the UAV images. The Sequoia camera, with its 13.9 cm spatial resolution, successfully captures Phragmites patches with an area larger than its pixel size. Data acquisition using UAVs limits the area of the study site due to limited battery power and the maximum distance between the UAV and its controlling station. Therefore, this method cannot be applied to areas much larger than our study site.

Further, the presence of shaded areas hampers information extraction by altering the pixel values in remote sensing imagery. This problem is more pronounced in UAV-related studies than in satellite data because of the finer spatial resolution of UAV images, and it represents a real challenge when shaded areas must be removed (Simic Milas et al., 2017). ROIs were not selected from the shaded areas in this study to prevent the errors that shadows can introduce in the classification and validation steps, although this certainly involves some bias. Liu (2018) suggested applying a method to remove the shadows or adjusting the classification algorithms to perform better in the shaded areas. Since the shaded areas in this study occur over water, which was masked, they are not expected to considerably affect the results.

Another limitation is related to atmospheric corrections, which were not applied in this study. However, this was not a major concern, as most of the feature layers were calculated using the image taken on August 8. The errors that could be introduced by atmospheric and BRDF (Bidirectional Reflectance Distribution Function) effects were compensated for to a certain extent, as only the NDVI band was based on the image taken on October 18.

The variations among the spectroradiometer measurements taken at different locations in the estuary would have impacted the averaged spectral signatures. However, this variation is unlikely to introduce errors into the results of the study, as the spectral signatures were used only to identify the visual spectral differences among the plant types.

CHAPTER VI: CONCLUSION

The very high resolution images acquired by UAVs are useful for mapping and evaluating wetlands because of their fine spatial resolution, ease of handling, and time and cost flexibility.

Mapping invasive plants in wetlands is vital to wetland management efforts. The objectives of this study were to assess the use of UAVs in identifying the invasive plant species Phragmites, to find the best classification algorithm, and to explore temporal data fusion to better identify Phragmites in the Old Woman Creek estuary.

This study used three pixel based classifiers (NN, SVM, and ML) and two object based classifiers (SVM and kNN) to extract the locations of Phragmites. Four bands (Green, Red, Red edge, and NIR) acquired by the Sequoia camera mounted on the eBee UAV were used initially, and several feature layers (products of the UAV data) were stacked one by one onto the original images. Those with the highest accuracy were selected for further combination and analysis. The feature layers were stacked one by one in the sequence NDVIOct, CHM, PC2, and Var. Accuracy assessments were used as the criteria to select the best classifier based on the overall accuracy and the errors of commission and omission for Phragmites. The best set of feature layers was identified as the August 8 Sequoia image stacked with NDVIOct and CHM. The highest overall classification accuracy (94.80%) and the lowest error of omission for Phragmites were achieved with the pixel based NN classifier and this combination. Further stacking of features reduced the overall accuracy and increased the error of omission.

The results of the study showed that UAVs can be used effectively to map invasive Phragmites in the OWC estuary. The pixel based approach with the Neural Network classifier was identified as the best classification method for identifying Phragmites at the study site.

The findings suggested that pixel based classification was a more appropriate method than the object based approach for identifying invasive Phragmites with the lowest error of omission; this error is considered the more sensitive and more important one in the management of invasive plant species. It was also found that the selection of a proper sampling method could provide high classification accuracies regardless of the classifier.

Furthermore, the use of data acquired in early August and mid-October improved the accuracy of identifying invasive Phragmites compared with using data from a single day. This shows the importance of carefully timing data collection to when the phenology of the plants changes. If data can be collected at the beginning and middle of the summer, mapping Phragmites with this method could be even more effective, as eradication could then be carried out in late summer and fall.

Several requirements might be critical before the process of classification: 1) correct selection of the optimal classifier parameters; 2) a proper sampling strategy (a stratified random sampling method for classification training and validation in this study); and 3) a k-fold cross-validation method to reduce the errors due to the sampling design and the number of samples.


REFERENCES

Aday, D. D. (2007). The presence of an invasive macrophyte (Phragmites australis) does not influence juvenile fish habitat use in a freshwater estuary. Journal of Freshwater Ecology, 22 (3), 535-537.

Adam, E., Mutanga, O., & Rugege, D. (2010). Multispectral and hyperspectral remote sensing for identification and mapping of wetland vegetation: A review. Wetlands Ecology and Management, 18 (3), 281-296.

Addink, E. A., De Jong, S. M., & Pebesma, E. J. (2007). The importance of scale in object-based mapping of vegetation parameters with hyperspectral imagery. Photogrammetric Engineering & Remote Sensing, 73 (8), 905-912.

Al-doski, J., Mansorl, S. B., & Shafri, H. Z. M. (2013). Image classification in remote sensing. Department of Civil Engineering, Faculty of Engineering, University Putra Malaysia.

Ali, M., & Clausi, D. (2001). Using the Canny edge detector for feature extraction and enhancement of remote sensing images. In Geoscience and Remote Sensing Symposium, 2001 (IGARSS '01), IEEE 2001 International, 5, 2298-2300.

Alvarez-Taboada, F., Paredes, C., & Julián-Pelaz, J. (2017). Mapping of the invasive species Hakea sericea using unmanned aerial vehicles (UAV) and WorldView-2 imagery and an object-oriented approach. Remote Sensing, 9 (9), 913.

Anderson, J. R. (1976). A land use and land cover classification system for use with remote sensor data (Vol. 964). US Government Printing Office.

Anderson, K., & Gaston, K. J. (2013). Lightweight unmanned aerial vehicles will revolutionize spatial ecology. Frontiers in Ecology and the Environment, 11 (3), 138-146.

Barnes, E. M., Clarke, T. R., Richards, S. E., Colaizzi, P. D., Haberland, J., Kostrzewski, M., … Thompson, T. (2000). Coincident detection of crop water stress, nitrogen status and canopy density using ground based multispectral data. Paper presented at the Proceedings of the Fifth International Conference on Precision Agriculture, Bloomington, MN, USA, 1619.

Basham May, A. M., Pinder, J. E., & Kroh, G. C. (1997). A comparison of Landsat Thematic Mapper and SPOT multi-spectral imagery for the classification of shrub and meadow vegetation in Northern California, U.S.A. International Journal of Remote Sensing, 18 (18), 3719-3728.

Bergkamp, G., & Orlando, B. (1999). Wetlands and climate change. Paper presented at Exploring Collaboration between the Convention on Wetlands and the United Nations Framework Convention on Climate Change. Background paper from the World Conservation Union (IUCN).

Berni, J. A., Zarco-Tejada, P. J., Suárez Barranco, M. D., & Fereres Castiel, E. (2009). Thermal and narrow-band multispectral remote sensing for vegetation monitoring from an unmanned aerial vehicle. Institute of Electrical and Electronics Engineers.

Blaschke, T. (2010). Object based image analysis for remote sensing. ISPRS Journal of Photogrammetry and Remote Sensing, 65 (1), 2-16.

Blossey, B., & Notzold, R. (1995). Evolution of increased competitive ability in invasive nonindigenous plants: A hypothesis. Journal of Ecology, 83 (5), 887-889.

Boersma, P. D., Reichard, S. H., & Van Buren, A. N. (2006). Invasive species in the Pacific Northwest. University of Washington Press.

Bradley, B. (2014). Remote detection of invasive plants: A review of spectral, textural and phenological approaches. Biological Invasions, 16 (7), 1411-1425.

Burai, P., Deák, B., Valkó, O., & Tomor, T. (2015). Classification of herbaceous vegetation using airborne hyperspectral imagery. Remote Sensing, 7 (2), 2046-2066.

Buschmann, C., & Nagel, E. (1993). In vivo spectroscopy and internal optics of leaves as basis for remote sensing of vegetation. International Journal of Remote Sensing, 14 (4), 711-722.

Bustamante, J., Aragonés, D., Afán, I., Luque, C. J., Pérez-Vázquez, A., Castellanos, E. M., & Díaz-Delgado, R. (2016). Hyperspectral sensors as a management tool to prevent the invasion of the exotic cordgrass Spartina densiflora in the Doñana wetlands. Remote Sensing, 8 (12), 1001.

Callaway, R. M., & Aschehoug, E. T. (2000). Invasive plants versus their new and old neighbors: A mechanism for exotic invasion. Science, 290 (5491), 521-523.

Candade, N., & Dixon, B. (2004). Multispectral classification of Landsat images: A comparison of support vector machine and neural network classifiers. Paper presented at the ASPRS Annual Conference Proceedings, Denver, Colorado.

Chambers, R. M., Meyerson, L. A., & Saltonstall, K. (1999). Expansion of Phragmites australis into tidal wetlands of North America. Aquatic Botany, 64 (3-4), 261-273.

Chavez Jr., P. S. (1992). Comparison of spatial variability in visible and near-infrared spectral images. Photogrammetric Engineering and Remote Sensing, 58 (7), 957-964.

Cheeseman, P. C., Self, M., Kelly, J., Taylor, W., Freeman, D., & Stutz, J. C. (1988). Bayesian classification. AAAI, 88, 607-611.

Coffey, R. (2013). The difference between "land use" and "land cover". https://www.canr.msu.edu/news/the_difference_between_land_use_and_land_cover

Congalton, R. G. (2001). Accuracy assessment and validation of remotely sensed and other spatial information. International Journal of Wildland Fire, 10 (4), 321-328.

Dahl, T. E. (2011). Status and trends of wetlands in the conterminous United States 2004 to 2009. US Department of the Interior, US Fish and Wildlife Service, Fisheries and Habitat Conservation, Washington, D.C.

Dalponte, M., Bruzzone, L., & Gianelle, D. (2012). Tree species classification in the Southern Alps based on the fusion of very high geometrical resolution multispectral/hyperspectral images and LiDAR data. Remote Sensing of Environment, 123, 258-270.

Day Jr, J. W., Yanez-Arancibia, A., Kemp, W. M., & Crump, B. C. (2013). Introduction to estuarine ecology. Estuarine Ecology, 2.

De Castro, A. I., Torres-Sánchez, J., Peña, J. M., Jiménez-Brenes, F. M., Csillik, O., & López-Granados, F. (2018). An automatic random forest-OBIA algorithm for early weed mapping between and within crop rows using UAV imagery. Remote Sensing, 10 (2), 285.

Dronova, I. (2015). Object-based image analysis in wetland research: A review. Remote Sensing, 7 (5), 6380-6413.

Duro, D. C., Franklin, S. E., & Dubé, M. G. (2012). A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery. Remote Sensing of Environment, 118, 259-272.

eCognition User Community (2018). http://www.ecognition.com/community

Erdas Field Guide. (1999). Erdas Inc., Atlanta, Georgia.

Erinjery, J. J., Singh, M., & Kent, R. (2018). Mapping and assessment of vegetation types in the tropical rainforests of the Western Ghats using multispectral Sentinel-2 and SAR Sentinel-1 satellite imagery. Remote Sensing of Environment, 216, 345-354.

Foody, G. M. (2002). Status of land cover classification accuracy assessment. Remote Sensing of Environment, 80 (1), 185-201.

Foody, G. M., & Mathur, A. (2004). Toward intelligent training of supervised image classifications: Directing training data acquisition for SVM classification. Remote Sensing of Environment, 93 (1-2), 107-117.

Foody, G. M., Mathur, A., Sanchez-Hernandez, C., & Boyd, D. S. (2006). Training set size requirements for the classification of a specific class. Remote Sensing of Environment, 104 (1), 1-14.

Franklin, S. E. (2001). Remote sensing for sustainable forest management. CRC Press.

Gallant, A. L. (2015). The challenges of remote monitoring of wetlands.

Gamon, J. A., Field, C. B., Goulden, M. L., Griffin, K. L., Hartley, A. E., Joel, G., … Valentini, R. (1995). Relationships between NDVI, canopy structure, and photosynthesis in three Californian vegetation types. Ecological Applications, 5 (1), 28-41.

Gandhi, G. M., Parthiban, S., Thummalu, N., & Christy, A. (2015). NDVI: Vegetation change detection using remote sensing and GIS – a case study of Vellore District. Procedia Computer Science, 57, 1199-1210.

Ghimire, B., Rogan, J., Galiano, V. R., Panday, P., & Neeti, N. (2012). An evaluation of bagging, boosting, and random forests for land-cover classification in Cape Cod, Massachusetts, USA. GIScience & Remote Sensing, 49 (5), 623-643.

Ghioca-Robrecht, D. M., Johnston, C. A., & Tulbure, M. G. (2008). Assessing the use of multiseason QuickBird imagery for mapping invasive species in a Lake Erie coastal marsh. Wetlands, 28 (4), 1028-1039.

Gilmore, M. S., Wilson, E. H., Barrett, N., Civco, D. L., Prisloe, S., Hurd, J. D., & Chadwick, C. (2008). Integrating multi-temporal spectral and structural information to map wetland vegetation in a lower Connecticut River tidal marsh. Remote Sensing of Environment, 112 (11), 4048-4060.

Gitelson, A. A. (2004). Wide dynamic range vegetation index for remote quantification of biophysical characteristics of vegetation. Journal of Plant Physiology, 161 (2), 165-173.

Gitelson, A. A., & Merzlyak, M. N. (1996). Signature analysis of leaf reflectance spectra: Algorithm development for remote sensing of chlorophyll. Journal of Plant Physiology, 148 (3-4), 494-500.

Gray, A. N., Barndt, K., & Reichard, S. H. (2011). Nonnative invasive plants of Pacific coast forests: A field guide for identification. Gen. Tech. Rep. PNW-GTR-817. Portland, OR: US Department of Agriculture, Forest Service, Pacific Northwest Research Station, 91 p.

Guo, M., Li, J., Sheng, C., Xu, J., & Wu, L. (2017). A review of wetland remote sensing. Sensors, 17 (4), 777.

Han, Y., Li, M., & Li, D. (2007). Vegetation index analysis of multi-source remote sensing data in coal mine wasteland. New Zealand Journal of Agricultural Research, 50 (5), 1243-1248.

Hansen, M. C., & Reed, B. (2000). A comparison of the IGBP DISCover and University of Maryland 1 km global land cover products. International Journal of Remote Sensing, 21 (6-7), 1365-1373.

Hardin, P. J., & Thomson, C. N. (1992). Fast nearest neighbor classification methods for multispectral imagery. The Professional Geographer, 44 (2), 191-202.

Harris Geospatial Solutions (2018). https://www.harrisgeospatial.com/Software-Technology/ENVI

Herdendorf, C. E., Klarer, D. M., & Herdendorf, R. C. (2006). The ecology of Old Woman Creek, Ohio: An estuarine and watershed profile.

Herdendorf, C. E. (1990). Great Lakes estuaries. Estuaries, 13 (4), 493-503.

Herdendorf, C. E. (1992). Lake Erie coastal wetlands: An overview. Journal of Great Lakes Research, 18 (4), 533-551.

Huang, C. Y., & Asner, G. P. (2009). Applications of remote sensing to alien invasive plant studies. Sensors, 9 (6), 4869-4889.

Huang, C., Davis, L. S., & Townshend, J. R. G. (2002). An assessment of support vector machines for land cover classification. International Journal of Remote Sensing, 23 (4), 725-749.

Hudon, C., Gagnon, P., & Jean, M. (2005). Hydrological factors controlling the spread of common reed (Phragmites australis) in the St. Lawrence River (Québec, Canada). Ecoscience, 12 (3), 347-357.

Jackson, R. D., & Huete, A. R. (1991). Interpreting vegetation indices. Preventive Veterinary Medicine, 11 (3-4), 185-200.

Jensen, J. R. (2005). Introductory digital image processing (3rd ed.). Upper Saddle River: Prentice Hall.

Jones, K. B. (2008). Importance of land cover and biophysical data in landscape-based environmental assessments. North America Land Cover Summit. Association of American Geographers, Washington, DC, USA, 215, 249.

Jordan, C. F. (1969). Derivation of leaf-area index from quality of light on the forest floor. Ecology, 50 (4), 663-666.

Kayranli, B., Scholz, M., Mustafa, A., & Hedmark, Å. (2010). Carbon storage and fluxes within freshwater wetlands: A critical review. Wetlands, 30 (1), 111-124.

Keddy, P. A. (2010). Wetland ecology: Principles and conservation. Cambridge University Press.

Kennish, M. J. (2002). Environmental threats and environmental future of estuaries. Environmental Conservation, 29 (1), 78-107.

Kerle, N., Janssen, L. L., & Huurneman, G. C. (2004). Principles of remote sensing. ITC Educational Textbook Series, 2, 250.

Kettenring, K. M., & Mock, K. E. (2012). Genetic diversity, reproductive mode, and dispersal differ between the cryptic invader, Phragmites australis, and its native conspecific. Biological Invasions, 14 (12), 2489-2504.

Klarer, D. M., & Millie, D. F. (1992). Aquatic macrophytes and algae at Old Woman Creek estuary and other Great Lakes coastal wetlands. Journal of Great Lakes Research, 18 (4), 622-633.

Komárek, J., Klouček, T., & Prošek, J. (2018). The potential of unmanned aerial systems: A tool towards precision classification of hard-to-distinguish vegetation types? International Journal of Applied Earth Observation and Geoinformation, 71, 9-19.

Kotsiantis, S. B., Zaharakis, I., & Pintelas, P. (2007). Supervised machine learning: A review of classification techniques. Emerging Artificial Intelligence Applications in Computer Engineering, 160, 3-24.

Kulkarni, A. D., & Lowe, B. (2016). Random forest algorithm for land cover classification. International Journal on Recent and Innovation Trends in Computing and Communication, 4 (3), 58-63.

Laba, M., Blair, B., Downs, R., Monger, B., Philpot, W., Smith, S., … Baveye, P. C. (2010). Use of textural measurements to map invasive wetland plants in the Hudson River National Estuarine Research Reserve with IKONOS satellite imagery. Remote Sensing of Environment, 114 (4), 876-886.

Laliberte, A. S., Herrick, J. E., Rango, A., & Winters, C. (2010). Acquisition, orthorectification, and object-based classification of unmanned aerial vehicle (UAV) imagery for rangeland monitoring. Photogrammetric Engineering & Remote Sensing, 76 (6), 661-672.

Lam, N. S. N. (2008). Methodologies for mapping land cover/land use and its change. In Advances in Land Remote Sensing, 341-367.

Landis, J. R., & Koch, G. G. (1977). An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers. Biometrics, 33, 363-374.

Lane, C. R., Liu, H., Autrey, B. C., Anenkhonov, O. A., Chepinoga, V. V., & Wu, Q. (2014). Improved wetland classification using eight-band high resolution satellite imagery and a hybrid approach. Remote Sensing, 6 (12), 12187-12216.

Lantz, N. J., & Wang, J. (2013). Object-based classification of WorldView-2 imagery for mapping invasive common reed, Phragmites australis. Canadian Journal of Remote Sensing, 39 (4), 328-340.

Lawrence, R. L., Wood, S. D., & Sheley, R. L. (2006). Mapping invasive plants using hyperspectral imagery and Breiman Cutler classifications (RandomForest). Remote Sensing of Environment, 100 (3), 356-362.

Lechner, A. M., Fletcher, A., Johansen, K., & Erskine, P. (2012). Characterising upland swamps using object-based classification methods and hyper-spatial resolution imagery derived from an unmanned aerial vehicle. ISPRS Annals of the Photogrammetry, 101-106.

Leonard, L. A., Wren, P. A., & Beavers, R. L. (2002). Flow dynamics and sedimentation in Spartina alterniflora and Phragmites australis marshes of the Chesapeake Bay. Wetlands, 22 (2), 415-424.

Lillesand, T., Kiefer, R. W., & Chipman, J. (2014). Remote sensing and image interpretation. John Wiley & Sons.

Lisein, J., Pierrot-Deseilligny, M., Bonnet, S., & Lejeune, P. (2013). A photogrammetric workflow for the creation of a forest canopy height model from small unmanned aerial system imagery. Forests, 4 (4), 922-944.

Liu, J. (2018). A combined method for vegetation classification based on visible bands from UAV images: A case study for invasive wild parsnip plant (Master's thesis).

Lu, D., & Weng, Q. (2007). A survey of image classification methods and techniques for improving classification performance. International Journal of Remote Sensing, 28 (5), 823-870.

Lucas, I., Janssen, F., & van der Wel, F. J. M. (1994). Accuracy assessment of satellite derived land cover data: A review. Photogrammetric Engineering & Remote Sensing, 60 (4), 419-426.

Mahdavi, S., Salehi, B., Granger, J., Amani, M., Brisco, B., & Huang, W. (2018). Remote sensing for wetland classification: A comprehensive review. GIScience & Remote Sensing, 55 (5), 623-658.

Mal, T. K., & Narine, L. (2004). The biology of Canadian weeds. 129. Phragmites australis (Cav.) Trin. ex Steud. Canadian Journal of Plant Science, 84 (1), 365-396.

Mallinis, G., Koutsias, N., Tsakiri-Strati, M., & Karteris, M. (2008). Object-based classification using Quickbird imagery for delineating forest vegetation polygons in a Mediterranean test site. ISPRS Journal of Photogrammetry and Remote Sensing, 63 (2), 237-250.

Martin, F., Müllerová, J., Borgniet, L., Dommanget, F., Breton, V., & Evette, A. (2018). Using single- and multi-date UAV and satellite imagery to accurately monitor invasive knotweed species. Remote Sensing, 10 (10), 1662.

Mas, J. F., & Flores, J. J. (2008). The application of artificial neural networks to the analysis of remotely sensed data. International Journal of Remote Sensing, 29 (3), 617-663.

Materka, A., & Strzelecki, M. (1998). Texture analysis methods – a review. Technical University of Lodz, Institute of Electronics, COST B11 Report, Brussels, 9-11.

Matinfar, H. R., Sarmadian, F., Alavi Panah, S. K., & Heck, R. J. (2007). Comparisons of object-oriented and pixel-based classification of land use/land cover types based on Landsat-7 ETM+ spectral bands (case study: arid region of Iran). American-Eurasian Journal of Agricultural & Environmental Sciences, 2 (4), 448-456.

Mauchamp, A., Blanch, S., & Grillas, P. (2001). Effects of submergence on the growth of Phragmites australis seedlings. Aquatic Botany, 69 (2-4), 147-164.

Maxwell, A. E., Warner, T. A., & Fang, F. (2018). Implementation of machine-learning classification in remote sensing: An applied review. International Journal of Remote Sensing, 39 (9), 2784-2817.

McLusky, D. S., Elliott, M., & Elliott, M. (2004). The estuarine ecosystem: Ecology, threats and management. Oxford University Press on Demand.

Mesev, V., & Walrath, A. (2007). GIS and remote sensing integration: In search of a definition (1-16). John Wiley and Sons, Chichester.

Meyer, D. L., Johnson, J. M., & Gill, J. W. (2001). Comparison of nekton use of Phragmites australis and Spartina alterniflora marshes in the Chesapeake Bay, USA. Marine Ecology Progress Series, 209, 71-83.

Meyer, W. B., & Turner, B. L. (1992). Human population growth and global land-use/cover change. Annual Review of Ecology and Systematics, 23 (1), 39-61.

MicaSense Knowledge Base. https://support.micasense.com/hc/en-us/articles/217112037-What-spectral-bands-does-the-Sequoia-camera-capture-

Michez, A., Piégay, H., Lisein, J., Claessens, H., & Lejeune, P. (2016). Classification of riparian forest species and health condition using multi-temporal and hyperspatial imagery from unmanned aerial system. Environmental Monitoring and Assessment, 188 (3), 1-19.

Mills, E. L., Leach, J. H., Carlton, J. T., & Secor, C. L. (1993). Exotic species in the Great Lakes: A history of biotic crises and anthropogenic introductions. Journal of Great Lakes Research, 19 (1), 1-54.

Mitsch, W. J., & Gosselink, J. G. (2000). The value of wetlands: Importance of scale and landscape setting. Ecological Economics, 35 (1), 25-33.

Moffett, K. B., & Gorelick, S. M. (2013). Distinguishing wetland vegetation and channel features with object-based image segmentation. International Journal of Remote Sensing, 34 (4), 1332-1354.

Moguel, E., Conejero, J. M., Sánchez-Figueroa, F., Hernández, J., Preciado, J. C., & Rodríguez-Echeverría, R. (2017). Towards the use of unmanned aerial systems for providing sustainable services in smart cities. Sensors, 18 (1), 64.

Moran, M. S., Inoue, Y., & Barnes, E. M. (1997). Opportunities and limitations for image-based remote sensing in precision crop management. Remote Sensing of Environment, 61 (3), 319-346.

Moreno-Mateos, D., Power, M. E., Comín, F. A., & Yockteng, R. (2012). Structural and functional loss in restored wetland ecosystems. PLoS Biology, 10 (1).

Moulin, S. (1999). Impacts of model parameter uncertainties on crop reflectance estimates: a regional case study on . International Journal of Remote Sensing, 20 (1), 213-218.

Mountrakis, G., Im, J., & Ogole, C. (2011). Support vector machines in remote sensing: A review. ISPRS Journal of Photogrammetry and Remote Sensing, 66 (3), 247-259.

Müllerová, J., Brůna, J., Dvořák, P., Bartaloš, T., & Vítková, M. (2016). Does the data resolution/origin matter? Satellite, airborne and UAV imagery to tackle plant invasions. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., 41, B7.

Müllerová, J., Brůna, J., Bartaloš, T., Dvořák, P., Vítková, M., & Pyšek, P. (2017). Timing is important: Unmanned aircraft vs. satellite imagery in plant invasion monitoring. Frontiers in Plant Science, 8, 887.

Mutanga, O., Adam, E., & Cho, M. A. (2012). High density biomass estimation for wetland vegetation using WorldView-2 imagery and random forest regression algorithm. International Journal of Applied Earth Observation and Geoinformation, 18, 399-406.

Mwakapuja, F., Liwa, E., & Kashaigili, J. (2013). Usage of indices for extraction of built-up areas and vegetation features from Landsat TM image: A case of Dar es Salaam and Kisarawe peri-urban areas, Tanzania.

Ndehedehe, C., Ekpa, A., Simeon, O., & Nse, O. (2013). Understanding the neural network technique for classification of remote sensing data sets. NY Sci J, 6, 26-33.

Ng, W., Rima, P., Einzmann, K., Immitzer, M., Atzberger, C., & Eckert, S. (2017). Assessing the potential of Sentinel-2 and Pléiades data for the detection of Prosopis and Vachellia spp. in Kenya. Remote Sensing, 9 (1), 74.

Niphadkar, M., Nagendra, H., Tarantino, C., Adamo, M., & Blonda, P. (2017). Comparing pixel and object-based approaches to map an understorey invasive shrub in tropical mixed forests. Frontiers in Plant Science, 8, 892.

Nordberg, M., & Evertson, J. (2003). Monitoring change in mountainous dry-heath vegetation at a regional scale using multitemporal Landsat TM data. Ambio, 502-509.

Omer, G., Mutanga, O., Abdel-Rahman, E. M., & Adam, E. (2015). Performance of support vector machines and artificial neural network for mapping endangered tree species using WorldView-2 data in Dukuduku forest, South Africa. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 8 (10), 4825-4840.

Old Woman Creek National Estuarine Research Reserve Management Plan 2011 – 2016 (2011). National Oceanic and Atmospheric Administration. Retrieved on August 12, 2018.

Otukei, J. R., & Blaschke, T. (2010). Land cover change assessment using decision trees, support vector machines and maximum likelihood classification algorithms. International Journal of Applied Earth Observation and Geoinformation, 12, S27-S31.

Ozesmi, S. L., & Bauer, M. E. (2002). Satellite remote sensing of wetlands. Wetlands Ecology and Management, 10 (5), 381-402.

Pal, M., & Foody, G. M. (2010). Feature selection for classification of hyperspectral data by SVM. IEEE Transactions on Geoscience and Remote Sensing, 48 (5), 2297-2307.

Pal, M., & Mather, P. M. (2005). Support vector machines for classification in remote sensing. International Journal of Remote Sensing, 26 (5), 1007-1011.

Pande-Chhetri, R., Abd-Elrahman, A., Liu, T., Morton, J., & Wilhelm, V. L. (2017). Object-based classification of wetland vegetation using very high-resolution unmanned air system imagery. European Journal of Remote Sensing, 50 (1), 564-576.

Pengra, B. W., Johnston, C. A., & Loveland, T. R. (2007). Mapping an invasive plant, Phragmites australis, in coastal wetlands using the EO-1 Hyperion hyperspectral sensor. Remote Sensing of Environment, 108 (1), 74-81.

Postel, S. L. (2003). Securing water for people, crops, and ecosystems: New mindset and new priorities. Paper presented at the Natural Resources Forum, 27 (2), 89-98.

Price, K. P., Guo, X., & Stiles, J. M. (2002). Optimal Landsat TM band combinations and vegetation indices for discrimination of six grassland types in eastern Kansas. International Journal of Remote Sensing, 23 (23), 5031-5042.

Puliti, S., Ørka, H. O., Gobakken, T., & Næsset, E. (2015). Inventory of small forest areas using an unmanned aerial system. Remote Sensing, 7 (8), 9632-9654.

Qian, Y., Zhou, W., Yan, J., Li, W., & Han, L. (2015). Comparing machine learning classifiers for object-based land cover classification using very high resolution imagery. Remote Sensing, 7 (1), 153-168.

Ranchin, T., Télédétection, G., & Antipolis, M. S. (2001). Data fusion in remote sensing: Examples. Paper presented at the 4th Annual Conference on Information Fusion, Sophia Antipolis.

Remote Sensing Phenology. (2018). United States Geological Survey. https://phenology.cr.usgs.gov/ndvi_foundation.php

Richards, J. A. (1999). Remote sensing digital image analysis (Vol. 3). Berlin: Springer.

Rogan, J., & Chen, D. (2004). Remote sensing technology for mapping and monitoring land-cover and land-use change. Progress in Planning, 61 (4), 301-325.

Rupasinghe, P. A., Simic Milas, A., Arend, K., Simonson, M. A., Mayer, C., & Mackey, S. (2018). Classification of shoreline vegetation in the Western Basin of Lake Erie using airborne hyperspectral imager HSI2, Pleiades and UAV data. International Journal of Remote Sensing, 1-21.

Sabins, F. F. (2007). Remote sensing: Principles and applications. Waveland Press.

Sainty, G., McCorkelle, G., & Julien, M. (1997). Control and spread of alligator weed Alternanthera philoxeroides (Mart.) Griseb., in Australia: Lessons for other regions. Wetlands Ecology and Management, 5 (3), 195-201.

Salamí, E., Barrado, C., & Pastor, E. (2014). UAV flight experiments applied to the remote sensing of vegetated areas. Remote Sensing, 6 (11), 11051-11081.

Samiappan, S., Turnage, G., Hathcock, L., Casagrande, L., Stinson, P., & Moorhead, R. (2017a). Using unmanned aerial vehicles for high-resolution remote sensing to map invasive Phragmites australis in coastal wetlands. International Journal of Remote Sensing, 38 (8-10), 2199-2217.

Samiappan, S., Turnage, G., Hathcock, L. A., & Moorhead, R. (2017b). Mapping of invasive Phragmites (common reed) in Gulf of Mexico coastal wetlands using multispectral imagery and small unmanned aerial systems. International Journal of Remote Sensing, 38 (8-10), 2861-2882.

Sankey, T., Donager, J., McVay, J., & Sankey, J. B. (2017). UAV lidar and hyperspectral fusion for forest monitoring in the southwestern USA. Remote Sensing of Environment, 195, 30-43.

Schmitt, M., & Zhu, X. X. (2016). Data fusion and remote sensing: An ever-growing relationship. IEEE Geoscience and Remote Sensing Magazine, 4 (4), 6-23.

Sghair, A., & Goma, F. (2013). Remote sensing and GIS for wetland vegetation study (Doctoral dissertation, University of Glasgow).

Shao, Y., & Lunetta, R. S. (2012). Comparison of support vector machine, neural network, and CART algorithms for the land-cover classification using limited training data points. ISPRS Journal of Photogrammetry and Remote Sensing, 70, 78-87.

Sharitz, R. R., & Batzer, D. P. (1999). An introduction to freshwater wetlands in North America and their invertebrates. In Invertebrates in Freshwater Wetlands of North America: Ecology and Management, D. P. Batzer, R. B. Rader & S. A. Wissinger (Eds.). Wiley, New York, 1-21.

Shouse, M., Liang, L., & Fei, S. (2013). Identification of understory invasive exotic plants with remote sensing in urban forests. International Journal of Applied Earth Observation and Geoinformation, 21, 525-534.

Simic Milas, A., Arend, K., Mayer, C., Simonson, M. A., & Mackey, S. (2017). Different colours of shadows: Classification of UAV images. International Journal of Remote Sensing, 38 (8-10), 3084-3100.

Simic Milas, A., Sousa, J. J., Warner, T. A., Teodoro, A. C., Peres, E., Gonçalves, J. A., … Woodget, A. (2018). Unmanned aerial systems (UAS) for environmental applications special issue preface. International Journal of Remote Sensing, 39 (15-16), 4845-4851.

Stehman, S. V. (2009). Sampling designs for accuracy assessment of land cover. International Journal of Remote Sensing, 30 (20), 5243-5272.

Stehman, S. V., & Czaplewski, R. L. (1998). Design and analysis for thematic map accuracy assessment: Fundamental principles. Remote Sensing of Environment, 64 (3), 331-344.

Tang, L., & Shao, G. (2015). Drone remote sensing for forestry research and practices. Journal of Forestry Research, 26 (4), 791-797.

Tehrany, M. S., Pradhan, B., & Jebuv, M. N. (2014). A comparative assessment between object and pixel-based classification approaches for land use/land cover mapping using SPOT 5 imagery. Geocarto International, 29 (4), 351-369.

Teillet, P. M., Staenz, K., & William, D. J. (1997). Effects of spectral, spatial, and radiometric characteristics on remote sensing vegetation indices of forested regions. Remote Sensing of Environment, 61 (1), 139-149.

Thomas, Z. A., Turney, C., Palmer, J. G., Lloydd, S., Klaricich, J., & Hogg, A. (2018). Extending the observational record to provide new insights into invasive alien species in a coastal dune environment of New Zealand. Applied Geography, 98, 100-109.

Tóth, V. R. (2018). Monitoring spatial variability and temporal dynamics of Phragmites using unmanned aerial vehicles. Frontiers in Plant Science, 9, 728.

Tucker, C. J., Pinzon, J. E., Brown, M. E., Slayback, D. A., Pak, E. W., Mahoney, R., ... & El

Saleous, N. (2005). An extended AVHRR 8‐km NDVI dataset compatible with MODIS and

SPOT vegetation NDVI data. International Journal of Remote Sensing, 26 (20), 4485-4498.

Turner, W., Spector, S., Gardiner, N., Fladeland, M., Sterling, E., & Steininger, M. (2003).

Remote sensing for biodiversity science and conservation. Trends in Ecology & Evolution,

18 (6), 306-314.

Tuxen, K., Schile, L., Stralberg, D., Siegel, S., Parker, T., Vasey, M., . . . Kelly, M. (2011).

Mapping changes in tidal wetland vegetation composition and pattern across a salinity

gradient using high spatial resolution imagery. Wetlands Ecology and Management, 19 (2),

141-157.

Underwood, E., Ustin, S., & DiPietro, D. (2003). Mapping nonnative plants using hyperspectral

imagery. Remote Sensing of Environment, 86 (2), 150-161.

Verhoeven, J. T., & Meuleman, A. F. (1999). Wetlands for wastewater treatment: opportunities

and limitations. Ecological Engineering, 12 (1-2), 5-12.

Vilà, M., Espinar, J. L., Hejda, M., Hulme, P. E., Jarošík, V., Maron, J. L., . . . Pyšek, P. (2011).

Ecological impacts of invasive alien plants: a meta‐analysis of their effects on species,

communities and ecosystems. Ecology Letters, 14 (7), 702-708.

105

Wallace, C. S., Walker, J. J., Skirvin, S. M., Patrick-Birdwell, C., Weltzin, J. F., & Raichle, H.

(2016). Mapping presence and predicting phenological status of invasive buffelgrass in

Southern Arizona using MODIS, climate and citizen science observation data. Remote

Sensing, 8 (7), 524.

Wan, H., Wang, Q., Jiang, D., Fu, J., Yang, Y., & Liu, X. (2014). Monitoring the invasion of Spartina alterniflora using very high resolution unmanned aerial vehicle imagery in Beihai, Guangxi (China). The Scientific World Journal, 2014, 1-7.

Watts, A. C., Ambrosia, V. G., & Hinkley, E. A. (2012). Unmanned aircraft systems in remote sensing and scientific research: Classification and considerations of use. Remote Sensing, 4(6), 1671-1692.

Weidenhamer, J. D., & Callaway, R. M. (2010). Direct and indirect effects of invasive plants on soil chemistry and ecosystem function. Journal of Chemical Ecology, 36(1), 59-69.

Whiteside, T. G., Boggs, G. S., & Maier, S. W. (2011). Comparing object-based and pixel-based classifications for mapping savannas. International Journal of Applied Earth Observation and Geoinformation, 13(6), 884-893.

Whyte, R. S. (2009). Photographic atlas of wetland plants of the Old Woman Creek state nature preserve and national estuarine research reserve (Huron, Ohio). http://coastal.ohiodnr.gov/portals/coastal/pdfs/owc/owcatlas_wetlandplants.pdf

Whyte, R. S., & Francko, D. A. (2001). Dynamics of a pioneer population of Eurasian watermilfoil (Myriophyllum spicatum L.) in a shallow Lake Erie wetland. Journal of Aquatic Plant Management, 39, 136-139.

Whyte, R. S., Trexel-Kroll, D., Klarer, D. M., Shields, R., & Francko, D. A. (2008). The invasion and spread of Phragmites australis during a period of low water in a Lake Erie coastal wetland. Journal of Coastal Research, 111-120.

Xie, Y., Sha, Z., & Yu, M. (2008). Remote sensing imagery in vegetation mapping: A review. Journal of Plant Ecology, 1(1), 9-23.

Xu, H. (2006). Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. International Journal of Remote Sensing, 27(14), 3025-3033.

Xue, J., & Su, B. (2017). Significant remote sensing vegetation indices: A review of developments and applications. Journal of Sensors, 2017, 1-17.

Yan, G., Mas, J. F., Maathuis, B. H. P., Xiangmin, Z., & Van Dijk, P. M. (2006). Comparison of pixel-based and object-oriented image classification approaches—a case study in a coal fire area, Wuda, Inner Mongolia, China. International Journal of Remote Sensing, 27(18), 4039-4055.

Yang, C., Everitt, J. H., & Murden, D. (2011). Evaluating high resolution SPOT 5 satellite imagery for crop identification. Computers and Electronics in Agriculture, 75(2), 347-354.

Yu, Q., Gong, P., Clinton, N., Biging, G., Kelly, M., & Schirokauer, D. (2006). Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogrammetric Engineering & Remote Sensing, 72(7), 799-811.

Zedler, J. B., & Kercher, S. (2004). Causes and consequences of invasive plants in wetlands: Opportunities, opportunists, and outcomes. Critical Reviews in Plant Sciences, 23(5), 431-452.

Zhang, J. (2010). Multi-source remote sensing data fusion: Status and trends. International Journal of Image and Data Fusion, 1(1), 5-24.

Zhang, Q., Qin, R., Huang, X., Fang, Y., & Liu, L. (2015). Classification of ultra-high resolution orthophotos combined with DSM using a dual morphological top hat profile. Remote Sensing, 7(12), 16422-16440.

Zhang, X., Friedl, M. A., Schaaf, C. B., Strahler, A. H., Hodges, J. C., Gao, F., ... & Huete, A. (2003). Monitoring vegetation phenology using MODIS. Remote Sensing of Environment, 84(3), 471-475.

Zhou, G., & Li, R. (2000). Accuracy evaluation of ground points from IKONOS high-resolution satellite imagery. Photogrammetric Engineering and Remote Sensing, 66(9), 1103-1112.

Zhu, L., Suomalainen, J., Liu, J., Hyyppä, J., Kaartinen, H., & Haggren, H. (2017). A review: Remote sensing sensors. Multi-purposeful Application of Geospatial Data, 2, 20-42.

APPENDIX A: PARAMETER OPTIMIZATION TABLES

1.1 SVM - Pixel Based Classification

Parameter     P1      P2      P3      P4      P5      P6      P7      P8      P9
Gamma         0.25    0.25    0.25    0.25    0.25    0.25    0.25    0.25    0.25
C             0.10    0.50    0.25    1       50      100     500     1000    10000
O.A. %        72.26   78.00   78.86   79.43   90.57   90.57   88.86   89.43   89.43
Kappa         0.65    0.72    0.71    0.74    0.88    0.88    0.86    0.87    0.87
C.E. % (Ph)   54.26   51.61   52.31   49.12   8.16    12.00   10.87   8.89    6.82
O.E. % (Ph)   25.86   48.28   46.55   50.00   22.41   24.14   29.31   29.31   29.31
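For illustration only, a sweep of this shape can be reproduced with a short script. The following is a minimal sketch, assuming scikit-learn and synthetic stand-in data rather than the actual stacked UAV layers and reference samples used in this study; it varies C with the RBF-kernel gamma held at 0.25 and reports the same four metrics (overall accuracy, kappa, and the Phragmites commission and omission errors derived from the confusion matrix).

```python
# Minimal sketch of the C-sweep in Table 1.1 using scikit-learn; the thesis
# itself ran the classification in image-processing software, so the names
# and the synthetic samples below are illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

def class_errors(y_true, y_pred, target):
    """Commission and omission error (%) for one class, from the confusion matrix."""
    labels = sorted(set(y_true) | set(y_pred))
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    i = labels.index(target)
    commission = 100.0 * (cm[:, i].sum() - cm[i, i]) / max(cm[:, i].sum(), 1)  # 1 - user's accuracy
    omission = 100.0 * (cm[i, :].sum() - cm[i, i]) / max(cm[i, :].sum(), 1)    # 1 - producer's accuracy
    return commission, omission

# Synthetic stand-in for pixels of the stacked layers (e.g., 4sq + NDVIOct + CHM).
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 6))
y = np.where(X[:, 0] + X[:, 5] > 0.5, "Phragmites", "Other")
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

for C in [0.10, 0.25, 0.50, 1, 50, 100, 500, 1000, 10000]:
    y_pred = SVC(kernel="rbf", gamma=0.25, C=C).fit(X_train, y_train).predict(X_test)
    ce, oe = class_errors(y_test, y_pred, "Phragmites")
    print(f"C={C}: O.A.={100 * accuracy_score(y_test, y_pred):.2f}%, "
          f"kappa={cohen_kappa_score(y_test, y_pred):.2f}, "
          f"C.E.={ce:.2f}%, O.E.={oe:.2f}%")
```

A small C enforces a heavily regularized margin; the jump in overall accuracy and the drop in commission error between C = 1 and C = 50 in the table are consistent with the softer margins underfitting the Phragmites class.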

1.2 NN - Pixel Based Classification

(TTC = training threshold contribution; TR = training rate; TM = training momentum; RMSEC = RMS error exit criterion; NHL = number of hidden layers; NI = number of training iterations)

Parameter     P1      P2      P3      P4      P5      P6      P7      P8
TTC           0.90    0.90    0.50    0.90    0.90    0.90    0.90    0.20
TR            0.90    0.90    0.10    0.20    0.20    0.20    1.00    0.20
TM            0.10    0.10    0.90    0.90    0.90    0.90    0.90    0.90
RMSEC         0.05    0.08    0.90    0.10    0.06    0.10    0.01    0.01
NHL           1       1       1       1       1       1       1       1
NI            1000    1000    100     1000    1000    200     500     1000
O.A. %        77.59   77.59   35.07   71.71   71.71   80.86   76.86   82.57
Kappa         0.72    0.72    0.16    0.65    0.65    0.76    0.71    0.78
C.E. % (Ph)   46.51   46.51   0.00    59.83   59.83   16.67   75.00   22.22
O.E. % (Ph)   20.69   20.69   100.00  18.97   18.97   65.52   98.00   27.59
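These parameters are specific to the layered feed-forward classifier in the image-processing software used, and they have only rough analogs elsewhere. As an assumption-laden sketch, scikit-learn's MLPClassifier can approximate two of the parameter sets (P1 and P8): the training rate maps loosely to learning_rate_init, the training momentum to momentum, the RMS exit criterion to tol, and NI to max_iter, while TTC and the hidden-layer width (not reported in the table) have no direct counterpart, so a width of 10 neurons is assumed. The data and class_errors helper are reused from the previous sketch.

```python
# Rough analog of parameter sets P1 and P8 from Table 1.2; the mapping from
# the software's NN parameters to MLPClassifier arguments is approximate,
# and the hidden-layer width (10) is an assumption.
from sklearn.neural_network import MLPClassifier

param_sets = {
    "P1": {"learning_rate_init": 0.90, "momentum": 0.10, "tol": 0.05, "max_iter": 1000},
    "P8": {"learning_rate_init": 0.20, "momentum": 0.90, "tol": 0.01, "max_iter": 1000},
}
for name, p in param_sets.items():
    nn = MLPClassifier(hidden_layer_sizes=(10,), solver="sgd", random_state=0, **p)
    y_pred = nn.fit(X_train, y_train).predict(X_test)
    ce, oe = class_errors(y_test, y_pred, "Phragmites")
    print(f"{name}: O.A.={100 * accuracy_score(y_test, y_pred):.2f}%, "
          f"C.E.={ce:.2f}%, O.E.={oe:.2f}%")
```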

1.3 SVM - Object Based Classification

Parameter     P1      P2      P3      P4      P5
C             1       10      100     1000    10000
Gamma         0.001   0.001   0.001   0.001   0.001
O.A. %        76.59   80.57   82.29   80.00   77.71
Kappa         0.70    0.75    0.78    0.75    0.71
C.E. % (Ph)   18.05   0.00    24.00   26.09   19.05
O.E. % (Ph)   46.23   65.52   34.48   41.38   41.38

1.4 kNN - Object Based Classification

k             1       2       3       4       5       6
O.A. %        79.43   82.57   76.00   78.86   79.43   77.71
Kappa         0.74    0.78    0.70    0.73    0.74    0.72
C.E. % (Ph)   0.00    2.13    13.33   0.00    0.00    0.00
O.E. % (Ph)   34.48   20.69   55.17   62.07   58.62   65.52
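The k sweep is straightforward to emulate; the sketch below, again reusing the synthetic data and helper from the earlier SVM example, varies k from 1 to 6. In the object-based case each row of the feature matrix would be a per-segment statistic (for example, the mean band value of an image object) rather than a single pixel.

```python
# Sweep k = 1..6 as in Table 1.4; metrics reuse class_errors defined earlier.
from sklearn.neighbors import KNeighborsClassifier

for k in range(1, 7):
    y_pred = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train).predict(X_test)
    ce, oe = class_errors(y_test, y_pred, "Phragmites")
    print(f"k={k}: O.A.={100 * accuracy_score(y_test, y_pred):.2f}%, "
          f"kappa={cohen_kappa_score(y_test, y_pred):.2f}, "
          f"C.E.={ce:.2f}%, O.E.={oe:.2f}%")
```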


APPENDIX B: CLASSIFIED IMAGES

2.1 NN Classifier - Pixel Based

[Figure: three classified maps.] Classified images with the NN classifier (pixel based): (a) 4sq, (b) 4sq + NDVIOct, (c) 4sq + NDVIOct + CHM.

2.2 SVM Classifier - Pixel Based

[Figure: three classified maps.] Classified images with the SVM classifier (pixel based): (a) 4sq, (b) 4sq + NDVIOct, (c) 4sq + NDVIOct + CHM.

2.3 ML Classifier - Pixel Based

[Figure: three classified maps.] Classified images with the ML classifier (pixel based): (a) 4sq, (b) 4sq + NDVIOct, (c) 4sq + NDVIOct + CHM.

2.4 SVM Classifier - Object Based

[Figure: three classified maps.] Classified images with the SVM classifier (object based): (a) 4sq, (b) 4sq + NDVIOct, (c) 4sq + NDVIOct + CHM.

2.5 k-NN Classifier - Object Based

[Figure: three classified maps.] Classified images with the k-NN classifier (object based): (a) 4sq, (b) 4sq + NDVIOct, (c) 4sq + NDVIOct + CHM.