

Article

Land-Cover Classification of Coastal Wetlands Using the RF Algorithm for Worldview-2 and Landsat 8 Images

Xiaoxue Wang 1, Xiangwei Gao 1,*, Yuanzhi Zhang 2,3 , Xianyun Fei 1 , Zhou Chen 1, Jian Wang 1, Yayi Zhang 1, Xia Lu 1 and Huimin Zhao 1

1 School of Geomatics and Marine Information, Jiangsu Ocean University, Lianyungang 222002, China
2 School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China
3 National Astronomical Observatories, Key Lab of Lunar Science and Deep-Space Exploration, Chinese Academy of Sciences, Beijing 100101, China
* Correspondence: [email protected]

Received: 30 June 2019; Accepted: 13 August 2019; Published: 17 August 2019

Abstract: Wetlands are one of the world's most important ecosystems, playing an important role in regulating climate and protecting the environment. However, human activities have changed the land cover of wetlands, leading to direct destruction of the environment. If wetlands are to be protected, their land cover must be classified and changes to it monitored using remote sensing technology. The random forest (RF) machine learning algorithm, which offers clear advantages for high spatial resolution image classification (e.g., the ability to process feature data without feature selection and to deliver preferable classification results), has been used in many study areas. In this research, to verify the effectiveness of this algorithm for remote sensing image classification of coastal wetlands, two types of spatial resolution images of the Linhong Estuary in Lianyungang (Worldview-2 and Landsat-8 images) were used for land cover classification using the RF method. To demonstrate the preferable classification accuracy of the RF algorithm, the support vector machine (SVM) and k-nearest neighbor (k-NN) methods were also used to classify the same area of land cover for comparison with the results of RF classification. The study results showed that (1) the overall accuracy of the RF method reached 91.86%, higher than the SVM and k-NN methods by 4.68% and 4.72%, respectively, for Worldview-2 images; (2) at the same time, the classification accuracies of RF, SVM, and k-NN were 86.61%, 79.96%, and 77.23%, respectively, for Landsat-8 images; (3) for some land cover types having only a small number of samples, the RF algorithm also achieved better classification results using Worldview-2 and Landsat-8 images; and (4) the addition of texture features could improve the classification accuracy of the RF method when using Worldview-2 images. This research indicated that high-resolution remote sensing images are more suitable for land cover classification of small-scale areas, and that the RF algorithm can provide better classification accuracy and is more suitable for coastal wetland classification than the SVM and k-NN algorithms are.

Keywords: coastal wetland; classification; RF algorithm

1. Introduction

Wetlands play an important role in every aspect of ecological systems, including providing habitats for fish and wildlife [1,2], regulating climate, improving water quality [3], sequestering carbon [4], and mitigating flooding [5]. However, most wetlands, especially coastal wetlands, are, on the one hand, disappearing or in danger of disappearing as a result of rapid economic development and population growth that lead to urban sprawl.

On the other hand, the rising sea level caused by global climate change has also led to a rapid decline in the area of coastal wetlands. Accordingly, governments must plan, manage, and reconstruct coastal wetlands through wetland protection activities that will require the ability to quickly, precisely, and repeatedly obtain information about coastal wetland land cover over large areas [6]. Remote sensing (RS) technology has become one of the most important methods for acquiring such information, and much effort has been given to improving various RS classification methods so as to obtain more valuable land cover classification results [7,8]. Classification of coastal wetlands is challenging, not least because of their complicated composition and pattern [9]. In previous studies, Landsat MSS, Landsat TM, and SPOT images have predominantly formed the basis of the remote sensing techniques used for wetland monitoring and for land cover mapping, extraction, and classification [10–13]. Owing to technical limitations, the resolution of such images is relatively low, making them suitable only for large-scale area research and unable to provide detailed land use information. Aerial photography is a very efficient method for acquiring micro-scale wetland cover mapping. At present, however, professional aerial operators are needed to take aerial photographs, most existing aerial photographs serve commercial needs, and aerial photographs suitable for research can be difficult to obtain. Compared with aerial photographs, high-resolution satellite remote sensing images have the advantages of saving manpower and material resources, being easily obtainable, being less time-consuming, and offering higher accuracy. Recently, high-resolution remote sensing images at the meter and sub-meter levels, such as IKONOS (1 m), QuickBird (0.61 m), and Worldview-2 (0.5 m), have been widely used in various fields, including land use classification, urban planning, and other applications [14–20]. In high spatial resolution remote sensing images, the spatial structure of each land cover type is fully expressed and its interior spectral values are more variable, increasing the heterogeneity of the images, so that traditional pixel-based classification technologies are not suitable. The object-oriented method, however, provides technical support for wetland land cover classification of high spatial resolution remote sensing images [21]. Using this method, pixels are merged into an entity unit, that is, the object. Merged objects exhibit a great many features, including textures, shapes, locations, relative relationships, and distances, as well as other non-spectral features. However, because massive numbers of object features pose great challenges to classification, the choice of the classification algorithm is essential for making full use of object characteristics. The selection of the classification algorithm is a key issue, greatly affecting the accuracy of land cover classification. Traditional classification methods include the K-means, minimum distance, maximum likelihood, and k-nearest neighbor (k-NN) methods, and the like; advanced algorithms include the decision tree, artificial neural network, support vector machine (SVM), and random forest (RF) methods. These classification algorithms have been applied to various images for land use classification under different surface conditions and pattern types.
Neural networks, decision trees, and SVM are widely used in land cover classification, soil type classification, vegetation (urban vegetation, tree species, and crops) classification, water quality detection, and the like [22–28]. In wetland classification and land cover studies, researchers make their own selections based on their needs. Sader et al. [12], for example, used Landsat-TM images to conduct experiments in Orono and Acadia and proposed a combination of mixed classification and GIS-based methods as the most promising method for classification of forest wetlands. Augusteijn et al. [29] used optimized radar data and neural networks to distinguish between wetlands and uplands. However, although these classifications achieved excellent results, overall accuracy remained below 85%. The algorithms introduced and used in this article are k-NN, SVM, and RF. The k-NN classification algorithm, one of the simplest machine learning algorithms, is a theoretically mature method [30,31]. The principle of the k-NN algorithm is that if most of the k nearest samples of a sample in the feature space belong to a certain category, the sample is also assigned to this category. In this method, the classification of the samples to be divided is determined only according to the category of the one or more nearest samples. Its main disadvantage, however, is that when the sample set is unbalanced and a new sample is input, the majority class among the k nearest neighbors can dominate, which easily causes misclassification. Another disadvantage of this method is that it involves a large number of calculations: classifying each sample requires calculating its distance from all known samples in order to obtain its k nearest neighbors [32]. SVM is a supervised learning model commonly used for pattern recognition, classification, and regression analysis. Nonlinear mapping forms the theoretical basis of SVM, whose goal is the optimal hyperplane of the feature space partition. SVM is a novel small-sample learning method with a solid theoretical basis. For classification of land cover types, however, it has major disadvantages. For example, solving the multi-class classification problem is difficult with SVM. The classical SVM algorithm provides only a two-class classifier, but in the practical application of data mining, it is generally necessary to solve multi-class classification problems [33]. Notably, Salehi et al. [34] and Lin et al. [35] have proposed improved SVM algorithms that use different methods and with which they obtained excellent overall accuracies and kappa coefficients. Salehi et al. [34] compared the improved result with the accuracy of the MLC classification and found the improved SVM method to be superior. Based on bootstrap sampling, the RF method is an extended variant of bagging. This efficient ensemble learning algorithm can be used for multi-class classification, regression, and other tasks without modification. The advantages of the RF method are that the algorithm is simple, easy to implement, and requires little calculation time; it is suitable for a large number of characteristic parameters, is effective at solving redundancy problems, and exhibits powerful performance in many practical applications. These advantages can make up for the shortcomings of k-NN and SVM while obtaining better classification results [36]. Since its proposal in 2001, this method has been widely used in ecology and land use classification.
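
Since RF is presented here as an extended variant of bagging over bootstrap samples, the short sketch below makes that principle concrete. It uses scikit-learn decision trees and is purely illustrative, not the implementation used in this study; X_train, y_train, and X_test are hypothetical NumPy arrays holding per-object feature vectors and class labels.

    import numpy as np
    from collections import Counter
    from sklearn.tree import DecisionTreeClassifier

    def bagged_forest_predict(X_train, y_train, X_test, n_trees=100, seed=0):
        """Bootstrap-aggregated decision trees with a majority vote.

        A real random forest additionally randomizes the features examined at
        each split (here approximated with max_features="sqrt"); this sketch
        only illustrates the bagging idea described in the text.
        """
        rng = np.random.default_rng(seed)
        votes = []
        for _ in range(n_trees):
            idx = rng.integers(0, len(X_train), size=len(X_train))  # bootstrap sample
            tree = DecisionTreeClassifier(max_features="sqrt")
            votes.append(tree.fit(X_train[idx], y_train[idx]).predict(X_test))
        # Majority vote across the trees for every object to be classified.
        return np.array([Counter(col).most_common(1)[0][0] for col in np.array(votes).T])
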
This highly precise algorithm allows determination of variables' importance, prediction of complex interactions between variables, and reduction of training samples. Mahdianpari et al. [37] used SAR data to classify wetlands in the northeastern portion of the Avalon Peninsula and obtained an overall classification accuracy of 94% using the RF algorithm. In arid areas of Xinjiang, Tian et al. [38] used the RF algorithm to classify wetlands, obtaining an overall accuracy of 93%. Using RF and other methods in combination can produce still better results. Liu et al. [39] combined RF and multiple end-member spectral mixture analysis for wetland classification in the Yellow River Delta area using Landsat-8 images. The experiments showed the RF algorithm to be suitable for the classification of wetlands in the Yellow River Delta area and revealed that the combination with multiple end-member spectral mixture analysis could improve results by 2%–3%. At present, the RF algorithm is notably popular, as research has shown that it performs well in a variety of complex wetland classifications. For the classification of estuary wetland land cover, however, the RF method has seen less application to high-resolution remote sensing images, offering a new direction for research. This paper compares the application of Worldview-2 and Landsat-8 images in the classification of land cover types in coastal wetlands over small areas. The classification accuracies of the RF, SVM, and k-NN algorithms for coastal wetland land cover were compared for both the Worldview-2 and the Landsat-8 images. Texture features were also used in the classification to further improve the classification accuracy.

2. Data and Method

In this paper, two types of remote sensing images, Worldview-2 high-resolution images and Landsat-8 remote sensing images, of the Linhong Estuary region in Lianyungang City were used to classify the study area using the RF classification method. The classification results obtained using the SVM and nearest neighbor methods were compared with the results of the RF classification. All classifications were performed based on an object-oriented segmentation process in which a large number of characteristics, including shape features, texture characteristics, and spectral information, were used to classify the image. The high-resolution remote sensing image distinguished well between objects having similar spectral features and improved the classification accuracy of these objects. The classification features included the addition of texture features, which allowed comparison with classification results based on shape and spectral features alone. To highlight the superiority of Worldview-2 high-resolution images for the classification of wetland land cover, Landsat-8 remote sensing imagery of the region was used for land cover analysis, with a comparison of the classification accuracy of the RF, SVM, and k-NN methods for the two images. The parameter settings of the three classification methods were the same for the two images. In the RF method, a random forest consisting of 100 decision trees was established; for each decision tree, the maximum depth was set to 100. In the SVM method, the radial basis function was used to determine the hyperplane, and the c parameter was set to 2. In the k-NN method, the k parameter was set to 2.
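
For readers who want to reproduce the comparison outside eCognition, the parameter settings above map roughly onto the following scikit-learn configuration. This is a hedged sketch rather than the authors' workflow; X_train, y_train, X_test, and y_test are hypothetical arrays of the per-object features described in Section 2.5 and their labels.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier

    # RF: 100 trees with a maximum depth of 100; SVM: RBF kernel with C = 2;
    # k-NN: k = 2, as described in the text.
    classifiers = {
        "RF": RandomForestClassifier(n_estimators=100, max_depth=100),
        "SVM": SVC(kernel="rbf", C=2),
        "k-NN": KNeighborsClassifier(n_neighbors=2),
    }

    # The same object features are fed to all three methods.
    # for name, clf in classifiers.items():
    #     clf.fit(X_train, y_train)
    #     print(name, clf.score(X_test, y_test))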

2.1. Study Area

The study area was Linhong River Estuary Wetland, in the northeast of Lianyungang City, with a longitude/latitude of 119°11′37″–119°16′33″ E and 34°45′37″–34°49′15″ N (Figure 1) and an average elevation of 3–5 m. The total area of the experimental image was 27.6 km². Linhong River Estuary is the mouth of the Xinshu River and is the sea channel of regional rivers such as the Rose River, Fan River, and Zhuji River. Covering tens of square kilometers, it plays a role in flood discharge and helps drain tail water into the sea in Lianyungang. It is located in a warm temperate humid monsoon climate and has an average precipitation of 900.9 mm [40–43]. A large number of artificial and natural objects, including salt field, fish pond, and tidal flat types, are distributed on either side of the river. In 2001, the Linhong River Estuary in the coastal zone of Lianyungang City was used for sewage discharge and flood discharge [44]. In recent years, the overall level of pollution in the sea area of Lianyungang City has been relatively high, and ineffectively controlled discharges of land-source pollutants have seriously polluted the Linhong River Estuary. Simultaneously, reclamation efforts have decreased coastline resources as the natural coast has been cut and straightened, the natural coastline reduced, and the natural coastal landscape and offshore ecological environment destroyed [45]. However, the latest Lianyungang marine economic development plan has named the Linhong River Estuary as a key protected area, and a new Linhong River wetland reserve will be built to restore the wetland ecological environment as well as the area's biodiversity [46].

Figure 1. Study area.

2.2. Satellite Data


Two different spatial resolution images were used for classification in this study area (Table 1). Worldview-2 satellite imagery consisted of four standard bands (blue, green, red, and NearIR-1), four new bands (coastal, yellow, red-edge, and NearIR-2) at a spatial resolution of 2 m, and a panchromatic band at a spatial resolution of 0.5 m. Of the four newly added bands, the coastal band is greatly influenced by the atmosphere and can be used to improve atmospheric correction technology; the yellow band can detect yellow vegetation on land and water, important for feature classification; the red-edge band, located between the red band and the near infrared band, is useful for classification of vegetation; and the NIR-2 band coincides with the NIR-1 band but is less affected by the atmosphere [47,48]. The image used in the experiment was acquired on 1 October 2012, and image fusion was performed prior to the experiment using the Gram-Schmidt Pan Sharpening method [49].

Table 1. Band Descriptions.

Band | Worldview-2 Wavelength (nm) | Worldview-2 Spectral Region | Landsat-8 Wavelength (nm) | Landsat-8 Spectral Region
1 | 400–450 | Coastal | 433–453 | Coastal
2 | 450–510 | Blue | 450–515 | Blue
3 | 510–580 | Green | 525–600 | Green
4 | 585–625 | Yellow | 630–680 | Red
5 | 630–690 | Red | 845–885 | NIR
6 | 705–745 | Red edge | 1560–1651 | SWIR1
7 | 770–895 | NIR-1 | 2100–2300 | SWIR2
8 | 860–1040 | NIR-2 | 500–680 | Pan

The Landsat-8 image used for the experiment was taken in April 2013, and the image was fused using the Gram-Schmidt Pan Sharpening method to obtain a multispectral image with a resolution of 15 m.
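
The Gram-Schmidt fusion itself was performed in standard image processing software; as a rough, hedged illustration of the component-substitution idea behind such pan sharpening (a simplified regression-based injection, not the exact Gram-Schmidt implementation used by the authors), a sketch might look like the following, assuming float arrays and multispectral bands already resampled to the panchromatic grid.

    import numpy as np

    def cs_pansharpen(ms, pan):
        """Simplified component-substitution sharpening sketch.

        ms  : float array (bands, height, width), multispectral bands resampled
              to the panchromatic grid; pan : float array (height, width).
        """
        intensity = ms.mean(axis=0)                       # simulated low-resolution pan
        # Match the panchromatic band to the simulated intensity image.
        pan = (pan - pan.mean()) / pan.std() * intensity.std() + intensity.mean()
        detail = pan - intensity                          # spatial detail to inject
        sharpened = np.empty_like(ms)
        for k, band in enumerate(ms):
            c = np.cov(band.ravel(), intensity.ravel())   # 2 x 2 covariance matrix
            sharpened[k] = band + (c[0, 1] / c[1, 1]) * detail
        return sharpened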

2.3. Training and Verification Data

Due to wetland protection and terrain restrictions, the seaside area could not be reached; it could only be investigated along narrow roads. The survey locations were recorded using GNSS. Two months of field surveys were conducted to obtain the land cover of the study area, establishing a mapping between actual land use types and the remote sensing images. Multi-scale segmentation was carried out in eCognition [50], and the image was automatically segmented into different polygons. The training samples and the verification samples were randomly selected from these polygons by visual interpretation and evenly distributed over the entire image. Random selection of the samples within each category ensured that the samples were representative of each category. Due to the different image resolutions, the level of detail of the land cover types was different. Therefore, different classification systems had to be established, and different samples selected according to the classification system. The land cover types seen in the Worldview-2 images were tidal flat, vegetation, building, backfill land, estuary, fish pond, road, saline alkali soil, and salt field. The land cover types seen in the Landsat-8 images were tidal flat, road, vegetation, building, aquaculture area, backfill land, estuary, fish pond, and high-salinity area (Tables 2 and 3, Figure 2).
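
The per-class random selection of training and verification objects described above can be sketched as follows; the function, its inputs, and the even split fraction are illustrative assumptions, since the actual sample counts were fixed by visual interpretation (Tables 2 and 3).

    import random
    from collections import defaultdict

    def split_samples(labelled_objects, train_fraction=0.5, seed=0):
        """Randomly split labelled segments into training and verification sets,
        class by class, so that every category is represented in both sets.

        labelled_objects is a hypothetical list of (object_id, class_label)
        pairs produced by visual interpretation of the segmentation output.
        """
        rng = random.Random(seed)
        by_class = defaultdict(list)
        for object_id, label in labelled_objects:
            by_class[label].append(object_id)

        training, verification = [], []
        for label, ids in by_class.items():
            rng.shuffle(ids)
            cut = max(1, int(len(ids) * train_fraction))
            training += [(i, label) for i in ids[:cut]]
            verification += [(i, label) for i in ids[cut:]]
        return training, verification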




Table 2. Class description, training samples, and verification samples in the Worldview-2 images.

Class | Class Description | Training Samples | Verification Samples | Actual Average Area of the Samples (m²)
Tidal flat | Close to the river estuary, larger area, image smoothing | 78 | 67 | 5985
Vegetation | Displayed in green under the true color band combination, can be extracted with NDVI | 78 | 49 | 1391
Building | Regular arrangement, centralized distribution away from the estuary | 30 | 36 | 157
Backfill land | Artificial backfill, dark brown in the true color band | 19 | 21 | 3218
Estuary | Gradually widened by a narrow strip, can be extracted with NDVI | 57 | 47 | 4055
Fish pond | Blue square or rectangle, can be extracted with NDWI | 51 | 55 | 7896
Road | Slim shape, distributed along the building area | 9 | 14 | 1629
Saline alkali soil | Irregular shape, high brightness | 30 | 16 | 1283
Salt field | Available in a variety of colors, rectangular in shape | 9 | 9 | 3119

Table 3. Class description, training samples, and verification samples in the Landsat-8 images.

Class | Class Description | Training Samples | Verification Samples | Actual Average Area of the Samples (m²)
Tidal flat | Close to the river estuary, larger area, image smoothing | 30 | 32 | 120,006
Vegetation | Displayed in green under the true color band combination, can be extracted with NDVI | 17 | 15 | 84,642
Backfill land | Artificial backfill, dark brown in the true color band | 9 | 10 | 39,889
Estuary | Gradually widened by a narrow strip, can be extracted with NDVI | 15 | 17 | 169,450
Fish pond | Blue square or rectangle, can be extracted with NDWI | 35 | 44 | 33,686
Road | Slim shape, distributed along the building area | 20 | 22 | 34,837
High salinity area | Irregular shape, high brightness | 21 | 20 | 10,361
Aquaculture area | Distributed in the sea, dark squares or rectangles | 9 | 9 | 9575



Figure 2. Worldview-2 and Landsat-8 training sample and verification sample distribution. (a) Worldview-2 training sample; (b) Worldview-2 verification sample; (c) Landsat-8 training sample; and (d) Landsat-8 verification sample.

2.4. Segmentation

The experiment began with a determination of the segmentation scale such that, in the image, each segmented object corresponded to an actual feature as closely as possible. Specific object types could be expressed by one or several objects. The shape of the object boundary should not be overly broken, the spectral differences of objects in the same class should be small, and the boundaries of features should not be blurred [51]. Image segmentation and the subsequent selection of experimental features, classification, and assessment of accuracy were completed in eCognition. Several key parameters were set during multi-scale segmentation, including shape, compactness, and scale. The sum of shape and color is 1, with shape determining the proportion of image spectral (color) information and object shape information in the segmentation process. If the shape is 0.3, for example, then color is 0.7. The sum of smoothness and compactness is also 1, with compactness controlling how much the object's shape tends to be spatially compact versus spectrally homogeneous (smooth) but less compact.
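
The shape/color and compactness/smoothness weight pairs described above enter the multi-scale (multiresolution) segmentation merge criterion roughly as follows. This is a minimal sketch assuming the published Baatz and Schäpe formulation rather than eCognition's exact implementation; the object dictionaries and their keys are hypothetical, and a merge would be accepted only if the returned cost stays below the squared scale parameter.

    def merge_cost(obj1, obj2, merged, w_shape=0.2, w_compact=0.5, band_weights=None):
        """Increase in heterogeneity caused by merging two image objects."""
        bw = band_weights or [1.0] * len(merged["std"])

        def delta(h):
            # Object-size-weighted increase of the heterogeneity measure h.
            return merged["n"] * h(merged) - obj1["n"] * h(obj1) - obj2["n"] * h(obj2)

        # Color (spectral) heterogeneity: per-band standard deviation.
        h_color = sum(w * delta(lambda o, b=b: o["std"][b]) for b, w in enumerate(bw))
        # Shape heterogeneity: compactness (perimeter / sqrt(pixel count)) and
        # smoothness (perimeter / bounding-box perimeter) components.
        h_cmpct = delta(lambda o: o["perimeter"] / o["n"] ** 0.5)
        h_smooth = delta(lambda o: o["perimeter"] / o["bbox_perimeter"])
        h_shape = w_compact * h_cmpct + (1.0 - w_compact) * h_smooth

        # Shape and color weights sum to 1, as do compactness and smoothness.
        return w_shape * h_shape + (1.0 - w_shape) * h_color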







The scale parameter limits the color and shape complexity of the overall object [52]. ESP [53], a tool proposed by Dragut for automatically obtaining the optimal segmentation scale parameters for the multi-scale segmentation algorithm in eCognition, was used as the scale evaluation tool for this experiment. The optimal segmentation scale parameters were determined from the evaluation results of the ESP tool, visual discrimination of the segmentation effect, and the actual situation of the surface features. The results of the ESP calculation were obtained, and three representative peaks, 126, 86, and 53, were selected for segmentation experiments (Figure 3).

Figure 3. The result of the ESP calculation.

As shown in Figure 4, at a segmentation scale of 126, the edge of the building could not be represented, but at a scale of 53, the complete building was too fragmented; it was better reflected at a scale of 86. The experimental results show that the combination of parameters in which the shape was 0.2, the compactness 0.5, and the segmentation scale 86 could be applied for segmentation of the image. In the Landsat-8 image, by contrast, the segmentation scale was 120, the shape 0.1, and the compactness 0.5.

Figure 4. The segmentation effect at different scales. (a) Shape 0.2, compactness 0.5, and segmentation scale 126; (b) shape 0.2, compactness 0.5, and segmentation scale 86; and (c) shape 0.2, compactness 0.5, and segmentation scale 53.
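
The ESP tool identifies candidate scales from peaks in the rate of change of local variance across scale levels; the minimal sketch below shows that statistic, assuming the local variance at each tested scale (e.g., the mean standard deviation of the segmented objects) has already been computed.

    def rate_of_change(local_variance):
        """Rate of change of local variance between consecutive scale levels.

        local_variance is a hypothetical dict mapping each tested scale
        parameter to its local variance; peaks in the returned values suggest
        candidate scale parameters such as 126, 86, and 53 in Figure 3.
        """
        scales = sorted(local_variance)
        roc = {}
        for prev, curr in zip(scales, scales[1:]):
            lv_prev, lv_curr = local_variance[prev], local_variance[curr]
            roc[curr] = (lv_curr - lv_prev) / lv_prev * 100.0
        return roc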

2.5. Features

Spectral features, index features, geometric features, and texture features were used for land cover classification of coastal wetlands. In each image, 80 features were included in the classification: nine spectral features, two index features, five geometric features, and 64 texture features. The spectral features included the mean of each band and brightness. The index features included the normalized difference vegetation index (NDVI) and the normalized difference water index (NDWI). The geometric features included asymmetry, boundary index, compactness, density, and shape index (Table 4).


Table 4. Partial feature description.

Feature | Description
NDVI | (p(NIR) − p(R)) / (p(NIR) + p(R))
NDWI | (p(G) − p(NIR)) / (p(G) + p(NIR))
Asymmetry | Expressed as the length ratio of the short axis and the long axis of the ellipse.
Border index | Represents the ratio of the true circumference of the object to the circumference of the smallest enclosing rectangle of the object.
Compactness | Describes the compactness of an image object, which is the product of length and width divided by the number of pixels.
Density | Describes the degree of compactness of the object. The closer the object is to a square, the higher the density is.
Shape index | The length of the boundary of the image object divided by four times the square root of its area. The more broken the objects are, the larger the shape index is.
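
Computed from an object's mean band values, the two index features in Table 4 reduce to the minimal sketch below (no guard against a zero denominator is included).

    def ndvi(mean_nir, mean_red):
        """Normalized difference vegetation index of an image object."""
        return (mean_nir - mean_red) / (mean_nir + mean_red)

    def ndwi(mean_green, mean_nir):
        """Normalized difference water index, in the green/NIR form of Table 4."""
        return (mean_green - mean_nir) / (mean_green + mean_nir)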

The gray level co-occurrence matrix method was used to extract texture features and obtain the features of homogeneity, contrast, dissimilarity, entropy, ang. 2nd moment, mean, std. dev., and correlation [54]. Eight different texture features were calculated for the eight bands of the WV-2 image, totaling 64 texture features (Table 5).

Table 5. Texture feature description.

Feature | Description
Homogeneity | A measure of the local gray uniformity of the image. The larger the value, the more uniform the texture of the image is.
Contrast | Reflects the clarity of the image and the depth of the texture grooves. The deeper the texture grooves, the greater the contrast and the clearer the visual effect. On the contrary, if the contrast is small, the grooves are shallow and the effect is blurred.
Dissimilarity | Similar to contrast, but increasing linearly. The value increases with increasing local contrast.
Entropy | The clarity of the image, that is, the degree of texture clarity.
Ang. 2nd moment | A measure of the uniformity of the gray distribution of the image. The thicker the texture, the larger the value.
Mean | Reflects the regularity of the texture. When the texture is cluttered and difficult to describe, the value is small; when the texture is regular and easy to describe, the value is larger.
Std. dev. | A measure of the deviation between the pixel values and the mean value. When the gray level of the image changes greatly, the value is large.
Correlation | Reflects the local gray correlation in the image. When the values of the matrix elements are uniform and equal, the correlation value is large. On the contrary, if the differences between matrix pixel values are large, the correlation value is small.
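
A sketch of how these measures can be derived from the gray level co-occurrence matrix of a single band is shown below, using scikit-image rather than the eCognition implementation used in the study; entropy, mean, and standard deviation are computed directly from the normalized matrix, and the 32-level quantization is an assumption.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(band, levels=32):
        """Texture measures of Table 5 for one band of an image object."""
        # Quantize the band to a small number of gray levels.
        bins = np.linspace(band.min(), band.max(), levels)
        q = (np.digitize(band, bins) - 1).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                            symmetric=True, normed=True)
        p = glcm[:, :, 0, 0]                      # normalized co-occurrence matrix
        i = np.indices(p.shape)[0]                # gray-level index grid

        feats = {name: graycoprops(glcm, name)[0, 0]
                 for name in ("homogeneity", "contrast", "dissimilarity",
                              "ASM", "correlation")}
        feats["entropy"] = -np.sum(p * np.log(p + 1e-12))
        feats["mean"] = np.sum(i * p)
        feats["std_dev"] = np.sqrt(np.sum((i - feats["mean"]) ** 2 * p))
        return feats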

2.6. Accuracy Evaluation

There are many methods of evaluating classification accuracy, including the confusion matrix and the ROC curve. The former is commonly used in land cover classification: the columns of the matrix hold the reference image information, the rows hold the information of the evaluated image classification result, and the cell where a row and a column intersect gives the number of samples classified into a specific category relative to the reference category. The main indicators are the producer's accuracy (PA), user's accuracy (UA), overall accuracy (OA), quantity disagreement (QD), and allocation disagreement (AD). PA is the proportion of samples correctly classified as category I among all samples of type I (one column of the confusion matrix). UA indicates, among all samples classified as class I (a row of the confusion matrix), the proportion of samples whose measured type is class I. OA is the proportion of samples that are correctly classified among all samples. QD and AD are precision evaluation indexes proposed in 2011 [55]. QD is the difference between the reference and comparison maps due to incomplete matching of the proportions of the categories. AD is the difference between the reference and comparison maps due to non-optimal matching in the spatial allocation of the categories. Before the accuracy calculation, the observed sample matrix was converted into an estimated unbiased population matrix according to reference [55].
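
The indicators above can be computed from a confusion matrix with the minimal sketch below, following Pontius and Millones [55]; rows are taken as the classified map and columns as the reference, matching the convention just described, and the input may hold counts or population proportions.

    import numpy as np

    def accuracy_measures(conf):
        """User's/producer's accuracy, overall accuracy, and QD/AD."""
        p = np.asarray(conf, dtype=float)
        p = p / p.sum()                              # population proportions
        diag = np.diag(p)
        ua = diag / p.sum(axis=1)                    # user's accuracy (rows)
        pa = diag / p.sum(axis=0)                    # producer's accuracy (columns)
        oa = diag.sum()                              # overall accuracy

        qd = np.abs(p.sum(axis=0) - p.sum(axis=1)).sum() / 2.0      # quantity disagreement
        ad = (2.0 * np.minimum(p.sum(axis=0) - diag,
                               p.sum(axis=1) - diag)).sum() / 2.0   # allocation disagreement
        assert abs((qd + ad) - (1.0 - oa)) < 1e-9    # total disagreement = 1 - OA
        return ua, pa, oa, qd, ad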

3. Results

The land-cover classification result maps of the seven experiments are shown in Figure 5.







Figure 5. The land-cover classification result maps. (a) Random forest (RF) method and texture features used in the WV-2 image; (b) RF method used in the WV-2 image; (c) support vector machine (SVM) method used in the WV-2 image; (d) k-nearest neighbor (k-NN) method used in the WV-2 image; (e) RF method used in the Landsat-8 image; (f) SVM method used in the Landsat-8 image; and (g) k-NN method used in the Landsat-8 image.

The experiments clearly revealed that, in the Worldview-2 images, the overall classification accuracies of RF, SVM, and k-NN were 91.86%, 87.18%, and 87.14%, with QD of 4.61%, 6.06%, and 6.89%, respectively; the AD of RF, SVM, and k-NN was 3.53%, 6.76%, and 5.97%, respectively. The classification accuracy obtained by adding texture features to the RF algorithm was 94.14%, with a QD of 3.52% and an AD of 2.34%.


For Landsat-8 imagery, the overall classification accuracies of RF, SVM, and k-NN were 86.61%, 79.96%, and 77.23%, with QD of 6.48%, 7.78%, and 8.43%, and AD of 6.91%, 12.26%, and 14.34%, respectively. The image classification results are given in Figure 5.

3.1. Analysis of Classification Results Using Two Types of Spatial Resolution Satellite Images

Tables 6–9 give the classification accuracy evaluation results obtained using 0.5 m Worldview-2 imagery, and Tables 10–12 give the results obtained using Landsat-8 imagery. Tables 7–9 give the Worldview-2 classification results obtained without texture features. For Worldview-2 imagery, the eight spectral bands, the vegetation index, and five geometric features were all used in the k-NN, SVM, and RF classification methods (Tables 7–9). The overall accuracies of the classification results were 91.86%, 87.18%, and 87.14% for the RF, SVM, and k-NN methods, respectively. According to Tables 7–9, which give the classification accuracy for each type, the four types of vegetation, backfill land, salt field, and tidal flat were classified with a high degree of accuracy by all three methods. This is not surprising, as these four feature types are very distinctive, are easy to distinguish, and do not require complex algorithm support. However, although roads could be extracted well using both the RF and the SVM methods, the k-NN algorithm did not distinguish roads from the other types well. The spectral characteristics of the roads varied considerably, and roads with different characteristics could be selected as samples by visual interpretation. Moreover, the classification results obtained using the k-NN algorithm were strongly affected by the samples: for each object to be classified, the distance to every sample in the feature space must be calculated, the k nearest neighbors obtained, and the class of the object determined from them. The training sample selection was relatively scattered and could not take advantage of the strengths of the k-NN algorithm. In the coastal wetland, most of the saline alkali soil was distributed along the edge of the tidal flat and the two types had similar spectral characteristics, so a small amount of misclassification between them occurred with all three classification methods. Estuary and fish pond could be classified using RF and SVM without any misclassification into the other land cover types, whereas the k-NN method misclassified only one sample. The accuracy of estuary and fish pond classification exceeded 90% for all three classification methods, indicating that all three could distinguish between water and non-water types; however, all three showed some mutual misclassification between estuary and fish pond. Building classification was very difficult: complete building boundaries could not be extracted for buildings covered by vegetation, resulting in low classification accuracy. An improved classification method, or a larger number of classification features, is needed. Based on this analysis, the RF method was the most suitable for complex land cover types and could classify land cover using a large number of image features. Its classification accuracy was higher than that of the SVM and k-NN methods in the cases of fewer samples and multiple land cover types. The RF classification method exhibited the highest classification accuracy, indicating that estuary wetland classification plays to its strengths. Tables 10–12 give the results obtained using Landsat-8 imagery. Using the RF algorithm, the overall accuracy of the land cover classification of the Landsat-8 imagery was 86.61%.
For the SVM and k-NN algorithms, the overall accuracies were 79.96% and 77.23%, respectively, using Landsat-8 imagery. The QD of the RF, SVM, and k-NN methods was 6.48%, 7.78%, and 8.43%, respectively, and the AD of the three methods was 6.91%, 12.26%, and 14.34%. With the RF classification method, the classification accuracy of tidal flat, vegetation, backfill land, estuary, and aquaculture area was higher than with the SVM and k-NN algorithms. For the fish pond category, the features in the image were more obvious and the shape was regular, so the classification accuracy of all three methods was relatively high and could exceed 90%. With all three classification methods, high salinity area, vegetation, backfill land, and estuary were misclassified as tidal flat to varying degrees. The tidal flat area was large and might include categories such as vegetation and high salinity area; it was difficult to enter this area during field surveys, so when samples were selected by visual interpretation these categories could not be distinguished and some samples may have been missed. At the same time, the boundary between the tidal flat and the estuary was not obvious, which led to misclassification between these two categories. With the RF algorithm, the PA and UA of the aquaculture area reached 100%, whereas the SVM and k-NN algorithms misclassified the aquaculture area as fish pond owing to its regular shape, indicating that all three classification methods could use shape features for classification, but the RF algorithm performed best. The analysis made it clear that the three classification methods were more accurate when using Worldview-2 imagery than when using Landsat-8 imagery. In images with lower spatial resolution, a single pixel may contain multiple types of land cover, making it difficult to distinguish between them; remote sensing images with lower resolution were therefore not suitable for high-precision classification of estuary wetlands. Moreover, the Landsat-8 image classification system differed from that used for the Worldview-2 images, so two different sets of training and verification samples were selected for the two images according to the different classification systems. In the Landsat-8 images, it was impossible to visually interpret buildings or to accurately distinguish salt fields from saline alkali soil, and as a result these two types were merged into the category of high salinity areas. High-resolution remote sensing images, by contrast, could show the actual distribution and specific details of the terrain, improving classification accuracy. Despite the difference between the WV-2 and Landsat-8 image resolutions, the eight land cover types could be expressed in the Landsat-8 images with the RF method, and the overall accuracy could reach 86.61%, higher than that of SVM and k-NN. The spectral characteristics of Landsat-8 also affected the classification accuracy, and the unclear boundaries between different land cover types led to a decrease in classification accuracy. In our study, the Landsat-8 images were obtained free of charge and could be used for the experiments after pre-processing, which was economical and convenient, and images of the specific size required could be obtained according to research needs.
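For readers who want to reproduce this kind of comparison in outline, the following scikit-learn sketch trains the three classifiers on the same object-level feature table and reports their overall accuracies; X, y, the train/test split, and the hyperparameters are illustrative placeholders rather than the actual configuration used in this study.

```python
# A minimal sketch, assuming X is an (n_objects, n_features) array of object-level
# features (spectral, index, geometric, texture) and y holds the reference classes.
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def compare_classifiers(X, y, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=seed)
    models = {
        "RF": RandomForestClassifier(n_estimators=500, random_state=seed),
        "SVM": SVC(kernel="rbf", C=1.0, gamma="scale"),
        "k-NN": KNeighborsClassifier(n_neighbors=5),
    }
    # Fit each model on the same training objects and report overall accuracy.
    return {name: accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
            for name, model in models.items()}
```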

Table 6. Estimated population matrix of the RF method in the WV-2 image with additional texture features.

Reference (columns): Vegetation, Road, Building, Backfill Land, Salt Field, Saline Alkali Soil, Tidal Flat, Estuary, Fish Pond, Total. Comparison (rows):
Vegetation: 0.1470, 0.0090; total 0.1561
Road: 0.0412, 0.0034; total 0.0446
Building: 0.0036, 0.1075, 0.0036; total 0.1146
Backfill Land: 0.0669; total 0.0669
Salt Field: 0.0255, 0.0032; total 0.0287
Saline Alkali Soil: 0.0510; total 0.0510
Tidal Flat: 0.0087, 0.0029, 0.0058, 0.1932; total 0.2134
Estuary: 0.1375, 0.0122; total 0.1497
Fish Pond: 0.0034, 0.1718; total 0.1752
Reference Total: 0.1470, 0.0476, 0.1251, 0.0669, 0.0284, 0.0637, 0.1932, 0.1440, 0.1840; total 1
PA: 1, 0.8822, 0.8593, 1, 0.8979, 0.8006, 1, 0.9549, 0.9337
UA: 0.9417, 0.9238, 0.9380, 1, 0.8885, 1, 0.9053, 0.9185, 0.9806
Quantity disagreement (QD) = 0.0352. Allocation disagreement (AD) = 0.0234. Overall accuracy (OA) = 0.9414.

Table 7. Estimated population matrix of the RF method in the WV-2 image.

Reference (columns): Vegetation, Road, Building, Backfill Land, Salt Field, Saline Alkali Soil, Tidal Flat, Estuary, Fish Pond, Total. Comparison (rows):
Vegetation: 0.1416, 0.0144; total 0.1561
Road: 0.0409, 0.0037; total 0.0446
Building: 0.1146; total 0.1146
Backfill Land: 0.0029, 0.0611, 0.0029; total 0.0669
Salt Field: 0.0287; total 0.0287
Saline Alkali Soil: 0.0051, 0.0127, 0.0331; total 0.0510
Tidal Flat: 0.0029, 0.0088, 0.0029, 0.0058, 0.1929; total 0.2134
Estuary: 0.1401, 0.0096; total 0.1497
Fish Pond: 0.0096, 0.1656; total 0.1752
Reference Total: 0.1416, 0.0489, 0.1535, 0.0611, 0.0316, 0.0427, 0.1958, 0.1497, 0.1752; total 1
PA: 1, 0.8364, 0.7466, 1, 0.9082, 0.7752, 0.9852, 0.9359, 0.9452
UA: 0.9071, 0.9170, 1, 0.9133, 1, 0.6490, 0.9039, 0.9359, 0.9452
QD = 0.0461. AD = 0.0353. OA = 0.9186.

Table 8. Estimated population matrix of the SVM method in the WV-2 image.

Reference (columns): Vegetation, Road, Building, Backfill Land, Salt Field, Saline Alkali Soil, Tidal Flat, Estuary, Fish Pond, Total. Comparison (rows):
Vegetation: 0.1416, 0.0116, 0.0029; total 0.1561
Road: 0.0446; total 0.0446
Building: 0.0156, 0.0938, 0.0052; total 0.1146
Backfill Land: 0.0035, 0.0634; total 0.0669
Salt Field: 0.0287; total 0.0287
Saline Alkali Soil: 0.0096, 0.0414; total 0.0510
Tidal Flat: 0.0260, 0.0052, 0.0026, 0.0052, 0.1743; total 0.2134
Estuary: 0.1257, 0.0239; total 0.1497
Fish Pond: 0.0168, 0.1583; total 0.1752
Reference Total: 0.1416, 0.0602, 0.1445, 0.0715, 0.0313, 0.0518, 0.1743, 0.1426, 0.1823; total 1
PA: 1, 0.7409, 0.6491, 0.8867, 0.9169, 0.7992, 1, 0.8815, 0.8683
UA: 0.9071, 1, 0.8185, 0.9477, 1, 0.8118, 0.8168, 0.8397, 0.9035
QD = 0.0606. AD = 0.0676. OA = 0.8718.

Table 9. Estimated population matrix of the k-NN method in the WV-2 image.

Reference (columns): Vegetation, Road, Building, Backfill Land, Salt Field, Saline Alkali Soil, Tidal Flat, Estuary, Fish Pond, Total. Comparison (rows):
Vegetation: 0.1390, 0.0170; total 0.1561
Road: 0.0324, 0.0081, 0.0041; total 0.0446
Building: 0.0060, 0.1086; total 0.1146
Backfill Land: 0.0635, 0.0033; total 0.0669
Salt Field: 0.0287; total 0.0287
Saline Alkali Soil: 0.0089, 0.0133, 0.0288; total 0.0510
Tidal Flat: 0.0028, 0.0111, 0.0055, 0.0028, 0.0055, 0.1857; total 0.2134
Estuary: 0.1238, 0.0259; total 0.1497
Fish Pond: 0.0143, 0.1609; total 0.1752
Reference Total: 0.1390, 0.0501, 0.1581, 0.0691, 0.0314, 0.0384, 0.1857, 0.1381, 0.1901; total 1
PA: 1, 0.6467, 0.6869, 0.9190, 0.9140, 0.7500, 1, 0.8965, 0.8464
UA: 0.8905, 0.7265, 0.9476, 0.9492, 1, 0.5647, 0.8702, 0.8270, 0.9184
QD = 0.0689. AD = 0.0597. OA = 0.8714.

Table 10. Estimated population matrix of the RF method in the Landsat-8 image.

Reference (columns): Tidal Flat, High Salinity Area, Vegetation, Backfill Land, Fish Pond, Estuary, Road, Aquaculture Area, Total. Comparison (rows):
Tidal Flat: 0.1183, 0.0197, 0.0039, 0.0079, 0.0158, 0.0237; total 0.1893
High Salinity Area: 0.0148, 0.0962, 0.0074; total 0.1183
Vegetation: 0.0888; total 0.0888
Backfill Land: 0.0592; total 0.0592
Fish Pond: 0.0057, 0.2434, 0.0113; total 0.2604
Estuary: 0.0084, 0.0922; total 0.1006
Road: 0.0153, 0.1149; total 0.1302
Aquaculture Area: 0.0533; total 0.0533
Reference Total: 0.1331, 0.1312, 0.0927, 0.0727, 0.2518, 0.1193, 0.1459, 0.0533; total 1
PA: 0.8888, 0.7332, 0.9579, 0.8143, 0.9666, 0.7728, 0.7875, 1
UA: 0.6249, 0.8132, 1, 1, 0.9347, 0.9165, 0.8825, 1
QD = 0.0648. AD = 0.0691. OA = 0.8661.

Table 11. Estimated population matrix of the SVM method in the Landsat-8 image.

Reference (columns): Tidal Flat, High Salinity Area, Vegetation, Backfill Land, Fish Pond, Estuary, Road, Aquaculture Area, Total. Comparison (rows):
Tidal Flat: 0.1092, 0.0146, 0.0109, 0.0109, 0.0291, 0.0146; total 0.1893
High Salinity Area: 0.0139, 0.0975, 0.0070; total 0.1183
Vegetation: 0.0666, 0.0222; total 0.0888
Backfill Land: 0.0296, 0.0296; total 0.0592
Fish Pond: 0.2488, 0.0116; total 0.2604
Estuary: 0.0112, 0.0112, 0.0782; total 0.1006
Road: 0.0137, 0.1165; total 0.1302
Aquaculture Area: 0.0533; total 0.0533
Reference Total: 0.1232, 0.1257, 0.0775, 0.0739, 0.2600, 0.1189, 0.1380, 0.0828; total 1
PA: 0.8864, 0.7757, 0.8594, 0.4005, 0.9569, 0.6577, 0.8442, 0.6437
UA: 0.5769, 0.8242, 0.7500, 0.5000, 0.9555, 0.7773, 0.8948, 1
QD = 0.0778. AD = 0.1226. OA = 0.7996.

Table 12. Estimated population matrix of the k-NN method in the Landsat-8 image.

Reference (columns): Tidal Flat, High Salinity Area, Vegetation, Backfill Land, Fish Pond, Estuary, Road, Aquaculture Area, Total. Comparison (rows):
Tidal Flat: 0.1114, 0.0223, 0.0037, 0.0223, 0.0297; total 0.1893
High Salinity Area: 0.0947, 0.0079, 0.0158; total 0.1183
Vegetation: 0.0740, 0.0148; total 0.0888
Backfill Land: 0.0444, 0.0148; total 0.0592
Fish Pond: 0.0055, 0.2327, 0.0111, 0.0111; total 0.2604
Estuary: 0.0091, 0.0183, 0.0732; total 0.1006
Road: 0.0087, 0.0174, 0.0087, 0.0955; total 0.1302
Aquaculture Area: 0.0067, 0.0466; total 0.0533
Reference Total: 0.1292, 0.1343, 0.0740, 0.0838, 0.2509, 0.1144, 0.1557, 0.0577; total 1
PA: 0.8622, 0.7051, 1, 0.5298, 0.9275, 0.6399, 0.6134, 0.8076
UA: 0.5885, 0.8005, 0.8333, 0.7500, 0.8936, 0.7276, 0.7335, 0.8743
QD = 0.0843. AD = 0.1434. OA = 0.7723.

3.2. Analysis of Classification Results When Fewer Training Samples Are Available

When classifying wetlands, sometimes only a small number of samples of certain types are obtained after image scale segmentation. For example, using Worldview-2 imagery, only 14 training samples of the road class were available. When the training sample for a class is small, classification accuracy decreases. For road classification using the RF algorithm, the PA and UA were 83.64% and 91.70%, respectively. The SVM algorithm produced a similar classification result, but the k-NN algorithm gave a PA of only 64.67%. Using Landsat-8 imagery, backfill land had only 10 training samples, and the classification PAs for this class were 81.43%, 40.05%, and 52.98% for the RF, SVM, and k-NN algorithms, with UAs of 100%, 50.00%, and 75.00%, respectively. From these results it can be seen that, using 0.5 m Worldview-2 imagery, both the RF and the SVM algorithms achieved better classification results based on the segmented objects when few samples were available. Using Landsat-8 imagery, a number of backfill land objects were omitted when the SVM and k-NN algorithms were used to classify the image, whereas fewer objects were omitted or mixed when the RF algorithm was used. These results coincide with those of existing studies and indicate one advantage of the RF method: when the number of samples is small, higher classification accuracy can still be obtained [56].
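As a hedged aside, the per-class PA and UA used here to judge small-sample classes can be obtained from standard library routines; in the sketch below, scikit-learn's recall plays the role of PA and precision that of UA, and y_true, y_pred, and the label list are placeholders for the validation labels, the predictions, and the classes of interest.

```python
# A minimal sketch for inspecting classes with few samples (e.g., a road class).
from sklearn.metrics import precision_recall_fscore_support

def per_class_pa_ua(y_true, y_pred, labels):
    # precision = user's accuracy (UA), recall = producer's accuracy (PA)
    ua, pa, _, support = precision_recall_fscore_support(
        y_true, y_pred, labels=labels, zero_division=0)
    return {lab: {"PA": float(p), "UA": float(u), "n_reference_samples": int(n)}
            for lab, p, u, n in zip(labels, pa, ua, support)}
```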

3.3. Texture Features’ Influence on Classification Accuracy

The RF algorithm has distinct advantages when a large number of features is used for classification. Texture features were therefore added to the classification features, and their influence on the classification accuracy of the RF algorithm was studied. In the RF classification method, texture features were added to the spectral, exponential, and geometric features for the classification of Worldview-2 imagery, yielding an improved overall classification accuracy of 94.14%, a QD of 3.52%, and an AD of 2.34%. Vegetation, backfill land, and salt field were classified identically by the different combinations of features, and their confusion matrix entries were identical, indicating that these three types of land cover are clearly characterized by certain spectral, index, and geometry features that lead to highly accurate classification. Adding texture features improved the classification accuracy of the road, saline alkali soil, tidal flat, estuary, and fish pond types to a degree, and greatly improved the classification accuracy of buildings while reducing misclassification between buildings and both backfill land and saline alkali soil. The use of texture features could therefore improve accuracy in the classification of estuary wetlands.
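The with/without comparison described above can be sketched as follows; the sketch assumes the object-level base features (spectral, index, geometric) and the texture features are available as aligned arrays, and the five-fold cross-validation and forest size are illustrative choices rather than the setup of this study.

```python
# A minimal sketch: overall accuracy of RF with and without the texture features appended.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def texture_gain(base_features, texture_features, labels, seed=0):
    rf = RandomForestClassifier(n_estimators=500, random_state=seed)
    oa_base = cross_val_score(rf, base_features, labels, cv=5).mean()
    oa_with_texture = cross_val_score(
        rf, np.hstack([base_features, texture_features]), labels, cv=5).mean()
    return oa_base, oa_with_texture
```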

4. Discussion

When the same classification method and the same features were used, the high-resolution Worldview-2 image achieved higher classification accuracy than the Landsat-8 image in this small-area wetland land cover study. At the same time, the Worldview-2 images could express the land cover types in more detail: through visual interpretation, the Worldview-2 images could represent nine different land cover types, whereas the Landsat-8 images could only display eight. This results in different classification systems and classification samples for the two images. In addition to describing the actual resolution of an image, the definition of "high resolution" needs to be considered relative to the extent of the study area. In our small study area, the classification results from the Landsat-8 image with 15 m spatial resolution differed noticeably in accuracy from those of the Worldview-2 image with a spatial resolution of 0.5 m. However, in the global forest cover change study conducted by Hansen et al. [57], 30 m Landsat imagery was a good choice. At the same time, because the Landsat satellites were launched early and have operated continuously, long-time-series studies can be performed, and global Landsat images are available for free. Our experiments compared the adaptability of three algorithms to land cover classification in small coastal wetland areas. The selection of samples and features was relatively simple and easy to implement. Compared with the research of Hansen et al. [57] and Pekel et al. [58], the amount of data processing and the corresponding reference materials were smaller. Hansen et al. and Pekel et al. studied global forest cover change and global surface water change, respectively; these two studies essentially examine the distribution of a single land cover type, whereas our experiment classified all land cover types in the image. In terms of sample selection, our experiment was based on field investigation combined with visual interpretation, and different training and validation samples were selected for the two images of different resolution. This sample selection method is feasible when the research area is small and the amount of data is limited. In the study of Gong et al. [59], a stable classification method using limited samples was proposed, transferring a Landsat sample set with a global resolution of 30 m to global land cover mapping at 10 m resolution. This sample transformation method allows the sample data set to deliver its maximum value, reduces the time spent on sample selection in large-area research, ensures the uniformity of samples at different research scales, and makes the experimental results more reliable. The RF method is likewise used for land cover mapping by Gong et al.; therefore, the RF method can be applied not only to large-scale land use classification of coastal wetlands, but also to global land use classification. In the future, several aspects of the classification process can be optimized and studied further: in the object-oriented segmentation process, how to set the optimal segmentation scale so as to optimally express the land cover types; in feature selection, adding information such as location features to improve classification accuracy; and in the classification algorithm, using deep learning methods such as deep convolutional neural networks in addition to the RF method.

5. Conclusions

High-resolution remote sensing images can describe actual objects in detail and reflect the real situation on the ground. The high-resolution Worldview-2 image was more suitable for small-scale land cover classification than the medium-resolution Landsat-8 image: the classification system of the Worldview-2 image was more detailed than that of the Landsat-8 image, and the classification accuracy was higher. The Landsat-8 classification accuracy was lower because of its spatial resolution, but Landsat-8 images are easy to obtain and have wide coverage, making them suitable for large-scale land cover studies. The RF algorithm demonstrated its superiority in handling a large number of features. For both Worldview-2 and Landsat-8 images, classification with the RF algorithm was more accurate for almost all land cover types than with the SVM and k-NN algorithms. At the same time, the RF algorithm provided more accurate classification when only a small number of samples was available and was more suitable for estuarine wetland classification than the SVM or k-NN algorithm. In addition, different texture features describe the local pattern and arrangement rules of the image from different aspects, and the classification accuracy of the RF algorithm could be improved by adding texture features. In general, high-resolution remote sensing images were suitable experimental data for small-scale land cover classification of coastal wetlands; the RF algorithm could give full play to its advantages and obtain high-quality classification results; and, among the many classification features, texture features could further improve classification accuracy.

Author Contributions: Investigation, X.W., Z.C., J.W., Y.Z., X.L. and H.Z.; Methodology, X.W., X.F., Z.C., J.W. and Y.Z.; Writing—Original draft, X.W. and X.F.; Writing—Review & editing, X.G., Y.Z. and Y.F.; Improving the data analysis, Y.Z. and X.F.

Funding: This research was funded by the National Key Research and Development Program of China (Project Ref. No. 2016YFB0501501), the Natural Science Foundation of China (NSFC No. 31270745; No. 41506106), the Lianyungang Land and Resources Project (LYGCHKY201701), the Lianyungang Science and Technology Bureau Project (SH1629), the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), the Top-notch Academic Programs Project of Jiangsu Higher Education Institutions (TAPP), and the Jiangsu Postgraduate Innovation Program (No. 5508201601; No. SY201808X).

Acknowledgments: The provision of Worldview-2 satellite imagery data is highly appreciated.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Steven, D.; Toner, M.M. Vegetation of Upper Coastal Plain depression wetlands: Environmental templates and wetland dynamics within a landscape framework. Wetlands 2004, 24, 23–42. [CrossRef] 2. Poiani, K.A.; Johnson, W.C.; Swanson, G.A.; Winter, T.C. Climate change and northern prairie wetlands: Simulations of long-term dynamics. Limnol. Oceanogr. 1996, 41, 871–881. [CrossRef] 3. Lagos, N.A.; Paolini, P.; Jaramillo, E.; Lovengreen, C.; Duarte, C.; Contreras, H. Environmental processes, water quality degradation, and decline of waterbird populations in the Rio Cruces wetland, Chile. Wetlands 2008, 28, 938–950. [CrossRef] 4. Gorham, E. Northern Peatlands: Role in the Carbon Cycle and Probable Responses to Climatic Warming. Ecol. Appl. 1991, 1, 182–195. [CrossRef][PubMed] 5. Heimann, D.C.; Femmer, S.R. Water Quality, Hydrology, and Invertebrate Communities of Three Remnant Wetlands in Missouri, 1995-97; US Geological Survey: Leston, VA, USA, 1998. 6. Bildstein, K.L.; Bancroft, G.T.; Dugan, P.J.; Gordon, D.H.; Erwin, R.M.; Nol, E.; Payne, L.X.; Senner, S.E. Approaches to the conservation of coastal wetlands in the western hemisphere. Wilson Bull. 1991, 103, 218–254. 7. Lu, D.; Weng, Q. A survey of image classification methods and techniques for improving classification performance. Int. J. Remote Sens. 2007, 28, 823–870. [CrossRef] 8. Weng, Q. Remote sensing of impervious surfaces in the urban areas: Requirements, methods, and trends. Remote Sens. Environ. 2012, 117, 34–49. [CrossRef] 9. Gallant, A.L. The Challenges of Remote Monitoring of Wetlands. Remote Sens. 2015, 7, 10938–10950. [CrossRef] 10. Raabe, E.A. Gulf of Mexico and Southeast Tidal Wetlands Satellite Imagery: Florida Big Bend Landsat MSS 1973–1987; US Geological Survey: Leston, VA, USA, 2006. 11. Fang, F.L.; Yong, H.J. Study on method of extracting wetland and its changed area based on Landsat TM images. Sci. Surv. Mapp. 2008, 33, 147–149. 12. Sader, S.A.; Ahl, D.; Liou, W.-S. Accuracy of landsat-TM and GIS rule-based methods for forest wetland classification in Maine. Remote Sens. Environ. 1995, 53, 133–144. [CrossRef] 13. Johnston, R.; Barson, M. Remote sensing of Australian wetlands: An evaluation of Landsat TM data for inventory and classification. Mar. Freshw. Res. 1993, 44, 235–252. [CrossRef] 14. Chen, B.; Yu, F.H.; Pan, Y.; Lei, G.; Long, J.S.; Yu, L.; Wang, Y.T. Research on Information Extraction of Land Use in Mountainous Areas Based on Texture and Terrain. Geogr. Geo-Inf. Sci. 2017, 33, 1–8. 15. Gilichinsky, M.; Peled, A. Detection of discrepancies in existing land-use classification using IKONOS satellite data. Remote Sens. Lett. 2014, 5, 93–102. [CrossRef] 16. Hayes, M.M.; Miller, S.N.; Murphy, M.A. High-resolution landcover classification using Random Forest. Remote Sens. Lett. 2014, 5, 112–121. [CrossRef] 17. Xiong, C.W.; Yong, M.X.; Satellite Environment Center. Environmental Monitoring of Cross Border River Basin Using High Resolution Remote Sensing Image—A Case Study of the Red River Basin. Adm. Tech. Environ. Monit. 2016, 28, 11–14. 18. Hof, A.; Wolf, N. Estimating potential outdoor water consumption in private urban landscapes by coupling high-resolution image analysis, irrigation water needs and evaporation estimation in Spain. Landsc. Urban Plan. 2014, 123, 61–72. [CrossRef] 19. Lane, C.R.; Liu, H.; Autrey, B.C.; Anenkhonov, O.A.; Chepinoga, V.V.; Wu, Q. Improved Wetland Classification Using Eight-Band High Resolution Satellite Imagery and a Hybrid Approach. Remote Sens. 2014, 6, 12187–12216. 
[CrossRef] 20. Guo, Q.; Kelly, M.; Gong, P.; Liu, D.; Kelly, N.M. An Object-Based Classification Approach in Mapping Tree Mortality Using High Spatial Resolution Imagery. GIScience Remote Sens. 2007, 44, 24–47. [CrossRef] 21. Nagel, P.; Cook, B.J.; Yuan, F. High Spatial-Resolution Land Cover Classification and Wetland Mapping over Large Areas Using Integrated Geospatial Technologies. Int. J. Remote Sens. Appl. 2014, 4, 71. [CrossRef] 22. Scull, P.; Franklin, J.; Chadwick, O.A. The application of classification tree analysis to soil type prediction in a desert landscape. Ecol. Model. 2005, 181, 1–15. [CrossRef] 23. Kovačević, M.; Bajat, B.; Gajić, B. Soil type classification and estimation of soil properties using support vector machines. Geoderma 2010, 154, 340–347. [CrossRef]

24. Carpenter, G.; Gjaja, M.; Gopal, S.; Woodcock, C. ART neural networks for remote sensing: Vegetation classification from Landsat TM and terrain data. IEEE Trans. Geosci. Remote Sens. 1997, 35, 308–325. [CrossRef] 25. Tigges, J.; Lakes, T.; Hostert, P. Urban vegetation classification: Benefits of multitemporal RapidEye satellite data. Remote Sens. Environ. 2013, 136, 66–75. [CrossRef] 26. Howard, D.M.; Wylie, B.K.; Tieszen, L.L. Crop classification modelling using remote sensing and environmental data in the Greater Platte River Basin, USA. Int. J. Remote Sens. 2012, 33, 6094–6108. [CrossRef] 27. Kumar, R.S.; Menaka, C.; Cutler, M.E.J. Ann Based Robust LULC Classification Technique Using Spectral, Texture and Elevation Data. J. Indian Soc. Remote Sens. 2013, 41, 477–486. [CrossRef] 28. Huiping, Z.; Hong, J.; Qinghua, H. Landscape and Water Quality Change Detection in Urban Wetland: A Post-classification Comparison Method with IKONOS Data. Procedia Environ. Sci. 2011, 10, 1726–1731. [CrossRef] 29. Augusteijn, M.F.; Warrender, C.E. Wetland classification using optical and radar data and neural network classification. Int. J. Remote Sens. 1998, 19, 1545–1560. [CrossRef] 30. Samaniego, L.; Schulz, K. Supervised Classification of Agricultural Land Cover Using a Modified k-NN Technique (MNN) and Landsat Remote Sensing Imagery. Remote Sens. 2009, 1, 875–895. [CrossRef] 31. Garcia-Gutierrez, J.; Mateos-García, D.; Riquelme-Santos, J.C. A SVM and K-NN Restricted Stacking to Improve Land Use and Land Cover Classification. In International Conference on Hybrid Artificial Intelligence Systems; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6077, pp. 493–500. 32. Hart, P. The condensed nearest neighbor rule (Corresp). IEEE Trans. Inf. Theory 2003, 14, 515–516. [CrossRef] 33. Osuna, E.E.; Robert, F.; Federico, G. Support Vector Machines: Training and Applications; Massachusetts Institute of Technology: Cambridge, MA, USA, 1997; Volume 144, pp. 1308–1316. 34. Salehi, B.; Mahdavi, S.; Brisco, B.; Huang, W. Object-Based Classification of Wetlands Using Optical and SAR Data with a Compound Kernel in Support Vector Machine (SVM); AGU Fall Meeting Abstracts: San Francisco, CA, USA, 2015. 35. Lin, Y.; Shen, M.; Liu, B.; Ye, Q. Remote Sensing Classification Method of Wetland Based on AN Improved SVM. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 1, 179–183. [CrossRef] 36. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [CrossRef] 37. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Motagh, M. Random forest wetland classification using ALOS-2 L-band, RADARSAT-2 C-band, and TerraSAR-X imagery. ISPRS J. Photogramm. Remote Sens. 2017, 130, 13–31. [CrossRef] 38. Tian, S.; Zhang, X.; Tian, J.; Sun, Q. Random Forest Classification of Wetland Landcovers from Multi-Sensor Data in the Arid Region of Xinjiang, China. Remote Sens. 2016, 8, 954. [CrossRef] 39. Liu, J.; Feng, Q.; Gong, J.; Zhou, J.; Li, Y. Land-cover classification of the Yellow River Delta wetland based on multiple end-member spectral mixture analysis and a Random Forest classifier. Int. J. Remote Sens. 2016, 37, 1845–1867. [CrossRef] 40. Shu, D.F.; Meng, D.Z.; Liu, Y.Q. Present Situation and Countermeasures of Linhonghekou Wetland Protection in Lianyungang; Jiangsu Water Resources: Nanjing, China, 2015; pp. 35–37. 41. Sun, K.Y.; Liu, X.M.; Zhu, L.X. Thoughts on the Protection and Utilization of the Wetland in Linhong Estuary. Tech. Superv. Water Resour. 2013, 21, 26–28. 42. Wu, H.T.; Gong, L.J.; Tang, S.D. 
Discussion on Several Relations That Should be Dealt with in Linhong Estuary Wetland Planning; Jiangsu Water Resources: Nanjing, China, 2010; pp. 9–10. 43. Xu, X.Q.; Liu, Y.W.; Ran, S.Q. Analysis on the Change Trend of Evaporation in Lianyungang City. Harnessing Huaihe River 2008, 8, 6–7. 44. Li, Y.F.; Yang, Y.; Zhu, X.D. A preliminary study on the functional zonation division of Lianyungang coastal zone. Coast. Eng. 2001, 20, 43–50. 45. Wu, L.Z.; Zhao, M.; Wu, T.; Wu, W.Q.; Ding, Y.F. Situation and Countermeasures of Lianyungang Coastal Resources Development and Management. J. Huaihai Inst. Technol. Soc. Sci. Ed. 2015, 13, 101–103. 46. Zhang, J.; Wang, J.H.; Cui, W.L. The threat, management practice and work focus of science and technology support in the China coastal zone during the 13th Five-Year: Examples in Qingdao, Dongying and Lianyungang. Mar. Sci. 2015, 39, 1–7.

47. Marchisio, G.; Pacifici, F.; Padwick, C. On the relative predictive value of the new spectral bands in the WorldView-2 sensor. In Proceedings of the 2010 IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 25–30 July 2010; pp. 2723–2726. 48. Bachri, S.; Selamat, M.B.; Siregar, V.P.; Wouthuyzen, S. The Comparison of Bathymetric Estimation from Three High Resolution Satellite Imagery. In Proceedings of the Seminar on Hydrography, Batam Island, Indonesia, 27–29 August 2013; pp. 27–29. 49. Wang, M.Y.; Fei, X.Y.; Xie, H.Q.; Liu, F.; Zhang, H. Study of Fusion Algorithms with High Resolution Remote Sensing Image for Urban Green Space Information Extraction. Bull. Surv. Mapp. 2017, 8, 36–40. 50. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS J. Photogramm. Remote Sens. 2011, 58, 239–258. [CrossRef] 51. Rong, W. Research on Information Extraction Method of High Resolution Remote Sensing Image; Lanzhou Jiaotong University: Lanzhou, China, 2013. 52. Moffett, K.B.; Gorelick, S.M. Distinguishing wetland vegetation and channel features with object-based image segmentation. Int. J. Remote Sens. 2013, 34, 1332–1354. [CrossRef] 53. Drăguţ, L.; Tiede, D.; Levick, S.R. ESP: A tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data. Int. J. Geogr. Inf. Sci. 2010, 24, 859–871. [CrossRef] 54. Haralick, R.M.; Shanmugam, K. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, 6, 610–621. [CrossRef] 55. Pontius, R.G., Jr.; Millones, M. Death to Kappa: Birth of quantity disagreement and allocation disagreement for accuracy assessment. Int. J. Remote Sens. 2011, 32, 4407–4429. [CrossRef] 56. Noi, P.T.; Kappas, M. Comparison of Random Forest, k-Nearest Neighbor, and Support Vector Machine Classifiers for Land Cover Classification Using Sentinel-2 Imagery. Sensors 2018, 18, 18. [CrossRef] 57. Hansen, M.C.; Potapov, P.V.; Moore, R.; Hancher, M.; Turubanova, S.A.A.; Tyukavina, A.; Thau, D.; Stehman, S.V.; Goetz, S.J.; Loveland, T.R.; et al. High-resolution global maps of 21st-century forest cover change. Science 2013, 342, 850–853. [CrossRef] 58. Pekel, J.F.; Cottam, A.; Gorelick, N.; Belward, A.S. High-resolution mapping of global surface water and its long-term changes. Nature 2016, 540, 418. [CrossRef] [PubMed] 59. Gong, P.; Liu, H.; Zhang, M.; Li, C.; Wang, J.; Huang, H.; Clinton, N.; Ji, L.; Li, W.; Bai, Y.; et al. Stable classification with limited sample: Transferring a 30-m resolution sample set collected in 2015 to mapping 10-m resolution global land cover in 2017. Sci. Bull. 2019, 64, 370–373. [CrossRef]

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).