
Article

Distinguishing Planting Structures of Different Complexity from UAV Multispectral Images

Qian Ma 1,2, Wenting Han 1,3,*, Shenjin Huang 3, Shide Dong 4 , Guang Li 3 and Haipeng Chen 3

1 Institute of Soil and Water Conservation, Chinese Academy of Sciences, Ministry of Water Resources, Yangling 712100, China; [email protected]
2 College of Advanced Agricultural Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
3 College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling 712100, China; [email protected] (S.H.); [email protected] (G.L.); [email protected] (H.C.)
4 Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing 100101, China; [email protected]
* Correspondence: [email protected]; Tel.: +86-029-8709-1325

Abstract: This study explores the classification potential of a multispectral classification model for farmland with planting structures of different complexity. Unmanned aerial vehicle (UAV) remote sensing technology is used to obtain multispectral images of three study areas with low-, medium-, and high-complexity planting structures, containing three, five, and eight types of crops, respectively. The feature subsets of the three study areas are selected by recursive feature elimination (RFE). Object-oriented random forest (OB-RF) and object-oriented support vector machine (OB-SVM) classification models are established for the three study areas. After training the models with the feature subsets, the classification results are evaluated using a confusion matrix. The OB-RF and OB-SVM models' classification accuracies are 97.09% and 99.13%, respectively, for the low-complexity planting structure. The equivalent values are 92.61% and 99.08% for the medium-complexity planting structure and 88.99% and 97.21% for the high-complexity planting structure. For farmland with fragmentary plots and a high-complexity planting structure, as the planting structure complexity changed from low to high, both models' overall accuracy levels decreased. The overall accuracy of the OB-RF model decreased by 8.1%, and that of the OB-SVM model only decreased by 1.92%. OB-SVM achieves an overall classification accuracy of 97.21%, and a single-crop extraction accuracy of at least 85.65%. Therefore, UAV multispectral remote sensing can be used for classification applications in highly complex planting structures.

Keywords: UAV; multispectral remote sensing; farmland objects; classification; RF; SVM

1. Introduction

According to statistics published by the United Nations, the world population is expected to reach about 10 billion in 2050 [1,2]. Population expansion brings new challenges to the maintenance of food production security. Mastering the area and spatial distribution of regional crops is the prerequisite for accurately obtaining regional crop yields and the rational allocation of regional water resources. However, smallholders or family farms, which are still prevailing in some developing countries, are responsible for a large share of the world food production. The scattered farmland and discrete crops of smallholders make cropland mapping and monitoring more difficult, affecting the accurate estimation of regional crop yields and the rational allocation of water resources. The emergence of remote sensing technology has promoted agricultural production and research from the traditional stage to the stage of refinement, quantification, and mechanism.
High-quality remote sensing images, especially those of high resolution, can extract feature information from the ground, making the fine classification and monitoring of ground details possible [3,4].


Agricultural information at the farmland scale can be directly applied to the optimization of cultivation management and the analysis of breeding decisions. It has further applications compared with the large-scale agricultural monitoring technology used for macro decision-making [5]. Moreover, the acquisition of farmland-scale agricultural information places more stringent requirements on image data timeliness and spatial resolution. At present, the monitoring platform for acquiring crop planting information on the farmland scale is mainly based on high-resolution satellite remote sensing and unmanned aerial vehicle (UAV) remote sensing. Compared with satellite remote sensing, UAV remote sensing is less expensive, offers more flexible mobility, and has a short operating cycle, thus providing several advantages in precision agriculture [6–9].

Researchers have focused on the extraction and classification of single and multiple farmland features, based on UAV remote sensing technology and remote sensing recognition algorithms, in recent years. Supervised classification is widely used to achieve high-precision classification results. There are many supervised classification algorithms, such as maximum likelihood, single linkage, Mahalanobis distance, support vector machines, and random forests. Among them, support vector machines (SVMs) and random forests (RFs) have been widely used in recent years, as they typically offer superior classification [10–13]. Compared with traditional statistical theory, SVMs with simple structures and strong generalization ability can solve a large number of small-sample learning problems [14]. RF uses sample perturbation and attribute perturbation to achieve good robustness in classification convergence and generalization error [15]. Besides this, deep learning, as an extension of the artificial neural network method, is an exciting and effective new classification method. However, its high demand for samples increases its cost and limits its application in areas lacking samples [16,17].

The extraction of single-crop planting information is mainly realized by appropriate remote sensing recognition algorithms based on the unique phenological characteristics of a crop. At present, the extraction of large-scale crops, such as corn, wheat, rice, tobacco, and fruit trees, is based on pixel- and object-oriented classification algorithms, and the classification accuracy can reach more than 90% [18–21]. Liu et al. [22] reported a classification accuracy of winter wheat planting information that exceeded 93.0%, using a deep learning algorithm for pixels combined with UAV visible light remote sensing data. Hall et al. [23] constructed spectral and texture features for maize information from UAV multispectral remote sensing data, and applied an object-oriented SVM to realize the extraction of corn planting information with an extraction accuracy of 94.0%. However, in the case of mixed crops or the intercropping of multiple crops, the fine extraction of multi-crop information suffers significant interference. It requires more robust recognition algorithms than single-crop information extraction. Object-oriented image analysis (OBIA) takes the image object as the primary processing unit. It considers the spectrum, texture, and context as input features, effectively solving the "salt and pepper" phenomenon in pixel classification [24,25]. Scholars have used OBIA to achieve good classification results for complex planting structures containing between three and five different crops [26–29].
For example, based on UAV images, OBIA was used to classify five types of crops with an overall accuracy of 84.7% [30]. Nevertheless, the extraction accuracy of multiple crops is still insufficient because the similarity of two or more crops is high. Wu et al. [31] used a portable UAV to obtain visible light images, and used an object-oriented classification method to classify rice, lotus root, and vegetables. Their results showed that the mixed classification of lotus root and vegetable land presented a severe challenge, with a relative error as high as 193%. Liu et al. [32] used SVM to distinguish corn, wheat, trees, greenhouses, and other ground features, based on UAV visible image data and digital surface model (DSM) data. Their results indicated that using only spectral features for classification would lead to confusion between trees and corn.

In general, the information extraction of single farmland features based on UAV remote sensing data is relatively mature, and the extraction accuracy is high. However, there is still some confusion in the classification of many kinds of crops. In addition, multiple crop classification mainly focuses on three to five different crops, and there have been few comparative studies in which the cropping structures have different levels of complexity. Therefore, this paper describes the use of UAV remote sensing technology to classify farmland features in study areas with different levels of planting structure complexity. The aims of this study are as follows: (1) explore the applicability of UAV multispectral remote sensing recognition algorithms for farmland feature classification with planting structures that have different degrees of complexity; and (2) analyze the potential for UAV multispectral remote sensing technology to be used for complex planting structure extraction.

2. Study Area and Data Preparation

2.1. Overview of the Study Area

The study areas are located in Wuyuan County, part of the Inner Mongolia Autonomous Region of China, which has a typical mid-temperate continental monsoon climate. The geographic location map of the study areas is shown in Figure 1. The study areas are arid and receive plenty of sunshine. The annual rainfall is only 130–285 mm, and the annual total amount of solar radiation is as high as 6424.23 MJ·m−2. The rich water resources in these areas benefit from the Yellow River diversion irrigation system, and can completely satisfy the needs of local crops. This study considers three areas in Wuyuan County. Study area 1 (SA1) is in Taerhu (49.99°N, 107.83°E), study area 2 (SA2) is in Fuxing (41.12°N, 107.96°E), and study area 3 (SA3) is located in Yindingtu (41.18°N, 107.84°E). SA1 contains three types of crops (corn, sunflower, and zucchini), and is selected as a district with low planting structure complexity.
SA2 contains five types of crops (corn, sunflower, zucchini, hami melon, and pepper), and is selected as a district with medium planting structure complexity. SA3 contains eight types of crops (sunflower, zucchini, hami melon, pepper, sapling, watermelon, cherry tomato, and tomato), and is selected as a district with high planting structure complexity. During the experimental period, the corn was in the jointing stage, the sunflower was in the budding stage, and the zucchini, hami melon, pepper, watermelon, cherry tomato, and tomato were all in the fruiting stage.

Figure 1. Geographical location of the study areas.

2.2. The Collection of UAV Remote Sensing Data

An information collection system based on a UAV (S900, DJI Technology Co., Shenzhen, China) was adopted to collect the multispectral remote sensing images. The system integrated UAV flight control and the corresponding position and orientation system (POS) data acquisition. It could stably obtain UAV multispectral images without distortion. The multispectral camera (MicaSense RedEdge-M, MicaSense, Seattle, WA, USA) could obtain red, green, blue, near-infrared, and red edge band data. Detailed information of the UAV and multispectral camera is presented in Table 1.


Table 1. Main parameters of the unmanned aerial vehicle (UAV) and camera.

Unmanned Aerial Vehicle (UAV)                         Camera
Parameters                            Values          Parameters           Values
Wheelbase/mm                          900             Camera model         MicaSense RedEdge-M
Takeoff mass/kg                       4.7–8.2         Pixels               1280 × 960
Payload/g                             820             Bands                5
Endurance time/min                    20              Wavelength/nm        400–900
Digital communication distance/km     3               Focal length/mm      5.5
Battery power/mAh                     16,000          Field of view/(°)    47.2
Cruising speed/(m·s−1)                5

The spectral characteristics of crops vary significantly under different phenological periods and light conditions. The UAV remote sensing tests were conducted on 26 and 29 July and 1 August 2020, days with similar meteorological conditions. The meteorological data obtained from the local weather bureau were the average values from 11 a.m. to 2 p.m. during the test period (shown in Table 2). The three experimental days were sunny, with low wind speed, few air pollutants, and high illuminance, all suitable for UAV flight operations.

Table 2. Meteorological data of the study areas during the test.

Date            Air Temperature (°C)    Air Humidity (%)    Illuminance (1 × 10^4 lux)    Wind Speed (m/s)    PM2.5 (µg/m³)    PM10 (µg/m³)
26 July 2020    25.43                   67.08               23.28                         2.20                15.00            16.50
29 July 2020    28.63                   51.98               21.53                         1.60                5.00             5.25
1 August 2020   28.65                   58.13               23.48                         1.50                13.00            14.00

The flight altitude was set to 100 m above the ground, the course overlap was 70%, and the horizontal overlap was 65%. The RAW format images were exported and converted to TIFF format using the PixelWrench2 software installed in the camera. The spectral reflectivity was calculated using ENVI (v. 5.1, Exelis Visual Information Solutions, Boulder, CO, USA) combined with the standard whiteboard data. The ground control points (GCPs) are vital in verifying the accuracy of terrain information obtained by the UAV. The 3D coordinates of the GCPs in this study were accurately measured by a real-time kinematic (RTK) product (Zhonghui i50, CHCNAV, Shanghai, China), which has a high precision of 2 mm. According to the actual terrain conditions and the control point layout principle, six GCPs were selected in each study area. Among them, the three odd-numbered points were used as calibration points and the three even-numbered points were used as check points. The control points were set at the intersections of hardened roads. They were easy to distinguish and had good stability. Images were stitched using Pix4DMapper (v. 3.2, Pix4D, Prilly, Switzerland) based on the TIFF multispectral images and POS data collected by the UAV remote sensing system. In this study, 1540 multispectral remote sensing images were obtained from the three study areas. The data contained grayscale information for the red, blue, green, near-infrared, and red edge bands, and the spatial resolution was 7 mm. The UAV images' mosaic results of the study areas are shown in Figure 2.
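The reflectance conversion itself was performed in ENVI with the standard whiteboard data. Purely as an illustration of the underlying idea, the sketch below shows a single-panel empirical calibration in Python; the function and array names are hypothetical, and the panel reflectance value is an assumption rather than a value reported in the paper.

```python
import numpy as np

def dn_to_reflectance(band_dn, panel_dn, panel_reflectance=0.99):
    """Convert raw digital numbers to surface reflectance with one
    calibrated reference panel (whiteboard), band by band.

    band_dn           : 2-D array of digital numbers for one spectral band
    panel_dn          : digital numbers sampled over the reference panel
    panel_reflectance : known reflectance of the panel in this band (assumed)
    """
    panel_mean = np.mean(panel_dn)           # average panel signal
    gain = panel_reflectance / panel_mean    # DN-to-reflectance scale factor
    return np.clip(band_dn * gain, 0.0, 1.0)

# Hypothetical usage:
# red_refl = dn_to_reflectance(red_dn, red_panel_dn, panel_reflectance=0.99)
```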


Figure 2. UAV images mosaic results of the study areas. ((A) UAV images mosaic result of study area 1; (B) UAV images mosaic result of study area 2; (C) UAV images mosaic result of study area 3).

2.3. The Collection of Ground Data

During the experiment, we collected the ground distribution data and ground spectral data of crops. The ground distribution data of crops are the basis for selecting training samples and verification samples, and help us evaluate the classification results visually. The ground spectral data of crops help us better explore the differences in crop spectral characteristics, provide a theoretical basis for the classification results, and allow the error sources in the classification results to be analyzed effectively.

2.3.1. The Ground Distribution Data of Crops

The types of crops were determined based on field surveys, and the location of each crop was recorded using portable RTK in units of plots. Combining the ground data and UAV images, the ground crop distribution maps (Figure 3) were drawn.


Figure 3. The ground crop distribution maps. ((A) the standard ground crop distribution map of study area 1; (B) the ground crop distribution map of study area 2; (C) the ground crop distribution map of study area 3).

2.3.2. Crop–Ground Spectral Curves

The crop–ground spectral curves in this study were obtained by a FieldSpec HandHeld spectroradiometer (ASD, Westborough, CO, USA) on a sunny day (1 August, 11:00–14:00). The specific parameters of the FieldSpec HandHeld are shown in Table 3. As shown in Figure 4, the field experimenters had to wear dark clothes and face the sun when collecting ground spectral data. First, the optical fiber probe was aligned at the whiteboard for correction, and then aligned at the vegetation canopy to collect reflectance spectra. Six samples were randomly selected from each type of crop, and ten spectral curves were measured for each plant sample; these were arithmetically averaged to obtain the final reflectance spectral data of the sample.

Table 3. The specific parameters of FieldSpec HandHeld.

Parameters                    Values
Spectral range                325–1075 nm
Spectral resolution           3.5 nm at 700 nm
Sampling interval             1.6 nm
Integration time              2^n × 17 ms for n = 0, 1, …, 15
Wavelength accuracy           ±1 nm
Noise equivalent radiance     5.0 × 10^−9 W/cm²/nm/sr at 700 nm


Figure 4. Schematic diagram of spectral curve collection work.

3. Research Procedure and Method

The workflow of planting structure extraction is shown in Figure 5. There are seven main stages: (1) the acquisition and preprocessing of UAV remote sensing images, including the construction of the UAV multispectral system, the selection of an aerial photography path, and the stitching and geometric correction of orthophoto images; (2) the collection of ground data, including the investigation of the true distribution of crops on the ground and the collection of crop–ground spectral curves; (3) the selection of training and verification samples of UAV multispectral images; (4) multiscale segmentation of UAV images; (5) the extraction of features and the determination of feature subsets, including the extraction of spectral features and texture features, and the selection of the best feature bands based on recursive feature elimination (RFE); (6) the use of object-oriented RF (OB-RF) and object-oriented SVM (OB-SVM) classification models to classify farmland crops; and (7) the use of confusion matrices to evaluate and compare the classification accuracy of each model.

Figure 5. Workflow of planting structure extraction.



3.1. Sample Selection

The types of crops in the three study areas were determined through field research, and RTK was used to calibrate each crop's geographic location. We randomly generated samples based on the ground standard crop distribution maps (Figure 3). In the three study areas, the reference samples were randomly split into two disjoint sets of training samples (TS) and validation samples (VS), via the sample function in R (v. 4.0.3). The selection results of the samples are shown in Table 4.

Table 4. Number of training and validation samples for each study area.

Study Area 1                 Study Area 2                 Study Area 3
Crops        TS    VS        Crops        TS    VS        Crops           TS    VS
Corn         35    15        Corn         33    11        Sunflower       48    17
Sunflower    38    12        Sunflower    40    15        Zucchini        32    12
Zucchini     40    17        Zucchini     42    15        Hami melon      12    5
Bare land    20    8         Hami melon   25    8         Pepper          21    9
                             Pepper       27    9         Sapling         12    6
                             Bare land    18    6         Watermelon      14    6
                                                          Cherry tomato   23    8
                                                          Tomato          30    11
                                                          Bare land       25    9
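The split itself was performed with the sample function in R. A minimal Python equivalent is sketched below; the 70/30 ratio, the stratification, and the synthetic stand-in data are assumptions for illustration, not values reported in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins: 150 reference objects, 12 features, 4 crop classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 12))
y = rng.integers(0, 4, size=150)

# Disjoint training / validation sets (ratio and stratification assumed).
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
```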

3.2. Construction and Screening of Feature Parameters

3.2.1. Construction of Spectral Features and Texture Features

Vegetation indices can magnify the spectral information between ground objects, and are one of the simplest and most effective methods of studying vegetation characteristics. In this study, eight common vegetation indices were obtained from band calculations (shown in Table 5): the normalized difference vegetation index (NDVI) [33], the ratio vegetation index (RVI) [34], the difference vegetation index (DVI) [35], excess green (ExG) [36], the visible-band difference vegetation index (VDVI) [37], the normalized green–blue difference index (NGBDI) [38], the normalized green–red difference index (NGRDI) [39], and the Woebbecke index (WI) [40]. Texture features can reflect the characteristics of homogeneity in the images, and are unaffected by image color and image brightness. The common texture features include the mean, variance, synergy, contrast, dissimilarity, information entropy, second moment, and correlation. This study obtained 40 texture features of crops in five bands (red, green, blue, near-infrared, red edge) by applying co-occurrence measures, which calculate texture values using the grey tone spatial dependence matrix. This process was implemented in ENVI 5.1 (Exelis Visual Information Solutions, Boulder, CO, USA). In addition, the size of the filtering window was 3 × 3, which is the default value in ENVI.

Table 5. Common vegetation indices.

Vegetation Indices    Full Name                                     Formula
NDVI                  Normalized difference vegetation index        (ρNIR − ρR)/(ρNIR + ρR)
RVI                   Ratio vegetation index                        ρNIR/ρR
DVI                   Difference vegetation index                   ρNIR − ρR
ExG                   Excess green                                  2ρG − ρR − ρB
VDVI                  Visible-band difference vegetation index      (2ρG − ρR − ρB)/(2ρG + ρR + ρB)
NGBDI                 Normalized green–blue difference index        (ρG − ρB)/(ρG + ρB)
NGRDI                 Normalized green–red difference index         (ρG − ρR)/(ρG + ρR)
WI                    Woebbecke index                               (ρG − ρB)/(ρR − ρG)
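As an illustration of how the Table 5 indices can be derived from the band reflectance rasters, a minimal NumPy sketch follows; the band array names and the small epsilon guard against division by zero are assumptions.

```python
import numpy as np

def vegetation_indices(red, green, blue, nir, eps=1e-6):
    """Compute the Table 5 vegetation indices from reflectance arrays."""
    return {
        "NDVI":  (nir - red) / (nir + red + eps),
        "RVI":   nir / (red + eps),
        "DVI":   nir - red,
        "ExG":   2 * green - red - blue,
        "VDVI":  (2 * green - red - blue) / (2 * green + red + blue + eps),
        "NGBDI": (green - blue) / (green + blue + eps),
        "NGRDI": (green - red) / (green + red + eps),
        "WI":    (green - blue) / (red - green + eps),
    }
```

The texture features in this study were computed in ENVI. Purely as an illustrative alternative, grey-level co-occurrence statistics can also be derived with scikit-image; only the classic GLCM properties are shown here, and the mean, variance, and entropy measures listed above would need additional computation.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Hypothetical 8-bit single-band image patch.
patch = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)

# Co-occurrence matrix for a 1-pixel offset in four directions.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

texture = {p: graycoprops(glcm, p).mean()
           for p in ("contrast", "dissimilarity", "homogeneity",
                     "energy", "correlation")}
```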

3.2.2. Screening of Characteristic Parameters

To improve the operation speed and prediction accuracy of the model and avoid overfitting, the feature parameters of the images were screened to eliminate features that had a low correlation with the model prediction. Recursive feature elimination (RFE) is an efficient algorithm that combines classifiers to find the optimal feature subset [41]. It creates the model repeatedly, and retains the best features or removes the worst features in each iteration. In subsequent iterations, it uses features that were not selected in the previous model to create the next model until all features are exhausted. Finally, RFE ranks the features according to the order in which they were retained or removed, and selects the best subset. This study performed feature optimization based on the RFE module in scikit-learn, a free machine learning library based on the Python programming language. The RF classifier was used to evaluate the RFE model, and ten-fold cross-validation was adopted to evaluate the accuracy of the model parameters. RFE was used to screen features in five spectral bands, seven vegetation indices, and 40 texture features. The importance rankings of the features are shown in Tables A1–A3. The characteristics ranked first, second, third (and so on) in their corresponding feature sets were denoted as B1, B2, B3 (and so on). The feature parameters were accumulated one by one as per the importance rankings of the feature parameters, and the images were pre-classified based on the accumulated feature subset. The classification accuracy is shown in Figures A1–A3. According to the importance rankings of all features and the pre-classification results in Figures A1–A3, the feature subset was then constructed by retaining the features that contributed significantly to the classification, and eliminating the features that contributed little or were useless. The final filtering results are presented in Table 6.

Table 6. List of screening results of feature subsets.

Study Area    Feature Types          Feature Subset
SA1           Multispectral bands    Red-band; Blue-band
              Vegetation indices     VDVI; ExG
              Texture features       Red-mean; Green-mean; Blue-mean; Red-correlation; Green-homogeneity; Blue-correlation; NIR-correlation
SA2           Multispectral bands    Red-band; Green-band; Blue-band; NIR; Red-edge
              Vegetation indices     NGBDI; ExG; RVI
              Texture features       Blue-contrast; Blue-entropy; Blue-homogeneity; Green-dissimilarity; Red-second-moment; NIR-dissimilarity
SA3           Multispectral bands    Red-band; Green-band; Blue-band; NIR; Red-edge
              Vegetation indices     NGBDI; RVI; DVI; NGRDI
              Texture features       Red-homogeneity; Green-homogeneity; Blue-dissimilarity; Red-correlation
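The screening in Section 3.2.2 was implemented with the RFE module in scikit-learn. A minimal sketch along those lines is given below, using RFECV with a random forest estimator and ten-fold cross-validation; the synthetic data, tree count, and scoring metric are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

# Hypothetical stand-in for the candidate band, index, and texture features.
X, y = make_classification(n_samples=300, n_features=52, n_informative=15,
                           random_state=0)

selector = RFECV(
    estimator=RandomForestClassifier(n_estimators=50, random_state=0),
    step=1,                 # drop one feature per iteration
    cv=10,                  # ten-fold cross-validation, as in the paper
    scoring="accuracy",
)
selector.fit(X, y)
print("selected features:", np.flatnonzero(selector.support_))
print("feature ranking  :", selector.ranking_)
```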

3.3. Multiresolution Segmentation

OBIA makes full use of the spatial, textural, contextual, and other geometric features and structure information of remote sensing images. It is superior to pixel-oriented analysis for crop extraction because it efficiently solves the problems of "same substance with a different spectrum", "same spectrum with a foreign substance" and the "salt and pepper effect" [24]. OBIA uses an iterative algorithm to segment remote sensing images into uniform and continuous image objects. OBIA mainly has two independent modules: object generation and image information extraction. A good segmentation effect is the prerequisite to achieving excellent classification results [25]. Generally, the ground feature information is complex and mixed, making it challenging to obtain an ideal segmentation effect using a single-scale segmentation method. Therefore, multiresolution segmentation is commonly adopted for land use information extraction. This method creates image polygon objects with arbitrary scales and similar attribute information. Through multiresolution segmentation, adjacent similar pixels gather to form objects, and the classifier uses these homogeneous objects as the basic processing units to extract information from images.

In this study, remote sensing images were first segmented into image objects with different scales, based on the multiscale segmentation method. Then, target crop extraction was accomplished using the spectral and textural features of the objects. The data processing was carried out with eCognition Developer (v. 9.2.1, Trimble Geospatial). The segmentation parameters were adjusted through multiple segmentation experiments based on expert knowledge, following the principle that the segmentation effect should best fit the ridge lines, and were tuned many times to ensure optimal values during the multiscale segmentation. The optimal segmentation parameters for the remote sensing images were determined to be as follows: the segmentation scale was set to 200, the shape weight was set to 0.2, and the compactness weight was set to 0.5. The final segmentation results are shown in Figure 6.
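The multiresolution segmentation itself was performed in eCognition Developer, which exposes no open Python API. Purely as an illustrative stand-in, the sketch below generates comparable object primitives with SLIC superpixels from scikit-image; the segment count and compactness are assumptions and do not correspond to the eCognition scale, shape, and compactness values reported above.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

# Hypothetical 3-band reflectance mosaic scaled to [0, 1].
image = np.random.default_rng(0).random((512, 512, 3))

# Over-segment the image into homogeneous objects; these play the role of
# the image objects produced by multiresolution segmentation.
segments = slic(image, n_segments=2000, compactness=10.0,
                start_label=1, channel_axis=-1)   # scikit-image >= 0.19

# Per-object mean features can then feed the object-oriented classifiers.
objects = regionprops(segments, intensity_image=image[..., 0])
mean_band_values = [obj.mean_intensity for obj in objects]
```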


Figure 6. Image segmentation results of the study areas. ((A) image segmentation result of study area 1; (B) image segmentation result of study area 2; (C) image segmentation result of study area 3).

3.4. Classification Methods

3.4.1. RF

RF is a nonparametric machine learning algorithm composed of multiple decision trees. This algorithm has high prediction accuracy, good tolerance to outliers and noise, a wide range of applications, and cannot easily be overfitted [42]. According to statistical learning theory, RF uses the bootstrap resampling method to extract multiple samples from the original data, and then performs decision tree modeling for each sample.


The prediction results from various decision trees are synthesized, and finally, a random forest with a mass of classification trees is constructed [43]. Two parameters need to be defined to generate prediction models: the number of expected classification trees (ntree) and the number of features extracted when nodes are split (mtry). The implementation of the RF model in this study was based on the Random Forest module in scikit-learn, based on the Python programming language. It was found that setting ntree to 50 produced an error that gradually converged and tended to be stable, while mtry was set to the square root of the total number of features.
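A minimal scikit-learn sketch of an RF classifier configured as described above (ntree = 50, mtry equal to the square root of the feature count); the training data are synthetic placeholders, not the study's object-level features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical object-level training data (features per segmented object).
X_train, y_train = make_classification(n_samples=200, n_features=13,
                                        n_informative=8, n_classes=4,
                                        n_clusters_per_class=1, random_state=0)

rf = RandomForestClassifier(
    n_estimators=50,       # ntree = 50, as in the paper
    max_features="sqrt",   # mtry = sqrt(total number of features)
    random_state=0,
)
rf.fit(X_train, y_train)
```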

3.4.2. SVM

SVM is based on the Vapnik–Chervonenkis dimension theory of statistical learning and the principle of minimum structural risk. It is often used to solve small-sample, nonlinear, and high-dimensional pattern recognition problems [44]. Under the condition of limited sample information, SVM provides a good balance between the complexity and learning ability of the model, and has a good generalization ability. The common kernel functions in the SVM algorithm are linear, polynomial, radial basis, and sigmoid kernel functions. The radial basis kernel function is the most widely used, as it has fewer parameters and better performance than the others, regardless of the number of samples [45]. The implementation of the SVM model in this study was based on the support vector machines module in scikit-learn, based on the Python programming language.
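A corresponding sketch for an SVM classifier with the radial basis kernel; the penalty parameter C and the gamma heuristic are assumptions, since the paper does not report them.

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Hypothetical object-level training data (features per segmented object).
X_train, y_train = make_classification(n_samples=200, n_features=13,
                                        n_informative=8, n_classes=4,
                                        n_clusters_per_class=1, random_state=0)

svm = SVC(
    kernel="rbf",   # radial basis kernel, as in the paper
    C=10.0,         # assumed regularization strength
    gamma="scale",  # assumed kernel-width heuristic
)
svm.fit(X_train, y_train)
```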

3.5. Classification Accuracy Assessment

Based on the verification sample data, a confusion matrix was used to calculate the user accuracy (UA), production accuracy (PA), extraction accuracy (F), overall accuracy (OA), and Kappa coefficient. UA and PA can be used to evaluate misclassification and omission errors quantitatively, and the overall accuracy and the Kappa coefficient (K) are commonly used to evaluate the overall classification effect. Besides this, F is used to evaluate the extraction accuracy of all kinds of ground objects under various methods.

P0 = Tc/Ac

where P0 represents the overall classification accuracy, Tc represents the number of pixels correctly classified by method c, and Ac represents the total number of pixels classified by method c.

K = (P0 − Pe)/(1 − Pe),  Pe = (a1 × b1 + a2 × b2 + ··· + ac × bc)/(n × n)

where P0 represents the overall classification accuracy, assuming that the true number of samples of each category is a1, a2, ··· , ac, the predicted number of samples of each category is b1, b2, ··· , bc, and the total number of samples is n.

F = 2 × PAm × UAm/(PAm + UAm) × 100%

where F represents the extraction precision, PAm represents the production accuracy of category m, and UAm represents the user precision of category m.
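A short sketch showing how the metrics defined above (OA, Kappa, and per-class PA, UA, and F) can be computed from a confusion matrix; the label arrays are hypothetical placeholders.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

# Hypothetical reference (y_true) and predicted (y_pred) crop labels.
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 3, 3, 3])
y_pred = np.array([0, 1, 1, 1, 2, 2, 3, 3, 3, 3])

cm = confusion_matrix(y_true, y_pred)            # rows: reference, cols: predicted
oa = np.trace(cm) / cm.sum()                     # overall accuracy P0
kappa = cohen_kappa_score(y_true, y_pred)        # Kappa coefficient K
pa = np.diag(cm) / cm.sum(axis=1)                # production accuracy per class
ua = np.diag(cm) / cm.sum(axis=0)                # user accuracy per class
f = 2 * pa * ua / (pa + ua) * 100                # extraction accuracy F (%)
print(oa, kappa, pa, ua, f)
```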

4. Results

The crop planting information in three study areas with different planting complexities was extracted using OB-RF and OB-SVM (Figures 7 and 8), based on the multispectral remote sensing images obtained by the UAV in the three study areas. The confusion matrix was used to evaluate the accuracy of the classification results. It assumed that pixels at the reference locations could be assigned to single classes, and accuracy measures based on the proportion of area correctly classified were then calculated from the number of correctly classified pixels [46]. The accuracy evaluation results are presented in Tables 7–9.


In SA1, both OB-RF and OB-SVM achieved good classification results, with an overall accuracy greater than 97% and an extraction accuracy for every crop greater than 92%. The accuracy in SA2 was slightly lower, but the overall accuracy was still above 92%. The extraction accuracy of the OB-RF model for pepper and hami melon was low (84.86% for pepper, 75.65% for hami melon), while the extraction accuracy of the OB-SVM model for all crops remained at a high level (extraction accuracy greater than 94%). In SA3, the overall accuracy and extraction accuracy based on the OB-SVM model remained high (overall accuracy of 97.21%, extraction accuracy greater than 85.65%), whereas the overall accuracy and extraction accuracy given by the OB-RF model decreased significantly. Among all study areas, corn had the highest extraction accuracy, and saplings had the lowest extraction accuracy.


Figure 7. Classification results of the OB-RF model. ((A) classification result of the OB-RF model in study area 1; (B) classification result of the OB-RF model in study area 2; (C) classification result of the OB-RF model in study area 3). Note: OB-RF stands for object-oriented random forest classification model.



Figure 8. Classification results of the OB-SVM model. ((A) classification result of the OB-SVM model in study area 1; (B) classification result of the OB-SVM model in study area 2; (C) classification result of the OB-SVM model in study area 3). Note: OB-SVM stands for object-oriented support vector machine classification model.

Table 7. Accuracy evaluation of classification results for SA1.

Methods    Accuracy (%)    Zucchini    Corn      Sunflower    Bare Land
OB-RF      PA              98.31       100.00    95.19        98.01
           UA              90.90       99.83     99.30        93.98
           F               94.46       99.91     97.20        95.95
           Overall accuracy = 97.09%, Kappa = 0.95
OB-SVM     PA              99.41       99.87     99.87        87.50
           UA              99.37       98.99     99.10        98.32
           F               99.39       99.43     99.48        92.59
           Overall accuracy = 99.13%, Kappa = 0.99

Note: OB-RF stands for object-oriented random forest classification model. OB-SVM stands for object-oriented support vector machine classification model.


Table 8. Accuracy evaluation of classification results for SA2.

Methods    Accuracy (%)    Zucchini    Corn     Sunflower    Bare Land    Pepper    Hami Melon
OB-RF      PA              84.37       99.62    99.53        92.63        99.12     86.11
           UA              99.68       98.93    99.94        98.43        73.85     67.45
           F               91.39       99.27    99.73        95.44        84.86     75.65
           Overall accuracy = 92.61%, Kappa = 0.90
OB-SVM     PA              99.74       99.69    99.57        91.07        96.48     99.51
           UA              99.40       97.61    99.41        98.87        98.53     99.29
           F               99.57       98.64    99.49        94.81        97.49     99.40
           Overall accuracy = 99.08%, Kappa = 0.99

Table 9. Accuracy evaluation of classification results for SA3.

Methods    Accuracy (%)    Zucchini    Sunflower    Bare Land    Pepper    Hami Melon    Watermelon    Cherry Tomato    Tomato    Sapling
OB-RF      PA              93.33       90.55        91.77        89.01     98.35         100.00        99.94            72.04     26.67
           UA              88.12       95.39        89.32        79.41     80.32         88.24         85.69            98.85     72.73
           F               90.65       92.91        90.53        83.94     88.43         93.75         92.27            83.34     39.03
           Overall accuracy = 88.99%, Kappa = 0.86
OB-SVM     PA              98.87       99.15        92.07        93.59     98.35         100.00        99.94            91.84     84.22
           UA              99.98       99.68        94.08        94.19     80.32         98.47         91.51            98.82     87.13
           F               99.42       99.41        93.06        93.89     88.43         99.23         95.54            95.20     85.65
           Overall accuracy = 97.21%, Kappa = 0.97

5. Discussion

5.1. Classification Error Analysis

By comparing the classification results obtained by OB-SVM and OB-RF (Figures 7 and 8) with the standard crop distribution maps obtained through field investigation (Figure 3), classification error detail maps (Figure 9) were made. In SA1, the primary source of error was the mixed classification of zucchini and sunflower. In SA2, the primary sources of error were the mixed fraction of hami melon and zucchini, and the mixed fraction of pepper and hami melon. In SA3, the primary sources of error were the mixed fraction of hami melon and cherry tomato, the mixed fraction of pepper and cherry tomato, and the mixed fraction of zucchini and sunflower. In general, there are mainly five crops that are easily mixed: hami melon, pepper, zucchini, cherry tomato, and sunflower. In order to explore the reasons for crop mixing, we analyzed the spectral curves of the five easily mixed crops.

The spectral curves of the five crops with a high mixing frequency (sunflower, cherry tomato, pepper, hami melon, and zucchini) are shown in Figure 10. In the spectral range 400–900 nm, the spectral reflectance of the five easily confused crops is stable in the near-infrared band range from 770 to 800 nm, where the difference is most apparent. Additionally, there are apparent reflection peaks in the green band from 540 to 560 nm, and some differences in the height of the reflection peaks of different crops. However, for both 770–800 nm and 540–560 nm, the six spectral curves of hami melon overlap with those of pepper and zucchini, which is one of the reasons why hami melon is easily confused with pepper and zucchini. In addition, hami melon, pepper, and zucchini are all grown by strip cultivation in the study areas, and were in the same phenological period (fruit setting) when the experimental images were obtained, which weakens the differences in their texture features.


Figure 9. Classification error maps of OB-RF and OB-SVM in the study areas. ((A) classification error map of OB-RF in study area 1; (B) classification error map of OB-SVM in study area 1; (C) classification error map of OB-RF in study area 2; (D) classification error map of OB-SVM in study area 2; (E) classification error map of OB-RF in study area 3; (F) classification error map of OB-SVM in study area 3).


Interestingly, although the reflectance of cherry tomato is obviously higher than that of hami melon, and the reflectance of sunflower is obviously higher than that of zucchini, in the near-infrared band of 770–800 nm, there are mixed fractions of hami melon and cherry tomato, and mixed fractions of sunflower and zucchini, in SA3. One possible explanation is that the cherry tomato and sunflower are densely planted in the study area, with many overlapping leaves. Compared with single-leaf plants, multiple leaves can produce higher reflectivity in the near-infrared band due to additional reflectivity [47]. Therefore, cherry tomato and sunflower have a higher reflectivity than other crops in the near-infrared band. However, in Figure 10, the cherry tomato blossom in area m and the sunflower in area n grow poorly and are sparsely planted, decreasing their reflectivity in the green band of 540–560 nm and the near-infrared band of 770–800 nm. Thus, the difference between them is decreased. Moreover, hami melon and cherry tomato are vine plants, which have similar textural features. The big leaves of sunflower and zucchini also weaken the differences in their respective textural features.

Figure 10. Spectral curve of ground objects.

5.2.5.2. Model Model Performance Performance under under Different Different Planting Planting Structure Structure Complexity Complexity TheThe classification classification results results for for the the three three study study areas areas were were produced produced using using the OB-RF the OB and-RF OB-SVMand OB-SVM models. models. The overallThe overall accuracy accuracy values values of SA1 of givenSA1 given by OB-RF by OB and-RF OB-SVMand OB-SVM are 97.085%are 97.085% and 99.126%,and 99.126%, respectively. respectively. For SA2, For SA2, the overall the overall accuracy accuracy values values are 92.610% are 92.610% and 99.078%,and 99.078%, respectively, respectively, and for and SA3 for they SA3 are they 88.994% are 88.994% and 97.207%, and 97.207%, respectively. respectively. These results These indicateresults indicate that the that OB-RF the andOB- OB-SVMRF and OB classification-SVM classification accuracies accuracies decrease decrease as the complexity as the com- ofplexity the planting of the structureplanting structureincreases. increases. In particular, In particular, OB-RF’s overall OB-RF’s accuracy overall decreased accuracy by de- 8.1%,creased while by 8.1%, that ofwhile OB-SVM that of only OB-SVM decreased only decreased by 1.9%. Inby general,1.9%. In thegeneral, advantage the advantage of OB- SVM’sof OB- classificationSVM’s classification accuracy accuracy becomes becomes more prominent more prominent as the number as the ofnumber ground of features ground increases.features increases. From the From differences the differences in the extraction in the extraction accuracies accuraci of thees different of the different methods, meth- OB- SVM’s extraction accuracy was obviously better than that of OB-RF in SA3. ods, OB-SVM’s extraction accuracy was obviously better than that of OB-RF in SA3. The occurrence of classification errors in this study is related to the sample size The occurrence of classification errors in this study is related to the sample size limi- limitation, such as for the saplings and Hami melon in SA3. Comparing area j with area o, tation, such as for the saplings and Hami melon in SA3. Comparing area j with area o, and and area m with area q, in Figure 10, it is clear that the classification error of OB-SVM is area m with area q, in Figure 10, it is clear that the classification error of OB-SVM is smaller smaller than that of OB-RF in small sample areas. As a representative ensemble learning than that of OB-RF in small sample areas. As a representative ensemble learning algo- algorithm, the RF classifier achieved good results in the automatic extraction of remote rithm, the RF classifier achieved good results in the automatic extraction of remote sensing sensing information [42,43]. However, the RF classifier is better suited to large samples information [42,43]. However, the RF classifier is better suited to large samples and high- and high-dimensional data, and thus requires a sufficient number of samples [44]. The
The SVM classifier specializes in analyzing small numbers of samples [39,45]. In this study, the number of features in the three test areas gradually increased, and the number of available training samples was minimal as the plots became more fragmented. The learning ability of the classifier from the training samples under this situation directly determined the accuracy of the classification results. Therefore, the classification accuracy of the OB-SVM method was superior to that of OB-RF in this study, because of the high sensitivity of the SVM classifier to the samples. These results indicate that the OB-SVM method is more suitable for the classification of crops in fragmented areas with highly complex planting structures.

5.3. Classification Potential of UAV Multispectral Remote Sensing Technology under Complex Planting Structures

The OB-SVM model achieved superior classification performance in extracting crop information in areas with low-, medium-, and high-complexity planting structures, based on UAV multispectral remote sensing. The overall accuracies of the three study areas were 99.13%, 99.08%, and 97.21%, and the extraction accuracy values were better than 92.59%, 94.81%, and 85.65%. As the planting structure complexity increased, the classification accuracy and extraction accuracy decreased, but the overall accuracy was only reduced by 1.92%. Using UAV visible light images, Park et al. [30] applied an object-oriented method to classify cabbage and radish, and obtained an accuracy of 84.7%. In this study, the overall accuracy reached 97.21%, even under a complex classification environment with eight different crops. Chen et al. [48] pointed out that UAV visible light images led to lower interpretation accuracy than multispectral images in agricultural land classification. In addition, Ishida et al. [49] used UAV hyperspectral remote sensing technology to classify 14 ground objects with an overall accuracy of 94.00%; compared with this, the classification results in this paper are not inferior.

It can be seen from the spectral curves of the ground objects (Figure 10) that the most remarkable difference in the reflectivity of each crop was in the near-infrared band, which made a significant contribution to the classification. Additionally, the importance ranking of the multispectral bands (Table A1) suggests that the near-infrared band played an essential role in the classification results of each study area (importance ranked third, first, and second for SA1, SA2, and SA3, respectively). The vegetation indices (DVI and RVI) obtained using the near-infrared band as an input variable, and the texture features obtained from the second-order matrix probability operation (near-infrared coordination, near-infrared information entropy, near-infrared correlation, near-infrared contrast, and near-infrared heterogeneity), also played essential roles in the classification results (Tables A2 and A3). Thus, it can be concluded that the near-infrared band provides essential features that improve the extraction accuracy of the planting structure, and enables the fine classification of crops. This is the main advantage of multispectral remote sensing compared with visible light remote sensing. Besides this, although UAV hyperspectral remote sensing offers a higher spectral resolution than multispectral remote sensing, its high price limits its applicability to agricultural production; indeed, multispectral remote sensing satisfies the requirements as far as crop classification is concerned. In general, multispectral remote sensing technology has a higher spectral resolution than visible light, and has higher cost performance than hyperspectral remote sensing. Thus, it offers a wider range of applications for the fine classification of farmland features under highly complex planting structures.

Based on UAV multispectral remote sensing images as data, we used the OB-SVM and OB-RF models to extract crops in areas with highly complex planting structures, and verified the application potential of this method in the extraction of complex planting structures.
The conclusions can provide new ideas for obtaining accurate crop distribution maps in areas with complex planting structures, and technical support for stabilizing food security and protecting water resources.
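For concreteness, the near-infrared-based indices discussed above can be computed per pixel from the reflectance bands using their standard definitions (DVI = NIR − Red, RVI = NIR / Red, NDVI = (NIR − Red)/(NIR + Red)); the sketch below assumes two NumPy arrays holding the NIR and red bands of the orthomosaic and is illustrative only:

# Minimal sketch: per-pixel vegetation indices from NIR and red reflectance bands.
# Band arrays and the epsilon guard are assumptions; the index definitions are standard.
import numpy as np

def vegetation_indices(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6):
    """Return DVI, RVI and NDVI arrays for two reflectance bands."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    dvi = nir - red
    rvi = nir / (red + eps)                  # eps avoids division by zero over bare soil
    ndvi = (nir - red) / (nir + red + eps)
    return dvi, rvi, ndvi

# Example with a small synthetic tile
nir = np.full((4, 4), 0.45)
red = np.full((4, 4), 0.08)
dvi, rvi, ndvi = vegetation_indices(nir, red)
print(dvi[0, 0], rvi[0, 0], round(ndvi[0, 0], 3))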

6. Conclusions

This study has described the analysis and classification of multispectral images obtained by UAV remote sensing technology. RFE was used to screen the spectral and texture features of crops in the images, allowing the feature subsets of the three study areas to be successfully constructed. The OB-RF and OB-SVM models were then used for the fine classification of crops based on the above procedures. Field observations and visual interpretation were used to evaluate the accuracy of the classification results through the confusion matrix method. The main conclusions of this study are as follows:

(1) The OB-SVM model's classification accuracy in areas with low-, medium-, and high-complexity planting structures was respectively 1.99%, 4.60%, and 8.22% higher than that of the OB-RF model. As the planting structure complexity increased, the classification advantages of the OB-SVM model became more evident. This indicates that the OB-SVM model offers higher classification accuracy under land fragmentation and highly complex planting structures, and is more suitable for the fine classification of farmland features with highly complex agricultural planting patterns;

(2) Based on UAV multispectral remote sensing technology and the OB-SVM classification model, the overall accuracies of the study areas with low, medium, and high complexity were as high as 99.13%, 99.08%, and 97.21%, respectively. The extraction accuracy of each crop was at least 92.59%, 94.81%, and 85.65% in the three study areas, respectively. As the planting structure complexity increased from low to high, the classification accuracy and extraction accuracy decreased, but the overall accuracy only decreased by 1.92%. Therefore, UAV multispectral remote sensing technology has vast application potential for the fine classification of farmland features under highly complex planting structures. These conclusions can provide new ideas for accurately obtaining crop distribution maps in areas with complex planting structures, and thus provide technical support for protecting food security and the rational allocation of water resources.
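A minimal sketch of the feature-screening and accuracy-assessment workflow summarized above, assuming Python with scikit-learn, synthetic object features and labels, and an arbitrary number of retained features, might look as follows:

# Minimal sketch: recursive feature elimination followed by confusion-matrix evaluation.
# Data shapes, estimator settings and the retained feature count are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 52))     # e.g. 5 bands + 7 indices + 40 texture features (assumed)
y = rng.integers(0, 5, size=300)   # medium-complexity case: 5 crop classes

selector = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
               n_features_to_select=15)      # assumed size of the feature subset
X_sel = selector.fit_transform(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

print("confusion matrix:\n", confusion_matrix(y_te, y_pred))
print("overall accuracy:", round(accuracy_score(y_te, y_pred), 4))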

Author Contributions: W.H. was the supervisor of all tests and systems. Q.M. conceived and designed the system and experiment, and drafted the manuscript. S.D. gave advice on manuscript revision and helped with the English translation. S.H., G.L. and H.C. conducted the collection of experimental data. All authors have read and agreed to the published version of the manuscript.

Funding: This study was supported by the 13th Five-Year Plan for Chinese National Key R&D Project (2017YFC0403203) and the National Natural Science Foundation of China (51979233).

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: The data are not publicly available due to [privacy].

Acknowledgments: We are very grateful to Shide Dong, a student at the Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, for his guidance and correction of the English.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Table A1. The importance ranking of the multispectral bands.

Multispectral Bands | B1 | B2 | B3 | B4 | B5
SA1 | Red band | Blue band | NIR | Green band | Red edge
SA2 | NIR | Red edge | Green band | Red band | Blue band
SA3 | Blue band | NIR | Red band | Red edge | Green band

Table A2. The importance ranking of the vegetation indices.

Vegetation Indices | B1 | B2 | B3 | B4 | B5 | B6 | B7
SA1 | VDVI | ExG | DVI | RVI | NGBDI | NDVI | NGRDI
SA2 | NGBDI | ExG | DVI | RVI | NGRDI | NDVI | VDVI
SA3 | NGBDI | RVI | NDVI | ExG | DVI | VDVI | NGRDI

Table A3. The importance ranking of the texture features.

Texture Features | SA1 | SA2 | SA3
B1 | Red mean | Red contrast | Red homogeneity
B2 | Green mean | Blue entropy | Green homogeneity
B3 | Blue mean | Blue homogeneity | Blue dissimilarity
B4 | Red correlation | Green dissimilarity | Red correlation
B5 | Green homogeneity | Red second moment | Red contrast
B6 | Blue correlation | Near-infrared dissimilarity | Near-infrared contrast
B7 | Near-infrared homogeneity | Near-infrared correlation | Near-infrared homogeneity
B8 | Near-infrared entropy | Blue correlation | Blue correlation
B9 | Red variance | Red dissimilarity | Red variance
B10 | Green dissimilarity | Red entropy | Green correlation
B11 | Blue entropy | Red-edge second moment | Near-infrared second moment
B12 | Green entropy | Green homogeneity | Red-edge second moment
B13 | Green second moment | Green correlation | Green dissimilarity
B14 | Red-edge mean | Red-edge dissimilarity | Near-infrared contrast
B15 | Red-edge homogeneity | Red-edge homogeneity | Red-edge homogeneity
B16 | Near-infrared variance | Blue dissimilarity | Near-infrared dissimilarity
B17 | Red contrast | Red variance | Red mean
B18 | Blue variance | Green entropy | Green second moment
B19 | Near-infrared contrast | Near-infrared variance | Near-infrared mean
B20 | Green correlation | Red correlation | Green contrast
B21 | Green contrast | Blue mean | Green mean
B22 | Red dissimilarity | Blue second moment | Near-infrared entropy
B23 | Red-edge variance | Near-infrared mean | Near-infrared correlation
B24 | Near-infrared second moment | Near-infrared homogeneity | Blue mean
B25 | Red homogeneity | Red mean | Red entropy
B26 | Red entropy | Green contrast | Blue entropy
B27 | Near-infrared dissimilarity | Red variance | Near-infrared variance
B28 | Red second moment | Green mean | Green variance
B29 | Green variance | Blue variance | Blue homogeneity
B30 | Red-edge contrast | Red-edge contrast | Red-edge second moment
B31 | Red entropy | Red-edge second moment | Red-edge correlation
B32 | Near-infrared mean | Near-infrared entropy | Blue variance
B33 | Red dissimilarity | Red homogeneity | Red dissimilarity
B34 | Red contrast | Blue contrast | Near-infrared variance
B35 | Near-infrared correlation | Near-infrared contrast | Red-edge homogeneity
B36 | Blue homogeneity | Green second moment | Green entropy
B37 | Blue dissimilarity | Green mean | Blue contrast
B38 | Red-edge second moment | Red-edge correlation | Red-edge dissimilarity
B39 | Red-edge correlation | Red-edge entropy | Red-edge entropy
B40 | Blue second moment | Red-edge mean | Blue entropy
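The texture measures ranked in Table A3 are second-order statistics of the gray-level co-occurrence matrix. A minimal sketch of how such measures could be derived for a single band with scikit-image is given below; the synthetic band, offsets, and quantization level are assumptions, and the 'ASM' property corresponds to the second-moment entries in the table:

# Minimal sketch: GLCM texture measures for one band (assumed 8-bit, assumed offsets).
# Requires a recent scikit-image providing graycomatrix/graycoprops.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

band = (np.random.default_rng(2).random((64, 64)) * 255).astype(np.uint8)  # placeholder band

glcm = graycomatrix(band, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)

features = {prop: float(graycoprops(glcm, prop)[0, 0])
            for prop in ("contrast", "dissimilarity", "homogeneity",
                         "ASM", "correlation")}

# Entropy is computed directly from the normalized co-occurrence probabilities.
p = glcm[:, :, 0, 0]
features["entropy"] = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
print(features)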


Figure A1. Relationship between accumulated multispectral bands and overall accuracy (overall accuracy, %, plotted against B1, B1+B2, ..., B1+...+B5 for SA1, SA2, and SA3).

Figure A2. Relationship between accumulated vegetation indices and overall accuracy (overall accuracy, %, plotted against B1, B1+B2, ..., B1+...+B7 for SA1, SA2, and SA3).

Figure A3. Relationship between accumulated textural features and overall accuracy (overall accuracy, %, plotted against the accumulated texture features for SA1, SA2, and SA3).
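Figures A1–A3 plot overall accuracy against features accumulated in importance order. A hedged sketch of how such a curve could be generated (synthetic data, an assumed importance ranking, and cross-validated accuracy in place of the study's confusion-matrix accuracy) is shown below:

# Minimal sketch: overall accuracy as features are accumulated in importance order,
# mirroring the shape of Figures A1-A3. Data and ranking are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 7))      # e.g. 7 vegetation indices (assumed)
y = rng.integers(0, 3, size=200)   # low-complexity case: 3 crop classes
ranking = [3, 0, 5, 1, 6, 2, 4]    # assumed importance order (B1..B7)

for k in range(1, len(ranking) + 1):
    cols = ranking[:k]
    acc = cross_val_score(SVC(kernel="rbf"), X[:, cols], y, cv=5).mean()
    print(f"B1+...+B{k}: overall accuracy = {acc:.3f}")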

References

1. Atzberger, C. Advances in remote sensing of agriculture: Context description, existing operational monitoring systems and major information needs. Remote Sens. 2013, 5, 949–981. [CrossRef]
2. Gerland, P.; Raftery, A.E.; Ševčíková, H.; Li, N.; Gu, D.; Spoorenberg, T.; Alkema, L.; Fosdick, B.K.; Chunn, J.; Lalic, N. World population stabilization unlikely this century. Science 2014, 346, 234–237. [CrossRef] [PubMed]
3. Petitjean, F.; Inglada, J.; Gançarski, P. Satellite image time series analysis under time warping. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3081–3095. [CrossRef]
4. Petitjean, F.; Kurtz, C.; Passat, N.; Gançarski, P. Spatio-temporal reasoning for the classification of satellite image time series. Pattern Recognit. Lett. 2012, 33, 1805–1815. [CrossRef]


5. Liu, Z.; Zhang, F.; Ma, Q.; An, D.; Li, L.; Zhang, X.; Zhu, D.; Li, S. Advances in crop phenotyping and multi-environment trials. Front. Agric. Sci. Eng. 2015, 2, 28–37. [CrossRef]
6. Zhang, X.; Zhang, F.; Qi, Y.; Deng, L.; Wang, X.; Yang, S. New research methods for vegetation information extraction based on visible light remote sensing images from an unmanned aerial vehicle (UAV). Int. J. Appl. Earth Obs. Geoinf. 2019, 78, 215–226. [CrossRef]
7. Pajares, G. Overview and current status of remote sensing applications based on unmanned aerial vehicles (UAVs). Photogramm. Eng. Remote Sens. 2015, 81, 281–330. [CrossRef]
8. Bansod, B.; Singh, R.; Thakur, R.; Singhal, G. A comparison between satellite based and drone based remote sensing technology to achieve sustainable development: A review. J. Agric. Envir. Int. Dev. 2017, 111, 383–407. [CrossRef]
9. Fawcett, D.; Panigada, C.; Tagliabue, G.; Boschetti, M.; Celesti, M.; Evdokimov, A.; Biriukova, K.; Colombo, R.; Miglietta, F.; Rascher, U. Multiscale evaluation of drone-based multispectral surface reflectance and vegetation indices in operational conditions. Remote Sens. 2020, 12, 514. [CrossRef]
10. Bazi, Y.; Melgani, F. Toward an Optimal SVM Classification System for Hyperspectral Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2006, 44, 3374–3385. [CrossRef]
11. Li, W.; Fu, H.; You, Y.; Yu, L.; Fang, J. Parallel multiclass support vector machine for remote sensing data classification on multicore and many-core architectures. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2017, 10, 4387–4398. [CrossRef]
12. Sheykhmousa, M.; Mahdianpari, M.; Ghanbari, H.; Mohammadimanesh, F.; Ghamisi, P.; Homayouni, S. Support Vector Machine Versus Random Forest for Remote Sensing Image Classification: A Meta-analysis and systematic review. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2020, 13, 6308–6325. [CrossRef]
13. Xu, S.; Zhao, Q.; Yin, K.; Zhang, F.; Liu, D.; Yang, G. Combining random forest and support vector machines for object-based rural-land-cover classification using high spatial resolution imagery. J. Appl. Remote Sens. 2019, 13, 014521. [CrossRef]
14. Pal, M.; Mather, P.M. Support vector machines for classification in remote sensing. Int. J. Remote Sens. 2005, 26, 1007–1011. [CrossRef]
15. Pal, M. Random forest classifier for remote sensing classification. Int. J. Remote Sens. 2005, 26, 217–222. [CrossRef]
16. Wang, P.; Fan, E.; Wang, P. Comparative analysis of image classification algorithms based on traditional machine learning and deep learning. Pattern Recognit. Lett. 2021, 141, 61–67. [CrossRef]
17. Liu, P.; Choo, K.R.; Wang, L.; Huang, F. SVM or deep learning? A comparative study on remote sensing image classification. Soft Comput. 2017, 21, 7053–7065. [CrossRef]
18. Csillik, O.; Cherbini, J.; Johnson, R.; Lyons, A.; Kelly, M. Identification of citrus trees from unmanned aerial vehicle imagery using convolutional neural networks. Drones 2018, 2, 39. [CrossRef]
19. Li, L.M.; Guo, P.; Zhang, G.S.; Zhou, Q.; Wu, S.Z. Research on area information extraction of cotton field based on UAV visible light remote sensing. Agric. Sci. 2018, 55, 162–169, (In Chinese with English Abstract). [CrossRef]
20. Dong, M.; Su, J.D.; Liu, G.Y.; Yang, J.T.; Chen, X.Z.; Tian, L.; Wang, M.X. Extraction of tobacco planting areas from UAV remote sensing imagery by object-oriented classification method. Sci. Surv. Mapp. 2014, 39, 87–90, (In Chinese with English Abstract). [CrossRef]
21. Wu, J.; Liu, H.; Zhang, J.S. Paddy planting acreage estimation in city level based on UAV images and object-oriented classification method. Trans. CSAE 2018, 34, 70–77, (In Chinese with English Abstract). [CrossRef]
22. Liu, H.; Zhang, J.; Pan, Y.; Shuai, G.; Zhu, X.; Zhu, S. An efficient approach based on UAV orthographic imagery to map paddy with support of field-level canopy height from point cloud data. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2018, 11, 2034–2046. [CrossRef]
23. Hall, O.; Dahlin, S.; Marstorp, H.; Archila Bustos, M.F.; Öborn, I.; Jirström, M. Classification of maize in complex smallholder farming systems using UAV imagery. Drones 2018, 2, 22. [CrossRef]
24. Castillejo-González, I.L.; López-Granados, F.; García-Ferrer, A.; Peña-Barragán, J.M.; Jurado-Expósito, M.; Orden, M.S.; González-Audicana, M. Object- and pixel-based analysis for mapping crops and their agro-environmental associated measures using QuickBird imagery. Comput. Electron. Agric. 2009, 68, 207–215. [CrossRef]
25. Peña-Barragán, J.M.; Ngugi, M.K.; Plant, R.E.; Six, J. Object-based crop identification using multiple vegetation indices, textural features and crop phenology. Remote Sens. Environ. 2011, 115, 1301–1316. [CrossRef]
26. Guo, P.; Wu, F.D.; Dai, J.G.; Wang, H.J.; Xu, L.P.; Zhang, G.S. Comparison of farmland crop classification methods based on visible light images of unmanned aerial vehicles. Trans. CSAE 2017, 33, 112–119, (In Chinese with English Abstract). [CrossRef]
27. Chen, Z.H.; Xu, Y.H.; Tong, Q.Q. Extraction and verification of crop information based on visible remote sensing image of unmanned aerial vehicle. Guizhou Agric. Sci. 2020, 48, 127–130, (In Chinese with English Abstract).
28. Wei, Q.; Zhang, B.Z.; Wang, Z. Research on object recognition based on UAV multispectral image. Xinjiang Agric. Sci. 2020, 57, 932–939, (In Chinese with English Abstract). [CrossRef]
29. Wang, L.; Liu, J.; Yang, L.B. Applications of unmanned aerial vehicle images on agricultural remote sensing monitoring. Trans. CSAE 2013, 29, 136–145, (In Chinese with English Abstract). [CrossRef]
30. Park, J.K.; Park, J.H. Crops classification using imagery of unmanned aerial vehicle (UAV). J. Korean Soc. Agric. Eng. 2015, 57, 91–97. [CrossRef]
31. Wu, F.M.; Zhang, M.; Wu, B.F. Object-oriented rapid estimation of rice acreage from UAV imagery. J. Geo-Inf. Sci. 2019, 21, 789–798, (In Chinese with English Abstract). [CrossRef]

32. Liu, B.; Shi, Y.; Duan, Y.; Wu, W. UAV-based Crops Classification with joint features from Orthoimage and DSM data. Int. Arch. Photogramm. RSSIS 2018, 42, 1023–1028. [CrossRef]
33. Carlson, T.N.; Ripley, D.A. On the relation between NDVI, fractional vegetation cover, and leaf area index. Remote Sens. Environ. 1997, 62, 241–251. [CrossRef]
34. Jordan, C.F. Derivation of leaf area index from quality of light on the forest floor. Ecology 1969, 50, 663–666. [CrossRef]
35. Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150. [CrossRef]
36. Meyer, G.E.; Mehta, T.; Kocher, M.F.; Mortensen, D.A.; Samal, A. Textural imaging and discriminant analysis for distinguishing weeds for spot spraying. Amer. Soc. Agric. Biol. Eng. 1998, 41, 1189–1197. [CrossRef]
37. Wang, X.; Wang, M.; Wang, S.; Wu, Y. Extraction of vegetation information from visible unmanned aerial vehicle images. Trans. CSAE 2015, 31, 152–159, (In Chinese with English Abstract). [CrossRef]
38. Verrelst, J.; Schaepman, E.M.; Koetz, B.; Kneub, M. Angular sensitivity analysis of vegetation indices derived from CHRIS/PROBA data. Remote Sens. Environ. 2008, 112, 2341–2353. [CrossRef]
39. Jannoura, R.; Brinkmann, K.; Uteau, D.; Bruns, C.; Joergensen, R.G. Monitoring of crop biomass using true colour aerial photographs taken from a remote controlled hexacopter. Biosyst. Eng. 2019, 129, 341–351. [CrossRef]
40. Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.V.; Mortensen, D.A. Color Indices for Weed Identification Under Various Soil, Residue, and Lighting Conditions. Am. Soc. Agric. Biol. Eng. 1995, 38, 259–269. [CrossRef]
41. Chen, Q.; Meng, Z.; Liu, X.; Jin, Q.; Su, R. Decision Variants for the Automatic Determination of Optimal Feature Subset in RF-RFE. Genes 2018, 9, 301. [CrossRef]
42. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [CrossRef]
43. Biau, G.; Scornet, E. A random forest guided tour. Test 2016, 25, 197–227. [CrossRef]
44. Fradkin, D.; Muchnik, I. Support vector machines for classification. DIMACS Ser. Discrete Math. Theor. Comput. Sci. 2006, 70, 13–20. [CrossRef]
45. Li, M. A High Spatial Resolution Remote Sensing Image Classification Study Based on SVM Algorithm and Application Thereof. Master's Thesis, China University of Geosciences, Beijing, China, 2015. Available online: https://kns.cnki.net/kcms/detail/detail.aspx?dbcode=CMFD&dbname=CMFD201601&filename=1015385537.nh&v=WVxPpasNAZQMxSuuuMI2iJhEuqdvSs%25mmd2FYy7HlfXI%25mmd2BQG35yzQQZMUOX7bbd3yICpsB (accessed on 3 February 2021).
46. Lewis, H.G.; Brown, M. A generalized confusion matrix for assessing area estimates from remotely sensed data. Int. J. Remote Sens. 2001, 22, 3223–3235. [CrossRef]
47. Knipling, E.B. Physical and physiological basis for the reflectance of visible and near-infrared radiation from vegetation. Int. J. Remote Sens. 1970, 1, 155–159. [CrossRef]
48. Chen, P.; Chiang, Y.; Weng, P. Imaging Using Unmanned Aerial Vehicles for Agriculture Land Use Classification. Agriculture 2020, 10, 416. [CrossRef]
49. Ishida, T.; Kurihara, J.; Viray, F.A.; Namuco, S.B.; Paringit, E.C.; Perez, G.J.; Takahashi, Y.; Marciano, J.J., Jr. A novel approach for vegetation classification using UAV-based hyperspectral imaging. Comput. Electron. Agric. 2018, 144, 80–85. [CrossRef]