
Article

Individual Tree Crown Delineation for the Species Classification and Assessment of Vital Status of Forest Stands from UAV Images

Anastasiia Safonova 1,2,*, Yousif Hamad 1,3, Egor Dmitriev 4,5, Georgi Georgiev 6, Vladislav Trenkin 7, Margarita Georgieva 6, Stelian Dimitrov 8 and Martin Iliev 8

1 Deep Learning Laboratory, Siberian Federal University, 660074 Krasnoyarsk, Russia; [email protected]
2 Andalusian Research Institute in Data Science and Computational Intelligence, University of Granada, 18071 Granada, Spain
3 College of Information Technology, Imam Ja'afar Al-Sadiq University, Kirkuk 661001, Iraq
4 Marchuk Institute of Numerical Mathematics, Russian Academy of Sciences, 119333 Moscow, Russia; [email protected]
5 Institute for Scientific Research of Aerospace Monitoring "Aerocosmos", State Scientific Institution, 105064 Moscow, Russia
6 Department of Forest Entomology, Phytopathology and Game Fauna, Forest Research Institute, Bulgarian Academy of Sciences, 1756 Sofia, Bulgaria; [email protected] (G.G.); [email protected] (M.G.)
7 Geografika Ltd., 1504 Sofia, Bulgaria; [email protected]
8 Department of Cartography and GIS, Faculty of Geology and Geography, Sofia University St. Kliment Ohridski, 1504 Sofia, Bulgaria; [email protected] (S.D.); [email protected] (M.I.)
* Correspondence: [email protected]

Citation: Safonova, A.; Hamad, Y.; Dmitriev, E.; Georgiev, G.; Trenkin, V.; Georgieva, M.; Dimitrov, S.; Iliev, M. Individual Tree Crown Delineation for the Species Classification and Assessment of Vital Status of Forest Stands from UAV Images. Drones 2021, 5, 77. https://doi.org/10.3390/drones5030077

Abstract: Monitoring the structure parameters and damage to trees plays an important role in forest management. Remote-sensing data collected by an unmanned aerial vehicle (UAV) provides valuable resources to improve the efficiency of decision making. In this work, we propose an approach to enhance algorithms for species classification and assessment of the vital status of forest stands by using automated individual tree crowns delineation (ITCD). The approach can be potentially used for inventory and identifying the health status of trees in regional-scale forest areas.
The proposed ITCD algorithm goes through three stages: preprocessing (contrast enhancement), crown segmentation based on wavelet transformation and morphological operations, and boundary detection. The performance of the ITCD algorithm was demonstrated for different test plots containing homogeneous and complex structured forest stands. For typical scenes, the crown contouring accuracy is about 95%. The pixel-by-pixel classification is based on the ensemble supervised classification method error-correcting output codes with the Gaussian kernel support vector machine chosen as a binary learner. We demonstrated that pixel-by-pixel species classification of multi-spectral images can be performed with a total error of about 1%, which is significantly less than by processing RGB images. The advantage of the proposed approach lies in the combined processing of multispectral and RGB photo images.

Keywords: remote sensing; pattern recognition; unmanned aerial vehicle; aerial photo and multi-spectral images; individual tree crowns delineation; species classification; vital status assessment

Academic Editors: Maggi Kelly and Darren Turner

Received: 28 June 2021
Accepted: 5 August 2021
Published: 7 August 2021

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Forests are one of the most important components of the global ecosystem, having a significant impact on the Earth's radiation balance and carbon cycle, and are of great economic importance. The most accurate source of information on changes of the vital state and inventory parameters of forest stands is ground-based surveys. An alternative to this approach is the use of remote-sensing methods based on airborne and satellite measurements.


With the advent of remote-sensing systems of very high spatial resolution, the problem of thematic mapping of forest areas at the level of individual trees arose. Prompt and accurate measurements of forest inventory parameters and the vital status of individual trees are necessary for quantitative analysis of the forest structure, environmental modeling, and the assessment of biological productivity. The results obtained contribute to making more detailed thematic maps reflecting the local features of forest areas. Identifying trees in very high-resolution aerial images from the form and structure of tree crowns is useful for calculating forest density to control possible fires. In the past few years, unmanned imaging technologies have been widely developed for different remote-sensing tasks. In many ways, the development of unmanned aerial vehicle (UAV) imaging tools helps to complement satellite imagery for regional monitoring. In the last decade, we can find a number of works where methods of individual tree crown delineation (ITCD) based on data obtained from aerial photo and multi-spectral cameras as well as laser scanners (LiDARs) are successfully used for estimating structure parameters of forest stands (Jing et al., Ma et al. [1,2]). Such an analysis of individual tree crowns in digital images requires high spatial resolution. The results of ITCD obtained from high spatial resolution aerial images can be used in combination with high spectral resolution images for classifying tree species, growth rate, and regions of missing trees (Maschler et al. [3]). A similar approach can be employed to delineate tree growth areas to improve the results of thematic processing of medium- and high-resolution satellite images (Safonova et al., Dmitriev et al. [4,5]). Often, satellite data cannot determine the characteristics of the forest with the required accuracy.
The improvement in the classification accuracy of dominant trees can be achieved by combining crown delineation data and horizontal raster clouds highlighting sub-trees through the analysis of vertical features (Paris et al. [6]). The detection of individual trees in aerial photography improves management efficiency and production (Shahzad et al. [7]). ITCD methods based on optical data (Wagner et al. [8]) or airborne laser scanning (Ferraz et al. [9]) are being actively developed and used for tropical forests with high canopy density. Crown detection and segmentation have also been considered in northern and temperate forests (Morsdorf et al. [10]), and they perform better in coniferous forests than in mixed forests (Vega et al. [11]). Tree crowns are determined primarily by image processing algorithms designed to detect edges and highlight certain features (Torresan et al. [12]). Some forest parameters, such as crown area (Chemura et al. [13]), tree height, tree growth, and crown cover, are measured at the level of individual trees and require information on each tree. The accuracy of delineating separate tree crowns affects, but is not limited to, the accuracy of subsequent species identification, gap analysis, and tree-level assessment of characteristics such as above-ground biomass and forest carbon (Allouis et al. [14]). In Gupta et al. [15], the authors proposed to apply full waveforms to extract individual tree crowns using aggregated algorithms, but they did not show quantitative results due to the lack of local inventory data. Detection and segmentation of individual trees as a prerequisite for gathering accurate information about the structure of a forest (tree location, tree height) have received tremendous attention in the last few decades (Yang et al. [16]).
A traditional method based on expert analysis of remote-sensing data in combination with field measurements is still used to determine forest parameters (Zhen et al. [17]). Due to increased levels of occlusion near the ground, tree crown segmentation under a canopy in tall, dense forests remains an open problem. High-intensity short-range UAV surveys can solve this problem to some extent. However, even if the segmentation of individual tree crowns is limited to subdominant trees, the crown size distribution provides new insights into tree architecture and improved carbon mapping capabilities that may be less dependent on local calibration than current region-based models (Coomes et al. [18]).

The present paper aims to present an approach for the retrieval of forest parameters at the level of individual trees from very high spatial resolution UAV images. We propose an automated ITCD algorithm applicable to multispectral and RGB aerial images. The results obtained from the ITCD algorithm and supervised pixel-by-pixel classification are used to determine individual tree species and vital state classes. Validation of the results of thematic processing of aerial images based on expert data shows a high potential for using the proposed approach for inventory and tree health assessment in regional forest areas.

The manuscript is organized as follows. Section 2 presents the materials and methods: Section 2.1 describes the study area; Section 2.2 presents the individual tree crown delineation algorithm with a description of the preprocessing stage (Section 2.2.1), the tree crown segmentation stage (Section 2.2.2), and the boundary detection stage (Section 2.2.3); Section 2.3 presents species classification and vital state assessment; and the assessment metrics are in Section 2.4. The results and discussion are presented in Section 3 and the conclusions are in Section 4.

2. Materials and Methods

2.1. Study Area

The study area is the territory of the West Balkan Range (1300–1500 m) in Bulgaria. We considered test plots located in the protected areas of Gornata Koria Reserve (Berkovska) and Chuprene Biosphere Reserve (Chuprene, Vidin Province), included in the UNESCO list. The reserves were created with the aim of preserving the environment of the unique natural complex. Regular and detailed remote monitoring is required to take timely management actions aimed at solving problems associated with damage to forest stands by stem pests and changes of stand structure parameters. The location and schematic images of test plots are presented in Figure 1.

Figure 1. The location and schematic images of test plots Chuprene and Gornata Koria, Bulgaria. Coordinates are presented in degrees north latitude (left border) and east longitude (upper border), respectively.

The remote-sensing data used in this work were obtained from two UAV systems. The first one is a DJI Phantom 4 Pro UAV equipped with an improved digital camera with a wide-angle lens and a 1-inch 20 MP CMOS sensor. RGB images obtained using this system during the measurement campaign have a spatial resolution of 7 cm/pixel. The second one is a SenseFly eBee X integrated with the Parrot Sequoia multispectral sensor (green, red, red edge, and near-infrared channels) with a resolution of 3.75 cm/pixel. Drone flight campaigns were carried out several times, on 25 June 2017, 16 August 2017, and 25 September 2017, all at an altitude of about 120 m. The object of the study is natural forest damage as a result of attacks by European spruce bark beetles (Ips typographus, L.) [19,20]. Forests mainly consist of spruce (Picea abies) and European beech (Fagus sylvatica) with an admixture of Scots pine (Pinus sylvestris) and black pine (Pinus nigra). The vital status of trees is defined by damage classes. We considered four damage classes: healthy, weakened, sick, and deadwood. Verification was obtained from the Bulgarian experts in the field of forest entomology and phytopathology Prof. DSc Georgi Georgiev and his colleagues in accordance with the methodical manual of assessment of crown conditions of ICP "Forests" (Eichhorn et al. [21]).

For each of the test plots, two orthorectified georeferenced mosaics of RGB and multispectral images were prepared using QGIS software (Quantum GIS v. 2.14.21). RGB images are represented by unsigned integers from 0 to 255 in natural colors. The Parrot Sequoia camera was equipped with a solar radiation sensor; thus, the multispectral images were calibrated to spectral reflectance. Since the Parrot Sequoia does not have a blue channel, the original multispectral images are represented in false colors in this paper. The image pixels are rectangular.

2.2. Individual Tree Crown Delineation Algorithm

The proposed ITCD algorithm consists of three stages. The first one, preprocessing, is aimed at enhancing the contrast of crowns and intercrown space. The input of the algorithm is an image in RGB format represented in natural or false colors containing visible channels. A panchromatic image is also applicable. The preprocessing consists of the enhanced equalization of the brightness histogram, which makes the crown segmentation easier. The resulting image is transformed to grayscale (in the case of RGB input). The second stage is the segmentation of tree crowns by using Discrete Wavelet Transform (DWT) filtering and the multi-threshold technique. The results are converted to black-and-white by further applying morphological operations for smoothing the boundaries of segments. The third stage is the detection of boundaries of individual tree crowns using the Canny operator and data analysis. An illustration of the proposed ITCD method is shown in Figure 2.

Figure 2. A basic scheme of the ITCD algorithm.

2.2.1. Preprocessing Stage

The quality of UAV-captured images is frequently deteriorated by suspended particles in the atmosphere, weather conditions, and the quality of the equipment used. This results in low/poor contrast images (Samanta et al. [22]). The infrared sensor is easily affected by diverse noises, and it is essential to process a contrast expansion using histogram equalization because of the low output level of the sensor (Haicheng et al. [23]). However, the classical histogram equalization (HE) for IR image processing has side effects such as a lack of contrast expansion and saturation. The authors (Kim et al. [24]) proposed an adaptive contrast enhancement to improve the aerial image quality. To solve this problem, a balance contrast enhancement algorithm is proposed as it allows better tree properties and proper separations (Guo [25]).

The balance contrast enhancement is the first stage of the proposed ITCD algorithm. This method provides a solution for creating offset colors (RGB).
The contrast of the image can be stretched or compressed without changing the histogram pattern of the input image (Iin). The general form of the transform is represented as Equation (1):

Iout = a × (Iin − b)² + c, (1)

where the coefficients a (2), b (3), and c (4) are calculated using the minimum and maximum intensity values of the input image (Iin) and the selected reference output image (Iout), as well as the mean intensity value of the output image (E):

a = (H − L) / ((h − l) × (h + l − 2b)), (2)

where l and h are the minimum and the maximum intensity values of the input image, respectively, while L and H are the minimum and the maximum intensity values of the reference output image.

b = (h² × (E − L) − s × (H − L) + l² × (H − E)) / (2 × (h × (E − L) − e × (H − L) + l × (H − E))), (3)

in the equation above, e is the mean intensity value of the input image.

c = L − a × (l − b)², (4)

where s denotes the mean square sum of intensity values of the input image (5):

s = (1/N) × Σ_{i=1}^{N} Iin(i)². (5)
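Equations (1)–(5) can be checked numerically. Below is a minimal Python/NumPy sketch of the balance contrast enhancement transform as defined above. The output range [L, H] = [0, 255] follows the text; the target output mean E = 110 and the function name `bcet` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bcet(channel, L=0, H=255, E=110):
    """Balance contrast enhancement for one image channel.

    L, H, E are the desired minimum, maximum, and mean of the output;
    E = 110 is an illustrative choice, not a value fixed by the paper.
    """
    x = channel.astype(np.float64)
    l, h, e = x.min(), x.max(), x.mean()
    s = np.mean(x ** 2)  # mean square sum of the input, Equation (5)
    # Coefficients from Equations (3), (2), (4); b must be computed first.
    b = (h**2 * (E - L) - s * (H - L) + l**2 * (H - E)) / \
        (2.0 * (h * (E - L) - e * (H - L) + l * (H - E)))
    a = (H - L) / ((h - l) * (h + l - 2.0 * b))
    c = L - a * (l - b) ** 2
    # Parabolic mapping, Equation (1)
    y = a * (x - b) ** 2 + c
    return np.clip(np.rint(y), L, H).astype(np.uint8)
```

By construction the parabola maps the input minimum l to L (substituting c from Equation (4) gives a(l − b)² + c = L) and the input maximum h to H (Equation (2) then gives a(h − b)² + c = H), so the output spans the full tonal range.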

An example of the contrast enhancement transform for one of the test images is shown in Figure 3. We can see that the distinctions of individual tree crowns become much clearer and the brightness histogram expands to the entire tonal range. The histogram of the processed image indicates that the values of the RGB channels span the full range from 0 to 255, while the values of the channels of the original image are from ~90 to ~200. Thus, the results of such a contrast transform are useful for the segmentation of tree crowns due to the fact that intercrown spacing becomes darker than in the original image. Improving local contrast also increases the balance of the mean value of the original image based on light and dark edges. The proposed method was applied both to the original RGB and false-color multispectral images. The local mean and standard deviation, minimum, and maximum values of the entire image were used to statistically describe the digital image under processing. Table 1 shows that the mean brightness values in the RGB channels are aligned after processing. The suggested color image enhancement takes 0.22 s of elapsed time on an image of size 1129 × 1129 pixels and removes induced noise that may creep into the image during acquisition.

Table 1. Statistics of mean values of the red, green, and blue channels.

Channel   Min Value   Max Value   Mean Value (Input Image)   Mean Value (Output Image)
Red       0           255         40                         115
Green     0           255         59.53                      116
Blue      0           255         29.9                       113

Figure 3. Preprocessing test RGB image: (a) original image, (b) result of balance contrast enhancement, (c) histogram of original image, and (d) histogram of processed image.

2.2.2. Tree Crown Segmentation Stage

The purpose of the tree crown segmentation is to separate the pixels of crowns and intercrown spacing so that the cross-sectional shape matches the shape of the tree crowns in the original image. The wavelet transform is ideally suited for studying images due to its capacity for multi-point analysis. Thus, we apply the above principle in the moving field to segment the forest image. To obtain the threshold accurately (Archana et al. [26]), we convert the image to its binary version and obtain a high-pass image by the inverse DWT of its high-frequency sub-bands from the wavelet domain, which analyzes the image in four sub-bands using high- and low-frequency filters (Tausif et al., Pradham et al. [27,28]).

As a result of the performed experiments, the optimal balance of image contrast was applied to all channels with an average value of 75 (red, green, and blue) in order to improve the segmentation of individual tree crowns. To determine the threshold value, the image was converted to a black-and-white binary image. A set of candidate threshold values of T was considered, and T was chosen for the high-quality segmentation of the results, as shown in Figure 4, where all values below the T limit are converted to black and all values above it are converted to white. That is, the analyzed image with the intensity I(x, y) is converted to the binary image I′(x, y) according to Equation (6):

I′(x, y) = {1, I(x, y) ≥ T; 0, I(x, y) < T}, (6)

where T is the threshold value of segmentation.
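Equation (6) is a single global threshold; a minimal NumPy sketch follows, using T = 75 (the best-performing value in Figure 4) and the convention that crown pixels are brighter than the intercrown spacing:

```python
import numpy as np

def binarize(gray, T=75):
    """Equation (6): intensities >= T map to 1 (white), the rest to 0 (black)."""
    return (gray >= T).astype(np.uint8)
```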

Drones 2021, 5, 77 7 of 18 Drones 2021, 5, x FOR PEER REVIEW 7 of 19

Figure 4. Results of ITCD (white contours) for different values of threshold T and detecting individual trees (red points): (a) T = 60 (67 crowns), (b) T = 75 (71 crowns), (c) T = 90 (67 crowns), and (d) T = 105 (64 crowns).

Examples of images obtained for various threshold values of segmentation of the forest are shown in Figure 4. The figure demonstrates the tree segmentation characterizing the change in individual crown detection based on threshold values 60, 75, 90, and 105. As can be seen from Figure 4a,c, lower threshold values produce a loss of part of the volume and quantity of crowns, whereas Figure 4d shows that a larger threshold value produces the combination of multiple crowns into one and hence decreases the total number of crowns. The highest segmentation accuracy is obtained using T = 75, as in Figure 4b, in which all the individual crowns are detected correctly. Such instabilities in the characteristics of the original images lead to the need for comparison with the reference expert segmentation.
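The sensitivity of the crown count to T can be reproduced on a toy image by counting connected foreground components after thresholding. The sketch below is purely illustrative: the 9 × 9 image, the intensity values, and the resulting counts are synthetic, not data from the test plots.

```python
import numpy as np
from collections import deque

def count_blobs(binary):
    """Count 4-connected foreground components (a stand-in for counting crowns)."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    n = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                n += 1                      # new component found
                seen[i, j] = True
                q = deque([(i, j)])
                while q:                    # breadth-first flood fill
                    y, x = q.popleft()
                    for v, u in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                        if 0 <= v < h and 0 <= u < w and binary[v, u] and not seen[v, u]:
                            seen[v, u] = True
                            q.append((v, u))
    return n

# Two bright "crowns" (120 and 90) on a dark background (40), joined by a
# dimmer bridge (70): a low threshold merges them, a high one loses one.
img = np.full((9, 9), 40)
img[1:4, 1:4] = 120
img[5:8, 5:8] = 90
img[4, 3] = img[4, 4] = img[5, 4] = 70
counts = {T: count_blobs(img >= T) for T in (60, 75, 90, 105)}
```

On this toy image, T = 60 keeps the bridge and merges both crowns into one component, T = 75 and T = 90 separate the two crowns, and T = 105 loses the dimmer crown entirely, mirroring why a mid-range threshold gives the best count in Figure 4.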
The processes of morphological opening and closing are both derived from the fundamental morphological operations of erosion and dilation and are intended to remove minor foreground and background objects while keeping the size of the main objects. The scale and smoothness of the results of these processes depend on the choice of the structural element. The morphological opening removes foreground structures smaller than the structural element, while the morphological closing removes background structures smaller than the structural element (Efford [29]). Structural elements are patterns that probe the image for a feature of interest.

The dilation process adds pixels to the boundaries of objects and erosion removes pixels from the boundaries of objects. These operations were performed on the basis of structural elements. The dilation selects the highest value by comparing all pixel values in the neighborhood of the input image described by the structural element, while erosion selects the lowest value by comparing all pixel values in the neighborhood of the input image (Perera et al. [30]). The stages of the proposed segmentation method are shown in Figure 5. In Block 1, the DWT method is used to obtain the threshold accurately by decomposing the image into two sub-bands: the LL subdomain that contains a low level and the HH one containing high



level level filters. Otsu's method is applied to the low-pass filter and to the high-pass filter to determine the threshold value for edge enhancement. In Block 2, the inverse wavelet transform is applied to obtain a high-pass image from the remaining sub-bands (horizontal, vertical, and diagonal). In Block 3, the local contrast balance is used for enhancement of the natural contrast of the image and for dividing the image into two groups of pixels (black and white). The experimental result shows that the proposed algorithm is efficient, simple, and easy to program. The detection technology based on image segmentation can eliminate the interference of noise such as clouds and fog in smoke detection. The proposed method was registered by the copyright holder, the Siberian Federal University (Krasnoyarsk, Russia), as a "Program for extracting the crown of a tree according to UAV data" in the patent office of the Russian Federation with certificate number RU 2020661677, 2020 (Hamad and Safonova [31]).

Figure 5. General scheme of the tree crown segmentation algorithm.
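Block 1 of the scheme in Figure 5 (DWT decomposition followed by Otsu thresholding) can be sketched as follows. The one-level Haar transform, the histogram resolution, and the assumption of an even-sized image are illustrative simplifications, not the exact filters used by the authors:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar decomposition: returns the LL (approximation)
    and HH (diagonal detail) sub-bands used in Block 1."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    return (a + b + c + d) / 4.0, (a - b - c + d) / 4.0

def otsu_threshold(values, bins=256):
    """Otsu's method: the threshold maximizing the between-class
    variance of the gray-level histogram."""
    hist, edges = np.histogram(np.ravel(values), bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w = np.cumsum(p)              # probability of the lower class
    mu = np.cumsum(p * centers)   # cumulative mean
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = w[i - 1], 1.0 - w[i - 1]
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = mu[i - 1] / w0
        mu1 = (mu[-1] - mu[i - 1]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i - 1]
    return best_t
```

In the scheme, `otsu_threshold` would be evaluated separately on the LL and HH sub-bands to obtain the low-pass and high-pass thresholds.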

2.2.3. Boundary Detection Stage

The algorithm used for detecting the boundaries of individual tree crowns is based on applying an enhanced Canny detector (Baştan et al. [32]) to the results of the segmentation stage. It relies on the value of the pixel gradient and is used to define thin edges. The basic steps of this method are noise filtering, finding the intensity gradients, lower-bound cut-off suppression, determining potential edges, and tracking edges by hysteresis. Some details of these steps are described in Table 2. The results of applying the proposed ITCD algorithm at each of the stages described above are shown in Figure 6.

Table 2. The stages followed for the Canny edge detection algorithm.

Step 1. A Gaussian filter (Ito and Xiong [33]) is used to remove any noise from the image.
Step 2. The image gradient: (a) the gradient is calculated in the x and y directions using a 3 × 3 warp mask; (b) the strength of the gradient and the direction of the edges are calculated.
Step 3. Non-maximum suppression is performed. This step removes pixels that are not part of the edge; only thin lines, which contain pixels that are part of the edge, will remain.
Step 4. Hysteresis thresholding is the last step, in which two thresholds, high and low, are used: (a) if the pixel gradient value is above the upper threshold, the pixel is an edge pixel; (b) if the pixel gradient value is below the lower threshold, the pixel is rejected; (c) if the gradient value of a pixel is between the lower and upper thresholds, the pixel is accepted only if it is connected to a pixel above the upper threshold.
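The hysteresis step of the Canny detector can be sketched in numpy as follows. The iterative neighbour propagation is an illustrative implementation detail, and `np.roll` wraps at the image borders, which this sketch ignores for interior edges:

```python
import numpy as np

def hysteresis(grad, low, high):
    """Keep pixels above `high`, plus pixels above `low` that are
    8-connected to them (the final Canny step described in Table 2)."""
    strong = grad > high
    weak = grad > low
    edges = strong.copy()
    while True:
        grown = np.zeros_like(edges)
        # 8-neighbour (plus self) dilation of the current edge set
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                grown |= np.roll(np.roll(edges, di, axis=0), dj, axis=1)
        new_edges = weak & grown
        if np.array_equal(new_edges, edges):
            return edges
        edges = new_edges
```

A weak pixel chain attached to one strong pixel is kept in full, while isolated weak responses (noise) are discarded.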


Figure 6. A step-by-step illustration of our ITCD approach. (a) Original image, (b) result of balance contrast enhancement, (c) result of DWT, (d) results of segmentation, and (e) boundaries detected from the input image.

2.3. Species Classification and Vital State Assessment

The results obtained from the ITCD algorithm can be effectively used to improve the quality of thematic processing of aerial images. The first stage is the trained semantic segmentation, as a result of which each pixel of the processed image is associated with a class label. The classification of objects is carried out according to spectral characteristics. In the considered case, the labels of the classified objects are combined into three groups: healthy trees, deadwood, and intercrown space. During the training process, healthy trees are subdivided according to species composition, and the intercrown space according to surface types.
When processing multispectral images, in order to reduce the natural scatter arising from differences in the illumination intensity of recognized objects, the spectral reflectance values are normalized to the corresponding integral values for each pixel. Integration is performed by the trapezoidal rule, taking into account the values of the central wavelengths of the spectral channels. It should be mentioned that after this procedure, one degree of freedom is lost; thus, we used only three channels out of four for pixel-by-pixel classification. On the other hand, spectral normalization is not performed when classifying photo images; therefore, all three RGB channels were used.

To carry out pixel-by-pixel classification of images by spectral features, an ensemble algorithm known as error correcting output codes (ECOC), proposed in (Dietterich and Bakiri [34]), was used.
This algorithm is based on some results of the theory of information and coding that formalize a multiclass classifier on the basis of a series of binary learners. The ECOC algorithm can be formulated as follows.

Let x ∈ R^M be feature vectors and y ∈ R^1 be class labels. Let also C = (c_ij) be the coding design matrix of size Q × L (i is the row index, j is the column index) containing only the elements 1, −1, and 0. The rows of this matrix are unique codes of the considered classes, and the columns define binary learners deciding between two composite classes constructed from the initial ones. Thus, for each column, the first composite class aggregates the initial classes corresponding to 1, the second composite class aggregates the classes corresponding to −1, and the classes not participating in the binary classification correspond to 0.

At the coding stage, the binary learners are sequentially applied to the classified feature vector. Thus, we get the code of some unknown class. At the decoding stage, the code obtained in the coding stage is compared with the codes from the coding design matrix


using some selected measure of distance. Thus, the classified feature vector is assigned to the class label with the most similar code. To construct binary learners, we use the Gaussian kernel soft margin support vector machine (Scholkopf and Smola [35]). The method allows finding the most distant parallel hyperplanes separating the given pair of classes in the multidimensional feature space. The use of the Gaussian kernel instead of the standard scalar product makes this classifier non-linear. There is also a more general form of the ECOC classifier proposed by Escalera et al. [36]. The coding stage consists of calculating the classification scores s_j(x) for each of the L binary learners defined by the coding matrix C and binary losses g(c_ij, s_j(x)) for each of the classes y_i. The decoding stage can be expressed by formula (7):

a(x) = argmin_{y_i} ( Σ_{j=1}^{L} |c_ij| g(c_ij, s_j(x)) ) / ( Σ_{j=1}^{L} |c_ij| ), (7)

where the minimum search is performed on the class index i and the class label yi is the output. Thus, the algorithm selects the class corresponding to the minimum average loss. The performance of the ECOC classifier essentially depends on the choice of the coding design matrix and the binary loss function. In this paper, we used the one-vs-one coding design and the hinge binary loss, represented by (8):

g(y_B, s(x)) = max{0, 1 − y_B · s(x)}/2, (8)

where y_B ∈ {1, −1} is a label in the binary classification problem, and the classification score s(x) means the normalized distance from the sample x to the discriminant surface in the area of the relevant class. The rationale for this choice is described in detail in (Dmitriev et al. [5]).

The pixel-by-pixel classification of mixed forest species usually has a significant noise component. The availability of contours of individual trees allows us to perform post-processing and improve the quality of semantic segmentation. The post-processing algorithm can be formulated as follows. Pixels outside the contours of crowns are automatically assigned to the class "others". The species of an individual tree is determined by a majority vote, i.e., an individual tree is associated with the species (dominant) that has the maximum number of pixels within the corresponding crown contour. If the number of pixels classified as "others" within the contour of the crown exceeds the specified threshold, then the tree is classified as "others" too. If the absolute value of the difference between the number of pixels of the dominant species and the sum of the numbers of pixels of other species does not exceed the selected threshold, then the classification is rejected, i.e., an individual tree is assigned the "unknown" species class. The living state LS was assessed as follows (9):

LS = X_H / (X_H + X_D) · 100%, (9)

where X_H is the number of pixels classified as one of the species and X_D is the number of pixels classified as deadwood. The vital status of trees is represented by four categories: healthy, weakened, sick, and deadwood, which are calculated from LS by multi-thresholding. If X_H + X_D is less than the selected threshold, then the contour of the individual tree is assigned to the class "others".
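The per-pixel ECOC decoding of Equations (7) and (8) and the crown-level post-processing with the living state of Equation (9) can be sketched together in numpy. The coding matrix, the scores, and the "others" fraction are toy values for illustration, not those used in the paper:

```python
import numpy as np

# Toy one-vs-one coding design for Q = 3 classes and L = 3 binary
# learners (rows are class codes, columns are learners, 0 means the
# class does not participate in that learner).
C = np.array([[ 1,  1,  0],
              [-1,  0,  1],
              [ 0, -1, -1]])

def hinge_loss(c, s):
    # binary hinge loss of Eq. (8)
    return np.maximum(0.0, 1.0 - c * s) / 2.0

def ecoc_decode(scores, coding):
    """Eq. (7): assign the class whose code row minimizes the
    |c_ij|-weighted average binary loss over the L learners."""
    w = np.abs(coding)
    avg_loss = (w * hinge_loss(coding, scores)).sum(axis=1) / w.sum(axis=1)
    return int(np.argmin(avg_loss))

def crown_species(pixel_labels, others_frac=0.5):
    """Majority vote inside one crown contour; the 'others' fraction
    threshold of 0.5 is an illustrative choice."""
    labels, counts = np.unique(np.asarray(pixel_labels), return_counts=True)
    count_of = dict(zip(labels.tolist(), counts.tolist()))
    if count_of.get("others", 0) > others_frac * len(pixel_labels):
        return "others"
    return str(labels[np.argmax(counts)])

def living_state(n_healthy, n_dead):
    # Eq. (9): living state LS in percent
    return 100.0 * n_healthy / (n_healthy + n_dead)
```

For a sample with learner scores (0.9, 0.7, −0.4), the weighted average hinge loss is smallest for the first code row, so the first class is returned.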

2.4. Assessment Metrics

For the assessment of the quality of the proposed ITCD algorithm, we used the Jaccard similarity coefficient (JSC) (Arnaboldi et al. [37]) and the intersection over union metric (IoU) (Rezatofighi et al. [38]). The JSC coefficient ranges from 0% to 100% and characterizes the element-wise similarity of two sets. The higher the percentage, the more similar the two sets are. JSC is defined as the size of the intersection divided by the size of the union of two finite sample sets (10):

JSC(A, B) = |A ∩ B| / |A ∪ B|, (10)

where |A| means the number of elements of the finite set A. In our case, A is the set of labels of trees delineated by the proposed ITCD algorithm and B is the set of labels of trees delineated by an expert. Thus, JSC characterizes the correctness of detection of individual tree crowns in comparison with expert data. The use of the IoU metric allows us to take into account the accuracy of the coincidence of tree crown contours obtained using our algorithm with expert data. The IoU metric is defined in a similar way (11):

IoU(S_A, S_B) = |S_A ∩ S_B| / |S_A ∪ S_B|, (11)

where S_A and S_B are the sets of pixels located inside the tree crown contours obtained by the algorithm and by the expert, respectively.

The classification quality was assessed by the confusion matrix (CM) and the related parameters, such as the total error (TE), total omission error, and total commission error (TOE and TCE, respectively). CM is the basic classification quality characteristic allowing a comprehensive visual analysis of different aspects of the used classification method. Rows of CM represent reference classes and columns represent predicted classes. TE is defined as the number of incorrectly classified samples over the total number of samples. TOE is the mean omission error over all classes considered, where the omission error is the number of falsely classified samples of the selected class over all samples of this class. TCE is the mean commission error over all possible responses of the classifier used, where the commission error is defined as the probability of false classification for each possible classification result. The implementation of the methods presented in Sections 2.2–2.4 was carried out using the MATLAB programming language.
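The assessment metrics of this section can be sketched as follows; the small confusion matrix in the test example is made up for illustration, and JSC is returned in percent as described above:

```python
import numpy as np

def jaccard(a, b):
    """JSC of Eq. (10): |A ∩ B| / |A ∪ B| for two finite sets,
    expressed in percent. IoU of Eq. (11) is computed the same way
    on sets of pixels inside the crown contours."""
    return 100.0 * len(a & b) / len(a | b)

def cm_errors(cm):
    """TE, TOE and TCE from a confusion matrix whose rows are
    reference classes and whose columns are predicted classes."""
    cm = np.asarray(cm, dtype=float)
    te = 1.0 - np.trace(cm) / cm.sum()
    omission = 1.0 - np.diag(cm) / cm.sum(axis=1)     # per reference class
    commission = 1.0 - np.diag(cm) / cm.sum(axis=0)   # per predicted class
    return te, float(np.mean(omission)), float(np.mean(commission))
```

For two crown-label sets sharing two of four distinct labels, `jaccard` returns 50.0.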

3. Experimental Results

The numerical experiments were conducted on images of different sizes and types. To demonstrate the performance of the methods proposed for ITCD, species classification, and assessment of the vital status, several small-area test plots of homogeneous and mixed forests with different canopy density, crown sizes, and degrees of damage by stem pests were selected. The test plots considered were combined into two groups: Dataset-1, containing multispectral images, and Dataset-2, containing RGB images. Names of the test plots were assigned in agreement with their properties: Medium Dense Canopy Coniferous (MDCC), Damaged Medium Dense Canopy (DMDC), Dense Canopy Coniferous (DCC), Dense Canopy Deciduous (DCD), Dense Canopy Deciduous Multi-Scale (DCDMS), Dense Canopy Mixed Damaged (DCMD) and Dense Canopy Mixed Multi-Scale (DCMMS).

The construction of the training dataset intended to be used further for pixel-by-pixel (PbP) classification consists of the extraction of spectral features from pixels falling inside the contours of recognized objects. We constructed two training datasets, separately for Dataset-1 and Dataset-2, containing the following classes: dominant species (spruce and beech), deadwood, and several types of intercrown space. It should be noted that the training data were extracted from pixels lying outside the test plots considered. The datasets contain 1000 samples for each class. It is important that we used balanced training sets because the prior probability of classes depends on the number of training samples for the classifier used. We employed two methods for estimating the per-pixel classification accuracy: the prior estimate and k-fold cross-validation (k = 10). In the cross-validation method, the training set is divided into k equal class-balanced folds, k − 1 of which are used

for training the ECOC SVM algorithm and 1 is used for independent test classification. The test fold runs through all possible k positions, and thus we obtain an ensemble of classification error samples. The prior estimate uses the same set for training and testing and gives us dependent classification errors. The comparison of prior and cross-validation estimates indicates the generalization ability, an important property of predictability of the classification accuracy. That is, if the difference is not significant, we can hope that real classification errors are in agreement with their estimates obtained from training data.

The confusion matrices estimated for Dataset-1 and Dataset-2 are shown in Figure 7. We can see a good correspondence between prior and cross-validation estimates both for multispectral and RGB images, and, thus, we can hope for the stability of the PbP classification. We can see that the PbP classification of multispectral images is significantly more accurate. For Dataset-1, we obtained that TE is 0.9%, TOE is 0.9%, and TCE is 1.1%. The difference between omission and commission errors is small enough. Classification errors obtained for Dataset-2 are approximately five times larger: TE is 5.2%, TOE is 6.5%, and TCE is 5.2%. We can see that for RGB images, the possibility of PbP species classification is questionable because of high enough omission errors. On the other hand, multispectral measurements allow the classification of species composition. Also, the damaged trees (deadwood class) are classified exactly (or almost exactly) for both datasets, which indicates the reliability of vital status estimates by using multispectral and RGB images.
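The class-balanced k-fold splitting described above can be sketched as follows. This is a simplified stand-in for the cross-validation routine; the fold count and seed are arbitrary:

```python
import random

def stratified_folds(labels, k=10, seed=0):
    """Split sample indices into k class-balanced folds, so that each
    fold holds roughly the same number of samples of every class."""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for pos, idx in enumerate(idxs):
            folds[pos % k].append(idx)
    return folds
```

With 1000 samples per class and k = 10, each fold receives 100 samples of every class; training on k − 1 folds and testing on the held-out fold, rotated over all k positions, yields the ensemble of error estimates used above.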

Figure 7. Confusion matrices estimated by different methods for PbP classification of test multispectral and RGB images: (a) prior, Dataset-1; (b) cross-validation, Dataset-1; (c) prior, Dataset-2; (d) cross-validation, Dataset-2.

Figures 8 and 9 show the results at key stages of thematic processing of the test multispectral and RGB images, respectively. In Figure 8, the original images are represented in false colors. The results of crown delineation were verified by using expert data. Quantitative characteristics of ITCD accuracy are shown in Table 3. We can see that the results obtained are acceptable for all test plots. Both JSC and IoU values exceed 90% and reach 97%, which indicates an excellent agreement between predicted and expert crown contours. The worst results are obtained for stands with the dense canopy and multiscale crown structure. Also, it should be noted that the proposed ITCD algorithm revealed significantly better accuracy for RGB images than for multispectral ones. This is due to the fact that the spatial resolution of RGB images is two times higher than that of multispectral images. Thus, it is preferable to use an image with a higher spatial resolution when solving the problem of contouring the crowns of individual trees.



Table 3. The accuracy statistics of the proposed ITCD algorithm on Dataset-1 and Dataset-2.

Data Set   Image Area   Area of Missing Trees, %   Mean Number of Crowns per Plot   Number of Delineated Trees   Number of Lost Trees   JSC, %   IoU, %
1          MDC          33                         130                              124                          6                      95.38    92.72
1          DCC          26                         171                              159                          12                     92.98    94.50
1          DCD          36                         77                               74                           3                      96.1     93.89
1          DCDMS        19                         340                              310                          30                     91.18    90.67
1          DMDC         16                         112                              107                          5                      95.57    94.38
2          MDCC         38                         118                              113                          5                      95.76    96.64
2          DCC          34                         127                              124                          3                      97.64    95.44
2          DCMD         32                         157                              151                          6                      96.18    95.76
2          DCDMS        33                         139                              132                          7                      94.96    94.11

Figure 8. Results of thematic processing of test multispectral images (Dataset-1).


Figure 9. Results of thematic processing of test RGB images (Dataset-2).

The results of PbP classification are represented in the third column of Figures 8 and 9. Multispectral images are characterized by a higher classification accuracy of the species composition. At the same time, there is a visible loss of pixels in the intercrown space for areas with a predominance of beech and high canopy density (for instance, DCDMS). The results of pixel-by-pixel classification of RGB images reveal significant difficulties in the classification of tree species. A characteristic feature indicating significant species classification errors is the presence of pixels falsely classified as beech at the borders of the spruce crowns. This effect is clearly seen in Figure 9 for the DCMD and MDCC test plots. On the other hand, misclassified pixels of species classes do not have this structure and are randomly arranged for multispectral scenes.

The results of the classification of individual tree species are presented in the fourth column from the left in Figures 8 and 9. The application of the above technique of combining the results of crown contouring and pixel-by-pixel classification allows us to eliminate most of the misclassified pixels, especially for multispectral images. RGB images are characterized by the appearance of a large number of individual trees classified as unknown species, for all test plots except DCC, which has a pure spruce composition. For multispectral images, the number of such trees is much less and does not exceed the percentage of impurities of other species, according to ground data.

In the far right column of Figures 8 and 9, the results of the assessment of classes of the vital status are presented. We can see that only spruces are damaged by stem pests. For multispectral images, the largest number of damaged trees is present in the DMDC site. All four classes of the vital status are present for this test plot. Also, minor damages are detected in the DCC plot (Figure 8). Considering RGB images, we can see that the greatest degree of damage is detected in the DCMD test plot. As for other plots, there is practically no damage. It should be noted that the MDCC plot contains a minor error in the results of


vital status assessment caused by local rock outcrops. In addition to this, we found that the proposed ITCD algorithm also produces a few erroneous contours in this area. In the future, it will be necessary to adhere to the strategy of joint processing of multispectral and RGB images when carrying out thematic processing of such a kind. At the same time, for this approach, it will be necessary to carry out an accurate registration of the indicated images using a large number of reference points. Only if this condition is met will it be possible to count on an increase in the accuracy of the classification of the species of individual trees and the assessment of classes of vital status.

4. Conclusions

High-resolution thematic mapping of forest parameters from data of photo and multispectral aerial surveys has a promising future. We proposed an effective algorithm for identifying and delineating tree crowns (the ITCD algorithm) on very high spatial resolution remote-sensing images (RGB and multispectral) of mixed forest areas and presented a method for the assessment of the species and vital status of individual trees. To carry out test calculations, we considered several typical test areas with different species composition, vital state, canopy density, and crown scales. The proposed ITCD algorithm showed good results for all samples. For typical scenes, the crown contouring accuracy is about 95%, which is promising in comparison with the results presented in (Jing et al., Maschler et al. [1,3]). The most accurate results were obtained when processing photo images that have a higher spatial resolution. We demonstrated several illustrative examples of using automated delineation of crowns for the construction of thematic maps of the species composition and vital status at the level of individual trees, which makes it possible to increase the information content of the results of the standard pixel-by-pixel classification. It was shown that both multispectral and RGB photo UAV images can be used to assess the degree of spruce forest damage. At the same time, it is preferable to use multispectral imagery to classify individual tree species. Further improvement of the proposed approach lies in the joint processing of multispectral and RGB photo images. Also, it is of interest to investigate the possibilities of the use of the proposed method for processing high-resolution satellite remote-sensing images (for example, WorldView, Pleiades, and Resurs-P) providing greater area coverage.

Author Contributions: A.S., E.D. and Y.H. developed the ITCD algorithm, conceived the experiments, performed the processing of UAV images using the ITCD algorithm, validated the results, and wrote the main part of the manuscript. E.D. carried out the experiments on species classification and vital status assessment and wrote the corresponding part of the manuscript. G.G., V.T., M.G., S.D. and M.I. made a survey of the research plots and a field exploration of the territory. All authors have read and agreed to the published version of the manuscript.

Funding: This study was funded by the framework of the National Science Program "Environmental Protection and Reduction of Risks of Adverse Events and Natural Disasters", approved by the Resolution of the Council of Ministers № 577/17.08.2018 and supported by the Ministry of Education and Science (MES) of Bulgaria (Agreement № D01-363/17.12.2020), the Russian Foundation for Basic Research (RFBR) with projects no. 19-01-00215 and no. 20-07-00370, and the Moscow Center for Fundamental and Applied Mathematics (Agreement 075-15-2019-1624 with the Ministry of Education and Science of the Russian Federation; MESRF).

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: All drone and airborne orthomosaic data, shapefiles, and code will be made available on request to the corresponding author's email with appropriate justification.

Acknowledgments: We are very grateful to the reviewers and academic editor for their valuable comments that helped to improve the paper.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations

The list of abbreviations used in the manuscript:

BW: Black and White
DCC: Dense Canopy Coniferous
DCD: Dense Canopy Deciduous
DCDMS: Dense Canopy Deciduous Multi-Scale
DCMD: Dense Canopy Mixed Damaged
DCMMS: Dense Canopy Mixed Multi-Scale
DMDC: Damaged Medium Dense Canopy
DWT: Discrete Wavelet Transform
ECOC: Error Correcting Output Codes
ERS: Earth Remote Sensing
JSC: Jaccard Similarity Coefficient
HE: Histogram Equalization
HH: High-High
IoU: Intersection over Union
ITC: Individual Tree Crowns
ITCD: Individual Tree Crowns Delineation
LL: Low-Low
MDCC: Medium Dense Canopy Coniferous
MDC: Medium Dense Canopy
PbP: Pixel-by-Pixel
RGB: Red, Green, Blue
SVM: Support Vector Machine
UAV: Unmanned Aerial Vehicle

References

1. Jing, L.; Hu, B.; Noland, T.; Li, J. An individual tree crown delineation method based on multi-scale segmentation of imagery. ISPRS J. Photogramm. Remote Sens. 2012, 70, 88–98. [CrossRef]
2. Ma, Z.; Pang, Y.; Wang, D.; Liang, X.; Chen, B.; Lu, H.; Weinacker, H.; Koch, B. Individual tree crown segmentation of a larch plantation using airborne laser scanning data based on region growing and canopy morphology features. Remote Sens. 2020, 12, 1078. [CrossRef]
3. Maschler, J.; Atzberger, C.; Immitzer, M. Individual tree crown segmentation and classification of 13 tree species using airborne hyperspectral data. Remote Sens. 2018, 10, 1218. [CrossRef]
4. Safonova, A.; Tabik, S.; Alcaraz-Segura, D.; Rubtsov, A.; Maglinets, Y.; Herrera, F. Detection of fir trees (Abies Sibirica) damaged by the bark beetle in unmanned aerial vehicle images with deep learning. Remote Sens. 2019, 11, 643. [CrossRef]
5. Dmitriev, E.V.; Kozoderov, V.V.; Dementyev, A.O.; Safonova, A.N. Combining classifiers in the problem of thematic processing of hyperspectral aerospace images. Optoelectron. Instrum. Data Process. 2018, 54, 213–221. [CrossRef]
6. Paris, C.; Valduga, D.; Bruzzone, L. A hierarchical approach to three-dimensional segmentation of LiDAR data at single-tree level in a multilayered forest. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4190–4203. [CrossRef]
7. Shahzad, M.; Schmitt, M.; Zhu, X.X. Segmentation and crown parameter extraction of individual trees in an airborne TomoSAR point cloud. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. ISPRS Arch. 2015, 40, 205–209. [CrossRef]
8. Wagner, F.H.; Ferreira, M.P.; Sanchez, A.; Hirye, M.C.M.; Zortea, M.; Gloor, E.; Phillips, O.L.; de Souza Filho, C.R.; Shimabukuro, Y.E.; Aragão, L.E.O.C. Individual tree crown delineation in a highly diverse tropical forest using very high resolution satellite images. ISPRS J. Photogramm. Remote Sens. 2018, 145, 362–377. [CrossRef]
9. Ferraz, A.; Saatchi, S.; Mallet, C.; Meyer, V. Lidar detection of individual tree size in tropical forests. Remote Sens. Environ. 2016, 183, 318–333. [CrossRef]
10. Morsdorf, F.; Meier, E.; Kötz, B.; Itten, K.I.; Dobbertin, M.; Allgöwer, B. LIDAR-based geometric reconstruction of boreal type forest stands at single tree level for forest and wildland fire management. Remote Sens. Environ. 2004, 92, 353–362. [CrossRef]
11. Vega, C.; Hamrouni, A.; Mokhtari, S.E.; Morel, J.; Bock, J.; Renaud, J.P.; Bouvier, M.; Durrieu, S. PTrees: A point-based approach to forest tree extraction from LiDAR data. Int. J. Appl. Earth Obs. Geoinf. 2014, 98. [CrossRef]
12. Torresan, C.; Berton, A.; Carotenuto, F.; Gennaro, S.F.D.; Gioli, B.; Matese, A.; Miglietta, F.; Vagnoli, C.; Zaldei, A.; Wallace, L. Forestry applications of UAVs in Europe: A review. Int. J. Remote Sens. 2017, 38, 2427–2447. [CrossRef]

13. Chemura, A.; van Duren, I.C.; van Leeuwen, L.M. Determination of the age of oil palm from crown projection area detected from WorldView-2 multispectral remote sensing data: The case of Ejisu-Juaben District, Ghana. ISPRS J. Photogramm. Remote Sens. 2015, 100, 118–127. [CrossRef]
14. Allouis, T.; Durrieu, S.; Véga, C.; Couteron, P. Stem volume and above-ground biomass estimation of individual pine trees from LiDAR data: Contribution of full-waveform signals. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 924–934. [CrossRef]
15. Gupta, S.; Weinacker, H.; Koch, B. Comparative analysis of clustering-based approaches for 3-D single tree detection using airborne fullwave Lidar data. Remote Sens. 2010, 2, 968. [CrossRef]
16. Yang, Q.; Su, Y.; Jin, S.; Kelly, M.; Hu, T.; Ma, Q.; Li, Y.; Song, S.; Zhang, J.; Xu, G.; et al. The influence of vegetation characteristics on individual tree segmentation methods with airborne LiDAR data. Remote Sens. 2019, 11, 2880. [CrossRef]
17. Zhen, Z.; Quackenbush, L.J.; Zhang, L. Trends in automatic individual tree crown detection and delineation—Evolution of LiDAR data. Remote Sens. 2016, 8, 333. [CrossRef]
18. Coomes, D.A.; Dalponte, M.; Jucker, T.; Asner, G.P.; Banin, L.F.; Burslem, D.F.R.P.; Lewis, S.L.; Nilus, R.; Phillips, O.L.; Phua, M.-H.; et al. Area-based vs. tree-centric approaches to mapping forest carbon in Southeast Asian forests from airborne laser scanning data. Remote Sens. Environ. 2017, 194, 77–88. [CrossRef]
19. Wermelinger, B. Ecology and management of the spruce bark beetle Ips typographus—A review of recent research. For. Ecol. Manag. 2004, 202, 67–82. [CrossRef]
20. Michigan Nursery and Landscape Association. 3PM Report: European Spruce Bark Beetle. Available online: https://www.mnla.org/story/3pm_report_european_spruce_bark_beetle (accessed on 25 August 2020).
21. Eichhorn, J.; Roskams, P.; Potocic, N.; Timmermann, V.; Ferretti, M.; Mues, V.; Szepesi, A.; Durrant, D.; Seletkovic, I.; Schroeck, H.-W.; et al. Part IV: Visual Assessment of Crown Condition and Damaging Agents; Thünen Institute of Forest Ecosystems: Braunschweig, Germany, 2016; ISBN 978-3-86576-162-0.
22. Samanta, S.; Mukherjee, A.; Ashour, A.S.; Dey, N.; Tavares, J.M.R.S.; Abdessalem Karâa, W.B.; Taiar, R.; Azar, A.T.; Hassanien, A.E. Log transform based optimal image enhancement using firefly algorithm for autonomous mini unmanned aerial vehicle: An application of aerial photography. Int. J. Image Graph. 2018, 18, 1850019. [CrossRef]
23. Haicheng, W.; Mingxia, X.; Ling, Z.; Miaojun, W.; Xing, W.; Xiuxia, Z.; Bai, Z. Study on Monitoring Technology of UAV Aerial Image Enhancement for Burning Straw. In Proceedings of the 2016 Chinese Control and Decision Conference (CCDC), Yinchuan, China, 28–30 May 2016; IEEE Industrial Electronics (IE) Chapter: Singapore, 2016; pp. 4321–4325.
24. Kim, B.H.; Kim, M.Y. Anti-Saturation and Contrast Enhancement Technique Using Interlaced Histogram Equalization (IHE) for Improving Target Detection Performance of EO/IR Images. In Proceedings of the 2017 17th International Conference on Control, Automation and Systems (ICCAS), Jeju, Korea, 18–21 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1692–1695. [CrossRef]
25. Guo, L.J. Balance contrast enhancement technique and its application in image colour composition. Int. J. Remote Sens. 1991, 12, 2133–2151. [CrossRef]
26. Archana; Verma, A.K.; Goel, S.; Kumar, N.S. Gray Level Enhancement to Emphasize Less Dynamic Region within Image Using Genetic Algorithm. In Proceedings of the 2013 3rd IEEE International Advance Computing Conference (IACC), Ghaziabad, India, 22–23 February 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1171–1176. [CrossRef]
27. Tausif, M.; Khan, E.; Hasan, M.; Reisslein, M. SMFrWF: Segmented modified fractional wavelet filter: Fast low-memory discrete wavelet transform (DWT). IEEE Access 2019, 7, 84448–84467. [CrossRef]
28. Pradham, P.; Younan, N.H.; King, R.L. Chapter 16—Concepts of Image Fusion in Remote Sensing Applications. In Image Fusion; Stathaki, T., Ed.; Academic Press: Oxford, UK, 2008; pp. 393–428. ISBN 978-0-12-372529-5.
29. Efford, N. Digital Image Processing: A Practical Introduction Using Java (with CD-ROM), 1st ed.; Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 2000; ISBN 978-0-201-59623-6.
30. Perera, R.; Premasiri, S. Hardware Implementation of Essential Pre-Processing Morphological Operations in Image Processing. In Proceedings of the 2017 6th National Conference on Technology and Management (NCTM), Malabe, Sri Lanka, 27 January 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 142–146.
31. Hamad, Y.A.; Safonova, A.N. A Program for Highlighting the Crown of a Tree Based on UAV Data. Certificate of State Registration of a Computer Program, 2020. SibFU Scientific and Innovation Portal. Available online: http://research.sfu-kras.ru/publications/publication/44104840 (accessed on 3 August 2021).
32. Baştan, M.; Bukhari, S.S.; Breuel, T. Active canny: Edge detection and recovery with open active contour models. IET Image Process. 2017, 11, 1325–1332. [CrossRef]
33. Ito, K.; Xiong, K. Gaussian filters for nonlinear filtering problems. IEEE Trans. Autom. Control 2000, 45, 910–927. [CrossRef]
34. Dietterich, T.G.; Bakiri, G. Solving multiclass learning problems via error-correcting output codes. J. Artif. Intell. Res. 1994, 2, 263–286. [CrossRef]
35. Schölkopf, B.; Smola, A.J. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond; The MIT Press: Cambridge, MA, USA; London, UK, 2002.
36. Escalera, S.; Pujol, O.; Radeva, P. Error-correcting output codes library. J. Mach. Learn. Res. 2010, 11, 661–664.

37. Arnaboldi, V.; Passarella, A.; Conti, M.; Dunbar, R.I.M. Chapter 5—Evolutionary Dynamics in Twitter Ego Networks. In Online Social Networks; Arnaboldi, V., Passarella, A., Conti, M., Dunbar, R.I.M., Eds.; Computer Science Reviews and Trends; Elsevier: Boston, MA, USA, 2015; pp. 75–92. ISBN 978-0-12-803023-3.
38. Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized Intersection over Union: A Metric and a Loss for Bounding Box Regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 658–666.