
Article

Gravitation-Based Edge Detection in Hyperspectral Images

Genyun Sun 1,2, Aizhu Zhang 1,2,*, Jinchang Ren 3, Jingsheng Ma 4, Peng Wang 1,2, Yuanzhi Zhang 5 and Xiuping Jia 6

1 School of Geosciences, China University of Petroleum (East China), Qingdao 266580, China; [email protected] (G.S.); [email protected] (P.W.)
2 Laboratory for Marine Mineral Resources, Qingdao National Laboratory for Marine Science and Technology, Qingdao 266071, China
3 Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow G1 1XQ, UK; [email protected]
4 Institute of Petroleum, Heriot-Watt University, Edinburgh EH14 4AS, UK; [email protected]
5 Key Lab of Lunar Science and Deep-Exploration, Chinese Academy of Sciences, Beijing 100012, China; [email protected]
6 School of Engineering and Information Technology, The University of New South Wales at Canberra, Canberra ACT 2600, Australia; [email protected]
* Correspondence: [email protected]; Tel./Fax: +86-532-8698-1952

Academic Editors: Eyal Ben-Dor and Prasad S. Thenkabail Received: 28 March 2017; Accepted: 8 June 2017; Published: 11 June 2017

Abstract: Edge detection is one of the key issues in remote sensing image analysis. Although many edge-detection methods have been proposed for gray-scale, color, and multispectral images, they still face difficulties when extracting edge features from hyperspectral images (HSIs), which contain a large number of bands with very narrow spectral gaps. Inspired by the clustering characteristic of gravitational theory, a novel edge-detection algorithm for HSIs is presented in this paper. In the proposed method, we first construct a joint feature space by combining the spatial and spectral features. Each pixel of the HSI is treated as a celestial object in the joint feature space, which exerts gravitational force on each of its neighboring pixels. Accordingly, each object travels in the joint feature space until it reaches a stable equilibrium. At the equilibrium, the image is smoothed and the edges are enhanced, so that edge pixels can easily be distinguished by calculating the gravitational potential energy. The proposed edge-detection method was tested on several benchmark HSIs and the obtained results were compared with those of four state-of-the-art approaches. The experimental results confirm the efficacy of the proposed method.

Keywords: edge detection; hyperspectral image; gravitation; remote sensing; feature space

Remote Sens. 2017, 9, 592; doi:10.3390/rs9060592; www.mdpi.com/journal/remotesensing

1. Introduction

With the development of optoelectronics and data-transmission technologies, hyperspectral sensors are able to capture hundreds of spectral bands simultaneously with good spatial resolution. A few well-known hyperspectral imaging systems, such as AVIRIS (the Airborne Visible/Infrared Imaging Spectrometer), HYDICE (the Hyperspectral Digital Imagery Collection Experiment), and EnMAP (the Environmental Mapping and Analysis Program), can provide 126–512 spectral bands with a spatial resolution from 3 m to 30 m. The growing availability of hyperspectral images (HSIs) with both fine spectral and spatial resolution has opened the door to numerous new applications in remote sensing [1].

In many application areas for HSIs, segmentation is required as an important image-processing step because it divides an image into meaningful homogeneous regions [2]. However, because the imaged scenes are complex and may not have clear boundaries, image segmentation remains a challenging task. Effective edge-detection algorithms have been found useful for image segmentation, since they delineate the boundaries of regions and thus determine segmentation results [2,3]. Especially for advanced HSIs with high spatial resolution, clearly depicted edges can be of great benefit in characterizing the spatial structures of landscapes [4].

Over the past few decades, a large number of edge-detection techniques have been proposed, such as Canny [5], Marr [6], SUSAN (Smallest Univalue Segment Assimilating Nucleus) [7] and active-contour-based methods [8]. However, these approaches were designed primarily for gray-scale, color or multispectral images, and few studies have paid attention to edge detection in HSIs [9,10]. In previous studies [10,11], edge-detection tasks were completed by adapting color or multispectral approaches to HSIs. Accordingly, existing approaches for color or multispectral edge detection are discussed as follows.

In general, most approaches for color or multispectral edge detection can be grouped into three categories: (1) monochromatic approaches; (2) vector-based approaches; and (3) feature-space-based approaches. Monochromatic approaches combine all edge maps after applying well-established gray-scale edge detectors to each band; typical rules for combining edges include the maximum rule [12], the summation rule [13] and the logical OR operation [14]. Although the fine spectral resolution of HSIs provides invaluable and abundant information regarding the physical nature of different materials [10], edge features may only be observable over a small subset of bands in the HSI [15].
That is to say, different spectral bands may even contain inconsistent edges, which can easily lead to false edges after a simple combination. To overcome this drawback of the combination rules, Lei and Fan combined the first principal component with color components of the image to obtain complete object edges [16].

Different from monochromatic approaches, vector-based approaches consider each pixel as a spectral vector and then use vector operations to detect edges [17–24]. In these approaches, the gradient magnitude and direction are defined in a vector field by extending the gray-scale edge definition [11]. Di Zenzo's gradient operator [17], which uses the tensor gradient to define the edge magnitude and direction, is a widely used vector-based method. However, Di Zenzo's method is sensitive to small changes in intensity because it is based on the measure of the squared local contrast variation of multispectral images. To overcome this problem, Drewniok [19] combined Di Zenzo's method with a Gaussian pre-filter applied to each band for smoothing the image. Drewniok's method is more effective, but it results in a localization error when the Gaussian scale parameter is large [25]. To remove both the effect of noise and the localization error caused by pre-filtering, a method based on the robust color morphological gradient (RCMG) was proposed, in which the edge strength of a pixel is defined as the maximum distance between any two pixels in its surrounding window [24]. The RCMG is robust to noise and helps to preserve accurate spatial structures because a non-linear scheme is employed in the process [10]. Generally speaking, vector-based approaches are superior to monochromatic approaches in multispectral edge detection since they take the spectral correlation of the bands into consideration. Different from color or multispectral images, however, HSIs usually consist of hundreds of contiguous spectral bands.
Although the high spectral resolution of HSIs provides the potential for more accurate identification of targets, edge detection in HSIs still suffers from three problems: (1) various interferences that lead to low homogeneity inside regions [26]; (2) spectral mixtures, or edges that appear in only a small subset of bands, which often cause weak edges [26]; and (3) inconsistent edges from the narrow and abundant channels [27]. These problems have made edge detection more challenging in HSIs. Dimension reduction has been investigated using a signal subspace approach, leading to a local-rank-based edge detection method [28]; however, its performance is limited. It is necessary to consider the spatial relationships between spectral vectors [29]. In Reference [30], a modified Simultaneous Spectral/Spatial Detection of Edges (SSSDE) [31] for HSIs was proposed to exploit spectral information and identify boundaries (edges) between different materials. This is a constructive attempt at the joint utilization of spatial and spectral information.

Recently, many researchers have tried to develop new feature-based edge-detection methods because edge pixels exhibit saliency in the feature space [32]. In Lin [32], color image edge detection is proposed based on a subspace classification in which multivariate features are utilized to determine edge pixels. In Dinh [27], multispectral image edge detection is achieved via clustering of the gradient feature. The feature space is suitable for analyzing HSIs because it is capable of showing data from various land-cover types in groups [33]. In particular, the joint spatial-spectral feature space, which takes into consideration both the spatial and spectral correlation between pixels, has attracted increasing attention [1,29]. In addition to edge detection, feature space has also been widely used in other fields of image processing, such as hyperspectral feature selection [34,35], target detection [36], and image segmentation [37,38].
However, few studies have focused on edge detection for HSIs using the feature space. Physical models that mimic real-world systems can be used to analyze complex scientific and engineering problems [37,39,40]. The ongoing research on physical models has motivated the exploration of new ways to handle image-processing tasks. As a typical physical model, Newton's law of universal gravitation [41] has received considerable attention, and a series of image-processing algorithms has been proposed based on it, such as the stochastic gravitational approach to feature-based color image segmentation (SGISA) [37], the improved gravitational search algorithm for multi-level image thresholding [42] and the gravitational collapse (CTCGC) approach for color texture classification [43]. Other gravitation-model-based approaches for classification include data gravitation classification (DGC) [44–46] and gravitational self-organizing maps (GSOM) [47]. The advantages of the gravity field model have also inspired the development of new edge-detection methods for gray-scale images [48,49]. Although the image-processing procedure using gravitational theory is relatively easy to understand, this theory is rarely used in edge detection for HSIs. As the gravitational theory describes the movements of objects in the three-dimensional space of the real world, we can easily extend it to an n-dimensional feature space and apply it to edge detection in HSIs. In this paper, a novel edge-detection method for HSIs based on gravitation (GEDHSI) is proposed.
The main ideas of the proposed GEDHSI are as follows: (1) there exists a type of "force", called gravitation, between any two pixels in the feature space; (2) the computational gravitation obeys the law of gravitation in the physical world; (3) all pixels move in the feature space according to the law of motion until the stopping criteria are satisfied, at which point the image system reaches a stable equilibrium; and (4) the edge pixels and non-edge pixels are then classified into two different clusters. Therefore, in the equilibrium system, edge responses can be obtained by computing the gravitational potential energy determined by the relative positions. Different from our previous approach for gravitation-based edge detection in gray-scale images [49], the proposed method features a dynamic scheme. By imitating the gravitational theory, all pixels move in the joint spatial-spectral feature space, which makes GEDHSI more attractive and effective for HSIs. The experimental results illustrate both the efficiency and efficacy of the proposed method for HSI edge-detection problems.

The rest of the paper is organized as follows: The law of universal gravitation is briefly reviewed in Section 2. The proposed gravitation-based edge-detection method is presented in Section 3. The experimental validation of the proposed algorithm using both artificial and real HSIs is given in Section 4. Finally, the paper is concluded in Section 5.

2. Background of the Universal Law of Gravitation

According to gravitational theory [35], any two objects exert gravitational force on each other. The forces between them are equal in magnitude but opposite in direction. The force is proportional to the product of the two masses and inversely proportional to the square of the distance between them, as shown in Equation (1):

\[
\vec{F}_{i,j} = G \frac{M_i M_j}{\|\vec{Z}_i - \vec{Z}_j\|^3} (\vec{Z}_i - \vec{Z}_j), \tag{1}
\]

where \(\vec{F}_{i,j}\) is the gravitational force vector between objects i and j, whose masses are denoted as \(M_i\) and \(M_j\), respectively; G is the universal gravitational constant; and \(\vec{Z}_i\) and \(\vec{Z}_j\) represent the position vectors of objects i and j. Newton's second law says that when a force \(\vec{F}_{i,j}\) is applied to object i, its acceleration \(\vec{a}_i\) depends only on the force and its mass \(M_i\) [50]:

\[
\vec{a}_i = \frac{\vec{F}_{i,j}}{M_i}. \tag{2}
\]

For \(M_i\), two types of mass are defined in theoretical physics:

• Gravitational mass, \(M_i^G\), is a measure of the strength of the gravitational field due to a particular object. The gravitational field of an object with small gravitational mass is weaker than that of an object with larger gravitational mass.
• Inertial mass, \(M_i^I\), is a measure of an object's resistance to changing its state of motion when a force is applied. An object with large inertial mass changes its motion slowly, while an object with small inertial mass changes it rapidly.

In physics, gravitational mass and inertial mass are usually treated as the same. In this paper, they are treated separately to enhance the adaptability of gravitational theory for HSI edge detection. That is to say, the gravitational force \(\vec{F}_{i,j}\) that acts on object i due to object j is proportional to the product of the gravitational masses of objects i and j and inversely proportional to the squared distance between them, while \(\vec{a}_i\) is proportional to \(\vec{F}_{i,j}\) and inversely proportional to the inertial mass of object i. More precisely, we rewrite Equations (1) and (2) as follows:

\[
\vec{F}_{i,j} = G \frac{M_i^G M_j^G}{\|\vec{Z}_i - \vec{Z}_j\|^3} (\vec{Z}_i - \vec{Z}_j), \tag{3}
\]

\[
\vec{a}_i = \frac{\vec{F}_{i,j}}{M_i^I}, \tag{4}
\]

where \(M_i^G\) and \(M_j^G\) represent the gravitational masses of objects i and j, respectively, and \(M_i^I\) represents the inertial mass of object i. The gravitational force obeys the superposition principle. Thus, for any object k in the space, the gravitational force exerted by objects i and j is calculated by Equation (5).

\[
\vec{F}_k = \vec{F}_{i,k} + \vec{F}_{j,k}
= G \frac{M_i^G M_k^G}{\|\vec{Z}_i - \vec{Z}_k\|^3} (\vec{Z}_i - \vec{Z}_k)
+ G \frac{M_j^G M_k^G}{\|\vec{Z}_j - \vec{Z}_k\|^3} (\vec{Z}_j - \vec{Z}_k), \tag{5}
\]

where \(\vec{Z}_k\) is the position of object k in the space Ω, and \(M_k^G\) is the gravitational mass of object k. The resultant force forms the gravitational field. Because the gravitational field is conservative, there is a scalar potential energy (SPE) per unit mass at each point in the space associated with the gravitational field. The SPE of object i exerted by object j is determined by Equation (6) [51] as follows:

\[
\varphi_{i,j} = -G \frac{M_i^G M_j^G}{\|\vec{Z}_i - \vec{Z}_j\|}. \tag{6}
\]

Similar to the gravitational force, the SPE also obeys the superposition principle. The SPE is closely linked with forces: the force is the negative gradient of the SPE [52], which is expressed as follows:

\[
\vec{F}_{i,j} = -\nabla \varphi_{i,j}, \tag{7}
\]

where ∇ is the gradient operator. The SPE function is extendable to high-dimensional feature space and is capable of reflecting the inhomogeneity of data, which is a key point for edge detection.

In gravitational systems, all objects move under the gravitational force until an equilibrium phase is reached. In the equilibrium phase, all similar objects aggregate to form a cluster. Different clusters have different properties, such as SPE. Some gravitation-based algorithms are inspired by these characteristics of gravitational theory [37,50]. In this paper, we assume that edge and non-edge pixels are by nature two different clusters; therefore, they will have different SPEs. These properties constitute the background of the proposed method.

3. Gravitation-Based Edge Detection in Hyperspectral Image (GEDHSI)

3.1. Overview

GEDHSI is proposed in this study based on Newton's law of universal gravitation. It contains three main phases, as illustrated in Figure 1. The first phase is image mapping, which maps a HSI into a joint spatial-spectral feature space, as detailed in Section 3.2. In the joint space, every pixel is considered as an object which attracts the others via the gravitational force. The second phase is the pixel-traveling phase, detailed in Section 3.3, where an iterative procedure is used to allow all pixels to move within the joint spatial-spectral feature space under the influence of the gravitational force until the image system reaches a stable equilibrium. In this state, the edge pixels and non-edge pixels will have different scalar potential energies (SPEs) based on their relative positions. Consequently, in the third phase, the edge response is obtained based on the determined SPE of each pixel.

Figure 1. Overview of the proposed GEDHSI (Gravitation-Based Edge Detection in Hyperspectral Image).

3.2. Image Mapping

The gravitational theory is derived from the three-dimensional space of the real world. To simulate the theory, we first need to map the HSI into a feature space. For a HSI denoted by \(I^{H \times W \times B}\), the parameters H, W and B represent the height and width of the image and the number of bands, respectively. A simple feature space is composed only of its bands; this type of feature space primarily uses spectral information. To make full use of both spatial and spectral information, inspired by Reference [37], we define a joint spatial-spectral feature space containing (B + 2) features (dimensions). In this feature space, each pixel addresses a location:

\[
\vec{Z}_i = (\vec{V}_i, \vec{S}_i) = (v_i^1, v_i^2, \ldots, v_i^b, \ldots, v_i^B, x_i, y_i), \quad \text{for } i = 1, 2, \ldots, N, \tag{8}
\]

where N = H × W is the total number of pixels in the image, \(\vec{V}_i = (v_i^1, v_i^2, \ldots, v_i^B)\) is the spectral component of pixel i, and \(\vec{S}_i = (x_i, y_i)\) is its spatial position.
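The mapping of Equation (8) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code; the function name is ours, and the r/δ stretching applied here anticipates the distance-balancing step described in Section 3.3.

```python
import numpy as np

def map_to_feature_space(hsi, r=4.0, delta=50.0):
    """Map an H x W x B hyperspectral cube into the joint spatial-spectral
    feature space of Equation (8): each pixel becomes a (B + 2)-vector
    (v_1, ..., v_B, x, y).  Spectral values are stretched by r / delta to
    balance spatial and spectral distances (Section 3.3)."""
    H, W, B = hsi.shape
    # spectral part, stretched: V' = V * r / delta
    V = hsi.reshape(H * W, B).astype(float) * (r / delta)
    # spatial part: (x, y) coordinates of every pixel, row-major order
    ys, xs = np.mgrid[0:H, 0:W]
    S = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    return np.hstack([V, S])          # shape (N, B + 2), with N = H * W

Z = map_to_feature_space(np.random.rand(8, 10, 5))
print(Z.shape)   # (80, 7)
```

Each row of `Z` is one pixel's position in the (B + 2)-dimensional joint space, ready for the traveling phase.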

3.3. Pixel Traveling

Once the joint spatial-spectral feature space is constructed, we consider each pixel as a mass object in the space. Thereby, as introduced in Section 2, each pixel attracts the others through the gravitational force and then moves in the joint feature space according to the gravitational theory. In other words, in the traveling stage, all objects move step by step within the joint feature space under the gravitational force until the stopping criteria are met. Specifically, the pixel-traveling phase repeats the following three steps in an iterative way until the whole image-based system becomes stable:

• Calculation of the acceleration; • Calculation of the traveling step size; • Location updating.

(1) Calculation of the Acceleration

In GEDHSI, for a pixel i at time t, its acceleration \(\vec{a}_i(t)\) is computed according to Equation (9) as follows:

\[
\vec{a}_i(t) = \frac{\vec{F}_i(t)}{M_i^I(t)}, \tag{9}
\]

where \(\vec{F}_i(t)\) is the gravitational force on pixel i exerted by the other pixels at time t, and \(M_i^I(t)\) is the inertial mass of pixel i at time t. According to Equation (7), the gravitational force is proportional to the gradient of the SPE and points along the direction of decreasing SPE. Therefore, we first calculate the SPE of pixel i produced by a pixel j as follows:

\[
\varphi_{i,j}(t) = -G \frac{M_i^G(t) \cdot M_j^G(t)}{d_{i,j}(t)}, \tag{10}
\]

where \(d_{i,j}(t)\) is the distance between pixels i and j, and \(M_i^G(t)\) and \(M_j^G(t)\) are the gravitational masses of pixels i and j. In this paper, to simplify the calculation, the gravitational mass of each object is set to a unit value.

Obviously, the distance \(d_{i,j}\) is a dominant factor in Equation (10). The distance measure \(\|\vec{Z}_i - \vec{Z}_j\|\) used in Equation (1) will result in a singularity. To avoid this problem and make \(1/d_{i,j}\) smooth and finite, the distance in GEDHSI is redefined by Equation (11), as follows:

\[
d_{i,j}(t) = 1 + \left( \frac{\|\vec{Z}_i(t) - \vec{Z}_j(t)\|}{\sigma} \right)^2, \tag{11}
\]

where \(\|\vec{Z}_i(t) - \vec{Z}_j(t)\|\) is the Euclidean distance between pixels i and j, and σ is an influence factor.
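A tiny numerical check (our illustration, not from the paper) makes the point of Equation (11) concrete: because the redefined distance is never smaller than 1, the reciprocal 1/d stays finite even when two pixels coincide, which is exactly where the Euclidean distance of Equation (1) blows up.

```python
import numpy as np

def modified_distance(zi, zj, sigma):
    """Eq. (11): d = 1 + (||zi - zj|| / sigma)^2 -- bounded below by 1,
    so 1 / d is smooth and finite even for coincident pixels."""
    return 1.0 + (np.linalg.norm(np.asarray(zi) - np.asarray(zj)) / sigma) ** 2

print(modified_distance([0, 0], [0, 0], sigma=4.0))   # 1.0 -> no singularity
print(modified_distance([3, 4], [0, 0], sigma=4.0))   # 1 + (5/4)^2 = 2.5625
```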

Then, by substituting Equations (10) and (11) into Equation (7), we can obtain the gravitational force of pixel i exerted by the pixel j as follows:

\[
\vec{F}_{i,j}(t) = -\nabla \varphi_{i,j}
= \frac{2G}{\sigma^2} \cdot \frac{M_i^G(t) \cdot M_j^G(t)\,(\vec{Z}_j(t) - \vec{Z}_i(t))}{\left(1 + (\|\vec{Z}_i(t) - \vec{Z}_j(t)\|/\sigma)^2\right)^2}. \tag{12}
\]
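For completeness, the intermediate differentiation step (not spelled out in the text) confirms the form of Equation (12). With \(\varphi_{i,j} = -G M_i^G M_j^G \big/ \bigl(1 + \|\vec{Z}_i - \vec{Z}_j\|^2/\sigma^2\bigr)\),

```latex
\[
\nabla_{\vec{Z}_i}\,\varphi_{i,j}
= \frac{G\,M_i^G M_j^G}{\bigl(1 + \|\vec{Z}_i-\vec{Z}_j\|^2/\sigma^2\bigr)^{2}}
  \cdot \frac{2(\vec{Z}_i-\vec{Z}_j)}{\sigma^2},
\qquad
\vec{F}_{i,j} = -\nabla_{\vec{Z}_i}\,\varphi_{i,j}
= \frac{2G}{\sigma^2}\cdot
  \frac{M_i^G M_j^G\,(\vec{Z}_j-\vec{Z}_i)}{\bigl(1 + \|\vec{Z}_i-\vec{Z}_j\|^2/\sigma^2\bigr)^{2}},
\]
```

so the force on pixel i indeed points toward pixel j, i.e., it is attractive.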

Because the gravitational force obeys the superposition principle, for a dataset D that consists of m pixels, the gravitational force of pixel i exerted by the dataset D is determined as the accumulated force for each pixel within D as follows:

\[
\vec{F}_i = \sum_{j \in D} \frac{2G}{\sigma^2} \cdot \frac{M_i^G \cdot M_j^G\,(\vec{Z}_j - \vec{Z}_i)}{\left(1 + (\|\vec{Z}_i - \vec{Z}_j\|/\sigma)^2\right)^2}. \tag{13}
\]

According to Equation (13), the dataset D is a key factor for computing the gravitational force. In the general case, all pixels are involved in computing the gravitational force; however, this leads to a high computational time. According to Equation (13), the gravitational force attenuates toward zero as the distance increases. Thus, for those pixels that exceed a pre-specified range, we can safely assume that their gravitational forces are zero to simplify the computation. To this end, the dataset D is constructed by

\[
D = \left\{ j \;\middle|\; \|\vec{S}_i - \vec{S}_j\| < r \ \text{and}\ \|\vec{V}_i - \vec{V}_j\| < \delta \right\}, \tag{14}
\]

where r is the radius of a local circular window in the spatial domain, δ is the radius in the spectral domain, and j is a neighboring pixel of pixel i. Together, r and δ define a local suprasphere in the feature space. Here, to balance the effect of the spatial and spectral distances, the position of each pixel is stretched in the spectral domain by \(\vec{V}_i' = \vec{V}_i \cdot \frac{r}{\delta}\). Thus, the influence factor σ is equal to r, and \(\vec{Z}_i\) is replaced by \(\vec{Z}_i = (\vec{V}_i', \vec{S}_i)\). All pixels inside the suprasphere compose the dataset D.

According to Equation (9), in addition to the resultant force, the inertial mass is critical in the traveling procedure. In the proposed method, the inertial mass \(M_i^I(t)\) of pixel i is defined as follows:

\[
M_i^I(t) = \sum_{j \in D} \exp\left(-\left(\frac{\|\vec{Z}_j(t) - \vec{Z}_i(t)\|}{\sigma}\right)^2\right). \tag{15}
\]

Obviously, the inertial mass is capable of adaptively capturing local information during traveling.
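Equations (13)–(15) for a single pixel can be sketched as follows. This is a minimal NumPy illustration under our own naming (the paper gives no code); gravitational masses are set to unit values, as stated after Equation (10).

```python
import numpy as np

def dataset_D(S, V, i, r, delta):
    """Indices of the dataset D of Equation (14): neighbours of pixel i
    within spatial radius r and spectral radius delta (pixel i excluded)."""
    mask = (np.linalg.norm(S - S[i], axis=1) < r) & \
           (np.linalg.norm(V - V[i], axis=1) < delta)
    mask[i] = False
    return np.where(mask)[0]

def force_and_inertia(Z, i, D, sigma, G=1.0):
    """Resultant force on pixel i over D (Eq. 13) and its inertial mass
    (Eq. 15); gravitational masses are unit values, as in the paper."""
    diff = Z[D] - Z[i]                                # Z_j - Z_i
    d2 = np.sum(diff ** 2, axis=1) / sigma ** 2       # (||Z_i - Z_j|| / sigma)^2
    F = (2.0 * G / sigma ** 2) * np.sum(diff / (1.0 + d2)[:, None] ** 2, axis=0)
    M_I = np.sum(np.exp(-d2))                         # Eq. (15)
    return F, M_I

# toy example: 100 pixels with 5 (already stretched) spectral + 2 spatial features
rng = np.random.default_rng(0)
V = rng.random((100, 5)) * 10          # spectral part
S = rng.random((100, 2)) * 20          # spatial part
Z = np.hstack([V, S])
D = dataset_D(S, V, 0, r=4.0, delta=50.0)
F, M_I = force_and_inertia(Z, 0, D, sigma=4.0)
```

Note that the exponential in Equation (15) means far-away neighbours contribute almost nothing to the inertial mass, which is the self-adaptation referred to above.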

(2) Calculation of the Traveling Step

After obtaining the acceleration, we can calculate the traveling step size using the law of motion (Equation (16)):

\[
\Delta \vec{Z}_i(t) = \frac{1}{2}\vec{a}_i(t)\Delta t^2 + \vec{v}_i(t)\Delta t, \tag{16}
\]

where \(\vec{v}_i(t)\) is the velocity of pixel i at time t. To make the pixel move directly toward the potential center, \(\vec{v}_i(t)\) is considered zero within each Δt. Δt is the time interval between two adjacent iterations and is set to 1. Thus, Equation (16) reduces to Equation (17):

\[
\Delta \vec{Z}_i(t) = \frac{1}{2}\vec{a}_i(t). \tag{17}
\]

(3) Location Updating

Based on the traveling step size, the location of the pixel i can be updated as follows:

\[
\vec{Z}_i(t+1) = \vec{Z}_i(t) + \Delta \vec{Z}_i(t) = \vec{Z}_i(t) + \frac{\vec{F}_i(t)}{2 M_i^I(t)}. \tag{18}
\]

The traveling procedure is iterative and is repeated until one of the following stopping criteria is met: (1) the maximum step size \(S_{max}(t) < \varepsilon\), where \(S_{max}(t) = \max_{i \in [1, H \times W]} \|\Delta \vec{Z}_i(t)\|\); or (2) the number of iterations reaches a maximum T. In this stage, each pixel moves along the direction of the associated gravitational force. By traveling, pixels can find other, similar pixels, which removes the influence of noise and preserves structural detail.
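The whole traveling phase (Equations (14)–(18) plus the stopping criteria) can be sketched as below. This is a simplified, self-contained illustration rather than the authors' implementation: the neighbourhood test is applied on the stretched features (so the spectral radius becomes r after the V' = V·r/δ stretching), and efficiency is not a concern here.

```python
import numpy as np

def travel(V, S, r=4.0, delta=50.0, eps=1e-4, T=20, G=1.0):
    """Pixel-traveling phase (Section 3.3), sketched: each pixel moves by
    Delta Z = F / (2 M_I), Eqs. (17)-(18), until the largest step falls
    below eps or T iterations are reached."""
    sigma = r
    Z = np.hstack([np.asarray(V, float) * (r / delta), np.asarray(S, float)])
    for _ in range(T):
        Z_next = Z.copy()
        max_step = 0.0
        for i in range(len(Z)):
            # dataset D, Eq. (14): spatial radius r; after stretching, the
            # spectral radius delta * (r / delta) is also r (a simplification)
            near = (np.linalg.norm(Z[:, -2:] - Z[i, -2:], axis=1) < r) & \
                   (np.linalg.norm(Z[:, :-2] - Z[i, :-2], axis=1) < r)
            near[i] = False
            diff = Z[near] - Z[i]
            d2 = np.sum(diff ** 2, axis=1) / sigma ** 2
            F = (2.0 * G / sigma ** 2) * np.sum(diff / (1.0 + d2)[:, None] ** 2, axis=0)
            M_I = np.sum(np.exp(-d2))                  # inertial mass, Eq. (15)
            if M_I > 0:
                step = F / (2.0 * M_I)                 # Eqs. (17)-(18)
                Z_next[i] = Z[i] + step
                max_step = max(max_step, float(np.linalg.norm(step)))
        Z = Z_next
        if max_step < eps:                             # stopping criterion (1)
            break
    return Z

# toy run on a 6 x 6 image with 3 bands
rng = np.random.default_rng(1)
ys, xs = np.mgrid[0:6, 0:6]
S = np.stack([xs.ravel(), ys.ravel()], axis=1)
V = rng.random((36, 3)) * 50
Z_eq = travel(V, S)
```

At return, `Z_eq` holds the equilibrium positions from which the SPE-based edge response of Section 3.4 is computed.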

3.4. Edge Detection

When the traveling stops, the image system reaches a stable equilibrium in which similar pixels cluster together. Thereby, due to their relative positions, different pixels have various SPEs, and the SPE of each pixel can be calculated. For edge pixels, the SPE is relatively high, whereas for non-edge pixels, the SPE is relatively low. Thus, we take the SPE of each pixel as the edge response. The calculation of the SPE of each object is also restricted to the dataset D, using Equation (19).

\[
\Phi_i = \sum_{j \in D} \varphi_{i,j} = \sum_{j \in D} \frac{-G\, M_i^G M_j^G}{1 + (\|\vec{Z}_i - \vec{Z}_j\|/\sigma)^2}. \tag{19}
\]

Apparently, finding an appropriate threshold will classify the edge and non-edge pixels into two categories, and the edges will finally be detected. According to Reference [35], a good edge-detection method should keep a thin and accurate localization of edges. To satisfy such requirements, information about the edge directions is necessary for non-maximum suppression. In accordance with Reference [49], we first project the SPE onto the horizontal and vertical directions in the spatial domain, as shown in Equation (20).

\[
\Phi_i^X = \sum_{j \in D} \varphi_{i,j} \cdot \frac{x_j}{\sqrt{x_j^2 + y_j^2}}
\quad \text{and} \quad
\Phi_i^Y = \sum_{j \in D} \varphi_{i,j} \cdot \frac{y_j}{\sqrt{x_j^2 + y_j^2}}, \tag{20}
\]

where \(\Phi_i^X\) and \(\Phi_i^Y\) are the SPE of pixel i projected onto the horizontal and vertical directions, respectively. Then, the edge direction is estimated by Equation (21) as follows:

\[
\theta_i = \arctan(\Phi_i^X / \Phi_i^Y). \tag{21}
\]

With Φ and θ, non-maximum suppression and automatic hysteresis thresholding [53] can be applied to obtain the exact edges. Figure 2 depicts the flowchart of the proposed GEDHSI.
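The SPE edge response and direction of Equations (19)–(21) can be sketched as below. This is our own illustrative reading, not the authors' code: gravitational masses are unit values, and the projection weights use the neighbour offsets (x_j − x_i, y_j − y_i), which is how we interpret the x_j, y_j of Equation (20); arctan2 is used so the direction is defined even when \(\Phi_i^Y = 0\).

```python
import numpy as np

def spe_response(Z, sigma, r=4.0, G=1.0):
    """Edge-response stage (Section 3.4), sketched: SPE of every pixel over
    its spatial neighbourhood (Eq. 19) and the edge direction from the
    projected SPEs (Eqs. 20-21)."""
    N = len(Z)
    Phi = np.zeros(N)
    theta = np.zeros(N)
    for i in range(N):
        near = np.linalg.norm(Z[:, -2:] - Z[i, -2:], axis=1) < r
        near[i] = False
        J = np.where(near)[0]
        d2 = np.sum((Z[J] - Z[i]) ** 2, axis=1) / sigma ** 2
        phi = -G / (1.0 + d2)                       # SPE terms of Eq. (19)
        Phi[i] = phi.sum()
        dx = Z[J, -2] - Z[i, -2]                    # neighbour offsets (assumption)
        dy = Z[J, -1] - Z[i, -1]
        w = np.sqrt(dx ** 2 + dy ** 2)
        w[w == 0] = 1.0                             # guard against division by zero
        Phi_X = np.sum(phi * dx / w)                # Eq. (20), horizontal
        Phi_Y = np.sum(phi * dy / w)                # Eq. (20), vertical
        theta[i] = np.arctan2(Phi_X, Phi_Y)         # Eq. (21)
    return Phi, theta

# toy run on 50 equilibrium positions (4 spectral + 2 spatial features)
rng = np.random.default_rng(2)
Z = np.hstack([rng.random((50, 4)), rng.random((50, 2)) * 10])
Phi, theta = spe_response(Z, sigma=4.0)
```

Since every SPE term is negative, pixels whose neighbours have drifted away (edge pixels) end up with Φ closer to zero, i.e., relatively high, matching the thresholding logic described above.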

Figure 2. Flowchart of the proposed GEDHSI.

4. Experimental Results and Discussion

To demonstrate the efficacy and efficiency of the proposed method, we carried out three groups of experimental validation, including quantitative and qualitative evaluation and CPU runtime. Group 1 was designed to quantitatively compare the performances of different edge detectors using two artificial images, AI 1 and AI 2 (Section 4.3). Group 2 was designed to validate the performance of the proposed approach on two real remote sensing HSIs and two ground scene HSIs (Section 4.4). Group 3 was designed to test the computational efficiency of different edge detectors using the two remote sensing HSIs (Section 4.5). Detailed introductions of the tested data sets and the evaluation methods are given in Sections 4.1 and 4.2, respectively.

In all experiments, the edge results of the proposed method were compared with those of four other state-of-the-art multispectral edge-detection methods: Di Zenzo [17], Drewniok [19], RCMG [24] and SSSDE [30]. Di Zenzo initially used the Sobel operator as the channel-wise gradient technique [17], which is sensitive to noise. To improve the anti-noise ability, we prefer to use the ANDD-based gradient [46,54], considering its superiority in noise robustness. It is worth noting that the implementation codes of the comparative methods were taken directly from the authors for fair and consistent comparison. We selected their optimized parameters by trial and error. The parameters of GEDHSI were set as ε = 0.0001, T = 20, r = 4 and δ = 50 in this paper. The parameter σ was set to the same value as r, as discussed in Section 3.3. All the experiments were conducted under the following settings: Matlab 2013a, 64-bit Windows 7, 2.9 GHz Intel Pentium CPU with 6 GB RAM.

4.1. TestSix Datasets hyperspectral data sets in three groups that include two artificial ones and four real natural scenes are used in our experiments for performance assessment as listed in Table 1. The first two artificialSix hyperspectral images are datasynthesized sets in three using groups the HY thatDRA include [55] two which artificial were ones used and to fourevaluate real natural the scenesperformance are used in in dealing our experiments with background for performance clutter and weak assessment edges. The as listedreal HSIs in Table include1. Thetwo firstremote two artificial images are synthesized using the HYDRA [55] which were used to evaluate the performance

Remote Sens. 2017, 9, 592 10 of 23

in dealing with background clutter and weak edges. The real HSIs include two remote sensing HSIs and two ground scene HSIs. The first remote sensing HSI, the Farmland image, is an AVIRIS flight line over a farmland of east-central Indiana [56]. Another remote sensing HSI is a HYDICE flight line over the Washington DC Mall [57]. The two ground scene HSIs are chosen from Foster's HSI dataset [58] to further investigate the performance of the proposed method on real ground scenes. All these images are suitable for evaluating the performance in terms of connectivity, edge resolution and the ability to suppress false edges.

Table 1. Properties of the data sets used in experiments.

| Datasets | Data Source | Characteristic | Spatial Resolution | No. of Bands | Size (Pixels) |
|---|---|---|---|---|---|
| 1 AI 1 | Artificial | background clutter | N/A | 224 | 180 × 180 |
| 2 AI 2 | Artificial | weak edge, spectral mixture | N/A | 224 | 180 × 180 |
| 3 Farmland | Remote sensing | straight lined boundary, textured | High | 224 | 400 × 200 |
| 4 Washington DC Mall | Remote sensing | many tiny objects, cluttered | High | 191 | 466 × 307 |
| 5 Foster's Scene 5 | Ground scene | man-made objects, textured wall, desk, ball, blocks, etc. | Very high | 31 | 820 × 820 |
| 6 Foster's Scene 7 | Ground scene | textured wall, grass, shadow, etc. | Very high | 31 | 690 × 425 |

4.2. Evaluation Methods

Most edge detection techniques utilize thinning and binarization to obtain the final edge results [59]. In this study, we put all the edge-strength maps through the same thinning process. In this process, a pixel is only considered as an edge if it is a local maximum in the horizontal or vertical direction. Then, we applied the automatic threshold selection and hysteresis thresholding technique [53] on the edge-strength maps to generate the binary edges.

We investigated the edge detection results with both subjective and quantitative evaluations. For the quantitative evaluation, as shown in Figure 3, pixels in the candidate edge image were classified as TP (True Positive), TN (True Negative), FP (False Positive) and FN (False Negative), from which the precision and recall evaluations could be obtained as Prec and Rec. The Prec shows the correct rate of the detected edges, while the Rec implies the sensitivity of an edge detector to true edges.
Because the two evaluation measures may be unfaithful when the numbers of object and background pixels differ greatly, according to Reference [25], the F-measure derived from Prec and Rec is used to evaluate the overall performance:

F_α = (Prec × Rec) / (α · Prec + (1 − α) · Rec),  (22)

where α ∈ [0, 1] is a parameter weighing the contributions of Prec and Rec. In this paper, α was set to 0.5 to balance the Prec and Rec. As a statistical error measure, the F-measure works well for FP and FN, where a good detection has a higher F-measure value.
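The confusion-matrix counts and Equation (22) can be computed directly from a pair of binary edge maps. The following is a minimal sketch (the function name and array convention are ours, not from the paper):

```python
import numpy as np

def f_measure(detected, truth, alpha=0.5):
    """Prec, Rec and the F-measure of Equation (22) from binary edge maps."""
    detected, truth = np.asarray(detected, bool), np.asarray(truth, bool)
    tp = np.sum(detected & truth)    # correctly detected edge pixels
    fp = np.sum(detected & ~truth)   # spurious detections
    fn = np.sum(~detected & truth)   # missed edge pixels
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    if prec + rec == 0:
        return prec, rec, 0.0
    f = (prec * rec) / (alpha * prec + (1 - alpha) * rec)
    return prec, rec, f
```

Note that with α = 0.5, Equation (22) reduces to the familiar harmonic mean 2 · Prec · Rec / (Prec + Rec).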

Figure 3. Confusion matrix for the edge detection problem.
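The thinning and hysteresis binarization described in this section can be sketched in a few lines of NumPy. This is only an illustration of the two stated rules (a pixel survives thinning when it is a local maximum along the horizontal or the vertical direction; hysteresis keeps weak pixels only when connected to strong seeds), not the authors' Matlab implementation; the automatic threshold selection of [53] is replaced here by explicit low/high values:

```python
import numpy as np

def thin_edges(strength):
    """Keep a pixel only if its edge strength is a local maximum
    along the horizontal or the vertical direction."""
    s = np.pad(strength, 1, mode="edge")
    c = s[1:-1, 1:-1]
    horiz = (c >= s[1:-1, :-2]) & (c >= s[1:-1, 2:])  # vs. left/right neighbours
    vert = (c >= s[:-2, 1:-1]) & (c >= s[2:, 1:-1])   # vs. up/down neighbours
    return np.where(horiz | vert, c, 0.0)

def hysteresis(strength, low, high):
    """Binarize: pixels >= high seed the edges; pixels >= low are kept
    only if 8-connected (directly or transitively) to a seed."""
    rows, cols = strength.shape
    weak = strength >= low
    edges = strength >= high
    stack = list(zip(*np.nonzero(edges)))
    while stack:                      # flood fill outward from the strong seeds
        r, c = stack.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (0 <= rr < rows and 0 <= cc < cols
                        and weak[rr, cc] and not edges[rr, cc]):
                    edges[rr, cc] = True
                    stack.append((rr, cc))
    return edges
```

A binary edge map is then obtained as `hysteresis(thin_edges(strength), low, high)` for some chosen threshold pair.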

4.3. Evaluation on Artificial Datasets

In practice, two cases make edge detection a challenge. The first is objects embedded in background clutter, and the second is weak edges caused by spectral mixture or appearing in only a small subset of bands. We therefore constructed two artificial images, AI 1 and AI 2, to evaluate the performance of our proposed edge detection approach under these two cases.


A. Experiment on AI 1

In this experiment, we investigated the behaviors of the five edge-detection methods when the objects in an image are embedded in background clutter. The tested image AI 1 is composed of one object (in the middle) and a background, where the object is located between column 60 and column 120. Thus, edge pixels are located at columns 60 and 120. To simulate different levels of background clutter, we added Gaussian noise with various Signal-to-Noise Ratios (SNR) to AI 1. The object noise is fixed to SNR = 16 dB, whereas the background noise varies from 0.1 to 3.0 dB. Figure 4a shows the content of the synthetic image without noise. Figure 4b shows a channel for the background with SNR = 0.2 dB.
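The noise-corruption step can be reproduced with a routine like the one below. It is a sketch under one common convention — SNR in dB defined as 10·log10 of the ratio of mean signal power to noise power — since the paper does not spell out its exact SNR definition; the function name is ours:

```python
import numpy as np

def add_noise_snr(signal, snr_db, seed=None):
    """Add white Gaussian noise so that 10*log10(P_signal / P_noise) = snr_db,
    with powers taken as mean squared values."""
    rng = np.random.default_rng(seed)
    signal = np.asarray(signal, dtype=float)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / 10.0 ** (snr_db / 10.0)
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
```

For AI 1, the object region would be corrupted at 16 dB and the background region at a level between 0.1 and 3.0 dB.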


Figure 4. A channel in the AI 1 dataset: (a) the content of the synthetic image without noise (the object is located in the middle); and (b) a corrupted image with SNR = 16 dB (where SNR stands for Signal-to-Noise Ratio) in the object region and SNR = 0.2 dB in the background region. The dark color indicates low intensity.

Figure 5 shows an edge-detection example for AI 1, where the background noise is SNR = 0.2 dB. The edge-strength maps and corresponding binary edge maps are shown in the first row and the second row, respectively, of Figure 5. For Di Zenzo's method, the strength of the edge pixels is only slightly larger than that of the background (Figure 5b). Although we can set a pair of thresholds to separate edge and non-edge pixels, severe false positive detections cannot be avoided, as shown in Figure 5g. For the RCMG, the strength of noisy pixels is larger than that of edge pixels, as shown in Figure 5c; thus, the boundaries of the object were concealed by the noisy background (Figure 5h). According to Figure 5d,i, the results of Drewniok's method are better than those of Di Zenzo's and the RCMG methods, but the edges are blurred to some extent (Figure 5d). Therefore, the object's edges were blurred with burrs, as shown in Figure 5i. For the SSSDE, the strength of almost all the edge pixels is larger than that of the noise pixels, as depicted in Figure 5e. However, some false edges are detected inside the object, as shown in Figure 5j. Compared to the other four methods, the better performance of the presented detector is clearly perceivable, and the strength of the edge pixels is larger than that of the noise pixels (Figure 5a). It is therefore easy to set appropriate thresholds to separate the edge and non-edge pixels, as shown in Figure 5f.



Figure 5. Edge-strength maps and corresponding binary edges of AI 1 with SNR = 0.2 dB. Edge-strength maps of: (a) GEDHSI; (b) Di Zenzo's method; (c) the Robust Color Morphological Gradient (RCMG) method; (d) Drewniok's method; and (e) the modified Simultaneous Spectral/Spatial Detection of Edges (SSSDE) method. (f–j) Binary edge maps of the corresponding results of (a–e).

The robustness of the GEDHSI to background clutter is further demonstrated in Figure 6 under different background-noise levels, from 0.1 dB to 3 dB. From Figure 6b, we can find that the GEDHSI produces better Rec values at almost all noise levels. This indicates the successful detection of almost all the edges, which is also confirmed by Figure 5f. Although the Prec values obtained by the GEDHSI are occasionally slightly worse than those of Di Zenzo's method, they increase quickly as the SNR is raised, as shown in Figure 6a. For the F-measure in Figure 6c, we can see that under high background-noise levels (SNR approximately 0.1 dB), the GEDHSI produced an F-measure value larger than 0.9, whereas the F-measure values produced by the other four methods are all less than 0.7. When the SNR surpasses 0.9 dB, the F-measure value for the GEDHSI reaches approximately 1.0, whereas for the other four methods the F-measure values increase only slowly. It is obvious that the other four methods are weaker than the GEDHSI in addressing such severe background clutter. With severe noise interference, the large difference between a noisy pixel and its neighbors usually leads to a high gradient magnitude for that noisy pixel, a magnitude even larger than that of the true edge pixels. As a result, this inevitably causes incorrect edges because of the failure to make full use of the spatial information of the image, especially the inadequate consideration of the relationships among the pixels within a local window.
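This failure mode can be reproduced with a tiny hand-made example: a single strong noise sample produces a larger finite-difference response than a genuine weak step edge, so any detector that thresholds the raw gradient magnitude must either keep the noise or lose the edge. The numbers below are illustrative only, not data from the paper:

```python
import numpy as np

# One image row: flat background with a noise spike at index 2,
# and a weak step edge (0 -> 0.5) between indices 5 and 6.
row = np.array([0.0, 0.0, 1.2, 0.0, 0.0, 0.0, 0.5, 0.5, 0.5])

# Central-difference gradient magnitude |x[i+1] - x[i-1]| / 2.
grad = np.abs(np.convolve(row, [1, 0, -1], mode="same")) / 2

spike_response = grad[1:5].max()  # strongest response around the noise spike
edge_response = grad[5:8].max()   # strongest response at the true edge
# The spike outscores the edge, so no single global threshold can
# keep the edge while rejecting the noise.
```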

Figure 6. Performance evaluation on AI 1 under various noise levels. Performance of: (a) Precision; (b) Recall; and (c) F-measure.

B. Experiment on AI 2

In this experiment, we analyze the behaviors of the five edge-detection methods in dealing with weak edges between objects. For HSIs, weak edges are usually caused by spectral mixture between objects or by edges appearing in only a small subset of bands [17]. To tackle this problem, we generated a well-designed image AI 2 with a spatial dimension of 180 × 180 pixels divided into nine regions. The size of each region is 60 × 60 pixels. We first selected three types of endmembers from the USGS Spectral Library [60], as shown in Figure 7a.



Figure 7. Materials for AI 2: (a) curves of the endmembers; (b) diagram of AI 2; and (c) a band of AI 2 with SNR = 10 dB.

According to Figure 7a, the two endmembers A and C are very similar in the range of 0.6 µm to 1.4 µm, whereas the two endmembers B and C are very similar in the range of 1.5 µm to 2.1 µm. A diagram of AI 2 is shown in Figure 7b, where the symbol of each region denotes from which endmembers the region is formed. For example, the "A" of Region A means the region was generated using only endmember A, Region AB was generated using endmembers A and B, whereas Region BA was composed of endmembers B and A, etc. Except for the three regions located on the principal diagonal, all regions were mixed from different endmembers. The mixing ratio was fixed at 0.4:0.6 for these regions. We then added Gaussian noise to AI 2, resulting in a varying SNR between 1 and 30 dB. Figure 7c shows an example band with SNR = 10 dB. Due to the spectral mixture and the spectral similarity in many bands, there are several weak edges. Under such conditions, weak edges may easily vanish in the noise when white Gaussian noise is added.
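The mixed regions can be generated with a simple linear mixing model, the standard way to simulate spectral mixture; the sketch below assumes that model (the paper does not state its mixing equation explicitly) and uses the 0.4:0.6 ratio given in the text. Function and variable names are ours:

```python
import numpy as np

def mix_region(e1, e2, ratio=0.4, size=(60, 60)):
    """A size[0] x size[1] region whose every pixel carries the linear
    mixture ratio * e1 + (1 - ratio) * e2 of two endmember spectra."""
    spectrum = ratio * np.asarray(e1, float) + (1 - ratio) * np.asarray(e2, float)
    return np.broadcast_to(spectrum, size + (spectrum.size,)).copy()
```

Under this reading, Region AB might correspond to `mix_region(A, B)`, Region BA to `mix_region(B, A)`, and the diagonal regions to the pure endmembers (ratio = 1), with noise added afterwards.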
For the noise-degraded AI 2 with an SNR of 10 dB, Figure 8 shows the edge-strength maps (the first row) and their corresponding binary edge maps (the second row) generated by the five methods. As seen from the edge-strength maps generated by Di Zenzo's method (Figure 8b), the RCMG (Figure 8c), Drewniok's method (Figure 8d) and the SSSDE method (Figure 8e), the strength of the upper horizontal and left vertical edge pixels is close to that of the noisy pixels. Therefore, it is difficult to set appropriate thresholds for these edge-strength maps. According to Figure 8g, Di Zenzo's method produces discontinuous edges and severe false edges. Although the RCMG is slightly better than Di Zenzo's method, as shown in Figure 8h, its results still contain some spurious edges. Similar problems exist in the results of the SSSDE, as illustrated in Figure 8j, although the SSSDE generates much better results than the RCMG. Although Drewniok's method has removed almost all the noise, the edges are jagged due to the displacement caused by Gaussian smoothing in each band. In comparison, the intensity of the edge pixels in our result is dramatically larger than that of the noisy pixels, as shown in Figure 8a. Thus, thresholding can easily separate edge and non-edge pixels, as shown in Figure 8f.

Figure 9 shows the performance comparison of the five methods under noise from 1.0 dB to 30 dB. As seen in Figure 9c, due to the simplicity of the tested image, all five methods produced good results when the level of noise is low (SNR > 10 dB). With the increase of the noise level, especially when SNR < 7 dB, the F-measure values of the four compared algorithms decreased rapidly, while the proposed GEDHSI still produced much better results. Furthermore, the GEDHSI obtained better results in both Prec and Rec values than Di Zenzo's, the RCMG and the SSSDE methods, as illustrated in Figure 9a,b. This demonstrates that the GEDHSI can detect the edges in HSIs with higher accuracy in complicated environments. For Drewniok's method, although its Rec values are better than those of the GEDHSI to some extent,

Although Drewniok’s method has removed almost all noise, the edges are hackly its Prec values are remarkably worse. This implies that Drewniok’s method is easily affected by the due to the displacement caused by Gaussian smoothing in each band. For comparison, the intensity of noise,the edge which pixels is also is confirmeddramatically by larger the F-measure than that shownof the innoisy Figure pixels,9c. Moreover,as shown thein Figure jagged 8a. boundaries Thus, betweenthresholding different can easily edges classify in Figure edge8i alsoand non-edge confirm thatpixels, Drewniok’s as shown in method Figure 8f. produces many false edges. To this end, we can reasonably conclude that GEDHSI is more robust to severe noise and weak edges than the other four methods.


Figure 8. Edge-strength maps and corresponding binary edges of AI 2 with SNR = 10 dB. Edge-strength maps of: (a) GEDHSI; (b) Di Zenzo's method; (c) the RCMG method; (d) Drewniok's method; and (e) the SSSDE method. (f–j) Binary edge maps of the corresponding results of (a–e) by applying automatic hysteresis thresholding and thinning processes.

in the complicated(f) environments.(g) For Drewniok’s(h) method, although(i )its Rec values are better(j) than that of the GEDHSI to some extent, its Prec values are remarkably worse. This implies that Figure 8. Edge-strength maps and corresponding binary edges of AI 2 with SNR = 10 dB. Drewniok’sFigure 8. Edge-strength method is easily maps affected and corresponding by the noise, binary which edges is also of AI confirmed 2 with SNR by = 10the dB. F-measure Edge-strength shown Edge-strength maps of: (a) GEDHSI; (b) Di Zenzo’s method; (c) the RCMG method; (d) Drewniok’s in mapsFigure of: 9c. (a )Moreover, GEDHSI; (bthe) Di jagged Zenzo’s boundaries method; ( cbetween) the RCMG different method; edges (d) Drewniok’sin Figure 8i method; also confirm and method; and (e) SSSDE method. (f–j) Binary edge maps of corresponding results of (a–e) by (e) SSSDE method. (f–j) Binary edge maps of corresponding results of (a–e) by applying automatic that applyingDrewniok’s automatic method hysteresis produces thresh manyolding false and edges. thinning To processes. this end, we can reasonably conclude that GEDHSIhysteresis is more thresholding robust to and severe thinning noise processes. and weak edges than the other four methods. Figure 9 shows the performance comparison of the five methods under noise from 1.0 dB to 30 dB. As seen in Figure 9c, due to the simplicity of the tested image, all five methods produced good results when the level of noise is low (SNR > 10 db). With the increase of noise level, especially when SNR < 7 db, the F-measure values of the four compared algorithms decreased rapidly while the proposed GEDHSI still produce much better results. Furthermore, GEDHSI obtained better results on both Prec and Rec values than Di Zenzo’s, RCMG and SSSDE methods as illustrated in Figure 9a,b. This demonstrates the GEDHSI can detect the edges in the HSIs with higher accuracy in the complicated environments. 
Figure 9. Performance evaluation on AI 2 under various noise levels: (a) precision; (b) recall; and (c) F-measure.

4.4. Evaluation on Real World Datasets

4.4.1. Experiments on Remote Sensing Datasets

For remote sensing images, due to the complexity of land-cover surface features, illumination and atmospheric effects, as well as spectral mixing of pixels, edges are often too deteriorated for proper detection. More seriously, there usually exist some noisy bands in remote sensing HSIs, which makes edge detection very difficult. To validate the performance of edge detection on remote sensing HSIs, we selected two remote sensing HSIs, the Farmland and Washington DC Mall images, as shown in Figures 10 and 12, to conduct experiments in this section.

A. Experiment on the Farmland HSIs

Figure 10 is the Farmland image obtained by AVIRIS. The AVIRIS instrument contains 224 different detectors, where the spectral band width is approximately 0.01 µm, allowing it to cover

the entire range between 0.4 µm and 2.5 µm. There are several low SNR bands in the AVIRIS image, and one example is shown in Figure 10a, where the edges are nearly invisible. Even in some noiseless bands, as shown in Figure 10b, the edges are still very vague. Composited with three low-correlation bands, the edge structure becomes much clearer, as shown in Figure 10c. This is mainly due to the effective utilization of spectral information. In addition, as shown in Figure 10c, the boundaries between regions are clearer, which makes the image appropriate for the evaluation of edge-detection algorithms. However, in the composited image, the pixels inside the regions are heavily textured, which makes edge detection a difficult problem.
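The three-band composite in Figure 10c stacks bands with low mutual correlation so that each display channel carries complementary information. A sketch of how such a composite might be assembled (the toy cube and the per-band contrast stretch are illustrative assumptions; the paper's band-selection procedure itself is not reproduced, only the band indices from the caption):

```python
import numpy as np

def false_color_composite(cube, bands):
    """Stack three HSI bands into an RGB image, stretching each band to [0, 1]."""
    channels = []
    for b in bands:
        band = cube[:, :, b].astype(float)
        lo, hi = band.min(), band.max()
        channels.append((band - lo) / max(hi - lo, 1e-12))  # per-band contrast stretch
    return np.dstack(channels)

# Toy cube: 4 x 4 pixels, 200 bands of random reflectance values.
rng = np.random.default_rng(0)
cube = rng.random((4, 4, 200))
rgb = false_color_composite(cube, bands=(25, 60, 128))  # band indices as in Figure 10c
```

Stretching each band independently keeps a dim band from being visually swamped by a bright one, which is what makes the region boundaries in the composite easier to judge by eye.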


Figure 10. Farmland image: (a) noisy image in gray-scale (band 1); (b) noiseless image in gray-scale (band 87); and (c) three-band false color composite (bands 25, 60, 128).

Figure 11 illustrates the results of edge detection from the five methods. Overall, the better performance of the GEDHSI on this image is clearly perceivable. The proposed GEDHSI has produced continuous, compact and clear edges, where most edge structures are well preserved, as shown in Figure 11a. For Di Zenzo's method, almost all of the produced edges are broken, as shown in Figure 11b. Drewniok’s method generates many spurious edges within regions, and some line edges are distorted where they should be straight, according to Figure 11d. RCMG and SSSDE produce results similar to those of GEDHSI. However, there are many grainy edges, with some fine edge structures missing or distorted to some degree, as shown in Figure 11c,e. Specifically, as labeled by the red rectangles, Di Zenzo’s method has detected seriously broken edges. Drewniok’s method generates many small and severe spurious edges and fails to capture the main structure.
Moreover, Drewniok’s method results in distorted and flexuous edges. The RCMG produces some grainy and slightly distorted edges. In contrast, the edges produced by the proposed GEDHSI are straight and continuous.




Figure 11. The binary edge maps of the Farmland image by: (a) GEDHSI; (b) Di Zenzo’s method; (c) the RCMG method; (d) Drewniok’s method (σ = 1.0); and (e) SSSDE method.

B. Experiment on the Washington DC Mall HSIs

Figure 12 is an image obtained by HYDICE, which has 210 bands ranging from 0.4 µm to 2.4 µm and covers the visible and (near) infrared spectrum. Bands in the 0.9 µm and 1.4 µm regions, where the atmosphere is opaque, have been omitted from the dataset, leaving 191 bands for the experiments. As shown in Figure 12a, the dataset contains complex and cluttered landcovers. Especially in the middle of Figure 12a, as highlighted within
a dotted rectangle, various roofs, roads, trails, grass, trees and shadow are cluttered. For better visual effect, an enlarged version of this part of the image is shown in Figure 12b. This part is used for testing as it contains abundant tiny edges which are difficult to detect completely. Examples of these tiny structures can be found in the two ellipses in Figure 12b, where local details such as the boxes on the roof and the footpath are clearly visible yet hard to detect.


Figure 12. False color representation of the Washington DC Mall hyperspectral data: (a) the whole data; and (b) the detail display of the dotted rectangle region of (a) with two regions emphasized in ellipses to show local tiny structures.

Figure 13 illustrates the edge results for Figure 12b produced by the five methods. According to the criteria of Reference [54], a good detection should keep good edge resolution and connectivity. The superiority of the proposed method on these criteria is clearly visible, as shown in


Figure 13. The edge image of Di Zenzo’s method is poor and contains many broken edges, as shown in Figure 13b. The RCMG and SSSDE are better than Di Zenzo’s method, but they still produce many grainy and spurious edges (Figure 13c,e). Compared with RCMG and SSSDE, the edge map produced by Drewniok’s method in Figure 13d contains more grainy edges within regions and more spurious edges. In contrast, the edges in Figure 13a produced by our approach are clearer and of better connectivity, where most of the fine edge structures from different land covers are successfully detected.


Figure 13. The binary edge maps of the Washington DC Mall by: (a) GEDHSI; (b) Di Zenzo’s method; (c) the RCMG method; (d) Drewniok’s method (σ = 1.0); and (e) SSSDE method.

For the edge resolution criterion, the better performance of the proposed method is also obviously perceivable. In the upper left ellipse of Figure 13b,d, Di Zenzo’s and RCMG methods have missed the edge structures, while the proposed GEDHSI preserves such tiny structures very well. In the lower right corner ellipse of Figure 13c–e, the RCMG, Drewniok’s, and SSSDE methods not only fail to obtain continuous edges but also miss one side of the edge. On the contrary, the proposed GEDHSI can detect complete edges at both sides of the path. Overall, the proposed approach is more effective than the other four methods for detecting edges from these HSIs.

4.4.2. Experiments on Ground Scene Datasets

Both of the two tested real ground scene images cover the wavelength range from 0.4 µm to 0.72 µm, and are sampled at 0.01 µm intervals, resulting in 31 bands.
For simplicity, we selected two images, Scene 5 and Scene 7, as illustrated, for the following two reasons: First, as shown in Figure 14, the spatial resolution is higher than that of the four other datasets and they contain many man-made objects, thus we know exactly where the edges occur. Therefore, they are rather appropriate for performance evaluation. Second, there are heavily textured objects in the datasets, such as the objects surrounded by a heavily textured wall in Scene 5 (Figure 14a), and finely textured walls and grass in Scene 7 (Figure 14b). Edge detection under these circumstances is still a challenge.



Figure 14. RGB representations of two hyperspectral scene datasets: (a) Scene 5; and (b) Scene 7.

A. Experiment on the Scene 5 dataset

For the Scene 5 dataset, Figure 15 illustrates the binary edge maps generated by the five methods. Obviously, Di Zenzo’s edge image is poor and contains many small and spurious edges, and the details of the wall are poorly detected, as shown in Figure 15b. Compared to Figure 15b, the RCMG method can detect more edges on the wall, while there are still some spurious edges, especially on the chair, under the ball and at the bottom right of the wall, due to the variance in intensity of the surface, as shown in Figure 15c. Drewniok’s method performs better than Di Zenzo’s method on the wall, but it has produced more spurious and grainy edges than either the RCMG or GEDHSI. Some fine edges on the wall grid and racket bag are missing (Figure 15d). The performance of SSSDE is better than that of Di Zenzo’s and Drewniok’s on the left wall, while it misses many edges on the blue and yellow boxes as well as the basketball, as shown in Figure 15e. In contrast, almost complete edges of most objects are attained by the proposed method, as shown in Figure 15a. The details in the circle in Figure 15 also illustrate the efficacy of the proposed method, where the GEDHSI method has generated the best edge results in keeping the real structure.





Figure 15. The binary edge maps generated for the Scene 5 dataset by: (a) GEDHSI; (b) Di Zenzo’s method; (c) the RCMG method; (d) Drewniok’s method (σ = 1.0); and (e) SSSDE.

B. Experiment on Scene 7

Figure 16 shows the binary edge maps of Scene 7 produced by the five methods. According to Figure 16b, Di Zenzo’s method produces many spurious edges and performs poorly on connectivity. The RCMG preserves most significant spatial structures, but it also brings many grainy edges, especially in the grass region, as shown in Figure 16c. Drewniok’s method can detect many edges on the wall and grass, as depicted in Figure 16d, but the interval between edges on the windows is enlarged due to edge displacement caused by Gaussian smoothing. The connectivity of the edges generated by SSSDE is better than that of Di Zenzo’s, while some edges between the upper and lower windows are missed, as can be seen in Figure 16e. Comparatively, the better performance of the proposed
method is perceivable. Most of the fine edge structures, such as the edges located at the windows, are attained, and the edges are clear and connective, as shown in Figure 16a. More specifically, as indicated by the circle at the bottom of the image, Di Zenzo’s and Drewniok’s methods produce undesirable edges for the grass in the circle. The RCMG almost misses, and the SSSDE totally misses, the edge structure. Comparatively, the fine structure is well preserved by the proposed GEDHSI method. At the bottom right ellipse, Drewniok’s method has missed both the edge of the left sidewalls and the edge between the middle white door and the dark shadow. On the contrary, the proposed GEDHSI can again detect the correct edges without missing the main structure of that region.



Figure 16. The binary edge maps generated for Scene 7 by: (a) GEDHSI; (b) Di Zenzo’s method; (c) the RCMG method; (d) Drewniok’s method (σ = 1.0); and (e) SSSDE.

Overall, the comparative experiments above have validated the superiority of the proposed GEDHSI for edge detection in HSIs. The proposed method is more robust to background clusters and weak edges, as illustrated by the experiments on artificial images. For real-world images, it is robust to noisy bands, keeping complete and reasonable spatial structures, and performs well on edge localization and resolution. As a result, GEDHSI is more suitable for HSI edge detection than other state-of-the-art multi-band approaches.

4.5. Comparison of Runtime

In this section, we compare the computational efficiency of the above five algorithms. The mean computational time (in seconds) required by each algorithm over five independent runs on the two tested images, the Farmland and Washington DC Mall, is presented for comparison. As seen in Table 2, RCMG takes the least time, and one main reason is its C++-based implementation, in comparison with the MATLAB-based implementations of the other four methods. The proposed GEDHSI, unfortunately, consumes the longest runtime as a whole. The long time consumption is largely caused by the iterative computation in the traveling procedure. For each particle, the repeated calculation of the gravitational force from all particles in dataset D leads to a heavy runtime burden. How to speed up the process and improve the efficiency of GEDHSI will be investigated in the future.

Table 2. The time efficiency evaluation.

Image                GEDHSI 1    Di Zenzo    RCMG 2    Drewniok    SSSDE 3
Farmland             32.17 s     15.65 s     4.04 s    13.55 s     21.53 s
Washington DC Mall   36.56 s     22.45 s     4.81 s    14.28 s     26.75 s

1 GEDHSI (Gravitation-Based Edge Detection in Hyperspectral Image); 2 RCMG (Robust Color Morphological Gradient); 3 SSSDE (modified Simultaneous Spectral/Spatial Detection of Edges).
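The runtime gap for GEDHSI traces back to the traveling procedure: at every iteration, the attraction on each particle from all N particles must be recomputed, i.e. O(N²) force evaluations per iteration. A vectorized sketch of that inner step (the inverse-square law, the unit masses, and the softening term `eps` are illustrative assumptions, not the paper's exact force definition):

```python
import numpy as np

def pairwise_forces(X, masses, eps=1e-6):
    """Net 'gravitational' pull on each particle from all others in feature space.

    X: (N, d) particle positions in the joint spatial-spectral space.
    Returns an (N, d) array of net force vectors; cost is O(N^2 * d).
    """
    diff = X[None, :, :] - X[:, None, :]      # (N, N, d) pairwise displacements
    dist2 = (diff ** 2).sum(-1) + eps         # softened squared distances
    np.fill_diagonal(dist2, np.inf)           # no self-attraction
    w = masses[None, :] / dist2 ** 1.5        # inverse-square magnitude over distance
    return (w[:, :, None] * diff).sum(axis=1) # sum contributions from all particles

# Three particles: the one at the origin is pulled toward the other two.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
F = pairwise_forces(X, masses=np.ones(3))
```

Even fully vectorized, the (N, N, d) displacement tensor makes memory and time grow quadratically with the number of pixels, which is consistent with the runtimes reported in Table 2.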

5. Conclusions

In recent years, various heuristic and physically inspired image-understanding methods have been developed. To extract the edges from HSIs, a novel edge-detection method, namely Gravitation-based Edge Detection on HSIs (GEDHSI), is presented in this paper. In GEDHSI, each pixel is modeled as a movable celestial object in the joint spatial-spectral feature space, where all objects travel under the gravitational field force. Due to their relative positions, the edge pixels and non-edge pixels can be separated by computing their potential energy fields. In addition to its simplicity and generality, GEDHSI has promising theoretical properties that make it a solid tool for edge detection. The performance of the proposed algorithm has been benchmarked against four state-of-the-art methods, namely the Di Zenzo’s, Drewniok’s, RCMG, and SSSDE detectors. Experiments on a variety of datasets have validated the efficacy of the proposed algorithm in edge detection for HSIs under various levels of noise. Future work will focus on further improving the efficiency of the approach and investigating the relationship between the edge scale and the parameter settings for edge results in the multiscale space.
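As a rough illustration of the potential-energy separation summarized above (the plain Newtonian potential, unit masses, and softening term below are assumptions for illustration; GEDHSI's exact energy definition and normalization are not reproduced here):

```python
import numpy as np

def potential_energy(X, masses, eps=1e-6):
    """Gravitational potential energy of each particle w.r.t. all others.

    Particles in sparse regions of feature space (edge candidates) end up
    with energy closer to zero than particles deep inside dense clusters.
    """
    diff = X[None, :, :] - X[:, None, :]
    dist = np.sqrt((diff ** 2).sum(-1)) + eps
    np.fill_diagonal(dist, np.inf)                # exclude self-interaction
    return -(masses[None, :] / dist).sum(axis=1)  # U_i = -sum_j m_j / r_ij

# One tight cluster plus one isolated point: the isolated point gets the
# least negative (largest) potential energy, flagging it as an outlier.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
U = potential_energy(X, masses=np.ones(4))
```

Thresholding such an energy field is what allows edge pixels, which sit between dense homogeneous regions, to be told apart from interior pixels.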

Acknowledgments: This work was supported by Chinese Natural Science Foundation Projects (41471353) and the National Key Research and Development Program of China (2016YFB0501501).

Author Contributions: Genyun Sun, Aizhu Zhang, and Peng Wang conceived, designed and performed the experiments, analyzed the data, and wrote the paper. Jinchang Ren, Jingsheng Ma, and Yuanzhi Zhang helped organize and revise the paper. Xiuping Jia helped analyze the experimental data and improve the presentation of the second version.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J. Spectral-spatial classification of hyperspectral imagery based on partitional clustering techniques. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2973–2987. [CrossRef] Remote Sens. 2017, 9, 592 21 of 23

2. Youn, S.; Lee, C. Edge Detection for Hyperspectral Images Using the Bhattacharyya Distance. In Proceedings of the 2013 International Conference on Parallel and Distributed Systems, Seoul, South Korea, 15–18 December 2013; pp. 716–719.
3. Kiani, A.; Sahebi, M.R. Edge detection based on the Shannon Entropy by piecewise thresholding on remote sensing images. IET Comput. Vis. 2015, 9, 758–768. [CrossRef]
4. Weng, Q.; Quattrochi, D.A. Urban Remote Sensing; CRC Press: Boca Raton, FL, USA, 2006.
5. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698. [CrossRef]
6. Marr, D.; Hildreth, E. Theory of edge detection. Proc. R. Soc. Lond. B Biol. Sci. 1980, 207, 187–217. [CrossRef] [PubMed]
7. Smith, S.M.; Brady, J.M. SUSAN—A new approach to low level image processing. Int. J. Comput. Vis. 1997, 23, 45–78. [CrossRef]
8. Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277. [CrossRef] [PubMed]
9. Priego, B.; Souto, D.; Bellas, F.; Duro, R.J. Hyperspectral image segmentation through evolved cellular automata. Pattern Recogn. Lett. 2013, 34, 1648–1658. [CrossRef]
10. Tarabalka, Y.; Chanussot, J.; Benediktsson, J.A. Segmentation and classification of hyperspectral images using watershed transformation. Pattern Recogn. 2010, 43, 2367–2379. [CrossRef]
11. Bakker, W.; Schmidt, K. Hyperspectral edge filtering for measuring homogeneity of surface cover types. ISPRS J. Photogramm. 2002, 56, 246–256. [CrossRef]
12. Kanade, T. Image Understanding Research at CMU. Available online: http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADP000103 (accessed on 28 March 2017).
13. Hedley, M.; Yan, H. Segmentation of color images using spatial and color space information. J. Electron. Imaging 1992, 1, 374–380. [CrossRef]
14. Fan, J.; Yau, D.K.; Elmagarmid, A.K.; Aref, W.G. Automatic image segmentation by integrating color-edge extraction and seeded region growing. IEEE Trans. Image Process. 2001, 10, 1454–1466. [PubMed]
15. Garzelli, A.; Aiazzi, B.; Baronti, S.; Selva, M.; Alparone, L. Hyperspectral image fusion. In Proceedings of the Hyperspectral 2010 Workshop, Frascati, Italy, 17–19 March 2010.
16. Lei, T.; Fan, Y.; Wang, Y. Colour edge detection based on the fusion of hue component and principal component analysis. IET Image Process. 2014, 8, 44–55. [CrossRef]
17. Di Zenzo, S. A note on the gradient of a multi-image. Comput. Vis. Graph. Image Process. 1986, 33, 116–125. [CrossRef]
18. Cumani, A. Edge detection in multispectral images. CVGIP Graph. Model. Image Process. 1991, 53, 40–51. [CrossRef]
19. Drewniok, C. Multi-spectral edge detection. Some experiments on data from Landsat-TM. Int. J. Remote Sens. 1994, 15, 3743–3765. [CrossRef]
20. Jin, L.; Liu, H.; Xu, X.; Song, E. Improved direction estimation for Di Zenzo’s multichannel image gradient operator. Pattern Recogn. 2012, 45, 4300–4311. [CrossRef]
21. Trahanias, P.E.; Venetsanopoulos, A.N. Color edge detection using vector order statistics. IEEE Trans. Image Process. 1993, 2, 259–264. [CrossRef] [PubMed]
22. Toivanen, P.J.; Ansamäki, J.; Parkkinen, J.; Mielikäinen, J. Edge detection in multispectral images using the self-organizing map. Pattern Recogn. Lett. 2003, 24, 2987–2994. [CrossRef]
23. Haralick, R.M.; Sternberg, S.R.; Zhuang, X. Image analysis using mathematical morphology. IEEE Trans. Pattern Anal. Mach. Intell. 1987, PAMI-9, 532–550. [CrossRef]
24. Evans, A.N.; Liu, X.U. A morphological gradient approach to color edge detection. IEEE Trans. Image Process. 2006, 15, 1454–1463. [CrossRef] [PubMed]
25. Lopez-Molina, C.; De Baets, B.; Bustince, H. Quantitative error measures for edge detection. Pattern Recogn. 2013, 46, 1125–1139. [CrossRef]
26. Wang, Y.; Niu, R.; Yu, X. Anisotropic diffusion for hyperspectral imagery enhancement. IEEE Sens. J. 2010, 10, 469–477. [CrossRef]
27. Dinh, C.V.; Leitner, R.; Paclik, P.; Loog, M.; Duin, R.P. SEDMI: Saliency based edge detection in multispectral images. Image Vis. Comput. 2011, 29, 546–556. [CrossRef]

Remote Sens. 2017, 9, 592

28. Fossati, C.; Bourennane, S.; Cailly, A. Edge Detection Method Based on Signal Subspace Dimension for Hyperspectral Images; Springer: New York, NY, USA, 2015.
29. Paskaleva, B.S.; Godoy, S.E.; Jang, W.Y.; Bender, S.C.; Krishna, S.; Hayat, M.M. Model-Based Edge Detector for Spectral Imagery Using Sparse Spatiospectral Masks. IEEE Trans. Image Process. 2014, 23, 2315–2327. [CrossRef] [PubMed]
30. Resmini, R.G. Simultaneous spectral/spatial detection of edges for hyperspectral imagery: The HySPADE algorithm revisited. In Proceedings of the SPIE Defense, Security, and Sensing, International Society for Optics and Photonics, Baltimore, MD, USA, 23–27 April 2012; Volume 83930.
31. Resmini, R.G. Hyperspectral/spatial detection of edges (HySPADE): An algorithm for spatial and spectral analysis of hyperspectral information. In Proceedings of the SPIE—The International Society for Optical Engineering, Orlando, FL, USA, 12–16 April 2004; Volume 5425, pp. 433–442.
32. Lin, Z.; Jiang, J.; Wang, Z. Edge detection in the feature space. Image Vis. Comput. 2011, 29, 142–154. [CrossRef]
33. Jia, X.; Richards, J. Cluster-space representation for hyperspectral data classification. IEEE Trans. Geosci. Remote Sens. 2002, 40, 593–598.
34. Imani, M.; Ghassemian, H. Feature space discriminant analysis for hyperspectral data feature reduction. ISPRS J. Photogramm. 2015, 102, 1–13. [CrossRef]
35. Gomez-Chova, L.; Calpe, J.; Camps-Valls, G.; Martin, J.; Soria, E.; Vila, J.; Alonso-Chorda, L.; Moreno, J. Feature selection of hyperspectral data through local correlation and SFFS for crop classification. In Proceedings of the Geoscience and Remote Sensing Symposium, Toulouse, France, 21–25 July 2003; Volume 1, pp. 555–557.
36. Zhang, Y.; Wu, K.; Du, B.; Zhang, L.; Hu, X. Hyperspectral Target Detection via Adaptive Joint Sparse Representation and Multi-Task Learning with Locality Information. Remote Sens. 2017, 9, 482. [CrossRef]
37. Rashedi, E.; Nezamabadi-Pour, H. A stochastic gravitational approach to feature based color image segmentation. Eng. Appl. Artif. Intell. 2013, 26, 1322–1332. [CrossRef]
38. Sen, D.; Pal, S.K. Improving feature space based image segmentation via density modification. Inf. Sci. 2012, 191, 169–191. [CrossRef]
39. Peters, R.A. A new algorithm for image noise reduction using mathematical morphology. IEEE Trans. Image Process. 1995, 4, 554–568. [CrossRef] [PubMed]
40. Perona, P.; Malik, J. Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 629–639. [CrossRef]
41. Olenick, R.P.; Apostol, T.M.; Goodstein, D.L. The Mechanical Universe: Introduction to Mechanics and Heat; Cambridge University Press: Cambridge, UK, 2008.
42. Sun, G.Y.; Zhang, A.Z.; Yao, Y.J.; Wang, Z.J. A novel hybrid algorithm of gravitational search algorithm with genetic algorithm for multi-level thresholding. Appl. Soft Comput. 2016, 46, 703–730. [CrossRef]
43. De Mesquita Sá Junior, J.J.; Ricardo Backes, A.; César Cortez, P. Color texture classification based on gravitational collapse. Pattern Recogn. 2013, 46, 1628–1637. [CrossRef]
44. Peng, L.; Yang, B.; Chen, Y.; Abraham, A. Data gravitation based classification. Inf. Sci. 2009, 179, 809–819. [CrossRef]
45. Cano, A.; Zafra, A.; Ventura, S. Weighted data gravitation classification for standard and imbalanced data. IEEE Trans. Cybern. 2013, 43, 1672–1687. [CrossRef] [PubMed]
46. Peng, L.; Zhang, H.; Yang, B.; Chen, Y. A new approach for imbalanced data classification based on data gravitation. Inf. Sci. 2014, 288, 347–373. [CrossRef]
47. Ilc, N.; Dobnikar, A. Generation of a clustering ensemble based on a gravitational self-organising map. Neurocomputing 2012, 96, 47–56. [CrossRef]
48. Lopez-Molina, C.; Bustince, H.; Fernández, J.; Couto, P.; De Baets, B. A gravitational approach to edge detection based on triangular norms. Pattern Recogn. 2010, 43, 3730–3741. [CrossRef]
49. Sun, G.; Liu, Q.; Liu, Q.; Ji, C.; Li, X. A novel approach for edge detection based on the theory of universal gravity. Pattern Recogn. 2007, 40, 2766–2775. [CrossRef]
50. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [CrossRef]
51. Sretenskii, L.N. Theory of the Newton Potential; Moscow-Leningrad: Moskva, Russia, 1946.
52. Hulthén, L.; Sugawara, M. Encyclopaedia of Physics; Springer: Berlin, Germany, 1957; Volume 39.

53. Medina-Carnicer, R.; Munoz-Salinas, R.; Yeguas-Bolivar, E.; Diaz-Mas, L. A novel method to look for the hysteresis thresholds for the Canny edge detector. Pattern Recogn. 2011, 44, 1201–1211. [CrossRef]
54. Shui, P.L.; Zhang, W.C. Noise-robust edge detector combining isotropic and anisotropic Gaussian kernels. Pattern Recogn. 2012, 45, 806–820. [CrossRef]
55. Rink, T.; Antonelli, P.; Whittaker, T.; Baggett, K.; Gumley, L.; Huang, A. Introducing HYDRA: A multispectral data analysis toolkit. Bull. Am. Meteorol. Soc. 2007, 88, 159–166. [CrossRef]
56. Green, R.O.; Eastwood, M.L.; Sarture, C.M.; Chrien, T.G.; Aronsson, M.; Chippendale, B.J.; Faust, J.A.; Pavri, B.E.; Chovit, C.J.; Solis, M. Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS). Remote Sens. Environ. 1998, 65, 227–248. [CrossRef]
57. Landgrebe, D. Hyperspectral image data analysis. IEEE Signal Process. Mag. 2002, 19, 17–28. [CrossRef]
58. Foster, D.H. Hyperspectral Images of Natural Scenes 2002. Available online: http://personalpages.manchester.ac.uk/staff/d.h.foster/Hyperspectral_images_of_natural_scenes_02.html (accessed on 10 June 2017).
59. González-Hidalgo, M.; Massanet, S.; Mir, A.; Ruiz-Aguilera, D. On the pair uninorm-implication in the morphological gradient. In Computational Intelligence; Madani, K., Correia, A.D., Rosa, A., Filipe, J., Eds.; Springer: Cham, Switzerland, 2015; Volume 577, pp. 183–197.
60. Clark, R.N.; Swayze, G.A.; Wise, R.; Livo, K.E.; Hoefen, T.M.; Kokaly, R.F.; Sutley, S.J. USGS Digital Spectral Library Splib06a; USGS: Reston, VA, USA, 2007.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).