
UVI Image Segmentation of Auroral Oval: Dual Level Set and Convolutional Neural Network Based Approach


Chenjing Tian, Huadong Du, Pinglv Yang, Zeming Zhou * and Libin Weng

Institute of Meteorology and Oceanology, National University of Defense Technology, Nanjing 210000, China; [email protected] (C.T.); [email protected] (H.D.); [email protected] (P.Y.); [email protected] (L.W.)
* Correspondence: [email protected]; Tel.: +86-177-9850-5716

Received: 4 March 2020; Accepted: 4 April 2020; Published: 9 April 2020

Abstract: The auroral ovals around the Earth's magnetic poles are produced by the collisions between energetic particles precipitating from the solar wind and atoms or molecules in the upper atmosphere. The morphology of the auroral oval acts as an important mirror reflecting the solar wind-magnetosphere-ionosphere coupling process and its intrinsic mechanism. However, the classical level set based segmentation methods often fail to extract an accurate auroral oval from ultraviolet imager (UVI) images with intensity inhomogeneity. The existing methods designed specifically for auroral oval extraction are extremely sensitive to the contour initializations. In this paper, a novel deep feature-based adaptive level set model (DFALS) is proposed to tackle these issues. First, we extract the deep feature from the UVI image with the newly designed convolutional neural network (CNN). Second, with the deep feature, the global energy term and the adaptive time-step are constructed and incorporated into the local information based dual level set auroral oval segmentation method (LIDLSM). Third, we extract the contour of the auroral oval through the minimization of the proposed energy functional. The experiments on the UVI image data set validate the strong robustness of DFALS to different contour initializations. In addition, with the help of the deep feature-based global energy term, the proposed method also obtains higher segmentation accuracy in comparison with state-of-the-art level set based methods.

Keywords: auroral oval; convolutional neural network; image segmentation; level set; adaptive time-step

1. Introduction

The aurora is a natural phenomenon mostly observed around the magnetic polar regions in both hemispheres. The energetic particles precipitating from the solar wind interact with the geomagnetic field and accelerate along magnetic field lines towards Earth [1]. These particles collide with neutral constituents in the upper atmosphere, which generates the most impressive wonders of the Aurora Borealis and Aurora Australis [2]. More importantly, the structure of the aurora is a mirror which reflects the physical process in the magnetosphere and assists scientists in researching the intrinsic mechanism of the solar wind-magnetosphere-ionosphere coupling process [3]. The aurora observation methods can be mainly divided into two categories. One is the ground-based observation system composed of all-sky imagers (ASI), and the other is the space-borne observation system represented by the ultraviolet imager (UVI). At first, the ASI was used to monitor the local auroral structure. During the International Geophysical Year (IGY) (1957–1958), space scientists all around the world coordinated their efforts to record the aurora from many places at the same time. Using the data from the IGY-ASI, Shun-Ichi Akasofu was one of the first to understand that the global aurora in the North was an oval of light surrounding the north magnetic pole: the auroral oval [4]. With the development of aerospace technology, space scientists are able to observe the aurora through the UVI cameras aboard spacecraft. The aurora images captured by these two instruments are shown in Figure 1. Different from the ground-based observation, auroral images captured by UVI instruments in space mainly focus on providing global information, e.g., the overall configuration of the aurora and the spatial distribution of the auroral intensity [5,6].


Figure 1. Aurora images. (a) Aurora images captured by ultraviolet imager (UVI). (b) Aurora images captured by all-sky imagers (ASI).

In the last few years, the extraction of the auroral oval has attracted huge interest from geophysicists due to its usefulness in monitoring the geomagnetic activity and researching the physical mechanism of the solar wind-magnetosphere-ionosphere coupling process [2,3,7–11]. The exact location of the auroral oval's equatorward boundary depends on the magnetospheric electric and magnetic fields. The poleward boundary of the auroral oval is taken to separate the closed magnetic field lines (field lines connected at both ends to the earth) from the polar cap covered by open magnetic field lines (field lines connected from the earth to the solar wind) [12]. Besides, because the interplanetary environment can modulate the structure of the magnetosphere, it influences the morphology of the auroral oval remarkably [13]. For instance, different interplanetary magnetic field (IMF) conditions make the auroral oval drift in different directions [14,15], and the IMF Bz is widely known to control the location of the auroral oval [16]. Therefore, the extraction of the auroral oval is of considerable significance to understand the interplanetary and geomagnetic environment.

The aurora extraction mission can be seen as a segmentation task that divides the UVI image into two groups: the aurora region and the background region. Because of the tediousness of manually detecting the auroral oval from UVI images, many techniques have been applied to extract the auroral oval automatically. However, the UVI images are low in contrast and often corrupted by intense noise and interference, such as cosmic ray tracks, bright stars, and dayglow contamination, which make it difficult to distinguish the auroral oval from the background. Existing auroral oval segmentation methods can be broadly classified into two categories, active contour-free and active contour-based models [17]. Active contour-free models include adaptive minimum error thresholding (AMET) [18], linear least square randomized Hough transform (LLSRHT) [19], maximal similarity-based region merging (MSRM) [20], quasi-elliptical fitting with fuzzy local information C-means clustering result (FCM + QEF) [8], etc. As a pixel-based method, the AMET model cannot obtain a complete auroral oval in the low contrast region. Though the MSRM method can segment the auroral oval region completely, the segmentation result does not agree well with the manually annotated benchmark. The LLSRHT and FCM + QEF models applied elliptic or quasi-elliptic fitting to extract auroral oval boundaries. Therefore, the segmentation results obtained by these two methods often have smooth inner and outer boundaries, which is inconsistent with the fact that the boundaries of some auroral ovals are rugged.

Over the past few years, the active contour model (ACM) proposed by Kass et al. [21] has been widely pursued and applied to image segmentation. The fundamental idea of the ACM is to control the curve to evolve towards its interior normal and stop on the boundary of an object based on the energy minimization model [22]. Relying on the local spatial distance within a local window, Niu et al. introduced a region-based active contour model via local similarity factor (RLSF) to improve the segmentation results of images with unknown noise distribution [23]. Based on an improved signed pressure force (SPF) and local image fitting (LIF), Sun et al. proposed the improved SPF- and LIF-based image segmentation (SPFLIF-IS) algorithm to address the difficulty that inhomogeneous images cannot be segmented quickly and accurately [24]. Moreover, some researchers refine the active contour model for auroral oval segmentation. For instance, Shi et al. introduced the interval type-2 fuzzy sets (IT2FS) into the active contour model to obtain a more accurate segmentation result [11]. As widely investigated active contour models that can address the difficulties related to topological changes during the evolution process, level set methods have been introduced into auroral oval extraction in recent years. X. Yang [25] proposed a shape-initialized and intensity-adaptive level set method (SIIALSM) to extract auroral oval boundaries. Research by P. Yang [26] utilized the distance constraint of inner and outer boundaries in the local information based dual level set method (LIDLSM) to determine the auroral oval regions. However, these two level set based methods are sensitive to image inhomogeneity and evolving curve initializations. If there exist low contrast regions in the UVI images or the initial contour is set improperly, the auroral oval cannot be extracted accurately by these two methods.

In recent years, owing to the rapid development of deep learning techniques, we have witnessed a lot of groundbreaking results [27–29]. As the most commonly used network in deep learning, the convolutional neural network (CNN) has been widely applied to computer vision tasks and gained remarkable popularity. In this work, a CNN based global energy term and adaptive time-step are proposed to address the segmentation failure brought by image inhomogeneity and different contour initializations.

The flow diagram of our work is shown in Figure 2. First, the CNN designed in our work is trained on the training samples extracted from the UVI image data set. Following this, the global energy term and adaptive time step are devised based on the confidence value obtained from the trained CNN. Subsequently, the global energy term and adaptive time-step are introduced into the LIDLSM for the extraction of the auroral oval. Experiments performed on the UVI image data sets demonstrate that the proposed method, compared with the state-of-the-art algorithms, can obtain more accurate segmentation results and is more robust to different contour initializations.

[Figure 2 schematic: an input UVI image is fed to the newly designed CNN to produce a deep feature map; the deep feature map drives the global energy term and the adaptive time step Δt, which are combined with the LIDLSM in the deep feature-based adaptive level set model (DFALS); the level set evolution equation φ^(k+1) = φ^k + Δt ∂φ^k/∂t is iterated to output the segmentation result of the UVI image.]

Figure 2. The flow diagram of the deep feature-based adaptive level set (DFALS) model.

The remainder of the paper is organized as follows. The second section briefly reviews state-of-the-art level set models and previous level set models explicitly designed for auroral oval extraction, which include the SPFLIF-IS and LIDLSM. In Section 3, the global energy term and adaptive time step are devised and introduced into the dual level set framework. The UVI data set, experimental results, and analysis are presented in Section 4. Section 5 reports the conclusions.

2. Related Works

SPFLIF-IS [24] is a universally applicable image segmentation method, which combines the local fitted image formulation (LIF) [30] and the improved signed pressure force (SPF) [31] for inhomogeneous image segmentation. By combining the improved SPF function with the LIF model, the level set evolution equation of SPFLIF-IS was defined as:

\frac{\partial \phi}{\partial t} = (1 - \lambda_1) \cdot \alpha \cdot spf(I(u)) + \lambda_1 (I - I^{LIF})(I_{inner} - I_{outer})\, \delta(\phi)   (1)

The first term of Equation (1) is the product of the improved SPF function spf(I(u)) and the balloon force α. The improved SPF function has values in the range [−1, 1]. It modulates the signs of the pressure force inside and outside the region of interest so that the contour shrinks when outside the object, or expands when inside the object [32]. The second term is the level set evolution function corresponding to the LIF model, where I_inner and I_outer denote the mean grayscale intensities of {v | φ(v) > 0, ||v − u|| < r} and {v | φ(v) ≤ 0, ||v − u|| > r}, respectively. I^LIF is the local fitted image and δ(φ) is the univariate Dirac function. By utilizing the local image information, the LIF model is able to segment images with intensity inhomogeneity [30]. The weight coefficient λ1 is used to adjust the weight between the SPF function and the LIF model. Based on global and local image information, inhomogeneous images can be segmented quickly and accurately by SPFLIF-IS.

LIDLSM [26] is a level set method explicitly designed for auroral oval segmentation. In LIDLSM, Yang et al. [26] utilized the dual level set functions φ1 and φ2 to obtain the inner and outer auroral oval boundaries, respectively. Suppose that Ω ⊂ R² is the image domain and I : Ω → R is the input auroral oval image. The two level set functions divide the auroral oval image into three regions:

S_1 = \{u \in \Omega \mid \phi_1(u) > 0\}
S_2 = \{u \in \Omega \mid \phi_1(u) \le 0,\ \phi_2(u) > 0\}   (2)
S_3 = \{u \in \Omega \mid \phi_2(u) \le 0\}

as shown in Figure 3.

Figure 3. The three regions divided by φ1 and φ2.
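To make the partition in Equation (2) concrete, the following minimal NumPy sketch builds the three region masks from two level set arrays. The circular signed-distance initialization and the grid size are illustrative assumptions, not the initialization used in the paper.

```python
import numpy as np

def partition_regions(phi1, phi2):
    """Partition the image domain into the three regions of Equation (2).

    phi1, phi2 : 2-D arrays holding the two level set functions
                 (phi1 for the inner boundary, phi2 for the outer one).
    Returns boolean masks S1 (inside the inner boundary), S2 (the auroral
    oval ring), and S3 (outside the outer boundary).
    """
    S1 = phi1 > 0
    S2 = (phi1 <= 0) & (phi2 > 0)
    S3 = phi2 <= 0
    return S1, S2, S3

# toy example: two circular signed-distance-like functions on a 228 x 200 grid
h, w = 228, 200
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(yy - h / 2, xx - w / 2)
phi1, phi2 = 30.0 - r, 80.0 - r          # inner radius 30, outer radius 80
S1, S2, S3 = partition_regions(phi1, phi2)
assert (S1.astype(int) + S2.astype(int) + S3.astype(int) == 1).all()
```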


Assuming that the distance between φ1 and φ2 follows the Gaussian distribution, Yang et al. [26] introduced the shape energy term into LIDLSM. Besides, the local information term was constructed to improve the segmentation performance in the low contrast region, and the regularization term was adopted to ensure that the curves evolve smoothly towards the auroral oval boundaries and to inhibit small isolated regions. Combining these three terms, the total energy functional of LIDLSM was defined as:

E_{LIDLSM} = \underbrace{\lambda_2 \int_{\Omega} \frac{(|\phi_1(u) - \phi_2(u)| - \mu)^2}{2\sigma^2}\, du}_{\text{shape energy term}} + \underbrace{\lambda_3 \sum_{i=1}^{2} \int_{\Omega} |\nabla H(\phi_i(u))|\, du}_{\text{regularization term}} + \underbrace{\sum_{i=1}^{2} \int_{\Omega} \delta(\phi_i(u)) \int_{\Psi} \left[ H(\phi_i(v))(I(v) - I_{inner})^2 + (1 - H(\phi_i(v)))(I(v) - I_{outer})^2 \right] dv\, du}_{\text{local information term}}   (3)

where μ and σ are the mean and standard deviation of the distance between φ1 and φ2. Ψ = {v | ||v − u|| < r} indicates a local region with radius r at pixel u. I_inner and I_outer denote the mean grayscale intensities of {v | φi(v) > 0, ||v − u|| < r} and {v | φi(v) ≤ 0, ||v − u|| > r}, respectively. λ2 and λ3 are constant coefficients controlling the strengths of the shape energy term and the regularization term.

To obtain an accurate segmentation result of the auroral oval, LIDLSM employs the LLSRHT results to serve as the initial contours of the level set. However, the auroral oval region cannot be identified accurately if the initial contour is far away from the boundaries. Figure 4c shows LIDLSM's segmentation failure in such a case.


Figure 4. The auroral oval image and the local information based dual level set auroral oval segmentation method's (LIDLSM's) segmentation results obtained by different initializations. (a) The original auroral oval image (top) and the image after histogram equalization (bottom), which shows the auroral structure more clearly. (b) The linear least square randomized Hough transform (LLSRHT) based initialization (top) and segmentation result of LIDLSM (bottom). (c) The self-defined initialization in which the initial level set curves are far away from the auroral oval boundaries (top) and the low-precision segmentation result of LIDLSM (bottom).


3. Deep Feature-Based Adaptive Level Set Model (DFALS)

To overcome the auroral oval segmentation problems brought by the image intensity inhomogeneity, the high noise level, and the level set initialization, this work proposes a deep feature-based adaptive level set (DFALS) model with the aid of the convolutional neural network.

3.1. CNN Model Design

In deep learning, CNNs are the most popular networks used for image classification. In this work, a CNN is utilized to obtain the confidence value of each pixel belonging to the auroral oval region. The design of the proposed CNN architecture is motivated by [33], and the structure is shown in Figure 5. This network consists of an input layer, four convolutional layers, and two fully connected layers. The input layer specifies a sub-image extracted from the auroral images with the size of 11 × 11. Each convolutional layer has a 3 × 3 filter and a rectified linear unit (ReLU) [34], and zeros are padded around the border before each convolution so that the output does not lose relative pixel position information. The feature map (yellow) after the fourth convolutional layer is reduced to a one-channel feature as a 1 × 576 vector. The output of the network is a 1 × 2 vector which indicates the confidence values that the center pixel belongs to the auroral region (red) and the background region (green).

The cross-entropy loss function [35] is adopted as the cost function for the network:

Loss = -\left[\, y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \,\right]   (4)

where y denotes the sample label. If the center pixel of the sub-image locates in the auroral oval region, the sample label is 1; otherwise, the label is 0. ŷ is the confidence value, which indicates the probability that the center pixel belongs to the auroral oval region. The parameters of the CNN are optimized by the back-propagation algorithm [36].

Figure 5. Convolutional neural network (CNN) architecture. "Conv" and "FC" indicate convolutional layers and fully connected layers, respectively.
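For illustration, the following PyTorch sketch builds a classifier with the same overall layout as Figure 5: four 3 × 3 convolutions with zero padding and ReLU, two fully connected layers, and a two-way output trained with the cross-entropy loss of Equation (4). The channel widths and the fully connected sizes are assumptions, since the paper does not list them; with these illustrative widths the flattened feature has length 121 rather than the 1 × 576 stated above.

```python
import torch
import torch.nn as nn

class AuroraCNN(nn.Module):
    """Sketch of the pixel-wise classifier: 11x11 sub-image -> (aurora, background) scores."""
    def __init__(self):
        super().__init__()
        # Four 3x3 convolutions with zero padding and ReLU, as in Figure 5.
        # Channel widths here are illustrative assumptions.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1), nn.ReLU(),   # one-channel feature map
        )
        # Two fully connected layers ending in two confidence scores.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(11 * 11, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, x):               # x: (N, 1, 11, 11)
        return self.classifier(self.features(x))

model = AuroraCNN()
criterion = nn.CrossEntropyLoss()       # cross-entropy loss of Equation (4)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

# one illustrative training step on random data
x = torch.randn(8, 1, 11, 11)            # eight 11x11 sub-images
y = torch.randint(0, 2, (8,))            # 1 = auroral oval pixel, 0 = background
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()                           # back-propagation [36]
optimizer.step()
```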

3.2. Constructing the Global Energy Term

Inspired by the sliding window method [37], we gather the sub-images (11 × 11 pixels) with a stride equal to 1 pixel horizontally and vertically from each UVI image. Since the auroral images used in this work have a fixed size of 228 × 200 as shown in Figure 6a, 45,600 sub-images with the size of 11 × 11 can be obtained after padding zeros at the border of each image. Taking each of the 45,600 sub-images as the input of the pre-trained CNN, we can obtain the confidence value ŷ (ŷ ∈ [0, 1]) of the center pixel in the sub-image, which reflects the probability of the pixel locating in the auroral oval region. All confidence values of the pixels in an image can be visualized through the deep feature map, whose gray value is Î = ŷ × 255. An example of the auroral oval image and the corresponding deep feature map are shown in Figure 6. With the deep feature map, we can detect the weak boundary regions of the aurora effectively.


Figure 6. Auroral oval image and the corresponding deep feature map. (a) Auroral oval image. (b) The corresponding deep feature map.
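A minimal sketch of the deep feature map construction described above, assuming `model` is a trained pixel classifier such as the one sketched in Section 3.1 and that output index 1 corresponds to the auroral oval class.

```python
import numpy as np
import torch
import torch.nn.functional as F

def deep_feature_map(image, model, win=11):
    """Slide an 11x11 window over the UVI image and query the trained CNN
    for the confidence value of every center pixel (Section 3.2).

    image : 2-D float array of shape (228, 200)
    model : a trained classifier mapping (N, 1, 11, 11) -> (N, 2) logits
    Returns the deep feature map with gray values I_hat = y_hat * 255.
    """
    pad = win // 2
    padded = np.pad(image, pad, mode="constant")          # zero padding at the border
    h, w = image.shape
    # gather one 11x11 sub-image per pixel (stride 1 in both directions)
    windows = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    windows = windows.reshape(h * w, 1, win, win).astype(np.float32)

    model.eval()
    with torch.no_grad():
        logits = model(torch.from_numpy(windows))
        y_hat = F.softmax(logits, dim=1)[:, 1]            # P(center pixel in auroral oval)
    return y_hat.numpy().reshape(h, w) * 255.0
```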

Based on the deep feature map, we propose the global energy term as follows:

E_{global} = \int_{\Omega} (\hat{I}(u) - \hat{c}_1)^2 H(\phi_1)\, du + \int_{\Omega} (\hat{I}(u) - \hat{c}_2)^2 (1 - H(\phi_1)) H(\phi_2)\, du + \int_{\Omega} (\hat{I}(u) - \hat{c}_3)^2 (1 - H(\phi_2))\, du   (5)

where ĉ1, ĉ2, and ĉ3 represent the mean gray intensities of regions S1, S2, and S3 in the deep feature map, which can be calculated as:

\hat{c}_1 = \frac{\int_{\Omega} \hat{I}(u) H(\phi_1)\, du}{\int_{\Omega} H(\phi_1)\, du} \qquad \hat{c}_2 = \frac{\int_{\Omega} \hat{I}(u) (1 - H(\phi_1)) H(\phi_2)\, du}{\int_{\Omega} (1 - H(\phi_1)) H(\phi_2)\, du} \qquad \hat{c}_3 = \frac{\int_{\Omega} \hat{I}(u) (1 - H(\phi_2))\, du}{\int_{\Omega} (1 - H(\phi_2))\, du}   (6)

The global energy term contains three components. In the minimization of the global energy term, the first component can drive the inner level set (φ1) curve to evolve towards the inner boundary of the auroral oval, and the third component can drive the outer level set (φ2) curve to evolve towards the outer boundary. The second component can attract the inner and outer level set curves to move simultaneously towards the auroral oval boundary determined by the deep feature map.

To obtain an accurate boundary of the auroral oval, we incorporate the global energy term into the LIDLSM, and the total energy functional of the DFALS can be formulated as

E_{DFALS} = \gamma_1 \int_{\Omega} \frac{(|\phi_1(u) - \phi_2(u)| - \mu)^2}{2\sigma^2}\, du + \gamma_2 \sum_{i=1}^{2} \int_{\Omega} |\nabla H(\phi_i(u))|\, du + \gamma_3 \cdot E_{global} + \sum_{i=1}^{2} \int_{\Omega} \delta(\phi_i(u)) \int_{\Psi} \left[ H(\phi_i(v))(I(v) - I_{inner})^2 + (1 - H(\phi_i(v)))(I(v) - I_{outer})^2 \right] dv\, du   (7)

Benefiting from the deep feature-based global energy term, the DFALS model can prevent the boundary leakage of the LIDLSM and obviously improve the segmentation accuracy in the low-contrast regions of the UVI images.
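The following NumPy sketch evaluates Equations (5) and (6) on a deep feature map. The smoothed Heaviside used here is a common level set regularization and is an assumption; the paper does not specify which regularized Heaviside it uses.

```python
import numpy as np

def heaviside(phi, eps=1.0):
    """Smoothed Heaviside function often used in level set methods (an assumption)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def global_energy(feature_map, phi1, phi2):
    """Evaluate the deep feature-based global energy of Equations (5)-(6)."""
    I_hat = feature_map.astype(np.float64)
    H1, H2 = heaviside(phi1), heaviside(phi2)
    w1, w2, w3 = H1, (1.0 - H1) * H2, (1.0 - H2)          # soft region indicators

    c1 = (I_hat * w1).sum() / (w1.sum() + 1e-12)          # mean intensity of S1
    c2 = (I_hat * w2).sum() / (w2.sum() + 1e-12)          # mean intensity of S2
    c3 = (I_hat * w3).sum() / (w3.sum() + 1e-12)          # mean intensity of S3

    E = ((I_hat - c1) ** 2 * w1
         + (I_hat - c2) ** 2 * w2
         + (I_hat - c3) ** 2 * w3).sum()
    return E, (c1, c2, c3)
```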

3.3. Constructing Adaptive Time-Step

It is worth noting that the confidence value ŷ also contains the positional information of the corresponding pixel. ŷ is relatively large if the pixel locates close to the auroral oval region or even in it. Otherwise, ŷ is small when the pixel is far away from the auroral oval region. As shown in Figure 4c, the segmentation failure is brought by the shape energy term of LIDLSM. More specifically, when the inner and outer level set curves cannot evolve to the adjacent region of the auroral oval simultaneously, the one far from the boundary attracts the other to continue evolving and overstep the region of the auroral oval, which results in the segmentation failure eventually. The adaptive time step Δt is proposed to address this problem. In this solution, the time step is large when the confidence value is small. Otherwise, the time step will be set to a low value or even zero. According to these requirements, the adaptive time step can be designed as follows:

\Delta t = \begin{cases} \max(0,\ 4\hat{y}^2 - 8\hat{y} + 3), & k \le 300 \\ 0.5, & k > 300 \end{cases}   (8)

where k refers to the kth iteration of the level set evolution. An auroral oval image and the corresponding time step map in the first 300 iterations are shown in Figure 7.


Figure 7. Auroral oval image and the corresponding time step map. (a) Auroral oval image. (b) The corresponding time step map.

As shown in Figure 7, under the effect of the adaptive time step, the level set curve will slow down or even stop moving when approaching the nearby region of the auroral oval boundaries. Therefore, the adaptive time step can effectively prevent the contour curve from overstepping the auroral oval region and resulting in segmentation failure. When k > 300, the dual level set curves continue advancing with a relatively small and stable time step Δt = 0.5.
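A minimal sketch of Equation (8), treating ŷ as the per-pixel confidence map (the deep feature map divided by 255):

```python
import numpy as np

def adaptive_time_step(y_hat, k, k_switch=300):
    """Per-pixel adaptive time step of Equation (8).

    y_hat : confidence map in [0, 1]
    k     : current iteration of the level set evolution
    """
    if k <= k_switch:
        return np.maximum(0.0, 4.0 * y_hat ** 2 - 8.0 * y_hat + 3.0)
    return np.full_like(y_hat, 0.5)

# the quadratic drops to zero once y_hat >= 0.5, freezing the curve near the oval
y = np.array([0.0, 0.25, 0.5, 0.9])
print(adaptive_time_step(y, k=10))   # [3.   1.25 0.   0.  ]
```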

3.4. Implementation of the DFALS

As aforementioned, the energy functional of the proposed method is defined as Equation (7). By introducing the time variable t and minimizing the energy functional with respect to φ1 and φ2, we can deduce the following evolution equations for φ1 and φ2 by the gradient descent flow:

\frac{\partial \phi_1(u)}{\partial t} = \gamma_1 \frac{|\phi_1(u) - \phi_2(u)| - \mu}{\sigma^2} + \delta(\phi_1(u)) \int_{\Psi} \delta(\phi_1(v)) \left[ (I(v) - I_{inner})^2 - (I(v) - I_{outer})^2 \right] dv + \gamma_2\, \delta(\phi_1(u))\, \mathrm{div}\!\left( \frac{\nabla \phi_1(u)}{|\nabla \phi_1(u)|} \right) + \gamma_3\, \delta(\phi_1(u)) \left[ (\hat{I}(u) - \hat{c}_1)^2 - H(\phi_2)(\hat{I}(u) - \hat{c}_2)^2 \right]   (9)

\frac{\partial \phi_2(u)}{\partial t} = \gamma_1 \frac{|\phi_1(u) - \phi_2(u)| - \mu}{\sigma^2} + \delta(\phi_2(u)) \int_{\Psi} \delta(\phi_2(v)) \left[ (I(v) - I_{inner})^2 - (I(v) - I_{outer})^2 \right] dv + \gamma_2\, \delta(\phi_2(u))\, \mathrm{div}\!\left( \frac{\nabla \phi_2(u)}{|\nabla \phi_2(u)|} \right) + \gamma_3\, \delta(\phi_2(u)) \left[ (\hat{I}(u) - \hat{c}_2)^2 (1 - H(\phi_1)) - (\hat{I}(u) - \hat{c}_3)^2 \right]   (10)


The evolution equation can be discretized as:

\phi^{k+1} = \phi^{k} + \Delta t\, \frac{\partial \phi^{k}}{\partial t}   (11)

where k refers to the kth iteration. The adaptive time step Δt is introduced into the evolution equation. The main procedures of the proposed DFALS model are summarized in Algorithm 1 as follows:

Algorithm 1. Deep feature-based adaptive level set model (DFALS).

Preprocessing: Train the CNN with the training samples, and then construct each UVI image's corresponding deep feature map with the trained CNN.
Input: An original UVI image and the corresponding deep feature map.
Output: The segmentation result of the auroral oval image.

Step 1: Initialize the level set functions φ1 and φ2, and set the coefficient parameters γ1, γ2, and γ3.
Step 2: For k = 1 : K (K is the total number of iterations).
Step 3: Calculate the gradient descent flow for φ1 and φ2 by Equations (9) and (10). The gradient descent flow of the shape energy term, the regularization term, and the local information term are calculated with the original UVI image, and the gradient descent flow of the global energy term is calculated with the corresponding deep feature map.
Step 4: Calculate the adaptive time step according to Equation (8).
Step 5: Calculate the level set evolution equation using Equation (11).
Step 6: If the evolution of the curves is stable or the predetermined iteration number is reached, then output the segmentation result; otherwise, return to Step 3.
Step 7: End for.
Step 8: Output the segmentation result.
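As an illustration of Steps 2-7, the sketch below iterates the discretized update of Equation (11) with the adaptive time step of Equation (8). It reuses the `heaviside`, `global_energy`, and `adaptive_time_step` helpers sketched earlier and, for brevity, only spells out the regularization and global forces of Equations (9) and (10); the shape and local information forces are omitted, so this is a simplified skeleton rather than the full DFALS update.

```python
import numpy as np

def dirac(phi, eps=1.0):
    """Smoothed Dirac delta (derivative of the smoothed Heaviside above)."""
    return (eps / np.pi) / (eps ** 2 + phi ** 2)

def curvature(phi):
    """div(grad(phi)/|grad(phi)|), the regularization force in Eqs. (9)-(10)."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    nyy, _ = np.gradient(gy / norm)
    _, nxx = np.gradient(gx / norm)
    return nxx + nyy

def evolve(phi1, phi2, feature_map, y_hat, iters=400, gamma2=2.0, gamma3=1.0):
    """Skeleton of Algorithm 1, Steps 2-7 (simplified)."""
    for k in range(1, iters + 1):
        H1, H2 = heaviside(phi1), heaviside(phi2)
        _, (c1, c2, c3) = global_energy(feature_map, phi1, phi2)

        # regularization + deep feature-based global forces only
        F1 = gamma2 * dirac(phi1) * curvature(phi1) \
             + gamma3 * dirac(phi1) * ((feature_map - c1) ** 2 - H2 * (feature_map - c2) ** 2)
        F2 = gamma2 * dirac(phi2) * curvature(phi2) \
             + gamma3 * dirac(phi2) * ((feature_map - c2) ** 2 * (1 - H1) - (feature_map - c3) ** 2)

        dt = adaptive_time_step(y_hat, k)          # Equation (8)
        phi1 = phi1 + dt * F1                      # Equation (11)
        phi2 = phi2 + dt * F2
    return phi1, phi2
```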

4. Experiment and Results

4.1. UVI Data Set

The data set includes 200 auroral oval images which were captured by the ultraviolet imager (UVI) aboard the National Aeronautics and Space Administration (NASA) Polar satellite on 10 January 1997, 31 July 1997, 15 January 1998, and 25 January 1998. The Polar satellite was launched in February 1996 into a highly elliptical orbit of 2 × 9 RE (Earth radii) with 86° inclination, and it views the north polar region for about 9 hours out of every 18-hour orbit [38]. Auroral oval images used in this study were obtained using the Lyman–Birge–Hopfield "long" (LBHL) filter of the UVI on the Polar satellite, and the auroral radiance in this wavelength band originates from N2 (nitrogen molecule) emissions at altitudes of ~120 km [39]. The corresponding benchmark images which show the auroral oval region were annotated and cross-verified by several aurora experts manually. An example of the auroral oval image and the corresponding benchmark image is shown in Figure 8.

Among the UVI images, 50 typical images are used to extract training samples for the CNN, and the remaining 150 images are used as test images for the validation of the proposed method. As abovementioned, 45,600 sub-images with the size of 11 × 11 can be obtained from each UVI image, of which 4000 sub-images are used for training the CNN. Therefore, 200,000 (4000 × 50) sub-images are selected as training samples for the CNN. The process of training sample extraction is shown in Figure 9. In the first step, 2000 auroral oval pixels and 2000 background pixels are selected randomly from each image. In the second step, centered on these pixels, sub-images with the size of 11 × 11 are available. In the third step, the center pixels' values in the benchmark image are taken as the labels of the corresponding sub-images. With the 200,000 training samples, the prediction accuracy reaches 93.2% when the CNN is trained for 50,000 epochs.




Figure 8. Auroral oval image and the corresponding benchmark image. (a) Auroral oval image. (b) Benchmark image.


Figure 9. Extract training samples from the auroral oval images.
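A sketch of the sample extraction in Figure 9, assuming the benchmark is a binary mask with 1 marking auroral oval pixels:

```python
import numpy as np

def extract_training_samples(image, benchmark, n_per_class=2000, win=11, rng=None):
    """Sample extraction of Figure 9: pick pixels at random, cut 11x11 sub-images
    around them, and label each sub-image with the benchmark value of its center.

    image     : 2-D UVI image (228 x 200)
    benchmark : binary expert annotation of the auroral oval (same shape)
    """
    rng = np.random.default_rng() if rng is None else rng
    pad = win // 2
    padded = np.pad(image, pad, mode="constant")

    samples, labels = [], []
    for label in (1, 0):                                   # 1: oval pixel, 0: background
        ys, xs = np.nonzero(benchmark == label)
        idx = rng.choice(len(ys), size=min(n_per_class, len(ys)), replace=False)
        for y, x in zip(ys[idx], xs[idx]):
            samples.append(padded[y:y + win, x:x + win])   # window centered at (y, x)
            labels.append(label)
    return np.stack(samples), np.array(labels)
```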

The 150 test images that do not participate in the training process are used for validating the performance of the DFALS method. The experimental results are also compared with other auroral oval segmentation models and recently developed level set methods, i.e., SIIALSM [25], LIDLSM [26], RLSF [23], and SPFLIF-IS [24]. The parameters of the DFALS are empirically set as follows: γ1 = 1, γ2 = 2, γ3 = 1. Optimal parameters of the other methods are chosen to guarantee the fairness of the comparison.

4.2. Robustness to Contour Initializations

We evaluate the influence of contour initializations on the segmentation results for the LIDLSM and DFALS models. The segmentation results of these two methods initialized by different level set curves are shown in Figure 10. Despite the differences among several initializations, the DFALS model yields accurate segmentation results. In contrast, the LIDLSM is extremely sensitive to the initial contour positions, which can be seen from its unreasonable segmentation results. Therefore, benefiting from the deep feature-based adaptive time step, the DFALS model shows stronger robustness to different contour initializations.



Figure 10. The segmentation results of LIDLSM and the deep feature-based adaptive level set (DFALS) model initialized by several different level set curves. (a) The UVI images captured by the Polar satellite on Jul 31 1997 01:34:00 and Jul 31 1997 01:47:23. (b) The initial level set curves. (c) Segmentation results of the DFALS. (d) Segmentation results of the LIDLSM.

4.3. Comparison with Other Methods

To further verify the effectiveness and robustness of the DFALS model, two recently proposed universally applicable image segmentation methods (RLSF, SPFLIF-IS) and methods designed specifically for auroral oval segmentation (SIIALSM, LIDLSM) were compared with our method. The LLSRHT results were adopted as the initial level set contours of the SIIALSM and the LIDLSM to guarantee the fairness of comparison.

4.3.1. Subjective Evaluation

From the test images for validating the DFALS model, we choose three auroral oval images that are difficult to segment for the comparison experiments. In the experiments, the inner and outer level set curves of the DFALS model are initialized by two concentric circles, whose radii are 15 and 98, respectively. The center of the concentric circles coincides with the image center. The contour initializations of the different methods for these three auroral oval images are shown in Figures S1–S3.

As shown by the red rectangle in Figure 11a, there exist dayglow contamination regions in the first UVI image. The auroral oval annotated by experts and the segmentation results obtained automatically by the different methods are also shown in Figure 11b–f. The RLSF, SPFLIF-IS, and SIIALSM models mistakenly identify the edge of the UVI's lens as the outer boundary of the auroral oval. In contrast, LIDLSM and DFALS obtain reasonable segmentation results. Compared with the LIDLSM, the DFALS model shows superiority in retaining the details of the inner boundary of the auroral oval, as shown by the blue rectangle in Figure 11f,g.

Appl. Sci. 2020, 10, 2590 12 of 17 Appl. Sci. 2019, 9, x FOR PEER REVIEW 12 of 17


Figure 11. The segmentation results of different methods. (a) UVI image captured by the Polar satellite on Jul 31 1997 09:34:48. (b) The benchmark annotated by experts manually. (c) The segmentation result of the region-based active contour model via local similarity factor (RLSF). (d) The segmentation result of the improved signed pressure force and local image fitting based image segmentation (SPFLIF-IS) method. (e) The segmentation result of the shape-initialized and intensity-adaptive level set method (SIIALSM). (f) The segmentation result of LIDLSM. (g) The segmentation result of DFALS.

The second auroral oval image contains a low contrast region as marked by the red rectangle in Figure 12a. As aforementioned, the RLSF model improves the segmentation result when the image's noise distribution is unknown, and the SPFLIF-IS model was developed to address the segmentation problem brought by image intensity inhomogeneity. However, these two methods fail to obtain accurate segmentation results. The RLSF model eliminates the noise points in the segmentation result while failing to extract the complete auroral oval in the low contrast region. The SPFLIF-IS model inhibits apparent boundary leakage of the auroral oval in the low contrast region yet regards many noise pixels as the auroral oval region. Influenced by the intensity inhomogeneity, the segmentation results of SIIALSM and LIDLSM also break in the low contrast region. Compared with the other methods, the DFALS model obtains the most reasonable segmentation result with the help of the deep feature-based global energy term.

The third auroral oval image also includes low-contrast regions around the boundary, which are marked with the red box in Figure 13a. As seen from the segmentation results, the contours obtained by RLSF and SIIALSM break in the low contrast region, and SPFLIF-IS mistakes the noise pixels as part of the auroral oval. By comparing Figure 13b,f, we can conclude that although boundary leakage does not appear in the segmentation result of LIDLSM, the extracted auroral oval deviates from the benchmark significantly in the low contrast region. However, regardless of the intensity inhomogeneity and image noise, DFALS yields the segmentation result that most closely matches the benchmark.

(a) Auroral oval image (b)Benchmark (c) RLSF (d) SPFLIF-IS

(e) SIIALSM (f) LIDLSM (g) DFALS Appl. Sci. 2019, 9, x FOR PEER REVIEW 12 of 17

(a) Auroral oval image (b)Benchmark (c) RLSF (d) SPFLIF-IS

(e) SIIALSM (f) LIDLSM (g) DFALS

Figure 11. The segmentation results of different methods. (a) UVI image captured by the Polar satellite on Jul 31 1997 09:34:48. (b) The benchmark annotated by experts manually. (c) The segmentation result of the region-based active contour model via local similarity factor (RLSF). (d) The segmentation result of the improved signed pressure force and local image fitting based image segmentation (SPFLIF-IS) method. (e) The segmentation result of the shape-initialized and intensity-adaptive level set method (SIIALSM). (f) The segmentation result of LIDLSM. (g) The segmentation result of DFALS.

The second auroral oval image contains a low contrast region, as marked by the red rectangle in Figure 12a. As aforementioned, the RLSF model improves the segmentation result when the image's noise distribution is unknown, and the SPFLIF-IS model was developed to address the segmentation problem brought by image intensity inhomogeneity. However, both methods fail to obtain accurate segmentation results here. The RLSF model eliminates the noise points in the segmentation result but fails to extract the complete auroral oval in the low contrast region. The SPFLIF-IS model inhibits apparent boundary leakage of the auroral oval in the low contrast region, yet it regards many noise pixels as part of the auroral oval region. Influenced by the intensity inhomogeneity, the segmentation results of SIIALSM and LIDLSM also break in the low contrast region. Compared with the other methods, the DFALS model obtains the most reasonable segmentation result with the help of the deep feature-based global energy term.

Figure 12. The segmentation results of different methods. (a) UVI image captured by the Polar satellite on Jan 10 1997 01:16:40. (b) The benchmark annotated by experts manually. (c) The segmentation result of RLSF. (d) The segmentation result of SPFLIF-IS. (e) The segmentation result of SIIALSM. (f) The segmentation result of LIDLSM. (g) The segmentation result of DFALS.

The third auroral oval image also includes low contrast regions around the boundary, which are marked with the red box in Figure 13a. As seen from the segmentation results, the contours obtained by RLSF and SIIALSM break in the low contrast region, and SPFLIF-IS mistakes the noise pixels as part of the auroral oval. By comparing Figure 13b,f, we can conclude that although the boundary leakage does not appear in the segmentation result of LIDLSM, the extracted auroral oval deviates from the benchmark significantly in the low contrast region. However, regardless of the intensity inhomogeneity and image noise, DFALS yields the segmentation result that most closely matches the benchmark.


Figure 13. The segmentation results of different methods. (a) UVI image captured by the Polar satellite on Jan 10 1997 03:10:08. (b) The benchmark annotated by experts manually. (c) The segmentation result of RLSF. (d) The segmentation result of SPFLIF-IS. (e) The segmentation result of SIIALSM. (f) The segmentation result of LIDLSM. (g) The segmentation result of DFALS.

4.3.2. Objective Evaluation

To objectively evaluate the segmentation accuracy of different methods, an experiment was conducted to compare the segmentation results with the benchmark on 150 test images. Both boundary-based and region-based measures are employed.

• Boundary-based measurement

Pixel deviation Pd and gap pixel percentage Pg [19] are adopted as the evaluation metrics in the boundary-based measurement. Pd is a summative measure of the distance variation of the automatically segmented boundary from the expert's annotated boundary. Pg indicates the proportion of the gap in the whole auroral oval boundary. The smaller these two metric values, the better the segmentation result. Pd and Pg are calculated as follows:

$$P_d = \frac{\sum_{b_a \in B_a} d(b_a, B_b) + \sum_{b_b \in B_b} d(b_b, B_a)}{N_a + N_b} \qquad (12)$$

$$P_g = \frac{N_{gap}}{N_b} \times 100\% \qquad (13)$$

where Ba and Bb are the pixel sets on the segmented auroral oval boundary and the benchmark boundary, respectively; ba and bb are pixels in Ba and Bb; Na and Nb are the numbers of elements in sets Ba and Bb; and Ngap denotes the number of gap pixels. $d(b_1, B_2) = \min_{b_2 \in B_2} \lVert b_1 - b_2 \rVert$ represents the minimum distance from pixel b1 to the pixels in set B2.
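To make the boundary-based metrics concrete, the snippet below is a minimal Python sketch of Equations (12) and (13). It assumes the boundaries are given as lists of (row, column) pixel coordinates; the function names and the gap-counting rule (a benchmark boundary pixel is treated as a gap pixel when no segmented boundary pixel lies within a small distance threshold) are illustrative assumptions rather than the exact implementation used in [19].

```python
import numpy as np

def _min_dists(src, dst):
    """For each pixel in src, the Euclidean distance to the nearest pixel in dst."""
    src = np.asarray(src, dtype=float)   # shape (Ns, 2)
    dst = np.asarray(dst, dtype=float)   # shape (Nd, 2)
    # Pairwise distances; acceptable for boundaries with a few thousand pixels.
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    return d.min(axis=1)

def pixel_deviation(Ba, Bb):
    """P_d of Eq. (12): symmetric average boundary-to-boundary distance."""
    da = _min_dists(Ba, Bb)              # d(b_a, B_b) for every b_a in B_a
    db = _min_dists(Bb, Ba)              # d(b_b, B_a) for every b_b in B_b
    return (da.sum() + db.sum()) / (len(Ba) + len(Bb))

def gap_pixel_percentage(Ba, Bb, gap_thresh=2.0):
    """P_g of Eq. (13): share of benchmark boundary pixels left uncovered.

    gap_thresh is an assumed tolerance (in pixels) for deciding coverage.
    """
    db = _min_dists(Bb, Ba)
    n_gap = int((db > gap_thresh).sum())
    return 100.0 * n_gap / len(Bb)

# Toy usage: a square benchmark boundary and a segmented boundary shifted by one pixel.
if __name__ == "__main__":
    Bb = [(r, c) for r in range(10, 20) for c in (10, 19)] + \
         [(r, c) for r in (10, 19) for c in range(11, 19)]
    Ba = [(r + 1, c) for (r, c) in Bb]
    print(f"P_d = {pixel_deviation(Ba, Bb):.3f} px")
    print(f"P_g = {gap_pixel_percentage(Ba, Bb):.1f} %")
```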

Tables 1 and 2 list the mean value and standard deviation of Pd and Pg given by different methods. The best results are labeled in bold. For these two metrics, the RLSF model performs slightly worse than the SPFLIF-IS model, which reflects that the difficulty of auroral oval segmentation is mainly caused by intensity inhomogeneity. Besides, the suboptimal mean value of Pg obtained by SPFLIF-IS shows that this method can efficiently inhibit boundary leakage in the low contrast regions of the UVI images. Both initialized with the LLS-RHT result, the comparison between LIDLSM and SIIALSM verifies the effectiveness of the shape energy term in LIDLSM. Although the mean value of Pd obtained by LIDLSM is suboptimal, this method has the largest standard deviation except for SIIALSM. The large standard deviations of Pd for SIIALSM and LIDLSM are mainly caused by some poor LLS-RHT initializations, which also indirectly reflects that these two methods are sensitive to contour initializations.

Table 1. The mean value and standard deviation of Pd given by different methods on the test images.

Methods              RLSF    SPFLIF-IS   SIIALSM   LIDLSM   DFALS
Mean value           7.114   6.221       6.314     4.614    1.651
Standard deviation   3.456   3.401       8.354     8.063    1.289

Table 2. The mean value and standard deviation of Pg given by different methods on the test images in %.

Methods              RLSF    SPFLIF-IS   SIIALSM   LIDLSM   DFALS
Mean value           3.010   0.200       1.142     0.600    0.100
Standard deviation   5.153   1.090       4.006     5.513    0.482

• Region-based measurement

The percentage of mislabeled pixels Pmp [19] measures the deviation of the automatically segmented auroral oval region (Ra) from the benchmark region annotated manually by experts (Rb), and it is used as the metric for the region-based measurement. Pmp is calculated with the following formula:

$$P_{mp} = \frac{N_{miss} + N_{false}}{N_b + N_a} \times 100\% \qquad (14)$$

where Nmiss represents the number of pixels in Rb but not in Ra, and Nfalse is the number of pixels in Ra but not in Rb.
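As a minimal illustration of Equation (14), the Python sketch below computes the mislabeled-pixel percentage from two binary masks of the same shape; the function name and the mask-based interface are assumptions made for this example, not the paper's implementation.

```python
import numpy as np

def mislabeled_pixel_percentage(Ra, Rb):
    """P_mp of Eq. (14) computed from binary region masks.

    Ra: automatically segmented auroral oval mask (bool array).
    Rb: expert-annotated benchmark mask (bool array) of the same shape.
    """
    Ra = np.asarray(Ra, dtype=bool)
    Rb = np.asarray(Rb, dtype=bool)
    n_miss = int(np.count_nonzero(Rb & ~Ra))   # in the benchmark but not segmented
    n_false = int(np.count_nonzero(Ra & ~Rb))  # segmented but not in the benchmark
    n_a, n_b = int(Ra.sum()), int(Rb.sum())    # region sizes
    return 100.0 * (n_miss + n_false) / (n_b + n_a)
```

For identical masks the numerator vanishes and Pmp = 0%, while completely disjoint regions give Pmp = 100%.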

The mean value and standard deviation of Pmp given by different methods are listed in Table 3, and the best results are labeled in bold. The mean value of Pmp obtained by SPFLIF-IS is the largest because this method mistakes many noise pixels for part of the auroral oval region, while the mean value of Pmp obtained by LIDLSM is suboptimal thanks to the effectiveness of its local information term. Compared with the other methods, the standard deviations of Pd and Pmp obtained by SIIALSM are the largest, which shows that this method is sensitive to different auroral oval shapes and contour initializations. The auroral oval region extracted by the DFALS model agrees well with the expert-annotated auroral region, as verified by the lowest mean value and standard deviation of Pmp; indeed, the DFALS model obtains the lowest values of all three metrics. From Table 3, we can conclude that, with the help of the deep feature-based global energy term, the auroral oval segmented by DFALS matches the benchmark annotated by the experts well.

Table 3. The mean value and standard deviation of Pmp given by different methods on test images in %.

Methods              RLSF     SPFLIF-IS   SIIALSM   LIDLSM   DFALS
Mean value           31.173   33.927      33.002    25.484   16.565
Standard deviation   6.609    6.9569      17.729    6.610    3.560

5. Conclusions

In this paper, motivated by previous auroral oval segmentation methods, we propose a novel deep feature-based adaptive level set model to extract the auroral oval region in UVI images. By utilizing the confidence value obtained from the pre-trained CNN, the global energy term is constructed and incorporated into the LIDLSM's energy functional. Additionally, an adaptive time step is introduced into the evolution equation of the level set (a schematic sketch of such an update is given after the list below). The experiments validate that the main contributions of our method are as follows:

• The comparative experiment between the LIDLSM and DFALS models demonstrates that the proposed method has stronger robustness to different contour initializations.
• It can be seen from the subjective evaluation experiments that the auroral oval region extracted by DFALS agrees well with the experts' annotated benchmark.
• Compared with the RLSF, SPFLIF-IS, SIIALSM, and LIDLSM models, the DFALS model improves the auroral oval segmentation accuracy in the intensity-inhomogeneous regions of the UVI images. Furthermore, the DFALS model also obtains the best performance in terms of the boundary-based and region-based evaluation metrics.
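To illustrate the kind of update summarized above, the fragment below is only a schematic Python sketch and not the exact DFALS formulation: a level set function is advanced by a force that combines a generic local data term with a global term driven by the CNN confidence map, and the per-pixel time step is scaled by how decisive that confidence is. The function name `evolve_step`, the argument `local_force`, and the specific scaling rule are illustrative assumptions.

```python
import numpy as np

def evolve_step(phi, local_force, confidence, lam=1.0, base_dt=0.1):
    """One schematic level-set update with a confidence-adaptive time step.

    phi:         level set function, 2-D array.
    local_force: local (LIDLSM-style) data force, same shape as phi (assumed given).
    confidence:  CNN confidence map in [0, 1] (probability of auroral oval).
    """
    # Global term (illustrative): expand where the CNN is confident about the
    # oval (confidence > 0.5) and shrink elsewhere.
    global_force = confidence - 0.5

    # |grad phi| for a geometric-style update.
    gy, gx = np.gradient(phi)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-8

    # Illustrative adaptive time step (not the paper's exact rule): larger steps
    # where the CNN is decisive, smaller ones where confidence is near 0.5.
    dt = base_dt * (0.5 + np.abs(confidence - 0.5))

    return phi + dt * (local_force + lam * global_force) * grad_mag
```

In the actual model, the local term comes from LIDLSM's dual level set formulation and the global term from the deep feature; the sketch only conveys how a confidence map can modulate both the driving force and the step size.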

The DFALS model focuses on segmenting UVI images that contain a complete auroral oval. However, in practice, some auroral ovals in the UVI images are incomplete due to the capturing position of the ultraviolet imager instrument aboard the Polar satellite. In the future, we will consider adaptively setting the weight coefficients and introducing prior knowledge of the auroral shape, so that incomplete auroral ovals can also be segmented accurately.

Supplementary Materials: The following are available online at http://www.mdpi.com/2076-3417/10/7/2590/s1, Figure S1: The contour initializations of different methods when segmenting the first UVI image in Section 4.3.1. (a) Initial level set curves (concentric circles) of DFALS. (b) Initial level set curves (LLS-RHT result) of SIIALSM and LIDLSM. Figure S2: The contour initializations of different methods when segmenting the second UVI image in Section 4.3.1. (a) Initial level set curves (concentric circles) of DFALS. (b) Initial level set curves (LLS-RHT result) of SIIALSM and LIDLSM. Figure S3: The contour initializations of different methods when segmenting the third UVI image in Section 4.3.1. (a) Initial level set curves (concentric circles) of DFALS. (b) Initial level set curves (LLS-RHT result) of SIIALSM and LIDLSM.

Author Contributions: C.T., P.Y. and Z.Z. conceived the algorithm and designed the experiments; C.T. implemented the experiments; H.D. analyzed the results; C.T. drafted the manuscript; Z.Z., P.Y. and L.W. reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by the National Natural Science Foundation of China, grant numbers 61473310 and 41174164.

Acknowledgments: The authors would like to thank NASA's Space Physics Data Facility (SPDF) for providing UVI images from the Polar satellite, which are available at https://cdaweb.gsfc.nasa.gov/cgi-bin/eval3.cgi.

Conflicts of Interest: The authors declare no conflicts of interest.

References

1. Boudouridis, A.; Zesta, E.; Lyons, L.R.; Anderson, P.C.; Lummerzheim, D. Enhanced solar wind geoeffectiveness after a sudden increase in dynamic pressure during southward IMF orientation. J. Geophys. Res. Space Phys. 2005, 110. [CrossRef]
2. Clausen, L.B.N.; Nickisch, H. Automatic Classification of Auroral Images From the Oslo Auroral THEMIS (OATH) Data Set Using Machine Learning. J. Geophys. Res. Space Phys. 2018, 123, 5640–5647. [CrossRef]
3. Yang, Q.; Tao, D.; Han, D.; Liang, J. Extracting Auroral Key Local Structures From All-Sky Auroral Images by Artificial Intelligence Technique. J. Geophys. Res. Space Phys. 2019, 124, 3512–3521. [CrossRef]
4. Akasofu, S.I. Dynamic morphology of auroras. Space Sci. Rev. 1965, 4, 498–540. [CrossRef]
5. Lei, Y.; Shi, J.; Zhou, Y.; Tao, M.; Wu, J. Extraction of Auroral Oval Regions Using Suppressed Fuzzy C Means Clustering. In Proceedings of the International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 6883–6886.
6. Qian, W.; QingHu, M.; ZeJun, H.; ZanYang, X.; JiMin, L.; HongQiao, H. Extraction of auroral oval boundaries from UVI images: A new FLICM clustering-based method and its evaluation. Adv. Polar Sci. 2011, 22, 184–191. [CrossRef]
7. Kvammen, A.; Gustavsson, B.; Sergienko, T.; Brändström, U.; Rietveld, M.; Rexer, T.; Vierinen, J. The 3-D Distribution of Artificial Aurora Induced by HF Radio Waves in the Ionosphere. J. Geophys. Res. Space Phys. 2019, 124, 2992–3006. [CrossRef]
8. Ding, G.-X.; He, F.; Zhang, X.-X.; Chen, B. A new auroral boundary determination algorithm based on observations from TIMED/GUVI and DMSP/SSUSI. J. Geophys. Res. Space Phys. 2017, 122, 2162–2173. [CrossRef]
9. Boudouridis, A.; Zesta, E.; Lyons, R.; Anderson, P.C.; Lummerzheim, D. Effect of solar wind pressure pulses on the size and strength of the auroral oval. J. Geophys. Res. Space Phys. 2003, 108. [CrossRef]
10. Zhao, X.; Sheng, Z.; Li, J.; Yu, H.; Wei, K. Determination of the "wave turbopause" using a numerical differentiation method. J. Geophys. Res. Atmos. 2019, 124, 10592–10607. [CrossRef]
11. Shi, J.; Wu, J.; Anisetti, M.; Damiani, E.; Jeon, G. An interval type-2 fuzzy active contour model for auroral oval segmentation. Soft Comput. 2017, 21, 2325–2345. [CrossRef]
12. Kauristie, K.; Weygand, J.; Pulkkinen, T.I.; Murphree, J.S.; Newell, P.T. Size of the auroral oval: UV ovals and precipitation boundaries compared. J. Geophys. Res. 1999, 104, 2321. [CrossRef]
13. Hu, Z.-J.; Yang, Q.-J.; Liang, J.-M.; Hu, H.-Q.; Zhang, B.-C.; Yang, H.-G. Variation and modeling of ultraviolet auroral oval boundaries associated with interplanetary and geomagnetic parameters. Space Weather 2017, 15, 606–622. [CrossRef]
14. Meng, C.-I. Polar Cap Variations and the Interplanetary Magnetic Field; Springer: Dordrecht, The Netherlands, 1979; pp. 23–46.
15. Liou, K.; Newell, P.T.; Sibeck, D.G.; Meng, C.-I.; Brittnacher, M.; Parks, G. Observation of IMF and seasonal effects in the location of auroral substorm onset. J. Geophys. Res. Space Phys. 2001, 106, 5799. [CrossRef]
16. Hardy, D.A.; Burke, W.J.; Gussenhoven, M.S.; Heinemann, N.; Holeman, E. DMSP/F2 electron observations of equatorward auroral boundaries and their relationship to the solar wind velocity and the north-south component of the interplanetary magnetic field. J. Geophys. Res. Space Phys. 1981, 86, 9961–9974. [CrossRef]
17. Meng, Y.; Zhou, Z.; Liu, Y.; Luo, Q.; Yang, P.; Li, M. A prior shape-based level-set method for auroral oval segmentation. Remote Sens. Lett. 2019, 10, 292–301. [CrossRef]
18. Li, X.; Ramachandran, R.; He, M.; Movva, S.; Rushing, J.A.; Graves, S.J.; Lyatsky, W.B.; Tan, A. Comparing different thresholding algorithms for segmenting auroras. In Proceedings of the International Conference on Information Technology Coding and Computing, Las Vegas, NV, USA, 5–7 April 2004; pp. 594–601.
19. Cao, C.; Newman, T.S. New shape-based auroral oval segmentation driven by LLS-RHT. Pattern Recognit. 2009, 42, 607–618. [CrossRef]
20. Liu, H.; Gao, X.; Han, B.; Yang, X. An Automatic MSRM Method with a Feedback Based on Shape Information for Auroral Oval Segmentation. In Proceedings of the International Conference on Intelligent Science and Big Data Engineering, Beijing, China, 31 July–2 August 2013; pp. 748–755.
21. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331. [CrossRef]
22. Sun, L.; Meng, X.; Xu, J.; Zhang, S. An Image Segmentation Method Based on Improved Regularized Level Set Model. Appl. Sci. 2018, 8, 2393. [CrossRef]
23. Niu, S.; Qiang, C.; Sisternes, L.D.; Ji, Z.; Rubin, D.L. Robust noise region-based active contour model via local similarity factor for image segmentation. Pattern Recognit. 2016, 61, 104–119. [CrossRef]
24. Sun, L.; Meng, X.; Xu, J.; Tian, Y. An Image Segmentation Method Using an Active Contour Model Based on Improved SPF and LIF. Appl. Sci. 2018, 8, 2576. [CrossRef]
25. Yang, X.; Gao, X.; Li, J.; Han, B. A shape-initialized and intensity-adaptive level set method for auroral oval segmentation. Inf. Sci. 2014, 277, 794–807. [CrossRef]
26. Yang, P.; Zhou, Z.; Shi, H.; Meng, Y. Auroral oval segmentation using dual level set based on local information. Remote Sens. Lett. 2017, 8, 1112–1121. [CrossRef]
27. Ham, Y.-G.; Kim, J.-H.; Luo, J.-J. Deep learning for multi-year ENSO forecasts. Nature 2019, 573, 568–572. [CrossRef] [PubMed]
28. Silver, D.; Schrittwieser, J.; Simonyan, K.; Antonoglou, I.; Huang, A.; Guez, A.; Hubert, T.; Baker, L.; Lai, M.; Bolton, A.; et al. Mastering the game of Go without human knowledge. Nature 2017, 550, 354. [CrossRef]
29. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
30. Zhang, K.; Song, H.; Zhang, L. Active contours driven by local image fitting energy. Pattern Recognit. 2010, 43, 1199–1206. [CrossRef]
31. Abdelsamea, M.M.; Tsaftaris, S.A. Active contour model driven by Globally Signed Region Pressure Force. In Proceedings of the 2013 18th International Conference on Digital Signal Processing (DSP), Santorini, Greece, 1–3 July 2013; pp. 1–6.
32. Zhang, K.; Zhang, L.; Song, H.; Zhou, W. Active contours with selective local or global segmentation: A new formulation and level set method. Image Vis. Comput. 2010, 28, 668–676. [CrossRef]
33. Kim, J.; Nguyen, D.; Lee, S. Deep CNN-Based Blind Image Quality Predictor. IEEE Trans. Neural Netw. Learn. Syst. 2018, 1–14. [CrossRef]
34. Nair, V.; Hinton, G.E. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010; pp. 807–814.
35. de Boer, P.-T.; Kroese, D.P.; Mannor, S.; Rubinstein, R.Y. A Tutorial on the Cross-Entropy Method. Ann. Oper. Res. 2005, 134, 19–67. [CrossRef]
36. Rumelhart, D.; Hinton, G.; Williams, R. Learning Internal Representation by Back-Propagation Errors. Nature 1986, 323, 533–536. [CrossRef]
37. Drożdż, M.; Kryjak, T. FPGA Implementation of Multi-scale Face Detection Using HOG Features and SVM Classifier. Image Process. Commun. 2017, 21, 27–44. [CrossRef]
38. Brittnacher, M.; Spann, J.; Parks, G.; Germany, G. Auroral observations by the polar Ultraviolet Imager (UVI). Adv. Space Res. 1997, 20, 1037–1042. [CrossRef]
39. Carbary, J.F. Auroral boundary correlations between UVI and DMSP. J. Geophys. Res. 2003, 108, 1018. [CrossRef]

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).