
Article

An Improved Level Set Algorithm Based on Prior Information for Left Ventricular MRI Segmentation

Lei Xu *, Yuhao Zhang, Haima Yang and Xuedian Zhang

School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; [email protected] (Y.Z.); [email protected] (H.Y.); [email protected] (X.Z.) * Correspondence: [email protected]

Abstract: This paper proposes a new level set algorithm for left ventricular segmentation based on prior information. First, an improved U-Net network is used for coarse segmentation to obtain pixel-level prior position information. Then, the segmentation result is used as the initial contour of the level set for fine segmentation. During the evolution, based on the shape of the left ventricle, we improve the energy function of the level set and add shape constraints to solve the "burr" and "sag" problems that occur during curve evolution. The proposed algorithm was evaluated on the MICCAI 2009 dataset: the mean Dice scores of the epicardium and endocardium are 92.95% and 94.43%, respectively. The results show that the improved level set algorithm obtains better segmentation results than the original algorithm.

 Keywords: left ventricular segmentation; prior information; level set algorithm; shape constraints 

1. Introduction

Uremic cardiomyopathy is the most common complication and cause of death in chronic kidney disease, and left ventricular hypertrophy is the most significant pathological feature of uremic cardiomyopathy [1]. Therefore, it is of great significance for the prevention and treatment of uremic diseases to segment the left ventricle from medical images and analyze its pathology scientifically, objectively and quantitatively. The heart and other soft tissues have low contrast with the background and high noise in medical images [2], so segmentation of the left ventricle has always been a difficult problem in the field of image segmentation. In recent years, image segmentation technology based on deep convolutional neural networks has been widely used in various kinds of medical image segmentation, such as MRI, CT and X-ray. Comelli et al. [3] proved that using deep learning to assist medical segmentation can not only improve the accuracy of the diagnosis result, but also improve the management of

patients towards personalized risk strategies. However, fully convolutional networks (FCN), U-Net and other deep convolutional segmentation models perform pixel-level segmentation [4]. For medical images that require sub-pixel segmentation accuracy, they still have shortcomings. The level set algorithm, based on curve evolution and contour fitting, can achieve sub-pixel segmentation and its segmentation result is more accurate. However, the level set algorithm needs the initial contour to be set manually. Because the contours of tissues and organs in medical images are fuzzy, it is difficult to reach pixel-level accuracy by manually calibrating the initial contour, which easily causes evolution errors. Moreover, for sequential medical images, the initial contour has to be calibrated again for every tissue slice, which undoubtedly increases the workload of doctors. Therefore, this paper proposes a level set algorithm based on prior information to segment the left ventricle. We use a convolutional neural network to extract the deep information of the image and provide a pixel-level initial contour for the region to be segmented. Then, we use a level set algorithm with a prior shape constraint to segment the left ventricle in detail and obtain sub-pixel segmentation results. The


experimental results on the MICCAI 2009 dataset show that our segmentation algorithm outperforms other segmentation algorithms.

2. Related Works

Since Osher et al. [5] proposed the level set algorithm, it has made great achievements in the field of medical image segmentation. The level set algorithm mainly considers how to fuse image information into the construction of the energy functional, so as to segment the image effectively. The Mumford-Shah (MS) model [6] was proposed by Mumford and Shah and aimed at image segmentation by minimizing an energy function. Because of the smooth term and regular term in the model, the segmentation result was smoother and more accurate, but the solution of the model was complex. Chan and Vese simplified the solution of the MS model by improving the smoothing term, which became the famous CV model [7]. Although the CV model solved the defects of the MS model, the contour needed to be re-initialized during its iterative process. Li et al. [8] proposed a level set model without re-initialization, which further broadened the application of the level set algorithm in medical image segmentation. To further address the incorrect curve movement and erroneous segmentation results caused by weak boundaries and uneven gray levels in medical images, a series of excellent models were proposed to give the level set higher anti-noise performance in medical image segmentation.

In recent years, with the improvement of computational power, convolutional neural networks have gradually been applied to medical image segmentation. FCN was first proposed for natural image segmentation and also achieved good segmentation results in medical images [9]. Subsequently, U-Net innovatively introduced up-sampling and feature fusion, which has been widely recognized in medical image segmentation [10]. On the basis of U-Net, many improved segmentation networks have been proposed [11-13]. Although these neural networks can obtain accurate segmentation results, the number of images in specific data sets is limited and the segmentation results are not smooth enough; as can be seen from Figure 1, neural networks still cannot obtain sub-pixel segmentation results in actual medical image segmentation. Recently, some scholars have tried to combine the level set algorithm with neural networks to get better segmentation results. Kim et al. [14] proposed using the energy function of the level set directly as the loss function of the neural network. This method effectively uses the contour information of the target area to be segmented and makes up for the shortcomings of the neural network. Hatamizadeh et al. [15] also used a neural network to learn the energy function of the level set; the difference is that they added distance regularization to initialize the level set contour, so they obtained better segmentation results. Based on this idea, Kim [16] proposed a semi-supervised segmentation method, using the energy function of the level set as the loss function and using some unlabeled data for training, which also obtained good segmentation results. Chen et al. [17] used the length regular term and area regular term of the level set as constraints to modify the segmentation results of a neural network; however, taking only the length and area of the target region as constraints, without considering the integrity of the target region, easily causes segmentation errors. The above methods of applying the level set energy function to a convolutional neural network still produce pixel-level segmentation in essence, which cannot reach the sub-pixel level.
Recently, Comelli [18] added the classification results of machine learning as constraints to the energy function of the level set to segment medical images and achieved good results. Thus, the combination of a neural network and the level set algorithm is not only feasible, but may also produce better results.


Figure 1. Comparison of U-Net segmentation and level set evolution results (U-Net segmentation results on the left and level set evolution results on the right).

3. Proposed Method for Left Ventricular Segmentation

In this paper, we propose a level set algorithm based on location and shape prior information to segment the left ventricular endocardium and epicardium. It is mainly divided into two steps (as shown in Figure 2): (1) First, we train an improved U-Net to segment the left ventricular endocardium and epicardium; in this step, the U-Net can segment their approximate position. We name the trained network the prior network and its output the prior position map. (2) Using the prior network trained in (1), the unlabeled heart image is roughly segmented to obtain the approximate location of the left ventricle. From the output prior position map, we can calculate the initial contour coordinates needed for the level set evolution; in order to solve the problems of uneven gray level and fuzzy boundaries in medical images, we weight the prior position map and the original image to enhance the gradient information of the target region, so as to facilitate the curve evolution.
Based on the prior condition that the left ventricle is approximately circular, we add a shape constraint to the energy function of the level set to drive the curve to fit into a circle, which solves the problems of "burr" and "sag" in the level set. Next, we will introduce our algorithm in detail.
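As an illustration of this two-step flow, the following minimal sketch assumes a trained prior network is available as a callable and stands in for the shape-constrained level set refinement with a generic `refine` callback; the function names are ours and are used only for illustration, not the authors' released code.

```python
import numpy as np

def prior_box(mask):
    """Bounding box (x_min, y_min, x_max, y_max) of the nonzero prior region."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

def segment_lv(image, prior_net, refine, alpha=0.5):
    """Two-step sketch: coarse pixel-level prior, then level set refinement.

    `prior_net` is assumed to map an image to a prior position map on the same
    0-255 scale as the image; `refine` stands in for the shape-constrained level
    set evolution of Section 3.2. Both are assumptions for illustration only.
    """
    prior_map = prior_net(image)                          # step 1: coarse segmentation
    box = prior_box(prior_map > 0)                        # initial contour coordinates
    weighted = alpha * prior_map + (1 - alpha) * image    # Eq. (1): enhance target gradients
    return refine(weighted, box)                          # step 2: sub-pixel contour
```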

Figure 2. The chart of our proposed algorithm. The first step was proposed to achieve the prior position information; the second step used the level set to segment the target.

3.1. The Segmentation Network Provides the Initial Contour

There are two ways of using a deep convolutional neural network to provide prior position information for the level set algorithm. (1) Using an object detection network [19-21], the coordinates of the upper left corner and the lower right corner of the detected target are given, so as to obtain the initial contour required by the level set algorithm; (2) using a segmentation network to segment the approximate location area of the target, the initial contour of the level set algorithm is obtained on the basis of this area and then the curve evolution is carried out. We find that the initial positioning error of the object detection network is larger than that of human calibration. The results are shown in Figure 3.

(a) Left ventricular endocardium (b) Left ventricular epicardium

Figure 3. In the figure above, (a) is the result of detecting the left ventricular endocardium and (b) is the result of detecting the left ventricular epicardium. The red border is the ground truth and the green border is the detection result of the object detection network.

After analysis, it can be concluded that the essence of object detection is to use learned network parameters to fit the artificially labeled object region. Because this kind of fitting carries errors, the learned position information also has errors. For the segmentation of the left ventricle, which requires high accuracy, the initial position cannot reach pixel-level accuracy because of the error of the manually labeled ground truth, so it is even more difficult to locate the initial contour by using the network to fit manually labeled position information. Therefore, we exclude the use of a detection network to provide the initial contour for level set evolution.

The output of a segmentation network is a pixel-level classification [9]. We can get the approximate location region of the target at the pixel level from the output of the segmentation network and then use some processing methods to easily obtain the initial contour at the pixel level for the evolution of the level set. In this paper, the output of the segmentation network is used as the prior position information, which is used to initialize the level set contour for curve evolution segmentation of the left ventricular endocardium and epicardium. Next, we will introduce the details of our improved segmentation network and the initial contour post-processing algorithm.

As we can see from Figure 4a, U-Net is named for the "U" shape of its network structure.
On the left side of the network is the encoder, which down-samples the image; on the right side is the decoder, which up-samples the image to recover the original image size. The encoder has four sub-modules, each containing two convolution layers followed by a max pooling layer for down-sampling. The decoder also includes four sub-modules, and the image resolution is increased by up-sampling until it is consistent with the resolution of the input image. In the U-Net network, skip connections are used to connect the up-sampling result in the decoder with the down-sampling sub-module of the same output size in the encoder, and the result is used as the input of the next sub-module in the decoder.


(a)

(b)

(c)

Figure 4. Segmentation network structure: (a) represents the segmentation network structure; (b) represents the "Res Block" in the segmentation network structure; (c) represents the "Attention Gate" in the segmentation network structure.

We set the input image size as 256 × 256. After four down-sampling steps, the final output size of the encoding module is 16 × 16. After four up-sampling steps, the image size can be restored to 256 × 256. After the last feature extraction layer, SoftMax is added for classification. Outputs are divided into three categories, which correspond to the background, ventricle and myocardial wall, respectively. We use cross entropy as the loss function of the network. In order to deepen the network and enhance its ability to extract image features, we replace the convolution module in the original U-Net with the ResNet block [22], as shown in Figure 4b. In the skip connection part of the network, we add the attention mechanism based on spatial region information proposed by Oktay et al. [23], shown in Figure 4c. This attention mechanism can learn to suppress irrelevant areas and focus on useful salient features during training. At the same time, it can greatly improve the accuracy of segmentation. The final segmentation results are shown in Figure 5. It can be seen from the figure that the results of the improved segmentation network are closer to the ground truth.
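The two modules can be sketched in PyTorch as follows. This is an illustrative re-implementation under our own assumptions (channel counts, batch normalization, and matching spatial sizes at the gate), not the authors' released code.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual convolution block used in place of U-Net's plain double convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the identity path matches the output channels
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

class AttentionGate(nn.Module):
    """Additive attention gate on a skip connection (after Oktay et al. [23]);
    gating and skip features are assumed to share the same spatial size here."""
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Conv2d(gate_ch, inter_ch, 1)
        self.w_x = nn.Conv2d(skip_ch, inter_ch, 1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, g, x):
        # g: decoder (gating) feature, x: encoder (skip) feature
        att = self.psi(torch.relu(self.w_g(g) + self.w_x(x)))
        return x * att   # suppress irrelevant regions, keep salient ones
```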


Figure 5. Prior position results ((a-1,b-1,c-1,d-1) represent the original image; (a-2,b-2,c-2,d-2) represent the ground truth; (a-3,b-3,c-3,d-3) represent the original U-Net segmentation results; (a-4,b-4,c-4,d-4) represent the improved U-Net segmentation results).

By calculating the coordinates of the upper left corner and the lower right corner of the prior location map, we can get the location coordinates of the left ventricular epicardium and endocardium. If the coordinate point information is directly mapped to the original image for level set evolution, its accuracy will be affected. Because of characteristics of medical images such as uneven gray level and fuzzy object contours, a level set algorithm that evolves on gradient information will be interfered with. Therefore, further processing is needed to reduce the interference area in the original image and enhance the gradient of the target area. In order to enhance the image information of the target area and reduce the interference of the tissue around the target, we add the pixel values of the original image and the result of the network segmentation and then map the coordinate information to the weighted image, so as to get the processed coordinate information of the left ventricular endocardium and epicardium. The weighting formula is as follows:

$$\alpha I_1 + (1 - \alpha) I_2 = I_3 \qquad (1)$$

where I1 is the prior location image segmented by U-Net, I2 is the original image and α is the weighting factor; in order to ensure that the weighted pixel value is between 0 and 255, α ∈ [0,1]. After testing different α values, as shown in Figure 6, we find that when α ≤ 0.3, the original image is seriously distorted; when α ≥ 0.7, the gray level of the ventricular edge of the superimposed image is uneven; finally, α = 0.5 is selected as the weighting factor.
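A minimal sketch of this weighting step, assuming both inputs are 8-bit images on the same [0, 255] scale:

```python
import numpy as np

def weight_prior(prior_map, image, alpha=0.5):
    """Weighted overlay of Eq. (1): I3 = alpha * I1 + (1 - alpha) * I2.

    With alpha in [0, 1] and both inputs in [0, 255], the result stays in range;
    alpha = 0.5 is the value chosen in the paper.
    """
    i3 = alpha * prior_map.astype(np.float32) + (1.0 - alpha) * image.astype(np.float32)
    return np.clip(i3, 0, 255).astype(np.uint8)
```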


Figure 6. The results of different weighting factors.

3.2. Shape Constraint Contour Evolution

In this section, we will introduce the improved level set algorithm with a shape constraint. We choose the level set model proposed by Li [8] as our basic model, because it is a level set model without re-initialization, which simplifies the steps of curve evolution. On the basis of this model, we add a shape constraint to carry out the curve evolution.

$$E(\phi) = \mu P(\phi) + \mathcal{E}_{g,\lambda,\nu}(\phi) \qquad (2)$$

where φ is the signed distance function and µ > 0 is a constant. P(φ) is the distance regularization term, which forces the level set function to be close to a signed distance function and therefore completely eliminates the need for the costly re-initialization procedure.

$$P(\phi) = \int_{\Omega} \frac{1}{2}\left(|\nabla \phi| - 1\right)^{2} d\Omega \qquad (3)$$

Eg,λ,ν(φ) is the energy function that drives the curve evolution, which is defined by

$$\mathcal{E}_{g,\lambda,\nu}(\phi) = \lambda \mathcal{L}_{g}(\phi) + \nu \mathcal{F}_{g}(\phi) \qquad (4)$$

where λ > 0 and ν are constants, and Lg(φ) and Fg(φ) are respectively defined by

$$\mathcal{F}_{g}(\phi) = \int_{\Omega} g\, H(-\phi)\, d\Omega \qquad (5)$$

$$\mathcal{L}_{g}(\phi) = \int_{\Omega} g\, \delta(\phi)\, |\nabla \phi|\, d\Omega \qquad (6)$$

g is the edge detection function defined by

$$g = \frac{1}{1 + |\nabla G_{\sigma} * I|} \qquad (7)$$

where Gσ is the Gaussian kernel with standard deviation σ.

H is the Heaviside function, which is defined by

$$H(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases} \qquad (8)$$

δ is the Dirac function, which in application is always defined by

$$\delta_{\epsilon}(x) = \begin{cases} 0, & |x| > \epsilon \\ \dfrac{1}{2\epsilon}\left[1 + \cos\!\left(\dfrac{\pi x}{\epsilon}\right)\right], & |x| \le \epsilon \end{cases} \qquad (9)$$

The above is the energy function of the traditional level set model. When segmenting the left ventricle, we find that the gradient force is too small in low-contrast regions and at weak edges, so the original energy function causes "leakage" of the evolution curve, which eventually makes the segmentation result insufficiently smooth and produces the "burr" phenomenon, as shown in Figure 7. Therefore, based on the prior information that the left ventricular membrane is close to a circle, we propose a shape constraint term, which is used to constrain the curve in the evolution process of the level set, make it fit to a circle and reduce the "burr" and "sag" of the curve.
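For concreteness, the functions of Equations (7)-(9) can be written as the following NumPy sketch; the Gaussian standard deviation and the Dirac width ε are illustrative values we chose, not ones specified in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_indicator(image, sigma=1.5):
    """Edge detection function g of Eq. (7): small where the smoothed gradient is large."""
    smoothed = gaussian_filter(image.astype(np.float64), sigma)
    gy, gx = np.gradient(smoothed)
    return 1.0 / (1.0 + np.sqrt(gx ** 2 + gy ** 2))

def heaviside(x):
    """Heaviside function of Eq. (8)."""
    return (x >= 0).astype(np.float64)

def dirac(x, eps=1.5):
    """Smoothed Dirac delta of Eq. (9), zero outside |x| <= eps."""
    d = np.zeros_like(x, dtype=np.float64)
    inside = np.abs(x) <= eps
    d[inside] = (1.0 / (2.0 * eps)) * (1.0 + np.cos(np.pi * x[inside] / eps))
    return d
```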

(a) curves with bulge (b) curves with hollow

Figure 7. “Burr”Figure and 7. “sag”“Burr” in andlevel “sag” set evolution. in level set (a) evolution. represents ( a“burr”;) represents (b) represents “burr”; (b “sag”.) represents “sag”.

Eshape(·) is the prior shape constraint added by our improved algorithm. We know that the shape of the left ventricle membrane is similar to a circle, so we add a circle as the shape constraint term to the level set energy function:

$$(x - x_0)^2 + (y - y_0)^2 = r^2 \qquad (10)$$

$$E_{shape}(\phi) = \iint \left[ r^2 - (x - x_0)^2 - (y - y_0)^2 \right] H(\phi)\, dx\, dy \qquad (11)$$

where x0 and y0 are the coordinates of the center of the circle and r is the radius of the circle. The center coordinates are expressed as x0 = (xmin + xmax)/2 and y0 = (ymin + ymax)/2; the radius of the circle is expressed as r^2 = min{(x0 − xmin)^2, (y0 − ymin)^2}. At the initial time, xmin and ymin are the coordinates of the upper left corner of the initial contour, and xmax and ymax are the coordinates of the lower right corner of the initial contour. Next, with each curve evolution, the fixed center coordinates and radius of the circle will drive the curve to fit towards the minimum inscribed circle of the prior box. When Equation (11) reaches its minimum value, the fitted curve will be approximately circular. The final expression of the energy function of the level set is as follows:

$$E(\phi) = \mu \int_{\Omega} \frac{1}{2}\left(|\nabla \phi| - 1\right)^2 d\Omega + \gamma \int_{\Omega} g\, H(-\phi)\, d\Omega + \lambda \int_{\Omega} g\, \delta(\phi)\, |\nabla \phi|\, d\Omega + \iint \left[ r^2 - (x - x_0)^2 - (y - y_0)^2 \right] H(\phi)\, dx\, dy \qquad (12)$$

Using the flow method to solve the energy function, the curve evo- lution equation can be obtained as follow: 𝜕𝜙 ∇𝜙 ∇𝜙 =𝜇∇𝜙−𝑑𝑖𝑣 + 𝜆𝛿(𝜙)𝑑𝑖𝑣 𝑔 +𝛾𝑔𝛿(𝜙) 𝜕𝑡 |∇𝜙| |∇𝜙| (13) + 𝛿(𝜙)𝑟 − (𝑥−𝑥) − (𝑦−𝑦) Electronics 2021, 10, 707 9 of 13

Using the gradient descent flow method to solve the energy function, the curve evolution equation can be obtained as follows:

$$\frac{\partial \phi}{\partial t} = \mu \left[ \nabla^2 \phi - \mathrm{div}\!\left( \frac{\nabla \phi}{|\nabla \phi|} \right) \right] + \lambda \delta(\phi)\, \mathrm{div}\!\left( g \frac{\nabla \phi}{|\nabla \phi|} \right) + \gamma g \delta(\phi) + \delta(\phi) \left[ r^2 - (x - x_0)^2 - (y - y_0)^2 \right] \qquad (13)$$
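Numerically, one explicit update of Equation (13) might look like the following sketch. The central-difference discretization is our own choice, the parameter defaults follow the suggested values in Section 4.2, and the `dirac` helper from the earlier sketch (Eq. (9)) is reused; g is the edge indicator image of Eq. (7).

```python
import numpy as np

def circle_from_box(x_min, y_min, x_max, y_max):
    """Center and radius of the shape constraint circle from the prior box (Section 3.2)."""
    x0, y0 = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    r = min(x0 - x_min, y0 - y_min)          # minimum inscribed circle of the box
    return x0, y0, r

def evolve_step(phi, g, box, mu=0.01, lam=5.0, gamma=3.0, dt=5.0, eps=1.5):
    """One explicit update of Eq. (13); gamma plays the role of the area-term weight
    (the value 3.0 mirrors the nu = 3.0 suggested in Section 4.2)."""
    h, w = phi.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x0, y0, r = circle_from_box(*box)

    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    lap = (np.gradient(np.gradient(phi, axis=0), axis=0)
           + np.gradient(np.gradient(phi, axis=1), axis=1))        # Laplacian of phi
    kappa = (np.gradient(gy / norm, axis=0)
             + np.gradient(gx / norm, axis=1))                     # div(grad(phi)/|grad(phi)|)
    kappa_g = (np.gradient(g * gy / norm, axis=0)
               + np.gradient(g * gx / norm, axis=1))               # div(g * grad(phi)/|grad(phi)|)

    d = dirac(phi, eps)                                            # smoothed Dirac of Eq. (9)
    shape = r ** 2 - (xx - x0) ** 2 - (yy - y0) ** 2               # shape constraint term
    dphi = mu * (lap - kappa) + lam * d * kappa_g + gamma * g * d + d * shape
    return phi + dt * dphi
```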

4. Experiments

4.1. Dataset

The data set used was provided by MICCAI 2009. The image data were randomly selected from the clinical database of the Sunnybrook Health Science Center. All image sequences were divided into 20 cardiac phases in the temporal dimension. A total of 6-12 layers of short-axis images were collected from the atrioventricular ring to the apex. The slice thickness was 8-10 mm, the field of view was 320 mm × 320 mm and the matrix was 256 × 256. There were 45 cases in the whole data set, which were evenly divided into three groups. Each group of 15 cases contained four types of heart images, namely four heart failure with ischemia (HF-I), four heart failure without ischemia (HF-NI), four cardiac hypertrophy (HYP) and three normal (N) cases. In order to obtain a robust result and eliminate over-fitting, we use k-fold cross validation (k = 5) to train and test on our data set.
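A sketch of the 5-fold split over the 45 cases is shown below; the use of scikit-learn's KFold and the fixed random seed are our own illustration, as the paper does not state how the split was implemented.

```python
import numpy as np
from sklearn.model_selection import KFold

cases = np.arange(45)                      # 45 subjects in the MICCAI 2009 / Sunnybrook set
kfold = KFold(n_splits=5, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(kfold.split(cases)):
    train_cases, test_cases = cases[train_idx], cases[test_idx]
    # train the prior network and evaluate the full pipeline on this fold
    print(f"fold {fold}: {len(train_cases)} train cases, {len(test_cases)} test cases")
```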

4.2. Implementation Details

In this paper, the segmentation network architecture shown in Figure 4 is used to segment the left ventricle to obtain the prior position information; then, the coordinates of the corresponding points are obtained by processing the segmented area, and we use these coordinates to initialize the initial contours of the left ventricular endocardium and epicardium, respectively; in the level set evolution stage, we add a shape constraint term to regularize the level set evolution, which makes it fit the contour curve into a circle as much as possible. The algorithm is implemented in Python and PyTorch and runs on a computer with a Core i9-9900KF @ 3.6 GHz (Sichuan, China), 16 GB RAM and a single NVIDIA GTX 1080Ti (Fujian, China). In the segmentation network training stage, stochastic gradient descent is selected as the optimization method of the model. The initial learning rate is 0.1 and it decays by a factor of 0.9 every two epochs. A total of 50 epochs are trained and the batch size of each iteration is 8. In the evolution stage of the level set, referring to the parameter settings of the curve evolution equation in ref. [8], the parameters in the curve evolution equation are λ = 5.0, µ = 0.01, ν = 3.0 and the evolution time interval ∆t = 5 (they are only suggested values and can be adjusted according to the actual situation). x0 and y0 are determined by the prior position map and differ for each image; r is determined by the prior box of the prior position map. When the prior box is determined, the center and radius of the fitting circle are determined accordingly.
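The training setup described above might look as follows in PyTorch. The stand-in model and random tensors only make the sketch runnable; in practice they are the improved U-Net of Figure 4 and the MICCAI 2009 images and labels, and reading "decay index 0.9 every two epochs" as a step decay of the learning rate is our interpretation.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Stand-in model and random data so the sketch runs end to end.
model = nn.Sequential(nn.Conv2d(1, 3, 3, padding=1))          # placeholder for the improved U-Net
images = torch.randn(16, 1, 256, 256)
labels = torch.randint(0, 3, (16, 256, 256))                   # background / ventricle / myocardial wall
train_loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)

criterion = nn.CrossEntropyLoss()                              # pixel-wise cross entropy
optimizer = optim.SGD(model.parameters(), lr=0.1)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.9)  # decay 0.9 every 2 epochs

for epoch in range(50):                                        # 50 epochs, batch size 8
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()
```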

4.3. Evaluation Method

In order to measure the similarity between the segmentation results of our algorithm and the ground truth, the average perpendicular distance (APD) is used to compare the differences between the contours. The smaller the value is, the closer the contours are. As we can see from Figure 8: suppose A1, A2, A3 are contour points segmented by the algorithm and M1, M2, M3 are contour points of the ground truth. L1 is a line fitted to A1, A2, A3; line L2 is perpendicular to L1 and passes through A2. L3 is a straight line fitted to the three points M1, M2, M3; L2 intersects L3 at point D, and A2D is the required perpendicular distance. The APD is the average of the perpendicular distances calculated over multiple groups of contour points.
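A common practical approximation of APD measures, for each algorithm contour point, the distance to the nearest ground truth contour point; the sketch below uses this simplified nearest-point form rather than the exact perpendicular-line construction described above, and the pixel-to-millimeter spacing is an assumed input.

```python
import numpy as np

def average_perpendicular_distance(contour_a, contour_m, spacing_mm=1.0):
    """Approximate APD between an algorithm contour and a ground truth contour.

    contour_a, contour_m: arrays of shape (N, 2) and (M, 2) with (x, y) point coordinates.
    Returns the mean nearest-point distance in millimeters.
    """
    diff = contour_a[:, None, :] - contour_m[None, :, :]        # pairwise coordinate differences
    dist = np.sqrt((diff ** 2).sum(axis=-1))                    # (N, M) distance matrix
    return float(dist.min(axis=1).mean() * spacing_mm)
```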


Figure 8. Perpendicular distance diagram.

In order to measure the region segmented by the algorithm, we extract the region surrounded by the left ventricular endocardium and epicardium and convert it into a binary image. We use some common criteria to measure the differences between our algorithm and other algorithms [24]. They are sensitivity, positive predictive value (PPV), Dice score (DSC), area overlap error (AOE) and relative area difference (RAD).

$$\mathrm{Sensitivity} = \frac{\text{True Positive}}{\text{True Positive} + \text{False Negative}} \qquad (14)$$

$$\mathrm{PPV} = \frac{\text{True Positive}}{\text{True Positive} + \text{False Positive}} \qquad (15)$$

$$\mathrm{DSC} = \frac{2 \times \text{True Positive}}{2 \times \text{True Positive} + \text{False Positive} + \text{False Negative}} \qquad (16)$$

$$\mathrm{AOE} = 1 - \frac{\text{True Positive}}{\text{True Positive} + \text{False Positive} + \text{False Negative}} \qquad (17)$$

$$\mathrm{RAD} = \frac{|\text{False Negative} - \text{False Positive}|}{2 \times \text{True Positive} + \text{False Positive} + \text{False Negative}} \qquad (18)$$

5. Results

From the perspective of the segmentation contour (as shown in Figure 9): Figure 9a-1,b-1,c-1,d-1 represent the results of the epicardium and Figure 9a-2,b-2,c-2,d-2 represent the results of the endocardium. As can be seen from the figure below, compared with the original algorithm, the contour of the evolution curve is smoother after adding the prior condition, and the "burr" and "sag" phenomena are obviously alleviated. Due to the use of pixel weighting, the gradient information of the image is enhanced, so the evolution problem caused by uneven gray level can be significantly reduced in the segmentation of the left ventricular inner wall. In order to prove the effectiveness of the algorithm, we randomly extract some data from the test set, calculate their APD and report them in Table 1.
As can be seen from the table, compared with the DRLSE APDs of 3.05 mm for the epicardium and 2.76 mm for the endocardium, the APDs of the proposed algorithm are 1.40 mm and 1.28 mm, and its segmentation contour is closer to the contour of the ground truth, which proves that the improved algorithm is more accurate in contour fitting.


Figure 9. Comparison chart of curve evolution. (a-1,b-1,c-1,d-1) represent the results of the epicardium; (a-2,b-2,c-2,d-2) represent the results of the endocardium; the manual segmentation (red), DRLSE (green) and improved algorithm (blue) are superimposed.

Table 1. APD (mm) index of different algorithms.

| Subject | Epicardium DRLSE | Epicardium Ours | Endocardium DRLSE | Endocardium Ours |
|---|---|---|---|---|
| HF-I-02 | 2.97 | 1.34 | 2.77 | 1.23 |
| HF-I-06 | 3.08 | 1.41 | 2.96 | 1.31 |
| HYP-03 | 4.43 | 1.69 | 3.99 | 1.45 |
| HYP-05 | 4.11 | 1.98 | 3.71 | 1.76 |
| HF-NI-15 | 2.88 | 1.32 | 2.61 | 1.13 |
| HF-NI-21 | 2.90 | 1.24 | 2.69 | 1.01 |
| N-01 | 1.97 | 1.07 | 1.29 | 1.15 |
| N-03 | 2.12 | 1.17 | 2.09 | 1.21 |
| Mean | 3.05 | 1.40 | 2.76 | 1.28 |

In order to further prove the effectiveness of the algorithm, this paper compares the improved algorithm with several other left ventricular segmentation algorithms and calculates the sensitivity, PPV, DSC, AOE and RAD corresponding to the average value of their segmentation results, as shown in Tables 2 and 3. LBF [25] and LCV [26] are level set algorithms; U-Net and attention U-Net are convolutional neural networks. Compared with these models (all of which are open source), the proposed algorithm has higher accuracy in the segmentation of the left ventricular endocardium and epicardium.

Table 2. Evaluation criteria results of left ventricular epicardium.

SensitivityTable (%) 2. Evaluation PPV criteria (%) results of left DSC ventricular (%) epicardium. AOE (%) RAD (%) Subject Mean ±stdSensitivity Mean (%) ±std PPV Mean (%) ±std DSC Mean(%) ± AOEstd (%) Mean RAD±std (%) Subject LBF [25] 75.72 1.95Mean 81.27 ±std 2.31 Mean 77.97 ±std 3.11Mean 36.06±std Mean 2.59 ±std 21.83 Mean 1.95 ±std LCV [26] 73.95 1.62 80.35 2.97 77.88 3.38 36.12 4.60 21.92 3.43 U-NetLBF [11] [25] 86.87 1.47 75.72 90.84 1.95 2.37 81.27 87.78 2.31 3.5377.97 21.623.11 36.06 5.66 2.59 11.71 21.83 3.86 1.95 Attention U-NetLCV [[26]23] 87.44 1.01 73.95 93.30 1.62 2.58 80.35 90.88 2.97 1.9677.88 16.663.38 36.12 2.98 4.60 8.62 21.92 1.99 3.43 OursU-Net [11] 89.59 1.33 86.87 95.43 1.47 3.21 90.84 92.95 2.37 2.0187.78 14.773.53 21.62 3.12 5.66 7.56 11.71 2.11 3.86 Attention U-Net [23] 87.44 1.01 93.30 2.58 90.88 1.96 16.66 2.98 8.62 1.99 Ours 89.59 1.33 95.43 3.21 92.95 2.01 14.77 3.12 7.56 2.11

Electronics 2021, 10, 707 12 of 13

Table 3. Evaluation criteria results of left ventricular endocardium.

| Subject | Sensitivity (%) | PPV (%) | DSC (%) | AOE (%) | RAD (%) |
|---|---|---|---|---|---|
| LBF [25] | 75.08 ± 1.43 | 80.63 ± 1.99 | 79.53 ± 1.66 | 26.95 ± 1.43 | 19.41 ± 2.01 |
| LCV [26] | 74.53 ± 1.51 | 80.78 ± 2.21 | 79.22 ± 1.98 | 27.39 ± 2.02 | 19.96 ± 2.48 |
| U-Net [11] | 88.25 ± 1.42 | 89.32 ± 2.07 | 88.92 ± 2.15 | 16.48 ± 4.27 | 10.69 ± 3.56 |
| Attention U-Net [23] | 90.29 ± 1.33 | 94.61 ± 2.13 | 93.01 ± 1.38 | 12.69 ± 1.95 | 7.92 ± 2.67 |
| Ours | 91.78 ± 1.25 | 95.37 ± 2.41 | 94.43 ± 1.59 | 10.52 ± 2.26 | 6.96 ± 2.94 |

Values are mean ± std.
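For reference, the region criteria of Equations (14)-(18) reported in Tables 2 and 3 can be computed from a pair of binary masks as in the following sketch; this is our own illustration, not the authors' evaluation code, and it assumes non-degenerate masks (no division by zero).

```python
import numpy as np

def region_criteria(pred, truth):
    """Sensitivity, PPV, DSC, AOE and RAD (Eqs. (14)-(18)) from two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return {
        "sensitivity": tp / (tp + fn),
        "ppv": tp / (tp + fp),
        "dsc": 2 * tp / (2 * tp + fp + fn),
        "aoe": 1 - tp / (tp + fp + fn),
        "rad": abs(int(fn) - int(fp)) / (2 * tp + fp + fn),
    }
```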

6. Conclusions

This paper proposes a level set segmentation algorithm based on prior information. First, a deep learning segmentation network is trained to obtain the prior position information of the left ventricle, and the output of the trained network is used as the initial contour for the level set evolution. In order to enhance the gradient information of the ventricle, we use the pixel weighting method to enhance the left ventricle contour in the original image. In the evolution stage of the level set, a shape constraint is added to drive the curve to fit to a circle, so as to reduce the "burr" and "sag" problems in the curve evolution process. Finally, the improved algorithm obtains better segmentation results. Because the image operations in this paper are based on 8-bit digital image processing and the original medical images are 16-bit, there are some errors in the conversion process. Therefore, follow-up work will consider operating directly on the 16-bit medical images to further improve the accuracy of the segmentation results.

Author Contributions: Conceptualization, L.X. and Y.Z.; methodology, L.X.; software, Y.Z.; validation, L.X., Y.Z. and H.Y.; formal analysis, X.Z.; investigation, H.Y.; resources, X.Z.; data curation, X.Z.; writing—original draft preparation, L.X.; writing—review and editing, Y.Z.; visualization, Y.Z.; supervision, X.Z.; project administration, X.Z.; funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding: This work was supported by the Natural Science Foundation of Shanghai (Grant No. 17ZR1443500), the Joint Funds of the National Natural Science Foundation of China (Grant No. U1831133) and the National Natural Science Foundation of China (Grant No. 61701296).

Data Availability Statement: Data available in a publicly accessible repository that does not issue DOIs. Publicly available datasets were analyzed in this study. This data can be found here: http://www.miccai.org/ (accessed on 16 March 2021).

Conflicts of Interest: The authors declare that they have no conflict of interest with this work. We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.

References

1. Huang, F.; Connelly, P.W.; Prasad, G.V.R.; Nash, M.M.; Gunaratnam, L.; Yan, A.T. Evaluation of Left Atrial Remodeling in Kidney Transplant Patients Using Cardiac Magnetic Resonance Imaging. J. Nephrol. 2020, 1–9. [CrossRef] [PubMed]
2. Cha, M.J.; Lee, J.H.; Jung, H.N.; Kim, Y.; Choe, Y.H.; Kim, S.M. Cardiac Magnetic Resonance-tissue Tracking for the Early Prediction of Adverse Left Ventricular Remodeling after ST-segment Elevation Myocardial Infarction. Int. J. Cardiovasc. Imaging 2019, 35, 2095–2102. [CrossRef] [PubMed]
3. Comelli, A.; Dahiya, N.; Stefano, A.; Benfante, V.; Gentile, G.; Agnese, V.; Raffa, G.M.; Pilato, M.; Yezzi, A.; Petrucci, G.; et al. Deep Learning Approach for the Segmentation of Aneurysmal Ascending Aorta. Biomed. Eng. Lett. 2020, 11, 15–24. [CrossRef]
4. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional Neural Networks: An Overview and Application in Radiology. Insights Imaging 2018, 9, 611–629. [CrossRef] [PubMed]
5. Osher, S.; Sethian, J.A. Fronts Propagating with Curvature-dependent Speed: Algorithms Based on Hamilton-Jacobi Formulations. J. Comput. Phys. 1988, 79, 12–49. [CrossRef]
6. Mumford, D.B.; Shah, J. Optimal Approximations by Piecewise Smooth Functions and Associated Variational Problems. Commun. Pure Appl. Math. 1989, 42, 577–685. [CrossRef]
7. Chan, T.F.; Vese, L.A. Active Contours without Edges. IEEE Trans. Image Process. 2001, 10, 266–277. [CrossRef] [PubMed]

8. Li, C.; Xu, C.; Gui, C.; Fox, M.D. Level Set Evolution without Re-initialization: A New Variational Formulation. In Proceedings of the 2005 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–25 June 2005; pp. 430–436.
9. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
10. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI); Springer: Cham, Switzerland, 2015; pp. 234–241.
11. Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: A Nested U-Net Architecture for Medical Image Segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; pp. 3–11.
12. Li, X.; Chen, H.; Qi, X.; Dou, Q.; Fu, C.; Heng, P.A. H-DenseUNet: Hybrid Densely Connected U-Net for Liver and Tumor Segmentation from CT Volumes. IEEE Trans. Med. Imaging 2018, 37, 2663–2674. [CrossRef] [PubMed]
13. Ibtehaz, N.; Rahman, M.S. MultiResUNet: Rethinking the U-Net Architecture for Multimodal Biomedical Image Segmentation. Neural Netw. 2020, 121, 74–87. [CrossRef] [PubMed]
14. Kim, Y.; Kim, S.; Kim, T.; Kim, C. CNN-based Semantic Segmentation Using Level Set Loss. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa Village, HI, USA, 7–11 January 2019; pp. 1752–1760.
15. Hatamizadeh, A.; Hoogi, A.; Sengupta, D.; Lu, W.; Terzopoulos, D. Deep Active Lesion Segmentation. In International Workshop on Machine Learning in Medical Imaging (MLMI); Springer: Cham, Switzerland, 2019; pp. 98–105.
16. Kim, B.; Ye, J.C. Mumford–Shah Loss Functional for Image Segmentation with Deep Learning. IEEE Trans. Image Process. 2019, 29, 1856–1866. [CrossRef] [PubMed]
17. Chen, X.; Williams, B.M.; Vallabhaneni, S.R.; Zheng, Y. Learning Active Contour Models for Medical Image Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 11632–11640.
18. Comelli, A. Fully 3D Active Surface with Machine Learning for PET Image Segmentation. J. Imaging 2020, 6, 113. [CrossRef]
19. Adarsh, P.; Rathi, P.; Kumar, M. YOLO v3-Tiny: Object Detection and Recognition Using One Stage Improved Model. In Proceedings of the IEEE Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 6–7 March 2020; pp. 687–694.
20. Tan, M.X.; Pang, R.M.; Le, Q.V. EfficientDet: Scalable and Efficient Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 10778–10787.
21. Li, Y.; Lv, C. SS-YOLO: An Object Detection Algorithm Based on YOLOv3 and ShuffleNet. In Proceedings of the IEEE Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China, 12–14 June 2020; pp. 769–772.
22. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
23. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning Where to Look for the Pancreas. arXiv 2018, arXiv:1804.03999.
24. Eelbode, T.; Bertels, J.; Berman, M.; Vandermeulen, D.; Maes, F.; Bisschops, R.; Blaschko, M.B. Optimization for Medical Image Segmentation: Theory and Practice When Evaluating with Dice Score or Jaccard Index. IEEE Trans. Med. Imaging 2020, 39, 3679–3690. [CrossRef] [PubMed]
25. Chen, H.; Yu, X.; Wu, C. Active Contour Model Based on Partition Entropy and Local Fitting Energy. In Proceedings of the IEEE Chinese Control and Decision Conference (CCDC), Shenyang, China, 9–11 June 2018; pp. 3501–3506.
26. Zou, L.; Song, L.T.; Wang, X.F. A Fast Algorithm for Image Segmentation Based on Local Chan Vese Model. In International Conference on Intelligent Computing (ICIC); Springer: Cham, Switzerland, 2018; pp. 54–60.