
Gaussian Curvature Filter on 3D Meshes

Wenming Tang, Yuanhao Gong, Kanglin Liu, Jun Liu, Wei Pan, Bozhi Liu, and Guoping Qiu*

• Wenming Tang and Yuanhao Gong equally contributed to this work.
• Wenming Tang, Yuanhao Gong, Kanglin Liu, Jun Liu and Bozhi Liu are with the College of Information Engineering, Shenzhen University, Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, China. E-mail: [email protected], [email protected], [email protected], [email protected], [email protected].
• Wei Pan is with the School of Mechanical & Automotive Engineering, South China University of Technology, and the Department of Research and Development, OPT Machine Vision Tech Co., Ltd, Jinsheng Road, Chang'an, Dongguan 523860, Guangdong, China. E-mail: [email protected].
• Guoping Qiu (corresponding author) is with the College of Information Engineering, Shenzhen University, Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, China, and also with the School of Computer Science, University of Nottingham, Nottingham NG8 1BB, U.K. E-mail: [email protected].

Abstract—Minimizing the Gaussian curvature of meshes can play a fundamental role in 3D mesh processing. However, there is a lack of computationally efficient and robust Gaussian curvature optimization methods. In this paper, we present a simple yet effective method that can efficiently reduce Gaussian curvature for 3D meshes. We first present the mathematical foundation of our method. Then, we introduce a simple and robust implicit Gaussian curvature optimization method named the Gaussian Curvature Filter (GCF). GCF implicitly minimizes Gaussian curvature without the need to explicitly calculate the Gaussian curvature itself. GCF is highly efficient and can be used in a large range of applications that involve Gaussian curvature. We conduct extensive experiments to demonstrate that GCF significantly outperforms state-of-the-art methods in minimizing Gaussian curvature and in geometric feature preserving smoothing on 3D meshes. The GCF program is available at https://github.com/tangwenming/GCF-filter.

Index Terms—Gaussian curvature, filter, mesh smoothing, feature preserving.

1 INTRODUCTION

Among various representations of 3D models, triangular meshes are perhaps the most popular. Triangular meshes usually contain two parts. The first part is a set of vertices, representing the 3D spatial locations of the model. The other part is a set of triangular faces that indicate the connectivity between vertices. With the topological information of adjacent vertices, triangular meshes can represent the geometric details of a surface. Automatic 3D mesh generation has made great progress in the past few years. There are several ways to generate triangular meshes, such as interactive design in CAD software, 3D scanning, and end-to-end generation. In 3D scanning, a scanner can automatically obtain the 3D coordinates of the surface to produce a high-quality 3D mesh [1]. In end-to-end methods, 2D images are used to train a neural network, which generates the corresponding 3D mesh [2]. Unfortunately, the 3D meshes obtained through these technologies are often noisy. As a result, the obtained 3D mesh cannot be directly used in practice. For this reason, smoothing methods for 3D meshes become indispensable.

Fig. 1. Gaussian curvature filter on the noisy Vase mesh (Iter = 0, 50, 100).

In the literature, various methods have been developed. These methods can be categorized into three types: optimization [3], [4], [5], [6], [7], [8], training [9], [10], [11], and filtering [12], [13], [14], [15], [16], [17]. Optimization-based methods for mesh smoothing need some manually set parameters, which often need to be optimized iteratively to satisfy the assumed regularization. Jones et al. [4] proposed a method that captures the smoothness of a surface by defining local first-order predictors. He and Schaefer [5] proposed an L0 minimization method that maximizes the flat regions of the model and removes noise while preserving sharp features. This method is very practical for models with rich flat features. Such methods are generally time consuming and sometimes do not converge for certain given parameter values [9]. Training-based methods need sufficient training data and learn the mapping relationship between the noisy model and the ground truth model. The trained network can achieve model denoising and feature preservation for noise distributions similar to those of the training models. However, the disadvantage of the training-based approach is that it is difficult to find sufficient models to train the network parameters and to obtain good generalization capabilities for different meshes and different noise levels. Filter-based methods are implemented based on mutual constraints between vertex normals and face normals. Zheng et al. [13] treated the vertex position update as a quadratic optimization problem based on the two fields. Sun et al. [12] proposed a two-stage denoising method: first, the face normals are filtered as the weighted average of the neighboring face normals of each face, and then the vertices are updated according to the filtered face normals. Those methods use global or local statistics, and may also rely on several manually set parameters. Setting different parameters is conducive to better filtering a specific model, but it also brings inconvenience in use.

Curvature is an important geometric feature of surfaces. It is often used as an important tool for surface analysis and processing. The literature has reported that using curvature features on 3D mesh surfaces can achieve good results in surface fairing [18]. They designed a diffusion equation whose diffusion direction depends on the normal, and its magnitude is a defined function of Gaussian curvature. Eigensatz et al. [19] proposed a 3D geometry processing framework to achieve 3D mesh filtering and editing by utilizing the curvature distribution of the surface. Gaussian curvature is a specific type of curvature. It is an intrinsic measurement of surfaces. It has been applied to images and 3D meshes [20], [21], [22].

A 3D mesh that contains noise has higher Gaussian curvature in absolute value than its corresponding noise-free model. Therefore, reducing the curvature energy can smooth or denoise the meshes [21], [23]. Based on this observation, we can formulate the problem of noise removal on 3D meshes as that of reducing the Gaussian curvature.

However, minimizing Gaussian curvature is challenging. It is traditionally carried out by Gaussian curvature flow [18]. This method requires the explicit computation of Gaussian curvature. Although Taubin [24] proposed a method for explicitly estimating the Gaussian curvature on closed manifold meshes, in order to optimize the Gaussian curvature of a model it is not the best choice to explicitly calculate the Gaussian curvature of each vertex. Another problem with Gaussian curvature flow is that the time step has to be small to ensure numerical stability [18]. As a result, such geometric flow is time consuming. These two issues hamper the application of Gaussian curvature on meshes.

In order to solve the above-mentioned problems in 3D mesh processing, we propose a simple, easy to implement, and robust filter that can efficiently minimize Gaussian curvature. Our method can effectively remove noise and preserve geometric features as illustrated in Fig. 1. Different from most existing methods that have many free parameters, our method has only one. The contributions of this paper are as follows:

• We propose a simple and robust implicit Gaussian curvature optimization method, which we call the Gaussian Curvature Filter (GCF). GCF does not need to explicitly calculate the Gaussian curvature.
• We develop a Gaussian curvature optimization algorithm using a 1-ring neighborhood to preserve the model's original geometric features and simultaneously optimize the Gaussian curvature.
• Our algorithm has only one parameter - the number of iterations. Our method is more robust than existing methods and outperforms the state-of-the-art qualitatively and quantitatively.

2 RELATED WORK

Before presenting our method that minimizes Gaussian curvature energy and preserves geometric features, we discuss the related work from three aspects: Gaussian curvature in image processing, Gaussian curvature in mesh processing, and finally, mesh smoothing capability and the feature preservation property.

2.1 Gaussian curvature in image processing

Researchers have made great progress in image smoothing using the geometric properties of Gaussian curvature in past decades. Lee et al. [22] designed a Gaussian curvature-driven diffusion equation for image noise removal. This method can maintain boundaries and some details better than mean curvature. Jidesh et al. [25] proposed Gaussian curvature to guide image smoothing with fourth-order partial differential equations (PDE). It works for image smoothing and maintains curved edges, slopes, and corners.

The calculation formula of Gaussian curvature is generally complicated and has some numerical issues. To overcome these issues, researchers found simple filters to optimize Gaussian curvature. Gong et al. [26] proposed a locally weighted Gaussian curvature as a regularized variational model and designed a closed-form solution. It has achieved excellent results in image smoothing, texture decomposition and image sharpening. They further proposed an optimization method for regularizers based on Gaussian curvature, mean curvature and total variation [21]. These pixel-local filters can be used to efficiently reduce the energy of the entire model and thus significantly reduce computational complexity, because there is no need to explicitly calculate the Gaussian curvature itself.

2.2 Gaussian curvature in mesh processing

In 3D geometry, researchers have found some approximation methods to calculate Gaussian curvature for discrete meshes [27], [28]. Eigensatz et al. [19] proposed a framework for 3D geometry processing that provides direct access to surface curvature to facilitate advanced shape editing, filtering, and synthesis algorithms. This algorithmic framework is widely used in geometric processing, including smoothing, feature enhancement, and multi-scale curvature editing. There have been some works [29], [30], [31], [32] about the application of Gaussian curvature in the field of developable 3D meshes. For example, Stein et al. [29] used a variational approach that drives a given mesh toward developable pieces separated by regular seam curves. The partial developability of a mesh makes the mesh convenient for industrial manufacturing. Similarly, the real-time performance of that algorithm needs to be greatly improved. There have been several works based on curvature flow [18], [33], [34] in the field of 3D mesh optimization. For example, Zhao et al. [18] applied Gaussian curvature flow to mesh fairing. They designed a diffusion equation whose evolution direction relies on the normal and whose step size is a manually defined function of Gaussian curvature. The corner and edge features of the mesh are preserved during fairing. However, the Gaussian curvature of each vertex is explicitly calculated, and the computational complexity is too high.

Among Gaussian curvature optimization methods, the Gaussian curvature flow is the most classical one. The Gaussian curvature flow method relies on high-precision Gaussian curvature calculations. It also requires the time step size to be small to ensure numerical stability. If the step size is set too large, the algorithm may be unstable. If the step size is too small, the convergence speed is slow [18]. How to set a reasonable step size is an open problem.

2.3 Mesh smoothing and feature preservation

There are many types of 3D mesh smoothing and feature preservation methods. Here, we mainly discuss the optimization-based and filter-based state-of-the-art methods.

Optimization-based methods achieve global optimization constrained by priors on the ground truth geometry and noise distribution. He et al. [5] proposed an L0 minimization based method that achieves smoothing by maximizing the flat regions of the model. In a model with rich planar features, this method can preserve some sharp geometric features during the smoothing. Wang et al. [6] implement smoothing and feature preservation in two steps. First, a global Laplace optimization algorithm is used to denoise, and then an L1-analysis compressed sensing optimization is used to recover sharp features.

Filter-based methods are commonly implemented by moving the vertex position along the vertex or face normal. In [14], [15], [16], researchers perform the smoothing and feature preservation by moving the vertex positions of the model. The vertex is moved along the normal direction, and the moving step size is an empirical parameter. Lu et al. [16] construct geometric edges by extracting geometric features of the input model, and iteratively optimize vertex positions for smoothing and feature preservation guided by the geometric edges. In [8], [15], [16], [17], [35], iterative optimization of the face normals is used as a guide for smoothing and feature preservation. Li et al. [17] present a non-local low-rank normal filtering method. Smoothing and feature preservation of synthetic and real scanned models are achieved by guided normal patch covariance and low-rank matrix approximation.

Most existing smoothing and feature preservation algorithms have the following problems: 1) excessive dependence on a priori assumptions (for example, edge and corner features), thus resulting in many parameters to be manually set; 2) it is difficult to find the optimal parameters; 3) for methods based on face normal filtering, the original features are easily damaged while smoothing texture-rich models.

3 GAUSSIAN CURVATURE FILTER ON MESH

In this section, we show a simple iterative filter that can efficiently reduce Gaussian curvature for meshes. Meanwhile, our method preserves geometric features of the input mesh during the optimization process. We show a mathematical theory behind this filter, which guarantees to reduce Gaussian curvature.

3.1 Variational Energy

In many applications, reducing Gaussian curvature is usually imposed by the following variational model (see [23], page 131, formula 6.2):

$$\arg\min_{\{v'_i\}} \sum_{i=1}^{N} \left[ \frac{1}{2}\,(v'_i - v_i)^2 + \lambda\,|K(v'_i)| \right], \qquad (1)$$

where $\{v_i\}$ are the input vertices, $N$ is the number of vertices, $v'_i$ is the desired output, $K(v'_i)$ is the Gaussian curvature at $v'_i$, and $\lambda > 0$ is a scalar parameter that is usually related to the noise level. The first quadratic term measures the similarity between the input and the output. The second term measures the Gaussian curvature energy of the output mesh. The main challenge in this model is how to efficiently minimize the Gaussian curvature. The definition of a "discrete Gaussian curvature" on a triangle mesh is via a vertex's angular deficit [36]:

$$K(v'_i) = \Big(2\pi - \sum_{j\in N(i)} \theta_{ij}\Big) \Big/ A_{N(i)}, \qquad (2)$$

where $N(i)$ are the triangles incident on vertex $i$, $\theta_{ij}$ is the angle at vertex $i$ in triangle $j$, and $A_{N(i)}$ is the sum of the areas of the $N(i)$ triangles [36].

According to [37], theorem 1.15, the Gaussian curvature energy (GCE) is defined as

$$E_{GC}(\{v'_i\}) = \sum_{i=1}^{N} |K(v'_i)|. \qquad (3)$$

This energy measures the developability of the mesh $\{v'_i\}$. Different from Eq. 1, this energy does not consider the similarity between the output $\{v'_i\}$ and the input $\{v_i\}$. Therefore, only minimizing the Gaussian curvature energy does not preserve the geometric features of the input mesh during the optimization. We will discuss how to minimize Gaussian curvature and preserve geometric features in our method.

When $E_{GC} = 0$, it is clear that $K = 0$ everywhere on the surface. Such a surface is called a developable surface, which can be mapped to a plane without any distortion. That is why it is called "developable". Reducing the Gaussian curvature on the surface is trying to make the surface developable. Developable surfaces can be easily manufactured and produced in industry [29]. This is one reason that minimizing Gaussian curvature is an important topic. Although we know that developable surfaces are very useful, this is not the focus of this paper. This paper does not aim to obtain a developable surface, but to design an implicit optimization of the Gaussian curvature that achieves smoothing and feature preservation of the model.

3.2 Mathematical Foundation

For any developable surface $S$ (Gaussian curvature is zero everywhere on the surface), we denote $TS$ as its tangent space. We have the following theorem:

Theorem 1. $\forall \vec{x} \in S$, $\forall \epsilon > 0$, $\exists \vec{x}_0 \in S$, $0 < |\vec{x} - \vec{x}_0| < \epsilon$, s.t. $\vec{x}_0 \in TS(\vec{x})$.

Proof. Let $\vec{x} = \vec{r}(u, v) \in S$, where $\vec{r} = (x, y, z) \in \mathbb{R}^3$ and $(u, v)$ is the parametric coordinate. Since $S$ is developable, $\vec{r}(u, v)$ can be represented as $\vec{r}(u, v) = \vec{r}_A(u) + v\,\vec{r}_B(u)$ [38], where $\vec{r}_A(u)$ is the directrix and $\vec{r}_B(u)$ is a unit vector. Let $\vec{x}_0 = \vec{r}(u, v_0) \in S$, where $v_0 = v + \epsilon$ and $\epsilon \neq 0$; then $\vec{x}_0 = \vec{r}_A(u) + (v + \epsilon)\,\vec{r}_B(u)$. For two arbitrary scalars $\alpha_1$ and $\alpha_2$, the tangent plane at $\vec{x}$ is

$$TS(\vec{x}) = \vec{r} + \alpha_1 \frac{d\vec{r}}{du} + \alpha_2 \frac{d\vec{r}}{dv} = \vec{r} + \alpha_1 \frac{d\vec{r}}{du} + \alpha_2\, \vec{r}_B(u). \qquad (4)$$

Because of Eq. 4, $\vec{x}_0$ is on the plane that passes through $\vec{x}$ and is spanned by the two vectors $\frac{d\vec{r}}{du}$ and $\vec{r}_B$. Therefore, $\vec{x}_0 \in TS(\vec{x})$.
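The angular-deficit curvature of Eq. 2 and the energy of Eq. 3 can be evaluated directly from a vertex and face list. The following Python sketch is our own illustration of these two formulas and not the authors' released GCF code; the function names are ours, and boundary vertices (where the deficit should be measured against pi rather than 2*pi) are ignored for brevity.

```python
import numpy as np

def gaussian_curvature_energy(verts, faces):
    """Discrete Gaussian curvature by angular deficit (Eq. 2) and GCE (Eq. 3).

    verts: (n, 3) float array of vertex positions.
    faces: (m, 3) int array of triangle vertex indices.
    Returns (K, E_GC): per-vertex curvature estimates and the sum of their absolute values.
    """
    n = len(verts)
    angle_sum = np.zeros(n)      # sum of incident triangle angles per vertex
    area_sum = np.zeros(n)       # A_{N(i)}: total area of incident triangles
    for tri in faces:
        p = verts[tri]           # the three corner positions
        area = 0.5 * np.linalg.norm(np.cross(p[1] - p[0], p[2] - p[0]))
        for k in range(3):
            e1 = p[(k + 1) % 3] - p[k]
            e2 = p[(k + 2) % 3] - p[k]
            cosang = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2) + 1e-12)
            angle_sum[tri[k]] += np.arccos(np.clip(cosang, -1.0, 1.0))
            area_sum[tri[k]] += area
    K = (2.0 * np.pi - angle_sum) / (area_sum + 1e-12)   # Eq. 2
    return K, float(np.sum(np.abs(K)))                   # Eq. 3
```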

Fig. 2. Theorem 1 on the three types of developable surfaces: (a) cylindrical surface, (b) conical surface, (c) tangent surface.

Fig. 3. The 1-ring neighborhood and the domain decomposition result on the Stanford bunny mesh. (a) is the basic structure of the 1-ring topology neighborhood. (b) is the result of the domain decomposition algorithm on the Stanford bunny.

This theorem indicates that for any point $\vec{x}$ on a developable surface there must be another neighbor point $\vec{x}_0$ that lives on its tangent plane. This conclusion is the theoretical foundation for our method.

This theorem can be verified on developable surfaces. In mathematics, it is already known that there are only three types of developable surfaces: cylinder, cone and tangent developable. As shown in Fig. 2, for any point $\vec{x}$ (red point) on such a surface, there is another point $\vec{x}_0$ (blue point) that lives on its tangent plane (green triangle).

This theorem can also be explained from another point of view. In differential geometry, the Gaussian curvature of a vertex on a surface is the product of the principal curvatures $\kappa_1$ and $\kappa_2$ at the vertex, that is, $K = \kappa_1 \kappa_2$. In the literature [23], chapter 6.1.2, it has been proved that minimizing one principal curvature in the Gaussian curvature of a vertex is to minimize the Gaussian curvature of the vertex for 2D discrete images. More specifically, we have the following relationship:

$$\kappa_1 \kappa_2 = 0 \iff \min\{|\kappa_1|, |\kappa_2|\} = 0. \qquad (5)$$

This result is stronger than Theorem 1 because it tells where $\vec{x}_0$ should be. Theorem 1 and Formula 5 tell us that Gaussian curvature can be minimized without calculating the principal curvatures.

Although this theory is for continuous surfaces, it is still valid for discrete triangular meshes, and all numerical experiments in this paper have confirmed its validity. The only issue on meshes is that $\vec{x}_0$ is not necessarily a vertex on the mesh (it may even not be on the mesh). However, we can use one of the 1-ring neighboring vertices to approximate $\vec{x}_0$. Such an approximation works well for practical applications as confirmed in this paper. The procedure to find this vertex will be explained in Section 3.4.3.

In this paper, we adopt Theorem 1 and apply it to mesh processing. According to Theorem 1, we can reduce the Gaussian curvature of a vertex by moving its position such that one of its neighbors falls on its tangent plane. If the Gaussian curvature at the vertex is zero, then the moving distance is zero because one of its neighbors already lives on its tangent plane, see Fig. 8. Otherwise, the absolute value of the Gaussian curvature is high before the movement. After the movement, the processed vertex is closer to a developable surface. Therefore, the Gaussian curvature is reduced.

3.3 Discrete Neighborhood on Meshes

Based on Theorem 1, there is always a neighbor point $\vec{x}_0$ that lives on the tangent plane of $\vec{x}$ for any point $\vec{x}$ on a developable surface. However, on a discrete mesh, this point $\vec{x}_0$ is not necessarily a vertex in real applications. To overcome this issue, we take all the 1-ring topology neighborhood vertices as possible candidates and finally adopt only one as an approximation to $\vec{x}_0$. Although such an approximation introduces some numerical error, it simplifies the way to find $\vec{x}_0$ on triangular meshes. Our numerical experiments confirm that such an approximation works well on triangular meshes in practical applications, see Fig. 6 and 10.
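As a concrete illustration of the 1-ring candidate set used above, the following sketch builds the neighbor list P from a face list. It is our own illustrative code under our own naming, not part of the paper or its released implementation.

```python
def one_ring_neighbors(faces, num_verts):
    """Build P = {P_1, ..., P_n}: for each vertex, the indices of its 1-ring
    topology neighbors (the candidates that approximate x_0 in Theorem 1).

    faces: iterable of (a, b, c) vertex-index triples.
    """
    P = [set() for _ in range(num_verts)]
    for a, b, c in faces:
        P[a].update((b, c))
        P[b].update((a, c))
        P[c].update((a, b))
    return [sorted(s) for s in P]

# usage: P = one_ring_neighbors(faces, len(verts))
```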

3.4 Our Method

Our method can be roughly divided into two stages. The first stage classifies all the vertices of a mesh according to their neighborhood relationship, so as to ensure that a given vertex is moved in its local area while its neighborhood stays fixed. In this paper, we call it the Greedy Domain Decomposition algorithm, or GDD for short. The second stage is the vertex update algorithm. The vertex update is performed according to the normal direction and the minimum absolute distance.

3.4.1 Greedy Domain Decomposition

A 1-ring neighborhood of a triangular mesh usually has a structure similar to Fig. 3 (a). The local shape structure consists of a vertex $v_i$ and its neighborhood vertex set $P_i = \{v_{j_1}, ..., v_{j_5}\}$. The vertex and its neighborhood vertices are connected by edges.

The implementation of this stage is described in Algorithm 1, where $V = \{v_1, v_2, ..., v_n\}$ is the set of all vertices of a mesh, $P = \{P_1, P_2, ..., P_n\}$ is the set of neighborhood points of each vertex, and $D = \{D_1, D_2, ..., D_k\}$ is the set of vertex domains (color classes) produced by the greedy domain decomposition (Algorithm 1).

Algorithm 1: GREEDY DOMAIN DECOMPOSITION
  Input: Vertices V = {v1, v2, ..., vn}, Neighbor list P = {P1, P2, ..., Pn}
  Initialization: each vertex color Ci = 0, i = 1, ..., n;
  for each vi in V do
    Using the greedy strategy, assign to vi the smallest color Ci that differs
    from the colors of all vertices in its neighborhood Pi. The maximum color
    value k obtained in this way is the number of domain sets, and each domain
    set only contains vertices of the same color.
  end
  Output: vertex domain sets {D1, D2, ..., Dk}, where all vertices in Di have
  the same color label.
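Algorithm 1 amounts to a greedy coloring of the vertex adjacency graph, after which the vertices of one color class can be updated independently. A minimal sketch, assuming the neighbor list P built earlier; the function name and details are ours, not the released implementation.

```python
def greedy_domain_decomposition(P):
    """Greedy coloring: each vertex receives the smallest color not used by its
    1-ring neighbors. Returns the domains D, one list of vertex indices per color."""
    n = len(P)
    color = [-1] * n
    for v in range(n):
        used = {color[u] for u in P[v] if color[u] >= 0}
        c = 0
        while c in used:          # smallest color absent from the neighborhood
            c += 1
        color[v] = c
    k = max(color) + 1
    D = [[] for _ in range(k)]
    for v, c in enumerate(color):
        D[c].append(v)
    return D                      # no two vertices inside one D[c] share an edge
```

Because no two vertices of the same color are adjacent, all vertices of one domain can be moved at the same time while their neighborhoods stay fixed, which is what enables the parallel (GPU) variant discussed in the experiments.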

The advantages of greedy domain decomposition for mesh vertices are as follows. First, it ensures that a vertex moves while its neighboring vertices do not move. Second, all vertices are divided into several independent sets, so each set can be updated independently. This mechanism speeds up the convergence of our algorithm, as shown in Fig. 9. This is another reason why our algorithm is faster than the Gaussian curvature flow [18] (see Table 1). We only give the numerical convergence rate in the experiments. Here we define the average convergence slope (ACS) as

$$ACS = \frac{1}{M(N-2)} \sum_{j=1}^{M} \sum_{t=2}^{N-1} \frac{\lg\!\left(\dfrac{\|E_{GC_j}^{(t+1)} - E_{GC_j}^{(t)}\|_\infty}{\|E_{GC_j}^{(t)} - E_{GC_j}^{(t-1)}\|_\infty}\right)}{\lg(t+1) - \lg(t)}, \qquad (6)$$

where $M$ represents the number of meshes, $N$ represents the number of iterations for each mesh, and $E_{GC}$ is the Gaussian curvature energy. The result of the greedy domain decomposition of a triangle mesh by Algorithm 1 is shown in Fig. 3 (b). We can see that each vertex color is different from the colors of its neighborhood vertices, and all the vertices of the bunny are independently divided into several sets.

Fig. 4. Projection strategy and the vertex update. (a) and (b) are the one normal and multi normal projection strategies. (c) shows the vertex movement direction and amplitude.

3.4.2 Vertex Moving Direction

The differential coordinate of the $i$-th vertex $v_i = (x_i, y_i, z_i) \in V = \{v_1, v_2, ..., v_n\}$ is the difference between the absolute coordinates of $v_i$ and the center of mass of its immediate neighbors $P_i = \{v_{j_1}, v_{j_2}, ..., v_{j_m}\}$ in the mesh [39], i.e.

$$\vec{\delta}_{v_i} = (\delta_{x_i}, \delta_{y_i}, \delta_{z_i}) = v_i - \frac{1}{m}\sum_{v_j \in P(v_i)} v_j. \qquad (7)$$

In differential geometry, the direction of the differential coordinate vector $\vec{\delta}_{v_i}$ approximates the normal direction of the local area [40]. Following these works, we use the unit vector $\vec{\delta}$ of Eq. 8 (the reverse normal direction) as the moving direction:

$$\vec{\delta} = \frac{-\vec{\delta}_{v_i}}{\|\vec{\delta}_{v_i}\|}. \qquad (8)$$

3.4.3 Vertex Moving Distance

After the greedy domain decomposition, we update each independent vertex set separately. During each iteration, we have to find the moving direction of the vertex and also the corresponding moving distance. We propose a multi normal projection strategy for computing the moving distance, as shown in Fig. 4. We optimize the Gaussian curvature of $v_i$ by moving the vertex $v_i$, so we need to calculate the magnitude of the movement of the vertex $v_i$. As shown in Fig. 4 (a), the edges $\{\overrightarrow{v_i v_{j_1}}, \overrightarrow{v_i v_{j_2}}, ..., \overrightarrow{v_i v_{j_5}}\}$ formed by the vertex $v_i$ and its neighborhood vertices $\{v_{j_1}, v_{j_2}, ..., v_{j_5}\}$ are projected onto the unit normal vector $\vec{n}_{v_i}$ of vertex $v_i$ to calculate the distance set $d = \{d_1, d_2, ..., d_5\}$.

As shown in Fig. 4 (b), we compute the normal $\vec{n}_{v_i}$ of the vertex $v_i$ (Eq. 9) [41] and then the normal of each neighborhood vertex:

$$\vec{n}_{v_i} = \sum_{j \in F_v(i)} A_j\, \vec{n}_{Fj}, \qquad (9)$$

where $A_j$ is the corresponding face area, $\vec{n}_{Fj}$ is the normal of the $j$-th face, and $F_v(i)$ is the set of faces in the $i$-th vertex ring. It should be noted that the neighborhood vertex normal is the cross-product unit vector of two edges at the neighborhood vertex. More specifically, the unit vector $\vec{n}_{j_k}$ in Fig. 4 (b) is given by Eq. 10:

$$\vec{n}_{j_k} = \frac{\overrightarrow{v_{j_k} v_{j_{k-1}}} \times \overrightarrow{v_{j_k} v_{j_{k+1}}}}{\left\|\overrightarrow{v_{j_k} v_{j_{k-1}}} \times \overrightarrow{v_{j_k} v_{j_{k+1}}}\right\|}. \qquad (10)$$

We calculate the projection distances onto the unit vector set $\{\vec{n}_{v_i}, \vec{n}_{j_1}, ..., \vec{n}_{j_5}\}$ from the edges of the 1-ring neighborhood structure. Each edge $\{\overrightarrow{v_i v_{j_1}}, \overrightarrow{v_i v_{j_2}}, ..., \overrightarrow{v_i v_{j_5}}\}$ has a projection onto each unit vector $\{\vec{n}_{v_i}, \vec{n}_{j_1}, ..., \vec{n}_{j_5}\}$. As a result, we have all possible projection distances $\{d_1, d_2, ..., d_n\}$ (in this example $n = 5 \times 6 = 30$). We choose the smallest absolute value $|d|$ in this set as the moving amplitude of vertex $v_i$.
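A small sketch of the quantities defined in Eqs. 7-9: the reversed differential-coordinate direction and the area-weighted vertex normal. This is illustrative code with our own function names; it uses the uniform (unweighted) centroid of Eq. 7 and is not taken from the released GCF program.

```python
import numpy as np

def moving_direction(vi, neighbors):
    """Eqs. 7-8: reversed, normalized differential coordinate of vertex vi.
    vi: (3,) position; neighbors: (m, 3) positions of its 1-ring neighbors."""
    delta = vi - neighbors.mean(axis=0)               # differential coordinate (Eq. 7)
    return -delta / (np.linalg.norm(delta) + 1e-12)   # reverse normal direction (Eq. 8)

def vertex_normal(verts, faces, i):
    """Eq. 9: area-weighted average of the normals of the faces incident to vertex i."""
    n = np.zeros(3)
    for tri in faces:
        if i in tri:
            p0, p1, p2 = verts[tri[0]], verts[tri[1]], verts[tri[2]]
            fn = np.cross(p1 - p0, p2 - p0)           # length equals twice the face area
            area = 0.5 * np.linalg.norm(fn)
            n += area * fn / (np.linalg.norm(fn) + 1e-12)
    return n / (np.linalg.norm(n) + 1e-12)
```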

In summary, the minimal moving distance for vertex $v_i$, $i \in \{1, ..., n\}$, is computed as

$$d = \min_{k} \left\{ \left|\left\langle \{\vec{N}\},\, \overrightarrow{v_i v_{j_k}} \right\rangle\right| \right\}, \qquad \{\vec{N}\} = \{\vec{n}_{v_i}, \vec{n}_{j_1}, ..., \vec{n}_{j_m}\}, \qquad (11)$$

where $\langle \cdot, \cdot \rangle$ is the standard inner product and $k = 1, ..., m$. This minimal distance corresponds to a neighboring vertex $v_{j_k}$, and this vertex is selected as the approximation to $\vec{x}_0$ described in Theorem 1.

Fig. 5. Multi normal projection strategy distance for vertices on a corner, an edge, and a plane, where $d \to 0$.

Utilizing the multi normal projection strategy shown in Fig. 4 (b) ensures that geometric features are preserved when optimizing the Gaussian curvature. This property is important for vertices at corner, edge, and plane geometry, as shown in Fig. 5. Since such vertices are contained in the above geometric features, the minimum absolute projection distance obtained by the projection strategy of Fig. 4 (b) is $|d| = 0$:

$$\exists\, \vec{n} \in \{\vec{N}\}:\ \vec{n} \perp \overrightarrow{v_i v_{j_k}} \ \Rightarrow\ \exists\, k:\ \langle \{\vec{N}\}, \overrightarrow{v_i v_{j_k}} \rangle = 0. \qquad (12)$$

Therefore, the vertex moving distance is 0 and the spatial position is preserved. This projection strategy gives our algorithm strong smoothing and feature preservation capability for different noise levels. Not surprisingly, the feature preservation performance on the noise-free model is also robust, as shown in Fig. 6 (Vase).

3.4.4 Vertex Update Algorithm

Through the above computation, we obtain the minimum projection distance of the vertex $v_i$, see Fig. 4 (c) and Algorithm 2. The vertex update is then given by

$$v' = v + |d| \cdot \vec{\delta}. \qquad (13)$$

It is worth noting that our algorithm is fundamentally different from the existing Laplacian methods. The comparative experimental results are shown in Fig. 8.

Algorithm 2: GAUSSIAN CURVATURE FILTER ON MESH
  Input: Vertex set V = {v1, v2, ..., vn}, Vertex normal set N = {n1, n2, ..., nn},
         Neighbor list P = {P1, P2, ..., Pn}, Domains D = {D1, D2, ..., Dk},
         IterationNumber
  for i = 0 to IterationNumber - 1 do
    for j = 0 to k - 1 do
      // each vertex in the same domain
      for t = 0 to D[j].size() - 1 do
        index = D[j][t];
        current = V[index];
        if current is on the boundary then
          v'[index] = v[index];
        else
          Compute δ by Eq. 8;
          Compute ProjN by Eq. 10;
          put N[index] into ProjN;
          // find the minimum projection distance
          MIN = +∞;
          for p = 0 to P[index].size() - 1 do
            for r = 0 to ProjN.size() - 1 do
              ProjectDistance = abs((P[index][p] - current) · ProjN[r]);
              if ProjectDistance < MIN then
                MIN = ProjectDistance;
              end
            end
          end
          // vertex update
          v'[index] = v[index] + δ · MIN;
        end
      end
    end
  end
  Output: Vertices V' = {v'1, v'2, ..., v'n}
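Putting the pieces together, one pass of the vertex update in Algorithm 2 (Eqs. 11 and 13) can be sketched as follows. This is an illustrative re-implementation under our own naming, not the released GCF program; it assumes the neighbor list P and domains D built with the earlier sketches, it omits the boundary-vertex check, and for brevity it reuses the area-weighted vertex normal for the neighbor normals where the paper's Eq. 10 uses a cross product of two ring edges instead.

```python
import numpy as np

def gcf_iteration(verts, faces, P, D):
    """One GCF pass: move every vertex by the minimum absolute projection
    distance (Eq. 11) along the reversed differential-coordinate direction (Eq. 13)."""
    verts = verts.copy()
    for domain in D:                   # neighbors of a domain's vertices lie outside it
        updates = {}
        for i in domain:
            nbrs = np.array([verts[j] for j in P[i]])
            direction = moving_direction(verts[i], nbrs)
            # candidate normals: the vertex normal plus one normal per neighbor
            normals = [vertex_normal(verts, faces, i)] + \
                      [vertex_normal(verts, faces, j) for j in P[i]]
            # Eq. 11: minimum |<n, v_i v_jk>| over all edges and all candidate normals
            d_min = min(abs(np.dot(n, verts[j] - verts[i]))
                        for n in normals for j in P[i])
            updates[i] = verts[i] + d_min * direction        # Eq. 13
        for i, v in updates.items():   # commit the whole domain at once
            verts[i] = v
    return verts
```

Repeating gcf_iteration for the chosen number of iterations is the only tuning required, which mirrors the single-parameter design discussed in the experiments.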
4 EXPERIMENTS

In this section, we perform several experiments to show two properties of our method: minimizing the Gaussian curvature and smoothing with feature preservation. For Gaussian curvature optimization, we choose [18] and [29] for comparative experiments. For geometric feature preserving noise removal, there are many methods, including the optimization based methods [4] and [5], and the filter based methods [3], [12], [13], [15], and [17]. We compare our approach with these methods with respect to these two aspects.

4.1 Minimize Gaussian Curvature

To show the property of minimizing Gaussian curvature, we compare our method with two approaches. The first one is the classical Gaussian curvature flow method [18]. We compare them by processing two commonly used but representative meshes, the Max Planck head and the Vase. Moreover, we also show the performance of both methods on these meshes when adding some random noise.

We further compare our method with a recent approach from SIGGRAPH 2018 by Stein et al. [29]. We compare both methods on noise-free and noisy meshes respectively. We will discuss the results in later sections.

We chose the Gaussian curvature energy (GCE, see Formula 3), the mean square angle error (MSAE), the maximum and average vertex-to-vertex distances (D max, D mean) [15], and the Kullback-Leibler divergence (KLD) of the Gaussian curvature distribution curves for quantitative evaluation against the comparison methods. The definition of MSAE comes from previous work [42], [43], [44]:

$$MSAE = E[\angle(n_p, n_o)], \qquad (14)$$

where $E$ is the expectation operator and $\angle(n_p, n_o)$ is the angle between the processed normal $n_p$ and the original normal $n_o$.
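A minimal sketch of the MSAE metric of Eq. 14, comparing the per-face normals of a processed mesh against the original ones; this is our own illustrative evaluation code, not the scripts used for the tables in this paper.

```python
import numpy as np

def face_normals(verts, faces):
    """Unit normals of all triangles of a mesh."""
    p0, p1, p2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    n = np.cross(p1 - p0, p2 - p0)
    return n / (np.linalg.norm(n, axis=1, keepdims=True) + 1e-12)

def msae(verts_processed, verts_original, faces):
    """Eq. 14: mean angle (radians) between processed and original face normals."""
    n_proc = face_normals(verts_processed, faces)
    n_orig = face_normals(verts_original, faces)
    cos = np.clip(np.sum(n_proc * n_orig, axis=1), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))
```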
4.1.1 Parameter settings

In [18], the algorithm has five parameters to be manually adjusted: $k$, $\rho$, $\beta$, $\epsilon$, $\alpha$. For the specific meaning of each parameter, see Eq. 2 in [18]. According to the authors' suggestion, we set $\beta = 2$, $\epsilon = 0.001$, and $\alpha = 0.0005$. See Table 1 for the detailed parameters. It is important to note that these parameters have to be set to different values for different models, or for different noise levels of the same model. The authors also mentioned the importance of an implicit step size algorithm.

In contrast, our algorithm only has one parameter, the iteration number. The number of GCF iterations only depends on the noise level of the model: the higher the noise level, the more iterations are required.

4.1.2 Result analysis

In Fig. 6, we use two meshes, the Max Planck head and the Vase, and run the Gaussian curvature flow [18] and GCF on them, respectively.

Visually, on the noise-free models the result of [18] is not much different from ours in the color-coded Gaussian curvature. In detail, however, we can see that on the Vase model the feature preservation of our result is more obvious. On the noisy models, our algorithm is not only more prominent in feature preservation, but also smoother in local details.

Quantitatively, our algorithm can achieve the same Gaussian curvature energy as [18], but a lower MSAE, which also proves that our algorithm not only minimizes the Gaussian curvature energy, but also preserves the ground truth features. Our algorithm has a distinguishing advantage in time consumption. In Table 1, we can see that without adding our GDD to [18], we are almost 20 times faster. After adding our GDD to [18], our algorithm is still nearly 3 times faster (the experiments are run on the same computer: Intel Xeon 4 cores, 3.7 GHz, 96 GB RAM).

The weakness of [18] is that the temporal discretization currently used is explicit. The inherent problem with this approach is that explicit methods behave poorly if the system is stiff, and in order to converge to the correct solution it is necessary to use small time steps. So there are too many parameters, and different models are sensitive to the parameter values. In Fig. 6, combining the quantitative and visual results, we can draw the following conclusions. Firstly, we can obtain the same Gaussian curvature energy value as [18], which proves that we can achieve the effect of explicit optimization through implicit optimization. Secondly, there is no need to explicitly calculate the Gaussian curvature, and the addition of GDD gives our algorithm a clear advantage in running time. Thirdly, our algorithm can simultaneously smooth the mesh and preserve its features.

We further compare our method with the developability method [29]. The results are shown in Fig. 7. In this comparative experiment, we do not make a developability comparison with it, because our work is not focused on developability. We run [29] with the model and default parameters given by the authors until convergence. We run our algorithm to roughly the same result (GCE, MSAE), and then compare D max, D mean and the running time between the processed model and the ground truth model.

Method [29] gathers Gaussian curvature onto regular seam curves by defining different energies, and achieves piecewise developable surfaces through continuous iterative optimization. Our method does not introduce developable energy constraints, so there are no regular seam curves. But to a certain extent, the optimization of the Gaussian curvature energy can be compared. From Fig. 7 and Table 2, we can see that our algorithm can achieve the same effect as [29] in optimizing Gaussian curvature, and it takes less time.

4.2 Smoothing with Feature Preservation

To evaluate how GCF performs in smoothing and feature preservation, we choose seven representative state-of-the-art methods for comparison.

4.2.1 Parameter settings

From the previous multiple sets of experiments, our algorithm roughly converges after 40-50 iterations. For most models, in order to better balance smoothing and feature preservation, 40 iterations are generally selected. For large noise, the number of iterations can be increased according to the noise level. Both the filter based methods and the optimization based methods use their default parameters (except for the Bunny). The detailed parameters are listed in Table 3. The meanings of the specific parameters can be found in the respective papers.

4.2.2 Result analysis

In the experiments, we have selected four representative models with rich features: Armadillo, Bunny, Max Planck, and Vaselion. The Armadillo model in Fig. 10 row 1 represents a type of mesh with many vertices and faces, rich features, and a complete shape. The face normal filtering methods are mostly two-step filtering algorithms, such as [12], [13], [15]; those algorithms first filter the face normals and then update the vertices. Such methods rely heavily on the threshold of the first step and are suitable for models with rich planar features. For models with rich geometric features, they easily over-smooth. We reduce the Gaussian curvature energy of the overall model by minimizing the absolute Gaussian curvature of each vertex. Our result's Gaussian curvature energy value is closest to that of the ground truth model. Among the comparison methods, our MSAE value is the smallest, which also shows that our algorithm preserves the features best in the smoothing process (see Table 3). As shown in Fig. 10, our results are the best in terms of both overall shape and local detail.

Figure 10 row 2 demonstrates that our method also works on models with few vertices and faces but rich features under a high noise level (σn = 0.5el).

Fig. 6. Minimizing Gaussian curvature on noise-free meshes (top) and noisy meshes (bottom). In each panel, from left to right: the input, the Gaussian curvature flow method, and our method. GC is an abbreviation for Gaussian curvature. Panels: (a1) Max Planck (noise-free), (b1) Vase (noise-free), (a2)/(a3) Max Planck GCE/MSAE versus iterations, (b2)/(b3) Vase GCE/MSAE versus iterations, (c1) Max Planck (noise level σn = 0.3el), (d1) Vase (noise level σn = 0.3el), (c2)/(c3) Max Planck GCE/MSAE versus iterations, (d2)/(d3) Vase GCE/MSAE versus iterations.

Fig. 7. Comparison between [29] and our algorithm. Left: the noise-free case, (a) Bunny (noise free). Right: the noisy case, (b) Bunny (σn = 0.3el). In each panel, from left to right: the input, [29], and ours. The algorithm [29] is run with the default Bunny parameters provided in the authors' open source code until it converges, and our algorithm is run until the GCE and MSAE are the same as those of GCF in the comparative experiments. The running times of the two algorithms are shown in Table 2.

Fig. 8. Comparison between our algorithm and three common Laplacian smoothing algorithms ([45], [20], [46]) on the cone and cylinder models. The number of iterations of our method is 40, and the others default to 10.

Model      Metric    [45]     [20]     [46]    Ours
Cone       MSAE      5.50     9.45     12.92   1.69
           D mean    29.34    29.13    1.93    0.00
           D max     53.34    52.31    7.41    3.10
Cylinder   MSAE      2.90     4.18     5.32    0.08
           D mean    73.89    74.20    1.69    0.00
           D max     158.99   160.56   5.57    0.33

TABLE 1
Running time (in debug mode, seconds) comparison between the standard Gaussian curvature flow method and our method on the Max Planck (|V|: 5272, |F|: 10540) and Vase (|V|: 3827, |F|: 7650). Parameters for [18] are (k, ρ, β, ε, α); ours is the iteration number. Times are reported with GDD / without GDD.

Model                    Method   Parameters                      Time (s): with/without GDD
Max Planck (noise-free)  [18]     (100, 1.0, 2, 0.001, 0.0005)    163.94 / 1016.41
                         Ours     (100)                           56.39 / 428.58
Vase (noise-free)        [18]     (100, 0.01, 2, 0.001, 0.0005)   100.65 / 744.75
                         Ours     (100)                           40.34 / 320.48
Max Planck (σn = 0.3el)  [18]     (100, 1.0, 2, 0.001, 0.0005)    158.23 / 1123.56
                         Ours     (100)                           62.41 / 425.20
Vase (σn = 0.3el)        [18]     (100, 0.01, 2, 0.001, 0.0005)   106.81 / 833.12
                         Ours     (100)                           45.61 / 317.00

TABLE 2
Quantitative comparison with [29] on the Bunny (|V|: 3301, |F|: 6598). The running time (in debug mode) is measured in seconds.

Model                Method   MSAE    GCE      D mean        D max         Time (s)
Bunny (noise-free)   [29]     9.68    71.07    8.21 × 10−3   6.07 × 10−2   1129.10
                     Ours     10.65   71.83    1.21 × 10−2   5.75 × 10−2   25.34
Bunny (σn = 0.3el)   [29]     10.10   100.82   1.03 × 10−2   6.75 × 10−2   1303.93
                     Ours     10.79   100.00   1.20 × 10−2   4.00 × 10−2   28.80

TABLE 3
Quantitative comparison of noise removal and feature preservation performance with other state-of-the-art methods.

Armadillo, σn = 0.3el (Figure 10 row 1), |V|: 43243, |F|: 86482
  Method        Parameters                                                          MSAE    GCE        KLD    D mean        D max
  Noisy         (−)                                                                 22.07   13282.30   0.79   1.62 × 10−3   8.59 × 10−3
  Ground truth  (−)                                                                 0.00    1958.15    0.00   0.00          0.00
  [3]           (10)                                                                13.43   4524.08    0.23   1.67 × 10−3   8.75 × 10−3
  [4]           (1, 1)                                                              12.36   4854.35    0.13   1.54 × 10−3   8.28 × 10−3
  [12]          (0.5, 20, 10)                                                       15.06   3765.05    0.35   1.85 × 10−3   7.60 × 10−3
  [13]          (1, 3.5 × 10−1, 20, 1.0 × 10−2, 10)                                 14.89   4012.49    0.45   1.80 × 10−3   8.46 × 10−3
  [5]           (1.4 × 10−2, 1.0 × 10−3, 1.0 × 103, 0.5, 2.9 × 10−3, 3.0 × 10−6)    13.44   2385.80    0.52   1.67 × 10−3   8.21 × 10−3
  [15]          (2, 1, 0.35, 20, 1, 1.0 × 10−3, 10)                                 14.88   2167.83    0.75   1.78 × 10−3   8.15 × 10−3
  [17]          (3.9 × 10−1, 20, 10)                                                12.15   3376.38    0.26   1.58 × 10−3   7.61 × 10−3
  Ours          (40)                                                                9.72    1665.57    0.08   1.27 × 10−3   7.71 × 10−3

Bunny, σn = 0.5el (Figure 10 row 2), |V|: 3301, |F|: 6598
  Method        Parameters                                                          MSAE    GCE        KLD    D mean        D max
  Noisy         (−)                                                                 31.58   1733.72    1.65   1.82 × 10−2   8.30 × 10−2
  Ground truth  (−)                                                                 0.00    146.60     0.00   0.00          0.00
  [3]           (30)                                                                19.37   602.77     0.31   2.03 × 10−2   9.80 × 10−2
  [4]           (1, 1)                                                              18.47   773.32     0.26   1.68 × 10−2   7.79 × 10−2
  [12]          (0.5, 20, 30)                                                       20.05   504.60     0.18   1.99 × 10−2   9.19 × 10−2
  [13]          (1, 3.5 × 10−1, 20, 1.0 × 10−2, 30)                                 18.95   548.70     0.19   1.86 × 10−2   6.97 × 10−2
  [5]           (1.4 × 10−2, 1.0 × 10−3, 1.0 × 103, 0.5, 5.9 × 10−3, 3.2 × 10−4)    15.52   411.53     0.20   1.70 × 10−2   8.21 × 10−2
  [15]          (2, 1, 0.35, 20, 1, 1.0 × 10−3, 30)                                 14.63   221.53     0.37   1.69 × 10−2   7.25 × 10−2
  [17]          (3.9 × 10−1, 20, 30)                                                17.08   509.81     0.12   1.72 × 10−2   7.06 × 10−2
  Ours          (100)                                                               11.93   99.37      0.07   1.60 × 10−2   6.86 × 10−3

Max Planck, σn = 0.1el (Figure 10 row 3), |V|: 5272, |F|: 10540
  Method        Parameters                                                          MSAE    GCE        KLD    D mean        D max
  Noisy         (−)                                                                 9.54    455.27     0.11   5.58 × 10−1   2.71
  Ground truth  (−)                                                                 0.00    226.34     0.00   0.00          0.00
  [3]           (10)                                                                8.59    242.63     0.08   8.19 × 10−1   3.47
  [4]           (1, 1)                                                              7.32    221.09     0.07   7.25 × 10−1   3.09
  [12]          (0.5, 20, 10)                                                       11.83   240.25     0.12   1.10          5.30
  [13]          (1, 3.5 × 10−1, 20, 1.0 × 10−2, 10)                                 10.75   225.02     0.20   9.45 × 10−1   3.88
  [5]           (1.4 × 10−2, 1.0 × 10−3, 1.0 × 103, 0.5, 3.8 × 10−3, 0.4)           8.67    170.66     0.18   6.63 × 10−1   2.71
  [15]          (2, 1, 0.35, 20, 1, 1.0 × 10−3, 10)                                 11.63   179.21     0.46   1.05          5.58
  [17]          (3.9 × 10−1, 20, 10)                                                9.57    221.26     0.14   8.39 × 10−1   5.11
  Ours          (40)                                                                6.60    127.10     0.07   1.19          5.19

Vaselion, σn = 0.2el (Figure 10 row 4), |V|: 38728, |F|: 77452
  Method        Parameters                                                          MSAE    GCE        KLD    D mean        D max
  Noisy         (−)                                                                 25.11   12933.10   0.47   1.46 × 10−3   7.65 × 10−3
  Ground truth  (−)                                                                 0.00    2757.32    0.00   0.00          0.00
  [3]           (10)                                                                21.13   7435.05    0.14   1.72 × 10−3   1.41 × 10−1
  [4]           (1, 1)                                                              NaN     8127.51    0.12   NaN           7.46 × 10−3
  [12]          (0.5, 20, 10)                                                       24.92   7017.39    0.15   2.18 × 10−3   2.80 × 10−2
  [13]          (1, 3.5 × 10−1, 20, 1.0 × 10−2, 10)                                 21.78   6449.90    0.18   1.85 × 10−3   1.32 × 10−2
  [5]           (1.4 × 10−2, 1.0 × 10−3, 1.0 × 103, 0.5, 1.1 × 10−3, 2.0 × 10−6)    16.84   5536.49    0.26   1.46 × 10−3   7.20 × 10−3
  [15]          (2, 1, 0.35, 20, 1, 1.0 × 10−3, 10)                                 25.66   4403.20    0.41   2.18 × 10−3   1.19 × 10−2
  [17]          (3.9 × 10−1, 20, 10)                                                21.55   6184.21    0.10   1.91 × 10−3   6.77 × 10−2
  Ours          (40)                                                                12.72   2741.82    0.02   1.94 × 10−3   1.80 × 10−2
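The noise levels quoted in Tables 1-3 (e.g. σn = 0.3el) are standard deviations expressed as fractions of the mean edge length el. A sketch of how such corrupted inputs can be generated, under the common convention of i.i.d. Gaussian perturbation of every vertex; the full 3D (rather than normal-direction) perturbation and the function name are our assumptions, not a statement of the exact protocol used in the paper.

```python
import numpy as np

def add_gaussian_noise(verts, faces, level=0.3, seed=0):
    """Perturb vertices with zero-mean Gaussian noise of std = level * mean edge length."""
    edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    # interior edges are counted twice here, which does not noticeably bias the mean
    mean_edge = np.mean(np.linalg.norm(verts[edges[:, 0]] - verts[edges[:, 1]], axis=1))
    rng = np.random.default_rng(seed)
    return verts + rng.normal(scale=level * mean_edge, size=verts.shape)
```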

In Fig. 10 row 2, we can see that for the method of [15], handling low-resolution meshes can force local regions to use larger neighborhoods for normal filtering and to compute normals on larger patches, which can result in over-smoothing. Figure 10 demonstrates the robustness of GCF at different noise levels and vertex numbers.

The quantitative comparison results are shown in Table 3. In Fig. 13, we draw the Gaussian curvature probability distributions of the models, which can help better explain why our scheme achieves such a good effect. A wider Gaussian curvature probability distribution indicates a higher level of noise. As can be seen from the Armadillo in Fig. 13, the noisy model has the largest Gaussian curvature energy, so its curve is the widest. For the ground truth model, the Gaussian curvature is concentrated near 0, and the distribution curve is relatively narrow. Our method's distribution curve is closest to the ground truth distribution amongst all compared methods. The KLD values of our method in Table 3 are the smallest, indicating that the Gaussian curvature probability distribution of our method is the closest to the ground truth distribution. The MSAE values of our method are the smallest, indicating that the output of our method preserves the model's geometric features best. It is also interesting to observe that the GCE values of our method's outputs are also the lowest.

We also compare to our previous work [44] on smoothing and feature preservation. As shown in Fig. 11, Gaussian curvature filtering achieves better results on models with different noise levels.
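The KLD column of Table 3 compares the Gaussian curvature distribution of each result against that of the ground truth (Fig. 13). A histogram-based sketch of such a comparison is given below; the bin count, the smoothing constant and the divergence direction KLD(result || truth) are our illustrative choices, not details specified in the paper.

```python
import numpy as np

def curvature_kld(K_result, K_truth, bins=200, eps=1e-12):
    """KL divergence between histogrammed Gaussian curvature distributions."""
    lo = min(K_result.min(), K_truth.min())
    hi = max(K_result.max(), K_truth.max())
    p, _ = np.histogram(K_result, bins=bins, range=(lo, hi))
    q, _ = np.histogram(K_truth, bins=bins, range=(lo, hi))
    p = (p + eps) / (p + eps).sum()     # smoothed, normalized result distribution
    q = (q + eps) / (q + eps).sum()     # smoothed, normalized ground truth distribution
    return float(np.sum(p * np.log(p / q)))
```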

Fig. 9. The influence of GDD on the convergence of GCF. We randomly selected 8 different meshes (curves 1-8) and ran GCF with GDD and without GDD for 100 iterations each. For comparison, we normalize the GCE (divided by the largest Gaussian curvature energy value). The smaller the average convergence slope ACS (Eq. 6), the better. (a) GCF without GDD (ACS: −1.45). (b) GCF with GDD (ACS: −1.70).

TABLE 4
Time consumption comparison on meshes with a huge number of vertices (Kitten: |V|: 24956, |F|: 49912; Dragon: |V|: 437645, |F|: 871414; Statues: |V|: 4999996, |F|: 10000000; Lucy: |V|: 14027872, |F|: 28055742). Number of iterations: [12] iterates 20 times for the face normals and 20 times for the vertices; our iteration number is 40. The meshes are shown as (a) Kitten, (b) Dragon, (c) Statues, (d) Lucy.

Time (s)    Kitten   Dragon   Statues   Lucy
[12]        16.83    258.8    NaN       NaN
GCF (GPU)   3.28     3.39     4.93      7.50

4.3 Applied to real scan models

We have seen that our method achieves state-of-the-art results on synthesized CAD meshes. We have also applied the algorithm to real 3D models. We use two real scanned meshes which contain unknown noise in the experiments, and the results are shown in Fig. 12. Since these two real scanned meshes do not have a ground truth model, quantitative results as in Table 3 are not reported; we only show qualitative smoothing results.

4.4 Applying the GPU version of GCF to super large meshes

Our algorithm can be implemented with GPU parallel computing, which is especially practical for super large meshes that need to be optimized. We select the fastest algorithm [12] among the comparison methods of this paper and compare it with our algorithm on super large meshes in terms of computing time. While [12] failed in processing large meshes, our method is successfully applied to large meshes. The results are shown in Table 4.

4.5 Limitations and future work

For corner features like those in Fig. 4, GCF can ensure that the corner features are preserved during the smoothing process. However, for the sharp corner feature shown in Fig. 8, the apex of the cone tip, GCF treats it as noise. This is a common problem shared by all local methods that do not have a priori assumptions about edges. For noisy CAD models with particularly sharp features, our method is inferior to methods based on face normal filtering at preserving sharp features. In future work, we will conduct further research on the possibilities of GCF in the field of mesh developability and editing.

5 CONCLUSIONS

In this paper, we propose an iterative filter that optimizes the Gaussian curvature of a triangular mesh. Our method does not need to explicitly calculate the Gaussian curvature. Our method is simple but effective and efficient, as confirmed by the numerical experiments. Our algorithm has only one parameter, the number of iterations, which makes it easy to use. From the results of multiple sets of experiments, whether in visual or quantitative analysis, we have verified that our method outperforms the state-of-the-art. With this Gaussian curvature filter, minimizing Gaussian curvature on triangular meshes becomes much easier. This is important for many industries that require Gaussian curvature optimization, such as ship manufacturing and car shape design. We believe that our method can be beneficial to both academic research and practical industries.

ACKNOWLEDGMENTS

This work was supported in part by the National Natural Science Foundation of China under Grant 61907031, and partially supported by the Education Department of Guangdong Province, PR China, under project No. 2019KZDZX1028.

(a) Noisy (b) Original (c) [3] (d) [4] (e) [12] (f) [13] (g) [5] (h) [15] (i) [17] (j) Ours

Fig. 10. Comparison between our algorithm and the selected state-of-the-art algorithms on the armadillo (σn = 0.3el), bunny (σn = 0.5el), Max Planck (σn = 0.1el), vaselion (σn = 0.2el).


Fig. 11. Comparison between our algorithm (number of iterations: iHbunny: 40, Vase: 100) and our previous work [44] (number of iterations: iHbunny: 10, Vase: 20) on the noisy iHbunny (σn = 0.2el) and the noisy Vase (σn = 0.5el). In each panel, from left to right: input, [44], ours.

(a) Noisy (b) [3] (c) [4] (d) [12] (e) [13] (f) [5] (g) [15] (h) [17] (i) Ours

Fig. 12. Comparison between our algorithm and the selected state-of-the-art algorithms on real scanned models (angel and rabbit). The parameters are selected as in the second row of Table 3.

Fig. 13. Gaussian curvature distribution maps corresponding to Fig. 10 rows 1-4: Armadillo (σn = 0.3el), Bunny (σn = 0.5el), Max Planck (σn = 0.1el), Vaselion (σn = 0.2el). The vertical axis is the Gaussian curvature probability in log scale, and the horizontal axis is the Gaussian curvature value of the vertices on the model. The curves of the ten different colors are: (a) the noisy input, (b) the ground truth, (c) method [3], (d) method [4], (e) method [12], (f) method [13], (g) method [5], (h) method [15], (i) method [17], and (j) ours. Our results are close to the ground truth. The quantitative K-L divergence is summarized in Table 3.

REFERENCES

[1] S. Kriegel, C. Rink, T. Bodenmüller, A. Narr, M. Suppa, and G. Hirzinger, "Next-best-scan planning for autonomous 3d modeling," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2012, pp. 2850–2856.
[2] N. Wang, Y. Zhang, Z. Li, Y. Fu, W. Liu, and Y.-G. Jiang, "Pixel2mesh: Generating 3d mesh models from single rgb images," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 52–67.
[3] S. Fleishman, I. Drori, and D. Cohen-Or, "Bilateral mesh denoising," in ACM Transactions on Graphics (TOG). ACM, 2003, pp. 950–953.
[4] T. R. Jones, F. Durand, and M. Desbrun, "Non-iterative, feature-preserving mesh smoothing," in ACM Transactions on Graphics (TOG). ACM, 2003, pp. 943–947.
[5] L. He and S. Schaefer, "Mesh denoising via l0 minimization," ACM Transactions on Graphics (TOG), vol. 32, no. 4, p. 64, 2013.
[6] R. Wang, Z. Yang, L. Liu, J. Deng, and F. Chen, "Decoupling noise and features via weighted l1-analysis compressed sensing," ACM Transactions on Graphics (TOG), vol. 33, no. 2, p. 18, 2014.
[7] M. Wei, J. Yu, W.-M. Pang, J. Wang, J. Qin, L. Liu, and P.-A. Heng, "Bi-normal filtering for mesh denoising," IEEE Transactions on Visualization and Computer Graphics, vol. 21, no. 1, pp. 43–55, 2014.
[8] S. K. Yadav, U. Reitebuch, and K. Polthier, "Mesh denoising based on normal voting tensor and binary optimization," IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 8, pp. 2366–2379, 2017.
[9] P.-S. Wang, Y. Liu, and X. Tong, "Mesh denoising via cascaded normal regression," ACM Transactions on Graphics, vol. 35, no. 6, pp. 232–1, 2016.
[10] G. Arvanitis, A. S. Lalos, K. Moustakas, and N. Fakotakis, "Feature preserving mesh denoising based on graph spectral processing," IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 3, pp. 1513–1527, 2018.
[11] M. Wei, J. Wang, X. Guo, H. Wu, H. Xie, F. L. Wang, and J. Qin, "Learning-based 3d surface optimization from medical image reconstruction," Optics and Lasers in Engineering, vol. 103, pp. 110–118, 2018.
[12] X. Sun, P. Rosin, R. Martin, and F. Langbein, "Fast and effective feature-preserving mesh denoising," IEEE Transactions on Visualization and Computer Graphics, vol. 13, no. 5, pp. 925–938, 2007.
[13] Y. Zheng, H. Fu, O. K.-C. Au, and C.-L. Tai, "Bilateral normal filtering for mesh denoising," IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 10, pp. 1521–1530, 2011.
[14] L. Zhu, M. Wei, J. Yu, W. Wang, J. Qin, and P.-A. Heng, "Coarse-to-fine normal filtering for feature-preserving mesh denoising based on isotropic subneighborhoods," in Computer Graphics Forum. Wiley Online Library, 2013, pp. 371–380.
[15] W. Zhang, B. Deng, J. Zhang, S. Bouaziz, and L. Liu, "Guided mesh normal filtering," in Computer Graphics Forum. Wiley Online Library, 2015, pp. 23–34.
[16] X. Lu, Z. Deng, and W. Chen, "A robust scheme for feature-preserving mesh denoising," IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 3, pp. 1181–1194, 2015.
[17] X. Li, L. Zhu, C.-W. Fu, and P.-A. Heng, "Non-local low-rank normal filtering for mesh denoising," in Computer Graphics Forum. Wiley Online Library, 2018, pp. 155–166.
[18] H. Zhao and G. Xu, "Triangular surface mesh fairing via gaussian curvature flow," Journal of Computational and Applied Mathematics, vol. 195, no. 1-2, pp. 300–311, 2006.
[19] M. Eigensatz, R. W. Sumner, and M. Pauly, "A comparison of gaussian and mean curvature estimation methods on triangular meshes," in Computer Graphics Forum. Wiley Online Library, 2008, pp. 241–250.
[20] M. Desbrun, M. Meyer, P. Schröder, and A. H. Barr, "Implicit fairing of irregular meshes using diffusion and curvature flow," in Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, 1999, pp. 317–324.
[21] Y. Gong and I. F. Sbalzarini, "Curvature filters efficiently reduce certain variational energies," IEEE Transactions on Image Processing, vol. 26, no. 4, pp. 1786–1798, 2017.
[22] S.-H. Lee and J. K. Seo, "Noise removal with gauss curvature-driven diffusion," IEEE Transactions on Image Processing, vol. 14, no. 7, pp. 904–909, 2005.
[23] Y. Gong, "Spectrally regularized surfaces," Ph.D. dissertation, ETH Zurich, 2015. [Online]. Available: http://e-collection.library.ethz.ch/eserv/eth:47737/eth-47737-02.pdf
[24] G. Taubin, "Estimating the tensor of curvature of a surface from a polyhedral approximation," in Proceedings of IEEE International Conference on Computer Vision. IEEE, 1995, pp. 902–907.
[25] P. Jidesh and S. George, "Fourth-order gauss curvature driven diffusion for image denoising," International Journal of Computer and Electrical Engineering, vol. 4, no. 3, p. 350, 2012.
[26] Y. Gong and I. F. Sbalzarini, "Local weighted gaussian curvature for image processing," in 2013 IEEE International Conference on Image Processing. IEEE, 2013, pp. 534–538.
[27] J. Peng, Q. Li, C.-C. J. Kuo, and M. Zhou, "Estimating gaussian curvatures from 3d meshes," in Human Vision and Electronic Imaging VIII. International Society for Optics and Photonics, 2003, pp. 270–281.
[28] T. Surazhsky, E. Magid, O. Soldea, G. Elber, and E. Rivlin, "A comparison of gaussian and mean curvatures estimation methods on triangular meshes," in 2003 IEEE International Conference on Robotics and Automation (Cat. No. 03CH37422). IEEE, 2003, pp. 1021–1026.
[29] O. Stein, E. Grinspun, and K. Crane, "Developability of triangle meshes," ACM Transactions on Graphics (TOG), vol. 37, no. 4, pp. 1–14, 2018.
[30] M. Rabinovich, T. Hoffmann, and O. Sorkine-Hornung, "Discrete geodesic nets for modeling developable surfaces," ACM Transactions on Graphics (TOG), vol. 37, no. 2, pp. 1–17, 2018.
[31] A. Ion, M. Rabinovich, P. Herholz, and O. Sorkine-Hornung, "Shape approximation by developable wrapping," ACM Transactions on Graphics (TOG), vol. 39, no. 6, pp. 1–12, 2020.
[32] S. Sellán, N. Aigerman, and A. Jacobson, "Developability of heightfields via rank minimization," ACM Transactions on Graphics, vol. 39, no. 4, 2020.
[33] M. Kazhdan, J. Solomon, and M. Ben-Chen, "Can mean-curvature flow be modified to be non-singular?" in Computer Graphics Forum, vol. 31, no. 5. Wiley Online Library, 2012, pp. 1745–1754.
[34] K. Crane, U. Pinkall, and P. Schröder, "Robust fairing via conformal curvature flow," ACM Transactions on Graphics (TOG), vol. 32, no. 4, pp. 1–10, 2013.
[35] X. Lu, X. Liu, Z. Deng, and W. Chen, "An efficient approach for feature-preserving mesh denoising," Optics and Lasers in Engineering, vol. 90, pp. 186–195, 2017.
[36] M. Meyer, M. Desbrun, P. Schröder, and A. H. Barr, "Discrete differential-geometry operators for triangulated 2-manifolds," in Visualization and Mathematics III. Springer, 2003, pp. 35–57.
[37] G. Xu and Q. Zhang, Geometric Partial Differential Equation Methods in Computational Geometry. Science Press, 2013.
[38] H. Pottmann and J. Wallner, Computational Line Geometry. Springer Science & Business Media, 2009.
[39] O. Sorkine, "Differential representations for mesh processing," in Computer Graphics Forum, vol. 25, no. 4. Wiley Online Library, 2006, pp. 789–807.
[40] G. Taubin, "A signal processing approach to fair surface design," in Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques. ACM, 1995, pp. 351–358.
[41] H. Zhao and P. Xiao, "An accurate vertex normal computation scheme," in Computer Graphics International Conference. Springer, 2006, pp. 442–451.
[42] Y. Shen and K. E. Barner, "Fuzzy vector median-based surface smoothing," IEEE Transactions on Visualization and Computer Graphics, vol. 10, no. 3, pp. 252–265, 2004.
[43] Y. Zhao, H. Qin, X. Zeng, J. Xu, and J. Dong, "Robust and effective mesh denoising using l0 sparse regularization," Computer-Aided Design, vol. 101, pp. 82–97, 2018.
[44] W. Pan, X. Lu, Y. Gong, W. Tang, J. Liu, Y. He, and G. Qiu, "HLO: Half-kernel laplacian operator for surface smoothing," Computer-Aided Design, p. 102807, 2020.
[45] J. Vollmer, R. Mencl, and H. Mueller, "Improved laplacian smoothing of noisy surface meshes," in Computer Graphics Forum, vol. 18, no. 3. Wiley Online Library, 1999, pp. 131–138.
[46] O. Sorkine, "Laplacian mesh processing," Eurographics (STARs), vol. 29, 2005.