Master of Science Thesis in Electrical Engineering
Department of Electrical Engineering, Linköping University, 2018

Make it Simpler
Structure-aware mesh decimation of large scale models

Daniel Böök
LiTH-ISY-EX–19/5192–SE

Supervisor: Harald Nautsch, ISY, Linköpings universitet
Supervisor: Mikael Hägerström, Spotscale AB
Examiner: Ingemar Ragnemalm, ISY, Linköpings universitet

Division of Information Coding
Department of Electrical Engineering
Linköping University
SE-581 83 Linköping, Sweden

Copyright © 2018 Daniel Böök

For Luna and Johanna

Abstract

A 3D-model consists of triangles, and in many cases the number of triangles is unnecessarily large for the application of the model. If the camera is far away from a model, why should all triangles be there, when in reality it would make sense to only show the contour of the model? Mesh decimation is often used to solve this problem, and its goal is to minimize the number of triangles while still keeping the visual representation intact. Having the decimation algorithm be structure-aware, i.e. aware of where the important parts of the model are, such as corners, is of great benefit when doing extreme simplification. The algorithm can then decimate large, almost planar parts down to only a few triangles while keeping the important features detailed. This thesis describes the development of a structure-aware decimation algorithm for Spotscale, a company specialized in creating 3D-models from drone footage.


Acknowledgments

I would like to thank everyone at Spotscale for giving me the opportunity to work at their office and for being helpful and great resources during this thesis. The examiner and the supervisors have also been there and given great insights during the development of the algorithm and the writing of this thesis. Finally, I would like to thank my family for being supportive and pushing me to do better when needed.

Linköping, January 2019 Daniel Böök


Contents

1 Introduction
  1.1 Motivation
  1.2 Aim
  1.3 Research Questions
  1.4 Delimitations

2 Background
  2.1 Spotscale
  2.2 List of specification

3 Related Work
  3.1 Surface simplification
    3.1.1 Garland and Heckbert's decimation algorithm
    3.1.2 Structure-aware decimation
  3.2 Mesh Refinement

4 Theory
  4.1 Edge collapse
  4.2 Planar proxies
  4.3 Graph of proxies
  4.4 Error Quadrics
    4.4.1 Inner quadric
    4.4.2 Boundary quadric

5 Method
  5.1 Feasibility study
    5.1.1 Introduction to mesh simplification
    5.1.2 A search for libraries
    5.1.3 Metro - A way of comparing different algorithms
  5.2 Implementation
    5.2.1 Inner and Boundary quadrics
    5.2.2 Solving the problem with flipping normals
    5.2.3 Avoiding slivers and needles
    5.2.4 Decimating with an error margin
    5.2.5 Vary the amount of simplification
  5.3 Evaluation

6 Result
  6.1 Feasibility study
    6.1.1 A tool for evaluating errors in a mesh
    6.1.2 The chosen library
  6.2 Implementation - The resulting application
    6.2.1 Decimating a model
    6.2.2 Decimating with error tolerance
    6.2.3 Varying the decimation
  6.3 Evaluation of the algorithm

7 Discussion
  7.1 Result
    7.1.1 The results from the feasibility study
    7.1.2 The resulting application
    7.1.3 The structure-aware decimation algorithm
  7.2 Method
    7.2.1 The choice of library
    7.2.2 Decimating with an error tolerance
    7.2.3 Vary the amount of decimation
    7.2.4 Evaluation of the final algorithm
  7.3 Further work

8 Conclusion
  8.1 Aim
  8.2 Research questions

Bibliography

1 Introduction

This chapter contains a short motivation of why this is a topic of interest, the aim of this thesis, the research questions to be answered, and the delimitations.

1.1 Motivation

Mesh decimation is a common step in mesh processing. The aim of decimating a model is to lower the number of triangles that the model is built from. Given some input criterion, e.g. a target number of faces, or decimating for as long as an error stays below a specific bound, the model can be reduced to a smaller, more manageable size. By lowering the number of triangles in a mesh while keeping the general structures of the model intact, a boost in performance can be achieved without any visible difference between the original and the decimated model. Viewing a large 3D-model on a smartphone or a tablet, which does not have the same computing power as a desktop computer, would be impossible or prohibitively slow if no simplification were done. Even on a fast modern computer, a whole block of buildings could not be shown if each building consisted of millions of triangles. Here, decimation takes on the important role of reducing the number of triangles in the models down to a fraction while still keeping the structural integrity of the buildings.

When simplifying a model, a difficulty is to preserve the details that are important for that specific model. How does the algorithm know what is important and what to discard? Looking at buildings once again, the edges of the building are important to keep highly detailed, or else the result will not resemble the original. On the other hand, finding large, almost planar parts in the model and decimating these parts down to only a few triangles will save a lot of space. Having a mesh decimation algorithm that is aware of the geometry is therefore highly desirable.

1.2 Aim

This thesis aims to develop a mesh decimation algorithm for the company Spotscale. Spotscale specializes in converting drone footage, mainly of blocks and buildings, into 3D-models using photogrammetry. In their current pipeline for creating a 3D model, the mesh is decimated with little control over how the algorithm recognizes surfaces or any other features in the model. As mentioned in section 1.1, when creating models of buildings, corner detail is an important feature to preserve, while larger, almost planar parts can be decimated down to only a few triangles. This, along with a few more demands on the final algorithm, is listed and more thoroughly explained in chapter 2.

1.3 Research Questions

The following research questions will be investigated and answered throughout this thesis:

• Which library will be used as a starting point for implementing a decimation algorithm?

• What is a good metric for comparing the error between a decimated model and the original model?

• How does the final algorithm that this thesis results in hold up against other publicly available methods of decimation?

1.4 Delimitations

This thesis aims at creating an algorithm that focuses on decimating 3D models of buildings, since these models are what Spotscale specializes in. The algorithm will be tested with 3D models provided by Spotscale.

2 Background

This chapter contains background information for the thesis, presents the company where the thesis was conducted, and describes the list of specification given by the company.

2.1 Spotscale

Spotscale is a company that creates high quality 3D-models of buildings using photogrammetry and drone footage. In their pipeline for creating these models, there is a processing step that creates a lower quality model for LOD purposes. These models contain fewer triangles than the original, and for the building to still look good, the right triangles need to be discarded while the important triangles stay untouched by the decimation. The company wanted more control over how the decimation step in their pipeline was done, and with this thesis completed, they hopefully have something that they can use in production.

2.2 List of specification

Before the thesis started, the company gave a list of specifications with important features that the final algorithm should fulfill. This list was used during the course of the thesis as a reference for how far the project had come. The specification list below is not written in a prioritized order, although some points are more important than others to fulfill.

• Vary the amount of decimation in different places in the mesh (on/off, but also by weight or some cost function based on classification, e.g. vegetation, building or vehicle).


• Decimate the mesh to a target number of faces.

• Decimate the mesh for as long as it does not exceed an error margin (in e.g. meters).

• Avoid non-manifolds.

• Take flat surfaces into consideration (Spotscale's already written plane detection could be used).

• Take other input into the decimation, such as a maximum angle in a triangle to avoid slivers.

• RAM usage and speed of calculation are not the focus, but should be within reason (less than an hour for a big model with more than 3 million triangles).

• The following points will be implemented if time is available:

  – Have the implementation use the point cloud (on which the mesh is built) in the decimation.

  – Decimate the texture so that it fits the new model.

3 Related Work

This chapter will describe related work and explain the concepts that this thesis builds on.

3.1 Surface simplification

This section describes the background of surface simplification and the two most important papers that this thesis is based upon.

3.1.1 Garland and Heckbert's decimation algorithm

Garland and Heckbert proposed a surface simplification algorithm in the late 1990s, and since then their article has been cited in thousands of papers [7]. They developed a surface simplification algorithm that produces high quality approximations of polygonal models. The algorithm uses iterative contractions of vertex pairs (edge collapses) to simplify models and maintains surface error approximations using quadric matrices. Edge collapsing and the quadric matrices are further described in section 4.1 and section 4.4 respectively. The algorithm and the concept of error quadrics have been developed further in many directions and cited in many papers. One of these papers is Structure-Aware Mesh Decimation by Salinas et al., described in section 3.1.2, which is a big inspiration for this thesis. Figure 3.1 shows a model of a bunny and figure 3.2 shows an approximation down to 1,000 triangles using Garland and Heckbert's proposed algorithm. Figure 3.3 is the same approximation as figure 3.2, but with the error ellipsoid of each vertex visualized. These figures give a good understanding of how the quadrics are distributed over the vertices. All three figures are from the paper by Garland and Heckbert [7].


Figure 3.1: Original bunny model with 69,451 triangles.
Figure 3.2: An approximation using only 1,000 triangles.
Figure 3.3: 1,000 face approximation. Error ellipsoids for each vertex are shown in green.

3.1.2 Structure-aware decimation

Salinas et al. published a paper that builds upon Garland and Heckbert's method of using error quadric matrices for simplifying surfaces, but with structure awareness [10]. Salinas et al. propose an algorithm that, in a pre-processing analysis step, detects planar proxies in a mesh and structures these proxies via an adjacency graph. The mesh is then decimated with regard to both an inner quadric, calculated in a way similar to how Garland and Heckbert calculate their quadrics, and the quadric matrices of the detected planar proxies. A boundary quadric is also used, in order to keep the structural integrity of the planar proxies: both the boundary of the mesh and the boundaries of the planar proxies are used to penalize collapses of edges lying on proxy and mesh boundaries. The result is a structure-preserving approach, one that is well suited for planar abstraction when extreme decimation is wanted. This thesis draws inspiration from this paper, and both the planar proxy method and the inner and boundary quadrics are used in this implementation. How the planar proxies are found and used in the algorithm is explained in section 4.2, and the way the quadrics are calculated is described in section 4.4.

3.2 Mesh Refinement

Spotscale has plane detection software called Mesh Refinement, which will be used to find the planar proxies in a mesh. The software takes a mesh as input together with a few console commands given by the user. Seeds are found where there is a high probability of finding planes, see figure 3.4a. The seeded planes are then grown by checking the normals of neighbouring vertices that do not yet belong to the plane, see figure 3.4b. After the planes have grown, they are merged, using a method similar to the growing but that checks the normals of nearby planes, see figure 3.4c. Finally, small holes in the mesh are closed, see figure 3.4d. The set of detected proxies is then used to compute a graph, explained in section 4.3.

(a) Step 1 – The software finds seeds for the planar proxies.
(b) Step 2 – The planar proxies are grown.
(c) Step 3 – The planar proxies are merged.
(d) Step 4 – Small holes are closed in the mesh.

Figure 3.4: Riddarholmen_a4 – The mesh with its detected planar proxies colored.

4 Theory

This chapter will describe the theory from the related work that will be used in this thesis.

4.1 Edge collapse

The most common mesh decimation operator is the edge collapse, which often leads to efficient and reliable algorithms [7]. An edge collapse operator, $v_0 v_1 \to v$, is defined by merging two vertices $v_0$ and $v_1$ into a unique vertex $v$. Mesh decimation algorithms that use edge collapses often have this workflow:

1. For each edge collapse $v_0 v_1 \to v$, assign a cost given by an error metric. This cost is also linked to the optimal placement of the new point location for $v$.

2. Compute an initial heap of edge collapses, prioritized by increasing cost.

3. Extract the edge collapse with the lowest cost from the heap, compute its optimal location and collapse the edge.

4. Update the prioritized heap for edges in the local neighbourhood.

5. Repeat steps 3 and 4 until some condition is fulfilled, e.g. that only a percentage of the original mesh faces remains.

A simple visualization of how the edge collapse operator works can be seen in figure 4.1. The cost and the optimal location both rely on a quadric error metric attached to each operator, which is explained in section 4.4. A minimal code sketch of the collapse loop follows after the figure.


(a) The edge $v_0 v_1 \to v$ is the thicker line between points $v_0$ and $v_1$.
(b) New vertex $v$ after the collapse.

Figure 4.1: How the edge collapse operator works.
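To make the workflow above concrete, below is a minimal sketch of the collapse loop in C++. The integer edge handles and the helper routines (computeCost, isCollapsible, collapseEdge, neighbourEdges) are hypothetical stand-ins for the quadric machinery of section 4.4 and the mesh library's topology operations; this illustrates the structure of the loop, not the thesis implementation.

```cpp
#include <cstddef>
#include <functional>
#include <queue>
#include <vector>

struct Collapse {
    int edge;     // handle of the edge v0v1 to collapse
    double cost;  // error this collapse would introduce
    bool operator>(const Collapse& other) const { return cost > other.cost; }
};

// Hypothetical helpers standing in for the quadric machinery of section 4.4.
double computeCost(int edge);               // step 1: cost (and optimal placement)
bool isCollapsible(int edge);               // rejection tests, see sections 5.2.2-5.2.4
void collapseEdge(int edge);                // performs v0v1 -> v
std::vector<int> neighbourEdges(int edge);  // edges affected by the collapse

void decimate(const std::vector<int>& edges, std::size_t targetCollapses) {
    // Step 2: initial heap of collapses, cheapest first.
    std::priority_queue<Collapse, std::vector<Collapse>,
                        std::greater<Collapse>> heap;
    for (int e : edges) heap.push({e, computeCost(e)});

    std::size_t done = 0;
    // Steps 3-5: repeatedly apply the cheapest collapse until the stop
    // criterion (here: a number of collapses) is reached.
    while (done < targetCollapses && !heap.empty()) {
        Collapse c = heap.top();
        heap.pop();
        if (!isCollapsible(c.edge)) continue;  // skip stale or rejected entries
        collapseEdge(c.edge);
        ++done;
        // Step 4: refresh costs in the local neighbourhood by re-pushing
        // (lazy deletion: outdated heap entries are skipped when popped).
        for (int n : neighbourEdges(c.edge)) heap.push({n, computeCost(n)});
    }
}
```

Re-pushing neighbour edges with fresh costs and skipping stale entries when they are popped (lazy deletion) is a common alternative to a heap that supports in-place priority updates.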

4.2 Planar proxies

Planar proxies are described by Salinas et al. as large planar parts in a mesh [10]. By looking for large planar parts, the input mesh can be described by a very rough representation that can be used for extreme mesh simplification. A planar proxy, $\varphi$, can be described as a plane $ax + by + cz + d = 0$, represented by a vector $[a\ b\ c\ d]$ with normal $n = [a\ b\ c]$, together with a set of vertices. Finding planar proxies is done using Mesh Refinement, more thoroughly explained in section 3.2.

4.3 Graph of proxies

The detected proxies are placed in an undirected graph, $G = (V_G, E_{G,\alpha})$, that captures the neighbourhood relationship between the proxies. Each proxy is represented as a vertex of $V_G$. The edges $E_{G,\alpha}$ consist of pairs of proxies whose distance to each other is lower than $\alpha$.
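As an illustration, the following is a minimal sketch of building the edge set of such a graph, assuming a hypothetical proxyDistance helper (e.g. the minimum distance between the vertex sets of two proxies):

```cpp
#include <utility>
#include <vector>

// Hypothetical helper: distance between two proxies, e.g. the minimum
// distance between their vertex sets.
double proxyDistance(int a, int b);

// Build E_{G,alpha}: proxies closer than alpha become adjacent in G.
std::vector<std::pair<int, int>> buildProxyGraph(int numProxies, double alpha) {
    std::vector<std::pair<int, int>> edges;
    for (int i = 0; i < numProxies; ++i)
        for (int j = i + 1; j < numProxies; ++j)
            if (proxyDistance(i, j) < alpha)
                edges.emplace_back(i, j);
    return edges;
}
```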

4.4 Error Quadrics

Garland and Heckbert, from now on referred to as GH, propose an algorithm that associates with each vertex a quadric representing an approximation of the error between the current and the initial mesh [7]. This quadric is a symmetric 4×4 matrix used to compute the sum of squared distances from a point to a set of planes. Let $P$ be a plane, $ax + by + cz + d = 0$, represented as a vector $P = [a\ b\ c\ d]$. This plane $P$ is associated with the quadric:

 2  a ab ac ad  2  T ab b bc bd Q = PP =   p ac bc c2 cd   ad bd cd d2

The squared distance of a point $v$ to $P$ can be written $d(v, P) = v^T Q_P v$. Salinas et al. also use quadrics but depart from the quadrics proposed by GH [10]. The quadrics used by Salinas et al. optimize simultaneously for several criteria by minimizing the sum of:

1. The supporting planes of the local mesh triangles

2. The planes of the local set of proxies, where such proxies are detected

3. The boundary of proxies

4. The boundary of the mesh

Each quadric is weighted by an area, for scale invariance and lower sensitivity to the initial mesh density.
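As a concrete illustration of the quadric above, the following minimal sketch builds $Q_P = PP^T$ for a plane $[a\ b\ c\ d]$ and evaluates the squared distance $v^T Q_P v$; it assumes the plane normal $[a\ b\ c]$ is normalized and uses plain arrays instead of a linear algebra library:

```cpp
#include <array>

using Vec4 = std::array<double, 4>;
using Mat4 = std::array<std::array<double, 4>, 4>;

// Build the quadric of a plane P = [a b c d] as the outer product P P^T.
Mat4 planeQuadric(const Vec4& P) {
    Mat4 Q{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            Q[i][j] = P[i] * P[j];
    return Q;
}

// Squared distance from point (x, y, z) to the plane: v^T Q_P v, with v
// extended to homogeneous coordinates [x y z 1].
double squaredDistance(const Mat4& Q, double x, double y, double z) {
    Vec4 v{x, y, z, 1.0};
    double d = 0.0;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            d += v[i] * Q[i][j] * v[j];
    return d;
}
```

Because the quadric of a set of planes is simply the sum of the individual matrices, accumulating error over a sequence of collapses reduces to 4×4 matrix additions, which is what makes the metric so cheap to maintain.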

4.4.1 Inner quadric

For a triangle $t$, $P_t$ denotes the supporting plane of $t$ and $Q_{P_t}$ its associated quadric. The set of proxies that contain the triangle $t$ is denoted $Proxies(t)$. For a planar proxy $\varphi$, $Q_\varphi$ is defined as the quadric of its plane. Each triangle $t$ is associated with a quadric $Q_t$:

$$Q_t = \begin{cases} Q_{P_t} & \text{if } t \text{ is not associated to a proxy} \\ (1-\lambda)\, Q_{P_t} + \lambda \sum_{\varphi \in Proxies(t)} Q_\varphi & \text{otherwise} \end{cases}$$

The edge $e$ gets the inner quadric $Q_{inner}(e)$, defined as the weighted sum

$$Q_{inner}(e) = \sum_{t \in T(e)} |t| \, Q_t$$

where $|t|$ denotes the area of the triangle $t$ and $T(e)$ the set of triangles around $e$.

The inner quadric is used to compute the cost and optimal placement for an edge collapse operator. The parameter $\lambda$ is an abstraction parameter that lets the user trade between the mesh, i.e. the local error quadrics, and the proxy quadrics. When $\lambda = 1$ and two proxies pass through edge $e$, the vertex is placed at the intersection of the proxies. When $\lambda = 0$, or when $e$ is not associated to any proxy, only the local geometric error quadric is used.
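A minimal sketch of these two formulas, assuming the plane and proxy quadrics have already been computed as in the previous sketch and that quadrics are stored as plain 4×4 arrays:

```cpp
#include <array>
#include <vector>

using Mat4 = std::array<std::array<double, 4>, 4>;

// A += s * B, componentwise: quadrics combine by (weighted) addition.
void addScaled(Mat4& A, const Mat4& B, double s) {
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            A[i][j] += s * B[i][j];
}

struct Triangle {
    Mat4 planeQuadric;                // Q_{P_t}, quadric of the supporting plane
    std::vector<Mat4> proxyQuadrics;  // Q_phi for each proxy containing t
    double area;                      // |t|
};

// Q_t: blend the local plane quadric against the proxy quadrics with lambda.
Mat4 triangleQuadric(const Triangle& t, double lambda) {
    Mat4 Q{};
    if (t.proxyQuadrics.empty()) {
        Q = t.planeQuadric;                      // no proxy: pure GH quadric
    } else {
        addScaled(Q, t.planeQuadric, 1.0 - lambda);
        for (const Mat4& Qphi : t.proxyQuadrics)
            addScaled(Q, Qphi, lambda);          // pull towards the proxy planes
    }
    return Q;
}

// Q_inner(e): area-weighted sum over the triangles T(e) around the edge.
Mat4 innerQuadric(const std::vector<Triangle>& trianglesAroundEdge, double lambda) {
    Mat4 Q{};
    for (const Triangle& t : trianglesAroundEdge)
        addScaled(Q, triangleQuadric(t, lambda), t.area);
    return Q;
}
```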

4.4.2 Boundary quadric

When calculating the boundary quadric for an edge $e$, two cases need to be handled. If the triangular faces connected to $e$ are associated to different proxies, $e$ gets a contribution from a proxy boundary quadric. If $e$ has only one face connected to it, the edge is at the border of the mesh, and $e$ gets a contribution from a mesh boundary quadric.

The boundary quadric is calculated in the same way for both the proxy boundary and the mesh boundary. For a boundary edge $e$, we denote by $R$ a plane that contains $e$ and by $Q_{e,R}$ the quadric associated to a plane orthogonal to $R$. The proxy boundary quadric is denoted $Q_{e,P}$ and the mesh boundary quadric $Q_{e,M}$, while $E_M$ and $E_P$ are the sets of boundary edges for the mesh and the proxies respectively. For the edge $e$, we also denote by $t$ the triangle associated to the edge and, as before, its area by $|t|$. The boundary quadric $Q_{bdry}(e)$ for an edge is defined as:

$$Q_{bdry}(e) = \sum_{e \in E_M} |t| \, Q_{e,M} + \sum_{e \in E_P} |t| \, Q_{e,P}$$

Figure 4.2 shows a mesh with its detected proxies, figure 4.3 shows the boundary edges of the roof proxy, and figure 4.4 shows the orthogonal plane passing through an edge $e$, used for building the quadric of the proxy boundary.

Figure 4.2: A mesh with its detected proxies.
Figure 4.3: The edges of the roof proxy.
Figure 4.4: Orthogonal plane passing through an edge.

5 Method

This chapter will describe how the feasibility study was conducted and how the algorithm was implemented.

5.1 Feasibility study

The thesis began with conducting a feasibility study to answer the research questions:

• Which library will be used as a starting point for implementing a decimation algorithm?

• What is a good metric for comparing the error between a decimated model and the original model?

5.1.1 Introduction to mesh simplification

The feasibility study started with a search for articles and papers on mesh simplification other than the work of Garland and Heckbert and Salinas et al. [7] [10]. A tutorial written by David P. Luebke was found that describes surface simplification well, with applications to different scenarios [9]. The tutorial reinforced that quadric error metrics should be used during this thesis, containing the quote:

"Quadric error metrics provide a fast, simple way to guide the simplification process with relatively minor storage costs. The resulting algorithm is extremely fast. The visual fidelity of the resulting simplifications tends to be quite high."


5.1.2 A search for libraries

A search was conducted for a library with an implementation of mesh representation and mesh decimation to use as a starting point. The libraries found to be of interest were: VCG (implemented in the application MeshLab) [6], OpenMesh [4] and CGAL [11]. The libraries should preferably be easy to work with and have extensive documentation, to make the feasibility study process faster. An already implemented edge collapse algorithm would be preferred, since it will be included in the decimation algorithm. The libraries were installed, and the source code and documentation were investigated further regarding how to evaluate them against each other. Spotscale provided 14 models, used in the comparison, on which to test the mesh decimation.

VCG (MeshLab)

The Visualization and Computer Graphics Library (VCG) is an open source library for the manipulation, processing and displaying of triangular meshes, developed by the Visual Computing Lab [6]. MeshLab is an open source and extensible system, also developed by the Visual Computing Lab, for the processing and editing of 3D triangular meshes. It provides a set of tools for editing, cleaning, healing, inspecting, rendering, texturing and converting these kinds of models. MeshLab uses the VCG library for all of its operations on a mesh, including its implementation of surface simplification, which it calls Quadric Edge Collapse Decimation. Since MeshLab uses VCG, there was no need to look into the code of the library itself; instead the application was used for testing this library. MeshLab has an option labeled Planar Simplification. A previous study conducted by Spotscale found that MeshLab's Quadric Edge Collapse Decimation with Planar Simplification gave the best visual results compared to other libraries at the time. For this reason, MeshLab was used both with and without its planar simplification.

OpenMesh

OpenMesh is an open source library for handling and representing polygon meshes, developed by the Computer Graphics Group, RWTH Aachen [2]. The library lets the user specify traits for vertices, edges and faces on top of the predefined attributes. From the OpenMesh introduction web page [3]:

It was designed with the following goals in mind:

1. Flexibility: provide a basis for many different algorithms without the need for adaptation.

2. Efficiency: maximize time efficiency while keeping memory usage as low as possible.

3. Ease of use: wrap complex internal structure in an easy-to-use interface.

CGAL

The Computational Geometry Algorithms Library (CGAL) is a software project that, like the other libraries, provides efficient and reliable geometric algorithms [11]. CGAL is used in various areas that need geometric computation, such as robotics, computer aided design, computer graphics, etc.

5.1.3 Metro - A way of comparing different algorithms

To compare the different libraries' implementations of mesh decimation with one another, the tool Metro was used [5]. Metro, also developed by the Visual Computing Lab, is a tool designed for evaluating the difference between two triangular meshes. The comparison was always made between the original 3D model and the decimated model. The models were decimated with the three chosen libraries, VCG (both with and without planar simplification), OpenMesh and CGAL, and then processed with Metro.

Metro reports the mean error between the original and the decimated mesh, and this metric was chosen as the way of comparing the resulting meshes of the different implementations. Metro also has an option for saving a mesh with the error as per-vertex colour and quality, and this was done with the original mesh to get a visualization of where the decimated model differs the most from the original one. The difference is displayed on a red-blue color scale, with red being little to no difference and blue being a big difference from the original mesh.

5.2 Implementation

Because the implementation was based on the outcome of the feasibility study, the implementation commenced once the study was finished. The company Spotscale had guidelines for setting up the environment and for how the application should look and operate once it was done.

5.2.1 Inner and Boundary quadrics

The implementation started with the inner and the boundary quadrics described in sections 4.4.1 and 4.4.2. The initial error quadrics were calculated for each face in the mesh, and then the inner quadrics were calculated. If the selected face was a boundary face of the mesh, or if the neighbouring faces had a different proxy assigned to them, the face also got a contribution from a boundary quadric. Salinas et al. propose to recalculate the quadrics after each collapse during the decimation, giving a memoryless decimation process [10]. This memoryless simplification was introduced by Lindstrom and Turk in 1998 [8]. When implementing this, the models were not as consistent as when the quadrics were simply added up at each collapse. This led to the decision not to use memoryless decimation.

5.2.2 Solving the problem with flipping normals

When doing extreme simplification, down to as low as 1% of the original mesh's face count, a flipping of the face normals around the collapse occurred at times with the implemented algorithm. To avoid this, a simulation of the collapse was introduced in the algorithm, and the collapse was rejected if the normal of any of the faces around the collapse deviated more than 150 degrees. This specific angle was the result of manually testing different angles, and the chosen parameter created the least amount of problematic faces.
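A minimal sketch of such a test, using simplified vector types; the 150 degree threshold is the empirically chosen one described above:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Unnormalized normal of the triangle (p0, p1, p2).
static Vec3 faceNormal(const Vec3& p0, const Vec3& p1, const Vec3& p2) {
    return cross(sub(p1, p0), sub(p2, p0));
}

// True if moving 'oldPos' to 'newPos' in the face (v, a, b) rotates its
// normal by more than maxDeviationDeg (150 degrees in this thesis).
bool normalFlips(const Vec3& oldPos, const Vec3& newPos,
                 const Vec3& a, const Vec3& b, double maxDeviationDeg = 150.0) {
    const double pi = 3.14159265358979323846;
    Vec3 nOld = faceNormal(oldPos, a, b);
    Vec3 nNew = faceNormal(newPos, a, b);
    double denom = std::sqrt(dot(nOld, nOld) * dot(nNew, nNew));
    if (denom == 0.0) return true;  // degenerate face: reject the collapse
    double cosAngle = std::clamp(dot(nOld, nNew) / denom, -1.0, 1.0);
    return std::acos(cosAngle) > maxDeviationDeg * pi / 180.0;
}
```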

5.2.3 Avoiding slivers and needles

A common problem when decimating models is the creation of slivers and needles. Figures 5.1a and 5.1b display a sliver and a needle respectively. These artifacts are not desirable and are hard to texture. To avoid these triangles, an analysis of the faces surrounding the collapse was implemented. A simulation of the collapse is run, and if the biggest angle in the faces around the collapse exceeds 150 degrees, the collapse is rejected. For needles, if the ratio between the shortest and second shortest side of any of the surrounding faces is smaller than 0.01, the collapse is rejected. By introducing these verifications in the algorithm, the amount of problematic triangles dropped significantly. A sketch of the two tests follows after the figure.

(a) A sliver.
(b) A needle.

Figure 5.1: Artifacts created when decimating meshes.
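A minimal sketch of the two tests, with a simplified vector type and the thresholds from the text (150 degrees and 0.01):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

static double dist(const Vec3& a, const Vec3& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// True if the triangle (p0, p1, p2) is a sliver (largest interior angle
// above maxAngleDeg) or a needle (ratio between the shortest and second
// shortest side below minEdgeRatio).
bool isSliverOrNeedle(const Vec3& p0, const Vec3& p1, const Vec3& p2,
                      double maxAngleDeg = 150.0, double minEdgeRatio = 0.01) {
    const double pi = 3.14159265358979323846;
    double sides[3] = {dist(p1, p2), dist(p0, p2), dist(p0, p1)};
    std::sort(sides, sides + 3);  // sides[0] shortest, sides[2] longest
    if (sides[0] <= 0.0) return true;                     // degenerate
    if (sides[0] / sides[1] < minEdgeRatio) return true;  // needle

    // The largest angle lies opposite the longest side; law of cosines.
    double s0 = sides[0], s1 = sides[1], longest = sides[2];
    double cosA = (s0 * s0 + s1 * s1 - longest * longest) / (2.0 * s0 * s1);
    double angleDeg = std::acos(std::clamp(cosA, -1.0, 1.0)) * 180.0 / pi;
    return angleDeg > maxAngleDeg;                        // sliver
}
```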

5.2.4 Decimating with an error margin

One point in the list of specification was to be able to continue decimating the mesh for as long as it does not exceed an error margin. This was implemented with a simulation of the collapse at hand, where the collapse is rejected if a specified local Hausdorff ratio is exceeded. The local Hausdorff distance is calculated for the faces around the collapse, and if the distance is shorter than a user-set tolerance, the collapse takes place and the decimation continues. If not, the collapse is rejected, and the decimation still continues with the next candidate. When using the error tolerance, the mesh will not be decimated to the desired number of faces, but will instead continue for as long as the tolerance is not exceeded.
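A minimal sketch of the rejection logic; the local Hausdorff computation is left as a hypothetical helper, since it depends on the surrounding mesh data structure:

```cpp
// Hypothetical helper: simulates the collapse of 'edge' and returns the
// local Hausdorff distance, measured on the faces around the collapse,
// between the mesh before and after.
double localHausdorffAfterCollapse(int edge);
void collapseEdge(int edge);

// Returns true if the collapse was performed. A rejection does not stop
// the decimation; the caller simply continues with the next candidate,
// which is why the tolerance, not the target, decides the final face count.
bool tryCollapse(int edge, double tolerance) {
    if (localHausdorffAfterCollapse(edge) >= tolerance)
        return false;  // exceeds the user-set tolerance: reject
    collapseEdge(edge);
    return true;
}
```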

5.2.5 Vary the amount of simplification

The specification list included a point about being able to vary the amount of decimation in different places of the mesh, and this was handled by letting the user input another file. The input file is created in the same software, Mesh Refinement, in which the proxies are detected, but here the user can choose to highlight important or less important features in the mesh. By sending this file together with a parameter to the application, the user can direct the algorithm to decimate more or less in this area. For instance, if the chosen model is of a castle with a courtyard filled with trees and the user wants to keep these trees in the decimation, the user would "paint" the trees in Mesh Refinement and send a factor of 1.2 to the application. The inner quadrics of the trees' faces then receive an increase of 20%, making them more expensive to collapse. If the user instead wants to remove the trees, the input factor can be set to 0.6, making the faces 40% cheaper to collapse.
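A minimal sketch of this weighting, assuming the highlighted face ids have been parsed from the features file and that one 4×4 quadric is stored per face:

```cpp
#include <array>
#include <unordered_set>
#include <vector>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Scale the quadrics of the highlighted faces by the importance factor:
// factor 1.2 makes them 20% more expensive to collapse, factor 0.6 makes
// them 40% cheaper. 'highlighted' holds the face ids from the features file.
void applyImportanceFactor(std::vector<Mat4>& faceQuadrics,
                           const std::unordered_set<int>& highlighted,
                           double factor) {
    for (int f : highlighted)
        for (auto& row : faceQuadrics[f])
            for (double& q : row)
                q *= factor;
}
```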

5.3 Evaluation

The final implementation of the algorithm will be evaluated in the same way as the libraries were. The 14 models will be decimated to the same percentages as in the feasibility study and then run through the software Metro. The results will be plotted next to those of the other libraries and compared in terms of mean error. It is hard to evaluate a decimated model by looking only at the mean error metric, so the resulting models will also be viewed in MeshLab and visually compared with the originals. This is probably the best way to actually evaluate the algorithm, but it demands that the viewer knows what a "good" mesh looks like.

6 Result

This chapter will describe the results of the feasibility study and the implementation.

6.1 Feasibility study

6.1.1 A tool for evaluating errors in a mesh

Metro is the chosen tool for comparing the decimated mesh to the original mesh [5]. Metro uses an approximate approach based on surface sampling and point-to-surface distance computation with the Hausdorff distance. The Hausdorff distance measures the greatest of the distances from a point in one set to the closest point in the other set. Spotscale provided 14 different models of buildings for testing the libraries that had mesh decimation implemented. After decimating the models with each library to 50, 25, 10, 5, 4, 3, 2, 1, 0.5 and 0.1% of the original meshes' triangle faces, the decimated models were processed with Metro and the resulting mean error for each library could be calculated. Figure 6.1 displays the mean error for each library. Metro also saves the original mesh with a red-blue color gradient, with the error as per-vertex color and quality, which can be viewed in figure 6.2. Looking at figure 6.2a, it is obvious that the error is larger than for the other meshes. For the other meshes, it is much harder to distinguish which of the libraries performed with the least error. What can be seen is that smaller details in the nearest tower of the building are better preserved in figure 6.2c than in figure 6.2b or figure 6.2d.


Figure 6.1: The resulting mean error from Metro: the mean error for each evaluated library at different degrees of decimation, over the 14 different models.

6.1.2 The chosen library

With the feasibility study completed, the chosen library to base the algorithm on was OpenMesh. Looking at figure 6.1, one can observe that the example code from CGAL performed worst across the board. The other three results are similar to each other, but at 0.5% and 0.1% of the original mesh faces, MeshLab and MeshLab with planar simplification outperform OpenMesh.

Despite the fact that MeshLab, or rather VCG, performed best in the feasibility study, the decision was made to use OpenMesh as the library on which to base the algorithm. This was due to the promising results, given that the OpenMesh example did not have any implementation for finding planar parts in the mesh. After looking through the documentation of each library, OpenMesh also proved to be the best documented library.

(a) CGAL
(b) OpenMesh

(c) MeshLab
(d) MeshLab with planar simplification

Figure 6.2: Riddarholmen_a4 – Each mesh has been decimated to 1% of its original faces and then compared with the original mesh using Metro. Red correlates to a small error, while blue correlates to a large error.

6.2 Implementation - The resulting application

During the implementation, the algorithm was tested on only one model, to speed up the process. Once the implementation was finished, the algorithm was used on all the models provided by Spotscale. The resulting application is run from the command line with a few input parameters and can be seen in figure 6.3. The required parameters for the application are:

• Input mesh - The mesh to be decimated.

• Output mesh - Where the resulting mesh should be saved.

• Planar proxies file - The detected proxies in the mesh.

• One or more of the following:

– Percentage reduction - Decimates the mesh to a desired percentage of the original mesh faces. A number between 0 and 1.

– Reduction to specified face number - Decimates the mesh to a set number of faces. A positive number.

Figure 6.3: The resulting application

– Error tolerance - Lets the decimation continue as long as it does not exceed this error tolerance. A number between 0 and 1. The smaller the number, the tighter the constraint.

The optional parameters are:

• Features file - The parts of the mesh to decimate more or less, depending on the importance factor.

• Importance factor - The factor by which the inner and boundary quadrics of those parts are multiplied.

• Lambda - An abstraction parameter between 0 and 1. Provides a means of trading mesh versus proxy fidelity.

• Mu - Boundary parameter. Provides a means of trading boundary versus inner simplification.

If both a percentage and a target number of faces are given, the smaller resulting number of the two is selected as the target number of faces. Lambda and mu are used during the calculation of the quadrics and provide the user a means of trading mesh versus proxy fidelity and boundary versus inner simplification respectively. In the paper by Salinas et al., lambda and mu were both set to 0.8, and so they are in this thesis [10].

6.2.1 Decimating a model

Using the algorithm, one would therefore choose a model and create the planar proxies in Mesh Refinement, then start the application, input the model and the output path, and choose whether to reduce to a percentage, to a target number of faces, or via the error tolerance option. Results of the error tolerance are shown in section 6.2.2. An example would be to choose the model Riddarholmen_a4, shown in its original state in figure 6.4. A planar proxy file of the model is created and shown in figure 6.5. The application is run with the choice to decimate the model to 4% of its original number of faces. After the algorithm has finished, the decimated model can be viewed in figure 6.6.

Figure 6.4: Original model of Riddarholmen_a4.

6.2.2 Decimating with error tolerance

When decimating with error tolerance, a target number of faces still has to be given; if no target is given, the algorithm will not start. The error tolerance parameter is a constraint, and not a set number of meters as was given as an example in the list of specification. Setting the error tolerance to a number close to 1 will yield a loose constraint, while setting the number close to 0 will result in a tight constraint. The tolerance is tested during each edge collapse with a simulation, and if the tolerance is exceeded, the collapse is rejected. This results in different numbers of faces for different tolerances.

Figure 6.5: The detected planar proxies in the model.

Figure 6.6: The model decimated to 4% of its original faces.

The original model of Riddarholmen_a4 can be seen in figure 6.4. Looking at figure 6.7, one can observe that the same target number of faces has been set, but the output differs in how many collapses were made. Figure 6.7a shows the application output with the error tolerance set to 0.2, while figure 6.7b shows it set to 0.8. Figure 6.7c and figure 6.7d show the decimated models with the error tolerance set to 0.2 and 0.8 respectively.

(a) Application output with error tolerance set to 0.2.
(b) Application output with error tolerance set to 0.8.

(c) Decimated model with error tolerance set to 0.2.
(d) Decimated model with error tolerance set to 0.8.

Figure 6.7: Application and resulting models with different error tolerances

6.2.3 Varying the decimation

The implementation of having certain parts of the model more or less decimated resulted in another input from the user, with the chosen faces in a file similar to the planar proxies file. The faces highlighted in Mesh Refinement are loaded into the application, and during the decimation the quadrics of these faces are affected by a factor given by the user.

An example of varying the amount of decimation in a model follows below. The model displayed in figure 6.8 is the original model of Riddarholmenb3limited. The fountain is the subject of the varied decimation. A feature file is created in Mesh Refinement with the fountain's faces highlighted; the highlighted faces can be seen in figure 6.9. For reference, figure 6.10 is a close-up of the fountain before decimation. Figure 6.11 displays a model decimated down to 5% of the original number of faces without varying the decimation. Figure 6.12 displays the same model decimated to the same number of faces, but with the feature file used and the importance factor set to 1.2. The quadrics of the fountain's faces are multiplied by 1.2, increasing the cost of collapsing these faces by 20%. The fountain is kept more detailed than the rest of the model.

On the other hand, the importance factor can be set to a value below 1 to cheapen the cost of collapsing the faces. Figure 6.13 shows the same model with the importance factor set to 0.01. The quadrics of the faces are multiplied by 0.01, decreasing the cost of collapsing them to 1% of the original cost. The fountain is basically removed from the model.

Figure 6.8: Original model

6.3 Evaluation of the algorithm

After the structure-aware decimation algorithm was done, it was evaluated in the same manner as in the feasibility study, using the same 14 models. The detection of planar proxies was automated in Mesh Refinement for ease of use, with the same parameters sent to Mesh Refinement for every model. The decimated models were then used in Metro to calculate the mean error for the structure-aware algorithm. The results were plotted in a graph against the other libraries and can be seen in figure 6.14.

Figure 6.9: Original model with the fountain colored

Figure 6.10: Close-up of the fountain

Figure 6.11: Decimation without varying the decimation

Figure 6.12: Decimation with the quadrics of the fountain increased by 20%

Figure 6.13: Decimation with the quadrics of the fountain reduced to 1%

Figure 6.14: The graph displaying the mean error of the new structure-aware algorithm.

7 Discussion

This chapter discusses the results, the method and further work.

7.1 Result

7.1.1 The results from the feasibility study

When the results from the feasibility study were examined with regard to OpenMesh, the higher mean error for the 0.5 and 0.1% decimated models raised the suspicion that there was something about certain models that could not be handled by the decimation. Looking at each individual model with regard to the mean error, one can note that some of the models are handled as well as by the VCG library, as seen in figure 7.1a. On the other hand, for the model Landtag, figure 7.1b shows that the mean error is just as bad as for the CGAL library at the lower percentages of decimation. This inconsistency at the lower percentages causes fluctuations that result in worse overall results.

Why the OpenMesh decimation performs worse in some cases might have something to do with the algorithm decimating some triangles to the point of destroying the mesh. Reviewing figure 7.2, one can make a few guesses as to why the mean error gets higher. In figure 7.2a, the model is destroyed to the point where the original model is unrecognizable. Comparing this to figure 7.2c, the corners are kept intact. This is probably the reason why the mean error of OpenMesh gets higher at the lower percentages of decimation. One can speculate as to why the corners are destroyed by OpenMesh; it might have something to do with how the quadrics are added up, or how they are calculated in the first place.


(a) Riddarholmen_a4
(b) Landtag

Figure 7.1: The difference in mean error between the models Riddarholmen_a4 and Landtag.

(a) Decimated model of Landtag
(b) Mean error of Landtag

(c) Decimated model of Riddarholmen_a4
(d) Mean error of Riddarholmen_a4

Figure 7.2: Landtag and Riddarholmen_a4 – The decimated models and the mean errors. Both models have been decimated to 0.1% of the original models' triangle faces.

7.1.2 The resulting application

The application fulfilled most of the list of specification given at the start of the thesis. The amount of decimation can be varied throughout the mesh, the mesh can be decimated to a target number of faces, flat surfaces can be taken into consideration via Spotscale's plane detection, the RAM usage and speed of calculation are within reason, slivers and needles are avoided during decimation, and an error margin can be chosen to decimate with. The only points not fulfilled from the list of specification were to ensure that non-manifold triangles are avoided completely, to have the implementation use the point cloud on which the mesh is built, and to decimate the texture. Two of these points were additional and would only be looked at if time was available. During the final stages of the thesis, research was done on how non-manifolds are created and avoided, but unfortunately time ran out before anything could be implemented.

Having the application also decimate the texture of the model would be of great benefit, but since time was a limiting factor this was left out. As it stands, the uv-coordinates of the decimated model do not correspond to the correct points in the texture, and therefore, if one were to apply the original texture to the decimated mesh, a strange looking model would be created.

7.1.3 The structure-aware decimation algorithm

The final implementation of the decimation algorithm performs well compared to the other algorithms. With further development, some things could surely be resolved and improved, with a lower mean error as a result. Looking at figure 6.14, the algorithm outperformed all the other algorithms except MeshLab. Since the VCG library was not examined in great detail, it is hard to tell what the differences between the two are. This is something that could be looked at when further developing the structure-aware decimation. Nevertheless, the algorithm is of great benefit for the company Spotscale, as they now have greater control over and understanding of how the decimation of their models is done.

7.2 Method

7.2.1 The choice of library

Using the OpenMesh library was, in retrospect, a good choice. The library was well documented, with examples showing what the already implemented decimation algorithm was doing [1]. It was an easy library to work with when introducing new concepts, and the people working at Spotscale had some previous experience with it, making it easy to ask for help in the beginning about how things were organized in the library.

7.2.2 Decimating with an error tolerance

The implementation of decimating with an error tolerance is not very intuitive or easy to use. Since it is only a measure of "strictness", it is hard to predict how much the mesh will be decimated. A better approach to this feature would instead have been to let the decimation continue for as long as the error stays below a margin expressed in actual units, e.g. meters, as suggested in the list of specification.

7.2.3 Vary the amount of decimation

The user can choose to input another file, of the same kind as the detected planes file, which is instead used for varying the decimation in the model. The user can choose to have these parts of the model be less or more decimated.

7.2.4 Evaluation of the final algorithm

The tool Metro, which was used for the evaluation, measures the mean error of the models using the Hausdorff distance. No other software for evaluation was found during the feasibility study. A difficult part of judging a decimated model is to find a metric that corresponds to how "good" the model looks. As mentioned in chapter 5, a viewer with experience will know what a good model and a bad model look like, but a measurable metric that corresponds to this was not found during this thesis. The closest and best metric found was the mean error.

7.3 Further work

As mentioned in section 7.1.2, decimation with texture was not looked at, due to time constraints. This would be a valuable feature for the algorithm to include, making the texture fit the new decimated model. It would also be of great benefit to combine the plane detection software Mesh Refinement with the decimation algorithm. In that case the user would get direct feedback when the planes are detected.

8 Conclusion

This chapter concludes the thesis and answers the research questions.

8.1 Aim

The aim of the thesis was to develop a mesh decimation algorithm for the company Spotscale, and that has been fulfilled to a large extent. Only a few of the points in the list of specification were not fulfilled: avoiding non-manifolds, using the point cloud and decimating with texture were unfortunately not implemented due to a lack of time. Some improvements can surely be made to improve both the performance and the look of the resulting model. For instance, as mentioned in section 7.1.2, decimation with texture could be looked at and implemented to ensure that the texture fits the new decimated model. During the final weeks of working on the application, the avoidance of non-manifold triangles was researched, but time ran out before anything of significance could be implemented. This would be the first step to take if more time were available.

8.2 Research questions

The library chosen to build the algorithm upon was OpenMesh, after a feasibility study had researched how different libraries compare to each other. Even though the library did not perform best out of all the libraries examined, it had other benefits, such as good documentation, which made the decision easy.

The metric chosen for comparing the error between decimated meshes was the mean error metric. It uses the Hausdorff distance, and the tool Metro was used throughout the thesis to evaluate the decimated meshes; it was also used for the comparison between libraries.

The final algorithm performed well compared to the other already implemented algorithms studied in the feasibility study. While it did not perform best across the board, it is in the top three when it comes to the mean error metric.

Bibliography

[1] OpenMesh documentation. http://www.openmesh.org/Documentation/OpenMesh-Doc-Latest/index.html. Accessed: 2019-01-25.

[2] Computer Graphics Group, RWTH Aachen University. http://www.graphics.rwth-aachen.de/. Accessed: 2019-01-25.

[3] OpenMesh introduction page. https://www.openmesh.org/intro/. Accessed: 2018-09-25.

[4] Mario Botsch, Stephan Steinberg, Stephan Bischoff, and Leif Kobbelt. OpenMesh – a generic and efficient polygon mesh data structure. 2002.

[5] Paolo Cignoni, Claudio Rocchini, and Roberto Scopigno. Metro: measuring error on simplified surfaces. In Computer Graphics Forum, volume 17, pages 167–174. Blackwell Publishers, 1998.

[6] Paolo Cignoni, Marco Callieri, Massimiliano Corsini, Matteo Dellepiane, Fabio Ganovelli, and Guido Ranzuglia. MeshLab: an Open-Source Mesh Processing Tool. In Vittorio Scarano, Rosario De Chiara, and Ugo Erra, editors, Eurographics Italian Chapter Conference. The Eurographics Association, 2008. ISBN 978-3-905673-68-5. doi: 10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2008/129-136.

[7] Michael Garland and Paul S Heckbert. Surface simplification using quadric error metrics. In Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 209–216. ACM Press/Addison-Wesley Publishing Co., 1997.

[8] Peter Lindstrom and Greg Turk. Fast and memory efficient polygonal simplification. In Visualization '98. Proceedings, pages 279–286. IEEE, 1998.


[9] David P Luebke. A developer's survey of polygonal simplification algorithms. IEEE Computer Graphics and Applications, (3):24–35, 2001.

[10] David Salinas, Florent Lafarge, and Pierre Alliez. Structure-aware mesh decimation. In Computer Graphics Forum, volume 34, pages 211–227. Wiley Online Library, 2015.

[11] The CGAL Project. CGAL User and Reference Manual. CGAL Editorial Board, 4.12.1 edition, 2018. URL https://doc.cgal.org/4.12.1/Manual/packages.html.