Tetrahedral Mesh Generation for Deformable Bodies

Neil Molino∗   Robert Bridson†   Ronald Fedkiw‡
Stanford University
∗e-mail: [email protected]   †e-mail: [email protected]   ‡e-mail: [email protected]

Abstract

Motivated by the simulation of deformable bodies, we propose a new tetrahedral mesh generation algorithm that produces both high quality elements and a mesh that is well conditioned for subsequent large deformations. We use a signed distance function defined on a grid in order to represent the object geometry. After tiling space with a uniform lattice based on crystallography, we identify a subset of these tetrahedra that adequately fill the space occupied by the object. Then we use the signed distance function or other user defined criteria to guide a red green mesh subdivision algorithm that results in a candidate mesh with the appropriate level of detail. After this, both the signed distance function and topological considerations are used to prune the mesh as close to the desired shape as possible while keeping the mesh flexible for large deformations. Finally, we compress the mesh to tightly fit the object boundary using either masses and springs, the finite element method or an optimization approach to relax the positions of both the interior and boundary nodes. The resulting mesh is well suited for simulation since it is highly structured, has robust topological connectivity in the face of large deformations, and is readily refined if deemed necessary during subsequent simulation.

Figure 1: Tetrahedral mesh of a cranium (80K elements).

CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling - Physically based modeling; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling - Curve, surface, solid and object representations

Keywords: tetrahedral mesh generation, BCC lattice, red green refinement, finite element method, level set methods

1 Introduction

Tetrahedral meshes are used in a number of areas including fracture [O'Brien and Hodgins 1999; O'Brien et al. 2002], haptics [Debunne et al. 2001], solid modeling [Cutler et al. 2002], surgical simulations [Ganovelli et al. 2000], and the modeling of biological tissue including the brain [Grinspun et al. 2002], the knee [Hirota et al. 2001] and even fatty tissue [James and Pai 2002]. Robust methods for generating these meshes are in high demand. Of particular interest to us are simulations of highly deformable bodies such as the muscle and fatty tissues commonly encountered in graphics, biomechanics or virtual surgery.

Mesh generation is not only a broad field, but is in some sense many fields, each concerned with the creation of meshes that conform to quality measures specific to the application at hand. The requirements for fluid flow and heat transfer, where the mesh is not deformed, and for small deformation solids, where the mesh is barely deformed, can be quite different from those for simulating soft biological tissue that may undergo large deformations. For example, while an optimal mesh for a fluid flow simulation should include anisotropically compressed elements in boundary layers, e.g. [Garimella and Shephard 1998; Lohner and Cebral 1999], these highly stretched cells tend to be ill-conditioned when a mesh deforms significantly, as is typical for soft bodies. Either the mesh is softer in the thin direction and the cell has a tendency to invert, or the mesh is stiffer in the thin direction and the simulation becomes very costly as the time step shrinks with higher stiffness and smaller element cross-section. Thus, although our method has been designed to provide a high degree of adaptivity both to resolve the geometry and to guarantee quality simulation results, we neither consider nor desire anisotropically stretched elements. Also, since highly deformable bodies tend to be devoid of sharp features such as edges and corners, we do not consider boundary feature preservation.
Our main concern is to generate a mesh that will be robust when subsequently subject to large deformations. For example, although we obviously want an adaptive mesh with smaller elements in areas where more detail is desired, it is even more important to have a mesh that can be adapted during the simulation since these regions will change. Motivated by crystallography, we use a body-centered cubic (BCC) mesh (see e.g. [Burns and Glazer 1990]) that is highly structured and produces similar (in the precise geometric sense) tetrahedra under regular refinement. This allows us to adaptively refine both while generating the mesh and during the subsequent simulation.

We start with a uniform tiling of space and use a signed distance function representation of the geometry to guide the creation of the adaptive mesh, the deletion of elements that are not needed to represent the object of interest, and the compression of the mesh necessary to match the object boundaries. This compression stage can be carried out using either a mass spring system, a finite element method or an optimization based approach. One advantage of using a physically based compression algorithm is that it gives an indication of how the mesh is likely to respond to the deformations it will experience during simulation. This is in contrast to many traditional methods that may produce an initial mesh with good quality measures, but also with hidden deficiencies that can be revealed during simulation, leading to poor accuracy or element collapse. Moreover, our novel topological considerations (discussed below) were specifically designed to address these potential defects present in most (if not all) other mesh generation schemes.

2 Related Work

Delaunay methods have not been as successful in three spatial dimensions as in two, since they admit flat sliver tetrahedra of negligible volume. [Shewchuk 1998] provides a nice overview of these methods, including a discussion of why some of the theoretical results are not reassuring in practice. Moreover, he discusses how the worst slivers can often be removed. [Cheng et al. 2000] also discusses sliver removal, but states that their theorem gives an estimate that is "miserably tiny".

Figure 2: Tetrahedral mesh of a cranium (cutaway view) showing the regularity and coarseness of the interior tetrahedra (80K elements).

Advancing front methods start with a boundary discretization and march a "front" inward, forming new elements attached to the existing ones, see e.g. [Schöberl 1997]. While they conform well to the boundary, they have difficulty when fronts merge, which unfortunately can occur very near the important boundary in regions of high curvature, see e.g. [Garimella and Shephard 1998; Lohner and Cebral 1999].

[Cutler et al. 2002] starts with either a uniform grid or a graded octree mesh and subdivides each cell into five tetrahedra. Then tetrahedra that cross an internal or external interface are simply split, which can form poorly shaped elements for simulation. Neighboring tetrahedra are also split to avoid T-junctions. The usual tetrahedral flips and edge collapses are carried out in an attempt to repair the ill-conditioned mesh.
In [Radovitzky and Ortiz 2000], the authors start with a face-centered cubic (FCC) lattice defined on an octree and use an advancing front approach to march inward, constructing a mesh with the predetermined nodes of the FCC lattice. They choose FCC over BCC because it gives slightly better tetrahedra for their error bounds. However, after any significant deformation the two meshes will usually have similar character. Moreover, since we keep our BCC connectivity intact (as opposed to [Radovitzky and Ortiz 2000]), we retain the ability to further refine our BCC mesh during the calculation to obtain locally higher resolution for improved accuracy and robustness. On the other hand, their approach is better at resolving boundary features and is thus most likely superior for problems with little to no deformation.

[Shimada and Gossard 1995] packed spheres (or ellipsoids [Yamakawa and Shimada 2000]) into the domain with mutual attraction and repulsion forces, and generated tetrahedra using the sphere centers as sample points via either a Delaunay or advancing front method. However, ad hoc addition and deletion of spheres is required in a search for a steady state, and both local minima and "popping" can be problematic. This led [Li et al. 1999] to propose the removal of the dynamics from the packing process, instead marching in from the boundary removing spherical "bites" of volume one at a time.

[Debunne et al. 2001] carried out finite element simulations on a hierarchy of tetrahedral meshes, switching to the finer meshes only locally where more detail is needed. They state that one motivation for choosing this strategy is the difficulty in preserving mesh quality when subdividing tetrahedral meshes. [Grinspun et al. 2002] proposes basis refinement, pointing out that while T-junctions are easily removed in two spatial dimensions, it is more involved in three spatial dimensions. Our method alleviates both of these concerns since subdivision of our BCC mesh leads to a similar (the tetrahedra are congruent up to a dilation) BCC mesh, and the particular red green strategy that we use can be implemented in a straightforward manner.

Our compression phase moves the nodes on the boundary of our candidate mesh to the implicit surface, providing boundary conformity. Related work includes [Bloomenthal and Wyvill 1990] who attracted particles to implicit surfaces for visualization, [Turk 1991; Turk 1992] who used particles and repulsion forces to sample and retriangulate surfaces, [de Figueiredo et al. 1992; Crossno and Angel 1997] who used the same idea to triangulate implicit surfaces, [Szeliski and Tonnesen 1992] who modeled surfaces with oriented particles, and [Witkin and Heckbert 1994] who used particle repulsion to sample (and control) implicit surfaces.

In some sense, this wrapping of our boundary around the level set is related to snakes [Kass et al. 1987] or GDM's [Miller et al. 1991], which have been used to triangulate isosurfaces, see e.g. [Sadarjoen and Post 1997]. [Neugebauer and Klein 1997] started with a marching cubes mesh and moved vertices to the centroid of their neighbors before projecting them onto the zero level set in the neighboring triangles' average normal direction. [Grosskopf and Neugebauer 1998] improved this method using internodal springs instead of projection to the centroid, incremental projection to the zero isocontour, adaptive subdivision, edge collapse and edge swapping. [Kobbelt et al. 1999] used related ideas to wrap a mesh with subdivision connectivity around an arbitrary one (other interesting earlier work includes [Hoppe et al. 1993; Hoppe et al. 1994]), but had difficulty projecting nodes in one step, emphasizing the need for slower evolution. Similar techniques were used by [Bertram et al. 2000; Hormann et al. 2002] to wrap subdivision surfaces around implicit surfaces. To improve robustness, [Wood et al. 2000] adaptively sampled the implicit surface on the faces of triangles, redistributed the isovalues times the triangle normals to the vertices to obtain forces, and replaced the spring forces with a modified Laplacian smoothing restricted to the tangential direction. [Ohtake and Belyaev 2002] advocated moving the triangle centroids to the zero isocontour instead of the nodes, and matching the triangle normals with the implicit surface normals.

Although we derive motivation from this work, we note that our problem is significantly more difficult since these authors move their mesh in a direction normal to the surface, which is orthogonal to their measure of mesh quality (shapes of triangles tangent to the surface). When we move our mesh normal to the surface, it directly conflicts with the quality of the surface tetrahedra. [de Figueiredo et al. 1992] evolved a volumetric mass spring system in order to align it with (but not compress it to) the zero isocontour, but the measure of mesh quality was still perpendicular to the evolution direction since the goal was to triangulate the zero isocontour. Later, however, [Velho et al. 1997] did push in a direction conflicting with mesh quality. They deformed a uniform-resolution Freudenthal lattice to obtain tetrahedralizations using a mass spring model, but were restricted to simple geometries mostly due to the inability to incorporate adaptivity. In two spatial dimensions, [Gloth and Vilsmeier 2000] also moved the mesh in a direction that opposed the element quality. They started with a uniform Cartesian grid bisected into triangles, threw out elements that intersected or were outside the domain, and moved nodes to the boundary in the direction of the gradient of the level set function, using traditional smoothing, edge-swapping, insertion and deletion techniques on the mesh as it deformed.
3 A Crystalline Mesh

We turn our attention to the physical world for inspiration and start our meshing process with a body-centered cubic (BCC) tetrahedral lattice. This mesh has numerous desirable properties and is an actual crystal structure ubiquitous in nature, appearing in vastly different materials such as soft lithium and hard iron crystals, see e.g. [Burns and Glazer 1990].

The BCC lattice consists of nodes at every point of a Cartesian grid along with the cell centers. These node locations may be viewed as belonging to two interlaced grids. Additional edge connections are made between a node and its eight nearest neighbors in the other grid. See figure 3 where these connections are depicted in red and the two interlaced grids are depicted in blue and in green. The BCC lattice is the Delaunay complex of the interlaced grid nodes, and thus possesses all properties of a Delaunay tetrahedralization. Moreover, all the nodes are isomorphic to each other (and in particular have uniform valence), every tetrahedron is congruent to the others, and the mesh is isotropic (so the mesh itself will not erroneously induce any anisotropic bias into a subsequent calculation). The BCC lattice is structured and thus has advantages for iterative solvers, multigrid algorithms, and both computational and memory requirements.
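To make the construction concrete, the following sketch builds such a lattice in code. It is only an illustration under our own assumptions (Python with numpy, a block of nx by ny by nz cubes, nodes deduplicated by their lattice coordinates), not the implementation used for the results in this paper: each interior face of the Cartesian grid contributes four tetrahedra, formed by the two cell centers straddling the face together with each of the face's four edges.

    import numpy as np

    def bcc_lattice(nx, ny, nz, h=1.0):
        """Build a BCC tetrahedral lattice covering an nx x ny x nz block of cubes.

        Nodes are the Cartesian grid points plus the cell centers (the second,
        interlaced grid).  Each interior face of the grid yields four tetrahedra:
        the two cell centers straddling the face together with each of its four
        edges."""
        nodes, index = [], {}

        def node(pos):
            # Deduplicate nodes; every coordinate is a multiple of one half.
            key = tuple(int(round(2.0 * c)) for c in pos)
            if key not in index:
                index[key] = len(nodes)
                nodes.append(h * np.asarray(pos, dtype=float))
            return index[key]

        dims = (nx, ny, nz)
        tets = []
        for axis in range(3):
            u, v = (axis + 1) % 3, (axis + 2) % 3
            ranges = [range(dims[d] + 1) if d == axis else range(dims[d]) for d in range(3)]
            for i in ranges[0]:
                for j in ranges[1]:
                    for k in ranges[2]:
                        face = np.array([i, j, k], dtype=float)
                        if face[axis] == 0 or face[axis] == dims[axis]:
                            continue              # boundary face: only one adjacent cube
                        behind = face.copy(); behind[axis] -= 1.0
                        p = node(behind + 0.5)    # center of the cube behind the face
                        q = node(face + 0.5)      # center of the cube in front of the face
                        corners = []
                        for du, dv in [(0, 0), (1, 0), (1, 1), (0, 1)]:
                            c = face.copy(); c[u] += du; c[v] += dv
                            corners.append(node(c))
                        # one tetrahedron per edge of the face
                        for a, b in zip(corners, corners[1:] + corners[:1]):
                            tets.append((p, q, a, b))
        return np.array(nodes), tets

With h set to the coarse lattice spacing, this produces the uniform tiling from which the candidate mesh is later carved.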

Figure 3: A portion of the BCC lattice. The blue and the green connections depict the two interlaced grids, and the eight red connections at each node lace these two grids together.

There are a number of ways to improve the accuracy of a finite element simulation. R-refinement relocates existing mesh nodes to better locations, h-refinement locally refines the mesh to obtain smaller elements where needed, and p-refinement increases the polynomial order of the interpolating functions, see e.g. [Cheng et al. 1988]. A significant advantage of the BCC mesh is that it is easily refined initially or during the calculation. Each regular BCC tetrahedron can be refined into eight tetrahedra, shown in red in figure 4, with a one to eight (or 1:8) refinement. When the shortest of the three possible choices for the edge internal to the tetrahedron is taken, the newly formed tetrahedra are exactly the BCC tetrahedra that result from a mesh with cells one half the size. Thus, these eight new tetrahedra are geometrically similar to the tetrahedra of the parent mesh and element quality is guaranteed under this regular 1:8 refinement.

4 Red Green Refinement

Many applications do not require and cannot afford (due to computation time and memory restrictions) a uniformly high resolution mesh. For example, many graphical simulations can tolerate low accuracy in the unseen interior of a body, and many phenomena such as contact and fracture show highly concentrated stress patterns, often near high surface curvature, outside of which larger tetrahedra are acceptable. Thus, we require the ability to generate adaptive meshes.

As the BCC lattice is built from cubes, one natural approach to adaptivity is to build its analog based on an octree. We implemented this by adding body centers to the octree leaves, after ensuring the octree was graded with no adjacent cells differing by more than one level. The resulting BCC lattices at different scales were then patched together with special case tetrahedra. For more on octrees in mesh generation, see e.g. [Shephard and Georges 1991; Radovitzky and Ortiz 2000].

However, we found that red green refinement is more economical, simpler to implement, and more flexible, see e.g. [Bey 1995; Grosso et al. 1997; de Cougny and Shephard 1999]. The initial BCC lattice tetrahedra are labeled red, as are any of their eight children obtained with our 1:8 subdivision shown in red in figure 4. Performing a red refinement on a tetrahedron creates T-junctions at the newly-created edge midpoints where neighboring tetrahedra are not refined to the same level. To eliminate these, the red tetrahedra with T-junctions are irregularly refined into fewer than eight children by introducing some of the midpoints. These children are labeled green, and are of lower quality than the red tetrahedra that are part of the BCC mesh. Moreover, since they are not BCC tetrahedra, we never refine them. When higher resolution is desired in a region occupied by a green tetrahedron, the entire family of green tetrahedra is removed from its red parent, and the red parent is refined regularly to obtain eight red children that can undergo subsequent refinement.

Figure 4: The standard red refinement (depicted in red) produces eight children that reside on a BCC lattice that is one-half the size. Three green refinements are allowed (depicted in green).

A red tetrahedron that needs a green refinement can have between one and six midpoints on its edges (in the case of six we do a red refinement). We reduce the possibilities for green refinement to those shown in figure 4, adding extra edge midpoints if necessary. This restriction (where all triangles are either bisected or quadrisected) smooths the gradation further and guarantees higher quality green tetrahedra. While there can of course be a cascading effect as the extra midpoints may induce more red or green refinements, it is a small price to pay for the superior mesh quality and seems to be a minor issue in practice.

Any criteria may be used to drive refinement, and we experimented with the geometric rules described in the next section. A significant advantage of the red green framework is the possibility for refinement during simulation based on a posteriori error estimates, with superior quality guarantees based on the BCC lattice instead of an arbitrary initial mesh. Note that the lower quality green tetrahedra can be replaced by finer red tetrahedra which admit further refinement. However, one difficulty we foresee is in discarding portions of green families near the boundary (see section 6), since part of the red parent is missing. To further refine this tetrahedron, the green family has to be replaced with its red parent which can be regularly refined, then some of the red children need to be discarded and the others must be compressed to the boundary (see sections 7-8). A simpler but lower quality alternative is to arbitrarily relabel those green boundary tetrahedra that are missing siblings as red, allowing them to be directly refined. We plan to address this issue in future work.
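The regular subdivision and the choice of internal edge are simple enough to state in code. The sketch below is our own schematic rendering, assuming tetrahedra stored as 4-tuples of vertex indices and a midpoint cache shared between neighbors; it performs one red 1:8 refinement, keeping the four corner children and splitting the central octahedron about its shortest diagonal, which is what reproduces half-size BCC tetrahedra from a BCC parent.

    import numpy as np

    def red_refine(tet, verts, midpoints):
        """Regular 1:8 (red) refinement of one tetrahedron.

        tet       -- 4-tuple of vertex indices (a, b, c, d)
        verts     -- list of vertex positions (numpy arrays); midpoints are appended
        midpoints -- dict mapping a sorted index pair to its midpoint index, shared
                     across tetrahedra so neighbors reuse the same midpoint nodes
        Returns the eight child tetrahedra as index tuples."""
        def mid(i, j):
            key = (min(i, j), max(i, j))
            if key not in midpoints:
                midpoints[key] = len(verts)
                verts.append(0.5 * (verts[i] + verts[j]))
            return midpoints[key]

        a, b, c, d = tet
        # Four corner children, one per original vertex.
        children = [(a, mid(a, b), mid(a, c), mid(a, d)),
                    (b, mid(a, b), mid(b, c), mid(b, d)),
                    (c, mid(a, c), mid(b, c), mid(c, d)),
                    (d, mid(a, d), mid(b, d), mid(c, d))]

        # The remaining octahedron is split about the shortest of its three
        # diagonals, so that refining a BCC tetrahedron again yields BCC
        # tetrahedra at half the size.
        pairings = [((a, b), (c, d)), ((a, c), (b, d)), ((a, d), (b, c))]
        def diag_len(pairing):
            (i, j), (k, l) = pairing
            return np.linalg.norm(verts[mid(i, j)] - verts[mid(k, l)])
        (i, j), (k, l) = min(pairings, key=diag_len)
        p, q = mid(i, j), mid(k, l)
        ring = [mid(i, k), mid(k, j), mid(j, l), mid(l, i)]   # cycle around the diagonal
        children += [(p, q, r, s) for r, s in zip(ring, ring[1:] + ring[:1])]
        return children

The green refinements are not shown; they follow the patterns of figure 4 and, as described above, are discarded and replaced by a fresh red refinement of the parent whenever further subdivision is needed.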
5 Representing the Geometry

We represent the geometry with a signed distance function defined on either a uniform grid [Osher and Fedkiw 2002] or an octree grid [Frisken et al. 2000]. In the octree case, we constrain values of fine grid nodes at gradation boundaries to match the coarse grid interpolated values, see e.g. [Westermann et al. 1999]. When the signed distance function has a resolution much higher than that of our desired tetrahedral mesh, we apply motion by mean curvature to smooth the high frequency features and then reinitialize to a signed distance function, see e.g. [Osher and Fedkiw 2002] (and [Desbrun et al. 1999] where curvature motion was applied directly to a triangulated surface).

At any point in space, we calculate the distance from the implicitly defined surface as φ, which is negative inside and positive outside the surface. To obtain a finer mesh near the boundary, one simply refines tetrahedra that include portions of the interface where φ = 0. If a tetrahedron has nodes with positive values of φ and nodes with negative values of φ, it obviously contains the interface and can be refined. Otherwise, the tetrahedron is guaranteed not to intersect the interface if the minimum value of |φ| at a node is larger than the longest edge length (tighter estimates are available of course). The remaining cases are checked by sampling φ appropriately (at the level set grid size Δx), allowing refinement if any sample is close enough to the interface (|φ| < Δx). Figure 5 shows a sphere adaptively refined near its boundary. Note how the interior mesh can still be rather coarse.

Figure 5: Tetrahedral mesh of a sphere (18K elements). The cutaway view illustrates that the interior mesh can be fairly coarse even if high resolution is desired on the boundary.

The outward unit normal is defined as N = ∇φ and the mean curvature is defined as κ = ∇ · N. One may wish to adaptively refine in regions of high curvature, but the mean curvature is a poor measure of this since it is the average of the principal curvatures, (k1 + k2)/2, and can be small at saddle points where positive and negative curvatures cancel. Instead we use |k1| + |k2|. The principal curvatures are computed by forming the Hessian

    H = [ φxx  φxy  φxz ]
        [ φxy  φyy  φyz ]
        [ φxz  φyz  φzz ]

and projecting out the components in the normal direction via the projection matrix P = I - NN^T. Then the eigenvalues of PHP/|∇φ| are computed, the zero eigenvalue is discarded as corresponding to the eigenvector N, and the remaining two eigenvalues are k1 and k2. See e.g. [Ambrosio and Soner 1996]. To detect whether a tetrahedron contains regions of high curvature, we sample at a fine level and check the curvature at each sample point. Figure 6 shows a torus where the inner ring is refined to higher resolution even though the principal curvatures there differ in sign.

Figure 6: Tetrahedral mesh of a torus (8.5K elements). The principal curvatures were used to increase the level of resolution in the inner ring.
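As an illustration of the curvature test, the routine below evaluates |k1| + |k2| at a single grid point of a sampled level set. The central-difference stencils and the eigenvalue bookkeeping are our own choices; the projection PHP/|∇φ| follows the construction above.

    import numpy as np

    def principal_curvature_measure(phi, i, j, k, dx):
        """Return |k1| + |k2| of the isosurface of phi at grid point (i, j, k).

        phi is a 3D numpy array sampled on a grid of spacing dx.  The principal
        curvatures are the two nonzero eigenvalues of P H P / |grad phi|, where
        H is the Hessian of phi and P projects out the normal direction."""
        def d1(axis):
            lo = [i, j, k]; hi = [i, j, k]
            lo[axis] -= 1; hi[axis] += 1
            return (phi[tuple(hi)] - phi[tuple(lo)]) / (2 * dx)

        def d2(a, b):
            if a == b:
                lo = [i, j, k]; hi = [i, j, k]
                lo[a] -= 1; hi[a] += 1
                return (phi[tuple(hi)] - 2 * phi[i, j, k] + phi[tuple(lo)]) / dx**2
            pp = [i, j, k]; pm = [i, j, k]; mp = [i, j, k]; mm = [i, j, k]
            pp[a] += 1; pp[b] += 1; pm[a] += 1; pm[b] -= 1
            mp[a] -= 1; mp[b] += 1; mm[a] -= 1; mm[b] -= 1
            return (phi[tuple(pp)] - phi[tuple(pm)] - phi[tuple(mp)] + phi[tuple(mm)]) / (4 * dx**2)

        grad = np.array([d1(0), d1(1), d1(2)])
        norm = np.linalg.norm(grad)
        n = grad / norm                                   # outward unit normal N
        H = np.array([[d2(a, b) for b in range(3)] for a in range(3)])
        P = np.eye(3) - np.outer(n, n)                    # project out the normal direction
        eigvals = np.linalg.eigvalsh(P @ H @ P / norm)
        eigvals = np.delete(eigvals, np.argmin(np.abs(eigvals)))  # drop the zero eigenvalue
        return np.abs(eigvals).sum()                      # |k1| + |k2|

In practice the curvature is sampled at several points per tetrahedron, as described above.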
6 Constructing a Candidate Mesh

In the previous three sections we described the BCC lattice, the red green refinement strategy and the representation of the geometry of interest with a signed distance function. This section describes how we tie these ideas together to obtain a candidate tetrahedral mesh of our object. This candidate mesh will be compressed to the boundary using one of the methods described in the next two sections.

The first step is to cover an appropriately sized bounding box of the object with a coarse BCC mesh. Then we use a conservative discard process to remove tetrahedra that are guaranteed to lie completely outside of the zero isocontour: tetrahedra with four positive φ values all larger than the maximum edge length are removed.

In the next step, the remaining tetrahedra are refined according to any user defined criteria, such as a posteriori error estimates, indicator variables or geometric properties. We have experimented with using both the magnitude of φ and various measures of curvature as discussed in the previous section. Using simply the magnitude of φ produces large tetrahedra deep inside the object and a uniform level of refinement around the surface, which can be useful since objects interact with each other via surface tetrahedra. A more sophisticated method uses the surface principal curvatures, better resolving complex geometry and allowing for more robust and efficient simulation when subject to large deformation. We refine any tetrahedron near the interface if its maximum edge length is too large compared to a radius of curvature measure, 1/(|k1| + |k2|), indicating an inability to resolve the local geometry. We refine to a user-specified number of levels, resolving T-junctions in the red green framework as needed.

Figure 7: Tetrahedral mesh of a model Buddha (800K elements). The principal curvatures provide refinement criteria that readily allow the resolution of features on multiple scales.

From the adaptively refined lattice we will select a subset of tetrahedra that closely matches the object. However, there are specific topological requirements necessary to ensure a valid mesh that behaves well under deformation: the boundary must be a manifold; no tetrahedron may have all four nodes on the boundary; and no interior edge may connect two boundary nodes. Boundary forces can readily crush tetrahedra with all nodes on the boundary, or that are trapped between the boundary and an interior edge with both endpoints on the boundary. To satisfy the conditions, we select all the tetrahedra incident on a set of "enveloped" nodes sufficiently interior to the zero isocontour. This guarantees that every tetrahedron is incident on at least one interior node, and also tends to avoid the bad interior segments for reasonably convex regions, i.e. regions where the geometry is adequately resolved by the nodal samples. We specifically choose the set of nodes where φ < 0 that have all their incident edges at least 25% inside the zero isocontour as determined by linear interpolation of φ along the edge.

Additional processing is used to guarantee appropriate topology even in regions where the mesh may be under-resolved. Any remaining interior edges and all edges incident on non-manifold nodes are bisected, and the red green procedure is used to remove all T-junctions. If any refinement was necessary, we recalculate the set of enveloped nodes and their incident tetrahedra as above. As an option, we may add any boundary node with surface degree three to the set of enveloped nodes (if these nodes were to remain, the final surface mesh would typically contain angles over 120°). We also add any non-manifold node that remains and the deeper of the two boundary nodes connected by a bad interior edge. We check that these additions do not create more problems, continuing to add boundary nodes to the set of enveloped nodes until we have achieved all requirements. This quickly and effectively eliminates all topological problems, resulting in a mesh that approximates the object fairly closely (from the viewpoint of an initial guess for the compression phase of the algorithm) and that has connectivity well suited for large deformation simulations.
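The enveloped-node rule translates almost directly into code. The helper below is a minimal sketch under our own assumptions (tetrahedra as 4-tuples of vertex indices, φ sampled at the vertices and negative inside); it returns the tetrahedra incident on nodes whose incident edges are all at least 25% inside the zero isocontour by linear interpolation.

    def select_enveloped_tets(tets, phi, fraction=0.25):
        """Keep tetrahedra incident on 'enveloped' nodes.

        tets     -- iterable of 4-tuples of vertex indices
        phi      -- signed distance sampled at the vertices (negative inside)
        fraction -- required inside fraction of every incident edge (0.25 above)
        Returns (kept_tets, enveloped_nodes)."""
        # Collect the neighbors of each node along tetrahedron edges.
        incident = {}
        for t in tets:
            for a in t:
                for b in t:
                    if a != b:
                        incident.setdefault(a, set()).add(b)

        def inside_fraction(a, b):
            # Fraction of edge (a -> b) lying inside the zero isocontour,
            # by linear interpolation of phi along the edge.
            if phi[b] < 0:
                return 1.0
            return phi[a] / (phi[a] - phi[b])   # phi[a] < 0 <= phi[b]

        enveloped = {a for a, nbrs in incident.items()
                     if phi[a] < 0 and all(inside_fraction(a, b) >= fraction for b in nbrs)}

        kept = [t for t in tets if any(v in enveloped for v in t)]
        return kept, enveloped

The additional topological clean-up described above (bisecting remaining interior edges and edges at non-manifold nodes, then recomputing the enveloped set) is omitted from this sketch.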
7 Physics Based Compression

We outfit our candidate mesh with a deformable model based on either masses and springs or the finite element method, and subsequently compress the boundary nodes to conform to the zero isocontour of the signed distance function. The compression is driven using either a force or velocity boundary condition on the surface nodes. Applying forces is more robust as it allows the interior mesh to push back, resisting excessive compression while it seeks an optimal state. If the internal resistance of the mesh becomes larger than the boundary forces, the boundary will not be matched exactly. Instead of adjusting forces, we switch from force to velocity boundary conditions after an initial stage that carries out most of the needed compression. At each boundary vertex, we choose the direction of the force or constrained velocity component as the average of the incident triangles' normals. No force (or velocity constraint) is applied in other directions so the mesh is free to adjust itself tangentially. The magnitude of the force or velocity constraint is proportional to the signed distance from the level set boundary.

For time integration we use the central difference scheme advocated in [Bridson et al. 2002] that treats the nonlinear elastic forces explicitly and the damping forces implicitly. This allows larger time steps to be chosen based only on the elastic (and not the damping) forces. Moreover, since all our damping forces are linear and symmetric negative semi-definite, we can use a conjugate gradient solver for the implicit step as in [Baraff and Witkin 1998]. We also limit the time step so that no tetrahedron altitude can deform by more than 10% during the time step, as motivated by [Baraff and Witkin 1998; Bridson et al. 2002]. In addition, we use the velocity modification procedure discussed in [Bridson et al. 2002] to artificially limit the maximum strain of a tetrahedral altitude to 50% for both compression and expansion, and to artificially limit the strain rate of a tetrahedral altitude to 10% per time step. Since altitudes do not connect two mesh nodes together, all of these operations are carried out by constructing a virtual node at the intersection point between an altitude and the plane containing the base triangle. The velocity of this point is calculated using the barycentric coordinates and velocities of the triangle, and the mass is the sum of the triangle's nodal masses. The resulting impulses on this virtual node are equally redistributed to the triangle nodes, conserving momentum.

7.1 Mass Spring Models

The use of springs to aid in mesh generation dates back at least to [Gnoffo 1982]. [Bossen and Heckbert 1996] points out that internodal forces that both attract and repel (like springs with nonzero rest lengths) are superior to Laplacian smoothing where the nodes only attract each other. Thus, we use nonzero rest lengths in our springs, i.e. simulating the mesh as if it were a real material. All edges are assigned linear springs obeying Hooke's law, and the nodal masses are calculated by summing one quarter of the mass of each incident tetrahedron.

Edge springs are not sufficient to prevent element collapse. As a tetrahedron gets flatter, the edge springs provide even less resistance to collapse. Various methods to prevent this have been introduced, e.g. [Palmerio 1994] proposed a pseudo-pressure term, and [Bourguignon and Cani 2000] used an elastic (only, i.e. no damping) force emanating from the barycenter of the tetrahedron. [Cooper and Maddock 1997] showed that these barycentric springs do not prevent collapse as effectively as altitude springs. In this paper, we propose a novel extension of their model for altitude springs by including damping forces which are linear and symmetric negative semi-definite in the nodal velocities. This allows the damping terms to be integrated using a fast conjugate gradient solver for implicit integration, which is important for efficiency especially if stiff altitude springs are used to prevent collapse. Every tetrahedron has four altitude springs, each attaching a node to its opposite face.
Consider a particular node labeled 1 with nodes 2, 3, 4 on the opposite face with outward normal n̂. If the current perpendicular distance is l and the rest length is l0, then we add the elastic force F1e = k[(l - l0)/l0]n̂ to node 1 and subtract one third of that from each of the opposite face nodes. If the nodal velocities are v1, ..., v4, we add the damping forces F1d = f12 + f13 + f14, F2d = -f12, F3d = -f13 and F4d = -f14, where f1i = kd((v1 - vi)·n̂)n̂ for i = 2, 3, 4. Multiplying the f1i by the barycentric weights of the triangle that determine the location of the virtual node at the base of the altitude spring gives f = kd((v1 - vB)·n̂)n̂, which is the usual damping force for a spring. Thus there is nothing special or unusual about the damping force from the viewpoint of the altitude spring. However, after calculating this force, it needs to be redistributed to the nodal locations at the vertices of the triangle. Our formulas do this so that the Jacobian matrix of damping forces with respect to the velocities is symmetric negative semi-definite.

Figure 8: Tetrahedral mesh of a model dragon (500K elements). The final compression stage can be carried out with masses and springs, finite elements, or an optimization based method.

[Van Gelder 1998] proposed scaling the spring stiffness as the sum of the volumes of the incident tetrahedra divided by the length of the edge. We provide a similar, but much shorter argument for scaling of this nature. The frequency of a spring scales as sqrt(k/(m l0)) (note our "spring constant" is k/l0), so the sound speed scales as l0 sqrt(k/(m l0)) = sqrt(k l0/m). We simply require the sound speed to be a material property, implying that k must scale as m/l0. This result is consistent with [Van Gelder 1998], since the mass and the volume both scale identically with their ratio equal to the density. However, the mass is more straightforward to deal with since one can simply use the harmonic mass of a spring. For example, we set the spring constant for our altitude springs using the harmonic average of the nodal mass and the triangle mass.
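To make the altitude spring concrete, the routine below assembles the elastic and damping contributions of the single spring attaching node 0 to the face formed by nodes 1, 2 and 3. It is a sketch under our own conventions (the normal is oriented toward node 0, and both forces are redistributed to the face nodes by the barycentric weights of the altitude's foot point); it should be read as an illustration of the formulas above rather than the exact implementation.

    import numpy as np

    def altitude_spring_forces(x, v, rest_length, ke, kd):
        """Forces from the altitude spring attaching node 0 to face (1, 2, 3).

        x, v        -- (4, 3) arrays of nodal positions and velocities
        rest_length -- rest altitude l0
        ke, kd      -- elastic and damping coefficients
        Returns a (4, 3) array of forces whose rows sum to zero (momentum is conserved)."""
        # Unit normal of the opposite face, oriented toward node 0.
        n = np.cross(x[2] - x[1], x[3] - x[1])
        n /= np.linalg.norm(n)
        if np.dot(x[0] - x[1], n) < 0:
            n = -n

        l = np.dot(x[0] - x[1], n)              # current altitude (perpendicular distance)

        # Barycentric coordinates of the altitude's foot point in face (1, 2, 3).
        foot = x[0] - l * n
        A = np.column_stack((x[2] - x[1], x[3] - x[1], n))
        b2, b3, _ = np.linalg.solve(A, foot - x[1])
        w = np.array([1.0 - b2 - b3, b2, b3])   # weights of nodes 1, 2, 3

        f = np.zeros((4, 3))

        # Elastic force restoring the altitude toward its rest length; the reaction
        # is spread over the face nodes by the barycentric weights.
        fe = -ke * (l - rest_length) / rest_length * n
        f[0] += fe
        for i in range(3):
            f[i + 1] -= w[i] * fe

        # Damping opposes the relative normal velocity between node 0 and the
        # virtual node at the foot of the altitude, redistributed the same way.
        v_foot = w[0] * v[1] + w[1] * v[2] + w[2] * v[3]
        fd = -kd * np.dot(v[0] - v_foot, n) * n
        f[0] += fd
        for i in range(3):
            f[i + 1] -= w[i] * fd

        return f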
7.2 Finite Element Method

It is well known that finite element methods are more robust than their mass spring counterparts, and thus we propose using the finite element method for mesh generation (as well as simulation). While any number of constitutive models could be used, an interesting strategy is to use the real constitutive model of the material when generating its mesh. In this sense, one might hope to predict how well the mesh will react to subsequent deformation during simulation, and possibly work to ensure simulation robustness while constructing the mesh.

We use the nonlinear Green strain tensor, ε = 1/2[(∂x/∂u)^T(∂x/∂u) - I], where x(u) represents a point's position in world coordinates as a function of its coordinates in object space. Isotropic, linearly-elastic materials have a stress strain relationship of the form σe = λ tr(ε)I + 2µε, where λ and µ are the Lamé coefficients. Damping stress is modeled similarly with σd = α tr(ν)I + 2βν, where ν = ∂ε/∂t is the strain rate. The total stress tensor is then σ = σe + σd.

We use linear basis functions in each tetrahedron so that the displacement of material is a linear function of the tetrahedron's four nodes. From the nodal locations and velocities we obtain this linear mapping and its derivative and use them to compute the strain and the strain rate, which in turn are used to compute the stress tensor. Finally, because the stress tensor encodes the force distribution inside the material, we can use it to calculate the force on the nodes as in for example [O'Brien and Hodgins 1999; Debunne et al. 2001].

In their finite element simulation, [Picinbono et al. 2001] added a force in the same direction as our altitude springs. But since that force was the same on all nodes and based on the volume deviation from the rest state, it does not adversely penalize overly compressed directions and can even exacerbate the collapse. Moreover, they do not discuss damping forces. Therefore, we instead artificially damp the strain and strain rate of the altitudes of the tetrahedra as discussed above.
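The pieces above combine into a short per-element force routine. The following is a sketch of one standard way to do it (our own arrangement; the damping stress is omitted and the variable names are assumptions), using the linear map from object to world space, the Green strain, and the resulting stress to produce nodal forces.

    import numpy as np

    def linear_tet_forces(X, x, lam, mu):
        """Elastic nodal forces for one linear tetrahedron.

        X       -- (4, 3) rest (object space) positions
        x       -- (4, 3) deformed (world space) positions
        lam, mu -- Lame coefficients
        Returns a (4, 3) array of forces on the four nodes."""
        Dm = (X[1:] - X[0]).T                      # rest shape matrix (3x3)
        Ds = (x[1:] - x[0]).T                      # deformed shape matrix (3x3)
        Bm = np.linalg.inv(Dm)
        F = Ds @ Bm                                # deformation gradient dx/du
        volume = abs(np.linalg.det(Dm)) / 6.0      # rest volume of the tetrahedron

        E = 0.5 * (F.T @ F - np.eye(3))            # nonlinear Green strain
        S = lam * np.trace(E) * np.eye(3) + 2 * mu * E   # stress from the strain
        P = F @ S                                  # push the stress forward to world space

        # Nodal forces follow from differentiating the element's strain energy
        # with respect to the nodal positions; node 0 receives minus the sum.
        H = -volume * P @ Bm.T                     # forces on nodes 1, 2, 3 (as columns)
        forces = np.zeros((4, 3))
        forces[1:] = H.T
        forces[0] = -H.sum(axis=1)
        return forces

Damping forces follow the same pattern with the strain rate ν in place of ε.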
We used mass spring models, the finite element method and optimization based techniques on all of our available Figure 9: Our method is easily extended to create a triangle data in order to discern some trends. The results are comparable for mesh of a manifold with boundary. The peeled away regions all three compression techniques, with the FEM simulations tak- of the dragon were modeled with a second level set. ing slightly longer (ranging from a few minutes to a few hours on the largest meshes) than the mass spring methods, but producing a ming level set and compressed to the trimming boundary, making higher quality mesh. The fastest method, by far, is the optimization sure to stay on the surface. Note that quality two-dimensional sur- based compression which tends to be about 10 times faster. face meshes with boundary are important for cloth simulation, es- We track a number of quality measures including the maximum pecially when considering collisions. aspect ratio (defined as the tetrahedron’s maximum edge length di- vided by its minimum altitude), minimum dihedral angle, and max- imum dihedral angle during the compression phase. The aspect 10 Conclusions and Future Work ratios of our candidate mesh start at about 3.5 regardless of the de- gree of adaptivity, emphasizing the desirability of our combined red Meshes generated using this algorithm have been successfully used green adaptive BCC approach. This number comes from the green to simulate highly deformable volumetric objects with a variety of √ constitutive models, see [Anonymous us 2003a], as well as two di- tetrahedra whereas the red tetrahedra have aspect ratios of 2. In mensional shells [Anonymous us 2003b]. This is not surprising the more complicated models, the worst aspect ratio in the mesh because these meshes have withstood the fairly strenuous defor- tends to increase to around 6-7 for the physics based compression mation of compressing them to the zero isocontour, where every methods and to around 5 for the optimization based compression. surface node is subjected to external forces. Dihedral angles typically range from 15◦ to 150◦, although one can often do better, e.g. the optimization based technique obtains angles We considered discrete ways to improve the final mesh such as between 19◦ and 145◦ for the cranium. edge-swapping methods where an edge with N adjacent tetrahedra Of course, these results are dependent on the types and strengths is replaced with 2N − 4 tetrahedra, see for example [Freitag and of springs, the constitutive model used in the FEM, and the quality Ollivier-Gooch 1997; Joe 1995]. But in general, we avoid swap- measures used in the optimization based technique. It is easier to ping in order to preserve our red green structure. Moreover, swap- get good quality with the optimization technique since one simply ping tends to have a negligible effect on our mesh quality measures optimizes based on the desired measure, as opposed to the physics indicating that we have achieved near optimal connectivity with our based techniques where one has to choose parameters that indirectly red green adaptive BCC structure. lead to a quality mesh. However, we stress that the measure of mesh For simplification of tetrahedral meshes, edge collapse tech- quality is the measure of the worst element at any point of dynamic niques can be used, see e.g. [Staadt and Gross 1998; de Cougny simulation. It does little good to have a perfect mesh that collapses and Shephard 1999; Trotts et al. 1999; Cignoni et al. 
10 Conclusions and Future Work

Meshes generated using this algorithm have been successfully used to simulate highly deformable volumetric objects with a variety of constitutive models, see [Anonymous 2003a], as well as two dimensional shells [Anonymous 2003b]. This is not surprising because these meshes have withstood the fairly strenuous deformation of compressing them to the zero isocontour, where every surface node is subjected to external forces.

We considered discrete ways to improve the final mesh such as edge-swapping methods where an edge with N adjacent tetrahedra is replaced with 2N - 4 tetrahedra, see for example [Freitag and Ollivier-Gooch 1997; Joe 1995]. But in general, we avoid swapping in order to preserve our red green structure. Moreover, swapping tends to have a negligible effect on our mesh quality measures, indicating that we have achieved near optimal connectivity with our red green adaptive BCC structure.

For simplification of tetrahedral meshes, edge collapse techniques can be used, see e.g. [Staadt and Gross 1998; de Cougny and Shephard 1999; Trotts et al. 1999; Cignoni et al. 2000] (see also [Hoppe 1996]), and we plan to pursue combinations of edge collapse with level set projection techniques as future work.

Boundary features and internal layers (as in [Cutler et al. 2002]) may be matched in our algorithm with additional forces or constraints.

References

AMBROSIO, L., AND SONER, H. M. 1996. Level set approach to mean curvature flow in arbitrary codimension. Journal of Differential Geometry 43, 693–737.

ANONYMOUS. 2003a. Finite volume methods for the simulation of muscle tissue.

ANONYMOUS. 2003b. Simulation of detailed clothing with folds and wrinkles.

BARAFF, D., AND WITKIN, A. 1998. Large steps in cloth simulation. Computer Graphics (SIGGRAPH Proc.), 1–12.

BERTRAM, M., DUCHAINEAU, M., HAMANN, B., AND JOY, K. 2000. Bicubic subdivision-surface wavelets for large-scale isosurface representation and visualization. In Proceedings Visualization 2000, 389–396.

BEY, J. 1995. Tetrahedral grid refinement. Computing 55, 355–378.

BLOOMENTHAL, J., AND WYVILL, B. 1990. Interactive techniques for implicit modeling. Computer Graphics (1990 Symposium on Interactive 3D Graphics) 24, 2, 109–116.

BOSSEN, F. J., AND HECKBERT, P. S. 1996. A pliant method for anisotropic mesh generation. In 5th International Meshing Roundtable, 63–76.

BOURGUIGNON, D., AND CANI, M. P. 2000. Controlling anisotropy in mass-spring systems. In Eurographics, 113–123.

BRIDSON, R., FEDKIW, R., AND ANDERSON, J. 2002. Robust treatment of collisions, contact and friction for cloth animation. ACM Trans. Gr. (SIGGRAPH Proc.) 21, 594–603.

BURNS, G., AND GLAZER, A. M. 1990. Space Groups for Solid State Scientists, 2nd ed. Academic Press, Inc.

CHENG, J. H., FINNIGAN, P. M., HATHAWAY, A. F., KELA, A., AND SCHROEDER, W. J. 1988. Quadtree/octree meshing with adaptive analysis. Numerical Grid Generation in Computational Fluid Mechanics, 633–642.
CHENG, S.-W., DEY, T. K., EDELSBRUNNER, H., FACELLO, M. A., AND TENG, S.-H. 2000. Sliver exudation. Journal of the ACM 47, 5, 883–904.

CIGNONI, P., COSTANZA, C., MONTANI, C., ROCCHIN, C., AND SCOPIGNO, R. 2000. Simplification of tetrahedral meshes with accurate error evaluation. In Visualization.

COOPER, L., AND MADDOCK, S. 1997. Preventing collapse within mass-spring-damper models of deformable objects. In The Fifth International Conference in Central Europe on Computer Graphics and Visualization.

CROSSNO, P., AND ANGEL, E. 1997. Isosurface extraction using particle systems. In Proceedings Visualization 1997, 495–498.

CUTLER, B., DORSEY, J., MCMILLAN, L., MÜLLER, M., AND JAGNOW, R. 2002. A procedural approach to authoring solid models. ACM Trans. Gr. (SIGGRAPH Proc.) 21, 302–311.

DE COUGNY, H. L., AND SHEPHARD, M. S. 1999. Parallel refinement and coarsening of tetrahedral meshes. International Journal for Numerical Methods in Engineering 46, 1101–1125.

DE FIGUEIREDO, L. H., GOMES, J., TERZOPOULOS, D., AND VELHO, L. 1992. Physically-based methods for polygonization of implicit surfaces. In Proceedings of the conference on Graphics Interface, 250–257.

DEBUNNE, G., DESBRUN, M., CANI, M., AND BARR, A. H. 2001. Dynamic real-time deformations using space & time adaptive sampling. Computer Graphics (SIGGRAPH Proc.) 20.

DESBRUN, M., MEYER, M., SCHRÖDER, P., AND BARR, A. H. 1999. Implicit fairing of irregular meshes using diffusion and curvature flow. Computer Graphics (SIGGRAPH Proc.), 317–324.

FREITAG, L., AND OLLIVIER-GOOCH, C. 1997. Tetrahedral mesh improvement using swapping and smoothing. International Journal for Numerical Methods in Engineering 40, 3979–4002.

FRISKEN, S. F., PERRY, R. N., ROCKWOOD, A. P., AND JONES, T. R. 2000. Adaptively sampled distance fields: a general representation of shape for computer graphics. Computer Graphics (SIGGRAPH Proc.), 249–254.

GANOVELLI, F., CIGNONI, P., MONTANI, C., AND SCOPIGNO, R. 2000. A multiresolution model for soft objects supporting interactive cuts and lacerations. In Eurographics.

GARIMELLA, R., AND SHEPHARD, M. 1998. Boundary layer meshing for viscous flows in complex domains. In 7th International Meshing Roundtable, 107–118.

GLOTH, O., AND VILSMEIER, R. 2000. Level sets as input for hybrid mesh generation. In Proceedings of the 9th International Meshing Roundtable, 137–146.

GNOFFO, P. 1982. A vectorized, finite-volume, adaptive-grid algorithm for Navier-Stokes calculations. Numerical Grid Generation, 819–835.

GRINSPUN, E., KRYSL, P., AND SCHRÖDER, P. 2002. CHARMS: A simple framework for adaptive simulation. ACM Trans. Gr. (SIGGRAPH Proc.) 21, 281–290.

GROSSKOPF, S., AND NEUGEBAUER, P. J. 1998. Fitting geometrical deformable models to registered range images. In European Workshop on 3D Structure from Multiple Images of Large-Scale Environments (SMILE), 266–274.

GROSSO, R., LÜRIG, C., AND ERTL, T. 1997. The multilevel finite element method for adaptive mesh optimization and visualization of volume data. In Visualization, 387–394.

HIROTA, G., FISHER, S., STATE, A., LEE, C., AND FUCHS, H. 2001. An implicit finite element method for elastic solids in contact. In Computer Animation.

HOPPE, H., DEROSE, T., DUCHAMP, T., MCDONALD, J., AND STUETZLE, W. 1993. Mesh optimization. Computer Graphics (SIGGRAPH Proc.), 19–26.

HOPPE, H., DEROSE, T., DUCHAMP, T., HALSTEAD, M., JIN, H., MCDONALD, J., SCHWEITZER, J., AND STUETZLE, W. 1994. Piecewise smooth surface reconstruction. Computer Graphics (SIGGRAPH Proc.), 295–302.

HOPPE, H. 1996. Progressive meshes. Computer Graphics (SIGGRAPH Proc.) 30, 98–108.

HORMANN, K., LABSIK, U., MEISTER, M., AND GREINER, G. 2002. Hierarchical extraction of iso-surfaces with semi-regular meshes. In Symposium on Solid Modeling and Applications, 53–58.
JAMES, D. L., AND PAI, D. K. 2002. DyRT: Dynamic response textures for real time deformation simulation with graphics hardware. ACM Trans. Gr. (SIGGRAPH Proc.) 21.

JOE, B. 1995. Construction of three-dimensional improved-quality triangulations using local transformations. SIAM Journal of Scientific Computing 16, 6, 1292–1307.

KASS, M., WITKIN, A., AND TERZOPOULOS, D. 1987. Snakes: Active contour models. In International Journal of Computer Vision, 321–331.

KOBBELT, L. P., VORSATZ, J., LABSIK, U., AND SEIDEL, H.-P. 1999. A shrink wrapping approach to remeshing polygonal surfaces. In Eurographics, 119–130.

LI, X., TENG, S., AND ÜNGÖR, A. 1999. Biting spheres in 3D. In 8th International Meshing Roundtable, 85–95.

LOHNER, R., AND CEBRAL, J. 1999. Generation of non-isotropic unstructured grids via directional enrichment. In 2nd Symposium on Trends in Unstructured Mesh Generation.

MILLER, J., BREEN, D., LORENSEN, W., O'BARA, R., AND WOZNY, M. 1991. Geometrically deformed models: A method for extracting closed geometric models from volume data. Computer Graphics (SIGGRAPH Proc.), 217–226.

NEUGEBAUER, P., AND KLEIN, K. 1997. Adaptive triangulation of objects reconstructed from multiple range images. In IEEE Visualization.

O'BRIEN, J. F., AND HODGINS, J. K. 1999. Graphical modeling and animation of brittle fracture. Computer Graphics (SIGGRAPH Proc.), 137–146.

O'BRIEN, J. F., BARGTEIL, A. W., AND HODGINS, J. K. 2002. Graphical modeling of ductile fracture. ACM Trans. Gr. (SIGGRAPH Proc.) 21.

OHTAKE, Y., AND BELYAEV, A. G. 2002. Dual/primal mesh optimization for polygonized implicit surfaces. In Proceedings of the Seventh ACM Symposium on Solid Modeling and Applications, ACM Press, 171–178.

OSHER, S., AND FEDKIW, R. 2002. Level Set Methods and Dynamic Implicit Surfaces. Springer-Verlag, New York, NY.

PALMERIO, B. 1994. An attraction-repulsion mesh adaption model for flow solution on unstructured grids.

PICINBONO, G., DELINGETTE, H., AND AYACHE, N. 2001. Non-linear and anisotropic elastic soft tissue models for medical simulation. In IEEE International Conference on Robotics and Automation.

RADOVITZKY, R. A., AND ORTIZ, M. 2000. Tetrahedral mesh generation based on node insertion in crystal lattice arrangements and advancing-front Delaunay triangulation. Computer Methods in Applied Mechanics and Engineering 187, 543–569.

SADARJOEN, I. A., AND POST, F. H. 1997. Deformable surface techniques for field visualization. In Eurographics, 109–116.

SCHÖBERL, J. 1997. NETGEN - an advancing front 2D/3D mesh generator based on abstract rules. Computing and Visualization in Science 1, 41–52.

SHEPHARD, M. S., AND GEORGES, M. K. 1991. Automatic three-dimensional mesh generation by the finite octree technique. International Journal for Numerical Methods in Engineering 32, 709–739.

SHEWCHUK, J. 1998. Tetrahedral mesh generation by Delaunay refinement. In Proc. Fourteenth Annual Symposium on Computational Geometry, 86–95.

SHIMADA, K., AND GOSSARD, D. 1995. Bubble mesh: Automated triangular meshing of non-manifold geometry by sphere packing. In ACM Third Symposium on Solid Modeling and Applications, 409–419.
STAADT, O. G., AND GROSS, M. H. 1998. Progressive tetrahedralizations. In Visualization, 397–402.

SZELISKI, R., AND TONNESEN, D. 1992. Surface modeling with oriented particle systems. Computer Graphics (SIGGRAPH Proc.), 185–194.

TORCZON, V. 1997. On the convergence of pattern search algorithms. SIAM J. Opt. 7, 1, 1–25.

TROTTS, I. J., HAMANN, B., AND JOY, K. I. 1999. Simplification of tetrahedral meshes with error bounds. Visualization 5, 3, 224–237.

TURK, G. 1991. Generating textures on arbitrary surfaces using reaction-diffusion. Computer Graphics (SIGGRAPH Proc.) 25, 289–298.

TURK, G. 1992. Re-tiling polygonal surfaces. Computer Graphics (SIGGRAPH Proc.), 55–64.

VAN GELDER, A. 1998. Approximate simulation of elastic membranes by triangulated spring meshes. Journal of Graphics Tools 3, 2, 21–42.

VELHO, L., GOMES, J., AND TERZOPOULOS, D. 1997. Implicit manifolds, triangulations and dynamics. Journal of Neural, Parallel and Scientific Computations 15, 1–2, 103–120.

WESTERMANN, R., KOBBELT, L., AND ERTL, T. 1999. Real-time exploration of regular volume data by adaptive reconstruction of isosurfaces. The Visual Computer 15, 2, 100–111.

WITKIN, A., AND HECKBERT, P. 1994. Using particles to sample and control implicit surfaces. Computer Graphics (SIGGRAPH Proc.) 28, 269–277.

WOOD, Z., DESBRUN, M., SCHRÖDER, P., AND BREEN, D. 2000. Semi-regular mesh extraction from volumes. In Visualization 2000, 275–282.

YAMAKAWA, S., AND SHIMADA, K. 2000. High quality anisotropic tetrahedral mesh generation via packing ellipsoidal bubbles. In The 9th International Meshing Roundtable, 263–273.