FLUID MODELING WITH STOCHASTIC AND STRUCTURAL FEATURES

A dissertation submitted to Kent State University in partial fulfillment of the requirements for the degree of Doctor of Philosophy

by

Zhi Yuan

August 2013

Dissertation written by

Zhi Yuan

B.S., Huazhong University of Science and Technology, 2005

Ph.D., Kent State University, 2013

Approved by

Dr. Ye Zhao, Chair, Doctoral Dissertation Committee

Dr. Ruoming Jin, Members, Doctoral Dissertation Committee

Dr. Austin Melton

Dr. Xiaoyu Zheng

Dr. Robin Selinger

Accepted by

Dr. Javed Khan, Chair, Department of Computer Science

Dr. Raymond A. Craig, Dean, College of Arts and Sciences

TABLE OF CONTENTS

LIST OF FIGURES ...... vi
LIST OF TABLES ...... ix
Acknowledgements ...... x
Dedication ...... xi
1 Introduction ...... 1
1.1 Significance, Challenge and Objectives ...... 1
1.2 Methodology and Contribution ...... 3
1.3 Background ...... 5
1.3.1 Physically Based Fluid Simulation Methods ...... 5
1.3.2 Fluid Turbulence ...... 6
1.3.3 Fluid Control ...... 7
1.3.4 Fluid Compression ...... 8
2 Incorporating Fluctuation and Uncertainty in Particle-based Fluid Simulation ...... 10
2.1 Introduction ...... 10
2.2 Basic SPH Algorithm ...... 15
2.3 Stochastic Turbulence in SPH ...... 16
2.4 Turbulence Evolution ...... 17
2.4.1 Production ...... 17
2.4.2 Development ...... 19
2.4.3 Spreading ...... 19
2.5 Discussion ...... 20
2.6 Results ...... 21
3 Stochastic Modeling of Light-weight Floating Objects ...... 30
3.1 Introduction ...... 30
3.2 k−ε Turbulence Model ...... 33
3.3 Stochastic Object Motion ...... 35
3.4 Implementation and Results ...... 37
4 Pattern-based Smoke with Lagrangian Coherent Structure ...... 40
4.1 Introduction ...... 40
4.2 Flow Pattern ...... 44
4.2.1 Finite-Time Lyapunov Exponent (FTLE) ...... 46
4.2.2 Forward and Backward FTLE ...... 47
4.2.3 Lagrangian Coherent Structure (LCS) ...... 48
4.2.4 Thinning ...... 50
4.2.5 Implementation ...... 51
4.3 Pattern-driven Fluid Animation ...... 52
4.4 Results and Performance ...... 53
5 Ad Hoc Compression of Smoke Animation ...... 61
5.1 Introduction ...... 61
5.2 Compression Framework ...... 64
5.3 Inter-frame Compression with Bidirectional Advection ...... 66
5.3.1 Adaptive Velocity Simplification with FTLE ...... 68
5.3.2 Reconstruction from Motion Vectors ...... 70
5.3.3 Bidirectional Advection and Weight Map ...... 71
5.4 Intra-frame Compression ...... 74
5.5 Decompression ...... 77
5.6 Experiments and Performance ...... 77
5.7 Discussion ...... 85
6 Future Work ...... 87
6.1 Improving Turbulence Enhancement and Fluid Control ...... 87
6.2 Texture-based Fluid Appearance Enhancement ...... 88
6.2.1 Advected Texture based on Flow Patterns ...... 88
6.2.2 Optimization based Texture Synthesis using Flow Patterns ...... 90
6.3 Trajectory Analysis and Visualization ...... 91
7 Conclusion ...... 94
BIBLIOGRAPHY ...... 95

LIST OF FIGURES

1 Snapshots of 2D turbulence evolution ...... 21

2 Snapshots of 2D turbulence induced by an object. (a) Use object-introduced SIPs without turbulence spreading; (b) With medium turbulence spreading; (c) With strong turbulence spreading; (d) Use a large number of initial SIPs without turbulence spreading ...... 22

3 Snapshots of a moving object inside a water tank along different time steps (a)-(d). Top: Original SPH simulation; Bottom: With introduced turbulence ...... 25

4 Snapshots of a moving object inside a water tank in comparison with using vortex particles ...... 26

5 Snapshots of a water stream with obstacle-induced SIPs. (a) Original placid simulation; (b)-(c) With introduced turbulence at different steps ...... 27

6 Snapshots of water pouring into a tank. (a) Original simulation with smooth surface; (b)-(c) With added fluctuation at different steps ...... 28

7 Snapshots of water pouring into a larger tank ...... 29

8 Diagram of our algorithm ...... 32

9 Simulation snapshots of flying leaves past a house based on a pre-computed stationary velocity field ...... 38

10 Overview of our method ...... 43

11 Flow pattern with FTLE and LCS. (a) Red: upward velocity; Green: downward velocity; (b)(c) Red: high FTLE value; Blue: low FTLE value; (d)(e) LCS region from (c) ...... 45

12 Control domain with LCS thinning from Fig. 11(e) ...... 48

13 2D pattern-based fluid animation. (a) Low-resolution simulation result; (b) High-resolution simulation with regulation after 10 thinning passes; (c) High-resolution simulation with regulation after 20 thinning passes; (d) High-resolution simulation without regulation ...... 49

14 Pattern-based fluid animation on a moving ball simulation. (a) Low-resolution simulation result; (b) High-resolution simulation with regulation; (c) High-resolution simulation with regulation after 8 passes of thinning; (d) High-resolution simulation without regulation ...... 57

15 Pattern visualization of the Fig. 23 example. (a) 4 thinnings; (b) 8 thinnings; (c) 12 thinnings ...... 58

16 Pattern-based fluid animation on vortex particles. (a) Low-resolution simulation result; (b) Adding vortex particles with regulation (4 thinnings); (c) Adding vortex particles without regulation ...... 59

17 Pattern-based fluid animation on turbulence enhancement with wavelet noise. (a) Low-resolution simulation result; (b) Adding noise with regulation (8 thinnings); (c) Adding noise without regulation ...... 60

18 Smoke animation framework overview ...... 64

19 Illustration of the adaptive velocity simplification with FTLE. The domain is divided into nonuniform blocks based on FTLE. (a) 2D velocity field (256 × 256); (b) FTLE field; (c) Motion vectors over the blocks with the smallest block at 8 × 8 and the largest block at 64 × 64 ...... 67

20 Bidirectional advection for P-Frame estimation from two consecutive K-Frames. Red and purple arrow lines represent forward advection and backward advection, respectively ...... 71

21 Using different C_P, the number of P-Frames between two K-Frames, in compression. (a) One K-Frame with C_P = 5; (b) C_P = 20 compared with (a); (c) One middle P-Frame with C_P = 5; (d) C_P = 20 compared with (c) ...... 75

22 Snapshots of smoke animation created from decompressed density volumes (192 × 256 × 192) after compression. (a) Original data with no compression (clip size: 1.4GB); (b) Compression by our method (compressed clip size: 7.1MB); (c) Visualize the difference of (b) from (a) in red; (d) Compression by extending 2D video compression techniques to 3D volumes (compressed clip size: 8.4MB); (e) Visualize the difference of (d) from (a) in red. Here, (b)(c) and (d)(e) achieve a similar compression ratio around 200 compared to (a), but (d)(e) introduce excessive aliasing which is destructive in the video ...... 78

23 Using different C_P in compression with a 250 × 320 × 250 simulation ...... 79

24 Varied compression cases. (a) Using DCT quantization coefficient φ = 0.4; (b) Using a smaller rendering scattering coefficient σ_s ...... 83

25 Flow Patterns for Animal Trajectory ...... 93

LIST OF TABLES

1 Experiment parameters ...... 24

2 Performance Report (in seconds) ...... 55

3 Compression performance of several smoke animation clips. The clips are created in a short time period from different smoke simulations. The weight map quantization coefficient ω = 5 and the DCT quantization coefficient φ = 0.01 ...... 76

4 Quality Measurement of Table 3 ...... 81

5 Using different weight map quantization coefficients ω ...... 82

6 Using different DCT quantization coefficients φ ...... 84

7 Computing performance per step in milliseconds. MV: motion vector generation; u*: reconstruction of velocities from motion vectors; ω: weight map generation; Inv. DCT: inverse DCT transform ...... 85

Acknowledgements

First and foremost, I want to express my deepest gratitude to my advisor, Professor Ye Zhao, who leads me into the colorful world of computer graphics with his persistent vision, patience and encouragement. Thanks a lot, Dr. Zhao.

Special thanks to my committee, Dr. Ruoming Jin, Dr. Austin Melton, Dr. Xiaoyu Zheng and Dr. Robin Selinger. Their support and guidance have been highly beneficial to me. I also want to thank Dr. Cheng Chang Lu for the advice and discussion on some work in this thesis, and Dr. Paul A. Farrell for the support during my candidacy exam.

To all the friends in Graphics and Visualization Lab, I enjoyed every moment with you guys.

Most of my work is supported by the U.S. National Science Foundation under grant IIS-0916131.

To my lovely family

CHAPTER 1

Introduction

1.1 Significance, Challenge and Objectives

Physically-based fluid simulation has achieved great success in computer graphics, producing a variety of astounding appearances of splashing water, burning fire, rising smoke, etc. The appearance and behavior of these phenomena are governed by the Navier-Stokes (NS) equations, and many fluid solvers have been proposed to solve them correctly and efficiently. However, current NS solvers share a common problem: under limited computational resources, direct numerical simulation cannot model fluid with sufficient turbulence and detail, due to numerical dissipation and limited resolution. For computer graphics applications, turbulent and detailed results are critical for simulating natural phenomena, and they can be enhanced by re-injecting swirling energy and synthesizing subgrid detail. The naive solution is to apply these processes homogeneously over the whole fluid domain, which may create unrealistic results because stochastic behaviors vary at different flow locations.

In this thesis, we propose methods to model physically based, heterogeneously turbulent behaviors within the widely used Smoothed Particle Hydrodynamics (SPH) method, and to simulate the stochastic movement of light-weight objects.

Current fluid modeling technology still imposes great challenges on animators: the nonlinear flow dynamics is notoriously difficult to adjust to achieve a desired fluid path and shape, and the excessive computational cost hinders their effort to design special effects in an interactive way.

Therefore, animators direly need advanced fluid design tools which, ideally, provide a two-stage process: fast but low-quality design on low-cost simulation, and slow but high-quality final output on high-cost simulation. However, unlike image or geometry objects, the transition from experimental design results to the final outcome is not straightforward due to the inherent nonlinearity of fluids and numerical dissipation. Flow behavior changes greatly when the simulation resolution increases or turbulence enhancement is exploited. The resultant fluid dynamics might not be what the animator prefers and has tested, which will frustrate the animator considering his/her previous design efforts. Therefore, a mechanism must be provided to guarantee the similarity of the overall characteristics between high-quality and low-quality results, while preserving adequate detail and turbulence in the high-resolution simulation results.

Physically-based fluid simulation creates smoke animations which involve 3D, high-resolution, time-varying data sets. The large data size imposes challenges on storing and transmitting the animations, so good compression techniques are in high demand. Furthermore, small-scale details play an important role in realistic animations and should be well preserved in compression. This requirement cannot be satisfied by most existing techniques in video and scientific volume compression, since they typically compress by discarding high-frequency components. Therefore, our main goal is to compress the simulation results while retaining as much detail as possible.

1.2 Methodology and Contribution

Stochastic and structural features can not only characterize a flow, but also be very useful in directing flow-driven applications. We contribute to computer graphics and visualization by applying these features to real graphics topics, including fluid enhancement for the SPH method, modeling of light-weight objects' behavior, two-stage smoke design and smoke animation compression.

• We introduce stochastic turbulence to SPH-based flow simulation, which has not been developed before for the well-known Lagrangian scheme. We have designed the swirling probability and vorticity of SPH particles aimed at introducing stochastic fluctuation to the Lagrangian method. The probability distribution function (PDF) in the whole domain is represented by this attribute of all particles. The computation is very fast and fully incorporated inside the SPH framework. Furthermore, the turbulence is easily controlled to deliver different levels of turmoil effects with non-repeating behavior. It is ready to be employed in various fluid modeling applications.

• We propose a new modeling scheme which couples the k−ε model and a specified SDE to quickly simulate the stochastic behaviors of floating objects inside fluid flows. We propose a novel model for the movement of light-weight floating objects as a random process, which is resolved by a stochastic differential equation (SDE). First, a base flow of the main stream can be pre-generated by an animator's special design, or with an affordable simulation on-the-fly. Then, the random process introduces the necessary stochastic fluctuation to object trajectories with very fast computation by avoiding costly high-resolution simulations.

• We exploit the emerging fluid analysis techniques, FTLE and LCS, to promote easy fluid animation. After users design the fluid animation in low-cost, and hence fast and interactive, numerical experiments, the extracted flow patterns are integrated with high-quality simulations to provide a final animation output that is consistent with the design. FTLE measures the rate of separation of very close particles after a given time interval inside a fluid. A sequence of FTLE fields, which are evolving scalar volumes, is computed from the velocity fields created over time in a low-cost flow simulation. The FTLE ridges present geometries that divide the domain into coherent regions called LCS, which play the role of material boundaries in the material's transport inside the flow and record the major flow trends. Our method regulates a high-quality animation by enforcing its velocities on the LCS region to follow the pre-computed velocities. Therefore, the animation is guaranteed to follow the mainstream flow characteristics which have been designed and preserved on the extracted patterns. On the remaining regions, the high-quality fluid dynamics is allowed to develop liberally, leading to preferred realistic details.

• We develop an ad hoc compression algorithm for smoke animation. It enables easy storage and transmission of the large-scale, 3D and dynamic data sets from physically-based simulation. The velocity field of the smoke animation is adaptively simplified to generate motion vectors over non-uniform blocks, which are used to drive the bidirectional advection. The motion vector generation is conducted by utilizing specific flow features. Intra-frame compression is included through transform, quantization and encoding. Eventually, a good compression ratio is achieved while the smoke animation dynamics is well preserved. The compression is controllable for different scenarios.
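The FTLE computation underlying the pattern-extraction contribution above can be illustrated as follows. This is a minimal 2D sketch under invented assumptions (an analytic saddle flow, forward Euler particle tracing, arbitrary grid and time parameters), not the thesis implementation, which operates on 3D velocity volumes from a simulation.

```python
import numpy as np

def ftle_2d(velocity, x, y, T, dt):
    """Finite-Time Lyapunov Exponent on a 2D grid.

    velocity(px, py) -> (u, v); x, y are 1D grid coordinates;
    T is the signed integration time and dt the step size.
    Forward time (T > 0) highlights repelling structures,
    backward time (T < 0) attracting ones.
    """
    X, Y = np.meshgrid(x, y, indexing="ij")
    px, py = X.copy(), Y.copy()
    # Advect a particle seeded at every grid node to obtain the flow map.
    steps = int(abs(T) / dt)
    h = dt * np.sign(T)
    for _ in range(steps):
        u, v = velocity(px, py)
        px, py = px + h * u, py + h * v
    # Spatial gradient of the flow map via finite differences on the grid.
    dpx_dx, dpx_dy = np.gradient(px, x, y, edge_order=2)
    dpy_dx, dpy_dy = np.gradient(py, x, y, edge_order=2)
    # Largest eigenvalue of the Cauchy-Green deformation tensor F^T F.
    c11 = dpx_dx**2 + dpy_dx**2
    c12 = dpx_dx * dpx_dy + dpy_dx * dpy_dy
    c22 = dpx_dy**2 + dpy_dy**2
    lam_max = 0.5 * (c11 + c22 + np.sqrt((c11 - c22)**2 + 4 * c12**2))
    return np.log(np.maximum(lam_max, 1e-12)) / (2 * abs(T))

# A simple saddle flow u = x, v = -y: nearby particles separate along x
# at unit exponential rate, so the FTLE field is roughly 1 everywhere.
grid = np.linspace(-1.0, 1.0, 64)
f = ftle_2d(lambda px, py: (px, -py), grid, grid, T=2.0, dt=0.05)
```

Ridges of high FTLE values in such a field are where the LCS extraction and thinning described above would operate.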

1.3 Background

1.3.1 Physically Based Fluid Simulation Methods

Physically-based fluid simulation [1] has achieved great success in computer graphics with a variety of astounding appearances such as splashing water, burning fire, rising smoke, etc. The appearance and behavior of these phenomena can be simulated by solving the incompressible

Navier-Stokes (NS) equations:

∇ · u = 0,  (1)

∂u/∂t + (u · ∇)u = −∇P + ν∇²u + F.  (2)

Here, u is the velocity field, t is the time variable, P stands for the pressure, F is the external force and ν is the kinematic viscosity coefficient. The NS equations model the velocity field under given boundary and initial conditions. Eqn. (1) is called the incompressibility condition, which guarantees mass conservation. Eqn. (2) is the flow momentum equation under the influence of diffusion (viscous damping), pressure and external forces (e.g. gravity, buoyancy). The NS equations can be solved by methods in three categories: Eulerian methods, Lagrangian methods and hybrid methods. Eulerian methods track fluid quantities (e.g. velocities, densities, surface level set) at fixed points and measure their variation in time.

In order to get a stable solution, the time step is usually constrained to a small value by the Courant-Friedrichs-Lewy (CFL) condition. Fortunately, the semi-Lagrangian advection scheme proposed by [2] makes the Eulerian method unconditionally stable under any given time step, which greatly advances its practicality in interactive applications. However,

Eulerian methods' unavoidable interpolation operations introduce substantial numerical dissipation and energy loss, which tremendously decrease fluid detail and generate unrealistically smooth results. Many approaches have been proposed to alleviate this problem, including vorticity confinement [3], higher-order advection schemes (BFECC) [4], energy-preserving schemes [5], etc. Lagrangian methods model the fluid by particles carrying quantities. Because Lagrangian methods do not need to solve a large linear system and the per-particle computation is easy to parallelize, each simulation step can be very fast. However, the small time step restriction imposed by their conditionally stable nature slows down the overall speed, and they also have difficulty tracking very sharp surface detail. The Moving Particle Semi-Implicit method [6] and Smoothed Particle Hydrodynamics (SPH) [7] are two typical Lagrangian methods. Hybrid methods combine the Eulerian and Lagrangian methods, trying to unite the merits of both.

Lagrangian particles in Eulerian methods can be used to compensate for turbulence energy (e.g. Lagrangian vortex particles [8]), increase advection accuracy (e.g. Particle-in-Cell methods, Fluid-Implicit-Particle (FLIP) [9]) and achieve high-quality surface tracking (e.g. Particle Level Set (PLS) [10]). An Eulerian grid in a Lagrangian method can be used to add vortical detail [11].
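The semi-Lagrangian advection step discussed above can be sketched as follows. This is a minimal 2D illustration with simplifying assumptions (collocated grid, unit grid spacing, bilinear interpolation), not the cited solver's exact discretization.

```python
import numpy as np

def semi_lagrangian_advect(q, u, v, dt):
    """Advect scalar field q through velocity (u, v) on a collocated 2D grid.

    For each grid node, trace backward along the velocity to find where the
    material came from, then bilinearly interpolate q there. Unconditionally
    stable for any dt, at the price of interpolation smoothing, i.e. the
    numerical dissipation discussed in the text.
    """
    nx, ny = q.shape
    X, Y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    # Backtrace departure points (grid spacing assumed to be 1).
    bx = np.clip(X - dt * u, 0, nx - 1)
    by = np.clip(Y - dt * v, 0, ny - 1)
    x0 = np.floor(bx).astype(int)
    y0 = np.floor(by).astype(int)
    x1 = np.minimum(x0 + 1, nx - 1)
    y1 = np.minimum(y0 + 1, ny - 1)
    fx, fy = bx - x0, by - y0
    # Bilinear interpolation of q at the departure points.
    return ((1 - fx) * (1 - fy) * q[x0, y0] + fx * (1 - fy) * q[x1, y0]
            + (1 - fx) * fy * q[x0, y1] + fx * fy * q[x1, y1])

# A uniform rightward flow (u = 2) moves a smooth bump by u * dt cells.
n = 64
q = np.exp(-0.05 * ((np.arange(n)[:, None] - 20) ** 2
                    + (np.arange(n)[None, :] - 32) ** 2))
q1 = semi_lagrangian_advect(q, u=np.full((n, n), 2.0),
                            v=np.zeros((n, n)), dt=1.0)
```

The interpolation at the backtraced point is exactly where the dissipation enters: repeated blending smooths out small-scale detail, motivating the correction schemes [3–5] cited above.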

1.3.2 Fluid Turbulence

For enhancing the turbulence and detail of fluids, recently proposed graphical methods model the instantaneous velocity field u by a mean field U and a rapidly fluctuating component u′, where the latter is synthesized. For modeling u′, Stam [12] created a chaotic field in the frequency domain. The curl operation, following Bridson et al. [13], is used on Perlin noise [14, 15] and wavelet vector noise [16], or alternatively, vortex particles are randomly seeded at pre-computed artificial boundary layers [17]. u′ is then integrated with the simulation results U with respect to temporal consistency and the energy cascade. To handle this, small-scale u′ fields are deliberately managed with texture distortion detection [16], through an empirical rotation scalar field [15] or by special noise particles [14]. To handle free-stream turbulence, [18] further proposes a particle-based turbulence model which can capture directional vortices with regard to the energy transport model. Our previous work also uses random forcing and Langevin dynamics for turbulence enhancement [19, 20]. These methods introduce fluctuations by modeling turbulence energy and solving turbulence transport equations, mostly employing the turbulent viscosity hypothesis and statistical Kolmogorov cascade theory [21].
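The curl construction referenced above ([13]) can be sketched as follows. A toy sum of sinusoids stands in for Perlin or wavelet noise here, so only the key property, that the synthesized fluctuation field u′ is divergence-free, carries over to the real methods.

```python
import numpy as np

def potential(x, y):
    """Toy scalar potential standing in for Perlin/wavelet noise."""
    return np.sin(3.1 * x) * np.cos(2.7 * y) + 0.5 * np.sin(7.3 * x + 1.1 * y)

def curl_noise(x, y, eps=1e-4):
    """2D curl of a scalar potential: u' = (dpsi/dy, -dpsi/dx).

    Taking the curl guarantees div(u') = 0 analytically, which is why
    curl-noise methods can inject fluctuation without violating
    incompressibility.
    """
    dpsi_dx = (potential(x + eps, y) - potential(x - eps, y)) / (2 * eps)
    dpsi_dy = (potential(x, y + eps) - potential(x, y - eps)) / (2 * eps)
    return dpsi_dy, -dpsi_dx

# Numerically check that the divergence of u' vanishes on a small grid.
xs, ys = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32),
                     indexing="ij")
eps = 1e-4
u0, _ = curl_noise(xs + eps, ys)
u1, _ = curl_noise(xs - eps, ys)
_, v0 = curl_noise(xs, ys + eps)
_, v1 = curl_noise(xs, ys - eps)
div = (u0 - u1) / (2 * eps) + (v0 - v1) / (2 * eps)
```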

1.3.3 Fluid Control

Fluid control has been studied extensively in the literature. [22] first proposed embedding controllers to control the pressure and velocity of flows. Extra forcing is a direct mechanism to perform the task, which drives fluids towards a predefined guiding object or simple path [23–26]. An advected radial basis function carried by moving particles provides an editable medium to modify the flow [27]. A potential field distributed on a user-defined geometric shape can apply guiding forces [28]. Keyframe control and adjoint methods are used to drive the simulation through a sequence of keyframes with user-defined paths/shapes [29, 30]. Moreover, vortex filaments and rings are used to control and edit artistic smoke behavior [31, 32]. Forces created from optimization between current and ideal simulations can give favorable results but introduce overhead from complex numerical optimization [33, 34]. Nielsen et al. [33] propose an optimization that allows a high-resolution simulation to follow low-resolution velocity fields, enforced over the whole domain. They extend the original algorithm and use density erosion in the proportional control to provide efficient time-varying simulation and increase fluctuation [34].

1.3.4 Fluid Compression

Multimedia and Geometry Compression. Compression encodes data with less consumption of storage and transmission resources [35]. Multimedia compression has been widely studied with rapidly developing techniques, e.g., MPEG-4 [36] for video and audio and JPEG 2000 [37] for images. The compression algorithms have been extended for High Dynamic Range (HDR) visual data: higher precision image and video (e.g. [38]).

Geometry compression techniques are becoming increasingly important to make mesh storage and transmission through networks more feasible. For instance, many role-playing games experience bandwidth bottlenecks due in large part to the transmission of geometry data. Single-rate coded meshes and progressive meshes enable geometry data streaming [39, 40]. There is also research on compressing motion-captured data (e.g. [41, 42]). These techniques cannot be directly used for smoke animation compression due to the 3D volumetric and dynamic nature of the fluid data.

Volume and Vector Compression. In scientific visualization, very large volumetric data sets are compressed so that visualization can be performed within limited main memory or GPU memory capacity [43]. Volume compression mostly aims at keeping critical features for fast loading and visualization. Furthermore, most volume compression techniques are designed for static volumes. A few approaches for time-varying data are directly extended from video compression techniques [44, 45]. These techniques are not feasible for fast-evolving fluid animations, where the compression should help reproduce good visual results of smoke dynamics with high-frequency details. Smoke flow has inherent structural/physical properties which we exploit for a promising solution in this thesis.

Compression techniques for vector fields are also investigated for scientific flow visualization, in order to make visual analysis of flow data more efficient. Simple compression can be accomplished by locally iterative refinement using clustering and principal component analysis of vectors [46, 47]. Some methods are proposed with the goal of preserving topological features (e.g. [48, 49]). These techniques are not viable for delivering smoke phenomena, because most of them work for 2D steady vector fields and intend to keep topological structures that are important for flow field analysis.

Predictive Compression. MPEG-4 uses predictive compression that improves compression performance by exploiting temporal and spatial coherence. Motion-compensated prediction uses spatial displacement motion vectors to form a prediction of object movement between frames [50, 51]. Geometry compression methods are also studied for time-varying meshes, based on geometry images [52] and principal component analysis [53].

CHAPTER 2

Incorporating Fluctuation and Uncertainty in Particle-based Fluid Simulation

2.1 Introduction

Particle-based methods model fluid dynamics in a Lagrangian way and are widely explored due to their programming simplicity, continuous modeling scales and ease of handling pressure and boundaries, in comparison to the grid-based Eulerian solvers of the governing Navier-Stokes (NS) equations. Smoothed Particle Hydrodynamics (SPH), with its accurate physics model, has become the most popular particle-based NS solver, which nevertheless has to manage a large number of particles distributed over the whole simulation domain. The necessary number of particles, and hence the computational cost, increases dramatically when simulating large-scale turbulent fluids with abundant fluctuations. Furthermore, the simulated results manifest deterministic, repeating behavior across multiple executions. They are unable to model the inherent uncertainty associated with the stochastic nature of real fluid, described by Heraclitus (c. 540 - c. 475 BC): "You cannot step into the same river twice".

Fluid simulation should, ideally, provide non-repeating results even with the same initial and boundary configurations.

Combining a Lagrangian representation with Eulerian solvers, vortex particles have been introduced [8], as an auxiliary tool, to model unresolved rotational effects and to apply forces back to the grid. Our method similarly incorporates fluctuation through particles and follows the vorticity-velocity form of the NS equations, but in the SPH scheme without grids. The pure Lagrangian framework needs no artificial feedback to a grid and produces no extra computing primitives in addition to SPH particles. On the other hand, Park and Kim [54] model gaseous phenomena by distributing vortex particles over the whole grid and utilizing a pure Lagrangian simulation with the vorticity transport equation. The vorticity evolution is also used in our method. Their approach simulates incompressible gases successfully, but is not designed for liquids in particular. Here, we focus on incorporating turbulence into liquid simulation, where SPH is widely used. The SPH scheme uses particles for representing the medium as well as for simulation, so it does not need to distribute particles over the entire domain. It also frees us from special boundary treatments while introducing turbulence. In [17], vortex particles are employed in a grid-based simulation to introduce detailed turbulence around boundaries. They sample vortex particles physically on boundary layers for obstacle-induced turbulence. After seeding, the method solves energy transport equations to determine when the particles should increase or reduce their chaotic agitation, and correspondingly heuristic rules are developed for particle merging and splitting. It does not handle fluid streams without objects and typically requires time-consuming precomputation, while our method needs no special precomputation. Moreover, we model the stochastic fluid feature with the new concept of particle swirling probability and implement turbulence evolution with the spreading of the probability in a simple way. In contrast, the vortex particle methods do not consider the evolution of the turbulence probability distribution except in the production stage.

In the physics literature, there are very limited studies of SPH turbulence. Monaghan [55] developed a Lagrangian averaged model using the Holm alpha turbulence model in the manner of Large Eddy Simulation. Violeau [56] proposed to apply eddy viscosity and Langevin models in a simple 2D Poiseuille flow. However, these theoretical studies in lower dimensions are not practical for creating visually satisfying chaos in a graphical simulation with limited computational resources.

In this chapter, we develop a new method to incorporate fluctuation and uncertainty, critical physical features of fluid behavior, into SPH computation, in order to make the simulation results more realistic and thus enhance the mesh-free method for more extensive usage in graphical applications. This approach can be built upon various successful SPH fluid solvers while preserving their benefits. Moreover, it only marginally increases the computational overhead and algorithm complexity, serving graphical animators with efficiency and interactivity.

Fluid fluctuation from the mean flow is considered an effect of underlying stochastic agitation from subgrid-scale dynamics, which is not modeled within existing numerical solvers. This mechanism is demonstrated by the Reynolds-averaged NS equations, where the agitation is modeled as an additional, however physically unknown, Reynolds stress tensor added to the typical NS equations [57]. In computer graphics, the effects were added to direct numerical simulations by post-linked noise [12, 14–16], confinement [3] or spectral [19] forcing, or through rotational forces from artificially injected vortex particles [8, 17]. These methods are based on Eulerian approaches with the affiliated pros and cons, and a satisfactory approach for coupling controllable turbulence to particle-based methods has not been well developed. Even the vortex particle method still requires a grid spanning the whole domain, integrated with its synthetic particle evolution and force feedback based on an arbitrary kernel. In comparison, SPH, a pure Lagrangian method, is more appropriate for coupling vorticity forces in the same way as its pressure and viscous forces with physically designed kernels. There is no need to define back-coupling from particles to grid sites. Our method includes fluctuation through swirling stimulation carried by the SPH particles themselves. Furthermore, we design a statistical agitation scheme to model the inherent randomness featuring turbulence evolution so that the simulation is both spatially anomalous and temporally unique.

Our main contributions are (1) incorporating turbulent fluctuation to SPH fluid simulation, which, to the best of our knowledge, has not been addressed before in computer graphics;

(2) implementing non-repeating flow behavior through stochastic models, which is convenient for animators to create different dynamics without changing the configuration; (3) achieving fast computation with minimal extra cost. In details, we enhance SPH fluid simulation by the algorithms of: 14

• Swirling probability definition: High swirling probability is given to particles in areas prone to turbulence production, defined by, e.g., high flow strain or vicinity to boundaries. These particles also receive a swirling vorticity to model the additional swirling power, which is different from the vorticity of the SPH flow. The attribute can be, for example, calculated according to the boundary layer effect;

• Turbulence development: The swirling vorticity evolves obeying the physical equation of vorticity evolution along the flow;

• Fluctuation spreading: Each particle transfers its swirling probability and swirling vorticity to neighbors, which uniquely simulates the process of turbulence propagation. The diffusive transfer also naturally models the temporal decay of turbulence energy. Such significant turbulence evolution is pleasantly implemented thanks to the pure particle-based framework, which was ignored [17] or handled through complicated heuristic merging and splitting [8] in the hybrid particle-grid approach;

• Swirling initiation: Each particle's probability is randomly sampled through a Monte-Carlo process to decide whether it actively acts as a SIP, which is critical for uncertainty incorporation;

• Swirling execution: Each active SIP imposes rotational forces on its neighbors, with minimal extra computing in addition to the conventional pressure and viscous forces.

In summary, our method models turbulence production, advection, evolution and extinction naturally. In particular, it features an intrinsic implementation of turbulent energy aggregation (i.e., small vortices merge into large ones) and cascade (i.e., a large vortex splits into smaller ones), and the introduced randomness models the stochastic nature of fluids. It also provides animators with convenient control of fluctuation levels to enhance SPH simulations of realistic fluids.

2.2 Basic SPH Algorithm

Fluids are represented by a set of SPH particles playing the same role as material particles in physics and also serving as interpolation centers for computing fluid attributes. At a given moment, a particle p_i with material mass m_i is centered at a location l_i. Usually, we can assume each particle has the same mass, m_i = m. To model the continuum with the Lagrangian discretization, a continuous quantity A at location l is approximated by a weighted summation over neighboring particles p_i:

A(l) = Σ_i A(l_i) W(l − l_i, h),   (3)

where A(l_i) is the quantity carried by p_i and W() is the interpolation kernel with smoothing radius h. For example, the density ρ_i can be computed as ρ_i = Σ_j m W(l_i − l_j, h). From the equation of state, the pressure is obtained as

P_i = k(ρ_i − ρ_0),   (4)

where k is a gas constant and ρ_0 is the rest density.

p_i moves according to fluid dynamics by applying the pressure and viscosity forces computed from its neighboring particles p_j:

F_i^pressure = −(m_i/ρ_i) Σ_j (m_j/ρ_j) ((P_i + P_j)/2) ∇W(l_j − l_i, h),   (5)

F_i^viscosity = μ (m_i/ρ_i) Σ_j m_j ((u_j − u_i)/ρ_j) ∇²W(l_j − l_i, h),   (6)

where μ is the fluid viscosity. The derivatives of W() are easily pre-computed to accelerate the simulation. In SPH, the choice of different W() functions for different operations is critical. We refer the readers to Müller et al. [58] for details.
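As an illustrative sketch only (the helper names are hypothetical, and the poly6 kernel shown is one common choice from the SPH literature, not necessarily the kernel used in this dissertation), the density summation and equation of state of Eqns. 3-4 can be written as:

```python
import math

def poly6(r, h):
    """Poly6 smoothing kernel: 315/(64 pi h^9) (h^2 - r^2)^3 for r < h."""
    if r >= h:
        return 0.0
    return 315.0 / (64.0 * math.pi * h**9) * (h * h - r * r) ** 3

def density(i, positions, m, h):
    """Eqn. 3 specialized to density: rho_i = sum_j m * W(l_i - l_j, h)."""
    xi = positions[i]
    rho = 0.0
    for xj in positions:
        rho += m * poly6(math.dist(xi, xj), h)
    return rho

def pressure(rho_i, k=1000.0, rho0=1000.0):
    """Equation of state (Eqn. 4): P_i = k * (rho_i - rho_0).
    The values of k and rho0 here are placeholders."""
    return k * (rho_i - rho0)
```

In a full solver the kernel and its derivatives would be tabulated once and reused for the force summations of Eqns. 5-6.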

2.3 Stochastic Turbulence in SPH

When one wants to create agitated and restless flow behavior with a limited computing configuration (e.g., grid size, particle count), an effective, efficient and controllable turbulence integration module is required, and it should be easily coupled with the NS solver. A critical physical feature of turbulence is its stochastic oscillation, which is a challenging effect to produce.

To rise to that challenge, we add a new attribute, the swirling probability ζ_i ∈ [0, 1], to each SPH particle p_i. This attribute over all SPH particles collectively defines the probability density function of the turbulence over the whole fluid domain. Then at each step, we follow a simple approach on each particle to trigger turbulent behavior: given a randomly generated number ξ ∈ [0, 1], if ξ < ζ_i, p_i starts to stimulate turbulence as a Swirling Incentive Particle (SIP). Consequently, more particles act as SIPs in areas with high turbulence probability. This selection process works as a Monte-Carlo approach to sample SIPs representing the turbulence distribution function of the whole domain.
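The Monte-Carlo selection is a one-line test per particle. The following sketch (hypothetical helper name; the injectable random source is only for testability) illustrates it:

```python
import random

def select_sips(zeta, rng=random.random):
    """Monte-Carlo SIP sampling: particle i becomes a Swirling Incentive
    Particle when a uniform random number xi in [0, 1] falls below its
    swirling probability zeta_i (a list of floats in [0, 1])."""
    return [i for i, z in enumerate(zeta) if rng() < z]
```

Particles with ζ_i = 0 are never selected, while ζ_i = 1 guarantees selection, so the sampled SIP set follows the probability distribution carried by the particles.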

In particular, a SIP p_i applies an additional force, F^turb_{i→j}, to each of its neighbors p_j inside the kernel radius. Each particle accommodates a swirling vorticity, ω^t_i, modeling its swirling power. The rotational force is computed as

F^turb_{i→j} = c((l_j − l_i) × ω^t_i),   (7)

where c is a control constant. Note that each SPH particle has the capability, modeled by ω^t_i, to revolve its neighbors, but only some of them (SIPs) exert their effects, obeying the evolving turbulence in the flow. Using SIPs has several features:

• Besides turbulent swirling, a SIP works as a typical SPH particle for all SPH computation.

• A SIP does not spin other SIPs in its neighborhood, which avoids unnatural continuous mutual rotation.

• A reciprocal force from p_j to p_i is not necessary. Since F^turb_{i→j} models the unresolved chaotic motion, this external force is not a mutual force between particles.

• SIPs develop dynamically. A SIP loses its swirling ability once its ω^t_i magnitude decreases below a threshold, and it may become a SIP again if the random sampling check of ζ_i allows it.
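Eqn. 7 is a single cross product per affected neighbor. A minimal sketch (hypothetical function names) is:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def swirl_force(li, lj, omega_i, c):
    """Eqn. 7: rotational force a SIP at l_i applies to a neighbor at l_j,
    F = c * ((l_j - l_i) x omega_t_i)."""
    d = (lj[0] - li[0], lj[1] - li[1], lj[2] - li[2])
    f = cross(d, omega_i)
    return (c * f[0], c * f[1], c * f[2])
```

The force is tangential to the offset vector, so neighbors are pushed around the SIP rather than toward or away from it, which is what produces the local swirl.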

With the SIP scheme, we enforce turbulent motion for particular SPH particles by F^turb, together with the ordinary pressure, viscosity and body forces (e.g., gravity, surface tension), to achieve the final turbulent fluid behavior. Next, we discuss the turbulence evolution obtained by varying ζ_i and ω^t_i.

2.4 Turbulence Evolution

2.4.1 Production

Within our scheme, classic SPH particles can be viewed as holding a zero turbulence probability, ζ_i = 0 and ω^t_i = 0. To introduce turbulence, a particle p_i receives a large ζ_i value, modeling the turbulence production.

Boundary objects are obvious chaos generators. We detect the collision of p_i with objects, a necessary operation in the SPH algorithm. When that happens, p_i can be assumed to lie in a very thin boundary layer around the object. Physically, objects distort and perturb the surrounding flow inside the layer, making the flow unstable and turbulent. Studying and mathematically modeling the boundary layer effects is still an active research topic in physics [57]. In graphics, this effect was modeled by pre-computing an artificial boundary layer considering the tangential flow profile and layer separation [17]. This computation requires extra effort, and since the layer is physically very small, the visual difference might not always be significant compared to directly using a random definition of the turbulence probability.

To introduce as little extra work as possible, we instead randomly select a percentage of the collided particles as initial SIPs and give them a large ζ_i to approximate the boundary layer effect. This percentage and the initial ζ_i are chosen to control the level of turbulence to be created. Meanwhile, an initial swirling vorticity is computed for p_i as

ω^t_i = U_i × N,   (8)

where U_i is the flow velocity along the tangent direction and N is the surface normal at the collision point.
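The production step above can be sketched as follows. All names are hypothetical, and the injectable random source is only for testability; a real implementation would run this inside the collision-handling pass:

```python
import random

def produce_sips(collided, velocities, normals, ratio, zeta_init=0.8,
                 rng=random.random):
    """Boundary-layer turbulence production sketch: a fraction `ratio` of
    colliding particles is seeded as initial SIPs with a large swirling
    probability zeta_init and an initial swirling vorticity
    omega_t = U_i x N (Eqn. 8)."""
    sips = {}
    for i in collided:
        if rng() < ratio:
            u, n = velocities[i], normals[i]
            omega = (u[1] * n[2] - u[2] * n[1],
                     u[2] * n[0] - u[0] * n[2],
                     u[0] * n[1] - u[1] * n[0])
            sips[i] = (zeta_init, omega)
    return sips
```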

Besides the boundary-induced turbulence, an animator can also initialize SIPs with appropriate ζ_i and ω^t_i in different regions, such as (1) at a fluid inlet; (2) in the visible (unblocked) simulation region; (3) in areas of user interest; or (4) at other critical locations (e.g., with a strong stress tensor, high pressure, etc.). In a word, the production is dynamically defined in physical and/or heuristic ways according to the application, which provides flexible control.

2.4.2 Development

The swirling vorticity should evolve in the fluid following the vorticity form of the NS equation (used in many graphics works, e.g., [8, 17, 54]):

∂ω^t/∂t = (ω^t · ∇)u − (u · ∇)ω^t + ν∇²ω^t.   (9)

Here u is the instantaneous flow velocity. Our method does not introduce extra data primitives (e.g., vortex particles in Eulerian solvers). A particle is updated with SPH computation and streams with the flow, whereby the vorticity advection, (u · ∇)ω^t, is inherently implemented. Meanwhile, we implement vorticity stretching by modifying the vorticity direction with δt(ω^t · ∇)u. This is easy to achieve in the SPH scheme, where the derivatives of the SPH kernel are pre-defined and used for gradient computation. The stretching does not introduce much instability due to the diffusion in SPH liquid simulation. The diffusion term is discussed in the next section.
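The explicit stretching update δt(ω^t · ∇)u can be sketched in a few lines. Here grad_u stands for the velocity gradient, which in the actual method would be estimated from the SPH kernel derivatives; the function name is hypothetical:

```python
def stretch(omega, grad_u, dt):
    """One explicit stretching step of Eqn. 9: omega += dt * (omega . grad) u.
    grad_u[k][j] holds du_k/dx_j, assumed already estimated from the SPH
    kernel derivatives."""
    return tuple(
        omega[k] + dt * sum(grad_u[k][j] * omega[j] for j in range(3))
        for k in range(3))
```

The advection term of Eqn. 9 needs no such step, since the vorticity is carried by the moving particle itself.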

2.4.3 Spreading

A significant evolution is the turbulence spreading from a particle to its neighbors. The swirling vorticity diffuses to neighbors as described by the last term of Eqn. 9. Meanwhile, the swirling probability should also propagate to neighbors. A high-probability particle tends to increase the swirling probability of its neighbors. This models the physical behavior that if a turbulent vortex forms, its rotation will also stimulate motion of the adjacent fluid. The diffusion also models the decay of strong turbulence. In the SPH scheme, the diffusion of ζ and ω^t is solved by

∂ω^t_i/∂t = ν (m_i/ρ_i) Σ_j m_j ((ω^t_j − ω^t_i)/ρ_j) ∇²W(l_j − l_i, h),   (10)

∂ζ_i/∂t = ν (m_i/ρ_i) Σ_j m_j ((ζ_j − ζ_i)/ρ_j) ∇²W(l_j − l_i, h).   (11)

Here ν is an important control parameter used to define the diffusivity of the swirling probability and vorticity. We use the same kernel as in the computation of the viscosity force (Eqn. 6). For an animator, ν is also a convenient control for the turbulence intensity. This stimulation can be realized simply as an isotropic dispersion process, since the nonlinearity has been implicitly incorporated through the advection and stretching.

2.5 Discussion

Our method requires minimal extra memory (a float scalar for the probability and a float 2D/3D vector for the vorticity) and computational effort (probability and vorticity diffusion and stretching) beyond an SPH solver. It needs only a few extra lines of code, since we utilize the existing SPH kernels, derivatives and force implementation. Nevertheless, it successfully achieves realistic turbulent behaviors. Multiple small turbulent vortices tend to form larger ones through the coalesced forcing effect of SIPs in close proximity. One large turbulent vortex will automatically decay into smaller ones, because the scattered ζ tends to make its neighbors become new rotational centers with smaller vorticities from the dispersed ω^t. This behavior cannot be created by simply increasing the initial SIP count at production without the probability and vorticity spreading. Since the spreading creates SIPs in a local region having similar rotational directions, it leads to the merging. In contrast, with only initial SIPs, many SIPs flowing together can be created at different locations, which may not have similar vorticity directions and may result in a joint effect of chaotic cancellation.

[Figure 1: Snapshots of 2D turbulence evolution. (a) SIPs seeded; (b) Large vortex formed; (c) Vortex split; (d) Turbulence decayed.]

On the other hand, the sampling (seeding) location, the number of SIP particles, and other parameters are manually controlled to produce different turbulence effects. The management of these factors may potentially introduce visible artifacts in the results, and requires careful adjustment, similar to many turbulence enhancement methods.

2.6 Results

2D Examples. We first show the procedure of vortex evolution through a 2D animation example. Several snapshots (zoomed) of the animation are displayed in Fig. 1. At the beginning, three SIPs are seeded with given large values of ω^t_i and ζ_i in a fluid flowing from left to right (Fig. 1a). A color spectrum is used to dye the particles with the evolving swirling vorticity ω^t_i: red to yellow to blue illustrates the magnitude of ω^t_i changing from large to small. The color distribution in the fluid domain also indicates the probability distribution of turbulence. In Fig. 1b, the vortices merge together, where many neighboring particles turn into SIPs with an increased ζ_i transferred from the seeds. These particles move following the fluid flow and perform jointly to stimulate a complex large vortex. The vortex further evolves and splits up inside the flow field (Fig. 1c). Finally, in Fig. 1d, the turbulence decays to smaller variations, since ω^t_i and ζ_i eventually decrease to small values among these particles. In this example, we intentionally use a large ν, which generates a large number of SIPs, in order to clearly describe the evolution process in the large-scale turmoil.

[Figure 2: Snapshots of 2D turbulence induced by an object. (a) Object-introduced SIPs without turbulence spreading; (b) With medium turbulence spreading; (c) With strong turbulence spreading; (d) A large number of initial SIPs without turbulence spreading.]

In Fig. 2, we perform 2D animations with an internal obstacle in the fluid. Initial ω^t_i and ζ_i values are assigned to particles around the obstacle, as described in Sec. 2.4.1. Thus, obstacle-induced turbulence is triggered in the flow. Fig. 2a is one zoomed snapshot of a simulation without turbulence spreading, that is, without ω^t_i and ζ_i diffusion. We assign 5% of the particles inside the boundary layer as initial SIPs. In this case, only the SIPs created at the production stage perform swirling incentive roles; no new SIPs are generated. These SIPs stream in the fluid and introduce many individual vortices flowing in the fluid. In contrast, Fig. 2b illustrates the effects after enabling the turbulence spreading. Stronger rotational behavior is achieved, which correctly approximates dynamics like the vortex street. Furthermore, we enhance the turbulence in Fig. 2c by further increasing the turbulence diffusivity ν. More importantly, even with a large number of initial SIPs, the small vortices can hardly perform together to model the large-scale vortex merging and splitting without the spreading. Fig. 2d shows the result with a large number of initial SIPs (80% of the particles inside the boundary layer).

Table 1: Experiment parameters.

Example | SPH Particles | Volume Grid Resolution | Diffusion Coeff. ν | Forcing Factor c | SIP Seeding Ratio r
Fig. 3  | 290k | 155 × 363 × 155 | 7e-4 | 1.7e+1 | 0.3
Fig. 5  | 200k | 155 × 317 × 85  | 6e-4 | 1.0e+3 | 0.1
Fig. 6  | 200k | 108 × 131 × 247 | 1e-5 | 2.0e+1 | 0.05

3D Experiments. Figs. 3 to 7 display several results of our 3D animations. For rendering the 3D water results, we construct a density volume from the particles and apply the Marching Cubes method to generate the surface mesh, which is then visualized with the POV-Ray renderer. Please note that more complex and advanced methods of mesh generation and free-surface treatment could further improve the surface quality. Table 1 collects the important parameters used in the 3D experiments, including the number of SPH particles and the size of the density volumes.

Three critical control values are the diffusion coefficient ν, the forcing factor c, and the SIP seeding ratio r. At each simulation step, we first detect whether an SPH particle will collide with the boundary. For all colliding particles, the ratio r (between zero and one) is used to seed a portion of them as initial SIPs. In computation, each such particle is given a random number between zero and one, compared with r, to decide whether it should be changed into a SIP. The swirling probability ζ_i = 0.8 is given to the initial SIPs. An existing SIP will not be re-initialized. When a SIP's ζ drops to 1% of the initial value, it no longer plays the role; meanwhile, a particle can be initialized as a SIP again after it vanishes. In our experiments, the radius of an SPH particle is 4e−3, the SPH kernel radius is h = 6e−3, and the viscosity of water is set to μ = 0.2.

Fig. 3 displays snapshots of simulating a water tank with a moving object. The original SPH simulation generates large-scale waves, which are illustrated in the top row of Figs. 3a-d. The SIPs are seeded around the obstacle to model the boundary-induced turbulence. They introduce small turbulent details on the waves, as shown in the bottom row of Figs. 3a-d. Natural water dynamics is modeled, which can be observed in the supplemental movie.

[Figure 3: Snapshots of a moving object inside a water tank at different time steps (a)-(d). Top: original SPH simulation; bottom: with introduced turbulence.]

We also conduct a simulation with the vortex particle method [8] without our SIP approach, using similar parameters. In Fig. 4, we compare it with our turbulent simulation result. Fig. 4b shows that the vortex particle method leads to dispersive small vortices, which adversely affect the formation and development of waves. Meanwhile, our approach in Fig. 4a produces natural fluctuation with implicit vortex merging and diffusion. Please see the supplemental movie for an animated comparison.

[Figure 4: Snapshots of a moving object inside a water tank, in comparison with using vortex particles. (a) Our method; (b) Vortex particles.]

We then simulate a water stream with three internal obstacles. Fig. 5a shows a simulation snapshot of the stream without adding turbulence, while Figs. 5b and 5c are snapshots with the introduced fluctuation. They illustrate natural water gyration along the wake and interaction with the banks, as happens in running creeks.

[Figure 5: Snapshots of a water stream with obstacle-induced SIPs. (a) Original placid simulation; (b)-(c) With introduced turbulence at different steps.]

Fig. 6 shows the results of water pouring into a tank. Fig. 6a is from the original SPH simulation, producing a water surface that is too smooth. In Figs. 6b and 6c, we show the snapshots after adding stochastic turbulence, which creates fluctuating water fronts. Fig. 7 shows another simulation with a larger tank, observed from a different location, where the number of SPH particles is increased to 36k.

[Figure 6: Snapshots of water pouring into a tank. (a) Original simulation with smooth surface; (b)-(c) With added fluctuation at different steps.]

Thanks to the introduced randomness, these animations are temporally non-repeating along a long execution or across multiple runs. The flows reveal nonuniform, varied and intermittent turbulence, reflecting the stochastic nature of realistic fluids. The effects are controlled in the turbulence production and in the spreading by ν, providing easy and direct control for animators.

[Figure 7: Snapshots of water pouring into a larger tank. (a) Original; (b) With added turbulence.]

These examples run on two Intel Xeon E5520 2.27 GHz CPUs. For the animation of Fig. 3, each simulation step costs an average of 7672 milliseconds (ms). However, the average extra computation beyond the classic SPH is minimal: (1) 732 ms for probability and vorticity development and spreading; (2) 367 ms for rotational force coupling. Note that the cost of (2) only applies to SIP neighbors. In our experiments, even with a large number of SIPs for intense turbulence, the total extra cost is less than 15%.

CHAPTER 3

Stochastic Modeling of Light-weight Floating Objects

3.1 Introduction

Light-weight objects streaming inside a flow play a significant role in the liveliness of our world: leaves, dust, snowflakes, bubbles, and many more. In general, floating objects travel following the flow and show complex, oscillating motion trajectories. A handful of approaches have been proposed to model and simulate these phenomena. First, animators usually employ a simple approach that adds random noise to the streaming path of floating objects. However, this method yields low-quality floating motion, since the random noise does not take into account the spatial and temporal distribution of the underlying flow turbulence. For instance, this approach cannot easily create the obstacle-induced oscillation of floating objects, which is nonetheless a major source of their unique motion. Second, floating objects can be considered as passively advected by flow velocities. A critical challenge is that modeling the important jiggling motion requires a very turbulent flow field, which is hard to achieve by direct numerical simulation (DNS) due to limited computational resources and numerical dissipation. This situation deteriorates severely when realtime performance and interactivity are demanded, such as in a gaming environment. Moreover, the floating motion of light-weight objects has an intrinsic stochastic nature, i.e., repeated executions result in nonidentical motions even with the same configuration, which is not achievable with existing deterministic simulation. Third, adding noise to fluid solvers can introduce chaotic flow velocities (e.g., [16, 59]).


However, such methods rely on ongoing DNS and apply chaotic addition to the whole fluid domain, which is inefficient for handling a group of floating objects inside it.

To overcome these drawbacks, we model the movement of light-weight floating objects as a random process, which is resolved by a stochastic differential equation (SDE). First, a base flow of the main stream can be pre-generated by an animator's special design, or with an affordable simulation on-the-fly. Then, the random process introduces the necessary stochastic fluctuation to object trajectories with very fast computation, avoiding costly high-resolution simulations.

Moreover, the SDE models the preferred jiggling behavior adaptively. That is, the floating matter shows placid motion along a quiet flow, while behaving in a chaotic manner in unsteady regions. The location and intensity of such fluctuation are determined by a turbulence model, which measures the "physically estimated" turmoil from the base flow and can be managed easily by an animator.

In particular, a Langevin SDE is employed and extended to compute the momentum change of a light-weight object; it has a solid physics background in modeling the "random walk" of objects and has been widely used in physics and other fields. This SDE is integrated with a two-equation k-ε transport model, the most popular turbulence model in commercial CFD (Computational Fluid Dynamics) software. All these models are explicitly solved by finite difference schemes for each object, with minimal programming effort and very fast speed.

Fig. 8 shows our algorithm with an illustrative diagram. In summary, we contribute a new and powerful tool for modeling light-weight floating objects inside flows, which is suitable for complex, realtime and interactive animation environments.

[Figure 8: Diagram of our algorithm: a pre-generated flow field Ū feeds a k-ε turbulent energy model, whose output drives the random agitation of floating objects through an SDE.]

Compared with existing approaches, the method has the following advantages: (1) Floating motion is separated from base flows for easy management; (2) The stochastic nature of floating oscillation is achieved by a special SDE; (3) The objects adaptively alter their jiggling behavior according to physically-modeled flow turbulence; (4) Floating behavior is easily controlled by animators; (5) Fast performance is achieved by explicit computation on individual objects.

A variety of approaches for modeling the interaction of flow with leaves, trees, grass, and human hair have been proposed [60-62]. In particular, for floating objects, Wejchert and Haumann [63] developed an aerodynamic model for simulating leaves in a wind field. Chen et al. [64] use a CFD solver to generate a flow field and construct empirical particle models for a traveling vehicle. Saltvik et al. [65] combine a CFD-solved flow with a falling snow model. Wei et al. [66] create a flow with the lattice Boltzmann method (LBM) and model the wind-object interaction of bubbles and feathers. In these methods, the dynamics of floating objects is determined by the flow simulations. In contrast, we utilize stochastic equations to model the turbulent motion of light-weight objects, with a low-cost base flow. For modeling random oscillation, Kim et al. [67] apply a random path for dispersive bubbles through a simple heuristic model adopting the Schlick phase function. In comparison, our method utilizes a more physical Langevin SDE and a well-known turbulence energy model. Physically modeling fluids typically involves solving the incompressible Navier-Stokes (NS) equation [2]. This method cannot model very turbulent behavior, which demands excessive computation and random variation. Some approaches propose graphical animations by coupling synthetic noises with a low-cost simulator (e.g., [13, 16, 19]). These methods involve coupling turbulence models with the simulation; the added stochastic fluctuation is tightly bound to the simulated field at each step over the whole domain.

In contrast, our method imposes stochastic agitation on each individual floating object with a special SDE providing self-adjusted oscillation behavior, and the agitation is separated from the evolving flows. Pfaff et al. [59] solve a turbulence model on particles, combining the simulated flow with noise fields; this depends on full-domain noise texture synthesis and special texture advection. Our method, in contrast, regards the floating object motion as a random Markov process, with no need for noise textures.

3.2 k-ε Turbulence Model

The floating objects to be modeled are suspended and move freely inside a fluid domain. A heavy computational burden hinders the applicability of costly high-resolution fluid simulation for modeling the flow field. In fact, state-of-the-art fluid solvers still cannot achieve random turbulent effects directly, and fluid physicists are actively seeking to better understand and simulate flow turbulence [57]. Our method is designed to model the stochastic behavior of light-weight objects, which exhibit placid or jiggling motion depending on the underlying flow features. Based on a pre-generated base flow, we employ a turbulence model to measure the chaotic fluctuation.

In physics, a turbulent flow is expressed by Reynolds decomposition as a velocity field U = Ū + u, where Ū is the mean flow and u is the fluctuation. Here, the fluctuation is not achievable from the base simulation. Turbulence models measure such small-scale oscillation using the turbulent kinetic energy, k ≡ (1/2) u · u, which is transferred from the simulated flow (low frequency, large scale) to the fluctuation field (high frequency, small scale). k further dissipates its energy into even higher frequencies. The dissipation rate is denoted by ε, and the turbulence frequency is defined as ω = ε/k. A widely used turbulence model defines the energy transport as [57]:

∂k/∂t + Ū · ∇k = P − ε,   (12)

∂ε/∂t + Ū · ∇ε = ω(C_ε1 P − C_ε2 ε).   (13)

Here, P is the production of the turbulent energy. Cε1 and Cε2 are empirical parameters.

The turbulent energy, k, is an unknown physical attribute in a low-cost simulation of Ū. However, it can be estimated from Ū. A popular approach is the turbulent-viscosity hypothesis, in which P is defined according to the mean rate of flow strain:

P = 2 ν_t S_ij²,   (14)

where the squared strain tensor is

S_ij² = Σ_{i,j} ((1/2)(∂U_i/∂x_j + ∂U_j/∂x_i))²,   (15)

and ν_t is the turbulent viscosity, defined as ν_t = C_μ k²/ε with a constant C_μ = 0.09.

Each floating object is given two turbulence attributes, k and ε, representing the kinetic energy and dissipation rate at its location x. Eqns. 12, 13 and 14 are solved explicitly with temporal finite differences. The advection terms, such as Ū · ∇k, are implicitly implemented by the object motion.
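An explicit per-object step of the transport equations can be sketched as follows. The function name is hypothetical, and the default C_ε2 = 1.92 here is only a common literature value, not necessarily the one used in our experiments:

```python
def k_eps_step(k, eps, s2, dt, c_mu=0.09, c_e1=1.44, c_e2=1.92):
    """One explicit finite-difference step of Eqns. 12-15 along an object's
    path (advection is carried by the moving object itself). s2 is the
    squared strain S_ij^2 sampled from the base flow at the object."""
    nu_t = c_mu * k * k / eps          # turbulent viscosity (after Eqn. 15)
    prod = 2.0 * nu_t * s2             # production P (Eqn. 14)
    omega = eps / k                    # turbulence frequency
    k_new = k + dt * (prod - eps)      # Eqn. 12
    eps_new = eps + dt * omega * (c_e1 * prod - c_e2 * eps)  # Eqn. 13
    return k_new, eps_new
```

With zero strain the energy simply decays (k drops by ε per unit time), while a strong strain near an obstacle feeds production and raises k, which later agitates the object through the SDE.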

3.3 Stochastic Object Motion

The computed turbulent energy, representing the agitation of floating objects, is employed to influence the trajectory of these objects through an SDE. Langevin described the random movement of a suspended particle by a well-known SDE (i.e., a differential equation with random-process components), the Langevin equations [57]:

dx_t/dt = v_t,    m dv_t/dt = −γ v_t + ψ_t,   (16)

where v_t and x_t are the velocity and position, respectively, of the particle with mass m, and γ is the friction coefficient. The stochastic component ψ_t is implemented as a standardized Gaussian random variable in vector form [57]. The Langevin SDE does not take into account an underlying inhomogeneous turbulent flow, so it was extended to a generalized form [57]:

m dv_t/dt = −∂P_t/∂x − C_γ ω (v_t − U_t) + √(C_0 ε) ψ_t,   (17)

where the base flow velocity U_t represents the global flowing effects and the pressure gradient −∂P_t/∂x plays the role of friction. ω and ε are the measurements at time t from the k-ε model, which are used to agitate particles according to the estimated turbulence. This is a pivotal solution that makes it possible to model the desirable fluctuation without costly ongoing DNS solvers. In particular, the term −C_γ ω (v_t − U_t) is a relaxation process that models the convergence of the agitated particle velocity towards the mean flow U_t. The last term, √(C_0 ε) ψ_t, introduces random fluctuation through the stochastic variable ψ_t, controlled by the energy dissipated from the base flow.

Light-weight floating objects move inside a turbulent flow in a manner approximating the aforementioned suspended particles. Therefore, we utilize the SDE of Eqn. 17 and modify it for modeling floating objects in graphical animations. In this case, the air friction does not obviously affect floating behavior, so we ignore the pressure-gradient term. Please note that the pressure term in the NS equation is still handled normally in the fluid solver. Furthermore, the random agitation in Eqn. 17 is controlled by ε, which models the energy removed (i.e., dissipated) from the flow and transferred to the particles. As we model floating objects, their turbulent behavior should instead be controlled by the turbulent kinetic energy, k, of the flow. As a result, the SDE used in our method becomes:

m dv_t/dt = −C_l1 ω_t (v_t − U_t) + C_l2 √(k_t) ψ_t + F,   (18)

where F is the body force, such as gravity and buoyancy. Two control parameters, C_l1 and C_l2, are employed to provide easy and meaningful control to animators in interactive applications. The SDE has a unique relaxation process related to the inertial motion of objects. Consequently, it creates smooth traces (e.g., like a spiral), whereas simple random addition leads to jittery velocity changes. In general, this extended SDE models the Ornstein-Uhlenbeck (OU) process, a powerful mathematical tool used in many applications (the original Langevin equation is a case for fluid particles). We believe floating oscillation can be approximately modeled as such an OU process for computer animations, which is indeed a key contribution of this paper.
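One explicit Euler step of Eqn. 18 for a single object can be sketched as below. All names are hypothetical; the noise term is read as C_l2 √(k_t) ψ_t, by analogy with the √(C_0 ε) term of Eqn. 17, and the injectable random source is only for testability:

```python
import math
import random

def sde_step(v, u_base, k, eps, f, m, dt, c_l1, c_l2, rng=random.gauss):
    """Explicit Euler step of the extended Langevin SDE (Eqn. 18).
    psi is a standard Gaussian 3-vector and omega = eps / k is the
    turbulence frequency from the k-eps model."""
    omega = eps / k
    psi = (rng(0.0, 1.0), rng(0.0, 1.0), rng(0.0, 1.0))
    return tuple(
        v[d] + dt / m * (-c_l1 * omega * (v[d] - u_base[d])   # relaxation
                         + c_l2 * math.sqrt(k) * psi[d]       # agitation
                         + f[d])                              # body force
        for d in range(3))
```

With the noise switched off, the update is a pure relaxation toward the base velocity U_t, which is the OU-process drift; the Gaussian term supplies the turbulence-scaled random kicks.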

Algorithm 1: Computational algorithm
1:  Pre-generate a base velocity field
2:  Initialize floating objects
3:  Time step t = 0
4:  while not stopped do
5:    for each floating object do
6:      Load position x_t and velocity v_t
7:      Read base velocity U_t at x_t
8:      Compute k_t and ε_t by Eqns. 12-15
9:      Compute new velocity v_{t+1} by Eqn. 18
10:     Update position x_{t+1} by Eqn. 16
11:   end for
12:   Floating objects management
13:   Output and render
14:   t = t + 1
15: end while

3.4 Implementation and Results

The computational algorithm underlying our modeling of floating objects involves easy programming, as described in Algorithm 1. A key input is the stochastic variable ψ_t, a sequence of Gaussian-distributed random vectors. We utilize Bell's algorithm [68] to generate three independent random scalars with zero mean and unit deviation, and then construct the needed vector-valued variables. The management of objects includes adding new objects, removing objects that leave the domain, and other application-related operations.
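Generating ψ_t only requires any standard-normal sampler. As a stand-in for Bell's algorithm (which we do not reproduce here), the sketch below uses the Box-Muller transform, which yields the same zero-mean, unit-deviation distribution:

```python
import math
import random

def gaussian_vector(rng=random.random):
    """Standard-normal 3-vector for psi_t. Box-Muller is used here only
    as a common stand-in for the Bell's-algorithm sampler in the text."""
    def gauss():
        u1 = max(rng(), 1e-12)   # guard against log(0)
        u2 = rng()
        return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    return (gauss(), gauss(), gauss())
```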

The method provides several control parameters for animators to adjust floating behavior flex- ibly. In the turbulence assessment, ε (Eqn. 13) is empirically given with Cε1 = 1.44 in physics [57]. Meanwhile, Cε2 plays a significant role in determining turbulence energy dis- tribution. With a large Cε2, the produced k, e.g., around an internal obstacle, can propagate to locations far from the obstacle. As a result, the modeled objects show fluctuated trajectories in 38

(a) Passive advection (b) Simple random algorithm

(c) Our algorithm with fluctuation (d) Our algorithm with stronger fluctuation

Figure 9: Simulation snapshots of flying leaves past a house based on a pre-computed stationary velocity field.

a long time after interacting with the obstacle. In contrast, smaller Cε2 values limit the oscillation of these objects to the vicinity of the obstacle. Cl1 and Cl2 in Eqn. 18 control different floating behaviors. Cl2 models the intensity of random agitation; a large value results in strong fluctuation of the objects. Cl1 controls how quickly the oscillation converges toward the guidance base flow, Ut. Increasing Cl1 quickly drags the oscillation back to Ut, so that the trajectories become consistent with the simulation results.

In Fig. 9, we show snapshots of an animation example modeling flying leaves passing a house. A pre-computed stationary wind field with resolution 128 × 79 × 51 is applied. Fig. 9a displays a snapshot of the leaves flowing along the wind velocity: the leaves follow a regular route, and the steady dynamics are visually unacceptable. Fig. 9b uses random noise; the dynamics becomes unsteady, but with an unpredictable dispersive propagation. In contrast, our algorithm models fluctuation of the leaves according to wall-induced turbulence with Cε2 =

1.9, Cl1 = 1.0 and Cl2 = 1.2. As a result (Fig. 9c), the leaves flow past the roof, climb upwards, and dispersively oscillate to a wide area behind the house. Furthermore, we increase

Cl2 to 2 so that the leaves fluctuate more strongly and propagate into even wider areas (Fig. 9d). In this example, we remove those leaves that move out of the simulation domain.

The total number of leaves in the scene peaks at 570. Our SDE model creates spiral motion that is not achievable by pure random noise integration. We make the leaves self-rotate slowly. When a leaf hits the boundary of the house, it is bounced back in the mirror-symmetric direction. Our method can be coupled seamlessly with complex methods for modeling object deformation, collision, rotation, etc. The turbulence model and SDE are solved with straightforward arithmetic, introducing no significant overhead.

The examples run very fast on a desktop with a CPU (Intel Xeon E5620 Dual 2.4 GHz) and 12 GB memory, at an average speed of around 0.9 milliseconds per step. The algorithm is also inherently parallel and amenable to GPU acceleration when necessary for a very large number of objects in complex scenes.

CHAPTER 4

Pattern-based Smoke Animation with Lagrangian Coherent Structure

4.1 Introduction

In computer graphics, fluid modeling has been extensively researched for a long period; direct simulation with Computational Fluid Dynamics (CFD) methods has led to substantial success in creating realistic fluid phenomena.

The Navier-Stokes (NS) equation is solved on discretized grids with numerical simulation, preferably in unconditionally stable schemes [2, 5]. Modular bases are used to solve the equation in a reduced space with constraints [69]. The energy loss from numerical dissipation may impair the simulation quality, which is tackled by vorticity confinement [3], vortex particles [8], fluid-implicit-particles [9], circulation preservation [70], high-order advection schemes [4, 71], etc. The

flow results achieved on different grid resolutions vary largely due to the intrinsic complexity of energy transport among different scales. A variety of techniques have been proposed to enhance simulation with special turbulence effects. Synthetic noises are used together with low-cost simulators [14-17], where the noises are integrated with simulated velocities following the Kolmogorov energy cascade theory. Random agitation is also added to simulations through special carrier particles [20, 59]. These methods add good sub-scale details, whereas large-scale noises can induce strong deviation from the simulation results, which may not be desirable.

However, current fluid modeling technology still imposes great challenges on animators: the nonlinear flow dynamics is laborious to adjust for achieving a desired fluid path and shape, and the expensive computational cost hinders their effort to design special effects interactively. Therefore, animators direly need advanced fluid design tools which, ideally, provide their functionality as a two-stage process:

• Design Stage: Users design fluid behavior with multiple experiments of low-cost simulation. This procedure needs interactive adjustment of initial and boundary conditions, internal structures, and parameters;

• Output Stage: Once the expected fluid characteristics (such as major path and shape) are achieved, users run a high-quality simulation to create the final animation.

However, unlike image or geometry objects, the transition from experimental design results to final outcome is not straightforward due to the inherent nonlinearity of fluids and numerical dissipation. Flow behavior changes greatly when the simulation resolution increases or turbulence enhancement is exploited. The resultant fluid dynamics might not be what an animator prefers and has tested, which will greatly frustrate the animator given his/her previous design efforts. Although researchers are actively presenting advanced techniques in fluid modeling and simulation, unfortunately, few efforts have been made to support such a two-stage design protocol, which would provide great convenience for animators. The lack of such design tools also hampers the usability of advanced fluid techniques for a wider audience.

In this paper, we propose a novel pattern-based fluid animation approach for advancing fluid modeling in the two-stage animation scenario. Our method focuses on regulating high-quality animation with pre-computed patterns extracted from low-cost simulation results after design experiments. We employ a fast-emerging fluid analysis technique, in particular the Lagrangian

Coherent Structure (LCS), to represent flow patterns. LCS defines the dynamic fluid skeletons of major flow trends. It provides a geometric instrument for revealing the intrinsic properties of complex fluids, and is computed as the locally maximal regions of the finite-time Lyapunov exponent (FTLE). LCS "has evolved to become one of the most exciting avenues of research in dynamical systems" [72]. It can represent structural features and material separatrices which are often hidden when viewing the vector field or trajectories. Recently, LCS has been successfully used in physics, meteorology, and oceanography to study real-world fluid dynamics. It has also been introduced in topology-based visualization for visually analyzing fluid

flows. FTLE and LCS have been used for studying time-dependent dynamical flow systems [72, 73]. The technique has been successfully applied in topology-based flow visualization [74-76], where the structures are discovered to identify critical features of experimental or real flow data sets, giving insights into the dynamic processes. Based on these previous research efforts, our project initiates the use of FTLE and LCS in fluid animation missions.

In our approach, we use LCS to define fluid patterns based on its capability to structurally characterize the main streams. By using the patterns to drive high-quality simulation, the final animation is confined to the design. To the best of our knowledge, this is the first time FTLE and LCS are used to drive fluid simulation and animation.

In detail, FTLE measures the rate of separation of very close particles after a given time interval inside a fluid. A sequence of FTLE fields, which are evolving scalar volumes, is computed from the velocity fields created over time in a low-cost flow simulation. Such velocity fields are the satisfactory results for animators after a comprehensive design process. The FTLE ridges present geometries that divide the domain into coherent regions. Such ridges, i.e., LCS, play a

Figure 10: Overview of our method. In the design stage, a low-cost flow simulation produces velocity fields from which the FTLE (GPU-accelerated) and the LCS (with controlled thinning) are computed; in the output stage, a guided high-quality flow simulation produces the final animation.

role as material boundaries in the material's transport inside the flow and record the major flow trends. Our method regulates a high-quality animation by enforcing its velocities in the LCS region to follow the pre-computed velocities. Therefore, the animation is guaranteed to follow the mainstream flow characteristics, e.g., major shape and direction, which have been designed and preserved in the extracted patterns. In the remaining regions, the high-quality fluid dynamics is allowed to develop freely, leading to the preferred realistic details.

Based on this scheme, the high-quality animation is geometrically maneuvered by the pre-designed results. Animators can adjust parameters to achieve different high-quality results while keeping them consistent with the design. In particular, the patterns are controllable to achieve results with variations, implemented by computing and representing the

LCS regions with different sizes. A thick LCS skeleton forces a large portion of the flow to follow the pre-computed shape, while a thin skeleton gives the fluid more freedom to develop and leads to less shape-controlled results. We employ a skeleton-thinning algorithm to achieve such control. These patterns are pre-computed in the design stage, and the parallel nature of their generation is exploited through GPU acceleration. In graphical modeling, high-quality fluid simulation is typically accomplished by increasing the simulation resolution and using turbulence enhancement methods. Our approach facilitates deterministic fluid animation by extracting and employing the dynamic "signature" of flows. It can be combined with these modeling methods in fluid design applications.

Fig. 10 illustrates the overview of our method. In summary, our major contributions in this project are:

• Promoting the two-stage process in fluid animation, in particular fast design and high-quality outcome, which gives animators a very comfortable tool in their missions;

• Proposing geometric structures representing major trends of fluids, as a new operative instrument for graphical tasks involving fluid flows;

• Innovating the use of the Lagrangian Coherent Structure in fluid animation beyond its traditional domain of visualizing and analyzing real-world or experimental fluid data;

• Developing techniques for using the flow patterns in guided fluid animation with acceleration and effect control.

4.2 Flow Pattern

A key challenge in fluid study is to find the "representative pattern" of a fluid flow so that it acts as an operative vehicle for flow manipulation, such as guiding high-quality animations. One major impediment is that the flow is dynamically evolving. Another is the very complex momentum variation across a wide range of spatial scales. While the Kolmogorov theory has been used to describe dynamic energy transport, its statistical description is not suited to unveiling geometric structures over time. We use structural extraction methods

(a) Velocity streamline (b) Forward FTLE (c) Backward FTLE

(d) LCS with Hessian (e) LCS with threshold

Figure 11: Flow pattern with FTLE and LCS. (a) Red: upward velocity; green: downward velocity. (b)(c) Red: high FTLE value; blue: low FTLE value. (d)(e) LCS region from (c).

to discover flow patterns.

4.2.1 Finite-Time Lyapunov Exponent (FTLE)

The Lyapunov exponent has its roots in the theory of dynamical systems, where it characterizes the rate of separation of infinitesimally close trajectories. Haller [73] used the finite-time Lyapunov exponent to identify LCS over a finite time interval. Considering the motion of a Lagrangian particle inside a fluid domain, its trajectory can be described by an ordinary differential equation:

\[
\frac{dp(t)}{dt} = u(p(t), t), \tag{19}
\]

where p is the position of the particle at time t, and u is the velocity. In the parlance of dynamical systems, the trajectory that takes a particle forward T units in time from its initial position defines a flow map:

\[
\Phi_{t_0}^{t_0+T}(p(t_0)) := p(t_0 + T). \tag{20}
\]

It depends on the initial time, t0, and the integration period, T. The flow map provides a way to compute the amount of local stretching, which is measured by the Cauchy-Green deformation tensor:

\[
\Delta := \left( \frac{d\Phi_{t_0}^{t_0+T}(p)}{dp} \right)^{*} \left( \frac{d\Phi_{t_0}^{t_0+T}(p)}{dp} \right), \tag{21}
\]

where * denotes the transpose of a matrix. ∆ is a 2 × 2 matrix for 2D flow or a 3 × 3 matrix for 3D flow, respectively. It is computed at each site of a discrete grid on the fluid domain.

Furthermore, ∆ has positive eigenvalues since the matrix is positive definite. The eigenvalues measure the rate of separation of the underlying flow at a location p. In particular, the FTLE value is defined as a time-dependent scalar using the maximum eigenvalue λmax:

\[
\sigma_T(p, t) = \frac{1}{|T|} \log \sqrt{\lambda_{\max}(\Delta)}. \tag{22}
\]
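As a concrete check of Eqns. 21-22, the sketch below computes σT from a finite-difference estimate of the flow-map Jacobian. Python/NumPy is an assumption here; the dissertation does not list code.

```python
import numpy as np

def ftle(flow_map_jacobian, T):
    """FTLE value from the flow-map Jacobian dPhi/dp (Eqns. 21-22).

    flow_map_jacobian : (d, d) array, finite-difference estimate of dPhi/dp
    T : integration period (may be negative for backward FTLE)
    """
    J = np.asarray(flow_map_jacobian, dtype=float)
    delta = J.T @ J                           # Cauchy-Green deformation tensor
    lam_max = np.linalg.eigvalsh(delta)[-1]   # largest (positive) eigenvalue
    return np.log(np.sqrt(lam_max)) / abs(T)

# Pure stretching by a factor of 2 along x over T = 1 gives sigma = log 2.
sigma = ftle(np.diag([2.0, 1.0]), T=1.0)
```

Since |T| appears in the denominator, the same routine serves both the forward and the backward FTLE discussed next.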

4.2.2 Forward and Backward FTLE

In Eqns. 20-22, the time interval T can be either positive or negative. The Lagrangian particle thus moves forward or backward in time along its trajectory, respectively. For

T > 0, the FTLE measures forward trajectory separation, and the associated LCS (i.e., local maxima of the FTLE) represents repelling surfaces (stable manifolds) in the flow. If T < 0, the separation is evaluated backward in time and the resulting LCS acts as attracting surfaces (unstable manifolds). Fig. 11b and Fig. 11c display a forward and a backward FTLE field, respectively. Note that the FTLE fields are dynamically evolving, and these are snapshots at one moment; the velocity field at that moment is shown in Fig. 11a. Haller and Sapsis [77] noted that "the attracting LCSs form the backbones of forward-evolving trajectory patterns over the time interval [t0, t], acting as central structures on which nearby trajectories accumulate". In our application we want high-quality simulations to conform to the pre-computed trends, in other words, to attract the simulations towards the skeletons. Therefore, the attracting surfaces, and hence the backward FTLE, are used in our implementation. Fig. 11c displays the major characteristic of the flow above the ball. The integration time T is chosen depending on the amount of detail needed in the resulting FTLE: a large T achieves smooth and large-scale structures, but it should not be so large that necessary vortices are ignored. We use T = 1 second in our examples.

(a) 5 thinnings (b) 15 thinnings

Figure 12: Control domains with LCS thinning from Fig. 11e.

4.2.3 Lagrangian Coherent Structure (LCS)

From the computed FTLE field, the LCS is identified as the local maxima of the field. Fluid patterns are thus represented as invisible repelling/attracting structures that outline the boundaries between different regions and reveal material transport pathways. Ridge extraction is an established topic in differential geometry. The second-order derivative of the FTLE field is evaluated by the Hessian matrix, Γ = d²σT(p)/dp², whose smallest eigenvalue πmin and related eigenvector n satisfy πmin < 0 and ∇σ · n = 0 on an LCS ridge. Unfortunately, such an accurate LCS computation is very sensitive to small fluctuations in the FTLE field, and thus results in scattered small patches/points which impair the ability to identify the desired skeleton. The fluctuation comes from the sensitive nature of the particle trajectories to small velocity variations, and also from the inaccuracy of numerical integration. A practical algorithm for extracting LCS

Figure 13: 2D pattern-based fluid animation. (a) Low-resolution simulation result; (b) High-resolution simulation with regulation after 10 thinning passes; (c) High-resolution simulation with regulation after 20 thinning passes; (d) High-resolution simulation without regulation.

from the Hessian was previously proposed for 2D LCS by thresholding the largest eigenvalue of Γ, πmax < 10⁻³ [76]. A 2D curve-thinning approach was applied after such extraction to find the exact skeleton. Fig. 11d shows the LCS created from Fig. 11c with the largest-eigenvalue threshold. The approach is good at continuous ridge extraction, but still results in many separated ridges. Such shattered regions are not appropriate in our application: firstly, they introduce irregular and unnatural artifacts in the flow simulation when we apply flow-guiding forces on them; secondly, it is hard to control different regulation levels in the fluid animation results, which is an essential feature of our approach.
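For illustration, the thresholded Hessian criterion of [76] can be sketched as below. This is a hypothetical NumPy rendering of the eigenvalue test only; the subsequent curve thinning and the gradient-alignment condition are omitted.

```python
import numpy as np

def ridge_mask(sigma, eig_thresh=1e-3):
    """Hessian-based 2D ridge test on an FTLE field (a sketch of [76]'s criterion).

    Marks grid points where the largest Hessian eigenvalue stays below
    eig_thresh, i.e., the field curves downward (or is flat) in every
    direction, the thresholded variant of the pi_min < 0 ridge condition.
    """
    # Second derivatives by repeated central differences.
    s_x = np.gradient(sigma, axis=0)
    s_xx = np.gradient(s_x, axis=0)
    s_yy = np.gradient(np.gradient(sigma, axis=1), axis=1)
    s_xy = np.gradient(s_x, axis=1)
    mask = np.zeros(sigma.shape, dtype=bool)
    for i in range(sigma.shape[0]):
        for j in range(sigma.shape[1]):
            H = np.array([[s_xx[i, j], s_xy[i, j]],
                          [s_xy[i, j], s_yy[i, j]]])
            mask[i, j] = np.linalg.eigvalsh(H)[-1] < eig_thresh
    return mask
```

A downward parabolic sheet is marked everywhere (largest eigenvalue is zero), while an upward one is rejected, which matches the intended ridge/valley distinction.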

4.2.4 Thinning

To address the problem, we apply the skeleton-thinning method directly after a threshold selection on the FTLE field, without Hessian computation. First, we use a threshold k to find the

LCS region (σT(p) > k). This method may miss some LCS ridges with very small FTLE values, which are related to small fluctuations. This is acceptable since the mainstream guidance usually does not focus on small details, and very small fluctuations can introduce artifacts in animation. The resulting LCS region is a portion of the simulation area, which defines the control domain where the guidance of animation takes effect. Fig. 11e shows the result of such threshold extraction from Fig. 11c. In comparison to Fig. 11d from the Hessian-based computation, it achieves a smooth and integral LCS control domain.

The threshold k plays a role in achieving different results. A small k leads to a large control domain and hence strong fluid-guiding effects. k = 0.1 is used in our examples for an FTLE range normalized to [0, 1]. This initial control domain can be pared by applying a

3D thinning algorithm [78]. Skeleton thinning preserves geometric features better than directly using a large k. The thinning algorithm is applied iteratively with nthin passes, selected by the user, so as to achieve a smaller control domain and hence a desired level of animation regulation. Fig. 12 shows two control domains of different sizes after iterative thinning from Fig. 11e. Thinning 15 times retains a smaller major flow region than thinning 5 times.
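The threshold-then-pare construction can be sketched as follows. Binary erosion is used here as a deliberately simple stand-in for the topology-preserving 3D thinning of [78], so the sketch only illustrates how nthin shrinks the control domain, not the actual skeletonization.

```python
import numpy as np

def control_domain(sigma, k=0.1, n_thin=5):
    """Threshold-then-pare sketch of the control-domain construction.

    sigma is a normalized FTLE field in [0, 1]. Each pass keeps a voxel
    only if all six face-neighbors are also inside the region (6-neighbor
    binary erosion), a simplification of the thinning in [78].
    """
    mask = sigma > k                      # initial LCS control region
    for _ in range(n_thin):
        core = mask.copy()
        for axis in range(mask.ndim):
            core &= np.roll(mask, 1, axis) & np.roll(mask, -1, axis)
        mask = core
    return mask
```

Each pass strips one voxel layer from the region, so increasing n_thin leaves a progressively smaller control domain, mirroring the 5-pass vs. 15-pass comparison in Fig. 12.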

4.2.5 Implementation

From a sequence of low-cost simulation results on a coarse grid, the FTLE is computed numerically at each grid point at a time t. For each point p(x, y, z), six particles are positioned at (x ± τ, y ± τ, z ± τ) with a small perturbation τ (e.g., 0.1 is used in our experiments for a unit grid interval). The six particles are traced backward in the velocity fields for a period of T, and their stopping positions are used to numerically compute Eqn. 21. The FTLE value σT(p, t) is then obtained using Eqn. 22 by calculating the maximum eigenvalue. The trajectory tracing is

implemented within T/δt steps, where δt is the time step size of an NS simulator. For example, in

Fig. 11, δt = 0.1 seconds and T is 1 second. We adopt a fourth-order Runge-Kutta integration scheme in the tracing and use linear interpolation in the computation. Further enhancement using a smaller particle-moving step size or higher-order integration and interpolation methods can increase the accuracy, as done in some approaches to real flow analysis and visualization. However, the computational cost also grows greatly. In our practice, the fluid control tasks in animation can be satisfied with the current computational scheme, and hence we avoid the time-consuming computation.
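The backward tracing and finite-difference Jacobian just described can be sketched in Python as below. The velocity interpolation is abstracted into a callable, and the function names are illustrative.

```python
import numpy as np

def trace_back(p0, velocity, T=1.0, dt=0.1):
    """Trace a particle backward for |T| with fourth-order Runge-Kutta.

    velocity(p) returns the (interpolated) flow velocity at position p.
    Backward motion is realized by integrating with a negative step.
    """
    p = np.asarray(p0, dtype=float)
    h = -dt                       # negative step: backward in time
    for _ in range(int(round(T / dt))):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * h * k1)
        k3 = velocity(p + 0.5 * h * k2)
        k4 = velocity(p + h * k3)
        p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

def flow_map_jacobian(p, velocity, tau=0.1, T=1.0, dt=0.1):
    """Finite-difference dPhi/dp from six perturbed particles (2 per axis)."""
    d = len(p)
    J = np.zeros((d, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = tau
        plus = trace_back(p + e, velocity, T, dt)
        minus = trace_back(p - e, velocity, T, dt)
        J[:, j] = (plus - minus) / (2.0 * tau)
    return J
```

The six perturbed particles per grid point correspond exactly to the central differences in flow_map_jacobian; feeding the result to Eqn. 22 yields one backward-FTLE value.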

The FTLE fields are time-varying volume data in 3D. Their generation requires multiple particle tracings for each grid point. Fortunately, this is a pre-processing computation on a low-resolution grid. Moreover, the FTLE computation is explicit and embarrassingly parallel, so it is implemented on GPUs to further increase performance.

The computed FTLE fields are stored after the design stage. In practice, we only need to compute and store FTLE fields every β steps (e.g., β = 5), since they change gradually. In the output stage they are retrieved and linearly interpolated to the high-resolution domain and to every time step. Then, the LCS is computed with nthin passes of thinning for the high-quality animation (Sec. 4.3).

4.3 Pattern-driven Fluid Animation

High-resolution Simulation: Dynamic flow patterns extracted from a low-cost fluid simulation, SL, are used to drive a high-quality animation. At a time t, the pattern represented by the

LCS control domain, Ω(t), determines where the high-quality animation should follow the pre-designed velocities. A high-resolution flow simulation, SH, is influenced by applying special guiding forces in Ω, which confine its velocity to that of SL. The guiding force is computed at a point p as

\[
F(t) =
\begin{cases}
c \, \dfrac{1}{\delta t} \left( \hat{u}_{S_L}(t) - u_{S_H}(t) \right) & \text{if } p \in \Omega(t); \\[4pt]
0 & \text{if } p \notin \Omega(t).
\end{cases} \tag{23}
\]

Here, ûSL is the upsampled velocity on the high-resolution grid. A constant c (e.g., c = 1.5 in our 3D examples and c = 2.0 in 2D examples) is chosen for a good control effect. Note that simply adjusting c is not preferable to LCS thinning: decreasing c makes the control effects in different regions less distinguishable, while increasing the number of thinning passes creates smaller control regions and allows the flow to evolve freely over larger areas.
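Eqn. 23 amounts to a masked relaxation force toward the pre-designed velocities; a minimal NumPy sketch follows, where the array shapes and names are assumptions.

```python
import numpy as np

def guiding_force(u_low_up, u_high, lcs_mask, dt, c=1.5):
    """Guiding force of Eqn. 23 on the high-resolution grid.

    u_low_up : pre-designed velocity upsampled to the fine grid (u-hat_SL),
               shape (..., 3)
    u_high   : current high-resolution velocity (u_SH), same shape
    lcs_mask : boolean control domain Omega(t), shape (...)
    """
    F = np.zeros_like(u_high)
    # Nonzero only inside the LCS control domain.
    F[lcs_mask] = (c / dt) * (u_low_up[lcs_mask] - u_high[lcs_mask])
    return F
```

Because the force vanishes outside Ω(t), the fine simulation is pulled toward the design only on the LCS skeleton and evolves freely elsewhere, exactly the behavior described above.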

Turbulence Enhancement: Besides using high-resolution simulations for the final animation, noises can be integrated with a pre-computed velocity field for turbulence enhancement in a post-processing stage without solving the NS equation (e.g., [16]). Such methods successfully add small-scale details to an existing flow, whereas noises with a large vortex scale result in diverted and unnatural flow behavior. We also use the LCS control domain to adjust noise integration, further leveraging the potential of these methods. Noise-based fluctuation is only added outside the control domain, so that the major flow trend is preserved even when a noise field with larger scales is used.

Such a pattern-driven simulation imposes fluid control on LCS pattern domains. This is the key difference between our approach and previous fluid control methods, which have focused on different ways of applying control to fluid simulation, such as optimization [33, 34] and detail-preserving control [25], but did not study the intrinsic structural features of fluids and the corresponding spatial regions in which to apply control. Indeed, the new scheme complements these previous approaches. Our method can be combined with different control methods besides Eqn. 23.

4.4 Results and Performance

Several examples are implemented on a workstation with two Intel Xeon E5630 2.53 GHz Quad-Core CPUs and 12 GB memory. We use a stable solver with MacCormack advection [71]. The smoke density volume is advected along the flow field.

Fig. 13 is a 2D example of fluid animation using our method. Fig. 13a is a low-resolution simulation snapshot on a 32 × 64 grid. For better illustration, we assign two different colors to the smoke on the left and right sides of the ball. In Fig. 13d, a high-resolution 128 × 256 grid is used. However, this high-quality configuration makes Fig. 13d totally different from Fig. 13a at the same time step: the high-resolution simulation has distinctly different characteristic shapes from the low-resolution one. To make the final output consistent with the low-quality simulation, we extract LCS patterns and apply regulation. Fig. 13b and Fig. 13c display the snapshots using 10 and 20 passes of thinning, respectively. Their shapes are similar to Fig. 13a, with two different levels of detail.

In Fig. 14, a 3D flow simulation with a moving ball is performed on a 32 × 64 × 32 grid. Fig. 14a shows the basic flow shape in the low-cost design. A four-times-larger 128 × 256 × 128 grid is then used for creating the final animation, and a small amount of vorticity confinement [3] is added to achieve the turbulent results shown in Fig. 14d. However, the smoke behavior deviates from the design we were satisfied with. It also evolves faster, in part due to the extra confinement energy. In the two-stage design scenario, this transition difference hinders an animator from using the turbulence enhancement technique. Fig. 14b is the result of applying pattern guidance with a large LCS control domain (0 thinnings). The snapshot has detailed smoke behavior, and the shape agrees with Fig. 14a. Applying 8 passes of thinning, Fig. 14c shows more turbulence and details.

In Fig. 23, smoke flows past two internal objects. A 48 × 64 × 48 grid is used in the design stage, and Fig. 23a shows the designed simulation snapshot. Once patterns are extracted, a

96 × 128 × 96 grid is applied for the high-quality output. Fig. 15 visualizes three thinning results of

Table 2: Performance Report (in seconds).

| Example | Low-cost Resolution | Low-cost Ave. Sim. Time/Frame | FTLE Time/Frame (CPU) | FTLE Time/Frame (GPU) | Thinning Passes | Thinning Time | High-quality Resolution | High-quality Ave. Sim. Time/Frame |
|---------|---------------------|-------------------------------|-----------------------|-----------------------|-----------------|---------------|-------------------------|-----------------------------------|
| Fig. 13 | 32 × 64 | 0.02 | 1.3 | 0.03 | 10 | 0.007 | 128 × 256 | 0.55 |
| Fig. 23 | 48 × 64 × 48 | 0.68 | 62.1 | 0.41 | 4 | 0.34 | 96 × 128 × 96 | 5.6 |
| Fig. 14 | 32 × 48 × 32 | 0.23 | 19.8 | 0.14 | 8 | 1.35 | 128 × 192 × 128 | 23.4 |
| Fig. 16 | 24 × 32 × 24 | 0.05 | 5.51 | 0.06 | 4 | 0.23 | 96 × 128 × 96 | 5.15 |
| Fig. 17 | 48 × 64 × 48 | 1.8 | 61.6 | 0.95 | 8 | 3.29 | 192 × 256 × 192 | 27.1 |

the patterns at a step. The finer simulation, enhanced by vorticity confinement, creates good smoke effects (Fig. 23d). However, the result abandons the predefined smoke behavior. We thus apply our pattern-based method after 4 thinning passes. Fig. 23b illustrates detailed smoke that still follows the design. Furthermore, we choose to apply guidance on only half of the LCS control domain. Fig. 23c shows that the smoke close to the observer still follows the designed shape, while the smoke far from the observer evolves more freely. This example indicates that the geometric pattern can be further edited and adjusted for spatially maneuverable fluid animation.

In Fig. 16 we show our regulation result for a fluid simulation fluctuated by vortex particles [8]. Vortex particles apply vorticity forces to a high-resolution simulation (96 × 128 × 96), as in Fig. 16c. The enhanced flow shows scattered smoke rotations, unlike the original simulation in Fig. 16a (24 × 32 × 24). If the original shape is preferred, Fig. 16b is the confined result after applying the LCS patterns.

We also apply our method to noise-based turbulence. Kim et al. [16] successfully added sub-scale wavelet noises to low-cost simulation results. However, when the spatial scale of the noises (i.e., the size of the added vortices) increases, the flow is greatly diverted from the designed path and behavior, leading to unnatural final results. Fig. 17c shows the strong fluctuation result, in comparison to the original flow in Fig. 17a. In contrast, Fig. 17b displays the turbulent smoke with its shape similar to Fig. 17a.

Table 2 reports the system performance. When using a high-resolution simulation, the average simulation time (including density advection) per frame greatly increases, which demonstrates the necessity of the two-stage animation design. Besides the memory usage of the fluid solver, the

FTLE stores a scalar floating-point value at each low-resolution grid site. It is up-sampled to the desired high resolution and used to compute the LCS region, represented as a grid of binary values. The total extra memory used by the FTLE/LCS computation is around 10% more than the classic fluid solver. The computation of 3D FTLEs is slow on the CPU using fourth-order Runge-Kutta integration and tri-linear interpolation. However, this computation is highly parallel and applies to low-resolution grids. We therefore compute low-resolution FTLE

fields with GPU acceleration on an nVidia Tesla C1060. The GPU/CPU speedup factor is large due to the parallel nature of the algorithm. For example, on a 48 × 64 × 48 grid, the GPU computation is 150x faster than the CPU version for Fig. 23 and 64x faster for Fig. 17.

The discrepancy between the two factors is due to the different scenes and flow behaviors.

Please note that this computation is applied only every β = 5 steps. The thinning algorithm is also very fast compared to the high-quality simulation. The guiding force computation and feedback are trivial in terms of computational requirements. The wavelet noise method uses noise integration instead of an NS simulator to create the high-quality animation.

Figure 14: Pattern-based fluid animation of a moving-ball simulation. (a) Low-resolution simulation result; (b) High-resolution simulation with regulation; (c) High-resolution simulation with regulation after 8 passes of thinning; (d) High-resolution simulation without regulation.

Figure 15: Pattern visualization of the Fig. 23 example. (a) 4 thinnings; (b) 8 thinnings; (c) 12 thinnings.

Figure 16: Pattern-based fluid animation with vortex particles. (a) Low-resolution simulation result; (b) Adding vortex particles with regulation (4 thinnings); (c) Adding vortex particles without regulation.

Figure 17: Pattern-based fluid animation on turbulence enhancement with wavelet noise. (a) Low-resolution simulation result; (b) Adding noise with regulation (8 thinnings); (c) Adding noise without regulation.

CHAPTER 5

Ad Hoc Compression of Smoke Animation

5.1 Introduction

The advent of advanced fluid modeling techniques has enabled astounding animations of smoke and various natural phenomena. The animations are mostly generated by physically-based flow simulations. The simulated smoke density fields can be rendered so that the animations are distributed in the form of 2D videos. But such a canonical approach is inflexible since the rendering results are pre-defined: the rendering parameters (e.g., viewing and lighting) cannot be easily changed for different uses. For example, in a graphical training system involving multiple agents, the simulation data may need to be rendered from different viewpoints simultaneously at different sites. In such cases it is preferred that the 3D simulated data be directly compressed, stored, and transmitted. Moreover, in emerging networked graphical applications, e.g., online games and training systems, there is an urgent need to ensure that physically-modeled fluids can be stored and streamed quickly and smoothly. However, smoke animations usually comprise 3D, high-resolution, time-varying datasets, which pose a major challenge for efficient storage and transmission. Existing compression tools do not utilize the features of smoke dynamics and cannot preserve important small-scale details while achieving good compression performance. There is a lack of compression tools specific to smoke animations; in contrast, the compression of 3D geometric objects has been widely studied [40].


In this paper, we initiate the study of compressing the 3D data of smoke animations towards their efficient migration in various scenarios. A set of ad hoc compression techniques is developed with respect to the specific features of smoke animation. To the best of our knowledge, this is the first time compression techniques are studied specifically for such graphical animations. In general, smoke animations have:

• Large data size: smoke is typically stored on 3D grids with a large number of elements, which becomes a heavy burden on direct data storage.

• Rapidly varying data: smoke is inherently dynamic and amorphous, with fast and irregular evolution consisting of a large number of evolving frames. Such a nature is radically different from geometric objects.

• Visually sensitive details: smoke behavior is vivid thanks to its abundant small details. The small-scale details should be well preserved to produce realistic visual results. This is a major hurdle for applying existing data compression techniques, which typically aim at reducing high-frequency variations.

General data compression techniques [35] perform compression without utilizing knowledge of smoke features, so better compression remains possible. On the other hand, extensive research has been conducted on the compression of very large scientific volumetric data [43], mostly on static volumes. More importantly, those methods focus on preserving critical features for fast loading and visualization of the data, while small and unimportant features are minimized to promote compression. In contrast, the dynamic, high-frequency smoke details play an indispensable role in realistic smoke animations, and many advanced simulation techniques have been developed precisely to create or add such details. To address these challenges, we develop new techniques for smoke animation compression. The contributions are multifold:

1. Dynamic animation is compressed with frame-based motion prediction and compensation. Key frames are used to predict the frames between them. Unlike the classic motion estimation in video compression, we perform the estimation through a bidirectional advection from the key frames, and the advected results are combined by a special method to better predict the original frames with fewer bits. A special weight map of minimal size then represents many intermediate frames for effective inter-frame compression.

2. The velocity field of the smoke animation is adaptively simplified to generate motion vectors over non-uniform blocks, which drive the bidirectional advection. The motion vector generation utilizes specific flow features. The motion vectors are small in size and, more importantly, reflect the spatial heterogeneity of smoke evolution, which leads to more accurate flow-directed motion compensation. In particular, we implement the adaptive simplification with the Finite-Time Lyapunov Exponent (FTLE), which represents the evolving structural patterns of smoke flows.

3. Pertinent intra-frame compression techniques are designed for the different data types: (1) the key frames are further compressed by a transform model (we use the discrete cosine transform) and quantization; (2) the weight maps of the predicted frames are highly quantized to reduce the bitstream; (3) the motion vectors can be downsampled and quantized. The parameters of these operations are controllable for different requirements on compression ratio and reconstruction quality. Finally, lossless encoding techniques are used for further compression.


Figure 18: Smoke animation framework overview.

Our ad hoc methods achieve better performance than general compression tools and the direct adaptation of 2D video techniques. Moreover, the process can be controlled to achieve varied results for different scenarios.

5.2 Compression Framework

Smoke animation compression aims to reduce the data size created by physically based smoke simulations. In this section, we rationalize and delineate our compression framework and background, which is illustrated in Fig. 18.

Inter-frame Compression The simulation generates a series of smoke density fields, consisting of the time frames of the animation. Each frame can be compressed using file or volume compression tools, but the temporal coherence between frames should be further exploited for better performance. One approach is to extend existing 2D video compression techniques (e.g., block-based motion compensation in MPEG) to the compression of 3D density frames. In particular, motion prediction and compensation promotes inter-frame compression by dividing the whole volume into a group of blocks (a fixed 16×16 block size is typical in video compression). A block A in the frame at time step t+1 can then be predicted from some block B in the existing frame at step t, assuming A moved from B during the time interval. Such block motion is measured by a motion vector m_{AB} between the source and destination, computed by conducting a neighborhood search for the best block match. Consequently, the original frame series is divided into key frames and intermediate predictive frames. Original density fields are used for the key frames. Each intermediate frame between two consecutive key frames only needs to store the motion vectors of its blocks and the difference, e.g., between the block A and the block B moved by the motion vector m_{AB}. We call a key frame a "K-Frame" and a predicted intermediate frame a "P-Frame". A series of "K P P ... P K P P ..." frames forms the real data stream. The number of P-Frames between K-Frames is an adjustable parameter.
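For concreteness, the classic block-matching motion estimation described above can be sketched in a few lines of NumPy. This is a hedged 2D sketch: the function name, the sum-of-absolute-differences (SAD) criterion, and the search radius are illustrative choices, not the exact settings of any particular codec.

```python
import numpy as np

def best_motion_vector(prev, cur, y, x, bs=16, search=4):
    """Exhaustive neighborhood search for the best match of the block
    at (y, x) in `cur` among nearby blocks of `prev`, using the
    sum-of-absolute-differences (SAD) criterion."""
    block = cur[y:y + bs, x:x + bs]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sy, sx = y + dy, x + dx
            if sy < 0 or sx < 0 or sy + bs > prev.shape[0] or sx + bs > prev.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(prev[sy:sy + bs, sx:sx + bs] - block).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```

The exhaustive search is quadratic in the search radius; real video encoders use faster approximate searches, but the principle is the same.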

Using Flow Advection for Inter-frame Compression Unfortunately, the direct adaptation of the 2D video compression model does not generate good results in our implementation of 3D smoke animation compression (see Fig. 22). First, the difference between K-Frames and P-Frames does not shrink as much as desired compared to the original density field. This is fundamentally different from video compression, where P-Frames typically need fewer bits for encoding than the original images do. The reason is that the dynamic evolution of smoke is nonlinear, rotational and turbulent. Second, the difference presents a diverse distribution of high-frequency details, which prevents transform methods from being effectively applied. Moreover, the small-scale details represent the realistic animation effects pursued by many advanced simulation methods, so they should be well preserved to reconstruct a realistic animation. Therefore, smoke animation data sets demand ad hoc compression exploiting flow features. We develop a novel method that utilizes a special bidirectional advection for inter-frame compression. Flow velocity fields are simplified to compute motion vectors over nonuniform blocks. The motion vectors advect key frames to approximate the intermediate frames.

The use of bidirectional advection in compression is discussed in Sec. 5.3.

Intra-frame Compression Data blocks inside both K-Frames and P-Frames are further com- pressed by utilizing intra-frame spatial coherence, including transform, quantization and loss- less encoding. The algorithms are described in Sec. 5.4.

5.3 Inter-frame Compression with Bidirectional Advection

Smoke animations are typically represented by smoke densities evolving inside velocity fields, which are created in a physical simulation by solving the Navier-Stokes equations. In the simulation, the densities are advected by velocities as:

∂ρ/∂t = −u · ∇ρ,   (24)

where u is the velocity at time t and ρ is the density. So the evolutionary relationship between consecutive density frames is largely defined by the velocity fields. The velocities, in a sense, play the role of motion vectors from the perspective of motion estimation in compression. This is the theoretical extreme case of block-based motion prediction, with each grid site considered a block. However, it is not practical to directly use the velocities as motion vectors, since the size of the velocity field is too large for compression. Therefore, we simplify the velocity

field to generate sparse motion vectors of small size, enabling effective compression. They are then used in compression and decompression for P-Frame prediction.

Figure 19: Illustration of the adaptive velocity simplification with FTLE. The domain is divided into nonuniform blocks based on FTLE. (a) 2D velocity field (256×256); (b) FTLE field; (c) Motion vectors over the blocks, with the smallest block at 8×8 and the largest block at 64×64.

5.3.1 Adaptive Velocity Simplification with FTLE

Velocity Simplification The goal of the velocity field simplification is to find a set of velocities (i.e., motion vectors) over a much coarser grid of blocks that represents the major flow behavior with a smaller data size. If the velocity field changes smoothly in a region, a block A can easily find a similar block B in the previous frame, where B evolves to A in a way close to an affine transform. This leads to good prediction, using a motion vector m_{AB} to represent the velocities in the blocks. In contrast, in a turbulent region smoke densities are greatly distorted by the velocities, so the prediction difference between blocks A and B inevitably increases.

Here the block size can be decreased to ameliorate the problem with less inter-block distortion. Based on this observation, we adaptively set the block sizes for motion prediction, i.e., using small blocks in highly distorted areas and large ones in relatively placid regions.

In general, the irregular partitions used in finite element models (e.g., tetrahedra), or iterative velocity refinement, can also simplify the velocities. However, such methods are not optimal for compression, since they create arbitrarily shaped blocks whose positions and sizes produce a large amount of extra data. In our implementation, we first partition the space into a regular octree. Then we choose blocks from the tree nodes at different levels, which creates a set of nonuniform blocks. In smooth flow regions, high-level tree nodes are selected, creating large blocks; in regions of largely varying velocities, low-level tree nodes are used, leading to small blocks. Fig. 19a illustrates the nonuniform blocks created from a 2D velocity field. The octree nodes have predefined fixed positions and sizes, so only a little information is needed in compression, which is important for reducing the extra data load.

The selection is guided by knowledge of the flow field, which should reflect the spatial pattern of velocity variation. For dynamic fields in animations, such knowledge also needs to account for the temporal evolution of these patterns. Therefore, we use the Finite-Time Lyapunov Exponent (FTLE) [73] for this purpose. It characterizes the rate of separation of infinitesimally close trajectories over a given time period, which we use to evaluate the rate of velocity variation in that period.

FTLE A Lagrangian particle moving inside a flow domain creates a trajectory governed by the differential equation dp(t)/dt = u(p(t), t), where p is the position of the particle at time t and u is the velocity. The trajectory moves a particle forward T time steps from its initial position, which defines a flow map: Φ_{t0}^{t0+T}(p) := p(t0 + T), with initial time t0 and integration period T. Based on the flow map, the amount of local stretching is computed by the Cauchy-Green deformation tensor:

∆ := (∇Φ_{t0}^{t0+T}(p))^T (∇Φ_{t0}^{t0+T}(p)),   (25)

where (·)^T denotes the transpose of a matrix. The tensor's eigenvalues measure the rate of separation of the underlying flow at a location p. The FTLE value is then defined as a time-dependent scalar: σ_T(p, t) = (1/|T|) log √(λ_max(∆)), where λ_max is the maximum eigenvalue of the tensor. Fig. 19b visualizes the FTLE field of the velocity field.
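A minimal 2D NumPy sketch of the FTLE computation may clarify the steps above. For brevity it assumes a steady velocity field, forward Euler integration, and nearest-neighbor velocity sampling; a real implementation would use higher-order integration and interpolation.

```python
import numpy as np

def ftle_2d(u, v, T, dt, h=1.0):
    """FTLE of a (steady, for simplicity) 2D velocity field on a grid:
    integrate the flow map, then take the largest eigenvalue of the
    Cauchy-Green tensor (Eqn. 25)."""
    ny, nx = u.shape
    Y, X = np.meshgrid(np.arange(ny, dtype=float),
                       np.arange(nx, dtype=float), indexing="ij")
    py, px = Y.copy(), X.copy()
    for _ in range(int(T / dt)):           # flow map Phi_{t0}^{t0+T}
        iy = np.clip(py.round().astype(int), 0, ny - 1)
        ix = np.clip(px.round().astype(int), 0, nx - 1)
        py += dt * v[iy, ix]
        px += dt * u[iy, ix]
    # gradient of the flow map, then the Cauchy-Green tensor per cell
    dpy_dy, dpy_dx = np.gradient(py, h)
    dpx_dy, dpx_dx = np.gradient(px, h)
    sigma = np.empty((ny, nx))
    for j in range(ny):
        for i in range(nx):
            F = np.array([[dpx_dx[j, i], dpx_dy[j, i]],
                          [dpy_dx[j, i], dpy_dy[j, i]]])
            delta = F.T @ F                # Eqn. 25
            lmax = np.linalg.eigvalsh(delta)[-1]
            sigma[j, i] = np.log(np.sqrt(max(lmax, 1e-12))) / abs(T)
    return sigma
```

A uniform flow stretches nothing, so its FTLE is zero everywhere; shear or saddle flows produce ridges of high FTLE, which is exactly what drives the block selection below.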

Motion Vectors on Nonuniform Blocks In our computation, T is set to the number of time steps between two consecutive key frames, so the FTLE value at each position measures the velocity field distortion over this period. As described above, these values are used to select blocks from the octree. In implementation, we compute FTLE for each leaf block on a downsampled velocity field. A parent node's FTLE is the average of its children's. If this value is smaller than a given threshold ξ for a node, the block is selected and its children nodes (blocks) are no longer used. Otherwise, we repeat the operation on its children nodes.

FTLE is an integrated measurement of flow features over a time period, so the nonuniform blocks need to be selected only once between two consecutive K-Frames. This is advantageous, since we then only need to store the nonuniform block information at the key frames. In comparison, other measures of velocity variation, such as strain, are defined at each time step, so the block structure might need to be updated at every frame. In addition to the FTLE test, if no or very little smoke evolves inside a block, we obviously do not need to visit its children.

Once the nonuniform blocks are created, we compute one motion vector per block by averaging the velocities inside the block. The parameter ξ controls the accuracy with which the motion vectors represent the flow field, and hence the data size for compression. Fig. 19c shows the motion vectors over the created nonuniform blocks, where the smallest block is 8×8 and the largest is 64×64. In our experiments on real 3D smoke animations, usually three block sizes (8×8×8, 16×16×16, and 32×32×32) are used for the motion vectors.
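A quadtree sketch (the 2D analogue of the octree described above) illustrates the threshold-based block selection and the per-block motion-vector averaging. Function names and the top-down recursion layout are illustrative choices of this sketch, not the exact implementation.

```python
import numpy as np

def select_blocks(ftle, xi, y0, x0, size, min_size):
    """Quadtree version of the octree pruning: keep a block if its
    mean FTLE is below the threshold xi (or it is already the
    smallest allowed block), otherwise recurse into the children."""
    if size == min_size or ftle[y0:y0 + size, x0:x0 + size].mean() < xi:
        return [(y0, x0, size)]
    half = size // 2
    blocks = []
    for dy in (0, half):
        for dx in (0, half):
            blocks += select_blocks(ftle, xi, y0 + dy, x0 + dx, half, min_size)
    return blocks

def motion_vectors(u, v, blocks):
    """One motion vector per block: the average velocity inside it."""
    return {(y, x, s): (u[y:y + s, x:x + s].mean(), v[y:y + s, x:x + s].mean())
            for (y, x, s) in blocks}
```

Regions of high FTLE are subdivided down to the minimum block size, while placid regions stay as single large blocks, matching the behavior in Fig. 19c.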

5.3.2 Reconstruction from Motion Vectors

The generated motion vectors are used in both the compression and decompression stages to perform the advection. Here we need to reconstruct a full velocity field from the motion vectors. This reconstructed velocity field u∗ approximates the original one u. Note that although the original velocity field is available during compression, we have to use the approximated field, which is what is available during decompression.

To compute the reconstructed velocity u∗(p) at a position p, the motion vectors close to


Figure 20: Bidirectional advection for P-Frame estimation from two consecutive K-Frames. Red and purple arrow lines represent forward advection and backward advection, respectively.

this position within a given range are used. Each such motion vector m_i contributes to p with a distance-based weight computed by applying a Gaussian kernel. Finally, u∗(p) is computed by summing up the contributions as

u∗(p) = Σ_i G(d_i) m_i / Σ_i G(d_i),   (26)

where d_i is the distance from p to the block center of m_i, and G is the Gaussian function with mean µ = 0 and standard deviation σ = 1.0. This method makes u∗ change smoothly across block boundaries.
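Eqn. 26 can be sketched directly. The search radius and the dictionary layout mapping block centers to motion vectors are assumptions of this sketch; the implementation performs the same sum as a parallel splatting pass.

```python
import numpy as np

def gaussian(d, sigma=1.0):
    """Unnormalized Gaussian kernel G (mean 0); the normalization
    constant cancels in the weighted average of Eqn. 26."""
    return np.exp(-d * d / (2.0 * sigma * sigma))

def reconstruct_velocity(p, mvs, radius=3.0):
    """Eqn. 26: u*(p) as the Gaussian-weighted average of the motion
    vectors whose block centers lie within `radius` of p.
    `mvs` maps block centers (y, x) to motion vectors (uy, ux)."""
    num = np.zeros(2)
    den = 0.0
    p = np.asarray(p, dtype=float)
    for center, m in mvs.items():
        d = np.linalg.norm(p - np.asarray(center, dtype=float))
        if d <= radius:
            w = gaussian(d)
            num += w * np.asarray(m, dtype=float)
            den += w
    return num / den if den > 0 else num
```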

5.3.3 Bidirectional Advection and Weight Map

Forward and Backward Advection We develop a new method that advects the smoke density fields of K-Frames to estimate the P-Frames. The advection is performed in both the forward and backward directions based on the reconstructed field u∗. This bidirectional approach greatly reduces the deviation caused by advecting in one direction only. In detail, after applying the forward advection for several steps, the advected density field may depart too much from the original one, leading to an unsatisfactory estimation. We therefore apply the backward advection concurrently to compensate for the loss. Finally, the advected results are blended together to create a closer approximation of the original data, so that the P-Frame estimation is optimized.

Fig. 20 illustrates our bidirectional advection algorithm. Given two consecutive K-Frames (A and B), we create the intermediate P-Frames K, K+1, and K+2. Note that the number of P-Frames C_P between two K-Frames is adjustable; here C_P = 3. The red arrow lines represent the forward advection operations, and the purple ones the backward advection operations. First, K-Frame B is directly backward advected by u∗ to create the temporary fields of all of K, K+1 and K+2. However, simply performing the forward advection for all steps in the same way and merging the temporary backward and forward advected results does not provide a good approximation. The reason is that the deviation of the advected results from the original data increases as the predicted steps move away from the K-Frames; the introduced error peaks at the middle step between A and B (e.g., K+1 in the figure). The accumulated error from both A and B therefore undermines the compression ratio. Moreover, such decompression results have inconsistent quality at different time steps, so the reconstructed animation becomes jittery, an unacceptable effect for smoke animation.

Weighted Blending To overcome the problem, we design a specific algorithm for bidirectional advection. As shown in Fig. 20, at each step the forward advection is performed on the previous P-Frame to create a temporary forward frame. It is then blended with the pre-generated backward advected frame. The blending function at each P-Frame is:

d = α d^(F) + (1 − α) d^(B),   (27)

where d is the estimated density, α ∈ [0, 1] is the blending weight, and d^(F) and d^(B) are the densities created by the forward and backward advection, respectively. This function is used to reconstruct the density fields from the K-Frames in decompression. In compression, we only need to store the weight α at each position. The weight is computed by replacing d in Eqn. 27 with the original density d_o, which is known during compression. All the weights form a weight map at each intermediate frame, which is the real P-Frame data. Unlike canonical predictive compression methods, our approach does not need to store the difference between the P-Frames and the original data, thanks to the bidirectional advection.
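Solving Eqn. 27 for α given the original density gives the compression-side weight computation. The guard for positions where the two advected estimates coincide (where any α reproduces d_o) is an implementation choice of this sketch.

```python
import numpy as np

def weight_map(d_orig, d_fwd, d_bwd, eps=1e-8):
    """Solve Eqn. 27 for alpha at every grid position, given the
    original density and the forward/backward advected estimates.
    Where the two estimates coincide, any alpha works; we pick 1."""
    diff = d_fwd - d_bwd
    valid = np.abs(diff) > eps
    alpha = np.where(valid,
                     (d_orig - d_bwd) / np.where(valid, diff, 1.0),
                     1.0)
    return np.clip(alpha, 0.0, 1.0)

def blend(alpha, d_fwd, d_bwd):
    """Eqn. 27: the decompression-side reconstruction."""
    return alpha * d_fwd + (1.0 - alpha) * d_bwd
```

Because the weight map is what gets stored, its reconstruction `blend(weight_map(...), ...)` recovers the original density wherever it lies between the two advected estimates.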

Partial Blending We further perform the blending at each step only on a subset of all grid positions. In particular, at each step K + i (0 ≤ i < C_P), λ_i defines the percentage of the positions to be modified. It is computed by first applying a Gaussian function G(i) with mean µ = ⌊C_P/2⌋ and a standard deviation σ set to a controllable value (we use σ = 0.03). The G(i) values over all frames 0 ≤ i < C_P are then normalized to yield the λ_i used between the two K-Frames. This further addresses the inconsistent-deviation problem: at the steps far from both K-Frames (i.e., i close to ⌊C_P/2⌋), where the advection deviation is large, the Gaussian distribution allows more correction, with a large G(i), from the original data d_o to the advected results through Eqn. 27.

The unmodified positions have α = 1, which is highly compressible with encoding tools. This partial blending reduces the size of the P-Frame weight maps for more compression. More importantly, it is the key to overcoming the unrealistic effects in reconstructed animations.
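The Gaussian schedule for λ_i might be sketched as follows. Note that both the normalization step and the value of σ are assumptions of this sketch: the σ = 0.03 quoted above presumably applies to a normalized frame index, so the default here is in units of the raw index.

```python
import numpy as np

def blend_fractions(cp, sigma=0.3):
    """Per-frame fraction lambda_i of grid positions to re-blend:
    a Gaussian over the frame index centered at floor(cp/2), so the
    middle frames (largest advection deviation) receive the most
    correction.  Normalizing so the fractions sum to 1 is an
    assumption of this sketch."""
    i = np.arange(cp, dtype=float)
    g = np.exp(-((i - cp // 2) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()
```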

Advection Methods The advection is implemented in the same way as in fluid solvers. Semi-Lagrangian advection with trilinear interpolation may increase the approximation error, so an advanced advection method, MacCormack advection [71], is used in our implementation.
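A 1D periodic sketch of MacCormack advection built on a first-order semi-Lagrangian step; the actual implementation works on 3D grids with trilinear interpolation, and production versions also clamp the corrected result to suppress overshoot.

```python
import numpy as np

def semi_lagrangian(phi, vel, dt):
    """First-order semi-Lagrangian advection on a periodic 1D grid:
    trace back along the velocity and linearly interpolate."""
    n = phi.size
    x = np.arange(n, dtype=float) - dt * vel   # departure points
    i0 = np.floor(x).astype(int)
    f = x - i0                                 # interpolation fraction
    return (1 - f) * phi[i0 % n] + f * phi[(i0 + 1) % n]

def maccormack(phi, vel, dt):
    """MacCormack advection [71]: advect forward, advect the result
    backward, and use the round-trip difference to cancel the
    first-order error."""
    fwd = semi_lagrangian(phi, vel, dt)
    back = semi_lagrangian(fwd, -vel, dt)
    return fwd + 0.5 * (phi - back)
```

For a constant velocity aligned with the grid, the semi-Lagrangian step is exact and the MacCormack correction vanishes; the benefit appears for fractional displacements, where the correction recovers detail the linear interpolation smears.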

5.4 Intra-frame Compression

After the inter-frame compression based on motion prediction, the K-Frames of density fields, the P-Frames of weight maps, and the motion vectors over nonuniform blocks are further compressed by utilizing intra-frame spatial coherence.

Key Frame Compression Frequency-domain compression is an effective way to aggregate information into less data storage. The densities in K-Frames are divided into 3D blocks, and a block-based transform is implemented with the discrete cosine transform (DCT). With the medium-sized blocks created by the velocity simplification, DCT provides fast compression with quality comparable to a wavelet transform.

The transform creates sparse data: a large number of coefficients are zero or close to zero. The data size can be further reduced through quantization of the floating-point data to generate more zeros, which leads to lossy but high-ratio compression. In detail, a value x after the transform is quantized by a coefficient φ as the integer ⌊x/φ⌋ to remove smaller values. A larger φ leads to a better compression ratio but reduced quality; it can be controlled for different application requirements.
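The transform-plus-quantization step can be sketched for one 2D block (the real pipeline uses 3D blocks). The orthonormal DCT-II matrix is built explicitly so the sketch is self-contained, and dequantizing to bin centers is an implementation choice, not necessarily the paper's.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis functions)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def compress_block(block, phi):
    """DCT the block, then quantize: q = floor(c / phi).  A larger
    phi zeroes more small coefficients (better ratio, lossier)."""
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T          # separable 2D DCT
    return np.floor(coeffs / phi).astype(np.int64)

def decompress_block(q, phi):
    """Dequantize to bin centers and apply the inverse DCT."""
    D = dct_matrix(q.shape[0])
    coeffs = (q + 0.5) * phi
    return D.T @ coeffs @ D
```

Since the DCT is orthonormal, the spatial reconstruction error is bounded by the quantization error in the coefficients, which is at most φ/2 per coefficient.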

Weight Maps Weight maps are compressed in two steps. First, we can downsample the weights defined on the original grid, so that the weight map is stored on a smaller grid. This

Figure 21: Using different C_P, the number of P-Frames between two K-Frames, in compression. (a) One K-Frame with C_P = 5; (b) C_P = 20, compared with (a); (c) One middle P-Frame with C_P = 5; (d) C_P = 20, compared with (c).

operation is optional and can play an important role for high-resolution smoke fields.

Table 3: Compression performance of several smoke animation clips. The clips are created over a short time period from different smoke simulations. The weight map quantization coefficient ω = 5 and the DCT quantization coefficient φ = 0.01. Entries are compressed data size / compression ratio.

Simulation Grid   Clip Data Size   Lossless        CP = 0          CP = 5          CP = 10         CP = 20
96×128×96         315.0MB          32.51M/9.69     10.36M/30.41    6.24M/50.41     5.17M/60.9      4.38M/71.86
192×256×192       1.4GB            140.18M/10.23   39.60M/36.20    11.30M/126.87   7.14M/200.78    4.97M/288.45
250×320×250       9.0GB            193M/47.80      84.09M/109.71   28.1M/328.33    18.5M/498.72    13.5M/683.42

Second, the floating-point weight α is quantized into only a few values. The number of possible values, ω, is the weight quantization control coefficient. For example, with ω = 10 the weight values belong to {0, 0.1, 0.2, ..., 1.0}, so four bits suffice to represent one α value. ω can be tuned to control quality and compression ratio for different datasets: as long as the reconstructed smoke animation retains satisfactory quality, a smaller ω greatly reduces the compressed size of the P-Frames.
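The weight quantization can be sketched as uniform rounding to ω+1 levels; the subsequent packing into 4-bit codes is omitted here for brevity.

```python
import numpy as np

def quantize_weights(alpha, omega=10):
    """Quantize alpha in [0, 1] to omega + 1 evenly spaced levels,
    stored as small integers (4 bits suffice for omega = 10)."""
    return np.rint(alpha * omega).astype(np.uint8)

def dequantize_weights(q, omega=10):
    """Map the level indices back to weights in [0, 1]."""
    return q.astype(np.float64) / omega
```

The worst-case error of this rounding is 1/(2ω), i.e., 0.05 for ω = 10, which is why small ω values remain visually acceptable.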

Motion Vectors and Nonuniform Blocks The floating-point motion vectors are also quantized to reduce storage: each component is represented by a two-byte short integer. For the block structure, since our nonuniform blocks are created by pruning an octree (Sec. 5.3.1), we do not need to store the arbitrary position and size of each block, which would cost six floating-point values. Instead, only two bytes per block are used: its depth level in the tree and its sequence index within that level.
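A sketch of the two-byte block encoding and the short-integer motion-vector quantization. The 4/12-bit split of the two bytes and the fixed-point scale factor are assumptions of this sketch, since the exact layout is not specified above.

```python
import struct
import numpy as np

def encode_block(level, seq):
    """Pack an octree block as its depth level plus its sequence
    index at that level into two bytes (assumed split: 4 bits of
    level, 12 bits of sequence)."""
    assert 0 <= level < 16 and 0 <= seq < 4096
    return struct.pack(">H", (level << 12) | seq)

def decode_block(data):
    """Inverse of encode_block."""
    v = struct.unpack(">H", data)[0]
    return v >> 12, v & 0xFFF

def quantize_mv(m, scale=256.0):
    """Store each floating-point motion-vector component as a
    two-byte short via an assumed fixed-point scale."""
    return np.clip(np.rint(np.asarray(m) * scale), -32768, 32767).astype(np.int16)
```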

Lossless Encoding Finally, we apply general-purpose lossless encoding to compress the bitstream after these intra-frame operations. In our implementation we adopt the widely used lossless compression library zlib, which uses an LZ77 [79] variant. Other methods, such as Huffman coding and run-length encoding, could also be applied.
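The final entropy stage can be demonstrated with Python's zlib binding as a stand-in for the zlib library used in the implementation. Quantized weight maps are mostly the single value representing α = 1, which deflate (an LZ77 variant plus Huffman coding) compresses very well.

```python
import zlib
import numpy as np

# A mostly constant quantized weight map: the "untouched" positions
# all carry the code for alpha = 1 (10 when omega = 10), with a few
# blended positions at the start.
q = np.full(64 * 64, 10, dtype=np.uint8)
q[:40] = np.arange(40) % 11

raw = q.tobytes()
packed = zlib.compress(raw, level=9)
assert zlib.decompress(packed) == raw     # lossless round trip
print(len(raw), "->", len(packed))        # raw is 4096 bytes; packed is far smaller
```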

5.5 Decompression

In decompression, we first apply the necessary operations of lossless decoding, upsampling, and inverse transform for intra-frame decompression. Second, the motion vectors over the nonuniform blocks are used to reconstruct the approximated velocity field. Then the key frames are bidirectionally advected and blended using the weight maps to create the intermediate frames, as described in Sec. 5.3.3. Decompression should preferably run at a fast computational speed, and our method is largely amenable to parallel computing, enabling GPU acceleration. First, the velocity field reconstruction (Eqn. 26) can be conducted much like splatting: each block works as a splat contributing its motion vector to the grid positions, and the splatting of multiple blocks is performed in parallel. Second, the advection computation is easily parallelized on the GPU because different grid positions compute it independently. Finally, the blending operation is also parallelizable, as it too operates on individual grid positions.

5.6 Experiments and Performance

We study the compression performance using several smoke animations created from different simulations. Fig. 22 displays snapshots rendered from the decompressed density volumes after using different compression methods. A 192×256×192 grid is used in this simulation. Fig. 22a is rendered directly from the original data with no compression. After applying our compression approach, the reconstructed result (Fig. 22b) is close to Fig. 22a with little variation. Here the weight map quantization coefficient ω = 5 and the DCT quantization coefficient φ = 0.01. The compressed result rendered in Fig. 22b has a compression ratio

Figure 22: Snapshots of smoke animation created from decompressed density volumes (192×256×192) after compression. (a) Original data with no compression (clip size: 1.4GB); (b) Compression by our method (compressed clip size: 7.1MB); (c) The difference of (b) from (a), visualized in red; (d) Compression by extending 2D video compression techniques to 3D volumes (compressed clip size: 8.4MB); (e) The difference of (d) from (a), visualized in red. Here, (b)(c) and (d)(e) achieve a similar compression ratio of around 200 relative to (a), but (d)(e) introduce excessive aliasing, which is destructive in the video.

of 200, with the clip size reduced to 7.1MB. We compute the difference of the two images and map the ratio difference/original at each pixel to red, which visualizes the introduced error in Fig. 22c.

Figure 23: Using different C_P in compression with a 250×320×250 simulation. (a) Original; (b) C_P = 5; (c) C_P = 10; (d) C_P = 20.

In comparison, we also implement density field compression by extending typical 2D video compression techniques to 3D volumes. The implemented algorithms include (1) motion prediction with a fixed block size (16×16×16), where motion vector generation is conducted by a complete neighborhood search for the best matching blocks considering the density distribution;

(2) Discrete Cosine Transform over the blocks; (3) Quantization; and (4) Lossless encoding.

Fig. 22d shows the result, and the introduced error is visualized in Fig. 22e. This method achieves a compression ratio of about 170 with a compressed clip size of 8.4MB. Although this ratio is comparable to ours at 200, Fig. 22e manifests excessive aliasing compared to Fig. 22c. The aliasing appears as dispersed red spots that jitter and disturb the video, an effect destructive to a smooth smoke animation. It shows that smoke animation is not easily compressed by the direct adaptation of canonical motion prediction; our advection-based prediction provides better compression results.

For the same simulation, Fig. 21 shows the results of our method using different C_P, the number of P-Frames between two K-Frames. Figs. 21a-b show the rendering results of a decompressed K-Frame with C_P = 5 and C_P = 20, respectively; Figs. 21c-d show the results for a decompressed P-Frame. Using C_P = 5 and C_P = 20 yields compressed size/ratio of 11.3MB/126 and 4.9MB/288, respectively. Using C_P = 20 lowers the quality of both K- and P-Frames, but it still achieves a reasonable reconstruction of the animation, because the difference (shown in red) evolves continuously in the video. This is a key difference from Fig. 22d-e. It shows that our method produces acceptable results even at a large compression ratio, and users can control C_P for various requirements. Fig. 23 displays similar results using another 250×320×250 animation.

Table 3 summarizes the compression performance of several animations. The clip data size measures the raw volume data of the clip in the final video. Note that using a longer time period creates larger clip data but has no effect on the compression ratio. The lossless

results are generated by zlib. C_P = 0 refers to compression using no inter-frame compression, that is, applying DCT and quantization to all frames independently. We also show the performance with C_P = 5, 10, 20 using motion prediction with advection. For instance, for a 9GB clip (Fig. 23a), our approach achieves compressed size/ratio of 28.1MB/328 with C_P = 5 (Fig. 23b), 18.5MB/498 with C_P = 10 (Fig. 23c), and 13.5MB/683 with C_P = 20 (Fig. 23d).

Table 4: Quality measurement of Table 3.

                  Volume PSNR (dB)              Image PSNR (dB)
Simulation Grid   CP=0   CP=5   CP=10   CP=20   CP=0   CP=5   CP=10   CP=20
96×128×96         56.5   39.9   34.1    23.9    30.8   21.8   17.6    16.0
192×256×192       56.3   34.2   30.9    27.9    24.0   19.5   17.7    16.1
250×320×250       50.5   38.5   36.3    33.9    24.1   24.0   23.2    21.7

The table shows that the performance is much better than the lossless compression and the individual volume compression (C_P = 0). The compressed data are distributed as roughly 70% K-Frames, 27% P-Frame weight maps, and 3% motion vectors; the percentages vary only slightly across animations.

Quality Measures We measure the quality of the compression in Table 3 by volume PSNR and image PSNR. PSNR (Peak Signal-to-Noise Ratio) measures the ratio (in decibels, dB) between the maximum possible power of the original data and the power of the error introduced by compression; higher is better. Table 4 shows the results. Volume PSNR measures the quality on the 3D density fields, and image PSNR measures the quality on the rendered images. The table shows decreasing quality with increasing C_P. However, we find that these measures are not fully adequate for evaluating smoke animation compression. First, a volume can be rendered by different methods, so the volume PSNR does not directly define the animation quality. Second, the image PSNR is computed on individual images, so it cannot reflect

the temporal consistency of the animation. Moreover, smoke animation is special: for observers, the visual quality depends on more than voxel- or pixel-level fidelity. For example, in Fig. 22 the volume PSNR and image PSNR are nearly equivalent for the two different compression methods, but the rebuilt animations show clear differences. The extended video method introduces unrealistic aliasing such as jittering, which is unacceptable in smoke animation, while our method creates approximated results but preserves the smooth dynamic evolution. Such reconstruction quality relates to many factors, including the small-scale details, the smoothness, the rendering effects, and human perception. Fig. 24b shows the result of using a smaller rendering scattering coefficient σ_s; it also uses C_P = 20, but the aliasing is smaller than in Fig. 21b, due to the dark and thin smoke. We will investigate measurement metrics in future work.

Table 5: Using different weight map quantization coefficients ω.

ω     Compressed Size/Ratio   Volume PSNR   Image PSNR
5     22.89M/62.62            29.0          18.7
10    23.80M/60.23            29.0          19.0
20    31.49M/45.52            29.0          19.2
100   41.37M/34.65            29.0          19.4
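The PSNR metric used in Tables 4-6 can be sketched as follows; taking the peak value from the original data is an assumption of this sketch.

```python
import numpy as np

def psnr(original, reconstructed, peak=None):
    """Peak signal-to-noise ratio in dB; higher is better.  `peak`
    defaults to the maximum of the original data."""
    mse = np.mean((original - reconstructed) ** 2)
    if mse == 0:
        return np.inf
    if peak is None:
        peak = original.max()
    return 10.0 * np.log10(peak * peak / mse)
```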

Weight Quantization We test different weight map quantization coefficients ω on the 192×256×192 example. Table 5 shows the results with C_P = 10 and the DCT quantization coefficient φ = 0.01. A larger ω creates more accurate but larger P-Frame data (see Sec. 5.4). In our experiments, ω = 5 or ω = 10 yields optimal results.

DCT Quantization We further vary the DCT quantization coefficient φ (see Sec. 5.4). The quantization is applied to K-Frames after the DCT transform to reduce information. Fig. 24a is the result using φ = 0.4, compared to Fig. 21b where φ = 0.01. Using a large coefficient directly

Figure 24: Varied compression cases. (a) Using DCT quantization coefficient φ = 0.4; (b) Using a smaller rendering scattering coefficient σ_s.

lowers the quality of the animation, which, however, is useful when a smaller size is demanded (e.g., in some network environments). Table 6 shows the results on the 192 × 256 × 192 example; N/A means no quantization is applied.

Table 6: Using different DCT quantization coefficients φ.

  φ      Compression Size/Ratio   Volume PSNR   Image PSNR
  N/A    21.02M/68.20             32.19         19.86
  0.001  11.50M/124.66            31.89         17.77
  0.01   7.14M/200.78             30.99         17.76
  0.1    3.58M/400.46             30.23         15.40
  0.4    2.77M/517.54             29.72         15.13
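The effect of φ can be illustrated with a minimal sketch of uniform quantization of 2D DCT coefficients. This assumes, purely for illustration, that φ acts as the quantization step; the actual scheme of Sec. 5.4 may differ, and the block contents are arbitrary:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    M[0] /= np.sqrt(2.0)
    return M

def quantize_roundtrip(block, phi):
    """DCT -> uniform quantization with step phi -> inverse DCT."""
    M = dct_matrix(block.shape[0])
    coeffs = M @ block @ M.T               # 2D DCT of the block
    if phi > 0:
        coeffs = np.round(coeffs / phi) * phi   # coarser step = more loss
    return M.T @ coeffs @ M                # inverse transform

rng = np.random.default_rng(1)
block = rng.random((8, 8))
for phi in (0.01, 0.4):
    err = np.abs(quantize_roundtrip(block, phi) - block).max()
    print(phi, err)
```

Because the transform is orthonormal, the reconstruction error grows directly with the quantization step, matching the monotone PSNR drop in Table 6.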

Computing Performance Table 7 reports the performance of our method on a workstation with two Intel Xeon 2.53 GHz CPUs and 12 GB memory. Note that several functions are executed in both the compression and decompression stages. The zlib encoding and decoding times include file writing and reading. To further improve performance we need to accelerate (1) advection, since we use the MacCormack method for better quality (linear methods can be substituted when speed is demanded), and (2) u∗, which reconstructs approximate velocities from the motion vectors. These computations are amenable to GPU parallelization, and we have implemented both on a single GPU (nVidia Tesla C1060). For the 96 × 128 × 96 grid, the bidirectional advection becomes about 4 times faster at 114 milliseconds per step, and computing u∗ takes less than 1 millisecond per step because the parallel splatting is fast. The single GPU's memory restricts use on large grid volumes, since our implementation is not yet highly optimized on either CPU or GPU with respect to speed and memory consumption.

We are investigating the implementation of our compression/decompression on GPUs with special block streaming and processing algorithms. Finally, the preprocessing of FTLE is performed very fast on the GPU. It is computed on a very small grid, because the FTLE values are used at the granularity of blocks (Sec. 5.3.1). By setting the smallest block to 8 × 8 × 8, a 192 × 256 × 192 velocity field is downsampled to 24 × 32 × 24, where FTLE is computed in around 60 milliseconds.

Table 7: Computing performance per step in milliseconds. MV: motion vector generation; u∗: reconstruction of velocities from motion vectors; ω: weight map generation; Inv. DCT: inverse DCT transform.

                   Compression                                                  Decompression
  Grid             MV    u∗     Advection  ω     Blending  DCT   Encoding  Total   Decoding  Inv. DCT  u∗     Advection  Blending  Total
  96 × 128 × 96    40    162    413        145   4         89    37        1231    33        6         162    410        4         618
  192 × 256 × 192  1645  1317   3370       1132  37        671   3791      11963   290       35        1200   3120       25        4670
  250 × 320 × 250  2843  11114  20015      5386  80        1379  4921      45748   392       69        11100  20000      75        31636
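The block-level downsampling described above can be sketched as a simple block average over 8 × 8 × 8 cells (an illustrative numpy sketch of one scalar component; the actual downsampling used for FTLE may differ):

```python
import numpy as np

def block_average(field, b=8):
    """Average b^3 blocks, e.g. (192, 256, 192) -> (24, 32, 24)."""
    nx, ny, nz = field.shape
    assert nx % b == 0 and ny % b == 0 and nz % b == 0
    # Split each axis into (blocks, b) and average over the b-axes.
    return field.reshape(nx // b, b, ny // b, b, nz // b, b).mean(axis=(1, 3, 5))

coarse = block_average(np.ones((192, 256, 192), dtype=np.float32))
print(coarse.shape)  # (24, 32, 24)
```

Each velocity component would be downsampled the same way before the FTLE integration runs on the coarse grid.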

5.7 Discussion

We have discussed the compression quality measurement and computing performance in Sec. 5.6. Here we further discuss some related issues:

1. Our method performs better for high-resolution data sets, since it exploits coherence better than on low-resolution data. For the 192 × 256 × 192 case in Table 3, our compression ratio is about 12 (CP = 5) to 28 (CP = 20) times larger than that of lossless tools. More importantly, compared with CP = 0, where each frame is compressed individually with the lossy approach, our compression ratio is 4 times better when CP = 5. Note that the best achievable improvement here is less than 5 times, which would mean all P-Frames are discarded, since K-Frames are compressed in the same way. When CP = 10 and CP = 20, our method is 5.5 and 8 times better, while the theoretical upper limits are 10 and 20 times, respectively. Since P-Frames obviously cannot be dropped entirely, these results show the effectiveness of the compression, while there is still room for improvement by refining the algorithms at different stages.

2. Our method focuses on grid-based smoke animation, which is widely used in graphics applications. Particle-based animation may be converted to a grid, or compressed in other specific ways; we believe our strategy of using advection for predictive compression can also be applied there. Moreover, we will further study the compression of animations of other natural phenomena.

3. Currently we compute motion vectors from the velocity fields during compression, which are available in most simulation-based graphics applications. If the velocity field is not available, we can study how to extract approximate motion vectors, for example by density gradient aggregation or other advanced methods.

4. In fluid simulation, the time step size ∆t determines the evolution of the smoke dynamics. It affects compression because it defines the original difference between two frames. This value can be used to guide the choice of the control coefficients (CP , φ, ω) in our compression method.

5. An advanced topic is to control compression and decompression based on rendering parameters and environment. By considering rendering factors, we can stream varied compressed data to multiple sites with varied requirements. This will be studied together with the quality control metrics.

CHAPTER 6

Future Work

6.1 Improving Turbulence Enhancement and Fluid Control

The turbulence control provides flexibility for users; however, unrealistic turbulence may sometimes be created by an inappropriate choice of parameters, forcing users to rerun the simulation with modified parameters. This is a typical challenge for many fluid simulation techniques, and it directs our future work on improving the method. For example, we will study the relation between local physical features and the probability density function to assist animation control. We will also design tools that run a fast preview simulation (e.g., with a small number of particles) and use the resulting knowledge to control the final simulation. Moreover, visual effects are mainly defined by surface effects, in particular for liquid simulations.

The fluctuation of flows below the surface is not visibly significant in many cases, so we will further improve our method by considering more complex handling of free surfaces with SIP evolution. The method can be incorporated with other advanced SPH variants, such as WCSPH or PCISPH. Adaptive schemes and GPU acceleration can be applied to further improve the fluid's quality and performance, thanks to the independence of our method from the underlying SPH implementation. The algorithm will also be ported to other Lagrangian fluid solvers, such as pure vortex fluids.

One limitation of our smoke control method is that, if the LCS control is set too low in a long simulation, the results may appear unconstrained later on due to the accumulated fluctuation of energy. We therefore plan to study how to adaptively adjust the control effects over time to further enhance the method. Also, we currently use the LCS region as a mask defined by discrete grid cells; using other geometric representations, such as meshes or dynamic implicit surfaces, may allow improved control techniques or increased analytic power.

6.2 Texture based Fluid Appearance Enhancement

Appearance enhancement using texture is an efficient way to accelerate flow modeling: high-resolution textures on a low-resolution flow can achieve highly detailed results with great speed. However, temporal and spatial consistency are difficult to guarantee simultaneously. Advected textures [80, 81] and optimization-based texture synthesis [82] have been proposed to handle this problem, but both apply a constant enhancement over the whole domain without considering the variation of the underlying flow structure. As a result, flow patterns can be used to improve them in different ways.

6.2.1 Advected Texture based on Flow Patterns

For the advected texture method, the latest Lagrangian texture advection [81] uses a Poisson-disk distribution of particles to carry texture patches that are advected by the flow. The final image is produced by blending the particles' texture patches. If the distortion of a texture patch exceeds a threshold, the corresponding particle is deleted and a new particle is reseeded. In their method all particles have the same radius and are homogeneously distributed in the flow domain. The computational expense is proportional to the number of particles, so for a given domain it grows as the particle radius shrinks. Small particles better represent fine details and handle boundaries, but are more computationally intensive, while big particles with large textures are easily distorted, which may cause frequent deletion and reseeding. The tradeoff between fine detail and speed must therefore be determined by repeated experiments, which is burdensome for the user. We plan to use flow features (e.g., FTLE, strain, curl) to adaptively adjust the particle radius, which we hope will bring the following benefits:

• Very small particles can be used to render important local areas with more detail;

• Particles with bigger radii can be used to render placid areas at high speed;

• The user only needs to specify the lower/upper bounds of the particle radius, which speeds up the process of trial and error;

• The area's importance can be related to the flow distortion, so the frequency of particle deletion and reseeding can be reduced, because a small particle survives longer in high-distortion areas.
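A hypothetical feature-to-radius mapping might look like the following sketch. The linear blend, the function name, and all thresholds are illustrative assumptions for this proposal, not part of [81]:

```python
def particle_radius(ftle, r_min, r_max, ftle_lo, ftle_hi):
    """Map a local flow feature (here FTLE) to a particle radius.

    High FTLE (strong distortion) -> small radius for fine detail;
    low FTLE (placid flow) -> large radius for speed. Only r_min and
    r_max would be user-specified; the FTLE bounds could come from
    the precomputed field. A linear blend is one of many choices.
    """
    t = (ftle - ftle_lo) / (ftle_hi - ftle_lo)
    t = min(max(t, 0.0), 1.0)           # clamp to [0, 1]
    return r_max - t * (r_max - r_min)  # invert: high feature, small radius

print(particle_radius(0.9, r_min=0.5, r_max=4.0, ftle_lo=0.0, ftle_hi=1.0))
```

Other features (strain, curl) would plug into the same mapping; a smooth, clamped transition also helps with the radius-transition issue raised below.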

During the implementation some practical problems will inevitably appear. Here are some issues we can foresee:

• Which flow features should be used to map the flow patterns to the particle radius?

• Due to the particles' dynamic behavior, some particles may roam between important and unimportant areas; the radius transition should be implemented seamlessly according to the pattern changes;

• The time step during advection is an important factor for both accuracy and speed, and it can also be adjusted according to the flow patterns.

6.2.2 Optimization based Texture Synthesis using Flow Patterns

Optimization-based texture synthesis [83] is an efficient way to synthesize texture on any given surface, and it is easily extended to other texture-related problems by adding extra constraints to the optimization equations. In this spirit, flow consistency constraints have been introduced to enhance flow appearance [82]. Unlike the texture advection method, this approach works with any given texture rather than only textures that blend together easily. However, as mentioned in [81], its main deficiency is that it produces rigidly moving texture chunks around structured features and shows sporadic sudden changes. We plan to use the flow pattern to alleviate this problem. The enhancement can be done in two ways:

• The texture similarity in the original optimization method is computed in a pre-defined neighborhood whose size is very important: if the size is too small, the final image differs from the input texture; if it is too big, the final image cannot reflect the flow. Therefore, similar to Sec. 6.2.1, we plan to adaptively adjust the neighborhood size according to the flow features;

• Since flow patterns reflect the flow's relative importance at every position, we plan to include flow feature constraints in the optimization functions. In important areas, the flow constraints can retain the original characteristics and make the final result better reflect the structural features.

6.3 Trajectory Analysis and Visualization

Trajectories have been studied with ever increasing importance and interest in many research areas, spanning data mining, computer vision, pattern recognition, machine learning, biomedical imaging, statistics, etc. Researchers have conducted intensive work on trajectory acquisition using wireless or radio techniques (e.g., GPS, RFID) and computer vision/imaging algorithms, leading to huge amounts of data available for knowledge discovery. Most approaches treat each trajectory (or its spatial/temporal partition) as one general data entity consisting of a sequence of discrete positions and velocities. Existing data analysis techniques are then adapted to handle the generalized trajectory data set, for example by applying distance measures on the entities to cluster trajectories or to find swarm/convoy patterns. The current paradigm builds its solutions on the original Lagrangian (i.e., object-based) representation of trajectories, but the complex knowledge associated with the intrinsic structure of regions/locations is neither explicitly modeled nor easily explored. We plan to convert the trajectory information into an Eulerian velocity field and carry out flow pattern analysis and visualization on it.
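One straightforward way to convert Lagrangian trajectory samples into an Eulerian field is to bin the sample velocities into grid cells and average per cell. The following 2D numpy sketch is our own illustrative assumption, not a fixed part of the proposal; names and parameters are hypothetical:

```python
import numpy as np

def rasterize_velocities(points, velocities, bounds, shape):
    """Average trajectory sample velocities onto a 2D grid.

    points: (n, 2) positions; velocities: (n, 2) finite-difference
    velocities along trajectories; bounds: (xmin, xmax, ymin, ymax).
    Returns a (nx, ny, 2) per-cell mean velocity field (zero where empty).
    """
    xmin, xmax, ymin, ymax = bounds
    nx, ny = shape
    ix = np.clip(((points[:, 0] - xmin) / (xmax - xmin) * nx).astype(int), 0, nx - 1)
    iy = np.clip(((points[:, 1] - ymin) / (ymax - ymin) * ny).astype(int), 0, ny - 1)
    vsum = np.zeros((nx, ny, 2))
    count = np.zeros((nx, ny))
    np.add.at(vsum, (ix, iy), velocities)   # unbuffered accumulation
    np.add.at(count, (ix, iy), 1)
    return vsum / np.maximum(count, 1)[..., None]

pts = np.array([[0.5, 0.5], [0.5, 0.5], [0.1, 0.1]])
vels = np.array([[1.0, 0.0], [3.0, 0.0], [0.0, 2.0]])
field = rasterize_velocities(pts, vels, (0.0, 1.0, 0.0, 1.0), (2, 2))
print(field[1, 1])
```

The flow patterns below (velocity, density, entropy, FTLE, LCS) could then be computed on this gridded field with the same machinery used for simulated flows.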

Fig. 25 shows our preliminary results based on an animal trajectory data set¹. We plan to analyze and visualize it using the following flow patterns:

• Velocity reveals the overall movement of the animals during some time period;

• Density reveals the distribution of the animals, which is inversely proportional to speed; animals tend to stay in high-density areas, possibly because of water or grass;

• Entropy reveals the divergence of the animals' movement; a small entropy value means that an animal always moves in similar directions, possibly caused by environmental factors;

• FTLE and LCS reveal the boundaries of the animals' movement, meaning some animals move only within particular areas. These areas can be divided by visible (e.g., rivers, mountains) or invisible (e.g., animals' territorial behavior) barriers.

¹http://www.fs.fed.us/pnw/starkey/data/tables/

The above flow patterns are general tools that can be applied to different situations, such as one species over a long or short period, several species over a long or short period, one animal over a long period, or animals across different time periods.
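The entropy pattern, for instance, could be estimated per grid cell from a histogram of movement directions. This is an illustrative sketch; the bin count, names, and test data are assumptions:

```python
import numpy as np

def direction_entropy(velocities, bins=8):
    """Shannon entropy (bits) of movement directions in one grid cell.

    Low entropy: motion concentrates in similar directions;
    high entropy: directions are spread out, no dominant flow.
    """
    angles = np.arctan2(velocities[:, 1], velocities[:, 0])
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) treated as 0
    return -np.sum(p * np.log2(p))

aligned = np.tile([1.0, 0.0], (100, 1))                 # all eastward
spread = np.random.default_rng(2).normal(size=(100, 2)) # no preferred direction
print(direction_entropy(aligned), direction_entropy(spread))
```

Applied per cell of the rasterized trajectory field, low values would mark corridors of consistent movement and high values would mark areas of wandering.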

Figure 25: Flow patterns for animal trajectories. (a) Path; (b) Velocity; (c) FTLE; (d) Entropy; (e) Density.

CHAPTER 7

Conclusion

In this thesis, we have proposed solutions to four important and challenging topics in computer graphics and physically based simulation: fluid enhancement for the SPH method, modeling the behavior of light-weight floating objects, two-stage smoke design, and smoke animation compression. In all these areas, the stochastic and structural features of flow play very important roles. In summary, the main contributions of this thesis are:

• Introduced a new fluid modeling technique that incorporates stochastic turbulence into the widely used SPH method;

• Developed a new and powerful tool for modeling the stochastic behavior of light-weight floating objects inside flows, suitable for complex, realtime, and interactive animation environments;

• Proposed a novel pattern-based fluid animation approach that advances fluid modeling in the two-stage animation scenario by guiding high-quality animation with pre-computed structural features;

• Designed an effective smoke animation compression technique that retains high-frequency details through bidirectional advection.

BIBLIOGRAPHY

[1] R. Bridson, Fluid Simulation for Computer Graphics. Natick, MA, USA: A. K. Peters, Ltd., 2008. [2] J. Stam, “Stable fluids,” Proceedings of SIGGRAPH, pp. 121–128, 1999. [3] R. Fedkiw, J. Stam, and H. Jensen, “Visual simulation of smoke,” Proceedings of SIG- GRAPH, pp. 15–22, 2001. [4] B. Kim, Y. Liu, I. Llamas, and J. Rossignac, “Advections with significantly reduced dissipation and diffusion,” IEEE Transactions on Visualization and Computer Graphics, vol. 13, no. 1, pp. 135–144, 2007. [5] P. Mullen, K. Crane, D. Pavlov, Y. Tong, and M. Desbrun, “Energy-preserving integrators for fluid animation,” ACM Trans. Graph., vol. 28, no. 3, 2009. [6] S. Premoze, T. Tasdizen, J. Bigler, A. Lefohn, and R. T. Whitaker, “Particle-based simu- lation of fluids,” in Eurographics, 2003, pp. 401–410. [7] M. Mueller, D. Charypar, and M. Gross, “Particle-based fluid simulation for interactive applications,” Proceedings of ACM SIGGRAPH/EUROGRAPHICS Symposium on Com- puter Animation, pp. 154–159, 2003. [8] A. Selle, N. Rasmussen, and R. Fedkiw, “A vortex particle method for smoke, water and explosions,” Proceedings of SIGGRAPH, pp. 910–914, 2005. [9] Y. Zhu and R. Bridson, “Animating sand as a fluid,” in Proceedings of ACM SIGGRAPH. New York, NY, USA: ACM, 2005, pp. 965–972. [10] D. Enright, R. Fedkiw, J. Ferziger, and I. Mitchell, “A hybrid particle level set method for improved interface capturing,” J. Comput. Phys, vol. 183, pp. 83–116, 2002. [11] Z. Bo, X. Yang, and Y. Fan, “Creating and preserving vortical details in sph fluid,” Com- puter Graphics Forum, vol. 29, pp. 2207–2214, 2010. [12] J. Stam and E. Fiume, “Turbulent wind fields for gaseous phenomena,” in SIGGRAPH ’93: Proceedings of the 20th annual conference on Computer graphics and interactive techniques. New York, NY, USA: ACM, 1993, pp. 369–376. [13] R. Bridson, J. Houriham, and M. Nordenstam, “Curl-noise for procedural fluid flow,” in Proceeding of ACM SIGGRAPH. New York, NY, USA: ACM, 2007, p. 46. [14] R. Narain, J. 
Sewall, M. Carlson, and M. C. Lin, “Fast animation of turbulence using energy transport and procedural synthesis,” in Proceeding of ACM SIGGRAPH Asia. New York, NY, USA: ACM, 2008, pp. 1–8. [15] H. Schechter and R. Bridson, “Evolving sub-grid turbulence for smoke animation,” in Eurographics/ACM SIGGRAPH Symposium on Computer Animation, 2008, pp. 1–8.

95 96

[16] T. Kim, N. Th¨urey, D. James, and M. Gross, “Wavelet turbulence for fluid simulation,” in Proceeding of ACM SIGGRAPH. New York, NY, USA: ACM, 2008, pp. 1–6. [17] T. Pfaff, N. Thuerey, A. Selle, and M. Gross, “Synthetic turbulence using artificial bound- ary layers,” in SIGGRAPH Asia ’09: ACM SIGGRAPH Asia 2009 papers. New York, NY, USA: ACM, 2009, pp. 1–10. [18] T. Pfaff, N. Thuerey, J. Cohen, S. Tariq, and M. Gross, “Scalable fluid simulation using anisotropic turbulence particles,” in ACM SIGGRAPH Asia 2010 papers, ser. SIGGRAPH ASIA ’10. New York, NY, USA: ACM, 2010, pp. 174:1–174:8. [Online]. Available: http://doi.acm.org/10.1145/1866158.1866196 [19] Y. Zhao, Z. Yuan, and F. Chen, “Enhancing fluid animation with adaptive, controllable and intermittent turbulence,” Proceedings of ACM SIGGRAPH/Eurographics Symposium on Computer Animation, July 2010. [20] F. Chen, Y. Zhao, and Z. Yuan, “Langevin particle: A self-adaptive lagrangian primitive for flow simulation enhancement,” Computer Graphics Forum (Eurographics 2011), To Appear, April, 2011. [21] U. Frisch, Turbulence: The legacy of A.N. Kolmogorov. Cambridge University Press, 1995. [22] N. Foster and D. Metaxas, “Controlling fluid animation,” in Proceedings of the 1997 Conference on Computer Graphics International, ser. CGI ’97. Washington, DC, USA: IEEE Computer Society, 1997, pp. 178–. [Online]. Available: http: //dl.acm.org/citation.cfm?id=792756.792862 [23] R. Fattal and D. Lischinski, “Target-driven smoke animation,” in SIGGRAPH ’04: ACM SIGGRAPH 2004 Papers. New York, NY, USA: ACM, 2004, pp. 441–448. [24] L. Shi and Y. Yu, “Controllable smoke animation with guiding objects,” ACM Trans. Graph., vol. 24, no. 1, pp. 140–164, 2005. [25] N. Th¨urey, R. Keiser, M. Pauly, and U. R¨ude, “Detail-preserving fluid control,” in Pro- ceedings of the 2006 ACM SIGGRAPH/Eurographics symposium on Computer anima- tion, Aire-la-Ville, Switzerland, Switzerland, 2006, pp. 7–12. [26] Y. Kim, R. Machiraju, and D. 
Thompson, “Path-based control of smoke simulations,” in Proceedings of the ACM SIGGRAPH/Eurographics symposium on Computer Animation. Aire-la-Ville, Switzerland, Switzerland: Eurographics Association, 2006, pp. 33–42. [27] F. Pighin, J. M. Cohen, and M. Shah, “Modeling and editing flows using advected radial basis functions,” in Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation. Aire-la-Ville, Switzerland, Switzerland: Eurographics Associ- ation, 2004, pp. 223–232. [28] J.-M. Hong and C.-H. Kim, “Controlling fluid animation with geometric potential,” Comput. Animat. Virtual Worlds, vol. 15, pp. 147–157, July 2004. [Online]. Available: http://dx.doi.org/10.1002/cav.v15:3/4 [29] A. Treuille, A. McNamara, Z. Popovi´c, and J. Stam, “Keyframe control of smoke simu- lations,” in ACM SIGGRAPH 2003 Papers, ser. SIGGRAPH ’03. New York, NY, USA: 97

ACM, 2003, pp. 716–723. [30] A. McNamara, A. Treuille, Z. Popovi´c, and J. Stam, “Fluid control using the adjoint method,” ACM Trans. Graph., vol. 23, pp. 449–456, August 2004. [31] A. Angelidis and F. Neyret, “Simulation of smoke based on vortex filament primitives,” in Proceedings of the 2005 ACM SIGGRAPH/Eurographics symposium on Computer an- imation. New York, NY, USA: ACM, 2005, pp. 87–96. [32] S. Weißmann and U. Pinkall, “Filament-based smoke with vortex shedding and varia- tional reconnection,” in Proceedings of SIGGRAPH, 2010. [33] M. B. Nielsen, B. B. Christensen, N. B. Zafar, D. Roble, and K. Museth, “Guiding of smoke animations through variational coupling of simulations at different resolutions,” in SCA ’09: Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation. New York, NY, USA: ACM, 2009, pp. 217–226. [34] M. B. Nielsen and B. B. Christensen, “Improved variational guiding of smoke animations,” Computer Graphics Forum, vol. 29, no. 2, pp. 705–712, 2010. [Online]. Available: http://doi.wiley.com/10.1111/j.1467-8659.2009.01640.x [35] D. Salomon, Data Compression: The Complete Reference. Springer, 2006. [36] G. J. Sullivan and T. Wiegand, “Video compression – from concepts to the H.264/AVC standard,” vol. 93, no. 1, pp. 18–31, January 2005. [37] C. Christopoulos, A. Skodras, and T. Ebrahimi, “The JPEG2000 still image coding sys- tem: An overview,” IEEE Transactions on Consumer Electronics, vol. 46, pp. 1103–1127, 2000. [38] R. Mantiuk, A. Efremov, K. Myszkowski, and H.-P. Seidel, “Backward compatible high dynamic range mpeg video compression,” in ACM SIGGRAPH 2006 Papers, ser. SIGGRAPH ’06. New York, NY, USA: ACM, 2006, pp. 713–723. [Online]. Available: http://doi.acm.org/10.1145/1179352.1141946 [39] P. Alliez and C. Gotsman, “Recent advances in compression of 3d meshes,” in In Ad- vances in Multiresolution for Geometric Modelling. Springer-Verlag, 2003, pp. 3–26. [40] J. Peng, C.-S. Kim, and C. 
Jay Kuo, “Technologies for 3d mesh compression: A survey,” J. Vis. Comun. Image Represent., vol. 16, pp. 688–733, December 2005. [Online]. Available: http://dx.doi.org/10.1016/j.jvcir.2005.03.001 [41] O. Arikan, “Compression of databases,” in ACM SIGGRAPH 2006 Papers, ser. SIGGRAPH ’06. New York, NY,USA: ACM, 2006, pp. 890–897. [Online]. Available: http://doi.acm.org/10.1145/1179352.1141971 [42] C. Faloutsos, J. Hodgins, and N. Pollard, “Database techniques with motion capture,” in ACM SIGGRAPH 2007 courses, ser. SIGGRAPH ’07. New York, NY, USA: ACM, 2007. [Online]. Available: http://doi.acm.org/10.1145/1281500.1281636 [43] M. Balsa Rodriguez, E. Gobbetti, J. Iglesias Guiti´an, M. Makhinya, F. Marton, R. Pa- jarola, and S. Suter, “A survey of compressed gpu-based direct volume rendering,” in Eurographics State-of-the-art Report, May 2013. 98

[44] J. Mensmann, T. Ropinski, and K. Hinrichs, “A gpu-supported lossless compression scheme for rendering time-varying volume data,” in Proceedings of the 8th IEEE/EG international conference on Volume Graphics, ser. VG’10, 2010, pp. 109–116. [45] Y. Jang, D. Ebert, and K. Gaither, “Time-varying data visualization using functional representations,” Visualization and Computer Graphics, IEEE Transactions on, vol. 18, no. 3, pp. 421–433, 2012. [46] B. Heckel, G. Weber, B. Hamann, and K. I. Joy, “Construction of vector field hierarchies,” in Proceedings of the conference on Visualization ’99: celebrating ten years, ser. VIS ’99. Los Alamitos, CA, USA: IEEE Computer Society Press, 1999, pp. 19–25. [Online]. Available: http://dl.acm.org/citation.cfm?id=319351.319352 [47] A. Telea and J. J. van Wijk, “Simplified representation of vector fields,” in Proceedings of the conference on Visualization ’99: celebrating ten years, ser. VIS ’99. Los Alamitos, CA, USA: IEEE Computer Society Press, 1999, pp. 35–42. [Online]. Available: http://dl.acm.org/citation.cfm?id=319351.319354 [48] S. K. Lodha, N. M. Faaland, and J. C. Renteria, “Topology preserving top-down compression of 2d vector fields using bintree and triangular quadtrees,” IEEE Transactions on Visualization and Computer Graphics, vol. 9, pp. 433–442, October 2003. [Online]. Available: http://dl.acm.org/citation.cfm?id=939836.940011 [49] H. Theisel, C. R¨ossl, and H.-P. Seidel, “Compression of 2d vector fields under guaranteed topology preservation.” Comput. Graph. Forum, vol. 22, no. 3, pp. 333–342, 2003. [Online]. Available: http://dblp.uni-trier.de/db/journals/cgf/cgf22.html#TheiselR03 [50] Y.-C. Tsai, C.-W. Lin, and C.-M. Tsai, “H.264 error resilience coding based on multi- hypothesis motion-compensated prediction,” Image Commun., vol. 22, pp. 734–751, October 2007. [Online]. Available: http://dl.acm.org/citation.cfm?id=1297413.1297462 [51] B. 
Girod, “Efficiency analysis of multi-hypothesis motion-compensated prediction,” IEEE Trans. Image Process., vol. 9, pp. 173–183, Feb 2000. [52] H. M. Brice˜no, P. V. Sander, L. McMillan, S. Gortler, and H. Hoppe, “Geometry videos: a new representation for 3d animations,” in Proceedings of the 2003 ACM SIGGRAPH/Eurographics symposium on Computer animation, ser. SCA ’03. Aire-la-Ville, Switzerland, Switzerland: Eurographics Association, 2003, pp. 136–146. [Online]. Available: http://dl.acm.org/citation.cfm?id=846276.846295 [53] M. Sattler, R. Sarlette, and R. Klein, “Simple and efficient compression of animation sequences,” in Proceedings of the 2005 ACM SIGGRAPH/Eurographics symposium on Computer animation, ser. SCA ’05. New York, NY, USA: ACM, 2005, pp. 209–217. [Online]. Available: http://doi.acm.org/10.1145/1073368.1073398 [54] S. I. Park and M. J. Kim, “Vortex fluid for gaseous phenomena,” in Proceedings of the 2005 ACM SIGGRAPH/Eurographics symposium on Computer animation, ser. SCA ’05. New York, NY, USA: ACM, 2005, pp. 261–270. [Online]. Available: http://doi.acm.org/10.1145/1073368.1073406 [55] J. Monaghan, “SPH compressible turbulence,” Mon. Not. R. Astron. Soc, vol. 335, pp. 843–852, 2002. 99

[56] D. Violeau, S. Piccon, and C. J.P., “Two attempts of turbulence modelling in smoothed particle hydrodynamics,” in Proceedings of the 8th International Symposium on Flow Modeling and Turbulence Measurements, Tokyo, Japan, 2001, pp. 339–346. [57] S. B. Pope, Turbulent Flows. Cambridge University Press, 2000. [58] M. M¨uller, D. Charypar, and M. Gross, “Particle-based fluid simulation for interactive applications,” in SCA ’03: Proceedings of the 2003 ACM SIGGRAPH/Eurographics sym- posium on Computer animation. Aire-la-Ville, Switzerland, Switzerland: Eurographics Association, 2003, pp. 154–159. [59] T. Pfaff, N. Thuerey, J. Cohen, S. Tariq, and M. Gross, “Scalable fluid simulation using anisotropic turbulence particles,” in ACM SIGGRAPH Asia, 2010. [60] M. Shinya and A. Fournier, “Stochastic motion - motion under the influence of wind,” Computer Graphics Forum, vol. 11, pp. C119–C128, 1992. [61] J. Stam, “Stochastic dynamics: Simulating the effects of turbulence on flexible struc- tures,” Computer Graphics Forum, vol. 16, pp. C159–C164, 1997. [62] F. Perbet and M.-P. Cani, “Animating prairies in real-time,” in Proceedings of the sympo- sium on Interactive 3D graphics. New York, NY, USA: ACM, 2001, pp. 103–110. [63] J. Wejchert and D. Haumann, “Animation aerodynamics,” in Proceedings of SIGGRAPH. New York, NY, USA: ACM, 1991, pp. 19–22. [64] J. X. Chen, X. Fu, and J. Wegman, “Real-time simulation of dust behavior generated by a fast traveling vehicle,” ACM Trans. Model. Comput. Simul., vol. 9, no. 2, pp. 81–104, 1999. [65] I. Saltvik, A. C. Elster, and H. R. Nagel, “Parallel methods for real-time visualization of snow,” in Proceedings of the 8th international conference on applied parallel computing. Berlin, Heidelberg: Springer-Verlag, 2007, pp. 218–227. [66] X. Wei, Y. Zhao, Z. Fan, W. Li, F. Qiu, S. Yoakum-Stover, and A. Kaufman, “Lattice- based flow field modeling,” IEEE Transactions on Visualization and Computer Graphics, vol. 10, no. 6, pp. 
719–729, November 2004. [67] D. Kim, O.-y. Song, and H.-S. Ko, “A practical simulation of dispersed bubble flow,” ACM Trans. Graph., vol. 29, no. 4, pp. 1–5, 2010. [68] J. R. Bell, “Algorithm 334: Normal random deviates,” Commun. ACM, vol. 11, no. 7, p. 498, 1968. [69] M. Wicke, M. Stanton, and A. Treuille, “Modular bases for fluid dynamics,” ACM Trans. Graph., vol. 28, no. 3, pp. 1–8, 2009. [70] S. Elcott, Y. Tong, E. Kanso, P. Schr¨oder, and M. Desbrun, “Stable, circulation- preserving, simplicial fluids,” ACM Trans. Graph., vol. 26, no. 1, p. 4, 2007. [71] A. Selle, R. Fedkiw, B. Kim, Y. Liu, and J. Rossignac, “An unconditionally stable maccormack method,” J. Sci. Comput., vol. 35, pp. 350–371, June 2008. [Online]. Available: http://portal.acm.org/citation.cfm?id=1401696.1401764 100

[72] T. Peacock and J. Dabin, “Introduction to focus issue: Lagrangian coherent structures,” Chaos, vol. 20, p. 017501, 2010. [73] G. Haller, “Distinguished material surfaces and coherent structures in 3D fluid flows,” Physica D, vol. 149, pp. 248–277, 2001. [74] F. Sadlo and R. Peikert, “Efficient visualization of lagrangian coherent structures by filtered AMR ridge extraction,” IEEE Transactions on Visualization and Computer Graphics, vol. 13, pp. 1456–1463, November 2007. [Online]. Available: http://dx.doi.org/10.1109/TVCG.2007.70554 [75] C. Garth, F. Gerhardt, X. Tricoche, and H. Hans, “Efficient computation and visualization of coherent structures in fluid flow applications,” IEEE Transactions on Visualization and Computer Graphics, vol. 13, pp. 1464–1471, November 2007. [Online]. Available: http://dx.doi.org/10.1109/TVCG.2007.70551 [76] F. Ferstl, K. Burger, H. Theisel, and R. Westermann, “Interactive separating streak surfaces,” IEEE Transactions on Visualization and Computer Graphics, vol. 16, pp. 1569–1577, November 2010. [Online]. Available: http://dx.doi.org/10.1109/TVCG. 2010.169 [77] G. Haller and T. Sapsis, “Lagrangian coherent structures and the smallest finite-time lya- punov exponent,” Chaos, vol. 21, p. 023115, 2011. [78] T.-C. Lee, R. L. Kashyap, and C.-N. Chu, “Building skeleton models via 3-d medial surface/axis thinning algorithms,” CVGIP: Graph. Models Image Process., vol. 56, pp. 462–478, November 1994. [79] J. Ziv and A. Lempel, “A universal algorithm for sequential data compression,” IEEE Trans. Inf. Theor., vol. 23, no. 3, pp. 337–343, Sept. 2006. [80] F. Neyret, “Advected textures,” Proceedings of the 2003 ACM SIGGRAPH/Eurographics symposium on Computer animation, pp. 147–153, 2003. [81] Q. Yu, F. Neyret, E. Bruneton, and N. Holzschuch, “Lagrangian texture advection: Pre- serving both spectrum and velocity field,” 2010. [82] V. Kwatra, D. Adalsteinsson, N. Kwatra, M. Carlson, and M. C. 
Lin, “Texturing fluids,” in ACM SIGGRAPH 2006 Sketches, ser. SIGGRAPH ’06. New York, NY, USA: ACM, 2006. [Online]. Available: http://doi.acm.org/10.1145/1179849.1179928 [83] V. Kwatra, I. Essa, A. Bobick, and N. Kwatra, “Texture optimization for example-based synthesis,” ACM Transactions on Graphics, SIGGRAPH, vol. 24, pp. 795–802, 2005.