Avatar Locomotion in Crowd Simulation

N. Pelechano (U. Politècnica de Catalunya, [email protected]), B. Spanlang (U. de Barcelona, [email protected]), A. Beacco (U. Politècnica de Catalunya, [email protected])

Abstract

This paper presents an Animation Planning Mediator (APM) designed to synthesize animations efficiently for virtual characters in real time crowd simulation. From a set of animation clips, the APM selects the most appropriate and modifies the skeletal configuration of each character to satisfy desired constraints (e.g. eliminating foot-sliding or restricting upper body torsion), while still providing natural looking animations. We use a hardware accelerated library to blend animations, increasing the number of possible locomotion types. The APM allows the crowd simulation module to maintain control of path planning, collision avoidance and response. A key advantage of our approach is that the APM can be integrated with any crowd simulator working in continuous space. We show visual results achieved in real time for several hundreds of agents, as well as the quantitative accuracy of the method.

Keywords: Crowd animation, Foot sliding.

1. Introduction

Real time crowd simulation for computer graphics applications, such as training and video games, requires algorithms not only for agent navigation in large virtual environments while avoiding obstacles and other agents, but also for rendering highly complex scenes with avatars that enhance realism. Either of these steps can become a major bottleneck for real time simulation, so trade-offs between accuracy and speed are necessary. Simulating motion accurately whilst keeping within real time constraints is not an easy task, and although many techniques have been developed for synthesizing the motion of one agent, they are not easily extended to large numbers of agents simulated in real time.

To achieve natural looking crowds, most of the current work in this area uses avatars with a limited number of animation clips. In most cases, problems arise when the crowd simulation module updates the position of each agent's root without taking into account the movement of the feet. This yields unnatural simulations where the virtual characters appear to be 'skating' in the environment, an artifact known as 'foot sliding'. Other problems emerge from the lack of coherence between the orientation of the geometric representation of the agents and their direction of movement. As we increase model sophistication through enhanced path selection, avoidance, rendering, and the inclusion of more behaviors, these artifacts become more noticeable.

In order to avoid these problems, most approaches either perform inverse kinematics, which implies a high computational cost that does not allow large crowd simulations in real time, or adopt approaches where the animation clip being played drives the root position, which limits movements and speeds.

In this paper we present an Animation Planning Mediator: a highly efficient animation synthesizer that can be seamlessly integrated with a crowd simulation system. The APM selects the parameters to feed a motion synthesizer, while it feeds back to the crowd simulation module the updates required to guarantee consistency. Even when only a small set of animation clips is available, our technique allows a large and continuous variety of movement. It can be used with any crowd simulation software, since it is the crowd simulation module which drives the movement of the virtual agents; our module limits its work to adjusting the root displacement and skeletal state.

2. Related Work

We can classify crowd simulation models into two approaches. The first encompasses those models that calculate the movement of agents in a virtual environment without taking into consideration the underlying animation. The second is defined by those that have a core set of animation clips that they play back, or blend between, to move from one point to another.

The first group suffers from an artifact known as foot sliding, since the position of the characters is updated without considering the foot position. Notice this is not exactly the same problem addressed by other works on removing foot-skate noise [5]. In this group we have many examples using different crowd simulation models, such as social forces [4], rule-based models [13], cellular automata [17], flow tiles [1], roadmaps [16], continuum dynamics [18] and force models parameterized by psychological and geometrical rules [12].
The second approach puts effort into avoiding foot-sliding while limiting the number of possible movements for each agent. Lau and Kuffner [8] introduced pre-computed search trees for planning interactive goal-driven animation. These models do not perform motion blending, are often limited to a small graph of animations, and play one animation clip after another, thus moving the agent according to the root movement in each clip. As such, they do not perform well in interactive, dynamic environments, especially with dense crowds where collision response forces have an impact.

Recent trends in character animation include driving physical simulations by motion capture data or using machine learning to parameterize motion for simplified interactive control. Examples are inverse kinematics and motion blending based on Gaussian process statistics or geostatistics of motion capture data [2][11]. Such techniques avoid foot sliding but are computationally more expensive.

Through interpolation and concatenation of motion clips, new natural looking animations can be created [20]. Kovar et al. introduced motion graphs [6]. Zhao et al. extended them to improve connectivity and smooth transitions [23]. These techniques can avoid foot sliding by using the root velocity of the original motion data, but they require a large database of motion capture data to allow for interactive changes of walking speed and orientation. Ménardais et al. were able to synchronize and adapt different clips without motion graphs [10].

Proportional derivative controllers and optimization techniques are used to drive physically simulated characters [22][14]. Goal-directed steps can perform controlled navigation [21]. While such techniques show very impressive results for a single character in real time, their computational costs mean they are not suitable for real time simulation of large crowds.

Kovar et al. [7] presented an online algorithm to clean up foot sliding artifacts in motion capture data. The technique focuses on minute details and is therefore computationally unsuitable for large real time crowds.

Treuille et al. [19] generated character animations with near-optimal controllers using a low-dimensional basis representation. This approach also uses graphs but, unlike previous models, blending can occur between any two clips of motion. They also avoid foot sliding by re-rooting the skeletons to the feet and specifying constraint frames, but their method requires hundreds of animation clips, which are very time consuming to gather.

Recently, Gu and Deng [3] increased variety and realism by creating new stylized motions from a small motion capture dataset. Maïm et al. [9] apply linear blending to animations selected based on the agent's velocity and morphology, achieving nice animations for crowds but without eliminating foot sliding.

3. The Framework

The framework employed for this work performs a feedback loop where the APM acts as the communication channel between a crowd simulation module and a character animation and rendering module. The outline of this framework is shown in Figure 1.

Figure 1: Framework.
For each frame, the crowd simulation module, CS, calculates the position p, velocity v, and desired orientation of the torso θ. This information is then passed to the APM in order to select the parameters to be sent to the character animation module, CA, which will provide the next character's pose, P. Each pose is a vector specifying all joint positions in a kinematic skeleton.

The APM calculates the next synthesized animation S_i, which is described by the tuple {A_i, dt, b, P, v, θ} (see Table 1).

p     Agent root position.
v     Agent velocity.
θ     Angle indicating torso orientation in the x-z plane.
A_i   Animation clip for agent i.
dt    Differential of time for blending animation clip A_i.
b     Blending factor when changing animation clip, i.e. when A_i(t) ≠ A_i(t−1), where t indicates time.
P     Pose given by the skeletal state of the agent.

Table 1: Input/output variables of the APM.
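For concreteness, the per-agent, per-frame data exchanged in this loop can be summarized in a small interface. The C++ sketch below is purely illustrative (the type and member names are ours, not part of HiDAC or HALCA); it mirrors the variables of Table 1, assuming 2D quantities in the x-z plane:

```cpp
#include <array>
#include <vector>

using Vec2 = std::array<float, 2>;  // quantities in the x-z plane

// Input from the crowd simulation module (CS) for one agent.
struct CrowdState {
    Vec2  p;      // agent root position
    Vec2  v;      // agent velocity
    float theta;  // desired torso orientation angle in the x-z plane
};

// Output of the APM, sent to the character animation module (CA);
// the corrected position and orientation are also fed back to the CS.
struct SynthesizedAnimation {
    int   clip;            // selected animation clip A_i
    float dt;              // time differential used to advance clip A_i
    float b;               // blending factor when A_i(t) != A_i(t-1)
    std::vector<float> P;  // pose: the full skeletal state of the agent
    Vec2  pCorrected;      // root position after the APM's adjustments
    float thetaCorrected;  // torso orientation actually realized
};
```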

The APM may need to slightly adjust the agent's position and velocity direction in order to guarantee that the animations rendered are smooth and continuous. It is thus essential that the crowd simulation model employed works in continuous space and allows for updates of the position and velocity of each agent at any given time, in order to guarantee consistency with the requirements dictated by the animation module. We have used the crowd simulation (CS) model HiDAC [12].

The torso orientation at time t, w_t, is obtained from the velocity vector v_t after applying a filter, so that it is not unnaturally affected by abrupt changes:

w_t = ω_o · w_{t−1} + v_t

where ω_o is the orientation weight introduced by the user and w_{t−1} is the direction of the orientation vector at time t−1. The orientation angle θ of the vector w is measured relative to the positive x-axis (either the angle or the vector can be calculated from the other).

This orientation filter is applied because we want the position of the character to react quickly to changes in the environment, such as moving agents and obstacles, while the torso exhibits only smooth changes; without this, the result is unnatural animations where the rendered characters appear to twist constantly. Through filtering we can simulate an agent that moves with a slight zigzag effect, while the torso of the rendered character moves in a constant direction.
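A minimal sketch of this filter, assuming w is kept as a unit direction (the paper does not specify normalization, so the renormalization step is our addition):

```cpp
#include <cmath>

struct Dir2 { float x, z; };  // direction in the x-z plane

// Torso orientation filter: w_t = w_o * w_{t-1} + v_t.
// A larger orientation weight makes the torso steadier against
// zigzagging velocity changes.
Dir2 filterTorsoOrientation(Dir2 wPrev, Dir2 v, float wo) {
    Dir2 w{ wo * wPrev.x + v.x, wo * wPrev.z + v.z };
    float len = std::sqrt(w.x * w.x + w.z * w.z);
    if (len > 1e-6f) { w.x /= len; w.z /= len; }  // renormalize (our assumption)
    return w;
}

// Orientation angle theta of w, measured from the positive x-axis.
float orientationAngle(Dir2 w) { return std::atan2(w.z, w.x); }
```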
For the animation and rendering of avatars (CA) we use a hardware accelerated library for character animation (HALCA) [15]. The core consists of a motion mixer and an avatar visualization engine. Direct access to properties such as the duration, frame rate, and skeletal states of an animation is provided to the hosting application. Such information can be useful to compute, for example, the actual walking speed of a character when animated. Among the functionalities provided are blending, morphing, and efficient access to and manipulation of the whole skeletal state.

The CA contains a motion synthesizer which can provide a large variety of continuous motion from a small set of animations by allowing direct manipulation of the skeleton configuration. To create the library of animations, we decided to use hand created animations, although motion capture data could also be used after some pre-processing.

4. Animation Planning Mediator

To achieve realistic animation for large crowds from a small set of animation clips, we need to synthesize new motions. The APM is used to find the best animation available while satisfying a set of constraints. On the one hand, it needs to guarantee that the next pose of the animation reflects as closely as possible the parameters given by the crowd simulation module (p, v, θ); on the other hand, it needs to guarantee smooth and continuous animations. Therefore, the selection of the best next pose needs to take into account the current skeletal state, the available animations, the maximum rotation physically possible for the upper body of a human, and whether there are any contact points to respect between the limbs of the skeleton and the environment (such as contact between a foot and the floor).

Once the APM determines the best set of parameters for the next pose and passes this information to the CA for animation and rendering, it also provides feedback to the CS in those cases where the parameters sent needed to be slightly adjusted to guarantee natural looking animations with the available set of animation clips and transitions.

During pre-processing, the APM calculates for each animation clip the average velocity v_anim in m/s, computed as the total distance traveled by the character through the animation clip divided by the total duration of the clip, T, as well as the angle α between the torso orientation, θ_anim, and the velocity vector, v_anim, in the animation clip. The i-th animation clip A_i is defined by the tuple {v_anim,i, α_i, T_i}.

During the simulation, the APM takes the input parameters from the CS and proceeds through the five steps shown in Figure 2 to obtain the output tuple {A_i, dt, b, P', p', θ} that will be sent to the CA. We explain this in detail below.

Figure 2: Planning Mediator.

4.1. Animation Clip Selection

Instead of achieving different walking speeds by using hundreds of different walk animations, the algorithm can be effective with a limited number of animation clips by blending within an animation. This is controlled by the parameter dt, which defines the time elapsed between two consecutive frames. Each animation clip covers a subset of speeds, going from the minimum speed of 0 m/s (dt = 0) to the original speed of the animation clip (dt = 1).

An animation clip of walking forward can also be used to have the agent turn, as we can reorient the foot on the floor, thus reorienting the entire figure. This provides natural looking results for high walking velocities, but for slower velocities, using several turning animation clips and blending between them results in more natural looking motion.

If we consider α to be the angle between the direction of movement v_anim and the torso orientation θ_anim of an animation A_i, we can classify animations based on α and the velocity. For example, for an animation of walking sideways α = 90 degrees, and for walking forwards α = 0 degrees.

To determine when to use each animation, we classify them in a circle defined by tracks and sectors. A track is the area contained between two concentric circles, where each concentric circle has as radius the velocity of an animation, v_anim. Once the tracks are defined, we divide them into sectors, each of which maps to a clip.

All the animations used must satisfy the following requirements: (1) they must be time aligned, (2) v and α must be approximately constant throughout the animation clip (within a small threshold defined by the user), and (3) animation clips must be cyclical.

Each animation clip can be used when the velocity of the agent satisfies v ≤ v_anim; we therefore decide which animation to assign to each sector depending on the α value. The decision point for switching from one animation sector to the next as the angle increases is defined as being halfway between the α values of two neighboring animations. Figure 3 graphically represents the decision framework, where colors are used to represent the animation clips assigned per sector. We have chosen some animation clips with similar velocities or angles (v_3 ≈ v_4, v_6 ≈ v_7 ≈ v_8, and α_1 ≈ α_2 ≈ α_5) to graphically show the splitting of tracks into sectors. Only half of the circle of animation clips is shown, due to symmetry. At any time during the simulation we can run any animation backwards by using a negative dt and selecting the clip by flipping vertically over the x axis and mirroring in the y axis.
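As a sketch of the lookup this circular framework supports (the data layout and names are ours; the paper does not give an implementation), clip selection reduces to finding the track that contains the agent's speed and, within it, the sector whose decision angles bracket φ:

```cpp
#include <cmath>
#include <vector>

// One sector of the circular decision framework (Section 4.1.1).
struct Sector {
    float vMin, vMax;  // track: velocity range covered
    float gMin, gMax;  // decision angles [gamma_min, gamma_max)
    int   clip;        // animation clip assigned to this sector
};

// Select the clip for an agent moving at 'speed' with angle 'phi'
// between velocity and torso orientation. Only half of the circle is
// stored, so we look up |phi| and rely on mirroring for the other half.
int selectAnimation(const std::vector<Sector>& sectors, float speed, float phi) {
    float a = std::fabs(phi);
    for (const Sector& s : sectors)
        if (speed >= s.vMin && speed <= s.vMax && a >= s.gMin && a < s.gMax)
            return s.clip;
    return -1;  // no sector matched: caller falls back to the slowest track
}
```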
For {vi ,vi ,γ i,k ,γ i,k } example, for an animation of walking sideways the minimum and maximum velocity, and α = 90 degrees and for walking forwards α = 0 minimum and maximum decision angles. degrees. For each step of the algorithm, a track Ti+1 will To determine when to use each animation we be split into at least as many sectors as con- classify them in a circle defined by tracks and tained within Ti. For each sector sectors. A track is the area contained between S = {vmin ,vmax ,γ min ,γ max } there will be a new sector two concentric circles. Each concentric circle i,k i i i,k i,k S = {vmin ,vmax ,γ min ,γ max } where vmax = vmin , with has as radius the velocity of an animation, vanim . i + ,1 k i +1 i+1 i,k i,k i+1 i the same animation being assigned. Then fur- be done using inverse kinematics, but since we ther splitting of those sectors will occur for each are simulating large crowds, we need a method of the new animations with max depend- that can be quickly calculated and applied to vanim − vi+1 < εv hundreds of 3D animated figures in real-time. ing on the α values of the remaining animations. If we let α be the angle of a new animation A Also, to avoid the time and cost of generating a p p large number of animation clips, we are inter- and α be the angle of the previous animation A q q ested in a limited number of clips that can be that is assigned to a new sector S , then if | i+1,k combined to achieve as many realistic anima- α - α |< ε , the new animation A replaces A for p q α p q tions as possible. We have a trade-off between that sector. For any other case a new sector is created and the angle limits γ between sectors the accuracy of our animations and the simplic- ity, and therefore speed, of our calculations. are recalculated. Knowing the agent’s velocity v and the anima- The variety of movements that can be synthe- tion velocity v from the selected animation sized with this method will depend upon the anim A , we can calculate the blending factor τ. number of animation clips. The more clips we i v have, the more sectors we can define. τ = , τ ∈ []1,0 v During run time the APM will select the best anim animation clip based on the current velocity, v, The dt needed by the CA module to blend be- of the agent, and the angle, φ, between the ve- tween poses is: locity and the torso orientation, θ, provided by dt =τ ⋅∆t the CS. If a change of animation is required, then the APM will determine the weight, b, where t is the elapsed time between consecu- necessary for blending between animations. tive frames. At this point of our algorithm we have the new pose of the agent and thus can obtain the local coordinates of the root and the feet position. Knowing that the root movement is driven by the foot that is on the floor during two consecu- tive poses, we update the root position in the global coordinates of the environment.

At this point of our algorithm we have the new pose of the agent, and can thus obtain the local coordinates of the root and of the feet. Knowing that the root movement is driven by the foot that is on the floor during two consecutive poses, we update the root position in the global coordinates of the environment.

4.3. Calculation of Root Displacement

For the current pose P_t, we calculate the vector u_t that goes from the foot on the floor to the root of the skeleton. Likewise, for the pose P_{t−1} in the previous frame we calculate u_{t−1} (see Figure 4). The vector u_t contains all the rotations that happen at the ankle and knee level, and thus provides sufficient information about the leg movement. By subtracting these vectors we can calculate the root displacement d between frames. The displacement vector d, shown in red in Figure 4, is:

d = u_t − u_{t−1}

and the new position is thus:

p_t = p_{t−1} + d

The method is efficient enough to allow for fast calculation, and the extension to 3D is straightforward. From the results shown in Figure 5 we can observe that it avoids foot sliding and preserves the vertical root movement that appears when we walk fast. In the figure we have rendered the root path in black.
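Transcribed directly (illustrative; obtaining u from a pose would go through the CA's skeletal-state access, which we abstract away here):

```cpp
struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

// u is the vector from the support foot (the one on the floor) to the
// skeleton root. Given u for the current and previous poses, the root
// displacement is d = u_t - u_{t-1} and p_t = p_{t-1} + d.
Vec3 updateRootPosition(Vec3 pPrev, Vec3 uCurr, Vec3 uPrev) {
    Vec3 d = sub(uCurr, uPrev);
    return add(pPrev, d);
}
```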

Figure 4: Root displacement.

Figure 5: Path followed by one agent.

4.4. Updating the Skeletal State

Turns in the environment can be achieved by reorienting the figure according to the velocity vector v and adjusting the torso orientation according to θ by modifying the skeletal configuration. This also orients the displacement vector so that the character moves in the direction indicated by the CS module. The visual effect is that the foot on the floor rotates slightly in place. This is almost unnoticeable to the human eye, especially at high velocities, and is thus a trade-off worth accepting, since we can achieve turns in any direction without requiring a large database of animation clips.

For slower velocities, we can achieve higher realism by using animations where the torso orientation is not necessarily aligned with v. To calculate the rotations we use the following variables:

w     Torso orientation vector.
β     Angle in the x-z plane of the velocity vector v, relative to the positive x-axis.
φ     Angle of the velocity vector, v, relative to the orientation vector, w.
α     Angle in the x-z plane of the velocity vector v_anim of the animation clip A_i, relative to its orientation vector w_anim.

Table 2: Variables required for upper body rotation.

To achieve movement in the direction given by v, the avatar is rotated by (β − α) so that the velocity vector of the animation, v_anim, matches v. Then the torso is oriented according to w: the angle ψ is calculated as the difference between φ and α (Figure 6) and propagated across the spine of the character by modifying the current skeletal state of the character's pose. This allows the agent to move the root in the direction indicated by v while the torso faces the direction indicated by w.

Figure 6: Angles described in Table 2.
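How ψ is distributed over the spine is not detailed in the text; a simple sketch that splits it into equal per-joint shares (an assumption on our part, abstracting away HALCA's actual skeletal-state API):

```cpp
#include <vector>

// Propagate the torso rotation psi across the spine by giving each spine
// joint an equal share of the twist about its vertical axis, so that the
// accumulated rotation at the top of the spine equals psi.
void propagateAngleSpine(std::vector<float>& spineYaw, float psi) {
    if (spineYaw.empty()) return;
    const float share = psi / static_cast<float>(spineYaw.size());
    for (float& yaw : spineYaw)
        yaw += share;
}
```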

4.5. The APM Algorithm

The following listing summarizes the APM algorithm, with references to the sections where each step is explained:

Algorithm: APM                                          Section
  φ ← AngleBetween(w, v)                                4.1
  A_i ← SelectAnimation(v, φ)

  if (A_i ≠ A_{i−1}) then b ← Agent.BlendingFactor()    4.2

  dt ← CalculateDT(v, A_i)

  P_t ← getNextPose(A_i, P_{t−1}, dt)                   4.3
  d ← CalculateDisplacement(P_t, P_{t−1})
  α ← GetAngleAnim(A_i)                                 4.4
  β ← CalculateAngle(v)
  ψ ← α − φ
  agentCA.PropagateAngleSpine(P_t, ψ)

Algorithm 1: The APM Algorithm.

With the parameters calculated by the APM, we apply an absolute rotation of (β − α) and add the displacement, d, to the previous position, rendering our character so that it satisfies the requirements given by the CS.
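As a sketch of this final step (our own reading: d is computed from poses in the character's local frame, so we rotate it into the world frame by the same (β − α) before adding it; the paper leaves this frame bookkeeping implicit):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate a vector about the vertical (y) axis.
Vec3 rotateY(Vec3 p, float angle) {
    float c = std::cos(angle), s = std::sin(angle);
    return { c * p.x + s * p.z, p.y, -s * p.x + c * p.z };
}

// Place the character for this frame: the avatar is rotated by
// (beta - alpha) so that v_anim lines up with v, and the root is
// translated by the displacement d from Section 4.3.
Vec3 placeRoot(Vec3 pPrev, Vec3 d, float beta, float alpha) {
    Vec3 dWorld = rotateY(d, beta - alpha);
    return { pPrev.x + dWorld.x, pPrev.y + dWorld.y, pPrev.z + dWorld.z };
}
```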

5. Results

The method presented needs only a small set of animation clips to obtain visually plausible results. By increasing the number of animations and/or the quality of those animations (e.g. using motion capture data), we would obtain improved results with no additional cost during simulation time.

The animation library shown in the accompanying videos consists of four walking forward animations, two side-step animations and four "walking on an angle" animations.

Per agent and per frame, on an Intel Dual Core 3 GHz with 4 GB of RAM, the root displacement computation takes less than 0.8 μs, and the whole APM algorithm requires less than 0.021 ms. Therefore, we could incorporate the APM into any crowd simulation module with an additional linear cost per frame of 0.021 ms times the crowd size. Since typically not all the agents in a crowd are visible simultaneously, we could apply the APM algorithm to only the few hundred agents closest to the camera.

In order to satisfy the constraints given by the CA, our APM may need to slightly modify the position of the root given by the CS. Figure 7 shows the path followed by an agent, with a zoomed view of a sharp turn. Each blue/green segment corresponds to 75 frames of the animation (3 seconds). We have represented the deviation introduced by the algorithm as a segment with a green/blue end indicating the position given by the CS and a black end indicating the corrected position calculated by the APM.

Figure 7: Path followed by an agent with close-ups of a turn.

Deviation is larger where abrupt turns and blending occur simultaneously. We have calculated the average deviation between the CS position and the APM corrected position in order to determine the impact of our algorithm on the final path. Running at 25 frames per second, on average we obtain a deviation of less than 7.78 mm per frame. The largest deviations correspond to segments 13 and 14, which are when the agent is turning sharply (Figure 8).

Figure 8: Deviations in mm for each segment of the path shown in Figure 7.

In Figure 9 we can see that deviations are larger at turns and reach maximum values when there are repulsion forces between agents (for example at doors, where agents congregate). Most of the time the deviation stays under 1 cm per frame.

Figure 9: Paths for 20 agents, showing the deviation with a color gradient scale.

6. Conclusions

We present a realistic, yet computationally inexpensive, method to achieve natural animation without foot-sliding for crowds. Our goal was to free the crowd simulation module from the computational work of achieving natural looking animations, so that it can focus instead on producing realistic crowd behavior. Natural looking results can be obtained with a minimal library of animation clips, and the method can be integrated with any crowd simulation software.

As our model requires a foot on the floor to calculate the root displacement, it has some limitations. Currently we cannot handle running, where both feet are in the air during certain frames. We plan to address this problem in future work.

The library employed for this work was hand created, and thus our original animations suffer from a rigidity which affects the overall look of the crowd. Since the quality of the final crowd animation depends strongly on the set of animation clips available, having time aligned motion capture animation clips would achieve more natural looking results.

7. Acknowledgements

This research has been funded by the Spanish Government grant TIN2010-20590-C0-01 and the ERC advanced grant TRAVERSE 227985. A. Beacco is also supported by grant FPU-AP2009-2195 (Spanish Ministry of Education).

References

[1] S. Chenney. "Flow Tiles". In Proc. ACM Symposium on Computer Animation, 233–242. 2004.
[2] K. Grochow, S. L. Martin, A. Hertzmann, and Z. Popović. "Style-based Inverse Kinematics". ACM Transactions on Graphics, 23(3), 522–531. 2004.
[3] Q. Gu and Z. Deng. "Context-Aware Motion Diversification for Crowd Simulation". IEEE Computer Graphics and Applications. 2010.
[4] D. Helbing, I. Farkas, and T. Vicsek. "Simulating Dynamical Features of Escape Panic". Nature, 407, 487–490. 2000.
[5] L. Ikemoto, O. Arikan, and D. Forsyth. "Knowing When to Put Your Foot Down". In Proc. ACM Symposium on Interactive 3D Graphics and Games (I3D '06), 49–53. 2006.
[6] L. Kovar, M. Gleicher, and F. Pighin. "Motion Graphs". ACM Transactions on Graphics, 21(3), 473–482. 2002.
[7] L. Kovar, J. Schreiner, and M. Gleicher. "Footskate Cleanup for Motion Capture Editing". In Proc. ACM Symposium on Computer Animation, 97–104. 2002.
[8] M. Lau and J. Kuffner. "Precomputed Search Trees: Planning for Interactive Goal-Driven Animation". In Proc. ACM Symposium on Computer Animation, 299–308. 2006.
[9] J. Maïm, B. Yersin, J. Pettré, and D. Thalmann. "YaQ: An Architecture for Real-Time Navigation and Rendering of Varied Crowds". IEEE Computer Graphics and Applications, 29(4), 44–53. 2009.
[10] S. Ménardais, R. Kulpa, F. Multon, and B. Arnaldi. "Synchronization for Dynamic Blending of Motions". In Proc. ACM Symposium on Computer Animation, 325–336. 2004.
[11] T. Mukai and S. Kuriyama. "Geostatistical Motion Interpolation". ACM Transactions on Graphics, 24(3), 1062–1070. 2005.
[12] N. Pelechano, J. M. Allbeck, and N. I. Badler. "Controlling Individual Agents in High-Density Crowd Simulation". In Proc. ACM Symposium on Computer Animation, 99–108. 2007.
[13] C. Reynolds. "Flocks, Herds, and Schools: A Distributed Behavioral Model". In Proc. ACM SIGGRAPH, 25–34. 1987.
[14] M. da Silva, Y. Abe, and J. Popović. "Simulation of Human Motion Data Using Short-Horizon Model-Predictive Control". Computer Graphics Forum, 27(2), 371–380. 2008.
[15] B. Spanlang. "HALCA: Hardware Accelerated Library for Character Animation". Technical report, Event Lab, Universitat de Barcelona. event-lab.org. 2009.
[16] A. Sud, E. Andersen, S. Curtis, M. Lin, and D. Manocha. "Real-time Path Planning for Virtual Agents in Dynamic Environments". In Proc. IEEE Virtual Reality, 91–98. 2007.
[17] F. Tecchia, C. Loscos, R. Conroy, and Y. Chrysanthou. "Agent Behavior Simulator (ABS): A Platform for Urban Behavior Development". In Proc. ACM Games Technology Conference, 17–21. 2001.
[18] A. Treuille, S. Cooper, and Z. Popović. "Continuum Crowds". ACM Transactions on Graphics (Proc. SIGGRAPH), 1160–1168. 2006.
[19] A. Treuille, Y. Lee, and Z. Popović. "Near-optimal Character Animation with Continuous Control". ACM Transactions on Graphics (Proc. SIGGRAPH), 26(3), Article 7. 2007.
[20] A. Witkin and Z. Popović. "Motion Warping". In Proc. SIGGRAPH '95, 105–108. 1995.
[21] C. C. Wu and V. Zordan. "Goal-Directed Stepping with Momentum Control". In Proc. ACM Symposium on Computer Animation. 2010.
[22] K. K. Yin, K. Loken, and M. van de Panne. "SIMBICON: Simple Biped Locomotion Control". ACM Transactions on Graphics, 26(3), Article 105. 2007.
[23] L. Zhao and A. Safonova. "Achieving Good Connectivity in Motion Graphs". In Proc. ACM Symposium on Computer Animation, 139–152. 2008.