Avatar Locomotion in Crowd Simulation
N. Pelechano, U. Politècnica de Catalunya, [email protected]
B. Spanlang, U. de Barcelona, [email protected]
A. Beacco, U. Politècnica de Catalunya, [email protected]

Abstract

This paper presents an Animation Planning Mediator (APM) designed to synthesize animations efficiently for virtual characters in real time crowd simulation. From a set of animation clips, the APM selects the most appropriate and modifies the skeletal configuration of each character to satisfy desired constraints (e.g. eliminating foot-sliding or restricting upper body torsion), while still providing natural looking animations. We use a hardware accelerated character animation library to blend animations, increasing the number of possible locomotion types. The APM allows the crowd simulation module to maintain control of path planning, collision avoidance and response. A key advantage of our approach is that the APM can be integrated with any crowd simulator working in continuous space. We show visual results achieved in real time for several hundred agents, as well as the quantitative accuracy of the method.

Keywords: Crowd animation, Foot sliding.

1. Introduction

Crowd simulation in real time for computer graphics applications, such as training and video games, requires not only algorithms for agent navigation in large virtual environments, avoiding obstacles and other agents, but also the rendering of highly complex scenes with avatars that enhance realism. Either of these steps can become a major bottleneck for real time simulation, so trade-offs between accuracy and speed are necessary. Simulating human motion accurately whilst keeping within constraints is not an easy task, and although many techniques have been developed for synthesizing the motion of one agent, they are not easily extended to large numbers of agents simulated in real time.

To achieve natural looking crowds, most of the current work in this area uses avatars with a limited number of animation clips. In most cases, problems arise when the crowd simulator module updates the position of each agent's root without taking into account the movement of the feet. This yields unnatural simulations where the virtual humans appear to be 'skating' in the environment, an artifact known as 'foot sliding'. Other problems emerge from the lack of coherence between the orientation of the geometric representation of the agents and their direction of movement. As we increase model sophistication through enhanced path selection, avoidance, rendering, and the inclusion of more behaviors, these artifacts become more noticeable.

To avoid these problems, most approaches either perform inverse kinematics, which implies a computational cost too high for large crowd simulations in real time, or let the animation clip being played drive the root position, which limits movements and speeds.

In this paper we present an Animation Planning Mediator (APM): a highly efficient animation synthesizer that can be seamlessly integrated with a crowd simulation system. The APM selects the parameters to feed a motion synthesizer while feeding back to the crowd simulation module the updates required to guarantee consistency. Even when only a small set of animation clips is available, our technique allows a large and continuous variety of movement. It can be used with any crowd simulation software, since it is the crowd simulation module that drives the movement of the virtual agents; our module limits its work to adjusting the root displacement and the skeletal state.
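The 'skating' artifact described earlier in this introduction can be made concrete with a simple per-frame measure: while a foot is in contact with the ground, its horizontal world position should not move. The C++ sketch below is only an illustration; the contact threshold, the data layout and the accumulation scheme are assumptions of this example rather than the measure used in the paper.

```cpp
#include <cmath>
#include <cstdio>

// Minimal illustration of how foot sliding can be measured: while a foot is
// considered to be in ground contact (low height), its horizontal world
// position should not change between frames. Any horizontal displacement
// accumulated during contact is counted as sliding.
// The threshold and the per-frame data layout are assumptions of this sketch.
struct FootSample {
    float x, y, z;          // world-space position of the foot joint this frame
};

struct FootSkateMeter {
    bool  wasInContact = false;
    float prevX = 0.0f, prevZ = 0.0f;
    float totalSlide = 0.0f;           // accumulated sliding distance (meters)

    // heightThreshold: a foot below this height counts as touching the ground.
    void addFrame(const FootSample& s, float heightThreshold = 0.05f) {
        const bool inContact = s.y < heightThreshold;
        if (inContact && wasInContact) {
            const float dx = s.x - prevX;
            const float dz = s.z - prevZ;
            totalSlide += std::sqrt(dx * dx + dz * dz);  // drift while planted
        }
        wasInContact = inContact;
        prevX = s.x;
        prevZ = s.z;
    }
};

int main() {
    // Toy trajectory: a foot that stays on the ground but drifts forward,
    // as happens when the root is translated faster than the animation steps.
    FootSkateMeter meter;
    for (int frame = 0; frame < 30; ++frame) {
        FootSample s{0.01f * frame /* drifting x */, 0.02f /* on the ground */, 0.0f};
        meter.addFrame(s);
    }
    std::printf("accumulated foot sliding: %.3f m\n", meter.totalSlide);
    return 0;
}
```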
2. Related Work

We can classify crowd simulation models into two approaches. The first encompasses those models that calculate the movement of agents in a virtual environment without taking into consideration the underlying animation. The second is defined by those that have a core set of animation clips that they play back, or blend between, to move from one point to another.

The first group suffers from an artifact known as foot sliding, since the position of the characters is updated without considering the foot position. Notice this is not exactly the same problem addressed by other works removing motion capture noise [5]. In this group we have many examples using different crowd simulation models, such as social forces [4], rule-based models [13], cellular automata [17], flow tiles [1], roadmaps [16], continuum dynamics [18] and force models parameterized by psychological and geometrical rules [12].

The second approach puts effort into avoiding foot sliding while limiting the number of possible movements for each agent. Lau and Kuffner [8] introduced pre-computed search trees for planning interactive goal-driven animation. These models do not perform motion blending, are often limited to a small graph of animations and play one animation clip after another, thus moving the agent according to the root movement in each animation clip. As such they do not perform well in interactive, dynamic environments, especially with dense crowds where collision response forces have an impact.

Recent trends in character animation include driving physical simulations by motion capture data or using machine learning to parameterize motion for simplified interactive control. Examples are inverse kinematics and motion blending based on Gaussian process statistics or geostatistics of motion capture data [2][11]. Such techniques avoid foot sliding but are computationally more expensive.

Through interpolation and concatenation of motion clips, new natural looking animations can be created [20]. Kovar et al. introduced motion graphs [6]. Zhao et al. extended them to improve connectivity and smooth transitions [23]. These techniques can avoid foot sliding by using the root velocity of the original motion data, but they require a large database of motion capture data to allow for interactive changes of walking speed and orientation. Ménardais et al. were able to synchronize and adapt different clips without motion graphs [10].

Proportional derivative controllers and optimization techniques are used [22][14] to drive physically simulated characters. Goal-directed steps can perform controlled navigation [21]. While such techniques show very impressive results for a single character in real time, their computational costs mean they are not suitable for real time large crowd simulations.

Kovar et al. [7] presented an online algorithm to clean up foot sliding artifacts of motion capture data. The technique focuses on minute details and is therefore computationally not suitable for large real time crowds.

Treuille et al. [19] generated character animations with near-optimal controllers using a low-dimensional basis representation. This approach also uses graphs but, unlike previous models, blending can occur between any two clips of motion. They also avoid foot sliding by re-rooting the skeletons to the feet and specifying constraint frames, but their method requires hundreds of animation clips, which are very time consuming to gather.

Recently, Gu and Deng [3] increased the variety and realism, creating new stylized motions from a small motion capture dataset. Maïm et al. [9] apply linear blending to animations selected based on the agent's velocity and morphology, achieving nice animations for crowds but without eliminating foot sliding.

3. The Framework

The framework employed for this work performs a feedback loop where the APM acts as the communication channel between a crowd simulation module and a character animation and rendering module. The outline of this framework is shown in Figure 1.

Figure 1: Framework

For each frame, the crowd simulation module, CS, calculates the position p, velocity v, and desired orientation of the torso θ. This information is then passed to the APM in order to select the parameters to be sent to the character animation module, CA, which will provide the next character's pose, P. Each pose is a vector specifying all joint positions in a kinematic skeleton.

The APM calculates the next synthesized animation Si, which is described by the tuple {Ai, dt, b, P, v, θ} (see Table 1).

p    Agent root position.
v    Agent velocity.
θ    Angle indicating the torso orientation in the x-z plane.
Ai   Animation clip for agent i.
dt   Differential of time for blending animation clip Ai.
b    Blending factor when changing animation clip, i.e. when Ai(t) ≠ Ai(t-1), where t indicates time.
P    Pose given by the skeletal state of the agent.

Table 1: Input/output variables of the APM

The APM may need to slightly adjust the agent's position and velocity direction in order to guarantee that the animations rendered are smooth and continuous. It is thus essential that the crowd simulation model employed works in continuous space and allows for updates of the position and velocity of each agent at any given time, so that consistency between the two modules is maintained. Since the velocity of an agent may change constantly, the desired torso orientation θ is obtained by filtering: through filtering we can simulate an agent that moves with a slight zigzag effect, while the torso of the rendered character moves in a constant direction.
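The per-frame exchange just described can be pictured as a small record per agent holding the Table 1 variables, updated by the CS, interpreted by the APM and handed to the CA. The following C++ sketch shows that loop in schematic form; the type names, the clip-selection rule and the exponential smoothing used to filter the torso angle are illustrative assumptions, not the actual module interfaces.

```cpp
#include <cmath>
#include <vector>

// Per-agent data exchanged between the crowd simulator (CS), the Animation
// Planning Mediator (APM) and the character animation module (CA).
// Field names follow Table 1; everything else is an assumption of this sketch.
struct Vec2 { float x, z; };

struct AgentState {
    Vec2  p;        // root position in the x-z plane
    Vec2  v;        // velocity computed by the crowd simulator
    float theta;    // torso orientation in the x-z plane (radians)
    int   clip;     // Ai: index of the animation clip currently selected
    float dt;       // time advance used to sample/blend clip Ai
    float b;        // blending factor, used when the clip changes
};

// Hypothetical filter for the torso orientation: the raw heading follows the
// (possibly zigzagging) velocity, while the rendered torso turns gradually.
// An exponential moving average on the heading angle is one simple choice.
float filterTorsoAngle(float previousTheta, const Vec2& v, float smoothing = 0.1f) {
    const float kPi = 3.14159265f;
    const float heading = std::atan2(v.x, v.z);    // instantaneous heading
    float delta = heading - previousTheta;
    while (delta >  kPi) delta -= 2.0f * kPi;      // wrap to [-pi, pi]
    while (delta < -kPi) delta += 2.0f * kPi;
    return previousTheta + smoothing * delta;      // turn only a little per frame
}

// One frame of the feedback loop for all agents. The CS step, the clip
// selection and the pose synthesis below are stand-ins for the real modules.
void simulateFrame(std::vector<AgentState>& agents, float frameTime) {
    for (AgentState& a : agents) {
        // CS: advance the root with the current velocity (placeholder for a
        // full crowd simulator with path planning and collision avoidance).
        a.p.x += a.v.x * frameTime;
        a.p.z += a.v.z * frameTime;

        // APM: derive the animation parameters from the simulated state.
        a.theta = filterTorsoAngle(a.theta, a.v);
        const float speed = std::sqrt(a.v.x * a.v.x + a.v.z * a.v.z);
        a.clip = speed > 1.5f ? 1 : 0;   // e.g. 0 = walk clip, 1 = jog clip
        a.dt   = frameTime;
        a.b    = 0.2f;                   // blend weight applied when the clip changed

        // CA: here the animation library would synthesize the pose P, and the
        // APM would feed any root correction back to the CS before the next frame.
    }
}

int main() {
    std::vector<AgentState> agents(3, AgentState{{0.0f, 0.0f}, {1.0f, 0.5f}, 0.0f, 0, 0.0f, 0.0f});
    for (int frame = 0; frame < 60; ++frame) simulateFrame(agents, 1.0f / 60.0f);
    return 0;
}
```

The point of the sketch is the direction of the data flow: the crowd simulator remains responsible for where agents go, while the mediator only converts that displacement into animation parameters and, where needed, small corrections.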
For the animation and visualization of avatars (CA) we are using a hardware accelerated library for character animation (HALCA) [15]. Its core consists of a motion mixer and an avatar visualization engine. Direct access to properties such as the duration, frame rate, and skeletal states of an animation is provided to the hosting application. Such information can be useful to compute, for example, the actual walking speed of a character when animated. Among the functionalities provided are blending, morphing, and efficient access to and manipulation of the whole skeletal state.

The CA contains a motion synthesizer which can provide a large variety of continuous motion from a small set of animations by allowing direct manipulation of the skeleton configuration. To create the library of animations we decided to use hand created animations, although motion capture data could also be used after some pre-processing.

4. Animation Planning Mediator

To achieve realistic animation for large crowds from a small set of animation clips, we need to synthesize new motions.
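A common way to obtain new motions from only a few clips is to blend between them according to the speed requested by the crowd simulator, so that the root displacement implied by the blend matches the displacement being simulated. The C++ sketch below illustrates this idea only; the clip representation, the linear interpolation of joint angles and the speed-to-weight mapping are assumptions of this example, not the APM's synthesis algorithm. In practice the clips' gait phases must also be synchronized before blending, which is omitted here.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Toy representation of an animation clip: the average root speed of the
// clip and a few joint angles sampled at the current gait phase.
// Real clips store full skeletal poses per frame; this is only a sketch.
struct Clip {
    float rootSpeed;                 // meters per second covered by the clip
    std::vector<float> jointAngles;  // one angle per joint at the current phase
};

// Blend two locomotion clips so that the synthesized motion matches a desired
// walking speed. The weight is chosen so the blended root speed equals the
// requested speed, and joint angles are interpolated with the same weight.
Clip blendForSpeed(const Clip& slow, const Clip& fast, float desiredSpeed) {
    const float w = std::clamp(
        (desiredSpeed - slow.rootSpeed) / (fast.rootSpeed - slow.rootSpeed),
        0.0f, 1.0f);

    Clip out;
    out.rootSpeed = (1.0f - w) * slow.rootSpeed + w * fast.rootSpeed;
    out.jointAngles.resize(slow.jointAngles.size());
    for (size_t j = 0; j < slow.jointAngles.size(); ++j) {
        out.jointAngles[j] = (1.0f - w) * slow.jointAngles[j] + w * fast.jointAngles[j];
    }
    return out;
}

int main() {
    // Two hand-made clips: a slow walk and a fast walk, each with three joints.
    Clip slowWalk{1.0f, {0.10f, -0.20f, 0.05f}};
    Clip fastWalk{2.0f, {0.25f, -0.40f, 0.15f}};

    // The crowd simulator requests 1.4 m/s; the blend covers exactly that speed,
    // so the root displacement and the leg motion stay consistent.
    Clip result = blendForSpeed(slowWalk, fastWalk, 1.4f);
    std::printf("blended root speed: %.2f m/s\n", result.rootSpeed);
    return 0;
}
```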