Steering Behaviors for Autonomous Cameras

Quentin Galvane, Marc Christie, Rémi Ronfard, Chen-Kim Lim, Marie-Paule Cani

To cite this version: Quentin Galvane, Marc Christie, Rémi Ronfard, Chen-Kim Lim, Marie-Paule Cani. Steering Behaviors for Autonomous Cameras. MIG 2013 - ACM SIGGRAPH conference on Motion in Games, Nov 2013, Dublin, Ireland. pp.93-102, 10.1145/2522628.2522899. hal-00862713, https://hal.inria.fr/hal-00862713, submitted on 17 Sep 2013.

Quentin Galvane (INRIA / LJK, Grenoble, France; e-mail: [email protected]), Marc Christie (University of Rennes I, France), Rémi Ronfard (INRIA / LJK, Grenoble, France), Chen-Kim Lim (INRIA / Universiti Sains Malaysia), Marie-Paule Cani (INRIA / LJK, Grenoble, France)

Figure 1: Examples of shots captured by our autonomous cameras: (i) the initial interactions between different ethnic groups, (ii) two individual agents interacting with each other, (iii) the workers aware of the arrival of the boat, (iv) the workers stopping their current interactions and rushing to the pier.

Abstract

The automated computation of appropriate viewpoints in complex 3D scenes is a key problem in a number of computer graphics applications. In particular, crowd simulations create visually complex environments with many simultaneous events, for which the computation of relevant viewpoints remains an open issue. In this paper, we propose a system which enables the conveyance of events occurring in complex crowd simulations. The system relies on Reynolds' model of steering behaviors to control and locally coordinate a collection of camera agents, similar to a group of reporters. In our approach, camera agents are either in a scouting mode, searching for relevant events to convey, or in a tracking mode, following one or more unfolding events. The key benefit, in addition to the simplicity of the steering rules, lies in the capacity of the system to adapt to the evolving complexity of crowd simulations by self-organizing the camera agents to track interesting events.

CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Animation;

Keywords: camera control, boids, steering behavior

1 Introduction

Crowd simulations generate complex 3D scenes that include both local events (individual actions of crowd members) and larger-scale events emerging from many individual behaviors, such as a group of individuals simultaneously moving from one location to another. Conveying the resulting animations to spectators is therefore a challenge, and automatic camera control methods would be a natural way to address it. Unfortunately, previous camera control methods focus on maintaining the visibility either of a small number of targets (see [Halper et al. 2001; Oskam et al. 2009; Christie et al. 2012b]), or of a large number of targets but from a single viewpoint [Vo et al. 2012]. Other methods are designed for highlighting the motions of isolated characters or of predefined groups of characters [Assa et al. 2008; Lee et al. 2012; Assa et al. 2010], or focus on high-level aspects by encoding elements of cinematographic rules which are only applicable to a small number of characters (see [Lino et al. 2010]). Therefore, capturing a full crowd simulation, with the variety of multi-scale events taking place, calls for the design of new methods specifically created for crowd simulations.

In this paper, we propose a novel approach for controlling multiple cameras with the goal of conveying as many as possible of the events occurring in a crowd simulation. The original idea is to simulate a team of reporters trying to capture the diversity of behaviors in an environment encompassing both local events (interactions between small numbers of characters) and emerging events that involve many characters. Our approach to animating cameras draws its inspiration from Reynolds' steering behaviors [Reynolds 1999]: each camera is animated separately as an autonomous agent in our system, which ensures smooth displacements and rotations of the cameras. Our camera behaviors are driven by the necessity to convey both local and emerging events while ensuring a proper distribution of camera agents around the events. More precisely, our camera agents automatically avoid static objects that obstruct the visibility of targeted events. In addition, camera-agent behaviors are specifically designed so that each of them either scouts the environment, searching for new events to shoot, or tracks existing events. While tracking an event, the steering behavior favors good views of the event; when scouting for an event, it favors events that are not already followed by other camera agents. We also set up specific interaction rules between camera agents, preventing them from tracking the same event from similar viewpoints: nearby camera agents are steered away from each other both in position and in orientation, and distant camera agents following the same event are steered to rotate around the event and take different views of it. All these desirable properties are obtained by allowing the camera agents to be steered horizontally (as in Reynolds' original paper) as well as vertically and rotationally (with pan and tilt angles).
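The scouting and tracking behaviors above combine Reynolds-style steering forces. As a minimal illustrative sketch (our own code, not the paper's implementation; function names and constants are assumptions), a camera agent's "seek" force (steering toward an event) and "separation" force (keeping viewpoints distinct) could look like:

```python
import math

def seek(pos, vel, target, max_speed=4.0, max_force=0.5):
    """Reynolds-style 'seek': steer the current velocity toward a target point."""
    desired = (target[0] - pos[0], target[1] - pos[1])
    d = math.hypot(*desired) or 1e-9
    desired = (desired[0] / d * max_speed, desired[1] / d * max_speed)
    steer = (desired[0] - vel[0], desired[1] - vel[1])
    s = math.hypot(*steer)
    if s > max_force:  # clamp the steering force magnitude
        steer = (steer[0] / s * max_force, steer[1] / s * max_force)
    return steer

def separate(pos, neighbors, radius=5.0, max_force=0.5):
    """Push a camera away from nearby cameras so they do not share a viewpoint."""
    fx = fy = 0.0
    for n in neighbors:
        dx, dy = pos[0] - n[0], pos[1] - n[1]
        d = math.hypot(dx, dy)
        if 0 < d < radius:
            fx += dx / (d * d)  # closer neighbors repel more strongly
            fy += dy / (d * d)
    s = math.hypot(fx, fy)
    if s > max_force:
        fx, fy = fx / s * max_force, fy / s * max_force
    return (fx, fy)
```

Each frame, an agent would sum such weighted forces, integrate them into its velocity, and clamp the result to a maximum speed; pan and tilt could be steered with analogous one-dimensional rules, as the paper's extension to rotational steering suggests.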
As a result, the user is provided with a gallery of relevant shots of the individual and emerging events occurring in the crowd simulation, among which he or she can select appropriate animation sequences.

The first key benefit of this approach lies in its simplicity. We show that a small range of simple steering behaviors is sufficient for creating quality camera motions while covering a wide range of the events occurring in the simulation. Behaviors are crafted in a way which is independent of the crowd simulation model. The second benefit lies in the adaptivity of the method: the same set of steering behaviors enables camera agents to adapt to a wide range of situations, from a small number of cameras in a simulation with many occurring events to a large number of camera agents with a small number of events. In crowd simulations, in which the individual and emergent behaviors of the characters are mostly unanticipated, these benefits are especially important.

2 Related work

The computer graphics research community has shown an increasing interest in techniques to automatically control virtual cameras in 3D environments. The complexity and unpredictable nature of interactive 3D animated scenes, coupled with the necessity to convey contents matching some viewpoint quality metrics, has triggered the study of novel techniques dedicated to virtual camera control.

A first and fundamental requirement of any camera control system is the ability to track (i.e. maintain visibility of) one or several targets. Techniques vary according to the knowledge available to the camera control system (known vs. unknown environments, known vs. unknown trajectories of targets) and to the static or dynamic nature of the 3D scene.

Results in robotics have greatly inspired techniques in virtual camera control. For example, [Oskam et al. 2009] perform a precomputation of visibility between all possible locations in a static environment and create a visibility graph that encodes this information. When tracking a single target, the camera relies on the visibility graph to plan the best path towards the target (following different criteria). Changes in environments are partially handled

[Lino et al. 2010], for example, presented a solution for automatically positioning a virtual camera in a 3D environment given the specification of visual properties to be satisfied (on-screen layout of subjects, vantage angles, visibility, scale). It then becomes easy to find optimal positions for the camera and to build a sequence of viewpoints conveying a set of events.

Another interesting feature when selecting a viewpoint is the amount of motion it may convey. Dedicated to the specific task of creating overviews of human motions (e.g. from mocap data), [Assa et al. 2008] cast the problem of camera control as an energy minimization process guided by the conjunction of external forces (describing the viewpoint quality at each location and time) and internal forces (enforcing smoothness on the generated path). As a result, the system generates a sequence of camera paths (with edits) using an optimization process on a potential field defined by the aggregation of forces. While purely based on character motion analysis and viewpoint quality heuristics, the process generates meaningful overviews of human motions without integrating semantic aspects. The approach was extended to multiple characters [Assa et al. 2010] to convey the motions of multiple humans in a real-time context. The authors consider a set of virtual cameras, animated in real time, that shoot multi-character motions. Starting with random camera motions or from simple heuristic motion rules, the system selects viewpoints that maximize a correlation measure between the motion of the characters in the scene and the projected motion of the characters on the screen (a strong correlation being a good indicator of viewpoint quality). In contrast to our work, this method offers no means of controlling a large set of virtual cameras in a complex environment.

Closer to our work, [Lee et al. 2012] proposed to automatically extract social events from the joint analysis of multiple character motions, e.g. related to trajectories and similarities in distance and direction of character motions.
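The visibility-graph planning attributed above to [Oskam et al. 2009] can be made concrete with a small sketch (our own illustration under assumed data structures, not that system's code): nodes are candidate camera positions, edges connect mutually visible positions with a traversal cost, and Dijkstra's algorithm plans a path to the target.

```python
import heapq

def camera_path(vis_graph, start, goal):
    """Dijkstra over a precomputed visibility graph.

    vis_graph maps each node to a list of (neighbor, cost) pairs for
    mutually visible camera positions; assumes goal is reachable."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in vis_graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Walk predecessors back from the goal to recover the path
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

A tracking camera would replan over this graph whenever its target moves, trading path cost against visibility, which is the role the precomputed graph plays in that approach.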
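The internal/external force formulation described for [Assa et al. 2008] can be illustrated with a toy path relaxation (a simplified sketch under our own assumptions, not the authors' optimizer): internal forces pull each path sample toward the midpoint of its neighbours (smoothness), while external forces follow the gradient of a viewpoint-quality field.

```python
def smooth_path(path, quality_grad, iters=100, w_int=0.5, w_ext=0.1, step=0.1):
    """Relax a camera path under internal (smoothness) and external
    (viewpoint-quality gradient) forces; endpoints stay fixed.

    quality_grad(x, y) returns the gradient of an assumed scalar
    viewpoint-quality field at a 2D sample position."""
    path = [list(p) for p in path]
    for _ in range(iters):
        for i in range(1, len(path) - 1):
            gx, gy = quality_grad(path[i][0], path[i][1])
            for k, g in enumerate((gx, gy)):
                mid = 0.5 * (path[i - 1][k] + path[i + 1][k])
                # internal force toward the neighbour midpoint,
                # external force up the quality gradient
                path[i][k] += step * (w_int * (mid - path[i][k]) + w_ext * g)
    return path
```

With a flat quality field, the relaxation simply straightens the path; a non-zero gradient bends it toward high-quality viewpoints, mirroring the balance of forces described in the text.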