The Director's Lens: An Intelligent Assistant for Virtual Cinematography
To cite this version: Christophe Lino, Marc Christie, Roberto Ranon, William Bares. The Director’s Lens: An Intelligent Assistant for Virtual Cinematography. ACM Multimedia, Nov 2011, Scottsdale, United States. hal-00646398.

HAL Id: hal-00646398, https://hal.inria.fr/hal-00646398. Submitted on 19 Feb 2013. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

The Director’s Lens: An Intelligent Assistant for Virtual Cinematography ∗

Christophe Lino, IRISA/INRIA Rennes, Campus de Beaulieu, 35042 Rennes Cedex, France ([email protected])
Marc Christie, IRISA/INRIA Rennes, Campus de Beaulieu, 35042 Rennes Cedex, France ([email protected])
Roberto Ranon, HCI Lab, University of Udine, via delle Scienze 206, 33100 Udine, Italy ([email protected])
William Bares, Millsaps College, 1701 North State St, Jackson MS 39210 ([email protected])

∗Area chair: Dick Bulterman

ABSTRACT
We present the Director’s Lens, an intelligent interactive assistant for crafting virtual cinematography using a motion-tracked hand-held device that can be aimed like a real camera. The system employs an intelligent cinematography engine that can compute, at the request of the filmmaker, a set of suitable camera placements for starting a shot. These suggestions represent semantically and cinematically distinct choices for visualizing the current narrative. In computing suggestions, the system considers established cinema conventions of continuity and composition along with the filmmaker’s previously selected suggestions, and also his or her manually crafted camera compositions, via a machine learning component that adapts shot editing preferences from user-created camera edits. The result is a novel workflow based on interactive collaboration of human creativity with automated intelligence that enables efficient exploration of a wide range of cinematographic possibilities, and rapid production of computer-generated animated movies.

Categories and Subject Descriptors
H.5.1 [Multimedia Information Systems]: Animations, Video

General Terms
Algorithms, Human Factors

Keywords
Virtual Cinematography, Motion-Tracked Virtual Cameras, Virtual Camera Planning

MM’11, November 28–December 1, 2011, Scottsdale, Arizona, USA. Copyright 2011 ACM 978-1-4503-0616-4/11/11 ...$10.00.

1. INTRODUCTION
Despite the numerous advances in the tools proposed to assist the creation of computer-generated animations, the task of crafting virtual camera work and edits for a sequence of 3D animation remains a time-consuming endeavor requiring skills in cinematography and 3D animation packages. Issues faced by animators encompass: (i) creative placement and movement of the virtual camera in terms of its position, orientation, and lens angle to fulfill desired communicative goals, (ii) compliance (when appropriate) with established conventions in screen composition and viewpoint selection, and (iii) compliance with continuity editing rules, which guide the way successive camera placements need to be arranged in time to effectively convey a sequence of events.

Motivated in part to reduce this human workload, research in automated virtual camera control has proposed increasingly sophisticated artificial intelligence algorithms (e.g. [10, 5, 9, 17, 2, 20]) that can generate, in real time, virtual camera work that mimics common textbook-style cinematic sequences. However, this combination of increased automation and decreased human input too often produces cinematography of little creative appeal or utility, since it minimizes the participation of filmmakers skilled in “taking ideas, words, actions, emotional subtext, tone and all other forms of non-verbal communication and rendering them in visual terms” [6]. Additionally, existing systems rely on pre-coded knowledge of cinematography and provide no facility to adapt machine-computed cinematography to examples of human-created cinematography.

This paper introduces the Director’s Lens, whose novel workflow is the first to combine the creative intelligence and skill of filmmakers with the computational power of an automated cinematography engine.

1.1 Overview
Our Director’s Lens system follows a recent trend in previsualization and computer-generated movies by including a motion-tracked hand-held device equipped with a small LCD screen that can be aimed like a real camera (see Figure 4) to move a corresponding virtual camera and create shots depicting events in a virtual environment. In the typical production workflow, the filmmaker would
film the same action from different points of view, and then later select, trim, and combine shots into sequences to ultimately create a finished movie. Our system instead proposes a workflow in which shooting and editing are combined into the same process. More specifically, after having filmed a shot with the motion-tracked device, the filmmaker decides where a cut should be introduced (i.e., trims the shot), and the system computes a set of suitable camera placements for starting the subsequent shot. These suggestions represent semantically and cinematically distinct choices for visualizing the current narrative, and can take into account consistency in cinematic continuity and style with respect to prior shots. The filmmaker can visually browse the set of suggestions (using the interface shown in Figure 1), possibly filter them with respect to cinematographic conventions, and select the one he or she likes best.

[Figure 1: Screenshot of Director’s Lens user interface in Explore Suggestions mode. Labeled interface elements: number of suggestions, shot length, suggestions with extreme long shot, suggestions with close-up, suggestions with high angle, suggestions with low camera angle, previous shot last frame, cinematographic filters, currently selected suggestion.]

Once a suggestion is selected, it can be refined by manually moving the virtual camera into a better position or angle, or by adjusting lens zoom, and shooting can start again. In addition, movies with shots composed of still cameras can be quickly generated with just keyboard and mouse.

The system exploits an annotated screenplay which provides the narrative context in the form of text descriptions of locations, subjects, and time-stamped actions, with links to the 3D models employed in the scene. The system uses the screenplay to provide on-screen information about the timing of actions and dialog while shooting, and to provide our cinematography engine with requisite knowledge about the 3D geometry and animation being shot. To generate the suggestions, the cinematography engine leverages both its knowledge of classical cinematography rules [3] and the knowledge it gains by analyzing the compositions created by the filmmaker.

1.2 Contributions and Outline
In the emergent Virtual Moviemaking approach [4], real-time 3D graphics and motion-tracked virtual cameras are used in all production phases, both to reduce the time needed for previsualization and to give creative team members immediate and direct control of the process, from pre-production to production. For example, designed virtual sets can be interactively explored to accurately plan shots and refine creative ideas or virtual props before actual shooting takes place.

Our system aims at improving Virtual Moviemaking processes by introducing, for the first time, the combination of manual control of the virtual camera with an intelligent assistant. This provides a filmmaker with a way to interactively and visually explore, while shooting, a wide variety of possible solutions for transitioning from one shot to another, including some perhaps not considered otherwise, and to instantly see how each alternative would look without having to walk and turn around with the motion-tracked device (or move a virtual camera around with keyboard and mouse). In addition, in the innovative production workflow we propose, shooting and editing are combined in a seamless way, thus enabling very rapid production of animated storyboards or rough movies for previsualization purposes.

Furthermore, a novice filmmaker could use the system as a hands-on learning tool to quickly create camera sequences that conform to the textbook guidelines of composition and continuity in cutting from one shot to the next.

Finally, our work constitutes the first effort towards automatically learning cinematography idioms from live human camera work, and therefore towards building systems that are able to learn from examples of human experts, instead of trying to pre-encode a limited number

the output often depends on carefully formulating the con-
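The interface described above lets the filmmaker browse machine-computed camera placements and filter them by cinematographic properties such as shot length and camera angle, where each placement is defined by a position, orientation, and lens angle. As a minimal illustrative sketch (the paper does not publish its engine's data model, so every class, field, and value here is a hypothetical assumption), such suggestions and filters might be represented as:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical sketch only: names and fields are illustrative assumptions,
# not the actual data model of the Director's Lens engine.

@dataclass
class CameraSuggestion:
    position: Tuple[float, float, float]     # camera position in the virtual set
    orientation: Tuple[float, float, float]  # yaw, pitch, roll in degrees
    lens_angle: float                        # field of view in degrees
    shot_length: str   # composition class, e.g. "close-up", "extreme-long"
    camera_angle: str  # viewpoint class, e.g. "low", "eye-level", "high"

def filter_suggestions(suggestions: List[CameraSuggestion],
                       shot_length: Optional[str] = None,
                       camera_angle: Optional[str] = None) -> List[CameraSuggestion]:
    """Keep only the suggestions matching the requested cinematographic
    filters, mirroring the filter controls visible in the interface."""
    kept = []
    for s in suggestions:
        if shot_length is not None and s.shot_length != shot_length:
            continue
        if camera_angle is not None and s.camera_angle != camera_angle:
            continue
        kept.append(s)
    return kept

# Example pool of machine-computed suggestions for the next shot.
pool = [
    CameraSuggestion((0.0, 1.7, 5.0), (0.0, 0.0, 0.0), 35.0, "close-up", "eye-level"),
    CameraSuggestion((10.0, 8.0, 20.0), (15.0, -20.0, 0.0), 60.0, "extreme-long", "high"),
    CameraSuggestion((2.0, 0.5, 4.0), (180.0, 10.0, 0.0), 40.0, "close-up", "low"),
]

print(len(filter_suggestions(pool, shot_length="close-up")))  # prints 2
```

Filtering here is a simple conjunction of attribute tests; the actual system additionally ranks candidates by continuity and learned editing preferences, which this sketch does not attempt to model.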