
Rekall: Specifying Video Events using Compositions of Spatiotemporal Labels

Daniel Y. Fu, Will Crichton, James Hong, Xinwei Yao, Haotian Zhang, Anh Truong, Avanika Narayan, Maneesh Agrawala, Christopher Ré, Kayvon Fatahalian
Stanford University
{danfu, wcrichto, james.hong, xinwei.yao, haotianz, anhlt92, avanika, maneesh, chrismre, kayvonf}@cs.stanford.edu

ABSTRACT

Many real-world video analysis applications require the ability to identify domain-specific events in video, such as interviews and commercials in TV news broadcasts, or action sequences in film. Unfortunately, pre-trained models to detect all the events of interest in video may not exist, and training new models from scratch can be costly and labor-intensive. In this paper, we explore the utility of specifying new events in video in a more traditional manner: by writing queries that compose outputs of existing, pre-trained models. To write these queries, we have developed REKALL, a library that exposes a data model and programming model for compositional video event specification. REKALL represents video annotations from different sources (object detectors, transcripts, etc.) as spatiotemporal labels associated with continuous volumes of spacetime in a video, and provides operators for composing labels into queries that model new video events. We demonstrate the use of REKALL in analyzing video from cable TV news broadcasts, films, static-camera vehicular video streams, and commercial autonomous vehicle logs. In these efforts, domain experts were able to quickly (in a few hours to a day) author queries that enabled the accurate detection of new events (on par with, and in some cases much more accurate than, learned approaches) and to rapidly retrieve video clips for human-in-the-loop tasks such as video content curation and training data curation. Finally, in a user study, novice users of REKALL were able to author queries to retrieve new events in video given just one hour of query development time.

1. INTRODUCTION

Modern machine learning techniques can robustly annotate large video collections with basic information about their audiovisual contents (e.g., face bounding boxes, people/object locations, time-aligned transcripts). However, many real-world video applications require exploring a more diverse set of events in video. For example, our recent efforts to analyze cable TV news broadcasts required models to detect interview segments and commercials. A film production team may wish to quickly find common segments such as action sequences to put into a movie trailer. An autonomous vehicle development team may wish to mine video collections for events like traffic light changes or obstructed left turns to debug the car's prediction and control systems. A machine learning engineer developing a new model for video analysis may search for particular scenarios to bootstrap model development or focus labeler effort.

Unfortunately, pre-trained models to detect these domain-specific events often do not exist, given the large number and diversity of potential events of interest. Training models for new events can be difficult and expensive, due to the large cost of labeling a training set from scratch, and the computation time and human skill required to then train an accurate model. We seek to enable more agile video analysis workflows where an analyst, faced with a video dataset and an idea for a new event of interest (but only a small number of labeled examples, if any), can quickly author an initial model for the event, immediately inspect the model's results, and then iteratively refine the model to meet the accuracy needs of the end task.

To enable these agile, human-in-the-loop video analysis workflows, we propose taking a more traditional approach: specifying novel events in video as queries that programmatically compose the outputs of existing, pre-trained models. Since heuristic composition does not require additional model training and is cheap to evaluate, analysts can immediately inspect query results as they iteratively refine queries to overcome challenges such as modeling complex event structure and dealing with imperfect source video annotations (missed object detections, misaligned transcripts, etc.).

To explore the utility of a query-based approach for detecting novel events of interest in video, we introduce REKALL, a library that exposes a data model and programming model for compositional video event specification. REKALL adapts ideas from multimedia databases [4, 15, 23, 31, 32, 36, 40] to the modern video analysis landscape, where using the outputs of modern machine learning techniques allows for more powerful and expressive queries, and adapts ideas from complex event processing systems for temporal data streams [8, 16, 25] to the spatiotemporal domain of video.

The primary technical challenge in building REKALL was defining the appropriate abstractions and compositional primitives for users to write queries over video. In order to compose video annotations from multiple data sources that may be sampled at different temporal resolutions (e.g., a car detection on a single frame from a deep neural network, the duration of a word over half a second in a transcript), REKALL's data model adopts a unified representation of multi-modal video annotations, the spatiotemporal label, which is associated with a continuous volume of spacetime in a video. REKALL's programming model uses hierarchical composition of these labels to express complex event structure and define increasingly higher-level video events.
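To make the data model concrete, the following is a minimal sketch of the idea behind a spatiotemporal label: a continuous volume of spacetime (a time interval plus a spatial bounding box) carrying an arbitrary payload. The class and field names here are assumptions made for this sketch, not REKALL's actual API.

    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class Label:
        """A hypothetical spatiotemporal label: a continuous volume of
        spacetime in a video, plus an arbitrary payload. Illustrative
        only; not REKALL's actual class definitions."""
        t1: float          # start time (seconds)
        t2: float          # end time (seconds)
        x1: float = 0.0    # spatial bounds, normalized to [0, 1]
        x2: float = 1.0
        y1: float = 0.0
        y2: float = 1.0
        payload: Any = None

    # A face detection occupies a single frame in time and a sub-region
    # of that frame; a transcript word spans roughly half a second but
    # (by convention) the full frame. Both fit the same representation.
    face = Label(t1=12.5, t2=12.5, x1=0.3, x2=0.5, y1=0.2, y2=0.6,
                 payload={"name": "Bernie"})
    word = Label(t1=12.3, t2=12.8, payload={"text": "THANK"})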
We demonstrate the effectiveness of compositional video event specification by implementing REKALL queries for a range of video analysis tasks drawn from four application domains: media bias studies of cable TV news broadcasts, cinematography studies of Hollywood films, analysis of static-camera vehicular video streams, and data mining of autonomous vehicle logs. In these efforts, REKALL queries developed by domain experts with little prior REKALL experience achieved accuracies on par with, and sometimes significantly better than, those of learning-based approaches (6.5 F1 points more accurate on average, and up to 26.1 F1 points more accurate for one task). REKALL queries also served as a key video data retrieval component of human-in-the-loop exploratory video analysis tasks.

Figure 1: Overview of a compositional video event specification workflow. An analyst pre-processes a video collection to extract basic annotations about its contents (e.g., face detections from an off-the-shelf deep neural network and audio-aligned transcripts). The analyst then writes and iteratively refines REKALL queries that compose these annotations to specify new events of interest, until query outputs are satisfactory for use by downstream analysis applications (e.g., analysis of interviews and commercials, data curation). The example query shown in the figure is:

    def bernie_and_jake(faces):
        bernie = faces.filter(face.name == "Bernie")
        jake = faces.filter(face.name == "Jake")
        bernie_and_jake = bernie.join(jake,
            predicate=time_overlaps,
            merge_op=span)
        return bernie_and_jake
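To show how the figure's composition pattern plays out, here is a hedged, self-contained rendering of the same query against the Label sketch above. The time_overlaps predicate and span merge operation are simplified stand-ins written for this sketch, not REKALL's actual operators.

    def time_overlaps(a: Label, b: Label) -> bool:
        """True if the two labels' time intervals intersect."""
        return a.t1 <= b.t2 and b.t1 <= a.t2

    def span(a: Label, b: Label) -> Label:
        """Merge two labels into one covering their combined time span."""
        return Label(t1=min(a.t1, b.t1), t2=max(a.t2, b.t2),
                     payload=(a.payload, b.payload))

    def bernie_and_jake(faces: list[Label]) -> list[Label]:
        """Pair each Bernie detection with each temporally overlapping
        Jake detection, merging every matching pair into a single label
        spanning both (a join with a predicate and a merge op)."""
        bernie = [f for f in faces if f.payload["name"] == "Bernie"]
        jake = [f for f in faces if f.payload["name"] == "Jake"]
        return [span(b, j) for b in bernie for j in jake
                if time_overlaps(b, j)]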
Since our goal is to enable analysts to quickly retrieve novel events in video, we also evaluate how well users are able to formulate REKALL queries for new events in a user study. We taught participants how to use REKALL with a one-hour tutorial, and then gave them one hour to write a REKALL query to detect empty parking spaces given the outputs of an off-the-shelf object detector. Users with sufficient programming experience were able to write REKALL queries to express complex spatiotemporal event structures; these queries, after some manual tuning (changing a single line of code) by an expert REKALL user to account for failures in the object detector, achieved near-perfect accuracies (average precision scores above 94).

To summarize, in this paper we make the following contributions:

• We propose compositional video event specification as a human-in-the-loop approach to rapidly detecting novel events of interest in video.

• We introduce REKALL, a library that exposes a data model and programming model for compositional video event specification.

Some supplemental videos can be found at http://www.danfu.org/projects/rekall-tech-report/.

2. AN ANALYSIS EXAMPLE

To better understand the thought process underlying our video analysis tasks, consider a situation where an analyst, seeking to understand sources of bias in TV political coverage, wishes to tabulate the total time spent interviewing a political candidate in a large collection of TV news video. Performing this analysis requires identifying video segments that contain interviews of the candidate. Since extracting TV news interviews is a unique task, we assume a pre-trained computer vision model is not available to the analyst. However, it is reasonable to expect the analyst does have access to widely available tools for detecting and identifying faces in the video, and to the video's time-aligned text transcripts.

Common knowledge of TV news broadcasts suggests that interview segments tend to feature shots containing faces of the candidate and the show's host framed together, interleaved with headshots of just the candidate. Therefore, a first try at an interview detector query might attempt to find segments featuring this temporal pattern of face detections; one possible sketch of such a first attempt follows below. Refinements to this initial query can then be driven by iteratively inspecting its results.
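As a rough illustration only, here is what that first attempt might look like using the Label, time_overlaps, and span helpers sketched in Section 1. The coalesce helper, the payload names "candidate" and "host", and the 30-second gap threshold are all assumptions made for this sketch (which also ignores spatial framing), not part of REKALL.

    def coalesce(labels: list[Label], gap: float = 0.0) -> list[Label]:
        """Merge labels whose time intervals overlap, or fall within
        `gap` seconds of each other, into contiguous segments."""
        merged: list[Label] = []
        for l in sorted(labels, key=lambda l: l.t1):
            if merged and l.t1 <= merged[-1].t2 + gap:
                merged[-1] = span(merged[-1], l)
            else:
                merged.append(l)
        return merged

    def candidate_interviews(faces: list[Label]) -> list[Label]:
        """First-try interview detector: find moments where the candidate
        and host are framed together, then stitch those moments and the
        candidate's other appearances into contiguous segments."""
        candidate = [f for f in faces if f.payload["name"] == "candidate"]
        host = [f for f in faces if f.payload["name"] == "host"]
        together = [span(c, h) for c in candidate for h in host
                    if time_overlaps(c, h)]
        # Tolerate short gaps (e.g., cutaways) between shots.
        return coalesce(together + candidate, gap=30.0)

The analyst's tabulation is then just the sum of segment durations, e.g., sum(l.t2 - l.t1 for l in candidate_interviews(faces)).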