Rekall: Specifying Video Events Using Compositions of Spatiotemporal Labels

Total pages: 16

File type: PDF, size: 1020 KB

Daniel Y. Fu, Will Crichton, James Hong, Xinwei Yao, Haotian Zhang, Anh Truong, Avanika Narayan, Maneesh Agrawala, Christopher Ré, Kayvon Fatahalian
Stanford University
{danfu, wcrichto, james.hong, xinwei.yao, haotianz, anhlt92, avanika, maneesh, chrismre, kayvonf}@cs.stanford.edu

ABSTRACT

Many real-world video analysis applications require the ability to identify domain-specific events in video, such as interviews and commercials in TV news broadcasts, or action sequences in film. Unfortunately, pre-trained models to detect all the events of interest in video may not exist, and training new models from scratch can be costly and labor-intensive. In this paper, we explore the utility of specifying new events in video in a more traditional manner: by writing queries that compose outputs of existing, pre-trained models. To write these queries, we have developed REKALL, a library that exposes a data model and programming model for compositional video event specification. REKALL represents video annotations from different sources (object detectors, transcripts, etc.) as spatiotemporal labels associated with continuous volumes of spacetime in a video, and provides operators for composing labels into queries that model new video events. We demonstrate the use of REKALL in analyzing video from cable TV news broadcasts, films, static-camera vehicular video streams, and commercial autonomous vehicle logs. In these efforts, domain experts were able to quickly (in a few hours to a day) author queries that enabled the accurate detection of new events (on par with, and in some cases much more accurate than, learned approaches) and to rapidly retrieve video clips for human-in-the-loop tasks such as video content curation and training data curation. Finally, in a user study, novice users of REKALL were able to author queries to retrieve new events in video given just one hour of query development time.

1. INTRODUCTION

Modern machine learning techniques can robustly annotate large video collections with basic information about their audiovisual contents (e.g., face bounding boxes, people/object locations, time-aligned transcripts). However, many real-world video applications require exploring a more diverse set of events in video. For example, our recent efforts to analyze cable TV news broadcasts required models to detect interview segments and commercials. A film production team may wish to quickly find common segments such as action sequences to put into a movie trailer. An autonomous vehicle development team may wish to mine video collections for events like traffic light changes or obstructed left turns to debug the car's prediction and control systems. A machine learning engineer developing a new model for video analysis may search for particular scenarios to bootstrap model development or focus labeler effort.

Unfortunately, pre-trained models to detect these domain-specific events often do not exist, given the large number and diversity of potential events of interest. Training models for new events can be difficult and expensive, due to the large cost of labeling a training set from scratch, and the computation time and human skill required to then train an accurate model. We seek to enable more agile video analysis workflows where an analyst, faced with a video dataset and an idea for a new event of interest (but only a small number of labeled examples, if any), can quickly author an initial model for the event, immediately inspect the model's results, and then iteratively refine the model to meet the accuracy needs of the end task.

To enable these agile, human-in-the-loop video analysis workflows, we propose taking a more traditional approach: specifying novel events in video as queries that programmatically compose the outputs of existing, pre-trained models. Since heuristic composition does not require additional model training and is cheap to evaluate, analysts can immediately inspect query results as they iteratively refine queries to overcome challenges such as modeling complex event structure and dealing with imperfect source video annotations (missed object detections, misaligned transcripts, etc.).

To explore the utility of a query-based approach for detecting novel events of interest in video, we introduce REKALL, a library that exposes a data model and programming model for compositional video event specification. REKALL adapts ideas from multimedia databases [4,15,23,31,32,36,40] to the modern video analysis landscape, where using the outputs of modern machine learning techniques allows for more powerful and expressive queries, and adapts ideas from complex event processing systems for temporal data streams [8,16,25] to the spatiotemporal domain of video.

The primary technical challenge in building REKALL was defining the appropriate abstractions and compositional primitives for users to write queries over video. In order to compose video annotations from multiple data sources that may be sampled at different temporal resolutions (e.g., a car detection on a single frame from a deep neural network, the duration of a word over half a second in a transcript), REKALL's data model adopts a unified representation of multi-modal video annotations, the spatiotemporal label, that is associated with a continuous volume of spacetime in a video. REKALL's programming model uses hierarchical composition of these labels to express complex event structure and define increasingly higher-level video events.

We demonstrate the effectiveness of compositional video event specification by implementing REKALL queries for a range of video analysis tasks drawn from four application domains: media bias studies of cable TV news broadcasts, cinematography studies of Hollywood films, analysis of static-camera vehicular video streams, and data mining autonomous vehicle logs. In these efforts, REKALL queries developed by domain experts with little prior REKALL experience achieved accuracies on par with, and sometimes significantly better than, those of learning-based approaches (6.5 F1 points more accurate on average, and up to 26.1 F1 points more accurate for one task). REKALL queries also served as a key video data retrieval component of human-in-the-loop exploratory video analysis tasks.

Since our goal is to enable analysts to quickly retrieve novel events in video, we also evaluate how well users are able to formulate REKALL queries for new events in a user study. We taught participants how to use REKALL with a one-hour tutorial, and then gave them one hour to write a REKALL query to detect empty parking spaces given the outputs of an off-the-shelf object detector. Users with sufficient programming experience were able to write REKALL queries to express complex spatiotemporal event structures; these queries, after some manual tuning (changing a single line of code) by an expert REKALL user to account for failures in the object detector, achieved near-perfect accuracies (average precision scores above 94).

To summarize, in this paper we make the following contributions:

• We propose compositional video event specification as a human-in-the-loop approach to rapidly detecting novel events of interest in video.

• We introduce REKALL, a library that exposes a data model and programming model for compositional video event specification.

Some supplemental videos can be found at http://www.danfu.org/projects/rekall-tech-report/.

[Figure 1 diagram: a video collection is pre-processed into basic annotations (face detections; captions such as "3:15-3:16: BERNIE...", "5:18-5:20: THANK YOU...", "9:15-9:17: TODAY IN..."); the analyst iterates on REKALL queries, evaluates the query results, and, once they are satisfactory, passes events such as interviews, commercials, and curated data to downstream applications. The example REKALL query shown in the figure is:]

    def bernie_and_jake(faces):
        bernie = faces.filter(face.name == "Bernie")
        jake = faces.filter(face.name == "Jake")
        bernie_and_jake = bernie.join(jake,
            predicate = time_overlaps,
            merge_op = span)
        return bernie_and_jake

Figure 1: Overview of a compositional video event specification workflow. An analyst pre-processes a video collection to extract basic annotations about its contents (e.g., face detections from an off-the-shelf deep neural network and audio-aligned transcripts). The analyst then writes and iteratively refines REKALL queries that compose these annotations to specify new events of interest, until query outputs are satisfactory for use by downstream analysis applications.

2. AN ANALYSIS EXAMPLE

To better understand the thought process underlying our video analysis tasks, consider a situation where an analyst, seeking to understand sources of bias in TV political coverage, wishes to tabulate the total time spent interviewing a political candidate in a large collection of TV news video. Performing this analysis requires identifying video segments that contain interviews of the candidate. Since extracting TV news interviews is a unique task, we assume a pre-trained computer vision model is not available to the analyst. However, it is reasonable to expect an analyst does have access to widely available tools for detecting and identifying faces in the video, and to the video's time-aligned text transcripts. Common knowledge of TV news broadcasts suggests that interview segments tend to feature shots containing faces of the candidate and the show's host framed together, interleaved with headshots of just the candidate. Therefore, a first try at an interview detector query might attempt to find segments featuring this temporal pattern of face detections. Refinements to this initial query
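As a concrete illustration of the first-try query described above, the following is a minimal plain-Python sketch of the composition such a query expresses, written without the REKALL library. The record fields ('start', 'end', 'name'), the 30-second merging gap, and the greedy coalescing heuristic are assumptions made for illustration; they are not the paper's actual query or REKALL's API.

    def overlaps_or_adjacent(a, b, gap=30.0):
        # True if the two time intervals overlap or fall within `gap` seconds of each other.
        return a['start'] <= b['end'] + gap and b['start'] <= a['end'] + gap

    def merge(a, b):
        # Smallest interval spanning both inputs.
        return {'start': min(a['start'], b['start']), 'end': max(a['end'], b['end'])}

    def candidate_interviews(faces, candidate='Bernie', host='Jake'):
        """Crude first pass: find candidate+host co-occurrences, then coalesce them,
        together with nearby candidate-only headshots, into interview segments."""
        cand = [f for f in faces if f['name'] == candidate]
        host_faces = [f for f in faces if f['name'] == host]
        together = [merge(c, h) for c in cand for h in host_faces
                    if overlaps_or_adjacent(c, h, gap=0.0)]
        segments = []
        for span in sorted(together + cand, key=lambda s: s['start']):
            if segments and overlaps_or_adjacent(segments[-1], span):
                segments[-1] = merge(segments[-1], span)
            else:
                segments.append({'start': span['start'], 'end': span['end']})
        return segments
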
Recommended publications
  • TFM 327 / 627 Syllabus V2.0
    TFM 327 / 627 Syllabus v2.0
    TFM 327 - FILM AND VIDEO EDITING / AUDIO PRODUCTION
    Instructor: Greg Penetrante
    OFFICE HOURS: By appointment – I’m almost always around in the evenings.
    E-MAIL: [email protected] (recommended) or www.facebook.com/gregpen
    PHONE: (619) 985-7715
    TEXT: Modern Post Workflows and Techniques – Scott Arundale & Tashi Trieu – Focal Press. Highly recommended but not required: The Film Editing Room Handbook, Hollyn, Norman, Peachpit Press.
    COURSE PREREQUISITES: TFM 314 or similar
    COURSE OBJECTIVES: You will study classical examples of editing techniques by means of video clips as well as selected readings and active lab assignments. You will also be able to describe and demonstrate modern post-production practices, which consist of the Digital Loader/Digital Imaging Technician (DIT), data management and digital dailies. You will comprehend, analyze and apply advanced practices in offline editing, online/conforming and color grading. You will be able to demonstrate proficiency in using non-linear editing software by editing individually assigned commercials, short narrative scenes and skill-based exercises. You will identify and analyze past, present and future trends in post-production. Students will also learn how to identify historically significant figures and their techniques as related to defining techniques and trends in post-production practices of television and film, and distill their accumulated knowledge of post-production techniques by assembling a final master project.
    COURSE DESCRIPTION: Film editing evolved from the process of physically cutting and taping together pieces of film, using a viewer such as a Moviola or Steenbeck to look at the results. This course approaches the concept of editing holistically as a process of artistic synthesis rather than strictly as a specialized technical skill.
  • Motion-Based Video Synchronization
    ActionSnapping: Motion-based Video Synchronization
    Jean-Charles Bazin and Alexander Sorkine-Hornung
    Disney Research

    Abstract. Video synchronization is a fundamental step for many applications in computer vision, ranging from video morphing to motion analysis. We present a novel method for synchronizing action videos where a similar action is performed by different people at different times and different locations with different local speed changes, e.g., as in sports like weightlifting, baseball pitch, or dance. Our approach extends the popular "snapping" tool of video editing software and allows users to automatically snap action videos together in a timeline based on their content. Since the action can take place at different locations, existing appearance-based methods are not appropriate. Our approach leverages motion information, and computes a nonlinear synchronization of the input videos to establish frame-to-frame temporal correspondences. We demonstrate our approach can be applied for video synchronization, video annotation, and action snapshots. Our approach has been successfully evaluated with ground truth data and a user study.

    1 Introduction
    Video synchronization aims to temporally align a set of input videos. It is at the core of a wide range of applications such as 3D reconstruction from multiple cameras [20], video morphing [27], facial performance manipulation [6, 10], and spatial compositing [44]. When several cameras are simultaneously used to acquire multiple viewpoint shots of a scene, synchronization can be trivially achieved using timecode information or camera triggers. However this approach is usually only available in professional settings. Alternatively, videos can be synchronized by computing a (fixed) time offset from the recorded audio signals [20].
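    The abstract above describes computing a nonlinear synchronization that establishes frame-to-frame temporal correspondences from motion information. The paper's own algorithm is not reproduced here; the sketch below is a generic dynamic-time-warping alignment over per-frame motion descriptors (for example, mean optical-flow magnitude per frame), with the function names and the descriptor choice assumed for illustration.

        import numpy as np

        def dtw_alignment(feat_a, feat_b):
            """Generic DTW over per-frame motion descriptors.
            feat_a: (N, D) array, feat_b: (M, D) array. Returns a list of
            (frame_in_a, frame_in_b) pairs describing a nonlinear time warp."""
            n, m = len(feat_a), len(feat_b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(feat_a[i - 1] - feat_b[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
            # Backtrack from the end to recover the warping path.
            path, i, j = [], n, m
            while i > 0 and j > 0:
                path.append((i - 1, j - 1))
                step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
                if step == 0:
                    i, j = i - 1, j - 1
                elif step == 1:
                    i -= 1
                else:
                    j -= 1
            return path[::-1]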
  • Virtual Video Editing in Interactive Multimedia Applications
    SPECIAL SECTION
    Edward A. Fox, Guest Editor

    Virtual Video Editing in Interactive Multimedia Applications
    Wendy E. Mackay and Glorianna Davenport

    Drawing examples from four interrelated sets of multimedia tools and applications under development at MIT, the authors examine the role of digitized video in the areas of entertainment, learning, research, and communication.

    Early experiments in interactive video included surrogate travel, training, electronic books, point-of-purchase sales, and arcade game scenarios. Granularity, interruptability, and limited look ahead were quickly identified as generic attributes of the medium [1]. Most early applications restricted the user's interaction with the video to traveling along paths predetermined by the author of the program. Recent work has favored a more constructivist approach, increasing the level of interactivity by allowing users to build, annotate, and modify their own environments. Today's multitasking workstations can digitize and display video in real-time in one or more windows on the screen. ... video data format might affect these kinds of information environments in the future.

    ANALOG VIDEO EDITING
    One of the most salient aspects of interactive video applications is the ability of the programmer or the viewer to reconfigure [10] video playback, preferably in real time. The user must be able to order video sequences and the system must be able to remember and display them, even if they are not physically adjacent to each other. It is useful to briefly review the process of traditional analog video editing in order to understand both its influence on computer-based video
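    The passage above describes the core requirement of virtual editing: playback order is a list of references into source footage, so sequences that are not physically adjacent can still be played back-to-back and reordered without touching the footage itself. The following is a minimal sketch of that idea; the class and field names are assumptions for illustration, not the article's system.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Segment:
            source: str   # identifier of the source video
            start: float  # in-point, in seconds
            end: float    # out-point, in seconds

        def play(edit_list: List[Segment]) -> None:
            # A real player would seek and decode each segment; here we just report the order.
            for seg in edit_list:
                print(f"play {seg.source} from {seg.start:.1f}s to {seg.end:.1f}s")

        # Reordering the virtual edit never modifies the underlying footage.
        edl = [Segment("tape_A", 12.0, 18.5), Segment("tape_C", 3.0, 7.2), Segment("tape_A", 40.0, 44.0)]
        play(edl)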
  • PSA Production Guide How to Create A
    PSA PRODUCTION GUIDE
    HOW TO CREATE A PSA
    HOW TO CREATE A STORY BOARD
    FREE VIDEO EDITING SOFTWARE

    HOW TO CREATE A PSA
    1. Choose your topic. Pick a subject that is important to you, as well as one you can visualize. Keep your focus narrow and to the point. More than one idea confuses your audience, so have one main idea per PSA.
    2. Time for some research - you need to know your stuff! Try to get the most current and up to date facts on your topic. Statistics and references can add to a PSA. You want to be convincing and accurate.
    3. Consider your audience. Consider your target audience's needs, preferences, as well as the things that might turn them off. They are the ones you want to rally to action. The action suggested by the PSA can be spelled out or implied in your PSA; just make sure that message is clear.
    4. Grab your audience's attention. You might use visual effects, an emotional response, humor, or surprise to catch your target audience.
    5. Create a script and keep your script to a few simple statements. A 30-second PSA will typically require about 5 to 7 concise assertions. Highlight the major and minor points that you want to make. Be sure the information presented in the PSA is based on up-to-date, accurate research, findings and/or data.
    6. Storyboard your script.
    7. Film your footage and edit your PSA.
    8. Find your audience and get their reaction. How do they respond and is it in the way you expected? Your goal is to call your audience to action.
  • Avid Film Composer and Universal Offline Editing Getting Started Guide
    Avid® Film Composer® and Universal Offline Editing Getting Started Guide, Release 10.0
    tools for storytellers®
    © 2000 Avid Technology, Inc. All rights reserved. Film Composer and Universal Offline Editing Getting Started Guide • Part 0130-04529-01 • Rev. A • August 2000

    Contents
    Chapter 1, About Film Composer and Media Composer: Film Composer Overview (8); About 24p Media (9); About 25p Media (10); Editing Basics (10); About Nonlinear Editing (10); Editing Components (11); From Flatbed to Desktop: Getting Oriented (12); Project Workflow (13); Starting a Project (14); Editing a Sequence (15); Generating Output (16)
    Chapter 2, Introduction: Using the Tutorial (17); What You Need (19); Turning On Your Equipment (19); Installing the Tutorial Files (20); How to Proceed (21); Using Help (22); Setting Up Your Browser (22); Opening and Closing the Help System (22); Getting Help for Windows and Dialog Boxes (23); Getting Help for Screen Objects (23); Keeping Help Available (Windows Only) (24); Finding Information Within the Help (25); Using the Contents List (25); Using the Index (25); Using the Search Feature (26); Accessing Information from the Help Menu (27); Using Online Documentation (29)
    Chapter 3, Starting a Project: Starting the Application (Windows) (31); Starting the Application (Macintosh) (32); Creating a New User (33); Selecting a Project (33); Viewing Clips (34); Using Text View (35); Using Frame View (36); Using Script View (37)
    Chapter 4, Playing Clips: Opening a Clip in the Source Monitor (39); Displaying Tracking Information (40); Controlling Playback (44); Using the Position Bar and Position Indicator (45); Controlling Playback with Playback Control Buttons (46); Controlling Playback with Playback Control Keys
  • 10 Best Free Video Editing Software Review and Download
    10 best free video editing software review and download

    What are the best free video editing software? In this post, you are bound to find the best video editing freeware to your taste. When we want to create a personal video, burn a DVD, or upload to YouTube or other video-sharing sites, we will need free video editing software to crop, edit subtitles, insert audio, and add other effects. So what is the best free video editing software? What is the easiest video editing software to use? This post covers a list of the best video editing software; you will be able to edit your videos free on Windows, Mac or Linux.

    1. ezvid free video editing software
    ezvid is open-source video editing software applicable to Windows XP (SP3), Vista, Win 7/8. It is fast, easy-to-use and functional. Besides video editing, it can also be used as a voice recorder and screen recorder. As with video editing, ezvid enables you to resize and add text/images. Powerful functions along with the revolutionary screen drawing feature have made ezvid one of the best video editing freeware. ezvid video editing software free download

    2. Windows Movie Maker
    Windows Movie Maker offers a simple solution for beginners to create or edit videos. You can drag and drop images, videos as well as real-time screenshots/videos to the timeline and add titles, credits, and video/transition effects as you like.
  • SENSAREA, a General Public Video Editing Application Pascal Bertolino
    SENSAREA, a general public video editing application
    Pascal Bertolino

    To cite this version: Pascal Bertolino. SENSAREA, a general public video editing application. ICIP 2014 - 21st IEEE International Conference on Image Processing, IEEE, Oct 2014, Paris, France. hal-01080565. https://hal.archives-ouvertes.fr/hal-01080565, submitted on 7 Oct 2015.

    SENSAREA, A GENERAL PUBLIC VIDEO EDITING APPLICATION
    Pascal Bertolino, GIPSA-lab, Grenoble Alpes University

    ABSTRACT
    In this demonstration, we present an advanced prototype of a novel general public software application that provides the user with a set of interactive tools to select and accurately track multiple objects in a video. The originality of the proposed software is that it doesn't impose a rigid modus operandi and that automatic and manual tools can be used at any moment for any object.

    In [9], we proposed a first version of Sensarea that was dedicated to accurate video object tracking only. The version that we present here (figure 1) is functionally and technically enriched with particularly (1) the possibility to apply effects to the tracked objects, (2) a novel video object tracking algorithm and (3) a key frame based morphing. With this new release, we want to gather both the power of several tracking algorithms and the necessary set of tools to pos-
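    Sensarea's own tracking algorithms are not described in this excerpt. As a generic illustration of the kind of frame-to-frame object tracking such a tool builds on, here is a sketch using an off-the-shelf OpenCV tracker; the video path, the initial bounding box, and the choice of the CSRT tracker are placeholders, and depending on the OpenCV build the constructor may live under cv2.legacy instead.

        import cv2

        def track(video_path, initial_bbox):
            """Track one object through a video. initial_bbox is (x, y, width, height)
            in the first frame. Returns one box per frame, or None where the track is lost."""
            cap = cv2.VideoCapture(video_path)
            ok, frame = cap.read()
            if not ok:
                return []
            tracker = cv2.TrackerCSRT_create()  # requires opencv-contrib-python
            tracker.init(frame, initial_bbox)
            boxes = [initial_bbox]
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                found, bbox = tracker.update(frame)
                boxes.append(bbox if found else None)
            cap.release()
            return boxes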
  • Studio Descriptions Booklet | Sem 2 2015
    Media Studios, semester 2 2015
    Character
    Catherine Gough-Brady
    Tues. 12.30-2.30 (Rm 9.3.9), Thur. 9.30-12.30 (Rm 9.2.15)
    [Image: Reggio & Fricke, Unnamed Billboard]
    "Character is plot, plot is character" (attributed to F. Scott Fitzgerald)
    What is the significance of character within/for narrative?

    description
    The character is one of the most important tools in your box, whether you are creating fiction or creating documentary. Through character the viewer can find out about theme, plot, abstract ideas, human nature, time and place - basically everything. Each week in this studio you will be expected to have read/watched/listened to classic works. In class, we dissect the characters of the works and create short exercises to test out your ideas. You will be creating short works that explore or push the boundaries of theories on character. Throughout the studio you will be keenly aware that creating media is about connecting with an audience, so you will be thinking about the viewer as you create. Your fellow students will be your initial 'viewers' and will provide feedback, so you can see how certain devices work, or don't. In the final weeks you will create your main work, which can be fiction or documentary, or a combination of them. It can be video or audio. This work will be included in the end of year exhibition. Areas we will explore practically and theoretically in this studio include: the hero's journey / using first person / dialogue and speeches / ironic relationship to character / retelling and flashback / blending the real with fiction / verbatim / ensemble characters / the antagonist / non-human characters.

    aims
    • to read/watch/listen to works, focusing on the characters
    • to explore aspects of character within the narrative, in fiction and non-fiction
    • to experiment, by creating characters
    Pen, paper, a smart phone and a laptop will be vital.
  • Video Production Putting Theory Into Practice
    Video Production: Putting Theory into Practice
    Steve Dawkins (Coventry University) and Ian Wynd (North Warwickshire and Hinckley College)

    Contents
    Introduction
    Section One: Theory and Practice
    1. Knowing: The Theory of Video Production
    2. Doing: Preparing for Video Production
    3. The Practice of Video Production: Pre-Production
    4. The Practice of Video Production: Production
    5. The Practice of Video Production: Post-Production
    Section Two: The Briefs
    6. The Television Title Sequence
    7. The Magazine Programme
    8. The Documentary
    9. The Drama Short

    Introduction
    People working within the creative media industries often mystify the process of video production. There is a well-known saying that video production isn't as easy as it looks but isn't as difficult as it is made out to be. While it is true that professional working practices are the result of much training, our starting point is that we believe that anyone has the potential to produce excellent videos. However, what marks out good or exceptional video production, whether professional or non-professional, is two things: the ability of the video-maker to understand and effectively work through the different stages of production systematically, and their ability to think about what they're actually doing at each of those stages and act upon those thoughts. Video Production: Putting Theory into Practice is a book for students who are new to video production in further and higher education. It is a book that links the types of theory that are applicable to video production that you will encounter on a range of different communications, cultural or media courses with the practical skills of video making.
  • Video Editing Guidelines
    VIDEO EDITING GUIDELINES

    Log and Label your Shots
    You can pay me now or you can pay me later. It will take a little time to log and label all your shots in the beginning, but it will pay off in the long run. Use a topic and descriptor for each shot, i.e. Capitol Front, Med, Pan R.
    Common descriptors: Wide; MW (Medium Wide); Med (Medium); C/U (Close); XC (Extreme Close up); Pan R; Pan L; Tilt Up; Tilt Down; Zoom In; Zoom Out; Low (Low Angle); High (High Angle); C/A (Cut Away); 2 Shot; Rev (Reverse Angle).

    Tell a Story With Your Video
    The same way you would construct a story with words, construct your video with building blocks to develop the visual story. Each Frame is a Word. Each Shot is a Sentence. Each Sequence is a Paragraph. Multiple Sequences make a Chapter.

    Choose the Best Footage
    It may sound a little silly, but be selective. It is common to shoot more footage than you actually need and choose only the best material for the final edit. Often you will shoot several versions (takes) of a shot and then choose the best one when editing. If a shot is too shaky, don't use it. If it's out of focus, don't use it.

    Develop Your Sequences
    A basic sequence might be: Wide Shot, Medium Shot, Close Up, Extreme Close Up, Cut Away/Transition Shot, Repeat. But you could just as easily do: Medium Shot, Medium Shot, Close Up, Medium Wide Shot, Extreme Close Up, Close Up, Medium Shot, Wide Shot, Cut Away/Transition Shot. Think about continuity when building your story.
  • Lecture 2 Editing Technique: Art & Craft Reminder CW1 Hand-In
    162: developing a narrative
    Lecture 2 Editing technique: Art & Craft

    Reminder: CW1 Hand-in, ET Reception, Monday 6th Feb, 8:30-4:00pm
    • A hard copy of your script with the covering page giving your name, the title of your script and the Blog address that shows your reflections on the process you have been through during the writing process.
    • A hard copy of your Script Report (produce a script report on a different script to your own using the provided template as a guide).
    Blog Task?

    "Film editing is now something almost everyone can do at a simple level and enjoy it, but to take it to a higher level requires the same dedication and persistence that any art form does... The editor works on the subconscious of the viewer, and controls the story, the music, the rhythm, the pace, shapes the actors' performances, 're-directing' and often re-writing the film during the editing process, honing the infinite possibilities of the juxtaposition of small snippets of film into a creative, coherent, cohesive whole."
    Walter Murch

    CITY OF GOD (2002) - HTTP://WWW.IMDB.COM/TITLE/TT0317248/
    • fast cuts
    • match cuts (cam flash / knife-carrot / drum-feet / match flame)
    • match on action - falling feathers
    • integrated motion titles
    • jump cuts
    • montage (knife-instrument / knife-chicken / kebabs-drums)
    • sound for effect - levels - rhythm - vibrancy - urgency
    • pathos (for the chicken); mostly cut from the chicken's perspective
    • pace is fast and gets faster
    • a story within a story - tells us about the rest of the film
    • pace slows after the escape; cuts more for continuity
    • shot reverse shot; match cut/fade; flashback
    • sound effects to convey feelings of character
    • pathos passes from the chicken to Rocket - they are both in the same predicament

    Cuts & Construction of meaning
    Cutting for continuity… "An editor is successful when the audience enjoys the story and forgets the juxtaposition of the shots."
  • Advance Digital Non-Linear Editing Technology
    Vol-1 Issue-4 2015, IJARIIE-ISSN(O)-2395-4396
    Advance Digital Non-Linear Editing Technology
    SONU SHARMA
    Assistant Professor, Department of Mass Communication & Television-Film Technology, NIMS University, Jaipur, Rajasthan, India

    ABSTRACT
    Non-linear editing technology has significantly simplified digital video editing. Digital video editing in its basic form can be completed by following simple steps and using hardware and software normally found on a personal computer. In order to produce excellent and intelligible audio-video material, the editing process is generally directed according to a set of rules called video grammar. The system demonstrates promising results on large video collections and is a first step towards increased automation in non-linear video editing. This paper proposes a method to automatically section raw video material into useful sections and useless sections based on video grammar, as part of an audio-video editing support system that uses camerawork and cut-point information. The paper sets out the steps to follow in digital video editing, describes some reasons that have made digital video editing simple for some but difficult for others, and explains the steps involved in digital video editing and the available technological alternatives at each step. We provide an overview of the use of digital non-linear editing applications.

    Keywords: Digital Video, Linear Editing, Non Linear Editing

    Introduction: With digital non-linear editing, individuals can follow simple steps to edit digital videos using their basic knowledge of computer operations and application software. Video editing has several unique challenges not found with other media. Digital video is a time-based medium.
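    The abstract refers to using camerawork and cut-point information to section raw footage. As one generic way to obtain cut points (not the paper's method), here is a simple hard-cut detector based on frame differencing with OpenCV; the threshold and the grayscale mean-difference measure are illustrative assumptions.

        import cv2
        import numpy as np

        def detect_cuts(video_path, threshold=40.0):
            """Return frame indices where a hard cut likely occurs, based on the mean
            absolute difference between consecutive grayscale frames."""
            cap = cv2.VideoCapture(video_path)
            cuts, prev, idx = [], None, 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                if prev is not None and np.mean(cv2.absdiff(gray, prev)) > threshold:
                    cuts.append(idx)
                prev, idx = gray, idx + 1
            cap.release()
            return cuts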