
THE POST-PRODUCTION PROCESS

The time has come to do a project that is a bit more involved than a simple commercial or recording a "talking head" on tape. One that involves extensive editing, electronic graphics, custom audio, etc. This may be a scary proposition if you've never had to do this before - and at times even if you're somewhat experienced. How do you properly estimate your needs? How do you stay on budget? We would like to take a few minutes of your time to explain the process in some detail in order to make your life easier and your projects more efficient.

PRE-PRODUCTION

As in the world of location shooting, pre-production meetings are also essential for post-production. It will help all aspects of a project if the post-production "team" is included in as many meetings as possible at the beginning of a project. This should even be before budgets are established and a treatment or script is finalized. This ensures that the post team is aware of your requirements (no last minute surprises) and can also offer many suggestions to improve the production. It even helps to see how the chemistry of individuals, such as the client and the editor, works out at this early stage.

SHOOTING

There are many things that are done during the location or production process that can greatly affect the post-production phase - both for the good and the bad. Production can occur in any format desired (16mm or 35mm film, Digital Betacam, Betacam-SP, DVC, etc.) and still be independent of the post. Generally, if you are shooting on tape, it is best to record on the same format that is available for playback during post-production (i.e. without dubbing across or "bumping" to another format). This keeps the number of generations down and makes "housekeeping" much easier. In contemporary editing, an electronic timecode signal is the reference for locating any information on the tape and parking it accordingly during play and editing.

Timecode is recorded onto the source tape at the time of the shooting (though some formats allow the addition of timecode later) and is a numbering sequence similar to a digital clock, displaying consecutive, ascending digits for hours, minutes, seconds and frames (30 frames/second in standard NTSC video). Each frame is thus given a unique number which can be read by computer-assisted edit systems and by timecode readers.
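
As a simple illustration of how each timecode value maps to a unique frame, here is a small sketch (Python; non-drop frame NTSC assumed, at an even 30 frames per second):

    # Convert a non-drop frame NTSC timecode (HH:MM:SS:FF) into an
    # absolute frame number, counting 30 frames per second.
    def timecode_to_frames(tc):
        hours, minutes, seconds, frames = (int(x) for x in tc.split(":"))
        return ((hours * 60 + minutes) * 60 + seconds) * 30 + frames

    print(timecode_to_frames("01:00:00:00"))  # 108000
    print(timecode_to_frames("01:00:01:15"))  # 108045

This is essentially the arithmetic an edit system performs when it parks a tape at a requested timecode.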

There are several types of timecode, such as "non-drop frame" and "drop frame". It is not important for you as a producer to understand the technical difference, since most edit systems can mix the two and calculate all necessary offsets. However, it is less confusing during editing when only one or the other is used and, therefore, as a producer, you should try to coordinate this type of decision between the shooting crew and the post-production facility. Avoid mixing non-drop frame and drop frame timecode recordings on the same piece of videotape.

Timecode is the basis for tracking all scenes on the tapes and therefore you can organize it to your best advantage. Producers often use the hour digits of the code to reflect the reel number of the tape being recorded. In this manner, reel one starts at 01:00:00:00, reel two at 02:00:00:00, etc. However, if you pass 23 reels, you may wish to use a different scheme, since the timecode "clock" resets to 00:00:00:00 after 23:59:59:29. Generally the most common practice is to call the 24th reel number 101 and start with timecode hour 01:00:00:00 again. If you use this practice, the 25th reel is number 102 (2 hour timecode) and so on in ascending order. Do not repeat numbers on the same reel as this will cause a lot of confusion!
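
As a sketch of that numbering scheme (Python, purely illustrative):

    # Reels 1-23 take their own timecode hour; reel 24 onward is
    # renumbered 101, 102, ... and the timecode hour starts over at 01.
    def reel_id_and_start_tc(reel_index):
        if reel_index <= 23:
            return str(reel_index), "%02d:00:00:00" % reel_index
        offset = reel_index - 23  # the 24th reel becomes 101
        return str(100 + offset), "%02d:00:00:00" % offset

    for r in (1, 23, 24, 25):
        print(r, reel_id_and_start_tc(r))
    # 24 -> ("101", "01:00:00:00"), 25 -> ("102", "02:00:00:00")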

Other location questions include the nature of the audio. Is it mono or stereo? Was noise reduction, such as Dolby, used? Are the takes also slated? Was someone taking notes during shooting? Anticipating potential editing questions as well as selecting possible "takes" while still on location will greatly speed up post-production.

In this vein, it is extremely critical to properly label all location tapes. Recommended information includes:

Production company name
Client name
Project title and/or job number
Location and date of the recording
Reel ID number
Approximate starting and ending timecode numbers on each tape
What was recorded on each audio track (usually which microphone or talent's name)
Whether noise reduction was on or off

FILM TRANSFER

If your production is to be on film, the path into the edit suite will be slightly different than when shooting on video. In most film productions, the project will have been shot "double-system" (audio is recorded on a field recorder which is separate from the camera exposing the film). This will require the audio to be synchronized to the picture as part of the post-production process. Generally your project will have been shot on 16mm or 35mm negative at 24 frames or 30 frames per second.

After processing, the film will usually be transferred directly from the negative (without a print) to videotape via a telecine (e.g. a BTS or Rank Cintel "flying spot scanner"). This process usually involves scene-by-scene color-correction, may be silent and may or may not require your supervision (at your option). The transfer should usually be made to D1, D2, Digital Betacam or 1", even though the post may be in a "lower" format, such as Betacam-SP. This makes it possible to always go back to a high quality image transferred directly from the film negative. Often producers will transfer only the "select" takes (those deemed to be possibilities during the shooting and identified as "selected" or "circle" takes on the camera reports and shooting notes), in order to transfer no more than is absolutely necessary. Each take can be identified on the film and sound rolls via the slate or "clapstick" used during the filming at the beginning of each take.

A negative film-to-tape transfer session with color-correction takes about 4-5 hours for every hour of footage, so, as you can see, it is not cost-effective to transfer undesirable or false takes. To easily synchronize sound during the transfer session, the sound rolls must have been recorded on location with timecode as well. If this is the case, a slate which displays timecode numbers on a built-in LED display should be used during filming. These "smart slates" make it easy to identify the timecode visually in order to easily find the corresponding start of the sound take on the audio rolls. This will add some time to the transfer session, but when the transfer is completed, all picture and sound are "married" together on the videotape transfer masters and editing can proceed. If sound is synchronized outside of the transfer session (in a separate audio or edit session), this will delay the start of the rest of the editing process.

TRANSFER OF SLIDES AND FLAT ARTWORK TO VIDEOTAPE

Many projects incorporate the use of existing slides, pictures, artist's renderings and other "flat" artwork to enhance a program or to cover material which cannot be filmed or taped for various reasons. Such scenes are usually "spiced up" by adding camera movement, thus giving at least the suggestion of a live action scene shot in the studio or on location. Movement helps to keep the viewer's interest. Such camera movement is generally introduced by photographing the artwork with a video or film camera using either a tripod with a human camera operator or an animation stand with computer-assisted camera control. The resultant image is recorded on either videotape or film and then becomes part of the total available footage for the project. Ken Burns' "The Civil War" is a good example of this technique.

Slides must be front- or rear-projected against a screen or a ground glass to increase the image size in order to provide enough area in which smooth camera movement can occur. This often introduces additional grain and contrast to the picture, so it may be best to have all artwork converted to large prints, such as 11"x 14" or 16"x 20" Cibachromes (budget permitting). It is also possible to transfer slides in a telecine suite (usually used for motion picture film) with advance notice for telecine set-up. The telecine operator can also add high-quality electronic moves simulating camera pans and zooms.

It is also very effective to scan photos or slides into a Mac or PC and create all the moves electronically. Although this will generally take longer due to the rendering time required, it has the advantage of being able to work at a higher resolution than the normal video screen resolution. By scanning in a photo at the highest resolution, it is possible, in an application like After Effects, to create digital zooms that do not cause the image to blur or soften. This method also allows you to apply precise color-correction and other touch-ups, if needed.

PREPARATION BEFORE EDITING

Many functions must occur before either offline or online editing can start. First, all footage should be organized so that all material is on one format. Various formats can be mixed in multiformat edit suites (such as 3/4", Betacam-SP, 1", D2, etc.), however, it is often most convenient to reduce all material to only one or two videotape formats. All artwork (slides, flat art) and film should also be transferred to videotape. This process may involve "bumping-up" from one format, such as 3/4" or even VHS, to another, such as D2, for editing. It is also a time during which unnecessary footage can be "culled out", so the total amount of footage that you have to deal with in the edit is reduced. Reels should be organized. This means that all tapes should have sequential timecode and should be numbered. Often it is better to record new timecode on a tape in order to prevent problems of overlapping or inconsistent timecode values.

After all material has been organized and has timecode, you will usually want to dub a reference copy onto ¾” or VHS for offline editing or simply viewing. These dubs generally have a "window" in the video displaying the timecode. These dubs are referred to as "burn-in" dubs which correspond to the same timecode as the original or source master tapes.

If you intend to master your final product on an analog format in an online edit session (1” or Betacam-SP), then another preparatory step that should be taken before editing is to dub a duplicate copy of all reels to be used in the edit. These are referred to as "B-roll" dubs and should be in the same or higher videotape format as the original tapes and should have identical video, audio and timecode tracks. This will be essential for any effects such as dissolves, wipes and digital effects where the transition is to/from material on the same reel.

If you intend to master your final product on a digital format that supports a function called “preread” (Digital Betacam or D2), then creating B-roll dubs will be unnecessary. I will explain “preread” later on. Consult with your post-production facility as to which method will be employed.

OFFLINE EDITING

The goal of offline editing is to end up with the basic decision-making process completed. The most important result of offline editing is the "edit decision list", not a finished tape. Offline editing now is thought of in two categories: linear and nonlinear. Linear offline edit suites are videotape-based and generally use two or three lower-end VTRs (VHS or 3/4”). Nonlinear edit systems use Mac, PowerPC or PC-based computer edit systems from a variety of manufacturers.

All current nonlinear systems record “digitized” audio and video files from the source tapes onto hard drives connected to the computer. Each system can provide good results, but nonlinear systems make revisions and the editing of various versions quite painless. Some popular nonlinear systems are Avid Media Composer, Media 100, Discreet Edit, and Accom Stratasphere.

The objective of offline editing is to decide such questions as: 1) which scenes and takes to use; 2) on which frame (timecode value) to start and end each scene; and 3) what the order of the scenes should be and how they work together. An experienced producer might be able to determine this type of information by simply viewing the footage on a single VCR and logging the information. However, it is often best to actually try out an edit and see scenes side-by-side before committing to the real edit points with certainty.

Offline may actually take two steps. Step one is the viewing and organizing of tapes and scenes from the "burn-in" tapes. This may even include some simple editing. Many producers refer to this step as "off-offline" editing. Step two is to actually edit the program into the rough version of the finished form. At its most sophisticated level, this stage of offline could involve a computer-assisted edit controller and include dissolves, fades, etc.

Nonlinear systems make it very easy to temporarily add graphics, effects and audio mixing and edit a version that can appear nearly final. This makes approval and review by the final client(s) very easy and without a lot of guess work or imagination required. In addition, the database for keeping track of footage to be used and tracked by nonlinear edit systems in a project is often compatible with other PC or Mac logging, spreadsheet, or database applications. This way producers can use readily available software running on a laptop during their “off-offline” phase, thus reducing the manual labor of typing entries into a computer at later phases of editing. For instance, with Avid edit systems, Avid’s own Medialog, as well as Filemaker Pro, Microsoft Works and others can be used to create Avid-readable logs when screening tapes or even while shooting in the field.

In any case the result of offline editing should be an accurate log or printout (or online-compatible floppy disk, e.g. CMX, Grass Valley, Sony) that identifies:

1) source reel numbers
2) source ingoing edit points (timecode)
3) source outgoing edit points (timecode)
4) type of edits (audio, video, both)
5) type of transition anticipated (cuts, dissolves, wipes, ADO, etc.)
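
For reference, a single event in a CMX-style EDL carries this information on one line: event number, source reel, track (V, A, or B for both), transition (C for cut, D for dissolve), then source in/out and record in/out timecodes. The values below are invented for illustration:

    001  REEL1    V     C        01:02:10:15 01:02:14:00 00:00:00:00 00:00:03:15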

Other information should also be determined in the offline, such as which scenes require color-correction, where supers are to be added, what type of digital effects are desired and so on.

If a tape is generated as part of the offline, it is essential that those in a position of approving a finished project view the offline edited tape. Their input should be incorporated into a revised offline edit, so that the basic show has been completed and approved in the offline stage before any online editing starts. A word of caution. Many times a client new to video will not understand what they are looking at when viewing an offline edit master.

Tapes which are "cuts-only" (they have no supers or effects and are not mixed) and have the timecode display in the picture often make it hard for some people to properly visualize the look of the final product. You must take great care that such a client comprehends the nature of what they are viewing and what will still happen in the online editing process.

Nonlinear edit systems make it easier to create such effects, graphics, etc. for easier visualization. They can also be transferred to videotape for client review and approval. Image quality is a consideration, though. In order to get the maximum amount of source footage onto the hard drives, video compression is used. The larger the amount of footage that has to be “loaded” onto the hard drives, the lower the image resolution that the editor will choose. In some cases the quality level may make it difficult to judge the best performance, proper sound, etc. Sometimes an editor will start at a lower resolution with a lot of footage, get an initial version of the project edited, then delete audio and video media files and “redigitize” only the selected source footage back onto the hard drives, in order to continue the process at a higher image resolution.
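
As a rough illustration of this storage trade-off, consider the arithmetic below (Python; the data rates are invented for the example, not any vendor's specifications):

    # Lower image resolution -> lower data rate -> more source footage
    # fits on the same drives. Rates below are illustrative only.
    def hours_that_fit(drive_gb, mb_per_sec):
        seconds = (drive_gb * 1000.0) / mb_per_sec
        return seconds / 3600.0

    for label, rate in [("low-res offline", 0.5), ("high-res finishing", 3.0)]:
        print(label, round(hours_that_fit(18, rate), 1), "hours on an 18 GB drive")

The same drive that holds ten hours of low-resolution footage holds well under two hours at a finishing rate, which is why the load, edit, then redigitize cycle exists.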

ONLINE EDITING

The goal of the online editing process is to complete an edited master tape, with the addition of effects, supers, etc. There are now two methods to use for online editing/mastering. Just as in offline, you have the choice to edit in a linear (on tape) or nonlinear (random access hard drives) fashion. Linear, tape-based edit suites come in all flavors and sizes and may utilize any or all of the wide spectrum of videotape formats in use today.

Nonlinear suites are computer workstations which may either be an outgrowth of their offline siblings or they may be specialized high-end workstations targeted only for the highest quality image. The former include products from Avid, Media 100, Discreet, Accom and others. These PC or Mac-based units generally use various compression schemes so that the same system at low image resolution can be used for offline and then you can reload (“redigitize”) high resolution images for online finishing.

The higher end systems, which include the Quantel Henry and Editbox, Avid Symphony and Softimage|DS, Discreet Logic Fire and others, operate in an image quality which is comparable to tape (usually without compression). As a result, they cannot store massive amounts of video (2 to 4 hours or less is typical), so use of these types of systems almost definitely requires an offline step first. Going straight from shooting to online editing is probably very impractical in such systems. They do, however, still offer all the advantages of nonlinear in the online realm: random access to footage, quick ability to make changes and variations in the edit and the ability to “cut and paste” the project as in a word processor. Another advantage is that layered effects information created at a lower resolution on the same system, or on a compatible unit, will all transfer directly at the higher quality, without having to rebuild the information which actually makes up the effect.

In a linear (tape-based) online system, it is the "edit decision list" (EDL) created in the offline process that makes it possible to automatically repeat the offline-edited event using the original camera tapes. The EDL is entered into the computer-assisted edit controller (CMX, Grass Valley, Sony, etc.) for this assembly. If a compatible floppy disk is provided, this is simply a matter of loading the files; if a hand-written log is provided, the editor must type in the information. Simple projects with largely cuts, wipes and dissolves and few supers and effects can be quickly "auto-assembled" into a final master from the original source tapes; but the list must be accurate and all the information organized ahead of time. Projects that go through extensive revisions due to client changes or that involve complicated digital effects or graphics segments will go quite slowly in an online session. They are literally created in the online suite with all the decision-making occurring "on the clock".

Online edit suites are configured with either analog or digital signal paths, regardless of the tape formats used. Digital suites are generally more powerful for effects work, because the editor can create effects that require several generations of building layers back and forth between tapes. In a digital suite, these generations are lossless, as compared to an analog suite, in which each videotape layer introduces some further degradation of image quality. Some digital videotape formats, such as Digital Betacam and D2, include a feature called “preread”. This function allows the record VTR to also be used as a source in the edit. For instance, a dissolve between sources can be done with only two VTRs, whereas in the past with analog formats, such as 1”, this would have required three machines. Working in a digital suite or with the right digital formats can eliminate the need to record B-rolls before the session.

Many non-linear offline systems, such as Avid, allow you to create effects and layers in the offline edit stage. This information does not translate directly into the online edit controller. Instead, a separate EDL must be generated for each video layer and the effects information must be deciphered by the editor by simply reviewing and visually matching (as best as possible) a video reference tape made from the offline session. The more this type of work is used in the edit, the less “automatic” the online session will become.

It is generally a good idea to draw storyboards for all digital effects sequences before an online session and get the client's approval on these, just as if you were planning an optical effect created in a film lab or optical house. This lets the editor know exactly what to expect and cuts down on the amount of "ad hoc" creativity that can occur during an online session. Storyboards for effects and animation should be considered as a type of visual "contract" and should be as detailed and self-explanatory as possible. This should be thoroughly discussed in your pre-production meeting with the post team.

In addition to effects, color-correction can also be a great part of the typical online edit session. It is possible to correct improperly balanced video or simply to enhance existing footage when editing from tape to tape. This can also be a way of saving time and money on film transfer. If the correction is done in the edit suite rather than in the film transfer suite, only the shots that end up in the final edited master are dealt with. However, the latitude of color-correction of the simpler devices commonly used in edit suites is not as wide as generally found on those in transfer suites. Post-production color-correction also allows the editor to "touch up" footage from a film shoot where the transfer to tape was done without client supervision or on a location video shoot where the client could not see color video.

Frequently much time is spent in edit sessions adding supers to video. Supers in video should be treated like printing. Plan ahead. Pick typestyles and sizes that are appropriate, attractive and effective. You will usually have the option of using the character generator or graphics systems at the facility or of supplying artwork - either physical camera-ready artwork, or computer-generated graphics files, such as Photoshop files. Character-generated text will generally appear to be more readable in video than artcards, so it is best if you can work within the typestyles and sizes available on the character generator supplied. However, if artwork must be used, provide a card with white letters on a black background of at least 8"x 10" or larger. Small detailed text (such as used for legal disclaimers) will "key" very poorly over video. Multicolored art, such as logos, must be dealt with on an individual basis and may work best as artwork or against a chromakey background. Graphics software available for Apple Macs, PowerPCs and IBM-compatible PCs offers a great alternative to traditional high-end and expensive broadcast equipment.

AUDIO

There are many different ways in which to handle the audio portion of any video post-production project. It can either be completed entirely in an audio suite (before or after the online edit) or partially in the audio suite as well as in the online suite. The approach taken depends on the sources for the elements, budgets and often work preferences of a producer. A simple show or commercial, in which the elements originate on audiotape only (such as a voice-over announcer and library music), is best mixed in advance in an audio studio. This does mean that the audio track will determine timing rather than the length of the pictures. In this process pictures are conformed to the completed audio track.

The next method is to build the final mix in the edit suite. This may be the preferred approach when the elements consist of a mixture of video and audiotape sources and not too many audio "layers" (or tracks) are required in the final mix. For example, this might include sync audio (videotape), voice-over audio tapes, library music and ambiance sound effects. In this case the common method is to build three submixed audio tracks (voice, music and effects) which are then mixed to a final audio track on the edited video master.

The third approach, referred to as "audio sweetening" or "mix-to-pix", requires that audio finishing occur in a multi-track audio studio after the video edit has been completed. During the video edit, basic audio tracks are also edited (unmixed and uncorrected) for the on-camera dialogue, sync natural sound and some voice-over pieces. When the video edit is completed the edited audio tracks are dubbed to a multi-track audio tape or into a digital audio workstation ("layover") and the picture to a 3/4" videocassette (for visual reference only), both with common timecode that matches the edited master videotape. Next, all additional audio tracks are added ("lay-up"), including music (stock or custom scoring), sound effects and other enhancements.

If the audio session is being done using multi-track audio tape, this will be a linear process, just like videotape editing. If a digital audio workstation is used, the audio mixer/editor has the same type of benefits as the offline editor working on a nonlinear offline edit system. Editing and locating source clips and sound effects is random access and non-destructive (edits can be undone). Some digital workstations can also provide features like time compression/expansion to fit audio clips to available spaces.

Once all the various tracks are built they are mixed down to form the finished audio tracks. These mixed tracks are then transferred back to the audio tracks of the video edit master in perfect sync with the picture ("layback"). In this way complex audio tracks can be built in suites that specialize in only the audio portion of a program and with the same attention that a music recording artist might receive - and often at a rate less than online editing rates.

An alternative to audio sweetening might be to use the actual tracks from the offline edit session. Systems like the Avid may use low resolution for video, but offer audio resolution comparable to the best audio workstations. In the hands of a skilled offline editor, the same kinds of edits that an audio editor/mixer might perform can also be done on the offline unit. The number of tracks will vary with various models, but it is possible to build full mixes or split tracks (separated voice, effects, music) and then lay that to the master tape. This speeds up the online edit session because only picture edits have to be performed. If split tracks were produced, then the final mix of these tracks (and the addition of sound processing, such as equalization, reverb, etc.) can be completed in either the online or the audio post suite.

Remember that if you are working in stereo, the number of the required working tracks will increase and often the complexity of the mixing process does as well. Be sure to also check whether or not audio noise reduction was used on any of the audio recordings. If so, be sure that the system used is compatible with the hardware at the audio or post facilities that you are using.

Music and effects can come from live, custom-recorded and/or stock library sources. Regardless of the source, it is important that the producer secure proper licensing for the use of these materials (rights for both the composition and performance) in keeping with the program's target audience (and market), duration and period of use.

GRAPHICS

Many programs today rely heavily on the use of 2-D and 3-D electronic graphics and animation. These should be treated as separate elements of a project and defined prior to the time of the online edit. Much animation is done for clients at "long distance". This can only be done when graphics and effects are properly storyboarded and everyone is in agreement as to the requirements of the job. Once the creative design and storyboards have been completed, an animation or post-production facility can best determine which approach to take and how to properly bid the cost.

Electronic paint systems generally create 2-D images. The result is about the same as an artist can achieve using standard mechanical drawing media - though usually less messy, faster, more versatile and less expensive. The look of the artwork will reflect the artistic talent of the designer just as with non-electronic methods. Therefore, when considering 2-D graphics, you should consider all the same techniques and styles used in film and print (cel animation, cartoons, watercolors, pen-and-inks, rotoscoping, etc.).

3-D graphics are quite different and are created by building, defining and moving electronic models around in an imaginary three dimensional environment. Parameters that can be defined include: a) object shape, size and movement; b) light sources, movement and colors; and c) "camera" movement through the environment (your point of view). Additional options include texture mapping, bump mapping, distortion mapping, reflection mapping and ray tracing.
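
As a toy illustration of these parameter groups (Python; the structure and names are invented for this example, not any real animation package's format):

    # A minimal 3-D scene description mirroring the groups above:
    # a) objects, b) lights, c) camera movement, plus render options.
    scene = {
        "objects": [{"shape": "sphere", "size": 1.0,
                     "path": [(0, 0, 0), (0, 1, 4)]}],   # object movement
        "lights":  [{"color": "warm white",
                     "position": (5, 5, 0)}],            # light sources
        "camera":  {"path": [(0, 2, -10), (0, 2, -2)],   # point of view
                    "look_at": (0, 0, 0)},
        "options": ["texture mapping", "ray tracing"],   # render features
    }
    print(len(scene["objects"]), "object(s) defined in the scene")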

In texture mapping, images that are to appear as the surface of a 3-D object (such as a wood texture) are drawn on a 2-D paint system or taken from a digitized video input (such as a video camera). This then becomes the "surface skin" of the object and will respond to effects in the surrounding "environment", such as light and shadows. This will change the surface of the object from a dull, uniform colored appearance to that of brick, wood, scorched metal, etc.

Bump mapping allows the artist to make a surface appear to have irregularities. A bump map, for instance, can help the artist create ripples on a water surface and bumpy skin on a reptile.

Distortion mapping allows the software to deform shapes based on light and dark areas of the distortion map file. For instance the image (hicon) of moving smoke can be used as a distortion map. This file can in turn be used to stretch an object into “peaks” and “valleys” corresponding to the light and dark areas in the moving smoke image.

Ray tracing is the reflection of the 3-D objects onto themselves within the models created on the system, i.e. a ball reflecting onto a polished floor and vice versa. These reflections will display distortions of lights, shadows and reflections in the 3-D images as a result of the shapes that these lights and shadows fall onto.

Reflection mapping is the ability to create imagery that is unseen on screen as part of the "environment" but rather is an "offscreen" image that is only seen in a reflection on one or more of the objects in the scene. These reflections change and distort according to the properties of the image on which we see the reflection and the effects in the "environment" around the object. An example is an image seen only as the reflected or refracted image inside of a 3-D crystal ball.

The proper use of these effects and properties will increase the photorealism of a 3D animation. They will also lengthen the rendering time required for the computer workstation to generate the finished animation files, so often artists can “cheat” the effects to get close to the optimum look, but without lengthy rendering times.

In recent years, producers, designers and facilities have embraced the opportunities afforded them through the use of 2D graphics and 3D animation software running on Macs, PowerPCs, PCs and Windows NT workstations. No longer considered consumer-grade, these software packages can deliver as professional a look as any proprietary, high-end broadcast graphics workstation.

Broadcast designers today rely heavily on Photoshop, Illustrator, Painter, After Effects, 3D Studio Max, Maya, Softimage and many others. This also makes it possible for a client to design their own graphics and supply the finished files to the post facility for integration into the online edit session. Since there are some specific guidelines that differentiate readable broadcast graphics from print-oriented designs, it is highly advisable that a client supplying graphics to the facility should consult with them on the requirements first. As an example, Photoshop tends to be a video-friendly application, while QuarkXPress - a favorite for print work - often presents problems when those files are to be integrated into video projects.

TAPE TERMINOLOGY

Many companies have different methods for naming the various types of tapes and elements generated in the post- production process, as well as different policies regarding their storage and/or disposal. Be sure that you understand these, so that you know what tapes are available for future revisions, dubs, etc.

The first generation videotape recording (unedited) is usually referred to as the Original or Camera Master or Master Footage. Film-to-tape transfers (1st gen. video) are usually Film Transfer Masters. Most intermediate tapes between the Original and the Edited Master (2nd to ?? gen.) are Elements.

Many projects necessitate that an interim stage of the edited master be generated (usually one generation before). Most facilities refer to this as a Submaster (or Unmixed Master). This type of tape will usually differ from the final Master by having "split" audio (different audio information on each channel, i.e. unmixed) or missing supers or lacking color-correction. Having a Submaster often makes it easy to revise the program or create alternate versions in the future (such as foreign language versions).

The final version of a program is called the Master, Mixed Master, Edit Master, EE (electronically edited) Master, etc. In any case, there is usually one and only one version of this tape, so if subsequent revisions are made, care must be taken not to get the various versions confused. Finally, many facilities prefer to make Protection Masters (also called Safety Copies) of all Edit Masters, just in case a problem occurs with the Edit Master.

Protection Masters are generally one generation later than the Edit Master and should be identical to the Edit Master including having common timecode. Dubs for distribution are often made from the Protection Master, thus keeping "wear and tear" on the Edit Master to a minimum.

© Copyright 1990-1999 Oliver Peters

Foreword by the Author.

The following is a series of articles I have written for the ITVA Newsletter. They cover a span of 1992 to 1999. Although some of the earlier articles may be a bit dated in light of more recent developments, they still contain valuable information and provide a sense of history behind the current state of the industry. Over the decade the industry has evolved and we, of course, have been changing with it. Opinions have changed – technology has changed. These articles reflect that process. Besides, it’s fun to see how many of the players have changed and how our thinking has changed in such a short period of time.

- Oliver Peters November, 1999

1. D2 Myths and Realities

With D2 videotape becoming the new "state-of-the-art" video format and thereby quickly replacing one inch as the preferred medium, it is time to sort out some fact and fiction about D2. The D2 video format is a composite digital recording scheme using 19mm wide (approx. 3/4") metal particle tape. Although the VTRs record audio and video internally in a digital manner - and it is possible to transfer information between D2 VTRs in a digital fashion - most facilities have their D2 VTRs installed in the place of an analog VTR (one inch, Betacam, etc.). This means that video going into and out of the VTR undergoes an analog-to-digital and digital-to-analog conversion process for each pass to another tape.

Some of the advantages of D2 are improved error correction and concealment (covering tape damage and drop-outs) and four digital audio tracks. One of the claims made by the manufacturers for D2 VTRs is the lack of image degradation over many generations. This may be true if the post-production signal path is strictly composite digital, as in straight machine-to-machine dubbing or editing or when a D2 switcher is used. This is usually not the case and I know of no local facilities using any of the few D2 switchers manufactured. When operated in a standard fashion within an otherwise analog environment, signal loss is much more comparable to one inch VTR response. If you would go seven to ten generations in one inch, you'll probably be happy with ten to fifteen generations of D2.

An interesting feature of the D2 VTRs is the "pre-read" recording mode, whereby video is played back on the video confidence heads rather than the normal play heads. In this mode you can reenter the play signal into a switcher, add video to it as in a key or dissolve and re-record the composite back to the same VTR. Although this can save one VTR in the session, it still results in the same generation loss, because the signal is going through the same A-D and D-A conversions. In "pre-read" you get no margin for error.

As an audio function the "pre-read" mode can be quite useful. D2 VTRs have four digital audio tracks. By building up tracks and submixing them back on the same VTR, it is possible to create quite complex audio tracks in the edit suite without the use of an additional multitrack audio recorder.

In conclusion, you should understand that the D2 format is one which is still evolving. In this early stage we must still suffer some problems of tape stock, head wear and software quirks, but in general most users feel that the format offers enough benefits to be worthwhile. In the future many D2 components of the post-production chain will be developed that will bring out the full advantages of D2.

2. Offline Editing and Online Editing

Not a lot of folks are very familiar with the differences between offline and online editing. These are terms borrowed from computer jargon and really do not translate very well to video applications. These terms are best defined by the desired result of each process. Using this approach, the point of the offline edit session is to finalize the creative editing process (comparable to editing the workprint in film) and the result should be a finished "edit decision list". The online edit session should result in a finished master videotape (comparable to the lab and optical work in film). These definitions are independent of the type of hardware used in each session. Although most producers think of "cuts-only" 3/4" or VHS editing as "offline" and an A/B roll one inch or D2 edit session as "online", it could be the other way around in certain situations.

The intent of offline is to get the most time consuming part of the editing process completed at the least expensive hourly dollar rate. It is during the offline session that the right "takes" should be selected and that editor and producer should try out the creative possibilities to see how the scenes best come together. The end result should be a fairly accurate "rough cut" that is the basis for the look of the finished program or commercial. It is very important that the client is heavily involved in the offline process. The producer should cut as many versions as necessary to get final client approval of the "rough cut" prior to going on to the online session. Changes made later in online are very costly.

Once a "rough cut" is completed, all necessary edit information must be logged creating the "road map" for the online editor to follow. Generally the basis of such information is timecode recorded on the master tapes. This timecode is superimposed over the video on the cassette copies used in offline editing, allowing the offline editor to properly log all edit information. This can be as simple as numbers written on a pad of paper or as involved as a fully CMX-compatible floppy disc generated during the offline edit session. Although the CMX-compatible disc is not necessary, it does save a considerable amount of time in online editing which would be devoted solely to re-entering timecode numbers.

As a rule of thumb, an online edit session in which no prior offline editing was done will probably not proceed any more quickly than a rate of five to ten edits per hour. An online session based on a "paper list" written on a pad and created in a simple offline session will proceed at ten to twenty-five edits completed per hour. And finally, an online session which can use a finished list provided on a CMX-compatible floppy disc can clip along at over 100 edits per hour. And as we all know, time is money!
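
To put those rates in perspective, here is a quick back-of-the-envelope calculation (Python; the edits-per-hour figures are simply taken from the ranges quoted above):

    # Estimated online session length for a 200-edit program.
    rates = {
        "no offline":       7.5,   # midpoint of 5-10 edits/hour
        "paper list":      17.5,   # midpoint of 10-25 edits/hour
        "CMX floppy disc": 100.0,  # over 100 edits/hour
    }
    edits = 200
    for method, per_hour in rates.items():
        print("%-15s ~%.1f hours" % (method, edits / per_hour))

For a 200-edit program, that is the difference between more than twenty-six hours of suite time and about two.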

3. Component Digital

We've all discussed digital video but it is helpful to know that all digital is not created equal. In an earlier article I wrote about the D2 video format. This is composite digital - i.e. luminance and chroma information are combined as one signal. This encoding results in certain artifacts that are common in analog recording as well. To get around these artifacts, Sony, Quantel, Grass Valley Group and other manufacturers offer hardware that is component digital - i.e. luminance and chroma travel separately in a three-wire system. Component digital equipment is known by its various labels: D1 (the tape format that predated D2), 4:2:2 (the sampling scheme that stands for luminance and two color-difference signals) and CCIR 601 (the technical standard for the three-wire method of component digital signals).

There are many reasons for developing component digital post-production methods. These include almost limitless generations without perceptible loss, better color rendition and better quality of keys and mattes for graphics and blue screen (chromakey) work. The main disadvantages are cost and ease of working. D1 VTRs are about three times the cost of D2 VTRs and in a post-production facility require three times as much routing, wiring, patch panels, distribution amplifiers, etc. In addition, D1 VTRs are not particularly "friendly" editing VTRs, compared to one inch, D2 or Betacam VTRs. In order to truly reap the benefits of component digital, the entire installation using D1 VTRs or 601 devices must be designed and installed as a component digital facility.

Due to the cost you will rarely see a complete post-production facility that is a pure component digital plant. It simply doesn't make a lot of sense to do this for the types of projects that most facilities post. Instead, you will tend to find component digital "islands" within largely composite digital and/or analog facilities. These "islands" are usually graphics-oriented. It is in the creation of graphics and animation that the techniques of layering and compositing are very essential to the creative process. This means many generations and the best possible quality in keying and matting. Although such quality can be attained in the composite domain, the one to five percent difference that separates the average from the awesome can only be attained using component digital technology. Once the graphic or animation is completed, it can be encoded into composite video (D2, one inch, Betacam, etc.) and incorporated into the rest of the project. Side-by-side there will be no difference, except that the animation element has not "fallen apart" getting to this stage.

In short, for a "cuts-and-dissolves" project, composite video is quite acceptable and far surpasses the image resolution displayed in broadcast or a VHS dub on someone's VCR and will continue to do so for many years. However, for intensive "cutting-edge" effects and graphics, D1 and the world of component digital are well worth the expense and effort.

4. The Small Format Promise

In the last few years the "format wars" have been complicated by the addition of several new "prosumer" videotape formats - S-VHS and more recently Hi8. According to their proponents these are "the greatest thing since sliced bread" and offer "better than one inch" image quality. Although none of these claims is close to the truth, S-VHS and Hi8 do offer certain customers quite a valuable product.

S-VHS was first developed with the intent of providing a better distribution format to the consumer, primarily for movie sales and rental on videocassette. Although this market segment has not really flourished, the improved image quality coupled with the increased cost of 3/4" professional equipment has made the S-VHS format a viable alternative for budgets in the low end 3/4" price range.

The improved image quality is a function of viewing and processing the image in a Y/C mode, where luminance and chroma stay separated, as in the 3/4" dub mode. The luminance specs are superior to broadcast (in the "S" or Y/C mode), but the chroma specs are similar to standard VHS. As a result, when working with S-VHS in a post facility which is largely composite video, the S-VHS picture looks a lot like regular VHS video. Because of its original design intent S-VHS does not hold up well after the first couple of generations. A lot of the improved look of the format is a result of the metal particle S-VHS tape, which, when used for standard VHS recordings in a standard VHS VCR, also offers improved video (particularly fewer drop-outs).

Hi8 on the other hand is an improved format using highband recording methodology on metal particle (and eventually metal evaporated) 8mm videocassettes. Subjectively, the image quality is a lot like early versions of 3/4" U-matic, and it is, therefore, also an alternative to 3/4" equipment. Hi8 offers many of the same image advantages that S-VHS does, but in a smaller package and with the backing of more manufacturers. This means that options also exist with the consumer equipment suppliers, such as Canon, Pentax, etc.

Which format will survive is largely a marketing question that once again pits Sony and Hi8 against Matsushita (Panasonic, JVC) and S-VHS. Neither format is particularly desirable for post-production, but if budget is of critical concern, then both are worth looking into for field production, offline editing and distribution. If you intend to shoot original material in either format, transfer the material to a higher format for further post-production (probably Betacam). Video transferred from S-VHS or Hi8 to Betacam looks best when you stay out of the composite video domain. This means that the transfer should go through a timebase corrector that accepts Y/C inputs from the Hi8 or S-VHS deck and outputs in Y, R-Y, B-Y (component analog) to the Betacam recorder.

If you currently need to buy equipment that delivers a lot of "bang for the bucks", both S-VHS and Hi8 are good options. But don't expect them to be around for more than five years!

5. Video Workstations

The power of the personal computer is quickly invading all aspects of the professional video post-production arena. The PC, whether Apple or IBM-based, has been effective when used for non-linear offline editing systems and digital audio workstations. I will elaborate more on those in later articles. They are also becoming increasingly important in many online editing and graphics functions. The general approach of these systems is to use the PC as the controller for the system, the "brains", often linked to other pieces of proprietary video and audio processing equipment. Several notable variations of this approach are the Composium, Paint F/X and Video F/X systems by Digital F/X.

The Composium and Paint F/X are basically a single workstation that combines edit control (of external VTRs or drives), switcher layering, digital video effects and 2D paint/typography/library functions into one unit. The difference between the two is the number of VTRs and number of switcher levels controllable at one time. The Composium can handle up to four machines and video can be of any format (composite or component, analog or digital). Although these systems do not offer the brute edit control and list management power that a CMX might have, they are ideally suited for complex, graphics-oriented post-production that involves many layers or generations to accomplish. Since they function internally in component digital, when hooked to several D1 VTRs, the designer/editor can operate without any regard to how many generations or layers it takes to achieve the final result. With the paint system internal, if the designer/editor wants to add that extra touch-up, he or she can do that right online as the editing happens.

The Video F/X takes a different approach. At first glance it appears to be a low end edit controller using a Mac II as the platform. This hardware controls audio and video switching equipment and 2- or 3-VTR control of a variety of VTRs, up to the Betacam level. The method of editing takes a "windows" approach in which playback and recorder video as well as edit information are viewed on the same 19" Mac screen. Instead of a strictly numerical edit decision list, digitized images for the heads of each scene are saved in a list. This is similar to some of the non-linear offline edit systems, but the Video F/X is presently designed as a linear, online edit system. Its true power is that Digital F/X has written software to convert Postscript files into anti-aliased video character and paint information. That means that nearly any of the page composition, drawing and word processing software used on Macs for desktop publishing can also be adapted to create high-quality, anti-aliased supers and graphics over video.

Other variations of the video workstation include the Colorgraphics DP 4:2:2, Pinnacle Prism, NEC VUES and more. Although these are not standalone systems that fully replace the modern online edit suite, they do cover many of the functions done in such a facility. Though not here quite yet, the days of the "edit suite in a box" are probably just around the corner, thanks to the lowly personal computer.

6. Non-Linear Editing

The jargon in this business just keeps mounting up! What the heck is non-linear editing? Well, it's video editing the way you always wanted it to be. In fact, if you've ever edited film or audio tape or run a word processor, you've been engaged in non-linear editing. When applied to video the term relates to a number of edit systems that allow you to make changes in length or the order of scenes without regard to what effect those changes have on the material that follows. In traditional video editing systems, when you delete a scene from the middle of a sequence, you must re-edit all footage following, because the positions of those scenes on the master tape have now changed. That is said to be linear editing. When you edit film workprint, you simply cut out that scene, splice the remaining ends together and proceed. That is non-linear editing.

Early approaches to non-linear editing used either multiple VHS or Betamax cassette decks with duplicate copies of the video, or videodiscs, to give the editor the non-linear capability. The cassette approach is typical of the Cinedco Ediflex, the Picture Processor and the Touchvision systems. The champions of the videodisc approach are the CMX 6000 and the Lucas Editdroid. In the cassette method, by keeping duplicate versions of the raw footage on the cassettes the editor is able to go between scenes very quickly and the edit controller cues to whichever cassette is parked closest to the desired scene. This mimics random access. When viewing an edited scene, the editor does not see a sequence recorded onto a master tape, but rather views a sequence played back from multiple cassettes as if it had been edited onto one tape. This allows for rapid changes and many versions of a cut if need be.

The videodisc systems offer true random access, but the cost of the videodiscs makes these systems expensive to operate. The arrival of last minute footage is also a problem because it must be incorporated into the rest of the disc-based material. Videodiscs, unlike VHS or Betamax cassettes, cannot be reused.

The newest and most promising non-linear edit systems are the PC-based systems like the Avid/1 Media Composer and the EMC2. These systems digitize the video of the raw footage and store the data on some form of re-recordable computer media such as hard disks, magnetic-optical disks, data cartridges, etc. Because the data required for each frame of full-color video is too massive to deal with, both systems employ "cheats" - the video is low resolution and motion compression schemes are used, rather than playing an honest 60 fields per second. Although better video quality would be nice, the quality is sufficient to make accurate offline editing decisions and produce a good "rough cut" or "workprint". What they offer in addition to true random access and non-linear editing, is also a "windows" and mouse-driven approach to the methods of editing and viewing selects and clips on the screen. As a result, it is easier for people not familiar with the often arcane interfaces of CMX-style online editing to adapt to these new non-linear systems.

Today these systems are strictly for offline use. However, as technology breakthroughs occur and the image quality rises, non-linear editing from video stored as data will most certainly become the norm for online editing as well.

7. Why PAL?

Most of us have grown up around and are used to only one television standard: good old NTSC - 525 scanlines of video and 60 fields per second. But to a large part of the rest of the world other television standards are the norm - PAL, SECAM and their variations. With western Europe (as well as eastern Europe in the future) becoming a major bloc that may soon be a more driving consumer factor than the United States, it is critical for American video producers to understand what "sells" in this burgeoning marketplace.

A fact that many Hollywood production companies have learned the hard way is that foreign buyers of American programming often reject video that has been standards converted from NTSC to PAL. PAL is the dominant video standard in Europe*, Australia and many other countries in the world. PAL is 625 scanlines of video at 50 fields each second and to the NTSC viewer has somewhat higher resolution and a bit of flicker (which is less apparent after getting used to the picture). The problem with standards conversion is that under close inspection the video shows quantizing artifacts due to the digital processing (similar to the degradation seen in one or two passes through a digital effects device). Worse yet are motion artifacts caused by the conversion of 60 fields of NTSC into 50 fields of PAL. Some imagery like ADO moves or 3D animation will have a jerky appearance. Hollywood producers have gotten around this by shooting and posting entirely in film or through several proprietary processes that are essentially "hot-rodded" approaches of standards conversion.
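
The motion problem comes down to simple arithmetic, sketched below (Python, just to make the ratio concrete):

    # 50/60 = 5/6, so in every second of converted video ten NTSC
    # fields have no unique PAL field and must be dropped or blended,
    # which is what produces the jerky motion.
    ntsc_fields, pal_fields = 60, 50
    print("fields to drop or blend each second:", ntsc_fields - pal_fields)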

So why not simply produce and post in PAL in the first place and avoid the problem? Locally, several companies offer rental of PAL field video production equipment. This has come about because of Florida's importance as both a location for European television producers as well as a convention and vacation destination for overseas corporations. These producers want to be able to stay in PAL because they are posting in PAL at home.

Recently there has been a growing interest in the US to also be able to post in PAL here. True PAL film-to-tape transfer and PAL editing is available in New York, Washington, Los Angeles and now here in Orlando at Century III. The driving reason again is the desire of the international producer to stay in PAL for foreign distribution. In Century III's case, the need was to provide simultaneous post (film transfer, editing, graphics and effects) in both NTSC and PAL of the television series The Adventures Of Superboy and Super Force as well as other future projects. The "proof of the pudding" is that the guarantee of direct post in PAL from the 35mm film negative has closed several sales in the foreign markets for these shows.

The point of these examples should not be missed by the corporate video producers either. If US corporations are to compete effectively in the global economy, they must be prepared to present an image that is no less impactful to the foreign employee or buyer than his or her own television images are at home. We preach this about our own video productions, but it is no less true overseas.

*Often producers in SECAM countries work in PAL for ease and then convert to SECAM for broadcast.

8. Digital Audio Workstations

Those of us who've spent time editing audiotape with razor blades will certainly appreciate the ease and power of editing audio tracks on a computer. The new host of digital audio workstations (DAWs) offer great power and flexibility at lower cost than ever before.

DAWs all basically operate in the same way. A computer hardware platform (such as a Macintosh or IBM) is coupled with digital sampling and processing cards and special audio editing software. For the user the software generally mimics the layout of a multitrack audio recorder. Internally stored audio data is presented to the operator as information on one of several "tracks" positioned against a timeline (usually timecode). The different workstations vary in their approach; some limit the actual tracks to the number found on a real audio deck, usually from two to eight. Others use an unlimited number of tracks (in software) so that virtually each separate effect or cue can be placed on its own track. Then the output of these unlimited "tracks" is mixed to several (two to sixteen) real hardware outputs.

The great power that DAWs give the operator is the ability to perform non-destructive editing. That is, if you perform that "razor blade edit" in the software and go too far, you can always put it back the way it was. Another powerful feature is that tracks can be "slipped" in their sync relationship to each other and individual elements can even be moved within a track. Try that on your multitrack deck!
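
A minimal sketch (Python, with hypothetical names - not any particular DAW's software) shows why non-destructive editing is so forgiving: the "tracks" are just lists of references into source audio that is never modified.

    # Each "clip" is only a reference (source file, in-point, out-point in
    # seconds) into source audio that is never touched by an edit.
    track = [("narration.wav", 0.0, 12.5), ("sfx_door.wav", 0.0, 1.2)]

    def razor_cut(track, index, offset):
        """Split one clip in two at an offset; the source file is unchanged."""
        src, t_in, t_out = track[index]
        track[index:index + 1] = [(src, t_in, t_in + offset),
                                  (src, t_in + offset, t_out)]

    saved = list(track)        # "undo" is as cheap as keeping the old list
    razor_cut(track, 0, 4.0)   # the cut exists only in the edit list...
    track = saved              # ...so putting it back is instantaneous

Slipping a track's sync or moving an element within a track is equally painless: only the reference times change, never the audio data itself.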

Since DAWs deal in digital data, certain operational procedures become necessary. Internal storage, whether in RAM or on a hard disk, is finite - usually from one to four hours, tops. The raw audio must first be transferred into the system from some analog source: live mics, CDs, tapes, etc. This is done in real time. Most data is not stored on any type of removable media; therefore, when the operator goes from one project to another, project "A" must be "backed up" (data output from the system) and the new project "B" must be "restored" (data input to the system). Different brands vary in the speed of data I/O, ranging from real time up to about 2.5 times real time. To help in this process, manufacturers are starting to offer removable computer media, such as the new magneto-optical data disks (which are re-recordable). Unfortunately these are still a bit costly.
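
The storage limits are easy to appreciate with a little arithmetic. A quick back-of-the-envelope calculation, assuming CD-quality audio (16-bit samples at 44.1 kHz, stereo):

    # Rough storage demand of uncompressed CD-quality stereo audio.
    bytes_per_second = 44100 * 2 * 2            # sample rate * 2 bytes * 2 channels
    mb_per_hour = bytes_per_second * 3600 / 1e6
    print(round(mb_per_hour), "MB per hour")    # roughly 635 MB per stereo hour

At hundreds of megabytes per stereo hour, the "one to four hours, tops" ceiling on internal storage is no surprise.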

If you think that DAWs are out of your price range, you are wrong. System prices range from about $12,000 at the low end for the Studer Dyaxis to the $100K-plus stratosphere of the NED Post-Pro, AMS Audiofile and Lexicon Opus. Generally the differences between high and low are based on the number of "bells-and-whistles", the internal "number-crunching" speed and the quality of the "scrubbing". (This means how well the digital audio can be made to sound like analog audio when you "rock the reels" on a traditional reel-to-reel deck.) For an even lower price, several companies are offering plug-in processing cards and software that can be added to an IBM-type or Macintosh personal computer. Not bad when you realize that high-quality two-track audio recorders such as the Sony APR-5002 or Otari MTR-10 cost in excess of $10,000! Like it or not, you can't help but want to move into the digital audio era.

9. The Film Look

Let me start out by saying that if you really want something to look like film, shoot it on film! However, if budget and/or logistics rule this out, then there are quite a few ways that people can be fooled into interpreting a video-originated image as appearing like film.

The technical differences between the film and video image are that film has a visible grain structure and the ability to handle higher color saturation and contrast compression. The difference in frame rates (24 frames/sec for film vs. 60 fields/sec for video) results in certain psycho-temporal perceptions that create a mental boundary which says "this image is imaginary". Often for the casual viewer it is the artistic elements such as lighting and filtering within the scene that trick the brain into recognizing it as "film", even though the viewer is watching a video-originated picture.

Several companies have created patented processes whereby you can send them video-originated footage and they will convincingly process it to look just like film material which was transferred to video. If you would like to achieve similar results, but at a lower investment, follow some of these guidelines.

The first key is in the shooting. Use good artistic lighting, just the way it would be done for film. There are a number of excellent books on electronic cinematography which are applicable here. Single camera shooting means naturalistic lighting, i.e. light should look like it comes from the logical sources - windows, lights in the room, etc. Although many people associate the "film look" on video with heavy filtering of the image (to make it "fuzzy"), it should be noted that the 35mm film image is extremely crisp. Extreme filtering should only be done for aesthetic reasons. It may be helpful to back off on the detail setting of the video camera, though. The image enhancement (detail) of video is often set too crisp from the factory and a slightly "softer" setting is more pleasing for EFP work. Using a star filter (even though you are not shooting for a "star" effect) can also achieve similar results. Also, never shoot with levels that end up "clipping" video, such as burned out skies or windows.

Next comes post-production. In order to mimic film imagery more closely, you should increase color saturation during playback, decrease white levels, slightly "crush" black levels and raise the gamma knee a bit (stretching contrast). The last trick is to alter the frame rate. Devices like the GVG Kaleidoscope have an "incremental freeze update" function where they alternately freeze then update fields or frames at a variable rate. Through experimentation (usually the minimum setting) you can often create a look that resembles the "3/2 pulldown" appearance of film transfer (24 frames converted to 60 video fields). These settings will vary from one scene to another and work best with motion rather than static images.
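
For reference, here is the 3/2 pulldown cadence the freeze-update trick is trying to approximate, sketched in Python: film frames are alternately held for three video fields and for two, so 24 film frames fill exactly 60 fields (one second of NTSC).

    # The 3/2 pulldown cadence of a film-to-tape transfer.
    fields = []
    for frame in range(24):                  # one second of 24 fps film
        hold = 3 if frame % 2 == 0 else 2    # alternate 3-field and 2-field holds
        fields.extend([frame] * hold)
    print(len(fields))                       # 60 fields = one second of NTSC video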

With a little care and trial and error you will achieve an interesting look that is somewhere between film and video and will be quite pleasing to most viewers - and all at a fraction of the cost of large-scale 35mm shooting and post.

10. Computer Software For The Producer

Most of the aspiring producers that I know in this business own a personal computer. If you don't, but fit into the category, I strongly suggest putting that on your holiday shopping list. Regardless of whether it is an Apple Macintosh or one of the many IBM clones, a PC can make your life so much easier - or at least a bit more organized.

I will keep my examples to the IBM world because I have personal experience with those systems and software, but I am sure that all of my comments are applicable to the appropriate Mac or Amiga products as well. Various software developers are writing edit decision list, storyboarding and multimedia software to make the PC a direct extension of the production effort. The ideas that I am going to present are much more mainstream and use standard "off-the-shelf" business software.

The first type of software that I recommend to any producer is some brand of high quality word processor, such as WordPerfect, Word, Ami Pro, Multimate, etc. The most current versions of these are very easy to use yet offer a lot of features for the "power user", such as mailing lists and basic desktop publishing. The most important feature is the ability to handle two or more columns in the text so that scripts can be easily written and changed in the word processor.

Next is some type of spreadsheet like Lotus, Quattro Pro, Excel, etc. I personally favor Lotus 1-2-3, which I have used as both a spreadsheet and a database. Such software makes it a breeze to handle all of your basic accounting records, whether they are business invoices, P&Ls or production budgets. I have used Lotus to create several custom variations of the AICP bid forms familiar to most commercial producers.

Using software like Lotus 1-2-3 as a database may seem strange when there are many specific database packages available (dBase IV, RBase, DBM, etc.). I find that for the needs of most of the people in this field, Lotus 1-2-3 works much more easily than a standard database package. Used this way, I have built applications for tape libraries, work order records, "rolodex" files, sound effects cue sheets, etc. The reason I like Lotus for these uses is that I can display the whole file of records (usually up to 1,000 individual records depending on the amount of detail in each entry) on one screen, rather than displaying only one record at a time as in most true databases. I can sort by any category alphabetically or numerically and then scroll through the list to find the right record. I find this to be an easier way of finding information than querying a database, where the information entered must often match exactly or no record will be found.
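
The idea is easy to see in miniature (sketched here in Python with made-up data rather than Lotus macros): keep the records flat, sort by any column and browse, instead of querying for an exact match.

    # A flat "tape library" - every record visible, sortable by any column.
    tapes = [
        {"reel": 102, "title": "plant tour b-roll", "format": "Betacam-SP"},
        {"reel": 7,   "title": "CEO interview",     "format": "1 inch"},
    ]

    for rec in sorted(tapes, key=lambda r: r["title"]):
        print(rec["reel"], rec["title"], rec["format"])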

Other handy software that I recommend includes forms software (for creating custom logs, work orders, etc.), stationery software (for letterhead, signs, etc.), and drawing software (for storyboards, set designs, etc.). If you really want to get deep into production software, then I suggest the whole Comprehensive line of "computer aided video" software products. These include CMX-compatible edit decision list software, an integrated tape logging database and even complete edit systems (software and hardware).

11. Film-To-Tape Transfer

Transferring film-originated material to videotape has undergone many technological advances in the last ten years. This is the part of a post-production facility that has the most impact for a television program or high-end commercial producer. It is in the transfer suite that the "look" is really established and it is for this reason that the rapport between the colorist and the client is more critical than in other areas of the operation.

The colorist's tools these days usually include the following: telecine, color correction system, image enhancement and sound synchronization. As far as the telecine is concerned, the preferred choice for most producers is the Rank Cintel "flying spot scanner" - usually a model IIIB, IIIC, 4:2:2, Turbo or now Ursa. A close second to the Rank is the BTS (Bosch) FDL-60. Both devices feature direct transfer from 35mm and 16mm negative or positive (print) film running at either 24 frames per second (fps) or 30 fps. Some models offer other speeds as well as transfer of Super 8mm film and/or slides.

Since the basic color correction controls within the telecine are similar to the controls on a camera (video levels and basic RGB color balance), several independent manufacturers developed outboard color correction systems, such as the daVinci and the Sunburst. These systems control not only the internal colorimetry of the telecine (primary color controls) but also add further color manipulation to the video image (secondary color controls). As an example, the daVinci splits up the color spectrum (as seen on a vectorscope) into 16 segments, so that it is possible for the colorist to independently alter shades of a particular color without affecting all shades of that color. In other words, one shade of blue can literally be altered to look red, while most other shades of blue in the picture remain unaltered. This kind of range gives the colorist a palette to "paint" the picture, based on his eye or input from the producer or DP.
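
The segmentation itself is simple to picture: a 360 degree hue wheel divided into 16 segments gives 22.5 degrees per segment, so two visually distinct blues can land in different segments and be corrected independently. A purely illustrative Python sketch (not the daVinci's actual internals):

    # Divide the hue wheel (as on a vectorscope) into 16 segments of 22.5
    # degrees each; a correction can then target only one segment.
    def hue_segment(hue_degrees, segments=16):
        return int(hue_degrees % 360 // (360 / segments))

    print(hue_segment(225))   # one shade of blue -> segment 10
    print(hue_segment(250))   # a nearby blue     -> segment 11, untouched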

Lately image enhancers have been an area of new development and the device of choice seems to be the Accom DIE-125. This unit enhances as well as reduces grain. Enhancement is important because the basic Rank picture is somewhat soft and most viewers prefer more crispness to the picture detail. By being able to "knock out" some of the grain, the Accom also goes a long way toward making 16mm film material look like 35mm film.

Sound synchronization has also changed in the last few years. Since transfer is done from the negative, the production audio will have been recorded on separate audio tape. This can either be transferred to magnetic film for synchronization to the picture or it can be synchronized directly. For direct synchronization in most modern transfer suites, the audio track must have been recorded using SMPTE timecode (such as with a Nagra IV-TC recorder).

Film transfer is an important area for any producer shooting on film and posting in video. Since the "look" can be radically affected for the good by the colorist, it is a good idea to know your colorist so that you both are talking the same language and know what is meant when one of you uses those lovely film terms, like "warm it up!"

12. Electronic Graphics And Animation

Even in the world of corporate video, electronic graphics and animation have become as important as they are to the image of television stations and networks. 2D and 3D paint and animation can add the extra sizzle that keeps your viewer's attention and makes your program compare favorably with each night's prime time fare. But not all graphics and animation systems do the same thing, so it is important to understand how they work and how to get the most for your money.

Two-dimensional paint systems (2D) are simply electronic art departments inside a box. In order to make the software comfortable to a graphics designer/artist who has not come from a computer background, the choices and palettes are made to "look" and "feel" similar to real art media, like paint, chalks, airbrush and so on. This is easy to relate to and keeps the learning curve short.

Top-of-the-line 2D paint systems are so-called full-color systems employing 24-bit color processing. This means that any of the theoretically possible 16 million colors in NTSC video can be displayed at any time. Cheaper systems use 8-bit processing, which means that at any given time a palette of only 256 color choices can be used out of the 16 million possible choices. Though cheaper as well as more expensive systems offer the same actual resolution, the higher end full-color systems look like they have higher resolution because the 24-bit processing allows for an unlimited palette. This means that anti-aliasing can be used to eliminate "jaggies". Anti-aliasing is the "smoothing" of diagonal lines and curves by shading adjacent pixels of the image with similar colors from the palette. As an example, a white diagonal line on a black background would actually have several pixels of grays between the pure white and the pure black pixels, when viewing a "blow up" of the image. This is similar to looking closely at color pictures in a magazine and realizing how the image is really composed of tiny dots of color.
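
If you would like to see the principle in action, here is a tiny Python sketch (my own illustration, not any paint system's actual algorithm) that shades pixels along a one-pixel-wide white diagonal line by how much of the line covers each pixel:

    def coverage(px, py):
        """Fraction of pixel (px, py) covered by the line y = x, crudely
        estimated by sampling a 4 x 4 grid of points inside the pixel."""
        hits = 0
        for i in range(4):
            for j in range(4):
                x = px + (i + 0.5) / 4
                y = py + (j + 0.5) / 4
                if abs(y - x) < 0.5:       # within half a pixel of the line
                    hits += 1
        return hits / 16                   # 0.0 = pure black ... 1.0 = pure white

    for px in range(3):
        print([round(coverage(px, py), 2) for py in range(3)])

Pixels on the diagonal come out nearly white and their neighbors come out gray - exactly the in-between shades described above.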

A three-dimensional (3D) animation system is totally different from a 2D paint system. 3D workstations are very similar to engineering CAD-CAM units. Math and programming are critical to image creation and rarely do 2D artists move into the role of operators of the 3D units. 3D animation is created in several steps. First a shape is created in three dimensions. This is usually viewed as a wireframe. Once the shape is correct, it is assigned certain attributes: transparency, color, how light is reflected (glossy or dull), etc. Usually the 3D object has some type of texture to it. Since 3D systems are not paint systems, this texture must be created in a 2D paint system first and then "imported" to the 3D system as a texture file. Through the combination of these textures and the attributes, imaginary objects like a transparent wooden glass can be created. The next step is to add lights, define the color of the lighting and add movement to the lights. Finally the movement of the 3D object as well as the movement of the "camera view" onto the scene must be choreographed by the animator.

Both 2D and 3D images offer outstanding opportunities to impress the viewer and with careful thought are powerful communications tools in the producer's bag of tricks.

13. Linear Editing Is Not Dead Yet !

With all the articles and touted benefits of non-linear offline editing, one would think that the "good old fashioned" form of videotape editing that we all know and love - namely linear editing - must be dead. This is certainly far from the truth. There are many non-linear systems used for offline editing, including Montage, Ediflex, CMX 6000, Editdroid, EMC2, AVID and many others. All offer the benefits of being able to edit story material without regard to length. This allows for easy revisions without the need to re-edit all the material after a change - in essence, a "film-style" editing process.

But if this is so great, wouldn't everyone be changing to non-linear edit systems and away from linear edit systems such as Sony, CMX, Grass Valley, etc.? As an editor I have worked on nearly a dozen different edit systems, ranging from stand-up editing at quad VTRs to CMX keyboards and most recently the AVID, and I must admit that I am most comfortable with the response and feel of editing with a keyboard over the mouse manipulations of the AVID's Mac II computer. I bet there are a lot of editors who could turn out a product just as quickly (and accurately) using a linear offline or online system as they could on the non-linear system that their clients often think they ought to be using for the project.

First let's look at image quality. Most linear systems use 3/4" or better video and the resulting image quality is very good for making creative decisions. Non-linear systems display lower image resolution (VHS, Betamax, or digitized images). When using digitized images, creative decisions are made based on a "windowed" image on a 19" screen. This image is of dubious quality compared to real video playback. The manufacturers' claims of VHS- or 3/4"-like quality are pure baloney - a fine example of a spec sheet versus reality. Another factor is storage time. If you are cutting a very structured piece (one that was shot according to a script and follows it) and the source material is in the 4 to 6 hour range, then the non-linear systems work very well. However, if you try to cut a disjointed documentary-style piece with lots of material coming from many formats, where the final form is being created while you cut it, then a linear system will often outperform the non-linear systems. Most clients find it hard to visualize a piece with text and video effects moves when they can only see a cuts-and-dissolves version of the show or commercial. Most linear edit bays give you the ability to include titles and effects moves, while non-linear systems either can't or do so only in a very rudimentary fashion.

Though I like a lot of the features of the non-linear systems, I hate to see decisions made based on jargon and sales hype. Pick the human editor first and put your confidence in him or her to pick the right system for your project. You may find that the overall time for the editing will be the same regardless of the edit system used for the task.

14. In Search Of A More Efficient Online

The online edit session is often approached as if the producer had all the time in the world, but when the session comes close to the end everyone starts worrying about the bill. This is often because adequate preparation was not made and there is not enough recognition early on that time is money. I'd like to offer some pointers as to how to make your next online edit session go more smoothly and be a pleasant experience.

a. Walk into the session with approved material and an approved script. If you were able to get an offline edit done and get your client's approval you are vastly ahead of the game. In most cases, however, non-video clients cannot tell much from a "cuts-only" offline and must see everything in place to make a judgement. Often an offline isn't even needed. In these cases have all your raw footage dubbed to viewing cassettes (usually VHS) with visually displayed timecode (the "burn-in window"). At your leisure prior to the session, make sure that you have viewed all footage, selected all takes, timed all takes and selected the proper in and out timecodes (preferably to the nearest second) and made a log of these reel numbers and timecodes corresponding to your script. If necessary, review these selected takes with your client so that you get their input, approval and understanding before the session.

b. Organize all your tapes. If possible, get all your source footage onto a single format. If this isn't possible, then be sure that you know which reels are on which formats and be sure that you have booked all the necessary VTRs for the session. In some cases, if you have source footage on several different videotape formats, it may be cheaper to get them dubbed (before the session) onto a single format than to incur the additional VTR charges during the edit session. The generation loss with such a dub is minimal. Be sure that all reels are properly numbered and labeled and that this information corresponds to the log described in item a. Most people like to have timecodes on the tapes correspond to the reel number. For example reel 1 has 1-hour timecode, reel 2 has 2-hour timecode and so on until reel 23. Since there is no 24-hour timecode (or higher), the 24th reel should be numbered 101 and have 1-hour timecode again (see the short sketch following this list).

c. Make sure all your tapes have valid timecode and know whether that timecode is drop frame or non-drop frame. Consumer/industrial formats (S-VHS, VHS, Betamax, laserdisk, 8MM, Hi-8MM) should be dubbed to higher formats ("bumped up") such as 3/4" or Betacam.

d. Determine if you will have to do a lot of dissolves to and from the same tape. If so, a B-roll dub will have to be made. If there are a lot of these, make duplicate dubs of all of your footage ahead of time (dub time is cheaper than edit time), keeping the same timecodes. If there are only a few scenes, then it is easier (and cheaper) to do this as part of the edit session.

e. Create all paintbox graphics and artwork ahead of time and provide the editor with the graphics on tape. Usually keyable graphics will require the color artwork on one reel and a hi-con matte (to cut the "keyhole") on a second reel (2 source VTRs required in the edit for this single graphic).

f. Define all character generator artwork beforehand. Determine font styles and sizes, colors, edge attributes and proper spelling. If possible, book cheaper "offline" time to compose and store lengthy text information. Avoid paying online edit time simply to type text. Avoid art cards; but, if you must bring them, they should have white text on black backgrounds. Avoid text with thin typefaces and fine detail.

g. Decide on the amount of digital effects required and, if possible, storyboard them (simple line drawings will do) so that the editor can easily follow what you want. Often consultations with the editor prior to the session are helpful here.
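
Returning to the reel-numbering convention in item b, a short Python sketch (the function name is my own) makes the scheme unambiguous:

    # Reels 1-23 carry their own timecode hour; from the 24th reel on,
    # labels jump to 101, 102, ... and the timecode hour starts over at 01.
    def reel_label_and_hour(reel):
        if reel <= 23:
            return reel, reel
        return 100 + (reel - 23), reel - 23

    print(reel_label_and_hour(2))    # (2, 2)   -> reel 2, timecode 02:00:00:00
    print(reel_label_and_hour(24))   # (101, 1) -> reel 101, timecode 01:00:00:00
    print(reel_label_and_hour(25))   # (102, 2) -> reel 102, timecode 02:00:00:00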

Hopefully these guidelines will ease things for you during your next online edit session. Remember that there is a lot of free advice out there from your friendly post house and that it is in your best interest to get as much input and also to convey as much information ahead of time as possible.

15. Some Guidelines For The Use Of Text In Video

One observation I've made during my years in edit bays is that often most of the time in a session is not consumed by editing the commercial or program, but rather in "monkeying around" with supers. It never ceases to amaze me that many clients who have spent a lot of years in this business haven't learned much about getting good text on the screen. Here are some guidelines to follow the next time you try to decide on how best to prepare your text for an online edit session.

The best display of text over live video is created when you use a character generator, such as a Chyron, a Vidifont or the text from a DF/X. These systems properly "cut the hole" for the key over video and fill it with a letter and some edge or shadow attribute. Most character generators offer a wide variety of typefaces and sizes, including many serif and sans serif faces or fonts. If the size isn't quite right, the available sizes of a given font can be cleanly resized in the character generator to offer more size versatility. This should be done ahead of time during "offline" hours, not during the session, as it is not a realtime process. Shrinking or blowing up fonts with a digital effects device (ADO, Kaleidoscope, etc.) will not give the cleanest results. There has been a lot of thought about the readability of different typefaces in print and I don't know how well it relates to video. It is generally presumed that non-italicized, serif typefaces in upper and lower case are the most readable for the eye. I would add that for video use, bolder typefaces or fonts are best and the use of some type of border or shadow will make the text stand out from the video behind it. I generally feel that white is also more readable than other colors for the letters, although most character generators offer at least 64 color choices and some as many as 512, 4096 and even 16 million.

Much of the text and logo material we deal with originated as print - generally a white background with dark or black text. Video is the opposite and adjustments must be made. For instance, some companies use a "see-through" style of logo, in which the white paper background (in print) fills in the logo's shape. As a video key, the full shape of the logo is no longer correct. Such material is best handled in a paintbox so the logo can best be modified for video use. Art cards are the worst time consumers in an edit session. If you must bring text on art cards to the session, then make sure it is properly prepared. Bring cards with white text on black paper. This is easiest to key and will look the cleanest. Black text on white paper will have to be reversed in the switcher during keying and thus never looks as clean if there is any detail to the type (such as serifs or thin type). Logos should only be bold and keyable as a single color. Detailed logos that are intended to be multi-colored (such as a state or agency seal or a coat-of-arms) should never be keyed but rather should be taken in full color (such as from letterhead) and placed into a "window" in the screen (with a digital effects device) or should be modified for video in a paintbox.

I have achieved good results keying art cards which were PostScript pages created on a Mac. These used white text on black backgrounds and featured a variety of nice looking fonts in various sizes. When doing this, proper attention must be paid to video aspect ratio and composition, so that time isn't lost in online to literally cut-and-paste the artwork. Regardless of the methods you use, make your decisions early enough and get as much done ahead of time during offline rather than online hours and life will go much more smoothly during the edit session.

16. Is Audio Post-Production For You ?

Whether you call it audio post-production, mix-to-picture or audio sweetening, the ability to "manipulate" synchronized audio (on video projects) has become very viable and popular in the last few years. This is a concept that has been quite commonplace in film post-production since shortly after the introduction of the "talkies". With the advent of multi-track audio recorders that can be controlled by audio editing systems (synchronized to video and timecode) and more recently the digital audio workstation (such as the NED Post-Pro, AMS Audiofile, Lexicon Opus, etc.), we have seen audio post grow to be a part of nearly every major video production. But what can it do for you?

Audio post-production will allow you to concentrate on the audio portion of your project in an environment that is conducive to qualitative acoustic judgments. Although most modern video edit suites are designed to have pretty accurate sound monitoring, most audio post-suites are better and generally at or near recording studio specs. In addition to better sound, you, the producer, will be more in a frame of mind to pay attention to only the audio and not also worry about shots, effects, supers, etc., as you might be prone to do in a video edit suite.

Cost is also a significant factor. A video edit bay will generally range in the $250 to $450 per hour rate (depending on configuration) while an audio post suite will go for $150 to $250 per hour. Obviously you can afford to "tinker" with the audio portion a bit longer on the same budget dollars in the audio suite than in the video suite.

And then there is "horsepower". If you edit on 1" videotape you have two channels of audio and if you edit on D2 you have a maximum of four. Audio post suites usually have from eight- to 48-track configurations, so the ability to layer very nice, dense, detailed soundtracks is greatly improved in an audio post suite versus a video edit suite. There is a lot more audio processing equipment available to you (reverb, compression, special effects, noise reduction, etc.). Digital audio workstations allow instantaneous random access to the loaded source audio and the ability to edit on subframes (1/100th of a frame, i.e. 1/3000th of a second) rather than only on frames (1/30th of a second) as you do in a video edit suite.

Finally, a well-equipped audio post suite will have a healthy selection of sound effects and music library CDs available and may even offer the services of custom music scoring with a Synclavier or other similar device. The audio editor/mixer will probably also be familiar with and equipped to provide you with ADR (dialogue replacement, or "looping", of poor quality location sound) and Foley (live sound effects, such as footsteps) recording services.

If you would like your next video project to have the kind of sound that your viewer is used to hearing in movies and prime time television shows, try out the audio post-production environment rather than the video edit suite for the audio portion of your next production.

17. Video Walls And Other Challenges

One of the more fun and potentially hair-pulling projects you can post is a multiple image video presentation, such as a video wall. If you keep your wits about you, the post resembles the task of completing a picture puzzle more than a standard online edit session. Organization and visual layout are just as important as creative storytelling.

There are several ways to build multiple image presentations. You can either create a presentation in which each player corresponds to a single screen or set of screens, or a presentation in which one or more players can be switched to a series of different monitors. The former approach is generally used in presentations with one to four monitors and the latter is used most often on video walls.

In the typical video wall the output of one to four videodisk players is displayed onto a symmetrical array of video monitors: 3 x 3, 4 x 4, 8 x 8, etc. A buffer that is part of the video wall electronics controls the "routing" of signals from the players to the monitors. For instance in an 8 x 8 array (64 monitors) one player can feed each monitor (64 repeated pictures) or the image can be expanded in increments up to the size of the full video wall (one image covering the entire 64 monitor array). A full size image will look degraded when viewed close-up, but the impact at a distance of several feet can be quite astounding. In addition, the buffer can freeze images to each monitor or send a color background ("color wash") to the monitor, so in a 3-player / 64-monitor example, you could display any combination (simultaneously) of 3 moving images (playback) and/or up to 64 stills (freezes of playback) and/or up to 64 monitors with color washes (same color on all).

In creating such a visual presentation the video wall programmer takes on nearly as much importance as the editor. It is the programmer who creates the display sequence for the video wall controller and this sequence determines when to select playback from a given player, whether to display it moving or frozen, and over how many monitors to spread each image. The controller will cue, park and play the players (videodisk or VTR), but it will not shuttle them around in any type of random manner, so the editing must be done in the correct linear fashion to allow the desired result to be programmed into the sequence. Most video wall controllers require a two frame delay for each command. It is important that the editor realizes this so that changes are not placed too tightly together. For instance you cannot have 64 different still frames "ripple" onto the wall at 1-frame intervals. Usually a 10 frame minimum duration for each effect is a good idea.
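
A simple pre-check can catch sequences that are programmed too tightly. A hypothetical sketch in Python, assuming the roughly two-frame command delay and the ten-frame minimum suggested above:

    # Flag video wall sequence steps spaced closer than a safe minimum.
    steps = [0, 12, 22, 26, 40]     # programmed effect start times, in frames
    MIN_GAP = 10                    # a comfortable margin over the 2-frame delay

    for a, b in zip(steps, steps[1:]):
        if b - a < MIN_GAP:
            print("steps at frames", a, "and", b, "are only", b - a, "frames apart")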

A common misconception is the belief that when multiple synchronous playback is required from different masters, these masters have to be created simultaneously in the online session, requiring a six or seven VTR session. This is really not necessary. Usually the soundtrack is common to all, so let that dictate the picture flow. Cut one point-of-view first, then reference the others to that. Careful storyboarding and attention to the edit decision list is very important here. When each master is done, build a composite reference tape to check that the total flow is pleasing and that there aren't any awkward sequences that stick out when everything is together. You don't have to show 64 images, though. In the example, if 3 sources feed 64 monitors, then a composite tape of the 3 images side-by-side will be sufficient to let you verify that everything is correct. So the next time audience impact sends you looking for a challenge, why not try your hand at video wall design !

18. Using Video For Film Release

Since the release of Terminator 2 the film production community has become more aware of video's usefulness as a replacement for film opticals in special effects scenes. In that film and several others, proprietary hardware and software systems were used by such companies as Industrial Light & Magic to scan in film footage, manipulate that footage in a digital video domain and then output the resulting composite scenes to a film recorder. Generally the process was not realtime and the resolution was in the area of approximately 2000 lines. Granted this is not your standard video set-up, but it does give you an idea of where things are headed in the future.

Most of us will never get the chance to work with the budget and resources of a film like T2, but there are many video projects that often are required to end up on film. Usually the reason for this is that the client requires a film print for theatrical projection. Often Super 8 film cartridge machines are still used in point-of-purchase displays. In the entertainment world, film is still requested as the desired medium of international exchange. For reasons of budget, availability of footage, special effects or time, it may be best to produce and/or post the material in video and then transfer all or part of the material to film for distribution. If this is your requirement then let's look at some ways to get the best possible quality throughout this process.

The first approach is to produce and post your master in a standard video fashion and use one of the various services around the country, such as Image Transform's process, to transfer the video master to film. This will yield good results for most projects. In recent years, many companies have developed their own proprietary systems for getting around NTSC's 525 line resolution limits by using variations of line doubling techniques to create a pseudo 1000 line or 2000 line system. These systems include Pacific Title/Post Group's "Gemini" and Composite Image Systems' processes. The quality is higher but at a far steeper price. Many producers will use one of these services for only the effects and title scenes and cut them back into a film negative that has been otherwise posted in a traditional film manner - if the production was on film.

If you plan to use such services, here are some ways to improve the appearance of your product. Shoot on film if possible and transfer to video for post. Film-video-film will look somewhat better than video-video-film. If you must shoot video, shoot Betacam-SP or MII, but stay in component video. Avoid obvious video production problems, like burned out skies, heavy backlighting, people in front of bright windows, clipped whites, etc. Since part of the film versus video difference has to do with different frame rates, avoid things that would tend to strobe in the final print. These would include fast lateral or vertical camera moves or digital effects moves of a similar type. The tape-to-film processes are all component based, so posting your material in a strictly component domain will help the final result. This means an all D-1 edit or component Betacam-SP or MII at worst. If you shot film, transfer to video should also go directly in component to these formats. In the case of Image Transform, PAL is used as an interim step, so providing a PAL master cuts out a few steps and improves the quality. If you can shoot and post strictly in the PAL (component) video standard, you would end up with even better results. If you shoot film, shoot and transfer for NTSC post at 30FPS and for PAL at 25FPS. This gives you a film to video frame rate relationship that is 1-to-1.

Following these guidelines will yield good results for a rather unique but often necessary requirement. In any case, get to know the company that will be providing the tape-to-film transfer service for you and what their requirements are for the best results. Be sure to test any process you intend to use before it is time to use it.

19. Is Analog Dead Yet ?

With all the hype about digital, what use is there for analog anymore? Well, it turns out that there is still quite a lot of use and it will be that way for still some time. Analog is really not a "dirty word" and under certain circumstances yields better and/or more desirable results than digital.

Most existing broadcast, production and post-production companies throughout the world have a major investment in their installed plant and facilities. The basis of these installations is composite analog video and analog audio technology. Although many parts of the plant and many major pieces of equipment may be digital and/or component, these tend to be interconnected to each other via analog routing switchers and analog patching. It is obviously not very economically feasible for these companies to rip out their analog systems and replace them with all digital systems even if they wanted to; many would not want to even if they could pay for it.

All things digital are not equal. D1 digital VTRs and D2 digital VTRs do not hook up to each other without transcoding equipment between them. You can install an all-digital D1 or D2 "island", complete with digital switchers, digital effects and digital VTRs, but what do you do with your Betacam field footage or your 1" archive footage? Bring them in as analog, of course! The grand manufacturers' direction is down the path of serial component digital video routing, which puts all the signal information on one small video cable. However, most current D1 and D2 VTRs have parallel digital connectors. If you now purchase a serial digital production switcher you have to also buy an extra "black box" (serial-parallel converter) for each input and output. At some point when you update to VTRs with serial digital connectors, these become "throwaway" devices.

As an example, Grass Valley now makes the 3000 production switcher. This is internally a D2 (composite digital) device that can accept either D2 digital or analog inputs from various sources. Its control panel works best in conjunction with the DPM-700 digital effects manipulator from Grass Valley. This is a CCIR 601 (component digital, like D1) device. Video passed between the switcher and the DPM-700 for digital video effects is best routed as composite analog video. The only complete D2 system is the Abekas A-82 switcher with A-53D digital effects and D2 VTRs. But what if you want to use an ADO or Kaleidoscope instead of the A-53D? Then video is moved in an analog fashion.

If you prefer an all D1 (component digital) "island", that is possible with various products from Sony, Grass Valley, Digital F/X, Quantel, Abekas, etc. Aside from prohibitive costs, these types of suites often miss some very basic features. For instance, component digital D1 VTRs have no standard video level controls. You can't increase chroma levels or shift hue subjectively if you wanted to without an additional (expensive) external color correction system. Component digital video will also allow you to create certain color extremes that are quite "illegal" in composite analog broadcast video or cassette duplication.

As we move into the next generation of an ever-changing video world, digital video (and possibly high definition video) are definitely on the way. But they aren't here yet and there is a lot to be said for the reliable and versatile world of analog technology. So the next time you wrestle with the question of using D2 or 1" to post, rest assured that it really isn't as big of an issue as the industry would have you believe!

20. Does Anybody Really Know What Timecode It Is ?

My apologies to Chicago, but it seems that one of the basic building blocks of modern post-production often confuses the hell out of people - even those who work with it every day. Timecode (standardized in the 70's by the SMPTE) provides one of the best tools we have for organizing material, controlling VTRs and ATRs, and synchronizing location sound, not to mention just keeping accurate time.

Timecode, the "sprocket holes" of electronic post-production is a counting scheme in which each video frame is assigned an 8-digit number that corresponds to hours, minutes, seconds and frames. In NTSC video there are 30 frames per second. There are two kinds: drop frame and non-drop frame. Since NTSC video is synchronized to a frequency of 59.94 Hz (not 60 Hz), video "time" is said to run "slow" as compared to clock time. This means that in non-drop frame timecode a duration of 1 hour as measured in timecode will be 108 frames longer (over 3 sec.) than 1 hour of actual time duration. Early on, this proved to be a problem for such purposes as accurately timing network television programs, so drop frame timecode was developed. Drop frame is a numbering scheme in which a total of 108 frames are "dropped" over a 1 hour period in the count. No actual video frames are "lost" - only certain numbers in the numbering sequence are skipped over. Therefore, in drop frame timecode, a 1 hour duration equals 1 hour of actual time. Most modern edit systems and timecode readers can deal with both types and calculate the proper offsets automatically if you mix tapes of different timecode types.

Sometimes there are other timecode types to deal with. In pure audio environments, such as a recording studio, timecode may be used to synchronize multiple audio decks. Since there is no interface to video required and the ATRs are locked to line frequency (60 Hz), such timecode may be synchronized to 60 Hz and not 59.94 Hz. If these recordings are later introduced into a video post session, speed will not be correct and adjustments must be made.

Film is now also using timecode. Aaton and Panavision, for instance, employ an optical and human readable timecode system in which each frame is numbered. Although the frame rate of the timecode count will vary with the speed of the film camera (24FPS, 25FPS or 30FPS), this will coincide with other timecodes (such as a Nagra recording sync location sound) at the rollover of each second. Nagras using timecode have various settings. As a general rule, if you are shooting film (16MM, 35MM, 24FPS, 30FPS - it doesn't matter) for video transfer, record timecode on the Nagra while using a slate with visual timecode readout. The Nagra should record timecode in 30FPS/Non-drop mode. Even though the film will not have timecode on it, at each "clap" of the slate there will be a visual readout corresponding to the Nagra's timecode which can be used for synchronization. Still use the clapsticks as a double-check. Most telecine suites will be equipped to resolve the Nagra to proper speed (based on timecode) as well as synchronize sound to picture at each slate.

During video shoots, these simple rules will help. Record either drop or non-drop, but be consistent and know which it is. The timecode generator of the record deck should be set to stop and start with the recording (not free running) and should be continuous and ascending on each tape. Do not repeat or overlap timecodes on the same tape! Code each reel with a different hour digit in the timecode, so that reel 1 is hour 1, reel 2 is hour 2, etc. Have your location PA log each scene or take with the timecode from the deck. If you are used to a lined script, these timecodes should appear on the script.

If you are doing a multiple camera shoot, use a single timecode generator to feed all decks and genlock the cameras and timecode generator to each other to avoid timecode phase and drift problems. In this case you may wish to let the generator be free running and set it to time of day. In this way all recordings will be in sync with each other at the same timecode number. You won't have to worry about synchronizing multiple generators each time the decks are stopped and started and the PA can use a simple digital clock (set to the same time, of course) to keep script notes. Be careful, though, to accurately log when tapes are changed and what the starting and ending timecodes are on each tape so that there is no confusion later in post-production.

These simple rules will help to make timecode a much more valuable tool to you on your next production. Remember, timecode is our friend!

21. How Long Is the Online Supposed To Last ?

Have you ever been asked that question? I am often called upon to help with post- production bids and estimates and am frequently asked to "crystal ball" the editing and effects time and costs. Asking how long the edit will take for a 10 to 15 minute corporate image video (no other information) is like asking how much a typical car will cost. Ferrari or Yugo? I think you get the picture. But all kidding aside, there are some rules-of-thumb that can be used to answer the question in a general way, provided one is willing to live with a few assumptions.

Generally a 10 minute video (image, sales, point-of-purchase, training, etc.) will have about 200 edits. This may be a combination of audio and video edits, such as a piece with a mixture of "talking heads" and B-roll shots. In this type of video, the shots will be a bit longer and the pace slower. It might also be just video edits cut to a pre-recorded soundtrack, in which case the shots will be shorter and faster paced. The editing time will tend to average out to the same between both kinds of shows. If the producer and/or client has only made a cursory review of the source material and online edit time is used to view various takes, hunt for shots and make things up as the edit progresses, then it will be hard to average more than about 10 edits per hour. This means a 10 minute video will take at least 20 hours or more to edit.

Let's assume, though, that the producer has screened all material using viewing dubs with visual timecode ("burn in"), selected takes, determined in and out timecode numbers for all shots and maybe even done a simple offline, "cuts only" edit of the show prior to the online session. That information is written down in a detailed manner for the editor to follow - a "paper list", if you will. In this scenario, the pace will pick up and generally one can average 25 edits per hour - an 8 hour total for online.

A true offline edit with a computer-assisted edit system that can generate a CMX-compatible list on floppy disk would, of course, allow the online session to proceed almost automatically - the "auto-assembly" - under computer control. Auto-assembly can cruise along at around 100 edits per hour because no time is lost to the manual entry of timecode numbers during the online session. The producer does have to factor back into the budget the cost of the offline system and the value of his or her time that would be spent doing both the offline and the online sessions.
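
Putting those rules-of-thumb into one place, a quick Python estimator (the rates are this article's averages, nothing more):

    # Edits-per-hour averages from the three scenarios above.
    RATES = {"unprepared": 10, "paper list": 25, "auto-assembly": 100}

    def online_hours(edit_count, prep):
        return edit_count / RATES[prep]

    for prep in RATES:
        print(prep, online_hours(200, prep), "hours")
    # unprepared 20.0, paper list 8.0, auto-assembly 2.0 hours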

Factors that affect the total online time also include effects, audio and graphics. Digital video effects can be time consuming and costly. If only simple effects transitions are desired, such as an ADO-style "page twist" instead of a dissolve, the time penalty will not be too great - a few additional minutes each time the effect is used (compared to the dissolves). More involved, multiple image effects (split screens, quad splits, cubes, etc.) will take longer. Add at least 1 hour to the total for every one of these sequences. Simple lower third supers will also only add a few minutes each time they are required (depending on how fast and good a typist the editor is), but lengthy full page text screens should be composed ahead of time during "offline" hours at a separate rate. Graphics have to be built ahead of time and laid off to tape for use during the edit. Audio in the edit session should be kept simple - 1 to 4 tracks with few specific "spot" sound effects and simple mixing. More than that and it is time for a separate audio post session. Remember that extra devices (character generators, digital effects, etc.) not only add time to the online but are usually also charged out as options with their own additional charges.

Last, but not least, the videotape format will affect the time. Cassette based sources (Betacam-SP, 3/4", D2) make editing faster than 1" sources because tape changes are quicker. D2 tapes shuttle fastest - 3/4" slowest. The more tape changes and the more transitions required (particularly dissolves requiring B-roll dupe tapes), the slower the session. I hope that this all helps the next time you try to put together a post budget. But if we're talking about commercials - all bets are off. That is a totally different ball game! And remember the editor's rule: edit time expands exponentially with the number of people in the room during the edit session!

22. Multimedia

Multimedia, as some of you know, used to be a term applied to slide shows. In the age of recycling, even words come around again. Multimedia is now the brave new world of computers and video - the "Holy Grail" for the likes of IBM and Apple. Multimedia is now defined as the integration of personal computer technology and audio and video playback.

The concept is rather simple. If you can "boil down" computer software, electronic graphics, audio playback plus still and moving image playback into a standard form of digital data, then it is possible to use a single source (such as a PC with the right audio/video cards) and a single storage device (such as a CD-ROM drive) to display any combination of text, graphics, stills, moving video, etc. Under the concept coined by Apple as "hypermedia", you could be scanning through a text reference from an encyclopedia (retrieved from a CD-ROM), hit a "key word" about which you want more detailed information and that would trigger a "hot link" to a motion video sequence which would be played back on the same computer monitor (retrieved from the same CD-ROM). As you can see, this requires software development, interactive design and video production expertise.

The need for better multimedia solutions has evolved out of the industry's experiences with interactive training over the years, in which videodisks and level III programming sort of filled the bill. The problem with videodisk programs is that each student's unit needs its own player. Videodisks play back analog video, not data, and therefore the cost per student is too high. Being able to have all of this type of material stored in the form of digital data (whether it is text, sound or picture) allows the training situation to use common computer solutions, such as multiple terminals hooked together over a Local Area Network (LAN), all drawing information from a central file server in a single location. Now the cost per student comes down significantly and the economics become much more attractive.

There are several approaches in the works as to how to actually achieve this goal. CDI, DVI, JPEG and MPEG are some of the "buzz" words for the types of digitizing methods under development. All have some problems when it comes to reproducing 30 frame, full color, full screen motion video playback with even close to minimum NTSC video quality. But that will improve. A much more difficult problem is the need for the development of standards by which to create universal applications and the manner in which best to create and master such products. I am sure these will evolve quickly in the time ahead.

The industry now is in need of those few guiding souls that will change multimedia from a technology in search of an application to a technology that will benefit the Information Age!

23. Trends In Videotape Formats

NAB '93 has passed and the video world has yet a few more videotape formats from which to choose. I'm afraid that the people pining for the days when a video format survived 25 years will be very disappointed. Although "format wars" can be quite disconcerting, I recommend that the best way to buy video hardware is to decide what works best for your needs and the needs of your clients, and forget about all other concerns.

First, I feel it is safe to assume that the dominant field acquisition format for all types of video projects will continue to be Betacam-SP. Yes, there will be a small percentage of many other formats (for EFP/ENG): 1", D2, D3, MII, Hi8, S-VHS, Umatic and, soon, even Digital Betacam. Nevertheless, Beta-SP will continue to grow in the field. Even though it isn't the best picture (1", D2 and D3 are better-looking), it is the best available mixture of ruggedness, portability, image quality, versatility and universal acceptance.

Second, 1" Type C will continue for several years as the main distribution format for broadcast-quality duplication. It is cheap (cost per minute), high quality and a worldwide standard. Type C is also the dominant format on video mobile trucks because of its ruggedness on the road.

Given these two points, I feel that anything you use in between (all levels of post used to generate a master) is your business and purchasing decisions need to be driven by the market you serve. I have by now seen Ampex's DCT, Sony's Digital Betacam and Panasonic's D3 decks firsthand. All formats look great and work as advertised. Sony and Ampex use a 2:1 compression scheme while Panasonic's video is not compressed. In spite of published concerns over compression artifacts, all the examples I have seen look very, very good, with no real difference between the various formats. A 2:1 compression scheme is extremely "mild" and should not be considered in the same light as the levels of compression used in such systems as the AVID. Differences are really based on price and features. In my opinion, the Ampex DCT is the best VTR transport I've seen and a strong point for the "buy American" crowd, but in any future purchasing decision I would have no problem with any of the other choices, including Panasonic's upcoming D5.

I realize that my comments may not do anything to ease your mind if you're considering a post format to purchase, but that's just the reality of today's post environment. The Florida markets have largely gone with the purchase of D2 composite digital VTRs. Although full component digital post with D1 VTRs provides the most pristine results, it is really a very esoteric point. There is very little difference in most standard "cuts-and-dissolves" projects (commercials, corporate, video features, television programs, etc.) within the first few generations required to create a master and release dubs, whether posted on D1 or standard analog composite NTSC. So it is safe to say that the D2 format will fare well in this part of the country for at least 5 years. It continues to produce outstanding results under the toughest circumstances, even though it might now be in the "workhorse" category and out of the "cutting (or bleeding) edge" category. Though D2 and 1" are the dominant Florida post formats, if you shot or posted on any format, you'll be able to find a Florida house that can deal with it.

24. Compression

The biggest single development in the future of video is the use of various compression schemes applied to digitized video. Compressing video allows us to "cram" more information into less space, whether that is a signal path, a computer hard drive or a piece of videotape. Compression gives us products like the Avid, Apple's Quicktime, Sony's Digital Betacam and, in the future, mega-channels of cable TV.

A standard full-color frame of NTSC video is about 1 meg of digital information, so it is obvious that uncompressed digital video of any length on a personal computer would be quite impractical. So compression is quite essential for items like hard-disk based nonlinear edit systems such as the Avid or EMC, but it also allows us to "push the envelope" in tape formats, such as with the Sony "black box" that allows a 601 (component digital) signal to be recorded on a D2 (composite digital) VTR.
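
The arithmetic makes the point plainly. At "about 1 meg" per frame and 30 frames per second, a quick sketch:

    # Rough data rates for NTSC video, before and after compression.
    mb_per_frame = 1.0
    fps = 30
    uncompressed = mb_per_frame * fps                 # about 30 MB every second
    print(uncompressed * 3600 / 1000, "GB per hour")  # about 108 GB per hour
    print(uncompressed / 2, "MB/s at 2:1")            # the "mild" VTR compression
    print(uncompressed / 150, "MB/s at 150:1")        # heavy offline compression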

Compression comes in many levels, from the "mild" 2:1 compression used in the digital VTRs to the 150:1 compression used in systems like Lightworks. There is "lossy" and "lossless" compression. In "lossless" compression only duplicate information is dropped; in "lossy" compression, some "insignificant" information which is not duplicated is also dropped. There are many methods of compression: JPEG, MPEG, MPEG II, wavelet, DCT, and more. Even though many manufacturers claim transparency with regards to the effects of compression, there is no free lunch ! In the so-called mild 2:1 compression used in the VTR formats (Sony's Digital Betacam and Ampex's DCT) the image quality may in fact appear transparent, even at 100 - 200 generations down, but it has yet to be determined if there will be artifacts that show up when the signal is passed through future generations of effects hardware. It might also be that a margin is lost in the quality of error concealment the VTR can perform. In other words, the video looks great 99% of the time, but when it goes bad, it goes bad quickly, completely and is unrecoverable. These questions can only be answered once these newer VTRs get into wide post use.

The more extreme compression used in the nonlinear disk-based systems (AVID, ImMix, EMC, Lightworks, D/Vision, Montage, Ediflex, CMX Cinema, etc.) has allowed these systems to make quality claims of "better than VHS or Umatic" and "comparable to broadcast 1/2" VTRs". Avid and ImMix are now claiming online capabilities. The interesting dilemma is that these claims are pure "specsmanship". There is currently no quantitative method by which to test, evaluate and compare various compressed images. All these compression schemes show definite artifacts which are more or less evident depending on the subject matter on the screen. Beauty is in the eye of the beholder, and the quality of compressed video can only be judged by your eye! So images that I find objectionable, you may deem acceptable for your target audience. The same is true with tape - I may prefer a D2 master, while an S-VHS master may be fine for your purposes.

In evaluating these systems, let me point out some things to look for. Most systems display video as 30 fields per second, doubling each field to create an apparent 60 fields per second. A true 60-field-per-second output is available on the Avid (as an option) and the ImMix (standard). Under the best of circumstances, these systems show little or no motion artifacts but do show some ringing around edges - similar to Umatic, S-VHS or Hi8.

More artifacts are quite visible in darker scenes with rich blacks and subtle shadow detail. In these shots the dark areas show a "blockiness" - an appearance that they are made up of little squares. Film-originated material tends to hide some of the artifacts (probably because of the grain), while some video originals make them more obvious. Although many of the systems use JPEG-based compression schemes, their files are incompatible, because each uses a different application to control the JPEG chipsets in its unit.

Compression is the way of the future. Hardware and software advances will only improve the image quality. I don't consider any of the current crop of disk-based offline/online systems to be of true online quality for all applications yet, but give them another year or two and watch out!

25. Tapeless Editing

For those of us who grew up secure in the knowledge that videotape technology was the way of the future (over film), it is time to adapt or die! I firmly believe that within 5 to 10 years, 50% of all the editing we perform will be tapeless editing. This concept is a firm fixture in the audio world but is just starting to impact the video (as well as the film) world.

Systems like EMC and Avid led the way with disk-based offline systems, along with Quantel's Harry for effects/graphics compositing. Now there is a new crop of systems that bring online (and near-online) quality to editing. These include Avid, ImMix, Quantel and various systems running on Silicon Graphics platforms (Avid, Softimage, Discreet Logic). Though quality varies depending on the amount of money you are willing to spend, it is clear that the industry is moving full steam ahead in the direction of editing and compositing video in the workstation environment. Not only is this true in video, but the traditional world of feature filmmaking is also embracing this new technology, as witnessed by the number of features that have used Avid and Lightworks systems for offline (workprint) editing and high-end workstations for effects, animation, retouching and compositing.

The main reason I feel that this is the way of the future is neither cost nor speed. In fact, nonlinear offline can result in longer total turnaround time and a higher tab. The important point is that nonlinear editing creates a method and style of working that is very inviting. Here at Century III we cut the audio for the Warner Bros. feature, "Christopher Columbus - The Discovery", under the eye of director John Glen (of James Bond film fame). Glen is a traditionalist who had previously worked with film sound using sprocket technology (mag film). After exposure to the NED Post Pro digital audio workstation, Glen quickly learned to appreciate the options offered by nondestructive digital editing. The same becomes true of clients exposed to nonlinear video editing systems.

In the near future the cutting edge post facility will probably seem similar to its present layout - except - replace the central machine room with computer towers and drives, and replace the edit suites with workstation rooms. These workstations may be used for editing, or animation, or paint, or even audio - all on the same platforms. Image quality within the system may be "scalable" and "resolution-independent", thus allowing video facilities to offer post services for film output. A handful of VTRs will be used for input to the hard drives and RAM, output ("print to tape"), data back-up and dubbing.

If this seems like a "brave new world", think back. How long ago did you first hear about nonlinear disk-based editing from EMC or Avid, and how long ago was audio post done only on multitrack? Did you think Hollywood would always only use Moviolas? If Hollywood can accept direct computer manipulation and output for editing and "opticals", then I'll bet it's not too long before you're doing workstation-based onlines.

26. Production Ideas

When embarking on each production, I'm sure you often wonder how to make this next show better than the last - how to give it a cutting edge - how to improve the production values without the expense. With the tendency to adopt the "MTV" look in all kinds of productions, a viewer is much more likely today to accept innovative approaches that might have been thought of as bizarre a decade ago. Here are a few ideas to try.

Shooting Hi8 and S-VHS has been pushed as a more affordable alternative to 3/4" or Betacam, but the video quality is usually not as good. The lightness of these camcorders does make their use attractive for hand-held, tight shooting. One way to use the footage to your advantage is to shoot in a definite "shaky-cam" style and treat the video through a DVE like the GVG Kaleidoscope. This lets you change the video with color washes, black-and-white, posterization, contrast changes, etc. By treating the video in this manner, pristine video quality in the original format is no longer critical.

Maybe you'd like to shoot film but can't afford a typical 16mm or 35mm production. Try shooting 35mm still film on a 35mm SLR camera equipped with a motordrive. Some camera mechanisms achieve 6 frames/second, which is a usable rate that gives you a "step-framed", choppy look often used in music videos. This frame rate has been used in commercials for dialogue scenes as well. As long as you can transfer this to tape in a registered manner, you can reassemble the stills into continuous sequences. Camera movement and dolly moves are all possible with still cameras, too. Remember, though, that only short takes are possible because of the length of still photography film rolls (a standard 36-exposure roll lasts all of six seconds at 6 frames/second), so plan your shooting carefully.

With the advent of Kodak's Photo-CD, it is possible to get the best still photo images to screen. By way of computers and Photo-CD players, it is possible to get these images onto tape as well. I don't know how well the images are registered, but I can't imagine that they wouldn't be correct. This becomes a great aid for the approach above with the 35mm SLR with motordrive.

Want to place your talent into backgrounds with other footage or graphics? I always hate blue-screen (Ultimatte, chromakey) shots. They generally aren't convincing and usually look hokey, plus they are a real pain to deal with - both in production and post. Try shooting your talent in front of a videowall. Several area companies provide videowall services and you can supply video from various tape formats (such as 3/4") and live camera sources, not just laserdisks. You are no longer trying to fool the audience, but at the same time are giving a very interesting look to the piece. If you add interesting camera angles and some videowall programming pizzazz, you get the benefit of creating some very exciting video without any added post-production costs. You can also create graphics and animations on personal computers that can be fed to the wall for a very dynamic look.

The next time you are looking for an innovative way to add some spark to your productions, without a heavy post-production tab, give some of these ideas a try!

27. Video's Future

There is a lot of wringing of hands about what the future of video will be like. Will there be high definition TV? Can we afford it? And so on. The only thing that I think is certain is that we don't know how it will turn out and that it will probably be like a stroll through Disney's Tomorrowland - a 1950's view of the future.

It is almost certain that we will end up with a digital means of over-the-air television transmission, as mandated by the FCC. By digitizing the transmission/distribution schemes for TV, a merger of TV and computer technologies is ensured. This has evolved as a byproduct of the high def debate, but may, in fact, prove to be considerably more important than viewing resolution. The industry in general seems to have backed off from its earlier, more aggressive, high def stance. The economy has been down and this has focused real attention on the manufacturers' inability to build cost-effective high def production and display equipment. In addition, studies seem to indicate that the high def equivalent of your 19" home TV, when viewed at similar distances, does not look appreciably better to the average viewer. I doubt that these same viewers (you and I) will pay thousands of dollars for high def television sets. High definition shooting is still more cumbersome than film shooting and does not look as good projected, compared with 35mm film, so the promise of electronic cinematography in high def is also many years away.

In all of the studies and research, what does seem to be gaining interest is a wider aspect ratio and realistic enhancements to the NTSC picture. A 16:9 aspect ratio seems to be just around the corner. All VTRs have been and currently are capable of 16:9 (anamorphic) recording and playback. Cameras with the correct optic blocks and lenses can shoot in this ratio. Upon playback, the VTR or the monitor must stretch out the horizontal aspect to turn an anamorphic 4:3 image (on tape) into a wide-screen 16:9 image. Thus, this year you will see professional and consumer 16:9 monitors and TV sets.

The "rub" with 16:9 is that to correct the anamorphic image, you are losing 33% of the horizontal image resolution compared to 4:3. You have done virtually the same as stretching horizontal aspect in a DVE. To improve this, many in the industry advocate production and post- production using the 601 standard (as in D1 component digital recordings) with a higher bandwidth (18mHz). Essentially this allows a 16:9 image to be displayed with the same horizontal image resolution as it now has in 4:3. If you add line-doubling technology or some of the "magic" used by Snell & Willcox in its converters, and through digital transmission, can deliver that quality to the home television set, the resultant improvement will become very obvious to the home viewer without actually going the entire high definition route. This allows the industry to continue without making all previous equipment obsolete. If wide-screen imagery is actually adopted by the consumer (a big if, in my opinion), then I think you will be seeing such technology changes within the next 10 years.

The rest will develop in time. The proverbial 500-channel cable systems are on the way. First, testing - then, if successful, implementation over the next 10 years. But, remember, interactive television did not succeed with Warners' Qube and I doubt that we want 500 channels of the Home Shopping Network. So whether this proves to be progress is for the future to decide.

In any case, I think the short term won't witness upheaval but, for certain, gradual revolutionary change (an oxymoron?) is coming.

28. Hidden Costs To Nonlinear Editing

Nonlinear video editing is a major advance in the industry which has helped attract even traditional filmmakers to the possibilities of electronic post. It allows a freeform approach to editing a project and opens the editor to a greater willingness to experiment, without the drawbacks imposed by linear editing. Nonlinear editing has moved from early systems using multiple VTRs and/or laserdisk players for random access into newer systems that record digitized and compressed video data onto computer hard disks. Nonlinear editing has become synonymous with these newer disk-based systems. Since tapeless, nonlinear editing is fast and revisions easy and painless, many producers and clients have the illusion that it also reduces the total post budget. This isn't necessarily true. I have onlined many projects from nonlinear-generated EDLs where the producer benefited from the efficiency promised by the hype. I have also edited projects that took just as long in online. The same time was spent, only on different things. Beware of the many real costs that crop up when embarking on your own offline editing - nonlinear or otherwise.

First of all, nonlinear editing requires that you log, load and digitize source material onto the system's storage media. This takes longer than real time because of the need to log each separate clip (scene) in order for the system to access a database of source clips. This footage is usually loaded from 3/4" dubs of the camera footage. Therefore, you have the cost of work dubs (the same as in other offline methods), your time and system time to load/digitize all source footage. Loading is not an automatic process, because storage limitations often force you to be selective as to which material to load at first. Some systems allow the creation of a database of clips by logging on a simple PC-based workstation with a VCR in an "off-offline" setup; the material is then automatically digitized on the system when that list is loaded into the edit workstation. This is the first step in the editing process.

Storage on the system is more expensive than tape. The cost per minute is insane when viewed in the context of tapestock costs. If you have to pay for storage while waiting for approval and making revisions, it won't be cheap. If you "blow away" media and then later have to restore it for revisions, loading the source footage must be done in real time again (though only the footage needed for the cut, this time).

If you are editing on a system with lower image quality or have selected a lower resolution, you may end up with a cut that is good to make decisions by, but unacceptable to screen for your client. In this case you may have to redigitize at higher resolutions as is possible on the Avid, or auto-assemble a screening cut in a linear bay using your EDL. Again, additional cost.

Once in the online edit bay, you should be able to scream through an auto-assembly of the list and be done - right? No. If you second-guess all your offline decisions in the online bay, time slows to a crawl and you've negated the work done in offline. Most effects, graphics and audio mixes done on offline systems do not translate as data into online EDLs. These can be done for reference on an Avid, EMC or ImMix, but must be redone in the online bay. A "cuts-and-dissolves" project with a few supers can be auto-assembled quickly from a nonlinear-generated EDL, but an effects-laden piece will be recut in online.

Don't get me wrong - I like nonlinear offline. Don't use it because you think it will save you money. Use it because you like that method of working and because it will make your product better. If you budget it, don't short-change your online edit and audio post time - they are still essential!

29. How To Handle Audio In The Edit

One of the more frustrating portions of an online edit session of any complexity is the dilemma of dealing with the audio. It is easy to create an edit decision list on an offline system that becomes an automatic "road map" for the video but is still quite meaningless for the audio. This situation has hardly improved with nonlinear systems and as a result an edit session that moves along quickly for the video can get bogged down when it comes to proper handling of the audio.

Always use timecode-controlled or stable timebase devices. This must start prior to offline editing. If exact sync and edit points are essential, speed cannot drift. Do not use audio cassettes, non-timecoded open-reel audio, CDs, LPs or consumer VCRs not locked to an external reference. If these sources are used, dub them to a timecode-based medium, such as 3/4", Betacam, 1" or D2, which will become the real source for the offline and online edit. DATs seem to be stable and are OK to use without timecode, if the tracks are easy to match up later.

Be careful to keep proper track (pun intended) of your tracks on all sources. Video formats like Betacam-SP have four audio tracks. The 2 analog tracks can employ Dolby noise reduction. Know what is on all the tracks and whether or not noise reduction was used. When editing, know what the mastering format and post house are capable of. For instance, D2 VTRs have 4 editable digital audio tracks; however, a lot of post houses are only wired to deal with 2 tracks at a time. D2s also have an analog cue track (a 5th channel). This can occasionally be useful, but most edit controllers cannot control inserts on it. If mastering onto Betacam-SP, tracks 3 and 4 are FM tracks and can only be recorded when video is simultaneously being recorded.

When building a mix in the online bay, it is best to work with at least 4 tracks. In a mono show, this might be: track 1) voice-over, track 2) sound-on-tape, track 3) natural sound/ambiences/SFX, and track 4) music. These would be edited onto an edit master and then mixed in mono to a final mixed master. If extensive sound effects, stereo tracks or overlapping tracks are required, the need quickly grows to 8, 12 and 16 tracks. Additional tracks can be built up onto other copies of the master, which are then slaved together for the mix. With D2's preread feature it is also possible to mix down the 4 tracks onto a single track of the same master tape, thus freeing up more working tracks. But if you do this and "blow it", you get to start over from the beginning!
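
As a memory aid, that mono layout can be written out like a patch sheet. A minimal sketch (the labels are mine, following the description above):

    # Basic 4-track layout for a mono show.
    track_layout = {
        1: "voice-over",
        2: "sound-on-tape (dialogue)",
        3: "natural sound / ambiences / SFX",
        4: "music",
    }
    # Edit these onto the edit master, then mix in mono to a final
    # mixed master; stereo or overlapping sounds push the count to
    # 8, 12 or 16 tracks built on slaved copies of the master.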

The best way to deal with audio in online is not to do it. Prepare the basic tracks and then proceed with an audio post session on a digital audio workstation, such as the NED Post Pro. Even here, a well-organized track layout on the video master can save time. For instance, recording tracks at approximately the desired levels and adding fades where required will speed up the mix later.

Finally, know what the delivery requirements are. Should your dubs have noise reduction, or not? Betacam-SP, Betacam and Umatic-SP use onboard Dolby noise reduction for channels 1 and 2, but its activation is partially dependent on proper tape stock. Type C 1" has optional Dolby (A and SR) and DBX noise reduction, but these are usually outboard devices that most stations and post houses don't have. And VHS dubs - good luck! Because of internal limiters and Dolby on some machines, it is hard to say how to correctly set up duplicator VCRs so that they play equally well, and to your liking, on all destination decks. Get a check tape for large orders and play it on a system/player you know and trust. If it sounds good, go back to that house for future dubs.

30. Music

No, you cannot legally use that pop tune you like as a music bed for the next corporate video unless you get permission. Many people use popular recordings for corporate videos, sales and marketing pieces, annual presentations and even commercials. Generally, they get away with it and no problem ever arises, but it still isn't legal. When using music there are two licensing concerns. One is for the publishing rights and the other for the recording rights. If you license only the publishing rights of a song, you can get musicians of your choice to record a version of that song. If you license both publishing and recording rights, you can use the version recorded by the original artists. Securing these rights can be very costly and are often negotiated on an individual basis. For top commercials, the rates get very high and many artists do not wish to have their music used to advertise products. The most cogent example was when the remaining Beatles sued Nike and its agencies for using the Beatles' recording of "Revolution", even though Nike believed it had taken adequate steps (and paid millions) to properly secure these rights.

Several firms and agencies specialize in negotiating rights for the use of various recordings and published songs. It is best to go through one of these if you wish to take this route. A point of confusion is that broadcast outlets (stations and networks) do have blanket licenses that allow them to use popular music as part of television program production. That is why you hear current recordings in teases and promos for various sports shows on all the networks. These blanket licenses do not extend to the commercial use of the music. It does not give a local TV station the right to use a song as the bed for a local commercial, for instance.

The best way to go is either with original music or library music. There are several fine area composers who can create quite fitting music, scored correctly to the pace and tempo of your video. The cost for doing this is usually lower than you might think and allows for your interaction with the composer to get the type of feel and sound you envision. Music from various library companies (Omni, Network, Promusic, APM, etc.) offers probably thousands of high-quality CDs with great musical diversity. A skilled audio engineer/mixer with a good library selection at his/her disposal can put together music cuts into a finished track that would easily fool you into believing it was a custom score. Library music is licensed by its use and you generally pay either a fee ("needledrop") for each separate cut, or with some libraries, a blanket fee which is based on the length of your video, regardless of the number of selections used. Remember that if you use the same cut three times, that will be three "needledrop" charges. Licensing fees usually cover international rights for that one project for its entire life. If it is reedited and reissued, that becomes a new set of licensing fees. Fees vary based on the nature of the production: broadcast, corporate, cable, rental to the public, theatrical, commercials, etc.

So much for music. Remember that it is just as much of a copyright violation to use video clips without permission. You cannot legally use pieces of that Oscar-winning feature film from a VHS you rented at Blockbuster to make those points in your sales and marketing video - but that's another story!

31. Graphics From Your PC

There are several newer software packages that are available for your personal computer which are great for producing broadcast graphics. They are generally available in IBM and Apple versions and some are also available for Amiga and Silicon Graphics platforms. With the correct display card for output in NTSC, it is possible to put together near-Paintbox quality images right on your home system.

The software variations seem to come in several groups: 2D paint, 2D fonts, 3D fonts, 3D animation and 2D photo-retouching/image-filtering. The font "manipulation" packages take advantage of software-based fonts that the computer normally would use for the printer, such as Adobe PostScript and TrueType fonts. This allows you to use hundreds of licensed typefaces in nearly any size for straight character generator work, as well as 3D versions that show extrusions. These software packages offer not only colors, sizes and shadows, but also textures like wood, metal, etc.

The 3D animation software packages have been the ones that seem to be getting the most actual use for video at the PC level. 3D animation is the most expensive and time-consuming at the professional level, so if you can do it at home (and have a lot of time), it becomes very cost-effective to use your computer instead of going out of house. The available software offerings, such as Electric Image (Mac), Lightwave (Amiga/Toaster) and 3D Studio (IBM), provide you with very professional quality and sophisticated features. But to quote columnist Craig Birkmaier's law of rendering, "The closer the image quality gets to photorealism, the closer the rendering time gets to infinity!" In spite of that, Lightwave running on a "gang" of Toaster-equipped Amigas has found its way into the professional world of TV shows like "Seaquest" and "Babylon 5".

The most interesting PC software packages are the various paint and image-manipulation options. The basic paint systems function a lot like popular broadcast systems, but some offer features far beyond systems like the venerable Quantel Paintbox. Fractal Design's Painter allows you to use brushes emulating various painterly styles - impressionistic, knife-strokes, watercolors, oils, etc. Paint Alchemy from Xaos Tools acts as an add-in to Adobe's Photoshop and allows you to define various image filters that can create quite interesting effects and textures on captured or painted images. Even Grass Valley Group is getting into the act with its Video Designer package, a bundled IBM-PC/software system that allows for direct composite digital (D2/D3) input and output.

So if you are in the "do it yourself" mode with enough work to justify a few thousand extra dollars, get together with the right knowledgeable folks and you'll be able to put together a complete video graphics and animation studio right in your own home or office!

32. The Best Offline Edit System For You

Deciding which offline editing system is the best for your application can be a difficult decision. Investing in this industry's hardware can be a scary proposition and no single system is right for all applications. At Century III, we and our clients have used all types of systems for offline editing: Ediflex, Montage, Avid, EMC, CMX-linear bays, film and "cuts-only" systems (3/4", VHS, Hi8). All were appropriate for the desired task. Oliver Stone's editors cut "Doors" on an Editdroid, "Wild Palms" on Lightworks and "JFK" on a Sony 3/4" VO system with RM-450 controller! For some folks the best offline system is a VHS deck with a shuttle knob, tapes with "burn-in" timecode and a legal pad.

The bottom line is the price/performance ratio, how you work as a producer and what your clients need to see for approval. Like it or not, the offline system with the best price/performance ratio, highest current image quality and maximum storage capacity is a 3/4"-SP editing system with outboard PC-based list management. Unfortunately it isn't sexy, won't impress clients and is older technology. If you can deal with linear editing, then this gives you the most "bang for the bucks". It is ideal for the organized producer who does not have a tendency to make frequent revisions. You can drive away one of these babies for $15K-$25K (decks, controller, monitoring, computer, software).

If you like to experiment, make frequent changes and can't stand to deal with linear tape-based editing, then one of the nonlinear, hard-disk-based editors is a better bet. Unless you are willing to spend big bucks, you will have to make compromises in image quality and storage capacity. Editing is faster, but don't forget to factor in the time to load/log/digitize all your source material. At the low end (similar price to the 3/4" system) you should look at many of the PC-based and Mac-based board-set and software packages, such as those offered by Avid, EMC, D/Vision and Montage. Remember to add the cost of a VTR and computer and be sure that true EDL output is available.

At the high end, some nonlinear offline systems rival the features and image quality of online edit bays. Pricing is in the $80K - $300K range (with VTR and peripherals) and systems include Avid, ImMix, Lightworks, Quantel Micro Henry and Ediflex Digital. Because some of these are being marketed as nonlinear online systems, they may lack some features needed in an offline system and may also create effects (such as built-in DVE moves) that cannot be translated into an EDL as data.

The appropriateness of a system also depends on the nature of your projects. Some systems like Avid are great for commercial/short-form material, while others like the Ediflex are better for dialogue-based/long-form projects like TV shows and features. Are you concerned with the need for negative cutting lists? Do you have to cut multi-camera material? All these needs must be weighed. If you don't routinely use a system, then renting the appropriate system from others is best. Be careful if you are considering purchasing a system, particularly if you treat it as an online system and plan to finish on it. By doing this you will limit your post options to what can be done with the system you own (and have to pay for) and you will cut yourself off from options available from other post suppliers.

33. Products To Help You Organize

The growing intermingling of video and computer technologies has brought us many products that help organize our pre-production and post-production efforts. These software and hardware items are often quite inexpensive and operate on all the popular platforms. They come from specialty manufacturers like CV Technologies and ETC, but increasingly also from mainstream software suppliers like Adobe.

To aid pre-production, there are many software choices for scripting and budgeting from the standard sources, such as Lotus, Microsoft and WordPerfect. Word processors allow you to format your writing in script format, so special script-style processors are not needed. You can easily develop spreadsheets to emulate AICP and other industry budgeting formats. Now, with the advent of QuickTime (Mac) and Indeo (Windows), it is possible, using software such as Adobe's Premiere, to develop complete electronic storyboards for presentations. Video clips can be captured for you to put together an electronic "rip-o-matic" presentation for your client - completely within your own computer.

For post-production organization, you can choose from a host of logging and editing programs. Some let you prepare data, including output in a CMX edit decision list format. Others let you track editing data with additional boards that hook to your VTR to read timecode or edit pulses. Yet others can function as complete edit systems, with the right additional hardware. Look for products from CV Technologies (Edit Master), PCP (Shotlister), RGB Video (Amilink), Sundance, ETC (Ensemble Pro), Strassner and many others. These devices run on Macs, PCs and Amigas, often functioning as cuts-only controllers, A/B-roll controllers and controllers with switcher control (including Toasters). Many feature more "bells-and-whistles" than the big-name online edit systems, yet only require a basic personal computer. In fact, some run better on older PC-XTs and PC-ATs than on newer, faster 386 and 486 systems!

For nonlinear, Avid, EMC, Montage and D/Vision have now scaled down their bigger products into software and board sets for only several thousand dollars. Remember, though, that the storage devices required for the digitized footage can still be quite costly.

With many fonts available on PCs, not to mention onboard paint systems, it is quite possible to configure a PC, Mac or Amiga as a broadcast graphics workstation right in the home or office environment. Even Grass Valley Group offers a PC-based paint package that is quite affordable for the serious designer. With Kodak's Photo-CD you can do away with old-fashioned manipulations of slides. Have your stills developed as Photo-CDs and you can then import them into a paint system for manipulation and retouching, then output them to video when completed (with the correct output cards). Even if you only have a Mac or PC and a good 24-pin or laser printer, it is possible to "spit out" camera-ready art cards which you can use right in the edit session. No more limited font selections - no more typos by the operator (you hope). Good luck!

34. Spruce Up Your Show In Post

There are many ways to add a little spice to your shows in post. Rather than add the typical ADO-style "flying boxes", try cutting your piece in a more "MTV" approach. A lot of simple things can be used - even mistakes - to get this look.

One way of doing this is to use a very frenetic pace in your editing. While adding more cuts, also add video like white frames instead of dissolves, or swish pans as transitional elements. Many editors cut in 10 frames of video "snow" or "hash" to create transitions. This is particularly effective if there is a sound effect to coincide. In addition to swish pans, some editors often use camera "runout" from a film shoot - the last few frames of a take where light contamination overexposes the picture. These all work well as transitional elements that avoid dissolves and effects moves, particularly if a funky cutting style is compatible with your piece.

Another music video technique is the use of black-and-white footage and letterboxing the frame. You can get black-and-white by several methods: turn down the chroma (completely) on a D2 VTR; lose the color through a DVE like the Kaleidoscope; and sometimes you can achieve this through keying on some switchers. It also helps if the lighting and art direction are designed with contrast in mind. You may have to stretch contrast a bit in the VTR set-up. The letterbox effect gives you a more cinematic feel, preferred by many viewers.

Digital effects devices create another interesting effect when used to "step-frame" the video. When "step-framing" video, each frame (or field) is frozen for a determined duration (in frames) before being updated by the next incoming (live or tape) video. At the smallest increment, it makes video look more film-like, while at a more obvious increment, it provides a definite effect. This same effect can be used to strobe the picture, giving it an "old time movie" appearance.
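
If you want to preview the cadence before booking the DVE, the logic is simple enough to model. A sketch (frame numbers stand in for pictures; the hold value is illustrative):

    # "Step-framing": freeze each grabbed frame for `hold` frames.
    def step_frame(frames, hold=3):
        return [frames[(i // hold) * hold] for i in range(len(frames))]

    print(step_frame(list(range(12)), hold=3))
    # [0, 0, 0, 3, 3, 3, 6, 6, 6, 9, 9, 9]
    # A small hold reads as film-like; a large hold reads as a strobe.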

Finally, the selective use of graphics will also spruce up your next video. Look at how network news magazines and sports shows handle name supers, statistics pages and the like, and take your lead from them. A simple name super may look better with a shaded, transparent accent bar of color behind it. An icon next to a title may help to create chapter headings. A Mac-like appearance to a menu page will also add visual interest. The next time you design a video piece, give some of these suggestions a try. Don't be too timid - your audience will appreciate it more than you might think!

35. Posting Multi-Camera Video

Most of us usually only deal with posting film-style shows and commercials and don't get involved in multi-camera productions unless they are live broadcasts or unedited live-to-tape productions. Multi-camera shows that do go into post have their own organizational requirements. These types of shows may include in-studio, dramatic productions, sporting events, concerts, infomercials, speeches and the like.

Many producers believe that when they plan on a multi-camera production, it will go straight to tape with little or no editing required. This is usually erroneous for several reasons. If a definite duration is required (such as in a broadcast show or an infomercial), subject matter must get correctly edited to maintain continuity; but, most important to remember, if you have the opportunity to correct minor imperfections, you can - and will - in post. So plan your taping with the most efficient post in mind.

Record not only the production switcher output (line or program master) but also "isos". If you have enough VTRs, you should record an "iso" of each individual camera. You can also record "switched isos" if there aren't enough VTRs, but that will require an additional crew member to watch and correctly select the best alternate camera angles during the taping. In order to review the footage prior to editing, many producers record a "quad split display" to VHS or 3/4" during the taping. This tape usually shows the switched program and iso feeds with a timecode window, but may only show iso cameras if more than three isos were recorded. Panasonic makes an inexpensive "black box" that creates a compressed quad display, but it is also possible to create this yourself by simply stacking up four monitors and reshooting this stack with an extra (cheap) camera. VHS tapes of the quad display and the switched program allow the producer to review the show at his/her leisure and make decisions about content and continuity edits, as well as to find alternate camera angles to fix problems, add audience reactions or otherwise improve the cut and pacing.

When taping, most producers use dropframe timecode that is synchronized to the time of day. This allows the PA taking script notes to keep track of the timecode for all events by simply watching a correctly set digital wristwatch. All recorders should get the same timecode from a master generator, and there must be plenty of preroll at each pickup point in the taping before coming to program material, because the code will jump ahead with every pick-up edit on the record VTRs (the code generator is free-running with clock time). A knowledgeable PA taking extensive script notes is worth a mint when it comes to post. Although I would always recommend offline editing if time permits, a talented script PA makes it possible to go straight into an online edit session without floundering.
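
The reason time-of-day dropframe works for script notes is that dropframe skips timecode numbers 00 and 01 at each minute change (except every tenth minute), so the count stays in step with NTSC's 29.97 frame/second clock. A sketch of the standard arithmetic:

    # Frames elapsed since 00:00:00;00 for a dropframe timecode.
    def df_to_frames(h, m, s, f):
        total_minutes = 60 * h + m
        dropped = 2 * (total_minutes - total_minutes // 10)
        return 108000 * h + 1800 * m + 30 * s + f - dropped

    frames = df_to_frames(1, 0, 0, 0)     # one hour of dropframe code
    print(frames)                         # 107892 frames
    print(frames / (30000 / 1001))        # ~3600.0 real seconds - matches the clock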

Although many of the hard-disk-based, nonlinear offline systems, like Avid, offer multi-camera options, I would generally recommend either tape or laserdisk nonlinear offline systems (Ediflex, E-Pix) or linear bays for multi-camera projects. CMX-style linear offline bays with five or six VTRs (VHS, 3/4") or laserdisk players are the most common way in which multi-camera shows like sitcoms, soaps, game shows and concerts are edited.

36. Wide Screen Video Displays

A year ago I had the opportunity to post the orientation video for visitors to Florida Splendid China. The presentation is projected video on a screen area of about 9' x 27'. In order to give an image of this size the highest resolution possible, and to take advantage of the cinematic screen ratio, a process using three overlapping video projectors was used. The method was developed by Panoram Technologies of California and is similar to the way multi-projector slide shows are displayed. In addition to the Splendid China video, this technique can also be seen at the Earthquake ride at Universal Florida.

Let me start by saying that this video was the brainchild of Jon Binkowski and Ken McCabe of Renaissance Productions, who not only pushed the post envelope with this one, but also took the effort to create an 80-page storyboard which became my "roadmap" to follow. This video employed a mixture of formats, including slides, original cinematography (35mm anamorphic), original videography (NTSC Betacam-SP), stock footage (3/4"), 3D animation and 2D electronic graphics. Brad Fuller was the film DP and Rusty Rustaad the videographer for all original production done in China. Once all these elements were together, it was up to Jon and Ken to work with me and the rest of the folks at Century III to post this presentation.

A few words about the process. The Panoram technique allows you to overlap multiple projection sources to create widescreen and even "circlevision" style displays. The 3-projector method we used results in a 10 x 3 aspect ratio (or about 3.33:1), which is wider than most current films! The three projectors constitute a left, center and right image, with overlaps between left and center and between center and right of about 25%. On the display side, Panoram sells hardware which blends the overlaps together by controlling the projector shading and registration adjustments to create a seamless image across the three projections.

To post this kind of presentation, the editor must create three videotape masters that become the sources for the left, center and right projections. Although tape sources can be used, generally these masters would be converted to videodiscs so that playback can be easily synchronized and rapidly recycled. It is in the editing that things become tricky. The best source for any true panoramic images is 35mm anamorphic film. This provides the necessary resolution for post. Shooting must be based on Panoram's chart system. During film transfer, the anamorphic image is "spread out" and divided into thirds. In short, three pin-registered transfers (with matching codes, positioning and color-correction) must be made for each scene used from film. Anamorphic video sources are possible but won't be as crisp. Once transferred, the "thirds" of each scene must be edited onto their appropriate masters in proper sync.

Regular 4 x 3 video can be used as "windowed" images on the screen or as part of "moving boxes" that travel across the screen. In creating any DVE moves in which an image crosses the overlapping screen areas, the editor must be careful to work out the math so that a move that starts on the left, for example, is recreated for the center master (with an offset) and then again for the right master (with another offset). Viewed individually, the three "pieces" of the move will look as if they don't match, but once projected with the overlaps, the move will travel seamlessly from left to right across the screen area.
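
The offsets fall out of simple geometry. A sketch using the numbers above (screen height = 3 units, so each 4:3 projector covers 4 units of width; a 25% overlap is 1 unit, putting the three left edges at 0, 3 and 6 on the 10-unit-wide screen):

    # Map a global screen position to each projector's local coordinate.
    LEFT_EDGES = {"left": 0.0, "center": 3.0, "right": 6.0}

    def local_x(global_x):
        return {name: global_x - edge for name, edge in LEFT_EDGES.items()}

    print(local_x(3.5))
    # {'left': 3.5, 'center': 0.5, 'right': -2.5}
    # An object at global x = 3.5 sits in the left/center overlap, so the
    # move is programmed on both of those masters, each with its own offset.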

Though this type of project is a lot of work and requires a lot of preproduction planning and extensive storyboards, it can result in one of the most impressive large screen presentations possible with standard video. Line doublers will add vertical image resolution, but even without them, the presentation will compare favorably with 35mm prints. So the next time audience pizzazz is important, consider a wide screen presentation on video.

37. New Items To Improve Your Facility

All of us in this business enjoy the large and expensive "toys" that make our jobs fun, like new edit systems, DVEs, cameras, etc. But often it's the little things, the "glue", that make our work more efficient and more productive. This month I'd like to highlight a few items that have come on the market in the last year or so that are worthy of mention.

Various desktop systems have all caught our attention for use in editing and graphics, but it's hard to perform any of these tasks without VTRs. Betacam-SP is pretty much the de facto standard in field production, so it is a format you will most likely have to deal with regardless of the other hardware you use. Therefore, it is a great boon to the format that Sony came out with the UVW series VTRs. The UVW decks are ideal VTRs for use with desktop systems like Avid, ImMix, Matrox, etc. The UVW decks offer slightly less resolution than the broadcast series BVW decks, but if you are only going a couple of generations, that isn't very critical. Another trade-off for the cost savings is the lack of front panel machine controls, as these decks are intended to be tied to external computer control.

If you are an owner of an Avid system, or a heavy user of Avids and would like to keep stored media on hand, then the new family of 9 gigabyte hard drives is a great development for you. 9GB drives are now in the neighborhood of $4500 and in Avid terms this means about 4-5 hours of storage at medium resolution or about 90 minutes at Avid's "online" resolution. Such a drastic reduction in storage costs over just a few years ago goes a long way towards addressing this costly issue.
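
Those figures also give you a feel for the real cost of nonlinear storage. A back-of-envelope sketch using the numbers above:

    # Storage cost per stored minute, from the figures quoted above.
    drive_cost = 4500.0              # dollars per 9 GB drive
    print(drive_cost / (4.5 * 60))   # ~$17/min at medium resolution (4-5 hours)
    print(drive_cost / 90)           # ~$50/min at Avid "online" resolution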

At the low end of the cost spectrum, Mac owners have a lot of video-oriented resources available to them. For output to NTSC, the Video Explorer card seems to be the hardware of choice. With Aldus' purchase of CoSa (After Effects) and DFX (Hitchcock nonlinear editing), low-cost options are available for editing, digital effects and layering. Also, Media Translation's Media 100 seems to have many loyal followers. It offers possibly the best image quality of the various Mac-based systems and is hard to beat for the price. Of course, don't forget Adobe's host of products: Photoshop, Illustrator and Premiere.

Finally, don't forget audio. It's unbelievable what you can get for a few thousand dollars these days. Versatile consoles from Mackie or Samson are a fraction of the cost of their bigger brothers. Sony's new MiniDisc format is a great improvement over the old broadcast cart machines. Digital audio workstations are everywhere. The next time you size up your equipment budget, look at the small items, not just the high end. You'll be amazed just how far your facility dollars can go today.

38. Matchback Film To Video

More and more, entertainment shows posted today in video also require that a finished negative (and prints) be delivered as well. This is a requirement of many studios because of international distribution or hedging against the future. This need has created many methods to make such posting possible and these methods can work to the advantage of many other producers as well. It allows you to use familiar electronic post-production techniques for film transfer, offline editing and audio, yet still end up with a finished product on film.

First of all, why should there be any problem trying to match up film and video editing at all? As we all know, film shot at 24 frames per second is transferred to NTSC video at 30 frames per second (actually 60 fields/sec). This conversion is possible via the "3-2 pulldown" method of film transfer, in which every four film frames cover five video frames. In "3-2 pulldown" each film frame is scanned alternately for three video fields, then for two. As a result, some video frames contain one field each of two adjacent film frames. Edits made in video can thus occur on video frames which do not have corresponding whole film frames. This means you would have a plus or minus one frame ambiguity at a cut.
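
The cadence is easy to lay out. A sketch that maps four film frames onto ten video fields and shows where the ambiguity comes from:

    # "3-2 pulldown": film frames A,B,C,D -> 10 fields -> 5 video frames.
    def pulldown_fields(film_frames):
        fields = []
        for i, frame in enumerate(film_frames):
            fields.extend([frame] * (3 if i % 2 == 0 else 2))
        return fields

    fields = pulldown_fields("ABCD")
    video_frames = [fields[i:i + 2] for i in range(0, len(fields), 2)]
    print(video_frames)
    # [['A','A'], ['A','B'], ['B','C'], ['C','C'], ['D','D']]
    # The 2nd and 3rd video frames each mix two film frames; a video
    # cut on one of them has no single matching film frame.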

If you use electronic post for film finishing, here are some guidelines to help out. All 35mm motion picture negative contains an embedded barcode signal called keycode. Keycode can be read by optical readers during transfer if the telecine is equipped with such readers. Keycode numbers correspond to each specific film frame, so it is very important that the correct keycode information be logged during film-to-tape transfer. Keycode logging can be done manually or electronically using telecine controllers, such as the TLC, or separate computers. In addition, the keycode signal can be recorded onto the transfer master tape, either as a "burn-in" or as an embedded signal. In any case, it is critical to establish a traceable relationship between the keycode numbers and the timecode numbers of the film transfer. During offline editing, some systems, such as Avid, Lightworks or Ediflex, keep track of the negative (keycode) numbers along with the timecode EDL and allow you to generate a video edit EDL as well as a negative cut list. In order to do this correctly, each system has its own specific requirements, so be careful to fully understand the needs of the systems you intend to use, because they will affect the film transfer requirements. Some systems allow you to edit in a 24 frame mode. This means that the system (which is still operating with a 30 FPS video display) prevents the editor from making cuts on frames which are not valid film frames. This method ensures frame-accuracy in the negative cut list.

Even with systems that don't have a 24 frame option it is possible to get a cut list. Several software vendors offer packages that allow you to take a file created in film-to-tape transfer with the matching keycode/timecode relationships and merge that with a video EDL to create a negative cut list. This is probably only plus or minus one frame accurate; however, in the hands of a skilled negative cutter, these edits will each be checked and corrected as necessary. A device such as a LokBox can be used to synchronize a film transport to a video player, so that each edit can be compared to the video edit on a reference tape. In this way, no film workprint is required to verify the video edit prior to cutting negative.

Though not called for on every project, finishing on film can add life to your work. Since most of us work in the video realm, it is nice to know that we can use familiar tools and still get to the desired goal.

39. Monitor Inserts

Many of the videos we produce require the inclusion of monitor or TV screens within a shot. This is especially true when communication media is the topic of the piece being produced and, in that case, it is important to show the screen as clearly as possible. Often this type of effect becomes the "tail" that wags the "dog". So here are some ideas about making this effect work the next time you use it.

First of all, in order to look real, the image on the screen should not look too clear. If actual playback video is shot, that's not a problem, but if the image is inserted in post, effort must be taken to distort or degrade the look so that it appears to actually have been shot that way. If you are shooting the screen as playback you must synchronize the display speed and the shooting speed. Video playback to a monitor or TV at 30 FPS must be shot at 30 FPS to avoid a scan bar. This is OK for video, but in a film shoot, you must also shoot at 30 FPS. If 24 FPS is essential, such as for theatrical purposes (when 30FPS is not desired), then customized 24 frame video equipment must be used. It may be possible to use 30 FPS playback, if an LCD video projector is used instead of a monitor or TV screen. The lag of the video projector tends to eliminate the scan bar. All bets are off for computer screens, because they use various display methods that are not easily compatible for video purposes. It's best to mock these up with a video monitor or live with a scan bar.

If the image is to be inserted into the screen in post, follow these guidelines. The camera must be locked off. No camera movement! Tracking a moving shot with an inserted image is very expensive and sometimes impossible. If you shoot film, the transfer of the monitor scene and the video to be inserted must be pin-registered for maximum telecine steadiness. Do not allow talent to intersect the image of the monitor screen. If talent is in front of the area where an insert is to be added, a "travelling matte" must be rotoscoped (frame-by-frame painting) to put the part of the talent that overlaps the screen back on top of the insert.

Degrade the image somewhat by adding simulated grain (in film shoots), lighting glares, reflections, etc. It is always a good idea to have the production crew shoot the talent (whom we see in an over-the-shoulder situation) straight on as well. This additional shot can be used in sync on the screen as a reflection of their action. Be careful to shoot the eyelines of talent who are to appear inside the monitor correctly. Though straight into the camera may be the most accurate, a look away from the camera may create the better "feeling" composite shot in a conversation.

Though many of these ideas seem simple, you'd be surprised how often they are forgotten or violated. When done correctly, these carefully staged shots will not look like effects and will come across as quite transparent to the viewer.

40. Desktop Editing Price Tags

With desktop editing systems becoming more and more affordable, many companies and individuals are interested in purchasing their own in the belief that they will be able to handle all their post-production needs inhouse. This idea works for many but not for all. Although these systems are less expensive, they are by no means cheap, so it is important not to forget all the realistic purchase requirements.

It is impossible to evaluate all the options in this article, but a few are worth noting. Avid Technologies has the strongest pedigree in the short-lived history of desktop manufacturers. Though they offer more and less expensive units, I feel that the best Avid buys are their mid-level systems, which offer nearly all the high-end features plus an upgrade path.

For approximately $50,000 you can purchase an Avid 1000, 800 or Film Composer. The Avid 1000 is intended as an online desktop system (complete with mixing and digital effects) with the highest Avid 60-field resolution level (AVR-26), but it now also offers one low-resolution level for offline editing. The 800 is like their full-blown 8000 system, but with only the various low and medium resolutions, not the "online" 60-field resolution. The Film Composer is customized for 24 FPS editing (for frame-accurate negative cut lists). For the $50K price tag, you get minimum storage and small monitors.

For best results, the purchaser should upgrade to the larger 19" monitors ($1600 additional each) and add more storage, such as two of the 9GB hard drives ($4500 each). 18GB of storage yields about 2.5 to 3 hours at AVR-26 (60-field "broadcast" resolution) and about 9 to 18 hours at "offline" resolutions. Of course, you will also need at least one VTR to load footage and record output, as well as a color video monitor and some odds and ends, such as a CD player, tape deck and maybe even a DAT machine. The VTR should be a Betacam-SP deck - either UVW or PVW series - so add $10K to $30K more for these items. Obviously, this totals to a system price of around $80K before you really have anything useful.

Another system is the ImMix Videocube, which has been giving Avid a strong run, especially for desktop online applications. Though this is a weaker offline system than Avid, it provides faster, easier response as a realtime, online desktop system. Now available with a Power Mac, the Cubes price out in the $40K - $60K range depending on options. The basic storage module (included with the base unit) holds one hour of video and two hours of audio - all at "broadcast" resolution (no low-resolution "offline" options are available). To this add the same costs as with Avid for VTRs, monitors and other peripherals.

A comparable purchase can also be made in a linear system. A basic three-VTR Betacam-SP (PVW-series) edit package with edit controller, switcher/effects unit, monitors and peripherals is also in the $80K range. Though linear, it offers the advantage of "unlimited" storage, true "online" image quality and, in a sense, an open architecture system. You can easily add items from other manufacturers to a linear system, while you can't run other applications within ImMix's Videocube or Avid's Media Composers.

Other lower cost, but capable, options include Media Translations, EMC and NewTek's "Flyer" for nonlinear, as well as Strassner and Matrox for linear. It is important to purchase any hardware system with open eyes and to really determine if you are making the most efficient use of your financial resources. Good luck, because today's sleek desktop "sports car" is tomorrow's "Edsel".

41. Developing Proper EDLs From Your Offline

Many producers are discovering the benefits of using systems such as the Avid Media Composer, EMC and Videocube to visualize their final video product before an online edit session. The process of offline editing can cut down post time and save money when done correctly. It can also create a mess in the online edit session if done without the proper end result in mind. This month I'd like to look at how to create a good edit decision list (EDL) for later use in an online edit session.

The first step in the various nonlinear, desktop systems is the correct input of data while digitizing. The key at this stage is the proper entry of timecode and reel numbers. Generally I advise recording all field tapes and all post tapes with non-dropframe timecode. Most edit systems can work with either drop or non-drop and can mix between the two types; however, it is best to keep everything based in one framecode format or the other. Use non-dropframe timecode unless you are producing a broadcast-length television program; in that application dropframe is useful for determining the correct duration of the program. The field tapes should be recorded with ascending code, starting with one hour on the first reel, two hours on the second reel, and so on. Record hours one to 23 and then start again at hour one.

Reels should always be identified with a three-digit number. Avoid cute names or names that try to incorporate the production title. Although some nonlinear systems' data management software allows the reel number to be altered after digitizing, some do not, so it is important to do this correctly right from the beginning. Three-digit reel numbers are a universal method that will be acceptable to almost any of the standard linear online systems. In this method, reel one is labeled 001, reel two is 002, etc. This numbering continues until 023; then at the 24th reel (one hour timecode again), the cycle changes to 101, 102, etc. At each change back to one hour timecode, the hundreds digit increases: 001, 101, 201, 301, etc. You may use other numbers, such as 090 to 099, to identify other element reels, such as graphics, voice-overs, music and so on.
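
The reel scheme is mechanical enough to compute. A sketch (reel numbers are 1-based):

    # Three-digit reel label and starting timecode hour for reel n.
    def reel_label_and_hour(n):
        cycle, idx = divmod(n - 1, 23)
        idx += 1                  # hour digits run 1..23, then wrap
        return f"{cycle * 100 + idx:03d}", idx

    for n in (1, 23, 24, 25, 46, 47):
        print(n, reel_label_and_hour(n))
    # 1 ('001', 1)   23 ('023', 23)   24 ('101', 1)
    # 25 ('102', 2)  46 ('123', 23)   47 ('201', 1)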

Once you have achieved a final approved cut on the offline system, you will need to generate an EDL that can be used in an online edit system to complete the post of the project. Most nonlinear systems will generate EDLs in various formats, such as CMX, Grass Valley, Sony and others. CMX 3600 tends to be the most common format. It is important to realize that a correct EDL must also be in the proper disk format, not just the proper text format. True CMX compatibility means that the file has been written to a 3.5-inch, standard-density (720 KB) floppy diskette which has been formatted in the RT-11 disk format. RT-11 is a Digital Equipment Corp. format similar to MS-DOS. Only a few systems actually write an RT-11 file to the disk. These include Avid, EMC and Montage, but not Lightworks or the Videocube. It is more common for systems to write a CMX-text-format EDL to an IBM-formatted diskette as an ASCII text file. These must then be converted to a true CMX file by using an external conversion utility. Sony files may be written to a standard IBM diskette, which can be directly read into a Sony BVE-9000 or 9100 edit system. A separate Sony utility (EDLXpress) may be used when converting Sony to and from CMX and/or GVG edit decision

lists. The Videocube must also go through the additional step of converting from Apple to IBM format before its "CMX-compatible" file can actually be used in online editing.

A limitation of the CMX EDL format is that it only allows three digits in the event number column. This means the largest number of edits in an EDL can only be 999. Projects which exceed 999 events must be divided into two or more edit decision lists. GVG and Sony EDLs can go to 9,999 events because they allow four digits in the event column. Be careful to study the output options for EDLs on the offline system you may be using. The Avid Media Composers offer the largest number of EDL options. On an Avid, you should generally choose an A-mode sort (an EDL sorted in the order of ascending record times). Other choices are to optimize the list (make sure the "do not optimize" button is not highlighted in the menu), select a separate dupe reel for each reel and select "main list" and "show comments" from the menu options. This will generate an EDL with the least number of edits, with comments displayed for each edit and a "B" designation behind every reel in a dissolve that must be duped to a B-roll.
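
For readers who have never looked inside one, here is a short, hypothetical CMX 3600-style excerpt (title, reels and timecodes are invented for illustration). Each event line carries an event number, reel, channel (V, A, B), transition (C for cut, D for dissolve with a frame count), then source in/out and record in/out; asterisk lines are comments passed through from the offline:

    TITLE: SAMPLE SHOW
    FCM: NON-DROP FRAME
    001  001      V     C        01:02:10:00 01:02:14:15 01:00:00:00 01:00:04:15
    002  002      V     C        02:10:05:00 02:10:05:00 01:00:04:15 01:00:04:15
    002  002B     V     D    030 02:11:00:00 02:11:03:00 01:00:04:15 01:00:07:15
    * 30-FRAME DISSOLVE WITHIN REEL 002 - B-ROLL DUPE REQUIRED
    003  090      V     C        01:00:00:00 01:00:05:00 01:00:07:15 01:00:12:15
    * GRAPHICS REEL - MANUAL DVE MOVE REQUIRED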

When naming the file on the diskette, use a file name of no more than six alphanumeric characters. Do not use spaces or punctuation. The system you are using will provide some instructions about the proper file extension. Most systems will automatically add a .EDL extension, so you don't need to type it yourself when naming the file.
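
A minimal sketch in Python of a pre-flight check for those naming rules. The six-character, letters-and-digits-only limit is the one given above (RT-11, as it happens, restricts file names to six characters plus a three-character extension), and the function name is my own:

    import re

    def valid_edl_name(name):
        """True if the base name is 1-6 alphanumeric characters, no spaces or punctuation."""
        base = name[:-4] if name.upper().endswith(".EDL") else name
        return re.fullmatch(r"[A-Za-z0-9]{1,6}", base) is not None

    print(valid_edl_name("SHOW01.EDL"))    # True
    print(valid_edl_name("MY SHOW.EDL"))   # False - contains a space
    print(valid_edl_name("PROJECT1.EDL"))  # False - more than six characters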

Many of the nonlinear systems can create effects that cannot be duplicated in online through an EDL. A typical CMX EDL can deal with cuts and dissolves plus some keys and wipes. It can also handle up to four tracks of audio, but without information about track assignments, levels, EQ, etc. If your editing exceeds this, separate EDLs will have to be generated for each track of video or audio. Generally, on a system such as the Avid, most audio editing doesn't exceed four tracks and can be incorporated into the list. With video, though, it is a good idea to create a separate EDL for each layer of video. For instance, the first list will include video 1 and audio 1 through 4. The second list will be for video 2, the third for video 3, etc. These separate EDLs only indicate source and record timecode relationships, with no information about the effect itself. If you do a video effect, such as a DVE move or flipping a shot, it is a good idea to add a comment to that scene. Most systems will allow these comments to pass through into the EDL as a flag to the online editor that a manual operation is required.

Be careful of motion effects. Variable speed ranges of the digitized video can exceed what the tape machines are capable of. For example, on the Avid you can create a speed-up of a shot at 800%, while the actual VTR will only go as fast as 300% in a single pass. Different VTRs have different ballistics, so a Beta-SP tape playing on a Digital Betacam VTR can run at many more steps within the VTR's range than the same video played on an analog Betacam-SP VTR.
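
As a rough sketch of the arithmetic (Python, using the 300% single-pass ceiling cited above as an assumed figure): successive passes multiply, so an 800% effect needs at least two trips through the deck.

    import math

    def passes_needed(target_percent, max_percent=300):
        """Minimum VTR passes to reach a speed-up, if each pass tops out at max_percent."""
        ratio = target_percent / 100.0
        per_pass = max_percent / 100.0
        return max(1, math.ceil(math.log(ratio) / math.log(per_pass)))

    print(passes_needed(800))  # 2 - e.g. a 300% pass followed by a ~267% pass
    print(passes_needed(250))  # 1 - within a single pass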

When you are ready to go to online, you should bring some backup for your floppy diskette, in case you didn't get the material properly saved to disk or there has been a problem in conversion. Always print out the EDL and bring the printout with you to the session as a double-check. Record the output of the system to tape for reference as well. This is helpful for duplicating digital video effects created on the nonlinear system. It also wouldn't hurt to have the online editor check your

file a day or so before the edit session, just to make sure that things will really proceed flawlessly. If you follow these steps, you'll have a higher degree of success the next time you use your EDL output for an online session.

42. Pay No Attention To The Man Behind The Curtain

A major focus of debate in post-production facilities throughout the US is what will be the future of online edit suites. The hardest part of this debate is deciding which equipment to purchase and what investments to make. With recent trends toward Avids, Videocubes and all the other desktop variations, the decision has become much harder. So let's look into the crystal ball at what an online edit suite of the future might look like.

First, let me point out that I'm writing this article in the last days of 1994 and that my opinions may change after the 1995 NAB, though I rather doubt it. Although there is a lot of interest in the various PC and Mac-based approaches, the rooms that are actually making money for their owners are generally traditional, tape-based, linear edit suites. The reasons for this are many, but one can usually point to better image quality, more cost-effective storage and real time image processing as the advantages in these bays. It is understandable that these factors will continue to improve in such systems as the Cube and the Avid, but that time isn't tomorrow. Besides the issues of ease, cost and accessibility, the video complexity attainable on a system like the Videocube is comparable to a linear system of 20 years ago!

In the coming years, I believe that the edit suites we work in will not look a lot different from what we see today. The single-interface workstation is great in theory, but not all that easy to work with in actual client-driven sessions. The basic components of the room, such as the video switcher, are still more easily manipulated by most editors when a dedicated panel is used. In the future I think you will still see separate interfaces (control panels) for these devices: edit controller, video switcher, digital video effects manipulator, character generator and audio mixer. These devices may appear to be separate units in the edit suite, but may all be part of a central unit in the machine room. Like the Wizard of Oz, the functions performed by the editor in the suite will be handled by a bigger processing unit in an adjacent room - "the man behind the curtain."

While the appearance of the room may not change greatly, its functions will evolve differently. A significant change will be that all editing becomes nonlinear. Currently the domain of desktop systems like the Avid, nonlinear editing systems for the online suite are being introduced by Axial, Sony, Grass Valley, Chyron and others. Storage systems are the limitation in making this work, so the next biggest change I foresee is the move to disk drives rather than videotape recorders. Disk-based systems can achieve the nonlinear, random-access editing needed in these new rooms.

In order for disk systems to gain wide acceptance, units adhering to broadcast standards with longer storage times (minutes rather than seconds) must come to market. That has been happening in 1994 and will be a strong product area in 1995. This became possible once image compression for video gained wide acceptance. 2:1 compression, a so-called lossless method

(no video data lost), is common in high-end systems such as Sony's Digital Betacam and Ampex's DCT. Higher compression rates such as 6:1, which are lossy (some unimportant video data is lost), are becoming acceptable for on-air playback systems and ENG applications. With compression rates of 2:1 or 4:1, it is possible to work with drive systems that allow storage of picture and sound measured in hours. Tektronix, BTS and Sierra Data are companies to watch for these systems.
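
A back-of-the-envelope sketch in Python shows why compression is the enabler here. The 8-bit CCIR 601 sampling figures are standard (720 luminance plus two 360-sample chrominance streams per line); the 486 active lines and the 9 GB drive size are my own illustrative assumptions:

    BYTES_PER_FRAME = (720 + 360 + 360) * 486   # 8-bit 4:2:2 NTSC, ~0.7 MB per frame
    MB_PER_SEC = BYTES_PER_FRAME * 30 / 1e6     # ~21 MB/s uncompressed

    def minutes_of_storage(drive_gb, compression):
        """Approximate minutes of video a drive holds at a given compression ratio."""
        return drive_gb * 1000 / (MB_PER_SEC / compression) / 60

    for ratio in (1, 2, 4):
        print("%d:1 -> %4.0f min per 9 GB" % (ratio, minutes_of_storage(9, ratio)))
    # roughly 7 min uncompressed, 14 min at 2:1 and 29 min at 4:1 - multiply by
    # several drives in an array and hours of storage become practical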

In the future edit suite, I envision the "man behind the curtain" to be a central processing unit containing plug-in modules for the switcher, DVE, character generator, machine control/editor and audio mixing. A close current version of this is an all-Sony edit suite installation. Though still separate units, the total rack space consumed by the combination of a BVE-9100, DVS-6000C and DME-3000 is a fraction of that consumed by the older equivalents, the CMX-3600, GVG-300 and GVG Kaleidoscope! The future central unit will control one or more drives for recording, layering and archiving. It will also control multiple tape decks of the necessary formats of the day, which will serve as source or playback decks during the edit session. In this hybrid approach, material will be edited from tape to a disk drive, and then further re-editing or layering will be done between drives in a manner not unlike a tape-based system. The difference for the operator, besides speed, will be that the human interface will be graphic-based rather than text-based.

The lines between offline and online will become almost nonexistent, but I don't think they'll disappear. Like audio and graphics today, you can work on a high-end unit and operate in a realtime environment, or you can work on a simpler (cheaper) system and endure a little rendering time to get the job done. The independent producer who wishes to be self-contained will opt for the latter, while facilities will choose the former. In terms of final quality, there will be no offline or online, but individuals will still choose to use a cheaper system for visualization (offline) while still going to facilities to create the final product (online). The "front end" of the suite may look the same, but the methods of working will be revolutionary.

43. What Makes A Good Editor?

As a client you work with many different personalities in the production and post-production phases of a project. While the director and director of photography have a significant impact on the production, the editor will often be the one who molds the final look of a piece. The post-production phase is sometimes even longer than the shooting, so it is important that the editor and client not only see "eye-to-eye" but also find the experience enjoyable. Chemistry is very important, so what are the marks of a "good" editor? I think I've boiled it down to a few points.

1. Congenial. A good editor should be a good "people-person". He or she should be able to adapt to many situations and be able to smooth the waters when a session gets tense. Though there may be creative differences, politeness should never be lost.

2. Receptive. A good editor should be open to new ideas and various options suggested by the client. Most ideas are valid and should be given a fair chance. Knowledgeable editors can

certainly help the client decide what is the best alternative - but it is always the client's final choice, and a good editor won't have a problem with this.

3. Fast. A good editor is fast. This doesn't necessarily mean that he or she is the speediest person on a keyboard, but rather that the overall session progresses quickly and the editor doesn't get bogged down in one or two single aspects of the project.

4. Knows the gear. A good editor knows the equipment in the room inside and out. This means they know its capabilities and options and don't have to think about what to do when something is asked for. By knowing the keystrokes like a typist or pianist, a good editor can pay better attention to details, quality control and creative ideas offered during the session.

5. Creative. Though creativity isn't essential, good editors will generally have a few creative suggestions that can be "added to the pot". These often come from techniques tried on other sessions or from ideas that evolve during a session. A good editor will be happy to offer these as part of the session - or not - depending on the feeling of the client!

44. Nonlinear Edit Systems Revisited

Nonlinear edit systems have become the current buzzword for many clients. Initial systems in the marketplace, such as the early Montages, Edit Droids and Ediflexes, were soon replaced by the digital upstarts - Avid and EMC. Although an inviting concept, nonlinear editing was out of reach of the pocketbooks of most independent producers until recent years. With a lot of new options to choose from, it is a good time to review some of the products available.

The king of the hill is definitely Avid Technology, with products ranging across several price levels. Best known is the Media Composer series, which includes various models to address the needs of video post, film finishing and compositing. Avid is aggressively targeting broadcasters with the development of its Newscutter (to replace the ENG bay) and Airplay (video file server) and the purchase of various broadcast traffic and news software companies. The Media Composer line is definitely for better-financed operations, but two Avid products closer to most folks' budgets are Media Suite Pro and Videoshop. Media Suite Pro is a smaller version of the Media Composer intended for corporate use. Its output is intended for online rather than offline use. Videoshop is a software package intended for use as a Quicktime editor.

EMC and Montage, both early nonlinear developers, continue to provide strong alternative products to the Avid. Both are PC-based and are best used for offline applications. EMC also offers its Prime Time model, which, like the Avid Media Composer 1000 and 8000, can provide 60 field/second "online" output quality. These are joined by Lightworks and D/Vision as strong contenders for the offline editing market for entertainment shows. Lightworks (and its Heavyworks version), the Montage III and the D/Vision Pro are highly respected systems in cities like Los Angeles because their editing methods work well for the creative editing of television and feature film projects. Avid and EMC are also well-used in these applications, but offer the additional benefit that they are more well-rounded for all types of post-production needs, such as commercials and corporate videos.

In the corporate and commercial post market, the strongest competitor to the Avid is the ImMix Videocube. Based on a Mac for the user interface but using proprietary hardware to process the audio and video clips, the Videocube operates largely in realtime (no rendering for effects in an initial video layer) and, as such, combines the best of both worlds: the Mac's user interface together with the functionality of editing in an edit bay (yet still nonlinear). Though Edit Decision List output for further online editing is possible, the Videocube is really intended to provide a finished product.

In the middle of the spectrum is a whole series of options that include both full turnkey systems and bundled sets of computer plug-in cards with software. These cover systems for the PC, Mac and Amiga. They include Data Translation's Media 100, D/Vision's Cineworks, the NewTek Flyer, the Matrox Studio and Fast Electronics. Some, like the Flyer, Matrox and Fast, can be both linear (when used with tapes) and nonlinear. Whether they meet your needs depends on your patience, preference of computer platform and the ease of the interface.

The hybrid approach may offer the most promise in the future. It combines nonlinear for offline with linear (when VTR control is added) for online. This gets around the need for better image compression within the system to reach "broadcast quality". Videomedia (Oz), BTS (Rio) and Fast Electronics are some of the options. Prices range widely, and although one of these may be the best choice, they do not yet offer the market appeal to your clients of an Avid, Videocube or Lightworks (if that is important to you).

Finally, the best bang for the buck may be in the consumer software packages, such as and Razor. These, along with Avid's Videoshop and D/Vision's Cineworks, offer nonlinear offline at pricing below $1,000. Though full online support, such as EDL output, may not always be there, these programs offer a great way to visualize what a final piece will look like. In many ways they even surpass some online-type features, such as chromakeying abilities and layering, though usually not at an online quality level. If this is your cup of tea, shop wisely. Be willing to put up with some hassle. And enter a brave new world!

45. Why Digital Betacam?

As many of you know from reading the trades, Sony has moved its push for marketing videotape recorders to Digital Betacam. Pretty soon, it will be difficult to purchase any "high end" Sony format other than this. BVW-series analog Beta-SP decks, D2 and even D1 VTRs will soon disappear from the product line-up. Digital Betacam and a soon-to-be offered "lite" digital version for ENG purposes will make up the main part of Sony's offering to broadcasters and post facilities. I suspect that support for the PVW and UVW series decks will still continue for a few years.

So what does this format offer? Digital Betacam (often called "Digibeta") is a format that uses the same basic cassette design as analog Betacam. The tape formulation is different, so Digibeta tapes can only be used in digital decks. Depending on options, a Digibeta VTR can play analog Betacam and Betacam-SP tapes, but it will only record in the digital format onto digital tapes. Digital

Betacam uses a 2:1 data compression scheme in the recording process in order to make it possible to record a D1-type (CCIR 601 or 4:2:2) component digital signal on a 1/2"-width tape. Though a lot has been written about the "questionable" merits of compression, in the practical experience I have gained with Digibeta I find it to be a nearly ideal format. When editing in a digital suite, the format is transparent through many generations and layers. I have not seen any of the digital artifacts that are at least theoretically possible as a result of data compression. I find it considerably less problematic than D1 or D2, which are uncompressed formats. I doubt that the alternatives - Ampex's DCT and Panasonic's D5 - will gain nearly as large a market share, though each is an outstanding format in its own right.

Well, that is all well and good, you say, but since you shoot on analog Betacam-SP anyway, why should you be interested in seeking out Digital Betacam post? There are several reasons. Digital Betacam VTRs offer more of a universal interface than other past formats. Although it is a digital format, inputs and outputs include serial component digital (its native format), component analog (Betacam) and composite analog (depending on options). This means that a Digibeta deck may be used in any type of edit bay, not only digital bays. Editing from analog sources to Digibeta masters in a component digital bay allows you to preserve the maximum quality of the Betacam-SP originals. This is particularly true when keying graphics or creating chromakeyed composites. The Digibeta recorders support preread, so heavy layering is possible without generation concerns. The Digibeta players will play back analog tapes with better dropout compensation, better error concealment and smoother slomo than analog decks playing the same material. Digibeta recorders support a true four channels of digital audio, so it is possible to build more sophisticated tracks in a Digibeta-equipped suite than in a Beta-SP suite.

Although Digital Betacam is a young format, Florida features several facilities throughout the state with full component digital edit suites based on the Digital Betacam format. Pricing is usually not outrageous and compares favorably with D2 post rates. Even in film transfer, many producers around the country are trying the "budget" approach of transferring in component analog Betacam-SP and then posting to a Digital Betacam master. This gives you the best of both worlds.

46. Editing Software To Make Your Life Easier

Nonlinear edit systems have been getting the bulk of the press over the years, but there are many lower-cost solutions that can help producers streamline their post-production expenses. Whether you do a lot of linear editing or have moved on to the world of nonlinear, you can be helped by a wide range of software utilities for preparing Edit Decision Lists (EDLs) for online editing. Many of these work together with a basic "cuts only" offline system, and some even work with just a list of timecode notes on a legal pad!

Several packages can be purchased to help build EDLs. The best, though costliest, is one from Pep, Inc., called Shotlister. Shotlister is software plus a plug-in computer card which creates an interface between your "cuts only" edit system (such as a Sony RM-450 with two VTRs) and a PC. This allows you to create shot logs of the footage and edit along in a standard fashion. While editing, Shotlister will keep a continually "clean" and accurate database. This will track through even successive generations if you cut and recut the project onto different tapes to get to a final cut. This database will generate an accurate final EDL for online.

Comprehensive's Edit Lister is another option, offered through Comprehensive's CV Technologies division*. Part of a family of software packages that include full edit systems with hardware, Edit Lister can be used to manually type in edit information. If you cut a project with "burn-in" timecode windows in the video, this information can be entered into Edit Lister to generate an EDL.

Utility programs are useful for taking EDLs and preparing and correcting them before heading to online. Generally, edit systems leave "dirty" lists as a result of changes and re-edits. These often include over-recorded edit points and bogus edits. Separate software is used to correct, or clean, these errors. If you make several revisions in a linear bay and get to a final cut by editing down several generations, you will generally lose the original timecodes for the first-generation source material. To recover this information, a list or lists must be traced. The best software package for both of these functions is Turbo Trace Plus.

Even editors using nonlinear systems like Avid need other software programs. For instance, many nonlinear edit systems generate CMX lists, but not in a CMX disk format. This format, RT-11, is similar to DOS, but the lists are not simple DOS text files. Several shareware programs are available to convert CMX lists in DOS text form to RT-11-formatted files so that a professional edit controller can read them. Often EDLs must be converted from one format to another, such as CMX to Sony. EDLXpress, offered through Sony, is a good choice for this.

D2 and Digital Betacam VTRs support preread editing, but most nonlinear edit systems do not generate EDLs with preread information. In this case a package called Pre!Reader comes in handy. Pre!Reader works well with Avid but can also be used as a standalone package (Mac or PC) for taking various EDL formats and converting them to preread lists. Pre!Reader information can be obtained on CompuServe and America Online.

Use of these software packages can make your next online go faster and they will quickly pay for themselves in one or two jobs!

* In recent months after writing this, I had heard that CV Technologies was no longer in business. I don't know if this is true, nor if someone else has picked up the EditMaster products.

47. Electronic Film Post-Production

I have recently posted several projects that were intended for theatrical distribution. One of these I edited on the Avid Media Composer 8000 and went through a full round of film finishing services after the cut. This project has taught me many valuable pointers about making this process work painlessly.

First of all, you have to have a realistic budget. Film post-production of a 90-minute feature with electronic assistance costs about $300K to $400K. This includes film transfer to video and electronic offline, digital audio post, film opticals and negative cutting. Depending on budget negotiations, this will also cover a custom music score.

So now some recommendations. Film transfer must be done with the ability to read keycode frame-accurately from the film. Because of the 2:3 pulldown phenomenon when transferring 24-frame film to 60-field NTSC video, the accuracy of the keycode versus a corresponding timecode will be no better than ±1 frame unless a TLC telecine controller is used. ±1 frame accuracy is acceptable if you check things closely later on; however, you will have to operate your offline system at 30 FPS, not 24 FPS.
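
To see where that ±1 frame ambiguity comes from, here is a small Python sketch of the standard 2:3 pulldown cadence (film frames A, B, C, D become 2, 3, 2, 3 video fields, so four film frames fill five video frames). The function is my own illustration, not any vendor's algorithm:

    def film_frame_at(video_field):
        """0-based film frame visible in a given 0-based video field (2:3 cadence)."""
        cycle, field = divmod(video_field, 10)   # 10 fields per 4-film-frame cycle
        for film, boundary in enumerate((2, 5, 7, 10)):
            if field < boundary:
                return cycle * 4 + film

    for vframe in range(5):
        fields = {film_frame_at(2 * vframe), film_frame_at(2 * vframe + 1)}
        print("video frame %d holds film frame(s) %s" % (vframe, sorted(fields)))
    # video frames 2 and 3 each straddle two film frames - without a pulldown-aware
    # telecine controller, a timecode can only locate the keycode to within a frame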

The transferred film is then loaded/digitized into an offline editing system. You should use a system that offers film software, such as the Avid Media Composer 8000 or Film Composer, Lightworks, EMC Prime Time, Night Suite or D/Vision Film Cut. Most of these systems can operate at either 30 FPS (±1 frame accuracy) or 24 FPS (0 frame accuracy). The sequences will be edited in a standard fashion, but as part of the digitizing, the additional film information will be loaded by the assistant editor (cam rolls, keycode, sound roll, sound timecode). With the film software, this additional source information will allow the editor to generate video edit decision lists (EDLs), negative cut lists and even EDLs for the audio using the sound rolls as sources.

Once the cut is completed, it is generally a good idea to auto-assemble viewing copies (online editing) for reference throughout the rest of the process. If the finished film print is to be the only final product, with no simultaneous video master, then this assembly can be "down and dirty" (burn-in code in the picture, a cheaper format, etc.). All audio editing of dialogue, sound effects design and music composition will be done to these video reference copies.

A good idea for a cross check of the cut is to have the film workprinted and to have a film editor conform this workprint to your electronic edit, using the system's film cut lists. This may seem redundant, but it is a good way to avoid mistakes when the negative is cut. An experienced film editor is not essential for this, since a good assistant editor or a good film student can perform

this task. Once the finished print is conformed to the video list and any errors are corrected (such as those which might have been caused by ±1 frame ambiguities), you can double-check lipsync against the soundtrack in progress. The finished workprint and negative cut information can then be turned over to a number of companies that specialize in cutting negatives for theatrical releases. These companies will generally not perform this task from only a video reference tape.

Some important creative and cost points to remember. All effects become film opticals. This means all dissolves (unless the negative is printed A/B-roll), titles and other effects. Unlike video editing, any repeated footage must be duplicated by having the lab create a duplicate negative. Stock footage must also be duplicated. Motion effects (slomo), flipping shots for screen direction and freezes are all optical effects. The film cut must be organized in lab rolls of 1,000' or less. This means a reel change every 10 minutes or so (35mm film runs at 90 feet per minute, so a 1,000' roll holds about 11 minutes). Your scenes must be organized so that these changes occur on a cut between two scenes.

Once the final mix of the soundtrack is done (it has generally followed an electronic path) and it has been verified against the film picture for sync, it must be transferred to a format that the lab creating the print masters can accept. This might be DAT, an analog tape format or magstripe.

The last step is to "color time" the print. This means that you or a representative must sit with a lab timer and determine all color-correction settings for the film prints. For those in the electronic generation, this is similar to the work of a colorist in film-to-tape transfer, though not with as much latitude.

To us video folks all of this may seem like a big hassle, but it still remains the only way to get an image of maximum resolution onto a wide screen and in a format that is acceptable in all corners of the world. Now, through nonlinear systems that offer film software and the use of keycode, it is possible for those of us who grew up in the video ranks to participate with our talents in the film world, too.

48. The Tapeless Post Facility

Those of you who routinely check out my column know that I have often tended to downplay the importance of nonlinear (desktop) editing in favor of today's practicality of linear edit bays. In spite of that, I do feel that it is possible, with hardware and software available today, to build the "cutting edge" facility of tomorrow. So I've decided this month to describe just what one might put into such a facility if it were to be built in the next few months. Although some tape machines would be evident, the working atmosphere would be "tapeless".

First let me start by qualifying my choices. They are purely my opinion and aren't the only systems available, nor the only ones I would consider, but rather, those that I feel make good examples in today's field. The choices will tend to be slanted towards Macs and Avid, even though I personally prefer a lot of PC-based products. The Avid and Mac approach is best to date when networking the available systems together. So here is how the "dream team" goes.

Editing. My choice for editing is the Avid Media Composer 8000, their "top-of-the-line." This gives you online, offline, film, effects and multicamera as well as single-camera editing options without compromise. I would install two or more suites and network the rooms together through Avid's Mediashare, using common storage (drive towers or servers) and common back-up systems. Video input/output equipment would include S-video, composite video, Betacam (converted to RGB) and serial digital (available soon from Avid). The MC 8000 should be the PowerMac version and include the 3D DVE effects upgrade.

Audio Post. For audio suites I would install Sonic Solutions. This tends to be a controversial choice for many audio folks, because operators tend to either love the system or shy away from it. In my analysis of digital audio workstations, the Sonics fare best for networking, editing ease, internal mixing features, etc. The Sonic workstations can be equipped with a "moving fader" control panel or could work in conjunction with one of the new Yamaha digital mixers. A good second choice is the Avid Audiovision.

Graphics. Graphics creation, layering and compositing are best left to units other than the Avids. Here the choices abound, from Mac-based systems on the low end to SGI at the other. Taking the Mac approach, a facility should put in several PowerMacs running a mixture of software, including Elastic Reality, Electric Image, Photoshop, Illustrator, Renderman, After Effects, Painter and others. It wouldn't be a bad idea to also install a unit like Data Translation's Media 100 (or the Grass Valley OEM version) for compositing. Though its editing interface is not as good as Avid's, the image quality is superb and it supports Quicktime plus handshakes with other software, such as After Effects, Photoshop, etc.

If the SGI approach were to be taken, then the best choices are Alias, Matador and Advance, Discreet Logic Flint and Chyron Liberty and Jaleo. None of these choices is necessarily cheap, but they provide the missing ingredients for the higher-end finishing where an all-Avid approach falls short. Since fast character generation is also a weakness (particularly rolls and crawls), I would also recommend a dedicated character generator, such as the Chyron Max or Maxine. The graphics units and the character generator can be tied together via ethernet, digital routing and/or common file formats, such as Avid OMFI, QuickTime, etc.

VTRs, Drives and Other Devices. Naturally, the facility can't be totally tapeless, because your source material, masters and dubs will still be on tape. So several VTRs of various formats will be required. I suggest PVW decks for the Beta-SP format as well as Digital Betacam decks for higher-end digital mastering. A D2 deck or two might also be helpful for interchange with older material. On the audio side, the DA-88 small-format digital multitrack recorders are a good choice, plus a smattering of DATs, CDs, audio cassettes and maybe even 1/4" open-reel decks. Unlike in a linear facility, only the lowest quantity of each format is required, because the decks are only needed for input (digitizing into the workstations), output (laying off final material to tape) and tape-to-tape duplication.

In addition to VTRs, digital disk recorders such as those by Abekas, Tektronix or Hewlett Packard should be included, because many of the software choices listed offer fast output to these devices and make for a better medium of exchange between graphics and edit than tape or computer

files.

A lot of production is still done on film, and a "tapeless" facility of this type would need to tie in closely with another facility providing film transfer. A close symbiotic relationship is recommended, because Avid is now making several products that tie into the transfer process. These include the Media Recorder Telecine (MRT) and the Media Reader. These devices allow the creation of an Avid bin during transfer and even direct recording to Avid hard drives (bypassing tape entirely). A facility like the one described might purchase such units and "loan" them to the transfer facility in order to get material into the desired format for editing.

Now, I recognize that the facility I describe is by no means cheap. It would, however, cost considerably less than a comparably appointed linear facility of the last few years. Not only that, but it breaks the bounds of the format wars that have plagued facility owners for decades. With proper planning and networking, the same material is available to all parts of the facility and can be simultaneously mixed, edited, composited or whatever. At the risk of being among the pioneers with the arrows in their backs, I would wager that the early entrepreneurs who build these first "tapeless" facilities will define the way clients do business for years ahead.

49. Products To Watch In The Future

With another new year, it is time to take a look at the products that might be revolutionary for the industry. Obviously in this category, nonlinear still captures our attention. In the nonlinear realm, I feel that the Tektronix family of products is destined to be the new frontrunner. The new Profile system is being combined with Lightworks and elements from Grass Valley to become the true "online edit suite in a box" that many people are waiting for. Profile is a variable-compression digital disk recorder/server that can also handle several plug-in cards. Such cards will soon include a Lightworks controller, a GVG-4000 switcher mix-effects card and a channel of Krystal (the DVE replacement for GVG's K-scope). With quality up to the Digital Betacam level, this system will reach many customers.

Avid, of course, is not to be forgotten. The newest levels of resolution, AVR-70 through 75, will also approach Digital Beta's level. As the line moves totally to the PowerMac platform, new increases in performance and features are also anticipated. Avid is also continuing its move to the SGI platforms for increased performance, featuring the Media Spectrum, its Onyx-powered "online workstation", running Avid, Parallax and Elastic Reality software in a single cohesive unit for editing and compositing.

Other key players are setting up their systems to run under Windows NT on the Pentium and, soon, PowerPC platforms. In the nonlinear arena, that includes D/Vision and Play, Inc. D/Vision has been coming up more slowly from behind, but makes a very robust editing system with spruced-up interfaces for offline, online and . Play, Inc., an outgrowth of former NewTek (the Toaster) founders, has introduced several devices that encompass linear editing, nonlinear editing and digital switching with effects. Their systems look more like Nintendo controllers than standard video gear, but may offer the best "bang for the buck" in the business. Many other former Amiga-related products have also been ported over to Windows NT. The best example is Lightwave, a 3D animation software package that rivals high-end competitors such as Alias/Wavefront and Softimage.

Other products to watch for in nonlinear include ImMix's Turbocube, EMC's Prime Time, Fast Electronics and ETC!'s Ensemble Gold. In animation, keep an eye out for Caligari True Space, Autodesk's 3D Studio and Ray Dream Studio. On the 2D side, the contenders include the various Adobe products (Illustrator, Photoshop, etc.), Fractal Design's Painter, Chyron's Liberty and the offerings from Parallax (Avid) as well as Discreet Logic.

50. Component Vs. Composite

The industry has come from a tradition of composite video - one in which luminance and chrominance information travel combined as one signal on one wire. It is moving very quickly to component - where color and luminance information stay separated throughout production and post. The general feeling of most video folks is that component is better than composite, but that isn't always true.

When a signal is recorded in component and converted to composite in post, the signal is encoded, or combined. When it is composite and converted back to component, it is decoded, or separated. Encoding and decoding add artifacts to the picture that accumulate with each pass. These artifacts are often greater than those in the conversions between analog and digital signals. As a general rule, you should make as few conversions in the post process as you can in order to avoid artifacts.
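
A toy sketch in Python of that rule of thumb; the chains are hypothetical signal paths of my own, and the point is simply that every component/composite boundary crossed adds an encode or decode hit:

    def conversion_hits(chain):
        """Count encode/decode passes: one for each component/composite boundary."""
        return sum(1 for a, b in zip(chain, chain[1:]) if a != b)

    stay_component = ["component", "component", "component"]
    round_trips    = ["component", "composite", "component", "composite"]

    print(conversion_hits(stay_component))  # 0 - no encode/decode artifacts added
    print(conversion_hits(round_trips))     # 3 - artifacts stack with each pass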

When you record field footage with a Betacam-SP camcorder, you are recording a component signal. In post, this signal can either be played in a "native" component environment or be encoded as composite and posted in a standard composite method. Component is cleaner and will pass all of the bandwidth of the original recording, while composite will "cut off" some of this bandwidth. Betacam-SP in a composite edit suite still has a very clean and pleasing look, but not as crisp as in a component suite.

When a production switcher signal (usually composite analog) is recorded on Betacam-SP, a composite signal is decoded and recorded on the tape. Some of the bandwidth is again lost and the image picks up minor artifacts. Recording this same signal on a 1" or D2 deck would be preferable. On the other hand, when you record field footage on a D3 camcorder (composite digital), or on D2 or 1", the composite signal contains full bandwidth information. If this material is posted in a composite digital environment, it will retain all of its information, without additional artifacts.

In spite of all our attempts to make the cleanest, crispest image possible, we are still encumbered by a composite video signal system where the lowest common denominator is the broadcast signal or the playback of a VHS dub. In order to get the best image to this destination, I generally advise editing "up" to the best format possible. Under this approach, if you are editing Betacam-SP sources in a composite suite, edit to a 1" or D2 master. This will preserve the image. If you edit Betacam-SP back to Betacam-SP in composite, you will pick up two encoding/decoding "hits" with each generation. If you are editing in a component suite, master to Digital Betacam, DCT, D1 or D5 rather than Betacam-SP. Though mastering back to Betacam-SP avoids the encoding/decoding, you get analog generation loss, as well as a higher drop-out count.

With composite sources, such as D3 or D2, it might be best to post in a composite digital suite (D3 or D2). This will keep the purest image quality. A component digital suite still will have better keying function, particularly chromakeying, and for this reason, may still be a better post

alternative. This single decoding pass will introduce minimal artifacts which will generally go unnoticed. Panasonic offers an interesting wrinkle by making its D5 component digital recorders playback-compatible with D3 tapes recorded on D3 composite digital recorders. This is because Panasonic does not make D5 camcorders, only D3. Sony differs in its approach by offering a full line of Digital Betacam decks, but Digital Beta studio decks can also play Betacam-SP recordings (with the analog playback option).

51. Nostalgia

I'm sorry to say that during the 25 years I've been in the business I've seen many formats come and go. Like other aspects of life, everything old is new again.

My experience started not only a while ago, but I also had the chance to work at WMFE in Orlando as my first television job. Since WMFE had all the usual public TV financial woes, it received many "hand-me-downs" from other stations. As a result, it became a veritable museum of video equipment and gave me exposure to the rich heritage of our industry. It was at WMFE that I worked with many items that were past their prime, such as the RCA TRT-1A video recorder - four racks of electronics and one of the first series of VTRs manufactured. Add to this GE turret-lensed studio cameras and a B&W switcher. At WMFE I first learned to make supers - letters on tabbed blocks that you tore off and placed in a holder to spell a name, then adhered with double-stick tape to a black card for the camera. At WMFE I was still able to watch someone splice 2" quad tape, though for repair, not editing.

In school (University of Central Florida, then Florida Technological University), I worked with 1/2" Sony Portapaks. These were color and B&W portable forerunners of our modern ENG/EFP equipment. WMFE had some of the earliest 1" VTRs: IVC-800s and 900s. These preceded the Sony and Ampex type "C" 1" VTRs by several years and were themselves eventually "killed off" by 3/4" decks. IVC also made the 9000, a 2" VTR that was the best multi-generation deck made. Used by many high-end post houses, it was unsurpassed in image quality until the digital formats came about. Unfortunately the decks were too expensive to maintain and IVC just died off.

Throughout the years I've seen and worked with many other formats that are no more. Anyone remember Quartercam? This was a 1/4" cassette format proposed by various groups, including Bosch and Philips (and challenged by 8mm proponents), which was nearly adopted by ABC News as its ENG format of choice. Eventually Betacam won out, though we may now see the idea come back in a sense in DVCPro. Then there were Bosch's B-format 1" VTRs, which were well-received internationally but beaten out by Sony and Ampex's type "C" 1" in the US. For a while I worked with Panasonic's Recam, the 1/2" format that was replaced by Panasonic's MII (and was later referred to as MI). You may remember that this format could use consumer-grade VHS tape, something the early Betacam format also claimed (consumer-grade Betamax), though less successfully. Speaking of Betamax, remember the industrial-grade Betamax decks (Beta I), SuperBeta or Extended Definition Beta (ED-Beta)?

So formats come and go. Whether anyone will remember DVCPro, Digital S-VHS or even Digital Betacam, D5, D3, D2 or DCT for that matter is a market decision. As painful as it sounds, formats are an expendable item that, like yesterday's 286 PC, serve a purpose and then are replaced.

52. Compression

Image compression techniques will be a factor in nearly all video technology for the foreseeable future. Though compression may be viewed as a new development, it has been with us for a while.

The NTSC image format is in itself a way of compressing reality. The NTSC picture makes certain compromises in handling light, contrast and color saturation to accommodate the television technology of the 50's and 60's. Newer digital representations, such as the 601 or 4:2:2 image, provide lower color resolution than luminance resolution, based on research that shows the human eye is less sensitive to color detail. In the computer industry, data compression techniques for archival applications are quite commonplace. I don't see many folks worried about the reliability of data compression and this is in an industry where errors could result in the loss of millions - even billions - of dollars.

Examples of data compression in video include VTRs (Sony and Ampex using DCT compression), nonlinear editing (either motion-JPEG, as in the Avid, or wavelet, as in the VideoCube) and distribution (MPEG-1 and 2). The various approaches generally involve proprietary methods, so that an Avid file cannot be read by an EMC unit, even though both use motion-JPEG algorithms. Some formats, such as Quicktime, can be used across platforms and applications, but it isn't as easy as moving a tape from one deck to another. MPEG, the newest and most highly compressed format to come, may change this, since it is intended to be a compatible standard. MPEG is here now, but delivers about the same image quality as a VHS dub.

As far as the high-end use of compression goes, a 2:1 to 4:1 level of compression is generally considered to be lossless; that is, unnecessary information is eliminated with no visual loss. This is debated by many manufacturers, such as Panasonic, who claim that their uncompressed D3 and D5 formats are better choices. Their argument is that a compressed format may appear lossless over one or a few generations, but that the cumulative effects of post and signal processing through systems that have their own compression may cause unwarranted and unforeseen artifacts. For instance, Digital Betacam sources, posted on an Avid and then distributed via an MPEG path, may look like a real mess. On the flip side, Ampex, with its 2:1 compressed DCT recorders, has publicly taken on Panasonic, claiming to be able to prove that its DCT images are superior to D5.

My experience so far has been with Digital Betacam posting in both composite analog and component digital suites. I have posted Digibeta projects to levels in excess of 50 generations with no visible artifacts. When comparing this to tests I have done with DCT and D5 decks (in the same suite), I see no better results than with the Digibeta. In short, all three are good and the arguments are "much ado about nothing!" All are being compared to the yardstick of D1 recordings. I personally dislike D1, because the VTRs are fairly unfriendly in an editing environment and interchange between decks is problematic, so in my opinion, all three are better

than D1 in the real world.

As for concerns about the cumulative effects of compression between systems, I also think this isn't a real-world problem. Computer-based drives and servers, such as Avid's, are at best Betacam-SP quality. At the top end, claims of "Digital Betacam" quality may not prove to be exactly that good when actually tested. In short, if the quality going in is better than the internal processing, the server becomes the lowest common denominator anyway. MPEG distribution will definitely be that way for a while, becoming the "VHS dub" of the future.

53. Open and Closed Architecture

Since video is becoming more and more an industry dominated by computer hardware and software applications, we must make purchasing decisions that choose between open and closed architecture systems. By definition, the term closed architecture applies to those turnkey systems that are generally supplied by one manufacturer and are made up of proprietary hardware and software. Open architecture systems are those in which hardware platforms can be selected from commonly available components and which let you install your choice of software for the desired applications. Closed systems generally require you to continue with the same manufacturer for upgrades, while open systems let you retain the same hardware platform and change to more advanced software later on.

This all seems quite straightforward, and one would think that an open system would be the best choice. That isn't necessarily so. Typical examples of closed systems include such venerable and stable vendors as Sony, Quantel, Grass Valley, etc. Open systems are typified by many of the newcomers: Avid, Silicon Graphics, Data Translation, etc. There are pros and cons to any of these choices. Closed systems often allow you a more direct conduit to the vendor in case of problems. Closed systems can often offer faster performance than open systems because of their proprietary solutions. Open systems must frequently run under Mac OS or Windows, adding computing overhead which bogs you down. Open systems, however, give you the benefit of greater choice, so if you like the way a Mac runs, you can choose the various software solutions that best fit your needs, whether for nonlinear editing, 2D graphics or 3D animation. A closed system can also become a major financial mistake if you happen to purchase a system that is no longer supported by a vendor or was produced by a company that is now out of business. Just look at the older graphics systems or the earlier nonlinear editing systems to see this. But on the other hand, open systems aren't immune to that either - witness the Amiga saga. Mac owners should shudder that Apple holds only 7% of the market!

Are open systems truly as open as they appear? I really don't think so. Take a look inside an Avid and you'll find a lot of proprietary hardware that makes the units function. Same with the VideoCube, Media 100, Lightworks, etc. Purchase these as turnkey systems if you expect them to work correctly. If you purchased software optimized for a Mac Quadra 950, it may not run correctly on a PowerMac, yet the company may not immediately upgrade you to native PowerMac software without a hefty charge. Systems that are touted as open architecture may not be as open as you think - and closed architecture systems may not be such a bad choice after all. For instance, you

will have made an equally good choice if you purchased a Discreet Logic Flame (open) or a Quantel Henry or Hal (closed) for a compositing/special effects workstation. If you want the SGI Onyx that runs the Flame software to also run Alias 3D animation, then the open choice is for you. If you only see that unit performing one function, then closed may be better.

54. Video Production That Gives You The "Film Look"

In past articles I have written about various ways to achieve the "film look" on video. In general, some of the things that trick the mind into "feeling" something is film or video can be duplicated in post-production, using color-correction and video effects techniques. In order to do this successfully, you must start with a good, film-like image.

Various past attempts to make a camera optimized for "electronic cinematography" date back to the RCA TKP-45, the CEI-310 (which became Panavision's Panacam) and the Ikegami EC-35. These achieved their "ec" claims by offering a variety of lens choices and fairly versatile control of the image when in the hands of a good video control operator ("shader"). Unfortunately, these cameras lacked the sophistication of modern electronics and the advances made in CCDs. CCDs, rather than tubes, now allow a camera to handle lighting extremes in ways that approach film imagery. In addition, the research and development that has gone into high-definition television has had the spin-off effect of giving us better cameras for current NTSC productions.

Both Sony and Panasonic now offer you choices that provide a very film-like reproduction of the image. The Sony series is based on their Digital Betacam camcorders, while the Panasonic line is based on D3. These cameras offer the finest in electronic image reproduction, a choice of lenses and film peripherals (matte boxes, focus handles, etc.), 16x9 or 4x3 aspects and the ability to record a clean, high-bandwidth picture straight to digital tape. The biggest selling point for these cameras is that the operator can set them up digitally, with control over balance, gamma and chroma. This means a director of photography can develop and store different looks for various moods.

In side-by-side comparisons of an image photographed with 35MM film and with these cameras, image quality looks very comparable over many different moods and light settings. This includes interiors, exteriors, harsh daylight and hard mood shadows. In all honesty, I feel that in the shots I have seen, the 35MM still has better resolution (crispness), but this difference is lost on all but the best control-room grade monitors. If a 35MM negative or print is not required for release, then the use of one of these "ec" video systems might be a good alternative to film production, as well as an incredible cost savings. If a truer "film look" is still required, showing a grain pattern and motion artifacts as a result of 24 FPS or 30 FPS film transfer, then an electronic post technique for achieving this can still be added.

55. Facility Design

With a couple of decades in the business, I have had the opportunity to design, build and install several different audio, video, film and mobile facilities. Each new facility offers its own unique challenges, but many things can be done to head off the same old mistakes.

All facilities have a few common elements. These are: heating and air conditioning (HVAC), electrical power, sound isolation, lighting, operator ergonomics and video synchronization signals. If dealt with incorrectly, each can become far more of a nuisance than mistakes made in the equipment you purchase. You must correctly deal with these elements before anything else is finalized in a new facility.

Electrical. Make sure that you have adequate, conditioned power that is equally distributed for your equipment needs. Design in enough capacity by assuming that all your rooms may become technical rooms at any time in the future. Today's office could be tomorrow's nonlinear editing suite. Power, especially in Florida, tends to fluctuate and be unreliable. Work to prevent and filter this. Add surge suppression and UPS systems wherever possible. This will keep you from losing valuable systems at the most inopportune time.

HVAC. Air conditioning comfort is rated as the biggest complaint in nearly all businesses. Put each room on its own thermostat and make sure you have enough capacity not only for the equipment but also for the variation in the number of people. An edit bay may hold one editor or half a dozen clients at different times.

Sound isolation. Each room in a facility generates noise to the outside which needs to be muted. This is true even of graphics suites, where artists frequently play music to pass the time during sessions. The only really critical rooms are those where open mics are used. Audio control rooms should also be well-isolated from outside noises so that the concentration can stay on the sounds of the session. Outside noise leakage also tends to shake a client's confidence in the acoustic integrity of the entire audio facility.

Lighting. Each room should offer the operators a lot of control over the lighting environment. Lighting should be indirect, with additional accent lighting for keyboards and tasks. I personally like windows and recommend a lot of natural lighting. This counters the cave atmosphere of most suites. Some operators do like the cave, however, so windows need to be able to be covered with blinds, shades or curtains. In any case, careful placement of lighting will prevent direct light and reflections on monitor screens.

Ergonomics. The human interface is the most important, yet often most overlooked, part of a facility. Make sure console surfaces are at proper working heights for keyboards, trackballs, mice, etc. Chairs should be easily adjustable in height and give good back support for long hours. Give plenty of surface space on consoles so that the operators can spread out their work. A console that is nothing but keyboards, without any room for scripts, a cup of coffee and general elbow room, is too cramped. The amount of empty surface area should be more than that consumed by keyboards.

Synchronization. Video sync is a key element in how well your video hardware, and any audio hardware locked to it, will function. Video sync signals are often looped from one device to another. Although this works, over the years it will prove to be a source of trouble. Design a sync distribution system with enough distribution amplifiers to run direct signals to each device separately.

56. Computer System Maintenance

Computers are such a general part of our working environment in the video business that they are taken for granted. We tend to look at the computers that we use for work in the same way that we view our office or home PCs and Macs. This is a mistake.

I have found that the Macs and PCs used for systems such as audio workstations and Avid edit systems are frequently "abused" by other staff members. Doom is played on the PCs used by the Operations Department for scheduling. Interns' resumes are found on the Macs in audio. Sound like a familiar problem? I'll bet it is! These are symptoms of habits that will choke your systems. You may see system crashes, slowdowns and other problems if you don't perform routine system maintenance. Here are some suggestions.

1. Don't put any unnecessary applications on the computer. Various applications require different system configurations, and switching from one to the other without making those configuration changes will crash your computer. For instance, MS Works (for Mac) doesn't co-exist well on a Mac with the Avid application, while Photoshop and Fractal Design's Painter do.

2. Keep games off of the system. Generally these aren't a problem, but in some cases, warranty service will be terminated if the supplier finds games on the system during troubleshooting.

3. Frequently remove old project files. Back these up to floppies if they are mainly data and get them off the hard drive.

4. Run diagnostic routines on your hard drive. This means running such utilities as Norton Utilities, Disk First Aid, Disk Fix, etc. Generally you want to check for viruses, check for problems and optimize the drive by defragmenting files.

5. If you are using external hard drives, such as on an Avid Media Composer, reformat the drives after several projects. This truly cleans off old material, which simply dragging files to the trash does not.

By following these simple guidelines, you'll encounter unexplained system problems less frequently and your computer will run more smoothly in general.

57. Post, 2001

With the next millennium only a few years away, one has to wonder which portions of video post-production will remain the same. What will facilities look like? These questions were raised recently on-line in CompuServe's Broadcast Professionals Forum but, unfortunately, drew little response. I suppose that means that those of us in the industry are as much in the dark as those outside! Prompted by this, I have tried to polish my crystal ball and offer some observations of my own.

Obviously post is changing, and the general wisdom is that more and more editing suites will boast the latest non-linear, random-access editing workstation (NLE) - Avid, Media 100, D-Vision, etc. But will there even be a need for post-production facilities at all? I think so - and I'm not sure the general wisdom is right.

Post-production facilities are not sitting around waiting to become obsolete. They are trying to model themselves after similar examples in other industries, such as the publishing/printing industry's successes after the introduction of desktop publishing. This is generally taking two routes for video post-production facilities: a) they are evolving into service bureaus for high-end finishing; and b) they are offering more creative services and production support. These routes are becoming necessary because many former editing customers are purchasing desktop editing and doing it themselves. The impact of the Internet and the World Wide Web is also forcing facilities to re-evaluate their role as communications - and not just video - companies.

So what can post-production facilities offer to producers at the start of the next millennium? First of all, there will be many high-end services for which cheaper, desktop solutions will still not be available. These include: high-quality standards conversion; film-to-tape transfer; high-quality, real-time effects work; and high-quality compression (MPEG, DVD, etc.). Format wars will continue. As the production community moves away from Betacam-SP to the next field format (DVC, Betacam-SX, hard-disk recording?), facilities will still be needed to mix all the various formats to a final master, because independent producers and videographers simply won't be able to invest in all the variations that are out there. When advanced television becomes a reality, the standards will once again change, and if that change is to HDTV or even 16x9 enhanced NTSC, then desktop systems will have a hard time delivering the throughput needed for on-line post-production work.

So what does the crystal ball say about how a facility will look in the coming century? I think the answer is that a lot fewer things will change than many believe. A lot of suites will be based on NLE technology, but linear, tape-based editing is far from dead, especially for long form programs. Offline editing will be nearly totally non-linear, and online will tend to be mixed among linear, non-linear and hybrid suites. Many facilities will feature strong graphics departments so that they can address the interchange between video graphics/effects, Internet/Web/multimedia production and possibly other revenue streams, such as feature film effects or video games. Post-production audio suites are still a necessity for the full-service facility.

The facility in general will probably continue to work within a familiar structure. Although much R&D has been invested in server technology, it will still be a long time before a central file server will replace the terminal gear and videotape decks in the central machine room of most existing facilities. The reasons for this are the high replacement cost, redundancy in the event of failure and the ease of moving material from archive to online status.

Edit suites will still feature familiar approaches - dedicated devices, rather than a single keyboard and the computer's graphical user interface. Nearly all NLE manufacturers offer a variety of other control surfaces for switching, audio mixing, digital effects and editing control. Many offer multiple monitor configurations for the interface and the video image. This is the easiest way to work and the most familiar to operator and client alike.

As we look ahead to the next millennium, producer/clients and facility personnel together should not look for ways to undercut each other - fearing increased competition - but rather, should look for ways to evolve. It is a symbiotic and mutually beneficial relationship that is not extinct, but rather needs to grow to accommodate newer communication methods and tools. Only in this way can the diverse aspects of a client's multithreaded marketing plan - television, videos, CD-ROMs, DVD disks, print and the Web - all be tackled by the same team with any level of consistency and cohesiveness.

58. The Next Field Format?

Betacam and Betacam-SP have been around for over a decade and have become the dominant video format for news gathering (ENG) and field production (EFP) throughout the world. Currently the video industry is in a great period of change - analog to digital; linear to nonlinear; MPEG compression; etc. Likewise, the camera and deck manufacturers have started to crank up the effort (R&D and marketing) to replace the venerable Betacam-SP format - first for ENG and later for EFP. The "short list" of choices now seems to include mainly three: 1) Sony with Betacam-SX; 2) Panasonic and Philips with DVCPro; and 3) Ikegami and Avid with the EditCam and CamCutter.

Let's look at the Avid/Ikegami approach first. This is touted as the way of the future, because the recording media is a dockable hard drive instead of a tape deck. Of course, the drive format will (thus far) only work in an Avid editing product, primarily the NewsCutter. The Avid CamCutter or Ikegami EditCam offers many interesting features, such as built-in non-linear editing, the ability to delete unwanted takes and a looping record function. The units are very rugged, and image quality is excellent and relatively free of compression artifacts. Because the digitizing is done straight from the camera's imaging system, better results are achieved at more extreme compression rates than in the Media Composer product line.

The key to whether or not the industry adopts this is the economics of hard drive recording. A hard drive "back" will store about 15 minutes of footage at a cost of about $2500. This cost is expected to drop, but I doubt that it will approach the per-minute cost of tape. Even with multiple drives, it will be necessary to delete bad takes to conserve space and to make a lot of conscious decisions about whether to archive material after editing. And what form will that archive take? Yes - you guessed it - tape. Data tape, but tape nonetheless!
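
To put the economics in perspective, here is the per-minute arithmetic implied above; the drive figure is the one quoted in this column, while the tape price is an assumed placeholder used purely for contrast.

    # Cost-per-minute comparison. The $2,500 / 15-minute drive figure comes
    # from the text above; the tape price is an assumption for illustration.
    drive_cost, drive_minutes = 2500.0, 15.0
    tape_cost, tape_minutes = 30.0, 30.0   # assumed: ~$30 for a 30-minute field tape

    print(f"hard drive: ${drive_cost / drive_minutes:,.2f} per minute")  # ~$166.67
    print(f"tape:       ${tape_cost / tape_minutes:,.2f} per minute")    # ~$1.00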

Philips and Panasonic are banking on a tape-based future for ENG and are promoting the DVCPro line. The consumer DV format is backed by a consortium of electronics manufacturers who all build to the same standard, but both Sony's and Panasonic/Philips' professional variations deviate from the consumer DV standard. The DV cassette cartridges are similar in size to Hi8 and 8MM cassettes. The DVCPro variation uses a different tape formulation which is more conducive to professional production and editing. DVCPro editing decks can play back consumer DV tapes and also Sony's DVCam professional format (though Sony's decks cannot play back Panasonic's DVCPro recordings).

DVCPro uses 5:1 compression which looks excellent. Part of the attraction is the smaller, lighter equipment packages made possible by this smaller cassette size. One of the products in the DVCPro line-up is a laptop-sized, 2-VTR edit unit for ENG editing in the field.

Although Sony makes its own DVCam format, it is marketed for the non-broadcast, industrial sector. For broadcast, ENG and high-quality field production, Sony has introduced Betacam-SX as the direct successor to Betacam-SP. This is a component digital (4:2:2), MPEG-based format fitting into the existing Betacam-sized cassette cartridges. It uses 10:1 compression, which points out a new aspect of this era - "format wars" have now become "algorithm wars". Sony's 10:1 looks as good as or better than the 5:1 or even 2:1 of others, making these kinds of specs not truly "apples-to-apples" comparisons.
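
For a sense of what these ratios mean in practice, the sketch below works out rough data rates. The uncompressed 4:2:2 reference and the arithmetic are mine; the results are approximations for illustration, not published deck specifications.

    # Back-of-the-envelope data rates behind the compression ratios above.
    # Assumes 8-bit CCIR-601 4:2:2 active video as the uncompressed reference.
    width, height, fps = 720, 486, 30
    uncompressed = width * height * 2 * 8 * fps / 1e6      # 2 bytes/pixel at 4:2:2
    print(f"uncompressed 4:2:2:          {uncompressed:6.0f} Mb/s")              # ~168
    print(f"DVCPro, 5:1 from 4:1:1:      {uncompressed * 0.75 / 5:6.0f} Mb/s")   # ~25
    print(f"Betacam-SX, 10:1 from 4:2:2: {uncompressed / 10:6.0f} Mb/s")         # ~17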

Betacam-SX decks are playback-compatible with Betacam-SP archive tapes, though the SX tapes use a new formulation (look for Sony's bright yellow cases). Sony, in trying to bridge the "tape/tapeless" debate, has developed hybrid recorders with built-in hard drives. You can edit from tape to the hard drive within the same VTR! You can also "dump" selected takes from various tapes to a single hard drive and then use that drive as a source in the edit. This gives folks a pseudo-nonlinear approach in a package that looks and feels like a familiar BVW-75. In addition, Sony has offered SX at an attractive price point for product and stock compared to even analog SP.

Whichever format or method wins out, it will be a marketplace decision. It seems that no choice is a bad choice if it adequately meets your needs.

59. User Interfaces

The user interface (UI) of a device is the portion with which the operator must interact in order to get the desired results. This includes the display screen layout and the control devices (keyboard, mouse, stylus, etc.). It is the portion of a workstation or system, whether it is an edit controller, paint system or whatever, that is most visible to client and operator alike - and the part that is sometimes the most confusing. Although professional, post-production-oriented UIs are not totally standardized the way Mac or Windows applications are, they generally fall into four categories: the EDL, Tree, Paintbox and Timeline styles of interfaces.

The EDL (or Edit Decision List) style of UI is the oldest and is typical of CMX, Grass Valley, Sony and other brands of linear edit controllers. Sections of the screen display (monochrome or color) are divided up for VTR/device status and timecode locations, the type of the current edit and a list (EDL) of all edits made to that point. Since nearly all brands of linear controllers use very similar screen and keyboard layouts, it is quite easy for an experienced editor to move among various types of controllers with a minimal learning curve. Instead of a single control device, such as a mouse, the editor has ten control devices (fingers) to move quickly through keyboard commands!

The most common new UI is the Timeline, which is typical of most nonlinear editors, like Avid, Media 100, Stratasphere, etc. This interface is essentially the best of the film world's tactile/visual style of editing, married with the Mac/Windows application UI. Often using two computer screens so that the operator can better lay out bins and windows, this method has a section for the source clips, a section for the edited sequence (like a "recorder" window) and a graphical timeline showing all audio, video and effects edits made to that point. These graphical representations are equivalent to the textual information displayed on a linear editor's EDL-style UI.

The Timeline may also sometimes represent the equivalent of a track sheet used in audio recording studios to identify which instruments are recorded onto which specific tracks of a multitrack audio recorder. In addition to edit controllers, variations of this type of display are also common in 3D animation programs and some compositing programs like After Effects.

The Paintbox style of interface has been around for nearly two decades and is most common in 2D paint system menus, including Quantel, DFX, Liberty and others. In the Quantel Paintbox everything is controlled by the artist's stylus, and "swiping" the pen up/down or left/right activates various pop-up menus or color palettes. Quantel has taken this UI even further, making it the common basis for nearly all of its product line, including graphics systems (Hal, Harry) and editing systems (Henry, Editbox). In the editing and compositing programs, linear material such as motion footage is represented by filmstrip-style displays which the artist/operator can move forward and backward for trimming and editing by using the pen interface.

Finally, the newest UI to gain strong acceptance is the Tree interface. This is most often used in compositing programs to show how effects commands are placed and in which order. Programs using the Tree include Avid Illusion, Kodak Cineon and Discreet Logic's Flame, Flint and Inferno. The Tree layout displays a flowchart of blocks (or circles) connected by lines with intersecting nodes. It is very easy to change the priority with which effects and masks are used in image compositing, simply by moving or changing the "insertion" point at which an effect or mask is placed into the composition. This is somewhat analogous to the thought processes an editor or TD uses in setting up effects layers on a multiple re-entry video switcher - only it is visually represented as a graphical flowchart.

All four styles are likely to continue in the industry for many years. They are comfortable to understand and operate and make it easy for operators, editors and artists to transition between various hardware and software products.

60. The Productive Producer

There are many tools out there these days that can really aid a producer in his or her daily production and post-production tasks. Today it is unimaginable to see a producer in a session without a cell phone and a laptop, but what are some of the useful items that ought to be on that laptop?

A skillful producer should have a good handle on what footage is to be used and where it can be located before starting an edit session. Many projects start at the post-production phase with an Avid offline session. Even here it is productive for a producer to have screened and selected possible footage prior to offline. A real aid here is a VHS deck with shuttle and pause capabilities. Screening a VHS dub of your footage with "burned in" timecode and logging takes can be invaluable. To this end, general application software, such as Works and Filemaker Pro can be used to create tape logs which can be directly imported into Avid Media Composers and turned into bin information. This step avoids having to do all the screening and logging while connected directly to an Avid. Likewise, bin information can be exported and opened in Filemaker Pro or Works for easy sorting and searching during the edit if you use a laptop.
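
As a sketch of what such a log might look like, the snippet below writes a simple tab-delimited file of the kind you could build in Works or Filemaker Pro; the column names and timecodes are illustrative, not Avid's actual import schema.

    # Hypothetical shot log written as tab-delimited text. Column names and
    # entries are illustrative only; check your edit system's import format.
    import csv

    shots = [
        ("101", "01:02:10:00", "01:02:45:12", "CEO intro, take 3 (best)"),
        ("101", "01:07:30:15", "01:08:02:00", "plant exterior, wide shot"),
    ]

    with open("tape_log.txt", "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["Tape", "Start", "End", "Description"])
        writer.writerows(shots)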

Much of what we do in edit sessions revolves around graphics. It is quite common these days to use animations, graphics and supers created on desktop systems (Mac and/or PC) running such applications as Corel Draw, Adobe Photoshop, Fractal Design's Painter, Adobe After Effects, etc. If you are artistically minded, then it would be invaluable to be able to create these graphics yourself. Generally, files that are 640x480 or 720x486 pixels at 72dpi resolution can be directly converted to video. If you wish to create these yourself, then Photoshop is the most important and best all-purpose program to use.
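
As a quick check of those frame sizes, this sketch computes the uncompressed size of a full frame at each of the two pixel dimensions mentioned; the 3-bytes-per-pixel figure is my assumption for standard 24-bit RGB files.

    # Uncompressed file size for full-frame, 24-bit RGB graphics at the
    # video frame sizes quoted above (3 bytes per pixel is assumed).
    for w, h in [(640, 480), (720, 486)]:
        size_mb = w * h * 3 / (1024 * 1024)
        print(f"{w}x{h}: {size_mb:.2f} MB uncompressed")   # ~0.88 and ~1.00 MB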

If you really only want to shuffle files around and convert between various formats for eventual export to video, and are not interested in creating graphics from scratch, then less costly software is your best bet. These include such applications as CompuServe/Spry's Image Viewer (PC), GraphicConverter (Mac) or JASC Paint Shop Pro (PC). These are generally available at minimal cost as shareware through the various online services.

There are also a lot of little plug-in programs that are useful for adding that artistic flair to your project. For instance, for Photoshop you can get Alien Skin Textureshop, which is great for producing unique textures for backgrounds. Another Photoshop plug-in is Phototext, which allows you to generate text with various modifiers for shadows, embossing, etc. Leaf through the many computer mail order catalogs and you will find many royalty-free CD-ROMs of photos and textures - all useful for backgrounds and "eye candy".

Chyron has also recently released a software package for Windows 95 and/or NT called WinFinit!. It is set up to work with a specific Chyron Infinit!, Max or Maxine based on that unit's site license ID number. As such, the software can be freely copied, since its files will only run on one particular unit. If the facility has a PC networked with its Infinit!s and is running this software, then you can generate message pages offline on your laptop (with this application), move the files to the networked PC and transmit them directly into the Infinit!. WinFinit! will also convert Photoshop-generated graphics into Infinit! format, thus allowing you easy offline graphics creation for use on an Infinit!.

If you do a lot of work with edit decision lists (EDLs) and you understand their variations and how to manipulate them, then there are a number of utilities worth having. A good selection would include EDLXpress (Alba Editorial), EDL Max (Brooks Harris) or PreReader! (The Software Grill). Again, look around on the online services (CompuServe, AOL) for these. If you can get a copy of Avid's EDL Manager (Mac), this is also a worthwhile application for EDL-file conversion. I also like an older DOS program, called Edit Lister, which may be impossible to get these days.

Whether your needs are graphics, EDLs or just to get better organized, the software is there for you.

61. High Definition Experiences

I recently had the pleasant experience of posting my first High Definition TV project. Those of you who have followed the politics and technology of HDTV may know that it has been kicking around for about a decade with little result. At the end of 1996 the FCC approved a transmission standard developed by a consortium of originally-competing HDTV proponents, called the Grand Alliance. As this standard is meant to define the "carrier" method for the next type of TV transmission, and not the production/post-production format, it includes the ability to transmit various types of TV signals. The stations and networks can determine this, and "smart" receivers in the home will decode the signal appropriately. So far the highest-quality image possible, for which actual products exist or can be readily manufactured, is the SMPTE-approved HDTV standard, featuring 1125 scan lines at 30FPS (60Hz) and a wider aspect ratio of 16x9, rather than our familiar 4x3.

The marketplace will dictate whether or not HDTV becomes the next "must have" production and post standard for top-quality video production under the new digital television scheme, but in spite of some ambiguity, HDTV has already provided many benefits in non-broadcast applications. In my case, our client was marketing flight simulation image generators at a trade show and wanted the best possible video image in order to adequately represent the product. Standard NTSC just wasn't going to cut it! There are only a handful of places where you can post HDTV projects. One of them is the Sony Pictures HD Center in Los Angeles. In addition to servicing outside clients such as ourselves, one of the big "clients" at Sony is in-house. They master the entire Columbia (Sony Pictures)/TriStar library to HDTV video first and then "downconvert" from the HD masters to NTSC, PAL and now DVD duplication masters.

Posting HD is quite interesting. Since R&D on a lot of the product line had long ago been halted, pending political and market decisions, many of the items we are accustomed to in standard edit suites don't exist in the HD world. Generally all projects are best handled as "cuts and dissolves" projects - no fancy "bells & whistles". The video switcher is rudimentary and only a crude real-time 2D DVE exists (resize and reposition only). The only true real-time effects tool is an Ultimatte 6 which has been "hot-rodded" for HD. Tapes take five minutes to shuttle end-to-end and are subject to head clogs. The VTRs are digital, but the signal path is analog.

But now the good news. The image is great! You can offline in NTSC on an Avid, for instance, and the EDL translates. Resolution-independent software lets you create HDTV graphics and animation which can be transferred to HDTV tape for editing. This includes most of the popular SGI-based animation programs as well as Mac software such as Photoshop and After Effects. On this project we created several 3D animations in SoftImage and pseudo-3D animations using After Effects on a Mac 9500.

Other resources at the HD Center which we did not use were the various film services, which include transfer, color correction and film output. Given the resolution of HDTV, it is feasible and cost-effective to shoot 35MM film, transfer and post in HDTV and then transfer back to film for a 35MM theatrical product. Though in theory this image would not have the full resolution of a film negative, I doubt that anyone would see the difference - especially if the film used many optical effects - when compared against actual release prints in a completely film-posted project.

So HDTV has plodded along in the background, providing many special venue productions with the resolution required. Whether or not it becomes mainstream, it will most likely continue to be a good alternative for trade shows and other non-broadcast applications.

62. Recycling

Most of us are familiar with the recycling concept and in many communities are used to saving plastics, glass, etc. for pick-up and reuse. This concept can be extended to the video production and post business as well. It is one which we at our facility have tried to put into practice to help us move forward into the future. I'd like to share some examples.

As you know, computer platforms constantly evolve and what you purchase today is instantly obsolete. This is especially painful in the area of animation and graphics, as expensive workstations, such as those from Silicon Graphics, have found their power eclipsed by PCs and Macs. Several years ago we purchased an SGI Personal Iris for 3D animation - at the time a "screamer" at 33MHz - and very expensive. Now a "tortoise", it is hard to sell to anyone else for anywhere near its value on the ledger. We have since added several newer, faster SGIs and Macs and needed a network. Well, lo and behold, once we tied the Iris to a 9GB SCSI drive "liberated" from one of our Avid systems, we were able to turn this "tortoise" into a dandy network file server. In its new role, the 33MHz CPU loafs along quite adequately and keeps us from bogging down the network by having one of the more productive units fill this role.

We are in the process of completing construction of a second digital online edit suite. Our first one was built with the newest gear from Sony. As the industry shifts to more nonlinear editing, building a new tape-based edit suite with expensive, brand-new equipment is, at best, risky. We decided to take a different tack and scour the country and various equipment brokers for great deals on trade-ins and "nearly new" equipment. To our surprise, this turned out extremely productive, and as a result we are building a total Grass Valley Group-based suite. GVG editors, long favored by many online editors, are readily available and inexpensive on the used equipment market. Since these units are heavily software-based and their original software developers are still in business and actively supporting the product's client base with new software, all of the existing units can be readily updated to the newest versions.

The GVG switcher we chose was one originally designed for telecine suites and is, therefore, small. As telecine rooms are changed around, many of these switchers come onto the market and, since digital suites earn their keep with compositing and layering - one pass at a time - a larger switcher is unnecessary. Add to this an Ultimatte-quality chromakeyer, and this switcher became an excellent choice for us.

Another room getting a face lift is our Avid Media Composer 8000 suite. In order to keep the system current with the latest software and capable of Avid's best level of nonlinear online video quality, it was necessary to change the Mac hardware platform from a Quadra 950 to the PCI-based PowerMac 9500. This, of course, leaves the Quadra 950 unused - or does it? Well, no. The work done in our Avid suites increasingly involves interfacing with our graphics department and being able to run graphics software, such as Photoshop, After Effects, etc. In the new Avid suite, we will be able to use both systems - the PowerMac 9500 for Avid editing and the Quadra 950 for in-session graphics - by keeping those applications resident on the 950 and available in the suite, without hindering any editing work.

The next time you think about throwing out the old and buying the new, take a second look. You probably won't get much money for the old, but it sure can go a long way to making life easier in your daily operations.

63. What Every Editor Should Learn

I have been in the industry since 1970 and an active editor since the early '70s. Much of what I have learned has come because survival in this technology-driven business depends on being a student of the craft and always being ready to change. This is as true today, with the advent of nonlinear, as it was in the beginning of tape editing and in film before that. Frequently I see experienced editors left behind by the times as technology moves ahead. I also see newer editors, working exclusively in the nonlinear world (Avid, Media 100, Stratasphere, etc.), never take the time to understand the fundamentals. Here are some things that every editor should know.

The first and most important is having an understanding of the basics of the video image. What makes up the elements of a standard video signal and what can be done to improve that image. Knowing the differences between film and video, composite and component, digital and analog, and so on. This specifically means being able to properly read a waveform monitor and vectorscope. It means understanding the nuances of the timecode signal and how it can be used to benefit a project.

The second important skill is having a complete understanding of Edit Decision Lists (EDLs). These arcane databases are the most common medium of exchange between offline and online, between various brands of edit systems (including nonlinear) and between video and audio. EDLs and their film variants are also the best way to correlate video to film when cutting negative based on an electronic offline. It is an area that I personally find most lacking in many budding nonlinear editors, who never bothered to understand EDLs, erroneously assuming that their systems' output is sufficient for every client's online needs.

Another skill that isn't quite as readily appreciated is the need to be able to operate online graphics systems. Specifically this means basic operating skills on Chyron's Infinit! character generator family (Infinit!, Max, Maxine). Like it or not, Chyron is the yardstick for all character generation and the brand most often used by high-end facilities and nearly all major networks. All editors should be able to create basic messages on one of these units and this is true for nonlinear editors as well. Even though nonlinear editors might use such units infrequently in their suites, they should at least know what these systems will do in order to recommend the best options to their clients, when projects move into online.

Established online editors have a lot of learning to do as well. Most have tackled the previous items I've discussed, but many are fearful of the new nonlinear technology and haven't kept abreast. An obvious one is that tape-based, linear online editors should immerse themselves as heavily as possible in the nonlinear world. Particularly this means learning to edit proficiently on some brand of nonlinear edit system, such as an Avid. In addition, since much of this work is oriented around graphics and Mac-based graphics applications, these editors also need to learn some of the basics about such programs as Photoshop, After Effects, Painter, Illustrator and others. Even if they don't work on Avids on a daily basis, software such as Photoshop is playing an increasing role in all video sessions, as an easy method of exchange for client-supplied artwork.

If you don't learn the new you will be a dinosaur and become extinct; however, if you don't learn the old and the basics, you have no foundation for your craft. Hit the books!

64. Digital Film Master

The era has now arrived in which it is possible to completely post-produce a feature film using electronic methods. Although still very costly, several studios are developing facilities in order to be able to generate an "electronic negative" (digital film master) for their features. Such a master would mean that an entire completed feature film would exist on a set of digital data archive tapes. New "first generation" negatives, prints or whatever else could be generated from these files at any point in the future.

Several reasons drive these efforts. If a high-resolution digital film master (DFM) has been created, then it is easy to create both NTSC and PAL versions without the usual artifacts of standards conversion. Presumably a DFM would be of higher quality than any current or future video standard, thereby future-proofing the product. A DFM would also eliminate the costly need for restoration of a damaged film negative in the future.

Many components go into making this possible, because post-production hardware and software have increasingly been designed to be resolution independent. NTSC video is about 500 lines, HDTV is about 1000, half-resolution 35mm film is about 2000 and full-resolution 35mm film is about 4000 lines. (Note that here the term "lines" refers to approximate vertical resolution, i.e. scan lines, and can be equated to pixels.)
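
Because data grows with the square of the line count (assuming the horizontal pixel count scales along with it), each step up that ladder is far more expensive in storage than the line numbers alone suggest. A rough illustration, using my own simplifying assumptions:

    # Relative per-frame data implied by the line counts above. Assumes the
    # horizontal pixel count scales with the line count, so data grows with
    # the square of resolution; figures are illustrative only.
    formats = {"NTSC": 500, "HDTV": 1000, "2K film": 2000, "4K film": 4000}
    base = formats["NTSC"] ** 2
    for name, lines in formats.items():
        print(f"{name}: ~{lines ** 2 / base:.0f}x the data of an NTSC frame")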

There are many electronic methods that are used to make a lower-resolution image (NTSC or even HDTV video) appear to be of acceptably high resolution when transferred to 35mm film for projection. In addition, the multiple optical processes used in film finishing come with their own "generational penalties", causing some image degradation, which, in short, means that a print seen in a theater is not necessarily delivering a full 4000 lines of resolution anyway. If this is the case, a top-of-the-line, electronically-posted image could actually surpass the quality achieved with traditional photo-chemical techniques. Of course, the higher the true resolution and the fewer electronic "cheats" that are employed, the longer the process will take, the more costly the equipment and the larger the data storage requirements.

Many pieces make up the puzzle of creating a DFM, but everything starts when the original camera negative is scanned electronically. This generally occurs at some factor slower than real time, depending on the resolution desired. Creative editing can occur on an Avid or Lightworks, as is increasingly the method anyway. Effects can all be created electronically using such devices as Avid's Illusion, Kodak's Cineon, Quantel's Domino, Discreet Logic's Inferno and even Adobe After Effects. The final editing would then appear more like an online edit session, in which the Avid's EDL is "conformed" electronically from the storage files of the transferred camera negative. The last link is color-correction. DaVinci, the preferred video color-correction system, now makes a resolution-independent system, Resolve SST, which can be used to color-correct a DFM using methods familiar to any standard video film transfer session.

Once the money gets worked out, and schedules permit the creation of a digital film master, true tactile film editing and finishing may totally become a thing of the past. This will be mourned by many who grew up in that era, but, it will open the world of feature films to a whole new generation who have strictly worked in the electronic world.

65. Software Utilities for Editing

In working with Avid nonlinear systems and Macs in general for the last few years, I've become familiar with a number of small utility software applications that make life easier, both for Avid editors and for others in production and post. Many of these programs are freeware, shareware or at least fairly low cost, and have been distributed in the past as part of the purchase of an Avid system. They run independently of the Media Composer application and as such will run on nearly any Mac. In this article I've summarized some of these utilities.

EDL Xfer and EDL Manager are two Avid utilities which are invaluable for the offline-to-online conversion. EDL Xfer allows you to read and write files to RT-11 formatted floppy disks - the format used by older CMX and Grass Valley editors. You can also format RT-11 disks. EDL Manager creates and converts Avid sequences into most flavors of Edit Decision Lists (EDLs). It can also convert one type of list to another.

Medialog and Avid Log Exchange (ALE) are used to manipulate database information into Avid-compatible bin information prior to digitizing. Medialog (available in Mac and PC versions) can be used to log video and film shoot data offline during production and/or before editing, as well as to convert such data generated in other programs (word processors, database programs, etc.) into Avid bins. ALE is used to convert other file formats, such as telecine logs (Evertz FTL files, TLC Flex files, etc.), into Avid bin information.

I have received from Avid-related sources, such as their bulletin board and from other users, a number of "cool" and simple apps for timecode calculation. These include Avid's Timecode Calculator, which can run outside of the Media Composer; FF Translator, which allows for math conversions between film and video values; and StorageCalc, a utility that estimates storage consumption based on the amount of footage and the resolution used.

An invaluable program for me has been CompactPro, a shareware compression/back-up utility which was shipped with our Avid software. I have been able to use this as an all-purpose back-up program for any type of file on any Mac, especially in backing up Avid projects. I have also used the Avid Drive Utility program as a general drive formatting utility. This program can be used with most hard drives and also supports striped drives.

Although there are many programs that are useful, I'll close this out with two others that are cheap and readily available. One is GraphicConverter, a graphic file conversion program that offers a lot of the same functionality as Photoshop and Debabelizer. I use this constantly to do things like batch-convert a series of PC-generated Targa files into a PICT sequence or Quicktime movie for import into Media Composer. Along these lines are also a variety of AVI-to-Quicktime conversion programs. These let you convert PC-generated animation sequences (as AVI files) into Quicktime for import into Media Composer.

Although many of these programs have come to us through the use of the Avid Media Composer, they are useful whether you are using that nonlinear system or not. Most can be purchased as shareware and some are also available for free, but all are worth getting your hands on.

66. Are you nonlinear?

I've been an editor for many years now and have made the transition from tape to nonlinear without much trouble. On a routine basis, I work in both worlds, depending on the project. This has also given me insight into a wide range of clients. I've noticed that not only have editing techniques changed, but so have clients.

In the past, an organized client entering an online edit session would usually have some type of paper edit decision list. This means that they had screened a dub of their footage with visual timecode, had written down in and out numbers for all selected shots and had developed a mental and written structure for how their project was to be edited. In linear editing, lack of organization and indecision can make the process quite unpleasant. Nonlinear has changed those "penalties".

The evolution of nonlinear editing has coincided with the development of better linear editing as well. The preread function on digital VTRs has made sophisticated layering a common technique in most facilities. With this comes a downside, though. Preread editing is "destructive", meaning that with each edit you are recording over the previous video. If you don't like the result after it has been recorded, it may take quite a few steps to correct the matter. The funny thing I have noticed is that many clients who have worked heavily in nonlinear suites can no longer organize themselves well enough to function in a linear environment again - even if the tools might be better! They have trouble making any decision at all without seeing every option. So if this description hits a little close to home, what can you do to improve the situation?

View your footage ahead of time and make notes. Pick the best performances from multiple takes and write down the starting and ending timecode numbers from the timecode window on the screen of your dub. Pick out typefaces and organize all information for supers. Most producers have PCs or Macs, so there is no reason a final typed list of supers can't be provided. Select all graphics that need to be created and make sure you have booked graphics session time well in advance. Select the music cuts you intend to use. If you have layered scenes using digital effects, try to storyboard these out as well as possible.

Once you've done this, if you still haven't been able to visualize your completed project, you should book a nonlinear offline session first. This will help to better organize your "cut". Then go to online - which might be linear or nonlinear. If linear, try to select a digital bay. Layering is cleaner and generational loss is non-existent, so if you have to change your mind, it is easier to drop down a generation and work to a revised master. In fact, it is frequently better in a linear bay to make two cuts - one to a longer, segmented submaster, and then recut the submaster into a shorter, more refined edit master.

67. 16 x 9

Although the FCC ruled this year that DTV (the digital TV transmission standard) will be phased in and the NTSC signal will be no more by 2006, it is as yet unclear what standard will be dominant for program production and post-production. The general consensus is that regardless of the resolution or frame rate of any new standard, the aspect ratio will most likely change from the familiar 4x3 (1.33:1) to the wider 16x9 (1.77:1). This size was agreed upon as a compromise among many options, mainly because most films are produced with theatrical projection, not NTSC or PAL TV, in mind. The standard projection ratios range from 1.85:1 in the US to 1.66:1 in many other countries. Of course many films are produced in the wider 2.40:1 size, which would still appear letterboxed in the future.

In anticipation of a wider ratio, many camera manufacturers are currently producing video cameras and camcorders which operate in either 4x3 or 16x9 modes. Shooting 16x9 on these cameras now can be helpful if you want a more "cinematic-looking" project, if you are creating a video for eventual tape-to-film conversion, or if you have an eye towards "future-proofing" your production and/or equipment investment. The latter might be a bit of wishful thinking, since equipment changes over the next few years are truly an unknown. Nevertheless, you can now purchase $4,000 consumer-grade DV-format camcorders with 4x3/16x9 switchable configurations.

If you shoot 16x9 today - what happens? First, remember that in 16x9 recording the image on the tape is actually a compressed 4x3 frame that appears slightly anamorphic. The true 16x9 image is achieved by using a 16x9-capable monitor (which stretches out the image) or by passing the image through a DVE to alter the aspect ratio (expanding the horizontal or squeezing the vertical). Since the anamorphic appearance of the uncorrected video is not that extreme, it is quite easy to post an entire 16x9 project while viewing 4x3 images. There are no problems doing this with any linear or nonlinear edit system. The only problems occur when using circular wipes or effects, because their shape is based on 4x3 viewing. Some switchers and DVEs have a 16x9 option to make the calculations correct for this material.

Since using a DVE to correct the aspect adds some image degradation and also limits the future wide-screen use of the master, I would recommend posting through to a completed master with the video in its uncorrected (anamorphic) ratio. Then, if the video is to be viewed on standard 4x3 TV sets and monitors, use a DVE to create a corrected master. Good quality is attained by compressing the Y axis of the DVE to .77 (the geometrically exact squeeze is .75). Sometimes leaving it set a bit higher, such as at .80, may look a bit better because - though not mathematically correct - it counteracts the camera's natural tendency to make people look a bit heavier. You might also wish to zoom the image slightly in order to reduce the blank letterbox bands on the top and bottom edges of the screen. If you are only going to view the tapes on 16x9 monitors, no correction is required, because the monitor is doing the DVE work for you.
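
The geometry behind that squeeze setting is simple to verify. The sketch below computes the exact vertical scale for letterboxing full-width 16x9 material into a 4x3 raster; the 486 active-line figure is the usual NTSC number and is used here only for illustration.

    # Vertical squeeze for displaying full-width 16x9 anamorphic video in a
    # 4x3 raster: scale = (4/3) / (16/9) = 0.75. The .77 and .80 settings
    # discussed above are deliberate, slightly "tall" compromises.
    video_lines = 486
    scale = (4 / 3) / (16 / 9)
    print(f"exact vertical scale: {scale:.2f}")                       # 0.75
    print(f"active lines after squeeze: {video_lines * scale:.0f}")   # ~364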

68. Film Post-Production

One of the nice by-products that electronic nonlinear editing has brought us is the easy ability to cross over from video to film editing. Many editors and directors have benefited from this, bringing their visions to the silver screen without the tedium and expense traditionally associated with film finishing.

If you are ready to forsake the Moviola or flatbed and want to cut a feature on an Avid, Lightworks or even a linear CMX, for that matter, there are a few rules to follow to avoid the pitfalls. For theatrical release you are generally going to shoot 16mm, Super 16mm, 35mm or Super 35mm film at 24 frames per second (FPS). This will be transferred to 30FPS video (actually 29.97) by running it through a telecine at 24FPS (actually 23.976). During the transfer process, 24 film frames are scanned into 60 video fields (30 frames) using the process referred to as 2:3 "pulldown". Four film frames correspond to five video frames in what is considered an A/B/C/D frame sequence. An "A" frame is the film frame at the start of this sequence that corresponds evenly to a complete video frame. The other frames - B/C/D - do not correspond evenly, so at any given point the relationship of film to video frames has a + or -1 frame error.
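
To make the cadence concrete, here is a small sketch of the standard 2:3 pattern, in which film frames A/B/C/D contribute 2, 3, 2 and 3 fields respectively; the function names are mine.

    # 2:3 pulldown cadence: film frames A,B,C,D contribute 2,3,2,3 fields,
    # turning 4 film frames into 10 fields (5 video frames).
    CADENCE = {"A": 2, "B": 3, "C": 2, "D": 3}

    def pulldown_fields(num_film_frames):
        """Yield (film_frame_index, cadence_letter) once per video field."""
        letters = "ABCD"
        for i in range(num_film_frames):
            letter = letters[i % 4]
            for _ in range(CADENCE[letter]):
                yield i, letter

    fields = list(pulldown_fields(4))
    assert len(fields) == 10                      # 4 film frames -> 10 fields
    frames = [fields[n:n + 2] for n in range(0, 10, 2)]
    for n, (f1, f2) in enumerate(frames, 1):
        mix = "mixed" if f1[0] != f2[0] else "clean"
        print(f"video frame {n}: film {f1[1]}/{f2[1]} ({mix})")
    # -> A/A clean, B/B clean, B/C mixed, C/D mixed, D/D clean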

All linear edit systems work only in a 30FPS video mode, but some nonlinear systems can also work in a true 24FPS film mode. 30FPS editing means that any video online edit decision list is frame accurate, but a negative cut list may only be + or -1 frame accurate. If you edit in 24FPS, the cut list is accurate, but any EDL will be + or -1 frame. You can cut correctly on a 30FPS system if the telecine transfer is done correctly and you have good conversion software to make the correct adjustments for these offsets. Avid makes software to allow for 30FPS and 24FPS film cutting that works best with their Media Composers, but you can also purchase third-party software, such as Trakker Technology's Slingshot, which lets you do the same calculations using any 30FPS system.

There are a lot of myths about how to do the telecine transfer correctly for accurate data. The most important information is the correct identification of the film negative's keycode numbers and the correct identification of each "A" frame. This information can be recorded in the video (visibly), in the vertical interval (VITC) and/or on floppy disk. As long as the data is accurate and your edit software can import or interpret it correctly, you are set. It is best to transfer a complete film roll without stopping and then sync the audio afterwards, either in a separate audio pass or directly on the edit system itself. It is best if only one film gauge is used, since the software will calculate on one information base, i.e. counts based on either 16mm or 35mm, but not mixed. You can mix formats but it creates extra work and headaches.

Although I will always recommend cutting a workprint before cutting any negative, in order to double-check for errors, I will also acknowledge that on low-budget projects the workprint stage is often the most expendable. If so, be sure that the negative cutter you choose is very comfortable working from an electronic image and cut list. Once the picture is truly "locked" - the negative is cut and you have an answer print - then you can start editing and mixing audio. The reason for waiting is that if any errors are made in cutting the negative, they will usually be minor errors of a frame or so throughout the film, based on the + or -1 frame calculations mentioned earlier. These result in some sync errors in the audio. It is easier to correct these in the audio editing process than to try to go back and fix the negative - which may not be possible.

If you have that movie mogul lurking inside somewhere, follow these simple guidelines and you'll see your dream on the silver screen in no time.

69. Audio Peripherals

On many productions audio is more than 50% of the message. Listen to your favorite shows, movies or productions without the picture and then watch them without the sound, and see which way has the most impact. Using sound to its fullest is more than just adding music or some sound effects. To properly mix a sound track means the sound needs to be given texture to achieve the right psycho-acoustic effect, much like mixing a hit record. Getting the most out of your track means having some familiarity with the tools of the trade. Here are a few of the basics.

Compressor / limiters. This family of devices reduces the dynamic range, holding down the peaks and sometimes bringing up the quiet spots. Reduced range makes audio sound louder, hence the complaints about commercials being louder than TV shows, since commercial announcer tracks are usually more compressed than TV show dialogue tracks.
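
The behavior is easy to express as a static gain curve. This is a simplified illustration - the threshold and ratio values are arbitrary examples, and real units add attack and release timing on top of this.

    # Static compressor gain curve: below the threshold the signal passes
    # unchanged; above it, output rises at only 1/ratio the input rate.
    # Threshold and ratio values here are arbitrary examples.
    def compressed_level_db(input_db, threshold_db=-20.0, ratio=4.0):
        if input_db <= threshold_db:
            return input_db
        return threshold_db + (input_db - threshold_db) / ratio

    print(compressed_level_db(-2.0))    # -15.5: a hot peak pulled down
    print(compressed_level_db(-30.0))   # -30.0: quiet passage untouched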

Equalization. In general, equalizers act like the bass and treble controls on your hi-fi system. Some let you affect general ranges (low, mid, high) while others let you select specific frequencies. You can then boost or cut the volume of the sound in the area of those frequencies.

Echo / reverb / delay / chorus. This group alters the apparent spatial relationships of sounds. They are similar, but each has a different effect. Echo adds more of a bounce to the sound, like yelling in a canyon. Reverb is more like the "liveness" of a large room, such as a hall or church. This "slapping" effect varies with the size of the room and sounds different than echo. Delay adds a repeating pattern to a sound which decays over time. Chorus is used to add a "multiplier" effect to sounds. If used on a singer, it sounds like the same person singing identically along with themselves. Delay and chorus are used most frequently with musical instruments.
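
As an illustration of the delay family, here is a minimal feedback delay sketch: each repeat returns at a fraction of the previous level, giving the decaying pattern described above. The sample rate, delay time and feedback amount are arbitrary example values.

    # Minimal feedback delay: each echo returns at 'feedback' times the
    # level of the one before it, so the repeats decay over time.
    def delay_effect(samples, sample_rate=48000, delay_ms=250, feedback=0.4):
        d = int(sample_rate * delay_ms / 1000)   # delay length in samples
        out = list(samples)
        for n in range(d, len(out)):
            out[n] += feedback * out[n - d]      # add the decaying repeat
        return out

    # A single impulse produces a train of fading echoes:
    echoes = delay_effect([1.0] + [0.0] * 48000)
    print(echoes[12000], echoes[24000], echoes[36000])   # 0.4, 0.16, 0.064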

Harmonizers are used to shift the pitch of sounds. They can be used to bring slightly out of tune performances back in tune, but can also be used for various special effects.

Flanging. This effect originated with dragging a thumb on the reel flange of an open-reel audio deck to slow down an audio track that was playing in sync with a copy of itself on another machine. The two audio tracks speeding or slowing in relation to each other caused a phasing effect, as some frequencies canceled each other out momentarily. Electronics can now recreate the effect without the open-reel decks.

Rotary speaker ("Leslie") effect. This is an electronic effect used to duplicate the sound of a Leslie-style speaker cabinet frequently used with Hammond organs. This cabinet uses a rotating horn speaker to cause a "fluttering" effect to the output of the sound, much like playing audio through an electric fan. "Leslie" effects are used most effectively with musical instruments.

Although there are many other devices that a mixer has in his or her bag-o-tricks, these are some of the basics that you might consider using on your next audio track to spice up the mix. Maybe you'll even have a "hit" on your hands!

70. Blue and Green Screen Composites

Compositing live action over other backgrounds is one of the tricks that makes TV magical to the viewer. In the TV world this started out as standard chromakeying (the weatherman over the map) and in film as matting (as in cockpit shots). A decade or more ago the Ultimatte Corporation created its process, which duplicated the film matting techniques in an electronic manner. The basic principle in all of these techniques is to shoot talent or objects over a colored background, which is then replaced by other images during the compositing process.

Several factors are involved in creating realistic-appearing composites. The first is lighting. This is the most important part of shooting your talent over a keyable background. Not only must the background be lit uniformly, but the lighting on the talent must match the natural lighting values as if the talent were really in the scene to start with. Shadows and highlights must come from the proper directions and be of the correct intensity. The background can be either green or blue and should be painted with a custom color recommended by Ultimatte, although compatible felt coverings are also usable. Green should be used if the foreground talent is brightly lit, whereas blue should be used under more normal or darker lighting conditions. Often the most realistic lighting is achieved when the foreground and screen are photographed outside under exterior lighting (sunlight plus other lighting)!

Pay careful attention to the video levels of the background. Peak video should be about 50IRE to 60IRE, and peak-to-peak chroma saturation should be as close to 40 units as possible. Be careful with the foreground, too. Obviously the foreground image should not contain the same (or a close) color, but you should also stay away from whites, yellows (with a green background) and metallics. Unless dark lighting is desired, the foreground talent should be lit more brightly than the screen behind, but be careful not to clip any of the video. Stay away from peaked levels, such as "clipping" whites, as might happen on the collar of a talent's white dress shirt. If at all possible, there should be no contamination of the background screen color onto the foreground object, which means that the talent should be as far away from the background as possible. Green is more reflective than blue, so this is even more of a concern with green screen backgrounds. Floors present an especially difficult challenge.

The recording method will also dictate success. Always use a component recording device, such as a Betacam-SP camcorder. Digital recording, such as a Digital Betacam camcorder, will help, but is not essential. The higher the signal-to-noise ratio, the better, so under darker lighting, Digital Betacam recording might give you a better "edge". If your production is on film, you should shoot 35MM and transfer with pin-registration from the negative to a component digital medium, such as Digital Betacam or D1. If video is your final product, 30FPS will give you better results than 24FPS.

The composite can be made using various techniques. Most options (hardware and software) give you "handles" to control removal of the background color (blue or green), reduce contamination (spill of the color onto the talent), control the opacity of the composite, and deal with specular highlights and shadows (such as in keying glass or smoke). Ultimatte Corp. has made many standalone units, from the original Ultimatte 4 to the current Ultimatte 8. The Ultimatte 4s and 5s are best used when composites are created during production, while the 6s, 7s and 8s may be used in post. Nearly all modern digital switchers use chromakeying circuitry which rivals and may even surpass the results you can achieve with the Ultimatte systems. Many nonlinear edit systems and graphics workstations use software solutions to achieve the same results. Some are proprietary and some use Ultimatte's plug-in software.

Remember that nearly any modern solution offered will work, but if the production was done incorrectly, no amount of money will make the shot look right. Just take a look at Waterworld, where you could see some of the best ("Exxon Valdez" composites) and worst (hot air balloon composites) - all in the same, very expensive movie!

71. Digital Television

Over the next decade all US television broadcasts will switch over from the NTSC signal we know to what is called Digital Television (DTV). This is by FCC mandate, and the ten-year period is intended to give broadcasters and consumers a transition period. During that time television stations will simulcast NTSC and DTV signals. At the end of the decade there will be no more NTSC, meaning your TV sets will either be obsolete or, at best, require some type of smart set-top converter in order to view television signals.

DTV is by no means High Definition Television (HDTV). In fact the DTV standard is a transmission standard which encompasses a series of different allowable formats within that carrier. This ranges from a variation of what we currently have all the way up to, and beyond, what we currently recognize as HDTV. These formats include various possible frame rates, screen ratios, screen resolutions and both interlaced and progressive scanned images. DTV is the "pipe" and it will be up to the "smart" DTV receiver at the end of the "pipe" to decode the correct information sent to it. Since the consumer manufacturers will most likely produce TV sets and external decoders which will handle nearly all the potentially popular formats, this next level of "format wars" should be transparent to the consumer.

It is important to understand that the implementation of the DTV standard will affect all parties, especially post facilities and broadcasters. This will be a financial impact that becomes increasingly costly the higher the image resolution that is expected. The nature of the DTV signal is that a broadcaster can, in fact, transmit more than one signal within the bandwidth of the station's frequency if the current image resolution is maintained. If an HDTV signal is transmitted, only one signal can be sent. Because these upgrades will be costly, many broadcasters will be eager for the additional revenue streams that multiple channels would offer. Unfortunately this is as much a political decision as a technical and fiscal one. Congress has recently raised its eyebrows at the suggestion of using the DTV signals in ways that wouldn't result in HDTV (or at least vastly improved) television images. Broadcasters may be forced into transmitting HDTV without significant revenue to offset the investment.

In subjective testing that has been done, viewers couldn't see much quality difference between current signals and DTV/HDTV signals until the screen size was increased to something bigger than the average 20" set. What was noticeable was the aspect ratio difference when changing from 4x3 to a wider 16x9. Analogies have been made to the consumer success of the CD over the LP and color TV over black and white. I'm not so sure that is the case and would suggest that the better analogy might be to AM stereo radio or stereo TV broadcasts. AM stereo never really took off because of the lack of a single standard to which all stations and manufacturers adhered. TV stereo demonstrates lackluster performance at best, and TV stations that broadcast synthesized stereo (from mono sources) don't fare any worse for it than stations that are truly operating stereo plants.

At the start of this transition, little is known, but it seems obvious that the major studios and networks are all going in different directions. Post facilities will probably be caught in the middle - pushed by clients to move towards HDTV while not being able to charge the premium rates needed to get there. For the time being (3 years?), hardware probably won't change radically, but

try to protect your projects by future-proofing as much as possible. This can take many forms, but here are some choices: shoot in Super35MM film; shoot and post in HDTV; shoot and post in Digital Betacam (at 16x9 aspect); shoot in BetaSP and post in Digital Betacam. "Upward" conversions of videotape masters from a current standard to a newer DTV/HDTV format will probably be a common and necessary evil in the next few years, so you've got some time. This is as much a case of conflicting standards as it is the fact that stations won't be able to retrofit their entire facilities. Stay tuned ... you ARE living in interesting times!

72. Budgeting Nonlinear Time

Many experienced producers are very comfortable in estimating the amount of time to budget and book for traditional production and post-production methods. Shooting, film transfer and tape-based editing all have fairly predictable yardsticks for the length of time a particular project should take. When these same producers get into nonlinear edit sessions, they frequently underestimate the amount of time required to complete the program. In this article, I'll try to put forth some ideas to help.

Nonlinear editing can be used for offline and online editing, regardless of the workstation used - Avid, Stratasphere, Media 100, etc. The steps are the same. First, footage must be logged and digitized (loaded onto the unit's hard drive storage), then the show is edited and finally recorded to the master tape. The time required for digitizing is usually overlooked. Because the material is being logged and reviewed as part of this step, the digitizing phase should take about 1 1/2 to twice the length of the material to be loaded. Five hours of camera footage will take a day or more to digitize. If the same unit is used for both offline and online editing, frequently the offline portion is done at a lower resolution in order to be more efficient with the amount of hard drive space available. Then for the online phase, the footage is re-digitized at a higher resolution for final quality. This doesn't take as long as the first time, since only the footage used in the final cut needs to be loaded, but the amount of time is comparable to an auto-assembly in a linear bay.
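As a planning aid, the 1 1/2-to-2-times rule of thumb above reduces to a few lines of arithmetic. The Python sketch below is purely illustrative; the factor values are assumptions drawn from that guideline, not fixed specs.

```python
# Rough digitizing-time estimator based on the 1.5x to 2x rule of thumb
# described above. The factors are assumptions, not hard numbers.

def digitize_hours(footage_hours, factor):
    """Estimate loading time as a multiple of the raw footage length."""
    return footage_hours * factor

for footage in (2, 5, 10):
    low, high = digitize_hours(footage, 1.5), digitize_hours(footage, 2.0)
    print(f"{footage} hours of footage: budget {low:.1f} to {high:.1f} hours")
```

Five hours of footage thus lands at 7.5 to 10 hours of loading, which matches the "day or more" figure quoted above once logging and tape changes are included.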

In nonlinear editing, the editing session itself can progress quickly. Piecing together a "cuts-and-dissolves" show is quick until you get to effects and graphics. Since these are usually done in software, unlike the dedicated hardware found in a linear bay, DVE moves, titles, etc. require a certain amount of rendering time to complete. Some things, like a rolling credit list, are usually not worth the effort! If the editor or artist is using a separate program, such as After Effects, for the effects segments, the time will increase again. In most cases, unless different systems are used, only one thing can be going on at a time.

If the nonlinear edit is only for offline and the online phase occurs in a tape-based linear edit bay, the editor still will need some time to prepare materials for the online editor. This can take from a few minutes to several hours. Usually the offline editor will generate an edit decision list (EDL) of the session to match the system used in the online bay. If the offline project involved layering and effects, the editor will usually have to create several EDLs to represent the various video layers. None of the effects information in the offline edit will translate as a readable data file into any of the online hardware, which means that the offline editor will have to spend additional time making notes and comments to explain what is needed to duplicate those effects in online. Sometimes

this will include visual aids, such as storyboards, a printout of the edited sequence's timeline, etc. The offline editor will usually record a videotape copy to take along so that the online editor can see what everyone is talking about. All in all, you should allow a day or so for this turnaround.

The bottom line in all of this is to understand that the value of nonlinear editing is in the flexibility gained, not in time saved. It will usually take longer from start to finish working in the nonlinear environment, but this is so that you can be sure you've explored the options. Just be sure to allow for that.

73. Editor Types

Just as in anything else, editors - the human kind - come in all types. Finding the best editor for the type of project you need to post will often determine your satisfaction with the results, as well as the ease with which the two of you work. This is independent of the issues of speed, knowledge of the equipment or other factors.

Editors, like graphic designers and animators, are artists. As artists, they don't all work well in every medium. To follow the metaphor, some artists are sculptors, some work in oils, some in other media. I classify editors into three types - storytellers, compositors and generalists. Pick the right one and you'll be happy - pick the wrong one and it might be a real struggle.

Storytellers. I would classify most film editors and long-form editors in this group. These are editors who work well with scripts and dialogue-based programs. They understand plot structure and like to use simple visual techniques to move the program along. They might work with film, linear tape or nonlinear workstations as their tool of choice. Digital effects moves and composites are not their forte, so their use of these techniques will tend to be a bit "off the mark" from those in the other groups. Storytellers are good for the "big picture".

Compositors. This group works best with effects. In using the analogy of the timeline, they tend to edit vertically rather than horizontally, concentrating on the density and finesse of effects scenes. These editors are great with visual effects, DVE moves and often even graphics and animation. Compositors usually have a good sense of graphic design and, in modern facilities, have gravitated away from traditional edit suites and towards devices like Flame and Illusion. Some compositors handle the story items well, but usually prefer to work on the small snippets, rather than the big picture. A good compositor can make an effects scene look and feel totally organic, rather than just an effect for its own sake.

Generalists. Editors in this category are "jacks-of-all-trades". They combine elements of both the storytellers and the compositors. This group tends to be best for shows that combine a bit of everything - story, effects, graphics and so on. Generalists are good for music videos, too. They may not have a cutting-edge design sense, but they'll usually give a project a fresh look and get the job done with good speed and quality. Generalists are often the most savvy of the three groups when it comes to technical issues. They have touched a bit of everything and know how to get from point A to B.

So how do you know? A good place to start is to judge by the reputation of the facility you are using and to meet with the editor who is going to do the job. Look at reels and make your best judgment. If they've done similar things, they can do your project well. Sometimes you have to take a chance, but the results will usually be worth it when everyone approaches the job with an open mind and a can-do attitude.

74. Editing Tips For Storytellers

The talent for editing is something that cannot be taught. You can teach rules, techniques and tips, but you can never teach finesse and style - when the rules should be followed and when they can be broken. When I edit a program or commercial, I do it as much by "what feels right" as by other factors. That difference is often as simple as trimming a few frames one way or another.

Corporate programs for training and image are often dialogue-based and, as such, are structured in the same way as entertainment programs using classic film-making techniques. In this type of editing, video effects, such as DVE moves, are often very counter-productive and a simple cut can have the most impact. Here are some stylistic ideas to incorporate into these programs.

Most dialogue-based shows rely on cuts and dissolves for editing. Dissolves, used in a classic style, are usually applied as transitions to show a passage of time. Dissolves can also be used to smooth out visual montages, musical sections and so on. Another favorite transitional device is to use an establishing shot, such as a building exterior, to indicate a change in location. Building exteriors at a different time of day can indicate a passage of time and a new location. Usually these establishing shots are tied to an adjacent scene with a dissolve, but often a cut is just as effective.

An effective use of a cut as a transitional device is to make an abrupt edit from one scene to another in which the second shot starts with something impactful, such as a woman screaming or another loud accompanying noise. This type of shock value isn't necessary, though, to be totally effective. A cut to a moving close-up will also work. These techniques are also used most often to convey a passage of time or a change of place.

When shooting dialogue scenes, the general procedure in film-style production is to film (or tape) the entire scene once through with a wide framing. This is your "master" shot. Then the camera is reframed and various portions of the scene are filmed again with tighter shots of one or more actors at a time. The job of the editor is to piece all of these takes together to make a complete scene that effectively delivers the script points and gets the best performance from each actor. A well-edited scene should feel totally seamless to the viewer and should lead you through the shots that your mind is expecting. In a perfectly edited scene, the editing should be "invisible".

With dialogue, though, the most correct "technical" edit is often not the one that feels natural. For instance, when an audio/video edit is made from one shot at the exact end of an actor's line to the next shot at the beginning of the next line, it usually doesn't feel right. Skilled editors usually employ overlapping edits (also called L-cuts or split edits), where the audio and video edits don't occur at the same time. Sometimes video will precede or follow the audio edit by a few frames, a word or

two, or even part of the entire dialogue line. Sometimes an entire line will be heard "off camera" and over the reaction of another actor. All these techniques are used because they vary the pace and they feel natural. When and how to overlap an edit cannot be explained and is the difference that separates the average editor from the craftsman.

To round out this list of classic film editing techniques, I would also suggest editing with "body wipes" and editing between moving shots. In the latter, the intent is to make an edit that is easy on the eyes. Editing from one camera move to another, without either stopping, is a good way to lead the viewer through a scene. Obviously it helps if the relative speeds of the camera moves are the same. This gives the mind the feeling that the two shots are really part of the same move. The same is true of "body wipes". In this technique, the editor will use an element in the picture that blocks the screen, such as a person walking through the shot close to the camera, as a point at which to make the edit. If such an item occurs in both shots and can be used for both the outgoing and the incoming edit points, the cut will be totally seamless.

Practice these crafts on your next show and see if simpler is truly better. After all, 100 years of perfecting a craft does create results!

75. Getting the Most Out of Compression

Most nonlinear edit systems use a compression scheme which is that brand's own proprietary variation of the JPEG algorithm - originally designed for the data compression of still pictures. These are generically referred to as motion-JPEG or M-JPEG. The more compressed an image is, the more it looks like it is made up of blocks. These artifacts are a result of the algorithm trying to create an image out of little actual information. The more visible the artifacts or blockiness, the more real loss there is to actual picture information. Compression values are usually rated in terms of file size specs (kilobytes/frame). 600KB/frame is said to be uncompressed, 300KB is 2:1 compressed, 150KB is 4:1 and so on.
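Those file-size ratings reduce to simple division against the 600KB/frame reference. A minimal sketch, using only the figures quoted above:

```python
# Compression ratio from a KB-per-frame spec, using the article's
# 600 KB/frame uncompressed reference for 30 fps NTSC video.

UNCOMPRESSED_KB = 600

def ratio(kb_per_frame):
    return UNCOMPRESSED_KB / kb_per_frame

for kb in (600, 300, 150, 75):
    mb_per_sec = kb * 30 / 1000  # storage cost at 30 frames/second
    print(f"{kb} KB/frame = {ratio(kb):.0f}:1, about {mb_per_sec:.0f} MB/sec")
```

The same arithmetic shows why drives fill quickly: even at 2:1, a single minute of video consumes over 500MB of storage.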

Unfortunately you can't judge image quality purely based on these numbers. A lot goes into getting the best results out of image compression. For instance, Avid products use a variable compression scheme on the Mac platform, which means that AVR-77, at a maximum quality of 300KB (2:1), actually applies the least amount of compression to the most complex scenes (more detail in the image) and the most to the simplest. As a result not all images will show an equal amount of (or lack of) blocky-looking artifacts. This has been a trade-off in order to get the most efficient storage of media onto hard drives.

The amount of artifacts produced also depends on the amount of noise in the incoming image. The more video noise from the tape, the more apparent detail the nonlinear system is trying to deal with. The cleaner the image is to start with, the better the compressed results. Here are some tips for getting the best out of your system.

Record on the cleanest format. This would mean a component digital tape format, such as Digital Betacam, as a first choice. Beta-SP, Beta-SX, DVCam or DVCPro would follow that. D2, D3 and 1", which are high-end composite digital and analog formats, would be in the middle, and ¾" and

VHS last, of course. Playback will also affect this. Playing an analog Beta-SP recording back on a Digital Betacam deck (with the analog option) gives you better-looking results than playing the same tape on a Beta-SP deck.

Note, though, that as yet we don't know a lot about the artifacts created by mixing different compression schemes. For instance, DV recordings might result in some odd artifacts from the file conversion into M-JPEG. This is why most NLE manufacturers are developing native DV-based systems for facilities, such as news organizations, working exclusively in those formats.

The input to the nonlinear system should be either a component digital or a component analog signal path. Your NLE's native signal processing is component digital, so anything you can do to avoid its front-end processing electronics (internal analog-to-digital conversion or composite-to-component decoding) will result in an image with fewer artifacts. If your system is capable and you have the right VTRs, you should digitize footage using a serial digital input. This means a nonlinear editor equipped with one of the newest Targa video boards and a Digital Betacam or DVCPro playback deck. This signal is closest to the native format and uses fewer conversions in the signal path. The next best choice is to connect the deck and the input via YUV connections (Beta-SP's Y, R-Y, B-Y format) so the signal stays in component analog. If these aren't available, you can use S-Video connections if your deck and editor are so equipped. The last choice is composite analog.

The same is true of the path out of the editor when laying back to the master VTR. Digital Betacam recordings digitized into an Avid at AVR-77 and then mastered back to Digital Betacam using only serial digital signals yield results that are almost totally transparent. Conversely, digitizing and mastering the same material using the composite analog connections will yield results that are worse than the same type of composite analog signal path through a composite analog (linear) edit suite.

Working with nonlinear edit systems is still very much a learning experience for many editors. We are all still very early in the era where data compression is an accepted high-quality choice. Learning how to get the most out of these systems will help maintain the quality clients expect - coupled with the ease of operation that nonlinear editing offers.

76. More Editing Tips

How you prepare a master when you are editing will determine how easy it is to change or update a project down the road. Face it - all projects go through various changes. Whether you edit in a linear or nonlinear suite, these tips can help you.

I am a proponent of editing a split-track, textless submaster as the first pass. On the second pass, this tape is mixed and supers are added to create a second tape - the final master. In a linear suite, especially in a digital environment, this makes it easy to tighten a show as well as make changes, such as re-arranging sections, without image degradation. If you have a change to make in a graphic, you always have the clean submaster tape, which is a "base" without graphics over it. In a nonlinear system, the same can be accomplished in the way you organize your various

sequences. This creates a "virtual" submaster, if you will. These can be recorded to tape, as can the final master. Alternate versions, such as foreign-language versions, are also easily created.

Audio is an area that many editors don't deal with very well. This is either because they have generally only edited picture to completely mixed tracks, or because most of the audio is cleaned up and mixed by an audio mixer in an audio post suite. When editing audio, try to use the tracks you have available to you. Most current tape formats utilize four audio tracks, so you should try to organize your material into four tracks if possible. If properly edited, you can create a submaster tape with "split" or separated tracks that can then be easily mixed onto another tape. When editing audio on most corporate presentations or feature programs, you will have the following elements: on-camera dialogue (sound bites), voice-over audio, on-camera sound (under any voice-overs), natural sound, music and additional sound effects. To make mixing easy, I usually like to edit voice-overs on track 1, on-camera and nat sound on track 2, SFX on track 3 and music on track 4.

When I edit these, I try to adjust the levels as I record them to separate tracks so that, when summed, they are already mixed. It is helpful to be able to monitor these tracks in a mono or summed speaker configuration in order to get a proper mix level. This has the advantage of making it easy to change elements later, such as replacing the language of the voice-over from English to Spanish, for instance. If you are trying to keep the show in stereo, though, 8 tracks might be the minimum. This would require a separate audio deck like a DA-88, or working in a nonlinear system with 8 or more virtual tracks.
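For editors who like checklists, the four-track split described above can be written down as data and printed with a session worksheet. A trivial sketch (the labels are mine, matching the text):

```python
# The four-track split layout described above, encoded as data.
# Track assignments follow the article; the labels are illustrative.

TRACK_LAYOUT = {
    1: "voice-over",
    2: "on-camera dialogue and natural sound",
    3: "sound effects",
    4: "music",
}

for track in sorted(TRACK_LAYOUT):
    print(f"Audio track {track}: {TRACK_LAYOUT[track]}")
```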

Adopting these techniques makes life easy in an edit suite, and you'll certainly thank yourself for doing so when changes are called for a few months down the road.

77. DTV In Your Future

Digital Television is set to start broadcasting later this year, with a transition out of NTSC by the end of the decade. This may be the most significant technological change the industry sees in our lifetime. Unfortunately, no one can yet define what DTV is going to be like. It seems pretty sure that a wider aspect (16x9) will be adopted, but beyond that nothing is really clear. DTV has pitted the computer manufacturers against the consumer electronics companies, though recently Intel announced that it would support all standards. The current table of production standards that are possible and could be transmitted as part of a DTV signal includes about 30 variations. What are the issues?

Traditional TV transmission and display have used interlaced scanning, in which each field (2 per frame) includes half the lines of the television signal. One field has the odd-numbered lines, the other the even - together they make up the complete information for a whole frame. Computers have used a progressive scan technique for monitors, meaning that each vertical scanning of the tube displays all the information that makes up a complete frame. Computer monitors scan at different frequencies, usually faster than 60Hz, which is why you get a scan bar in the picture when you try to shoot a computer monitor with a video camera.

Digital TV signals and computer displays also differ in pixel shape. Computers use square pixels while TV uses rectangular pixels. This is why round images on a computer appear oval on TV unless the size is correctly converted. Finally, TV uses a different color space than computers. Computers work in true RGB values without a separate luminance signal, while TV works in YUV, or 4:2:2, color space, in which the signal is made up of luminance with the addition of color.
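The RGB-to-YUV relationship can be made concrete with the standard Rec. 601 equations used for digital video. A minimal sketch (coefficients are the published 601 values, rounded to three places; a real implementation would scale and clamp the results):

```python
# RGB to Y'CbCr conversion using the Rec. 601 coefficients behind the
# YUV / 4:2:2 color space mentioned above (full-range math for clarity).

def rgb_to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b          # luminance
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128    # blue color difference
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128    # red color difference
    return y, cb, cr

print(rgb_to_ycbcr(255, 255, 255))  # white: full luminance, chroma near 128
print(rgb_to_ycbcr(0, 0, 255))      # pure blue: low luminance, high Cb
```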

These issues make it difficult to move between the two types of hardware, and explain why the computer industry pushed hard to adopt a DTV standard which would require the least effort to make their hardware handle a TV signal. As it turns out, the broadcasters got a more friendly, though confusing, series of possible standards. It is hard to say which will become the marketplace favorite. The interlaced formats offer better temporal resolution at a lower cost point, while the progressive scan formats are better for text, graphics and fine detail. It seems that there is also no clear-cut advantage yet to linear or nonlinear as it relates to DTV, since any DTV signal will be a compressed image anyway. Compression is a function of number-crunching, and most of the computer platform makers and manufacturers of accelerator cards are hard at work developing products with the speed and bandwidth to deal with DTV signals.

At the moment it seems like the most likely candidates for DTV in the near future are either 480p (480 lines/progressive scan), 1080i (1080 lines/interlaced scan) or possibly 720p (720 lines/progressive). All seem to be doable with upcoming versions of known technology. The best possible image - 1080 lines, progressively scanned - is probably not readily achievable, at least not within a cost-effective range. The de facto tape format seems to be the two variations of the D-5 recorder which Panasonic has been marketing. So stand by and fasten your seat belts - it's going to be a bumpy ride!

78. What is 24sF and do you need it?

In the whole current transition from NTSC television to Digital Television (DTV) a lot of confusion has arisen over the various High Definition TV formats - which may or may not become the production/post-production methods used to generate programming for DTV. In this brave new world, there won’t be one standard, but many. The differences revolve around image size, aspect ratio, frame rate and whether the picture is progressively scanned (like a computer display) or interlaced (alternating fields, like current TV displays). The numerical “soup” includes such cryptic figures as 480i, 480p, 720p and 1080i.

Since the major networks have all picked different and competing HDTV formats for program delivery - to the consternation of post houses and producers alike - popular sentiment has been raised for the use of 24 fps (frames per second) as a common denominator. This matches film production and much of prime time network programming is originated on film. The experts all agree that progressive scan is preferable from a quality standpoint, if the frame rate is high enough to render good temporal changes for action footage, such as sports. The “Holy Grail” is 1080 lines of vertical resolution scanned progressively at 60 fps. Unfortunately, this is costly and the technology is several years away at best. So the camps have been divided into 720p (“it must be progressive scan”) and 1080i (“HD interlaced looks fine and is good enough”), with some

minority sentiments for other variations. Now come plans by various major broadcast manufacturers such as Sony and Panasonic to offer equipment that can also function in a 1080/24p mode. This is very doable with the current equipment designs. More interestingly, Sony has suggested a variant called “segmented frame” progressive, identified as 24sF.

The inherent nature of photography is, in effect, progressive “scanning”; CCDs in video cameras scan progressively, as do CCDs and CRTs in telecines - but the output is converted to interlaced scanning for video. With Sony’s scheme the progressive frame is subdivided into two fields with odd and even lines, resulting in a frequency of 48 fields per second (versus 60 in interlaced NTSC or HDTV). This is not, however, interlaced, because the image starts out as one progressive frame (from film, video or computer); the two segments are handled separately and combined at the end into a progressive frame again, without any interlace artifacts. The advantage is that a 1080/24p image can be easily converted to 1080i and/or 720p using conversion and pulldown techniques similar to those used in film transfer technology. Additionally, with the segmented frame approach, the same hardware can handle 1080/30i and 1080/24p material, simply by switching (or rebooting) equipment between one standard and the other. This is ideal for manufacturer and post house alike, since post suites are not limited to one genre of material (like film-originated TV programs) or the mutually exclusive delivery requirements of different clients.
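The segmented-frame idea is easy to demonstrate: split one progressive frame into odd- and even-line segments for transport, then recombine them losslessly. A toy sketch, treating a frame as a list of scan lines:

```python
# Segmented-frame round trip: one progressive frame is carried as two
# line segments (24 frames/sec -> 48 segments/sec) and reassembled with
# no interlace artifacts, since both halves come from the same instant.

def segment(frame):
    return frame[0::2], frame[1::2]         # odd lines, even lines

def reassemble(odd, even):
    frame = []
    for pair in zip(odd, even):
        frame.extend(pair)
    return frame

frame = [f"line {n}" for n in range(1, 9)]   # a tiny 8-line "frame"
assert reassemble(*segment(frame)) == frame  # the round trip is lossless
```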

The natural extension of this is a 24fps video camera to shoot with a “film look”. The goal of camera makers is to entice producers to move away from 35mm film and towards 24p video, both for television shows and eventually the production of feature films for theatrical release. Switchable 24p/30i equipment should be available this year at a cost not too much higher than first generation Digital Betacam machines. At that price of admission, HDTV production and post is here and now for all applications in the video world. If you are a low-budget filmmaker, you can count on shooting and posting in 24p video, recording out to film and having a high-quality 35mm release print at a fraction of the production and post cost associated with traditional 35mm film methods. Maybe the “brave new world” is looking pretty cool after all!

79. DV, DVCam, DVCPro

In the middle of the whole professional transition from analog to digital formats has come the DV format. DV is a digital consumer format endorsed by most of the VTR and video camera manufacturers. DV uses a high compression ratio, records on metal-evaporated tape stock and has spawned several popular consumer cameras in the $4,000 range which have garnered the respect of Betacam-toting professional videographers. All consumer-level DV recordings are interchangeable. Both Sony and Panasonic have manufactured professional “spin-off” formats to DV, which are not necessarily compatible with each other. JVC and Philips also market their own versions of the Panasonic machines.

Sony’s DVCam and Panasonic’s DVCPro take slightly different approaches to providing professional DV equipment. These differences include differing video track widths (Sony matches the consumer configuration - Panasonic is wider), tape stock types and features. Without getting into the particulars, each method has its pros and cons, but a key difference is that Sony’s DVCam uses a metal-evaporated tape (like the consumer version), while Panasonic’s DVCPro

uses metal particle tape (like Digital Betacam). MP tape is considered more robust for editing and less drop-out prone; therefore, Panasonic has extended their DVCPro product line into more studio versions, such as full-featured editing decks.

Both DVCam and DVCPro can play consumer DV tape (some units require an adapter to accommodate the mini-DV cassette size), while the Panasonic VTRs can also play the Sony DVCam format. This year Sony will also introduce VTRs which will play the current (25 Mbps) DVCPro tapes (in addition to DV) and will also offer preread editing capabilities in DVCam format.

Panasonic has structured its product line into three tiers: DVCPro (25), DVCPro50 and DVCPro100. The numbers refer to the data rates of each version. The current DVCPro data rate is 25 megabits per second (Mbps), while DVCPro50 doubles this, allowing for 4:2:2 image processing, the same as D1, D5 and Digital Betacam. Current DVCam and DVCPro25 units use (and will continue with) 4:1:1 processing. This makes DVCPro50 better for higher end applications like image compositing, blue screen effects, etc. DVCPro100 is Panasonic’s HDTV variation. These machines would permit DVCPro to function in HD modes such as 1080i and 720p. With DVCPro100 a current 126 minute tape would yield about 45 minutes of recording time.
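Those data rates translate directly into storage and tape-time arithmetic. A back-of-envelope sketch (the rates are the ones quoted above; the naive scaling at the end ignores any change in how the tape is used, so treat it as illustration only):

```python
# Storage math for the DVCPro family's quoted video data rates.

RATES_MBPS = {"DVCPro": 25, "DVCPro50": 50, "DVCPro100": 100}

for fmt, mbps in RATES_MBPS.items():
    gb_per_hour = mbps / 8 * 3600 / 1000   # Mbps -> MB/sec -> GB/hour
    print(f"{fmt}: {mbps} Mbps, roughly {gb_per_hour:.0f} GB per hour")

# Naive bit-budget view: a tape running 126 minutes at 25 Mbps holds a
# fixed number of bits, so 100 Mbps would cut that to about 31 minutes.
# The ~45 minutes quoted above implies the HD machines also change the
# recording geometry, so this is only a rough sanity check.
print(126 * 25 / 100, "minutes")
```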

Another interesting byproduct of DV technology is that, since the DVCPro and DVCam compression ratio is 5:1, the hardware has been licensed to other manufacturers for HDTV applications. This is the basic method used to make a D5 VTR (Panasonic’s uncompressed 1/2” digital VTR format) into an HDTV recorder, and it is also the method Pluto has used to turn their digital disk recorders into HDTV versions.

If you plan to stay in the lower end of the DV product spectrum, then DV, DVCam and DVCPro will provide you with a great feature set and quality superior to previous analog formats. The exact version and model depend on budget and specific features, but even at the consumer DV level, you can get a switchable 16x9 aspect ratio, digital audio tracks, image stabilization, etc. That is why even veteran videographers have picked up at least one of these to add to their arsenal.

80. Importing Graphics

Since the move to desktop systems for the creation of television graphics, it has become fairly common for a client to bring logos and other graphics to a session which were created by a non-broadcast design company. It is easy to incorporate these into edit sessions with any number of popular nonlinear editing applications - if you know how to format the graphics correctly for video.

Video pixels are rectangular, while computer pixels are square. As a result, when a computer graphic file is imported into a video application, an aspect ratio change occurs and round items become elliptical. This adjustment should be anticipated in the creation of the graphic. As a general rule, editing systems operate with an image size of 720 x 486 pixels (NTSC) - rectangular pixels. If you are creating a Photoshop file for use in an Avid, for instance, you should create the file in a size of 720 x 540. Build all images as you normally would and then when you are finished,

change the image size to 720 x 486 (with the “constrain proportions” button off). The image will now appear squashed. When you import the file into an Avid, the change from square to rectangular pixels will correct the “squashed” look in the video world.
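The same square-to-rectangular prep can be scripted. Here is a minimal sketch using the Pillow imaging library (an assumption on my part - any image tool with a non-proportional resize will do; the file names are placeholders):

```python
# Resize a graphic built on square pixels (720 x 540) down to the NTSC
# frame size (720 x 486); the vertical squash is undone on import by
# the editing system's rectangular pixels.

from PIL import Image

art = Image.open("logo_720x540.png")        # artwork designed at 720 x 540
assert art.size == (720, 540), "build the artwork at 720 x 540 first"

video_ready = art.resize((720, 486), Image.LANCZOS)
video_ready.save("logo_720x486.png")        # import this file into the NLE
```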

Screen resolution is also a limiting factor. Video operates at an equivalent image resolution of 72dpi. You can scan in images and work in higher print-oriented resolutions (150dpi, 300dpi, etc.), but the final image must be converted to 72dpi at the above sizes.

Video also operates in a different color space than do computer graphic files. Computers display images based on equal values of red, green and blue. Video works on a luminance value plus color. This is the so-called CCIR-601 color gamut of digital video. In addition, computer monitors and video monitors have different gamma values, so the contrast of shots will look quite different on the two media. It may often take a little trial-and-error to get the best results. In general, when working in Photoshop, you should work with the SMPTE-C color preference and possibly apply the “legal colors” setting in the video filter.

Because of the nature of computer monitors (progressive scan) versus video monitors (interlaced scan), fine detail, such as detailed texture files and thin lines, often appear to “vibrate” once imported to video. Graphic designs should use bolder type and fatter lines. Often textures and images that look very crisp within the graphic application need to be slightly blurred, in order to look more pleasing in video. Again, a bit of experimentation may be needed.

In designing a broadcast graphic in Photoshop or the like, be mindful of the “safe title” area. This accounts for the portion of the image at the edges that is lost, due to the transmission path and also to the construction of consumer TV sets. Assume that 10% - 15% around the edge of an image may not be visible on a lot of sets.
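That 10% - 15% guideline is easy to turn into a concrete pixel box for a 720 x 486 frame. A small sketch (the inset percentage is a parameter, per the range above):

```python
# Compute a "safe title" rectangle by insetting each edge of the frame.

def safe_area(width, height, inset=0.10):
    x0, y0 = int(width * inset), int(height * inset)
    return (x0, y0, width - x0, height - y0)  # left, top, right, bottom

print(safe_area(720, 486, 0.10))  # (72, 48, 648, 438): keep titles inside
print(safe_area(720, 486, 0.15))  # the conservative end of the guideline
```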

Be sure to construct the graphic file correctly. If you work in Photoshop creating a layered image, this should be “flattened” when you are done, since most editing systems will not be able to do anything with the layer information, not to mention that “flattening” reduces the file size. If there is an alpha channel associated with the graphic - a high contrast black-and-white image used for keying - some systems will pick this up and others won’t. If you don’t know, then a separate file should be created with only the alpha channel as the image information in the file.

Keeping these rules in mind will let you create, modify and import nearly any type of graphic image and use it inside of a video editing application.

81. HDTV, SDTV and 16 x 9

One of the most striking things that nearly every consumer will notice about the new digital television standards which broadcasters are adopting is that they have a wider aspect ratio. Regardless of actual image resolution, consumers CAN see the wider screen. Up until now, TV standards around the world have all used a 4 x 3 (1.33:1) aspect ratio. This stems from the fact

that in early NTSC development, the lead was taken from film, where the actual 35mm negative frame size is this same ratio.

In the development of HDTV, a wider aspect ratio was sought, because it is largely accepted that a wider view is more natural to the eye. Most theatrical film projects are shot with a wider aspect ratio in mind - regardless of the actual frame size of the exposed negative. At first, during analog development, 5 x 3 was chosen, which eventually changed to 16 x 9 during the conversion to digital technology. In film terms, this equates to a 1.77:1 ratio, a compromise with the 1.85:1 ratio used for the projection of most films in the US. 16 x 9 will accommodate most of the image area intended by the directors of most theatrical films.

The 16 x 9 ratio has been accepted by manufacturers and integrated into most of the video camera equipment being produced for current, standard definition video equipment (SDTV) and is the basis of all of the new high definition TV formats. Current NTSC cameras which offer 16 x 9 are usually switchable between 4 x 3 and 16 x 9. This is a function of the camera’s optic block, if you purchased this option. It is also a standard feature of many consumer-grade DV camcorders. The actual image recorded onto the tape, however, is a “squeezed” 4 x 3 frame. In this manner, a 16 x 9 image can be recorded and edited with all standard editing equipment and VTRs, even down to VHS.

In order to view 16 x 9 properly, you must have a monitor with 16 x 9 capabilities, which usually means that the monitor switches to a letterbox mode, displaying the correct view of the wider aspect ratio. The newer versions of many nonlinear editing applications also permit you to work in a 16 x 9 mode, so the aspect ratio is corrected on your editing display. The thing to remember, though, is that if you shoot 16 x 9 images today with a standard definition camera, you can only use the material in one of two ways when the end result is to be viewed on current 4 x 3 consumer TV sets. Either the material must be completed and displayed in a letterbox fashion; or, the image must be expanded, cropped and possibly panned and scanned with some resultant loss of image quality.
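For the letterbox route, the arithmetic is worth seeing once. Because the ratio of 4 x 3 to 16 x 9 is exactly 3:4, a 16 x 9 picture shown full-width occupies three quarters of the 4 x 3 frame's height. A sketch for a 486-line NTSC frame:

```python
# Letterbox math: a 16x9 image shown full-width inside a 4x3 frame
# keeps 3/4 of the frame's active lines; the rest become black bars.

def letterbox(frame_lines, frame_aspect=4/3, image_aspect=16/9):
    picture = int(frame_lines * frame_aspect / image_aspect)  # 486 * 0.75
    bar = (frame_lines - picture) // 2
    return picture, bar

picture, bar = letterbox(486)
print(f"{picture} picture lines with ~{bar}-line bars")  # 364 lines, 61 bars
```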

In HDTV, the 16 x 9 aspect ratio is built in and is a normal mode for the equipment, so no image degradation occurs by working in a 16 x 9 mode. If you shoot with an HDTV camera and want to end up with a standard definition image (NTSC or PAL), you can either post in HDTV and downconvert the master, or downconvert the field tapes and post in the traditional manner. At the point of downconversion, you have the same choices for 4 x 3 mastering and viewing: a) a “squeezed” 4 x 3 full frame image; b) a letterbox image; or c) a full frame image which is cropped with some pan/scan repositioning. Sony and Panasonic offer built-in options for their HD VTRs which permit downconverted SDTV outputs in these modes. Because the HD image starts with so much more resolution than SDTV, even cropped, downconverted images often look superior to the same shots originating from a standard camera.

If you are shooting in 16 x 9 (HD or SDTV), then the cameraman must be careful to properly center the image information so that the content works for both 16 x 9 as well as 4 x 3 viewing. All areas of the frame must be “protected”, meaning that the whole shot must be useable. It would be a mistake to see something like a piece of production gear in the shot, assuming that it will be cropped out. If you are posting in HDTV 16 x 9 for viewing in SDTV 4 x 3 applications (cropped

and not letterboxed), then graphics become a concern. Graphics that are justified left or right or cover the screen edge-to-edge will be cropped. If this is a concern, either two versions will have to be edited, or a “textless” version in HDTV will have to be downconverted first, before adding titles to the standard definition 4 x 3 version.

Although this may now all seem confusing to professionals and consumers alike, it may offer many advantages in the future. For instance, full HDTV post in 16 x 9 makes it easy to convert the master to NTSC, PAL and even film. Yes, the choices become more complicated, but the tool kit of the video producer has been extended. And yes, the viewer WILL notice the difference.

82. Moving From Tape to Data - MPEG & DVD

For distribution of video material to customers, VHS has been king for many years. Now data distribution is the next method for sending programming to clients. This started with Quicktime files and CD-ROMs, but has moved on to MPEG and DVD. It has become increasingly common for companies to distribute their marketing and training material in disk format, as well as on VHS dubs.

First, let’s look at the media. CD-ROM and DVD disks can come in many flavors. The type of file you write is independent of the media transporting that file. CDs and DVDs are similar, in that they can be recorded as single copies with a standalone recorder, or they can be duplicated in mass through a replication service. If you own the recorder, the cost for single unit mastering is relatively cheap. The cost of CD recorders has dropped to the sub-$500 level, but not so for DVD, where recorders are still in the thousands.

CDs are a good medium for music, still frames (photos) and software applications, but they are not particularly good for large amounts of moving video, due to the amount of data and the throughput required of the CD player. DVDs were developed to make moving video on a disk a reality, but can also be used for music, photos and software. A CD stores data in one layer on one side of the disk and can hold 74 minutes of music or 650MB of data. DVDs come in various types, ranging from single layer / single sided (like a CD) to dual layer / dual sided. Data capacity ranges from around 4GB to 13GB. Dual layer / dual sided is the most expensive, so movie distributors typically use single layer / dual sided or dual layer / single sided at a capacity of around 8 to 9GB.

The files used to create CD-ROMs and DVDs can exist independent of the media itself. If you created video for a CD-ROM, this would typically be a Quicktime (Mac or PC) or AVI (PC) file. These files utilize various software compression methods (codecs) to reduce the file size and usually do not run well on the desktop, especially if they are formatted at anything close to full screen size. These files run better when played from the computer’s hard drive. Unless a CD is specifically needed, Quicktime and AVI files can be moved around in any manner. Quicktime also permits large file sizes with no compression, so it can also be used as a transport method between applications and different computers.

DVD grew out of the MPEG compression schemes. MPEG is a form of data compression where information is compressed across frames and not just within each discrete frame, thus making it better for distribution than editing. MPEG allows for a far greater amount of compression with high quality results than does motion-JPEG, the method used by most nonlinear edit systems. MPEG 1 was the first form and is still used, but is generally lower in image quality. Intel made a big marketing push a few years ago for MMX technology. This was part of the instruction set on the Pentiums which were released just before Pentium IIs came to market and is a software acceleration designed to optimize the playback of MPEG 1 video on PCs.

MPEG 2 is the standard around which DVDs are structured, as well as other distribution schemes, like direct TV satellite signals. MPEG 2 compression software allows you to vary the data rate in order to get the best trade-off between image quality and data storage. At its best, MPEG 2 looks comparable to a digital master (to the casual observer) and can run from any type of hard drive or server, given the right playback hardware and software.

DVD authoring and mastering is a “subset” of MPEG 2. DVD specs call for the higher end side of MPEG 2’s quality range and add additional features for audio and the interface. In order to design for these features, DVD authoring is required. This is a step in which files are organized on a workstation in order to prepare the program for mastering to a DVD disk. Some of the features offered by DVD include surround audio mixes, multiple language tracks, subtitles (enabled or disabled by the viewer), letterbox or full screen view (selected by the viewer) and the intelligent compression and playback of 24fps film material.

The DVD authoring technician can prepare a DVD file for all of these functions. In addition, they can set up all of the interactive files and menus, which allows the viewer to select the options pertinent to them, such as language, which version of the movie (director’s cut, PG or R-rated version), subtitles, etc. These same features which are important to the motion picture industry are also relevant to corporate programmers. For instance, you can prepare a training DVD in various languages if you have a multi-lingual staff. You can include several different “pitches” on a sales and marketing disk with an easy menu from which the viewer can choose.

Software solutions for Quicktime, AVI and MPEG 1 mastering are easily attained and fairly inexpensive. MPEG 2 or DVD authoring and mastering is another story. This is a very data-intensive function and is best handled with a hardware approach. Several companies, such as Digital Vision, Vela Research and Sonic Solutions, make excellent mastering systems which are quite expensive. You do get what you pay for, so for the moment, it is best to use an outside company for MPEG 2 and DVD mastering if you want the best results and efficient turnaround time. In addition, other services, such as color correction and noise reduction, may also be provided.

Now that newer PCs are increasingly offering DVD players as an inexpensive add-on, DVD capabilities will be almost universally available. If you have only sent out programs on VHS up until now, this year may be the time to add DVD to the mix. The quality will certainly improve and the features may be worth it on their own.

83. Film Matchback

Software has allowed film editing to move into the realm of electronic editing and away from the traditional use of Moviolas and flatbeds. The rapid adoption of electronic editing by most major directors and studios has spawned techniques and applications that extend these features to even the lowest budget filmmakers.

Film for theatrical release is almost always shot and posted at 24 frames per second. If electronic editing is involved, the film must be transferred to videotape, in which two things occur: 1) the 24fps film is actually run at 23.97fps in the telecine (i.e. it runs slightly slower); and, 2) the 24 film frames are converted to 60 video fields using a scheme called 3:2 pulldown, in which film frames are alternately scanned for 2 fields, then 3 fields, and so on, to make up the “missing” information. Any system used to edit electronically for the purpose of going back and cutting the negative later must take these adjustments into account.
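The 3:2 cadence itself is simple enough to express in a few lines. In this sketch, four film frames (A, B, C, D) expand into ten video fields - five video frames - and the pattern repeats:

```python
# 3:2 pulldown cadence: each film frame contributes 2 fields, then 3,
# alternating, so 24 film frames become 60 video fields per second.

from itertools import cycle

CADENCE = (2, 3)  # fields contributed by successive film frames

def pulldown(film_frames):
    fields = []
    for frame, count in zip(film_frames, cycle(CADENCE)):
        fields.extend([frame] * count)
    return fields

print(pulldown("ABCD"))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D'] -> 10 fields = 5 frames
```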

There are two accepted methods of electronic editing for film - matchback and 24fps editing. In the latter, the editing system works only with the real film frames, thus allowing the editor to make an edit only on a valid film frame. Avid Media Composer 8000, 9000 and Film Composer models offer a true 24fps editing mode. Any video EDL generated from these systems may have a +1 or -1 frame error for any given edit. The second method, matchback, lets the editor work in a standard 30fps fashion, allowing for a frame-accurate video EDL, but a +1 or -1 frame ambiguity for any film cut points. Film matchback software can be found as an option offered on Avid Media Composer, Film Composer and Xpress systems, but more recently, also as standalone Mac and Windows NT applications from Trakker Technologies (Slingshot) and Avid (FilmScribe).

In order for 24fps editing and matchback to work correctly, a proper telecine log must be generated. The point of this log is to provide a cross-reference between the timecode of the videotape film transfer masters and the film numbering scheme of the negative, which might be 24fps film timecode, keycode or lab-generated edge numbers. In addition to a timecode cross-reference, the next most important aspect of the telecine log is to properly identify the “A” frame. Since, through the use of 3:2 pulldown, a single film frame may span two adjacent video timecode frames, the system must properly identify those times when one exact film frame coincides with one exact video frame, referred to as the “A” frame. These reference points are used to properly calculate the rest of the pulldown sequence. With this information, 24fps editing systems know which are the valid film frames, and matchback software applications know how to adjust for the 1 frame ambiguity. A number of units create valid telecine logs, but the best known are Time Logic Controller’s Flex and Evertz’ FTL formats.
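The ±1 frame ambiguity can also be seen directly in code. Continuing the pulldown sketch above, and assuming the sequence starts clean on an “A” frame, every group of five 30fps video frames covers four film frames; two of the five straddle a film-frame boundary, and those are where matchback has to round:

```python
# Map a 30fps video frame number back to a 24fps film frame, assuming
# video frame 0 is an "A" frame (the clean alignment the telecine log
# identifies). With the AA-BB-BC-CD-DD field pattern, video frames 2
# and 3 in each group of five straddle two film frames.

FILM_IN_GROUP = [0, 1, 1, 2, 3]   # earlier film frame seen in each slot
EXACT = {0, 1, 4}                 # slots built from a single film frame

def video_to_film(video_frame):
    group, slot = divmod(video_frame, 5)  # 5 video frames per 4 film frames
    return group * 4 + FILM_IN_GROUP[slot], slot in EXACT

for v in range(10):
    film, exact = video_to_film(v)
    print(f"video {v} -> film {film}{'' if exact else ' (+/-1 ambiguous)'}")
```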

In 24fps systems such as Avid’s, telecine logs are imported as bin files and editing proceeds in a normal fashion. The editing application provides all the information needed by a negative cutter to conform the actual film negative. In matchback, the software conversion of film versus video is independent of the editing application. Editing can occur on any 30fps system that can generate a clean video EDL. In a standalone matchback application, the video EDL is merged with the original telecine logs, and the result is the information needed by the negative cutter. Since you are using a standalone application, you can use Mac-based software for editing and an NT-based

application for the matchback. It would also allow you to use a “lower-end” editing application, like Premiere, to cut a feature film.

The final consideration - that of the speed change made in the film transfer - is largely compensated for during the audio post of the film. The video editing application can generate an audio EDL for all audio functions, assuming the location sound recording was made with a timecode-based machine, like a Nagra or DAT with timecode capabilities. During post, a pull-up or pulldown function is engaged to adjust for the speed changes in either direction.
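The speed offset being compensated for is the NTSC 1000/1001 relationship; worked out as arithmetic, it is a 0.1% change:

```python
# The pulldown speed factor: 24fps film runs at 24 * (1000/1001) fps
# in the telecine, so audio must be pulled up or down by the same 0.1%.

FACTOR = 1000 / 1001
print(f"24 fps * {FACTOR:.6f} = {24 * FACTOR:.5f} fps")   # ~23.97602
print(f"speed change: {(1 - FACTOR) * 100:.3f}%")         # ~0.100%
```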

By careful attention to these details, it is possible for any level of filmmaker to complete his or her vision for the silver screen.

84. The “Must Have” Items For Your Edit System

More and more projects are being finished on nonlinear edit systems, rather than in traditional tape-based post suites. One of the nice byproducts of this shift is that a fully-equipped nonlinear editing workstation can also be a place to create graphics and animation. In a sense, your NLE workstation can become the “Swiss Army Knife” of the facility. This is true whether you work on an Avid, Media 100, Stratasphere or other make of NLE. Toward that end, several pieces of hardware and software become essential.

The most obvious editing hardware peripherals are, of course, things like the VTR, monitoring, a mixer, CD player, speakers, etc. Computer peripherals are just as needed. To start, you should have 3.5” floppy, CD-ROM, Zip and Jaz drives. This will allow you to input client-supplied files from most of the popular sources. Zip and Jaz cartridges are the most common “transport” media for files today. They are also handy for backing up your project data. A less common item that has become very affordable in the last year is a recordable CD-ROM drive. This is great for backing up all types of files and is superior to Jaz cartridges in many respects. CD-ROM recordings are very reliable, universally interchangeable, cost about $2 per blank and hold about 650MB of data. This compares to a standard Jaz cartridge, which costs over $100 for 1GB of data. At such a low cost, you can afford to live with the “write-once” aspect of recordable CDs. Re-writeable CDs are also available and low cost, but are not as universally interchangeable. With the right software, such as Adaptec’s Toast, you can “burn” CDs in PC, Mac and/or audio formats.

Other nearly-essential hardware items are a fast modem, a photo-quality color inkjet printer and a scanner. The modem allows for Internet access, which is a good way for editors to check up on popular sites, but it also allows the client to check their e-mail messages, send over files needed for the session or grab quick logo graphics from a website. With a good color inkjet like an Epson or Hewlett-Packard, you can print high-quality graphics and also print out screen grabs made from video captures. The scanner might seem like overkill, but I have frequently had to scan client-supplied photos and graphics for an edit session. In the traditional suite, this would have been done with a camera, but in the NLE suite, a scanner makes more sense and often looks a lot better.

There are too many great software applications to go into detail here. Each editor has his or her own preferences, based on style and knowledge, but I’ll go into the few essentials I frequently use to supplement my editing application. First of all, most of these are available for Mac or PC, but I will discuss them in Mac terms, since most current NLEs today are running on Macs. The most “Swiss Army Knife” of all the applications you can own is a current version of Adobe Photoshop with as many plug-in effects filters as possible. Photoshop is a great tool for creating graphics, but it is also an excellent transitional tool between the video and print worlds. If you add nothing else extra, you should at least have - and know - Photoshop. A close second is Adobe After Effects. This is the most-used desktop animation and effects compositing tool today. After Effects gives you animation and motion effects capabilities beyond those available in the NLE application. It takes a bit to learn completely, but once learned, it gives you the desktop equivalent of the high-end “monsters”, like the Quantel Henry.

Beyond the two Adobe apps, many other choices come in handy in the “second round”. I like to have Microsoft Office handy to deal with Word, Excel and PowerPoint files a client might bring. Adobe Illustrator is also very useful, but since its graphics creation is vector-based (compared to Photoshop’s pixel-based methodology), it is really more applicable to print work. Several utilities for sound add capabilities to your system. On a Mac, SimpleText or sound apps like CD-To-AIFF, SoundHack or SoundEdit16 let you convert CD audio files to sound files in a computer format, such as AIFF. These can be imported into many editing applications, like Avid Media Composer, thus eliminating the need for a separate audio CD player. It also keeps the import from CD to editing totally digital.

The list is endless, of course. Add to the software applications listed above, many of the popular plug-ins - Alien Skin, Final Effects, Boris, Ultimatte, Kai Power Tools, etc. These will truly supercharge the application’s power. So start small and add on a constant basis and you’ll truly have an all-purpose workstation that can tackle nearly any type of post work.

© Copyright 1992-1999 Oliver Peters. All Rights Reserved.

Author’s Comments

If you liked these articles, then I’d like to suggest some other reading: the book Pre-Production Planning for Video, Film and Multimedia,* written by Steve Cartwright. Steve interviewed me for the chapter on post-production planning, so a lot of what I have written about over the years has made it into the book. Enjoy.

* Pre-Production Planning for Video, Film and Multimedia, Steve R. Cartwright, © 1996 Focal Press, ISBN 0-240-80271-3
