
SPECIAL SECTION

Edward A. Fox, Guest Editor

Virtual Editing in Interactive Multimedia Applications

Drawing examples from four interrelated sets of multimedia tools and applications under development at MIT, the authors examine the role of digitized video in the areas of entertainment, learning, research, and communication.

Wendy E. Mackay and Glorianna Davenport

Early experiments in interactive video included surrogate travel, training, electronic books, point-of-purchase sales, and arcade game scenarios. Granularity, interruptability, and limited look-ahead were quickly identified as generic attributes of the medium [1]. Most early applications restricted the user's interaction with the video to traveling along paths predetermined by the author of the program. Recent work has favored a more constructivist approach, increasing the level of interactivity by allowing users to build, annotate, and modify their own environments.

Today's multitasking workstations can digitize and display video in real-time in one or more windows on the screen. Users can quickly change their level of interaction from passively watching a movie or the network news to actively controlling a remote camera and sending the output to colleagues at another location [5]. In this environment, video becomes an information stream, a data type that can be tagged and edited, analyzed and annotated.

This article explores how principles and techniques of user-controlled video editing have been integrated into four multimedia environments. The goal of the authors is to explain in each case how the assumptions embedded in particular applications have shaped a set of tools for building constructivist environments, and to comment on how the evolution of a compressed digital video data format might affect these kinds of information environments in the future.

ANALOG VIDEO EDITING
One of the most salient aspects of interactive video applications is the ability of the programmer or the viewer to reconfigure [10] video playback, preferably in real time. The user must be able to order video sequences and the system must be able to remember and display them, even if they are not physically adjacent to each other. It is useful to briefly review the process of traditional analog video editing in order to understand both its influence on computer-based video editing tools and why it is so important to provide virtual editing capabilities for interactive multimedia applications.

A video professional uses one or more source videotape decks to select a desired video shot or segment, which is then recorded onto a destination deck. The defined in and out points of this segment represent the granularity at which a movie or television program is assembled. At any given point in the process, the editor may work at the shot, sequence, or scene level. Edit controllers provide a variable speed shuttle knob that allows the editor to easily position the tape at the right frame while concentrating on the picture or sound. Desired edits are placed into a list and referenced by SMPTE time code numbers, which specify a location on a videotape. A few advanced systems also offer special features such as iconic representation of shots, transcript follow, and digital sound stores.

The combined technology of an analog video signal and magnetic tape presents limitations that plague editors. Video editing is a very slow process, far slower

UNIX is a trademark of AT&T Bell Laboratories. MicroVAX is a trademark of Digital Equipment Corporation. PC/RT is a trademark of IBM. Parallax is a trademark of Parallax, Inc.
© 1989 ACM 0001-0782/89/0700-0802 $1.50

802  Communications of the ACM  July 1989  Volume 32  Number 7

than the creative decision-making process of selecting the next shot. The process is also totally linear. Some more advanced (and expensive) video editing systems such as Editflex, Montage, and Editdroid allow editors to preview multiple edits from different videotapes or videodiscs before actually recording them. This concept of virtual viewing was first implemented for an entirely different purpose in the Aspen Interactive Video project of 1980 [4], almost four years before Montage and Editdroid were born.

Some features of analog video editing tools have proven useful for computer-based video editing applications. However, the creation of applications that provide end users with editing control has introduced a number of new requirements. Tool designers must decide who will have editing control, when it will be available, and how quickly it must be adjusted to the user's requirements. Some applications give an author or programmer sole editing control; the end user simply views and interacts with the results. In others, the author may make the first cut but not create any particular interactive scenario; the user can explore and annotate within the database as desired. Groups of users may work collaboratively, sharing the task of creation and editing equally. The computer may also become a participant and may modify information presentation based on the content of the video as well as on the user's behavior.

Different perspectives in visual design also affect editing decisions. For example, an English-speaking graphic designer who lays out print in books knows that the eye starts in the upper left-hand corner and travels from left to right and downward in columns. A movie director organizes scenes according to rules designed for a large, wide image projected on a distant movie screen. Subjects are rarely centered in the middle, but often move diagonally across the screen. Video producers learn that the center of the television screen is the "hot spot" and that the eye will be drawn there. In each case, the information designer is visually literate, but takes for granted conventions that are not recognized by the others. Conflicts arise from the clash in assumptions. Although new conventions will probably become established over time, visual designers will for now need tools to suggest reasonable layouts and users need to be able to override those suggestions based on current needs.

MULTIMEDIA TOOLS AND APPLICATIONS
Assumptions about the nature of video and of interactivity deeply affect the ways in which people think about and use interactive video technology. We are all greatly influenced by television, and may be tempted to place video into that somewhat limited broadcast-based view. Notions of interactivity tend to extend about as far as the buttons on a VCR: stop, start, fast forward, etc. However, different applications make different demands on the kinds and level of interactivity:

• Documentary film makers want to choose an optimal ordering of video segments.
• Educational software designers want to create interactive learning environments for students to explore.
• Human factors researchers want to isolate events, identify patterns, and summarize their data to illustrate theories and phenomena.
• Users of on-line communication systems want to share and modify messages, enabling the exchange of research data, educational software, and other kinds of information.

The authors have been involved in the development of tools and applications in each of these areas, and have struggled with building an integrated environment that includes digitized video in each. This work has been influenced by other work, both at MIT and at other institutions, and the specific tools and applications have influenced each other. Table I summarizes four sets of multimedia tools and

TABLE I. Multimedia Tools and Applications under Development at MIT

Research Area: Interactive Documentaries
  Tool: Film/Video Interactive Viewing and Editing Tool
  Application: A City in Transition: New Orleans 1983-86
  Video editing features: Recombination of video segments; seamless editing; database of shots and edit-lists; iconic representation of shots; display of associated annotation

Research Area: Learning Environments
  Tool: Athena Muse: A Multimedia Construction Set
  Application: French Language Project
  Video editing features: Synchronization of different media; user control of dimensions

Research Area: User Interface Research
  Tool: EVA: Experimental Video Annotator
  Application: Experiment in Intelligent Tutoring
  Video editing features: Live annotation of video; multimedia annotation; rule-based analysis of video

Research Area: Multimedia Communication
  Tool: Pygmalion: Multimedia Message System
  Application: Neuroanatomy Research Database
  Video editing features: Networked video editing; sharing interactive video data


applications under development at MIT's Media Laboratory and Project Athena¹ and successively identifies video editing issues raised by each. Most of this work has been developed on a distributed network of visual workstations at MIT.² This table is not intended to be exhaustive; rather, it is intended to illustrate the different roles that digitized video can play and trace how these applications and tools have positively influenced each other.

INTERACTIVE DOCUMENTARIES
Movie-making is highly interactive during the making, but quite intentionally minimizes interaction on the part of the audience in order to allow viewers to enter a state of revery. To a filmmaker, editing means shaping a cinematic narrative. In the case of narrative films, predefined shooting constraints, usually detailed in a storyboard, result in multiple takes of the same action; the editor selects the best takes and adjusts the length of shots and scenes to enhance dramatic pacing. Editor's logs are an integral aid in the process. Music and special sound effects are added to enhance the cinematic experience.

Good documentary on the other hand tries to engage the viewer in an exploration. The story is sculpted first by the filmmaker on the fly during shooting. As all shots are unique, editing involves further sculpting by selecting which shots reflect the most interesting and coherent aspects of the story. Editors must balance what is available against exactly what has already been included, selecting those shots that will make the sequence at hand as powerful as possible. A major adjustment in one sequence may require the filmmaker to make dozens of other adjustments.

A City in Transition: New Orleans, 1983-86
The premise of the case study, "A City in Transition: New Orleans, 1983-86," was that cinema in combination with text could provide insights into the people, their power and the process through which they affect urban change. Produced by Glorianna Davenport with cinematography by Richard Leacock, the 2-hour, 45-minute film was edited for both linear and interactive viewing [2]. The interactive version of this project was developed as a curriculum resource for students in urban planning and political science. Rather than creating a thoroughly scripted interactive experience, faculty who use this videodisc set design a problem that motivates the student as explorer. The editor's log is replaced with a database that contains text annotation and ancillary documentation which are critical for in-depth analysis and interpretation.

An Interactive Video Viewing and Editing Tool
A generic interactive viewing and editing tool was developed to let viewers browse, search, select, and annotate specific portions of this or other videodisc movies. Each movie has a master database of shots, sequences, and scenes; a master "postage stamp" icon is associated with each entry. Authors or users can query existing databases for related scenes or ancillary documentation at any point during the session or they can build new databases and cross-reference them to the main database. The Galatea videodisc server, designed by Daniel Applebaum at MIT, permits remote access to the video. The current prototypes include four to six videodisc players and a full audio break-away crosspoint switcher; for some applications this is sufficient to achieve seamless playback of the video.

A viewer can watch the movie at his or her own pace by controlling a spring-loaded mouse on a shuttle bar or pressing a play button. The user can pause at any point to select and mark a segment. Segments can be saved by storing a representative postage stamp icon on a palette; the icon can later be moved to an active edit strip where it is sequenced for playback. Once a shot has been defined, the user can annotate it in a number of associated databases. A graphical representation of the shot or edit strip allows advanced users to mark audio and video separately (Figure 1).

The traditional concept of edit-list management is used to save and update lists made by users as they fill a palette or make an edit strip. Users can give each new list a name and a master icon, a verbal and visual memory aid; these are then used to place sequenced lists into longer edits or into multimedia documents. The database design significantly expands the information available to both the computer and the editor about a particular shot.

Future Directions
The goal of this experiment is to represent shots and abstract story models in a way that allows the computer to make choices about what the viewer would like to see next and create a cohesive story. The kind of information needed to make complex editing decisions may be roughly divided into two categories: (1) content (who, what, when, where, why); and (2) aesthetics (camera position relative to object position in a time continuum) [9]. Much of the information that is now entered into a database manually could be encoded during shooting or extracted from the pictures using advanced signal processing techniques. Such encoding will also allow the computer to generate new views of a scene from spatially encoded video data. Finally, it will become easier to mix computer graphics with real images, which will both encourage the creation of new constructivist environments and make all video suspect.

¹ The Media Lab at MIT explores new information technologies, connecting advanced scientific research with innovative applications for communications media. Project Athena is an eight-year, $100-million experiment in education at MIT, co-sponsored by Digital Equipment Corporation and IBM.
² The hardware consists of DEC MicroVAX or IBM PC/RT workstations running UNIX and the X Window System. Each workstation has a Parallax board which digitizes video in real-time from any NTSC video source, such as a video camera or a videodisc. Motion or still video can be mapped into windows on a high-resolution color graphics monitor in conjunction with text and graphics.
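A shot record of the kind this database design implies might look like the following sketch. The field names and the sample shot are invented for illustration, not taken from the actual MIT databases; they simply split the annotation into the two categories named above, content and aesthetics.

```python
# Hypothetical shot record: "content" holds the who/what/when/where/why
# annotation, "aesthetics" holds camera-related annotation. All values
# are illustrative.
shot = {
    "id": "nola_0142",
    "content": {
        "who": ["city planner"],
        "what": "public hearing",
        "when": "1984-06",
        "where": "New Orleans city hall",
        "why": "rezoning debate",
    },
    "aesthetics": {
        "camera": "handheld, eye level",
        "subject_motion": "diagonal, left to right",
        "framing": "medium shot",
    },
}

def matches(record: dict, **query) -> bool:
    """True if every queried string appears in the shot's content annotation."""
    content = record["content"]
    return all(q in str(content.get(field, "")) for field, q in query.items())

print(matches(shot, what="hearing", where="New Orleans"))  # True
```

A query over many such records would let either the editor or the computer retrieve candidate shots for the next position in a sequence.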


FIGURE 1. Film/Video Tool for Editing Video with Two Sound Tracks (Graphical Interface Designed by Hal Birkeland, MIT)
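The edit lists described in the preceding section, in which segments are referenced by SMPTE time codes and need not be physically adjacent on the tape, can be sketched as follows. This is a minimal illustration, assuming 30 frames per second (non-drop NTSC); the reel names and time codes are invented.

```python
from dataclasses import dataclass

FPS = 30  # non-drop NTSC frame rate assumed for simplicity

def smpte_to_frames(tc: str) -> int:
    """Convert an HH:MM:SS:FF SMPTE time code to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def frames_to_smpte(frames: int) -> str:
    ff = frames % FPS
    ss = frames // FPS
    return f"{ss // 3600:02d}:{ss // 60 % 60:02d}:{ss % 60:02d}:{ff:02d}"

@dataclass
class Edit:
    """One entry in an edit list: a source reel and its in/out points."""
    reel: str
    tc_in: str
    tc_out: str

    def duration(self) -> int:
        return smpte_to_frames(self.tc_out) - smpte_to_frames(self.tc_in)

# A virtual edit: segments play in list order, even though they jump
# around on the source material.
edl = [
    Edit("reel_1", "00:01:10:00", "00:01:14:15"),
    Edit("reel_1", "00:12:03:00", "00:12:05:00"),
    Edit("reel_2", "00:00:30:00", "00:00:33:10"),
]

total = sum(e.duration() for e in edl)
print(frames_to_smpte(total))  # running time of the assembled sequence
```

Saving such a list under a name, with a master icon, is what lets sequenced lists be placed into longer edits or multimedia documents.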

INTERACTIVE LEARNING ENVIRONMENTS
The use of digitized video in education spans a range of educational philosophies from goal-oriented tutorials to open-ended explorations. The underlying philosophy tends to dictate the level of interactivity and video editing control given to authors and users of any program. Programmed instruction and its successors are interactive in the sense that a student must respond to the information presented; it is not possible to be a passive observer. However, only the author has flexible video editing capabilities; the student is expected to work within the structures provided, following the designated paths to reach criterion levels of mastery of the information. Hypermedia provides the user with a wider range of opportunities to explore, by following links within a network of information nodes. An even richer form of interactivity allows users to actively construct their environments, not just follow or even explore. The constructivist approach provides students with some of the same kinds of editing and annotation tools as authors of applications.

The Navigation Learning Environment
In 1983, Wendy Mackay, working with members of the Educational Services Research and Development Group at Digital Equipment Corporation, began a set of research projects in multimedia educational software. The goals were to compare different instructional strategies, improve the software development process and address the technical problems of creating multimedia object-oriented databases and learning environments [6]. Coastal navigation was chosen as a test application to push the limits of the technology. Not only does it require real-time handling of a complex set of real images, symbols, text, and graphics, but it has also been presented to students using a wide range of educational philosophies, ranging from structured military training to open-ended experiential learning at Outward Bound.

The heart of the navigation information database is a videodisc containing over 20 discrete types of information, including nautical charts, aerial views, tide tables, navigation instruments, graphs and other reference materials. The videodisc also contains over 10,000 still photographs taken systematically from a boat in Penobscot Bay, Maine, to enable a form of surrogate travel similar to the MIT Aspen project mentioned earlier. The synchronization of images in three-dimensional space was essential to the visualization of this application. The project leader, Matt Hodges, brought the ideas and videodiscs to the Visual Computing Group at Project Athena. This project became one of the inspirations for the development of Athena Muse, a collaborative effort with Russ Sasnett and Mark Ackerman [3].

Athena Muse
At Project Athena, the required spatial dimensions for the Navigation disk were generalized to include temporal and other dimensions. In particular, several foreign language videodiscs funded by the Annenberg


Foundation were being produced under the direction of Janet Murray. An important goal was to provide cultural context in addition to practice with grammar and vocabulary. Students were presented with interactive scenarios featuring native speakers. In order to respond correctly, students needed to understand the speakers. Thus, they needed subtitles synchronized to the video and the ability to control them together.

The concept of user-controllable dimensions was created as a general solution to the control of spatially organized material (as in the Navigation project) and temporally organized material (as in the foreign language projects). Athena Muse packages text, graphics, and video information together and allows them to be linked in a directed graph format or operated independently (Figure 2). Different media can be linked to any number of dimensions which can then be controlled by the student or end user.

FIGURE 2. An Example from Athena Muse in which Video Segments and Subtitles are Synchronized along a Temporal Dimension

When reimplemented in Athena Muse, the Navigation Learning Environment used seven dimensions to simulate the movement of a boat. Two dimensions represent the boat's position on the water and two more represent the boat's heading and speed. A fifth tracks the user's viewing angle, and the sixth and seventh manage a simulated compass that can be positioned anywhere on the screen. The user can move freely within the environment and use the tools available to a sailor to check location (charts, compass, looking in all directions around the boat) to set a course. Other aspects of a simulation can be added, such as other boats, weather conditions, uncharted rocks, etc. Here, the user does not change the underlying structure of the information, since it is based on constraints in the real world, but he or she can move freely, ask questions, and save information in the form of notes and annotations for future use.

Future Directions
Direct recording of compressed digital video data may provide more sophisticated ways to embed visual data into simulations of real and artificial environments. Issues of synchronization will change, as some kinds of data are encoded as part of the signal. For example, subtitles or sign language for the hearing impaired will be an integral part of the original video, obviating the need for external synchronization. Other kinds of data will continue to require externally-controlled synchronization, to handle the changing relationships among data in different applications. Users should have more sophisticated methods of controlling these associated data types, either in real time or under program control.

Cross-referencing of visual information across databases will also be easier. For example, a photograph of an island, taken from a known location in the bay, must currently be linked by hand to the other forms of information (charts, aerial views, software simulations) from which it might also be viewed. Digitally encoded representations of these images will make it easier to calculate these relationships.

Digital encoding would also allow views to be created that were not actually photographed. For example, the eight still images representing a 360-degree view from the water could be converted into a moving video sequence in which the user appears to be slowly turning in a circle. If enough visual information has been encoded, it may also be possible to create the illusion of moving in any direction across the water. Digital representation will also make it easier to provide smooth zooming in and out to view closeups of particular objects.

Given advances in limited-domain natural language recognition and natural language parsing, it may be possible to automatically encode the audio portion of a video sequence. Future versions of the foreign language projects would then allow students to engage in more sophisticated dialogs.

VIDEO DATA ANALYSIS
Video has become an increasingly prevalent form of data for social scientists and other researchers, requiring both quantitative and qualitative analysis. The term video data can have several meanings. To a programmer, video data is an information coding scheme, like ASCII text or bitmapped graphics. To a researcher, who uses video in field studies or to record experiments, video data is the content, rather than the format, of the video.

The requirements researchers place upon video editing go far beyond the capabilities of traditional analog editing equipment. They want to flexibly annotate video and redisplay video segments on the fly. They often need to synchronize video with other kinds of data, such as tracks of eye movements or keystroke logs. They want methods for exploring their data at different levels of granularity, identifying patterns through recombination or compression and summarizing it for other researchers.
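Synchronizing video with a keystroke log, one of the requirements just mentioned, reduces to mapping each event's time offset onto a frame number so the event can be cued alongside the corresponding video. A minimal sketch, with an invented log and an assumed 30 frames-per-second rate:

```python
FPS = 30  # NTSC frame rate assumed

def to_frame(seconds_since_start: float) -> int:
    """Map a time offset from the start of the session to a video frame."""
    return round(seconds_since_start * FPS)

# Invented keystroke log: (seconds since session start, key pressed)
keystroke_log = [
    (2.50, "RETURN"),
    (7.10, "^C"),
    (12.98, "q"),
]

synced = [(to_frame(t), key) for t, key in keystroke_log]
print(synced)  # [(75, 'RETURN'), (213, '^C'), (389, 'q')]
```

With both streams expressed in frames, a tool can jump the video to the moment surrounding any logged event, or overlay the log on playback.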


A clear priority for many researchers is to reduce the total amount of viewing time required for a particular type of analysis. Researchers who use multiple cameras have an even more difficult problem; they must either synchronize multiple video streams and view them together or significantly increase viewing time. Some researchers must also share control of their data, maintaining the ability to create and modify individual annotations without affecting the source data. Just as with numerical data, different researchers should be able to perform different kinds of analyses on the same data and produce different kinds of results.

The ability to integrate video and computers has spurred the development of new tools to help researchers record, analyze and present the latter form of video data. (A number of these tools are described in [11].) Wendy Mackay created a tool, EVA [8], written in Athena Muse to help analyze video data from an experiment on intelligent tutoring. The goal was to allow the researcher to annotate video data in meaningful ways and use existing computer-based tools to help analyze the data.

EVA: An Experimental Video Annotator
EVA, or Experimental Video Annotator, allows researchers to create their own labels and annotation symbols prior to a session and permits live annotation of video during an experiment. In a typical session, the researcher begins by creating software buttons to tag particular events, such as successful interactions or serious misunderstandings (Figure 3). During a session, a subject sits in front of a workstation and begins to use a new software package. A video camera is directed at the subject's face and a log of the keystrokes may be saved. The researcher sits at the visual workstation and live video from the camera appears in a window on the screen. Another window displays the subject's screen and an additional window is available for taking notes.

The researcher has several controls available throughout the session. One is a general time stamp button, which the researcher presses whenever an interesting but unanticipated event occurs. The rest are buttons that the researcher created prior to the session.

Annotation during the session saves time later when the researcher is ready to review and analyze the data. The researcher can quickly review the highlights of the previous session and mark them for more detailed analysis. The original tags may be modified and new ones created as needed. Tags can be symbolic descriptions of events, recorded patterns of keystrokes, visual images (single-frame snapshots from the video), patterns of text (from a transcription of the audio track), or clock times or frame numbers. Tags that refer to the events in different processes, such as the text of the audio and the corresponding video images, can be synchronized and addressed together.

Note that annotation of live events, while useful, requires intense concentration. The mechanics of taking notes may cause the researcher to miss important events, and events will often be tagged several seconds after they occur. Subsequent passes are almost always necessary to create precise annotations of events. While EVA does not address all of the general problems of protocol analysis, it does provide the researcher with more meaningful methods for analyzing video data.

Future Directions
Digital video offers possibilities for new kinds of data analysis, both in the discovery of patterns across video segments and in the understanding of events within a segment. Video data can be compressed and summarized in order to provide a shortened version of what occurred, to tell a story about typical events, to highlight unusual events or to present collections of interesting observations. The use of highlights can either concisely summarize a session or completely misrepresent it. Just as with quantitative data, it is important to balance presentation of the unusual data points (outliers) with typical data points. In statistics, the field of exploratory data analysis provides rigorous methods for exploring and seeking out patterns in quantitative data. These techniques may be applied profitably here.

An application that requires graphic overlays over sections of video needs a method of identifying the video frame or frames, a method for storing the overlay and a method for storing the combination of the two. Other kinds of annotation could permit later redisplay of the video segments under program control. For example, linkages with text scripts would enable a program to present all instances in which the subject said "aha!" Researchers could use rules for accessing video segments with particular fields; for example, all segments might have a time stamp and a flag to indicate whether or not the subject was talking. Then a rule might be: if (time > 11:00) and (voice on), then display the segment.

FIGURE 3. Live Video Is Captured from a Video Camera and Presented on the Researcher's Screen
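A rule like the one just given, if (time > 11:00) and (voice on) then display the segment, amounts to a predicate evaluated over tagged segments. The segment data below is invented for illustration:

```python
from datetime import time

# Invented segments, each tagged with a time stamp and a voice flag,
# as in the article's example.
segments = [
    {"id": 1, "start": time(10, 45), "voice_on": True},
    {"id": 2, "start": time(11, 5),  "voice_on": False},
    {"id": 3, "start": time(11, 20), "voice_on": True},
]

def rule(seg: dict) -> bool:
    """The example rule: (time > 11:00) and (voice on)."""
    return seg["start"] > time(11, 0) and seg["voice_on"]

to_display = [seg["id"] for seg in segments if rule(seg)]
print(to_display)  # [3]
```

A library of such rules, applied to richer tag fields, would give researchers programmatic control over which segments are redisplayed.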


MULTIMEDIA COMMUNICATION
Electronic mail and telephones provide two separate forms of long-distance communication, both involving the exchange of information. Early attempts to incorporate video into long-distance communication were based on a model of the telephone, simply adding video

'89 Conference, where a preliminary version will be made available to conference attendees. Over 30,000 people will be able to send and receive multimedia messages from a distributed network of workstations. The software is being developed by individuals from MIT's Project Athena, the Media Lab, MIT's


FIGURE 4. Accessing One or More Electronic Pages that Contain Text, Graphics, and/or Video

cooperating or competing, and what they're trying to accomplish. It also depends on whether they are making decisions about a fixed order of video segments, e.g., looking at different views of the same visual database or providing different kinds of annotations. In all of these cases, the participants are working together to construct a shared information environment.

CONCLUSIONS
Computer control of analog video has been commercially available to broadcasters for almost two decades and in personal computing environments for almost a decade. The requirements for interactivity and flexible manipulation of video and associated data are increasing and emphasize the limits of current technology. Analog video storage on optical videodiscs is expensive, and videotape does not provide sufficiently accurate or precise control. Cooperative work applications, as in shared authorship of multimedia documents, and video prototyping, as in interactive design of multimedia software, are promising new application areas that will extend the requirements of video as an information medium.

New techniques for compressing video and encoding content and other information into the signal promise to both decrease the costs and increase the possibilities for future applications. It is important that assumptions about current and future video-based applications be taken into account when choosing among compression tradeoffs. Some applications require real-time decompression and can afford to wait for a long compression process. Others may require the reverse. Different algorithms cause delays at different points in the compression/decompression cycle. Thus, some applications that require high quality images and long video segments may not find a 30-second preliminary delay problematic. Other applications may require very rapid changes among short segments, in which delays of several seconds may not be acceptable. The importance of image quality varies across applications, and the advent of HDTV may further change the requirements. All new technologies present tradeoffs between costs and features. Before deciding on those tradeoffs, it is important for manufacturers of digital video equipment to understand the ways in which it will be used. The tools and applications presented in this article are biased toward a constructivist point of view, which maximizes user control over the media and the environment. Digital video offers the potential for significantly enhancing them all and expanding our views of interactivity.

Acknowledgments. The authors would like to thank Geoffrey Bock, Andy Lippman, Lester Ludwig, and Deborah Tatar for early comments on this article. The work described here occurred in a stimulating environment at MIT and has been influenced by many creative people. We would like to thank Hal Birkeland, Hans Peter Brondmo, Jeff Johnson, Patrick Purcell, and Ben Rubin at the Media Lab; Ben Davis, Matt Hodges, Evelyn Schlusselberg, Win Treese, and Steve Wertheim from Project Athena; and Mark Ackerman, Dan Applebaum, Brian Gardner, Brian Michon, and Russ Sasnett, who have been associated with both the Media Lab and Project Athena.


The authors would also like to thank the participants and reviewers for the ACM/SIGCHI Workshop on Video as a Research and Design Tool, particularly Austin Henderson, Deborah Tatar, Raymonde Guindon, Marilyn Mantei, and Lucy Suchman.

REFERENCES
1. Backer, D., and Lippman, A. Future Interactive Graphics: Personal Video. Presented at NCGA Conference, Architecture Machine Group, MIT, Baltimore, Md., June 1981.
2. Davenport, G. New Orleans in Transition, 1983-1986: The Interactive Delivery of a Cinematic Case Study. International Congress for Design and Planning Theory, Film/Video Group, MIT Media Laboratory, August 1987.
3. Hodges, M. The Visual Database Project: Navigation. Digital Equipment Corporation, Bedford, Mass., May 1986.
4. Lippman, A. Movie Maps: An application of the optical videodisc to computer graphics. SIGGRAPH '80 Conference Proceedings, Architecture Machine Group, MIT, July 1980.
5. Ludwig, L., and Dunn, D. Laboratory for Emulation and Study of Integrated and Coordinated Media Communication. SIGCOMM '87 Conference Proceedings, SIGCOMM, Stowe, Vt., 1987.
6. Mackay, W.E. Tutoring, Information Databases and Iterative Design. In Instructional Designs for Microcomputer Courseware, D.H. Jonassen, Ed. L. Erlbaum Associates, Inc., Hillsdale, N.J., 1988.
7. Mackay, W.E., Treese, W., Applebaum, D., Gardner, B., Michon, B., Schlusselberg, E., Ackerman, M., and Davis, D. Pygmalion: An Experiment in Multi-Media Communication. SIGGRAPH '89 Panel Proceedings, Boston, MA, July 1989. Special event to be presented at SIGGRAPH '89.
8. Mackay, W.E. EVA: An Experimental Video Annotator for Symbolic Analysis of Video Data. SIGCHI Bulletin 21, 1 (Oct. 1989). To be published in the Special Issue on Video as a Research and Design Tool.
9. Rubin, B., and Davenport, G. Structured Content Modelling for Cinematic Information. SIGCHI Bulletin 21, 1 (Oct. 1989). To be published in the Special Issue on Video as a Research and Design Tool.
10. Sasnett, R. Reconfigurable Video. Master's thesis, Department of Architecture, Massachusetts Institute of Technology, 1986.
11. Mackay, W.E., Ed. Workshop on Video as a Research and Design Tool. ACM/SIGCHI, 1989. In press.

CR Categories and Subject Descriptors: D.2 [Software]: Software Engineering; D.2.2 [Software Engineering]: Tools and Techniques--user interfaces; D.2.6 [Software Engineering]: Programming Environments--interactive; E.m [Data]: Miscellaneous--video data
General Terms: Design, Human Factors
Additional Key Words and Phrases: Digital video, video

ABOUT THE AUTHORS:

WENDY E. MACKAY has worked for Digital Equipment Corporation for 10 years. Currently, she is a doctoral candidate at MIT under a scholarship from Digital. Her interests include multimedia education and communication. Author's Present Address: MIT Project Athena, E40-366, 1 Amherst Street, Cambridge, MA 02139.

GLORIANNA DAVENPORT is an assistant professor of media technology and director of film/video research at MIT's Media Laboratory. She has produced, shot, and edited a number of observational documentary movies and has been involved in the development of interactive editing and delivery systems since 1983. Author's Present Address: MIT Film/Video Group, The Media Laboratory, E15-432, 20 Ames St., Cambridge, MA 02139.

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.
