
A Game Developer's Review of SIGGRAPH 2002: San Antonio

Morgan McGuire <[email protected]>
July 2002, San Antonio, TX

SIGGRAPH is the ACM's annual conference on computer graphics. This year the week-long event brought 20,000 members of the computer science, film, games, web, and art communities together with the general public in San Antonio, Texas. The city has a beautiful European-style river walk. It winds out of the past by the historic Alamo, through the present-day downtown Tex-Mex restaurants and hot dance clubs, to end right at the convention center. Hosting SIGGRAPH, the convention center is filled with ideas and products straight from science fiction.

Several sessions showed that computer-generated imagery (CGI) for film and television has passed the point where we can tell real from rendered, and papers showed that this trend will continue with completely synthetic environments and characters. Computer gaming is the common man's virtual reality, and it has emerged as a driving force behind graphics technology and research. ATI and NVIDIA both announced new cards and software for making games even more incredible than the current generation, and several game developers were on hand to discuss the state of the industry. The rise of gaming and the enthusiasm over the many high-quality CGI films this year were offset by an undercurrent of economic instability. Belt tightening across the industry was clearly evident in smaller attendance and a subdued exhibition.

Yoda

It is common knowledge that the animation team at Industrial Light and Magic (ILM) created virtual characters, spaceships, and environments for Star Wars Episode II: Attack of the Clones. We know that giant spaceships and four-armed aliens don't exist and can easily spot them as CGI creations. What is amazing is the number of everyday objects and characters in the film that are also rendered but look completely real. The movie contained over 80 CG creatures, including lifelike models of the human actors. When Anakin Skywalker mounts a speeder bike and races across the deserts of Tatooine, the movie transitions seamlessly from the live actor to a 100% CGI one. During Obi-Wan Kenobi and Jango Fett's battle on a Kamino landing pad, the actors are replaced with digital stunt doubles several times.

One of the hardest parts of making these characters completely convincing is simulating their clothing. Any physical simulation is a hard task: small errors tend to accumulate, causing the system to become unstable and explode. Cloth is particularly hard to simulate because it is so thin that it can easily poke through itself, and its continuing deformation makes collision detection challenging. New techniques were developed to simulate the overlapping robes that form a Jedi's costume; these appeared in a paper by Robert Bridson, Ronald Fedkiw, and John Anderson presented at the conference. The purple cloth in the rendered image on the left is an example from their paper. Here, the cloth has dropped onto a rotating sphere and is twirled around it by static friction, with proper folding and no tearing. (A toy sketch of the stability problem appears at the end of this section.)

One of the highlights of Star Wars Episode II was the "Yoda fight," where the 800-year-old little green Jedi trades in his cane for a lightsaber and takes on the villainous Count Dooku. A team from ILM, Dawn Yamada, Rob Coleman, Geoff Campbell, Zoran Kacic-Alesic, Sebastian Marino, and James Tooley, presented a behind-the-scenes look at the techniques used to animate and render the Yoda fight.
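
Returning for a moment to the cloth-stability point above: here is a minimal sketch, assuming a single stiff spring and purely illustrative constants, of why a naive explicit integrator "explodes" while a semi-implicit (symplectic Euler) step of identical cost stays bounded. This is a toy, not the Bridson-Fedkiw-Anderson method.

    // Toy sketch (not ILM's solver): integrating one stiff spring with
    // explicit Euler versus semi-implicit (symplectic) Euler.
    // All constants are illustrative assumptions.
    #include <cstdio>

    struct Particle { double x, v; };        // position and velocity

    const double k  = 100.0;                 // spring stiffness (stiff)
    const double m  = 1.0;                   // particle mass
    const double dt = 0.1;                   // time step

    double accel(double x) { return -k * x / m; }  // spring pulls toward x = 0

    // Explicit Euler: position advances with the OLD velocity, so a little
    // energy is added every step and the oscillation grows without bound.
    void stepExplicit(Particle& p) {
        double a = accel(p.x);
        p.x += dt * p.v;
        p.v += dt * a;
    }

    // Semi-implicit Euler: update velocity first, then use the NEW velocity
    // for the position. Same cost per step, but the motion stays bounded.
    void stepSemiImplicit(Particle& p) {
        p.v += dt * accel(p.x);
        p.x += dt * p.v;
    }

    int main() {
        Particle a = {1.0, 0.0}, b = {1.0, 0.0};
        for (int i = 0; i < 200; ++i) { stepExplicit(a); stepSemiImplicit(b); }
        std::printf("explicit Euler:      x = %g  (exploded)\n", a.x);
        std::printf("semi-implicit Euler: x = %g  (still bounded)\n", b.x);
        return 0;
    }

After 200 steps the explicit position is on the order of 10^30 while the semi-implicit one still oscillates near its starting amplitude; production cloth solvers go much further, with implicit integration and robust collision handling.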
When George Lucas revealed the script to the team just two days before shooting began, they were horrified. At the time, the team was still reviled by Star Wars fans for bringing the unpopular animated character Jar Jar Binks to life in Star Wars Episode I. Now they were expected to take Yoda, the sagacious fan favorite, and turn him into what George Lucas himself described as a combination of "an evil little frog jumping around... the Tasmanian devil... and something we've never seen before."

The scene obviously required a computer-generated model because no puppet could perform such a sword fight. The challenge was to somehow make the completely CG Yoda as loveable as the puppet while keeping his spry fighting style believable. The team began by creating a super-realistic animated model of Yoda based on footage from Star Wars Episode V: The Empire Strikes Back. This model is so detailed that its pupils dilate when it blinks, and it captures the physical ear-wiggling and face-squishing properties of the original Yoda model. Interestingly, puppeteer Frank Oz disliked the foam-rubber character of the original model and wanted the animators to use a more natural skin model, but Lucas insisted on matching the original, endearing flaws and all.

To develop a fighting style for master swordsman Yoda, the animators watched Hong Kong martial arts movies both old and new. Clips from these movies were incorporated directly into the animatics, the early moving storyboards of test renders and footage from other films used to preview how a scene will look. The SIGGRAPH audience had the rare treat of seeing these animatics, which will never be released to the public. The Yoda fight animatic was a sci-fi fan's ultimate dream: the ILM team took Michelle Yeoh's Yu Shu Lien character from Crouching Tiger, Hidden Dragon and inserted her into the Yoda fight scene in place of the Jedi. In the animatic, a scaled-down Michelle Yeoh crossed swords and battled Count Dooku using the exact moves and motions that Yoda does in the final movie.

The digital wizardry continued with behind-the-scenes shots from other movies. In Panic Room, the entire house was digitally created to allow physically impossible camera movement. In The One, Jet Li's face is mapped onto stunt doubles to let him fight himself in parallel universes, and virtual combatants are moved in slow motion to enhance his incredible martial arts skills. Most of the scenes in Spider-Man had a stylish rendered look, but amazingly, some scenes that seemed like live action were completely rendered. At the end of the movie, Spider-Man dodges the Green Goblin's spinning blades. We assume the blades are rendered, but in fact nothing in the shot (the set, the fire, the blades, even Spider-Man himself) is real!

A scientific paper showed the next possible step for virtual actors. In Trainable Videorealistic Speech Animation, Tony Ezzat, Gadi Geiger, and Tomaso Poggio of MIT showed how to take video of an actor and synthesize new, artifact-free video of that actor reciting whatever lines the editor wants to put into the performer's mouth. The authors acknowledged the inherent danger in a technology that allows manipulation of a video image: it calls into question how much longer video can be used as legal evidence, and it raises questions about what part of a performance and an actor's appearance is owned by the actor and what is owned by the copyright holder of the production. Although we've seen similar work in the past, nothing has ever been so realistic.
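
The heart of the system, described next, is synthesizing each output frame as a multi-way morph of prototype frames recorded from the actor. As a hedged, greatly simplified sketch of just the blending step (the real system also morphs shape through pixel correspondences rather than mixing raw pixel values; the Image type and inputs here are hypothetical):

    // Hedged sketch of the blending step in a multi-way morph: each output
    // frame is a convex combination of recorded prototype frames. The actual
    // MIT system layers much more on top of this (shape morphing via
    // correspondences, compositing into background footage).
    #include <cassert>
    #include <cstddef>
    #include <vector>

    using Image = std::vector<float>;  // grayscale pixels, row-major (hypothetical)

    // 'weights' holds one non-negative value per prototype, summing to 1,
    // e.g. derived from which phoneme is being spoken at this instant.
    Image multiWayMorph(const std::vector<Image>& prototypes,
                        const std::vector<float>& weights) {
        assert(!prototypes.empty() && prototypes.size() == weights.size());
        Image out(prototypes[0].size(), 0.0f);
        for (std::size_t p = 0; p < prototypes.size(); ++p)
            for (std::size_t i = 0; i < out.size(); ++i)
                out[i] += weights[p] * prototypes[p][i];
        return out;
    }

With the weights scheduled over time from a phonetic transcript, successive calls yield successive frames of the synthesized performance.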
Previous work spliced together video or texture-mapped 3D models. The authors' new approach builds a phonetic representation of speech and performs a multi-way morph for complete realism. This allows synthesis of video matching words, sentences, and even songs. As proof that the system could handle hard tasks like continuous speech and singing, a simulated actor sang along with "Hunter" by Dido and looked completely convincing. The authors followed up with an encore presentation of Marilyn Monroe (from her Some Like It Hot days) singing along to the same song. Both videos were met with thunderous applause. The authors' next goal is adding emotion to the synthesized video and producing output good enough to fool lip readers.

Fast Forward

This year the presentation of scientific papers was improved in two ways: the papers were published as a journal volume, and a brief overview of each was given at a special session. For the first time, the SIGGRAPH proceedings were published as a volume of the ACM Transactions on Graphics (TOG) journal. This change is significant for professors because accumulating journal publications is part of the process of qualifying for tenure. The journal review process is also stricter and maintains a high quality standard for the papers. The official proceedings are not yet available online; however, Tim Rowley (Imagination Technologies) has compiled a list of recent SIGGRAPH papers available on the web.

The papers were preceded by a special Fast Forward session. Suggested by David Laidlaw (Brown), this session gave each author 62 seconds to present an overview of their paper. This let the audience preview the entire papers program in one sitting and plan the rest of their week accordingly. The format encouraged salesmanship, and many researchers gave hilarious pitches along the lines of "come to my talk to learn how to slice, dice, julienne, and perform non-linear deformations on 2-manifolds."

Two short presentations displayed particular creativity. Barbara Cutler (MIT) recited a sonnet about the solid modeling paper she co-authored with Julie Dorsey, Leonard McMillan, Matthias Mueller, and Robert Jagnow (and we thought MIT wasn't a liberal arts school). Ken Perlin (NYU), famous for his Perlin Noise function and work on Tron, donned sunglasses and pulled open his shirt to rap about his improved noise function. In the end, it was 2002 Papers Chair John "Spike" Hughes (Brown) who stole the show with an incredible intellectual stunt. Several paper authors failed to appear to give overviews of their work.
