Session 2538

Implementation of a Low Budget, Raster Based, 3D Motion Capture System Using Custom Software and Modern Video Tracking Technology

W. Scott Meador, Carlos Morales
Purdue University

Abstract

This paper details the implementation of a system developed to generate 3D motion capture data through the analysis of raster-based motion video. The system's general procedure includes acquiring video, processing the raster data into raw motion data through motion-tracking technology, formatting the raw data into various usable forms with custom software, importing it into 3D animation software via custom scripts, and then applying it to 3D geometry. The purpose of the project is to gain the realism and efficiencies that motion capture provides, but without the high cost of traditional motion capture equipment. Though this system may not always provide the resolution or real-time capability that traditional motion capture can, it does allow users to apply real-world motion to virtual objects in an efficient manner.

I. Introduction

While motion-capture techniques have been accepted by larger production companies as a cost-effective means of achieving extremely realistic movements, the technology has not gained industry-wide acceptance among smaller, cost-conscious firms due to the high entry-level cost associated with the process, which can start in the tens of thousands of dollars. The cost affects the user both in terms of the equipment required and the expertise needed to carry the motion-capture process from planning through the actual application of the data to three-dimensional geometry. This paper details a method for performing motion capture in a cost-effective manner through the use of low-cost raster tools. Production animation firms and academic institutions that can cope with the entry-level costs associated with this process benefit in numerous ways.
First, they are able to produce animations with more realism than production companies that do not have access to this technology. In producing scenes for The Mummy, ILM used an optical system from Oxford Metrics to capture Arnold Vosloo's movements and map the motion to the main character. According to ILM's Jeff Light, motion capture allowed them to capture the essence of the actor's movement [1]. In video gaming scenarios, where the rendered graphics need to react to the player's movement, motion capture makes it possible to generate realistic animation. In Parasite Eve, animators from Square Soft used motion capture for situations where "there is a lot of physical dynamics to the motion and you really see the gravity of the character…because that sort of movement can be really hard to achieve in keyframe animation." [2]

"Proceedings of the 2001 American Society for Engineering Education Annual Conference & Exposition, Copyright © 2001, American Society for Engineering Education"

Second, these firms gain the ability to build reusable motion libraries that can be amortized over time. "Once we have a motion captured, we can use it as many times as we want…," says Richard Fiore [3]. High Voltage Software (HVS) used motion files captured for their NCAA Final Four video game in the production of their proposed Pacers animated opening. After the proper skeleton has been set up in the animation package, motion libraries can be reapplied with just a few mouse clicks. Motion capture data can also be captured at an astonishing rate: HVS captured all of the motion for their NBA Inside Drive 2000 in two days [4]. The ability to build a library of motions quickly is a tremendous financial advantage.

Third, companies gain the ability to produce live, real-time animation. Donna Coco tells us that real-time animation would be impossible without the use of motion capture [5].
On the show Canaille Peluche, Mat the Ghost is captured, composited with live footage, and broadcast in real time [6]. Without the ability to capture the character's movement and render the results in real time, it would be impossible to have the character react to actors in a live broadcast setting. The net result is that companies with access to motion-capture methods can produce more realistic animations faster and at lower cost than their counterparts. Emmanuel Javal of Medialab tells us that keyframe animation will give way to motion capture as the prevalent mode of animation production because it is a more cost-effective, production-worthy alternative [7]. Dominique Pouliquen, marketing manager at Medialab, tells us that motion capture can lower the cost of producing a 3D animated show to almost the level of a 2D show [8].

II. Development of System

By integrating the two-dimensional motion capture capabilities of Adobe After Effects, the math functions in Microsoft Excel, and a few custom scripts, the authors were able to produce a motion capture system that could be used in cost-conscious situations and provide many of the same capabilities as more conventional and costly alternatives. The central part of the system was based on using the two-dimensional tracking capabilities of Adobe After Effects to track points on video shot from stationary cameras. Because of the time needed for tracking points, then formatting and importing data, the system would not be suitable for real-time motion capture applications. Also, because the system would be limited to the resolution offered by standard NTSC video, its accuracy would be reduced compared to conventional motion capture systems. In addition to the inherent limitations of NTSC video, the authors also had to keep in mind that their method would not allow the cameras to report their settings to the capturing computer in real time.
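The pipeline above hinges on moving per-frame tracker positions out of After Effects and into a spreadsheet for further math. As a minimal sketch of that hand-off, the following converts tracker data to CSV; the whitespace-delimited "frame x y" input layout is an assumption for illustration, not the actual After Effects 4.1 export format, and the function name is hypothetical.

```python
# Sketch: reformat 2D tracker output into a CSV that a spreadsheet
# (e.g. Excel) can ingest for further processing. The "frame x y"
# line format is an assumed stand-in for the real tracker export.
import csv

def tracker_to_csv(raw_text, out_path):
    """Parse 'frame x y' lines and write them to a CSV with a header."""
    rows = []
    for line in raw_text.strip().splitlines():
        parts = line.split()
        if len(parts) != 3:
            continue  # skip headers or blank lines
        rows.append((int(parts[0]), float(parts[1]), float(parts[2])))
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "x", "y"])
        writer.writerows(rows)
    return rows

sample = """0 361.5 244.0
1 362.1 243.2
2 363.0 242.8"""
print(tracker_to_csv(sample, "marker.csv"))
```

Once in CSV form, each marker's trajectory becomes an ordinary column of numbers that spreadsheet formulas or custom import scripts can operate on.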
As a result, the authors would need to employ the system only for small-scale facial motion capture, or for full-body cases where the camera would not have to pan, tilt, or zoom to track the subject. This severely restricted the type of motion capture that could be done with this system in comparison to multi-camera commercial setups.

Understanding the limitations that could not be overcome, the authors focused on setting up samples both for facial motion capture using a single camera and for full-body motion capture using two cameras. For facial motion capture, a single camera was pointed at a subject's face. The subject was marked up using a series of colored dots at strategic points (see Figure 1). Two important markers were the dot between the eyes and the one on the nose. These features do not move with a person's facial expression, so they could be used for calibration during shoots on different days.

Figure 1. Subject with Markers

For full-body motion capture, two cameras were set up at ninety degrees to each other, facing the subject. Both cameras were leveled on tripods, zoomed to the same degree, and placed the same distance from the center of the subject. This would ensure that the images captured through both cameras would be at the same scale, which would be critical during the post-processing of the frames. An additional consideration at this point was the size of the dots in relation to the image and the purpose of the shoot. For optimal processing, the authors found that the dots should be at least 10 pixels square. Thus, for facial motion capture, where the camera would be relatively close to the face, the dots would have to be physically smaller than those used for full-body motion capture.
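The two stationary facial markers make cross-session calibration a matter of simple arithmetic: expressing every tracked point relative to the between-eyes marker, scaled by the fixed eyes-to-nose distance, puts captures from different days into a common coordinate frame. The sketch below illustrates that idea; the function and variable names are hypothetical, not taken from the authors' scripts.

```python
# Sketch: normalize tracked facial points against the two stationary
# reference markers (between the eyes and on the nose). Because those
# features do not move with expression, this gives shoots on different
# days a shared origin and scale. Names are illustrative.
import math

def normalize_frame(points, ref_eyes, ref_nose):
    """Express each (x, y) point relative to the between-eyes marker,
    scaled by the constant eyes-to-nose distance."""
    scale = math.dist(ref_eyes, ref_nose)
    return [((x - ref_eyes[0]) / scale, (y - ref_eyes[1]) / scale)
            for x, y in points]

# Example: a mouth-corner marker normalized against the references
frame = normalize_frame([(300.0, 400.0)],
                        ref_eyes=(320.0, 240.0),
                        ref_nose=(320.0, 300.0))
print(frame)
```

Applied per frame, this also cancels small differences in camera distance or zoom between sessions, since everything is measured in units of the subject's own fixed eyes-to-nose spacing.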
Thus, the intended use of the motion-capture data dictated the size of the markers.

In shooting the video, the authors were concerned primarily with the effects of camera image quality and the digitizing process on the final motion-capture data, and with the lack of synchronization between the cameras. Because the cameras did not have a data feed reporting their position or timing information to the computer, the authors used a physical placard to manually synchronize the events. Before shooting the actual subject, the cameras would be locked down and the authors would insert a placard in the middle of the scene that would be captured by both cameras. This would allow the authors to locate the same point in time later by examining the video.

The quality of the video would also affect the quality of the data. The authors found the resolution and image fidelity provided by the MiniDV format (720 × 480 pixels) to be more than sufficient for the task. The five-to-one compression imposed by the MiniDV format did not affect the ability of the computer to find the location of the markers during the post-processing session. The authors used a Canon XL-1 and a Canon GL-1 for the video shoot. The video was then digitized using an OHCI IEEE 1394 port on a laptop computer.

Processing the video required the two-dimensional tracking features of Adobe After Effects 4.1 Production Bundle. One feature of this software allows the user to track a feature across a series of consecutive frames and then apply the tracking data to a second layer. The authors found this could be exploited to perform the bulk of the work for the motion capture method. First, the frames were imported as an animated sequence and resized from 720 by 480 to 720 by 540.
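With the two cameras at ninety degrees, equidistant, and matched in zoom, each camera's 2D track supplies two of the three spatial axes: the front view gives x and y, the side view gives z and a redundant y. A minimal sketch of combining the two per-frame tracks into 3D points, assuming the frames are already synchronized via the placard and share the same scale, might look as follows (the function name is illustrative, not the authors'):

```python
# Sketch: merge per-frame 2D marker positions from two cameras placed
# 90 degrees apart into 3D points. The front camera contributes x and
# y; the side camera contributes z and a redundant y that is averaged
# in. Assumes equal scale and synchronized frames, as arranged in the
# shoot described above.
def merge_tracks(front, side):
    """front, side: lists of (x, y) pixel positions, one per frame."""
    points_3d = []
    for (fx, fy), (sx, sy) in zip(front, side):
        y = (fy + sy) / 2.0  # average the two height estimates
        points_3d.append((fx, y, sx))
    return points_3d

front = [(360.0, 270.0), (362.0, 268.5)]  # front-camera track
side = [(100.0, 271.0), (101.5, 269.0)]   # side-camera track
print(merge_tracks(front, side))
```

Averaging the redundant vertical coordinate also gives a cheap consistency check: if the two y values drift apart, the cameras have likely slipped out of scale or synchronization.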
