Converting 2D Video to 3D: An Efficient Path to a 3D Experience
Multimedia at Work (Editor: Wenjun Zeng, University of Missouri-Columbia)

Xun Cao, Tsinghua University
Alan C. Bovik, University of Texas at Austin
Yao Wang, Polytechnic Institute of New York University
Qionghai Dai, Tsinghua University

Editor's Note
The 3D experience is poised to be the future of entertainment. The path to that future, however, is bumpy. Despite widely available 3D display devices, a lack of 3D content impedes the 3D industry's rapid growth. Here the authors discuss some common 3D content-creation processes and the key elements that affect quality, and present an approach that converts 2D content to 3D content as an efficient path to a 3D multimedia experience.
-- Wenjun Zeng

Wide-scale deployment of 3D video technologies continues to experience rapid growth in such high-visibility areas as cinema, TV, and mobile devices. Of course, the visualization of 3D videos is actually a 4D experience, because three spatial dimensions are perceived as the video changes over the fourth dimension of time. However, because it's common to describe these videos as simply "3D," we shall do the same, and understand that the time dimension is being ignored. So why is 3D suddenly so popular? For many, watching a 3D video allows for a highly realistic and immersive perception of dynamic scenes, with more deeply engaging experiences, as compared to traditional 2D video. This, coupled with great advances in 3D technologies and the appearance of the most successful movie of all time in vivid 3D (Avatar), has apparently put 3D video production on the map for good.

A typical 3D video system includes the steps of content creation, coding and transmission, reconstruction, and display. Three-dimensional processing and display technologies have dramatically advanced; for example, glasses-free (auto-stereoscopic) 3D displays are commercially available on small-screen mobile devices. Large-format 3DTV cable receivers and display systems are now commonly found in the home, which is regarded as the biggest development in home entertainment since high-definition TV. Yet, aside from the wide availability of theatrical 3D productions, there's still surprisingly little 3D content (3D image and video) in view of the tremendous number of 3D displays that have been produced and sold. The lack of good-quality 3D content has become a severe bottleneck to the growth of the 3D industry.

There is, of course, an enormous amount of high-quality 2D video content available, and it's alluring to think that this body of artistic and entertainment data might be converted into 3D formats of similar high quality. Our goal in this article is to introduce and explain emerging theories and algorithms for 2D-to-3D video conversion that seek to convert ordinary 2D videos into 3D.

3D content generation
An obvious approach toward ameliorating the current shortage of good-quality 3D video content is to explore effective 3D content-creation methods. Most contemporary 3D content (for example, commercials and movies) is created using commercial modeling software, such as Autodesk's 3ds Max and Maya. The basic mode of operation is to first build the target object's 3D structure (a graphical mesh representation), then render the meshes using surface-texture and lighting models to create realistic appearances (see the middle of Figure 1a). Computer-graphics animation is a successful example of this category of 3D content production (for example, Shrek 3). The advantage of modeling software is that the synthesized 3D models are accurate and easily manipulated. The drawback is that artificial 3D models are quite time consuming to create, particularly when highly realistic or otherwise complex scenes are required.
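To make the build-then-render workflow concrete, here is a minimal, hypothetical sketch (plain Python, not any modeling package's API) of the mesh representation such tools construct: shared 3D vertex positions plus triangular faces that index into them.

```python
# Minimal illustration of a triangle-mesh representation: shared 3D
# vertex positions plus faces that index into the vertex list.
# Here, a unit square in the z = 0 plane split into two triangles.
# (Hypothetical example; real modeling tools store far richer data.)

vertices = [
    (0.0, 0.0, 0.0),   # v0
    (1.0, 0.0, 0.0),   # v1
    (1.0, 1.0, 0.0),   # v2
    (0.0, 1.0, 0.0),   # v3
]
faces = [
    (0, 1, 2),         # lower-right triangle
    (0, 2, 3),         # upper-left triangle
]

def face_vertices(face_index):
    """Resolve one face's vertex indices into 3D coordinates."""
    return [vertices[i] for i in faces[face_index]]
```

A renderer then shades each face using the surface-texture and lighting models the article mentions; sharing vertices between faces is what makes such models compact and easy to manipulate.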
1070-986X/11/$26.00 © 2011 IEEE. Published by the IEEE Computer Society. December 2011.

Beyond graphical modeling, 3D capture devices have also greatly progressed in recent years. Stereoscopic cameras and time-of-flight (TOF) cameras, both consumer and professional, are increasingly common and used in a variety of applications. Similar to the binocular vision system found in humans and many other animals, a stereoscopic camera setup with two cameras, located in nearly the same place and pointing in almost the same direction, captures the same visual scene (see the top of Figure 1a). In this way, the video is captured from two viewpoints with slightly different perspectives, yielding slightly different projections onto the cameras' recording apparatus. The left and right views are then integrated in specific visual centers of the brain to form the 3D depth experience, which is called the cyclopean image. This 3D experience is quite different from viewing the 2D images separately. The sense of depth, 3D perspective, and 3D shape is profoundly amplified beyond the 3D experience delivered by other visual cues, such as motion, size, surface-luminance gradients, and so on.

[Figure 1. Generating the 3D content. (a) Contemporary 3D content-creation methods: 3D capture (binocular), 3D modeling, and 2D-to-3D conversion (monocular). (b) Stereoscopic camera pair (Camera 1, Camera 2) and the concept of disparity: a scene point P projects to different image-plane positions P1 and P2.]

Alternatively, TOF 3D cameras directly capture depth information using a single camera. The basic principle of TOF cameras is as follows: the camera emits acoustic, electromagnetic, or other waveforms; these are reflected from objects in the scene, some of which are incident upon the sensor's detector. The distance between object and sensor, at each pixel, can be computed from the travel time of the emitted, reflected, and sensed waveform. Once the depth information for each pixel is known, left and right views can be rendered for 3D reproduction using a depth-image-based rendering technique.1

While 3D capture, processing, and display techniques have evolved rapidly in recent years, there's a relative dearth of 3D content compared to potential market demand. The ocean of older 2D videos captured during past decades contains a treasure trove of 3D possibilities. Because generating native 3D content is a relatively slow process, 2D-to-3D video conversion offers a tremendous opportunity to alleviate this shortage.

The target accomplishment of 2D-to-3D conversion techniques is to be able to estimate the true 3D information present in the scene (for example, in the form of depths relative to some baseline) from single, monocular video streams. Once the 3D information has been estimated, it can be deployed to create a 3D (stereoscopic) video from the existing 2D video. The successful development of efficient and reliable 2D-to-3D video-conversion algorithms could make it possible to create large quantities of high-quality 3D content at a low cost in terms of time and resources. By analogy, just as many classic movies were once colorized, and as they are now being up-sampled to high definition using super-resolution techniques, we believe that much of the available corpus of attractive archived videos will be converted to 3D formats.

Perceptual depth cues
The real world is 3D; however, during the process of projection through the lens, one of the three dimensions is lost: depth. To obtain the survival advantage afforded by depth information, humans have evolved two frontally placed eyes separated by an interocular distance of about 65 millimeters (in adults). Spatial points in the scene project to slightly different positions in the 2D projections on the left and right eyes; this displacement between the projected positions is called disparity, as Figure 1b illustrates. If the left and right views are placed so that the optical axes of the eyes (or cameras) are parallel, then the disparity between the locations of points that were projected from the 3D scene onto the left and right 2D image planes is inversely proportional to depth. Such a camera configuration is called a parallel optical axis binocular geometry. This camera arrangement simplifies algorithm design in certain ways but doesn't exploit other advantages available if the cameras can toe-in, or verge, as human eyes are normally able.

A normal human viewer has eyes that are highly mobile and that engage in a number of types of eye movements, including binocular vergence, whereby the optical axes of the two eyes fixate on the same point. Cameras can also be made to verge under computer control, provided that points of stereo fixation can be selected in the 3D scene. A vergent stereo imaging geometry is a powerful means of concentrating the visual attention of a vision system on a particular region of 3D interest. However, a more complex geometric relationship exists between points that are projected to a vergent stereo pair of videos.

Fully automatic 2D-to-3D video conversion
Fully automatic 2D-to-3D video conversion means that 3D information is estimated from 2D video clips automatically, without a priori knowledge of depths in the scene. This technique aims at scenarios where humans are unable to participate in the depth-computation process, as well as real-time applications where human participation is impossible. For example, a desirable goal is the development of real-time 2D-to-3D video-conversion modules suitable for incorporation into 3DTV receivers. In this situation, algorithms must be able to extract information from the input 2D video that
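The TOF ranging principle described above reduces to one line of arithmetic: the waveform travels from camera to object and back, so depth is half the wave speed times the round-trip time. A minimal sketch, with illustrative values (a real TOF camera measures this per pixel):

```python
# Sketch of the time-of-flight ranging principle: depth from the
# round-trip travel time of an emitted waveform. Values illustrative.

SPEED_OF_LIGHT = 299_792_458.0  # m/s, for an electromagnetic waveform

def tof_depth(round_trip_time_s, wave_speed=SPEED_OF_LIGHT):
    """Distance to the reflecting surface. The waveform travels out
    and back, so the one-way depth is half the total path length."""
    return wave_speed * round_trip_time_s / 2.0

# A pulse returning after 20 nanoseconds implies an object about 3 m away.
depth_m = tof_depth(20e-9)
```

The same formula holds for acoustic waveforms with the speed of sound substituted; the engineering difficulty lies in measuring such short travel times accurately at every pixel.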
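The inverse disparity-depth relation for the parallel optical axis binocular geometry can be written d = fB/Z, where f is the focal length, B the baseline (camera or eye separation), and Z the depth. A small sketch with illustrative numbers (the 1,000-pixel focal length is hypothetical; the 65 mm baseline is the human interocular distance quoted above):

```python
# Sketch of the disparity-depth relation d = f*B/Z for a parallel
# optical-axis binocular geometry. f = focal length (pixels),
# B = baseline (metres), Z = depth (metres). Values illustrative.

def disparity(focal_len_px, baseline_m, depth_m):
    """Horizontal displacement (pixels) between the projections of
    the same scene point in the left and right images."""
    return focal_len_px * baseline_m / depth_m

def depth_from_disparity(focal_len_px, baseline_m, disparity_px):
    """Invert the relation: recover depth from measured disparity."""
    return focal_len_px * baseline_m / disparity_px

# With f = 1,000 px and a 65 mm baseline, halving the depth
# doubles the disparity, reflecting the inverse relation.
d_near = disparity(1000, 0.065, 1.0)   # point at 1 m
d_far = disparity(1000, 0.065, 2.0)    # point at 2 m
```

This inversion is exactly what stereo-matching algorithms exploit: once disparity is measured between the two views, per-pixel depth follows directly.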
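Once per-pixel depth (equivalently, disparity) has been estimated, the depth-image-based rendering step mentioned earlier amounts to shifting each pixel horizontally by its disparity to synthesize the second view. A toy single-scanline sketch, with hypothetical values; a practical implementation must also resolve occlusions and fill the disocclusion holes left behind:

```python
# Toy sketch of depth-image-based rendering (DIBR) on one scanline:
# warp a row of the left view into the right view by shifting each
# pixel left by its disparity. Unfilled positions are disocclusion
# holes. Real DIBR must order writes by depth and inpaint the holes.

def render_right_view(left_row, disparity_row, hole=None):
    """Warp one scanline of the left view into the right view."""
    width = len(left_row)
    right_row = [hole] * width
    for x in range(width):
        xr = x - disparity_row[x]      # nearer pixels shift farther
        if 0 <= xr < width:
            right_row[xr] = left_row[x]
    return right_row

left = [10, 20, 30, 40, 50]
disp = [0, 0, 2, 2, 2]                 # a nearer object on the right
right = render_right_view(left, disp)  # holes appear where the
                                       # foreground uncovered background
```

Pairing the original frame with such a synthesized view yields the stereoscopic output of a 2D-to-3D conversion pipeline.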