DISPARITY REMAPPING FOR HANDHELD 3D VIDEO COMMUNICATIONS

Stephen Mangiat and Jerry Gibson
Electrical and Computer Engineering
University of California, Santa Barbara, CA 93106
{smangiat, gibson}@ece.ucsb.edu

This work has been supported by Huawei Technologies Co. Ltd. and HTC Corporation.

ABSTRACT

With the advent of glasses-free autostereoscopic handheld displays, a number of applications may be enhanced by 3D perception, particularly mobile video communications. On handheld devices, front-facing stereo cameras can capture the two views necessary for 3D display. However, the short distance between the user and the cameras introduces large disparities and other stereoscopic challenges that have traditionally plagued close-up stereo photography. Maintaining both viewer comfort and 3D fusion under varying conditions is therefore a technological priority. In this paper, we discuss the main stereoscopic concerns of handheld 3D video communications and outline a new post-processing technique to remap on-screen disparities for viewer comfort.

Index Terms— 3D Video, Disparity Remapping, Stereoscopy, Mobile Videoconferencing

1. INTRODUCTION

Stereoscopy enhances the realism of images and video by presenting different views to each eye, creating an illusion of depth. The brain can reconstruct 3D volumes from 2D imagery using a variety of monoscopic depth cues (often shaped by prior experience). However, stereoscopy eases this process by directly presenting a fundamental optical depth cue [1]. Views of faces can be dramatically enhanced by 3D, as the mind naturally expects faces to exhibit particular structure and depth features.

Traditional 3D displays are unsuitable for video communications because they require glasses, yet handheld autostereoscopic displays eliminate this hurdle. There now exists a new opportunity for realistic communications, using handheld systems that can comfortably display 3D video of a user's face. The key difficulty is finding a balance between viewing comfort and 3D perception.

Stereoscopy stimulates eye convergence (the brain processes retinal disparities and converges the eyes to fuse a particular depth). When viewing real-world objects, convergence is directly linked to accommodation (the pupils adjust to focus light from the desired depth). However, when viewing imagery on a stereoscopic display, this link is broken because the eyes must always focus light at the distance of the display, regardless of their convergence angle. This disconnect is a main source of discomfort and eye fatigue [2], and it is inherent to all stereoscopic systems that use planar screens.

The discomfort caused by this "vergence-accommodation conflict" can be mitigated by shrinking the stereo baseline to limit disparities, which unfortunately means reducing the 3D effect. When the stereo baseline is too small, the resultant video will exhibit the "cardboard cutout" effect and will not capture 3D structure within the user's face [1]. In this case, stereo cameras would fail to enhance immersion.

In Sec. 2, we discuss stereoscopic concerns that help determine the placement of two front-facing cameras for 3D video communications. Once camera positioning is fixed, post-processing methods may be used to remap disparities for comfortable viewing. Prior methods are not immediately applicable to video communications due to artifacts such as depth discontinuity and warping, in addition to complexity constraints. In Sec. 4, we introduce a new method to shift disparities using a rough depth estimate and knowledge of depths within a face. Sample results are shown in Sec. 5, followed by conclusions and future work in Sec. 6.

2. STEREOSCOPIC VIEWING COMFORT

Problems contributing to stereoscopic viewing discomfort include ghosting/crosstalk, misalignment, vertical disparities, temporal discontinuities, and the vergence-accommodation conflict [1]. Video "quality" is also heavily dependent upon the unique perception and anatomy of individual users. Stereographers typically rely on rules of thumb, resulting in ambiguity throughout the literature. However, recent advances in the science of stereoscopic viewing provide a foundation for new stereo video guidelines [2], [3].

2.1. The Zone of Comfort

The vergence-accommodation conflict arises from the difference between vergence distance (in front of or behind the screen) and viewing distance (always on the screen). Figure 1 shows the relationship between these distances, both measured in diopters D (the inverse of distance in meters) [3]. The center diagonal represents standard viewing (no conflict). The surrounding lines marked "near" and "far" represent the largest conflict that can be comfortably viewed, either behind the screen (far) or in front of the screen (near). The region between these lines is referred to as the "zone of comfort" (Percival and Sheard [3]).

Fig. 1. Zone of Comfort [3]

Typical viewing distances for different sized displays are also marked in Fig. 1. For mobile devices, the average viewing distance shown here is 3 diopters (33 cm); however, for our analysis we use a nearer viewing distance of 30 cm. Figure 1 illustrates that a typical viewer can comfortably fuse disparities depicting 3D objects between 26 cm and 39 cm away. Given an interocular distance of 65 mm, these depths correspond to on-screen disparities of 10 mm (crossed) and 15 mm (uncrossed). As depicted in Fig. 2, crossed disparities appear in front of the display and uncrossed disparities appear behind the display. For mobile devices, Shibata et al. [3] found that objects that appear in front of the display are less comfortable to view than those that appear behind the display (the opposite is true for larger displays).

Fig. 2. Uncrossed and Crossed On-Screen Disparities

2.2. The Stereoscopic Window

In addition to disparity range, the stereoscopic window (a.k.a. the proscenium rule [4]) has a significant effect on viewing comfort. A 3D scene reproduced by stereoscopic images is viewed through a window defined by the edges of the display. Information behind the window edges is missing, producing conflicts that the brain cannot resolve. As a rule, objects in front of the screen should never cross the left or right edges of the window (except in brief bursts). It may be possible to "bend" the stereoscopic window at the top and bottom edges; however, when viewing close-ups of a face, stereographers advise that the top of the head should never appear cut off while in front of the display [1]. It is difficult to completely avoid a scenario where the user's head is cropped, and the neck/shoulders will always reach the bottom and sides of the screen. It is therefore evident that scene depth should only be placed on and behind the screen during a 3D video call.

3. CONTROLLING DISPARITY

The factors described in Sec. 2 provide guidelines for the range of disparities that can be comfortably viewed on a handheld stereo display. Methods to control disparity can then be divided into two categories: (1) stereo camera/display setup and (2) post-processing.

3.1. Camera Convergence

One option to remove disparity at a particular depth is to "toe in" the stereo cameras, yet this is impractical since the optimal depth varies. Alternatively, when the optical axes are parallel, the convergence depth is placed at infinity (objects at infinite depth will appear in the same location in each image). All objects will appear in front of the display, so images must be shifted in post-processing. Yet, since disparity goes to zero as depth increases, the largest disparity is now determined by the depth of the closest object, i.e. the user's face.

3.2. Camera Baseline

For parallel cameras, the disparity d of an object is proportional to the baseline (the distance between the cameras), with

$d = f\,b/Z$,  (1)

where f is the camera focal length, Z is the object depth, and b is the baseline (pinhole camera model). It is often desirable to use a baseline equal to the human interocular distance (65 mm), yet this is not always the case. A rule of thumb used by stereographers is that the baseline should be at most 3% of the distance to the nearest object [1]. In a handheld scenario, with the user's face about 300 mm from the cameras, the baseline would be only 9 mm! However, since the amount of depth is also proportional to the baseline, the scene will appear more two-dimensional as the baseline is decreased.

In fact, the difference between recording and viewing geometries introduces depth distortion. In order to accurately preserve 3D shape, a camera/display system must adhere to the depth consistency rule,

$b/Z = b'/Z'$,  (2)

where b and Z are the baseline and depth in camera recording space, and b' and Z' are the baseline and depth in display space [4]. If two users communicate using the same device, then Z and Z' are approximately equal (depending on arm length). However, b must be significantly smaller than b', the interocular distance. This means that the roundness factor (the ratio between b/Z and b'/Z') is much less than one, leading to the "cardboard cutout" effect described above.

In prior warping-based post-processing methods, a sparse set of correspondences and pixel importance metrics are used to compute a deformation of the input views in order to meet target disparities. Although such a method attempts to warp the images only in smooth and unimportant regions, it may introduce distortions that would counteract any perceptual advantages of 3D. These methods are also ill suited for real-time applications. As such, we investigate a new shift convergence algorithm informed by the unique stereoscopic constraints of handheld 3D video communications.

4. DISPARITY REMAPPING

In order to maximize both depth perception and viewing comfort during a 3D video call, the object nearest to the cameras should always be placed on the screen. This will normally correspond to the tip of the user's nose, yet this may vary.
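The comfort limits in Sec. 2.1 follow from simple viewing geometry: by similar triangles, an object fused at apparent depth Z, viewed from distance D with interocular separation e, requires an on-screen disparity of e(Z - D)/Z. The sketch below (not from the paper; the function name and sign convention are our own) reproduces the paper's numbers for a 30 cm viewing distance and a 65 mm interocular distance.

```python
def screen_disparity(Z, D=0.30, e=0.065):
    """On-screen disparity (meters) needed to fuse an object at apparent
    depth Z (meters), viewed from distance D with interocular distance e.
    Positive = uncrossed (behind the screen), negative = crossed (in front)."""
    return e * (Z - D) / Z

# Comfort limits from Sec. 2.1 at a 30 cm viewing distance:
far_limit = screen_disparity(0.39)   # ~ +0.015 m: 15 mm uncrossed
near_limit = screen_disparity(0.26)  # ~ -0.010 m: 10 mm crossed
```

Evaluating at the 26 cm and 39 cm depth limits recovers exactly the 10 mm crossed and 15 mm uncrossed disparities quoted in the text.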
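Equation (1) and the 3% baseline rule of thumb from Sec. 3.2 can be checked numerically. This is a sketch under the paper's pinhole model; the focal length value below is an arbitrary illustration, not a figure from the paper.

```python
def disparity(f, b, Z):
    """Pixel disparity for parallel pinhole cameras, Eq. (1): d = f * b / Z."""
    return f * b / Z

def rule_of_thumb_baseline(nearest_Z, fraction=0.03):
    """Stereographers' rule: baseline at most ~3% of nearest object distance."""
    return fraction * nearest_Z

# Handheld scenario from Sec. 3.2: face ~300 mm from the cameras.
b = rule_of_thumb_baseline(300.0)       # 9 mm baseline
d = disparity(500.0, b, 300.0)          # disparity for an assumed f of 500 px
```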
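The depth distortion implied by Eq. (2) can be quantified by the roundness factor. A minimal sketch, assuming (as the text suggests) that capture and viewing distances are roughly equal when two users hold identical devices:

```python
def roundness(b, Z, b_prime=65.0, Z_prime=None):
    """Roundness factor (b/Z) / (b'/Z') from the depth consistency rule,
    Eq. (2). A value of 1.0 means 3D shape is preserved; values well
    below 1.0 flatten the scene (the cardboard cutout effect)."""
    if Z_prime is None:
        Z_prime = Z  # same device both ends: viewing distance ~ capture distance
    return (b / Z) / (b_prime / Z_prime)

# 9 mm capture baseline vs. 65 mm interocular at matched distances:
r = roundness(9.0, 300.0)  # = 9/65, much less than one
```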
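The core of shift convergence is horizontal image translation: cropping d columns from the left edge of the left view and the right edge of the right view subtracts d from every disparity, so the nearest object (disparity d_near) lands at zero disparity, on the screen plane, and everything else moves behind it. The sketch below illustrates only this shift step; it is not the paper's full algorithm, which also uses a rough depth estimate and knowledge of depths within a face.

```python
import numpy as np

def shift_convergence(left, right, d_near):
    """Horizontal image translation: crop d_near columns from the left edge
    of the left view and the right edge of the right view. This subtracts
    d_near pixels from every disparity, placing the nearest object (which
    had disparity d_near) on the screen plane."""
    w = left.shape[1]
    return left[:, d_near:], right[:, : w - d_near]
```

After the shift, all remaining disparities are uncrossed, consistent with the guideline in Sec. 2.2 that scene depth belongs on or behind the screen during a video call.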