Depth-of-Field Blur Effects for First-Person Navigation in Virtual Environments

Sébastien Hillaire, INRIA/France Télécom R&D
Anatole Lécuyer, INRIA/Collège de France
Rémi Cozot, University of Rennes 1/INRIA
Géry Casiez, University of Lille 1/INRIA

Depth-of-field blur effects are well-known depth cues in human vision. Computer graphics pipelines added DoF effects early to enhance imagery realism, but real-time VR applications haven't yet introduced visual blur effects. The authors describe new techniques to improve blur rendering and report experimental results from a prototype video game implementation.

In human vision, the depth of field (DoF) is the range of distances near the focus point where the eyes perceive the image as sharp. Objects behind and in front of the focus point are blurred. DoF and its associated blur effects are well-known classic depth cues in human vision.1 Virtual images that lack a DoF blur can sometimes look "too perfect" and therefore synthetic. System designers therefore added DoF blur effects early to computer graphics pipelines.2 Movies also use the classic blur effects of focal-distance changes to convey sensations or otherwise capture viewers' attention.

Real-time VR applications haven't yet introduced visual blur effects. We now have the programming capabilities and the processing power to compute them in real time. However, we don't know how such effects will influence user performance and subjective experience. We therefore need to

■ develop new models of realistic visual blur effects for virtual environments (VE), taking interactivity and real-time constraints into account, and
■ evaluate visual blur effects in terms of both VE user performance and subjective preferences.

Here, we describe a novel model of dynamic visual blur for first-person VE navigation. We also report results from an experiment to study how the blur effect influenced gamers' performance during a multiplayer first-person-shooter (FPS) session.

Visual Blur Effects for First-Person VE Navigation

We use a model for dynamic visual blur that combines DoF and peripheral blur effects. For landmark research and state-of-the-art implementations relative to these topics, see the sidebar, "Development and Related Work in Visual Blur Effects."

Sidebar: Development and Related Work in Visual Blur Effects

Computer graphics researchers introduced visual blur simulation early to improve the photorealistic aspect of synthetic images. Michael Potmesil and Indranil Chakravarty first proposed simulating an optical lens to produce depth-of-field (DoF) blur.1 Their algorithm uses the original sharp image, each pixel's depth, and a postprocessing step to compute the blur. The lens simulation provides the amount of each pixel's blur according to its depth: an out-of-focus point becomes a disk, or circle, after projection through the lens, and the diameter of the resulting circle of confusion (CoC) corresponds to the amount of blur.1

After Potmesil and Chakravarty's pioneering study, most researchers used this lens model to compute the DoF blur.2 However, Brian Barsky introduced the alternative concept of vision-realistic rendering, which uses all of an individual's optical-system characteristics.3 This let Barsky accurately simulate the foveal image scanned from wavefront data of human subjects as measured by an aberrometry device.

The main problem of DoF blur algorithms is color leaking across depth discontinuities. This artifact blurs the edges of in-focus objects that are in front of a blurred background. DoF algorithms compute the blur itself and avoid color leaking in different ways. In a survey of DoF algorithms, Joe Demers divides the techniques into three main categories: scattering, gathering, and diffusion.2 Gathering techniques (also called reverse-mapping techniques) use only the sharp image's pixels: for each of the final image's pixels, the algorithm gathers and blends source-image pixel colors that belong to the current pixel's CoC. Simple depth tests during the gathering step avoid color-leaking artifacts, and this approach is easily implemented on current graphics hardware.4
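To make the gathering idea concrete, here is a minimal CPU sketch in Python/NumPy: for each output pixel it blends source pixels sampled inside that pixel's circle of confusion, and a simple depth test skips samples belonging to sharp foreground objects, which is what would otherwise leak their color outward. The function name, sampling pattern, and thresholds are illustrative assumptions, not code from the article or from reference 4.

import numpy as np

def gather_blur(color, depth, coc_radius, focus_depth, num_samples=16, seed=0):
    """Naive gathering (reverse-mapping) depth-of-field blur.

    color      : (H, W, 3) float array, sharp source image.
    depth      : (H, W) float array, per-pixel scene depth.
    coc_radius : (H, W) float array, circle-of-confusion radius in pixels.
    focus_depth: focal distance used by the leak-avoiding depth test.
    """
    rng = np.random.default_rng(seed)
    h, w, _ = color.shape
    out = np.zeros_like(color)

    for y in range(h):
        for x in range(w):
            r = coc_radius[y, x]
            if r < 0.5:                      # pixel is in focus: keep it sharp
                out[y, x] = color[y, x]
                continue
            # Sample offsets inside this pixel's circle of confusion.
            angles = rng.uniform(0.0, 2.0 * np.pi, num_samples)
            radii = r * np.sqrt(rng.uniform(0.0, 1.0, num_samples))
            xs = np.clip((x + radii * np.cos(angles)).astype(int), 0, w - 1)
            ys = np.clip((y + radii * np.sin(angles)).astype(int), 0, h - 1)

            accum = color[y, x].copy()
            weight = 1.0
            for sx, sy in zip(xs, ys):
                # Depth test: ignore sharp samples in front of the focus point,
                # so in-focus foreground objects do not bleed into the blur.
                if depth[sy, sx] < focus_depth and coc_radius[sy, sx] < 0.5:
                    continue
                accum += color[sy, sx]
                weight += 1.0
            out[y, x] = accum / weight
    return out

A real-time version performs the same gather in a fragment shader with a small fixed sample pattern, but the loop structure is essentially the same.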
Other blur effects can further enhance digital images' appearance. For instance, peripheral blur refers to the eye's coarser acuity from the fovea to the periphery.5 Nelson Max and Douglass Lerner define a motion blur that simulates the images obtained from a digital camera.6 This blur corresponds to the recording of objects that move rapidly: integrating such images while the shutter is open generates a blur.

Julian Brooker and Paul Sharkey investigated the DoF blur effect using a stereoscopic display and an eye-tracking system to find a path in a 3D labyrinth.7 However, their results show no evidence of performance improvement. They concluded that their application's very slow frame rate might have been the cause and suggested implementing and further evaluating real-time DoF blur effects in virtual environments.

Przemyslaw Rokita first suggested using visual blur effects in VR.8 In the latest generation of video games, Epic Games' Unreal Engine 3 (www.epicgames.com) and Crytek's CryEngine 2 (www.crytek.com) propose temporary DoF blur together with motion blur. Techland's Chrome Engine (www.development.techland.pl) also introduces a dynamic DoF blur effect, but it remains limited to a "sniper mode" with only a few depth planes. All of these DoF blur effects suffer from leaking artifacts.

Sidebar references

1. M. Potmesil and I. Chakravarty, "A Lens and Aperture Camera Model for Synthetic Image Generation," Proc. Siggraph, ACM Press, 1981, pp. 298–306.
2. J. Demers, "Depth of Field: A Survey of Techniques," GPU Gems, R. Fernando, ed., Addison-Wesley, 2004, pp. 375–390.
3. B.A. Barsky, "Vision-Realistic Rendering: Simulation of the Scanned Foveal Image from Wavefront Data of Human Subjects," Proc. Symp. Applied Perception in Graphics and Visualization, ACM Press, 2004, pp. 73–81.
4. G. Riguer, N. Tatarchuk, and J. Isidoro, "Real-Time Depth of Field Simulation," ShaderX2: Shader Programming Tips and Tricks with DirectX 9, W. Engel, ed., Wordware, 2003, pp. 529–556.
5. M. Sereno et al., "Borders of Multiple Visual Areas in Humans Revealed by Functional Magnetic Resonance Imaging," Science, vol. 268, no. 5212, 1995, pp. 889–893.
6. N. Max and D. Lerner, "A Two-and-a-Half-D Motion-Blur Algorithm," Proc. Siggraph, ACM Press, 1985, pp. 85–93.
7. J.P. Brooker and P.M. Sharkey, "Operator Performance Evaluation of Controlled Depth of Field in a Stereographically Displayed Virtual Environment," Stereoscopic Displays and Virtual Reality Systems VIII, Proc. SPIE, vol. 4297, 2001, pp. 408–417.
8. P. Rokita, "Generating Depth-of-Field Effects in Virtual Reality Applications," IEEE Computer Graphics & Applications, vol. 16, no. 2, 1996, pp. 18–21.

The DoF Blur Effect

This effect simulates visual blurring by blurring the pixels of objects in front of or behind the focus point. The focus point is associated with a focal distance (fd), the distance between the eyes (or camera) and the focus point.

The lens model. We use the classic lens model introduced by Michael Potmesil and Indranil Chakravarty.2 In this model, the amount of blur, that is, the diameter of the circle of confusion (DCoCdof) of a point projected on screen, is

DCoCdof = (D · f · |fd − z|) / (fd · (z − f))    (1)

where D is the lens diameter, f is the lens focal length, fd is the focal distance, and z is the point depth.
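To make the lens model concrete, the following NumPy sketch evaluates Equation 1 over a whole depth buffer at once. The function name and the default lens parameters are illustrative assumptions, not values taken from the article.

import numpy as np

def coc_diameter(depth, focal_distance, lens_diameter=0.035, focal_length=0.05):
    """Circle-of-confusion diameter for each pixel, following Equation 1.

    depth          : array of per-pixel distances z from the camera.
    focal_distance : distance fd at which the scene is perfectly sharp.
    lens_diameter  : lens aperture D (illustrative default).
    focal_length   : lens focal length f (illustrative default).
    """
    z = np.asarray(depth, dtype=np.float64)
    numerator = lens_diameter * focal_length * np.abs(focal_distance - z)
    denominator = focal_distance * (z - focal_length)
    return numerator / denominator

# Points at the focal distance get a zero-diameter circle of confusion
# (perfectly sharp); the diameter grows for points nearer or farther than fd.
depths = np.array([0.5, 2.0, 5.0, 50.0])
print(coc_diameter(depths, focal_distance=2.0))

Once scaled to screen pixels, this diameter tells the blur filter how wide a neighborhood to sample for each pixel.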
The autofocus zone. Eye-tracking systems offer an optimal way to determine the focal distance in real time. However, such devices are expensive, complex, and unavailable for a mass market. In the absence of such a system, we chose a paradigm used in FPS games, where users employ a 2D mouse and keyboard to manipulate a virtual visor always located at the screen's center. In this approach, we can assume the user looks mainly at the part of the screen close to the visor. In fact, using an eye-tracking system, Alan Kenny and his colleagues found that more than 82 percent of the time, FPS video gamers indeed watched a central area corresponding to half the monitor's size.

We therefore introduce a notion called the autofocus zone: an area at the screen's center that the user is supposed to look at preferentially. This recalls digital-camera autofocus systems, which also aim to determine an appropriate focal distance when taking a picture. We can compute the depth of autofocus-zone pixels by using an auxiliary buffer. As in digital cameras, the function that computes the focal distance from the depths of all pixels in the autofocus zone could be a minimum, a maximum, or an average. In FPS games, some objects in the environment are more important, for example, enemies or bonus objects.

[Figure 1. The depth-of-field blur when using a rectangular autofocus zone (the white rectangle): (a) without ...]

When we implemented the final blur model to study its influence on gamers' performance, we set WGmin to 0.7, WGmax to 1, WSmin to 0.004, and WSmax to 1.

GPU computation of focal distance. Computing Equation 2 on a CPU takes a long time, especially for a large autofocus zone. We therefore compute the focal distance using a general-purpose technique ...
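As a rough CPU-side illustration of the autofocus zone, the sketch below crops the depth buffer to a centered rectangle and reduces the depths with a chosen operator (minimum, maximum, or average). The zone size, the operator choice, and all names here are illustrative assumptions, not the authors' weighted, GPU-based implementation.

import numpy as np

def autofocus_focal_distance(depth_buffer, zone_fraction=0.5, reduce="average"):
    """Estimate the focal distance from depths inside a centered
    rectangular autofocus zone.

    depth_buffer : (H, W) array of per-pixel scene depths.
    zone_fraction: fraction of each screen dimension covered by the zone
                   (0.5 roughly matches the central area reported by Kenny et al.).
    reduce       : 'min', 'max', or 'average', as in digital-camera autofocus.
    """
    h, w = depth_buffer.shape
    zh, zw = int(h * zone_fraction), int(w * zone_fraction)
    top, left = (h - zh) // 2, (w - zw) // 2
    zone = depth_buffer[top:top + zh, left:left + zw]

    if reduce == "min":
        return float(zone.min())
    if reduce == "max":
        return float(zone.max())
    return float(zone.mean())

# Example: a synthetic depth buffer with a close object near the visor.
depth = np.full((480, 640), 30.0)          # far background at 30 units
depth[200:280, 280:360] = 3.0              # object near the screen center
print(autofocus_focal_distance(depth, reduce="min"))   # focuses on the near object

On the GPU, an equivalent reduction is commonly implemented by repeatedly downsampling the autofocus-zone depths in a shader until a single value remains, which is one way to realize the general-purpose technique the text refers to.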
