
Imaging defects of real camera systems

Traditional 3D rendering uses a perfect, idealized camera model that is easy to define in mathematical terms and provides reasonably fast and efficient rendering. Real-world cameras, and similar optical imaging systems such as human eyes, exhibit quite a few inherent defects, some of which are actually essential to the perceived "realism" of an image. Images that are too perfect easily stand out as synthetic. The following text is an attempt to briefly summarize the various defects and their causes, along with some examples of how they may be simulated using digital methods. Higher resolution versions of the example images can be found on the web at: http://www.itn.liu.se/~stegu/TNM057-2002/.

Depth of field

A synthetic camera renders everything in view with perfect sharpness. A real camera has a finite depth of field, so that only objects at a certain distance, in the focal plane, are rendered distinctly, and objects at other distances are progressively more blurred with increasing distance to the focal plane. The depth of field depends on the focal length of the lens and the aperture. Telephoto lenses generally have a smaller depth of field than wide angle lenses. A smaller numerical aperture (a larger aperture opening) lets more light through the lens but yields a smaller depth of field.

Depth of field is one of the most prominent and artistically important effects of a real camera, but unfortunately it is rather difficult to simulate. One option is to cheat by saving the depth information for each pixel in the rendered scene, and blurring the image in a post-processing step with a convolution filter that is chosen differently for different parts of the image depending on the distance to the objects. This is a comparatively cheap simulation, and it is quite often sufficient, but it does have problems around edges between objects that are in focus and objects that are not.

A common method for depth of field simulation in raytracing is multisampling. Several camera rays can be traced through each image pixel on the projection plane from slightly different projection reference points. The resulting intensity for the pixel is calculated as the average of the intensity values from the individual rays. All camera rays for one image pixel are traced so that they converge in a common projection plane, which becomes the focal plane of the rendering. The area over which the projection reference points for the different rays are spread is directly equivalent to the aperture of the camera.

The equivalent of multisampling can be done for regular scanline-based rendering as well. Depth of field can be simulated by projecting the scene onto the projection plane from several slightly separated projection reference points, and averaging a fairly large number of images.

Figure 1: Depth of field (DOF). Left to right: infinite DOF, post effect DOF, multisample DOF

Defocus

A real camera, particularly if it is of low quality, often has a visibly imperfect focus even in the focal plane. A generally defocused image is quite easy to simulate by applying a local averaging filter to the image. Blurring an image on a wholesale basis like this is a very common and simple operation available in almost any image editing program. Formally, the blurring operation is a convolution with a filter kernel. Small kernels give little blur, and large kernels give heavy blur. For most blurring operations, a simple Gaussian filter kernel shape gives good results.
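
To make the defocus operation concrete, here is a minimal sketch of a separable Gaussian blur written in Python with NumPy. The function names and the 3-sigma kernel radius are choices made for this example, not anything prescribed by the text; any image editing library offers an equivalent operation.

    import numpy as np

    def gaussian_kernel(sigma):
        """1-D Gaussian kernel, truncated at about 3 sigma and normalized to sum to one."""
        radius = max(1, int(3.0 * sigma + 0.5))
        x = np.arange(-radius, radius + 1)
        k = np.exp(-0.5 * (x / sigma) ** 2)
        return k / k.sum()

    def gaussian_blur(image, sigma):
        """Defocus blur of a greyscale float image: convolve the rows, then the columns."""
        k = gaussian_kernel(sigma)
        blurred = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, image)
        blurred = np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, blurred)
        return blurred

A sigma below one pixel gives a barely noticeable softening, while a sigma of several pixels gives the heavy blur of a badly defocused lens.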
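The depth-buffer post effect for depth of field described earlier can be sketched in the same style. It reuses gaussian_blur() from the sketch above; the parameter names and the idea of selecting from a small stack of pre-blurred copies are assumptions made for this example.

    import numpy as np

    def depth_of_field_post(image, depth, focal_depth, depth_scale, max_sigma=8.0, levels=5):
        """Post-effect DOF: pick, per pixel, a pre-blurred copy of the image whose
        blur matches the pixel's distance from the focal plane.
        'depth' is the per-pixel depth map saved by the renderer."""
        # Desired blur grows linearly with distance from the focal plane, clamped to max_sigma.
        sigma_map = np.clip(np.abs(depth - focal_depth) / depth_scale * max_sigma, 0.0, max_sigma)
        # Precompute a small stack of progressively blurred images.
        sigmas = np.linspace(0.0, max_sigma, levels)
        stack = [image] + [gaussian_blur(image, s) for s in sigmas[1:]]
        # For each pixel, choose the stack level closest to the desired blur.
        level = np.rint(sigma_map / max_sigma * (levels - 1)).astype(int)
        return np.choose(level, stack)

The edge artefacts mentioned above come from exactly this kind of per-pixel selection: pixels on either side of a depth discontinuity are blurred by different amounts, so foreground and background mix incorrectly near the edge.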
Noise and grain

Noise in electronic image sensors, and its analog counterpart, the grain in photographic film, are defects that are often clearly noticeable in real-world images, and may therefore need to be added to synthetic images, particularly if they are to be blended with real images. Electronic noise and film grain look quite different, but both are quite easily simulated as an image post effect by adding some randomness to the pixels of the perfect synthetic image.

Figure 2: Blur and grain. Left to right: sharp image, blurred image, simulated film grain

Motion blur

Another common defect is motion blur, which occurs because a real camera has a shutter. Some time is required for the available light to expose a photographic film or produce a large enough signal in an electronic image sensor, and any objects that move in the image during that time are blurred. A simple synthetic camera samples the scene at a single instant in time, and thus does not exhibit motion blur.

Motion blur is often simulated by averaging together several sampled images, distributed over some simulated shutter time. A successful motion blur simulation done in this way requires quite a few images, so motion blur often adds considerably to the rendering time. For raytracing, the multisampling mentioned above works well also for simulating motion blur. Motion blur and depth of field can be simulated together by distributing the ray samples for each pixel both in time and space, thereby cutting down somewhat on the total increase in rendering time.

However, methods that incorporate explicit averaging of images over time are general but rather costly. The effort can be reduced by performing the explicit averaging only in those areas of the image where objects move. Animation software packages have options for specifying which objects in a scene should be motion blurred. This is often called object motion blur, and it has the advantage of not adding anything to the rendering time for static parts of the scene.

Just like depth of field, motion blur may also be simulated by image post-processing methods. At rendering time, the renderer knows the position change of each object in the scene between one frame and the next. Velocity information can be stored for each object, or even for each pixel, making it possible to blur moving parts of the image without having to spend any extra work on the non-moving parts. This method is often referred to as image motion blur. For many applications it can be both faster and better than multisample motion blur.

Figure 3: Motion blur. Left to right: no motion blur, post effect blur, multisample blur

Camera motion blur is a similar but more global effect. Even static scenes can be blurred by camera motion. Blur from panning, tilting or rotating an otherwise stationary camera can be successfully simulated in a post-processing step as a local pixel averaging along the direction of the apparent motion in the image. Motion blur for camera trucking (changing the location of the camera in the scene) requires multisampling for an absolutely correct appearance, but that is only rarely an issue. It can be simulated much more easily and quite accurately using depth information for the scene. Even without depth information, cheap post-processing methods using linear or radial blurring can often be sufficient with proper care and thought.
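
The additive randomness described in the "Noise and grain" section above can be sketched as follows. The mid-tone weighting used to mimic film grain is an assumption made for this example, not something prescribed by the text.

    import numpy as np

    rng = np.random.default_rng(seed=1)

    def add_noise(image, sigma=0.03, grain=False):
        """Add zero-mean Gaussian noise to a float image with values in [0, 1].
        With grain=True the noise is weighted towards the mid tones, which looks
        somewhat more like photographic film grain than flat sensor noise."""
        noise = rng.normal(0.0, sigma, size=image.shape)
        if grain:
            noise *= 4.0 * image * (1.0 - image)   # strongest at 0.5, vanishes at black and white
        return np.clip(image + noise, 0.0, 1.0)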
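Multisample motion blur, as described above, is essentially an average of frames distributed over the shutter interval. A minimal sketch, assuming the surrounding renderer exposes a hypothetical render(t) callable that returns a float image for time t:

    import numpy as np

    def multisample_motion_blur(render, t_open, t_close, samples=16):
        """Average several renders distributed uniformly over the shutter interval.
        'render' is a hypothetical callable render(t) -> float image."""
        times = np.linspace(t_open, t_close, samples)
        frames = [render(t) for t in times]
        return np.mean(frames, axis=0)

The same averaging loop, with the projection reference point jittered per sample in addition to the time, is how motion blur and depth of field can share the cost of the extra samples.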
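The pan and tilt blur just described, a local pixel averaging along the apparent motion direction, can be sketched like this. The sample count and the use of integer pixel shifts are simplifications made for the example.

    import numpy as np

    def pan_blur(image, dx, dy, samples=16):
        """Camera pan/tilt blur as a post effect: average copies of the frame
        shifted along the apparent image motion (dx, dy), given in pixels.
        np.roll wraps around at the borders, which a real implementation
        would handle more carefully."""
        acc = np.zeros_like(image, dtype=float)
        for i in range(samples):
            t = i / max(samples - 1, 1)              # 0 .. 1 across the shutter time
            shift = (int(round(t * dy)), int(round(t * dx)))
            acc += np.roll(image, shift, axis=(0, 1))
        return acc / samples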
Figure 4: Post-effect camera motion blur. Top row: stationary camera, left-right pan, up-down tilt. Middle row: camera rotation, camera zoom.

Figure 5: Multisample camera motion blur. Left: sidewards motion. Right: forwards motion.

Glare, bleeding and soft focus

Glare manifests itself as glowing halos around strong light sources, and as a generally reduced contrast in dark parts of scenes with strongly illuminated objects. The effect is due to imperfections of the camera optics even at the focal plane. Most of the light is projected onto a small vicinity of the ideal projection point, but some smaller part of it ends up in a larger vicinity of that point, sometimes even spread out over the entire image. Glass imperfections, dust, dirt and fingerprints can all have that effect, as can reflections within the camera housing. (That's why the inside of a camera is matte and black.) Glare can also arise from lateral light transport in the film or in the image sensor, or from electrical crosstalk between image sensor pixels, although in these cases it is often referred to as bleeding. Glare is not necessarily due to the camera; it can also arise from atmospheric light scattering by dust particles.

Any type of glare can be easily simulated by convolving the image with a large filter kernel and adding a small part of the blurred result to the original image. Compared to defocusing, glare is a quite subtle secondary effect, but it is very visible around high intensity parts of the image. If the dynamic range of the digital image intensity is restricted to 8 bits, the glare effect has to be fudged by making a separate blurring of only the maximum intensity parts of the image. Rendering software often has a facility for specifying only some light sources or objects that should have simulated glare around them, and this is often good enough.

Generally speaking, glare simulation is best performed on real radiance values for the image pixels. If possible, truncation to 8-bit image pixels should not be done until after such simulations.

A strong and widespread glare is sometimes used for a dream-like or romantic effect called soft focus. Such strong glare is visible not only around light sources, but also around any reasonably light object. Commercial soft focus optical filters exist for photography, but a similar effect can be achieved very inexpensively by smearing some vaseline on part of the front lens. (The vaseline method is in common use by professional photographers.) As with glare, soft focus is easily simulated by a convolution filter.
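
As an illustration of the glare simulation described above, the following sketch blurs only the brightest parts of a high dynamic range image with a wide kernel and adds a small fraction of the result back. It reuses gaussian_blur() from the defocus sketch; the threshold and mixing amount are example values, not recommendations from the text.

    import numpy as np

    def add_glare(radiance, threshold=1.0, amount=0.1, sigma=25.0):
        """Glare post effect on per-pixel radiance values: a wide, soft halo
        built from only the most intense pixels, added weakly to the original."""
        bright = np.where(radiance > threshold, radiance, 0.0)  # keep only intense pixels
        halo = gaussian_blur(bright, sigma)                     # large kernel gives a wide halo
        return radiance + amount * halo

Increasing amount and lowering threshold pushes the same operation towards the soft focus look, where the halo becomes visible around every reasonably light object rather than only around the light sources.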