
Imaging defects of real camera systems

Traditional rendering uses a perfect, idealized camera model that is easy to define in mathematical terms and provides reasonably fast and efficient rendering. Real world cameras, and similar optical imaging systems such as human eyes, exhibit quite a few inherent defects, some of which are actually essential to the perceived "realism" of an image. Images that are too perfect easily stand out as being synthetic. The following text is an attempt to briefly summarize the various defects and their causes, along with some examples of how they may be simulated using digital methods. Higher resolution versions of the example images can be found on the web at: http://www.itn.liu.se/~stegu/TNM057-2002/.

Depth of field

A synthetic camera renders everything in view with perfect sharpness. A real camera has a finite depth of field, so that only objects at a certain distance, in the focal plane, are rendered distinctly, and objects at other distances are progressively more blurred with increasing distance to the focal plane. The depth of field depends on the focal length of the lens and the aperture. Telephoto lenses generally have a smaller depth of field than wide angle lenses. A lower f-number (a larger aperture opening) lets more light through the lens but yields a smaller depth of field.

Depth of field is one of the most prominent and artistically important effects of a real camera, but unfortunately it is rather difficult to simulate. One option is to cheat by saving the depth information for each pixel in the rendered scene, and blurring the image in a post-processing step by a convolution filter that is chosen differently for different parts of the image depending on the distance to the objects. This is a comparably cheap simulation, and it is quite often sufficient, but it does have its problems around edges between objects that are in focus and objects that are not.

A common method for depth of field simulation in raytracing is to use multisampling. Several camera rays can be traced through each image pixel on the projection plane from slightly different projection reference points. The resulting intensity for the pixel is calculated as the average of the intensity values from each ray. All camera rays for one image pixel are traced so that they converge in a common projection plane, which becomes the focal plane of the rendering. The area over which the projection reference points for the different rays are spread is directly equivalent to the aperture of the camera. The equivalent of multisampling can be used for regular scanline-based rendering as well: depth of field can be simulated by projecting the scene against the projection plane with several slightly separated projection reference points, and averaging a fairly large number of images.
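As a concrete illustration of the post-processing approach, the sketch below (Python with NumPy and SciPy, which are assumptions of this text rather than something prescribed by it) blurs a rendered image with a per-pixel amount of blur derived from a stored depth buffer. The function name, the parameters and the simple linear mapping from depth difference to blur amount are all ad hoc; a careful implementation would use a proper circle-of-confusion model and treat edges between sharp and blurred objects more carefully.

    # Hypothetical sketch of depth-based post-effect depth of field.
    # Assumes 'image' is an (H, W, 3) float array and 'depth' an (H, W)
    # array of per-pixel distances saved at rendering time.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def depth_of_field_blur(image, depth, focal_depth, strength=3.0, levels=8):
        # Crude circle-of-confusion proxy: 0 in focus, 1 far from the focal plane.
        coc = np.abs(depth - focal_depth) / (depth.max() - depth.min() + 1e-6)
        coc = np.clip(coc, 0.0, 1.0)
        # Precompute a stack of progressively blurred copies of the image.
        blurred = [image.astype(np.float64)]
        for i in range(1, levels):
            sigma = strength * i / (levels - 1)
            blurred.append(gaussian_filter(image.astype(np.float64), sigma=(sigma, sigma, 0)))
        stack = np.stack(blurred)                  # shape: (levels, H, W, 3)
        # Blend between the two nearest blur levels for each pixel.
        idx = coc * (levels - 1)
        lo = np.floor(idx).astype(int)
        hi = np.minimum(lo + 1, levels - 1)
        frac = (idx - lo)[..., None]
        rows, cols = np.indices(depth.shape)
        return (1 - frac) * stack[lo, rows, cols] + frac * stack[hi, rows, cols]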

Figure 1: Depth of field (DOF). Left to right: infinite DOF, post effect DOF, multisample DOF

Defocus

A real camera, particularly if it is of low quality, often has a visibly imperfect focus even in the focal plane. A generally defocused image is quite easy to simulate by applying a local averaging filter on the image. Blurring an image on a wholesale basis like this is a very common and simple operation available in almost any image editing program. Formally, the blurring operation is a convolution with a filter kernel. Smaller kernels give little blur, and large kernels give heavy blur. For most blurring operations, a simple Gaussian filter kernel shape gives good results.
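For a wholesale defocus blur of this kind, a minimal sketch could look as follows (again Python with SciPy, an assumption of this text); the sigma parameter plays the role of the filter kernel size.

    # Minimal sketch: defocus as a convolution with a Gaussian filter kernel.
    # A larger sigma corresponds to a larger kernel and a heavier blur.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def defocus(image, sigma=2.0):
        # Blur the two spatial dimensions only; leave the color channels alone.
        return gaussian_filter(image.astype(np.float64), sigma=(sigma, sigma, 0))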

Noise and grain

Noise in electronic image sensors, and its analog counterpart, the grain in photographic film, are defects that are often clearly noticeable in real-world images, and may therefore need to be added to synthetic images, particularly if they are to be blended with real images. Electronic noise and film grain look quite different, but both are quite easily simulated as an image post effect by adding some randomness to the pixel values of the perfect synthetic image.
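A possible sketch of the two kinds of randomness is given below; the function names, the Gaussian noise model and all parameter values are assumptions made for this example, not something specified in the text above.

    # Illustrative sketch of sensor-style noise versus a crude film grain,
    # for a float image with values in [0, 1].
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def add_sensor_noise(image, sigma=0.02, rng=None):
        # Zero-mean noise added independently per pixel and color channel.
        rng = np.random.default_rng() if rng is None else rng
        return np.clip(image + rng.normal(0.0, sigma, image.shape), 0.0, 1.0)

    def add_film_grain(image, sigma=0.1, grain_size=1.5, rng=None):
        # Film grain is clumpier and monochrome: low-pass filtered noise
        # applied equally to all color channels.
        rng = np.random.default_rng() if rng is None else rng
        noise = gaussian_filter(rng.normal(0.0, 1.0, image.shape[:2]), grain_size)
        noise *= sigma / (noise.std() + 1e-9)
        return np.clip(image + noise[..., None], 0.0, 1.0)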

Figure 2: Blur and grain. Left to right: sharp image, blurred image, simulated film grain

Motion blur

Another common defect is motion blur, which occurs because a real camera has a finite exposure time. Some time is required for the available light to expose a photographic film or give a large enough signal in an electronic image sensor, and any objects that move in the image during that time are blurred. A simple synthetic camera samples the scene at one single instant in time, and thus does not exhibit motion blur.

Motion blur is often simulated by averaging together several sampled images, distributed over some simulated shutter time. A successful motion blur simulation done in this way requires quite a few images, so motion blur often adds considerably to the rendering time. For raytracing, the multisampling mentioned above works well also for simulating motion blur. Motion blur and depth of field can be simulated together by distributing the ray samples for each pixel both in time and space, thereby cutting down some on the total increase in rendering time. However, methods which incorporate explicit averaging of images over time are general but rather costly. The effort can be reduced by performing the explicit averaging only in those areas of the image where objects move. Animation software packages have options for specifying which objects in a scene should be motion blurred. This is often called object motion blur, and it has the advantage of not adding anything to the rendering time for static parts of the scene.

Just like depth of field, motion blur may also be simulated by image post-processing methods. At rendering time, the renderer knows the position change of each object in the scene between one frame and the next. Velocity information can be stored for each object, or even for each pixel, making it possible to blur moving parts of the image without having to spend any extra work on the non-moving parts. This method is often referred to as image motion blur. For many applications it can be both faster and better than multisample motion blur.
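The averaging approach can be sketched in a few lines. Here render_frame is a hypothetical callback that renders (or looks up) the scene at a given time, and the number of samples is an arbitrary choice.

    # Sketch of multisample motion blur: average several frames distributed
    # over the simulated shutter interval [t_open, t_close].
    import numpy as np

    def multisample_motion_blur(render_frame, t_open, t_close, samples=16):
        times = np.linspace(t_open, t_close, samples)
        frames = [np.asarray(render_frame(t), dtype=np.float64) for t in times]
        return np.mean(frames, axis=0)

Distributing the same sample times over the spread of projection reference points used for depth of field gives the combined depth of field and motion blur effect mentioned above.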

Figure 3: Motion blur. Left to right: no motion blur, post effect blur, multisample blur

Camera motion blur is a similar but more global effect. Even static scenes can be blurred by a camera motion. Blur from panning, tilting or rotating an otherwise stationary camera can be successfully simulated in a post-processing step as a local pixel averaging along the direction of the apparent motion in the image. Motion blur for camera trucking (changing the location of the camera in the scene) requires multisampling for an absolutely correct appearance, but that is only rarely an issue. It can be simulated a lot more easily and quite accurately using depth information for the scene. Even without depth information, cheap post-processing methods using linear or radial blurring can often be sufficient with proper care and thought.
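A pan or tilt can be approximated by averaging the image with shifted copies of itself along the direction of apparent motion, as in this rough sketch (the wrap-around behaviour of np.roll at the image edges is a simplification; a careful implementation would clamp or pad instead).

    # Sketch of post-effect camera motion blur for a pan/tilt: a local pixel
    # averaging along the direction (shift_x, shift_y) of apparent motion.
    import numpy as np

    def pan_blur(image, shift_x=8, shift_y=0, samples=9):
        acc = np.zeros_like(image, dtype=np.float64)
        for i in range(samples):
            f = i / (samples - 1) - 0.5        # sample positions from -0.5 to +0.5
            dx, dy = int(round(f * shift_x)), int(round(f * shift_y))
            acc += np.roll(image, (dy, dx), axis=(0, 1))
        return acc / samples

Rotation and zoom blur could be handled the same way by averaging slightly rotated or rescaled copies of the image instead of shifted ones.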

Figure 4: Post-effect camera motion blur. Top row: stationary camera, left-right pan, up-down tilt. Middle row: camera rotation, camera zoom.
Figure 5: Multisample camera motion blur. Left: sidewards motion. Right: forwards motion.

Glare, bleeding and soft focus

Glare manifests itself as glowing halos around strong light sources, and a generally reduced contrast for dark parts of scenes with strongly illuminated objects. The effect is due to imperfections of the camera optics even at the focal plane. Most of the light is projected onto a small vicinity of the ideal projection point, but some smaller part of it ends up in a larger vicinity of that point, sometimes even spread out over the entire image. Glass imperfections, dust, dirt and fingerprints can all have that effect, as can reflections within the camera housing. (That's why the inside of a camera is matte and black.) Glare can also arise from lateral light transport in the film or in the image sensor, or from electrical crosstalk between image sensor pixels, although in these cases it is often referred to as bleeding. Glare is not necessarily due to the camera. It can also arise from atmospheric light scattering by dust particles.

Any type of glare can be easily simulated by a convolution operation with a large filter kernel, adding a small part of the blurred result to the original image. Compared to defocusing, glare is a quite subtle secondary effect, but it is very visible around high intensity parts of the image. If the dynamic range for the intensity is restricted to 8 bits, the glare effect has to be fudged by making a separate blurring of only the maximum intensity parts of the image. In rendering software, there is often a facility for specifying only some light sources or objects which should have simulated glare around them, and this is often good enough. Generally speaking, glare simulation is best performed on real radiance values for the image pixels. If possible, truncation to 8 bit image pixels should not be done until after such simulations.

A strong and wide-spread glare is sometimes used for a dream-like or romantic effect called soft focus. Such strong glare is not only visible around light sources, but also around any reasonably light object. Commercial soft focus optical filters exist, but a similar effect can be achieved very inexpensively by smearing some vaseline on part of the front lens. (The vaseline method is in common use by professional photographers.) As with glare, soft focus is easily simulated by a convolution filter. Soft focus, too, is best simulated from real radiance pixel values, even though an 8 bit input image can yield acceptable results, since the effect is strong and therefore not as isolated to the high intensity parts of the image.

Streaks

A scratch somewhere on the lens will give a visible streak in the image. The streak emanates from strong light sources and runs perpendicular to the scratch. A lens with many scratches on it will make a star-like pattern around light sources in the image. This can in fact be a nice artistic effect, and there are optical star effect filters available. Streaks are easily simulated from images with real radiance values by means of convolution filters. If an 8 bit input image is used, the effect has to be isolated to only certain parts of the image, just like glare.
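The glare recipe above translates almost directly into code. The sketch below assumes floating point radiance values (or, for 8-bit input, a thresholded copy containing only the brightest pixels); the parameter values are arbitrary. Streaks could be simulated in the same spirit by replacing the symmetric Gaussian with a long, thin kernel oriented perpendicular to the simulated scratch.

    # Sketch of a glare post effect: blur the image with a large kernel and
    # add a small fraction of the blurred result back to the original.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def add_glare(radiance, spread=25.0, amount=0.05):
        # 'radiance' is an (H, W, 3) float image of real radiance values.
        halo = gaussian_filter(radiance, sigma=(spread, spread, 0))
        return radiance + amount * halo

    def add_glare_8bit(image, threshold=0.9, spread=25.0, amount=0.3):
        # For limited dynamic range input (scaled to [0, 1]), blur only the
        # maximum intensity parts of the image, as described above.
        bright = np.where(image >= threshold, image, 0.0)
        halo = gaussian_filter(bright, sigma=(spread, spread, 0))
        return np.clip(image + amount * halo, 0.0, 1.0)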

Lens flare

Lens flare is a popular effect which easily gets cheesy if it is over-used, but in some cases it is desirable. Lens flare is due to multiple internal reflections between the lens surfaces of the optics, and exhibits itself as halos, light spots and repeated images of the aperture shape when a light source is in view, or close to being in view. Differently from glare and streak effects, a lens flare consists of several distinct light shapes distributed along a straight line in the image. The line runs through the high intensity light source and the center of the image. Because most real lenses have a circular field of view which is slightly larger than the diagonal of the film frame, lens flare might be present even if the light source is somewhat out of view. (The same holds true for glare and streaks.) To avoid unwanted lens flares in real cameras, a lens hood can be used to block out most parts of the field of view that fall outside the image, so that a lens flare is rarely present unless a strong light source is in direct view. Optics with many lens elements generally give stronger and more complex flares than those with only a few lens surfaces. Simulating a lens flare digitally in a simple manner is only a matter of blending a sequence of flare images into the picture along a straight line. Simulating it in a physically correct way requires detailed knowledge of the construction and material properties of a real lens, but that would probably never be an issue.
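A simple, decidedly non-physical lens flare can be sketched as below: a handful of additive "sprites" (here plain discs, drawn by a small helper written for this example) are placed along the line through the light source and the image center. All positions, radii and colors are made-up values chosen only to illustrate the idea.

    # Sketch of a simple lens flare: blend a sequence of flare shapes into
    # the picture along the line from the light source through the center.
    import numpy as np

    def draw_disc(image, cx, cy, radius, color):
        # Additively blend a filled disc into the image (helper for this sketch).
        h, w, _ = image.shape
        y, x = np.ogrid[:h, :w]
        mask = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
        image[mask] += color

    def add_lens_flare(image, light_xy, positions=(0.3, 0.55, 0.8, 1.3),
                       radii=(20, 12, 30, 16), intensity=0.15):
        out = image.astype(np.float64).copy()
        h, w, _ = out.shape
        cx, cy = w / 2.0, h / 2.0
        lx, ly = light_xy
        for t, r in zip(positions, radii):
            # t = 0 at the light source, t = 1 at the image center, t > 1 beyond it.
            px, py = lx + t * (cx - lx), ly + t * (cy - ly)
            draw_disc(out, px, py, r, np.array([1.0, 0.9, 0.7]) * intensity)
        return np.clip(out, 0.0, 1.0)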

Figure 6: Top row: no special effect; glare around the light source. Bottom row: glare and streaks; glare, streaks and lens flare.
Figure 7: Perfect focus (left) and soft focus (right)

Coronas

Coronas are similar to glare in that they appear around strong light sources, but they have a distinct, ring-like appearance, and they can shift in the colors of the rainbow. Coronas are not always a camera defect, but can also arise from light scattering or refraction by particles, ice crystals or water droplets in the atmosphere. A corona is rather easy to simulate using a post-processing step, either by using high dynamic range radiance values as input, or by isolating the effect to only some objects or light sources. It is not that visually important to get it exactly right. To simulate its exact appearance, a color sampling with significantly more than three channels is required, so-called hyperspectral color sampling.
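One way to fake a corona, sketched below under the same assumptions as the glare example (float input, arbitrary parameters), is to convolve the brightest pixels with a thin ring-shaped kernel; using a slightly different ring radius per RGB channel gives a hint of the rainbow-like color shift, even though a correct simulation would need hyperspectral sampling as noted above.

    # Rough corona sketch: convolve the brightest pixels with a thin ring
    # kernel, with a slightly different ring radius for each color channel.
    import numpy as np
    from scipy.signal import fftconvolve

    def ring_kernel(radius, thickness=2.0):
        size = int(2 * radius + 8)
        y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
        r = np.hypot(x, y)
        k = np.exp(-((r - radius) / thickness) ** 2)
        return k / k.sum()

    def add_corona(image, threshold=0.95, radius=40, amount=0.4):
        bright = np.where(image.max(axis=2, keepdims=True) >= threshold, image, 0.0)
        out = image.astype(np.float64).copy()
        for c, scale in enumerate((0.95, 1.0, 1.05)):
            out[..., c] += amount * fftconvolve(bright[..., c], ring_kernel(radius * scale), mode='same')
        return np.clip(out, 0.0, 1.0)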

Vignetting

A real camera has a higher intensity for the projected image in the middle than at the edges. This radial light falloff is called vignetting. It can sometimes be clearly visible, in particular for very cheap cameras. Vignetting occurs because the amount of light that is projected through the optics decreases with the angle of incidence θ against the optical axis, for several reasons. First, the apparent aperture size from an oblique angle decreases with cos θ. Second, the distance from the film plane to the aperture increases with 1/cos θ towards the edges of the image, giving an apparent size of the aperture as seen from the film plane that decreases with cos^2 θ. Third, the solid angle subtended by some small area (e.g. a sensor pixel) of the image sensor decreases with cos θ. The collective impact of these effects would mean that the image intensity should decrease with the fourth power of the cosine of the angle of incidence, cos^4 θ. This is often referred to as the "cos^4 law". The "law" is actually only valid for a pinhole camera. It is at least approximately correct for a single, fairly flat lens with a small aperture, but modern high quality optics are designed to reduce vignetting, so the effect is not very strong. If desired, vignetting is a very simple effect to simulate in a post-processing step, where each pixel is attenuated with a factor depending on the distance to the image centre.
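Such a post-processing step might look as follows; the mapping from pixel distance to angle of incidence assumes a hypothetical field of view and is only meant to illustrate the cos^4 falloff.

    # Minimal vignetting sketch: attenuate each pixel with a factor that
    # depends on its distance from the image centre (cos^4 falloff).
    import numpy as np

    def add_vignetting(image, fov_degrees=60.0):
        h, w = image.shape[:2]
        y, x = np.mgrid[:h, :w]
        # Distance from the centre, normalized so that the corner is 1.0.
        r = np.hypot(x - w / 2.0, y - h / 2.0) / np.hypot(w / 2.0, h / 2.0)
        theta = r * np.radians(fov_degrees / 2.0)   # crude distance-to-angle mapping
        return image * (np.cos(theta) ** 4)[..., None]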

Figure 8: No vignetting (left) and strong vignetting (right)

Field distortion

Real camera lenses might project an image onto the image plane somewhat differently from a perspective projection. This is particularly true for large focal range zoom optics, very wide angle lenses or very cheap lenses. The most prominent distortions are referred to as pincushion distortion and barrel distortion, where straight lines in the scene exhibit a curvature inwards or outwards from the image centre. The distortion is worse towards the image edges. Other field distortions exist, but pincushion or barrel distortion is the most prominent defect. If it should happen to be a desired effect, simulating it digitally is a simple post-processing resampling step.
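A minimal resampling sketch is given below, using the common one-parameter radial model r' = r(1 + k r^2); this particular model and the parameter values are assumptions made for the example, not something specified in the text above. With this inverse mapping, a positive k should give barrel distortion and a negative k pincushion distortion.

    # Sketch of barrel/pincushion distortion as a post-processing resampling.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def radial_distort(image, k=0.15):
        h, w = image.shape[:2]
        y, x = np.mgrid[:h, :w].astype(np.float64)
        cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
        xn, yn = (x - cx) / cx, (y - cy) / cy      # normalized centre coordinates
        scale = 1.0 + k * (xn ** 2 + yn ** 2)
        src_x, src_y = cx + xn * scale * cx, cy + yn * scale * cy
        out = np.empty_like(image, dtype=np.float64)
        for c in range(image.shape[2]):
            # Sample the source image at the distorted coordinates, per channel.
            out[..., c] = map_coordinates(image[..., c], [src_y, src_x], order=1, mode='nearest')
        return out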

Color aberration

Optical materials have an index of refraction that varies slightly with the wavelength of light. This makes it difficult (in fact impossible) to build lenses that have exactly the same imaging properties over the entire visible range of wavelengths. Modern lens makers generally do a very good job of compensating for this, but it is still a problem. Wavelength variations yield color aberration, which manifests itself as a colored, rainbow-like edge ghosting towards the periphery of the image. It is doubtful whether anyone would like to simulate color aberration digitally, since it is a quite disturbing artifact, but similar results could be achieved by resampling the R, G and B color channels to simulate slightly different field distortions for the different channels. To simulate color aberration in a physically correct manner, hyperspectral color sampling is required.
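Reusing the radial_distort() helper sketched under Field distortion above, a crude color aberration effect could resample the R and B channels with slightly different distortion strengths than G; the k values here are arbitrary.

    # Sketch of color aberration via per-channel field distortion. Assumes
    # the radial_distort() function from the field distortion sketch above.
    import numpy as np

    def color_aberration(image, k_red=0.004, k_blue=-0.004):
        out = image.astype(np.float64).copy()
        out[..., 0] = radial_distort(image[..., 0:1], k=k_red)[..., 0]    # red channel
        out[..., 2] = radial_distort(image[..., 2:3], k=k_blue)[..., 0]   # blue channel
        return out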

Figure 9: Top row: ideal planar projection, barrel distortion. Bottom row: pincushion distortion, color aberration (simulated from RGB components).

Copyright Stefan Gustavson ([email protected]), 2002-01-26
You may copy and use this document freely for any non-commercial, not-for-profit purpose. Commercial or for-profit use is not allowed.