
Basic in 180 Days Book XI - Image Processing

Editor: Ramon F.

aeroramon.com

Contents

1 Day 1
  1.1 Digital image processing
    1.1.1 History
    1.1.2 Tasks
    1.1.3 Applications
    1.1.4 See also
    1.1.5 References
    1.1.6 Further reading
    1.1.7 External links
  1.2 Image editing
    1.2.1 Basics of image editing
    1.2.2 Automatic image enhancement
    1.2.3 Digital image compression
    1.2.4 Image editor features
    1.2.5 See also
    1.2.6 References
  1.3 Image processing
    1.3.1 See also
    1.3.2 References
    1.3.3 Further reading
    1.3.4 External links
  1.4 Image analysis
    1.4.1 Image Analysis
    1.4.2 Techniques
    1.4.3 Digital Image Analysis
    1.4.4 Object-based Image Analysis
    1.4.5 Land cover mapping
    1.4.6 References
    1.4.7 Notes
    1.4.8 External links

2 Day 2
  2.1 Photo manipulation


    2.1.1 History
    2.1.2 Political and ethical issues
    2.1.3 Types of digital photo manipulation
    2.1.4 Photoshopped
    2.1.5 See also
    2.1.6 References
    2.1.7 Further reading
    2.1.8 External links

3 Day 3
  3.1 Layers (digital image editing)
    3.1.1 Layer types
    3.1.2 Layer (basic)
    3.1.3 Layer mask
    3.1.4 Adjustment layer
    3.1.5 See also
    3.1.6 References

4 Day 4
  4.1 Image histogram
    4.1.1 Image manipulation and histograms
    4.1.2 See also
    4.1.3 References
    4.1.4 External links
  4.2 Curve (tonality)
    4.2.1 See also
    4.2.2 References
    4.2.3 External links
  4.3
    4.3.1 Details
    4.3.2 See also
    4.3.3 References
    4.3.4 External links
  4.4
    4.4.1 In photography
    4.4.2 In printing
    4.4.3 See also
    4.4.4 References

5 Day 5
  5.1 Gamma correction
    5.1.1 Explanation

    5.1.2 Generalized gamma
    5.1.3 Film photography
    5.1.4 Windows, Mac, sRGB and TV/video standard gammas
    5.1.5 Power law for video display
    5.1.6 Methods to perform display gamma correction in computing
    5.1.7 Simple monitor tests
    5.1.8 Terminology
    5.1.9 See also
    5.1.10 References
    5.1.11 External links
  5.2 Digital compositing
    5.2.1 Mathematics
    5.2.2
    5.2.3 See also
    5.2.4 Further reading
  5.3 Computational photography
    5.3.1 Computational illumination
    5.3.2 Computational
    5.3.3 Computational processing
    5.3.4 Computational sensors
    5.3.5 Early work in computer vision
    5.3.6 See also
    5.3.7 References
    5.3.8 External links
  5.4 Inpainting
    5.4.1 Applications
    5.4.2 Methods
    5.4.3 See also
    5.4.4 References
    5.4.5 External links

6 Day 6
  6.1 Image
    6.1.1 Function
    6.1.2 Models
    6.1.3 References
    6.1.4 See also
  6.2 Noise reduction
    6.2.1 In audio
    6.2.2 In images
    6.2.3 See also
    6.2.4 References

    6.2.5 External links
  6.3 Iterative reconstruction
    6.3.1 Basic concepts
    6.3.2 Advantages
    6.3.3 See also
    6.3.4 References

7 Day 7
  7.1 Distortion (optics)
    7.1.1 Radial distortion
    7.1.2 Software correction
    7.1.3 Related phenomena
    7.1.4 See also
    7.1.5 References
    7.1.6 External links
  7.2 Perspective control
    7.2.1 Perspective control at
    7.2.2 Perspective control in the
    7.2.3 Perspective control during digital post-processing
    7.2.4 Perspective control in virtual environments
    7.2.5 See also
    7.2.6 References
    7.2.7 External links

8 Day 8
  8.1 Cropping (image)
    8.1.1 Cropping in photography, print & design
    8.1.2 Cropping in & broadcasting
    8.1.3 Additional methods
    8.1.4 Uncropping
    8.1.5 References
  8.2 Image warping
    8.2.1 Overview
    8.2.2 In the news
    8.2.3 See also
    8.2.4 References
    8.2.5 External links
  8.3
    8.3.1 Causes
    8.3.2 Post-shoot
    8.3.3 See also
    8.3.4 References and sources

9 Day 9
  9.1 Image compression
    9.1.1 Lossy and lossless image compression
    9.1.2 Other properties
    9.1.3 Notes and references
    9.1.4 External links
  9.2 Lempel–Ziv–Welch
    9.2.1
    9.2.2 Example
    9.2.3 Further coding
    9.2.4 Uses
    9.2.5 Patents
    9.2.6 Variants
    9.2.7 See also
    9.2.8 References
    9.2.9 External links
  9.3 Image file formats
    9.3.1 Image file sizes
    9.3.2 Image file compression
    9.3.3 Major graphic file formats
    9.3.4 References
  9.4 Comparison of graphics file formats
    9.4.1 General
    9.4.2 Technical details
    9.4.3 References

10 Day 10
  10.1 TIFF
    10.1.1 History
    10.1.2 Features and options
    10.1.3 See also
    10.1.4 References
    10.1.5 External links
  10.2 Raw image format
    10.2.1 Rationale
    10.2.2 File contents
    10.2.3 Processing
    10.2.4 Software support
    10.2.5 Raw filename extensions and respective manufacturers
    10.2.6 Raw files
    10.2.7 See also
    10.2.8 References

    10.2.9 External links
  10.3 Digital Negative
    10.3.1 Rationale for DNG
    10.3.2 Technical summary
    10.3.3 Timeline
    10.3.4 Reception
    10.3.5 DNG conversion
    10.3.6 Summary of products that support DNG in some way
    10.3.7 Versions of the specification
    10.3.8 Standardization
    10.3.9 See also
    10.3.10 References
  10.4 List of cameras supporting a raw format
    10.4.1 Still cameras
    10.4.2 Native in-camera raw video support
    10.4.3 output via HDMI
    10.4.4 Other
    10.4.5 See also
    10.4.6 References
    10.4.7 External links
  10.5 Exif
    10.5.1 Background
    10.5.2 Technical
    10.5.3 Geolocation
    10.5.4 Program support
    10.5.5 Problems
    10.5.6 Related standards
    10.5.7 Example
    10.5.8 FlashPix extensions
    10.5.9 Exif audio files
    10.5.10 MakerNote data
    10.5.11 See also
    10.5.12 References
    10.5.13 External links
  10.6 Comparison of image viewers
    10.6.1 Functionality overview and licensing
    10.6.2 Supported file formats
    10.6.3 Supported operating systems
    10.6.4 Basic features
    10.6.5 Additional features
    10.6.6 See also

    10.6.7 Notes
    10.6.8 References
  10.7 IPTC Information Interchange Model
    10.7.1 Overview
    10.7.2 History
    10.7.3 See also
    10.7.4 References
    10.7.5 External links

11 Text and image sources, contributors, and licenses
  11.1 Text
  11.2 Images
  11.3 Content license

Chapter 1

Day 1

1.1 Digital image processing

This article is about mathematical processing of digital images. For artistic processing of images, see Image editing.

Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems.

1.1.1 History

Many of the techniques of digital image processing, or digital picture processing as it often was called, were developed in the 1960s at the Jet Propulsion Laboratory, Massachusetts Institute of Technology, Bell Laboratories, University of Maryland, and a few other research facilities, with application to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photograph enhancement.[1] The cost of processing was fairly high, however, with the computing equipment of that era. That changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available. Images then could be processed in real time, for some dedicated problems such as standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and computer-intensive operations. With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing, and is generally used because it is not only the most versatile method but also the cheapest.

Digital image processing technology for medical applications was inducted into the Space Foundation Space Technology Hall of Fame in 1994.[2]

In 2002 Raanan Fattal introduced gradient domain image processing, a new way to process images in which the differences between pixels are manipulated rather than the pixel values themselves.[3]

1.1.2 Tasks

Digital image processing allows the use of much more complex algorithms, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means. In particular, digital image processing is the only practical technology for:

• Classification

• Feature extraction


• Multi-scale signal analysis

• Pattern recognition

• Projection

Some techniques which are used in digital image processing include:

• Anisotropic diffusion

• Hidden Markov models

• Image editing

• Image restoration

• Independent component analysis

• Linear filtering

• Neural networks

• Partial differential equations

• Pixelation

• Principal components analysis

• Self-organizing maps

• Wavelets

1.1.3 Applications

Further information:

Digital camera images

Digital cameras generally include specialized digital image processing hardware – either dedicated chips or added circuitry on other chips – to convert the raw data from their image sensor into a color-corrected image in a standard image file format.

Film

Westworld (1973) was the first feature film to use digital image processing to pixellate photography to simulate an android’s point of view.[4]

1.1.4 See also

• Computer vision

• CVIPtools

• Digitizing

• GPGPU

• Homomorphic filtering

• Image analysis

• IEEE Intelligent Transportation Systems Society

• Multidimensional systems

• Superresolution

1.1.5 References

[1] Azriel Rosenfeld, Picture Processing by Computer, New York: Academic Press, 1969

[2] “Space Technology Hall of Fame: Inducted Technologies/1994”. Space Foundation. 1994. Archived from the original on 4 July 2011. Retrieved 7 January 2010.

[3] Bhat, Pravin, et al. “Gradientshop: A gradient-domain optimization framework for image and video filtering.” ACM Transactions on Graphics 29.2 (2010): 10.

[4] A Brief, Early History of Computer Graphics in Film, Larry Yaeger, 16 August 2002 (last update), retrieved 24 March 2010. Archived 17 July 2012 at the Wayback Machine.

1.1.6 Further reading

• Solomon, C.J.; Breckon, T.P. (2010). Fundamentals of Digital Image Processing: A Practical Approach with Examples in Matlab. Wiley-Blackwell. doi:10.1002/9780470689776. ISBN 0470844736.

• Wilhelm Burger; Mark J. Burge (2007). Digital Image Processing: An Algorithmic Approach Using Java. Springer. ISBN 978-1-84628-379-6.

• R. Fisher; K Dawson-Howe; A. Fitzgibbon; C. Robertson; E. Trucco (2005). Dictionary of Computer Vision and Image Processing. John Wiley. ISBN 978-0-470-01526-1.

• Rafael C. Gonzalez; Richard E. Woods; Steven L. Eddins (2004). Digital Image Processing using MATLAB. Pearson Education. ISBN 978-81-7758-898-9.

• Tim Morris (2004). Computer Vision and Image Processing. Palgrave Macmillan. ISBN 978-0-333-99451-1.

• Milan Sonka; Vaclav Hlavac; Roger Boyle (1999). Image Processing, Analysis, and Machine Vision. PWS Publishing. ISBN 978-0-534-95393-5.

1.1.7 External links

• Lectures on Image Processing, by Alan Peters. Vanderbilt University. Updated 7 January 2016.

• IPRG, an open group related to image processing research resources

• Processing digital images with computer algorithms

• IPOL, an open research journal on image processing with software and demos

1.2 Image editing

For the uses, cultural impact, and ethical concerns of image editing, see Photo manipulation. For the process of culling and archiving images, see Digital asset management.

Image editing encompasses the processes of altering images, whether they are digital photographs, traditional photochemical photographs, or illustrations. Traditional analog image editing is known as photo retouching, using tools such as an airbrush to modify photographs, or editing illustrations with any traditional art medium. Graphic software programs, which can be broadly grouped into vector graphics editors, raster graphics editors, and 3D modelers, are the primary tools with which a user may manipulate, enhance, and transform images. Many image editing programs are also used to render or create computer art from scratch.

A version of the originally black-and-white photo, colorized using GIMP

1.2.1 Basics of image editing

Raster images are stored in a computer in the form of a grid of picture elements, or pixels. These pixels contain the image’s color and brightness information. Image editors can change the pixels to enhance the image in many ways. The pixels can be changed as a group, or individually, by the sophisticated algorithms within the image editors. This article mostly refers to bitmap graphics editors, which are often used to alter photographs and other raster graphics. However, vector graphics software, such as Adobe Illustrator, CorelDRAW, Xara Designer Pro, PixelStyle Photo Editor, or Vectr, is used to create and modify vector images, which are stored as descriptions of lines,

Original black and white photo: Migrant Mother, showing Florence Owens Thompson, taken by Dorothea Lange in 1936.

Bézier curves, and text instead of pixels. It is easier to rasterize a vector image than to vectorize a raster image; how to go about vectorizing a raster image is the focus of much research in the field of computer vision. Vector images can be modified more easily, because they contain descriptions of the shapes for easy rearrangement. They are also scalable, being rasterizable at any resolution.

This is a photo that has been edited as a effect, using a Gaussian blur.

1.2.2 Automatic image enhancement

Camera or computer image editing programs often offer basic automatic image enhancement features that correct color and brightness imbalances, as well as other image editing features such as red-eye removal, sharpness adjustments, zoom features and automatic cropping. These are called automatic because generally they happen without user interaction, or are offered with one click of a button or by selecting an option from a menu. Additionally, some automatic editing features offer a combination of editing actions with little or no user interaction.

1.2.3 Digital image compression

Many image file formats use data compression to reduce file size and save storage space. Digital compression of images may take place in the camera, or can be done in the computer with the image editor. When images are stored in JPEG format, compression has already taken place. Both cameras and computer programs allow the user to set the level of compression.

Some compression algorithms, such as those used in the PNG file format, are lossless, which means no information is lost when the file is saved. By contrast, the JPEG file format uses a lossy compression algorithm by which the greater the compression, the more information is lost, ultimately reducing image quality or detail that cannot be restored. JPEG uses knowledge of the way the human brain and eyes perceive color to make this loss of detail less noticeable.
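The defining property of lossless compression is that decompression reconstructs the original data exactly. A toy run-length encoder makes this concrete; note this is only an illustration of the lossless round-trip idea, not the actual DEFLATE algorithm PNG uses or JPEG's lossy DCT pipeline:

```python
# Toy run-length encoder: a minimal illustration of lossless compression.
# Runs of identical pixel values collapse to (value, count) pairs.

def rle_encode(pixels):
    """Collapse runs of identical values into (value, count) pairs."""
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][0] == p:
            encoded[-1] = (p, encoded[-1][1] + 1)
        else:
            encoded.append((p, 1))
    return encoded

def rle_decode(pairs):
    """Expand (value, count) pairs back to the original sequence."""
    out = []
    for value, count in pairs:
        out.extend([value] * count)
    return out

row = [255, 255, 255, 0, 0, 17, 17, 17, 17]
packed = rle_encode(row)
assert rle_decode(packed) == row          # lossless: exact reconstruction
print(packed)                             # [(255, 3), (0, 2), (17, 4)]
```

A lossy scheme, by contrast, would discard some of the original values (for example, fine detail the eye barely notices), so no decoder could recover them.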

1.2.4 Image editor features

Listed below are some of the most-used capabilities of the better graphic manipulation programs. The list is by no means all-inclusive. There are a myriad of choices associated with the application of most of these features.

Selection

One of the prerequisites for many of the applications mentioned below is a method of selecting part(s) of an image, thus applying a change selectively without affecting the entire picture. Most graphics programs have several means of accomplishing this, such as:

• a marquee tool for selecting rectangular or other regular polygon-shaped regions,

• a lasso tool for freehand selection of a region,

• a magic wand tool that selects objects or regions in the image defined by proximity of color or ,

• vector-based pen tools, as well as more advanced facilities such as edge detection, masking, alpha compositing, and color and channel-based extraction.

The border of a selected area in an image is often animated with the marching ants effect to help the user distinguish the selection border from the image background.
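The magic wand's "proximity of color" selection can be sketched as a flood fill: starting from a seed pixel, keep selecting connected neighbours whose value is within a tolerance of the seed. The grayscale values and tolerance below are illustrative; real editors use colour-space-aware distance metrics and anti-aliased selection edges:

```python
from collections import deque

def magic_wand(image, seed, tolerance):
    """Return the set of (row, col) positions connected to `seed` whose
    value differs from the seed's value by at most `tolerance`."""
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    target = image[sr][sc]
    selected = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        # examine the 4-connected neighbours of the current pixel
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in selected
                    and abs(image[nr][nc] - target) <= tolerance):
                selected.add((nr, nc))
                queue.append((nr, nc))
    return selected

img = [[10, 12, 90],
       [11, 13, 95],
       [80, 85, 99]]
region = magic_wand(img, (0, 0), tolerance=5)
# selects only the four similar, connected pixels in the top-left corner
assert region == {(0, 0), (0, 1), (1, 0), (1, 1)}
```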

Layers

Main article: Layers (digital image editing)

Another feature common to many graphics applications is that of Layers, which are analogous to sheets of transparent acetate (each containing separate elements that make up a combined picture), stacked on top of each other, each capable of being individually positioned, altered and blended with the layers below, without affecting any of the elements on the other layers. This is a fundamental workflow which has become the norm for the majority of programs on the market today, and enables maximum flexibility for the user while maintaining non-destructive editing principles and ease of use. 8 CHAPTER 1. DAY 1

Image size alteration

Image editors can resize images in a process often called scaling, making them larger or smaller. High-resolution cameras can produce large images which are often reduced in size for Internet use. Image editor programs use a mathematical process called resampling to calculate new pixel values whose spacing is larger or smaller than the original pixel values. Images for Internet use are kept small, say 640 × 480 pixels, which would equal 0.3 megapixels.
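The simplest resampling rule is nearest-neighbour: each new pixel copies the old pixel whose position it maps onto. This sketch shows the idea; production resizers prefer interpolating filters (bilinear, bicubic, Lanczos), which average neighbouring values instead of copying one:

```python
# Nearest-neighbour resampling: compute new pixel values at a different
# grid spacing by copying the closest original pixel.

def resize_nearest(image, new_w, new_h):
    old_h, old_w = len(image), len(image[0])
    return [[image[r * old_h // new_h][c * old_w // new_w]
             for c in range(new_w)]
            for r in range(new_h)]

src = [[1, 2],
       [3, 4]]
big = resize_nearest(src, 4, 4)   # upsample 2x2 -> 4x4
assert big == [[1, 1, 2, 2],
               [1, 1, 2, 2],
               [3, 3, 4, 4],
               [3, 3, 4, 4]]
```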

Cropping an image

Main article: Cropping (image)

Digital editors are used to crop images. Cropping creates a new image by selecting a desired rectangular portion from the image being cropped. The unwanted part of the image is discarded. Image cropping does not reduce the resolution of the area cropped. Best results are obtained when the original image has a high resolution. A primary reason for cropping is to improve the image composition in the new image.
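In raster terms, cropping is just copying out a rectangle of pixels and discarding the rest, which is why the kept area loses no resolution. A minimal sketch:

```python
# Cropping as an array operation: keep the selected rectangle, discard the
# rest. The retained pixels are copied unchanged.

def crop(image, top, left, height, width):
    return [row[left:left + width] for row in image[top:top + height]]

img = [[0, 1,  2,  3],
       [4, 5,  6,  7],
       [8, 9, 10, 11]]
assert crop(img, 1, 1, 2, 2) == [[5, 6], [9, 10]]
```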

Histogram

Main article: Curve (tonality)

Image editors have provisions to create an image histogram of the image being edited. The histogram plots the number of pixels in the image (vertical axis) with a particular brightness value (horizontal axis). Algorithms in the digital editor allow the user to visually adjust the brightness value of each pixel and to dynamically display the results as adjustments are made. Improvements in picture brightness and contrast can thus be obtained.
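The histogram described above is just a per-brightness pixel count. For an 8-bit grayscale image there are 256 possible values, so 256 bins:

```python
# Build a brightness histogram: counts[v] is the number of pixels whose
# brightness equals v (the quantity plotted on the vertical axis).

def histogram(image, levels=256):
    counts = [0] * levels
    for row in image:
        for value in row:
            counts[value] += 1
    return counts

img = [[0, 0, 255],
       [128, 255, 255]]
h = histogram(img)
assert h[0] == 2 and h[128] == 1 and h[255] == 3
```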

Noise reduction

Main article: Noise reduction

Image editors may feature a number of algorithms which can add or remove noise in an image. Some JPEG artifacts can be removed; dust and scratches can be removed and an image can be de-speckled. Noise reduction merely estimates the state of the scene without the noise and is not a substitute for obtaining a “cleaner” image. Excessive noise reduction leads to a loss of detail, and its application is hence subject to a trade-off between the undesirability of the noise itself and that of the reduction artifacts. Noise tends to invade images when pictures are taken in low light settings. A new picture can be given an 'antiqued' effect by adding uniform monochrome noise.
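One classic de-speckling algorithm an editor might apply is the median filter: each pixel is replaced by the median of its neighbourhood, which suppresses isolated outliers ("salt-and-pepper" noise) while preserving edges better than simple averaging. A 3×3 sketch on a grayscale grid, leaving the border pixels untouched for simplicity:

```python
from statistics import median

def median_filter3(image):
    """Replace each interior pixel with the median of its 3x3 window."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = [image[rr][cc]
                      for rr in (r - 1, r, r + 1)
                      for cc in (c - 1, c, c + 1)]
            out[r][c] = median(window)
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],    # a single "hot" pixel
         [10, 10, 10]]
assert median_filter3(noisy)[1][1] == 10   # outlier suppressed
```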

Removal of unwanted elements

Main article: Inpainting

Most image editors can be used to remove unwanted branches, etc., using a “clone” tool. Removing these distracting elements draws focus to the subject, improving overall composition.

Selective color change

Some image editors have color swapping abilities to selectively change the color of specific items in an image, given that the selected items are within a specific color range.
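The idea can be sketched on grayscale values: every pixel inside a target range is swapped for a replacement, everything else is untouched. Real tools operate on three-channel colour ranges with soft thresholds and feathered edges; this hard-threshold version is illustrative only:

```python
# Selective value swap: replace pixels within [lo, hi], leave the rest.

def swap_color(image, lo, hi, replacement):
    return [[replacement if lo <= p <= hi else p for p in row]
            for row in image]

img = [[10, 200, 15],
       [220, 12, 230]]
# recolour everything in the bright range 200..255
assert swap_color(img, 200, 255, 0) == [[10, 0, 15], [0, 12, 0]]
```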

Image orientation

Image editors are capable of altering an image to be rotated in any direction and to any degree. Mirror images can be created and images can be horizontally flipped or vertically flopped. A small rotation of several degrees is often

Selective color change

Image orientation (from left to right): original, −30° CCW rotation, and flipped.

enough to level the horizon, correct verticality (of a building, for example), or both. Rotated images usually require cropping afterwards, in order to remove the resulting gaps at the image edges.

Perspective control and distortion

Main article: Perspective control

Some image editors allow the user to distort (or “transform”) the shape of an image. While this might also be useful for special effects, it is the preferred method of correcting the typical perspective distortion which results from photographs being taken at an oblique angle to a rectilinear subject. Care is needed while performing this task, as the image is reprocessed using interpolation of adjacent pixels, which may reduce overall image definition. The effect mimics the use of a perspective control lens, which achieves a similar correction in-camera without loss of definition.

Lens correction

Photo manipulation packages have functions to correct images for various lens distortions including pincushion, fisheye and barrel distortions. The corrections are in most cases subtle, but can improve the appearance of some photographs.

Perspective control: original (left), perspective distortion removed (right).

Enhancing images

In computer graphics, image enhancement is the process of improving the quality of a digitally stored image by manipulating the image with software. It is quite easy, for example, to make an image lighter or darker, or to increase or decrease contrast. Advanced photo enhancement software also supports many filters for altering images in various ways.[1] Programs specialized for image enhancement are sometimes called image editors.

Sharpening and softening images

Graphics programs can be used to both sharpen and blur images in a number of ways, such as unsharp masking or deconvolution.[2] Portraits often appear more pleasing when selectively softened (particularly the skin and the background) to make the subject stand out better. This can be achieved with a camera by using a large aperture, or in the image editor by making a selection and then blurring it. Edge enhancement is an extremely common technique used to make images appear sharper, although purists frown on the result as appearing unnatural.

Another form of image sharpening involves a form of local contrast enhancement. This is done by finding the average color of the pixels around each pixel in a specified radius, and then contrasting that pixel against that average color. This effect makes the image seem clearer, seemingly adding details. An example of this effect can be seen to the right. It is widely used in the printing and photographic industries for increasing the local contrasts and sharpening the images.
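Unsharp masking, mentioned above, works by blurring the image and then adding back a scaled copy of the difference between the original and the blur; edges, where the two disagree most, get exaggerated. A one-dimensional sketch with an illustrative 3-tap box blur and `amount` parameter:

```python
# Unsharp masking on a 1-D row of pixels: sharpened = p + amount * (p - blur).
# Edge pixels are clamped when the blur window runs off the ends.

def box_blur(row):
    n = len(row)
    return [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def unsharp(row, amount=1.0):
    blurred = box_blur(row)
    return [p + amount * (p - b) for p, b in zip(row, blurred)]

edge = [10, 10, 10, 90, 90, 90]
sharpened = unsharp(edge)
# the step is overshot on both sides, which reads as a crisper edge
assert sharpened[2] < 10 and sharpened[3] > 90
```

Flat regions are left unchanged (original and blur agree there), which is why the effect reads as sharpening rather than a global contrast boost.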

Selecting and merging of images

Main article: Digital compositing

Many graphics applications are capable of merging one or more individual images into a single file. The orientation and placement of each image can be controlled.

When selecting a raster image that is not rectangular, it requires separating the edges from the background, also known as silhouetting. This is the digital analog of cutting out the image from a physical picture. Clipping paths may be used to add silhouetted images to vector graphics or files that retain vector data. Alpha compositing allows for soft translucent edges when selecting images. There are a number of ways to silhouette an image with soft edges, including selecting the image or its background by sampling similar colors, selecting the edges by raster tracing, or converting a clipping path to a raster selection. Once the image is selected, it may be copied and pasted into another section of the same file, or into a separate file. The selection may also be saved in what is known as an alpha channel.

A popular way to create a composite image is to use transparent layers. The background image is used as the bottom layer, and the image with parts to be added are placed in a layer above that. Using an image layer mask, all but the parts to be merged are hidden from the layer, giving the impression that these parts have been added to the background layer. Performing a merge in this manner preserves all of the pixel data on both layers to more easily enable future changes in the new merged image.
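The soft translucent edges that alpha compositing provides come from fractional alpha values. A sketch of the basic "over" operation for a single channel, under the simplifying assumption that the background is fully opaque:

```python
# Alpha "over" compositing (opaque background assumed): each foreground
# pixel carries an alpha in 0..1 (0 = transparent, 1 = opaque), and the
# result mixes foreground and background in that proportion.

def composite_over(fg, fg_alpha, bg):
    return [[a * f + (1 - a) * b
             for f, a, b in zip(frow, arow, brow)]
            for frow, arow, brow in zip(fg, fg_alpha, bg)]

fg       = [[255, 255, 255]]
fg_alpha = [[1.0, 0.5, 0.0]]     # opaque, soft edge, fully transparent
bg       = [[0,   0,   0]]
assert composite_over(fg, fg_alpha, bg) == [[255.0, 127.5, 0.0]]
```

The middle pixel, with alpha 0.5, is the soft edge: half foreground, half background.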

Slicing of images

A more recent tool in digital image editing software is the image slicer. Parts of images for graphical user interfaces or web pages are easily sliced, labeled and saved separately from whole images so the parts can be handled individually by the display medium. This is useful to allow dynamic swapping via interactivity or animating parts of an image in the final presentation.

See also: Slicing (interface design)

Special effects

Image editors usually have a list of special effects that can create unusual results. Images may be skewed and distorted in various ways. Scores of special effects can be applied to an image, including various forms of distortion, artistic effects, geometric transforms and texture effects,[3] or combinations thereof.

Clone Stamp tool

The Clone Stamp tool selects and samples an area of the picture and then uses these pixels to paint over any marks. The Clone Stamp tool acts like a brush, so its size can be changed, allowing cloning from just one pixel wide to hundreds. The opacity can be changed to produce a subtle clone effect, and there is a choice between Clone align and Clone non-align for the sample area. In Photoshop this tool is called Clone Stamp, but it may also be called a Rubber Stamp tool.

Color depth change

It is possible, using software, to change the color depth of images. Common color depths are 2, 4, 16, 256, 65,536 and 16.7 million colors. The JPEG and PNG image formats are capable of storing 16.7 million colors (equal to 256 luminance values per color channel). In addition, grayscale images of 8 bits or less can be created, usually via conversion and down-sampling from a full-color image. Grayscale conversion is useful for reducing file size dramatically when the original photographic print was monochrome, but a color tint has been introduced due to aging effects.
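Down-converting 24-bit colour to 8-bit grayscale is typically a weighted sum of the three channels, because they contribute unequally to perceived brightness (green most, blue least). A sketch using the common Rec. 601 luma weights:

```python
# 24-bit RGB to 8-bit grayscale with Rec. 601 luma weights.

def to_grayscale(pixel):
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

assert to_grayscale((255, 255, 255)) == 255   # white stays white
assert to_grayscale((0, 0, 0)) == 0           # black stays black
assert to_grayscale((0, 0, 255)) == 29        # pure blue is quite dark
```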

Contrast change and brightening

Image editors have provisions to simultaneously change the contrast of images and brighten or darken the image. Underexposed images can often be improved by using this feature. Recent advances have allowed more intelligent exposure correction whereby only pixels below a particular luminosity threshold are brightened, thereby brightening underexposed shadows without affecting the rest of the image. The exact transformation that is applied to each color channel can vary from editor to editor. GIMP applies the following formula:[4]

if (brightness < 0.0)
  value = value * (1.0 + brightness);
else
  value = value + ((1 - value) * brightness);
value = (value - 0.5) * (tan((contrast + 1) * PI/4)) + 0.5;

where value is the input color value in the 0..1 range and brightness and contrast are in the −1..1 range.
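The same transfer function can be transcribed to Python to check individual values (variable names and ranges follow the quoted GIMP formula):

```python
from math import tan, pi

def brightness_contrast(value, brightness, contrast):
    """value in 0..1; brightness and contrast in -1..1."""
    if brightness < 0.0:
        value = value * (1.0 + brightness)
    else:
        value = value + (1.0 - value) * brightness
    # contrast remaps values around the 0.5 midpoint
    return (value - 0.5) * tan((contrast + 1.0) * pi / 4.0) + 0.5

# with brightness = contrast = 0 the curve is the identity
assert abs(brightness_contrast(0.25, 0.0, 0.0) - 0.25) < 1e-9
# positive brightness pushes values toward 1
assert abs(brightness_contrast(0.5, 0.5, 0.0) - 0.75) < 1e-9
```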

Gamma correction

Main article: Gamma correction

In addition to the capability of changing the images’ brightness and/or contrast in a non-linear fashion, most current image editors provide an opportunity to manipulate the images’ gamma value. Gamma correction is particularly useful for bringing out details in shadows that would otherwise be hard to see on most computer monitors. In some image editing software this is called “curves”, usually a tool found in the color menu, and no reference to “gamma” is used anywhere in the program or the program documentation. Strictly speaking, the curves tool usually does more than simple gamma correction, since one can construct complex curves with multiple inflection points; but when no dedicated gamma correction tool is provided, it can achieve the same effect.
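A minimal gamma adjustment raises each normalised channel value to the power 1/gamma. Gamma values above 1 lift shadows and midtones while leaving pure black (0.0) and pure white (1.0) fixed, which is why gamma is well suited to pulling detail out of shadows without clipping highlights:

```python
# Simple gamma adjustment on a normalised channel value in 0..1.

def adjust_gamma(value, gamma):
    return value ** (1.0 / gamma)

shadow = 0.05
assert adjust_gamma(shadow, 2.2) > shadow      # shadows lifted
assert adjust_gamma(0.0, 2.2) == 0.0           # endpoints unchanged
assert adjust_gamma(1.0, 2.2) == 1.0
```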

Color adjustments

The color of images can be altered in a variety of ways. Colors can be faded in and out, and tones can be changed using curves or other tools. The color balance can be improved, which is important if the picture was shot indoors with daylight film, or shot on a camera with the white balance incorrectly set. Special effects, like sepia tone and grayscale, can be added to an image. In addition, more complicated procedures such as the mixing of color channels are possible using more advanced graphics editors. The red-eye effect, which occurs when flash photos are taken when the pupil is too widely open (so that light from the flash that passes into the eye through the pupil reflects off the fundus at the back of the eyeball), can also be eliminated at this stage.

Dynamic blending

Advanced Dynamic Blending is a concept introduced by photographer Elia Locardi in his blog Blame The Monkey to describe the photographic process of capturing multiple bracketed exposures of a landscape or cityscape over a specific span of time in a changing natural or artificial environment. Once captured, the exposures are manually blended together into a single High Dynamic Range image using post-processing software. Dynamic Blending images serve to display a consolidated moment. This means that while the final image may be a blend of a span of time, it visually appears to represent a single instant.[5][6][7]

Printing

Controlling the print size and quality of digital images requires an understanding of the pixels-per-inch (ppi) variable that is stored in the image file and sometimes used to control the size of the printed image. Within Photoshop’s Image Size dialog, the image editor allows the user to manipulate both pixel dimensions and the size of the image on the printed document. These parameters work together to produce a printed image of the desired size and quality. Pixels per inch of the image, pixels per inch of the computer monitor, and dots per inch on the printed document are related, but in use are very different. The Image Size dialog can be used as an image calculator of sorts. For example, a 1600 × 1200 image with a resolution of 200 ppi will produce a printed image of 8 × 6 inches. The same image with 400 ppi will produce a printed image of 4 × 3 inches. Change the resolution to 800 ppi, and the same image now prints out at 2 × 1.5 inches. All three printed images contain the same data (1600 × 1200 pixels), but the pixels are closer together on the smaller prints, so the smaller images will potentially look sharp when the larger ones do not. The quality of the image will also depend on the capability of the printer.
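The arithmetic behind those examples is simply printed size = pixel dimensions divided by ppi:

```python
# Printed size in inches from pixel dimensions and pixels-per-inch.

def print_size(px_w, px_h, ppi):
    return px_w / ppi, px_h / ppi

# the three examples from the text, all from the same 1600 x 1200 pixels
assert print_size(1600, 1200, 200) == (8.0, 6.0)
assert print_size(1600, 1200, 400) == (4.0, 3.0)
assert print_size(1600, 1200, 800) == (2.0, 1.5)
```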

Warping

Main articles: Image warping and Morphing

1.2.5 See also

• Comparison of raster graphics editors

• Computer graphics

• Digital image processing

• Dynamic imaging

• Graphics file format summary

• Homomorphic filtering

• Image development ()

• Image distortion

• Image processing

• Image retrieval

• Image warping

• Inpainting

1.2.6 References

[1] Implementations include Imagic Photo, Viesus, and Topaz

[2] Implementations include FocusMagic, and Photoshop

[3] JPFix. “Skin Improvement Technology”. Retrieved 2008-08-23.

[4] GIMP for brightness and contrast image filtering.

[5] Lazzell, Jeff. “A Dynamic Blending and Post Processing Workshop With Travel Photographer Elia Locardi”. blog.xritephoto.com. Retrieved Sep 11, 2016.

[6] Cedric, De Boom. “Blending moments in time”. cedricdeboom..io. Retrieved Sep 11, 2016.

[7] “HDR photography with Elia Locardi”. www.cnet.com. Retrieved Sep 11, 2016.

• “Fantasy, fairy tale and myth collide in images: By digitally altering photos of landscapes, artist Anthony Goicolea creates an intriguing world,” The Vancouver Sun (British Columbia); June 19, 2006.

• “It’s hard to tell where pixels end and reality begins,” The San Francisco Chronicle; September 26, 2006.

• “Virtual Art: From Illusion to Immersion,” MIT Press 2002; Cambridge, Mass.

Image Sharpening: original (top), Image Sharpened (bottom).

Photomontage of 16 photos which have been digitally manipulated in Photoshop to give the impression that it is a real landscape

An example of some special effects that can be added to a picture.

An example of converting an image from color to grayscale

An example of contrast correction. Left side of the image is untouched.

An example of color adjustment using

Before and After example of Advanced Dynamic Blending Technique created by Elia Locardi

Control printed image by changing pixels-per-inch.

1.3 Image processing

In imaging science, image processing is the processing of images using mathematical operations, applying any form of signal processing for which the input is an image, a series of images, or a video, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image.[1] Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it. Images are also processed as three-dimensional signals, with the third dimension being time or the z-axis.

Image processing usually refers to digital image processing, but optical and analog image processing are also possible. This article is about general techniques that apply to all of them. The acquisition of images (producing the input image in the first place) is referred to as imaging.[2]

Closely related to image processing are computer graphics and computer vision. In computer graphics, images are manually made from physical models of objects, environments, and lighting, instead of being acquired (via imaging devices such as cameras) from natural scenes, as in most animated movies. Computer vision, on the other hand, is often considered high-level image processing, in which a machine, computer, or piece of software intends to decipher the physical contents of an image or a sequence of images (e.g., or 3D full-body magnetic resonance scans).

In modern sciences and technologies, images also gain much broader scope due to the ever-growing importance of scientific visualization (of often large-scale complex scientific/experimental data). Examples include microarray data in genetic research, or real-time multi-asset portfolio trading in finance.
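Treating the image as a two-dimensional signal, as described above, can be illustrated with one of the simplest standard signal-processing operations, a 3×3 mean (box) filter. This is a toy sketch on a list-of-lists grayscale image, not any particular library’s API; border pixels are left unchanged for brevity:

```python
# A 3x3 mean (box) filter: each interior output pixel is the average
# of the 3x3 neighborhood around it in the input image.

def box_blur(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # copy; borders stay as-is
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(
                img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            ) / 9
    return out

# A single bright pixel is spread over its neighborhood.
img = [[0, 0, 0], [0, 90, 0], [0, 0, 0]]
print(box_blur(img)[1][1])  # -> 10.0
```

The same two-dimensional convolution structure underlies sharpening, edge detection, and most other classical filters; only the kernel coefficients change.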

1.3.1 See also

• Image analysis

• Image sharpening

• Image smoothing

• Multidimensional systems

• Near sets

• Photo manipulation

1.3.2 References

[1] Rafael C. Gonzalez; Richard E. Woods (2008). Digital Image Processing. Prentice Hall. pp. 1–3. ISBN 978-0-13-168728- 8.

[2] Joseph P. Hornak, Encyclopedia of Imaging Science and Technology (John Wiley & Sons, 2002) ISBN 9780471332763

1.3.3 Further reading

• Tinku Acharya and Ajoy K. Ray (2006). Image Processing - Principles and Applications. Wiley InterScience.

• Wilhelm Burger and Mark J. Burge (2008). Digital Image Processing: An Algorithmic Approach Using Java. Springer-Verlag. ISBN 978-1-84628-968-2.

• Bernd Jähne (2002). Digital Image Processing (PDF). Springer. ISBN 3-540-67754-2.

• Tim Morris (2004). Computer Vision and Image Processing. Palgrave Macmillan. ISBN 0-333-99451-5.

• Tony F. Chan and Jackie (Jianhong) Shen (2005). Image Processing and Analysis - Variational, PDE, Wavelet, and Stochastic Methods. Society of Industrial and Applied Mathematics. ISBN 0-89871-589-X.

• Milan Sonka, Vaclav Hlavac and Roger Boyle (1999). Image Processing, Analysis, and Machine Vision. PWS Publishing. ISBN 0-534-95393-X.

• John Russ (2011). The Image Processing Handbook. CRC Press. ISBN 9781439840450.

1.3.4 External links

• Lectures on Image Processing, by Alan Peters. Vanderbilt University. Updated 7 January 2016.

• Image Processing On Line – Open access journal with image processing algorithms, open source implementations and demonstrations

• IPRG Open group related to image processing research resources

• Bare Images Toolbox Online image processing and analysis application

1.4 Image analysis

Not to be confused with Image processing.

Image analysis is the extraction of meaningful information from images; mainly from digital images by means of digital image processing techniques.[1] Image analysis tasks can be as simple as reading bar coded tags or as sophisticated as identifying a person from their face. Computers are indispensable for the analysis of large amounts of data, for tasks that require complex computation, or for the extraction of quantitative information. On the other hand, the human visual cortex is an excellent image analysis apparatus, especially for extracting higher-level information, and for many applications — including medicine, security, and remote sensing — human analysts still cannot be replaced by computers. For this reason, many important image analysis tools such as edge detectors and neural networks are inspired by human visual perception models.

1.4.1 Computer Image Analysis

Computer Image Analysis largely contains the fields of computer or machine vision and medical imaging, and makes heavy use of pattern recognition, digital geometry, and signal processing. This field of computer science developed in the 1950s at academic institutions such as the MIT A.I. Lab, originally as a branch of artificial intelligence and robotics. It is the quantitative or qualitative characterization of two-dimensional (2D) or three-dimensional (3D) digital images. 2D images are, for example, to be analyzed in computer vision, and 3D images in medical imaging. The field was established in the 1950s–1970s, for example with pioneering contributions by Azriel Rosenfeld, Herbert Freeman, Jack E. Bresenham, and King-Sun Fu.

1.4.2 Techniques

There are many different techniques used in automatically analysing images. Each technique may be useful for a small range of tasks; however, there still are no known methods of image analysis generic enough to handle wide ranges of tasks, compared with the breadth of human image-analysing capabilities. Examples of image analysis techniques in different fields include:

• 2D and 3D object recognition

• image segmentation

• motion detection, e.g. single particle tracking

• video tracking

• optical flow

• medical scan analysis

• 3D pose estimation

• automatic number plate recognition

1.4.3 Digital Image Analysis

Digital Image Analysis is when a computer or electrical device automatically studies an image to obtain useful information from it. Note that the device is often a computer but may also be an electrical circuit, a digital camera or a mobile phone. The applications of digital image analysis are continuously expanding through all areas of science and industry, including:

• assay micro plate reading, such as detecting where a chemical was manufactured.

• astronomy, such as calculating the size of a planet.

• defense

• filtering

• machine vision, such as to automatically count items in a factory conveyor belt.

• materials science, such as determining if a metal weld has cracks.

• medicine, such as detecting cancer in a mammography scan.

• metallography, such as determining the mineral content of a rock sample.

• microscopy, such as counting the germs in a swab.

• optical character recognition, such as automatic license plate detection.

• remote sensing, such as detecting intruders in a house, and producing land cover/land use maps.[2][3]

• robotics, such as to avoid steering into an obstacle.

• security, such as detecting a person’s eye color or hair color.
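Several of the applications above (counting items on a conveyor belt, counting germs in a swab) amount to counting distinct objects in a thresholded binary image. A toy sketch using 4-connected flood fill; all names are illustrative, not any library’s API:

```python
# Count connected components (objects) in a binary image stored as a
# list of lists of 0/1, using an explicit-stack 4-connected flood fill.

def count_objects(binary):
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                count += 1                 # found a new object
                stack = [(y, x)]           # flood-fill its pixels
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and binary[cy][cx] and not seen[cy][cx]):
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return count

image = [[1, 1, 0, 0],
         [0, 0, 0, 1],
         [0, 1, 0, 1]]
print(count_objects(image))  # -> 3
```

Real systems add preprocessing (thresholding, noise removal) before this step, but the counting itself follows the same connected-component logic.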

1.4.4 Object-based Image Analysis

Object-Based Image Analysis (OBIA) employs two main processes, segmentation and classification. Traditional image segmentation operates on a per-pixel basis, whereas OBIA groups pixels into homogeneous objects. These objects can have different shapes and scales, and each object carries associated statistics that can be used to classify it; such statistics can include the geometry and texture of image objects. The analyst defines the statistics used in the classification process to generate, for example, land cover. The technique is implemented in software such as eCognition.

When applied to earth images, OBIA is known as Geographic Object-Based Image Analysis (GEOBIA), defined as “a sub-discipline of geoinformation science devoted to (...) partitioning remote sensing (RS) imagery into meaningful image-objects, and assessing their characteristics through spatial, spectral and temporal scale”.[4] The international GEOBIA conference has been held biennially since 2006.[5]
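The object-based workflow can be sketched as: given already-segmented objects, compute per-object statistics and classify from them. The class names and thresholds below are purely illustrative assumptions, not taken from eCognition or any GEOBIA system:

```python
# Object-based classification sketch: each object is a group of pixel
# intensities; per-object statistics (here a spectral mean and an area)
# drive the class decision. Thresholds and labels are made up.

def classify_objects(objects):
    """objects maps an object id to its list of pixel intensities."""
    result = {}
    for obj_id, pixels in objects.items():
        mean = sum(pixels) / len(pixels)   # spectral statistic
        area = len(pixels)                 # geometric statistic
        if mean > 100:
            result[obj_id] = "built-up"
        elif area > 4:
            result[obj_id] = "water"
        else:
            result[obj_id] = "vegetation"
    return result

segments = {1: [120, 130, 125],
            2: [20, 25, 22, 30, 28, 21],
            3: [40, 42]}
print(classify_objects(segments))
```

The contrast with per-pixel classification is that the decision uses statistics of the whole object (mean, area, and in real systems shape and texture) rather than each pixel in isolation.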

1.4.5 Land cover mapping

Land cover and land use change detection using remote sensing and geospatial data provides baseline information for assessing the climate change impacts on habitats and biodiversity, as well as natural resources, in the target areas.

Application of land cover mapping

• Local and regional planning

• Disaster management[6]

• Vulnerability and Risk Assessments

• Ecological management

• Monitoring the effects of climate change

Image segmentation during object-based image analysis

Process of land cover mapping using TM images

• Wildlife management.

• Alternative landscape futures and conservation

• Environmental forecasting

• Environmental impact assessment

• Policy development

1.4.6 References

[1] Solomon, C.J., Breckon, T.P. (2010). Fundamentals of Digital Image Processing: A Practical Approach with Examples in Matlab. Wiley-Blackwell. doi:10.1002/9780470689776. ISBN 0470844736.

[2] Xie, Y.; Sha, Z.; Yu, M. (2008). “Remote sensing imagery in vegetation mapping: a review”. Journal of plant ecology. 1 (1): 9–23. doi:10.1093/jpe/rtm005.

[3] Wilschut, L.I.; Addink, E.A.; Heesterbeek, J.A.P.; Dubyanskiy, V.M.; Davis, S.A.; Laudisoit, A.; Begon, M.; Burdelov, L.A.; Atshabar, B.B.; de Jong, S.M (2013). “Mapping the distribution of the main host for plague in a complex landscape in Kazakhstan: An object-based approach using SPOT-5 XS, Landsat 7 ETM+, SRTM and multiple Random Forests”. International Journal of Applied Earth Observation and Geoinformation. 23: 81–94. doi:10.1016/j.jag.2012.11.007.

[4] G.J. Hay & G. Castilla: Geographic Object-Based Image Analysis (GEOBIA): A new name for a new discipline. In: T. Blaschke, S. Lang & G. Hay (eds.): Object-Based Image Analysis – Spatial Concepts for Knowledge-Driven Remote Sensing Applications. Lecture Notes in Geoinformation and Cartography, 18. Springer, Berlin/Heidelberg, Germany: 75-89 (2008)

[5]

[6]

1.4.7 Notes

• The Image Processing Handbook by John C. Russ, ISBN 0-8493-7254-2 (2006)

• Image Processing and Analysis - Variational, PDE, Wavelet, and Stochastic Methods by Tony F. Chan and Jianhong (Jackie) Shen, ISBN 0-89871-589-X (2005)

• Front-End Vision and Multi-Scale Image Analysis by Bart M. ter Haar Romeny, Paperback, ISBN 1-4020-1507-0 (2003)

• Practical Guide to Image Analysis by J.J. Friel, et al., ASM International, ISBN 0-87170-688-1 (2000).

• Fundamentals of Image Processing by Ian T. Young, Jan J. Gerbrands, Lucas J. Van Vliet, Paperback, ISBN 90-75691-01-7 (1995)

• Image Analysis and Metallography edited by P.J. Kenny, et al., International Metallographic Society and ASM International (1989).

• Quantitative Image Analysis of Microstructures by H.E. Exner & H.P. Hougardy, DGM Informationsgesellschaft mbH, ISBN 3-88355-132-5 (1988).

• Structure Magazine

• “Metallographic and Materialographic Specimen Preparation, Light Microscopy, Image Analysis and Hardness Testing”, Kay Geels in collaboration with Struers A/S, ASTM International 2006.

1.4.8 External links

• Bare Images Toolbox Online image processing and analysis application

• Manawatu Microscopy - First known collaboration environment for microscopy and image analysis featuring open of image analysis functions.

• Image analysis using MATLAB algorithms

• MIPAR: Simple yet powerful image analysis software

Chapter 2

Day 2

2.1 Photo manipulation

The skin features shown in a portrait of Minnie Driver by Justin Hoch (left) have been manipulated to create the image on the right.

Photo manipulation involves transforming or altering a photograph using various methods and techniques to achieve desired results. Some photo manipulations are considered skillful artwork, while others are frowned upon as unethical practices, especially when used to deceive the public, such as manipulation used for political propaganda or to make a product or person look better.

Depending on the application and intent, some photo manipulations are considered an art form because they involve the creation of unique images and, in some instances, signature expressions of art by photographic artists. For example, Ansel Adams employed some of the more common manipulations using darkroom exposure techniques, such as burning (darkening) and dodging (lightening) a photograph.[1][2] Other examples of photo manipulation include retouching photographs using ink or paint, airbrushing, double exposure, piecing photos or negatives together in the darkroom, scratching instant films, or the use of software-based manipulation tools applied to digital images. There are a number of software applications available for digital image manipulation, ranging from professional applications to very basic imaging software for casual users.


General Grant at City Point is a composite of three different photographs.

2.1.1 History

Joseph Stalin pictured with the “Vanishing Commissar” (Nikolai Yezhov) before retouching

The photo after retouching, with Yezhov entirely removed

Photo manipulation dates back to some of the earliest photographs captured on glass and tin plates during the 19th century. The practice began not long after the creation of the first photograph (1825) by Joseph Nicéphore Niépce, who developed and made the first photographic print from a photoengraved printing plate.[3][4] Traditional photographic prints can be altered using various methods and techniques that involve manipulation directly to the print, such as retouching with ink, paint, airbrushing, or scratching Polaroids during developing.[5] Negatives can be manipulated while still in the camera using double-exposure techniques, or in the darkroom by piecing photos or negatives together. Some darkroom manipulations involved techniques such as bleaching to artfully lighten or totally wash out parts of the photograph, or hand coloring for aesthetic purposes or to mimic a fine art painting.[6]

Vintage manipulated photo of World War I battle action including details combined from multiple photos.

In the early 19th century, photography and the technology that made it possible were rather crude and cumbersome. While the equipment and technology progressed over time, it was not until the late 20th century that photography evolved into the digital realm. At the onset, digital photography was considered by some to be a radical new approach and was initially rejected by photographers because of its substandard quality.[7] The transition from film to digital has been an ongoing process, although great strides were made in the early 21st century as advancing technology greatly improved digital image quality while reducing the bulk and weight of cameras and equipment.[8]

Early manipulation

An early example of tampering came in the early 1860s, when a photo of Abraham Lincoln was altered using the body from a portrait of John C. Calhoun and the head of Lincoln from a famous seated portrait by Mathew Brady – the same portrait which was the basis for the original Lincoln five-dollar bill.[4] Another example appears in the Library of Congress Prints and Photographs Online Catalogue, which exposes a manipulated American Civil War photograph of General Ulysses S. Grant posing on horseback in front of his troops at City Point, Virginia.[9] Close observation raises questions and brings to light certain details in the photograph that simply do not add up. For example, Grant’s head is set at a strange angle to his body, his uniform is of a different time period, and his favorite horse “Cincinnati” did not have a left hind sock like the horse in the photograph, although his other horse “Egypt” did have a sock, but on a different foot. With further research, three different photographs were discovered that explained the composite, using Grant’s head from one photograph, the body of Major General Alexander McDowell McCook atop his horse from another, and, for the background, an 1864 photograph of Confederate prisoners captured at the Battle of Fisher’s Hill.[9]

It was not until the 20th century that digital retouching began to appear, with Quantel computers running Paintbox in professional environments,[10] which, alongside other contemporary packages, were effectively replaced in the market by Adobe Photoshop and other editing software for graphic imaging.

Goebbels family portrait photo in which the visage of the uniformed Harald, who was actually away on military duties, was inserted and retouched.

2.1.2 Political and ethical issues

Photo manipulation has been used to deceive or persuade viewers or improve storytelling and self-expression.[11] Often even subtle and discreet changes can have a profound impact on how we interpret or judge a photograph, making it all the more important to know when or if manipulation has occurred. As early as the American Civil War, photographs were published as engravings based on more than one negative.[12]

Joseph Stalin made use of photo retouching for propaganda purposes.[13] On May 5, 1920 his predecessor Vladimir Lenin held a speech for Soviet troops that Leon Trotsky attended. Stalin had Trotsky retouched out of a photograph showing Trotsky in attendance.[14] In a well known case of damnatio memoriae image manipulation, NKVD leader Nikolai Yezhov (the “Vanishing Commissar”), after his execution in 1940, was removed from an official press photo where he was pictured with Stalin.[15] (For more information, see Censorship of images in the Soviet Union.)

The pioneer among journalists distorting photographic images for news value was Bernarr Macfadden: in the mid-1920s, his "composograph" process involved reenacting real news events with costumed body doubles, photographing the dramatized scenes, and then pasting faces of the real news-personalities (gathered from unrelated photos) onto his staged images. In the 1930s, artist John Heartfield used a type of photo manipulation known as the photomontage to critique Nazi propaganda.

Some ethical theories have been applied to image manipulation. During a panel on the topic of ethics in image manipulation,[16] Aude Oliva theorized that categorical shifts are necessary in order for an edited image to be viewed as a manipulation. In Image Act Theory,[17] Carson Reynolds extended speech act theory by applying it to photo editing and image manipulations. In How to Do Things with Pictures,[18] William Mitchell details the long history of photo manipulation and discusses it critically.

Use in journalism

See also: § Ethical and legal considerations

Retouching tools from the pre-digital era: gouache paint, kneaded erasers, charcoal sticks, and an airbrush.

A notable incident of controversial photo manipulation occurred over a photograph that was altered to fit the vertical orientation of a 1982 National Geographic magazine cover. The altered image made two Egyptian pyramids appear closer together than they actually were in the original photograph.[19] The incident triggered a debate about the appropriateness of falsifying an image,[20] and raised questions regarding the magazine’s credibility. Shortly after the incident, Tom Kennedy, director of photography for National Geographic, stated, “We no longer use that technology to manipulate elements in a photo simply to achieve a more compelling graphic effect. We regarded that afterwards as a mistake, and we wouldn’t repeat that mistake today.”[20]

There are other incidents of questionable photo manipulation in journalism. One such incident arose in early 2005 after Martha Stewart was released from prison. Newsweek used a photograph of Stewart’s face on the body of a much slimmer woman for its cover, suggesting that Stewart had lost weight while in prison.[21] Speaking about the incident in an interview, Lynn Staley, assistant managing editor at Newsweek, said, “The piece that we commissioned was intended to show Martha as she would be, not necessarily as she is.” Staley also explained that Newsweek disclosed that the cover image of Martha Stewart was a composite.[21]

Image manipulation software has affected the level of trust many viewers once had in the aphorism “the camera never lies”.[22] Images may be manipulated for fun, for aesthetic reasons, or to improve the appearance of a subject,[23] but not all image manipulation is innocuous, as evidenced by the Kerry Fonda 2004 election photo controversy. The image in question was a fraudulent composite of a photograph of John Kerry taken on June 13, 1971 and one of Jane Fonda taken in August 1972, presented as the two sharing the same platform at a 1971 antiwar rally; the composite carried a fake Associated Press credit with the intent to change the public’s perspective of reality.[22]

There is a growing body of writing devoted to the ethical use of digital editing in photojournalism. In the United States, for example, the National Press Photographers Association (NPPA) established a Code of Ethics which promotes the accuracy of published images, advising that photographers “do not manipulate images [...] that can mislead viewers or misrepresent subjects.”[24] Infringements of the Code are taken very seriously, especially regarding digital alteration of published photographs, as evidenced by a case in which Pulitzer Prize-nominated photographer Allan Detrich resigned his post following the revelation that a number of his photographs had been manipulated.[25]

In 2010, Ukrainian photographer Stepan Rudik, winner of the 3rd prize story in Sports Features, was disqualified for violating the rules of the World Press Photo contest: “After requesting RAW files of the series from him, it became clear that an element had been removed from one of the original photographs.”[26] As of 2015, up to 20%[27] of World Press Photo entries that made it to the penultimate round of the contest were disqualified after they were found to have been manipulated or post-processed in violation of the rules.[28]

Use in glamour photography

The photo manipulation industry has often been accused of promoting and inciting a distorted and unrealistic image of self, most specifically in younger people. The world of glamour photography is one industry that has been heavily involved in the use of photo manipulation, which many consider concerning given how many people look to celebrities in search of the 'ideal figure'.[29]

Manipulation of a photo to alter a model’s appearance can be used to change features such as skin complexion, hair color, body shape, and other features. Many of the alterations to skin involve removing blemishes through the use of the healing tool in Photoshop. Photo editors may also alter the color of hair to remove roots or add shine. Additionally, the model’s teeth and eyes may be made to look whiter than they are in reality. Makeup and piercings can even be edited into pictures to look as though the model was wearing them when the photo was taken. Through photo editing, the appearance of a model may be drastically changed to mask imperfections.[30]

Celebrities against photo manipulation

Photo manipulation has triggered negative responses from both viewers and celebrities, leading some celebrities to refuse to have their photos retouched, in support of the American Medical Association’s position that "[we] must stop exposing impressionable children and teenagers to advertisements portraying models with body types only attainable with the help of photo editing software”.[31] These include Keira Knightley, Brad Pitt, Andy Roddick, and Jessica Simpson. Brad Pitt had a photographer, Chuck Close, take photos of him that emphasized all of his flaws; Close is known for photos that emphasize every skin flaw of an individual. Pitt did so in an effort to speak out against media outlets using Photoshop to manipulate celebrities’ photos and hide their flaws. Kate Winslet likewise spoke out against photo manipulation in the media after GQ magazine altered her body, making it look unnaturally thin.[32] In April 2010, Britney Spears agreed to release “un-airbrushed images of herself next to the digitally altered ones”.[29] The fundamental motive behind her move was to “highlight the pressure exerted on women to look perfect”.[29] In addition, 42-year-old Cate Blanchett appeared on the cover of Intelligent Life's 2012 March/April issue, makeup-free and without digital retouching for the first time.[33]

Companies against photo manipulation

Multiple companies have begun taking the initiative to speak out against the use of photo manipulation when advertising their products. Two companies that have done so are Dove and Aerie (American Eagle Outfitters). Dove created the Dove Self-Esteem Fund and the Dove Campaign for Real Beauty as ways to help build confidence in young women, emphasizing what is known as real beauty, or untouched photographs, in the media.[34] Aerie has started its #AerieREAL campaign, with a line of undergarments by that name intended to be for everyone.[35] Its advertisements state that the model has not been retouched in any way, and add that “The real you is sexy.”[36]

The American Medical Association has also taken a stand against the use of photo manipulation. Dr. McAneny stated that altering models to such extremes creates unrealistic expectations in children and teenagers regarding body image, and that we should stop altering models so that young people are not exposed to body types that can be attained only through editing the photos. The American Medical Association as a whole adopted a policy of working with advertisers to set up guidelines for advertisements that limit how much photoshopping is used, with the goal of limiting unrealistic expectations of body image in advertising.[37]

Governments against excessive photo manipulation

Governments are exerting pressure on advertisers and are starting to ban photos that are too airbrushed and edited. In the United Kingdom, the Advertising Standards Authority has banned an advertisement by Lancôme featuring Julia Roberts for being misleading, stating that the flawless skin seen in the photo was too good to be true.[38] The US is also moving in the direction of banning excessive photo manipulation: a CoverGirl ad was banned because its exaggerated effects gave a misleading representation of the product.[39]

Support for photo manipulation in media

Some magazine editors do not view manipulating their cover models as an issue. In an interview, the editor of the French magazine Marie Claire stated that their readers are not idiots and can tell when a model has been retouched. Some who support photo manipulation in the media also argue that the altered photographs are not the issue; rather, it is the expectations viewers form and fail to meet, such as wanting to have the same body as a celebrity on the cover of their favorite magazine.[40]

Surveys done about photo manipulation

Surveys have been done to see how photo manipulation affects society and what society thinks of it. One survey, by the United Kingdom fashion store New Look, showed that 90% of the individuals surveyed would prefer to see a wider variety of body shapes in the media: cover models that are not all thin, some with more curves than others. The survey also asked how readers view the use of photo manipulation; 15% of readers believed that cover images are accurate depictions of the model in reality, and 33% of the women surveyed were aiming for a body that is impossible for them to attain.[41] Dove also ran a survey on how photo manipulation affects the self-esteem of females. It found that 80% of the women surveyed felt insecure when seeing photos of celebrities in the media, and of the women surveyed who had lower self-esteem, 71% did not believe that their appearance was pretty or stylish enough in comparison to cover models.[42]

Social and cultural implications

The growing popularity of image manipulation has raised concern as to whether it allows for unrealistic images to be portrayed to the public. In her article "On Photography" (1977), Susan Sontag discusses the objectivity, or lack thereof, in photography, concluding that “photographs, which fiddle with the scale of the world, themselves get reduced, blown up, cropped, retouched, doctored and tricked out”.[43] A practice widely used in the magazine industry, the use of photo manipulation on an already subjective photograph, creates a constructed reality for the individual and it can become difficult to differentiate fact from fiction. With the potential to alter body image, debate

continues as to whether manipulated images, particularly those in magazines, contribute to self-esteem issues in both men and women. Photo manipulation can have a positive impact, developing the creativity of one’s mind, or a negative one, removing the art and beauty of capturing something magnificent and natural as it really is. According to the Huffington Post, “Photoshopping and airbrushing, many believe, are now an inherent part of the beauty industry, as are makeup, lighting and styling”. In a way, these image alterations are “selling” actual people to the masses to affect responses, reactions, and emotions toward these cultural icons.[44]

2.1.3 Types of digital photo manipulation

In digital editing, photographs are usually taken with a digital camera and input directly into a computer. Transparencies, negatives or printed photographs may also be digitized using a scanner, or images may be obtained from stock photography databases. With the advent of computers, graphics tablets, and digital cameras, the term image editing encompasses everything that can be done to a photo, whether in a darkroom or on a computer. Photo manipulation is often much more explicit than subtle alterations to color balance or contrast and may involve overlaying a head onto a different body or changing a sign’s text, for example. Image editing software can be used to apply effects and warp an image until the desired result is achieved. The resulting image may have little or no resemblance to the photo (or photos, in the case of compositing) from which it originated. Today, photo manipulation is widely accepted as an art form.

There are several subtypes of digital image retouching:

Technical retouching

Manipulation for photo restoration or enhancement (adjusting colors / contrast / white balance (i.e. gradational retouching), sharpness, noise, removing elements or visible flaws on skin or materials, ...)

Creative retouching

Used as an art form, or commercially, to create sleeker and more interesting images for advertisements. Creative retouching could be manipulation for fashion, beauty or advertising photography such as pack-shots (which could also be considered inherently technical retouching with regard to package dimensions and wrap-around factors). One of the most prominent disciplines in creative retouching is image compositing, whereby the digital artist uses multiple photos to create a single image. Today, such composites are used more and more to add extra elements or even whole locations and backgrounds. This kind of image composition is widely used when conventional photography would be technically too difficult or impossible to shoot on location or in studio.

2.1.4 Photoshopped

As a result of the popularity of Adobe Photoshop as image editing software, the neologism “photoshopped” has come into widespread use. The term commonly refers to any and all digital editing of photographs, regardless of what software is used.[45][46][47] Trademark owner Adobe Systems Incorporated, while flattered by the software’s popularity, objected to what it referred to as misuse of its trademarked software, and considered it an infringement on its trademark to use terms such as “photoshopped” or “photoshopping” as a noun or verb, in possessive form or as a slang term.[48] However, Adobe’s attempts to prevent "genericization"[49] or “genericide” of the company’s trademark[50] were to no avail. The terms “photoshop”, “photoshopped” and “photoshopping” are ubiquitous and widely used colloquially and academically when referencing image editing software as it relates to digital manipulation and alteration of photographs.[51][52] In popular culture, the term photoshopping is sometimes associated with montages in the form of visual jokes, such as those published on Fark and in MAD Magazine. Images may be propagated memetically via e-mail as humor or passed off as actual news in a form of hoax.[53][54] An example of the latter category is "Helicopter Shark", which was widely circulated as a so-called "National Geographic Photo of the Year” and was later revealed to be a hoax.[55]

• Photomontage of 16 photos which have been digitally manipulated in Photoshop to give the impression that it is a real landscape.

• Before its release to news media, congressional staff digitally added into this 2013 official portrait the heads of four members absent in the original photo shoot.[56][57][58][59][60]

• Photograph manipulated in Photoshop to give impression it is a painting complete with brush strokes.

• Original photograph of horses being sorted in a corral

• Digitally manipulated composite: horses in original photo are added to a photo of a pasture.

• Photomanipulation

2.1.5 See also

• 2006 Lebanon War photographs controversies

• Beauty whitewash

• Cottingley Fairies

• Kerry Fonda 2004 election photo controversy

• Pascal Dangin

• Photoshop contest

• Scientific misconduct#Photo manipulation

• Source criticism

• Tobacco bowdlerization

• Truth claim (photography)

• Visual arts

2.1.6 References

[1] Jack Dziamba (February 27, 2013). “Ansel Adams, and Photography Before Photoshop”.

[2] Mia Fineman (November 29, 2012). “Artbeat”. PBS Newshour (Interview). Interview with Tom Legro. South Florida: WPBT2. Retrieved January 30, 2016.

[3] “World’s oldest photo sold to library”. BBC News. 21 March 2002. Retrieved 2011-11-17. The image of an engraving depicting a man leading a horse was made in 1825 by Nicephore Niepce, who invented a technique known as heliogravure.

[4] Farid, Hany. “Photo Tampering Throughout History” (PDF).

[5] Klaus Wolfer (ed.). “How to manipulate SX-70?". Polaroid SX-70 Art. Skylab Portfolio. Retrieved February 2, 2016.

[6] Lynne Warren (2006). Encyclopedia of Twentieth-Century Photography, 3-Volume Set. Routledge. p. 1007. ISBN 9781135205362.

[7] Peres, Michael (2007). The Focal Encyclopedia of Photography, Fourth Edition. Focal Press; 4 edition. p. Preface, 24. ISBN 0-240-80740-5. Retrieved January 30, 2016.

[8] Reichmann, Michael (2006). “Making The Transition From Film To Digital” (PDF). Adobe Systems Incorporated.

[9] “Civil War Glass Negatives and Related Prints – Solving a Civil War Photograph Mystery”. Library of Congress. Retrieved January 30, 2016.

[10] Fabio Sasso (July 2011). Abduzeedo Inspiration Guide for Designers. Pearson Education. p. 124. ISBN 9780132684729.

[11] Rotman, Brian (2008). Becoming Beside Ourselves: The Alphabet, Ghosts, and Distributed Human Being. Duke University Press. pp. 96–97. ISBN 978-0822341833.

[12] Peter E. Palmquist, Thomas R. Kailbourn (2005). Pioneer Photographers from the Mississippi to the Continental Divide: A Biographical Dictionary, 1839-1865. Stanford University Press. p. 55. ISBN 978-0804740579.

[13] King, D. (1997). The Commissar Vanishes: the falsification of photographs and art in Stalin’s Russia. New York: Metropolitan Books. ISBN 0-8050-5294-1.

[14] Daniel Ammann (2009). The King of Oil: The Secret Lives of Marc Rich By Daniel Ammann. Macmillan. p. 228. ISBN 978-1429986854.

[15] The Newseum (Sep 1, 1999). “'The Commissar Vanishes' in The Vanishing Commissar”. Retrieved September 30, 2012.

[16] Carlson, Kathryn; DeLevie, Brian; Oliva, Aude (2006). “Ethics in image manipulation”. ACM SIGGRAPH 2006. International Conference on Computer Graphics and Interactive Techniques. ACM. doi:10.1145/1179171.1179176. ISBN 1-59593-364-6.

[17] Reynolds, C. J. (July 12–14, 2007). Image Act Theory (PDF). Seventh International Conference of Computer Ethics.

[18] Mitchell, William J. (1994). “How to Do Things with Pictures”. The Reconfigured Eye: Visual Truth in the Post-Photographic Era. MIT Press.

[19] Fred Ritchin (November 4, 1984). “Photography’s New Bag Of Tricks”. The New York Times Company. Retrieved January 6, 2016.

[20] “National Geographic — Altered Images”. Bronx Documentary Center. Retrieved January 6, 2016.

[21] Jonathan D. Glater (March 3, 2005). “Martha Stewart Gets New Body In Newsweek”. New York Times. Retrieved January 6, 2015.

[22] Katie Hefner (March 11, 2004). “The Camera Never Lies, But The Software Can”. The New York Times Company.

[23] Kitchin, Rob (2011). “6”. Code/Space: Software and Everyday Life. The MIT Press. p. 120. ISBN 0-262-04248-7.

[24] “NPPA Code of Ethics”. National Press Photographers Association.

[25] Lang, Daryl (April 15, 2007). “Blade Editor: Detrich Submitted 79 Altered Photos This Year”. Photo District News.

[26] “Announcement of disqualification”. World Press Photo. Retrieved 2016-01-22.

[27] “World Press Photo Organizer: 20% of Finalists Disqualified”. TIME.com. Retrieved 2016-01-22.

[28] “What counts as manipulation?”. World Press Photo. Retrieved 2016-01-22.

[29] “Britney Spears bravely agrees to release un-airbrushed images of herself next to the digitally-altered versions”. Daily Mail. April 13, 2010. Retrieved March 23, 2012.

[30] Metzmacher, Dirk. “Smashing Magazine.” Smashing Magazine. N.p., n.d. Web. April 16, 2014.

[31] “Photoshop contributes to unrealistic expectations of appropriate body image: AMA”. NY Daily News. Retrieved 2016-04-25.

[32] “Keep It Real Challenge: Photoshop’s Impact on Body Image”. info.umkc.edu. 29 June 2012. Retrieved 19 April 2015.

[33] Roberts, Soraya. “Cate Blanchett goes without digital enhancement on the cover of Intelligent Life”. The Juice. Retrieved March 22, 2012.

[34] “The Evolution Video – Dove Self Esteem Project”. selfesteem.dove.us. 2 June 2013. Retrieved 1 May 2015.

[35] “Aerie for American Eagle”. Retrieved 1 May 2015.

[36] Krupnick, Ellie (17 January 2014). “Aerie’s Unretouched Ads 'Challenge Supermodel Standards’ For Young Women”. The Huffington Post. Retrieved 1 May 2015.

[37] “AMA Adopts New Policies at Annual Meeting”. ama-assn.org. 21 June 2011. Retrieved 19 April 2015.

[38] Zhang, Michael. “Julia Roberts Makeup Ads Banned in UK for Too Much Photoshop”. PetaPixel. Retrieved July 27, 2011.

[39] Anthony, Sebastian. “US watchdog bans photoshopping in cosmetics ads”. Retrieved December 16, 2011.

[40] “In Defense of Photoshop: Why Retouching Isn't As Evil As Everyone Thinks”. The Cut. 29 August 2010. Retrieved 19 April 2015.

[41] “Women Need Further Educating Into Extent of Digital Manipulation of Mo”. prweb.com. 26 November 2013. Retrieved 19 April 2015.

[42] “The Self Esteem Act: Parents Push for Anti-Photoshop Law in U.S. to Protect Teens from Unrealistic Body Image Ideals”. Associated Newspapers. Daily Mail Online. 12 October 2011. Retrieved 19 April 2015.

[43] Sontag, Susan (1977). On Photography. p. 4.

[44] L. Boutwell, Allison. “Photoshop: A Positive and Negative Innovation”.

[45] Rodriguez, Edward (2008). Computer Graphic Artist. Netlibrary. p. 163. ISBN 81-89940-42-2. The term photoshopping is a neologism, meaning “editing an image”, regardless of the program used.

[46] Geelan, David (2006). Undead Theories: Constructivism, Eclecticism And Research in Education. Sense Publisher. p. 146. ISBN 90-77874-31-3. And with digital photography, there is also the possibility of photoshopping – digitally editing the representation to make it more aesthetically pleasing, or to change decisions about .

[47] Laurence M. Deutsch (2001). Medical Records for Attorneys. ALI-ABA. ISBN 9780831808174.

[48] “Use Of The Photoshop Trademark” (PDF) (Press release). Adobe Systems Incorporated. Retrieved February 2, 2016.

[49] Katharine Trendacosta (August 26, 2015). “Here Is Adobe’s Attempt to Stop People From Using the Term “Photoshop” All Willy-Nilly”. Gawker Media. Retrieved February 2, 2016.

[50] John Dwight Ingram (2004). “The Genericide of Trademarks” (PDF). Buffalo Intellectual Property Law Journal. State University of New York. 2 (2). Retrieved February 2, 2016.

[51] Blatner, David (August 1, 2000). “Photoshop: It’s Not Just a Program Anymore”. Macworld. Archived from the original on March 15, 2005.

[52] “Photoshop Essentials”. Portfolio-Builder: Adobe Photoshop and Adobe Illustrator Projects. Peachpit Press. August 15, 2005. ISBN 978-0-321-33658-3.

[53] Jenn Shreve (November 19, 2001). “Photoshop: It’s All the Rage”. Wired Magazine.

[54] Corrie Pikul (July 1, 2004). “The Photoshopping of the President”. Salon.com Arts & Entertainment.

[55] Danielson, Stentor; Braun, David (March 8, 2005). “Shark “Photo of the Year” Is E-Mail Hoax”. National Geographic News. Retrieved May 20, 2006.

[56] ABC News (Jan 4, 2013). “Pelosi Defends Altered Photo of Congresswomen”. Retrieved January 4, 2013.

[57] Washington Post (Jan 4, 2013). “House Democratic leader Nancy Pelosi defends altered photo of women House members”. Retrieved January 4, 2013.

[58] Salon.com (Jan 4, 2013). “Pelosi defends altered photo of congresswomen”. Retrieved January 4, 2013.

[59] Huffington Post (Jan 4, 2013). “Nancy Pelosi Defends Altered Photo Of Congresswomen (PHOTO)". Retrieved January 4, 2013.

[60] WSET News (ABC TV-13) (Jan 4, 2013). “Pelosi defends altered photo of congresswomen”. Retrieved January 4, 2013.

2.1.7 Further reading

• Ades, Dawn (1986). Photomontage. London, UK: Thames & Hudson. ISBN 0-500-20208-7.

2.1.8 External links

• Digital Tampering in the Media, Politics and Law – a collection of digitally manipulated photos of political interest

• Hoax Photo Gallery – more manipulated photos

• Erased figures in Kagemni’s tomb — discusses political image manipulation with an example from Ancient Egypt

Chapter 3

Day 3

3.1 Layers (digital image editing)

Layers are used in digital image editing to separate different elements of an image. A layer can be compared to a transparency on which imaging effects or images are applied and placed over or under an image. Today they are an integral feature of image editors. Layers were first commercially available in Fauve Matisse (later xRes),[1] and then available in Adobe Photoshop 3.0, in 1994, but today a wide range of other programs, such as Photo-Paint, Paint Shop Pro, GIMP, Paint.NET, StylePix, and even batch processing tools also include this feature. In vector image editors which support animation, layers are used to further enable manipulation along a common timeline for the animation; in SVG images, the equivalent of layers are “groups”.

3.1.1 Layer types

There are different kinds of layers, and not all of them exist in all programs. They represent a part of a picture, either as pixels or as modification instructions. They are stacked on top of each other and, depending on the order, determine the appearance of the final picture. In graphics software, a layer refers to the different levels at which you can place an object or image file; in the program you can stack, merge or define layers when creating a digital image. Layers can be partially obscured, allowing portions of an image within a layer to be hidden or shown translucently within another image, or layers can be used to combine two or more images into a single digital image. For the purpose of editing, working with layers allows you to go back and make changes within a layer as you work.

3.1.2 Layer (basic)

The standard kind of layer is called simply “layer” in most programs. It contains just a picture which can be superimposed on another one. The picture can cover the same area as the resulting picture, just a part of it, or, in some cases, a bigger area than the final picture. A layer can have a certain transparency/opacity and a number of other properties. In a high-end program like Adobe Photoshop, a basic layer may have more than a hundred different possible settings. Even though some of them overlap and give the same result, they give a skilled user a lot of flexibility. A free program like the GIMP may not have as many settings, but, used well, they can often produce a satisfactory result. Two layers can blend using one of several modes which result in different light and colour combinations.
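The way two layers blend under different modes can be sketched numerically. The following Python example is a simplification (real editors support many more modes and per-channel behaviour); the function name `blend` and the small arrays are illustrative, and pixel values are assumed normalized to the range 0–1:

```python
import numpy as np

def blend(base, top, mode="normal", opacity=1.0):
    """Blend a top layer onto a base layer; values are floats in 0-1."""
    if mode == "normal":
        out = top
    elif mode == "multiply":      # darkens: white (1.0) is neutral
        out = base * top
    elif mode == "screen":        # lightens: black (0.0) is neutral
        out = 1.0 - (1.0 - base) * (1.0 - top)
    else:
        raise ValueError(mode)
    # Opacity mixes the blended result back with the base layer
    return base + (out - base) * opacity

base = np.array([0.5, 0.8])
top = np.array([0.5, 0.5])
print(blend(base, top, "multiply"))  # multiply darkens both pixels
print(blend(base, top, "screen"))    # screen lightens both pixels
```

The opacity term shows why a layer's transparency setting is independent of its blend mode: the mode computes a candidate result, and opacity decides how far the base layer moves toward it.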

3.1.3 Layer mask

A layer mask is linked to a layer and hides part of the layer from the picture. What is painted black on the layer mask will not be visible in the final picture. What is grey will be more or less transparent depending on the shade of grey.


As the layer mask can be both edited and moved around independently of both the background layer and the layer it applies to, it gives the user the ability to test a lot of different combinations of overlay.
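The behaviour of a greyscale layer mask described above can be expressed as a simple per-pixel interpolation. A minimal Python sketch, assuming values in 0–1; the helper name `apply_with_mask` is chosen here for illustration:

```python
import numpy as np

def apply_with_mask(background, layer, mask):
    """Composite `layer` over `background` through a greyscale mask.
    mask = 1.0 (white) shows the layer fully, 0.0 (black) hides it,
    and intermediate greys give partial transparency."""
    return mask * layer + (1.0 - mask) * background

bg = np.full(4, 0.2)                  # dark background
fg = np.full(4, 0.9)                  # bright layer
mask = np.array([0.0, 0.5, 1.0, 0.25])
print(apply_with_mask(bg, fg, mask))  # per-pixel mix of the two layers
```

Because the mask is just another greyscale image, it can be painted, blurred or moved independently of the layer it controls, which is what makes experimenting with overlays cheap.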

3.1.4 Adjustment layer

An adjustment layer typically applies a common effect like brightness or saturation to other layers. However, as the effect is stored in a separate layer, it is easy to try it out and switch between different alternatives, without changing the original layer. In addition, an adjustment layer can easily be edited, just like a layer mask, so an effect can be applied to just part of the image.

3.1.5 See also

• Alpha compositing • Comparison of raster graphics editors

• Digital image editing • Raster graphics

• Image processing • Sprite (computer graphics)

3.1.6 References

[1] Macromedia Matisse

This picture consists of a blue background and on top of that a layer of conifers cut using a layer-mask in the shape of a seagull.

A gradient is applied as an adjustment layer to the entire image except the oval in the middle, which was cut out from the adjustment layer.

Chapter 4

Day 4

4.1 Image histogram

Sunflower image

Histogram of sunflower image

An image histogram is a type of histogram that acts as a graphical representation of the tonal distribution in a digital image.[1] It plots the number of pixels for each tonal value. By looking at the histogram for a specific image a viewer will be able to judge the entire tonal distribution at a glance. Image histograms are present on many modern digital cameras. Photographers can use them as an aid to show the distribution of tones captured, and whether image detail has been lost to blown-out highlights or blacked-out shadows.[2] This is less useful when using a raw image format, as the dynamic range of the displayed image may only be an approximation to that in the raw file. The horizontal axis of the graph represents the tonal variations, while the vertical axis represents the number of pixels in that particular tone.[1] The left side of the horizontal axis represents the black and dark areas, the middle represents medium grey and the right hand side represents light and pure white areas. The vertical axis represents the size of the area that is captured in each one of these zones. Thus, the histogram for a very dark image will have the majority of its data points on the left side and center of the graph. Conversely, the histogram for a very bright image with few dark areas and/or shadows will have most of its data points on the right side and center of the graph.
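As a sketch of how such a histogram is computed, the following Python/NumPy snippet counts pixels per tonal value for a tiny 8-bit greyscale image (the image data is made up for illustration):

```python
import numpy as np

# Tonal values of an 8-bit greyscale image: 0 = black, 255 = white
image = np.array([[0, 0, 64],
                  [64, 64, 255]], dtype=np.uint8)

# 256 bins, one per tonal value; counts[v] = number of pixels with value v
counts, _ = np.histogram(image, bins=256, range=(0, 256))

print(counts[0], counts[64], counts[255])  # -> 2 3 1
```

A mostly dark image would pile its counts into the low-index bins on the left of such a plot, matching the description above.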


4.1.1 Image manipulation and histograms

Image editors typically have provisions to create a histogram of the image being edited. The histogram plots the number of pixels in the image (vertical axis) with a particular brightness value (horizontal axis). Algorithms in the digital editor allow the user to visually adjust the brightness value of each pixel and to dynamically display the results as adjustments are made.[3] Improvements in picture brightness and contrast can thus be obtained. In the field of computer vision, image histograms can be useful tools for thresholding. Because the information contained in the graph is a representation of pixel distribution as a function of tonal variation, image histograms can be analyzed for peaks and/or valleys. The tonal value at such a peak or valley can then be used as a threshold for edge detection, image segmentation, and computing co-occurrence matrices.

4.1.2 See also

• Image editing

• Color histogram, a multidimensional histogram of the distribution of color in an image

• Histogram equalization

• Histogram matching

4.1.3 References

[1] Ed Sutton. “Histograms and the ”. Illustrated Photography.

[2] Michael Freeman (2005). The Digital SLR Handbook. Ilex. ISBN 1-904705-36-7.

[3] Martin Evening (2007). Adobe Photoshop CS3 for Photographers: A Professional Image Editor’s Guide... Focal Press. ISBN 0-240-52028-9.

4.1.4 External links

• CAMERA HISTOGRAMS: TONES & CONTRAST at cambridgeincolour.com

4.2 Curve (tonality)

In image editing, a curve is a remapping of image tonality, specified as a function from input level to output level, used as a way to emphasize colours or other elements in a picture.[1][2] Curves can usually be applied to all channels together in an image, or to each channel individually. Applying a curve to all channels typically changes the brightness in part of the spectrum. The software user may for example make light parts of a picture lighter and dark parts darker to increase contrast. Applying a curve to individual channels can be used to stress a colour. This is particularly efficient in the Lab colour space due to the separation of luminance and chromaticity,[3] but it can also be used in RGB, CMYK or whatever other colour models the software supports.
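In software, a curve is typically realized as a look-up table built from a handful of user-placed control points. The following Python sketch uses linear interpolation between the points for simplicity (real editors fit smoother splines) and applies an S-shaped contrast curve to 8-bit values; the control points are made up for illustration:

```python
import numpy as np

def apply_curve(image, curve):
    """Remap 8-bit tonal values through a curve given as a few
    (input, output) control points; intermediate values are
    linearly interpolated into a 256-entry look-up table."""
    xs, ys = zip(*curve)
    lut = np.interp(np.arange(256), xs, ys).astype(np.uint8)
    return lut[image]

# A gentle S-curve: darkens shadows, lightens highlights -> more contrast
s_curve = [(0, 0), (64, 48), (192, 208), (255, 255)]
img = np.array([0, 64, 128, 192, 255], dtype=np.uint8)
print(apply_curve(img, s_curve).tolist())  # -> [0, 48, 128, 208, 255]
```

Applying such a table to one channel only (say red) stresses that colour, while applying it to all channels changes brightness and contrast, as described above.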

4.2.1 See also

• Blend modes

• Hurter–Driffield curve

• Tone reproduction curve

Photo and curve dialog in the GIMP

Photo and curve dialog with red colour emphasized in the lighter end of the spectrum.

4.2.2 References

[1] The manual

[2] Adobe web site on curves in Photoshop

[3] Margulis, Dan (2005). Photoshop LAB Color: The Canyon Conundrum and Other Adventures in the Most Powerful Col- orspace. ISBN 0-321-35678-0.

4.2.3 External links

• Defanging the Curves Vampire, Dan Margulis, December, 1996

4.3 Sensitometry

Sensitometry is the scientific study of light-sensitive materials, especially photographic film. The study has its origins in the work by Ferdinand Hurter and Vero Charles Driffield (circa 1876) with early black-and-white emulsions.[1][2] They determined how the density of silver produced varied with the amount of light received, and the method and time of development.

4.3.1 Details

Plots of film density (log of opacity) versus the log of exposure are called characteristic curves,[3] Hurter–Driffield curves,[4] H–D curves,[4] HD curves,[5] H & D curves,[6] D–logE curves,[7] or D–logH curves.[8] At moderate exposures, the overall shape is typically like an “S” slanted so that its base and top are horizontal. There is usually a central region of the HD curve which approximates a straight line, called the “linear” or “straight-line” portion; the slope of this region is called the gamma. The low end is called the “toe”, and at the top, the curve rounds over to form the “shoulder”. At extremely high exposures, the density may come back down, an effect known as solarisation. Different commercial film materials cover a gamma range from about 0.5 to about 5. Often it is not the original film that one views but a copy of a later generation. In these cases the end-to-end gamma is approximately the product of the separate gammas. Photographic paper prints have end-to-end gammas generally somewhat over 1. Projection transparencies for dark-surround viewing have an end-to-end gamma of approximately 1.5. A full set of HD curves for a film shows how these vary with developer type and time.[3]
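Since gamma is the slope of the straight-line portion of the H–D curve, it can be estimated by fitting a line to density-versus-log-exposure samples. A Python sketch using made-up sample data (the toe and shoulder are omitted so the whole sample is linear):

```python
import numpy as np

# Hypothetical H-D curve samples: log exposure vs. density, with a
# linear mid-section of slope (gamma) 0.7 between logE = 1 and 3
log_e = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
density = 0.2 + 0.7 * (log_e - 1.0)

# Gamma is the slope of the fitted straight line
gamma = np.polyfit(log_e, density, 1)[0]
print(round(gamma, 3))        # -> 0.7

# End-to-end gamma of a viewing chain is roughly the product of stages,
# e.g. a gamma-0.7 negative printed on paper with gamma 2.0:
print(round(gamma * 2.0, 2))  # -> 1.4
```

With real densitometer data one would first restrict the fit to the straight-line region, since including the toe and shoulder biases the slope downward.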

4.3.2 See also

• Densitometry

• Leon Warnerke aka Władysław Małachowski, inventor of the first practical sensitometer in 1880.

• Hurter and Driffield

• Julius Scheiner - Scheiner sensitometer

• Henry Chapman Jones - Chapman Jones plate tester

• Josef Maria Eder and Walter Hecht - Eder-Hecht neutral wedge sensitometer

• Spectral sensitivity

• Zone system

4.3.3 References

[1] Hurter, Ferdinand & Driffield, Vero Charles (1890) Photochemical Investigations and a New Method of Determination of the Sensitiveness of Photographic Plates, J. Soc. Chem. Ind. May 31, 1890.

[2] Mees, C. E. Kenneth (May 1954). “L. A. Jones and his Work on Photographic Sensitometry” (PDF). Image, Journal of Photography of George Eastman House. Rochester, N.Y.: International Museum of Photography at George Eastman House Inc. III (5): 34–36. Retrieved 15 July 2014.

[3] “ PROFESSIONAL TRI-X 320 and 400 Films” (PDF). Eastman Kodak Company. May 2007.

[4] Stuart B. Palmer and Mircea S. Rogalski (1996). Advanced University Physics. Taylor & Francis. ISBN 2-88449-065-5.

[5] Kenneth W. Busch and Marianna A. Busch (1990). Multielement Detection Systems for Spectrochemical Analysis. Wiley- Interscience. ISBN 0-471-81974-3.

[6] Richard R. Carlton, Arlene McKenna Adler (2000). Principles of Radiographic Imaging: An Art and a Science. Thomson Delmar Learning. ISBN 0-7668-1300-2.

[7] Ravi P. Gupta (2003). Remote Sensing Geology. Springer. ISBN 3-540-43185-3.

[8] Leslie D. Stroebel and Richard D. Zakia (1993). The Focal Encyclopedia of Photography. Focal Press. ISBN 0-240-51417-3.

4.3.4 External links

• Basic Sensitometry And Characteristics of Film (Kodak undated)

• A memorial volume containing an account of the photographic researches of Ferdinand Hurter & Vero C. Driffield; being a reprint of their published papers, together with a history of their early work and a bibliography of later work on the same subject. (Royal Photographic Society of Great Britain 1920)

Page 10 of Raymond Davis, Jr. and F. M. Walters, Jr., Scientific Papers of the Bureau of Standards, No. 439 (Part of Vol. 18) “Sensitometry of Photographic Emulsions and a Survey of the Characteristics of Plates and Films of American Manufacture,” 1922. The next page starts with the H & D quote: “In a theoretically perfect negative, the amounts of silver deposited in the various parts are proportional to the logarithms of the intensities of light proceeding from the corresponding parts of the object.” The assumption here, based on empirical observations, is that the “amount of silver” is proportional to the optical density.

4.4 Tone reproduction

In the theory of photography, tone reproduction is the mapping of scene luminance and color to print reflectance or display luminance,[1] with the aim of subjectively “properly” reproducing brightness and “brightness differences”.[2] The reproduction of color scenes in black-and-white tones is one of the long-time concerns of photographers.[3] A tone reproduction curve is often referred to by its initials, TRC, and the 'R' is sometimes said to stand for response, as in tone response curve.

4.4.1 In photography

In photography, the differences between an “objective” and “subjective” tone reproduction, and between “accurate” and “preferred” tone reproduction, have long been recognized. Many steps in the process of photography are recognized as having their own nonlinear curves, which in combination form the overall tone reproduction curve; the Jones diagram was developed as a way to illustrate and combine curves, to study and explain the photographic process.[4][5] The luminance range of a scene maps to the focal-plane illuminance and exposure in a camera, not necessarily directly proportionally, as when a graduated neutral density filter is used to reduce the exposure range to less than the scene luminance range. The film responds nonlinearly to the exposure, as characterized by the film’s characteristic curve, or Hurter–Driffield curve; this plot of the optical density of the developed negative versus the logarithm of the exposure (also called a D–logE curve) has a central straight section whose slope is called the gamma of the film. The gamma can be controlled by choosing different films, or by varying the development time or temperature. Similarly, the light transmitted by the negative exposes a photographic paper and interacts with the characteristic curve of the paper to give an overall tone reproduction curve. The exposure of the paper is sometimes modified in the darkroom by dodging and/or burning-in, further complicating the overall tone reproduction, usually helping to map a wider dynamic range from a negative onto a narrower print reflectance range. In digital photography, image sensors tend to be nearly linear, but these nonlinear tone reproduction characteristics are emulated in the camera hardware and/or processing software, via "curves".

4.4.2 In printing

In printing, a tone reproduction curve is applied to a desired output-referred luminance value, for example to adjust for the dot gain of a particular printing method.[6] Dot-based printing methods have a finite native dot size. The dot is not square, nor any other shape that when stacked together perfectly fills an image area; rather, the dot will be larger than its target area and overlap its neighbors to some extent. If it were smaller than its target area, it would not be possible to saturate the substrate. A tone reproduction curve is applied to the electronic image prior to printing, so that the reflectance of the print closely approximates a proportionality to the luminance intent implied by the electronic image. It is easier to demonstrate the need for a TRC using halftoned printing methods such as inkjet, or xerographic technologies. However, the need also applies to continuous-tone methods such as photographic paper printing. As an example, suppose one wants to print an area at 50% reflectance, treating bare paper (no ink) as 100% reflective and saturated black ink as 0% (which of course they aren't exactly). The 50% could be approximated using digital halftoning by applying a dot of ink at every other dot target area, and staggering the lines in a brick-like fashion. In a perfect world, this would cover exactly half of the page with ink and make the page appear to have 50% reflectivity. However, because the ink will bleed into its neighboring target locations, greater than 50% of the page will be dark. To compensate for this darkening, a TRC is applied and the digital image’s reflectance value is reduced to something less than 50% dot coverage. When digital halftoning is performed, we will no longer have the uniform on-off-on-off pattern, but we will have another pattern that will target less than 50% of the area with ink. If the correct TRC was chosen, the area will have an average 50% reflectance after the ink has bled.
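The compensation just described can be sketched as inverting a dot-gain model: the TRC is the inverse of the printing nonlinearity. The gain model below is a toy assumption for illustration, not a measured press curve, and the helper names are arbitrary:

```python
def printed_coverage(requested, k=0.2):
    """Toy dot-gain model (an assumption, not a measured press curve):
    ink bleed adds k*a*(1-a) to the requested coverage a, so gain is
    largest in the midtones and zero at 0% and 100%."""
    return requested + k * requested * (1.0 - requested)

def trc(target, k=0.2):
    """Invert the gain model: find the requested coverage that will
    actually print as `target` coverage (bisection search, since the
    model is monotonic on [0, 1])."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if printed_coverage(mid, k) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

a = trc(0.5)
print(round(a, 4))                     # -> 0.4505 (request less than 50%)
print(round(printed_coverage(a), 4))   # -> 0.5 (prints as the intended 50%)
```

This matches the worked example in the text: to obtain an average 50% reflectance after the ink bleeds, the digital image must request somewhat less than 50% dot coverage.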
A TRC can be applied when doing color space conversion. For example, by default, when transforming from L*A*B* to CMYK, Photoshop applies an ICC profile for SWOP standard inks and 20% dot gain for coated paper.

4.4.3 See also

• Curve (tonality) 4.4. TONE REPRODUCTION 49

• Dot gain

• Jones diagram

4.4.4 References

[1] John Sturge; Vivian Walworth & Allan Shepp (1989). Imaging Processes and Materials. John Wiley and Sons. ISBN 0-471-29085-8.

[2] L. A. Jones (July 1920). “On the Theory of Tone Reproduction, with a Graphic Method for the Solution of Problems”. Journal of the Franklin Institute. The Franklin Institute of the State of Pennsylvania. 190 (1): 39–90. doi:10.1016/S0016-0032(20)92118-X.

[3] “A New Photographic Process”. American Engineer and Railroad Journal. XLVIII (4): 183. April 1894.

[4] L. A. Jones (March 1921). “Photographic Reproduction of Tone”. Journal of the Optical Society of America. OSA. V (2): 232. doi:10.1364/josa.5.000232.

[5] Leslie D. Stroebel; Ira Current; John Compton & Richard D. Zakia (2000). Basic Photographic Materials and Processes. Focal Press. pp. 235–255. ISBN 0-240-80405-8.

[6] Charles Hains; et al. (2003). “Digital Color Halftones”. In Gaurav Sharma. Digital Color Imaging Handbook. CRC Press. ISBN 0-8493-0900-X.

Chapter 5

Day 5

5.1 Gamma correction

Gamma correction, or often simply gamma, is the name of a nonlinear operation used to encode and decode luminance or tristimulus values in video or still image systems.[1] Gamma correction is, in the simplest cases, defined by the following power-law expression:

Vout = A · Vin^γ

where the non-negative real input value Vin is raised to the power γ and multiplied by the constant A, to get the output value Vout . In the common case of A = 1, inputs and outputs are typically in the range 0–1. A gamma value γ < 1 is sometimes called an encoding gamma, and the process of encoding with this compressive power-law nonlinearity is called gamma compression; conversely a gamma value γ > 1 is called a decoding gamma and the application of the expansive power-law nonlinearity is called gamma expansion.
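The power-law expression can be illustrated directly. The following Python sketch uses γ = 1/2.2 for encoding and γ = 2.2 for decoding (common illustrative values, not mandated by the definition above) with A = 1:

```python
def gamma_encode(v, gamma=1 / 2.2, a=1.0):
    """Gamma compression: Vout = A * Vin ** gamma, inputs in 0-1."""
    return a * v ** gamma

def gamma_decode(v, gamma=2.2, a=1.0):
    """Gamma expansion: the inverse operation for A = 1."""
    return a * v ** gamma

x = 0.5
encoded = gamma_encode(x)        # compression pushes mid-tones up
decoded = gamma_decode(encoded)  # expansion restores the original
print(round(encoded, 4), round(decoded, 4))  # -> 0.7297 0.5
```

Note how the mid-grey input 0.5 encodes to about 0.73: the compressive curve devotes more of the coded range to darker tones, which is the bit-allocation argument made in the next section.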

5.1.1 Explanation

Gamma encoding of images is used to optimize the usage of bits when encoding an image, or bandwidth used to transport an image, by taking advantage of the non-linear manner in which humans perceive light and color.[1] The human perception of brightness, under common illumination conditions (not pitch black nor blindingly bright), follows an approximate power function (note: no relation to the Gamma function), with greater sensitivity to relative differences between darker tones than between lighter ones, consistent with the Stevens’ power law for brightness perception. If images are not gamma-encoded, they allocate too many bits or too much bandwidth to highlights that humans cannot differentiate, and too few bits or too little bandwidth to shadow values that humans are sensitive to and would require more bits/bandwidth to maintain the same visual quality.[1][2] Gamma encoding of floating-point images is not required (and may be counterproductive), because the floating-point format already provides a piecewise linear approximation of a logarithmic curve.[3] Although gamma encoding was developed originally to compensate for the input–output characteristic of cathode ray tube (CRT) displays, that is not its main purpose or advantage in modern systems. In CRT displays, the light intensity varies nonlinearly with the electron-gun voltage. Altering the input signal by gamma compression can cancel this nonlinearity, such that the output picture has the intended luminance. However, the gamma characteristics of the display device do not play a factor in the gamma encoding of images and video—they need gamma encoding to maximize the visual quality of the signal, regardless of the gamma characteristics of the display device.[1][2] The similarity of CRT physics to the inverse of gamma encoding needed for video transmission was a combination of luck and engineering, which simplified the electronics in early television sets.[4]

5.1.2 Generalized gamma

The concept of gamma can be applied to any nonlinear relationship. For the power-law relationship Vout = Vin^γ, the curve on a log–log plot is a straight line, with slope everywhere equal to gamma (slope is represented here by the derivative operator):

γ = d log(Vout) / d log(Vin)

That is, gamma can be visualized as the slope of the input–output curve when plotted on logarithmic axes. For a power-law curve, this slope is constant, but the idea can be extended to any type of curve, in which case gamma (strictly speaking, “point gamma”[5]) is defined as the slope of the curve in any particular region.
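Point gamma can be estimated numerically as this log–log slope. The sketch below (a hypothetical helper using only the standard library) checks that a pure power law has constant point gamma equal to its exponent:

```python
import math

def point_gamma(curve, v, eps=1e-6):
    """Estimate the point gamma d log(Vout) / d log(Vin) at input v,
    using a small symmetric step in the log domain."""
    lo, hi = v * (1 - eps), v * (1 + eps)
    return (math.log(curve(hi)) - math.log(curve(lo))) / (math.log(hi) - math.log(lo))

# For a pure power law, the point gamma equals the exponent everywhere:
power = lambda v: v ** 2.2
# point_gamma(power, 0.2) and point_gamma(power, 0.8) are both ~2.2
```

For a non-power-law curve (such as the sRGB nonlinearity), the same estimator returns a different value at each point, which is exactly the “point gamma” described above.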

5.1.3 Film photography

Main article: Sensitometry

When a photographic film is exposed to light, the result of the exposure can be represented on a graph showing log of exposure on the horizontal axis, and density, or log of transmittance, on the vertical axis. For a given film formulation and processing method, this curve is its characteristic or Hurter–Driffield curve.[6][7] Since both axes use logarithmic units, the slope of the linear section of the curve is called the gamma of the film. Negative film typically has a gamma less than 1; positive film (slide film, reversal film) typically has a gamma greater than 1.

Photographic film has a much greater ability to record fine differences in shade than can be reproduced on photographic paper. Similarly, most video screens are not capable of displaying the range of brightnesses (dynamic range) that can be captured by typical electronic cameras.[8] For this reason, considerable artistic effort is invested in choosing the reduced form in which the original image should be presented. The gamma correction, or contrast selection, is part of the photographic repertoire used to adjust the reproduced image.

Analogously, digital cameras record light using electronic sensors that usually respond linearly. In the process of rendering linear raw data to conventional RGB data (e.g. for storage into JPEG image format), color space transformations and rendering transformations will be performed. In particular, almost all standard RGB color spaces and file formats use a non-linear encoding (a gamma compression) of the intended intensities of the primary colors of the photographic reproduction; in addition, the intended reproduction is almost always nonlinearly related to the measured scene intensities, via a tone reproduction nonlinearity.
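Since film gamma is just the slope of the straight-line section of the characteristic curve, it can be read off two points on that section. The sample readings below are hypothetical, chosen only to illustrate the calculation:

```python
def film_gamma(points):
    """Slope of the straight-line section of an H&D curve.

    points: (log_exposure, density) pairs lying on the linear section,
    ordered by increasing exposure; the slope is taken end to end.
    """
    (x0, y0), (x1, y1) = points[0], points[-1]
    return (y1 - y0) / (x1 - x0)

# Hypothetical negative-film readings on the straight-line portion:
samples = [(1.0, 0.55), (1.5, 0.90), (2.0, 1.25)]
# film_gamma(samples) gives 0.7: a gamma less than 1, typical of negative film.
```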

5.1.4 Windows, Mac, sRGB and TV/video standard gammas

In most computer display systems, images are encoded with a gamma of about 0.45 and decoded with the reciprocal gamma of 2.2. A notable exception, until the release of Mac OS X 10.6 (Snow Leopard) in September 2009, were Macintosh computers, which encoded with a gamma of 0.55 and decoded with a gamma of 1.8. In any case, binary data in still image files (such as JPEG) are explicitly encoded (that is, they carry gamma-encoded values, not linear intensities), as are motion picture files (such as MPEG). The system can optionally further manage both cases, through color management, if a better match to the output device gamma is required.

The sRGB color space standard used with most cameras, PCs, and printers does not use a simple power-law nonlinearity as above, but has a decoding gamma value near 2.2 over much of its range, as shown in the accompanying plot. Below a compressed value of 0.04045, or a linear intensity of 0.00313, the curve is linear (encoded value proportional to intensity), so γ = 1. The dashed black curve behind the red curve is a standard γ = 2.2 power-law curve, for comparison.

Output to CRT-based television receivers and monitors does not usually require further gamma correction, since the standard video signals that are transmitted or stored in image files incorporate gamma compression that provides a pleasant image after the gamma expansion of the CRT (it is not the exact inverse). For television signals, the actual gamma values are defined by the video standards (NTSC, PAL or SECAM), and are always fixed and well-known values.
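The sRGB piecewise curve described above can be written out directly; the constants (12.92, 0.055, the exponent 2.4, and the 0.04045/0.0031308 thresholds) are the standard sRGB ones, while the function names are illustrative:

```python
def srgb_encode(linear: float) -> float:
    """sRGB gamma compression: linear intensity in [0, 1] -> encoded value."""
    if linear <= 0.0031308:
        return 12.92 * linear                     # linear segment near black, gamma = 1
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded: float) -> float:
    """sRGB gamma expansion: encoded value in [0, 1] -> linear intensity."""
    if encoded <= 0.04045:
        return encoded / 12.92                    # inverse of the linear segment
    return ((encoded + 0.055) / 1.055) ** 2.4
```

Although the exponent inside the power term is 2.4, the overall curve behaves like a power law with exponent near 2.2 over much of its range, as noted above.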

5.1.5 Power law for video display

A gamma characteristic is a power-law relationship that approximates the relationship between the encoded luma in a television system and the actual desired image luminance.

With this nonlinear relationship, equal steps in encoded luminance correspond roughly to subjectively equal steps in brightness. Ebner and Fairchild[9] used an exponent of 0.43 to convert linear intensity into lightness (luma) for neutrals; the reciprocal, approximately 2.33 (quite close to the 2.2 figure cited for a typical display subsystem), was found to provide approximately optimal perceptual encoding of grays.

The following shows the difference between a scale with linearly-increasing encoded luminance signal (linear gamma-compressed luma input) and a scale with linearly-increasing intensity (linear luminance output). On most displays (those with a gamma of about 2.2), one can observe that the linear-intensity scale has a large jump in perceived brightness between the intensity values 0.0 and 0.1, while the steps at the higher end of the scale are hardly perceptible. The gamma-encoded scale, which has a nonlinearly-increasing intensity, shows much more even steps in perceived brightness.

A cathode ray tube (CRT), for example, converts a video signal to light in a nonlinear way, because the electron gun’s intensity (brightness) as a function of applied video voltage is nonlinear. The light intensity I is related to the source voltage Vs according to

I ∝ Vs^γ

where γ is the Greek letter gamma. For a CRT, the gamma that relates brightness to voltage is usually in the range 2.35 to 2.55; video look-up tables in computers usually adjust the system gamma to the range 1.8 to 2.2,[1] which is in the region that makes a uniform encoding difference give approximately uniform perceptual brightness difference, as illustrated in the diagram at the top of this section.

For simplicity, consider the example of a monochrome CRT. In this case, when a video signal of 0.5 (representing mid-gray) is fed to the display, the intensity or brightness is about 0.22 (resulting in a dark gray). Pure black (0.0) and pure white (1.0) are the only shades that are unaffected by gamma.

To compensate for this effect, the inverse transfer function (gamma correction) is sometimes applied to the video signal so that the end-to-end response is linear. In other words, the transmitted signal is deliberately distorted so that, after it has been distorted again by the display device, the viewer sees the correct brightness. The inverse of the function above is:

Vc ∝ Vs^(1/γ)

where Vc is the corrected voltage and Vs is the source voltage, for example from an image sensor that converts photocharge linearly to a voltage. In our CRT example, 1/γ is 1/2.2, or approximately 0.45.

A color CRT receives three video signals (red, green and blue), and in general each color has its own value of gamma, denoted γR, γG or γB. However, in simple display systems, a single value of γ is used for all three colors.

Other display devices have different values of gamma: for example, a Game Boy Advance display has a gamma between 3 and 4, depending on lighting conditions. In LCDs such as those on laptop computers, the relation between the signal voltage Vs and the intensity I is very nonlinear and cannot be described with a single gamma value. However, such displays apply a correction to the signal voltage in order to approximate a standard γ = 2.5 behavior. In NTSC television recording, γ = 2.2.

The power-law function, or its inverse, has a slope of infinity at zero. This leads to problems in converting from and to a gamma colorspace. For this reason, most formally defined colorspaces, such as sRGB, define a straight-line segment near zero and raise x + K (where K is a constant) to a power elsewhere, so that the curve has continuous slope. This straight line does not represent what the CRT does, but does make the rest of the curve more closely match the effect of ambient light on the CRT. In such expressions the exponent is not the gamma; for instance, the sRGB function uses a power of 2.4 in it, but more closely resembles a power-law function with an exponent of 2.2, without a linear portion.

5.1.6 Methods to perform display gamma correction in computing

Up to four elements can be manipulated in order to achieve gamma encoding to correct the image to be shown on a typical 2.2- or 1.8-gamma computer display:

• The pixel’s intensity values in a given image file; that is, the binary pixel values are stored in the file in such a way that they represent the light intensity via gamma-compressed values instead of a linear encoding. This is done systematically with video files (such as those on a DVD movie), in order to minimize the gamma-decoding step while playing, and to maximize image quality for the given storage. Similarly, pixel values in standard image file formats are usually gamma-compensated, either for sRGB gamma (or equivalent, an approximation of typical legacy monitor gammas), or according to some gamma specified by metadata such as an ICC profile. If the encoding gamma does not match the reproduction system’s gamma, further correction may be done, either on display or to create a modified image file with a different profile.

• The rendering software writes gamma-encoded pixel binary values directly to the video memory (when highcolor/truecolor modes are used) or to the CLUT hardware registers (when indexed color modes are used) of the display adapter. These drive digital-to-analog converters (DACs) which output proportional voltages to the display. For example, when using 24-bit RGB color (8 bits per channel), writing a value of 128 (the rounded midpoint of the 0–255 byte range) to video memory outputs a proportional ≈ 0.5 voltage to the display, which is shown darker due to the monitor behavior. Alternatively, to achieve ≈ 50% intensity, the rendering software can apply a gamma-encoded look-up table and write a value near 187 instead of 128.

• Modern display adapters have dedicated calibrating CLUTs, which can be loaded once with the appropriate gamma-correction look-up table in order to modify the encoded signals digitally before the DACs that output voltages to the monitor.[10] Setting up these tables to be correct is called hardware calibration.[11]

• Some modern monitors allow the user to manipulate their gamma behavior (as if it were merely another brightness/contrast-like setting), encoding the input signals by themselves before they are displayed on screen. This is also a hardware calibration technique, but it is performed on the analog electric signals instead of remapping the digital values, as in the previous cases.
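The look-up-table approach described in the list above can be sketched as follows; the function and the assumed 8-bit channel and 2.2-gamma display are illustrative:

```python
def build_gamma_lut(display_gamma: float = 2.2):
    """8-bit LUT mapping a desired linear intensity index to the gamma-encoded
    value to write to video memory for a display of the given gamma."""
    return [round(255 * (i / 255) ** (1 / display_gamma)) for i in range(256)]

lut = build_gamma_lut()
# To show ~50% intensity on a 2.2-gamma monitor, write lut[128] (about 186)
# to video memory instead of 128, matching the example above.
```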

In a correctly calibrated system, each component will have a specified gamma for its input and/or output encodings.[11] Stages may change the gamma to correct for different requirements, and finally the output device will do gamma decoding or correction as needed, to get to a linear intensity domain. All the encoding and correction methods can be arbitrarily superimposed, without mutual knowledge of this fact among the different elements; if done incorrectly, these conversions can lead to highly distorted results, but if done correctly as dictated by standards and conventions they will lead to a properly functioning system.

In a typical system, for example from camera through JPEG file to display, the role of gamma correction involves several cooperating parts. The camera encodes its rendered image into the JPEG file using one of the standard gamma values such as 2.2, for storage and transmission. The display computer may use a color management engine to convert to a different color space (such as the older Macintosh’s γ = 1.8 color space) before putting pixel values into its video memory. The monitor may do its own gamma correction to match the CRT gamma to that used by the video system. Coordinating the components via standard interfaces with default standard gamma values makes it possible to get such a system properly configured.

5.1.7 Simple monitor tests

To see whether one’s computer monitor is properly hardware-adjusted and can display shadow detail in sRGB images properly, one should see the left half of the circle in the large black square very faintly, while the right half should be clearly visible. If not, one can adjust the monitor’s contrast and/or brightness setting; this alters the monitor’s perceived gamma. The image is best viewed against a black background. This procedure is not suitable for calibrating or print-proofing a monitor, but it can be useful for making a monitor display sRGB images approximately correctly, on systems in which profiles are not used (for example, the Firefox browser prior to version 3.0 and many others) or in systems that assume untagged source images are in the sRGB colorspace.

On some operating systems running the X Window System, one can set the gamma correction factor (applied to the existing gamma value) by issuing the command xgamma -gamma 0.9 to set a gamma correction factor of 0.9, and xgamma to query the current value of that factor (the default is 1.0). In OS X systems, the gamma and other related screen calibrations are made through the System Preferences. Windows versions before Windows 7 lack a first-party calibration tool.

In the test pattern to the right, the linear intensity of each solid bar is the average of the linear intensities in the surrounding striped dither; therefore, ideally, the solid squares and the dithers should appear equally bright in a properly adjusted sRGB system.

5.1.8 Terminology

The term intensity refers strictly to the amount of light that is emitted per unit of time and per unit of surface, in units of lumens per square metre (lm/m²). Note, however, that in many fields of science this quantity is called luminous exitance, as opposed to luminous intensity, which is a different quantity. These distinctions, however, are largely irrelevant to gamma compression, which is applicable to any sort of normalized linear intensity-like scale. “Luminance” can mean several things even within the context of video and imaging:

• luminance is the photometric brightness of an object, taking into account the wavelength-dependent sensitivity of the human eye (in units of cd/m²);

• relative luminance is the luminance relative to a white level, used in a color-space encoding;

• luma is the encoded video brightness signal, i.e., similar to the signal voltage VS.

One contrasts relative luminance in the sense of color (no gamma compression) with luma in the sense of video (with gamma compression), and denotes relative luminance by Y and luma by Y′, the prime symbol (′) denoting gamma compression.[12] Note that luma is not directly calculated from luminance; it is the (somewhat arbitrary) weighted sum of gamma-compressed RGB components.[1] Likewise, brightness is sometimes applied to various measures, including light levels, though it more properly applies to a subjective visual attribute.

Gamma correction is a type of power-law function whose exponent is the Greek letter gamma (γ). It should not be confused with the mathematical Gamma function. The lower-case gamma, γ, is a parameter of the former; the upper-case letter, Γ, is the name of (and symbol used for) the latter (as in Γ(x)). To use the word “function” in conjunction with gamma correction, one may avoid confusion by saying “generalized power-law function.”

Without context, a value labeled gamma might be either the encoding or the decoding value. Caution must be taken to interpret correctly whether a given value is the one to be applied to compensate, or the one to be compensated for by applying its inverse. In common parlance, on many occasions the decoding value (such as 2.2) is employed as if it were the encoding value, instead of its inverse (1/2.2 in this case), which is the real value that must be applied to encode gamma.
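As an illustration of luma being a weighted sum of gamma-compressed components, the sketch below uses the Rec. 601 weights, one common standard choice; the function name is illustrative:

```python
def luma_rec601(r_prime: float, g_prime: float, b_prime: float) -> float:
    """Luma Y' as a weighted sum of gamma-compressed R'G'B' (Rec. 601 weights)."""
    return 0.299 * r_prime + 0.587 * g_prime + 0.114 * b_prime

# For a gamma-compressed neutral gray, luma equals the common channel value,
# since the weights sum to 1: luma_rec601(0.5, 0.5, 0.5) is ~0.5.
```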

5.1.9 See also

• Brightness

• Color management

• Contrast (vision)

• Image editing § Gamma correction

• Luminance

• Luminance (video)

• Luminance (relative)

• Optical transfer function (OTF)

• Post-production

• Tone mapping

5.1.10 References

[1] Charles A. Poynton (2003). Digital Video and HDTV: Algorithms and Interfaces. Morgan Kaufmann. pp. 260, 630. ISBN 1-55860-792-7.

[2] Charles Poynton (2010). Frequently Questioned Answers about Gamma.

[3] Erik Reinhard; Wolfgang Heidrich; Paul Debevec; Sumanta Pattanaik; Greg Ward; Karol Myszkowski (2010). High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. Morgan Kaufmann. p. 82. ISBN 9780080957111.

[4] McKesson, Jason L. “Chapter 12. Dynamic Range – Linearity and Gamma”. Learning Modern 3D Graphics Programming. Archived from the original on 18 July 2013. Retrieved 11 July 2013.

[5] R.W.G. Hunt, The Reproduction of Colour, 6th Ed, p48

[6] Kodak, “Basic sensitometry and characteristics of film” : “A characteristic curve is like a film’s fingerprint.”

[7] “Kodak Professional Tri-X 320 and 400 Films” (PDF). Eastman Kodak Company. May 2007.

[8] Peter Hodges (2004). An Introduction to Video and Audio Measurement (3rd ed.). Elsevier. p. 174. ISBN 978-0-240-80621-1.

[9] Fritz Ebner and Mark D Fairchild, “Development and testing of a color space (IPT) with improved hue uniformity,” Proceedings of IS&T/SID’s Sixth Color Imaging Conference, p 8-13 (1998).

[10] SetDeviceGammaRamp, the Win32 API to download arbitrary gamma ramps to display hardware

[11] Jonathan Sachs (2003). Color Management. Digital Light & Color.

[12] Engineering Guideline EG 28, “Annotated Glossary of Essential Terms for Electronic Production,” SMPTE, 1993.

5.1.11 External links

General information

• Rehabilitation of Gamma by Charles Poynton

• Frequently Asked Questions about Gamma

• CGSD - Gamma Correction Home Page by Computer Graphics Systems Development Corporation

• Stanford University CS 178 interactive demo about gamma correction.

• A Standard Default Color Space for the Internet - sRGB, defines and explains viewing gamma, camera gamma, CRT gamma, LUT gamma and display gamma

• Alvy Ray Smith (1 September 1995). Gamma Correction (PDF) (Technical Memo 9). Microsoft.

Monitor gamma tools

• The Lagom LCD monitor test pages

• The Gamma adjustment page

• Monitor test pattern for correct gamma correction (by Norman Koren)

• QuickGamma

The effect of gamma correction on an image: The original image was taken to varying powers, showing that powers larger than 1 make the shadows darker, while powers smaller than 1 make dark regions lighter.

Plot of the sRGB standard gamma-expansion nonlinearity in red, and its local gamma value (slope in log–log space) in blue. The local gamma rises from 1 to about 2.2.

RGB values apply to large image; thumbnail is exaggerated for clarity

5.2 Digital compositing

Four images assembled into one final image

Digital compositing is the process of digitally assembling multiple images to make a final image, typically for print, motion pictures or screen display. It is the digital analogue of optical film compositing.

5.2.1 Mathematics

The basic operation used in digital compositing is known as alpha blending, where an opacity value, α, is used to control the proportions of two input pixel values that end up as a single output pixel.

As a simple example, suppose two images of the same size are available and they are to be composited. The input images are referred to as the foreground image and the background image. Each image consists of the same number of pixels. Compositing is performed by mathematically combining information from the corresponding pixels of the two input images and recording the result in a third image, which is called the composited image. Consider three pixels:

• a foreground pixel, f

• a background pixel, b

• a composited pixel, c

and

• α, the opacity value of the foreground pixel (α = 1 for an opaque foreground, α = 0 for a completely transparent foreground).

A monochrome raster image where the pixel values are to be interpreted as alpha values is known as a matte.

Then, considering all three colour channels, and assuming that the colour channels are expressed in a γ=1 colour space (that is to say, the measured values are proportional to light intensity), we have:

cr = α fr + (1 − α) br
cg = α fg + (1 − α) bg
cb = α fb + (1 − α) bb

Note that if the operations are performed in a colour space where γ is not equal to 1, the operation will lead to non-linear effects, which can potentially be seen as aliasing artifacts (or 'jaggies') along sharp edges in the matte. More generally, nonlinear compositing can have effects such as “halos” around composited objects, because the influence of the alpha channel is non-linear. It is possible for a compositing artist to compensate for the effects of compositing in non-linear space.

Performing alpha blending is an expensive operation if performed on an entire image or 3D scene. If this operation has to be done in real time in video games, there is an easy trick to boost performance.

cout = α fin + (1 − α) bin
cout = α fin + bin − α bin
cout = bin + α (fin − bin)

By simply rewriting the mathematical expression one can save 50% of the multiplications required.
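The two algebraically equivalent forms can be compared directly. This per-channel sketch assumes values already in a γ = 1 colour space, and the function names are illustrative:

```python
def blend(f, b, alpha):
    """Alpha-blend one channel in a gamma = 1 colour space: c = a*f + (1 - a)*b."""
    return alpha * f + (1 - alpha) * b

def blend_fast(f, b, alpha):
    """Algebraically identical form with a single multiplication: c = b + a*(f - b)."""
    return b + alpha * (f - b)

# Both forms give the same composited value, e.g. for f = 0.8, b = 0.2, alpha = 0.25
# the result is ~0.35; the rewritten form halves the multiplications needed.
```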

Algebraic properties

When many partially transparent layers need to be composited together, it is worthwhile to consider the algebraic properties of the compositing operators used. Specifically, associativity and commutativity determine when repeated calculation can or cannot be avoided.

Consider the case when we have four layers to blend to produce the final image: F = A*(B*(C*D)), where A, B, C, D are partially transparent image layers and "*" denotes a compositing operator (with the left layer on top of the right layer). If only layer C changes, we should find a way to avoid re-blending all of the layers when computing F. Without any special considerations, four full-image blends would need to occur.

For compositing operators that are commutative, such as additive blending, it is safe to re-order the blending operations. In this case, we might compute T = A*(B*D) only once and simply blend T*C to produce F, a single operation. Unfortunately, most operators are not commutative. However, many are associative, meaning it is safe to re-group operations as F = (A*B)*(C*D), i.e. without changing their left-to-right order. In this case we may compute S := A*B once and save this result. To form F with an associative operator, we need only do two additional compositing operations to integrate the new layer C, by computing F := S*(C*D). Note that this expression indicates compositing C with all of the layers below it in one step and then blending all of the layers on top of it with the previous result to produce the final image in the second step.

If all layers of an image change regularly but a large number of layers still need to be composited (such as in distributed rendering), the commutativity of a compositing operator can still be exploited to speed up computation through parallelism, even when there is no gain from pre-computation. Again, consider the image F = A*(B*(C*D)). Each compositing operation in this expression depends on the next, leading to serial computation.
However, associativity allows us to rewrite F = (A*B)*(C*D), where there are clearly two operations that do not depend on each other and may be executed in parallel. In general, we can build a tree of pair-wise compositing operations with a height that is logarithmic in the number of layers.
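The associativity argument can be checked concretely with the Porter–Duff "over" operator on premultiplied (color, alpha) pairs, which is associative; the layer values below are arbitrary illustrations:

```python
def over(top, bottom):
    """Porter–Duff 'over' on premultiplied (color, alpha) pairs; associative."""
    ct, at = top
    cb, ab = bottom
    return (ct + (1 - at) * cb, at + (1 - at) * ab)

# Arbitrary premultiplied layers, topmost first; D is fully opaque:
A, B, C, D = (0.2, 0.3), (0.1, 0.5), (0.4, 0.6), (0.3, 1.0)

left = over(over(A, B), over(C, D))    # regrouped: (A*B)*(C*D)
right = over(A, over(B, over(C, D)))   # original:  A*(B*(C*D))
# Both groupings give the same composite, so A*B and C*D can be
# cached or computed in parallel.
```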

5.2.2 Software

The most historically significant nonlinear compositing system was the Cineon, which operated in a logarithmic color space that more closely mimics the natural light response of film emulsions (the Cineon system, made by Kodak, is no longer in production). Due to the limitations of processing speed and memory, compositing artists did not usually have the luxury of having the system make intermediate conversions to linear space for the compositing steps. Over time, those limitations have become much less significant, and now most compositing is done in a linear color space, even in cases where the source imagery is in a logarithmic color space.

Compositing often also includes scaling, retouching and colour correction of images.

Node-based and layer-based compositing

There are two radically different digital compositing workflows: node-based compositing and layer-based compositing.

Node-based compositing represents an entire composite as a tree graph, linking media objects and effects in a procedural map, intuitively laying out the progression from source input to final output; this is in fact the way all compositing applications internally handle composites. This type of compositing interface allows great flexibility, including the ability to modify the parameters of an earlier image processing step “in context” (while viewing the final composite). Node-based compositing packages often handle keyframing and time effects poorly, as their workflow does not stem directly from a timeline, as layer-based compositing packages' workflows do. Software which incorporates a node-based interface includes Apple Shake, eyeon Fusion, and The Foundry’s Nuke.

Layer-based compositing represents each media object in a composite as a separate layer within a timeline, each with its own time bounds, effects, and keyframes. All the layers are stacked, one above the next, in any desired order; the bottom layer is usually rendered as a base in the resultant image, with each higher layer being progressively rendered on top of the previously composited layers, moving upward until all layers have been rendered into the final composite. Layer-based compositing is very well suited for rapid 2D and limited 3D effects such as in motion graphics, but becomes awkward for more complex composites entailing a large number of layers.
A partial solution to this is some programs’ ability to view the composite order of elements (such as images, effects, or other attributes) with a visual diagram called a flowchart, and to nest compositions, or “comps,” directly into other compositions, thereby adding complexity to the render order by first compositing the layers in the nested composition, then combining that resultant image with the layered images of the enclosing composition, and so on. An example of this exists in the Adobe program After Effects.
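The node-graph idea described above can be sketched as a toy evaluator; the node representation and names here are hypothetical, not any real package's API:

```python
# Toy node-based composite: each node is (function, list of input nodes).
# Evaluation walks from the sources to the final output, caching results so a
# node computed once can feed several downstream nodes.
def evaluate(node, cache=None):
    cache = {} if cache is None else cache
    if id(node) in cache:
        return cache[id(node)]
    func, inputs = node
    result = func(*(evaluate(n, cache) for n in inputs))
    cache[id(node)] = result
    return result

src_node = (lambda: 0.25, [])                            # a "media object"
bright_node = (lambda v: min(1.0, v * 2.0), [src_node])  # an effect node
mix_node = (lambda a, b: 0.5 * a + 0.5 * b, [src_node, bright_node])
# src_node feeds two downstream nodes but is evaluated only once per call,
# which is what lets an earlier step be changed "in context" and re-rendered.
```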

5.2.3 See also

• Broadcast designer

• Chroma key

• Digital asset

• Digital on-screen graphic (BUG)

• Gamma correction

• Graphics coordinator

• Motion graphic

• Motion graphic design

5.2.4 Further reading

• Mansi Sharma; Santanu Chaudhury; Brejesh Lall (2014). Content-aware seamless stereoscopic 3D compositing. Proceedings of the 2014 Indian Conference on Computer Vision, Graphics and Image Processing, ACM New York, NY, USA. doi:10.1145/2683483.2683555.

• T. Porter and T. Duff, “Compositing Digital Images”, Proceedings of SIGGRAPH '84, 18 (1984).

• The Art and Science of Digital Compositing (ISBN 0-12-133960-2)

5.3 Computational photography

Computational photography or computational imaging refers to digital image capture and processing techniques that use digital computation instead of optical processes. Computational photography can improve the capabilities of a camera, or introduce features that were not possible at all with film-based photography, or reduce the cost or size of camera elements. Examples of computational photography include in-camera computation of digital panoramas,[6] high-dynamic-range images, and light field cameras. Light field cameras use novel optical elements to capture three-dimensional scene information which can then be used to produce 3D images, enhanced depth of field, and selective de-focusing (or “post focus”). Enhanced depth of field reduces the need for mechanical focusing systems. All of these features use computational imaging techniques.

The definition of computational photography has evolved to cover a number of subject areas in computer graphics, computer vision, and applied optics. These areas are given below, organized according to a taxonomy proposed by Shree K. Nayar. Within each area is a list of techniques, and for each technique one or two representative papers or books are cited.

Deliberately omitted from the taxonomy are image processing (see also digital image processing) techniques applied to traditionally captured images in order to produce better images. Examples of such techniques are image scaling, dynamic range compression (i.e. tone mapping), color management, image completion (a.k.a. inpainting or hole filling), image compression, digital watermarking, and artistic image effects. Also omitted are techniques that produce range data, volume data, 3D models, 4D light fields, 4D, 6D, or 8D BRDFs, or other high-dimensional image-based representations. Epsilon photography is a sub-field of computational photography.

Computational (“undigital”) photography provides many new capabilities. This example combines HDR (High Dynamic Range) imaging with panoramics (image-stitching), by optimally combining information from multiple differently exposed pictures of overlapping subject matter.[1][2][3][4][5]

5.3.1 Computational illumination

This is controlling photographic illumination in a structured fashion, then processing the captured images, to create new images. The applications include image-based relighting, image enhancement, image deblurring, geometry/material recovery and so forth. High-dynamic-range imaging uses differently exposed pictures of the same scene to extend dynamic range.[7] Other examples include processing and merging differently illuminated images of the same subject matter (“lightspace”).
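A minimal sketch of merging differently exposed measurements of one scene point into a single linear radiance estimate; the hat-shaped weighting is illustrative, not taken from any particular HDR paper:

```python
def merge_exposures(pixels, times):
    """Estimate linear radiance from one pixel observed at several exposure times.

    pixels: linear sensor values in [0, 1]; times: exposure times in seconds.
    Each observation contributes pixel/time, weighted to favor mid-range values
    that are neither underexposed nor clipped (a simple hat weighting).
    """
    num = den = 0.0
    for p, t in zip(pixels, times):
        w = 1.0 - abs(2.0 * p - 1.0)   # 0 at the extremes, 1 at mid-gray
        num += w * (p / t)
        den += w
    return num / den if den else 0.0

# Two exposures of the same scene point: 1/60 s reads 0.5; 1/30 s reads 0.96
# (nearly clipped, so it gets little weight). The estimate is close to 30
# in these arbitrary radiance units.
```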

5.3.2 Computational optics

This is the capture of optically coded images, followed by computational decoding to produce new images. Coded aperture imaging was mainly applied in astronomy and X-ray imaging to boost image quality. Instead of a single pinhole, a pinhole pattern is applied in imaging, and deconvolution is performed to recover the image. In coded exposure imaging, the on/off state of the shutter is coded to modify the kernel of motion blur.[8] In this way motion deblurring becomes a well-conditioned problem. Similarly, in a lens-based coded aperture, the aperture can be modified by inserting a broadband mask.[9] Thus, out-of-focus deblurring becomes a well-conditioned problem. The coded aperture can also improve the quality of light field acquisition using Hadamard transform optics.

5.3.3 Computational processing

This is processing of non-optically-coded images to produce new images.

5.3.4 Computational sensors

These are detectors that combine sensing and processing, typically in hardware, like the oversampled binary image sensor.

5.3.5 Early work in computer vision

Although computational photography is a currently popular buzzword in computer graphics, many of its techniques first appeared in the computer vision literature, either under other names or within papers aimed at 3D shape analysis.

5.3.6 See also

• Simultaneous localization and mapping

5.3.7 References

[1] Steve Mann. “Compositing Multiple Pictures of the Same Scene”, Proceedings of the 46th Annual Imaging Science & Technology Conference, May 9–14, Cambridge, Massachusetts, 1993

[2] S. Mann, C. Manders, and J. Fung, “The Lightspace Change Constraint Equation (LCCE) with practical application to estimation of the projectivity+gain transformation between multiple pictures of the same subject matter” IEEE International Conference on Acoustics, Speech, and Signal Processing, 6–10 April 2003, pp III - 481-4 vol.3.

[3] “Joint parameter estimation in both domain and range of functions in same orbit of the projective-Wyckoff group”, IEEE International Conference on Image Processing, Vol. 3, pp. 193-196, 16-19 September 1996

[4] Frank M. Candocia: Jointly registering images in domain and range by piecewise linear comparametric analysis. IEEE Transactions on Image Processing 12(4): 409-419 (2003)

[5] Frank M. Candocia: Simultaneous homographic and comparametric alignment of exposure-adjusted pictures of the same scene. IEEE Transactions on Image Processing 12(12): 1485-1494 (2003)

[6] Steve Mann and R. W. Picard. “Virtual bellows: constructing high-quality images from video.”, In Proceedings of the IEEE First International Conference on Image Processing, Austin, Texas, November 13–16, 1994

[7] “On Being 'Undigital' with Digital Cameras: Extending Dynamic Range by Combining Differently Exposed Pictures”, IS&T’s (Society for Imaging Science and Technology’s) 48th annual conference, Cambridge, Massachusetts, May 1995, pages 422-428

[8] Raskar, Ramesh; Agrawal, Amit; Tumblin, Jack (2006). “Coded Exposure Photography: Motion Deblurring using Flut- tered Shutter”. Retrieved November 29, 2010.

[9] Veeraraghavan, Ashok; Raskar, Ramesh; Agrawal, Amit; Mohan, Ankit; Tumblin, Jack (2007). “Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing”. Retrieved November 29, 2010.

5.3.8 External links

• Nayar, Shree K. (2007). “Computational Cameras”, Conference on Machine Vision Applications.

• Computational Photography (Raskar, R., Tumblin, J.), A.K. Peters. In press.

• Special issue on Computational Photography, IEEE Computer, August 2006.

• Camera Culture and Computational Journalism: Capturing and Sharing Visual Experiences, IEEE CG&A Special Issue, Feb 2011.

• Rick Szeliski (2010), Computer Vision: Algorithms and Applications, Springer.

• Computational Photography: Methods and Applications (Ed. Rastislav Lukac), CRC Press, 2010.

5.4 Inpainting

Main article: Image editing

Inpainting is the process of reconstructing lost or deteriorated parts of images and videos. In the museum world, in the case of a valuable painting, this task would be carried out by a skilled art conservator or art restorer. In the digital world, inpainting (also known as image interpolation or video interpolation) refers to the application of sophisticated algorithms to replace lost or corrupted parts of the image data (mainly small regions, or to remove small defects).

Original and restored image.

5.4.1 Applications

There are many objectives and applications of this technique. In photography and cinema, inpainting is used for film restoration: to reverse deterioration (e.g., cracks in photographs or scratches and dust spots in film; see infrared cleaning). It is also used for removing red-eye or the stamped date from photographs, and for removing objects to creative effect. The technique can likewise replace lost blocks in the coding and transmission of images, for example in streaming video, and remove logos in videos.

5.4.2 Methods

In painting

Inpainting is rooted in the restoration of images. Traditionally, inpainting has been done by professional restorers. The underlying methodology of their work is as follows:

• The global picture determines how to fill in the gap. The purpose of inpainting is to restore the unity of the work.

• The structure of the gap surroundings is supposed to be continued into the gap. Contour lines that arrive at the gap boundary are prolonged into the gap.

• The different regions inside a gap, as defined by the contour lines, are filled with colors matching those of their boundaries.

• The small details are painted, i.e. “texture” is added.

Computerized

With the widespread use of digital cameras and the digitization of old photos, inpainting has become an automatic process performed on digital images. Beyond scratch removal, inpainting techniques are also applied to object removal, text removal and other automatic modifications of images and videos. They can furthermore be found in applications such as image compression and super resolution. Three main groups of 2D image inpainting algorithms appear in the literature: structural inpainting, texture inpainting, and a combination of these two techniques. All of these methods have one thing in common: they use information from the known, undamaged image areas to fill the gap.

Structural inpainting Structural inpainting uses geometric approaches for filling in the missing information in the region which should be inpainted. These algorithms focus on the consistency of the geometric structure.

Textural inpainting Structural inpainting methods have both advantages and disadvantages. Their main limitation is that they are unable to restore texture. Texture has a repetitive pattern, which means that a missing portion cannot be restored by continuing the level lines into the gap.

Combined Structural and Textural inpainting Combined structural and textural inpainting approaches try to perform texture and structure filling in regions of missing image information simultaneously. Most parts of an image consist of texture and structure, and the boundaries between image regions accumulate structural information, a complex phenomenon that results when different textures are blended together. That is why state-of-the-art inpainting methods attempt to combine structural and textural inpainting. A more traditional method is to use differential equations (such as Laplace's equation) with Dirichlet boundary conditions for continuity (a seamless fit). This works well if missing information lies within the homogeneous portion of an object area.[1] Other methods follow isophote directions (in an image, a contour of equal luminance) to do the inpainting.[2] Model-based inpainting follows the Bayesian approach, in which missing information is best fitted or estimated from the combination of models of the underlying images and the image data actually being observed. In deterministic language, this has led to various variational inpainting models.[3] Manual computer methods include using a clone tool or healing tool to copy existing parts of the image to restore a damaged texture. Texture synthesis may also be used.[4] Exemplar-based image inpainting attempts to automate the clone tool process. It fills “holes” in the image by searching for similar patches in a nearby source region of the image, and copying the pixels from the most similar patch into the hole. By performing the fill at the patch level as opposed to the pixel level, the algorithm reduces blurring artifacts caused by prior techniques.[5][6]
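The Laplace-equation approach mentioned above can be sketched in a few lines: known pixels act as Dirichlet boundary values, and the hole relaxes by Jacobi iteration until the filled region is (discretely) harmonic. A minimal sketch; the image, mask and iteration count are illustrative:

```python
import numpy as np

def laplace_inpaint(img, mask, iters=500):
    """Fill the masked pixels by solving Laplace's equation with
    Dirichlet boundary conditions: known pixels stay fixed, and the
    hole relaxes to the average of its neighbours (Jacobi iteration)."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()  # crude initial guess for the hole
    for _ in range(iters):
        padded = np.pad(out, 1, mode='edge')
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]  # update only the unknown pixels
    return out

# toy example: a smooth horizontal ramp with a square hole
img = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
mask = np.zeros_like(img, dtype=bool)
mask[6:10, 6:10] = True
damaged = img.copy()
damaged[mask] = 0.0
restored = laplace_inpaint(damaged, mask)
print(np.abs(restored - img)[mask].max())  # ~0: a linear ramp is harmonic
```

As the text notes, this works well in homogeneous regions (here, a linear ramp, which is exactly harmonic) but cannot recreate texture.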

5.4.3 See also

• Infrared cleaning

• Noise reduction

• Seam carving

• Image reconstruction

5.4.4 References

[1] Peterson, Ivars (11 May 2002). “Filling in Blanks”. Science News. Society for Science & the Public. 161 (19): 299–300. doi:10.2307/4013521. JSTOR 4013521. Retrieved 2008-05-11.

[2] M. Bertalmío, G. Sapiro, V. Caselles and C. Ballester., “Image Inpainting”, Proceedings of SIGGRAPH 2000, New Orleans, USA, July 2000.

[3] T. F. Chan and J. Shen, “Mathematical Models for Local Nontexture Inpainting”, SIAM J. Applied Math., 62(3), 2001, 1019-1043.

[4] Image Replacement through Texture Synthesis, Homan Igehy and Lucas Pereira, Stanford University, Appears in the Proceedings of the 1997 IEEE International Conference on Image Processing

[5] Object Removal by Exemplar-Based Inpainting, Criminisi, A, Perez, P., & Toyama, K., Appears in the Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition

[6] Inpainting Strategies for Reconstruction of Missing Data in VHR Images, L. Lorenzi, F. Melgani and G. Mercier, IEEE Geoscience and Remote Sensing Letters, Sept. 2011

5.4.5 External links

• Interactive Point-and-Click Segmentation for Object Removal in Digital Images, ICCV-HCI 2005

• Inpainting and the Fundamental Problem of Image Processing, SIAM

• Image Inpainting, by the Image Processing Group at UCLA.

• Image and Video Inpainting (webarchive) by Guillermo Sapiro

• Video Inpainting by Kedar Patwardhan.

• Mobile application which implements image inpainting by Bozhidar Dimitrov.

• Program which realizes image inpainting by SoftOrbits.

• Resynthesizer. An open source Gimp plug-in for inpainting and texture expanding.

• Photo restoration software which implements image inpainting by Maxim Gapchenko

• Mathematica Inpaint function.

• Online Photo restoration software which implements image inpainting by Teorex

• Photo touch up software for removing objects from photos called Photoupz

• Inpainting of images on meshes or point clouds, TIP 2014

• Image completion using planar structure guidance, SIGGRAPH 2014

Chapter 6

Day 6

6.1 Image processor

Nikon EXPEED, a system on a chip including an image processor, video processor, digital signal processor (DSP) and a 32-bit microcontroller controlling the chip


An image processor, also called an image processing engine or image signal processor (ISP), is a specialized digital signal processor (DSP) used for image processing in digital cameras, mobile phones and other devices.[1][2] Image processors often employ parallel computing, with SIMD or MIMD technologies, to increase speed and efficiency. The digital image processing engine can perform a range of tasks. To increase system integration on embedded devices, it is often a system on a chip with a multi-core processor architecture.

6.1.1 Function

Bayer transformation

The photodiodes employed in an image sensor are color-blind by nature: they can only record shades of grey. To get color into the picture, they are covered with different color filters: red, green and blue (RGB), arranged according to the pattern designated by the Bayer filter, named after its inventor. As each photodiode records the color information for exactly one pixel of the image, without an image processor there would be a green pixel next to each red and blue pixel. (Actually, with most sensors there are two green diodes for each red and blue one.) The image processor must therefore reconstruct full color information for every pixel from its neighbors. This process is quite complex and involves a number of different operations. Its quality depends largely on the effectiveness of the algorithms applied to the raw data coming from the sensor. The mathematically manipulated data becomes the recorded photo file.

Demosaicing

As stated above, the image processor evaluates the color and brightness data of a given pixel, compares them with the data from neighboring pixels, and then uses a demosaicing algorithm to produce an appropriate color and brightness value for the pixel. The image processor also assesses the whole picture to estimate the correct distribution of contrast. By adjusting the gamma value (heightening or lowering the contrast range of an image’s mid-tones), subtle tonal gradations, such as in human skin or the blue of the sky, become much more realistic.
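A minimal sketch of this idea for the green channel of an RGGB Bayer mosaic, using plain bilinear interpolation (real demosaicing algorithms are considerably more sophisticated, e.g. edge-directed):

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGB image through an RGGB Bayer pattern: each
    photosite keeps exactly one colour component."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mosaic

def demosaic_green(mosaic):
    """Bilinear interpolation of the green plane: at red and blue
    sites, average the four green neighbours."""
    padded = np.pad(mosaic, 1, mode='reflect')
    avg4 = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    green = mosaic.copy()
    green[0::2, 0::2] = avg4[0::2, 0::2]  # fill in at red sites
    green[1::2, 1::2] = avg4[1::2, 1::2]  # fill in at blue sites
    return green

# a uniformly green scene survives the mosaic/demosaic round trip
flat_green = np.zeros((8, 8, 3))
flat_green[..., 1] = 0.5
print(np.allclose(demosaic_green(bayer_mosaic(flat_green)), 0.5))  # True
```

The same averaging is repeated (with diagonal neighbours) for the red and blue planes; the simple averaging is exactly why interpolation softens the image, as the sharpening section below notes.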

Noise reduction

Noise is a phenomenon found in any electronic circuitry. In digital photography its effect is often visible as random spots of obviously wrong colour in an otherwise smoothly-coloured area. Noise increases with temperature and exposure times. When higher ISO settings are chosen the electronic signal in the image sensor is amplified, which at the same time increases the noise level, leading to a lower signal-to-noise ratio. The image processor attempts to separate the noise from the image information and to remove it. This can be quite a challenge, as the image may contain areas with fine textures which, if treated as noise, may lose some of their definition.

Image sharpening

As the color and brightness values for each pixel are interpolated, some image softening is applied to even out any fuzziness that has occurred. To preserve the impression of depth, clarity and fine detail, the image processor must sharpen edges and contours. It must therefore detect edges correctly and reproduce them smoothly and without over-sharpening.

6.1.2 Models

Image processor makers use industry standard products, application-specific standard products (ASSP) or even application-specific integrated circuits (ASIC), sold under trade names: Canon’s is called DIGIC, Nikon’s EXPEED, Olympus’ TruePic, Panasonic’s Venus Engine and Sony’s BIONZ. Some are known to be based on the Fujitsu Milbeaut, Texas Instruments OMAP, Panasonic MN103, Zoran COACH or Altek Sunny image/video processors. ARM architecture processors with NEON SIMD Media Processing Engines (MPE) are often used in mobile phones.

Processor brand names

• Canon - DIGIC (based on Texas Instruments OMAP)[3]
• Casio - EXILIM engine
• - EDiART
• Fujifilm - EXR III or X Processor Pro
• Konica / Minolta - SUPHEED with CxProcess
• Leica - MAESTRO (based on Fujitsu Milbeaut)[4]
• Nikon - EXPEED (based on Fujitsu Milbeaut)[5]
• Olympus - TruePic (based on Panasonic MN103/MN103S)
• Panasonic - Venus engine (based on Panasonic MN103/MN103S)
• Pentax - PRIME (Pentax Real IMage Engine) (newer variants based on Fujitsu Milbeaut)
• Ricoh - GR engine (GR digital), Smooth Imaging Engine
• Samsung - DRIMe (based on Samsung Exynos)
• Sanyo - Platinum engine
• Sigma - True (newer variants based on Fujitsu Milbeaut)
• Sharp - ProPix
• Sony - BIONZ
• HTC - ImageSense

Speed

With the ever-higher pixel count in image sensors, the image processor’s speed becomes more critical: photographers don't want to wait for the camera’s image processor to complete its job before they can carry on shooting - they don't even want to notice some processing is going on inside the camera. Therefore, image processors must be optimised to cope with more data in the same or even a shorter period of time.

6.1.3 References

[1] DIGITAL SIGNAL & IMAGE PROCESSING

[2] Fundamentals of digital image processing

[3] Inside the Canon Rebel T4i DSLR Chipworks

[4] Fujitsu Microelectronics-Leica’s Image Processing System Solution For High-End DSLR

[5] Milbeaut and EXPEED byThom

6.1.4 See also

• Image processing

• Digital image processing

• Digital image editing

• Demosaicing

6.2 Noise reduction

For the reduction of a sound’s volume, see soundproofing. For the noise reduction of machinery and products, see noise control.

Noise reduction is the process of removing noise from a signal. All recording devices, both analog and digital, have traits that make them susceptible to noise. Noise can be random or white noise with no coherence, or coherent noise introduced by the device’s mechanism or processing algorithms. In electronic recording devices, a major form of noise is hiss caused by random electrons that, heavily influenced by heat, stray from their designated path. These stray electrons influence the voltage of the output signal and thus create detectable noise. In the case of photographic film and magnetic tape, noise (both visible and audible) is introduced due to the grain structure of the medium. In photographic film, the size of the grains in the film determines the film’s sensitivity, more sensitive film having larger-sized grains. In magnetic tape, the larger the grains of the magnetic particles (usually ferric oxide or magnetite), the more prone the medium is to noise. To compensate for this, larger areas of film or magnetic tape may be used to lower the noise to an acceptable level. Many noise reduction algorithms damage the signal to some degree. The local signal-and-noise orthogonalization algorithm[1] can be used to avoid such damage.

6.2.1 In audio

Analog tape recordings may exhibit a type of noise known as tape hiss. This is related to the particle size and texture of the magnetic emulsion that is sprayed on the recording media, and also to the relative tape velocity across the tape heads. Four types of noise reduction exist: single-ended pre-recording, single-ended hiss reduction, single-ended surface noise reduction, and dual-ended systems. Single-ended pre-recording systems (such as Dolby HX Pro) work to affect the recording medium at the time of recording. Single-ended hiss reduction systems (such as DNL or DNR) work to reduce noise as it occurs, including both before and after the recording process as well as for live broadcast applications. Single-ended surface noise reduction (such as CEDAR and the earlier SAE 5000A and Burwen TNE 7000) is applied to the playback of phonograph records to attenuate the sound of scratches, pops, and surface non-linearities. Dual-ended systems (such as Dolby B, Dolby C, Dolby S, dbx Type I and dbx Type II, High Com and High Com II, as well as Toshiba's adres and JVC's ANRS) have a pre-emphasis process applied during recording and then a de-emphasis process applied at playback.

Dolby and dbx noise reduction system

While there are dozens of different kinds of noise reduction, the first widely used audio noise reduction technique was developed by Ray Dolby in 1966. Intended for professional use, Dolby Type A was an encode/decode system in which the amplitude of frequencies in four bands was increased during recording (encoding), then decreased proportionately during playback (decoding). The Dolby B system (developed in conjunction with Henry Kloss) was a single-band system designed for consumer products. In particular, when recording quiet parts of an audio signal, the frequencies above 1 kHz would be boosted. This had the effect of increasing the signal-to-noise ratio on tape by up to 10 dB depending on the initial signal volume. When it was played back, the decoder reversed the process, in effect reducing the noise level by up to 10 dB. The Dolby B system, while not as effective as Dolby A, had the advantage of remaining listenable on playback systems without a decoder. dbx was the competing analog noise reduction system developed by David E. Blackmer, founder of dbx laboratories.[2] It used a root-mean-square (RMS) encode/decode algorithm with the noise-prone high frequencies boosted, and the entire signal fed through a 2:1 compander. dbx operated across the entire audible bandwidth and, unlike Dolby B, was unusable as an open-ended system. However, it could achieve up to 30 dB of noise reduction. Since analog video recordings use frequency modulation for the luminance part (composite video signal in direct colour systems), which keeps the tape at saturation level, audio-style noise reduction is unnecessary.

Dynamic noise limiter and dynamic noise reduction

Dynamic noise limiter (DNL) is an audio noise reduction system originally introduced by Philips in 1971 for use on cassette decks. Its circuitry is also based on a single chip.[3] It was further developed into dynamic noise reduction (DNR) by National Semiconductor to reduce noise levels on long-distance telephony.[4] First sold in 1981, DNR is frequently confused with the far more common Dolby noise reduction system.[5] However, unlike Dolby and dbx Type I & Type II noise reduction systems, DNL and DNR are playback-only signal processing systems that do not require the source material to first be encoded, and they can be used together with other forms of noise reduction.[6] Because DNL and DNR are non-complementary, meaning they do not require encoded source material, they can be used to remove background noise from any audio signal, including magnetic tape recordings and FM radio broadcasts, reducing noise by as much as 10 dB.[7] They can be used in conjunction with other noise reduction systems, provided that they are used prior to applying DNR to prevent DNR from causing the other noise reduction system to mistrack. The Telefunken High Com U401BR could be utilized to work as a Dolby B–compatible DNR-style expander as well.[8] One of DNR’s first widespread applications was in the GM Delco car stereo systems in U.S. GM cars introduced in 1984.[9] It was also used in factory car stereos in Jeep vehicles in the 1980s, such as the Cherokee XJ. Today, DNR, DNL, and similar systems are most commonly encountered as a noise reduction system in microphone systems.[10]

Other approaches

A second class of algorithms works in the time-frequency domain using some linear or non-linear filters that have local characteristics and are often called time-frequency filters.[11] Noise can therefore also be removed by use of spectral editing tools, which work in this time-frequency domain, allowing local modifications without affecting nearby signal energy. This can be done manually with the mouse, using a pen that has a defined time-frequency shape, much as one would draw pictures in a paint program. Another way is to define a dynamic threshold for filtering noise, derived from the local signal, again with respect to a local time-frequency region. Everything below the threshold will be filtered; everything above the threshold, such as partials of a voice or “wanted noise”, will be untouched. The region is typically defined by the location of the signal's instantaneous frequency,[12] as most of the signal energy to be preserved is concentrated about it. Modern digital sound (and picture) recordings no longer need to worry about tape hiss, so analog-style noise reduction systems are not necessary. However, an interesting twist is that dither systems actually add noise to a signal to improve its quality.

Software programs

Most general purpose voice editing software will have one or more noise reduction functions (Audacity, WavePad, etc.). Special purpose noise reduction software programs include Gnome Wave Cleaner, Sony Creative Noise Reduction, SoliCall Pro, Voxengo Redunoise and X-OOM Music Clean.

6.2.2 In images

Images taken with both digital cameras and conventional film cameras will pick up noise from a variety of sources. Further use of these images will often require that the noise be (partially) removed – for aesthetic purposes as in artistic work or marketing, or for practical purposes such as computer vision.

Types

In salt and pepper noise (sparse light and dark disturbances), pixels in the image are very different in color or intensity from their surrounding pixels; the defining characteristic is that the value of a noisy pixel bears no relation to the color of surrounding pixels. Generally this type of noise will only affect a small number of image pixels. When viewed, the image contains dark and white dots, hence the term salt and pepper noise. Typical sources include flecks of dust inside the camera and overheated or faulty CCD elements.

In Gaussian noise, each pixel in the image will be changed from its original value by a (usually) small amount. A histogram, a plot of the amount of distortion of a pixel value against the frequency with which it occurs, shows a normal distribution of noise. While other distributions are possible, the Gaussian (normal) distribution is usually a good model, due to the central limit theorem that says that the sum of different noises tends to approach a Gaussian distribution. In either case, the noise at different pixels can be either correlated or uncorrelated; in many cases, noise values at different pixels are modeled as being independent and identically distributed, and hence uncorrelated.

Removal

Tradeoffs In selecting a noise reduction algorithm, one must weigh several factors:

• the available computer power and time available: a digital camera must apply noise reduction in a fraction of a second using a tiny onboard CPU, while a desktop computer has much more power and time

• whether sacrificing some real detail is acceptable if it allows more noise to be removed (how aggressively to decide whether variations in the image are noise or not)

• the characteristics of the noise and the detail in the image, to better make those decisions

Chroma and luminance noise separation In real-world photographs, the highest spatial-frequency detail consists mostly of variations in brightness (“luminance detail”) rather than variations in hue (“chroma detail”). Since any noise reduction algorithm should attempt to remove noise without sacrificing real detail from the scene photographed, one risks a greater loss of detail from luminance noise reduction than chroma noise reduction simply because most scenes have little high frequency chroma detail to begin with. In addition, most people find chroma noise in images more objectionable than luminance noise; the colored blobs are considered “digital-looking” and unnatural, compared to the grainy appearance of luminance noise that some compare to film grain. For these two reasons, most photographic noise reduction algorithms split the image detail into chroma and luminance components and apply more noise reduction to the former. Most dedicated noise-reduction computer software allows the user to control chroma and luminance noise reduction separately.
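The split can be sketched with Rec. 601-style luma weights; this simplified, lossless YCbCr-like transform is illustrative, not any particular product's pipeline. Once split, the chroma planes can be filtered more aggressively than luma before recombining:

```python
import numpy as np

def split_luma_chroma(rgb):
    """Rough luma/chroma split using Rec. 601 luma weights
    (illustrative; not any camera's actual colour pipeline)."""
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cr = rgb[..., 0] - y  # red-difference chroma
    cb = rgb[..., 2] - y  # blue-difference chroma
    return y, cb, cr

def merge_luma_chroma(y, cb, cr):
    """Exact inverse of split_luma_chroma."""
    r = cr + y
    b = cb + y
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)

# the round trip is lossless, so all the denoising freedom lies in
# how strongly each plane is filtered before recombining
rgb = np.random.default_rng(0).random((4, 4, 3))
y, cb, cr = split_luma_chroma(rgb)
print(np.abs(merge_luma_chroma(y, cb, cr) - rgb).max())  # ~0
```

Because the transform itself loses nothing, smoothing cb and cr heavily while barely touching y removes the objectionable colored blobs while keeping luminance detail, as described above.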

Linear smoothing filters One method to remove noise is by convolving the original image with a mask that represents a low-pass filter or smoothing operation. For example, the Gaussian mask comprises elements determined by a Gaussian function. This convolution brings the value of each pixel into closer harmony with the values of its neighbors. In general, a smoothing filter sets each pixel to the average value, or a weighted average, of itself and its nearby neighbors; the Gaussian filter is just one possible set of weights. Smoothing filters tend to blur an image, because pixel intensity values that are significantly higher or lower than the surrounding neighborhood would “smear” across the area. Because of this blurring, linear filters are seldom used in practice for noise reduction; they are, however, often used as the basis for nonlinear noise reduction filters.
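A minimal sketch of such a smoothing filter: build a normalized Gaussian mask and convolve it with the image (explicit loops for clarity; real code would use a separable or FFT-based filter):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian mask (weights sum to 1)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def smooth(img, kernel):
    """Direct 2-D convolution with edge replication."""
    ks = kernel.shape[0]
    pad = ks // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + ks, j:j + ks] * kernel).sum()
    return out
```

Because the weights sum to one, a constant image passes through unchanged, while the variance of additive noise drops; that averaging is also exactly the mechanism behind the blurring described above.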

Anisotropic diffusion Main article: Anisotropic diffusion

Another method for removing noise is to evolve the image under a smoothing partial differential equation similar to the heat equation, which is called anisotropic diffusion. With a spatially constant diffusion coefficient, this is equivalent to the heat equation or linear Gaussian filtering, but with a diffusion coefficient designed to detect edges, the noise can be removed without blurring the edges of the image.
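A minimal sketch of the classic Perona–Malik discretization of this idea; the parameters and the wrap-around border handling are illustrative:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Perona-Malik anisotropic diffusion (a minimal sketch with
    wrap-around borders). The conductance g(d) falls toward zero
    at strong gradients, so edges diffuse less than flat regions."""
    u = img.astype(float)
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u  # differences to the
        ds = np.roll(u, 1, axis=0) - u   # four neighbours
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

With kappa small relative to edge contrast, edges are preserved; as kappa grows large, g approaches 1 everywhere and the scheme reduces to the linear heat-equation smoothing described in the text.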

Non-local means Main article: Non-local means

Another approach for removing noise is based on non-local averaging of all the pixels in an image. In particular, the amount of weighting for a pixel is based on the degree of similarity between a small patch centered on that pixel and the small patch centered on the pixel being de-noised.
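The principle can be sketched naively; this quadratic-cost version compares every patch with every other and is only usable on tiny images:

```python
import numpy as np

def nl_means(img, patch=3, h=0.1):
    """Naive non-local means: every pixel becomes a weighted average
    of all pixels, weighted by the similarity of the patches around
    them. O(n^2) in the pixel count - a sketch for tiny images only."""
    pad = patch // 2
    p = np.pad(img, pad, mode='edge')
    H, W = img.shape
    flat = np.array([p[i:i + patch, j:j + patch].ravel()
                     for i in range(H) for j in range(W)])
    d2 = ((flat[:, None, :] - flat[None, :, :]) ** 2).mean(axis=2)
    w = np.exp(-d2 / h ** 2)  # patch-similarity weights
    return ((w @ img.ravel()) / w.sum(axis=1)).reshape(H, W)
```

Practical implementations restrict the search to a window around each pixel and use optimized patch comparison, but the weighting principle is the same.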

Nonlinear filters A median filter is an example of a non-linear filter and, if properly designed, is very good at preserving image detail. To run a median filter:

1. consider each pixel in the image

2. sort the neighbouring pixels into order based upon their intensities

3. replace the original value of the pixel with the median value from the list
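The three steps above can be sketched directly:

```python
import numpy as np

def median_filter(img, size=3):
    """Median filter implementing the three steps above."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + size, j:j + size]  # 1. the neighbourhood
            out[i, j] = np.median(window)            # 2.-3. sort, pick median
    return out

# isolated salt-and-pepper pixels are replaced by the local median
img = np.full((8, 8), 0.5)
img[2, 2] = 1.0   # "salt"
img[5, 6] = 0.0   # "pepper"
print(np.abs(median_filter(img) - 0.5).max())  # → 0.0
```

Because each outlier is a minority within its 3×3 window, the median ignores it completely, while a mean-based smoothing filter would smear it into its neighbours.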

A median filter is a rank-selection (RS) filter, a particularly harsh member of the family of rank-conditioned rank-selection (RCRS) filters;[13] a much milder member of that family, for example one that selects the closest of the neighboring values when a pixel’s value is external in its neighborhood, and leaves it unchanged otherwise, is sometimes preferred, especially in photographic applications. Median and other RCRS filters are good at removing salt and pepper noise from an image, and also cause relatively little blurring of edges, and hence are often used in computer vision applications.

Wavelet transform The main aim of an image denoising algorithm is to achieve both noise reduction and feature preservation. In this context, wavelet-based methods are of particular interest. In the wavelet domain, the noise is uniformly spread throughout coefficients while most of the image information is concentrated in a few large ones.[14] Therefore, the first wavelet-based denoising methods were based on thresholding of detail subbands coefficients.[15] However, most of the wavelet thresholding methods suffer from the drawback that the chosen threshold may not match the specific distribution of signal and noise components at different scales and orientations. To address these disadvantages, non-linear estimators based on Bayesian theory have been developed. In the Bayesian framework, it has been recognized that a successful denoising algorithm can achieve both noise reduction and feature preservation if it employs an accurate statistical description of the signal and noise components.[14]
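A single level of Haar soft-thresholding illustrates the idea (a sketch; practical denoisers use multiple decomposition levels, better wavelets and data-driven thresholds):

```python
import numpy as np

def haar_denoise(x, thresh):
    """One level of Haar wavelet soft-thresholding. Noise spreads
    evenly over the detail coefficients while the signal concentrates
    in a few large ones, so shrinking small details removes mostly noise."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x, dtype=float)
    y[0::2] = (a + d) / np.sqrt(2.0)  # inverse transform
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y
```

With thresh=0 the transform reconstructs the input exactly; with a threshold of a few noise standard deviations, the mostly-noise detail coefficients of a piecewise-smooth signal are suppressed while large, signal-bearing coefficients survive.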

Statistical methods Statistical methods for image denoising exist as well, though they are infrequently used as they are computationally demanding. For Gaussian noise, one can model the pixels in a greyscale image as auto-normally distributed, where each pixel’s “true” greyscale value is normally distributed with mean equal to the average greyscale value of its neighboring pixels and a given variance.

Let δi denote the pixels adjacent to the i-th pixel. Then the conditional distribution of the greyscale intensity (on a [0, 1] scale) at the i-th node is

P(x(i) = c | x(j) ∀j ∈ δi) ∝ exp( −(β/(2λ)) Σ_{j∈δi} (c − x(j))² )

for a chosen parameter β ≥ 0 and variance λ. One method of denoising that uses the auto-normal model uses the image data as a Bayesian prior and the auto-normal density as a likelihood function, with the resulting posterior distribution offering a mean or mode as a denoised image.[16]

Software programs

Most general purpose image and photo editing software will have one or more noise reduction functions (median, blur, despeckle, etc.). Special purpose noise reduction software programs include Neat Image, Noiseless, Noiseware, Noise Ninja, G'MIC (through the -denoise command), and pnmnlfilt (nonlinear filter) found in the open-source Netpbm tools. General purpose image and photo editing software with noise reduction functions includes Adobe Photoshop, GIMP, PhotoImpact and Paint Shop Pro, among others.[17]

6.2.3 See also

General noise issues

• Digital image processing

• Noise print

• Signal (electronics)

• Signal processing

• Signal subspace

Audio

• Architectural acoustics

• Noise-canceling headphones

• Sound masking

Video

• Dark frame

Similar problems

• Deblurring

6.2.4 References

[1] Chen, Yangkang; Fomel, Sergey (November–December 2015). “Random noise attenuation using local signal-and-noise orthogonalization”. Geophysics. 80 (6): WD1-WD9. doi:10.1190/GEO2014-0227.

[2] Hoffman, F. (2004), Encyclopedia of Recorded Sound, 1 (revised ed.), Taylor & Francis

[3] “Noise Reduction”. Audiotools.com. 2013-11-10. https://web.archive.org/web/20081105073059/http://freespace.virgin.net:80/ljmayes.mal/comp/philips.htm. Archived from the original on November 5, 2008. Retrieved January 14, 2009.

[4] “Dynamic Noise Reduction”. ComPol Inc.

[5] https://web.archive.org/web/20070927190856/http://www.national.com/company/pressroom/history80.html. Archived from the original on September 27, 2007. Retrieved January 14, 2009.

[6] https://web.archive.org/web/20081220145857/http://triadspeakers.com/education_avterms.html. Archived from the original on December 20, 2008. Retrieved January 14, 2009.

[7] https://web.archive.org/web/20081220022234/http://www.national.com/pf/LM/LM1894.html. Archived from the original on December 20, 2008. Retrieved January 14, 2009.

[8] “HIGH COM - The HIGH COM broadband compander utilizing the integrated circuit U 401 BR” (PDF). AEG-Telefunken.

[9] Ed Gunyo. “Evolution of the Riviera - 1983 the 20th Anniversary”. Riviera Owners Association. Originally published in The Riview Vol. 21, No. 6 September/October 2005.

[10] http://www.hellodirect.com/catalog/Product.jhtml?PRODID=11127&CATID=15295

[11] B. Boashash, editor, Time-Frequency Signal Analysis and Processing – A Comprehensive Reference, Elsevier Science, Ox- ford, 2003; ISBN 0-08-044335-4

[12] B. Boashash, “Estimating and Interpreting the Instantaneous Frequency of a Signal-Part I: Fundamentals”, Proceedings of the IEEE, Vol. 80, No. 4, pp. 519–538, April 1992, doi:10.1109/5.135376

[13] Puyin Liu and Hongxing Li (2004). Fuzzy Neural Network Theory and Application. World Scientific. ISBN 981-238-786-2.

[14] M. Forouzanfar, H. Abrishami-Moghaddam, and S. Ghadimi, “Locally adaptive multiscale Bayesian method for image denoising based on bivariate normal inverse Gaussian distributions”, International Journal of Wavelets, Multiresolution and Information Processing, vol. 6, Issue 4, pp. 653–64, July 2008.

[15] Mallat, S.: A Wavelet Tour of Signal Processing. Academic Press, London (1998)

[16] Julian Besag (1986). “On the Statistical Analysis of Dirty Pictures”. Journal of the Royal Statistical Society. Series B (Methodological). 48 (3): 259–302. JSTOR 2345426.

[17] jo (2012-12-11). “profiling sensor and photon noise .. and how to get rid of it.”. darktable.

6.2.5 External links

• Recent trends in denoising tutorial

• Noise Reduction in photography

• Matlab software and Photoshop plug-in for image denoising (Pointwise SA-DCT filter)

• Matlab software for image and video denoising (Non-local transform-domain filter)

• Non-local image denoising, with code and online demonstration

6.3 Iterative reconstruction

Example showing differences between filtered backprojection (right half) and iterative reconstruction method (left half)

Iterative reconstruction refers to iterative algorithms used to reconstruct 2D and 3D images in certain imaging techniques. For example, in computed tomography an image must be reconstructed from projections of an object. Here, iterative reconstruction techniques are usually a better, but computationally more expensive, alternative to the common filtered back projection (FBP) method, which directly calculates the image in a single reconstruction step.[1] Recent research has shown that extremely fast computation and massive parallelism are possible for iterative reconstruction, which makes it practical for commercialization.[2]

6.3.1 Basic concepts

The reconstruction of an image from the acquired data is an inverse problem. Often, it is not possible to solve the inverse problem exactly and directly. In this case, a direct algorithm has to approximate the solution, which might cause visible reconstruction artifacts in the image. Iterative algorithms approach the correct solution using multiple iteration steps, which allows a better reconstruction to be obtained at the cost of higher computation time. In computed tomography, this approach was the one first used by Hounsfield. There is a large variety of algorithms, but each starts with an assumed image, computes projections from the image, compares them with the original projection data, and updates the image based upon the difference between the calculated and the actual projections. There are typically five components to iterative image reconstruction algorithms:[3]

1. An object model that expresses the unknown continuous-space function f(r) that is to be reconstructed in terms of a finite series with unknown coefficients that must be estimated from the data.

2. A system model that relates the unknown object to the “ideal” measurements that would be recorded in the absence of measurement noise. Often this is a linear model of the form Ax + ϵ, where ϵ represents the noise.

3. A statistical model that describes how the noisy measurements vary around their ideal values. Often Gaussian noise or Poisson statistics are assumed. Because Poisson statistics are closer to reality, they are more widely used.

4. A cost function that is to be minimized to estimate the image coefficient vector. Often this cost function includes some form of regularization. Sometimes the regularization is based on Markov random fields.

5. An algorithm, usually iterative, for minimizing the cost function, including some initial estimate of the image and some stopping criterion for terminating the iterations.
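The five components can be illustrated with a minimal sketch, assuming a linear system model with Gaussian noise, a Tikhonov-regularized least-squares cost, and plain gradient descent as the iterative algorithm. All matrix sizes, coefficients, and names here are invented for illustration; real reconstruction systems are far larger and use more sophisticated optimizers.

```python
import numpy as np

# 1. object model: a finite coefficient vector x (6 unknowns)
# 2. system model: a linear operator A mapping object to measurements
# 3. statistical model: additive Gaussian noise on the measurements
rng = np.random.default_rng(1)
A = rng.normal(size=(12, 6))                       # system matrix (invented)
x_true = rng.normal(size=6)                        # unknown coefficients
y = A @ x_true + 0.01 * rng.normal(size=12)        # noisy measurements

# 4. cost function: data fit plus Tikhonov regularization
lam = 0.1
def cost(x):
    return np.sum((A @ x - y) ** 2) + lam * np.sum(x ** 2)

# 5. iterative algorithm: gradient descent from a zero initial estimate,
#    with a step size below 1/L for the gradient's Lipschitz constant
#    L = 2(||A||^2 + lam)
x = np.zeros(6)
step = 0.5 / (np.linalg.norm(A, 2) ** 2 + lam)
for _ in range(500):
    grad = 2 * A.T @ (A @ x - y) + 2 * lam * x
    x -= step * grad
```

A fixed iteration count stands in for the stopping criterion of component 5; a practical implementation would instead monitor the change in the cost between iterations.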

6.3.2 Advantages

The advantages of the iterative approach include improved insensitivity to noise and the capability of reconstructing an optimal image in the case of incomplete data. The method has been applied in emission tomography modalities like SPECT and PET, where there is significant attenuation along ray paths and noise statistics are relatively poor.

Statistical, likelihood-based approaches: Statistical, likelihood-based iterative expectation-maximization algorithms[4][5] are now the preferred method of reconstruction. Such algorithms compute estimates of the likely distribution of annihilation events that led to the measured data, based on statistical principles, often providing better noise profiles and resistance to the streak artifacts common with FBP. Since the density of the radioactive tracer is a function in a function space, and therefore of extremely high dimension, methods which regularize the maximum-likelihood solution, turning it towards penalized or maximum a-posteriori methods, can have significant advantages for low counts. Examples such as Ulf Grenander's Sieve estimator,[6][7] Bayes penalty methods,[8][9] or I.J. Good's roughness method[10][11] may yield superior performance to expectation-maximization-based methods which involve a Poisson likelihood function only.

Iterative reconstruction is also considered superior when one does not have a large set of projections available, when the projections are not distributed uniformly in angle, or when the projections are sparse or missing at certain orientations. These scenarios may occur in intraoperative CT, in cardiac CT, or when metal artifacts[12][13] require the exclusion of some portions of the projection data.

In Magnetic Resonance Imaging it can be used to reconstruct images from data acquired with multiple receive coils and with sampling patterns different from the conventional Cartesian grid,[14] and allows the use of improved regularization techniques (e.g. total variation)[15] or an extended modeling of physical processes[16] to improve the reconstruction. For example, with iterative algorithms it is possible to reconstruct images from data acquired in a very short time, as required for real-time MRI.[17]
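The expectation-maximization approach for Poisson data can be sketched with a toy MLEM iteration. The system matrix, tracer values, and iteration count below are all invented for illustration and stand in for the (much larger) models used in real SPECT/PET reconstruction.

```python
import numpy as np

# Toy MLEM (maximum-likelihood expectation-maximization) reconstruction.
# A is a hypothetical system matrix mapping 4 image voxels to 8 detector
# bins; y holds Poisson-distributed measured counts.
rng = np.random.default_rng(0)
x_true = np.array([4.0, 1.0, 3.0, 2.0])          # unknown tracer density
A = rng.uniform(0.1, 1.0, size=(8, 4))           # system model
y = rng.poisson(A @ x_true).astype(float)        # noisy measurements

x = np.ones(4)                                   # initial estimate
sens = A.T @ np.ones(len(y))                     # sensitivity (column sums)
for _ in range(200):
    ratio = y / np.maximum(A @ x, 1e-12)         # measured / predicted counts
    x *= (A.T @ ratio) / sens                    # multiplicative MLEM update
```

The multiplicative update keeps the estimate nonnegative automatically, one reason MLEM suits emission data, where negative activity is physically meaningless.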

In cryo-electron tomography, where only a limited number of projections can be acquired due to hardware limitations and to avoid damaging the biological specimen, iterative reconstruction can be used along with compressive sensing techniques or regularization functions (e.g. the Huber function) to improve the reconstruction for better interpretation.[18] An example illustrating the benefits of iterative image reconstruction for cardiac MRI is given in [19].

A single frame from a Real-time MRI movie of a human heart. a) direct reconstruction b) iterative (nonlinear inverse) reconstruction[17]

6.3.3 See also

• Tomographic reconstruction

• Positron Emission Tomography

• Tomogram

• Computed Tomography

• Magnetic Resonance Imaging

• Inverse problem

• Ordered subset expectation maximization (OSEM)

• Deconvolution

• Inpainting

• Algebraic Reconstruction Technique

6.3.4 References

[1] Herman, G. T., Fundamentals of computerized tomography: Image reconstruction from projections, 2nd edition, Springer, 2009

[2] Wang, Xiao; Sabne, Amit; Kisner, Sherman; Raghunathan, Anand; Bouman, Charles; Midkiff, Samuel (2016-01-01). “High Performance Model Based Image Reconstruction”. Proceedings of the 21st ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. PPoPP '16. New York, NY, USA: ACM: 2:1–2:12. doi:10.1145/2851141.2851163. ISBN 9781450340922.

[3] J A Fessler, “Penalized weighted least-squares image reconstruction for positron emission tomography,” IEEE Trans. on Medical Imaging, 13(2):290-300, June 1994.

[4] Lange, Kenneth; Carson, Richard (1984). “EM reconstruction algorithms for emission and transmission tomography”. Journal of Computer Assisted Tomography. 8 (2): 306–316. PMID 6608535.

[5] Vardi, Y.; L. A. Shepp; L. Kaufman (1985). “A statistical model for positron emission tomography”. Journal of the American Statistical Association. 80 (389): 8–37. doi:10.1080/01621459.1985.10477119.

[6] Snyder, Donald L.; Miller, Michael I. (1985). “On the Use of the Method of Sieves for Positron Emission Tomography”. IEEE Transactions on Nuclear Science. NS-32 (5): 3864–3872. doi:10.1109/TNS.1985.4334521.

[7] Snyder, D.L.; Miller, M.I.; Thomas, L.J.; Politte, D.G. (1987). “Noise and edge artifacts in maximum-likelihood reconstructions for emission tomography”. IEEE Trans. on Medical Imaging. 6 (3): 228–238. doi:10.1109/tmi.1987.4307831.

[8] Geman, Stuart; McClure, Donald E. (1985). “Bayesian image analysis: An application to single photon emission tomography”. Proceedings American Statistical Computing: 12–18.

[9] Green, Peter J. (1990). “Bayesian Reconstructions for Emission Tomography Data Using a Modified EM Algorithm”. IEEE Transactions on Medical Imaging. 9(1): 84–93. doi:10.1109/TNS.1985.4334521.

[10] Miller, Michael I.; Snyder, Donald L. (1987). “The role of likelihood and entropy in incomplete data problems: Applications to estimating point-process intensities and Toeplitz constrained covariance estimates”. Proceedings of the IEEE. 5(7): 3223–3227. doi:10.1109/PROC.1987.13825.

[11] Miller, Michael I.; Roysam, Badrinath (April 1991). “Bayesian image reconstruction for emission tomography incorporating Good’s roughness prior on massively parallel processors”. Proceedings National Academy Sciences, USA. 88: 3223–3227. doi:10.1109/TNS.1985.4334521.

[12] Wang, G.E.; Snyder, D.L.; O'Sullivan, J.A.; Vannier, M.W. “Iterative deblurring for CT metal artifact reduction”. IEEE Trans. on Medical Imaging. 15 (5): 657–664. doi:10.1109/42.538943.

[13] FE Boas and D Fleischmann. "Evaluation of two iterative techniques for reducing metal artifacts in computed tomography." Radiology, doi:10.1148/radiol.11101782, 2011.

[14] Pruessmann, K. P., Weiger, M., Börnert, P. and Boesiger, P. (2001), Advances in sensitivity encoding with arbitrary k-space trajectories. Magnetic Resonance in Medicine, 46: 638–651. doi:10.1002/mrm.1241

[15] Block, K. T., Uecker, M. and Frahm, J. (2007), Undersampled radial MRI with multiple coils. Iterative image reconstruction using a total variation constraint. Magnetic Resonance in Medicine, 57: 1086–1098. doi:10.1002/mrm.21236

[16] Fessler, J. (2010) Model-based Image Reconstruction for MRI. Signal Processing Magazine, IEEE 27:81-89

[17] M Uecker, S Zhang, D Voit, A Karaus, KD Merboldt, J Frahm (2010a) Real-time MRI at a resolution of 20 ms. NMR Biomed 23: 986-994, doi:10.1002/nbm.1585

[18] Albarqouni, Shadi; Lasser, Tobias; Alkhaldi, Weaam; Al-Amoudi, Ashraf; Navab, Nassir (2015-01-01). Gao, Fei; Shi, Kuangyu; Li, Shuo, eds. Gradient Projection for Regularized Cryo-Electron Tomographic Reconstruction. Lecture Notes in Computational Vision and Biomechanics. Springer International Publishing. pp. 43–51. doi:10.1007/978-3-319-18431-9_5. ISBN 978-3-319-18430-2.

[19] I Uyanik, P Lindner, D Shah, N Tsekos, I Pavlidis (2013) Applying a Level Set Method for Resolving Physiologic Motions in Free-Breathing and Non-gated Cardiac MRI. FIMH, 2013.


Bruyant, P.P. “Analytic and iterative reconstruction algorithms in SPECT”. Journal of Nuclear Medicine 43(10):1343–1358, 2002

Chapter 7

Day 7

7.1 Distortion (optics)

Not to be confused with spherical aberration, a loss of image sharpness that can result from spherical lens surfaces.

In geometric optics, distortion is a deviation from rectilinear projection, a projection in which straight lines in a scene remain straight in an image. It is a form of optical aberration.

Wine glasses create non-uniform distortion of their background

7.1.1 Radial distortion

Although distortion can be irregular or follow many patterns, the most commonly encountered distortions are radially symmetric, or approximately so, arising from the symmetry of a photographic lens. These radial distortions can usually be classified as either barrel distortions or pincushion distortions. See van Walree.[1]

Mathematically, barrel and pincushion distortion are quadratic, meaning they increase as the square of distance from the center. In mustache distortion the quartic (degree 4) term is significant: in the center, the degree 2 barrel distortion is dominant, while at the edge the degree 4 distortion in the pincushion direction dominates. Other distortions are in principle possible – pincushion in center and barrel at the edge, or higher-order distortions (degree 6, degree 8) – but do not generally occur in practical lenses, and higher-order distortions are small relative to the main barrel and pincushion effects.

Occurrence

Simulated animation of globe effect (right) compared with a simple pan (left)

In photography, distortion is particularly associated with zoom lenses, particularly large-range zooms, but may also be found in prime lenses, and depends on focal distance – for example, the Canon EF 50mm f/1.4 exhibits barrel distortion at extremely short focal distances. Barrel distortion may be found in wide-angle lenses, and is often seen at the wide-angle end of zoom lenses, while pincushion distortion is often seen in older or low-end telephoto lenses. Mustache distortion is observed particularly on the wide end of zooms, with certain retrofocus lenses, and more recently on large-range zooms such as the Nikon 18–200 mm. A certain amount of pincushion distortion is often found with visual optical instruments, e.g., binoculars, where it serves to eliminate the globe effect.

In order to understand these distortions, it should be remembered that these are radial defects; the optical systems in question have rotational symmetry (omitting non-radial defects), so the didactically correct test image would be a set of concentric circles having even separation – like a shooter’s target. It will then be observed that these common distortions actually imply a nonlinear radius mapping from the object to the image: what is seemingly pincushion distortion is actually simply an exaggerated radius mapping for large radii in comparison with small radii. A graph showing radius transformations (from object to image) will be steeper in the upper (rightmost) end. Conversely, barrel distortion is actually a diminished radius mapping for large radii in comparison with small radii. A graph showing radius transformations (from object to image) will be less steep in the upper (rightmost) end.

Chromatic aberration

Further information: Chromatic aberration

Radial distortion that depends on wavelength is called "lateral chromatic aberration" – “lateral” because radial, “chromatic” because dependent on color (wavelength). This can cause colored fringes in high-contrast areas in the outer parts of the image. This should not be confused with axial (longitudinal) chromatic aberration, which causes aberrations throughout the field, particularly purple fringing.

Radial distortions can be understood by their effect on concentric circles, as in an archery target.

Origin of terms

The names for these distortions come from familiar objects which are visually similar.

• In barrel distortion, straight lines bulge outwards at the center, as in a barrel.

• In pincushion distortion, corners of squares form elongated points, as in a cushion.

• In mustache distortion, horizontal lines bulge up in the center, then bend the other way as they approach the edge of the frame (if in the top of the frame), as in curly handlebar mustaches.

7.1.2 Software correction

Radial distortion, whilst primarily dominated by low order radial components,[3] can be corrected using Brown’s distortion model,[4] also known as the Brown–Conrady model based on earlier work by Conrady.[5] The Brown–Conrady model corrects both for radial distortion and for tangential distortion caused by physical elements in a lens not being perfectly aligned. The latter is also known as decentering distortion. See Zhang[6] for a discussion of radial distortion.

x_d = x_u(1 + K_1 r^2 + K_2 r^4 + ...) + (P_2(r^2 + 2 x_u^2) + 2 P_1 x_u y_u)(1 + P_3 r^2 + P_4 r^4 + ...)

y_d = y_u(1 + K_1 r^2 + K_2 r^4 + ...) + (P_1(r^2 + 2 y_u^2) + 2 P_2 x_u y_u)(1 + P_3 r^2 + P_4 r^4 + ...)

where:

(x_d, y_d) = distorted image point as projected on the image plane using the specified lens,

(x_u, y_u) = undistorted image point as projected by an ideal pinhole camera,

(x_c, y_c) = distortion center (assumed to be the principal point),

K_n = nth radial distortion coefficient,

P_n = nth tangential distortion coefficient (note that Brown’s original definition has P_1 and P_2 interchanged),

r = sqrt((x_u − x_c)^2 + (y_u − y_c)^2), and

... = an infinite series.

Barrel distortion typically will have a negative term for K1 whereas pincushion distortion will have a positive value. Moustache distortion will have a non-monotonic radial geometric series where for some r the sequence will change sign. 7.1. DISTORTION (OPTICS) 83

Software can correct those distortions by warping the image with a reverse distortion. This involves determining which distorted pixel corresponds to each undistorted pixel, which is non-trivial due to the non-linearity of the distortion equation.[3] Lateral chromatic aberration (purple/green fringing) can be significantly reduced by applying such warping for red, green and blue separately. An alternative method iteratively computes the undistorted pixel position.[7]
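The iterative computation of the undistorted pixel position can be sketched as a fixed-point iteration. This minimal sketch keeps only the radial terms K1 and K2 of the model (tangential terms omitted), and the coefficient values are invented examples, not taken from any real lens profile.

```python
# Forward radial distortion and its iterative inverse.
# A negative K1 gives barrel distortion; coordinates are normalized
# so that the distortion center is the origin.
K1, K2 = -0.12, 0.02   # invented example coefficients

def distort(xu, yu):
    """Map an undistorted point to its distorted image position."""
    r2 = xu * xu + yu * yu
    f = 1 + K1 * r2 + K2 * r2 * r2
    return xu * f, yu * f

def undistort(xd, yd, iters=20):
    """Invert distort() by fixed-point iteration: repeatedly divide the
    distorted point by the radial factor evaluated at the current estimate."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        f = 1 + K1 * r2 + K2 * r2 * r2
        xu, yu = xd / f, yd / f
    return xu, yu
```

For mild distortion the iteration contracts quickly (the per-step error factor is on the order of |2·K1·r²|), so a handful of iterations per pixel suffices in practice.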

Calibrated

Calibrated systems work from a table of lens/camera transfer functions:

• Adobe Photoshop Lightroom and Photoshop CS5 can correct complex distortion. • PTlens is a or standalone application which corrects complex distortion. It not only corrects for linear distortion, but also second degree and higher nonlinear components.[8] • Lensfun is a free to use database and library for correcting lens distortion.[9] • DxO Labs' Optics Pro can correct complex distortion, and takes into account the focus distance. • proDAD Defishr includes an Unwarp-tool and a Calibrator-tool. Due to the distortion of a checkerboard pattern, the necessary unwrap is calculated. • The Micro cameras and lenses perform automatic distortion correction using correction parameters that are stored in each lens’s firmware, and are applied automatically by the camera and raw con- verter software. The optics of most of these lenses feature substantially more distortion than their counterparts in systems that don't offer such automatic corrections, but the software-corrected final images show noticeably less distortion than competing designs.[10]

Manual

Manual systems allow manual adjustment of distortion parameters:

• ImageMagick can correct several distortions; for example the fisheye distortion of the popular GoPro Hero3+ Silver camera can be corrected by the command[11]

convert distorted_image.jpg -distort barrel "0.06335 -0.18432 -0.13009" corrected_image.jpg

• Photoshop CS2 and Photoshop Elements (from version 5) include a manual Lens Correction filter for simple (pincushion/barrel) distortion.

• Corel Paint Shop Pro Photo includes a manual Lens Distortion effect for simple (barrel, fisheye, fisheye spherical and pincushion) distortion.

• The GIMP includes manual lens distortion correction (from version 2.4).

• PhotoPerfect has interactive functions for general pincushion adjustment, and for fringe (adjusting the size of the red, green and blue image parts).

• Hugin can be used to correct distortion, though that is not its primary application.[12]

7.1.3 Related phenomena

Radial distortion is a failure of a lens to be rectilinear: a failure to image lines into lines. If a photograph is not taken straight-on then, even with a perfect rectilinear lens, rectangles will appear as trapezoids: lines are imaged as lines, but the angles between them are not preserved (the mapping is not conformal). This effect can be controlled by using a perspective control lens, or corrected in post-processing.

Due to perspective, cameras image a cube as a square frustum (a truncated pyramid, with trapezoidal sides) – the far end is smaller than the near end. This creates perspective, and the rate at which this scaling happens (how quickly more distant objects shrink) creates a sense of a scene being deep or shallow. This cannot be changed or corrected by a simple transform of the resulting image, because it requires 3D information, namely the depth of objects in the scene. This effect is known as perspective distortion; the image itself is not distorted, but is perceived as distorted when viewed from a normal viewing distance.

Note that if the center of the image is closer than the edges (for example, a straight-on shot of a face), then barrel distortion and wide-angle distortion (taking the shot from close) both increase the size of the center, while pincushion distortion and telephoto distortion (taking the shot from far) both decrease the size of the center. However, radial distortion bends straight lines (out or in), while perspective distortion does not bend lines, and these are distinct phenomena. Fisheye lenses are wide-angle lenses with heavy barrel distortion and thus exhibit both these phenomena, so objects in the center of the image (if shot from a short distance) are particularly enlarged: even if the barrel distortion is corrected, the resulting image is still from a wide-angle lens, and will still have a wide-angle perspective.

7.1.4 See also

• Anamorphosis

• Cylindrical perspective

• Distortion

• Texture gradient

• Underwater vision

• Vignetting

7.1.5 References

[1] Paul van Walree. “Distortion”. Photographic optics. Retrieved 2 February 2009.

[2] “Tamron 18-270mm f/3.5-6.3 Di II VC PZD”. Retrieved 20 March 2013.

[3] de Villiers, J. P.; Leuschner, F.W.; Geldenhuys, R. (17–19 November 2008). “Centi-pixel accurate real-time inverse distor- tion correction” (PDF). 2008 International Symposium on Optomechatronic Technologies. SPIE. doi:10.1117/12.804771.

[4] Brown, Duane C. (May 1966). “Decentering distortion of lenses” (PDF). Photogrammetric Engineering. 32 (3): 444–462.

[5] Conrady, Alexander Eugen. "Decentred Lens-Systems.” Monthly notices of the Royal Astronomical Society 79 (1919): 384–390.

[6] Zhang, Zhengyou (1998). A Flexible New Technique for Camera Calibration (PDF) (Technical report). Microsoft Research. MSR-TR-98-71.

[7] “A Four-step Camera Calibration Procedure with Implicit Image Correction”. Retrieved 19 January 2011.

[8] “PTlens”. Retrieved 2 Jan 2012.

[9] “lensfun - Rev 246 - /trunk/README”. Retrieved 13 Oct 2013.

[10] Wiley, Carlisle. “Articles: Digital Photography Review”. Dpreview.com. Retrieved 2013-07-03.

[11] “ImageMagick v6 Examples -- Lens Corrections”.

[12] “Hugin tutorial – Simulating an architectural projection”. Retrieved 9 September 2009.

7.1.6 External links

• Lens distortion estimation and correction with source code and online demonstration

• Lens distortion correction on post-processing

• Lens distortion and camera field of view in CCTV design

7.2 Perspective control

Not to be confused with Perspective distortion (photography).

Picture of Notre Dame de Reims showing perspective distortion

The same picture corrected

Perspective control is a procedure for composing or editing photographs to better conform with the commonly accepted distortions in constructed perspective. The control would:

• make all lines that are vertical in reality vertical in the image. This includes columns, vertical edges of walls and lampposts. This is a commonly accepted distortion in constructed perspective; perspective is based on the notion that more distant objects are represented as smaller on the page; however, even though the top of the cathedral tower is in reality further from the viewer than the base of the tower (due to the vertical distance), constructed perspective considers only the horizontal distance and considers the top and bottom to be the same distance away;

• make all parallel lines (such as four horizontal edges of a cubic room) cross in one point.

Perspective projection distortion occurs in photographs when the film plane is not parallel to lines that are required to be parallel in the photo. A common case is when a photo is taken of a tall building from ground level by tilting the camera backwards: the building appears to fall away from the camera. In the two images shown to the right, the first suffers from perspective distortion — in the second that distortion has been corrected. The popularity of amateur photography has made distorted photos made with cheap cameras so familiar that many people do not immediately realise the distortion. This “distortion” is relative only to the accepted norm of constructed perspective (where vertical lines in reality do not converge in the constructed image), which in itself is distorted from a true perspective representation (where lines that are vertical in reality would begin to converge above and below the horizon as they become more distant from the viewer).

7.2.1 Perspective control at exposure

24mm PC-E Nikkor shift and tilt lens, made for DSLR and 35mm SLR cameras.

Main article: Perspective control lens

Professional cameras where perspective control is important control the perspective at exposure by raising the lens parallel to the film. There is more information on this in the view camera article. Most (4x5 and up) cameras have this feature, as well as plane of focus control built into the camera body in the form of flexible bellows and moveable front (lens) and rear (film holder) elements. Thus any lens mounted on a view camera or field camera, and many press cameras can be used with perspective control.

Some interchangeable-lens, 35 mm film SLR, and digital SLR camera systems have PC, shift, or tilt/shift lens options which allow perspective control and, in the case of a tilt/shift lens, plane of focus control, but only at a specific focal length.

7.2.2 Perspective control in the darkroom

A darkroom technician can correct perspective distortion in the printing process. It is usually done by exposing the paper at an angle to the film, with the paper raised toward the part of the image that is larger, therefore not allowing the light from the enlarger to spread as much as on the other side of the exposure. The process is known as rectification printing, and is done using a rectifying printer (transforming printer), which involves rotating the negative and/or easel.[1] Restoring parallelism to verticals (for instance) is easily done by tilting one plane, but if the focal length of the enlarger is not suitably chosen, the resulting image will have vertical distortion (compression or stretching). For correct perspective correction, the proper focal length (specifically, angle of view) must be chosen so that the enlargement replicates the perspective of the camera.

7.2.3 Perspective control during digital post-processing

Digital post-processing software provides means to correct converging verticals and other distortions introduced at image capture. Adobe Photoshop and GIMP have several “transform” options to achieve, with care, the desired control without any significant degradation in the overall image quality. Photoshop CS2 and subsequent releases include perspective correction as part of the Lens Distortion Correction filter;[2] DxO Optics Pro from DxO Labs includes perspective correction; while GIMP (as of 2.6) does not include a specialized tool for correcting perspective, though a plug-in, EZ Perspective, is available. RawTherapee, a free and open-source raw converter, includes horizontal and vertical perspective correction tools too. Note that because the mathematics of projective transforms depends on the angle of view, perspective tools require that the angle of view or 35 mm equivalent focal length be entered, though this can often be determined from Exif metadata.[3]

It is commonly suggested to correct perspective using a general projective transformation tool, correcting vertical tilt (converging verticals) by stretching out the top;[4][5][6] this is the “Distort Transform” in Photoshop, and the “perspective tool” in GIMP. However, this introduces vertical distortion – objects appear squat (vertically compressed, horizontally extended)[6] – unless the vertical dimension is also stretched. This effect is minor for small angles, and can be corrected by hand, manually stretching the vertical dimension until the proportions look right,[5] but is automatically done by specialized perspective transform tools.

An alternative interface, found in Photoshop (CS and subsequent releases) is the “perspective crop”, which enables the user to perform perspective control with the cropping tool, setting each side of the crop to independently determined angles, which can be more intuitive and direct.
Other software with mathematical models on how lenses and different types of optical distortions affect the image can correct this by being able to calculate the different characteristics of a lens and re-projecting the image in a number of ways (including non-rectilinear projections). An example of this kind of software is the panorama creation suite Hugin.[3] However these techniques do not enable the recovery of lost spatial resolution in the more distant areas of the subject, or the recovery of lost depth of field due to the angle of the film/sensor plane to the subject. These transforms involve interpolation, as in image scaling, which degrades the image quality, in particular blurring high-frequency detail. How significant this is depends on the original image resolution, degree of manipulation, print/display size, and viewing distance, and perspective correction must be traded off against preserving high-frequency detail.
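The projective transform at the heart of these tools can be sketched with the standard direct linear transform (DLT): given four point correspondences, a linear system yields the homography that maps the keystoned corners of a facade back to a rectangle. The corner coordinates below are invented for illustration.

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 projective transform mapping src points to dst
    points via the DLT linear system; the null-space vector of the
    stacked constraints gives the homography entries up to scale."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(rows, float))
    return Vt[-1].reshape(3, 3)

def apply_h(H, pt):
    """Apply a homography to a 2D point (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Converging verticals (frame narrower at the top) mapped to a rectangle:
src = [(10, 0), (90, 0), (70, 100), (30, 100)]   # photographed corners
dst = [(10, 0), (90, 0), (90, 100), (10, 100)]   # desired upright corners
H = homography(src, dst)
```

Applying H to every pixel (with interpolation) performs the correction; as the text notes, that resampling step is exactly where high-frequency detail is lost.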

7.2.4 Perspective control in virtual environments

Architectural images are commonly "rendered" from 3D computer models, for use in promotional material. These have virtual cameras within to create the images, which normally have modifiers capable of correcting (or distorting) the perspective to the artist’s taste. See 3D projection for details.

7.2.5 See also

• Anamorphosis

• Keystone effect

• Image distortion

7.2.6 References

[1] Ray, Sidney F. (2002). “63.6: Rectification printing”. Applied photographic optics: lenses and optical systems for photog- raphy, film, video, electronic and digital imaging. Focal Press. pp. 549–553. ISBN 978-0-240-51540-3.

[2] Correcting Perspective in Photoshop, Ken Rockwell

[3] Hugin tutorial — Perspective correction

[4] Perspective Distortion Correction Tutorial, Lone Star Digital

[5] Perspective and Barrel Distortion Correction of Wide-angle Images in Adobe Photoshop, Larry N. Bolch

[6] Perspective Adjustment in PhotoShop, The Luminous Landscape

7.2.7 External links

• Illustrations wiki page on perspective control

• Controlling perspective while cropping using Photoshop software

• Correcting perspective using the Open Source Hugin software

Chapter 8

Day 8

8.1 Cropping (image)

This article is about cropping images. For other uses, see Crop (disambiguation).

Cropping refers to the removal of the outer parts of an image to improve framing, accentuate subject matter or change aspect ratio. Depending on the application, this may be performed on a physical photograph, artwork or film footage, or achieved digitally using image editing software. The term is common to the film, broadcasting, photographic, graphic design and printing industries.

8.1.1 Cropping in photography, print & design

Wide view, uncropped photograph

In the printing, graphic design and photography industries, cropping[1] refers to removing unwanted areas from a photographic or illustrated image. One of the most basic photo manipulation processes, it is performed in order to remove an unwanted subject or irrelevant detail from a photo, change its aspect ratio, or to improve the overall composition.[2] In telephoto photography, most commonly in bird photography, an image is cropped to magnify the primary subject and further reduce the angle of view when a lens of sufficient focal length to achieve the desired magnification directly is not available. It is considered one of the few editing actions permissible in modern

90 8.1. CROPPING (IMAGE) 91

Cropped version, accentuating subject photojournalism along with tonal balance, colour correction and sharpening. A crop made from the top and bottom of a photograph may produce an aspect which mimics the panoramic format (in photography) and the widescreen format in cinematography and broadcasting. Both of these formats are not cropped as such, rather the product of highly specialised optical configuration and camera design.

Graphic examples (photography)

Cropping in order to emphasize the subject:

• Cropped image of Anemone coronaria, aspect ratio 1.065, in which the flower fills most of the frame

• The original photo, aspect ratio 1.333, in which the flower uses only a small part of the frame

Cropping in order to remove unwanted details/objects:

• Cropped image of Garland chrysanthemum, aspect ratio 16:4

• The original photo, aspect ratio 1.333; the lower right part shows some white-colored trash and the upper right shows a dead flower, both unwanted objects

8.1.2 Cropping in cinematography & broadcasting

In certain circumstances, film footage may be cropped to change it from one aspect ratio to another, without stretching the image or filling the blank spaces with letterbox bars (fig. 2). Aspect ratio concerns are a major issue in film making. Rather than cropping, the cinematographer traditionally uses mattes to increase the latitude for alternative aspect ratios in projection and broadcast. Anamorphic optics (such as lenses) produce a full-frame, horizontally compressed image from which broadcasters and projectionists can matte a number of alternative aspect ratios without cropping relevant image detail. Without this, widescreen reproduction, especially for television broadcasting, is dependent upon a variety of soft matting techniques such as letterboxing, which involves varying degrees of image cropping (see figures 2, 3 and 4).

Since the advent of widescreen television, a similar process removes large chunks from the top and bottom to make a standard 4:3 image fit a 16:9 one, losing 25% of the original image. This process has become standard in the United Kingdom in TV shows where many archive clips are used, which gives them a zoomed-in, cramped image with significantly reduced resolution. Another option is a process called pillarboxing, where black bands are placed down the sides of the screen, allowing the original image to be shown full-frame within the wider aspect ratio (fig. 6).

• Typical cropping in cinematographic and broadcast applications

• Figure 1: 2.35:1 original image with widescreen aspect ratio, showing alternative aspect ratios

• Figure 2: 2.35:1 image with letterbox resized to 4:3, the whole image is visible

• Figure 3: 1.85:1 image with letterbox resized to 4:3. Typical 16:9 image, the outer edges of the image are not visible

• Figure 4: 1.55:1 image with letterbox resized to 4:3. A compromise between 16:9 and 4:3, often broadcast in the UK

• Figure 5: 1.33:1 image without letterbox, because it is cropped to 4:3, losing much of the original

8.1.3 Additional methods

Various additional methods may be applied after cropping, or applied to the original image.

• Vignetting is the accentuation of the central portion of an image by blurring, darkening, lightening, or desaturation of peripheral portions of the image

• A non-rectangular mat or picture frame may be used to select portions of a larger image

8.1.4 Uncropping

It is not possible to “uncrop” a cropped image unless the original still exists or undo information exists: if an image is cropped and saved (without undo information), it cannot be recovered without the original. However, using texture synthesis, it is possible to artificially add a band around an image, synthetically “uncropping” it. This is effective if the band smoothly blends with the existing image, which is relatively easy if the edge of the image has low detail or is a chaotic natural pattern such as sky or grass, but does not work if discernible objects are cut off at the boundary, such as half a car. An uncrop plug-in exists for the GIMP image editor.

8.1.5 References

[1] “Crop Images With PHP and Jquery”.

[2] “Automatic image cropping to improve composition”.

8.2 Image warping

Image warping example

Image warping is the process of digitally manipulating an image such that any shapes portrayed in the image have been significantly distorted. Warping may be used for correcting image distortion as well as for creative purposes (e.g., morphing[1]). The same techniques are equally applicable to video.

8.2.1 Overview

While an image can be transformed in various ways, pure warping means that points are mapped to points without changing the colors. This can be based mathematically on any function from (part of) the plane to the plane. If the function is injective, the original can be reconstructed. If the function is a bijection, any image can be inversely transformed. The following list is not meant to be a partitioning of all available methods into categories.

• Images may be distorted through simulation of optical aberrations.

• Images may be viewed as if they had been projected onto a curved or mirrored surface. (This is often seen in raytraced images.)

• Images can be partitioned into polygons and each polygon distorted.

• Images can be distorted using morphing.

There are at least two ways to generate an image with whatever distortion method is chosen:

• (forward-mapping) a given mapping from sources to images is directly applied

• (reverse-mapping) for a given mapping from sources to images, the source is found from the image
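As a concrete sketch of the reverse-mapping approach, here is a minimal nearest-neighbor warp in Python with NumPy (the function name, and the choice of nearest-neighbor sampling rather than interpolation, are illustrative, not a standard API):

```python
import numpy as np

def warp_reverse(src, inv_map):
    """Reverse-mapping warp: for every destination pixel, inv_map returns the
    source coordinates to sample (nearest-neighbor sampling for simplicity)."""
    h, w = src.shape[:2]
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sy, sx = inv_map(yy, xx)
    # Round to the nearest source pixel and clamp to the image bounds.
    sy = np.clip(np.rint(sy).astype(int), 0, h - 1)
    sx = np.clip(np.rint(sx).astype(int), 0, w - 1)
    return src[sy, sx]

# Example: a horizontal flip expressed as a reverse mapping.
# flipped = warp_reverse(img, lambda y, x: (y, img.shape[1] - 1 - x))
```

A forward-mapped warp would instead scatter source pixels into the destination, which can leave holes; reverse mapping guarantees every destination pixel receives a value, which is why it is the more common implementation.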

To estimate what kind of warping has taken place between consecutive images, one can use optical flow estimation techniques.

8.2.2 In the news

In 2007, a suspected pedophile used the “swirl” effect to hide his face in the pictures he had taken while raping and sexually abusing young children whose “ages appear to range from six to early teens.”[2] Interpol trivially reversed the swirl by swirling in the opposite direction to identify and eventually locate the man in Thailand.

8.2.3 See also

• Anamorphosis

• Softwarp, a software technique to warp an image so that it can be projected on a curved screen

8.2.4 References

• Wolberg, G. (1990), Digital Image Warping, IEEE Computer Society Press, ISBN 0-8186-8944-7.

[1] Beier, Thaddeus. Feature-Based Image Metamorphosis. Siggraph '92

[2] Nizza, Mike (2007-10-08). “Interpol Untwirls a Suspected Pedophile”. New York Times Blog. p. The Lede. Retrieved 2008-08-25.

8.2.5 External links

• McMillan, Leonard “An Image-Based Approach to Three-Dimensional Computer Graphics”, Dissertation, 1997

8.3 Vignetting

In photography and optics, vignetting (/vɪnˈjɛtɪŋ/; French: “vignette”) is a reduction of an image’s brightness or saturation at the periphery compared to the image center. The word vignette, from the same root as vine, originally referred to a decorative border in a book. Later, the word came to be used for a photographic portrait which is clear in the center, and fades off at the edges. A similar effect occurs when photographing projected images or movies off a projection screen, resulting in a so-called “hotspot” effect.

A vignette is often added to an image to draw interest to the center and to frame the center portion of the photo.

Vignetting is often an unintended and undesired effect caused by camera settings or lens limitations. However, it is sometimes deliberately introduced for creative effect, such as to draw attention to the center of the frame. A photographer may deliberately choose a lens which is known to produce vignetting to obtain the effect, or it may be introduced with the use of special filters or post-processing procedures. When using superzoom lenses, vignetting may occur all along the zoom range, depending on the aperture and the focal length. However, it may not always be visible, except at the widest end (the shortest focal length). In these cases, vignetting may cause an exposure value (EV) difference of up to 0.75 EV.

8.3.1 Causes

There are several causes of vignetting. Sidney F. Ray[1] distinguishes the following types:

• Mechanical vignetting

• Optical vignetting

• Natural vignetting

A fourth cause is unique to digital imaging:

• Pixel vignetting

A fifth cause is unique to analog imaging:

• Photographic film vignetting

Vignetting is a common feature of photographs produced by toy cameras such as this shot taken with a Holga.

Mechanical vignetting

Mechanical vignetting occurs when light beams emanating from object points located off-axis are partially blocked by external objects such as thick or stacked filters, secondary lenses, and improper lens hoods. This has the effect of changing the entrance pupil shape as a function of angle (resulting in the path of light being partially blocked). Darkening can be gradual or abrupt – the smaller the aperture, the more abrupt the vignetting as a function of angle. When some points on an image receive no light at all because the paths of light to them are completely blocked, the result is a restriction of the field of view (FOV) – parts of the image are then completely black.

Optical vignetting

This type of vignetting is caused by the physical dimensions of a multiple element lens. Rear elements are shaded by elements in front of them, which reduces the effective lens opening for off-axis incident light. The result is a gradual decrease in light intensity towards the image periphery. Optical vignetting is sensitive to the lens aperture and can often be cured by a reduction in aperture of 2–3 stops (an increase in the f-number).

This example shows both vignetting and a restricted field of view (FOV). A "point-and-shoot camera" was used together with a microscope to create this image. Pronounced vignetting (falloff in brightness towards the edge) is visible because the optical system is not well adapted. Note also that a circular restriction of the FOV is visible (the black area in the image).

Natural vignetting

Unlike the previous types, natural vignetting (also known as natural illumination falloff) is not due to the blocking of light rays. The falloff is approximated by the cos⁴ or “cosine fourth” law of illumination falloff: the light falloff is proportional to the fourth power of the cosine of the angle at which the light impinges on the film or sensor array. Wideangle rangefinder designs and the lens designs used in compact cameras are particularly prone to natural vignetting. Telephoto lenses, retrofocus wideangle lenses used on SLR cameras, and telecentric designs in general are less troubled by natural vignetting. A graduated grey filter or postprocessing techniques may be used to compensate for natural vignetting, as it cannot be cured by stopping down the lens. Some modern lenses are specifically designed so that the light strikes the image plane parallel to the axis, or nearly so, eliminating or greatly reducing vignetting.
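The cosine-fourth law is easy to evaluate directly. A small illustrative helper (the function name is ours, not a standard API):

```python
import math

def natural_falloff(theta_deg):
    """Relative illumination at off-axis angle theta (in degrees) under the
    cos^4 law: falloff is the fourth power of the cosine of the angle."""
    return math.cos(math.radians(theta_deg)) ** 4
```

For example, at 30° off-axis the illumination drops to cos⁴(30°) = 0.5625, a loss of log₂(1/0.5625) ≈ 0.83 EV.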

Pixel vignetting

Pixel vignetting only affects digital cameras and is caused by the angle-dependence of the digital sensors. Light incident on the sensor at normal incidence produces a stronger signal than light hitting it at an oblique angle. Most digital cameras use built-in image processing to compensate for optical vignetting and pixel vignetting when converting raw sensor data to standard image formats such as JPEG or TIFF. The use of offset microlenses over the image sensor can also reduce the effect of pixel vignetting.

8.3.2 Post-shoot

For artistic effect, vignetting is sometimes applied to an otherwise un-vignetted photograph and can be achieved by burning the outer edges of the photograph (with film stock) or using digital imaging techniques, such as masking darkened edges. The Lens Correction filter in Photoshop can also achieve the same effect.

Vignetting can be used to artistic effect, as demonstrated in this panorama.

Vignetting can be applied in the post-shoot phase with digital imaging software.

In digital imaging, this technique is used to create a lo-fi appearance in the picture.
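As a sketch of how such a post-shoot vignette can be applied digitally, the following NumPy snippet darkens a float image toward its corners with a radial mask (the falloff shape and the strength parameter are arbitrary illustrative choices, not any editor's actual algorithm):

```python
import numpy as np

def add_vignette(img, strength=0.5):
    """Darken an H x W (x C) float image toward the corners.
    strength=0 leaves the image unchanged; at 1.0 the corners go black."""
    h, w = img.shape[:2]
    y, x = np.ogrid[:h, :w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    # Normalized radial distance: 0 at the center, sqrt(2) at the corners.
    r = np.sqrt(((y - cy) / cy) ** 2 + ((x - cx) / cx) ** 2)
    mask = 1 - strength * np.clip(r / np.sqrt(2), 0, 1) ** 2
    if img.ndim == 3:
        mask = mask[..., None]  # broadcast over color channels
    return img * mask
```

The quadratic mask gives a gradual falloff; a Gaussian or cos⁴-shaped mask would mimic optical vignetting more closely.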

8.3.3 See also

• Dodging and burning

• Flat-field correction

• Vignette (philately)

8.3.4 References and sources

References

[1] Sidney F. Ray, Applied photographic optics, 3rd ed., Focal Press (2002) ISBN 978-0-240-51540-3.

Sources

• Van Walree’s webpage on vignetting uses some unorthodox terminology but illustrates very well the physics and optics of mechanical and optical vignetting.

• Peter B. Catrysse, Xinqiao Liu, and Abbas El Gamal: QE Reduction due to Pixel Vignetting in CMOS Image Sensors; in Morley M. Blouke, Nitin Sampat, George M. Williams, Jr., Thomas Yeh (ed.): Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications, Proceedings of SPIE, vol. 3965 (2000).

• Yuanjie Zheng, Stephen Lin, and Sing Bing Kang, Single-Image Vignetting Correction; IEEE Conference on Computer Vision and Pattern Recognition 2006

• Olsen, Doug; Dou, Changyong; Zhang, Xiaodong; Hu, Lianbo; Kim, Hojin; Hildum, Edward. 2010. "Radiometric Calibration for AgCam" Remote Sens. 2, no. 2: 464–477.

Chapter 9

Day 9

9.1 Image compression

The objective of image compression is to reduce irrelevance and redundancy of the image data in order to be able to store or transmit data in an efficient form.

Comparison of JPEG images saved by Adobe Photoshop at different quality levels and with or without “save for web”


9.1.1 Lossy and lossless image compression

Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, or comics. Lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossy methods are especially suitable for natural images such as photographs in applications where minor (sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit rate. Lossy compression that produces negligible differences may be called visually lossless. Methods for lossless image compression are:

• Run-length encoding – used as the default method in PCX and as one of the possible methods in BMP, TGA, and TIFF

• Area image compression

• DPCM and Predictive Coding

• Adaptive dictionary algorithms such as LZW – used in GIF and TIFF

• DEFLATE – used in PNG, MNG, and TIFF

• Chain codes

Methods for lossy compression:

• Reducing the color space to the most common colors in the image. The selected colors are specified in the color palette in the header of the compressed image. Each pixel just references the index of a color in the color palette; this method can be combined with dithering to avoid posterization.

• Chroma subsampling. This takes advantage of the fact that the human eye perceives spatial changes of brightness more sharply than those of color, by averaging or dropping some of the information in the image.

• Transform coding. This is the most commonly used method. In particular, a Fourier-related transform such as the Discrete Cosine Transform (DCT) is widely used: N. Ahmed, T. Natarajan and K. R. Rao, “Discrete Cosine Transform,” IEEE Trans. Computers, 90–93, Jan. 1974. The DCT is sometimes referred to as “DCT-II” in the context of a family of discrete cosine transforms; e.g., see discrete cosine transform. The more recently developed wavelet transform is also used extensively, followed by quantization and entropy coding.

• Fractal compression.
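The transform coding step listed above can be illustrated with the DCT-II itself. The following sketch builds the orthonormal 2-D DCT of a square block from its 1-D basis matrix; it is a toy version (real codecs use fast, fixed-point implementations), and the quantization step that follows it is where the loss actually occurs:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block, via the 1-D basis matrix C:
    coefficients = C @ block @ C.T."""
    n = block.shape[0]
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2 / n)
    c[0, :] = np.sqrt(1 / n)  # DC row has a different normalization
    return c @ block @ c.T

# Transform coding in miniature: transform an 8x8 block, then keep only the
# few largest coefficients and discard the rest before entropy coding.
```

For a uniform block, all energy lands in the single DC coefficient, which is exactly why the DCT compacts smooth image regions so well.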

9.1.2 Other properties

The best image quality at a given bit rate (or compression rate) is the main goal of image compression; however, there are other important properties of image compression schemes:

Scalability generally refers to a quality reduction achieved by manipulation of the bitstream or file (without decompression and re-compression). Other names for scalability are progressive coding or embedded bitstreams. Despite its contrary nature, scalability may also be found in lossless codecs, usually in the form of coarse-to-fine pixel scans. Scalability is especially useful for previewing images while downloading them (e.g., in a web browser) or for providing variable quality access to e.g., databases. There are several types of scalability:

• Quality progressive or layer progressive: The bitstream successively refines the reconstructed image.

• Resolution progressive: First encode a lower image resolution; then encode the difference to higher resolutions.[1][2]

• Component progressive: First encode grey; then color.

Region of interest coding. Certain parts of the image are encoded with higher quality than others. This may be combined with scalability (encode these parts first, others later).

Meta information. Compressed data may contain information about the image which may be used to categorize, search, or browse images. Such information may include color and texture statistics, small images, and author or copyright information.

Processing power. Compression algorithms require different amounts of processing power to encode and decode. Some high compression algorithms require high processing power.

The quality of a compression method is often measured by the peak signal-to-noise ratio (PSNR). It measures the amount of noise introduced through a lossy compression of the image; however, the subjective judgment of the viewer is also regarded as an important measure, perhaps the most important one.
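The peak signal-to-noise ratio mentioned above has a simple definition: PSNR = 10·log₁₀(peak²/MSE), where MSE is the mean squared error between the original and compressed images. A minimal sketch in Python with NumPy:

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10 * np.log10(peak ** 2 / mse)
```

Typical PSNR values for acceptable lossy image compression fall roughly in the 30–50 dB range for 8-bit images; higher is better.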

9.1.3 Notes and references

[1] Burt, P.; Adelson, E. (1 April 1983). “The Laplacian Pyramid as a Compact Image Code”. IEEE Transactions on Commu- nications. 31 (4): 532–540. doi:10.1109/TCOM.1983.1095851.

[2] Shao, Dan; Kropatsch, Walter G. (February 3–5, 2010). “Irregular Laplacian Graph Pyramid” (PDF). Computer Vision Winter Workshop 2010. Nove Hrady, Czech Republic: Czech Pattern Recognition Society.

9.1.4 External links

• Image compression from MIT OpenCourseWare

• Image Coding Fundamentals

• A study about image compression (Image compression basics and comparing different compression methods like JPEG2000, JPEG and JPEG XR / HD Photo)

• Data Compression Basics (includes comparison of PNG, JPEG and JPEG-2000 formats)

• FAQ:What is the state of the art in lossless image compression? from comp.compression

• IPRG Open group related to image processing research resources

9.2 Lempel–Ziv–Welch

Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. It was published by Welch in 1984 as an improved implementation of the LZ78 algorithm published by Lempel and Ziv in 1978. The algorithm is simple to implement, and has the potential for very high throughput in hardware implementations.[1] It is the algorithm of the widely used file compression utility compress, and is used in the GIF image format.

9.2.1 Algorithm

The scenario described by Welch’s 1984 paper[1] encodes sequences of 8-bit data as fixed-length 12-bit codes. The codes from 0 to 255 represent 1-character sequences consisting of the corresponding 8-bit character, and the codes 256 through 4095 are created in a dictionary for sequences encountered in the data as it is encoded. At each stage in compression, input bytes are gathered into a sequence until the next character would make a sequence for which there is no code yet in the dictionary. The code for the sequence (without that character) is added to the output, and a new code (for the sequence with that character) is added to the dictionary.

The idea was quickly adapted to other situations. In an image based on a color table, for example, the natural character alphabet is the set of color table indexes, and in the 1980s, many images had small color tables (on the order of 16 colors). For such a reduced alphabet, the full 12-bit codes yielded poor compression unless the image was large, so the idea of a variable-width code was introduced: codes typically start one bit wider than the symbols being encoded, and as the available code values are used up, the code width increases by 1 bit, up to some prescribed maximum (typically 12 bits). When the maximum code value is reached, encoding proceeds using the existing table, but new codes are not generated for addition to the table.

Further refinements include reserving a code to indicate that the code table should be cleared and restored to its initial state (a “clear code”, typically the first value immediately after the values for the individual alphabet characters), and a code to indicate the end of data (a “stop code”, typically one greater than the clear code). The clear code allows the table to be reinitialized after it fills up, which lets the encoding adapt to changing patterns in the input data.
Smart encoders can monitor the compression efficiency and clear the table whenever the existing table no longer matches the input well. Since the codes are added in a manner determined by the data, the decoder mimics building the table as it sees the resulting codes. It is critical that the encoder and decoder agree on which variety of LZW is being used: the size of the alphabet, the maximum table size (and code width), whether variable-width encoding is being used, the initial code size, whether to use the clear and stop codes (and what values they have). Most formats that employ LZW build this information into the format specification or provide explicit fields for them in a compression header for the data.

Encoding

A high level view of the encoding algorithm is shown here:

1. Initialize the dictionary to contain all strings of length one.

2. Find the longest string W in the dictionary that matches the current input.

3. Emit the dictionary index for W to output and remove W from the input.

4. Add W followed by the next symbol in the input to the dictionary.

5. Go to Step 2.

A dictionary is initialized to contain the single-character strings corresponding to all the possible input characters (and nothing else except the clear and stop codes if they're being used). The algorithm works by scanning through the input string for successively longer substrings until it finds one that is not in the dictionary. When such a string is found, the index for the string without the last character (i.e., the longest substring that is in the dictionary) is retrieved from the dictionary and sent to output, and the new string (including the last character) is added to the dictionary with the next available code. The last input character is then used as the next starting point to scan for substrings. In this way, successively longer strings are registered in the dictionary and made available for subsequent encoding as single output values. The algorithm works best on data with repeated patterns, so the initial parts of a message will see little compression. As the message grows, however, the compression ratio tends asymptotically to the maximum.[2]
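The encoding steps above can be sketched directly in Python. Codes are kept as plain integers here; packing them into a bitstream at a given width is a separate concern, handled later:

```python
def lzw_encode(data, alphabet):
    """LZW encoding per the steps above. The initial dictionary maps each
    single character of `alphabet` to its index; clear/stop codes omitted."""
    table = {ch: i for i, ch in enumerate(alphabet)}
    next_code = len(table)
    w, out = "", []
    for ch in data:
        if w + ch in table:
            w += ch                     # keep extending the current sequence
        else:
            out.append(table[w])        # emit code for longest known prefix
            table[w + ch] = next_code   # register the new, longer string
            next_code += 1
            w = ch                      # restart from the unmatched character
    if w:
        out.append(table[w])            # flush the final sequence
    return out
```

On the chapter's example string "TOBEORNOTTOBEORTOBEORNOT#" with a 27-symbol alphabet, this emits 17 codes, matching the worked example below.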

Decoding

The decoding algorithm works by reading a value from the encoded input and outputting the corresponding string from the initialized dictionary. In order to rebuild the dictionary in the same way as it was built during encoding, it also obtains the next value from the input and adds to the dictionary the concatenation of the current string and the first character of the string obtained by decoding the next input value, or the first character of the string just output if the next value can not be decoded (if the next value is unknown to the decoder, then it must be the value that will be added to the dictionary this iteration, and so its first character must be the same as the first character of the current string being sent to decoded output). The decoder then proceeds to the next input value (which was already read in as the “next value” in the previous pass) and repeats the process until there is no more input, at which point the final input value is decoded without any more additions to the dictionary. In this way the decoder builds up a dictionary which is identical to that used by the encoder, and uses it to decode subsequent input values. Thus the full dictionary does not need to be sent with the encoded data; just the initial dictionary containing the single-character strings is sufficient (and is typically defined beforehand within the encoder and decoder rather than being explicitly sent with the encoded data).

Variable-width codes

If variable-width codes are being used, the encoder and decoder must be careful to change the width at the same points in the encoded data, or they will disagree about where the boundaries between individual codes fall in the stream. In the standard version, the encoder increases the width from p to p + 1 when a sequence ω + s is encountered that is not in the table (so that a code must be added for it) but the next available code in the table is 2ᵖ (the first code requiring p + 1 bits). The encoder emits the code for ω at width p (since that code does not require p + 1 bits), and then increases the code width so that the next code emitted will be p + 1 bits wide.

The decoder is always one code behind the encoder in building the table, so when it sees the code for ω, it will generate an entry for code 2ᵖ − 1. Since this is the point where the encoder will increase the code width, the decoder must increase the width here as well: at the point where it generates the largest code that will fit in p bits.

Unfortunately some early implementations of the encoding algorithm increase the code width and then emit ω at the new width instead of the old width, so that to the decoder it looks like the width changes one code too early. This is called “Early Change”; it caused so much confusion that Adobe now allows both versions in PDF files, but includes an explicit flag in the header of each LZW-compressed stream to indicate whether Early Change is being used. Of the graphics file formats capable of using LZW compression, TIFF uses early change, while GIF and most others don't.

When the table is cleared in response to a clear code, both encoder and decoder change the code width after the clear code back to the initial code width, starting with the code immediately following the clear code.

Packing order

Since the codes emitted typically do not fall on byte boundaries, the encoder and decoder must agree on how codes are packed into bytes. The two common methods are LSB-First (“Least Significant Bit First”) and MSB-First (“Most Significant Bit First”). In LSB-First packing, the first code is aligned so that the least significant bit of the code falls in the least significant bit of the first stream byte, and if the code has more than 8 bits, the high order bits left over are aligned with the least significant bits of the next byte; further codes are packed with LSB going into the least significant bit not yet used in the current stream byte, proceeding into further bytes as necessary. MSB-first packing aligns the first code so that its most significant bit falls in the MSB of the first stream byte, with overflow aligned with the MSB of the next byte; further codes are written with MSB going into the most significant bit not yet used in the current stream byte. GIF files use LSB-First packing order. TIFF files and PDF files use MSB-First packing order.
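LSB-First packing can be sketched with a simple bit buffer. This toy version packs fixed-width codes only; a real LZW stream (as in GIF) would also widen the codes as the table fills:

```python
def pack_lsb_first(codes, width):
    """Pack fixed-width integer codes into bytes, least significant bit first:
    each code's low bit goes into the lowest unused bit of the current byte."""
    buf, nbits, out = 0, 0, bytearray()
    for code in codes:
        buf |= code << nbits   # append the code above the bits already buffered
        nbits += width
        while nbits >= 8:      # emit every completed byte
            out.append(buf & 0xFF)
            buf >>= 8
            nbits -= 8
    if nbits:
        out.append(buf & 0xFF)  # final partial byte, zero-padded at the top
    return bytes(out)
```

MSB-First packing is the mirror image: each code is shifted into the high end of the buffer instead, which is why TIFF and PDF streams are not byte-compatible with GIF's packing even for identical code sequences.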

9.2.2 Example

The following example illustrates the LZW algorithm in action, showing the status of the output and the dictionary at every stage, both in encoding and decoding the data. This example has been constructed to give reasonable compression on a very short message. In real text data, repetition is generally less pronounced, so longer input streams are typically necessary before the compression builds up efficiency.

The plaintext to be encoded (from an alphabet using only the capital letters) is: TOBEORNOTTOBEORTOBEORNOT#

The # is a marker used to show that the end of the message has been reached. There are thus 26 symbols in the plaintext alphabet (the 26 capital letters A through Z), plus the stop code #. We arbitrarily assign these the values 1 through 26 for the letters, and 0 for '#'. (Most flavors of LZW would put the stop code after the data alphabet, but nothing in the basic algorithm requires that. The encoder and decoder only have to agree what value it has.) A computer will render these as strings of bits. Five-bit codes are needed to give sufficient combinations to encompass this set of 27 values. The dictionary is initialized with these 27 values. As the dictionary grows, the codes will need to grow in width to accommodate the additional entries. A 5-bit code gives 2⁵ = 32 possible combinations of bits, so when the 33rd dictionary word is created, the algorithm will have to switch at that point from 5-bit strings to 6-bit strings (for all code values, including those which were previously output with only five bits). Note that since the all-zero code 00000 is used, and is labeled “0”, the 33rd dictionary entry will be labeled 32. (Previously generated output is not affected by the code-width change, but once a 6-bit value is generated in the dictionary, it could conceivably be the next code emitted, so the width for subsequent output shifts to 6 bits to accommodate that.)

The initial dictionary, then, will consist of the following entries:

Encoding

Buffer input characters in a sequence ω until ω + next character is not in the dictionary. Emit the code for ω, and add ω + next character to the dictionary. Start buffering again with the next character. (The string to be encoded is “TOBEORNOTTOBEORTOBEORNOT#".)

Unencoded length = 25 symbols × 5 bits/symbol = 125 bits
Encoded length = (6 codes × 5 bits/code) + (11 codes × 6 bits/code) = 96 bits

Using LZW has saved 29 bits out of 125, reducing the message by more than 23%. If the message were longer, then the dictionary words would begin to represent longer and longer sections of text, allowing repeated words to be sent very compactly.

Decoding

To decode an LZW-compressed archive, one needs to know in advance the initial dictionary used, but additional entries can be reconstructed as they are always simply concatenations of previous entries. At each stage, the decoder receives a code X; it looks X up in the table and outputs the sequence χ it codes, and it conjectures χ + ? as the entry the encoder just added – because the encoder emitted X for χ precisely because χ + ? was not in the table, and the encoder goes ahead and adds it. But what is the missing letter? It is the first letter in the sequence coded by the next code Z that the decoder receives. So the decoder looks up Z, decodes it into the sequence ω and takes the first letter z and tacks it onto the end of χ as the next dictionary entry. This works as long as the codes received are in the decoder’s dictionary, so that they can be decoded into sequences. What happens if the decoder receives a code Z that is not yet in its dictionary? Since the decoder is always just one code behind the encoder, Z can be in the encoder’s dictionary only if the encoder just generated it, when emitting the previous code X for χ. Thus Z codes some ω that is χ + ?, and the decoder can determine the unknown character as follows:

1. The decoder sees X and then Z.

2. It knows X codes the sequence χ and Z codes some unknown sequence ω.

3. It knows the encoder just added Z to code χ + some unknown character,

4. and it knows that the unknown character is the first letter z of ω.

5. But the first letter of ω (= χ + ?) must then also be the first letter of χ.

6. So ω must be χ + x, where x is the first letter of χ.

7. So the decoder figures out what Z codes even though it’s not in the table,

8. and upon receiving Z, the decoder decodes it as χ + x, and adds χ + x to the table as the value of Z.

This situation occurs whenever the encoder encounters input of the form cScSc, where c is a single character, S is a string and cS is already in the dictionary, but cSc is not. The encoder emits the code for cS, putting a new code for cSc into the dictionary. Next it sees cSc in the input (starting at the second c of cScSc) and emits the new code it just inserted. The argument above shows that whenever the decoder receives a code not in its dictionary, the situation must look like this. Although input of form cScSc might seem unlikely, this pattern is fairly common when the input stream is characterized by significant repetition. In particular, long strings of a single character (which are common in the kinds of images LZW is often used to encode) repeatedly generate patterns of this sort.
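A decoder following the steps above, including the cScSc special case, can be sketched as follows (lzw_decode is a hypothetical helper; the initial dictionary is passed in explicitly). The string "ABABABA" is a minimal cScSc example: its code stream contains a code the decoder has not yet defined when it arrives.

```python
def lzw_decode(codes, dictionary):
    # dictionary: initial code -> string mapping, extended below exactly
    # as the encoder extended its own copy.
    table = dict(dictionary)                # copy so the caller's map survives
    w = table[codes[0]]
    out = [w]
    for code in codes[1:]:
        if code in table:
            entry = table[code]
        else:
            # cScSc special case: the code is not in the table yet,
            # so it must decode to w + first letter of w.
            entry = w + w[0]
        out.append(entry)
        table[len(table)] = w + entry[0]    # conjectured new dictionary entry
        w = entry
    return "".join(out)

# 'ABABABA' encodes (with A -> 0, B -> 1) to [0, 1, 2, 4]; code 4 arrives
# before the decoder has defined it, exercising the special case:
lzw_decode([0, 1, 2, 4], {0: "A", 1: "B"})  # -> 'ABABABA'
```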

9.2.3 Further coding

The simple scheme described above focuses on the LZW algorithm itself. Many applications apply further encoding to the sequence of output symbols. Some package the coded stream as printable characters using some form of binary-to-text encoding; this will increase the encoded length and decrease the compression rate. Conversely, increased compression can often be achieved with an adaptive entropy encoder. Such a coder estimates the probability distribution for the value of the next symbol, based on the observed frequencies of values so far. A standard entropy encoding such as Huffman coding or arithmetic coding then uses shorter codes for values with higher probabilities.

9.2.4 Uses

LZW compression became the first widely used universal data compression method on computers. A large English text file can typically be compressed via LZW to about half its original size. LZW was used in the public-domain program compress, which became a more or less standard utility in Unix systems circa 1986. It has since disappeared from many distributions, both because it infringed the LZW patent and because gzip produced better compression ratios using the LZ77-based algorithm, but as of 2008 at least FreeBSD includes both compress and uncompress as a part of the distribution. Several other popular compression utilities also used LZW, or closely related methods. LZW became very widely used when it became part of the GIF image format in 1987. It may also (optionally) be used in TIFF and PDF files. (Although LZW is available in software, Acrobat by default uses DEFLATE for most text and color-table-based image data in PDF files.)

9.2.5 Patents

Main article: Graphics Interchange Format § Unisys and LZW patent enforcement

Various patents have been issued in the United States and other countries for LZW and similar algorithms. LZ78 was covered by U.S. Patent 4,464,650 by Lempel, Ziv, Cohn, and Eastman, assigned to Sperry Corporation, later Unisys Corporation, filed on August 10, 1981. Two US patents were issued for the LZW algorithm: U.S. Patent 4,814,746 by Victor S. Miller and Mark N. Wegman and assigned to IBM, originally filed on June 1, 1983, and U.S. Patent 4,558,302 by Welch, assigned to Sperry Corporation, later Unisys Corporation, filed on June 20, 1983. In 1993–94, and again in 1999, Unisys Corporation received widespread condemnation when it attempted to enforce licensing fees for LZW in GIF images. The 1993-1994 Unisys-Compuserve (Compuserve being the creator of the GIF format) controversy engendered a Usenet comp.graphics discussion Thoughts on a GIF-replacement file format, which in turn fostered an email exchange that eventually culminated in the creation of the patent-unencumbered Portable Network Graphics (PNG) file format in 1995. Unisys’s US patent on the LZW algorithm expired on June 20, 2003,[3] 20 years after it had been filed. Patents that had been filed in the United Kingdom, France, Germany, Italy, Japan and Canada all expired in 2004,[3] likewise 20 years after they had been filed.

9.2.6 Variants

• LZMW (1985, by V. Miller, M. Wegman)[4] – Searches input for the longest string already in the dictionary (the “current” match); adds the concatenation of the previous match with the current match to the dictionary. (Dictionary entries thus grow more rapidly; but this scheme is much more complicated to implement.) Miller and Wegman also suggest deleting low-frequency entries from the dictionary when the dictionary fills up.

• LZAP (1988, by James Storer)[5] – modification of LZMW: instead of adding just the concatenation of the previous match with the current match to the dictionary, add the concatenations of the previous match with each initial substring of the current match. (“AP” stands for “all prefixes”.) For example, if the previous match is “wiki” and the current match is “pedia”, then the LZAP encoder adds 5 new sequences to the dictionary: “wikip”, “wikipe”, “wikiped”, “wikipedi”, and “wikipedia”, where the LZMW encoder adds only the one sequence “wikipedia”. This eliminates some of the complexity of LZMW, at the price of adding more dictionary entries.

• LZWL is a syllable-based variant of LZW.
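The wiki/pedia example above can be written out directly (illustrative only; a real encoder tracks these matches inside its compression loop):

```python
prev_match, cur_match = "wiki", "pedia"

# LZMW adds a single entry: the concatenation of the two matches.
lzmw_new = [prev_match + cur_match]

# LZAP ("all prefixes") adds the previous match joined with every
# non-empty prefix of the current match.
lzap_new = [prev_match + cur_match[:i] for i in range(1, len(cur_match) + 1)]

# lzmw_new == ['wikipedia']
# lzap_new == ['wikip', 'wikipe', 'wikiped', 'wikipedi', 'wikipedia']
```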

9.2.7 See also

• LZ77 and LZ78
• LZMA
• Lempel–Ziv–Storer–Szymanski
• LZJB

9.2.8 References

[1] Welch, Terry (1984). “A Technique for High-Performance Data Compression” (PDF). Computer. 17 (6): 8–19. doi:10.1109/MC.1984.1659158.

[2] Ziv, J.; Lempel, A. (1978). “Compression of individual sequences via variable-rate coding” (PDF). IEEE Transactions on Information Theory. 24 (5): 530. doi:10.1109/TIT.1978.1055934.

[3] “LZW Patent Information”. About Unisys. Unisys. Archived from the original on June 26, 2009. Retrieved March 6, 2014.

[4] David Salomon, Data Compression – The complete reference, 4th ed., page 209

[5] David Salomon, Data Compression – The complete reference, 4th ed., page 212

9.2.9 External links

• Rosettacode wiki, algorithm in various languages
• U.S. Patent 4,558,302, Terry A. Welch, High speed data compression and decompression apparatus and method
• SharpLZW – C# open source implementation
• MIT OpenCourseWare: Lecture including LZW algorithm
• Mark Nelson, LZW Data Compression on Dr. Dobb's Journal (October 1, 1989)

9.3 Image file formats

“Image format” redirects here. For the camera sensor format, see Image sensor format. This article is about digital image formats used to store photographic and other images. For disk-image file formats, see Disk image. For digital file formats in general, see File format.

Image file formats are standardized means of organizing and storing digital images. Image files are composed of digital data in one of these formats that can be rasterized for use on a computer display or printer. An image file format may store data in uncompressed, compressed, or vector formats. Once rasterized, an image becomes a grid of pixels, each of which has a number of bits to designate its color equal to the color depth of the device displaying it.

9.3.1 Image file sizes

The size of raster image files is positively correlated with the resolution and image size (number of pixels) and the color depth (bits per pixel). Images can be compressed in various ways, however. A compression algorithm stores either an exact representation or an approximation of the original image in a smaller number of bytes that can be expanded back to its uncompressed form with a corresponding decompression algorithm. Images with the same number of pixels and color depth can have very different compressed file sizes. Even with exactly the same compression, number of pixels, and color depth, the graphical complexity of the original images may result in very different file sizes after compression due to the nature of compression algorithms. With some compression formats, images that are less complex may result in smaller compressed file sizes. This characteristic sometimes results in a smaller file size for some lossless formats than lossy formats. For example, graphically simple images (i.e. images with large continuous regions like line art or animation sequences) may be losslessly compressed into a GIF or PNG format and result in a smaller file size than a lossy JPEG format. Vector images, unlike raster images, can be any dimension independent of file size. File size increases only with the addition of more vectors. For example, a 640 × 480 pixel image with 24-bit color would occupy almost a megabyte of space: 640 × 480 × 24 = 7,372,800 bits = 921,600 bytes = 900 kB
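The arithmetic in the example can be checked directly:

```python
width, height, depth = 640, 480, 24     # pixels and bits per pixel

bits = width * height * depth           # 7,372,800 bits
nbytes = bits // 8                      # 921,600 bytes
kb = nbytes / 1024                      # 900 kB (binary kilobytes)
```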

9.3.2 Image file compression

There are two types of image file compression algorithms: lossless and lossy. Lossless compression algorithms reduce file size while preserving a perfect copy of the original uncompressed image. Lossless compression generally, but not always, results in larger files than lossy compression. Lossless compression should be used to avoid accumulating stages of re-compression when editing images. Lossy compression algorithms preserve a representation of the original uncompressed image that may appear to be a perfect copy, but it is not a perfect copy. Often lossy compression is able to achieve smaller file sizes than lossless compression. Most lossy compression algorithms allow for variable compression that trades image quality for file size.
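The defining property of lossless compression can be demonstrated with zlib's DEFLATE (used here as a stand-in lossless codec; the sample data is arbitrary): decompressing the compressed stream recovers the input bit for bit, so repeated save/load cycles accumulate no error.

```python
import zlib  # DEFLATE: a lossless compression scheme

data = bytes(range(256)) * 16          # arbitrary sample bytes
packed = zlib.compress(data, level=9)

# Lossless: decompression recovers the original exactly.
assert zlib.decompress(packed) == data
assert len(packed) < len(data)         # and the stored form is smaller
```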

9.3.3 Major graphic file formats

Including proprietary types, there are hundreds of image file types. The PNG, JPEG, and GIF formats are most often used to display images on the Internet. These graphic formats are listed and briefly described below, separated into the two main families of graphics: raster and vector. In addition to straight image formats, metafile formats are portable formats which can include both raster and vector information. Examples are application-independent formats such as WMF and EMF. The metafile format is an intermediate format. Most applications open metafiles and then save them in their own native format. Page description language refers to formats used to describe the layout of a printed page containing text, objects and images. Examples are PostScript, PDF and PCL.

Raster formats

JPEG/JFIF JPEG (Joint Photographic Experts Group) is a lossy compression method; JPEG-compressed images are usually stored in the JFIF (JPEG File Interchange Format) file format. The JPEG/JFIF filename extension is JPG or JPEG. Nearly every digital camera can save images in the JPEG/JFIF format, which supports eight-bit grayscale images and 24-bit color images (eight bits each for red, green, and blue). JPEG applies lossy compression to images, which can result in a significant reduction of the file size. Applications can determine the degree of compression to apply, and the amount of compression affects the visual quality of the result. When not too great, the compression does not noticeably affect or detract from the image’s quality, but JPEG files suffer generational degradation when repeatedly edited and saved. (JPEG also provides lossless image storage, but the lossless version is not widely supported.)

JPEG 2000 JPEG 2000 is a compression standard enabling both lossless and lossy storage. The compression methods used are different from the ones in standard JFIF/JPEG; they improve quality and compression ratios, but also require more computational power to process. JPEG 2000 also adds features that are missing in JPEG. It is not nearly as common as JPEG, but it is used currently in professional movie editing and distribution (some digital cinemas, for example, use JPEG 2000 for individual movie frames).

Exif The Exif (Exchangeable image file format) format is a file standard similar to the JFIF format with TIFF extensions; it is incorporated in the JPEG-writing software used in most cameras. Its purpose is to record and to standardize the exchange of images with image metadata between digital cameras and editing and viewing software. The metadata are recorded for individual images and include such things as camera settings, time and date, shutter speed, exposure, image size, compression, name of camera, and color information. When images are viewed or edited by image editing software, all of this image information can be displayed. The actual Exif metadata as such may be carried within different host formats, e.g. TIFF, JFIF (JPEG) or PNG. IFF-META is another example.

TIFF The TIFF (Tagged Image File Format) format is a flexible format that normally saves eight bits or sixteen bits per color (red, green, blue) for 24-bit and 48-bit totals, respectively, usually using either the TIFF or TIF filename extension. The tagged structure was designed to be easily extendible, and many vendors have introduced proprietary special-purpose tags – with the result that no one reader handles every flavor of TIFF file. TIFF compression can be lossy or lossless, depending on the technique chosen for storing the pixel data. Some offer relatively good lossless compression for bi-level (black & white) images. Some digital cameras can save images in TIFF format, using the LZW compression algorithm for lossless storage. The TIFF image format is not widely supported by web browsers. TIFF remains widely accepted as a photograph file standard in the printing business. TIFF can handle device-specific color spaces, such as the CMYK defined by a particular set of printing press inks. OCR (Optical Character Recognition) software packages commonly generate some form of TIFF image (often monochromatic) for scanned text pages.

GIF GIF (Graphics Interchange Format) is in normal use limited to an 8-bit palette, or 256 colors (while 24-bit color depth is technically possible).[1][2] GIF is most suitable for storing graphics with few colors, such as simple diagrams, shapes, logos, and cartoon style images, as it uses LZW lossless compression, which is more effective when large areas have a single color, and less effective for photographic or dithered images. Due to GIF’s simplicity and age, it achieved almost universal software support. Due to its animation capabilities, it is still widely used to provide image animation effects, despite its low compression ratio compared to modern video formats.

BMP The BMP file format (Windows bitmap) handles graphic files within the OS. Typically, BMP files are uncompressed, and therefore large and lossless; their advantage is their simple structure and wide acceptance in Windows programs.

PNG The PNG (Portable Network Graphics) file format was created as a free, open-source alternative to GIF. The PNG file format supports eight-bit paletted images (with optional transparency for all palette colors) and 24-bit truecolor (16 million colors) or 48-bit truecolor with and without alpha channel - while GIF supports only 256 colors and a single transparent color. Compared to JPEG, PNG excels when the image has large, uniformly colored areas. Even for photographs – where JPEG is often the choice for final distribution since its compression technique typically yields smaller file sizes – PNG is still well-suited to storing images during the editing process because of its lossless compression. PNG provides a patent-free replacement for GIF (though GIF is itself now patent-free), and can also replace many common uses of TIFF. Indexed-color, grayscale, and truecolor images are supported, plus an optional alpha channel. The Adam7 interlacing allows an early preview, even when only a small percentage of the image data has been transmitted. PNG can store gamma and chromaticity data for improved color matching on heterogeneous platforms. PNG is designed to work well in online viewing applications like web browsers and can be fully streamed with a progressive display option. PNG is robust, providing both full file integrity checking and simple detection of common transmission errors. Animated formats derived from PNG are MNG and APNG. The latter is supported by Mozilla Firefox and Opera and is backwards compatible with PNG.

PPM, PGM, PBM, and PNM Netpbm format is a family including the portable pixmap file format (PPM), the portable graymap file format (PGM) and the portable bitmap file format (PBM). These are either pure ASCII files or raw binary files with an ASCII header that provide very basic functionality and serve as a lowest common denominator for converting pixmap, graymap, or bitmap files between different platforms. Several applications refer to them collectively as PNM (Portable aNy Map).

WebP WebP is a new open image format that uses both lossless and lossy compression. It was designed by Google to reduce image file size to speed up web page loading: its principal purpose is to supersede JPEG as the primary format for photographs on the web. WebP is based on VP8's intra-frame coding and uses a container based on RIFF.

HDR raster formats Most typical raster formats cannot store HDR data (32 bit floating point values per pixel component), which is why some relatively old or complex formats are still predominant here, and worth mentioning separately. Newer alternatives are showing up, though. RGBE is the format for HDR images originating from Radiance and also supported by Adobe Photoshop.

HEIF The High Efficiency Image File Format (HEIF) is an image container format that was standardized by MPEG on the basis of the ISO base media file format. While HEIF can be used with any image compression format, the HEIF standard specifies the storage of HEVC intra-coded images and HEVC-coded image sequences taking advantage of inter-picture prediction.

JFIF JFIF was released into the public domain by C-Cube Microsystems. The “official” file format for JPEG files is SPIFF (Still Picture Interchange File Format), but by the time it was released, JFIF had already achieved wide acceptance. SPIFF, which has the ISO designation 10918-3, offers more versatile compression, color management, and metadata capacity than JPEG/JFIF, but it has little support. It may be superseded by JPEG 2000/DIG 2000 (ISO SC29/WG1; see Digital Imaging Group, “JPEG 2000 and the DIG: The Picture of Compatibility”).

BPG BPG (Better Portable Graphics) is a new image format. Its purpose is to replace the JPEG image format when quality or file size is an issue. Its main advantages are:

• High compression ratio. Files are much smaller than JPEG for similar quality.
• Supported by most Web browsers with a small JavaScript decoder (gzipped size: 76 KB).
• Based on a subset of the HEVC open video compression standard.
• Supports the same chroma formats as JPEG (grayscale, YCbCr 4:2:0, 4:2:2, 4:4:4) to reduce the losses during the conversion. An alpha channel is supported. The RGB, YCgCo and CMYK color spaces are also supported.
• Native support of 8 to 14 bits per channel for a higher dynamic range.
• Lossless compression is supported.
• Various metadata (such as EXIF) can be included.

Other raster formats

• CD5 (Chasys Draw Image)
• DEEP (IFF-style format used by TVPaint)
• ECW (Enhanced Compression Wavelet)
• FITS (Flexible Image Transport System)
• FLIF (Free Lossless Image Format) – a work-in-progress lossless image format which claims to outperform PNG, lossless WebP, lossless BPG and lossless JPEG 2000 in terms of compression ratio. It uses the MANIAC (Meta-Adaptive Near-zero Integer Arithmetic Coding) entropy encoding algorithm, a variant of the CABAC (context-adaptive binary arithmetic coding) entropy encoding algorithm.
• ICO, container for one or more icons (subsets of BMP and/or PNG)
• ILBM (IFF-style format for up to 32 bit in planar representation, plus optional 64 bit extensions)
• IMG (ERDAS IMAGINE Image)
• IMG (Graphics Environment Manager (GEM) image file; planar, run-length encoded)
• JPEG XR (New JPEG standard based on Microsoft HD Photo)
• Layered Image File Format for microscope image processing
• Nrrd (Nearly raw raster data)
• PAM (Portable Arbitrary Map), a late addition to the Netpbm family
• PCX (Personal Computer eXchange), obsolete
• PGF (Progressive Graphics File)
• PLBM (Planar Bitmap), proprietary Amiga format

• SGI

• SID (multiresolution seamless image database, MrSID)

• Sun Raster, an obsolete format

• TGA (TARGA), obsolete

• VICAR file format (NASA/JPL image transport format)

• XISF (Extensible Image Serialization Format)

Container formats of raster graphics editors These image formats contain various images, layers and objects, out of which the final image is to be composed.

• CPT (Corel Photo Paint)

• PSD (Adobe PhotoShop Document)

• PSP (Corel Paint Shop Pro)

• XCF (eXperimental Computing Facility format, native GIMP format)

Vector formats

Main article: Vector graphics

As opposed to the raster image formats above (where the data describes the characteristics of each individual pixel), vector image formats contain a geometric description which can be rendered smoothly at any desired display size. At some point, all vector graphics must be rasterized in order to be displayed on digital monitors. Vector images may also be displayed with analog CRT technology such as that used in some electronic test equipment, medical monitors, radar displays, laser shows and early video games. Plotters are printers that use vector data rather than pixel data to draw graphics.

CGM CGM (Computer Graphics Metafile) is a file format for 2D vector graphics, raster graphics, and text, and is defined by ISO/IEC 8632. All graphical elements can be specified in a textual source file that can be compiled into a binary file or one of two text representations. CGM provides a means of graphics data interchange for computer representation of 2D graphical information independent from any particular application, system, platform, or device. It has been adopted to some extent in the areas of technical illustration and professional design, but has largely been superseded by formats such as SVG and DXF.

Gerber format (RS-274X) The Gerber format (aka Extended Gerber, RS-274X) was developed by Gerber Systems Corp., now Ucamco, and is a 2D bi-level image description format. It is the de facto standard format used by printed circuit board or PCB software. It is also widely used in other industries requiring high-precision 2D bi-level images.[3]

SVG SVG (Scalable Vector Graphics) is an open standard created and developed by the World Wide Web Consortium to address the need (and attempts of several corporations) for a versatile, scriptable and all-purpose vector format for the web and otherwise. The SVG format does not have a compression scheme of its own, but due to the textual nature of XML, an SVG graphic can be compressed using a program such as gzip. Because of its scripting potential, SVG is a key component in web applications: interactive web pages that look and act like applications.
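The gzip point above is easy to see in practice: compressing an SVG byte string with Python's gzip module produces exactly the byte stream conventionally stored with the .svgz extension. The SVG document here is a deliberately repetitive, hypothetical example.

```python
import gzip

# A deliberately repetitive, hypothetical SVG document.
svg = (b'<svg xmlns="http://www.w3.org/2000/svg">'
       + b'<circle cx="50" cy="50" r="40"/>' * 100
       + b'</svg>')

svgz = gzip.compress(svg)      # an .svgz file is exactly this byte stream
assert len(svgz) < len(svg)    # textual XML compresses well
```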

Other 2D vector formats

• AI (Adobe Illustrator Artwork)
• CDR (CorelDRAW)
• DrawingML
• GEM metafiles (interpreted and written by the Graphics Environment Manager VDI subsystem)
• Graphics Layout Engine
• HPGL, introduced on Hewlett-Packard plotters, but generalized into a printer language
• HVIF (Haiku Vector Icon Format)
• MathML
• NAPLPS (North American Presentation Layer Protocol Syntax)
• ODG (OpenDocument Graphics)
• !DRAW, a native vector graphic format (in several backward compatible versions) for the RISC OS computer system begun by Acorn in the mid-1980s and still present on that platform today
• POV-Ray markup language
• Precision Graphics Markup Language, a W3C submission that was not adopted as a recommendation
• PSTricks and PGF/TikZ are languages for creating graphics in TeX documents
• ReGIS, used by DEC computer terminals
• Remote imaging protocol
• VML (Vector Markup Language)
• WMF / EMF (Windows Metafile / Enhanced Metafile)
• Xar, the format used in vector applications from Xara
• XPS (XML Paper Specification)

3D vector formats

• AMF - Additive Manufacturing File Format
• Asymptote - A language that lifts TeX to 3D.
• .blend - Blender
• COLLADA
• .dgn
• .dwf
• .dwg
• .dxf
• eDrawings
• .flt - OpenFlight
• HSF
• IGES

• IMML - Immersive Media Markup Language

• IPA

• JT

• .MA (Maya ASCII format)

• .MB (Maya Binary format)

• .OBJ (Alias|Wavefront file format)

• OpenGEX - Open Game Engine Exchange

• PRC

• STEP

• SKP

• STL - A stereolithography format

• U3D - Universal 3D file format

• VRML - Virtual Reality Modeling Language

• XAML

• XGL

• XVL

• xVRML

• .3D

• 3DF

• .3DM

• .3ds - Autodesk 3D Studio

• 3DXML

• X3D - Vector format used in 3D applications from Xara

Compound formats (see also Metafile)

These are formats containing both pixel and vector data, and possibly other data, e.g. the interactive features of PDF.

• EPS (Encapsulated PostScript)

• PDF (Portable Document Format)

• PostScript, a page description language with strong graphics capabilities

• PICT (Classic Macintosh QuickDraw file)

• SWF (Shockwave Flash)

• XAML, a user interface language using vector graphics for images.

Stereo formats

• MPO The Multi Picture Object (.mpo) format consists of multiple JPEG images, defined by the Camera & Imaging Products Association (CIPA).

• PNS The PNG Stereo (.pns) format consists of a side-by-side image based on PNG (Portable Network Graph- ics).

• JPS The JPEG Stereo (.jps) format consists of a side-by-side image format based on JPEG.

9.3.4 References

[1] Andreas Kleinert (2007). “GIF 24 Bit (truecolor) extensions”. Retrieved 23 March 2012.

[2] Philip Howard. “True-Color GIF Example”. Retrieved 23 March 2012.

[3] “Gerber File Format Specification”. Ucamco.

9.4 Comparison of graphics file formats

This is a comparison of image file formats.

9.4.1 General

Ownership of the format and related information.

9.4.2 Technical details

Chapter 10

Day 10

10.1 TIFF

This article is about the file format. For other uses, see TIFF (disambiguation). “TIF” redirects here. For other uses, see TIF (disambiguation).

Tagged Image File Format, abbreviated TIFF or TIF, is a computer file format for storing raster graphics images, popular among graphic artists, the publishing industry,[1] and photographers. The TIFF format is widely supported by image-manipulation applications, by publishing and page layout applications, and by scanning, faxing, word processing, optical character recognition and other applications.[2] The format was created by Aldus Corporation for use in desktop publishing. It published the latest version 6.0 in 1992, subsequently updated with an Adobe Systems copyright after the latter acquired Aldus in 1994. Several Aldus/Adobe technical notes have been published with minor extensions to the format, and several specifications have been based on TIFF 6.0, including TIFF/EP (ISO 12234-2), TIFF/IT (ISO 12639),[3][4][5] TIFF-F (RFC 2306) and TIFF-FX (RFC 3949).[6]

10.1.1 History

TIFF was created as an attempt to get desktop scanner vendors of the mid-1980s to agree on a common scanned image file format, in place of a multitude of proprietary formats. In the beginning, TIFF was only a binary image format (only two possible values for each pixel), because that was all that desktop scanners could handle. As scanners became more powerful, and as desktop computer disk space became more plentiful, TIFF grew to accommodate grayscale images, then color images. Today, TIFF, along with JPEG and PNG, is a popular format for high color-depth images. The first version of the TIFF specification was published by Aldus Corporation in the autumn of 1986 after two major earlier draft releases. It can be labeled as Revision 3.0. It was published after a series of meetings with various scanner manufacturers and software developers. In April 1987 Revision 4.0 was released and it contained mostly minor enhancements. In October 1988 Revision 5.0 was released and it added support for palette color images and LZW compression.[7] The specification originally used the names Tagged Image File Format and Tag Image File Format (Revision 4.0) as well as their acronym TIFF, but later dropped all references to the full name (Revision 5.0, 6.0). More recent ISO publications again refer to the full name Tag Image File Format (see references elsewhere in this article). Still, the version Tagged Image File Format seems to be several times more prevalent in non-official usage, even by Adobe.

10.1.2 Features and options

TIFF is a flexible, adaptable file format for handling images and data within a single file, by including the header tags (size, definition, image-data arrangement, applied image compression) defining the image’s geometry. A TIFF file, for example, can be a container holding JPEG (lossy) and PackBits (lossless) compressed images. A TIFF file also can include a vector-based clipping path (outlines, croppings, image frames). The ability to store image data in a lossless format makes a TIFF file a useful image archive, because, unlike standard JPEG files, a TIFF file using lossless compression (or none) may be edited and re-saved without losing image quality. This is not the case when using the TIFF as a container holding compressed JPEG. Other TIFF options are layers and pages. TIFF offers the option of using LZW compression, a lossless data-compression technique for reducing a file’s size. Use of this option was limited by patents on the LZW technique until their expiration in 2004. The TIFF 6.0 specification consists of the following parts:[7]

• Introduction (contains information about TIFF Administration, usage of Private fields and values, etc.)
• Part 1: Baseline TIFF
• Part 2: TIFF Extensions
• Part 3: Appendices

Part 1: Baseline TIFF

When TIFF was introduced its extensibility provoked compatibility problems. The flexibility in encoding gave rise to the joke that TIFF stands for Thousands of Incompatible File Formats.[8] To avoid these problems, every TIFF reader was required to read Baseline TIFF. Among other things, Baseline TIFF does not include layers, or compressed JPEG or LZW images. Baseline TIFF is formally known as TIFF 6.0, Part 1: Baseline TIFF. The following is an incomplete list of required Baseline TIFF features:[7]

Multiple subfiles TIFF readers must be prepared for multiple/multi-page images (subfiles) per TIFF file, although they are not required to actually do anything with images after the first one. There may be more than one Image File Directory (IFD) in a TIFF file. Each IFD defines a subfile. One use of subfiles is to describe related images, such as the pages of a facsimile document. A Baseline TIFF reader is not required to read any IFD beyond the first one.[7]

Strips A baseline TIFF image is composed of one or more strips. A strip (or band) is a subsection of the image composed of one or more rows (horizontal rows of pixels). Each strip may be compressed independently of the entire image, and each begins on a byte boundary. If the image height is not evenly divisible by the number of rows in the strip, the last strip may contain fewer rows. If strip definition tags are omitted, the image is assumed to contain a single strip.
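The strip arithmetic just described (ceiling division, with a possibly shorter final strip) can be sketched as follows; strip_layout is a hypothetical helper, not part of any TIFF library.

```python
import math

def strip_layout(image_length, rows_per_strip):
    """Number of strips and the row count of the last (possibly shorter) strip."""
    strips = math.ceil(image_length / rows_per_strip)
    last_rows = image_length - (strips - 1) * rows_per_strip
    return strips, last_rows

# A 100-row image in 46-row strips: two full strips plus a short 8-row strip.
strip_layout(100, 46)  # -> (3, 8)
```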

Compression Baseline TIFF readers must handle the following three compression schemes:[7]

• No compression
• CCITT Group 3 1-Dimensional Modified Huffman RLE
• PackBits compression - a form of run-length encoding

Image types Baseline TIFF image types are: bilevel, grayscale, palette-color, and RGB full-color images.[7]

Byte order Every TIFF file begins with a two-byte indicator of byte order: “II” for little-endian (a.k.a. “Intel byte ordering”, circa 1980)[9] or “MM” for big-endian (a.k.a. “Motorola byte ordering”, circa 1980)[9] byte ordering. The next two-byte word contains the format version number, which has always been 42 for every version of TIFF (e.g., TIFF v5.0 and TIFF v6.0).[10] All words, double words, etc., in the TIFF file are assumed to be in the indicated byte order. The TIFF 6.0 specification states that compliant TIFF readers must support both byte orders (II and MM); writers may use either.[11]
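The header check described above can be sketched in Python (tiff_byte_order is an illustrative helper, not part of any TIFF library): read the two-byte order indicator, then verify the version word 42 in that byte order.

```python
import struct

def tiff_byte_order(header: bytes) -> str:
    """Classify the first 4 bytes of a TIFF file ('II'/'MM' plus magic 42)."""
    order = header[:2]
    if order == b"II":
        endian = "<"          # little-endian ("Intel byte ordering")
    elif order == b"MM":
        endian = ">"          # big-endian ("Motorola byte ordering")
    else:
        raise ValueError("not a TIFF header")
    (magic,) = struct.unpack(endian + "H", header[2:4])
    if magic != 42:
        raise ValueError("bad TIFF version word")
    return "little" if order == b"II" else "big"

tiff_byte_order(b"II\x2a\x00")  # -> 'little'
tiff_byte_order(b"MM\x00\x2a")  # -> 'big'
```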

Other TIFF fields TIFF readers must be prepared to encounter and ignore private fields not described in the TIFF specification. TIFF readers must not refuse to read a TIFF file if some optional fields do not exist.[7]

Part 2: TIFF Extensions

Many TIFF readers support tags in addition to those in Baseline TIFF, but not every reader supports every extension.[12][13][14][15] As a consequence, Baseline TIFF features became the lowest common denominator for the TIFF format. Baseline TIFF features are extended in TIFF Extensions (defined in the TIFF 6.0, Part 2 specification), but extensions can also be defined in private tags. The TIFF Extensions are formally known as TIFF 6.0, Part 2: TIFF Extensions. Here are some examples of TIFF extensions defined in the TIFF 6.0 specification:[7]

Compression

• CCITT T.4 bi-level encoding

• CCITT T.6 bi-level encoding

• LZW Compression scheme

• JPEG-based compression (TIFF compression scheme 7) uses the DCT (Discrete Cosine Transform) introduced in 1974 by N. Ahmed, T. Natarajan and K. R. Rao; see Reference 1 in Discrete cosine transform. For more details, see the Adobe documentation.

Image types

• CMYK Images

• YCbCr Images

• HalftoneHints

• Tiled Images

• CIE L*a*b* Images

Image Trees A baseline TIFF file can contain a sequence of images (IFD). Typically, all the images are related but represent different data, such as the pages of a document. In order to explicitly support multiple views of the same data, the SubIFD tag was introduced.[16] This allows the images to be defined along a tree structure. Each image can have a sequence of children, each child being itself an image. The typical usage is to provide thumbnails or several versions of an image in different colour spaces.
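The SubIFD mechanism turns the flat IFD chain into a tree. A toy Python sketch (using plain dicts rather than real tag parsing) shows the recursive counting such a tree implies:

```python
def count_images(ifd: dict) -> int:
    # ifd is a dict whose optional "subifds" entry lists child images,
    # mirroring the tree structure introduced by the SubIFD tag
    return 1 + sum(count_images(child) for child in ifd.get("subifds", []))

# A page image with two alternative views of the same data
page = {"subifds": [{"name": "thumbnail"}, {"name": "Lab version"}]}
print(count_images(page))  # → 3
```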

Tiles A TIFF image may also be composed of a number of tiles. All tiles in the same image have the same dimensions and may be compressed independently of the entire image, similar to strips (see above). Tiled images are part of TIFF 6.0, Part 2: TIFF Extensions, so the support for tiled images is not required in Baseline TIFF readers.

Other extensions According to the TIFF 6.0 specification (Introduction), all TIFF files using proposed TIFF extensions that are not approved by Adobe as part of Baseline TIFF (typically for specialized uses of TIFF that do not fall within the domain of publishing, general graphics, or picture interchange) should either not be called TIFF files or should be marked in some way so that they will not be confused with mainstream TIFF files.

Private tags

Developers can apply for a block of “private tags” to enable them to include their own proprietary information inside a TIFF file without causing problems for file interchange. TIFF readers are required to ignore tags that they do not recognize, and a registered developer’s private tags are guaranteed not to clash with anyone else’s tags or with the standard set of tags defined in the specification. Private tags are numbered in the range 32,768 and higher. Private tags are reserved for information meaningful only for some organization, or for experiments with a new compression scheme within TIFF. Upon request, the TIFF administrator (currently Adobe) will allocate and register one or more private tags for an organization, to avoid possible conflicts with other organizations. Organizations and developers are discouraged from choosing their own tag numbers arbitrarily, because doing so could cause serious compatibility problems. However, if there is little or no chance that TIFF files will escape a private environment, organizations and developers are encouraged to consider using TIFF tags in the “reusable” 65,000-65,535 range. There is no need to contact Adobe when using numbers in this range.[7]
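The numeric ranges above can be captured in a tiny Python helper (illustrative only; 34017 is an example of a registered private tag):

```python
def tag_class(tag: int) -> str:
    # Ranges per the TIFF 6.0 specification: registered private tags start
    # at 32768; 65000-65535 is the "reusable" range for internal use only
    if 65000 <= tag <= 65535:
        return "reusable"
    if tag >= 32768:
        return "private (registered)"
    return "standard"

print(tag_class(259), tag_class(34017), tag_class(65001))
# → standard private (registered) reusable
```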

Internet Media Type

The MIME type image/tiff (defined in RFC 3302) without an application parameter is used for Baseline TIFF 6.0 files or to indicate that it is not necessary to identify a specific subset of TIFF or TIFF extensions. The optional “application” parameter (Example: Content-type: image/tiff; application=foo) is defined for image/tiff to identify a particular subset of TIFF and TIFF extensions for the encoded image data, if it is known. According to RFC 3302, specific TIFF subsets or TIFF extensions used in the application parameter must be published as an RFC.[17]

MIME type image/tiff-fx (defined in RFC 3949 and RFC 3950) is based on TIFF 6.0 with TIFF Technical Notes TTN1 (Trees) and TTN2 (Replacement TIFF/JPEG specification). It is used for Internet fax compatible with the ITU-T Recommendations for Group 3 black-and-white, grayscale and color fax.

TIFF Compression Tag

The TIFF tag 259 (0x0103) stores information about the compression method. The default value is 1 (no compression). Most TIFF writers and TIFF readers support only a subset of the defined TIFF compression schemes.
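A few Compression values are fixed by the Baseline part of the specification; the minimal Python mapping below lists those baseline codes (extension schemes such as LZW and JPEG add further values not shown here):

```python
# Baseline values of TIFF tag 259 (Compression), per the TIFF 6.0 specification;
# TIFF extensions define many additional codes beyond these
BASELINE_COMPRESSION = {
    1: "no compression",
    2: "CCITT Group 3 1-D Modified Huffman RLE",
    32773: "PackBits run-length encoding",
}

print(BASELINE_COMPRESSION.get(1))  # → no compression
```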

BigTIFF

The classic TIFF format uses 32-bit offsets, which limits file size to 4 GiB (4,294,967,296 bytes). BigTIFF is a TIFF variant that uses 64-bit offsets and therefore supports much larger files.[29] The BigTIFF file format specification was implemented in 2007 in development releases of LibTIFF version 4.0, which was finally released as stable in December 2011. Application support for BigTIFF remains limited.
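The difference is visible in the file header: classic TIFF carries version number 42, while BigTIFF uses version number 43 and 8-byte offsets (the value 43 comes from the BigTIFF design documents, an assumption beyond the text above). A small Python sniffer:

```python
import struct

def tiff_flavor(header: bytes) -> str:
    # First two bytes select byte order, next two hold the version number
    fmt = {"II": "<", "MM": ">"}[header[:2].decode("ascii")]
    (version,) = struct.unpack(fmt + "H", header[2:4])
    return {42: "classic TIFF (32-bit offsets, 4 GiB limit)",
            43: "BigTIFF (64-bit offsets)"}[version]

print(tiff_flavor(b"II\x2b\x00"))  # → BigTIFF (64-bit offsets)
```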

Digital preservation

Adobe holds the copyright on the TIFF specification (TIFF 6.0) along with the two supplements that have been published. All of these documents can be found on the Adobe TIFF Resources page. The fax standard in RFC 3949 is based on these TIFF specifications.[30]

TIFF files that strictly use the basic tag sets defined in TIFF 6.0, restrict compression to the methods identified there, and are adequately tested and verified by multiple sources are suitable for storing documents. Commonly seen issues in the content and document management industry arise when TIFF structures contain proprietary headers, are not properly documented, contain “wrappers” or other containers around the TIFF datasets, or include improper or improperly implemented compression technologies.

Variants of the TIFF format can be used within document imaging and content/document management systems using CCITT Group IV 2D compression, which supports black-and-white (bitonal, monochrome) images, among other compression technologies that support color. When storage capacity and network bandwidth were greater concerns than they are in today’s server environments, high-volume scanning operations captured documents in black and white (rather than color or grayscale) to conserve storage capacity.

The inclusion of the SampleFormat tag in TIFF 6.0 allows TIFF files to handle advanced pixel data types, including integer images with more than 8 bits per channel and floating-point images. This tag made TIFF 6.0 a viable format for scientific image processing where extended precision is required. An example is the use of TIFF to store images acquired with scientific CCD cameras that provide up to 16 bits per photosite of intensity resolution.
Storing a sequence of images in a single TIFF file is also possible, and is allowed under TIFF 6.0, provided the rules for multi-page images are followed.

TIFF/IT

TIFF/IT is a file format structured to digitally send data for print-ready pages that have been created on high-end prepress systems.[33] The TIFF/IT specification (ISO 12639) describes a multiple-file format which can describe a single page per file set.[34] TIFF/IT files are different from common TIFF files and they are not interchangeable.[35][36][37] The goal in developing TIFF/IT was to carry forward the original IT8 magnetic tape formats into a media-independent version. TIFF/IT is based on the Adobe TIFF 6.0 specification; it both extends TIFF 6.0 by adding additional tags and restricts it by limiting some tags and the values within tags. Not all valid TIFF/IT images are valid TIFF 6.0 images.[38]

TIFF/IT defines image file formats for encoding colour continuous-tone picture images, colour line art images, high-resolution continuous-tone images, monochrome continuous-tone images, binary picture images, binary line art images, screened data, and images of composite final pages.[4]

There is no MIME type defined for TIFF/IT. The MIME type image/tiff should not be used for TIFF/IT files, because TIFF/IT does not conform to Baseline TIFF 6.0 and the widely deployed TIFF 6.0 readers are not able to read TIFF/IT. The MIME type image/tiff (defined in RFC 3302) without an application parameter is used for Baseline TIFF 6.0 files or to indicate that it is not necessary to identify a specific subset of TIFF or TIFF extensions. The application parameter should be used with image/tiff to distinguish TIFF extensions or TIFF subsets. According to RFC 3302, specific TIFF subsets or TIFF extensions must be published as an RFC. There is no such RFC for TIFF/IT, and there is also no plan by the ISO committee that oversees the TIFF/IT standard to register TIFF/IT either with a parameter to image/tiff or as a new separate MIME type.[17]

TIFF/IT files TIFF/IT consists of a number of different files and it cannot be created or opened by common desktop applications.[17][35][39] TIFF/IT-P1 file sets usually consist of the following files:[4][5][40]

• Final Page (FP)

• Continuous Tone image (CT)

• Line Work image (LW)

• High resolution Continuous-tone files (HC - optional)

TIFF/IT also defines the following files:[4]

• Monochrome continuous-tone Picture images (MP)

• Binary Picture images (BP)

• Binary Line-art images (BL)

• Screened Data (SD)

Some of these data types are partly compatible with the corresponding definitions in the TIFF 6.0 specification. The Final Page (FP) allows the various files needed to define a complete page to be grouped together - it provides a mechanism for creating a package that includes separate image layers (of types CT, LW, etc.) to be combined to create the final printed image. Its use is recommended but not required. There must be at least one subfile in an FP file, but no more than one of each type. It typically contains a CT subfile and an LW subfile.[4][38][41] The color space for this standard is CMYK, but other color spaces and the use of ICC profiles are also supported.[4]

TIFF/IT compression TIFF/IT makes no provision for compression within the file structure itself, but there are no restrictions.[38] (For example, it is allowed to compress the whole file structure in a ZIP archive.) LW files use a specific compression scheme known as run-length encoding for LW (Compression tag value 0x8080). HC files likewise use a specific run-length encoding for HC (Compression tag value 0x8081). The TIFF/IT-P1 specification does not allow the use of compression within the CT file. Further TIFF/IT compression schemes are defined in the specification.[32]

TIFF/IT P1 ISO 12639:1998 introduced TIFF/IT-P1 (Profile 1), a direct subset of the full TIFF/IT standard (previously defined in ANSI IT8.8–1993). This subset grew out of the mutual realization by both the standards and software development communities that a full implementation of the TIFF/IT standard by any one vendor was both unlikely (because of its complexity) and unnecessary (because Profile 1 would cover most applications for digital ad delivery). Almost all TIFF/IT files in digital advertising were distributed as TIFF/IT-P1 file sets in 2001.[42][43] When people talk about TIFF/IT, they usually mean the P1 standard.[5] Some of the restrictions of TIFF/IT-P1 (compared to TIFF/IT) are:[41]

• Uses CMYK only (when appropriate)

• It is pixel interleaved (when appropriate)

• Has a single choice of image orientation

• Has a single choice of dot range

• Restricted compression methods

TIFF/IT-P1 is a simplified conformance level of TIFF/IT and it maximizes the compatibility between Color Electronic Prepress Systems (CEPS) and Desk Top Publishing (DTP) worlds.[38][44] It provides a clean interface for the proprietary CEPS formats such as the Scitex CT/LW format.

TIFF/IT P2 Because TIFF/IT-P1 had a number of limitations, an extended format was developed. ISO 12639:2004 introduced a new extended conformance level, TIFF/IT-P2 (Profile 2), which added a number of functions to TIFF/IT-P1, such as:[5]

• CMYK spot colours only (when appropriate)

• Support for the compression of CT and BP data (JPEG and Deflate)

• Support for multiple LW and CT files in a single file

• Support for copydot files through a new file type called SD (Screened Data)

• There was some effort to create the possibility of concatenating FP, LW, and CT files into a single file called the GF (Group Final) file, but this was not defined in a draft version of ISO 12639:2004.[32]

This format was not widely used.

Private tags

The TIFF/IT specification preserved the TIFF possibility for developers to utilize private tags. The TIFF/IT specification is very precise regarding how these private tags should be treated: they should be parsed, but ignored.[45] Private tags in the TIFF/IT-P1 specification were originally intended to provide developers with ways to add specific functionality for specific applications. Private tags can be used by developers (e.g., Scitex) to preserve specific printing values or other functionality. Private tags are typically labelled with tag numbers greater than or equal to 32768. All private tags must be requested from Adobe (the TIFF administrator) and registered. In 1992, the DDAP (Digital Distribution of Advertising for Publication, later Digital Directions in Applications for Production) developed their requirement statement for digital ad delivery. This was presented to the ANSI-accredited CGATS (Committee for Graphic Arts Technologies Standards) for development of an accredited file format standard for the delivery of digital ads. CGATS reviewed the alternatives for this purpose, and the TIFF format seemed like the ideal candidate, except for the fact that it could not handle certain required functionalities. CGATS asked Aldus (then the TIFF administrator) for a block of their own TIFF private tags in order to implement what eventually became TIFF/IT. For example, the ability to identify the sequence of the colors is handled by tag 34017, the Color Sequence Tag.[45] TIFF/IT was created to satisfy the need for a transport-independent method of encoding raster data in the IT8.1, IT8.2 and IT8.5 standards.

Standards

TIFF/IT was defined in the ANSI IT8.8–1993 standard in 1993 and later revised in the International Standard ISO 12639:1998 - Prepress digital data exchange – Tag image file format for image technology (TIFF/IT).[3] The ISO standard replaces ANSI IT8.8–1993. It specifies a media-independent means for prepress electronic data exchange.[46] The ISO 12639:2004 (Second edition) standard for TIFF/IT superseded ISO 12639:1998. It was later extended in ISO 12639:2004 / Amd. 1:2007 - Use of JBIG2-Amd2 compression in TIFF/IT.

10.1.3 See also

• Comparison of graphics file formats

• Libtiff, a widely used open-source library and utilities for reading, writing, and manipulating TIFF files

• DNG

• GeoTIFF

• Image file formats

• STDU Viewer

• Windows Photo Viewer

• T.37 (ITU-T recommendation)

10.1.4 References

[1] Murray, James D.; van Ryper, William (April 1996). “Encyclopedia of Graphics File Formats” (Second ed.). O'Reilly. ISBN 1-56592-161-5. Retrieved 2014-03-07.

[2] TIFF was chosen as the native format for raster graphics in the NeXTstep operating system; this TIFF support carried over into Mac OS X.

[3] “TIFF/IT ISO/IEC 12639”. ISO. 1998.

[4] “TIFF/IT for Image Technology”. The National Digital Information Infrastructure and Preservation Program at the Library of Congress. 2006-10-03.

[5] “The TIFF/IT file format”. Retrieved 2011-02-19.

[6] “File Format for Internet Fax”. 2005. Retrieved 2011-02-19. This file format specification is commonly known as TIFF for Fax eXtended (TIFF-FX).

[7] TIFF Revision 6.0 Final — June 3, 1992, Retrieved on 2009-07-10

[8] Trauth, Martin H. (2006). MATLAB Recipes For Earth Sciences. Springer. p. 198. ISBN 3-540-27983-0.

[9] David Beecher, author of dozens of image processing engines over the last 30 years. Any TIFF file can be viewed with a HEX editor to confirm this.

[10] Aldus/Microsoft (1988-08-08). “1) Structure”. TIFF. Revision 5.0. Aldus Corporation and Microsoft Corporation. Archived from the original on 2008-12-04. Retrieved 2009-06-29. The number 42 was chosen for its deep philosophical significance.

[11] Adobe Developers Association (1992-06-03). “Section 7: Additional baseline TIFF Requirements”. TIFF (PDF). Revision 6.0. Adobe Systems Incorporated. p. 26. Retrieved 2009-06-29. ‘MM’ and ‘II’ byte order. TIFF readers must be able to handle both byte orders. TIFF writers can do whichever is most convenient or efficient.

[12] Microsoft. “You cannot preview scanned TIFF file in Windows Picture and Fax Viewer”. Retrieved 2011-02-28.

[13] Microsoft. “You Cannot View TIFF Images Using Windows Picture and Fax Viewer”. Retrieved 2011-02-28.

[14] Microsoft. “Handling Microsoft Office Document Scanning TNEF and TIFFs in ”. Retrieved 2011-02-28.

[15] “About Tagged Image File Format (TIFF)". Retrieved 2011-03-04.

[16] TIFF Specification Supplement 1, Retrieved 2013-08-04

[17] CIP4 (2008). “JDF Specification - Appendix H MimeType and MimeTypeVersion Attributes”. Retrieved 2011-03-03.

[18] “Baseline TIFF Tag Compression”. Retrieved 2011-02-26.

[19] “LibTIFF - TIFF 6.0 Specification Coverage”. Retrieved 2011-02-28.

[20] “JSTOR/Harvard Object Validation Environment - TIFF Compression Schemes”. Archived from the original on January 30, 2011. Retrieved 2011-02-26.

[21] “JSTOR/Harvard Object Validation Environment - JHOVE TIFF-hul Module”. Archived from the original on December 10, 2010. Retrieved 2011-02-26.

[22] “TIFF Fields”. Retrieved 2011-02-27.

[23] Library of Congress Collections. “Tags for TIFF and Related Specifications”. Retrieved 2011-02-27.

[24] “GIMP Documentation - Saving as TIFF”. Retrieved 2011-02-27.

[25] “IrfanView - History of changes”. Retrieved 2011-02-27.

[26] Succeed project (2014). Recommendations for metadata and data formats for online availability and long-term preservation (PDF). p. 68. If files are actively managed in a digital repository, it is possible to consider using either LZW or ZIP lossless compression for the TIFF files. JPEG compression should not be used within the TIFF format. [...] Most of the respondents use uncompressed images (64%), if compression is used then LZW is mostly used.

[27] “LEADTOOLS TIFF Format SDK”. Retrieved 2011-07-04.

[28]

[29] “Extending LibTiff library with support for the new BigTIFF format”.

[30] “TIFF, Revision 6.0”. Digital Preservation. Library of Congress. 2014-01-08. Retrieved 2014-03-11.

[31] “ISO 12639:2004 - Graphic technology - Prepress digital data exchange - Tag image file format for image technology (TIFF/IT)". Retrieved 2011-03-03.

[32] ISO (2002), DRAFT INTERNATIONAL STANDARD ISO/DIS 12639 - Graphic technology — Prepress digital data exchange — Tag image file format for image technology (TIFF/IT) - Revision of first edition (ISO 12639:1998) (PDF), retrieved 2011-03-07

[33] “Glossary of Printing Terms - TIFF/IT”. Retrieved 2011-03-01.

[34] CIP3 application note (PDF), retrieved 2011-03-01

[35] Tiff/It Questions and Answers (PDF), retrieved 2011-03-01

[36] Introduction to PDF/X, retrieved 2011-03-01

[37] “Tiff/It P1 Specifications”. Retrieved 2011-03-03. Note: TIFF/IT-P1 is not equivalent to a Photoshop® Tiff!

[38] DDAP, TIFF/IT-P1, PDF-X/1 (PDF), 1998, archived from the original (PDF) on February 15, 2006, retrieved 2011-03-01

[39] DDAP Association (2003). “TIFF/IT Implementers”. Archived from the original on April 25, 2005. Retrieved 2011-03-03.

[40] Harlequin RIP - manual for a commercial TIFF/IT plugin (PDF), archived from the original (PDF) on February 20, 2011, retrieved 2011-03-02

[41] A software manual with information about TIFF/IT (PDF), archived from the original (PDF) on September 20, 2011

[42] DDAP Position Statement - TIFF/IT as a File Format for Delivery of Digital Advertising - October, 2001, October 2001, archived from the original on 2004-01-11, retrieved 2011-03-03

[43] DDAP Position Statement - TIFF/IT as a File Format for Delivery of Digital Advertising - October, 2001 (PDF), October 2001, archived from the original on March 21, 2003, retrieved 2011-03-03

[44] “TIFF/IT-P1”. Retrieved 2011-03-01.

[45] DDAP Association (2002). “TIFF/IT Private Tags”. Archived from the original on April 28, 2003. Retrieved 2011-03-03.

[46] “Glossary of Printing Terms - TIFF/IT-P1”. Retrieved 2011-03-01.

10.1.5 External links

• Adobe TIFF Resources page: Adobe links to the specification and main TIFF resources

• LibTIFF Home Page: Widely used library used for reading and writing TIFF files as well as TIFF file processing command line tools

• TIFF File Format FAQ and TIFF Tag Reference: Everything you always wanted to know about the TIFF File Format but were afraid to ask

• TIFF description at Digital Preservation (The Library of Congress)

• TIFF Revision 4.0: Specification for revision 4.0, in HTML (warning: for historical purposes only, the TIFF 6.0 spec contains the full 4.0 revision)

• TIFF Revision 5.0: Specification for revision 5.0, in HTML (warning: for historical purposes only, the TIFF 6.0 spec contains the full 5.0 revision)

• TIFF Revision 6.0: Specification for revision 6.0, in PDF (warning: there is an outdated and flawed section (JPEG compression), corrected in supplements, and there are additions to this PDF too – for the full specification, see the Adobe TIFF Resources page)

• RFC 3302 - image/tiff, RFC 3949 and RFC 3950 - image/tiff-fx, RFC 2306 - Tag Image File Format (TIFF) - F Profile for Facsimile, RFC 1314 - legacy exchange of images in the Internet.

• Code Tiff Tag Reader - Easy readable code of a TIFF tag reader in Mathworks Matlab (Tiff 5.0/6.0)

• AlternaTIFF - Free in-browser TIFF viewer

• eiStream Annotation (also known as Wang or Kodak Annotation). Developed by eiStream.

“eiStream Annotation Specification, Version 1.00.06”. Archived from the original on 2003-01-24. Retrieved 2013-05-14.

• ADEO Imaging Annotation

“Multi-Page TIFF Editor - History of changes - TIFF tags”. Retrieved 2013-05-14.

10.2 Raw image format

This article is about a digital photography topic. For the storage virtualization topic, see IMG (file format). “.srf” redirects here. For the ATL file type, see ATL Server § SRF files. “Camera raw” redirects here. It is not to be confused with Adobe Camera Raw.

A camera raw image file contains minimally processed data from the image sensor of a digital camera, an image scanner, or a motion picture film scanner.[1][2] Raw files are so named because they are not yet processed and therefore are not ready to be printed or edited with a bitmap graphics editor. Normally, the image is processed by a raw converter in a wide-gamut internal colorspace where precise adjustments can be made before conversion to a “positive” file format such as TIFF or JPEG for storage, printing, or further manipulation. This often encodes the image in a device-dependent colorspace. There are dozens, if not hundreds, of raw formats in use by different models of digital equipment (such as cameras or film scanners).[3]

10.2.1 Rationale

Raw image files are sometimes called digital negatives, as they fulfill the same role as negatives in film photography: that is, the negative is not directly usable as an image, but has all of the information needed to create an image. Likewise, the process of converting a raw image file into a viewable format is sometimes called developing a raw image, by analogy with the film development process used to convert photographic film into viewable prints. The selection of the final image rendering is part of the process of white balancing and color grading.

Like a photographic negative, a raw digital image may have a wider dynamic range or color gamut than the eventual final image format, and it preserves most of the information of the captured image. The purpose of raw image formats is to save, with minimum loss of information, the data obtained from the sensor and the conditions surrounding the capture of the image (the metadata). Raw image formats are intended to capture as closely as possible (i.e., at the best of the specific sensor’s performance) the radiometric characteristics of the scene, that is, physical information about the light intensity and color of the scene. Most raw image file formats store information sensed according to the geometry of the sensor’s individual photoreceptive elements (sometimes called pixels) rather than points in the expected final image: sensors with hexagonal element displacement, for example, record information for each of their hexagonally displaced cells, which decoding software will eventually transform into the rectangular geometry during “digital developing”.

10.2.2 File contents

Raw files contain the information required to produce a viewable image from the camera’s sensor data. The structure of raw files often follows a common pattern:

• A short file header which typically contains an indicator of the byte-ordering of the file, a file identifier and an offset into the main file data

• Camera sensor metadata which is required to interpret the sensor image data, including the size of the sensor, the attributes of the CFA and its color profile

• Image metadata which is required for inclusion in any CMS environment or database. These include the exposure settings, camera/scanner/lens model, date (and, optionally, place) of shoot/scan, authoring information, and other data. Some raw files contain a standardized metadata section with data in Exif format.

• An image thumbnail

• Optionally a reduced-size image in JPEG format, which can be used for a quick preview

• In the case of motion picture film scans, either the timecode, keycode or frame number in the file sequence which represents the frame sequence in a scanned reel. This item allows the file to be ordered in a frame sequence (without relying on its filename).

• The sensor image data
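The layout above can be summarized as a data structure. The following Python dataclass is purely hypothetical (no real raw format uses these field names); it simply mirrors the sections listed:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RawFile:
    """Hypothetical container mirroring the common raw-file sections above."""
    byte_order: str            # header: byte-ordering indicator
    data_offset: int           # header: offset into the main file data
    sensor_metadata: dict      # CFA layout, sensor size, color profile
    image_metadata: dict       # Exif-style exposure/camera/date fields
    thumbnail: bytes           # small preview image
    preview_jpeg: Optional[bytes]  # optional reduced-size JPEG preview
    sensor_data: bytes         # the raw sensor image data itself

r = RawFile("II", 8, {"cfa": "RGGB"}, {"iso": 100}, b"", None, b"\x00" * 16)
print(r.sensor_metadata["cfa"])  # → RGGB
```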

Many raw file formats, including IIQ (Phase One), 3FR (Hasselblad), DCR, K25, KDC (Kodak), CR2 (Canon), ERF (Epson), MEF (Mamiya), MOS (Leaf), NEF (Nikon), ORF (Olympus), PEF (Pentax), RW2 (Panasonic) and ARW, SRF, SR2 (Sony), are based on the TIFF file format.[4] These files may deviate from the TIFF standard in a number of ways, including the use of a non-standard file header, the inclusion of additional image tags and the encryption of some of the tagged data. Panasonic’s raw converter corrects geometric distortion and chromatic aberration on cameras such as the LX3,[5][6][7] with the necessary correction information presumably included in the raw file.[8] Phase One's raw converter also offers corrections for geometric distortion, chromatic aberration, purple fringing and keystone correction, emulating the shift capability of tilt-shift lenses in software and specially designed hardware, on most raw files from over 100 different cameras.[9][10] The same holds for Canon’s DPP application, at least for its more expensive cameras such as all EOS DSLRs and the G series of compact cameras. DNG, the Adobe digital negative format, is an extension of the TIFF 6.0 format, is compatible with TIFF/EP, and uses various open formats and standards, including Exif metadata, XMP metadata, IPTC metadata, CIE XYZ coordinates, ICC profiles, and JPEG.[11]

Sensor image data

In digital photography, the raw file plays the role that photographic film plays in film photography. Raw files thus contain the full resolution (typically 12- or 14-bit) data as read out from each of the camera’s image sensor pixels.

The camera’s sensor is almost invariably overlaid with a color filter array (CFA), usually a Bayer filter, consisting of a mosaic of a 2x2 matrix of red, green, blue and (second) green filters. One variation on the Bayer filter is the RGBE filter of the Sony Cyber-shot DSC-F828, which exchanged the green in the RG rows with “emerald”[12] (a blue-green[13] or cyan[14] color). Other sensors, such as the Foveon X3 sensor, capture information directly in RGB form (using three pixel sensors in each location). These RGB raw data still need to be processed to make an image file, because the raw RGB values correspond to the responses of the sensors, not to a standard color space like sRGB. These data do not need to be demosaiced, however.

Flatbed and film scanner sensors are typically straight narrow RGB or RGBI (where “I” stands for the additional infrared channel for automatic dust removal) strips that are swept across an image. The HDRi raw data format is able to store the infrared raw data, which can be used for infrared cleaning, as an additional 16-bit channel. The remainder of the discussion about raw files applies to them as well. (Some scanners do not allow the host system access to the raw data at all, as a speed compromise. The raw data are processed very rapidly inside the scanner to select out the best part of the available dynamic range, so only the result is passed to the computer for permanent storage, reducing the amount of data transferred and therefore the bandwidth requirement for any given speed of image throughput.)

To obtain an image from a raw file, this mosaic of data must be converted into standard RGB form. This is often referred to as “raw development”. When converting from the four-sensor 2x2 Bayer-matrix raw form into RGB pixels, the green pair is used to control the luminance detail of the processed output pixel, while the red and blue, which each have half as many samples, are used mostly for the more slowly varying chroma component of the image.
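The role of the green pair can be shown with a deliberately crude single-tile "demosaic" in Python. Real converters interpolate across neighboring tiles rather than collapsing each 2x2 tile to one pixel; an RGGB pattern is assumed here:

```python
def bayer_2x2_to_rgb(tile):
    # tile is [[R, G], [G, B]] raw sensor values (RGGB Bayer pattern assumed).
    # Crude illustration: average the green pair (the luminance-carrying
    # samples), take the single R and B samples directly.
    (r, g1), (g2, b) = tile
    return (r, (g1 + g2) / 2, b)

print(bayer_2x2_to_rgb([[200, 120], [130, 40]]))  # → (200, 125.0, 40)
```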
If raw format data is available, it can be used in high-dynamic-range imaging conversion, as a simpler alternative to the multi-exposure HDR approach of capturing three separate images, one underexposed, one correctly exposed and one overexposed, and “overlaying” them on top of one another.

Standardization

Providing a detailed and concise description of the content of raw files is highly problematic. There is no single raw format; formats can be similar or radically different. Different manufacturers use their own proprietary and typically undocumented formats, which are collectively known as raw format. Often they also change the format from one camera model to the next. Several major camera manufacturers, including Nikon, Canon and Sony, encrypt portions of the file in an attempt to prevent third-party tools from accessing them.[15]

This industry-wide situation of inconsistent formatting has concerned many photographers who worry that their valuable raw photos may someday become inaccessible, as computer operating systems and software programs become obsolete and abandoned raw formats are dropped from new software. The availability of high-quality open-source software which decodes raw image formats, particularly dcraw, has helped to alleviate these concerns. An essay by Michael Reichmann and Juergen Specht stated “there are two solutions – the adoption by the camera industry of A: Public documentation of RAW [sic] formats; past, present and future, or, more likely B: Adoption of a universal RAW [sic] format”.[16][17][18] “Planning for [US] Library of Congress Collections” identifies raw-file formats as “less desirable file formats”, and identifies DNG as a suggested alternative.[19]

DNG is the only raw image format for which industry-wide buy-in is being sought. It is based upon, and compatible with, the ISO standard raw image format ISO 12234-2, TIFF/EP, and is being used by ISO in their revision of that standard. The ISO standard raw image format is ISO 12234-2, better known as TIFF/EP. (TIFF/EP also supports “non-raw”, or “processed”, images.) TIFF/EP provided a basis for the raw image formats of a number of cameras.
For example, Nikon's NEF raw files are based on TIFF/EP, and include a tag which identifies the version of TIFF/EP they are based on.[20] Adobe’s DNG raw file format was based on TIFF/EP, and the DNG specification states “DNG ... is compatible with the TIFF-EP standard”.[21] Several cameras use DNG as their raw image format, so in that limited sense they use TIFF/EP too.[22] Adobe Systems launched this DNG raw image format in September 2004. By September 2006, several camera manufacturers had started to announce support for DNG in newer camera models, including Leica, Samsung, Ricoh, Pentax, Hasselblad (native camera support); and, Better Light (export).[23] The Leica Digital-Modul-R (DMR) was first to use DNG as its native format.[24] In September 2009 Adobe stated that there were no known intellectual property encumbrances or license requirements for DNG.[25] (There is a “Digital Negative (DNG) Specification Patent License”,[26] but it does not actually state that there are any patents held on DNG, and the September 2009 statement was made at least 4 years after this license was published).

TIFF/EP began its 5-year revision cycle in 2006.[27] Adobe offered the DNG specification to ISO to be part of ISO’s revised TIFF/EP standard.[28][29] A progress report in October 2008 from ISO about the revision of TIFF/EP stated that the revision "... currently includes two “interoperability-profiles,” “IP 1” for processed image data, using ".TIF” extension, and “IP 2” for “raw” image data, ".DNG” extension”.[30] It is “IP 2” that is relevant here. A progress report in September 2009 states that “This format will be similar to DNG 1.3, which serves as the starting point for development.”[31] DNG has been used by open-source developers.[15] Use by camera makers varies: the largest companies such as Canon, Nikon, Sony, and some others, do not use DNG. Smaller companies and makers of “niche” cameras who might otherwise have difficulty getting support from software companies frequently use DNG as their native raw image format. Pentax uses DNG as an optional alternative to their own raw image format. There are 15 or more such companies, even including a few that specialize in movie cameras.[22] In addition, most Canon point & shoot cameras can support DNG by using CHDK.

10.2.3 Processing

See also: Color image pipeline

To be viewed or printed, the output from a camera’s image sensor has to be processed, that is, converted to a photographic rendering of the scene, and then stored in a standard raster graphics format such as JPEG. This processing, whether done in-camera or later in a raw-file converter, involves a number of operations, typically including:[32][33]

• decoding – image data of raw files are typically encoded for compression purposes, but often also for obfuscation (e.g. raw files from Canon[34] or Nikon cameras).[35]

• demosaicing – interpolating the partial raw data received from the color-filtered image sensor into a matrix of colored pixels.

• defective pixel removal – replacing data in known bad locations with interpolations from nearby locations

• white balancing – accounting for the color temperature of the light that was used to take the photograph

• noise reduction – trading off detail for smoothness by removing small fluctuations

• color translation – converting from the camera native color space defined by the spectral sensitivities of the image sensor to an output color space (typically sRGB for JPEG)

• tone reproduction[36][37] – the scene luminance captured by the camera sensors and stored in the raw file (with a dynamic range of typically 10 or more bits) needs to be rendered for pleasing effect and correct viewing on low-dynamic-range monitors or prints; the tone-reproduction rendering often includes separate tone mapping and gamma compression steps.

• compression – for example JPEG compression

Note that demosaicing is only performed for CFA sensors; it is not required for 3CCD or Foveon X3 sensors. Cameras and image processing software may also perform additional processing to improve image quality, for example:

• removal of systematic noise – bias frame subtraction and flat-field correction

• dark frame subtraction

• optical correction – lens distortion, vignetting, chromatic aberration and color fringing correction

• contrast manipulation

• increasing visual acuity by unsharp masking

• dynamic range compression – lighten shadow regions without blowing out highlight regions
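As a rough sketch of the typical operations listed above, the following example (hypothetical: it assumes an RGGB Bayer layout, normalized data, and made-up white-balance gains, and uses the crude "half-size" demosaic rather than real interpolation) chains demosaicing, white balance, and gamma compression:

```python
import numpy as np

def demosaic_half(mosaic):
    """Collapse each 2x2 RGGB cell into one RGB pixel (the crude "half-size"
    shortcut, similar to what dcraw's -h option does)."""
    r  = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b  = mosaic[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])

def render(mosaic, wb_gains=(2.0, 1.0, 1.5), gamma=2.2):
    """Turn a normalized (0..1) Bayer mosaic into an 8-bit RGB rendering.
    The white-balance gains and gamma value are illustrative, not from any
    real camera profile."""
    rgb = demosaic_half(mosaic.astype(np.float64))
    rgb = rgb * np.asarray(wb_gains)       # white balance
    rgb = np.clip(rgb, 0.0, 1.0)
    rgb = rgb ** (1.0 / gamma)             # gamma compression for display
    return (rgb * 255).round().astype(np.uint8)

# A flat 4x4 mosaic of 12-bit sensor values, normalized by the 12-bit maximum:
raw12 = np.full((4, 4), 1024, dtype=np.uint16)
image = render(raw12 / 4095.0)             # 2x2 RGB image, dtype uint8
```

A real converter would use edge-aware demosaicing, a camera color matrix, and a proper tone curve; the point here is only that the steps compose in a fixed order on the linear sensor data.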

When a camera saves a raw file it defers most of this processing; typically the only processing performed is the removal of defective pixels (the DNG specification requires that defective pixels are removed before creating the file[38]). Some camera manufacturers do additional processing before saving raw files; for example, Nikon has been criticized by astrophotographers for applying noise reduction before saving the raw file.[39] Some raw formats also allow nonlinear quantization.[40][41] This nonlinearity allows the compression of the raw data without visible degradation of the image by removing invisible and irrelevant information from the image. Although some information is discarded, this has nothing to do with (visible) noise reduction.
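The nonlinear quantization described above can be sketched as a companding curve. The square-root law and the 14-to-10-bit parameters below are illustrative assumptions, not taken from any particular camera's format:

```python
import numpy as np

# Hypothetical square-root companding: map 14-bit linear sensor values
# (0..16383) onto 10-bit codes (0..1023). The quantization step grows with
# signal level, roughly tracking photon shot noise, so the discarded
# precision stays below the noise floor.
def compand(x):
    x = np.asarray(x, dtype=np.float64)
    return np.round(np.sqrt(x / 16383.0) * 1023.0).astype(np.uint16)

def expand(code):
    """Inverse mapping back to linear 14-bit values (as a decoder would do)."""
    code = np.asarray(code, dtype=np.float64)
    return np.round((code / 1023.0) ** 2 * 16383.0).astype(np.uint16)
```

Because the step size grows with signal level, the round-trip error stays small relative to the shot noise at that level, which is why the lost precision is not visible even though information is discarded.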

Benefits

Nearly all digital cameras can process the image from the sensor into a JPEG file using settings for white balance, color saturation, contrast, and sharpness that are either selected automatically or entered by the photographer before taking the picture. Cameras that produce raw files save these settings in the file, but defer the processing. This results in an extra step for the photographer, so raw is normally only used when additional computer processing is intended. However, raw has numerous advantages over JPEG such as:

• Many more shades of colors compared to JPEG files - raw files have 12 or 14 bits of intensity information per channel (4096-16384 shades), compared to JPEG’s gamma-compressed 8 bits (256 shades).

• Higher image quality. Because all the calculations (such as applying gamma correction, demosaicing, white balance, brightness, contrast, etc.) used to generate pixel values (in RGB format for most images) are performed in one step on the base data, the resultant pixel values will be more accurate and exhibit less posterization.

• Bypassing of undesired steps in the camera’s processing, including sharpening and noise reduction

• JPEG images are typically saved using a lossy compression format (though a lossless JPEG compression is now available). Raw formats typically use lossless compression or high-quality lossy compression.

• Finer control. Raw conversion software allows users to manipulate more parameters (such as lightness, white balance, hue, saturation, etc.) and do so with greater variability. For example, the white point can be set to any value, not just discrete preset values like “daylight” or “incandescent”. Furthermore, the user can typically see a preview while adjusting these parameters.

• The color space can be set to whatever is desired.

• Different demosaicing algorithms can be used, not just the one coded into the camera.

• The contents of raw files include more information, and potentially higher quality, than the converted results, in which the rendering parameters are fixed, the color gamut is clipped, and there may be quantization and compression artifacts.

• Large transformations of the data, such as increasing the exposure of a dramatically under-exposed photo, result in fewer visible artifacts when done from raw data than when done from already rendered image files. Raw data leave more scope for both corrections and artistic manipulations, without resulting in images with visible flaws such as posterization.

• All the changes made on a raw image file are non-destructive; that is, only the metadata that controls the rendering is changed to make different output versions, leaving the original data unchanged.

• To some extent, raw-format photography eliminates the need to use the HDRI technique, allowing a much better control over the mapping of the scene intensity range into the output tonal range, compared to the process of automatically mapping to JPEG or other 8-bit representation.
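The tonal-level arithmetic behind these benefits is simple enough to verify directly. The figures below treat all values as linear, which is a simplification (JPEG's 8 bits are gamma-compressed), so they only illustrate the orders of magnitude:

```python
# Distinct tonal levels per channel for common raw bit depths vs. 8-bit JPEG.
levels = {bits: 2 ** bits for bits in (8, 12, 14)}
print(levels)  # {8: 256, 12: 4096, 14: 16384}

# Pushing exposure by +n EV multiplies linear values by 2**n, so the number
# of usable levels below clipping shrinks by the same factor. (Simplified:
# JPEG's 8 bits are gamma-compressed, not linear, so its real behavior
# under a push differs, but the headroom gap remains large.)
def usable_levels(bits, ev_push):
    return 2 ** bits // 2 ** ev_push

print(usable_levels(14, 2))  # 4096 levels left after a +2 EV push
print(usable_levels(8, 2))   # 64
```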

Drawbacks

• Camera raw file size is typically 2–6 times larger than JPEG file size.[42] While use of raw formats avoids the compression artifacts inherent in JPEG, fewer images can fit on a given memory card. However, the large sizes and low prices of modern memory cards mitigate this. Burst-mode shooting also tends to be slower and shorter due to the larger file size.

• Most raw formats implement lossless data compression to reduce the size of the files without affecting image quality. But some others use lossy data compression where quantization and filtering are performed on the image data.[40][41] Sony’s lossy 11+7 bit delta compression of raw data causes posterization under certain conditions.[43] Several Nikon cameras let photographers choose between no compression, lossless compression or lossy compression for their raw images. Red Digital Cinema Camera Company introduced .r3d REDCODE RAW with a compression ratio from 3:1 to 18:1, depending on resolution and frame rate.[44]

• The standard raw image format (ISO 12234-2, TIFF/EP) is not widely accepted. DNG, the potential candidate for a new standard format, has not been adopted by many major camera companies. (See "Standardization" section). Numerous different raw formats are currently in use and new raw formats keep appearing, while others are abandoned.[45]

• Because of the lack of widespread adoption of a standard raw format, more specialized software may be required to open raw files than for standardized formats like JPEG or TIFF. Software developers have to update their products frequently to support the raw formats of the latest cameras, but open-source implementations like dcraw make this easier.

• The time taken in the image workflow is an important factor when choosing between raw and ready-to-use image formats. With modern photo editing software the additional time needed to process raw images has been greatly reduced, but it still requires an extra step in the workflow compared with using out-of-camera JPEGs.

10.2.4 Software support

Cameras that support raw files typically come with proprietary software for conversion of their raw image data into standard RGB images. Other processing and conversion programs and plugins are available from vendors that have either licensed the technology from the camera manufacturer or reverse-engineered the particular raw format and provided their own processing algorithms.

Operating system support

Apple Mac OS X and iOS

In January 2005, Apple released iPhoto 5, which offered basic support for viewing and editing many raw file formats. In April 2005, Apple’s OS X 10.4 brought raw support to the operating system’s ImageIO framework, enabling raw support automatically in the majority of OS X applications, both from Apple (such as Preview, OS X’s PDF and image viewing application, and Aperture, a photo post-production software package for professionals) and from third-party applications that make use of the ImageIO framework. Semi-regular updates to OS X generally include updated support for new raw formats introduced in the intervening months by camera manufacturers. In 2016, Apple announced that iOS 10 would allow capturing raw images on selected hardware, and that third-party applications would be able to edit raw images through the operating system’s framework.[46]

Microsoft Windows

Windows Camera Codec Pack

Microsoft supplies the free Windows Camera Codec Pack for Windows XP and later versions of Windows, to integrate raw file viewing and printing into some Windows tools.[47] The codecs allow native viewing of raw files from a variety of specific cameras in Windows Explorer / File Explorer and Windows Photo Gallery / Windows Live Photo Gallery, in Windows Vista and later.[48] As of October 2016, Microsoft had not released an updated version since April 2014, which supported some specific cameras by the following manufacturers: Canon, Casio, Epson, Fujifilm, Kodak, Konica Minolta, Leica, Nikon, Olympus, Panasonic, Pentax, Samsung, and Sony.[48]

Windows Imaging Component (WIC)

Main article: Windows Imaging Component

Windows supports the Windows Imaging Component (WIC) codec standard. WIC was available as a stand-alone downloadable program for Windows XP Service Pack 2, and built into Windows XP Service Pack 3, Windows Vista, and later versions. Windows Explorer / File Explorer, and Windows Live Photo Gallery / Windows Photo Gallery can view raw formats for which the necessary WIC codecs are installed. Canon, Nikon, Sony, Olympus and Pentax have released WIC codecs for their cameras, although some manufacturers only provide codec support for the 32-bit versions of Windows.[49] Commercial DNG WIC codecs are also available from Ardfry Imaging,[50] and others; and FastPictureViewer Professional installs a set of WIC-enabled image decoders.[51][52]

Android 5.0, introduced in late 2014, allows applications to capture raw images, which is useful in low-light situations.[53]

Free and open source software

• darktable is a raw-workflow tool for Linux and other open Unix-like operating systems. It features native 32-bit floating point processing and a plugin architecture.

• dcraw is a program which reads most raw formats and can be made to run on operating systems that most raw-processing software does not support (such as Unix). Libraw[54] is an API library based on dcraw, offering a more convenient interface for reading and converting raw files. HDR PhotoStudio and AZImage[55] are some of the commercial applications that use Libraw. Jrawio is another API library, written in pure Java code and compliant with the standard Java Image I/O API.

• digiKam is an advanced digital photo management application for Linux, Windows, and Mac OS X that supports raw processing.

• ExifTool supports the reading, writing and editing of metadata in raw image files. ExifTool supports many different types of metadata including Exif, GPS, IPTC, XMP, JFIF, GeoTIFF, ICC Profile, Photoshop IRB, FlashPix, AFCP and ID3, as well as the maker notes of many digital cameras.

• ImageMagick, a popular software suite for image manipulation and conversion, reads many different raw file formats.[56] ImageMagick is available for Linux/Unix, Mac OS, Windows, and other platforms.

• LightZone is a photo editing program providing the ability to edit many raw formats natively. Most tools are raw converters, but LightZone allows a user to edit a raw file as if it were TIFF or JPEG. The project was discontinued in September 2011[57] and reinstated as an open source project in December 2012.

• Rawstudio is a raw format developer.

• RawTherapee is a raw developer supporting Linux, OS X and Windows operating systems. It features a native 32-bit floating point pipeline.

• is an image organizer available for all major operating systems with the ability to view and edit raw images and has built-in social networking upload capability.

• UFRaw is a frontend which uses dcraw as a back end. It can be used as a GIMP plugin and is available for most operating systems.

Proprietary software

In addition to those listed under operating system support, above, the commercial software described below supports raw formats.

Dedicated raw converters

The following products were launched as raw processing software to process a wide range of raw files, and have this as their main purpose:

• Adobe Photoshop Lightroom

• Bibble Pro (now Corel AfterShot Pro)

• Capture One[58]

• DxO Optics Pro

• Hasselblad's Phocus (relies on operating system support to process non-Hasselblad files)

• Photo Ninja

• Silkypix Developer Studio

• MagicRaw

Others

• ACDSee Pro is photo management and editing software that supports the raw formats of 21 camera manufacturers.[59]

• Adobe Photoshop supports raw formats (as of version CS2).

• DNG Viewer is a free Windows (32-bit) viewer based on dcraw. The very simple viewer is installed as a RAW viewer, supports some lossless operations, and can save raw images as BMP, JPEG, PNG, or TIFF.[60]

• FastRawViewer is a dedicated raw viewer that runs on Mac and Windows, and currently claims to support all raw formats except Foveon.[61]

• Helicon Filter supports raw formats.

• IrfanView is a viewer / basic editor with support for raw files.

• Konvertor support for raw formats is based on dcraw.

• Paint Shop Pro contains raw support, although as in the case of most editors, updates to the program may be necessary to attain compatibility with newer raw formats as they are released.

• PhotoLine supports raw formats.

• Picasa (development discontinued) is a free editor and organizer from Google. It can read and display many raw formats, but like iPhoto, Picasa provides only limited tools for processing the data in a raw file.

• SilverFast supports raw formats.

• XnView support for raw formats is mostly based on dcraw.

HTML5 browser-based apps

A new class of raw file processing tools appeared with the development of HTML5-based rich Internet applications.

• Raw.pics.io is able to render and apply basic adjustments to raw and DNG files.

10.2.5 Raw filename extensions and respective camera manufacturers

• .3fr (Hasselblad)

• .ari (Arri Alexa)

• .arw .srf .sr2 (Sony)

• .bay (Casio)

• .crw .cr2 (Canon)

• .cap .iiq .eip (Phase One)

• .dcs .dcr .drf .k25 .kdc (Kodak)

• .dng (Adobe)

• .erf (Epson)

• .fff (Imacon/Hasselblad raw)

• .mef (Mamiya)

• .mdc (Minolta, Agfa)

• .mos (Leaf)

• .mrw (Minolta, Konica Minolta)

• .nef .nrw (Nikon)

• .orf (Olympus)

• .pef .ptx (Pentax)

• .pxn (Logitech)

• .R3D (RED Digital Cinema)

• .raf (Fuji)

• .raw .rw2 (Panasonic)

• .raw .rwl .dng (Leica)

• .rwz (Rawzor)

• .srw (Samsung)

• .x3f (Sigma)
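Since the extension is often the only immediately visible clue to a raw file's origin, a first-pass lookup can be built from the list above. The mapping below is a hypothetical, partial sketch; extensions alone are ambiguous (.raw is used by both Panasonic and Leica, and .dng by many brands), so real software inspects the file contents rather than trusting the name:

```python
from pathlib import Path

# Partial extension-to-manufacturer mapping, taken from the list above.
# Illustrative only: ambiguous extensions (.raw, .dng) are flagged rather
# than resolved.
RAW_MAKERS = {
    ".3fr": "Hasselblad", ".arw": "Sony", ".cr2": "Canon", ".crw": "Canon",
    ".nef": "Nikon", ".nrw": "Nikon", ".orf": "Olympus", ".pef": "Pentax",
    ".raf": "Fuji", ".rw2": "Panasonic", ".srw": "Samsung", ".x3f": "Sigma",
    ".dng": "DNG (multiple makers)", ".raw": "ambiguous (Panasonic or Leica)",
}

def guess_maker(filename):
    """Guess the camera maker from a raw file's extension (first pass only)."""
    return RAW_MAKERS.get(Path(filename).suffix.lower(), "unknown")
```

For example, `guess_maker("IMG_0001.CR2")` yields "Canon", while an unknown extension falls through to "unknown".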

10.2.6 Raw bitmap files

Less commonly, raw may also refer to a generic image file format containing only pixel color values. For example, “Photoshop Raw” files (.raw) contain 8-bits-per-channel RGB data in top-to-bottom, left-to-right pixel order. Dimensions must be entered manually when such files are re-opened, or a square image is assumed. Due to its simplicity, this format is very open and compatible, though its lack of metadata and run-length encoding limits its usefulness, especially in photography and graphic design, where color management and extended bit depths are important and large images are common.

10.2.7 See also

• List of cameras supporting a raw format

10.2.8 References

[1] http://www.luminous-landscape.com/tutorials/understanding-series/u-raw-files.shtml

[2] “Camera Raw Formats”. Digital Preservation. Library of Congress. 2006-10-04. Retrieved 2014-03-11.

[3] Decoding raw digital photos in Linux

[4] “Exif Tool, Supported File Types”.

[5] The Online Photographer: Panasonic LX3 Barrel Distortion Controversy

[6] Panasonic LX3 Lens Distortion

[7] “Panasonic LX3 Lens Distortion”. Seriouscompacts.com. Retrieved 2011-12-11.

[8] Panasonic LX7 Review - Imaging Resource

[9] “Review: Capture One 6 Pro”. IT Enquirer. Retrieved 5 October 2011.

[10] “Phase One Capture One 6 Pro Review”. ePhotoZine. Retrieved 5 October 2011.

[11] Adobe: DNG Specification

[12] “Realization of natural color reproduction in Digital Still Cameras, closer to the natural sight perception of the human eye”.

[13] “Sony Japan announces new RGB+E image sensors”. imaging-resource.com. July 16, 2003.

[14] “Sony announce new RGBE CCD”. dpreview.com. 15 July 2003.

[15] “Raw storm in a teacup?". Dpreview.com. 2005-04-27. Retrieved 2007-12-09. Dave Coffin, creator of the dcraw program, discusses some of his successful reverse-engineering in this interview, and mentions his enthusiasm for the DNG format.

[16] Reichmann, Michael; Specht, Juergen (May 2005). “The RAW Flaw (at The Luminous Landscape)".

[17] Reichmann, Michael; Specht, Juergen (May 2005). “The RAW Flaw (at The Luminous Landscape)" (DOC).

[18] Reichmann, Michael; Specht, Juergen (May 2005). “The RAW Flaw (at The Luminous Landscape)" (PDF).

[19] Planning for US Library of Congress Collections: Preferences in Summary

[20] Barry Pearson: What is in a raw file?

[21] Adobe: DNG 1.3.0.0 Specification (June 2009) (scroll down a bit)

[22] Barry Pearson: Products from Camera Manufacturers that use DNG in some way

[23] Barry Pearson: DNG support, to end-September 2006

[24] Barry Pearson: A brief history of DNG

[25] Adobe Labs: CinemaDNG (final bullet point)

[26] Adobe: Digital Negative (DNG) Specification Patent License

[27] I3A (International Imaging Industry Association): WG18, Ad Hoc groups and JWG 20/22/23 Meet in Tokyo

[28] Web archive of widely distributed email: Forwarded Message from a member of the ISO TC42 (technical committee for photography) working group 18 (electronic imaging) standards group

[29] DPReview: Adobe seeks International recognition for DNG

[30] I3A (International Imaging Industry Association): ISO 12234 Part 2 – TIFF/EP (scroll down a bit)

[31] NPES: Minutes of ISO/TC 130/WG2, 39th Meeting, see 14f

[32] R. Ramanath; W.E. Snyder; Y. Yoo; M.S. Drew. “Color Image Processing Pipeline in Digital Still Cameras” (PDF).

[33] Keigo Hirakawa. “Color Imaging Pipeline for Digital Still & Video Cameras Part 1: Pipeline and Color Processing” (PDF).

[34] Understanding What is stored in a Canon RAW .CR2 file, How and Why

[35] Ron Day. Understanding & Using the RAW File Format

[36] Nanette Salvaggio (2008). Basic Photographic Materials and Processes (3rd ed.). Focal Press. p. 206. ISBN 978-0-240-80984-7.

[37] William E. Kasdorf (2003). The Columbia guide to digital publishing. Columbia University Press. p. 270. ISBN 978-0-231-12499-7.

[38] “Digital Negative (DNG) Specification” (PDF): 14.

[39] “Comparative test: Canon 10D / in the field of deep-sky astronomy”.

[40] “Digital Negative (DNG) Specification” (PDF): 61.

[41] “Is the Nikon D70 NEF (RAW) format truly lossless?".

[42] “Understanding Camera Raw”.

[43] http://www.rawdigger.com/howtouse/sony-craw-arw2-posterization-detection

[44] Kevin Carter (March 3, 2014). “RED Epic Dragon review: First camera to break the 100-point DxOMark sensor score barrier”.

[45] Larry Strunk (2006-03-19). “The RAW Problem”. OpenRAW.

[46] “Advances in iOS Photography”. Apple. 14 June 2016. Retrieved 16 June 2016.

[47] Microsoft RAW Image Thumbnailer and Viewer for Windows XP

[48] “Windows Camera Codec Pack”. Microsoft. 2014-04-22. Retrieved 2015-02-18.

[49] Understanding RAW Image Support in Windows Vista: Windows Vista team blog

[50] DNG Thumbnail and Preview Support for Windows Photo Gallery and Windows Live Photo Gallery

[51] FastPictureViewer’s Image Formats Compatibility Chart

[52] FastPictureViewer 32/64-bit raw codec pack for Windows

[53] Paul Monckton. “Android 5.0 Camera Tests Show Update Instantly Improves Every ”. Retrieved December 27, 2014.

[54] “Libraw”.

[55] “AZImage”.

[56] “ImageMagick Image Formats”.

[57] The LightZombie Project - About

[58] “Capture One”.

[59] “ACDSystems Supported RAW Formats”.

[60] ideaMK: DNG Viewer

[61] http://www.fastrawviewer.com/usermanual/supported-cameras

10.2.9 External links

• Adobe: “Understanding Raw Files”; background on how camera sensors treat raw files

• Open RAW: a working group of photographers, software engineers and other people interested in advocating the open documentation of digital camera raw files

• Atkins, Bob: "Raw, JPEG, and TIFF"; common file formats compared.

• Coupe, Adam: "The benefits of shooting in RAW"; article with diagrams explaining raw data and its advantages.

• Goldstein, Jim M.: "RAW vs JPEG: Is Shooting RAW Format for Me?"; an editorial.

• Basic Photography lesson in Camera Raw: a pros-and-cons approach to the discussion of shooting in Camera Raw

• Clevy, Laurent: "Inside the Canon RAW format v2: understanding the .CR2 file format"

• Foi, Alessandro: "Signal-dependent noise modeling, estimation, and removal for digital imaging sensors"; with Matlab software and raw-data samples of Canon, Nikon, Fujifilm cameras.

10.3 Digital Negative

For the informal use of “digital negative”, see Raw image format.

Digital Negative (DNG) is a patented, open, non-free lossless raw image format developed by Adobe for digital photography. It was launched on September 27, 2004.[1] The launch was accompanied by the first version of the DNG specification,[2] plus various products, including a free-of-charge DNG converter utility. All Adobe photo manipulation software (such as Adobe Photoshop and Adobe Photoshop Lightroom) released since the launch supports DNG.[3] DNG is based on the TIFF/EP standard format, and mandates significant use of metadata. Use of the file format is royalty-free; Adobe has published a license allowing anyone to exploit DNG,[4] and has also stated that there are no known intellectual property encumbrances or license requirements for DNG.[5] Adobe stated that if there were a consensus that DNG should be controlled by a standards body, they were open to the idea.[6] Adobe has submitted DNG to ISO for incorporation into their revision of TIFF/EP.[7]

10.3.1 Rationale for DNG

Given the existence of other raw image formats, Adobe’s creation of DNG as a competing format implies that DNG is unusual and satisfies objectives that other raw image formats do not. These objectives and the associated characteristics of DNG, as well as assessments of whether these objectives are met, are described below. Increasingly, professional archivists and conservationists, working for respectable organizations, variously suggest or recommend DNG for archival purposes.[8][9][10][11][12][13][14][15][16]

Objectives

These objectives are repeatedly emphasized in Adobe documents:[1][6][17][18]

• Digital image preservation (sometimes known as “archiving”): to be suitable for the purpose of preserving digital images as an authentic resource for future generations.[19] Assessment: The US Library of Congress states that DNG is a recommended alternative to other raw image formats: “Less desirable file formats: RAW; Suggested alternatives: DNG”.[20] The Digital Photography Best Practices and Workflow (dpBestflow) project, funded by the United States Library of Congress and run by the American Society of Media Photographers (ASMP), singles out DNG, and states “DNG files have proven to be significantly more useful than the proprietary raw files in our workflow”.[21]

• Easy and/or comprehensive exploitation by software developers: to enable software to be developed without the need for reverse engineering; and to avoid the need for frequent software upgrades and re-releases to cater for new cameras. Assessment: Software could support raw formats without DNG, by using reverse engineering and/or dcraw; DNG makes it easier, and many software products can handle, via DNG, images from cameras that they have no specific knowledge of.[22] An unresolved restriction is that any edit/development settings stored in the DNG file by a software product are unlikely to be recognized by a product from a different company. (This problem is not specific to DNG.)

• In-camera use by camera manufacturers: to be suitable for many camera manufacturers to use as a native or optional raw image format in many cameras. Assessment: About 12 camera manufacturers have used DNG in-camera. About 38 camera models have used DNG.[23] Raw image formats for more than 230 camera models can be converted to DNG.[24]

• Multi-vendor interoperability: to be suitable for workflows where different hardware and software components share raw image files and/or transmit and receive them.

Characteristics

All of the above objectives are facilitated or enabled by most of these characteristics:[25]

• Freely-available specification:[26][27][28] this can be downloaded from the Adobe website without negotiation or needing justification.[2]

• Format based on open specifications and/or standards: DNG is compatible with TIFF/EP, and various open formats and/or standards are used, including Exif metadata, XMP metadata, IPTC metadata, CIE XYZ coordinates and JPEG.[2][29]

• Self-contained file format: a DNG file contains the data (raw image data and metadata) needed to render an image without needing additional knowledge of the characteristics of the camera.[2][22]

• Version control scheme: it has a version scheme built into it that allows the DNG specification, DNG writers, and DNG readers, to evolve at their own paces.[30][31]

• Freely-available source-code-based software development kit (SDK): there are three aspects - there is an SDK; it is source-code-based (as can be verified by examination); and it can be downloaded from the Adobe website without needing justification.[32]

• Documented to have no known intellectual property encumbrances or license requirements: there is both a “Dig- ital Negative (DNG) Specification Patent License” which says that anyone can exploit DNG,[4] and a statement that there are no known intellectual property encumbrances or license requirements for DNG.[5]

• Lossless and lossy compression (optional): DNG supports optional lossless and (since version 1.4) also lossy compression.[33] The losses from lossy compression are practically indistinguishable in real-world images.[34]

10.3.2 Technical summary

A DNG file always contains data for one main image, plus metadata, and optionally contains at least one JPEG preview.[2] It normally has the extension “dng” or “DNG”. DNG conforms to TIFF/EP and is structured according to TIFF. DNG supports various formats of metadata (including Exif metadata, XMP metadata, IPTC metadata), and specifies a set of mandated metadata.[30] DNG is both a raw image format and a format that supports “non-raw”, or partly processed, images.[2] The latter (non-raw) format is known as “Linear DNG”.[35] Linear DNG is still scene-referred[36] and can still benefit from many of the operations typically performed by a raw converter, such as white balance, the application of a camera color profile, HDR compositing, etc. All images that can be supported as raw images can also be supported as Linear DNG. Images from the Foveon X3 sensor or similar, hence especially Sigma cameras, can only be supported as Linear DNG.

DNG can contain raw image data from sensors with various configurations of color filter array (CFA). These include: conventional Bayer filters, using three colors and rectangular pixels; four-color CFAs, for example the RGBE filter used in the Sony Cyber-shot DSC-F828; rectangular (non-square) pixels, for example as used in the Nikon D1X; and offset sensors (for example with octagonal pixels) such as Super CCD sensors of various types, as used in various Fujifilm cameras. (Or combinations of these if necessary.) DNG specifies metadata describing these individual parameters; this is one significant extension to TIFF/EP.

When used in a CinemaDNG movie clip, each frame is encoded using the above DNG image format. The clip’s image stream can then be stored in one of two formats: either as video essence using frame-based wrapping in an MXF file, or as a sequence of DNG files in a specified file directory.
Contrary to its name (Digital Negative), the DNG format does not distinguish between negative and positive data;[37] all data are treated as describing a positive image. This is not an issue for images from digital cameras (which are always positive), but working with film negatives scanned to raw DNG files (by a film scanner or a DSLR copy stand) is complicated, because the resulting image is not automatically inverted and so cannot be used directly. A workaround is to apply an inverted curve in the photo-editing application, but this reverses the effect of the image controls (exposure, shadow and highlight detail, etc.), which complicates editing.
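The inversion step can also be sketched outside the raw pipeline. This is a deliberately naive example: the function name, the film_base parameter (the measured RGB of an unexposed strip of the same film, i.e. the orange mask), and the plain 1 - x curve are all illustrative assumptions; real negative conversion uses per-channel characteristic curves:

```python
import numpy as np

def invert_negative(scan, film_base):
    """Invert a linear scan of a color negative.

    scan      -- array of linear RGB values from the scanned negative
    film_base -- RGB of an unexposed strip of the same film (the orange mask),
                 divided out per channel before the simple 1 - x inversion.
    """
    x = scan.astype(np.float64) / np.asarray(film_base, dtype=np.float64)
    return np.clip(1.0 - x, 0.0, 1.0)
```

Under this sketch the film-base color maps to black and clear (zero-density) areas map toward white; doing the inversion as an explicit step like this, rather than with an inverted curve inside the editor, leaves the editor's exposure and shadow/highlight controls working in their normal direction.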

10.3.3 Timeline

This provides a mixture of the dates of significant events (such as “the first X”) and various counts of usage at the anniversaries of the launch (each 27 September). Counts of products and companies that use DNG in some way are provided primarily for illustration. They are approximate, and include products that are no longer sold. The purpose is mainly to demonstrate that such products and companies exist, and to show trends. Convertible raw image formats (camera models whose raw images can be converted to DNG) only include official support by Adobe DNG converters; not unofficial support by Adobe products (sometimes reaching about 30), nor support by other DNG converters.[24]

• 2003, late: Adobe started work on the DNG specification.[6]

• 2004, early: Adobe started talking to other companies about DNG.[6]

• Launch, 2004, 27 September: Adobe launched DNG. Specification version 1.0.0.0 published.[1] Convertible raw image formats: 60+.

• 2005, February: Specification version 1.1.0.0 published.

• 2005, June: First digital back to write DNG, the Leica DMR (Digital Modul R) back for the R8 and R9.

• 2005, July: First camera manufacturer to provide a DNG converter for its own raw file formats - Hasselblad's FlexColor.

• 2005, July: First genuine digital SLR camera to write DNG, Hasselblad H2D.

• 1st anniversary, 2005, 27 September: Camera manufacturers: 4. Camera models: 7. Software products: 70+. Convertible raw image formats: 70+.

• 2005, October: First compact camera to write DNG, Ricoh GR Digital.

• 2006, July: First monochrome digital back to write DNG, MegaVision E Series MonoChrome.

• 2006, September: First rangefinder camera to write DNG, .

• 2006, September: First camera to offer the user a choice of proprietary raw or DNG, .

• 2nd anniversary, 2006, 27 September: Camera manufacturers: 8. Camera models: 9. Software products: 120+. Convertible raw image formats: 110+.

• 2007, May: First raw converter & photo-editor whose first raw-handling release only supported DNG, .

• 2007, July: First underwater camera to write DNG, Sea&Sea DX-1G. (Based on Ricoh Caplio GX100).

• 3rd anniversary, 2007, 27 September: Camera manufacturers: 10. Camera models: 13. Software products: 170+. Convertible raw image formats: 160+.

• 2007, October: First digital scan back system and first 360-degree panorama system to write DNG, Seitz 6x17 Digital and Seitz Roundshot D3 with D3 digital scan back.[38][39]

• 2008, February: First software on a mobile phone to write DNG, Tea Vui Huang’s “DNG Phone Camera” for Nokia.

• 2008, April: Adobe announces CinemaDNG initiative, using DNG as the basis for the individual raw images of a movie.
• 2008, May: Specification version 1.2.0.0 published.

• 2008, September: First movie camera to use DNG as a raw image format, Ikonoskop A-cam dII.[40]

• 4th anniversary, 2008, 27 September: Camera manufacturers: 13. Camera models: 29. Software products: 200+. Convertible raw image formats: 180+.

• 2008, September: First DNG converter running on Linux (among several other things), digiKam.

• 2009, spring/summer: First digiscope with built-in camera to write DNG, Zeiss Photoscope 85 T* FL.[41]

• 2009, June: Specification version 1.3.0.0 published.

• 5th anniversary, 2009, 27 September: Camera manufacturers: 14. Camera models: 38. Software products: 220+. Convertible raw image formats: 230+.

• 2009, November: First “interchangeable unit” camera to write DNG, Ricoh GXR.[42]

• 2010, February: First 3D movie camera to write DNG, Ikonoskop A-cam3D.[43]

• 2010, March: First medium format camera to offer the user a choice of proprietary raw or DNG, .

• 6th anniversary, 2010, 27 September: Camera manufacturers: 14. Camera models: 47. Software products: 240+. Convertible raw image formats: 290+.

• 2012, September: Specification version 1.4.0.0 published.

During the first 5 years, while about 38 camera models that wrote DNG were launched, Adobe software added support for about 21 Canon models, about 20 Nikon models, and about 22 Olympus models.

10.3.4 Reception

The reaction to DNG has been mixed.[24] A few camera manufacturers stated their intention to use DNG at launch. The first of these supported DNG about 9 months after launch. Several more niche and minority camera manufacturers added support after this (e.g. Leica). The largest camera manufacturers have apparently never indicated an intention to use DNG (e.g. Nikon and Canon). Some software products supported DNG within 5 months of launch, with many more following. Some only support DNG from cameras writing DNG, or from cameras supported via native raw image formats. OpenRAW was an advocacy and lobby group with the motto “Digital Image Preservation Through Open Documentation”; it became opposed to DNG. Some photographic competitions do not accept converted files, and some do not accept DNG.[44]

10.3.5 DNG conversion

“DNG conversion” refers to the process of generating a DNG file from a non-DNG image. (This is in contrast to “raw conversion”, which typically refers to reading and processing a raw file, which might be a DNG file, and generating some other type of output from it). DNG conversion is one of the sources of DNG files, the other being direct output from cameras and digital backs. Several software products are able to do DNG conversion. The original such product is Adobe DNG Converter or DNG Converter, a freely available stand-alone utility from Adobe.[17] Other Adobe products such as the ACR plugin to Photoshop or Lightroom can also generate DNG files from other image files. Most DNG converters are supplied by companies other than Adobe. For example:

• The software that Pentax supplies with all their DSLR cameras can convert PEF raw image files into DNG files.

• Flexcolor and Phocus from Hasselblad can convert 3FR raw image files from Hasselblad cameras and digital backs into DNG files.

• Capture One from Phase One is a raw converter that can process not only raw image files from Phase One digital backs, but also raw image files from many other cameras too. Capture One can save images from many of those cameras to DNG.

• KDE Image Plugin Interface is an API that can save the images it is processing to DNG. It can be used standalone or with any KDE image-processing application under Linux and Windows.

• A number of DNG converters have been developed by “amateurs” to enable raw images from their favored camera or digital back to be processed in a large range of raw converters. These include cases where cameras have been hacked to output raw images that have then been converted to DNG.

The process of DNG conversion involves extracting raw image data from the source file and assembling it according to the DNG specification into the required TIFF format. This optionally involves compressing it. Metadata as defined in the DNG specification is also put into that TIFF assembly. Some of this metadata is based on the characteristics of the camera, and especially of its sensor. Other metadata may be image-dependent or camera-setting dependent.

So a DNG converter must have knowledge of the camera model concerned, and be able to process the source raw image file, including key metadata. Optionally, a JPEG preview is obtained and added. Finally, all of this is written as a DNG file. DNG conversion typically leaves the original raw image file intact. For safety, many photographers retain the original raw image file on one medium while using the DNG file on another, enabling them to recover from a range of hardware, software, and human failures and errors. For example, it has been reported in user forums that some versions of the Adobe DNG Converter don't preserve all the raw data from raw images from some camera models.[45][46]
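The “assemble into TIFF” step can be illustrated with a toy writer for a minimal little-endian TIFF containing one 8-bit grayscale strip. A real DNG additionally requires many mandated tags (DNGVersion, UniqueCameraModel, CFAPattern, and so on), so this is not a valid DNG writer, just a sketch of the container structure using standard TIFF tag numbers:

```python
import struct

def write_minimal_tiff(path, width, height, pixels):
    """Write a minimal little-endian TIFF: header, one IFD, one image strip."""
    assert len(pixels) == width * height      # 8-bit grayscale, one strip
    entries = []
    def entry(tag, typ, count, value):
        entries.append(struct.pack("<HHII", tag, typ, count, value))
    n = 6                                     # number of IFD entries below
    data_offset = 8 + 2 + 12 * n + 4          # header + count + entries + next-IFD
    entry(256, 3, 1, width)        # ImageWidth        (SHORT)
    entry(257, 3, 1, height)       # ImageLength       (SHORT)
    entry(258, 3, 1, 8)            # BitsPerSample     (SHORT)
    entry(262, 3, 1, 1)            # Photometric: BlackIsZero
    entry(273, 4, 1, data_offset)  # StripOffsets      (LONG)
    entry(279, 4, 1, len(pixels))  # StripByteCounts   (LONG)
    header = struct.pack("<2sHI", b"II", 42, 8)   # byte order, magic 42, IFD offset
    ifd = struct.pack("<H", n) + b"".join(entries) + struct.pack("<I", 0)
    with open(path, "wb") as f:
        f.write(header + ifd + pixels)

write_minimal_tiff("toy.tif", 2, 2, bytes([0, 64, 128, 255]))
```

A converter then layers the DNG-mandated and camera-specific metadata tags on top of exactly this kind of IFD structure.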

10.3.6 Summary of products that support DNG in some way

This section summarizes other more comprehensive lists.[47][48]

Adobe products

All raw image file handling products from Adobe now support DNG.[3] Adobe DNG Converter was published by Adobe Systems on September 27, 2004. It converts different camera raw format files into the Digital Negative (DNG) standard. It also supports lossless data compression when converting. The program is free of charge. It can be downloaded at Adobe’s site (for Microsoft Windows[49] and the Apple Macintosh[50]).

Digital cameras and related software

Use by camera manufacturers varies; there are about 15 camera manufacturers that use DNG, including a few that specialize in movie cameras:[23]

• Niche camera manufacturers typically use DNG in new cameras (including a digiscope, panorama cameras, and at least one movie camera). The article on raw image formats illustrates the complicated relationship between new raw image formats and third-party software developers. Using DNG provides immediate support for these cameras by a large range of software products.

• High-end smartphone cameras using iOS 10,[51] Android 5 or Windows Mobile that are capable of shooting raw images typically use DNG (e.g. Samsung and Nokia).

• Some low-market-share but conventional camera manufacturers use DNG in new cameras. Camera manufacturers that do not supply their own software for processing raw images typically, but not always, use DNG.

• Pentax typically offers users the option of whether to use Pentax’s own raw image format (PEF) or DNG, but some, for example , Q10 and Q7, do not support PEF. For example, the digital SLR camera Pentax K-x does offer the ability to save PEF or DNG, or even DNG+, which saves two files at the same time: a DNG and a separate JPEG file.

• If a camera uses DNG, and that camera manufacturer supplies software, it will support DNG. It may support DNG only from their own cameras, or support it more generically.

• Canon, Nikon, Sony, Panasonic, Olympus, Fuji, and Sigma do not use DNG in their cameras. If a camera manufacturer’s cameras do not use DNG, their software is unlikely to support DNG unless that software is also sold independently of the cameras.

Some digital cameras that support DNG:[23]

• Casio supports DNG in their Exilim PRO EX-F1 and Exilim EX-FH25.

• DxO supports DNG in their DxO ONE camera (introduced 2015).

• Leica's Digital Modul R for the Leica R8 or Leica R9 and the Leica M8 or natively support the DNG format.

• MegaVision E Series Monochrome back.

• High-end Nokia (now Microsoft) Lumia smartphones like Nokia Lumia Icon, 930, 950, 1020 and 1520, were the first smartphone cameras to support DNG files.

• Panoscan MK-3 digital panoramic camera.

• Pentax supports DNG in their 645D, 645Z, K10D, K20D, K200D, K2000, K-7, K-x, K-r, K-5, K-30, K-5II(s), K-50, K-500, K-3, K-3II, K-S1, K-S2, K-70 and K-1 DSLR cameras; alongside the K-01, Q, Q10, Q7 and Q-S1 mirrorless cameras.

• Ricoh supports DNG in the Ricoh GR Digital, considered a professional compact, and the Ricoh Caplio GX.

• The Ricoh GXR mirrorless interchangeable-unit camera also uses DNG.[52]

• Samsung supports DNG in their Pro815 "prosumer" camera and GX-10 and GX-20 DSLR cameras. Samsung's high-end smartphones, like the Galaxy S6, also use DNG.

• Sea&Sea DX‐1G underwater camera.

• Seitz Roundshot D3 digital back, used in cameras such as the 6×17.[53]

• Silicon Imaging Silicon Imaging Digital Cinema SI-1920HDVR.

• Sinar now uses DNG as the raw file standard for their eMotion series of digital backs.

Some Canon cameras can shoot DNG using the third-party CHDK firmware add-on.

Third-party software

Support by software suppliers varies; there are of the order of 200 software products that use DNG.[47][54] The majority of raw handling software products support DNG. Most provide generic support, while a few support it only if it is output directly from a camera. The type of support varies considerably. There appear to be very few third party software products that process raw images but don't support DNG. This may reflect the difficulty of discovering all of those that do not.[55]

10.3.7 Versions of the specification

All versions of the specification remain valid, in the sense that DNG files conforming to old versions should still be read and processed by DNG readers capable of processing later versions. DNG has a version scheme built into it that allows the DNG specification, DNG writers, and DNG readers, to evolve at their own paces.[31] Each version of the specification describes its compatibility with previous versions.[2]
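This version scheme can be sketched as a component-wise comparison of four-part versions: a reader accepts a file whose minimum required (backward) version does not exceed the version the reader supports. This is a simplified model of the DNGVersion/DNGBackwardVersion tag semantics, not a complete reader:

```python
# The version a hypothetical reader was written against (spec 1.4.0.0).
READER_SUPPORTS = (1, 4, 0, 0)

def can_read(dng_backward_version, reader=READER_SUPPORTS):
    """True if the reader's supported version is at least the file's
    minimum required (backward) version, compared component-wise.
    Python tuple comparison is lexicographic, which matches a
    major.minor.patch.build ordering."""
    return dng_backward_version <= reader

print(can_read((1, 1, 0, 0)))  # True  - old files remain readable
print(can_read((1, 6, 0, 0)))  # False - file would need a newer reader
```

The point of the scheme is exactly this decoupling: writers can adopt a newer specification without old readers silently misreading files they cannot handle.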

1.0.0.0, published September, 2004 This version accompanied the launch of DNG and related products. It was a rare, possibly unique, example of a raw image format specification published by its owner. It was adequate for representing typical images, but it had a few errors and deficiencies (specifically the lack of support for “masked pixels” and an inadvertent deviation from the JPEG specification) that soon required it to be replaced by the next version.

1.1.0.0, published February, 2005 This version corrected the flaws in the first version. It has proved capable of representing raw images from a large variety of cameras (whether written in-camera or converted from other raw image formats) for a few years, and it is the version still typically written in-camera.

1.2.0.0, published May 2008 This version was based on experience and feedback from other companies about DNG since its launch. It introduced many new features, especially several new options for color specification under the general heading of “Camera Profiles”. These are mainly of value to software products wanting their own flavor of color handling. This version permits administrative control of Camera Profiles, including calibration signatures and copyright information.

1.3.0.0, published June, 2009 This version added various improvements, but the major change was the introduction of “Opcodes”. An Opcode is an algorithm to be applied to some or all of the image data, described in the specification, and (optionally) implemented in the product that reads and processes the DNG file. The DNG file itself holds lists of Opcodes to be executed, together with the parameters to be applied on execution. In effect, the DNG file can contain lists of “function calls” to be executed at various stages in the raw conversion process. For example, the WarpRectilinear Opcode “applies a warp to an image and can be used to correct geometric distortion and lateral (transverse) chromatic aberration for rectilinear lenses”. This is an example of an algorithm that cannot be applied to the raw image data itself before it is placed into the DNG file, because it should be executed after demosaicing. There are 13 Opcodes described in this version, and each Opcode is accompanied by a specification version so that more can be added in future.

1.4.0.0, published September, 2012 This version added Floating Point Image Data, Transparent Pixels, Proxy DNG Files, and additional tags. It also added SampleFormat and Predictor.
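The Opcode mechanism described under version 1.3.0.0 can be modeled as a dispatch table: the file carries a list of opcode names with parameters, and a reader executes the ones it implements at the appropriate processing stage. The toy below uses two real opcode names from the specification (GainMap, TrimBounds) but purely illustrative semantics:

```python
import numpy as np

# A reader implements some opcodes; in the real format, flags on each
# opcode say whether an unknown one may be skipped or must be an error.
OPCODES = {
    "GainMap":    lambda img, p: img * p["gain"],
    "TrimBounds": lambda img, p: img[p["top"]:p["bottom"], p["left"]:p["right"]],
}

def run_opcode_list(image, opcode_list):
    """Apply each known opcode in order; silently skip unimplemented ones."""
    for name, params in opcode_list:
        op = OPCODES.get(name)
        if op is not None:
            image = op(image, params)
    return image

img = np.ones((4, 4))
out = run_opcode_list(img, [("GainMap", {"gain": 2.0}),
                            ("TrimBounds", {"top": 0, "bottom": 2,
                                            "left": 0, "right": 2})])
print(out.shape)  # (2, 2)
```

Storing the opcode list plus parameters, rather than pre-applied pixel edits, is what lets a correction like WarpRectilinear run after demosaicing, as the text above notes.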

CinemaDNG, published September 2009 CinemaDNG uses DNG for each frame of a movie clip. There are additional tags specifically for movies: TimeCodes and FrameRate.[56] It is not clear whether these tags will be added to a later version of the DNG specification, or will remain separately described only in the CinemaDNG specification.

10.3.8 Standardization

DNG is not (yet) a standard format, but is based on several open formats or standards and is being used by ISO in its revision of TIFF/EP. A timeline:

• 2001: The ISO standard raw image format, ISO 12234-2, better known as TIFF/EP, was ratified and published. It also supports “non-raw”, or “processed”, images. TIFF/EP provided a basis for the raw image formats of a number of cameras, but they typically added their own proprietary data. Some cameras have sensors that cannot be described by that version of TIFF/EP.

• 2004, September: Adobe launched DNG. Its specification states it “is compatible with the TIFF-EP standard”.[2] It is a TIFF/EP extension with considerably more specified metadata, brought up-to-date and made fit for purpose. DNG also exploits various other open formats and standards, including Exif metadata, XMP metadata, IPTC metadata, CIE XYZ coordinates, ICC profiles, and JPEG.[29][2] Although DNG supports more sensor configurations than TIFF/EP (for example, cameras from Fujifilm using Super CCD sensors), it still doesn't support all sensor types as raw images, especially those using the Foveon X3 sensor or similar, hence especially Sigma cameras.

• 2006: TIFF/EP began its 5-year revision cycle.

• 2006 to 2007: Adobe offered the DNG specification to ISO to be part of ISO’s revised TIFF/EP standard.[7][57]

• 2008, September & October: Minutes of ISO/TC 130/WG2 — Prepress Data Exchange, 37th Meeting: “WG 18 is revising the two-part standard (ISO 12234), which addresses digital camera removable memory. The revision of... Part 2 will add DNG into TIFF/EP.” A progress report from ISO about the revision of TIFF/EP stated that the revision “...currently includes two ‘interoperability-profiles’: ‘IP 1’ for processed image data, using the ‘.TIF’ extension, and ‘IP 2’ for ‘raw’ image data, using the ‘.DNG’ extension”.[58]

• 2009, September: Minutes of ISO/TC 130/WG2 — Prepress Data Exchange, 39th Meeting: the revision of TIFF/EP “is comprehensive to support many different use cases, including backward compatibility with current TIFF readers and support of Adobe DNG... Profile 2 (proposed extension .dng, if Adobe is in agreement) is intended for camera raw images, including un-demosaiced images... This format will be similar to DNG 1.3, which serves as the starting point for development.”[59]

10.3.9 See also

• Comparison of image viewers

• DNxHD codec

• dcraw

10.3.10 References

[1] “Adobe Unifies Raw Photo Formats with Introduction of Digital Negative Specification” (Press release). Adobe Systems. September 27, 2004.

[2] “Specification” (PDF), DNG (PDF), Adobe.

[3] Pearson, Barry, Adobe products that support DNG, UK.

[4] “Specification Patent License”, Digital Negative (DNG), Adobe.

[5] “File Format”, CinemaDNG, Adobe Labs.

[6] “Adobe’s Kevin Connor Speaks on Adobe’s DNG Specification”, Digital Media Designer, Digital Media net.

[7] dpreview staff (May 15, 2008), “Adobe seeks International recognition for DNG”, DPReview, retrieved December 11, 2014, Adobe is submitting its DNG 'universal RAW' format to the International Standard’s Organization (ISO), in a move aimed at increasing acceptance and usage. The format is being proposed as part of ISO’s TIFF/EP (electronic photography), standard..

[8] universal photographic digital imaging guidelines (UPDIG): File formats - the raw file issue

[9] Archaeology Data Service / Digital Antiquity: Guides to Good Practice - Section 3 Archiving Raster Images - File Formats

[10] University of Connecticut: “Raw as Archival Still Image Format: A Consideration” by Michael J. Bennett and F. Barry Wheeler

[11] Inter-University Consortium for Political and Social Research: Obsolescence - File Formats and Software

[12] JISC Digital Media - Still Images: Choosing a File Format for Digital Still Images - File formats for master archive

[13] International Digital Enterprise Alliance, Digital Image Submission Criteria (DISC) Guidelines & Specifications 2007 (PDF)

[14] The J. Paul Getty Museum - Department of Photographs: Rapid Capture Backlog Project - Presentation

[15] American Institute for Conservation - Electronic Media Group: Digital Image File Formats

[16] Archives Association of British Columbia: Born Digital Photographs: Acquisition and Preservation Strategies (Rosaleen Hill)

[17] Adobe: Digital Negative (DNG) - The public, archival format for digital camera raw data

[18] Adobe: Introducing the Digital Negative Specification: Information for manufacturers

[19] Planning for US Library of Congress Collections: Sustainability Factors

[20] Planning for US Library of Congress Collections: Preferences in Summary

[21] dpBestflow: Raw File Formats

[22] Barry Pearson: Support via DNG but not native raws

[23] Barry Pearson: Products from Camera Manufacturers that support DNG in some way.

[24] Barry Pearson: A brief history of DNG

[25] Planning for US Library of Congress Collections: Adobe Digital Negative (DNG), Version 1.1

[26] Reichmann, Michael; Specht, Juergen (May 2005). “The RAW Flaw (at The Luminous Landscape)".

[27] Reichmann, Michael; Specht, Juergen (May 2005). “The RAW Flaw (at The Luminous Landscape)". Archived from the original (DOC) on 2012-09-20.

[28] Reichmann, Michael; Specht, Juergen (May 2005). “The RAW Flaw (at The Luminous Landscape)" (PDF). Archived from the original (PDF) on 2011-01-06.

[29] Barry Pearson: DNG’s relationship to standards

[30] Adobe: DNG Specification (Section 4)

[31] Barry Pearson: Version control in DNG files

[32] Adobe: Adobe DNG Software Development Kit (SDK)

[33] Adobe

[34] Lossy DNG comparison

[35] Barry Pearson: Linear DNG

[36] CIE: “Scene-referred": image state, scene-referred

[37] http://www.adobe.com/content/dam/Adobe/en/products/photoshop//dng_spec_1.4.0.0.

[38] Seitz Phototechnik: Roundshot D3 with Seitz D3 digital scan back

[39] Seitz Phototechnik: Seitz 6x17 Digital with Seitz D3 digital scan back

[40] Ikonoskop: Ikonoskop A-cam dII

[41] Zeiss: PhotoScope 85 T* FL

[42] Ricoh: Ricoh GXR

[43] Ikonoskop: Ikonoskop A-cam3D

[44] Wadleigh, John (Oct 30, 2009). “Competition does not accept DNG”. Retrieved 3 January 2015.

[45] Adobe-hosted User Forum: Reconvert with new DNG Converter?

[46] DPReview Forum: PROGRESS: version 0.9.0.0 now supports PEF files

[47] Adobe: DNG hardware and software support

[48] Barry Pearson: Products that support DNG in some way

[49] “MS Windows product download”, DNG converter, Adobe.

[50] “Apple Mac product download”, DNG converter, Adobe.

[51] http://www.macworld.com/article/3120658/ios/camera-app-makers-tap-into-raw-power-with--and-look-forward-to-dual-lenses.html

[52] “Ricoh GXR”, DPReview (quick review).

[53] Round shot, CH.

[54] Barry Pearson: Software products that support DNG in some way

[55] Barry Pearson: Products without explicit DNG support

[56] Adobe Labs: CinemaDNG - Image Data Format Specification (Version 1.0.0.0) (PDF)

[57] A member of the ISO TC42 (technical committee for photography) working group 18 (electronic imaging) standards group (26 April 2007), Digikam-devel (archive) (mailing list).

[58] “ISO 12234 Part 2 – TIFF/EP”, Eye on standards (working group report), I3A (International Imaging Industry Association), down a bit, October 2008.

[59] Minutes of ISO/TC 130/WG2, 39th Meeting (PDF), Peking, China: NPES, September 2009, 14f.

10.4 List of cameras supporting a raw format

This is a dynamic list and may never satisfy particular standards for completeness.

10.4.1 Still cameras

The following digital cameras allow photos to be taken and saved in at least one raw image format. Some cameras support more than one, usually a proprietary format and Digital Negative (DNG).

Agfa

• Agfa ActionCam - MDC

See also: Minolta

Canon

Casio

• Exilim EX-ZR700

• Exilim EX-ZR1000

Fujifilm

• Fujifilm FinePix IS-Pro

• Fujifilm FinePix S1 Pro

• Fujifilm FinePix S2 Pro

• Fujifilm FinePix S3 Pro

• Fujifilm FinePix S5 Pro

• Fujifilm FinePix S6500fd

• Fujifilm FinePix S9100/9600

• Fujifilm FinePix S9000/9500

• Fujifilm FinePix 20 Pro

• Fujifilm FinePix F700

• Fujifilm FinePix F710

• Fujifilm FinePix E900

• Fujifilm FinePix F550EXR

• Fujifilm FinePix F600EXR

• Fujifilm FinePix F810

• Fujifilm FinePix HS10

• Fujifilm FinePix HS20EXR

• Fujifilm FinePix HS35EXR

• Fujifilm FinePix HS30EXR

• Fujifilm FinePix S100FS

• Fujifilm FinePix S200EXR

• Fujifilm FinePix S5100/5500

• Fujifilm FinePix S5200/5600

• Fujifilm FinePix S7000

• Fujifilm FinePix HS50EXR

• Fujifilm FinePix X10

• Fujifilm X100 (X100, X100S, X100T)

• Fujifilm FinePix X-S1

• Fujifilm X-Pro1

• Fujifilm X-E1

• Fujifilm X-E2

• Fujifilm X-M1

• Fujifilm X-A1

• Fujifilm X-A2

• Fujifilm X-T1

• Fujifilm X-T10

Imacon

• Imacon Ixpress

Hasselblad

• Hasselblad H series

• Hasselblad CF series

• Hasselblad CFV series

• Hasselblad Lunar / Hasselblad Lunar Limited Edition - ARW 2.3.0 (CMOS, compressed)

• Hasselblad Stellar / Hasselblad Stellar Special Edition - ARW 2.3.0 (CMOS, compressed)

• Hasselblad Stellar II - ARW 2.3.1 (CMOS, compressed)

• Hasselblad HV - ARW 2.3.1 (CMOS, compressed)

• Hasselblad Lusso - ARW 2.3.1 (lossy delta-compression)

See also: Sony

Kodak

• Kodak P712

• Kodak P850

• Kodak P880 saved in .KDC format

• Kodak C603/C643 via hidden debug menu

• Kodak C713 via hidden debug menu saved in .RAW format

• Kodak DCS-620, −660 Canon bodies, 2 and 6 megapixels

• Kodak DCS-720, −760 bodies, 2 and 6 megapixels

• Kodak DCS-14n

• Kodak DCS Pro SLR/n

• Kodak DCS Pro SLR/c

• Kodak Z1015IS

• Kodak EasyShare Z980

• Kodak EasyShare Z990

• Kodak PixPro S1

Konica

• Konica Digital Revio KD-400Z (MK77/2118) - undocumented raw image file mode, erroneously using the JPG file extension, convertible to MRW

• Konica Revio KD-410Z (MK12) - undocumented raw image file mode, erroneously using the JPG file extension, convertible to MRW

• Konica Revio KD-420Z (ML42) - undocumented raw image file mode, erroneously using the JPG file extension, convertible to MRW

• Konica Digital Revio KD-500Z (MK86) - undocumented raw image file mode, erroneously using the JPG file extension, convertible to MRW

• Konica Revio KD-510Z (ML22) - undocumented raw image file mode, erroneously using the JPG file extension, convertible to MRW

See also: Minolta and Konica Minolta

Konica Minolta

• Konica Minolta DiMAGE A2 (2720) - MRW

• Konica Minolta DiMAGE A200 (2747) - MRW

• Konica Minolta Dynax 5D / Konica Minolta Maxxum 5D / Konica Minolta α−5 Digital / Konica Minolta α Sweet Digital (2186) - MRW

• Konica Minolta Dynax 7D / Konica Minolta Maxxum 7D / Konica Minolta α−7 Digital (2181) - MRW

• Konica Minolta DiMAGE G530 (2736) - undocumented raw image file mode, erroneously using the JPG file extension, convertible to MRW

• Konica Minolta DiMAGE G600 (2744) - undocumented raw image file mode, erroneously using the JPG file extension, convertible to MRW

• Konica Minolta DiMAGE Z2 (2725, SX745) - unofficial hack to enable raw image file mode, using the JPG file extension, convertible to NEF

See also: Minolta, Konica, and Sony

Kyocera

• N Digital

Leaf

• Leaf Digital Backs

Leica

• Leica M8

• Leica M8.2

• Leica M9

• Leica Digilux 2

•

• Leica V-LUX 1

• Leica D-LUX 3

• Leica D-LUX 4

• Leica D-LUX 5

• Leica Digital Modul-R

• Vario

Minolta

• Minolta RD-175 (2753) - MDC

• (2773) - MRW

• Minolta DiMAGE 7 (2766) / Minolta DiMAGE 7UG - MRW

• Minolta DiMAGE 7i (2779) - MRW

• Minolta DiMAGE 7Hi (2778) - MRW

• Minolta DiMAGE A1 (2782) - MRW

• Minolta DiMAGE G400 (2732) - undocumented raw image file mode, erroneously using the JPG file extension, convertible to MRW

• Minolta DiMAGE G500 (2731) - undocumented raw image file mode, erroneously using the JPG file extension, convertible to MRW

See also: Agfa, Konica, and Konica Minolta

Nikon

Nikon DSLR series

• Nikon D1H

• Nikon D1X

•

• Nikon D2Hs

•

• Nikon D2Xs

•

• Nikon D40

• Nikon D40x

• Nikon D50

• Nikon D60

• Nikon D70/D70s

• Nikon D80

• Nikon D90

• Nikon D100

•

• Nikon D300s

• Nikon D600

•

• Nikon D700

•

• Nikon D800E

•

• Nikon D810A

• Nikon D3000

• Nikon D3100

• Nikon D3200

• Nikon D3300

• Nikon D5000

• Nikon D5100

• Nikon D5200

• Nikon D5300

• Nikon D5500

• Nikon D7000

• Nikon D7100

• Nikon D7200

Nikon MILC series

• , September 21, 2011

• , September 21, 2011[1]

• , August 10, 2012

• ,[2] October 24, 2012

•

• Nikon 1 AW1

Nikon Coolpix series with at least 10 megapixels

• Nikon Coolpix A

• Nikon Coolpix P6000

• Nikon Coolpix P7000

• Nikon Coolpix P7100

• Nikon Coolpix P7700

• Nikon Coolpix P7800

• Nikon Coolpix P330

• Nikon Coolpix P340

Nikon Coolpix series below 10 megapixels

• Nikon Coolpix 700 ("DIAG RAW" hack)

• Nikon Coolpix 800 ("DIAG RAW" hack)

• Nikon Coolpix 880 ("DIAG RAW" hack)

• Nikon Coolpix 900 ("DIAG RAW" hack)

• Nikon Coolpix 950 ("DIAG RAW" hack)

• Nikon Coolpix 990 ("DIAG RAW" hack)

• ("DIAG RAW" hack)

• Nikon Coolpix 2100 ("DIAG RAW" hack)

• Nikon Coolpix 2500 ("DIAG RAW" hack)

• Nikon Coolpix 3200 ("DIAG RAW" hack)

• Nikon Coolpix 3700 ("DIAG RAW" hack)

• Nikon Coolpix 4300 ("DIAG RAW" hack)

• Nikon Coolpix 4500 ("DIAG RAW" hack)

• Nikon Coolpix 5000 (as of ver. 1.7 12/18/03)

•

• Nikon Coolpix 5700

• Nikon Coolpix 8400

• Nikon Coolpix 8700

• Nikon Coolpix 8800

• Nikon Coolpix S6 ("DIAG RAW" hack)

Olympus

• Olympus C-5050Z

• Olympus C-5060WZ

• Olympus C-8080WZ

• Olympus C-7000

• Olympus E-1

• Olympus E-3

• Olympus E-5

• Olympus E-10

• Olympus E-20

• Olympus E-30

• Olympus E-300

• Olympus E-330

• Olympus E-400

• Olympus E-410

• Olympus E-420

• Olympus E-450[3]

• Olympus E-500

• Olympus E-510

• Olympus E-520

• Olympus E-600

• Olympus E-620

• Olympus OM-D E-M1

• Olympus OM-D E-M5

• Olympus OM-D E-M5 II

• Olympus OM-D E-M10

• Olympus PEN E-P1[4]

• Olympus PEN E-P2

• Olympus PEN E-P3

• Olympus PEN E-P5

• Olympus PEN E-PL1

• Olympus PEN E-PL2

• Olympus PEN E-PL3

• Olympus PEN E-PL5

• Olympus PEN E-PM1

• Olympus PEN E-PM2

• Olympus SP-320

• Olympus SP-350

• Olympus SP-510 UZ

• Olympus SP-570 UZ

• Olympus Stylus 1[5]

• Olympus XZ-1

• Olympus XZ-2

• Olympus VG-120

Panasonic

• Panasonic Lumix DMC-FX150

• Panasonic Lumix DMC-FZ8

• Panasonic Lumix DMC-FZ18

• Panasonic Lumix DMC-FZ28

• Panasonic Lumix DMC-FZ30

• Panasonic Lumix DMC-FZ35

• Panasonic Lumix DMC-FZ38

• Panasonic Lumix DMC-FZ50

• Panasonic Lumix DMC-FZ72

• Panasonic Lumix DMC-FZ100

• Panasonic Lumix DMC-FZ150

• Panasonic Lumix DMC-FZ200 (Superzoom, released 2012)

• Panasonic Lumix DMC-FZ300

• Panasonic Lumix DMC-FZ1000

• Panasonic Lumix DMC-L1

• Panasonic Lumix DMC-L10

• Panasonic Lumix DMC-GM1

• Panasonic Lumix DMC-GM5

• Panasonic Lumix DMC-G1

• Panasonic Lumix DMC-G2

• Panasonic Lumix DMC-G3

• Panasonic Lumix DMC-G5

• Panasonic Lumix DMC-G6

• Panasonic Lumix DMC-G7

• Panasonic Lumix DMC-GF1

• Panasonic Lumix DMC-GF2

• Panasonic Lumix DMC-GF3[6]

• Panasonic Lumix DMC-GX1

• Panasonic Lumix DMC-GX7

• Panasonic Lumix DMC-GX8

• Panasonic Lumix DMC-GH1

• Panasonic Lumix DMC-GH2

• Panasonic Lumix DMC-GH3

• Panasonic Lumix DMC-GH4

• Panasonic Lumix DMC-LX1

• Panasonic Lumix DMC-LX2

• Panasonic Lumix DMC-LX3

• Panasonic Lumix DMC-LX5

• Panasonic Lumix DMC-LX7

Pentax

• Pentax *ist D

• Pentax *ist DL

• Pentax *ist DL2

• Pentax *ist DS

• Pentax *ist DS2

• Pentax 645D - DNG and PEF

• Pentax 645D IR - DNG and PEF

• Pentax K100D Super

• Pentax K100D

• Pentax K10D - DNG and PEF

•

• Pentax K2000/K-m

- DNG and PEF

• Pentax K-01 - DNG and PEF

• Pentax K-30 - DNG[7]

• Pentax K-500 - DNG

• Pentax K-50 - DNG

• Pentax K-S1

• Pentax K-S2

• Pentax K-3

• Pentax K-3 II

• Pentax K-5 - DNG and PEF

• Pentax K-5 II - DNG and PEF

• Pentax K-5 IIs - DNG and PEF

• Pentax K-7 - DNG and PEF

• Pentax K-r - DNG and PEF

• Pentax K-x

• Pentax MX-1

• Pentax Q - DNG

• Pentax Q7 - DNG

• Pentax Q10 - DNG

• Pentax Q-S1 - DNG

Phase One

• All Phase One Digital Backs

Polaroid

• Polaroid x530

Ricoh

• Ricoh Caplio GX100

• Ricoh GX200

• Ricoh GR

• Ricoh GR Digital

• Ricoh GR Digital II

• Ricoh GR Digital III

• Ricoh GR Digital IV

Samsung

• Samsung GX-10 - DNG

• Samsung GX-20 - DNG

• Samsung Pro815

• Samsung NX-M (Mini)

• Samsung NX1

• Samsung NX500

• Samsung NX1000

• Samsung NX100

• Samsung NX200

• Samsung NX300

• Samsung NX30

• Samsung NX20

• Samsung NX10

• Samsung WB5000

• Samsung EX1

• Samsung EX2F

Sigma

• Sigma DP1

• Sigma DP1s

• Sigma DP1x

• Sigma DP2

• Sigma DP2s

• Sigma DP2x

• Sigma SD9

• Sigma SD10

• Sigma SD14

• Sigma SD1 Merrill

• Sigma DP2 Merrill

• Sigma DP1 Merrill

• Sigma DP3 Merrill

• Sigma DP0 Quattro

• Sigma DP2 Quattro

Sony

• Sony DSLR-A100 - ARW 1.0 (lossless compression)

• Sony DSLR-A200 - ARW 2.0 (lossless compression)

• Sony DSLR-A230 - ARW 2.1 (lossless compression)

• Sony DSLR-A290 - ARW 2.1 (lossless compression)

• Sony DSLR-A300 - ARW 2.0 (lossless compression)

• Sony DSLR-A330 - ARW 2.1 (lossless compression)

• Sony DSLR-A350 - ARW 2.0 (lossless compression)

• Sony DSLR-A380 - ARW 2.1 (lossless compression)

• Sony DSLR-A390 - ARW 2.1 (lossless compression)

• Sony DSLR-A450 - ARW 2.1 (lossy delta-compression)

• Sony DSLR-A500 - ARW 2.1 (lossy delta-compression)

• Sony DSLR-A550 - ARW 2.1 (lossy delta-compression)

• Sony DSLR-A560 - ARW 2.2 (lossy delta-compression)

• Sony DSLR-A580 - ARW 2.2 (lossy delta-compression)

• Sony DSLR-A700 - ARW 2.0 (lossy delta-compression and 12-bit losslessly packed)

• Sony DSLR-A850 - ARW 2.1 (lossy delta-compression and 12-bit losslessly packed)

• Sony DSLR-A900 - ARW 2.1 (lossy delta-compression and 12-bit losslessly packed)

• Sony SLT-A33 - ARW 2.2 (lossy delta-compression)

• Sony SLT-A35 - ARW 2.2 (lossy delta-compression)

• Sony SLT-A37 - ARW 2.3.0 (lossy delta-compression)

• Sony SLT-A55 / Sony SLT-A55V - ARW 2.2 (lossy delta-compression)

• Sony SLT-A57 - ARW 2.3.0 (lossy delta-compression)

• Sony SLT-A58 - ARW 2.3.0 (lossy delta-compression)

• Sony SLT-A65 / Sony SLT-A65V - ARW 2.3.0 (lossy delta-compression)

• Sony SLT-A77 / Sony SLT-A77V - ARW 2.3.0 (lossy delta-compression)

• Sony SLT-A99 / Sony SLT-A99V - ARW 2.3.0/2.3.1 (lossy delta-compression)

• Sony ILCA-68 - ARW 2.3.1? (lossy delta-compression)

• Sony ILCA-77M2 - ARW 2.3.1 (lossy delta-compression)

• Sony ILCA-99M2 - ARW 2.3.2? (lossy delta-compression, 14-bit losslessly packed)

• Sony NEX-3 / Sony NEX-3C - ARW 2.1/2.2 (lossy delta-compression)

• Sony NEX-C3 - ARW 2.2 (lossy delta-compression)

• Sony NEX-F3 - ARW 2.3.0 (lossy delta-compression)

• Sony NEX-3N - ARW 2.3.0 (lossy delta-compression)

• Sony NEX-5 / Sony NEX-5C - ARW 2.1/2.2 (lossy delta-compression)

• Sony NEX-5N - ARW 2.2/2.3.0 (lossy delta-compression)

• Sony NEX-5R - ARW 2.3.0/2.3.1 (lossy delta-compression)

• Sony NEX-5T - ARW 2.3.1 (lossy delta-compression)

• Sony NEX-6 - ARW 2.3.0/2.3.1 (lossy delta-compression)

• Sony NEX-7 - ARW 2.3.0/2.3.1 (lossy delta-compression)

• Sony ILCE-3000 - ARW 2.3.1 (lossy delta-compression)

• Sony ILCE-3500 - ARW 2.3.1 (lossy delta-compression)

• Sony ILCE-5000 - ARW 2.3.1 (lossy delta-compression)

• Sony ILCE-5100 - ARW 2.3.1 (lossy delta-compression)

• Sony ILCE-6000 - ARW 2.3.1 (lossy delta-compression)

• Sony ILCE-6300 - ARW 2.3.2?

• Sony ILCE-7 - ARW 2.3.1 (lossy delta-compression)

• Sony ILCE-7R - ARW 2.3.1 (lossy delta-compression)

• Sony ILCE-7S - ARW 2.3.1 (lossy delta-compression)

• Sony ILCE-7M2 - ARW 2.3.1 (lossy delta-compression)

• Sony ILCE-7RM2 - ARW 2.3.1/2.3.2? (lossy delta-compression, optional 14-bit losslessly packed)

• Sony ILCE-7SM2 - ARW 2.3.2? (lossy delta-compression, 14-bit losslessly packed)

• Sony ILCE-QX1 - ARW 2.3.1 (lossy delta-compression)

• Sony Cyber-shot DSC-V3 - SRF

• Sony Cyber-shot DSC-F828 - SRF

• Sony Cyber-shot DSC-R1 - SR2

• Sony Cyber-shot DSC-RX1 / Sony Cyber-shot DSC-RX1R - ARW 2.3.0/2.3.1 (lossy delta-compression)

• Sony Cyber-shot DSC-RX10 - ARW 2.3.1 (lossy delta-compression)

• Sony Cyber-shot DSC-RX10M2 - ARW 2.3.? (lossy delta-compression)

• Sony Cyber-shot DSC-RX100 - ARW 2.3.0 (lossy delta-compression)

• Sony Cyber-shot DSC-RX100M2 - ARW 2.3.1 (lossy delta-compression)

• Sony Cyber-shot DSC-RX100M3 - ARW 2.3.1 (lossy delta-compression)

• Sony Cyber-shot DSC-RX100M4 - ARW 2.3.? (lossy delta-compression)

• Sony Handycam NEX-VG20 / Sony Handycam NEX-VG20E - ARW 2.3.0 (lossy delta-compression)

• Sony Handycam NEX-VG30 / Sony Handycam NEX-VG30E - ARW 2.3.0 (lossy delta-compression)

• Sony Handycam NEX-VG900 / Sony Handycam NEX-VG900E - ARW 2.3.0 (lossy delta-compression)

• Sony XCD-SX710CR - (8 bit/10 bit)

• Sony XCD-SX910CR - (8 bit/10 bit)

See also: Konica Minolta and Hasselblad

10.4.2 Native in-camera raw video support

The following cameras allow audio and video to be recorded in at least one raw format (often similar to a raw image format, but possibly with proprietary YCbCr or RGB coding and/or lossy wavelet compression).

ARRI

With the proprietary ArriRaw format, plus uncompressed video output over HDMI or HD-SDI.

• Arriflex D-20

• Arriflex D-21

• Arriflex Alexa, (Plus, M, Studio)

Blackmagic Design

• Blackmagic Production Camera 4K

• Blackmagic Pocket Cinema Camera

• Blackmagic URSA

• Blackmagic URSA Mini

Bolex

• Digital Bolex D16 and D16m Monochrome, with the open CinemaDNG raw video format.

Canon

• Cinema EOS C500, RAW 4K output.

Unofficially, with the use of Magic Lantern software, the following EOS cameras can record RAW video:

• 5D Mark III

• 5D Mark II

• 7D

• 6D

• 60D

• 50D

• 650D

• 600D

• 550D

• EOS M

DALSA

• Dalsa Origin 1 & 2 (the company was later acquired by Arriflex)

Ikonoskop

• A-Cam dII, with the open CinemaDNG raw video format.

Kinefinity

• KineMAX 6K [8]
• KineMINI 4K [9]
• KineRAW-S35 [10]
• KineRAW-MINI [11]
• KineFINITY Terra 5K
• KineFINITY Terra 6K

RED

With the proprietary REDCODE raw format, which uses lossy compression for video.

• RED ONE
• RED EPIC
• RED Scarlet-X

Sony

• Sony F65
• Sony F55 (with adapter)
• Sony F5 (with adapter)
• Sony NXCAM NEX-FS700 / Sony NXCAM NEX-FS700R, with firmware 3 plus AXS-R5 or HXR-IFR5 or Convergent Design Odyssey 7Q(+)
• Sony XDCAM PXW-FS7, with XDCA-FS7 plus AXS-R5 or HXR-IFR5 or external recorder (4K/2K raw recording)

10.4.3 Uncompressed video output via HDMI

Nikon

• Nikon D4
• Nikon D4S
• Nikon D800/D800E
• Nikon D810/D810A
• Nikon D600
• Nikon D610
• Nikon D7100
• Nikon D5200

• Nikon D5300

• Nikon D750

• Nikon D5500

• Nikon D7200

• Nikon D500

10.4.4 Other

Some Nikon Coolpix cameras that are not advertised as supporting a RAW image format can actually produce usable raw files when switched to a maintenance mode.[12] Note that switching to this mode can void a camera's warranty. Nikon models with this capability: E700, E800, E880, E900, E950, E990, E995, E2100, E2500, E3700, E4300, E4500.

Some Canon PowerShot cameras with DiGiC II and certain DiGiC III image processors that are not advertised as supporting a RAW format can also produce usable raw files with an unofficial open-source firmware add-on.[13]

The Nokia N900 mobile phone has an add-on app, “Fcam”, which allows capture and saving of RAW files in Adobe's DNG format (along with other advanced features usually found in DSLRs). In 2013, Nokia launched the Lumia 1520 and other smartphones with support for the DNG RAW format.[14] Samsung Galaxy models, including the Note 7, Galaxy S6 Edge+, S7 and S7 Edge, also support RAW image capture.

10.4.5 See also

• Raw image format

10.4.6 References

[1] Nikon announces Nikon 1 system with V1 small sensor mirrorless camera

[2] Nikon announces 1 V2 - a more photographer-friendly, 14MP 1 series camera Dpreview

[3] “Archived Products > E-450”. olympusamerica.com. 2011. Retrieved 9 August 2011.

[4] “Olympus - Specifications”. olympus.co.uk. 2011. Retrieved 9 August 2011.

[5] Westlake, Andy; Butler, Richard (October 2013). “Olympus Stylus 1 First Impressions Review”. dpreview.com. Archived from the original on May 7, 2014. Retrieved May 7, 2014.

[6] “DMC-GF3 | PRODUCTS | LUMIX | Digital Camera | Panasonic Global”. panasonic.net. 2011. Retrieved 9 August 2011.

[7] “PENTAX K-30. A sporty, mid-level digital SLR camera, offering solid performance and a weather-resistant, dustproof body to active outdoor photographers”. PENTAX RICOH IMAGING. 2012. Retrieved 7 June 2012.

[8] http://www.kinefinity.com/kinemax/?lang=en

[9] http://www.kinefinity.com/kinemini/?lang=en

[10] http://www.kinefinity.com/kineraw-s35/?lang=en

[11] http://www.kinefinity.com/kineraw_mini/?lang=en

[12] http://e2500.narod.ru/raw_format_e.htm

[13] See DIGIC#Custom firmware

[14] “Nokia bringing RAW photography to the Lumia 1520 and 1020”. October 22, 2013.

10.4.7 External links

• DCRAW: list of supported cameras at the end of the page; all of them support a raw format.

10.5 Exif

This article is about a format for storing metadata in image and audio files. For information about filename and directory structures of digital cameras, see Design rule for Camera File system.

Exchangeable image file format (officially Exif, according to JEIDA/JEITA/CIPA specifications) is a standard that specifies the formats for images, sound, and ancillary tags used by digital cameras (including smartphones), scanners and other systems handling image and sound files recorded by digital cameras. The specification uses the following existing file formats with the addition of specific metadata tags: JPEG discrete cosine transform (DCT)[1] for compressed image files, TIFF Rev. 6.0 (RGB or YCbCr) for uncompressed image files, and RIFF WAV for audio files (Linear PCM or ITU-T G.711 μ-Law PCM for uncompressed audio data, and IMA-ADPCM for compressed audio data).[2] It is not used in JPEG 2000, PNG, or GIF. This standard consists of the Exif image file specification and the Exif audio file specification.

10.5.1 Background

The Japan Electronic Industries Development Association (JEIDA) produced the initial definition of Exif. Version 2.1 of the specification is dated 12 June 1998. JEITA established Exif version 2.2 (a.k.a. "Exif Print"), dated 20 February 2002 and released in April 2002.[3] Version 2.21 (with Adobe RGB support) is dated 11 July 2003, but was released in September 2003 following the release of DCF 2.0. The latest, version 2.3, released on 26 April 2010 and revised in May 2013, was jointly formulated by JEITA and CIPA. Exif is supported by almost all camera manufacturers. The metadata tags defined in the Exif standard cover a broad spectrum:

• Date and time information. Digital cameras will record the current date and time and save this in the metadata.
• Camera settings. This includes static information such as the camera model and make, and information that varies with each image, such as orientation (rotation), aperture, shutter speed, focal length, and ISO speed information.
• A thumbnail for previewing the picture on the camera's LCD screen, in file managers, or in photo manipulation software.
• Descriptions
• Copyright information.

10.5.2 Technical

The Exif tag structure is borrowed from TIFF files. On several image-specific properties, there is a large overlap between the tags defined in the TIFF, Exif, TIFF/EP, and DCF standards. For descriptive metadata, there is an overlap between Exif, the IPTC Information Interchange Model and XMP info, which also can be embedded in a JPEG file. The Metadata Working Group has guidelines on mapping tags between these standards.[4]

When Exif is employed for JPEG files, the Exif data are stored in one of JPEG's defined utility Application Segments, APP1 (segment marker 0xFFE1), which in effect holds an entire TIFF file within. When Exif is employed in TIFF files (also when used as “an embedded TIFF file” as mentioned earlier), the TIFF Private Tag 0x8769 defines a sub-Image File Directory (IFD) that holds the Exif-specified TIFF tags. In addition, Exif also defines a Global Positioning System sub-IFD using the TIFF Private Tag 0x8825, holding location information, and an “Interoperability IFD” specified within the Exif sub-IFD, using the Exif tag 0xA005.

Formats specified in the Exif standard are defined as folder structures that are based on Exif-JPEG and recording formats for memory. When these formats are used as Exif/DCF files together with the DCF specification (for better interoperability among devices of different types), their scope covers the devices, recording media, and software that handle them.
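Finding the Exif data in a JPEG amounts to walking the segment markers until the APP1 segment (marker 0xFFE1) whose payload begins with the ASCII header "Exif\0\0" turns up. Here is a minimal Python sketch of that walk; the function name is my own, not from any standard library:

```python
import struct

def find_exif_app1(data: bytes):
    """Scan JPEG segment markers and return the TIFF structure carried in the
    Exif APP1 segment, or None if there is none.

    Each non-stand-alone segment is: 0xFF, a marker byte, then a 2-byte
    big-endian length that counts itself but not the marker bytes.
    """
    if data[:2] != b"\xff\xd8":               # SOI marker: not a JPEG
        return None
    pos = 2
    while pos + 4 <= len(data):
        if data[pos] != 0xFF:
            return None                        # lost sync with the marker stream
        marker = data[pos + 1]
        if marker in (0xD9, 0xDA):             # EOI / start-of-scan: stop looking
            return None
        (length,) = struct.unpack(">H", data[pos + 2:pos + 4])
        payload = data[pos + 4:pos + 2 + length]
        if marker == 0xE1 and payload.startswith(b"Exif\x00\x00"):
            return payload[6:]                 # the embedded TIFF file follows
        pos += 2 + length
    return None
```

The bytes returned are exactly the "entire TIFF file" the text describes, starting with the byte-order mark.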

10.5.3 Geolocation


The Exif format has standard tags for location information. As of 2014, many cameras and most mobile phones have a built-in GPS receiver that stores the location information in the Exif header when a picture is taken. Some other cameras have a separate GPS receiver that fits into the flash connector. Recorded GPS data can also be added to any digital photograph on a computer, either by correlating the time stamps of the photographs with a GPS record from a hand-held GPS receiver, or manually by using a map or mapping software. The process of adding geographic information to a photograph is known as geotagging. Photo-sharing communities like Panoramio or locr equally allow their users to upload geocoded pictures or to add geolocation information online.
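The GPS sub-IFD stores each coordinate as an unsigned degrees/minutes/seconds triple of rationals plus a hemisphere reference tag (GPSLatitudeRef or GPSLongitudeRef). A small conversion sketch, assuming signed decimal-degree input (the function name is mine):

```python
from fractions import Fraction

def to_exif_gps(decimal_deg: float, is_latitude: bool):
    """Convert a signed decimal-degree coordinate to the Exif GPS shape:
    a hemisphere reference letter plus (degrees, minutes, seconds) rationals.
    Seconds are rounded to 1/100 here; real writers choose their own precision.
    """
    if is_latitude:
        ref = "N" if decimal_deg >= 0 else "S"
    else:
        ref = "E" if decimal_deg >= 0 else "W"
    mag = abs(decimal_deg)
    degrees = int(mag)
    minutes = int((mag - degrees) * 60)
    seconds = Fraction(round((mag - degrees - minutes / 60) * 3600 * 100), 100)
    return ref, (Fraction(degrees), Fraction(minutes), seconds)
```

For example, a latitude of 15.5 becomes "N" with 15° 30' 0"; the sign moves entirely into the reference letter, since the rationals themselves are unsigned.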

10.5.4 Program support

Exif data are embedded within the image file itself. While many recent image manipulation programs recognize and preserve Exif data when writing to a modified image, this is not the case for most older programs. Many image gallery programs also recognise Exif data and optionally display it alongside the images. Software libraries, such as libexif[5] for C and Adobe XMP Toolkit[6] or Exiv2[7] for C++, Metadata Extractor[8] for Java, PIL/Pillow for Python or ExifTool[9] for Perl, parse Exif data from files and read/write Exif tag values.
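All such libraries begin the same way: because the Exif payload is a TIFF structure, the first job is to read the byte-order mark (“II” or “MM”), check the magic number 42, and follow the offset to the first Image File Directory. A stripped-down sketch of that first step (names are mine, not any library's API):

```python
import struct

def parse_tiff_header(tiff: bytes):
    """Read the byte order, first-IFD offset, and first-IFD entry count from
    the TIFF structure that Exif embeds (the bytes after "Exif\\0\\0")."""
    if tiff[:2] == b"II":
        endian = "<"           # little-endian ("Intel" order)
    elif tiff[:2] == b"MM":
        endian = ">"           # big-endian ("Motorola" order)
    else:
        raise ValueError("not a TIFF structure")
    magic, ifd_offset = struct.unpack(endian + "HI", tiff[2:8])
    if magic != 42:
        raise ValueError("bad TIFF magic number")
    # The IFD starts with a 2-byte count of 12-byte tag entries.
    (entries,) = struct.unpack(endian + "H", tiff[ifd_offset:ifd_offset + 2])
    return endian, ifd_offset, entries
```

From the entry count, a reader then iterates the 12-byte tag entries, which is where the Exif, GPS and Interoperability sub-IFD pointers mentioned above are found.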

10.5.5 Problems

Technical

The Exif format has a number of drawbacks, mostly relating to its use of legacy file structures.

• The derivation of Exif from the TIFF file structure, using offset pointers in the files, means that data can be spread anywhere within a file; software is therefore likely to corrupt any pointers or corresponding data that it does not decode/encode. For this reason most image editors damage or remove the Exif metadata to some extent upon saving.[10]

• The standard defines a MakerNote tag, which allows camera manufacturers to place any custom-format metadata in the file. This is used increasingly by camera manufacturers to store camera settings not listed in the Exif standard, such as shooting modes, post-processing settings, serial number, focusing modes, etc. As the tag contents are proprietary and manufacturer-specific, it can be difficult to retrieve this information from an image or to properly preserve it when rewriting an image. Manufacturers can encrypt portions of the information; for example, some Nikon cameras encrypt the detailed lens data in the MakerNote data.[11]

• Exif is very often used in images created by scanners, but the standard makes no provisions for any scanner-specific information.

• Photo manipulation software sometimes fails to update the embedded thumbnail after an editing operation, possibly causing the user to inadvertently publish compromising information.[12] For example, someone might blank out the licence registration plate of a car for privacy concerns, only to have the unmodified thumbnail still show it.

• Exif metadata are restricted in size to 64 kB in JPEG images, because the specification requires this information to be contained within a single JPEG APP1 segment. Although the FlashPix extensions allow information to span multiple JPEG APP2 segments, these extensions are not commonly used. This has prompted some camera manufacturers to develop non-standard techniques for storing the large preview images used by some digital cameras for LCD review. These non-standard extensions are commonly lost if a user re-saves the image using image editor software, possibly rendering the image incompatible with the original camera that created it. (In 2009, CIPA released the Multi Picture Object specification, which addresses this deficiency and provides a standard way to store large previews in JPEG images.[13])

• There is no way to record time-zone information along with the time, thus rendering the stored time ambiguous.

• There is no field to record readouts of a camera’s accelerometers or inertial navigation system. Such data could help to establish the relationship between the image sensor’s XYZ coordinate system and the gravity vector (i.e., which way is down in this image). It could also establish relative camera positions or orientations in a sequence of photos.

• The DPI value of photos is a fictitious, meaningless number, yet the tag is required; this made-up tag confuses both people and software. The format has not yet been updated to remove this requirement.[14]
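The time-zone gap above is easy to see in code: the DateTime tags use the fixed layout "YYYY:MM:DD HH:MM:SS", so any parse yields a naive timestamp with no zone attached. A minimal Python sketch (the function name is my own):

```python
from datetime import datetime

# The colon-separated date layout mandated by the Exif DateTime tags.
EXIF_DATETIME_FORMAT = "%Y:%m:%d %H:%M:%S"

def parse_exif_datetime(value: str) -> datetime:
    """Parse an Exif DateTime/DateTimeOriginal string. The result is naive
    (tzinfo is None): the tag itself carries no time-zone information, so
    the instant in absolute time remains ambiguous."""
    return datetime.strptime(value, EXIF_DATETIME_FORMAT)
```

Any code that needs an unambiguous instant must get the zone from somewhere else, for example the GPS time stamp or out-of-band knowledge of where the camera was.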

Privacy and security

Since the Exif tag contains metadata about the photo, it can pose a privacy problem. For example, a photo taken with a GPS-enabled camera can reveal the exact location and time it was taken, and the unique ID number of the device. This is all recorded by default, often without the user's knowledge. Many users may be unaware that their photos are tagged in this manner, or that specialist software may be required to remove the Exif tag before publishing. A whistleblower, journalist or political dissident relying on the protection of anonymity to report malfeasance by a corporate entity, criminal, or government may therefore find their safety compromised by this default data collection.

In December 2012, anti-virus programmer John McAfee was arrested in Guatemala while fleeing from alleged persecution[15] in neighbouring Belize. Vice magazine had published on its website an exclusive interview with McAfee “on the run”[16] that included a photo of McAfee with a Vice reporter, taken with a phone that had geotagged the image.[17] The photo's metadata included GPS coordinates locating McAfee in Guatemala, and he was captured two days later.[18]

According to documents leaked by Edward Snowden, the NSA is targeting Exif information under the XKeyscore program.[19]

The privacy problem of Exif data can be avoided by removing the Exif data using a metadata removal tool.
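At its simplest, such a removal tool just drops the APPn metadata segments before the compressed image data begins. The sketch below is a deliberately crude illustration of that idea (the function name and the choice to keep APP0/JFIF are mine; real tools also handle metadata living outside APPn segments):

```python
def strip_app_segments(jpeg: bytes) -> bytes:
    """Return a copy of a JPEG with the APP1-APP15 segments (Exif, XMP,
    IPTC wrappers, ...) removed. APP0 (JFIF) is kept so basic viewers stay
    happy; everything from the start-of-scan marker onward is copied as-is."""
    out = bytearray(jpeg[:2])                  # keep the SOI marker
    pos = 2
    while pos + 4 <= len(jpeg) and jpeg[pos] == 0xFF:
        marker = jpeg[pos + 1]
        if marker == 0xDA:                     # start of scan: copy the rest
            out += jpeg[pos:]
            return bytes(out)
        length = int.from_bytes(jpeg[pos + 2:pos + 4], "big")
        if not 0xE1 <= marker <= 0xEF:         # drop only APP1..APP15 metadata
            out += jpeg[pos:pos + 2 + length]
        pos += 2 + length
    out += jpeg[pos:]
    return bytes(out)
```

Note that this does nothing about information baked into the pixels themselves, and it assumes a well-formed marker stream.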

10.5.6 Related standards

Metadata Working Group was formed by a consortium of companies in 2006 (according to their web page) or 2007 (as stated in their own press release). Version 2.0 of the specification was released in November 2010,[4] giving recommendations concerning the use of Exif, IPTC and XMP metadata in images.

Extensible Metadata Platform (XMP) is an ISO standard, originally created by Adobe Systems Inc., for the creation, processing and interchange of standardized and custom metadata for digital documents and data sets.

IPTC was developed in the early 1990s by the International Press Telecommunications Council (IPTC) to expedite the international exchange of news among newspapers and news agencies.

10.5.7 Example

The following table shows Exif data for a photo made with a typical digital camera. Notice that authorship and copyright information is generally not provided in the camera’s output, so it must be filled in during later stages of processing. Some programs, such as Canon’s Digital Photo Professional, allow the name of the owner to be added to the camera itself.

10.5.8 FlashPix extensions

The Exif specification also includes a description of FPXR (FlashPix-ready) information, which may be stored in APP2 of JPEG images using a structure similar to that of a FlashPix file.[21] These FlashPix extensions allow meta-information to be preserved when converting between FPXR JPEG images and FlashPix images. FPXR information may be found in images from some models of digital cameras by Kodak and Hewlett-Packard.[22] Below is an example of the FPXR information found in a JPEG image from a Kodak EasyShare V570 digital camera:

Konqueror showing Exif data

10.5.9 Exif audio files

The Exif specification describes the RIFF file format used for WAV audio files and defines a number of tags for storing meta-information such as artist, copyright, creation date, and more in these files.[23] The following table gives an example of Exif information found in a WAV file written by the Pentax Optio WP digital camera:

10.5.10 MakerNote data

The “MakerNote” tag contains image information, normally in a proprietary binary format. Some of these manufacturer-specific formats have been decoded:

• OZHiker (not updated since 2008): Agfa, Canon, Casio, Epson, Fujifilm, Konica/Minolta, Kyocera/Contax, Nikon, Olympus, Panasonic, Pentax/Asahi, Ricoh, Sony[24]

• Kamisaka (not updated since 2007): Canon, Casio, FujiFilm, ISL, KDDI, Konica/Minolta, Mamiya, Nikon, Panasonic, Pentax, Ricoh, Sigma, Sony, WWL[25]

• X3F Info: Sigma/Foveon[26]

• ExifTool: Canon, Casio, FujiFilm, GE, HP, JVC/Victor, Kodak, Leaf, Minolta/Konica-Minolta, Nikon, Olympus/Epson, Panasonic/Leica, Pentax/Asahi, Reconyx, Ricoh, Samsung, Sanyo, Sigma/Foveon, Sony[27]

• Olypedia: Olympus[28]

Unfortunately, the proprietary formats used by many manufacturers break if the MakerNote tag is moved, e.g. by inserting or editing a tag that precedes it. The reason to edit the Exif data could be as simple as adding copyright information, an Exif comment, etc. In some cases, camera vendors also store important information only in proprietary MakerNote fields, instead of using available Exif standard tags. An example of this is Nikon's ISO speed settings tag.[29]

10.5.11 See also

• Comparison of image viewers (Exif view/edit functions)

• Comparison of metadata editors

• Design rule for Camera File system (DCF)

• Digital photography

• eXtensible Metadata Platform (XMP)

• Geocoded photo

• Image file formats

• IPTC Information Interchange Model

• Metadata Working Group

• Tag Image File Format / Electronic Photography (TIFF/EP)

10.5.12 References

[1] Ahmed, N.; Natarajan, T.; Rao, K. R. (January 1974), “Discrete Cosine Transform”, IEEE Transactions on Computers, C–23 (1): 90–93, doi:10.1109/T-C.1974.223784

[2] “Standard of the Camera & Imaging Products Association, CIPA DC-008-Translation-2012, Exchangeable image file format for digital still cameras: Exif Version 2.3” (PDF). Retrieved 2014-04-08.

[3] Technical Standardization Committee on AV & IT Storage Systems and Equipment (April 2002). “Exchangeable Image File Format for Digital Still Cameras” (PDF). Version 2.2. Japan Electronics and Information Technology Industries Association. JEITA CP-3451. Retrieved 2008-01-28.

[4] “Guidelines for Handling Image Metadata” (PDF). Metadata Working group. 2010-11-01. Retrieved 2015-05-11.

[5] “The libexif C EXIF library”. Retrieved 2009-11-08.

[6] “Adobe XMP Toolkit SDK”. Adobe Inc.

[7] “Exiv2 Image Metadata Library”. Andreas Huggel. Retrieved 2009-02-12.

[8] “Metadata Extractor”. Drew Noakes. Retrieved 2011-02-18.

[9] “Image::ExifTool Perl library”. Phil Harvey. Retrieved 2009-02-12.

[10] “TIFF Revision 6.0” (PDF). Adobe. 1992-06-03. Retrieved 2009-04-07.

[11] “Nikon Tags: Nikon LensData01 Tags”. Phil Harvey. 2008-01-25. Retrieved 2008-01-28.

[12] Maximillian Dornseif (2004-12-17). “EXIF Thumbnail in JPEG images”. disLEXia 3000 blog. Archived from the original on September 28, 2007. Retrieved 2008-01-28.

[13] “Multi-Picture Format” (PDF). CIPA. 2009-02-04. Retrieved 2014-04-29.

[14] Dpi, misunderstandings and explanation, what is dpi

[15] “McAfee wins stay of deportation from Guatemala”. Cnn.com. Retrieved 2012-12-26.

[16] We Are with John McAfee Right Now, Suckers, Vice, December 3, 2012, retrieved 7 December 2012

[17] Vice leaves metadata in photo of John McAfee, pinpointing him to a location in Guatemala, The Next Web, December 3, 2012, retrieved 7 December 2012

[18] John McAfee arrested in Guatemala for illegal entry, CBS News, December 5, 2012, retrieved 7 December 2012

[19] Staff (July 31, 2013). “XKeyscore Presentation from 2008 – Read in Full – Training Materials for the XKeyscore Program Detail How Analysts Can Use It and Other Systems to Mine Enormous Agency Databases and Develop Intelligence from the Web – Revealed: NSA Program That Collects 'Nearly Everything a User Does on the Internet'". The Guardian. Retrieved August 6, 2013.

[20] “JPEG Rotation and EXIF Orientation / Digital Cameras with Orientation Sensors etc”. Impulseadventure.com. Retrieved 2012-12-26.

[21] (JEITA CP-3451) Section 4.7.2: Interoperability Structure of APP2 in Compressed Data.

[22] Phil Harvey (18 March 2011). “FlashPix Tags”. Retrieved 29 March 2011.

[23] (JEITA CP-3451) Section 5: Exif Audio File Specification.

[24] Evan Hunter. “EXIF Makernotes - Reference Information”. OZHiker. Retrieved 2008-01-29.

[25] “Exif MakerNote” (in Japanese). Kamisaka. Retrieved 2008-01-29.

[26] “SIGMA and FOVEON EXIF MakerNote Documentation”. x3f.info. Archived from the original on 2007-08-05. Retrieved 2008-03-26.

[27] “ExifTool Tag Names”. Phil Harvey. 2008-01-18. Retrieved 2011-01-24.

[28] “Olympus Makernotes” (in German). Olypedia. Retrieved 2008-01-29.

[29] Andreas Huggel (2012-04-25). “Makernote formats and specifications”. Retrieved 2012-09-09.

10.5.13 External links

• Exif standard version 2.31

• Exif standard version 2.3

• Exif standard version 2.2 as PDF or as HTML

• Overview of the revisions to the DCF and Exif standards

• What is EXIF Data

• Exif in the TIFF Tag Directory

• Metadata working group

• List of Exif tags including MakerNote tags

• Exif Dangers

10.6 Comparison of image viewers

This article presents a comparison of image viewers and image organizers.

10.6.1 Functionality overview and licensing

10.6.2 Supported file formats

Commonly used vendor-independent formats

Camera raw formats

10.6.3 Supported operating systems

10.6.4 Basic features

10.6.5 Additional features

10.6.6 See also

• Comparison of raster graphics editors

• Digital image editing

• List of raster graphics editors

• List of graphics file formats

10.6.7 Notes

• iPhoto is part of iLife, which includes a DVD authoring package (iDVD), a video editor (iMovie), a music player (iTunes), a multimedia web publisher (iWeb), and an audio-sequencing program (GarageBand)

• FastPictureViewer's DirectX hardware acceleration support depends on the graphics hardware actually installed and the amount of available video memory. The commercial version also supports previewing some camera RAW formats for which a WIC-enabled codec exists. Such RAW codecs are currently available from Canon, Nikon, Olympus, Pentax, Sony and for Adobe DNG.

• Many applications on Mac OS X use either the Core Image or QuickTime frameworks for image support. This enables reading and writing a variety of formats, including JPEG, JPEG2000, Apple Icon Image format, TIFF, PNG, PDF, BMP and more.

• SView5 may also run on Linux/x86 and MacOS/x86 using Mono.

10.6.8 References

[1] http://dottech.org/129555/windows-review-top-best-free-image-viewer-program/

10.7 IPTC Information Interchange Model

The Information Interchange Model (IIM) is a file structure and set of metadata attributes that can be applied to text, images and other media types. It was developed in the early 1990s by the International Press Telecommunications Council (IPTC) to expedite the international exchange of news among newspapers and news agencies. The full IIM specification includes a complex data structure and a set of metadata definitions.

Although IIM was intended for use with all types of news items — including simple text articles — a subset found broad worldwide acceptance as the standard embedded metadata used by news and commercial photographers. Information such as the name of the photographer, copyright information and the caption or other description can be embedded either manually or automatically.

IIM metadata embedded in images are often referred to as “IPTC headers”, and can be easily encoded and decoded by most popular photo editing software. The Extensible Metadata Platform (XMP) has largely superseded IIM's file structure, but the IIM image attributes are defined in the IPTC Core schema for XMP, and most image manipulation programs keep the XMP and non-XMP IPTC attributes synchronized.

Because of its nearly universal acceptance among photographers — even amateurs — this is by far IPTC's most widely used standard. On the other hand, the use of IIM structure and metadata for text and graphics is mainly limited to European news agencies.

10.7.1 Overview

IIM attributes are widely used and supported by many image creation and manipulation programs. Almost all the IIM attributes are supported by the Exchangeable image file format (Exif), a specification for the image file format used by digital cameras. IIM metadata can be embedded into JPEG/Exif, TIFF, JPEG2000 or Portable Network Graphics formatted image files. Other file formats such as GIF or PCX do not support IIM. IIM’s file structure technology has largely been overtaken by the Extensible Metadata Platform (XMP), but the IIM attribute definitions are the basis for the IPTC Core schema for XMP.
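On the wire, IIM is a flat sequence of “datasets”: each standard dataset begins with the tag marker byte 0x1C, a record number, a dataset number, and a two-byte big-endian length, followed by the data (record 2 holds the image attributes, e.g. 2:120 for the caption, 2:80 for the by-line). A minimal Python parsing sketch (the function name is mine; extended datasets, whose length field has the high bit set, are deliberately not handled):

```python
import struct

def parse_iim_datasets(blob: bytes):
    """Parse IPTC IIM standard datasets into ((record, dataset), data) pairs.

    Each standard dataset is: 0x1C tag marker, 1-byte record number, 1-byte
    dataset number, 2-byte big-endian data length, then the data itself.
    """
    datasets = []
    pos = 0
    while pos + 5 <= len(blob):
        marker, record, dataset, length = struct.unpack(">BBBH", blob[pos:pos + 5])
        if marker != 0x1C:
            raise ValueError("lost sync at offset %d" % pos)
        if length & 0x8000:
            raise NotImplementedError("extended datasets are not handled")
        data = blob[pos + 5:pos + 5 + length]
        datasets.append(((record, dataset), data))
        pos += 5 + length
    return datasets
```

In JPEG files these datasets are typically carried inside an Adobe-defined wrapper in an APP13 segment, which is why software that does not recognize them can simply skip the whole block.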

10.7.2 History

Since the late 1970s the IPTC's activities have primarily focused on developing and publishing industry standards for the interchange of news. The first standard, IPTC 7901, bridged the eras of teleprinters and computers.

In the late 1980s development began on a standard (the Information Interchange Model) that would be designed to best work with computerized news editing systems. In particular, the IPTC defined a set of IIM metadata attributes that can be applied to images. These were defined originally in 1979, and revised significantly in 1991 to be part of the IIM, but the concept really advanced in 1994 when Adobe Systems defined a specification for actually embedding the metadata into digital image files — yielding “IPTC headers.”

(Adobe adopted the IPTC IIM metadata definitions, but not the overall IIM data structure. Photos that contain IPTC headers appear in all other respects to be normal JPEG or TIFF images; software that does not recognize IPTC headers will simply ignore the metadata.)

In 2001, Adobe introduced the "Extensible Metadata Platform" (XMP), an XML schema for the same types of metadata as IPTC, but based on XML/RDF and therefore inherently extensible. The effort spawned a collaboration with the IPTC, eventually producing the “IPTC Core Schema for XMP”, which merges the two approaches to embedded metadata. The XMP specification describes techniques for embedding metadata in JPEG, TIFF, JPEG2000, GIF, PNG, HTML, PostScript, PDF, SVG, Adobe Illustrator, and DNG files. Recent versions of all the main Adobe software products (Photoshop, Illustrator, Acrobat, FrameMaker, etc.) support XMP, as do an increasing number of third-party tools.

In June 2007, IPTC in cooperation with IFRA held the first International Photo Metadata Conference, titled “Working towards a seamless photo workflow”, to a standing-room-only crowd (over 130 attendees), prior to the CEPIC Congress in Florence, Italy. A similar conference was held in Malta in June 2008. The IPTC Photo Metadata working group released a white paper,[1] which figured prominently at this event. The conference keynote was given by Andreas Trampe, head of the photo desk of Stern. Other speakers included photographers such as David Riecks and Peter Krogh; photo and news agencies such as Reuters; representatives of standards bodies such as PLUS, IPTC, and IFRA; as well as spokespersons from the photo metadata implementers' side, such as Adobe Systems, Apple Inc., Canon Inc., FotoWare AS, Hasselblad, and Microsoft. The electronic presentations given by most of the speakers are available online from the Photo Metadata Conference website, including a link to a report on each of the speakers' talks.

10.7.3 See also

• Comparison of metadata editors

10.7.4 References

[1] Löffler, Harald (2007), Baranger, Walt; Steidl, Michael, eds., Photo Metadata White Paper 2007, IPTC. The white paper discusses upcoming changes to the IPTC Photo Metadata Standards

10.7.5 External links

• The International Press Telecommunications Council

• IPTC Core schema for XMP

• NewsML

• IPTC Recommendation 7901 - The Text Transmission Format Chapter 11

Text and image sources, contributors, and licenses

11.1 Text

• Digital image processing Source: https://en.wikipedia.org/wiki/Digital_image_processing?oldid=764026112 Contributors: Mav, The Anome, Andre Engels, Yaroslavvb, Michael Hardy, Sannse, MightCould, Tobias Conradi, GRAHAMUK, Dysprosia, Robbot, Giftlite, DavidCary, Seabhcan, Asc99c, Jorge Stolfi, Bact, Stonor, Glogger, Abdull, Mike Rosoft, Rich Farmbrough, Bender235, ESkog, Syp, Matt Britt, Naturenet, Alansohn, Snowolf, Cburnett, Forderud, BrentN, Mindmatrix, Phillipsacp, Isnow, Nneonneo, Ligulem, Jehochman, Mathbot, Nihiltres, Quuxplusone, Predictor, Adoniscik, Wikizen, YurikBot, JimmyTheWig, NawlinWiki, Wiki alf, ONEder Boy, Mhol- land, Zvika, SmackBot, AngelovdS, Gilliam, D hanbun~enwiki, Roscelese, Nbarth, Wharron, Only, Prashantsonar, Pedronavajo, Nmnogueira, Beetstra, Dicklyon, Lbarajas, Chunawalla, Ylloh, Enselic, Roshan baladhanvi, Quibik, Dchristle, Tuxide, Barticus88, Mojo Hand, Head- bomb, Targetter, Seaphoto, Alphachimpbot, Hermann.tropf, JAnDbot, MER-C, Magioladitis, Alex Spade, User A1, Ekotkie, Pvosta, Wildknot, CommonsDelinker, Jiuguang Wang, 83d40m, Llorenzi, TXiKiBoT, Sanfranman59, Canaima, Themcman1, Jamelan, Wikipedie, Enviroboy, SieBot, Roesser, Satishpitchikala, Keilana, Bentogoa, Nilx~enwiki, Prashantva, Lamoidfl, HairyWombat, Bob1960evens, Mild Bill Hiccup, Auntof6, Hezarfenn, Johnuniq, XLinkBot, Shireesh13, TFOWR, Stillastillsfan, Addbot, Fgnievinski, Imagegod, Hosse~enwiki, ,Tomkalayil, DBAllan, Shirik ,محمد.رضا ,Yobot, SniperMaské, AnomieBOT, Materialscientist ,زرشک ,Lamoid, MrOllie, Ozob, Lightbot Inputpersona, Flowerheaven, Gourami Watcher, Apm-gh, Fox Wilson, Zvn, 777sms, Becritical, Eekerz, Tonyxty, TreacherousWays, Jmencisom, Wikipelli, Dcirovic, ZéroBot, Checkingfax, Fæ, Mr Yarrow, Casia wyq, Lilinjing cas, Gwestheimer, Kleopatra, ClueBot NG, Matthiaspaul, Mharbut, Helpful Pixie Bot, BG19bot, Smitty121981, Metricopolus, Mark Arsten, Snow Blizzard, Kanca13, TarielVincent, Msec109, ChrisGualtieri, Tagremover, SoledadKabocha, Mark viking, 
I am One of Many, Vinchaud20, Tentinator, Stacie Jensen, Samratsubedi, Ginsuloft, AndyThe, Stamptrader, Nont Banditwong, Agashlin, Darth Jadus, Kumardeepakr3, GeekPhotog, InternetArchiveBot, Jamal Abdalla, GreenC bot, SkyWarrior, Perlasatish203 and Anonymous: 172 • Image editing Source: https://en.wikipedia.org/wiki/Image_editing?oldid=761192904 Contributors: Frecklefoot, Michael Hardy, Kku, Haakon, Julesd, Charles Matthews, Dragons flight, Shizhao, Robbot, Tea2min, Richy, Everyking, Khalid hassani, Kaldari, Abdull, Imroy, Discospinster, Dave souza, Night Gyr, Root4(one), RoyBoy, EurekaLott, Fir0002, Flxmghvgvk, Hooperbloob, Diego Moya, Jeltz, Hu, Hohum, CloudNine, Mahanga, Marasmusine, Scriberius, Camw, Phillipsacp, Ge0rge~enwiki, Tabletop, Kbdank71, Mlewan, Tizio, Tim!, Jake Wartenberg, Binary, JoshuacUK, Rainchill, Aapo Laitinen, Skizatch, Mayosolo, Gurch, Kri, Bgwhite, Adoniscik, Jvv62, Hydrargyrum, Stephenb, Ugur Basak, ALoopingIcon, Wiki alf, ZacBowling, ONEder Boy, RazorICE, Nicholas Perkins, Arthur Rubin, Th1rt3en, JBogdan, Jecowa, Meithal~enwiki, Profero, Toniht, Eigenlambda, Sarah, SmackBot, Jon Rob, Tuoreco, Hobbes747, TheDoctor10, Gilliam, Skizzik, Durova, Shalroth, Simon123, Maetrics, Benjamin9003, Nbarth, Yanksox, Htra0497, Can't sleep, clown will eat me, Zambaccian, Addshore, HarisM, Ultraexactzz, Rklawton, Pile3d, Dicklyon, Politepunk, Hetar, MIckStephenson, NativeForeigner, CapitalR, Tawkerbot2, Davidbspalding, Zarex, Makeemlighter, Doug Nelson, Sbn1984, Valodzka, Gogo Dodo, PlanetCoder, Fishdecoy, Omicronpersei8, Mactographer, PerfectStorm, Marek69, Bobblehead, Leon7, Catapa, Klausness, Mentifisto, AntiVandalBot, Gioto, Michelle1206, Goldenrowley, PavelMalychev, JAnDbot, Wiki0709, MER-C, Jahoe, Vituperex, Raanoo, VoABot II, Theroadislong, DerHexer, Artsmartconsulting, Hbent, Oicumayberight, Jackson Peebles, R'n'B, CommonsDelinker, Tgeairn, J.delanoy, CFCF, Aboosh, Cognita, Sayer55512, Protector777, 83d40m, Suckindiesel, Althepal,
KylieTastic, DigitalEnthusiast, TheMindsEye, Spin-docta, Davidwr, Oshwah, Hylerlink, Alphaios~enwiki, Awl, Jamelan, Kezzie123, Pimpinpunk, Xett, Ernest lk lam, Bbh123, Maduixa, Dawn Bard, Android Mouse, Flyer22 Reborn, Rightleftright, Ravanacker, Wuhwuzdat, Pinkadelica, ImageRemovalBot, Dlrohrer2003, Twinsday, ClueBot, Editmaniac, Mild Bill Hiccup, Billbouchard, Holden yo, Iohannes Animosus, Dekisugi, Boxing245, Contains Mild Peril, MasterOfHisOwnDomain, XLinkBot, Jytdog, Manishti2004, Dthomsen8, Shoemaker's Holiday, Chrsorlando, Addbot, Menschenfressender Riese, CanadianLinuxUser, Download, Ericthered43, Magicretouch, Tide rolls, 50blues, Kb04090, Libertycents, Luckas-bot, Yobot, Mmxx, Backslash Forwardslash, AnomieBOT, Efa, Jim1138, Eart4493, Materialscientist, B4lite, Tomkalayil, Sellyme, J04n, Abuk SABUK, IfYouReadThisYouSuck, Tiger1027tw, George585, FrescoBot, Hyju, Fortdj33, PhotoHand, Subhadeepgayen, Angieangelr., Pinethicket, Elockid, RedBot, MathijsM, Nslimak, Lotje, Mean as custard, Mateus95860, Skamecrazy123, J36miles, Plot Citizen, Slightsmile, Patvoiiage, Ὁ οἶστρος, Qniemiec, Rcsprinter123, Chrisstone12, Orange Suede Sofa, Fixarphotos, HSPR Wiki, 28bot, ClueBot NG, MelbourneStar, أحمد ش الشيخ, Rvighne, Els verhaert, Stuffious, TangerineCat, Alex Nico, Widr, Luckysagar123, HMSSolent, MusikAnimal, AvocatoBot, RscprinterBot, Klilidiplomus, Cemulku, ChrisGualtieri, Tagremover, Mediran, Codename Lisa, Jiangjiahao8888, Lugia2453, SteenthIWbot, Teacher83, Mark viking, Crisskooper, Stacie Jensen, Jain kalpesh, Colorvale, Comp.arch, Stub Mandrel, ScotXW, JaconaFrere, Neuralia, Ofek Gila, Alxde13, MacWarcraft, IagoQnsi, Lor, Ptvbalochistan, JMP EAX, RegistryKey, Yee Hgns, GeneralizationsAreBad, Creativenid, S Manov, Alena012, Bender the Bot, Tim Forster, VladShvets01, Digi5studio, MastermindShiva, Clippingpartnerindia and Anonymous: 268

CHAPTER 11. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES

• Image processing Source: https://en.wikipedia.org/wiki/Image_processing?oldid=757129698 Contributors: Lee Daniel Crocker, The Anome, Andre Engels, Michael Hardy, Komap, MightCould, Stw, Theresa knott, Poor Yorick, Smack, Dysprosia, Bevo, Pollinator, Robbot, Rholton, Giftlite, Seabhcan, Asc99c, Jorge Stolfi, Finn-Zoltan, Solipsist, Bact, MrMambo, Fangz, Karol Langner, Imroy, Mani1, Bender235, Apyule, Matt Britt, Minghong, Sam Korn, Pearle, Mdd, PAR, Cburnett, Forderud, C3o, BrentN, RHaworth, Phillipsacp, Qwertyus, GünniX, Quuxplusone, Predictor, Jonawebb, Chobot, DVdm, Adoniscik, YurikBot, Bhny, Anomalocaris, Coderzombie, Supten, Curpsbot-unicodify, Allens, SmackBot, Marc Lacoste, Malte singapore, Mdd4696, Kipmaster, Chris the speller, Nbarth, DHNbot~enwiki, GillianSmith, Kutabar, VMS Mosaic, Carloscastro, MichaelBillington, Abmac, HarisM, IronGargoyle, Dicklyon, Peter Horn, Hu12, Mauromaiorca, Andthu, Tpl, Iobuddha, Ylloh, CmdrObot, Wafulz, Requestion, Slazenger, Nick Wilson, Alanbly, Dancter, Waxigloo, Tuxide, Barticus88, Andyjsmith, James086, Nick Number, Escarbot, AntiVandalBot, Alphachimpbot, JAnDbot, MER-C, Magioladitis, Animum, EagleFan, User A1, JaGa, Calltech, Pvosta, Andreas Mueller, CommonsDelinker, Fordescort79, Inschanör~enwiki, Huzzlet the bot, Trusilver, Jiuguang Wang, Dwachter, LordAnubisBOT, Fletch1978, 83d40m, Huotfh, Largoplazo, Bonadea, Idioma-bot, VolkovBot, Melaneum, AlnoktaBOT, TXiKiBoT, Nitin.mehta, Ferengi, Odo1982, Wikipedie, Organizmo, Mahogny, Porieu, Finnrind, SieBot, Chimin 07, BotMultichill, ToePeu.bot, Roesser, Nilx~enwiki, Sbowers3, Yswismer, OKBot, Prashantva, Lamoidfl, Bravekermit, ClueBot, Sumbaug, Flightsoffancy, Hezarfenn, Panecasareccio, Muro Bot, Ranjithsutari, Johnuniq, Kamoshirenai, Meera123, XLinkBot, Manishti2004, Noctibus, Fgnievinski, Lamoid, MrOllie, Wipware, Favonian, Renatokeshet, Verbal, Jarble, Legobot, Yobot, Evereyes, Chintanbug, AnomieBOT, AndyCharles1Wilson, Omnipaedista, Shirik, DanielZhang27,
Thehelpfulbot, Much noise, Mostafa.elhoushi, Houjou, Jonesey95, RedBot, North8000, Judesba, Diannaa, EmausBot, SensibleStuff, Dewritech, Primefac, JacobDang, Abhinav.bal, Jmencisom, Mr Yarrow, Casia wyq, Kleopatra, ClueBot NG, Matthiaspaul, Digitalmediaprocessing, Vacation9, Paul.mather, MerlIwBot, Helpful Pixie Bot, ChrisEngelsma, BG19bot, Vagobot, Neøn, Marcocapelle, Kagundu, Mark Arsten, CitationCleanerBot, Chanvese, TarielVincent, Fans656, Cemulku, ChrisGualtieri, Tagremover, Jihadcola, Rockerz007, Schrezraoeder, TwoTwoHello, Lugia2453, Frosty, Me, Myself, and I are Here, Mms731, Phamnhatkhanh, Erez.o, Bsingh.cs, Samratsubedi, Ginsuloft, Basji, Sam Sailor, AndyThe, Deepu04852, RIYAPALACE, Monkbot, Saudamini Shah, Tjhuston225, MicroPaLeo, Timatse.ubutra, KasparBot, Davidstone1415, Jamal Abdalla, Yrikk and Anonymous: 222 • Image analysis Source: https://en.wikipedia.org/wiki/Image_analysis?oldid=750688650 Contributors: Andre Engels, Hike395, Hobbes~enwiki, Giftlite, Seabhcan, Jorge Stolfi, Rgdboer, MITalum, Anthony Appleyard, HenryLi, Oleg Alexandrov, Isnow, 790, BD2412, Salix alba, Mark83, Rune.welsh, Closedmouth, Ohnoitsjamie, Radagast83, Peter Horn, SkyWalker, CmdrObot, Werratal, Dancter, Tuxide, Escarbot, KrakatoaKatie, AntiVandalBot, WinBot, Alphachimpbot, MER-C, Avicennasis, Wildknot, MartinBot, R'n'B, Nono64, FANSTARbot, Jimmaths, Rhagerman, Jmath666, Wikipedie, Mahogny, Porieu, Winchelsea, Fcummins, Drslyguy, Sjm462, Pomergirl, Myarro, His Manliness, Addbot, Fgnievinski, Shervinemami, MrOllie, Renatokeshet, Yobot, Jacobhh, Ciphers, Jim1138, J04n, FrescoBot, PigFlu Oink, DrilBot, Beao, North8000, MorbidEntree, Compvision, Lackermann, Mikewuzh3r3, ClueBot NG, Khvit, TarielVincent, Tentinator, AndyThe, Monicagellar 08, Uddinkabir, Barry.rutig, KasparBot, Jamal Abdalla, Jsosa284, LiesScientist, TheRealPascalRascal and Anonymous: 46 • Photo manipulation Source: https://en.wikipedia.org/wiki/Photo_manipulation?oldid=763227289 Contributors: Sjc,
Stevertigo, Infrogmation, Jahsonic, Delirium, CesarB, Mac, Theresa knott, Furrykef, Tempshill, Shantavira, Altenmann, Psychonaut, HaeB, Dbenbenn, Mckaysalisbury, Lucky 6.9, Nx7000~enwiki, Quadell, Piotrus, Kaldari, Jossi, Sam Hocevar, JoseBlanco, Kate, Poccil, Imroy, Discospinster, Smyth, Bender235, JJJJust, Doover, RoyBoy, Orlady, Cmdrjameson, Sam Korn, Eleland, Hoary, Yummifruitbat, DreamGuy, Cyberartist, ^demon, MONGO, Bluemoose, CTeicheira, Sparkit, BD2412, Mendaliv, Chenxlee, Rjwilmsi, Jake Wartenberg, Lockley, Binary, Rainchill, The wub, Flarn2006, Ground Zero, Who, RexNL, Pinkville, Preslethe, Srleffler, Bgwhite, Wavelength, Sceptre, Hede2000, Hydrargyrum, Stephenb, Joel7687, Brian Crawford, Dbfirs, ZhaoHong, Mieciu K, Mysid, Varano, Fallout boy, Crisco 1492, Arthur Rubin, Earthengine, Profero, Sarah, SmackBot, Tuoreco, Clpo13, Delldot, The Octavo, Mintpieman, Gilliam, Ohnoitsjamie, Ppntori, Durova, Bluebot, Biatch, Trebor, Thumperward, SchfiftyThree, Victorgrigas, Deli nk, Nbarth, Colonies Chris, Antonrojo, Metavida, Brimba, JonHarder, Rrburke, VMS Mosaic, Patrickbowman, Andrew c, Dragon695, Ohconfucius, Sambot, MYT, Dicklyon, Peterbr~enwiki, Andyroo316, Storm2005, Wellesradio, KJS77, DouglasCalvert, MIckStephenson, Blakegripling ph, Dan1679, CmdrObot, Makeemlighter, Doug Nelson, Xaro, Reywas92, Gogo Dodo, Alucard (Dr.), Fidopdp, Barticus88, Marek69, Esemono, CharlotteWebb, Paste, Carson Reynolds, Tillman, Kauczuk, Myofilus, Ingolfson, JAnDbot, TAnthony, MegX, Rothorpe, VoABot II, CTF83!, Oicumayberight, MartinBot, Keith D, CommonsDelinker, KTo288, Nwhitehair, Sspavam, Adys, Brohacz, Andyohio, RenniePet, Wikigi, JHeinonen, Ajfweb, TheMindsEye, Fences and windows, Jacroe, Oshwah, Lawvol, 3tmx, Corvus cornix, Symo85, Bllasae, Jamelan, Odo1982, Madhero88, Capper13, Terminill, Natox, Minestrone Soup, TJRC, Arkwatem, VideoRanger2525, Zil, Svick, Eliburford, Anchor Link Bot, Wuhwuzdat, Sumit Rodda, Denisarona, Ddunleavy, TFCforever,
ImageRemovalBot, Twinsday, Martarius, ClueBot, Shubopshadangalang, Badger Drink, The Thing That Should Not Be, Podzemnik, Pimvantend, Fatsamsgrandslam, Freemarlie, Keysanger, BirgerH, Dekisugi, SchreiberBike, Cherukumalli, DumZiBoT, Gmhofmann, XLinkBot, Nitomarsiglio, Rror, Addbot, Scientus, Damiens.rf, Shumski, Ka Faraq Gatri, MrOllie, Zbigad, Neverlost82, Magicretouch, Redandwhite90, Celtics 2008, Yobot, Jan Arkesteijn, Mmxx, AnomieBOT, Jim1138, Kingpin13, Materialscientist, Citation bot, Nhantdn, Collinsmas229, Gap9551, Rbudegb, Cc77, Green Cardamom, Sock, Subhadeepgayen, Gordoncedar, Sulatnijag, Pinethicket, Thinking of England, Serols, SpaceFlight89, Cramyourspam, Dude1818, Trappist the monk, Lotje, Begoon, Fedot11, MWyndham, Mean as custard, John of Reading, WikitanvirBot, Usmanakhtar, Dulioil, Dishcmds, Wikipelli, Eelali, Smurfjones, Staszek Lem, Amylleonard, Ursulets22, HupHollandHup, Mayur, Donner60, Dasinc, Fixarphotos, GermanJoe, Egunt07, Benmillionare, ClueBot NG, Matthiaspaul, Orriblepig, Bathori, Declinedera, Atsme, Frietjes, O.Koslowski, Widr, Helpful Pixie Bot, Vikkib79, Kailash29792, BG19bot, M0rphzone, MusikAnimal, Images143, Canoe1967, Zach Vega, CitationCleanerBot, Operatorsprofolio, CRimagery, BattyBot, Justincheng12345-bot, Teammm, Mdann52, Cemulku, Maya.Riaz, Najwa.nh, Sanag24, Khansale, Sanj.93, Dobie80, Skyscape144, Mohammad Al Khalid, Ferdinando Castaldo, Frosty, Jgonzalez68, Me, Myself, and I are Here, MidnightRequestLine, Epicgenius, SucreRouge, BreakfastJr, Utkarshsingh93, Comp.arch, Brandnewmicthis, Badar hashmi, JaconaFrere, ColRad85, Jtd1981, Filedelinkerbot, Evemmel, Justinvalle, Dustinnewman26, XoxRashed, Kcook23, Arefinsumit, H.
Toghan, Patrick Bongoy, S Manov, Khenshoppe, Lovebhanu, Pumpkinuk, AnnaGoFast, Ntlftw, Sushanhaider, Meganslade17 and Anonymous: 273 • Layers (digital image editing) Source: https://en.wikipedia.org/wiki/Layers_(digital_image_editing)?oldid=754966563 Contributors: Edward, Haakon, JJJJust, Mlewan, Bgwhite, Anclation~enwiki, Hftf, Nbarth, 16@r, Benqish, HanzoHattori, Darbao, VolkovBot, Jeff G., Comrade Graham, Wiae, Hello71, MenoBot, ClueBot, Addbot, CanadianLinuxUser, Invado, Yobot, AnomieBOT, George585, FrescoBot, I dream of horses, Jordgette, Lopifalko, RayneVanDunem, ClueBot NG, Pwnagic, Advic5, MyrioTechne and Anonymous: 22 • Image histogram Source: https://en.wikipedia.org/wiki/Image_histogram?oldid=742341739 Contributors: Samsara, Abdull, Mdd, Kvaks, Axeman89, Mindmatrix, Josh Parris, Bubba73, Thejapanesegeek, ONEder Boy, IronTek, SmackBot, InverseHypercube, Beetstra, Dicklyon, Artoonie, Rehno Lindeque, Cydebot, Marcuscalabresus, LivingShadow, Llorenzi, Cfolson, Mihirgokani007, Lara bran, Gkay1500, HairyWombat, ClueBot, Manamarak, Dhulme, Dekart, Cewvero, Addbot, Lolicious, RoninChris64, Jannic86, AnomieBOT, Materialscientist, Xqbot, Fatheroftebride, FrescoBot, Pinethicket, KarthikeyanKC, Andfriend, John of Reading, Tuankiet65, Wikipelli, Macphesa,

Grandphuba, Mentibot, JaffaMan, Helpful Pixie Bot, MusikAnimal, Silvrous, Purest1, Cqwi, MaxPlank111, Normantg, OzzyRameses, Thatguyfromspace, Bender the Bot and Anonymous: 26 • Curve (tonality) Source: https://en.wikipedia.org/wiki/Curve_(tonality)?oldid=544619594 Contributors: Michael Hardy, Mlewan, Adoniscik, Nbarth, Dicklyon, Van helsing, PixelBot, Addbot, Yobot, Some standardized rigour and Helpful Pixie Bot • Sensitometry Source: https://en.wikipedia.org/wiki/Sensitometry?oldid=749280477 Contributors: Linuxlad, Lectonar, Rjwilmsi, The wub, Carrionluggage, Srleffler, Adoniscik, Kgyt, Marc Lacoste, Jbergquist, Aleenf1, Dicklyon, Iridescent, George100, Thijs!bot, J.delanoy, Dcouzin, Sintaku, MystBot, Addbot, Fgnievinski, Sillyfolkboy, Hunt059, AnomieBOT, Redbobblehat, Kendaniszewski, MastiBot, Cullen328, WikitanvirBot, Matthiaspaul, Helpful Pixie Bot, Jacopo188, BattyBot, ChrisGualtieri, Bender the Bot and Anonymous: 14 • Tone reproduction Source: https://en.wikipedia.org/wiki/Tone_reproduction?oldid=741251074 Contributors: Hooperbloob, Rjwilmsi, Jpowersny2, SmackBot, Chris the speller, Nbarth, Dicklyon, Edchilvers, Dspark76, Citation bot, Citation bot 1, Dcirovic, Helpful Pixie Bot, BattyBot, Monkbot, Bender the Bot and Anonymous: 4 • Gamma correction Source: https://en.wikipedia.org/wiki/Gamma_correction?oldid=751281772 Contributors: Damian Yerrick, AxelBoldt, Uriyan, The Anome, Tarquin, Andre Engels, Heron, Erik Zachte, Cyp, Mulad, Gutza, Cjmnyc, Furrykef, VeryVerily, Sandman~enwiki, Robbot, Hankwang, Kadin2048, Naddy, Modulatum, Dav4is, Mboverload, Macrakis, Bumm13, Perey, ZeroOne, Army1987, Richi, Zetawoof, Hooperbloob, Linuxlad, PAR, H2g2bob, Forderud, Oleg Alexandrov, Jaimetout, Jacobolus, Waldir, VsevolodSipakov, Bubba73, The wub, Ian Dunster, Bitoffish, Srleffler, Kri, Chobot, Adoniscik, YurikBot, Hydrargyrum, ENeville, Janke, Shotgunlee, Mysid, Ms2ger, SmackBot, Capnquackenbush, Barasoaindarra, [email protected], Tom Lougheed, David Woolley,
Gilliam, Stuarta, Nbarth, Fluggo~enwiki, Kevinpurcell, VMS Mosaic, Al Fecund, Jbergquist, Cosmix, Korval, Slogby, Hulmem, Jurohi, Dicklyon, Guyburns, Peyre, Caiaffa, SimonD, Simastrick, Outriggr (2006-2009), ChristTrekker, Hanche, Thijs!bot, Racaille, Davidhorman, Klausness, Lovibond, PhilKnight, Simongarrett, Magioladitis, Nikevich, MarcLevoy, Avsharath, Glrx, R'n'B, CommonsDelinker, EdBever, DanielEng, Wolfganghaak, Kano14, Evb-wiki, Totsugeki, Pcordes, Dcouzin, Pleasantville, SlipperyHippo, Ricardo Cancho Niemietz, Michael Frind, Brenont, MinorContributor, R39525, Eric.brasseur, Dantesoft, DragonBot, X-romix, ChrisHodgesUK, Eshouthe, Rubybrian, SP1R1TM4N, Addbot, KaiKemmann, Lightbot, Gary johnson 53, Yobot, Mike.hs, Hersfold tool account, Thehelpfulbot, FrescoBot, Maggyero, Rectec794613, WikitanvirBot, Bulgurc, Dcirovic, Mikhail Ryazanov, KlappCK, Watchpigsfly, Gmzh, JordoCo, Helpful Pixie Bot, Farnlogo, Inrobert, CitationCleanerBot, Mike Novikoff, Junkyardsparkle, Sashaaz99, Bender the Bot and Anonymous: 109 • Digital compositing Source: https://en.wikipedia.org/wiki/Digital_compositing?oldid=726815318 Contributors: The Anome, Pnm, Stw, Big Bob the Finder, Altenmann, LetterRip, Michael Snow, Asparagus, BenFrantzDale, Dnas, Gary D, Discospinster, CanisRufus, Apyule, Suguri F, Mathmo, Teemu Leisti, Graham87, Adoniscik, Quasimondo, Dannharriss, Mdwyer, TuukkaH, SmackBot, Bluebot, Bejnar, Ckatz, Hakanblomdahl, CmdrObot, Theoh, Cydebot, Ronbrinkmann, Ekabhishek, Magioladitis, Cmw72, R'n'B, Ppapadeas, VolkovBot, Iamthedeus, ObfuscatePenguin, Martarius, Laurinavicius, Ricvelozo, Margin1522, Legobot, Yobot, JonOomph, Dcirovic, DASHBotAV, JordoCo, BG19bot, Canoe1967, ChrisGualtieri, Codename Lisa, Boulbik, Kengorrie, Fmadd and Anonymous: 49 • Computational photography Source: https://en.wikipedia.org/wiki/Computational_photography?oldid=760899711 Contributors: The Anome, Ronz, Bearcat, David Koller, Connelly, Ds13, Niteowlneils, Jfdwolff, Quadell, Beland, Glogger,
Xezbeth, JohnyDog, Danhash, Woohookitty, Dvyost, Mariocki, Jpbowen, RL0919, SmackBot, Bluebot, JonHarder, Bayg, George100, Stodieck, Bobke, Chia-Kai Liang, Nikevich, MarcLevoy, Amitkagrawal, Pleasantville, Primal400, JabbaTheBot, Wikimonitorwest, WurmWoode, XLinkBot, Annicedda, MdoDude, Yobot, AceEdit, Galoubet, Jonkerz, Wrambro, Beccamlu, Srini.narasimhan, Newyorkadam, Meskun, Vsapiexpo, Zingbot, Tomsal291, S. Mathavan, TheRealPascalRascal and Anonymous: 36 • Inpainting Source: https://en.wikipedia.org/wiki/Inpainting?oldid=751426992 Contributors: Graeme Bartlett, Momotaro, Woohookitty, Rjwilmsi, Bgwhite, Adoniscik, SmackBot, Incnis Mrsi, Nbarth, Hu12, RichardMcCoy, CommonsDelinker, Johnbod, Llorenzi, Amroamroamro, Natg 19, Finnrind, Gbawden, TimmmmCam, MGusi, Addbot, KamikazeBot, Wallacer, AnomieBOT, Jim1138, Citation bot, Tannenmann, D'ohBot, Citation bot 1, Weichtiere, Matthiaspaul, Digitalmediaprocessing, DARIO SEVERI, Verbcatcher, David.moreno72, Mark viking, Br dimitrov, Iretk, 296.x, Monkbot, TheIdiotIndian, Thompson jeff, Equinox, Mrpetreli, GreenC bot, Artselector and Anonymous: 30 • Image processor Source: https://en.wikipedia.org/wiki/Image_processor?oldid=748641447 Contributors: JesseW, Fudoreaper, Imroy, Nikkimaria, Chris the speller, Nbarth, Z'Alai, Optomaniac, Luckas-bot, AnomieBOT, Mark Schierbecker, BenzolBot, ZéroBot, ClueBot NG, Matthiaspaul, Widr, KLBot2, Tagremover, Mark viking, ScotXW, Cpiebtanz and Anonymous: 10 • Noise reduction Source: https://en.wikipedia.org/wiki/Noise_reduction?oldid=761776776 Contributors: The Anome, Michael Hardy, Isomorphic, Smack, Mydogategodshat, Charles Matthews, Radiojon, Samsara, Blainster, Kent Wang, BenFrantzDale, Asc99c, Hugh2414, Wmahan, Markome, Sam Hocevar, Jayjg, Smokris, Cmdrjameson, Drange net, Hooperbloob, ProhibitOnions, Kenyon, Mindmatrix, Ruud Koot, Rufous, BD2412, Rjwilmsi, Cambridgeincolour, Amire80, Windchaser, WouterBot, Bgwhite, Adoniscik, SoBa~enwiki, RussBot, Anomalocaris, Megapixie,
Johndarrington, Voidxor, Whitejay251, DmitriyV, Crystallina, SmackBot, Pzavon, Betacommand, Keegan, Nbarth, Ned Scott, Lyran, DMacks, Byelf2007, Dicklyon, Hu12, JoeBot, NativeForeigner, JForget, Requestion, Daletan, Sinn, Quintote, Ingolfson, JAnDbot, Sangak, Verkhovensky, Adot, STBot, Sigmundpetersen, Althepal, STBotD, Aduvall, Pcordes, Bonadea, Llorenzi, Funandtrvl, VolkovBot, SQL, Nilx~enwiki, Yerpo, AMCKen, BrightRoundCircle, Idm196884, Binksternet, Toby.Turander, WikHead, Dekart, Addbot, Mohamad398, Josh1414, Yobot, AnomieBOT, DemocraticLuntz, JackieBot, Ulric1313, LilHelpa, Happyrabbit, Nasaverve, Paul Laroque, Gordoncedar, Cincoutprabu, Theohiosource, Zink Dawg, Spakin, Beta M, 3dec3, Dcirovic, Shwab, Joseph.elkhouri, ClueBot NG, Matthiaspaul, MelbourneStar, Stefanome, Helpful Pixie Bot, KLBot2, BG19bot, Katona4444, CitationCleanerBot, Cyberbot II, Garyatvocal, TwoTwoHello, Jamesx12345, 93, Lemnaminor, Valernur, Zonderr, Wikieditordownunder, SGrantStats, Danyahooeditor, Pspears35, InternetArchiveBot, GreenC bot, Bender the Bot, Taverr and Anonymous: 80 • Iterative reconstruction Source: https://en.wikipedia.org/wiki/Iterative_reconstruction?oldid=759649439 Contributors: Andreas Kaufmann, Seabifar, Wouterstomp, AjAldous, Isnow, Rjwilmsi, Bruce1ee, Snek01, Tribaal, SmackBot, Shrew, Mafiag~enwiki, Headbomb, Squids and Chips, Neparis, Edboas, Jfessler, Dthomsen8, Addbot, MrOllie, Yobot, J04n, Duoduoduo, Klar sagen, Klbrain, Martin.uecker, Ssscienccce, Svvenkatakrishnan, Iuyanik13, GXUST20140201002, Shadialbarqouni, Mim.cis and Anonymous: 15 • Distortion (optics) Source: https://en.wikipedia.org/wiki/Distortion_(optics)?oldid=757016044 Contributors: SimonP, Patrick, BenFrantzDale, Macrakis, Zzo38, Rich Farmbrough, Mdf, O18, Collinong, Mindmatrix, Jacobolus, Tabletop, Qwertyus, Bitoffish, Srleffler, Antilived, Adoniscik, Borgx, Cmglee, SmackBot, Marc Lacoste, Nbarth, Beetstra, Calibas, Dicklyon, Anoneditor, Thijs!bot, Lovibond, Joe Schmedley, Nyttend, SoyYo,
Kostisl, Tikiwont, Normankoren, Cmansley, OlavN, Aamackie, Guy Van Hooveld, WalrusJR, Odo Benus, XLinkBot, Nepenthes, Addbot, AnomieBOT, Tiger1027tw, Tom.Reding, Lotje, SpoonlessSA, John of Reading, Eekerz, Merlitz, Jmencisom, Gsarwa, Mail6543210, MerlIwBot, BG19bot, St900, Wolfgang42, BattyBot, Bakkedal, ChrisGualtieri, OhCh9voh, Monkbot, Yikkayaya, Plumonito, Bicubic, Interpuncts, Laotinant, Latosh Boris and Anonymous: 40

• Perspective control Source: https://en.wikipedia.org/wiki/Perspective_control?oldid=747081724 Contributors: Dmmaus, Imroy, Eleassar, Howcheng, Durova, Chris the speller, Nbarth, JAnDbot, Lumber Jack second account, Sagabot, Johnbod, Darrask, Motorrad-67, TorLillqvist, Addbot, Yobot, AnomieBOT, Flmz, User399, Ponydepression, Helpful Pixie Bot, Bender the Bot and Anonymous: 4 • Cropping (image) Source: https://en.wikipedia.org/wiki/Cropping_(image)?oldid=749405191 Contributors: Zundark, Wapcaplet, Eric119, Dysprosia, MathKnight, Leonard G., Hapsiainen, Wtmitchell, Cy21, Tabletop, XLerate, SchuminWeb, Nihiltres, Srleffler, YurikBot, SmackBot, Nbarth, Robofish, Dicklyon, Iridescent, MIckStephenson, Cydebot, Paddles, Slixor, Sunridin, Marek69, Nick Number, Seaphoto, Goldenrowley, MER-C, Magioladitis, JamesBWatson, R'n'B, Johnbod, WereSpielChequers, JSpung, Superdotman, Werldwayd, ClueBot, The Thing That Should Not Be, Keeper76, DragonBot, Kounosu, Apparition11, Addbot, Luckas-bot, THEN WHO WAS PHONE?, Eumolpo, ArthurBot, Ssspam, EmausBot, ZéroBot, Ethaniel, ClueBot NG, DBigXray, BG19bot, Loupeznik, BattyBot, Eyesnore, 0xF8E8, Radhakrishna.achanta and Anonymous: 48 • Image warping Source: https://en.wikipedia.org/wiki/Image_warping?oldid=731065085 Contributors: Patrick, Root4(one), Srleffler, Adoniscik, SmackBot, JzG, Iridescent, Pascal.Tesson, Davidhorman, AlexNewArtBot, Andy Dingley, Richmondman, DumZiBoT, Addbot, Doulos Christos, Robban75, Rezonansowy, Kevinjing001 and Anonymous: 11 • Vignetting Source: https://en.wikipedia.org/wiki/Vignetting?oldid=757101383 Contributors: Bryan Derksen, Egil, Gisle~enwiki, Ehn, Victor Engel, Silvonen, BenFrantzDale, Dmmaus, Deglr6328, Imroy, Kwamikagami, Fir0002, Jlencion, Hooperbloob, Romary, Ashley Pomeroy, LOL, Brighterorange, The wub, FlaBot, Margosbot~enwiki, Srleffler, Spasemunki, YurikBot, Janke, Xaje, EEMIV, PBurns, SmackBot, Nbarth, Amphytrite, Acdx, Mr Stephen, Raysonho, Mactographer, TonyTheTiger, JAnDbot,
Deflective, Dazp1970, Cjmclark, VolkovBot, Walvis, Vsst, Caltas, TheDefiniteArticle, Rocknrollsuicide, Odo Benus, Ggia, Mpd1989, Vanished User 1004, Jengirl1988, Addbot, Feigle, Ashanda, Googletalk, SeymourSycamore, Luckas-bot, Jandrewnelson, Rubinbot, OllieFury, S0aasdf2sf, J04n, Shadowjams, Intercrew 2k, Craig Pemberton, Tom.Reding, Lotje, Alph Bot, ZéroBot, A2soup, Philafrenzy, Gsarwa, Helpful Pixie Bot, Kathleen883, Ryker-Smith, Northamerica1000, Andreiscurei, Bobn2, Comfr, Tony Mach, Katterjohn, Tedbigham, Dhhbdbbf and Anonymous: 55 • Image compression Source: https://en.wikipedia.org/wiki/Image_compression?oldid=761381933 Contributors: Robert Merkel, Zundark, The Anome, Tarquin, Tbackstr, Gianfranco, Rade Kutil, Patrick, Minesweeper, Stevenj, Error, Rawr, Timwi, Grendelkhan, HaeB, Alan Liefting, OverlordQ, Doerfler, ZeroOne, Fir0002, Jumbuck, Kocio, PAR, Paul1337, Jheald, Pawnbroker, Gareth McCaughan, Preslethe, Roboto de Ajvol, GENtLe, Dbfirs, DmitriyV, Zvika, Burton Radons, Oli Filth, Jnavas, Gyrobo, Rludlow, Mosca, Metta Bubble, Hannes Agnarsson Johnson, Beetstra, Hu12, BananaFiend, Mblumber, Blitz Tiger, GentlemanGhost, Bobblehead, Jeph paul, Barek, Sterrys, RMN, Speck-Made, CommonsDelinker, J.delanoy, SpigotMap, Tatrgel, 83d40m, TXiKiBoT, Oshwah, Knacker ITA, Nagy, SieBot, YourEyesOnly, Sunayanaa, Judicatus, OKBot, GarconN2, Diazenefz, Diagramma Della Verita, XLinkBot, Yangez, SilvonenBot, Dekart, MrOllie, Legobot, Ptbotgourou, Fraggle81, Materialscientist, Ganesha Freya, Louperibot, Pinethicket, RedBot, Serols, RobinK, Dinamik-bot, Gigith, WikitanvirBot, Mordgier, JasonSaulG, Watson edin, Oldtimer1820, ZéroBot, ClueBot NG, BG19bot, Alex Ratushnyak, Rijinatwiki, BattyBot, Pratyya Ghosh, Jogfalls1947, Epicgenius, Citobun, Mrudhuhas, PiotrGrochowski000, Madalintudose, Vinchansoe, Gogoshzara and Anonymous: 61 • Lempel–Ziv–Welch Source: https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Welch?oldid=734784888 Contributors: Damian Yerrick,
Tobias Hoevekamp, Bryan Derksen, Zundark, Ap, Mjb, B4hand, Michael Hardy, MartinHarper, Geoffrey~enwiki, Williamv1138, J-Wiki, Den fjättrade ankan~enwiki, Dean p foster, Kragen, Dynabee, Jgade, Emperorbma, Timwi, Dcoetzee, Bemoeial, Dmsar, Magnus.de, Doradus, Furrykef, Saltine, Bevo, Topbanana, Phil Boswell, Robbot, Jaredwf, DHN, Cyrius, Giftlite, Dbenbenn, Paul Richter, Mintleaf~enwiki, Inkling, IRelayer, Paul Pogonyshev, Gary Jones, Alfa, Nayuki, Matthäus Wander, Pne, Uranographer, LucasVB, DCrazy, Stephan Leclercq, OverlordQ, Thincat, Marc Mongenet, E David Moyer, Quota, Salimfadhley, MementoVivere, Erehtsti, Udzu, Guanabot, Hcs, ArnoldReinhold, Kwamikagami, Shanes, Kgaughan, DaveGorman, Haham hanuka, Musiphil, CyberSkull, DariuszT, Lord Pistachio, Thryduulf, Philthecow, Cruccone, Urod, GregorB, Isnow, Graham87, Jenesis, Rjwilmsi, Sohmc, Mathbot, Elharaty, Nagytibi, Sasuke Sarutobi, Amshali, Canageek, Diotti, Misza13, Procedure, Frigoris, DmitriyV, Kiv, A bit iffy, SmackBot, Roger Hui, DevaSatyam, Gilliam, Gaiacarra, Iain.dalton, Thumperward, Oli Filth, DHN-bot~enwiki, Eliyahu S, Mrt doulaty, ElementFire, Lajm, Bagolop, Norm mit, JeffW, GDallimore, Senis, Chetvorno, Mellery, Snorkelman, Requestion, Hakkasberra, Ignoramibus, Demogog, Bdragon, Apantomimehorse, Mentifisto, InternetBummer, Danroa, Isilanes, JAnDbot, Arch dude, LittleOldMe, SiobhanHansa, Wlod, CountingPine, Abednigo, Gwern, Speck-Made, Skywalker2049, Dispenser, Ryan Postlethwaite, Ken g6, J ham3, TXiKiBoT, Jozue, Elphion, Fizzackerly, Cuddlyable3, Duncan.Hull, AlleborgoBot, Agranzot, SieBot, Lazyman2, TJRC, VVVBot, David Be, DaBler, Nuttycoconut, Lightmouse, ClueBot, 22367rh, Mark.t.nelson, Nanobear~enwiki, Kuvarian, Ykhwong, CodeCaster, Cappy-chan, XLinkBot, Hotcrocodile, Addbot, Yobot, Wojciech mula, AnomieBOT, Blue Prawn~enwiki, Kevin chn, Parvindersingh23, Jellystones, Liorma, Omnipaedista, RibotBOT, Damjan.jov, MoreNet, Spakin, Kerrick Staley, EmausBot, IncognitoErgoSum, Kiwi128,
One.Ouch.Zero, ClueBot NG, Stamen.V.Petrov, Deternitydx, Dexbot, Comp.arch, YITYNR and Anonymous: 157 • Image file formats Source: https://en.wikipedia.org/wiki/Image_file_formats?oldid=762059389 Contributors: Zundark, Dcljr, Kierant, Samsara, AnonMoos, RickBeton, Chris 73, DavidCary, JamesHoadley, Utcursch, Keoniphoenix, GreenReaper, Rich Farmbrough, Hhielscher, Koenige, Mancomb, Dystopos, Shenme, L.Willms, Teeks99, Varuna, Alansohn, Hu, Ronark, Rick Sidwell, Mikeo, Zshzn, Trevie, Phillipsacp, AlbertCahalan~enwiki, Waldir, Magister Mathematicae, Jijinmachina, Rjwilmsi, Wikibofh, Jdowland, Bubba73, DoubleBlue, Nihiltres, Gurch, DEIDATVM, DVdm, YurikBot, Wavelength, StuffOfInterest, Hydrargyrum, Cpuwhiz11, Wiki alf, Welsh, Aaron Schulz, Shotgunlee, Rwalker, Xpclient, FF2010, Closedmouth, Ketsuekigata, SmackBot, KnowledgeOfSelf, Hydrogen Iodide, KaiUwe, TFMcQ, Thunderboltz, Fitch, Pandion auk, MindlessXD, Kslays, Yamaguchi, Gilliam, Brianski, Bluebot, Oli Filth, TheSpectator, Jerome Charles Potts, ERobson, Kitteh~enwiki, Frap, Jennica, Reclarke, Rrburke, Aldaron, Ne0Freedom, AdmiralMemo, A5b, Bostwickenator, Spiritia, LSD, Peterlewis, CyrilB, Beetstra, Dicklyon, Kvng, Ojan, UncleDouggie, GDallimore, RekishiEJ, Thatsfearagain, Davidbspalding, Ludwig Boltzmann, CmdrObot, Spankman, KyraVixen, Yaris678, Mdc299f, Crossmr, ST47, Hounddog32, Vanished User jdksfajlasd, Ebichu63, Epbr123, Ultimus, N5iln, Bobblehead, Dfrg.msc, Icep, Jtmoon, MER-C, Instinct, LittleOldMe, SteveSims, VoABot II, BucsWeb, Catgut, LaVieEntiere, Esanchez7587, Jackson Peebles, HotXRock, Drewmutt, Shellwood, Shredder46, Petrwiki, Hodlipson, Mikael Häggström, Jorfer, KylieTastic, Stanqo, Tweisbach, Vanished user 39948282, Bonadea, Martial75, Tkgd2007, SoCalSuperEagle, Ryan032, Anna Lincoln, Elphion, J0EY Bukkake, Programcpp, Madhero88, Billinghurst, PeterEasthope, Dirkbb, Meters, Skarz, Jungegift, Darxus, Ben Boldt, Tresiden, Baltimark, Hebisddave, PookeyMaster, Oxymoron83, Jdaloner,
Johnakinjr01, Bencherlite (AWB), Dillard421, Paiev, Paulinho28, Hebbster, Kauai68, TheDooD111, HairyWombat, Loren.wilton, ClueBot, Hippo99, Digitalkiller, Senderovich, Drmies, Mild Bill Hiccup, Robmontagna, Paulcmnt, Excirial, Anon lynx, Vanisheduser12345, Lartoven, Sun Creator, Sonicdrewdriver, NuclearWarfare, Matty0wnz, Swindbot, ZX787, BarretB, XLinkBot, Spitfire, Dlpkbr, Little Mountain 5, Skarebo, WikHead, Addbot, Legaladvantagellc, Ghettoblaster, Some jerk on the Internet, Mabdul, Non-dropframe, Atethnekos, Blethering Scot, Jncraton, Charstiny, Chamal N, Tassedethe, Alfie66, Math Champion, Yobot, Kpharshan, AnomieBOT, Ciphers,

1exec1, Jim1138, Galoubet, Piano non troppo, Kingpin13, Halfs, Materialscientist, Gsmgm, Andrew-916, Ched, China cobra, Ubermensk, Nantucketnoon, A.amitkumar, Dougofborg, FrescoBot, Michael93555, Mfwitten, Outback the koala, Davydoo, I dream of horses, Quantumsilverfish, Lotje, Rio98765, Jpcha2, Barry Pearson, Jeffrd10, EmausBot, John of Reading, T3dkjn89q00vl02Cxp1kqs3x7, Immunize, Dewritech, Mo ainm, Dcirovic, K6ka, Checkingfax, MorbidEntree, Walter.Arrighetti, AOC25, Tolly4bolly, Bomazi, Davidcarroll123456, 28bot, ClueBot NG, Matthiaspaul, MelbourneStar, This lousy T-shirt, Satellizer, Shaddim, 123Hedgehog456, Very trivial, Widr, Antiqueight, Treyofdenmark, Be..anyone, Ankamsarav, Hza a 9, Nospildoh, Mark Arsten, Petejbell, Designer4u, Glacialfox, Klilidiplomus, JGM73, Murughendra, Pratyya Ghosh, SergeantHippyZombie, Basemetal, TwoTwoHello, Stewwie, Faizan, Bughuntr, Fumiko Take, Jodosma, Tentinator, Airsynth, Serpinium, Sahya04, Makkachin, Nikmolnar, Eduardo Leal 20, XavierXerxes, Cousteau, Lovepercy4ever, Kisses888, Horseless Headman, WikiWisePowder, BethNaught, Avi0307, Domsacraft, Narky Blert, Patchoulol~enwiki, Oleaster, DiscantX, My Chemistry romantic, CAPTAIN RAJU, EvanStathem, Simplexity22, DatGuy, Hanscoil, 123nav, Fmadd, Gamezone123, Monkeyman123 PRO and Anonymous: 468 • Comparison of graphics file formats Source: https://en.wikipedia.org/wiki/Comparison_of_graphics_file_formats?oldid=761497232 Contributors: Ahoerstemeier, Nickshanks, Jerzy, Ashley Y, Rursus, DocWatson42, Tweenk, Chowbok, Kaldari, EagleOne, Pmsyyz, Anphanax, Dennis Brown, Bfg, CyberSkull, Paul1337, Maladon, Someoneinmyheadbutit'snotme, Kbolino, Karnesky, Mangojuice, Waldir, Havarhen, Bubuka, JVz, Gudeldar, Gurch, Mucus, Nicholasink, Bdelisle, Steppres, Logixoul, Sean2000, MonMan, Shinmawa, Mikeblas, ColdFusion650, Xpclient, Lennylen, TrickyTank, Donhalcon, JLaTondre, Spaceboy492, Yonir, SmackBot, Pbb, Mihai cartoaje, Xephael, Martin.Budden, JJay, Baa, Frap, VMS Mosaic,
UU, Radagast83, Danikayser84, Mike Richardson, Spiritia, Soumyasch, Sky- Walker, FleetCommand, Ludwig Boltzmann, Skybon, Cydebot, Boardhead, The machine512, SummonerMarc, MegamanX64, Grayshi, Frank1239, Gioto, JimScott, Andersb~enwiki, CForrester, Alphachimpbot, Vituperex, Magioladitis, VoABot II, JamesBWatson, Bobby D. DS., Minimiscience, My Alt Account, Ariel., LordPhobos, Glrx, Vi2, Gblandst, Algotr, Jeepday, Althepal, Stanqo, OlavN, JhsBot, Billinghurst, Haseo9999, AJRobbins, Darxus, Wrldwzrd89, Dns13, Nemo20000, This, that and the other, MilFlyboy, Radon210, Silver- backNet, Spitfire19, Anyeverybody, Kauai68, ClueBot, Plastikspork, GarconN2, Niceguyedc, Hs4pratt, Narendra Sisodiya, Sun Creator, Matty0wnz, Puceron, DanielPharos, Libcub, Kbdankbot, Addbot, Ghettoblaster, Mabdul, CanadianLinuxUser, Tassedethe, LEADTech- nologies, TaBOT-zerem, Manwichosu, Crazykasey, Efa, Jim1138, Anybody, FrescoBot, Gcalis, WaldirBot, Steve Quinn, Lemonsourkid, Barry Pearson, Canyq, Christoph hausner, Zollerriia, Larsgrobe, GoingBatty, Mo ainm, Oldtimer1820, Arman Cagle, Rmashhadi, MerlI- wBot, Be..anyone, Frign, Myutwo33, Designer4u, Dobie80, Jon Sneyers, Yamaha5, YiFeiBot, Graboy1234, GeoffreyT2000 and Anony- mous: 90 • TIFF Source: https://en.wikipedia.org/wiki/TIFF?oldid=762606999 Contributors: Kpjas, Brion VIBBER, Zundark, Tarquin, Alex.tan, Andre Engels, Fnielsen, Maury Markowitz, Bewildebeast, DopefishJustin, Ahoerstemeier, Whkoh, Jimregan, Evercat, Emperorbma, Furrykef, Carbuncle, JorgeGG, Anthony Fok, Robbot, Peak, Flauto Dolce, Goodralph, Cyrius, Mattflaschen, SamB, Taviso, Vk2tds, Ssd, Chinasaur, Behnam, Horatio, LucasVB, OverlordQ, JnB987, Vina, Zantolak, Bumm13, Icairns, Bk0, BrianWilloughby, Fuzlyssa, Moxfyre, EagleOne, Imroy, Jiy, JTN, Pak21, Narsil, Bender235, Ajmas, Pedant, Polluks, FrankWarmerdam, Giraffedata, Kjkolb, Minghong, Bijee~enwiki, Guy Harris, CyberSkull, Kocio, Metron4, Yogi de, Suruena, Gpvos, Amorymeltzer, Mahanga, Bobrayner, OwenX, Rocastelo, 
Sburke, Phillipsacp, Pol098, Rchamberlain, Frankie1969, Mandarax, Cuchullain, Rjwilmsi, Jivecat, Salix alba, Ad- jamir, FlaBot, Nihiltres, Crazycomputers, Chobot, Jpfagerback, YurikBot, Wavelength, RobotE, Crotalus horridus, NTBot~enwiki, Retodon8, Cliffb, Hede2000, Hellbus, Hydrargyrum, CambridgeBayWeather, Wiki alf, Grafen, DavidH, Hymyly, Goffrie, Xpclient, Rwxrwxrwx, StealthFox, 2fort5r, GrinBot~enwiki, Palapa, SmackBot, K-UNIT, Pieleric, Onebravemonkey, David Woolley, Gilliam, Durova, MalafayaBot, Mdwh, Jerome Charles Potts, Modest Genius, Can't sleep, clown will eat me, Frap, Mhaeberli, Delivery:435, Mwtoews, Wybot, Kendrick7, Ozhiker, Guyjohnston, ManiacK, Vindictive Warrior, Iliev, Deditos, Ckatz, Loadmaster, Roregan, AE- Moreira042281, David Souther, Scarlsen, GDallimore, Aeons, Ivan Pozdeev, PorthosBot, JohnCD, Dgw, Walterarlenhenry, Seagoat8888, MaxEnt, CCFreak2K, Cydebot, Dclunie, Crossmr, Boardhead, Sygnosis, Docmgmt, Thijs!bot, Mojo Hand, Escarbot, AntiVandalBot, Gioto, Luna Santin, 49oxen, TimVickers, Tabanger, Cchuter, Glennwells, Outsid3r, Nikbro, A4, SHCarter, Mclay1, Nathan192001, LorenzoB, SperryTS, Ghostwo, Jim.henderson, Akurn, Glrx, J.delanoy, Nemo bis, Little Professor, AntiSpamBot, Liliana-60, Comet- styles, Shamatt, Idioma-bot, Spellcast, Sam Blacketer, CWii, Grammarmonger, Ryan032, DrSlony, Jeanhaney, Cootiequits, LeaveSleaves, Katimawan2005, Kbrose, Maddiemoo39, SieBot, Karaboom, Paolo.dL, Cpeel, HairyWombat, ClueBot, Deedub1983, IceUnshattered, RRS Trojan, RFST, Nekrobutcher, DocBambs, No such user, Propeng, Alexbot, M4gnum0n, Anon lynx, Thingg, 7, Bitbank, Dfoxvog, Addbot, Mortense, Ghettoblaster, Landon1980, TheNeutroniumAlchemist, CarsracBot, Kstiever, Wikajaro, OlEnglish, Kalkühl, Neu- rovelho, Yobot, ShreeHari, Jeffz1, AnomieBOT, Historyfiend2000, Xqbot, Peterdx, GrouchoBot, RibotBOT, Kyrios320, ToraNeko, Ink- tvis~enwiki, Green Cardamom, Tamariki, Jonesey95, RedBot, Soc88, Lotje, Lrosenthol, Barry Pearson, January, Andrewjfs, 
Onel5969, Mean as custard, DexDor, Alph Bot, Doughorner, EmausBot, Angrytoast, Vietkidd, Dewritech, Zipperhead46163, Dcirovic, Phiarc, Ddbtek, Max theprintspace, Phoenixthebird, Josve05a, AOC25, H3llBot, Unreal7, GrindtXX, ClueBot NG, Matthiaspaul, Derschmidt, Be..anyone, Helpful Pixie Bot, Wbm1058, BG19bot, Hza a 9, MusikAnimal, Lorinthar, Onewhohelps, Wowowowowowo, Holvoetj, Thesquaregroot, Thegreatgrabber, BattyBot, Cyberbot II, Jogfalls1947, RMCD bot, Kitsurenda, Myconix, Haminoon, Spike0xff, Nyuszika7H, LeeuwenVanBas, Monkbot, OMPIRE, AlexGuzaev77, Bobsrislolha, MB298, GreenC bot, Gluons12, DIYeditor and Anonymous: 253 • Raw image format Source: https://en.wikipedia.org/wiki/Raw_image_format?oldid=764001577 Contributors: Zundark, Nate Silva, Ed- ward, Julesd, Gisle~enwiki, Ilyanep, Charles Matthews, Jeversol, Samsara, Robbot, Dale Arnett, Boffy b, DocWatson42, BenFrantzDale, Gids~enwiki, Dmmaus, AlistairMcMillan, Rworsnop, Aughtandzero, DragonflySixtyseven, Ary29, Ukexpat, Fg2, Moxfyre, Qutezuce, Alistair1978, Mattdm, Bobo192, Tmh, L.Willms, Towel401, M5, Alansohn, Gary, Guy Harris, CyberSkull, Melaen, Ronark, Suru- ena, Derbeth, Wheaty, H2g2bob, SteinbDJ, Stuartyeates, Stemonitis, Mindmatrix, Jackel, Pol098, AlbertCahalan~enwiki, BlaiseFE- gan, BD2412, ABot, Bubba73, Nandesuka, W3bbo, Gringer, Baryonic Being, Gary Cziko, Jfriedl, Simishag, Groogle, Gaius Cor- nelius, Rsrikanth05, Ebow, Daveswagon, Curtis Clark, Hub, Welsh, ONEder Boy, Kxjan, Shotgunlee, Darkfred, Meaney, Oliverdl, Sandstein, SMcCandlish, GraemeL, StevoB, SmackBot, Joseph S. 
Wisniewski, Ckaiserca, InverseHypercube, Martin.Budden, Marc Lacoste, Eskimbot, Scott Paeth, Gilliam, Folajimi, Betacommand, Lubos, BitterSTAR, Stevage, Nbarth, Projectbluebird, Doc0tis, Frap, John Hupp, Berland, Midnightcomm, Pandar~enwiki, CypherXero, Mion, Rklawton, Guyjohnston, Khazar, Euchiasmus, WalterMB, Nobunaga24, Ckatz, 16@r, JHunterJ, Waterwingz, Dicklyon, Dr.K., Vincecate, Eliashc, Hu12, Joseph Solis in Australia, Dein Freund der Baum, Urebelscum, Raysonho, Makeemlighter, Bladez, DeLarge, Kushal one, DShantz, MoritzMoeller, Hydraton31, Carl Turner, Ykliu, Hometack, Tawkerbot4, GregDowning, Bpadinha, AstroPig7, Thijs!bot, N5iln, AlexSpurling, Davidhorman, Jonny-mt, Lovibond, Noteworthy, Bernopedia, Carstenw, B-rat, Kuteni, JAnDbot, Kebertxela, ChuckOp, LittleOldMe, Micksterama, Faizhaider, Gphoto, Criadoperez, Jim.henderson, R'n'B, Nono64, Mausy5043, J.delanoy, Svetovid, Herbythyme, Mcg3o, Fridolin, Dargaud, TommyImages, Yaocwiki, M-le-mot-dit, Zojj, Althepal, STBotD, Psydexzerity, Daleala, Gregdmclaren, RJASE1, VasilievVV, DrSlony, Tumblingsky, WMBP, Wikifreakfr, Owengibbins, Bealevideo, BwDraco, Badly Bradley, Warp foo, Forlornturtle, Vchimpanzee, Quincylover~enwiki, Schnellundleicht, Logan, Darxus, Milkbreak, SieBot, Bexyisbored, Vexorg, Martinperlin, Lennartgoosens, Soler97, Kschengs, DavidAndersen,

Ddxc, Pat Hawks, Hardloaf, Svick, AC007, ClueBot, Hippo99, GarconN2, Wispanow, Wikit2007, Boing! said Zebedee, Niceguyedc, Xenoptrix, Ykhwong, StanContributor, Libor.tinka, JasonAQuest, Oddstray, GFHandel, Innov8or, Axelriet, DumZiBoT, XLinkBot, Ash773, Rror, Addbot, Macray, Bytec, Tassedethe, Arspr, Rafielad, Sven Boisen, Luckas-bot, Yobot, David Jaša, AnomieBOT, Dar- ryl101, Photographerguy, CeciliaPang, Konrad Klacke, Bergos~enwiki, Almabot, Zeus57, GrouchoBot, Nickmn, RibotBOT, Pmjde- bruijn, Joaquin008, Noisy Crew, Celuici, Nagualdesign, GT5162, FrescoBot, HuggormSWE, Tamariki, Schngrg, Wokstation, Tim1357, TobeBot, OlgaVictorovna, Barry Pearson, Lopifalko, Salvio giuliano, EmausBot, Super48paul, Dewritech, GoingBatty, EleferenBot, Dcirovic, QuentinUK, Walter.Arrighetti, Fred Gandt, 1Veertje, Jakub Onderka, Gistya, Gsarwa, Lutoma, ClueBot NG, Matthiaspaul, Gj67, Fakeymacfake, Andu novac, Be..anyone, Helpful Pixie Bot, Ickspinne, 2001:db8, BG19bot, Bs27975, Mrjohntestrup, Brentworks, Badon, LuisBezanilla, DuckLahz, Nfwehner, Osanamanjaro, ChrisGualtieri, Colorsystem, Codename Lisa, Jamesx12345, Kamradeg, Forrest Canton, CiroFlexo, JohnSnow78, AlexZubaev2, Tymon.r, Lulu.water, Georgebusher, Aestheticvapor, Bender the Bot, Hap- pyPengu, Kwalker31 and Anonymous: 301 • Digital Negative Source: https://en.wikipedia.org/wiki/Digital_Negative?oldid=754515116 Contributors: Zundark, Leandrod, Gren- delkhan, TMC1221, Vespristiano, David Gerard, RScheiber, AlistairMcMillan, Golbez, Oneiros, Ary29, Olivier Debre, Chmod007, O'Dea, Sysy, Bender235, Whosyourjudas, Bontenbal, Zapyon, Rick Sidwell, Suruena, Max Naylor, Karnesky, Mindmatrix, Bluemoose, Hideyuki, Ilya, Rjwilmsi, Rogerd, Bubba73, Gavinatkinson, FlaBot, Eddie Parker, King of Hearts, DaGizza, Bgwhite, Sneak, Oliverdl, Sandstein, Nikkimaria, SmackBot, InverseHypercube, Dylnuge, Chris the speller, Stevage, George Ho, Radagast83, Ozhiker, Dicklyon, Sharcho, Storm2005, StanfordProgrammer, Paul Foxworthy, Davidbspalding, 
PKT, Gioto, Luna Santin, Ccwnj, Bernopedia, Paul1776, Belg4mit, Boleslaw, .anacondabot, Demerara, Diegojc~enwiki, Ignavia, Jim.henderson, Steadivision, DH85868993, Travelster, Tum- blingsky, Badly Bradley, Darxus, J-p krelli, Martarius, Keyur mithawala, EoGuy, Dantesoft, Alexbot, Calimo, DumZiBoT, Jdelgama, Deineka, Addbot, Ruftytufty57, Gul e, WillT.Net, Tassedethe, Matekm, Alfie66, Sven Boisen, Yobot, Ptbotgourou, Yngvadottir, AnomieBOT, Götz, Oduis, RandomAct, Cidcn, Xposurepro, Jørgen K H Knudsen, FrescoBot, Hermansgallery, Hexafluoride, DrilBot, Gnepets, Tamariki, OlgaVictorovna, Barry Pearson, Kmonsoor, EmausBot, SamirGunic, GoingBatty, Gsarwa, Orange Suede Sofa, Matthiaspaul, Liberulo, Snotbot, OKIsItJustMe, BG19bot, Browndg, Hamish59, Andyfunderburk, Makecat-bot, AlexZugaev2, AlexGuzaev1972, InternetArchive- Bot, Ciccio wiki and Anonymous: 79 • List of cameras supporting a raw format Source: https://en.wikipedia.org/wiki/List_of_cameras_supporting_a_raw_format?oldid= 763701833 Contributors: Samsara, Dale Arnett, Moxfyre, Jpk, Bo Lindbergh, Supergloom, Juan Toledo, Richard Taytor, Pol098, Sdg- jake, Rjwilmsi, JamesLee, Mikeblas, Ohnoitsjamie, Cquarksnow, Stevage, Onceler, Berland, Cyrix~enwiki, Pandar~enwiki, BlueStream, Salamurai, Makyen, Danger bird, Raysonho, Peripitus, Omnicloud, Dancter, RRJackson, LG4761, Perfectionniste, Kmwiki, RenniePet, Joshmt, Jevansen, Gnulxusr, RedAndr, SieBot, Dziadeck, Sjmtlewy, Vexorg, JT72, CubeBubi, Fernandocenteno, Vedranzeman, Vanhorn, Lockoom, Reboot81, Im99k, Archone, Ayr~enwiki, Addbot, Jay Pegg, Tassedethe, NikOly, Yobot, Zedjunior, Nworegontex, Acropia, Kb2zuz, Opticron, Infl3xion, Locobot, Chn Andy, Ll1324, FrescoBot, Fila66, LittleWink, MDGx, Mardy.tardi, Barry Pearson, Rostami- , Lopifalko, Walter.Arrighetti, Gsarwa, Angerdan, ClueBot NG, Matthiaspaul, Gmcconkey, Michael Barker, Be..anyone, The Apoorv, BattyBot, Joerg71, Iskander HFC, Tagremover, Umang97g, Sm7x7, Jl91569, Hpguner, Jasper Crowe, SomeUser1B, Kenji 
Gunawan and Anonymous: 125 • Exif Source: https://en.wikipedia.org/wiki/Exif?oldid=760345810 Contributors: AxelBoldt, Zundark, Taw, SimonP, Thosp, Bob Jonkman, Patrick, Wwwwolf, (, Mac, Technopilgrim, Jdstroy, Furrykef, Grendelkhan, Shizhao, Francs2000, Robbot, Chealer, Pigsonthewing, Peak, Auric, David Gerard, Céréales Killer, Mintleaf~enwiki, Jmoliver, BenFrantzDale, Markus Kuhn, Dmmaus, AlistairMcMillan, Joe Rodgers, Sam Hocevar, Rlcantwell, Elwell, Anirvan, Ojw, Moxfyre, Reinthal, Imroy, Sysy, Discospinster, Rich Farmbrough, Qutezuce, Mumonkan, Alistair1978, Bender235, Closeapple, Kaszeta, PhilHibbs, Chillmann, NetBot, Sortior, Jeffmedkeff, Flxmghvgvk, .:Ajvol:., Giraffedata, AndrewRH, Shlomital, Hooperbloob, Musiphil, Guy Harris, Free Bear, Zippanova, Water Bottle, Swift, Hu, Rick Sidwell, Mnemo, Mrtngslr, Mindmatrix, Phillipsacp, Commander Keane, Sdgjake, Lensovet, Outlyer, Emerson7, Rjwilmsi, Feydey, FlaBot, Eu- bot, Krusch, Srleffler, YurikBot, Borgx, Chuck Carroll, Groogle, Hydrargyrum, Barefootguru, Hub, Blitterbug, TBarregren, Tonymec, Mike Serfas, Naasking, Viory, Petri Krohn, JLaTondre, Fourohfour, SmackBot, Adam majewski, InverseHypercube, Od Mishehu, Ul- tramandk, Eskimbot, Mdd4696, DazB, Gilliam, Andy M. 
Wang, Marc Kupper, Sloanr, Alex brollo, OrangeDog, Mdwh, Drewnoakes, Colonies Chris, OrphanBot, Josterpi, Lox, Wiz9999, Ianmacm, Cybercobra, Henning Makholm, Mion, Ozhiker, Roguegeek, Rahulsah- gal~enwiki, Groggy Dice, Dicklyon, Bashari, Bill Malloy, Paul Foxworthy, Trialsanderrors, Zahn, JosephWong, GeoGuy, CmdrObot, Trav1085, Oosoom, Boardhead, Omicronpersei8, Bobblehead, Phooto, Gioto, Michelle1206, Bernopedia, Minimice, JAnDbot, MER-C, Pjpalm1964, Geoffadams, Lazko~enwiki, MartinDK, Xsystems, J.P.Lon, Ishi Gustaedr, Nyttend, Adam4445, Stolsvik, Misibacsi, Gw- ern, Jim.henderson, Speck-Made, Glrx, Nono64, Theo Mark, Mike.lifeguard, Kandy Talbot, Squidfryerchef, Pegase~enwiki, VolkovBot, Voorlandt, Jtygs, Bonzon, Andy Dingley, Seneca91, Monty845, AlleborgoBot, SieBot, BotMultichill, Armydelay~enwiki, Yerpo, ImageR- emovalBot, Martarius, Aberigan, Trivialist, Alexbot, Socrates2008, Manco Capac, Twopilots, AlanPater, Onomou, XLinkBot, Lkovac, Kwjbot, Addbot, SpellingBot, Taasss, Kenyoni, Matěj Grabovský, Yobot, AnomieBOT, Götz, Photographerguy, Materialscientist, Are you ready for IPv6?, Citation bot, LilHelpa, MauritsBot, Xqbot, RibotBOT, Mrwzl, WaysToEscape, FrescoBot, Davidt8, Paine Ellsworth, BenzolBot, Winterst, Tamariki, Tom.Reding, Feldermouse, Lars Washington, Jandalhandler, Cnwilliams, Hasenläufer, Chris Caven, Dr. 
Vicodine, Lotje, ErikvanB, ArwinJ, RjwilmsiBot, EmausBot, John of Reading, Exif2, Odog502, Idhodwiq, H3llBot, Gunarpenikis, WalterTross, Hajj 3, ChuispastonBot, Ovianao, Pasanen1, Mikhail Ryazanov, ClueBot NG, Matthiaspaul, Skoot13, Wdchk, Semyonov, Wbm1058, Lowriejoe, RiyaKeralore, Oxydendrum, Pacerier, MrBill3, Deaconous, BattyBot, Cyberbot II, SoledadKabocha, Jogfalls1947, RMCD bot, Andyhowlett, हिंदुस्थान वासी, 32RB17, Southernchoppa, Aarreedd, Bbfd, Tracygoesrawr, Crosslink06, Nick4me, Mechak- shsim, Ozusut, Magno Almeida, Tom29739, TLAS15, GreenC bot, Seba5tien, Lunaa.chocolate, Bender the Bot and Anonymous: 214 • Comparison of image viewers Source: https://en.wikipedia.org/wiki/Comparison_of_image_viewers?oldid=761937616 Contributors: Zundark, Jdlh, Voidvector, Dcljr, Mythril, SebastianHelm, Ed g2s, Samsara, Mackensen, ClemRutter, Ctachme, Maximaximax, ZZyXx, Ojw, EagleOne, Blorg, MCBastos, Kjoonlee, BACbKA, Longhair, Badmonkey0001, Furrybeagle, Gal Buki, Minghong, Bfg, Guy Harris, CyberSkull, JimParsons, Wouterstomp, Swift, Paul1337, Xerol~enwiki, Woohookitty, Karnesky, RHaworth, Phillipsacp, RzR~enwiki, Sdgjake, Santiago Roza (Kq), Saringer, Reisio, Drbogdan, Bubuka, Raffaele Megabyte, Gudeldar, TuX~enwiki, Mark83, GreyCat, Com- potatoj, Steppres, YurikBot, Sceptre, IanManka, Hkdennis2k, Futurix, ONEder Boy, Thiseye, Philbull, Troodon~enwiki, Mike92591, Rwxrwxrwx, Codes02, JQF, Janizary, Fram, Bewebste, Krasu, Donhalcon, JLaTondre, Mardus, RenegadeMinds, Daivox, User24, SmackBot, Pbb, ArnoGourdol, Eskimbot, JJay, Grawity, Giandrea, Richmeister, Valent, Jcarroll, Xpi6, Simon123, Djdole, Miguel An- drade, Colonies Chris, Michael.Pohoreski, Escottf, Henning Makholm, Jrvz, Kamenlitchev, Slakr, Woodroar, AlexLibman, Paul Foxwor- thy, Tawkerbot2, FleetCommand, CmdrObot, Raysonho, Rawling, ShelfSkewed, Skybon, Kjr99044, Themightyquill, Pit-yacker, Cyde- bot, Mblumber, Naarcissus, Medovina, Lofote, Dick stevens, Connectionfailure, JamesAM, Bobblehead, SGGH, 
AstroFloyd, Aquilosion, CharlotteWebb, Dawnseeker2000, Gioto, Snailshoes, Wanders1, Firefeather, Dedmond29, Dogru144, Michig, Deslicp, Jusuwa, MisterTroy, Ddyer, Nevit, Gwern, Speck-Made, RuineR, R'n'B, CommonsDelinker, 6502~enwiki, Nono64, Dgebel, TanatOS, M-le-mot-dit,

RayForma, Tambora1815, Gwen Gale, Kukos, Sebbu~enwiki, Travelster, Dqeswn, Wikifreakfr, OlavN, Una Smith, X-Bert, Viharm, MartinPackerIBM, Leehach, Utics, AlleborgoBot, Wrldwzrd89, Gregbedford, TJRC, Mike ruar, Pinkano, Garde, Bandrewfox, Xentrax, Alecs.y.rodez, Huku-chan, TypoBot, Drfrogsplat, AC007, DEEJAY JPM, Hippo99, Thyuji, Bf109hartmann, Infograph1, Niceguyedc, Edelsteen, Jcouvaras, Rhododendrites, Kh555~enwiki, IMPly, Mlaffs, NataliyaIT, Axelriet, DumZiBoT, Wektt, Snurre86, MystBot, Wikitikivandy, Addbot, DemonEyesBob, MrOllie, Uncia, Favonian, DeerLite, KaiKemmann, OlEnglish, Fiftyquid, Yobot, Manwichosu, AnomieBOT, Crazykasey, Ipatrol, Yotcmdr, Pictomio, Toolumpy, LilHelpa, Cameron Scott, MiltonSegura, Dog funtom, Dima.fedorov, J04n, Jinglejangeljazzpimp, Snapactster, Abigor, Alex Mak, Armanic, FrescoBot, Logiphile, Gerhardkgmx, Junior76fr, Jamouse, Lit- tleWink, Lemonsourkid, Skidadpa, Tdjprj, Txt.file, Flegmatica, Barry Pearson, Seyyedalih, Christoph hausner, Bokarevitch, Nongrezzo, EmausBot, Cmuelle8, Dewritech, Anuoldman, GoingBatty, Errortoday1, Moswento, Dannyp1996, Photorecon, Shencypeter, Nobelium, NoCoolNamesRemain, Providus, Palosirkka, Havkacik, ClueBot NG, Cipilica, Piast93, Micsutka, KKAtan, GabrielHWhite, Be..anyone, SAPryor, Mark Arsten, DuckLahz, RadicalRedRaccoon, Oranjblud, Geyser1951, KiwiNeko14, Lemnaminor, Derward5, PhotoSoftwar- eReviewer, Hgourvest, ScotXW, Andersp dk, Jamesb6545, Stephen.elver, Anjefu, Wedepe001, Rockdrjones and Anonymous: 247 • IPTC Information Interchange Model Source: https://en.wikipedia.org/wiki/IPTC_Information_Interchange_Model?oldid=710687098 Contributors: Rich Farmbrough, GregorB, Ketiltrout, Pok148, JLaTondre, Makyen, Hu12, Paul Foxworthy, Casablanca2000in, Vigilius, Addbot, Baranger, Lightbot, Flukas, Citation bot, Citation bot 1, SporkBot, Senator2029, Pokbot, Semyonov, Marienh, Michipedian, Jdx, Agebhardnyc and Anonymous: 17

11.2 Images

• File:113th_congress_usa_women_version_altered_by_office_of_House_Minority_Leader.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/a5/113th_congress_usa_women_version_altered_by_office_of_House_Minority_Leader.jpg License: Public domain Contributors: Original: US Congress; This (altered) version: Office of the House Minority Leader Original artist: Original: US Congress; This (altered) version: Office of the House Minority Leader
• File:405px-Cathedral_Notre-Dame_de_Reims,_France_Combo.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/a2/405px-Cathedral_Notre-Dame_de_Reims%2C_France_Combo.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:Adjustment-layer.jpg Source: https://upload.wikimedia.org/wikipedia/commons/7/76/Adjustment-layer.jpg License: Public domain Contributors: Own work Original artist: Magnus Lewan
• File:Adobe_HQ.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/64/Adobe_HQ.jpg License: CC-BY-SA-3.0 Contributors: Own work Original artist: Coolcaesar
• File:Altmer_High_Elves_trapped_and_outnumbered_but_we_won't_die_today.jpg Source: https://upload.wikimedia.org/wikipedia/commons/0/08/Altmer_High_Elves_trapped_and_outnumbered_but_we_won%27t_die_today.jpg License: CC BY-SA 3.0 Contributors: Altmer High Elf three point defense.png; 174 - Torres del Paines - Janvier 2010 - downsample.jpg Original artist: I distribute this image under the Creative Commons Attribution 3.0 Unported License

• File:Ambox_important.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Ambox_important.svg License: Public domain Contributors: Own work, based off of Image:Ambox scales.svg Original artist: Dsmurat (talk · contribs)
• File:Archery_Target_80cm.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d5/Archery_Target_80cm.svg License: CC BY-SA 2.5 Contributors: Own work Original artist: Alberto Barbati
• File:Astigmatism.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b3/Astigmatism.svg License: CC-BY-SA-3.0 Contributors: self-made/own work (created with Inkscape 0.44) Original artist: Sebastian Kosch (Ginger Penguin)
• File:Backlight-wedding.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/a7/Backlight-wedding.jpg License: Attribution Contributors: Own work Original artist: David Ball
• File:Barrel_(PSF).png Source: https://upload.wikimedia.org/wikipedia/commons/5/56/Barrel_%28PSF%29.png License: Public domain Contributors: Archives of Pearson Scott Foresman, donated to the Wikimedia Foundation Original artist: Pearson Scott Foresman
• File:Barrel_distortion.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/63/Barrel_distortion.svg License: Public domain Contributors: Own work Original artist: WolfWings
• File:Beach-Original_and_Clearer-png.png Source: https://upload.wikimedia.org/wikipedia/commons/4/42/Beach-Original_and_Clearer-png.png License: CC BY-SA 4.0 Contributors: Own work Original artist: Ofek Gila

• File:Before_and_After_example_of_Advanced_Dynamic_Blending_Technique_created_by_Elia_Locardi.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/6e/Before_and_After_example_of_Advanced_Dynamic_Blending_Technique_created_by_Elia_Locardi.jpg License: CC BY-SA 4.0 Contributors: Own work Original artist: Elia Locardi
• File:Bundesarchiv_Bild_146-1978-086-03,_Joseph_Goebbels_mit_Familie.jpg Source: https://upload.wikimedia.org/wikipedia/commons/c/cc/Bundesarchiv_Bild_146-1978-086-03%2C_Joseph_Goebbels_mit_Familie.jpg License: CC BY-SA 3.0 de Contributors: This image was provided to Wikimedia Commons by the German Federal Archive (Deutsches Bundesarchiv) as part of a cooperation project. The German Federal Archive guarantees an authentic representation only using the originals (negative and/or positive), resp. the digitalization of the originals as provided by the Digital Image Archive. Original artist: Unknown
• File:Calanit004.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/be/Calanit004.jpg License: CC BY 3.0 Contributors: Own work Original artist: MathKnight and Zachi Evenor
• File:Cathedral_Notre-Dame_de_Reims,_France-PerCorr.jpg Source: https://upload.wikimedia.org/wikipedia/commons/4/44/Cathedral_Notre-Dame_de_Reims%2C_France-PerCorr.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:Cathedral_Notre-Dame_de_Reims,_France.jpg Source: https://upload.wikimedia.org/wikipedia/commons/e/e8/Cathedral_Notre-Dame_de_Reims%2C_France.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:Chartzit001.JPG Source: https://upload.wikimedia.org/wikipedia/commons/b/b3/Chartzit001.JPG License: CC BY-SA 3.0 Contributors: Own work Original artist: MathKnight and Zachi Evenor
• File:Chromatic_aberration_lens_diagram.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/aa/Chromatic_aberration_lens_diagram.svg License: CC-BY-SA-3.0 Contributors: Bob Mellish (talk)(Uploads) Original artist: Bob Mellish (talk)(Uploads)
• File:Commons-logo.svg Source: https://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: PD Contributors: ? Original artist: ?
• File:Crystal_Clear_device_camera.png Source: https://upload.wikimedia.org/wikipedia/commons/9/99/Crystal_Clear_device_camera.png License: LGPL Contributors: All Crystal Clear icons were posted by the author as LGPL on -look; Original artist: Everaldo Coelho and YellowIcon;
• File:Curves_none_applied.png Source: https://upload.wikimedia.org/wikipedia/commons/1/13/Curves_none_applied.png License: CC-BY-SA-3.0 Contributors: own photo and screen shot from the Gimp Original artist: Magnus Lewan
• File:Curves_red_applied.png Source: https://upload.wikimedia.org/wikipedia/commons/9/9a/Curves_red_applied.png License: CC-BY-SA-3.0 Contributors: own photo and screen shot from the Gimp Original artist: Magnus Lewan
• File:Cushion.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/b6/Cushion.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:DNG_tm.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/72/DNG_tm.svg License: Public domain Contributors: http://en.wikipedia.org/wiki/File:DNG_tm.svg Original artist: Adobe Systems
• File:Dawn_vignetting_effect_-_swifts_creek.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/99/Dawn_vignetting_effect_-_swifts_creek.jpg License: GFDL 1.2 Contributors: Own work Original artist: fir0002 | flagstaffotos.com.au

• File:Edit-clear.svg Source: https://upload.wikimedia.org/wikipedia/en/f/f2/Edit-clear.svg License: Public domain Contributors: The Tango! Desktop Project. Original artist: The people from the Tango! project. And according to the meta-data in the file, specifically: “Andreas Nilsson, and Jakub Steiner (although minimally).”
• File:Episode_after_Battle_of_Zonnebeke_1918_Hurley.jpg Source: https://upload.wikimedia.org/wikipedia/commons/7/7a/Episode_after_Battle_of_Zonnebeke_1918_Hurley.jpg License: CC BY-SA 3.0 au Contributors: State Library of NSW Original artist: Frank Hurley
• File:Example_of_color_correction_in_photoshop.jpg Source: https://upload.wikimedia.org/wikipedia/commons/7/75/Example_of_color_correction_in_photoshop.jpg License: CC BY 2.5 Contributors: Transferred from en.wikipedia to Commons by Wikig using CommonsHelper. Original artist: Sbn1984 at English Wikipedia / Later version(s) were uploaded by Zenohockey at English Wikipedia.
• File:FBP_Iter_single.jpg Source: https://upload.wikimedia.org/wikipedia/commons/e/ed/FBP_Iter_single.jpg License: Public domain Contributors: self created using NM workstation Original artist: hg6996
• File:Field_curvature.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fc/Field_curvature.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: BenFrantzDale
• File:Flamingos_at_Sunset.jpg Source: https://upload.wikimedia.org/wikipedia/commons/4/43/Flamingos_at_Sunset.jpg License: CC BY-SA 4.0 Contributors: Own work Original artist: Atsme
• File:Folder_Hexagonal_Icon.svg Source: https://upload.wikimedia.org/wikipedia/en/4/48/Folder_Hexagonal_Icon.svg License: Cc-by-sa-3.0 Contributors: ? Original artist: ?
• File:GammaCorrection_demo.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/96/GammaCorrection_demo.jpg License: GFDL Contributors: This file was derived from: Weeki Wachee spring 10079u.jpg Original artist: X-romix 10:00, 7 June 2008 (UTC), Updated by --Rubybrian (talk) 14:25, 14 September 2010 (UTC); Photographer: Toni Frissell
• File:Gammatest.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/82/Gammatest.svg License: CC-BY-SA-3.0 Contributors: Image by User:Janke; vectorized by User:Mysid. Original artist: User:Janke, Mysid

• File:Globe_and_high_court.jpg Source: https://upload.wikimedia.org/wikipedia/commons/e/ea/Globe_and_high_court.jpg License: CC-BY-SA-3.0 Contributors: Own work Original artist: Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled GNU Free Documentation License.
• File:Globe_and_high_court_fix.jpg Source: https://upload.wikimedia.org/wikipedia/commons/2/28/Globe_and_high_court_fix.jpg License: CC-BY-SA-3.0 Contributors: Own work Original artist: Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled GNU Free Documentation License.
• File:Globe_effect.gif Source: https://upload.wikimedia.org/wikipedia/commons/5/59/Globe_effect.gif License: CC BY-SA 3.0 Contributors: Own work Original artist: Cmglee
• File:Gnome-mime-sound-openclipart.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/87/Gnome-mime-sound-openclipart.svg License: Public domain Contributors: Own work. Based on File:Gnome-mime-audio-openclipart.svg, which is public domain. Original artist: User:Eubulides
• File:H&D_curve.png Source: https://upload.wikimedia.org/wikipedia/commons/8/8f/H%26D_curve.png License: Public domain Contributors: en-wiki Original artist: Qkowlew
• File:HartmannShack_1lenslet.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/81/HartmannShack_1lenslet.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: HHahn
• File:Heart-direct-vs-iterative-reconstruction.png Source: https://upload.wikimedia.org/wikipedia/commons/5/54/Heart-direct-vs-iterative-reconstruction.png License: CC BY-SA 3.0 Contributors: http://www.biomednmr.mpg.de Original artist: Biomedizinische NMR Forschungs GmbH
• File:Image_cropping_133x1.jpg Source: https://upload.wikimedia.org/wikipedia/commons/2/26/Image_cropping_133x1.jpg License: CC-BY-SA-3.0 Contributors: Amphitheater.jpg (unknown author, Public Domain) Original artist: Andreas -horn- Hornig
• File:Image_cropping_155x1.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9d/Image_cropping_155x1.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:Image_cropping_185x1.jpg Source: https://upload.wikimedia.org/wikipedia/commons/f/f1/Image_cropping_185x1.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:Image_cropping_235x1.jpg Source: https://upload.wikimedia.org/wikipedia/commons/0/01/Image_cropping_235x1.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:Image_cropping_aspect_ratios.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/53/Image_cropping_aspect_ratios.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:Image_warping_example.jpg Source: https://upload.wikimedia.org/wikipedia/commons/d/d8/Image_warping_example.jpg License: CC BY-SA 3.0 Contributors: Own work Original artist: Salemterrano
• File:Imageeditingslelecgivecolorchange10-28-2005.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/b1/Imageeditingslelecgivecolorchange10-28-2005.jpg License: Public domain Contributors: Own work Original artist: User:Rainchill
• File:Imageorientationexample12-7.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/a5/Imageorientationexample12-7.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:Imgeditingspecialmanipulati.jpg Source: https://upload.wikimedia.org/wikipedia/commons/c/cb/Imgeditingspecialmanipulati.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:It_is_a_picture_that_has_been_edited_with_a_Bokeh_effect_with_a_gaussian_blur_effect.jpg Source: https://upload.wikimedia.org/wikipedia/en/3/3f/It_is_a_picture_that_has_been_edited_with_a_Bokeh_effect_with_a_gaussian_blur_effect.jpg License: CC-BY-SA-3.0 Contributors: Own work Previously published: 11/11/2014 Original artist: Teacher83
• File:Kalanit01.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/80/Kalanit01.jpg License: CC BY-SA 2.5 Contributors: Own work Original artist: MathKnight
• File:Konqueror_Exif_data.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/6a/Konqueror_Exif_data.jpg License: Public domain Contributors: Own work Original artist: Sysy~commonswiki
• File:Land_cover_mapping_using_TM_images.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/b7/Land_cover_mapping_using_TM_images.jpg License: CC BY-SA 4.0 Contributors: Own work Original artist: Uddinkabir
• File:Lange-MigrantMother02.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/54/Lange-MigrantMother02.jpg License: Public domain Contributors: This image is available from the United States Library of Congress's Prints and Photographs division under the digital ID fsa.8b29516. This tag does not indicate the copyright status of the attached work. A normal copyright tag is still required. See Commons:Licensing for more information. Original artist: Dorothea Lange, Farm Security Administration / Office of War Information / Office of Emergency Management / Resettlement Administration
• File:Layer-masks-mask.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/64/Layer-masks-mask.jpg License: Public domain Contributors: Own work Original artist: Magnus Lewan
• File:Layer-masks.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/a4/Layer-masks.jpg License: Public domain Contributors: Own work Original artist: Magnus Lewan
• File:Layer-masks1.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/69/Layer-masks1.jpg License: Public domain Contributors: Own work Original artist: Magnus Lewan

• File:Layer-masks2.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/50/Layer-masks2.jpg License: Public domain Contributors: Own work Original artist: Magnus Lewan
• File:LayerProperties.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/58/LayerProperties.jpg License: Public domain Contributors: Own work Original artist: Magnus Lewan
• File:LayersLeft.jpg Source: https://upload.wikimedia.org/wikipedia/commons/c/c6/LayersLeft.jpg License: Public domain Contributors: Own work Original artist: Magnus Lewan
• File:LayersRight.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/ad/LayersRight.jpg License: Public domain Contributors: Own work Original artist: Magnus Lewan
• File:LayersTransparent.jpg Source: https://upload.wikimedia.org/wikipedia/commons/e/e6/LayersTransparent.jpg License: Public domain Contributors: Own work Original artist: Magnus Lewan
• File:Lens_coma.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/87/Lens_coma.svg License: CC-BY-SA-3.0 Contributors: Manually vectorized version of: https://commons.wikimedia.org/wiki/File:Lens_coma.png Original artist: Unknown
• File:LillyCroppedacp.jpg Source: https://upload.wikimedia.org/wikipedia/commons/c/c7/LillyCroppedacp.jpg License: Public domain Contributors:
• Own work by the original uploader
• Photo taken by original uploader and cropped to show only the lily.
Original artist: Alison Phillips
• File:Lillyacp.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/ac/Lillyacp.jpg License: Public domain Contributors: Own work by the original uploader Original artist: Alison Phillips
• File:MACS166_Set_1_3.png Source: https://upload.wikimedia.org/wikipedia/en/c/cb/MACS166_Set_1_3.png License: CC-BY-SA-3.0 Contributors: Own work Original artist: MacWarcraft
• File:Mergefrom.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0f/Mergefrom.svg License: Public domain Contributors: ? Original artist: ?
• File:MigrantMotherColorized.jpg Source: https://upload.wikimedia.org/wikipedia/commons/1/1b/MigrantMotherColorized.jpg License: Public domain Contributors: File:Lange-MigrantMother02.jpg Original artist: Original photo by Dorothea Lange. Colorized by John Boero.
• File:MinnieDriverJan2011_(glamourized).jpg Source: https://upload.wikimedia.org/wikipedia/commons/4/4e/MinnieDriverJan2011_%28glamourized%29.jpg License: CC BY 2.0 Contributors:
• MinnieDriverJan2011_cropped.jpg Original artist: MinnieDriverJan2011_cropped.jpg:*MinnieDriverJan2011.jpg: Justin Hoch photographing for Hudson Union Society
• File:MinnieDriverJan2011_cropped.jpg Source: https://upload.wikimedia.org/wikipedia/commons/d/db/MinnieDriverJan2011_cropped.jpg License: CC BY 2.0 Contributors:
• MinnieDriverJan2011.jpg Original artist: MinnieDriverJan2011.jpg: Justin Hoch photographing for Hudson Union Society
• File:Mustache_distortion.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3c/Mustache_distortion.svg License: Public domain Contributors: Barrel distortion.svg Original artist: Barrel distortion.svg: WolfWings
• File:Nikkor-PC-E.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/5a/Nikkor-PC-E.jpg License: Attribution Contributors: Own work by the original uploader Original artist: Motorrad-67
• File:Nikon_Expeed.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/99/Nikon_Expeed.jpg License: CC BY 2.0 Contributors: Nikon Expeed URK_3066 Original artist: Nguyen Hung Vu from Hanoi, Vietnam
• File:Noise_reduction_in_Audacity_(0,_5,_12,_30_dB)_(150Hz)_(0.15_sec).ogg Source: https://upload.wikimedia.org/wikipedia/commons/d/df/Noise_reduction_in_Audacity_%280%2C_5%2C_12%2C_30_dB%29_%28150Hz%29_%280.15_sec%29.ogg License: Public domain Contributors: Own work Original artist: Beta M
• File:Object_based_image_analysis.jpg Source: https://upload.wikimedia.org/wikipedia/commons/c/c2/Object_based_image_analysis.jpg License: CC BY-SA 4.0 Contributors: Own work Original artist: Uddinkabir
• File:Out-of-focus_image_of_a_spoke_target..svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bd/Out-of-focus_image_of_a_spoke_target..svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Tom.vettenburg
• File:PassionFlower_x3.jpg Source: https://upload.wikimedia.org/wikipedia/commons/1/11/PassionFlower_x3.jpg License: Public domain Contributors: Source: Olympus C-2040 Digital Camera URL: http://hylerlink.com/distribution/PassionFlower_x3.jpg Original artist: Larry Hyler [hylerlink]
• File:Photo_editing_contrast_correction.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/be/Photo_editing_contrast_correction.jpg License: Public domain Contributors: Transferred from en.wikipedia to Commons by Wikig using CommonsHelper. Original artist: Toniht at English Wikipedia
• File:Photomontage_(Forggensee_Panorama)_-2.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/97/Photomontage_%28Forggensee_Panorama%29_-2.jpg License: CC BY-SA 3.0 Contributors: See below. Original artist: Photomontage by: Mmxx (talk / email)

• File:Pincushion_distortion.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/5b/Pincushion_distortion.svg License: Public domain Contributors: Own work Original artist: WolfWings
• File:Portal-puzzle.svg Source: https://upload.wikimedia.org/wikipedia/en/f/fd/Portal-puzzle.svg License: Public domain Contributors: ? Original artist: ?
• File:Process_nocomparam.png Source: https://upload.wikimedia.org/wikipedia/commons/c/c0/Process_nocomparam.png License: CC-BY-SA-3.0 Contributors: Transferred from en.wikipedia to Commons. Original artist: The original uploader was Glogger at English Wikipedia
• File:Quality_comparison_jpg_vs_saveforweb.jpg Source: https://upload.wikimedia.org/wikipedia/commons/c/ce/Quality_comparison_jpg_vs_saveforweb.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:Question_book-new.svg Source: https://upload.wikimedia.org/wikipedia/en/9/99/Question_book-new.svg License: Cc-by-sa-3.0 Contributors: Created from scratch in Adobe Illustrator. Based on Image:Question book.png created by User:Equazcion Original artist: Tkgd2007
• File:Randabschattung_Mikroskop_Kamera_6.JPG Source: https://upload.wikimedia.org/wikipedia/commons/9/9e/Randabschattung_Mikroskop_Kamera_6.JPG License: CC-BY-SA-3.0 Contributors: Own work Original artist: User:Magnus_Manske
• File:Restoration.jpg Source: https://upload.wikimedia.org/wikipedia/commons/f/f8/Restoration.jpg License: CC BY 2.0 Contributors: Mom and Dad - Mid 1960s (Restoration) Original artist: R Walters from USA
• File:Retouche-set.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/6d/Retouche-set.jpg License: No restrictions Contributors: Nationaal Archief fotonummer SFA002003392 Original artist: Unknown
• File:Rgbtobandswexample11-28-200.jpg Source: https://upload.wikimedia.org/wikipedia/commons/f/fe/Rgbtobandswexample11-28-200.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:SRGB_gamma.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/ef/SRGB_gamma.svg License: Public domain Contributors: Transferred from en.wikipedia to Commons by Shizhao using CommonsHelper. Original artist: Dicklyon at English Wikipedia
• File:Spherical_aberration_3.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/60/Spherical_aberration_3.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: HHahn
• File:Split-arrows.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a7/Split-arrows.svg License: Public domain Contributors: ? Original artist: ?
• File:Srgbnonlinearity.png Source: https://upload.wikimedia.org/wikipedia/commons/c/c9/Srgbnonlinearity.png License: CC-BY-SA-3.0 Contributors: English Wikipedia Original artist: Army1987 (talk) and Dicklyon (talk)
• File:Stock_horses_in_corral.png Source: https://upload.wikimedia.org/wikipedia/en/d/da/Stock_horses_in_corral.png License: CC-BY-SA-3.0 Contributors: Photograph in Henrietta, TX Original artist: Atsme
• File:Stock_horses_in_corral_composite.png Source: https://upload.wikimedia.org/wikipedia/en/7/74/Stock_horses_in_corral_composite.png License: CC-BY-SA-3.0 Contributors: Photoshopped composite of two images Original artist: Atsme
• File:SunHistacp.jpg Source: https://upload.wikimedia.org/wikipedia/commons/2/2f/SunHistacp.jpg License: Public domain Contributors:
• Own work by the original uploader
• Made using Paint Shop Pro
Original artist: Alison Phillips
• File:SunLou2.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/95/SunLou2.jpg License: Public domain Contributors:
• Own work by the original uploader
• Taken in uploader’s back yard in Vero Beach, FL
Original artist: Alison Phillips
• File:Swanson_tennis_center.jpg Source: https://upload.wikimedia.org/wikipedia/commons/c/c1/Swanson_tennis_center.jpg License: CC BY-SA 2.5 Contributors: Transferred from en.wikipedia to Commons by Shizhao using CommonsHelper. Original artist: Photograph taken from shifting pixel, photographer: Joe Lencioni (Jlencion at en.wikipedia).
• File:Symbol_neutral_vote.svg Source: https://upload.wikimedia.org/wikipedia/en/8/89/Symbol_neutral_vote.svg License: Public domain Contributors: ? Original artist: ?
• File:Symbol_template_class.svg Source: https://upload.wikimedia.org/wikipedia/en/5/5c/Symbol_template_class.svg License: Public domain Contributors: ? Original artist: ?
• File:Teva_17_3_(64).JPG Source: https://upload.wikimedia.org/wikipedia/commons/b/b3/Teva_17_3_%2864%29.JPG License: CC BY-SA 3.0 Contributors: Own work Original artist: MathKnight

• File:Text_document_with_red_question_mark.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a4/Text_document_with_red_question_mark.svg License: Public domain Contributors: Created by bdesham with Inkscape; based upon Text-x-generic.svg from the Tango project. Original artist: Benjamin D. Esham (bdesham)
• File:The_Commissar_Vanishes_2.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/bd/The_Commissar_Vanishes_2.jpg License: Public domain Contributors: http://www.tate.org.uk/tateetc/issue8/erasurerevelation.htm Original artist: Unknown
• File:The_real_me_by_Achraf_Baznani.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/91/The_real_me_by_Achraf_Baznani.jpg License: CC BY-SA 4.0 Contributors: http://www.baznani.com/about-the-artist/ Original artist: Achraf Baznani
• File:Ulysses_S._Grant_at_City_Point.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/51/Ulysses_S._Grant_at_City_Point.jpg License: Public domain Contributors: This image is available from the United States Library of Congress's Prints and Photographs division under the digital ID ppmsca.15886. This tag does not indicate the copyright status of the attached work. A normal copyright tag is still required. See Commons:Licensing for more information. Original artist: L. C. Handy
• File:Uniformity.jpg Source: https://upload.wikimedia.org/wikipedia/commons/2/2b/Uniformity.jpg License: CC BY 2.5 Contributors: Own creation Original artist: Atoma
• File:Villianc_transparent_background.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/88/Villianc_transparent_background.svg License: CC-BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: Image:Villianc.jpg, by J.J., released under GFDL.
• File:Vinepyrennees.jpg Source: https://upload.wikimedia.org/wikipedia/commons/3/37/Vinepyrennees.jpg License: CC-BY-SA-3.0 Contributors: Own work Original artist: Mick Stephenson mixpix 00:59, 13 March 2007 (UTC)
• File:Vinepyrennees_crop.jpg Source: https://upload.wikimedia.org/wikipedia/commons/c/c9/Vinepyrennees_crop.jpg License: CC-BY-SA-3.0 Contributors: Own work Original artist: Mick Stephenson mixpix 00:59, 13 March 2007 (UTC)
• File:Voroshilov,_Molotov,_Stalin,_with_Nikolai_Yezhov.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/91/Voroshilov%2C_Molotov%2C_Stalin%2C_with_Nikolai_Yezhov.jpg License: Public domain Contributors: http://www.tate.org.uk/tateetc/issue8/erasurerevelation.htm Original artist: Unknown
• File:Wiktionary-logo-v2.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/06/Wiktionary-logo-v2.svg License: CC BY-SA 4.0 Contributors: Own work Original artist: Dan Polansky based on work currently attributed to Wikimedia Foundation but originally created by Smurrayinchester
• File:Woy_Woy_Channel_-_Vignetted.jpg Source: https://upload.wikimedia.org/wikipedia/commons/0/03/Woy_Woy_Channel_-_Vignetted.jpg License: GFDL Contributors: Own work Original artist: TheDefiniteArticle

11.3 Content license

• Creative Commons Attribution-Share Alike 3.0