1.1 Introduction to the Digital Systems
PHO 130 F Digital Photography
Prof. Lorenzo Guasti

How a DSLR works and why we call a camera "reflex"

The heart of all digital cameras is of course the digital imaging sensor. It is the component that converts the light coming from the subject you are photographing into an electronic signal, and ultimately into the digital photograph that you can view or print.

Although they all perform the same task and operate in broadly the same way, there are in fact three different types of sensor in common use today. The first is the CCD, or Charge-Coupled Device. CCDs have been around since the 1960s and have become very advanced, but they can be slower to operate than other types of sensor.

The main alternative to the CCD is the CMOS, or Complementary Metal-Oxide-Semiconductor, sensor. The main proponent of this technology is Canon, which uses it in its EOS range of digital SLR cameras. CMOS sensors have some of the signal-processing transistors mounted alongside the sensor cells, so they operate more quickly and can be cheaper to make.

A third but less common type of sensor is the revolutionary Foveon X3, which offers a number of advantages over conventional sensors but is so far found only in Sigma's range of digital SLRs and its forthcoming DP1 compact camera. I'll explain the X3 sensor after I've explained how the other two types work.

All digital camera light sensors operate in basically the same way. They rely on the ability of certain semiconductor materials to convert light into electrical charge. A digital camera sensor consists of millions of microscopic light-sensitive cells arranged in a grid on a wafer of silicon. Each one of these cells generates an electrical charge when it is struck by a photon (a particle of light).
The brighter the light, the greater the number of photons, and thus the larger the electrical charge that is produced. The charges from all of these millions of photocells are fed into the camera's image-processing system, which combines them into the digital image that is saved on your memory card.

The individual photocells in a camera sensor can only detect the brightness of light, not its colour, so a coloured filter is placed over each cell. A coloured filter transmits only light of the same colour and blocks out all other colours. So, for example, if a cell has a red filter over it, that cell will now detect only red light.

By arranging filters of the three primary colours – red, green and blue – in a regular pattern over the cells of the sensor, all three colours can be detected. In nearly all digital camera sensors, the filters are laid out in a mosaic pattern of two green filters to every one of red and blue, because the human eye is more sensitive to green light. This type of filter is called a Bayer mask filter, after Dr. Bryce E. Bayer of Kodak, who invented it.

In this way, the signals from each group of colour-filtered sensor cells can be decoded, or "demosaiced", by the image processor to detect the full spectrum of colours. By calculating the red, green and blue values for each pixel location, a full-colour image is produced. Of course, this means that the individual pixels of your final image do not literally represent individual photocells on the sensor. In fact, the colours of each pixel are calculated, or "interpolated", from the colour values of a group of four photocells.

Demosaicing

Many digital cameras can output the unprocessed raw image data coming from the sensor.
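As a rough sketch of the ideas above, the toy Python snippet below lays out an RGGB Bayer mosaic (two green filters per red and blue) and estimates a missing colour value at one photosite by averaging its four neighbours. This is a deliberately simplified illustration with made-up values; real raw converters use far more sophisticated, edge-aware demosaicing algorithms.

```python
import numpy as np

def bayer_mask(height, width):
    """Label each photosite of an RGGB Bayer mosaic: every 2x2
    block holds one red, one blue and two green filters."""
    mask = np.empty((height, width), dtype="<U1")
    mask[0::2, 0::2] = "R"  # even rows, even columns
    mask[0::2, 1::2] = "G"
    mask[1::2, 0::2] = "G"  # green appears twice per 2x2 block
    mask[1::2, 1::2] = "B"  # odd rows, odd columns
    return mask

def estimate_from_neighbours(raw, y, x):
    """Toy interpolation: estimate a missing colour at (y, x) by
    averaging the four directly adjacent photocell readings."""
    return (raw[y - 1, x] + raw[y + 1, x] +
            raw[y, x - 1] + raw[y, x + 1]) / 4.0

mask = bayer_mask(4, 4)
print(mask)  # twice as many G sites as R or B

# A tiny grid of raw brightness readings (values are invented)
raw = np.array([[10., 20., 12., 22.],
                [30., 40., 32., 42.],
                [14., 24., 16., 26.],
                [34., 44., 36., 46.]])

# Green at the blue photosite (1, 1): all four neighbours are green
print(estimate_from_neighbours(raw, 1, 1))  # (20+24+30+32)/4 = 26.5
```

In the RGGB layout, every blue photosite is surrounded on all four sides by green ones, which is why a simple four-neighbour average works for this case; red and blue estimates need the diagonal neighbours instead.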
CCD/CMOS sensors record only one colour for each pixel, and the real colour for a given pixel has to be interpolated from the surrounding pixels. This process is called demosaicing, and there are many different algorithms for doing it. Most, if not all, raw-processing software authors claim their algorithm is the most advanced on the market and provides the highest-quality output possible. The key is that each coloured pixel can be used more than once: the true colour of a single pixel can be determined by averaging the values from the closest surrounding pixels.

Raw image format

A raw image file contains minimally processed data from the image sensor of a digital camera. Raw files are so named because they are not yet processed and ready to be used with a bitmap graphics editor or printed. Normally, the image will be processed by a raw converter, where precise adjustments can be made before conversion to an RGB file format such as TIFF or JPEG for storage, printing, or further manipulation.

Nearly all digital cameras can process the image from the sensor into a JPEG file using settings for white balance, colour saturation, contrast, and sharpness that are either selected automatically or entered by the photographer before taking the picture. Cameras that support raw files save these settings in the file but defer the processing. This results in an extra step for the photographer, so raw is normally used only when additional computer processing is intended. However, raw permits much greater control than JPEG for several reasons:

» Finer control of the settings is easier when a mouse and keyboard are available to set them. For example, the white point can be set to any value, not just discrete values like "daylight" or "incandescent".
» The settings can be previewed and tweaked to obtain the best-quality image or desired effect.
(With in-camera processing, the values must be set before the exposure.) This is especially pertinent to the white-balance setting, since colour casts can be difficult to correct after the conversion to RGB is done.
» Camera raw files have 12 or 14 bits of intensity information, not the gamma-compressed 8 bits typically stored in processed TIFF and JPEG files; since the data are not yet rendered and clipped to a colour-space gamut, more precision may be available in highlights, shadows, and saturated colours.
» Different demosaicing algorithms can be used, not just the one coded into the camera.

http://www.vtc.com/products/Adobe-Photoshop-Advanced-Artistry-II-tutorials.htm
Two video tutorials: Image Resolution Issues pt. 1 and pt. 2

Pixel and Resolution

A pixel (picture element) is the basic composition unit of a digital image: a single digital dot, the building block of an image. In a camera, each pixel corresponds to an electronic device that detects light intensity, and these photocells are physically combined into a single device called a sensor. Megapixels determine the quality (clarity) of a digital image taken with the sensor in a digital camera.

Sensor resolution: the sensor resolution is the number of pixels the image carries at the camera's maximum resolution setting. The physical size of a digital camera's sensor also plays a role in the quality of the image captured. Generally speaking, larger is better. Some digital cameras include a sensor that's the same size as a frame of 35mm film; others use a smaller sensor.

The term "resolution", when used to describe a digital camera, refers to the size of the digital image the camera produces, and is usually expressed in "megapixels" – how many million pixels it can record in a single image. The number of pixels a camera captures is called the camera's resolution.
For example, a camera that captures 1600 x 1200 pixels produces an image with a resolution of 1.92 million pixels and would be referred to as a 2.0-megapixel camera. You get to 1.92 million pixels by multiplying the horizontal and vertical dimensions; that number is then rounded off to 2 for marketing purposes. Megapixel count (camera resolution) is important when you are going to print. You will see a chart later in the slides.

Resolution, scanning, and graphics size is a vast and often confusing topic, even for experienced designers. Resolution refers to the dots of ink or electronic pixels that make up a picture, whether it is printed on paper or displayed on-screen. The term DPI (dots per inch) is probably familiar if you've bought or used a printer, a scanner, or a digital camera. DPI is one measure of resolution.

Whether printed on paper or displayed on your computer screen, a picture is made up of tiny dots. There are colour dots and there are black dots. The more dots in a picture, the larger the size of the graphic file. Resolution is measured by the number of dots in a horizontal or vertical inch: resolution is a density of dots. Each type of imaging device (scanner, digital camera, printer, computer monitor) has a maximum number of dots it can process and display, no matter how many dots are in the picture.

In the PDF file, colour-coded: the number inside each box is DPI; violet is the best, orange is not good. Example: a 600 DPI laser printer can print up to 600 dots of picture information in an inch.
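The megapixel and DPI arithmetic above can be checked with a few lines of Python; the 1600 x 1200 pixel count and the 600 DPI printer come straight from the examples in the text.

```python
# Megapixels: multiply the horizontal and vertical pixel counts
width_px, height_px = 1600, 1200
megapixels = width_px * height_px / 1_000_000
print(megapixels)  # 1.92 -> marketed as a "2.0 megapixel" camera

# Print size: pixel count divided by printer resolution (dots per inch)
printer_dpi = 600  # the 600 DPI laser printer from the text
print(width_px / printer_dpi, "x", height_px / printer_dpi, "inches")
# roughly 2.67 x 2.0 inches if every pixel gets its own printed dot
```

The second calculation shows why megapixels matter for printing: at the printer's full dot density, a 2-megapixel image covers only a small sheet, so larger prints require either more pixels or fewer dots per inch.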