Sample Manuscript Showing Specifications and Style


Information capacity: a measure of potential image quality of a digital camera

Frédéric Cao¹, Frédéric Guichard, Hervé Hornung
DxO Labs, 3 rue Nationale, 92100 Boulogne-Billancourt, FRANCE
¹ [email protected]

Frédéric Cao, Frédéric Guichard and Hervé Hornung, "Information capacity: a measure of potential image quality of a digital camera", Proc. SPIE 7537, 75370F (2010); doi:10.1117/12.838903. Copyright 2010 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.

ABSTRACT

The aim of this paper is to define an objective measurement for evaluating the performance of a digital camera. The challenge is to combine different flaws involving geometry (such as distortion or lateral chromatic aberration), light (such as luminance and color shading), or statistical phenomena (such as noise). We introduce the concept of information capacity, which accounts for all the main defects that can be observed in digital images, whether they are due to the optics or to the sensor. The information capacity describes the potential of the camera to produce good images. In particular, digital processing can correct some flaws (like distortion). Our definition of information takes possible corrections into account, as well as the fact that processing can neither retrieve lost information nor create new information. This paper extends our previous work, in which information capacity was defined only for RAW sensors. The concept is extended to cameras with optical defects such as distortion, lateral and longitudinal chromatic aberration, or lens shading.

Keywords: digital photography, image quality evaluation, optical aberration, information capacity, camera performance database

1. INTRODUCTION

The evaluation of a digital camera is a key factor for customers, whether they are vendors or end users. It relies on many different factors, such as the presence or absence of certain functionalities, ergonomics, price, or image quality. Each criterion is itself quite complex to evaluate and depends on many factors; image quality is a good illustration. A digital camera is invariably composed of three subsystems: optics, a sensor, and digital processing. Every lens has its particular design, sensors differ in size and pixel count, and digital processing can be parameterized on the camera or even performed by third-party software. Every part of the system has qualities and defects, and devising methods and protocols to measure these individual performances is already a major challenge. Yet this is only halfway to answering the customer's question, which is basically: what is the best camera? A huge amount of information therefore has to be synthesized into a few numbers, or even a single one. Standards organizations have been working on these topics for many years, and the characterization of many parts of a camera has been described in ISO standards. More recently, the I3A Camera Phone Image Quality (CPIQ) initiative [1] has been working on the objective characterization of cameras and its relation to subjective image quality.

In this paper, we adopt a different point of view. A camera aims to capture light and its spatial distribution and to reproduce it on a plane. A good camera should allow this capture with good accuracy, that is, with as much detail as possible and without bias. In other words, it should capture a large amount of "good" information. A series of measurement results on about 80 cameras has been published on DxOMark.com [2]. Those results are essentially related to sensor noise; resolution is not fully factored in, apart from a normalization of the noise measurement that scales all sensors down to a common, relatively low resolution to make comparisons possible. Under that normalization, noise turns out to depend essentially on sensor size.
This paper aims to complete the analysis by properly introducing resolution. In doing so, we cannot ignore the influence of the lens, with all its possible defects [3]. We therefore propose the information capacity as an objective measurement that simultaneously takes into account sensor noise, spectral responses, and optical aberrations.

The information capacity we propose deals with RAW images. One of the main reasons is that, for DSLRs, the raw converter is the user's choice among many competing solutions on the market. Raw converters also evolve quite fast, at least faster than camera bodies and lenses, because they still benefit from the exponential growth of computing power. A raw converter cannot create information; it can only lose some or change the organization of information. Computing the RAW information capacity removes the bias of the raw converter and also anticipates the fact that some aberrations can be digitally corrected. To illustrate this point, consider geometric distortion. A distorted image contains "bad" information in the sense that its spatial organization is not what a perfect (pinhole) camera would produce. However, distortion correction methods have long been used in computer vision, in particular for 3D reconstruction. Some raw converters and other software integrate this kind of correction, and some camera manufacturers embed such corrections on the camera chip. We can therefore consider that the geometric effect of distortion can be cancelled out by digital processing. This correction is not free, however: there is a local loss of resolution. This is well known for very wide-angle lenses, for which field of view is traded against distortion and resolution at the image border. The general point is that a correction algorithm can reorganize the information contained in the raw image, but cannot (or should not) create new information. Of course, distortion in an image damages subjective quality, and we assume that a correction algorithm can be designed so that the image quality attribute related to the defect is optimal. What we are interested in, therefore, is the amount of information that has been lost due to the defect and cannot be recovered from the image itself.

The outline of the paper is as follows. In Sect. 2, we introduce the notion of local information capacity as the product of local resolution and local dynamic. We then review the main characteristics of a camera, in terms of sensor and optics aberrations, and how they individually influence the local information capacity, in Sects. 3 and 4. We present how to gather the local contributions in Sect. 5, leading to a global definition of information capacity accounting for both sensor and optics characteristics. In Sect. 6, we show measurements and comparisons on real cameras before concluding.
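To make the distortion discussion concrete, here is a minimal sketch of correction by inverse mapping under a simple one-coefficient radial model; the model, the coefficient k1, and the random test image are illustrative assumptions, not the paper's method. It shows why correction reorganizes rather than creates information: border pixels of the corrected image are resampled from a compressed region of the raw image, so local resolution drops there.

```python
# Hedged sketch: undoing barrel distortion by inverse mapping.
# The radial model r_d = r_u * (1 + k1 * r_u**2) and the value of k1
# are assumptions for illustration only.
import numpy as np
from scipy.ndimage import map_coordinates

def undistort(img, k1):
    """For each pixel of the ideal output grid, look up where that pixel
    landed in the distorted input and resample there. Near the borders,
    neighboring output pixels map to nearly the same input pixels, so the
    result is locally stretched: information is reorganized, not created."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w].astype(float)
    xu, yu = (x - cx) / cx, (y - cy) / cy            # normalized ideal coords
    r2 = xu**2 + yu**2
    xd, yd = xu * (1 + k1 * r2), yu * (1 + k1 * r2)  # position in the raw image
    coords = np.stack([yd * cy + cy, xd * cx + cx])
    return map_coordinates(img, coords, order=1, mode="nearest")

test = np.random.rand(256, 256)       # stand-in for a RAW image
corrected = undistort(test, k1=-0.1)  # barrel distortion: borders get stretched
```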
2. INFORMATION CAPACITY

2.1 Definition

We choose to represent the performance of a camera by measuring the amount of information it is able to capture. This amount of information is to be understood in the sense of Shannon [4], that is, the size of the channel necessary to transmit the image. Image information is completely determined by resolution and dynamic. The former is related to the details that a camera can spatially render; the latter to the levels of contrast that the camera can capture. Contrast relates to two different notions: the dynamic range, which is the maximal contrast the camera can measure without saturation, and the SNR, which relates to the smallest level of contrast that can be discriminated from noise.

Resolution can be defined as a number of pixels, giving the number of elementary elements used to describe the spatial organization of the image. It is well known that, even though the pixel count of the sensor should be related to the resolution of the camera, this notion is actually more complex. It must also account for losses due to the fact that the optical system cannot be perfect: both the optics and the sensor introduce a certain amount of blur, which has to be integrated into the definition of resolution. Similarly, the bit depth of the sensor (the number of bits used to encode gray levels) is only a first rough estimate of the real photographic dynamic. For instance, noise (whatever its sources) has to be taken into account. Dynamic is better measured on a logarithmic scale. This is consistent both with perception, since the eye's response to illumination is not linear, and with information theory, since the number of bits needed to encode a value is proportional to the logarithm of that value. Moreover, photographers also measure light in Exposure Value (EV), which is a base-2 logarithm. Therefore, a general definition of the information capacity of a camera is

C = R × D,

where R is the resolution and D the dynamic. The main difficulty is to define resolution and dynamic properly, both as simply as possible and as completely as possible, that is, by taking all the properties of the camera into account. In previous works [5,6,7], we were interested only in sensor performance; resolution therefore integrated only pixel blur (sensor MTF) and the lens diffraction limit, and sensor noise entered through its color sensitivity.

2.2 Local resolution and local dynamic

The main goal of this paper is to extend this model by incorporating real lens characteristics. Most optical aberrations are field-dependent, so their influence on resolution and dynamic must be described locally, and information capacity can likewise be expressed as a local quantity. The global information capacity of the camera is obtained by summing these local contributions. At each spatial position x, we define the local resolution R(x) and the local dynamic D(x). Each aberration contributes as a loss to either resolution or dynamic.
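As a numeric illustration of this local-to-global accounting, the toy sketch below sums local resolution times local dynamic over a grid of field positions. The field maps (resolution and dynamic falling off quadratically toward the corners) are invented placeholders, not DxO's measured quantities.

```python
# Hedged toy model of C = sum over field positions of R(x) * D(x).
# All numbers are made up for illustration.
import numpy as np

h, w = 9, 9                                   # coarse grid of field positions
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(xx - w // 2, yy - h // 2)
r = r / r.max()                               # normalized field height

resolution = 1.0 - 0.4 * r**2                 # e.g. blur grows off-axis
dynamic_bits = 10.0 - 2.0 * r**2              # e.g. shading/noise eat dynamic

local_capacity = resolution * dynamic_bits    # bits per elementary element
total_capacity = local_capacity.sum()
print(f"total information capacity ~ {total_capacity:.0f} bits (toy units)")
```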
Recommended publications
  • Breaking Down the “Cosine Fourth Power Law”
    Breaking Down The “Cosine Fourth Power Law” By Ronian Siew, inopticalsolutions.com Why are the corners of the field of view in the image captured by a camera lens usually darker than the center? For one thing, camera lenses by design often introduce “vignetting” into the image, which is the deliberate clipping of rays at the corners of the field of view in order to cut away excessive lens aberrations. But it is also known that corner areas in an image can get dark even without vignetting, due in part to the so-called “cosine fourth power law” [1]. According to this “law,” when a lens projects the image of a uniform source onto a screen, in the absence of vignetting, the illumination flux density (i.e., the optical power per unit area) across the screen from the center to the edge varies according to the fourth power of the cosine of the angle between the optic axis and the oblique ray striking the screen. Actually, optical designers know this “law” does not apply generally to all lens conditions [2–10]. Fundamental principles of optical radiative flux transfer in lens systems allow one to tune the illumination distribution across the image by varying lens design characteristics. In this article, we take a tour into the fascinating physics governing the illumination of images in lens systems. Relative Illumination In Lens Systems In lens design, one characterizes the illumination distribution across the screen where the image resides in terms of a quantity known as the lens’ relative illumination — the ratio of the irradiance (i.e., the power per unit area) at any off-axis position of the image to the irradiance at the center of the image.
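A quick worked example of the idealized law the excerpt describes: relative illumination as cos⁴ of the field angle, assuming no vignetting and no pupil effects (exactly the conditions the article goes on to relax).

```python
# Hedged illustration of the cosine-fourth falloff under the idealized law.
import math

for deg in (10, 20, 30, 40):
    ri = math.cos(math.radians(deg)) ** 4
    print(f"field angle {deg:2d} deg -> relative illumination {ri:.2f}")
# At 30 deg the corner receives cos^4(30°) ≈ 0.56 of the center flux,
# nearly a stop darker -- which is why designers work to beat the "law".
```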
  • Denver Cmc Photography Section Newsletter
    MARCH 2018 DENVER CMC PHOTOGRAPHY SECTION NEWSLETTER

    Meeting: WEDNESDAY, March 14, 7:00 to 9:00 p.m., AMC, 710 10th St. #200, Golden, CO. 2018 monthly meetings are held the 2nd Wednesday of the month, 7:00 p.m. $20 annual dues.

    CONNIE RUDD will present Photography with a Purpose: Conservation Photography that not only inspires, but can also tip the balance in favor of the protection of public lands.

    For our meeting on March 14, each member may submit two images from National Parks anywhere in the country. Please submit images to Janice Bennett, [email protected], by Tuesday, March 13.

    Steering Committee: Frank Burzynski, CMC Liaison, [email protected]; Jao van de Lagemaat, Education Coordinator, [email protected]; Janice Bennett, Newsletter and Communication Coordinator, [email protected]; Ron Hileman, Hike and Event Coordinator, [email protected]; Selma Kristel, Presentation Coordinator, [email protected]; Alex Clymer, Social Media Coordinator, [email protected]; Mark Haugen, Facilities Coordinator, [email protected]; CMC Photo Section email: [email protected]
  • “Digital Single Lens Reflex”
    PHOTOGRAPHY GENERIC ELECTIVE SEM-II DSLR stands for “Digital Single Lens Reflex”. In simple language, a DSLR is a digital camera that uses a mirror mechanism to either reflect light from a camera lens to an optical viewfinder (which is an eyepiece on the back of the camera that one looks through to see what they are taking a picture of) or let light fully pass onto the image sensor (which captures the image) by moving the mirror out of the way. Although single lens reflex cameras have been available in various shapes and forms since the 19th century with film as the recording medium, the first commercial digital SLR with an image sensor appeared in 1991. Compared to point-and-shoot and phone cameras, DSLR cameras typically use interchangeable lenses. The part numbers below refer to an SLR cross-section diagram (image courtesy of Wikipedia, not reproduced here). When you look through a DSLR viewfinder/eyepiece on the back of the camera, whatever you see is passed through the lens attached to the camera, which means that you are looking at exactly what you are going to capture. Light from the scene you are attempting to capture passes through the lens into a reflex mirror (#2) that sits at a 45-degree angle inside the camera chamber, which then forwards the light vertically to an optical element called a “pentaprism” (#7). The pentaprism then converts the vertical light to horizontal by redirecting the light through two separate mirrors, right into the viewfinder (#8). When you take a picture, the reflex mirror (#2) swings upwards, blocking the vertical pathway and letting the light directly through.
  • Digital Holography Using a Laser Pointer and Consumer Digital Camera, Report Date: June 22nd, 2004
    Digital Holography using a Laser Pointer and Consumer Digital Camera Report date: June 22nd, 2004 by David Garber, M.E. undergraduate, Johns Hopkins University, [email protected] Available online at http://pegasus.me.jhu.edu/~lefd/shc/LPholo/lphindex.htm Acknowledgements Faculty sponsor: Prof. Joseph Katz, [email protected] Project idea and direction: Dr. Edwin Malkiel, [email protected] Digital holography reconstruction: Jian Sheng, [email protected] Abstract The point of this project was to examine the feasibility of low-cost holography. Can viable holograms be recorded using an ordinary diode laser pointer and a consumer digital camera? How much does it cost? What are the major difficulties? I set up an in-line holographic system and recorded both still images and movies using a $600 Fujifilm Finepix S602Z digital camera and a $10 laser pointer by Lazerpro. Reconstruction of the stills shows clearly that successful holograms can be created with a low-cost optical setup. The movies did not reconstruct, due to compression and very low image resolution. Garber 2 Theoretical Background What is a hologram? The Merriam-Webster dictionary defines a hologram as, “a three-dimensional image reproduced from a pattern of interference produced by a split coherent beam of radiation (as a laser).” Holograms can produce a three-dimensional image, but it is more helpful for our purposes to think of a hologram as a photograph that can be refocused at any depth. So while a photograph taken of two people standing far apart would have one in focus and one blurry, a hologram taken of the same scene could be reconstructed to bring either person into focus.
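The "refocus at any depth" idea in this abstract is usually realized numerically; below is a minimal sketch of hologram reconstruction by angular-spectrum propagation. The wavelength, pixel pitch, propagation distance, and the random placeholder frame are all assumptions for illustration; the report's own reconstruction code is not shown here.

```python
# Hedged sketch: numerically refocusing an in-line hologram by
# angular-spectrum propagation. All parameter values are assumed.
import numpy as np

def propagate(field, wavelength, pitch, z):
    """Propagate a complex field a distance z using the angular spectrum."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent part
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

hologram = np.random.rand(512, 512)            # placeholder for a recorded frame
refocused = propagate(hologram, wavelength=650e-9, pitch=3.3e-6, z=0.05)
intensity = np.abs(refocused) ** 2             # image plane 5 cm from the sensor
```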
  • Completing a Photography Exhibit Data Tag
    Completing a Photography Exhibit Data Tag Current Data Tags are available at: https://unl.box.com/s/1ttnemphrd4szykl5t9xm1ofiezi86js Camera Make & Model: Indicate the brand and model of the camera, such as Google Pixel 2, Nikon Coolpix B500, or Canon EOS Rebel T7. Focus Type: • Fixed Focus means the photographer is not able to adjust the focal point. These cameras tend to have a large depth of field. This might include basic disposable cameras. • Auto Focus means the camera automatically adjusts the optics in the lens to bring the subject into focus. The camera typically selects what to focus on. However, the photographer may also be able to select the focal point using a touch screen, for example, but the camera will automatically adjust the lens. This might include digital cameras and mobile device cameras, such as phones and tablets. • Manual Focus allows the photographer to manually adjust and control the lens’s focus by hand, usually by turning the focus ring. Camera Type: Indicate whether the camera is digital or film. (The following questions are for Unit 2 and 3 exhibitors only.) Did you manually adjust the aperture, shutter speed, or ISO? Indicate whether you adjusted these settings to capture the photo. Note: Regardless of whether or not you adjusted these settings manually, you must still identify the image’s specific f-stop, shutter speed, ISO, and focal length settings. “Auto” is not an acceptable answer. Digital cameras automatically record this information for each photo captured. This information, referred to as metadata, is attached to the image file and goes with it when the image is downloaded to a computer, for example.
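Since the tag asks exhibitors to pull exact settings from the photo's metadata, here is a hedged sketch of reading them from the EXIF block with Pillow; the filename is a placeholder, Pillow is an assumption of convenience, and the tags will simply be missing if the camera or an editor stripped them.

```python
# Hedged sketch: reading F-stop, shutter speed, ISO, and focal length
# from EXIF metadata. "entry_photo.jpg" is a hypothetical filename.
from PIL import Image, ExifTags

exif = Image.open("entry_photo.jpg").getexif()
sub = exif.get_ifd(0x8769)  # Exif sub-IFD, where exposure tags live
named = {ExifTags.TAGS.get(t, t): v for t, v in sub.items()}
for key in ("FNumber", "ExposureTime", "ISOSpeedRatings", "FocalLength"):
    print(key, named.get(key, "not recorded"))
```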
  • Session Outline: History of the Daguerreotype
    Fundamentals of the Conservation of Photographs SESSION: History of the Daguerreotype INSTRUCTOR: Grant B. Romer SESSION OUTLINE ABSTRACT The daguerreotype process evolved out of the collaboration of Louis Jacques Mande Daguerre (1787-1851) and Nicephore Niepce, which began in 1827. During their experiments to invent a commercially viable system of photography, a number of photographic processes were evolved which contributed elements that led to the daguerreotype. Following Niepce’s death in 1833, Daguerre continued experimentation and discovered in 1835 the basic principle of the process. Later, investigation of the process by prominent scientists led to important understandings and improvements. By 1843 the process had reached technical perfection and remained the commercially dominant system of photography in the world until the mid-1850s. The image quality of the fine daguerreotype set the photographic standard, and the photographic industry was established around it. The standardized daguerreotype process after 1843 entailed seven essential steps: plate polishing, sensitization, camera exposure, development, fixation, gilding, and drying. The daguerreotype process is explored more fully in the Technical Note: Daguerreotype. The daguerreotype image is seen as a positive to full effect through a combination of reflection from the plate surface and the scattering of light by the imaging particles. Housings exist in great variety of style, usually following the fashion of miniature portrait presentation. The daguerreotype plate is extremely vulnerable to mechanical damage and the deteriorating influences of atmospheric pollutants. Hence, highly colored and obscuring corrosion films are commonly found on daguerreotypes. Many daguerreotypes have been damaged or destroyed by uninformed attempts to wipe these films away.
  • Section 10: Vignetting — The Stop Determines the Size of the Bundle of Rays That Propagates Through the System for an On-Axis Object
    OPTI-502 Optical Design and Instrumentation I (John E. Greivenkamp), Section 10: Vignetting

    The stop determines the size of the bundle of rays that propagates through the system for an on-axis object. As the object height increases, one of the other apertures in the system (such as a lens clear aperture) may limit part or all of the bundle of rays. This is known as vignetting.

    Ray Bundle – On-Axis. The ray bundle for an on-axis object is a rotationally-symmetric spindle made up of sections of right circular cones. Each cone section is defined by the pupil and the object or image point in that optical space. The individual cone sections match up at the surfaces and elements. At any z, the cross section of the bundle is circular, and the radius of the bundle is the marginal ray value. The ray bundle is centered on the optical axis.

    Ray Bundle – Off-Axis. For an off-axis object point, the ray bundle skews, and is comprised of sections of skew circular cones which are still defined by the pupil and object or image point in that optical space.
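The clipping criterion in these notes reduces to a simple paraxial check at each aperture: the bundle is centered at the chief-ray height with radius equal to the marginal-ray height, and it vignettes when that footprint exceeds the clear semi-diameter. The sketch below is a toy model with invented numbers, not the course's ray-trace code.

```python
# Hedged toy check of vignetting at a single aperture. All values assumed.
def vignetting_state(y_chief, y_marginal, a):
    """y_chief centers the bundle, y_marginal is its radius,
    a is the clear semi-diameter of the aperture."""
    top, bottom = y_chief + y_marginal, y_chief - y_marginal
    if top <= a and bottom >= -a:
        return "unvignetted"
    if bottom >= a or top <= -a:
        return "fully blocked"
    return "partially vignetted"

for h in (0.0, 0.5, 1.0):  # relative object heights
    print(h, vignetting_state(y_chief=6.0 * h, y_marginal=4.0, a=8.0))
# On-axis (h=0) the bundle passes; at full field (h=1) it is partially clipped.
```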
  • Seeing Like Your Camera, Part 2 (Fall 2016)
    Accessing Lynda.com: free to the Mason community. Set your browser to lynda.gmu.edu and log in using your Mason ID and password. Playlists: my list of specific videos I recommend for homework, i.e., pre- and post-session viewing. Clicking on the name of a video segment will bring you immediately to Lynda.com (or the login window). I recommend that you eventually watch each entire video class, since we will only use small segments of each.

    Seeing Like Your Camera, PART 2 - FALL 2016. Stan Schretter, [email protected]

    Ways to take this course: each class will cover one or two topics in detail; the Lynda.com videos cover a lot more material. I will email the video playlist and my charts before each class. My scale of value — maximum benefit: review videos before class, attend lectures, and practice after each class; less benefit: do not look at the videos, attend lectures and practice after each class; some benefit: look at videos, don't attend lectures.

    What creates a photograph: light, camera, composition, camera setup, post processing.

    This course — "The Shot": camera setup — exposure ("proper" light on the sensor, depth of field, stop or show the action), focus, and getting the color right (white balance); composition — key photographic element(s), moving the eye through the frame, negative space, perspective, story.

    Outline of this class — PART 1 (Summer 2016): review of shutter speed, aperture, ISO and white balance, composition. PART 2 (Fall 2016): Topic 1 — review of Part 1, increasing your vision; Topic 2 — seeing the light (color, dynamic range, histograms, backlighting, etc.).
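The shutter/aperture/ISO triangle these slides revolve around has a compact numeric core: exposure value, EV = log2(N²/t) at ISO 100, where settings with equal EV pass the same total light. A hedged helper, with arbitrary sample values:

```python
# Hedged EV calculator illustrating equivalent exposures.
import math

def exposure_value(f_number, shutter_s):
    """EV at ISO 100: log2(N^2 / t)."""
    return math.log2(f_number**2 / shutter_s)

print(exposure_value(8, 1/125))    # ~12.97 EV
print(exposure_value(5.6, 1/250))  # ~12.94 EV -> nearly the same exposure
```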
  • Multi-Agent Recognition System Based on Object Based Image Analysis Using Worldview-2
    intervals, between −3 and +3, for a total of 19 different EVs captured (see Table 1). WB variable images were captured utilizing the camera presets listed in Table 1 at the EVs of 0, −1 and +1, for a total of 27 images. Variable light metering tests were conducted using the “center weighted” light meter setting, at the EVs listed in Table 1, for a total of seven images. The variable ISO tests were carried out at the EVs of −1, 0, and +1, using the ISO values listed in Table 1, for a total of 18 images. The variable aperture tests were conducted using 19 different apertures ranging from f/2 to f/16, utilizing a different camera system from all the other tests, with ISO = 50, f = 105 mm, four different degrees of onboard vignetting control, and EVs of −⅓, ⅔, 0, and +⅔, for a total of 228 images. To mimic airborne imagery collection from a terrestrial location, we attempted to approximate an equivalent path radiance to match a range of altitudes (750 to 1500 m) above ground level (AGL), that we commonly use as platform altitudes for our simulated damage assessment research. Optical depth increases horizontally along with the zenith angle, from a factor of one at zenith angle 0°, to around 40° at zenith angle 90° (Allen, 1973), and vertically, with a zenith angle of 0°, the magnitude of an object at the top of the atmosphere decreases by 0.28 at sea level, 0.24 at 500 m, and 0.21 at 1,000 m above sea level, to 0 at around 100 km (Green, 1992). With 25 percent of atmospheric scattering due to atmosphere located
  • Passport Photograph Instructions by the Police
    Passport photograph instructions by the police These instructions describe the technical and quality requirements on passport photographs. The same instructions apply to all facial photographs used on the passports, identity cards and permits granted by the police. The instructions are based on EU legislation and international treaties. The photographer usually sends the passport photograph directly to the police electronically, and it is linked to the application with a photograph retrieval code assigned to the photograph. You can also use a paper photograph, if you submit your application at a police service point. Contents • Photograph format • Technical requirements on the photograph • Dimensions and positioning • Posture • Lighting • Expressions, glasses, head-wear and make-up • Background Photograph format • The photograph can be a black-and-white or a colour photograph. • The dimensions of photographs submitted electronically must be exactly 500 x 653 pixels. Deviations as small as one pixel are not accepted. • Electronically submitted photographs must be saved in the JPEG format (not JPEG2000). The file extension can be either .jpg or .jpeg. • The maximum size of an electronically submitted photograph is 250 kilobytes. [Example photographs 1 and 2: correct] Technical requirements on the photograph • The photograph must be no more than six months old. • The photograph must not be manipulated in any way that would change even the smallest details in the subject’s appearance or in a way that could raise suspicions about the photograph's authenticity. Use of digital make-up is not allowed. • The photograph must be sharp and in focus over the entire face.
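The electronic-submission rules above are mechanical enough to pre-check locally. A hedged sketch using Pillow follows; the filename is a placeholder, the 250 kB limit is interpreted as 250 × 1024 bytes as an assumption, and the police service of course runs its own validation.

```python
# Hedged pre-check against the quoted rules: exactly 500 x 653 pixels,
# JPEG (not JPEG2000), at most 250 kilobytes.
import os
from PIL import Image

def check_passport_photo(path):
    problems = []
    if os.path.getsize(path) > 250 * 1024:   # assuming 1 kB = 1024 bytes
        problems.append("file larger than 250 kB")
    with Image.open(path) as im:
        if im.format != "JPEG":              # rejects JPEG2000 and others
            problems.append(f"format is {im.format}, not JPEG")
        if im.size != (500, 653):
            problems.append(f"size is {im.size}, not exactly 500 x 653")
    return problems or ["looks compliant"]

print(check_passport_photo("photo.jpg"))     # hypothetical filename
```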
  • Ground-Based Photographic Monitoring
    Ground-Based Photographic Monitoring. United States Department of Agriculture, Forest Service, Pacific Northwest Research Station. General Technical Report PNW-GTR-503, May 2001. Author: Frederick C. Hall is senior plant ecologist, U.S. Department of Agriculture, Forest Service, Pacific Northwest Region, Natural Resources, P.O. Box 3623, Portland, Oregon 97208-3623. Paper prepared in cooperation with the Pacific Northwest Region. Abstract Hall, Frederick C. 2001. Ground-based photographic monitoring. Gen. Tech. Rep. PNW-GTR-503. Portland, OR: U.S. Department of Agriculture, Forest Service, Pacific Northwest Research Station. 340 p. Land management professionals (foresters, wildlife biologists, range managers, and land managers such as ranchers and forest land owners) often have need to evaluate their management activities. Photographic monitoring is a fast, simple, and effective way to determine if changes made to an area have been successful. Ground-based photo monitoring means using photographs taken at a specific site to monitor conditions or change. It may be divided into two systems: (1) comparison photos, whereby a photograph is used to compare a known condition with field conditions to estimate some parameter of the field condition; and (2) repeat photographs, whereby several pictures are taken of the same tract of ground over time to detect change. Comparison systems deal with fuel loading, herbage utilization, and public reaction to scenery. Repeat photography is discussed in relation to landscape, remote, and site-specific systems. Critical attributes of repeat photography are (1) maps to find the sampling location and of the photo monitoring layout; (2) documentation of the monitoring system to include purpose, camera and film, weather, season, sampling technique, and equipment; and (3) precise replication of photographs.
  • Exposure Metering and Zone System Calibration
    Exposure Metering Relating Subject Lighting to Film Exposure By Jeff Conrad A photographic exposure meter measures subject lighting and indicates camera settings that nominally result in the best exposure of the film. The meter calibration establishes the relationship between subject lighting and those camera settings; the photographer’s skill and metering technique determine whether the camera settings ultimately produce a satisfactory image. Historically, the “best” exposure was determined subjectively by examining many photographs of different types of scenes with different lighting levels. Common practice was to use wide-angle averaging reflected-light meters, and it was found that setting the calibration to render the average of scene luminance as a medium tone resulted in the “best” exposure for many situations. Current calibration standards continue that practice, although wide-angle average metering largely has given way to other metering techniques. In most cases, an incident-light meter will cause a medium tone to be rendered as a medium tone, and a reflected-light meter will cause whatever is metered to be rendered as a medium tone. What constitutes a “medium tone” depends on many factors, including film processing, image postprocessing, and, when appropriate, the printing process. More often than not, a “medium tone” will not exactly match the original medium tone in the subject. In many cases, an exact match isn’t necessary—unless the original subject is available for direct comparison, the viewer of the image will be none the wiser. It’s often stated that meters are “calibrated to an 18% reflectance,” usually without much thought given to what the statement means.
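The calibration the excerpt discusses is commonly written as the reflected-light exposure equation N²/t = L·S/K, where L is scene luminance, S the ISO speed, and K the calibration constant. A hedged numeric companion, assuming the widely used K = 12.5 and an arbitrary luminance (the excerpt itself quotes no numbers):

```python
# Hedged sketch of the standard reflected-light meter relation
# N^2/t = L*S/K. The luminance value is arbitrary; K varies by maker.
import math

def suggested_ev(luminance, iso, K=12.5):
    """EV such that 2**EV = N^2/t = luminance * iso / K."""
    return math.log2(luminance * iso / K)

ev = suggested_ev(luminance=400.0, iso=100)   # a medium-toned sunlit surface, say
print(f"meter suggests EV {ev:.1f}: e.g. f/8 at about 1/{2**ev / 64:.0f} s")
```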