Why You Should Forget Luminance Conversion and Do Something Better

Rang M. H. Nguyen, National University of Singapore
Michael S. Brown, York University

Abstract

One of the most frequently applied low-level operations in computer vision is the conversion of an RGB camera image into its luminance representation. It is also one of the most incorrectly applied operations. Even our most trusted software, Matlab and OpenCV, does not perform luminance conversion correctly. In this paper, we examine the main factors that make proper RGB-to-luminance conversion difficult, in particular: 1) incorrect white balance, 2) incorrect gamma/tone-curve correction, and 3) incorrect equations. Our analysis shows that errors of up to 50% for various colors are not uncommon. As a result, we argue that for most computer vision problems there is no need to attempt luminance conversion; instead, there are better alternatives depending on the task.

Figure 1. Examples of errors that arise from improper luminance conversion: wrong white balancing (2500°K), wrong tone curve, and wrong equations (YIQ luma), each compared against the ground truth. The ground-truth luminance for this experiment is captured with a hyperspectral camera.

1. Introduction and Motivation

One of the most frequently applied operations in computer vision and image processing is the conversion of an RGB image into a single-channel luminance representation. Luminance is a photometric measurement that quantifies how the human eye perceives radiant energy emitted from a scene. As such, RGB-to-luminance conversion is used as a way to convert an RGB image into its perceived-brightness representation. Luminance is generally represented by the variable Y, which comes from the CIE 1931 XYZ color space definition, in which Y is defined by the luminosity function of a standard human observer under well-lit conditions. Luminance is routinely used in a variety of vision tasks, from image enhancement [22, 27, 29] to feature detection [2, 20] to physical measurements [10, 11, 26].

There are a number of commonly used methods to convert an RGB image to Y. For example, the widely used YIQ and YUV color spaces use the weighted average Y = 0.299R + 0.587G + 0.114B, while more recent methods adopt a weighted average of Y = 0.2126R + 0.7152G + 0.0722B. In some cases, a simple RGB average of Y = (R + G + B)/3 is used. Clearly, these cannot all be correct. In addition, there are other factors at play in this conversion, including the color space's assumed white point and nonlinear mappings (e.g., gamma correction). Radiometric calibration methods [7, 16, 18, 19] have long established that cameras use proprietary nonlinear mappings (i.e., tone curves) that do not conform to sRGB standards. Recent work [3, 15, 17, 31, 14] has shown that these tone curves can be setting-specific. Fig. 1 shows examples of errors caused by different factors in the color space conversion from sRGB to luminance. Interestingly, however, computer vision algorithms still work in the face of these errors. If our algorithms work with incorrect luminance conversion, why then are we even bothering to attempt luminance conversion?

Contribution. This work offers two contributions. First, we systematically examine the challenges in obtaining true scene luminance values from a camera RGB image. Specifically, we discuss the assumptions often overlooked in the definitions of standard color spaces and in onboard camera photo-finishing that are challenging to undo when performing luminance conversion. We also discuss the use of incorrect equations (e.g., YIQ or HSV) that are erroneously interpreted as luminance.
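The disagreement among these conversions is easy to check numerically. The following is a minimal sketch (the helper names are ours, not the paper's; the weights are the ones quoted above) that evaluates all three formulas on the same pixel:

```python
# Three commonly used "luminance" formulas from the text.
# The first set of weights comes from the YIQ/YUV (BT.601) definition,
# the second from the more recent BT.709 definition, and the third is
# the simple RGB average sometimes used as a shortcut.

def luma_601(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_709(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def rgb_mean(r, g, b):
    return (r + g + b) / 3.0

pixel = (1.0, 0.0, 0.0)  # a saturated red
for name, f in (("YIQ/YUV", luma_601), ("709", luma_709), ("mean", rgb_mean)):
    print(f"{name:8s} Y = {f(*pixel):.4f}")
```

For this pixel the three results (0.299, 0.2126, and 0.3333) span a relative spread of more than 50%, so at most one of them can agree with the true CIE Y.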
Our findings reveal it is not uncommon to encounter conversion errors of up to 50% from the true luminance values. Our second contribution is to advocate that for many vision algorithms, alternatives to luminance conversion exist and are better suited for the task at hand.

Figure 3. The primaries and white points of the sRGB and NTSC color spaces as defined in the CIE XYZ color space, together with the gamma encoding ((2.2)^-1) and decoding (2.2) curves. These establish the mapping between CIE XYZ and sRGB/NTSC and vice versa.

2. Related Work

There is little work analyzing the correctness of luminance conversion with respect to an imaged scene. Many approaches in the published literature cite the conversion equations given in standard image processing textbooks (e.g., [5]) and assume the conversions to be accurate. There has been work, however, that describes the various color spaces and their usages. Süsstrunk et al. [28] reviewed the specifications and usage of standardized RGB color spaces for images and video. This work described a number of industry-accepted RGB color spaces, such as standard RGB (sRGB), Adobe RGB 98, Apple RGB, NTSC, and PAL. It serves as a reminder that it is important to be clear about which color space images are in before doing a conversion. Others have examined factors that affect the color distributions of an imaged scene. In particular, Romero et al.
[25] analyzed the color changes of a scene under varying daylight illumination. Their conclusion is that the chromaticity coordinates change significantly while the luminance coordinates are less affected. Kanan et al. [13] analyzed the effect of thirteen methods for converting color images to grayscale images (often considered to be luminance) on object recognition. They found, not surprisingly, that different conversion methods result in different object recognition performance. There is a large body of work on radiometric calibration of cameras (e.g., [7, 16, 18, 19]). These works have long established the importance of understanding the nonlinear mapping of camera pixel intensities with respect to scene radiance. These methods, however, do not explore the relationship of their linearized camera values to the true scene luminance as defined by CIE XYZ.

3. Factors for Luminance Conversion

3.1. Preliminaries: CIE 1931 XYZ and Luminance

Virtually all modern color spaces used in image processing and computer vision trace their definition to the work by Guild and Wright [9, 30], whose work on a device-independent perceptual color space was adopted as the official CIE 1931 XYZ color space. Even though other color spaces were introduced later (and shown to be superior), CIE 1931 XYZ remains the de facto color space for camera and video images.

CIE XYZ (dropping 1931 for brevity) established three hypothetical color primaries, X, Y, and Z. These primaries provide a means to describe a spectral power distribution (SPD) by parameterizing it in terms of X, Y, and Z. This means a three-channel image I under the CIE XYZ color space can be described as:

    I_c(x) = ∫_ω C_c(λ) R(x,λ) L(λ) dλ,    (1)

where λ represents the wavelength, ω is the visible spectrum 380-720 nm, C_c is the CIE XYZ color matching function, and c = X, Y, Z indexes the primaries. The term R(x,λ) represents the scene's spectral reflectance at pixel x and L(λ) is the spectral illumination in the scene. In many cases, the spectral reflectance and illumination at each pixel are combined into the spectral power distribution S(x,λ) (see Fig. 2). Therefore, Eq. 1 can be rewritten as:

    I_c(x) = ∫_ω C_c(λ) S(x,λ) dλ.    (2)

In this case, any S(x,λ) that maps to the same X/Y/Z values is perceived as the same color by an observer. The color space was defined such that the matching function associated with the Y primary has the same response as the luminosity function of a standard human observer [4]. This means that the Y value for a given spectral power distribution indicates how bright it is perceived to be with respect to other scene points. As such, Y is referred to as the "luminance of a scene" and is a desirable attribute for describing an imaged scene.

Figure 2. A physical scene's spectral power distributions (SPD1: Y=50, x=0.56, y=0.37; SPD2: Y=38, x=0.24, y=0.61), the CIE XYZ color matching functions, and the resulting image under the CIE XYZ color space.

A number of color spaces have been derived from the CIE XYZ color space. Converting to luminance is essentially mapping a color value in a different color space back to the CIE Y value. The following describes a number of factors necessary to get this mapping correct.

3.2. RGB Color Spaces (sRGB/NTSC)

While CIE XYZ is useful for colorimetry to describe the relationships between SPDs, a color space based on RGB primaries related to real imaging and display hardware is desirable. To establish a new color space, two things are needed: the location of the three primaries (R, G, B) and the white point.
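In practice, the color matching functions and SPDs are tabulated at discrete wavelengths, so the integral in Eq. 2 reduces to a weighted sum over the visible spectrum. The sketch below illustrates this discretization for the Y channel; note that the Gaussian stand-in for the ȳ matching function and the two toy SPDs are our own illustrative assumptions, not the tabulated CIE 1931 data a real implementation would use:

```python
import math

def ybar(lam):
    """Crude Gaussian stand-in for the CIE luminosity function (peaks at 555 nm).
    A real implementation would use the tabulated CIE 1931 ybar values."""
    return math.exp(-(((lam - 555.0) / 50.0) ** 2))

def luminance(spd, dlam=5):
    """Riemann-sum approximation of Eq. 2 for Y over the 380-720 nm spectrum."""
    return sum(ybar(lam) * spd(lam) * dlam for lam in range(380, 721, dlam))

def flat(lam):
    """A spectrally flat toy SPD."""
    return 1.0

def green_band(lam):
    """A toy SPD concentrated in a narrow green band."""
    return 1.0 if 530 <= lam <= 580 else 0.0

print(luminance(flat), luminance(green_band))
```

Although the band-limited SPD carries only about a sixth of the flat SPD's total power, it retains over half of its Y, because the luminosity weighting is concentrated around 555 nm.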
