Color-To-Grayscale: Does the Method Matter in Image Recognition?


Christopher Kanan*, Garrison W. Cottrell

Department of Computer Science and Engineering, University of California San Diego, La Jolla, California, United States of America

Abstract

In image recognition it is often assumed the method used to convert color images to grayscale has little impact on recognition performance. We compare thirteen different grayscale algorithms with four types of image descriptors and demonstrate that this assumption is wrong: not all color-to-grayscale algorithms work equally well, even when using descriptors that are robust to changes in illumination. These methods are tested using a modern descriptor-based image recognition framework, on face, object, and texture datasets, with relatively few training instances. We identify a simple method that generally works best for face and object recognition, and two that work well for recognizing textures.

Citation: Kanan C, Cottrell GW (2012) Color-to-Grayscale: Does the Method Matter in Image Recognition? PLoS ONE 7(1): e29740. doi:10.1371/journal.pone.0029740

Editor: Eshel Ben-Jacob, Tel Aviv University, Israel

Received July 13, 2011; Accepted December 4, 2011; Published January 10, 2012

Copyright: © 2012 Kanan, Cottrell. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work was supported by the James S. McDonnell Foundation (Perceptual Expertise Network, I. Gauthier, PI), and the National Science Foundation (NSF) (grant #SBE-0542013 to the Temporal Dynamics of Learning Center, G.W. Cottrell, PI). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing Interests: The authors have declared that no competing interests exist.

* E-mail: [email protected]

Introduction

Modern descriptor-based image recognition systems often operate on grayscale images, with little being said of the mechanism used to convert from color-to-grayscale. This is because most researchers assume that the color-to-grayscale method is of little consequence when using robust descriptors. However, since many methods for converting to grayscale have been employed in computer vision, we believe it is prudent to assess whether this assumption is warranted. The most common techniques are based on weighted means of the red, green, and blue image channels (e.g., Intensity and Luminance), but some methods adopt alternative strategies to generate a more perceptually accurate representation (e.g., Luma and Lightness [1]) or to preserve subjectively appealing color contrast information in grayscale images (e.g., Decolorize [2]). A priori, none of these criteria suggest superior recognition performance.

The main reason why grayscale representations are often used for extracting descriptors, instead of operating on color images directly, is that grayscale simplifies the algorithm and reduces computational requirements. Indeed, color may be of limited benefit in many applications, and introducing unnecessary information could increase the amount of training data required to achieve good performance.

In this paper we compare thirteen different methods for converting from color-to-grayscale. While we do not evaluate every method that has been developed, we evaluate all of the widely used methods, as well as some less well known techniques (e.g., Decolorize). All of the methods are computationally inexpensive, i.e., they all have linear time complexity in the number of pixels. This comparison is performed using the Naive Bayes Nearest Neighbor (NBNN) [3] image recognition framework and four different types of image descriptors. Our objective is to determine if the grayscale representation used significantly influences performance and, if so, to identify which method is preferred regardless of the dataset or descriptor type.

Our experiments are conducted with relatively few training instances, since classifier performance is much more sensitive to the quality of the descriptors in this setting [4]. One reason for this phenomenon is that an image recognition system can obtain invariance properties simply by training it with more data, as long as the additional data exhibits the same variation as the test set [5]. For many applications this is infeasible (e.g., automated surveillance systems for detecting suspected criminals), and it could reduce execution speed for some non-parametric classification algorithms, e.g., nearest neighbor. If a descriptor is not suitably robust when the size of the training set is small, the classifier may inappropriately separate the categories. We believe this is especially likely with large changes in illumination.

Related work has shown that illumination conditions and camera parameters can greatly influence the properties of several recent image descriptor types [6]. This suggests grayscale algorithms that are less sensitive to illumination conditions may exhibit superior performance when illumination is variable. To our knowledge, this is the first time color-to-grayscale algorithms have been evaluated in a modern descriptor-based image recognition framework on established benchmark datasets.

Methods

Color-to-Grayscale Algorithms

In this section we briefly describe thirteen methods with linear time complexity for converting from color-to-grayscale, i.e., functions G that take an R^(n x m x 3) color image and convert it to an R^(n x m) representation. All image values are assumed to be between 0 and 1. Let R, G, and B represent linear (i.e., not gamma corrected) red, green, and blue channels. The output of each grayscale algorithm is between 0 and 1. Since some methods have names like Luster and Luminance, we denote all grayscale algorithms by capitalizing the first letter and italicizing in the text. All transformations are applied component-wise, i.e., applied independently to each pixel. Several of the methods use the standard gamma correction function C(t) = t' = t^(1/2.2) [7]. We denote gamma corrected channels as R', G', and B'. The output of the grayscale algorithms on several images is shown in Fig. 1.

[Figure 1. Qualitative comparison of color-to-grayscale algorithms. The four images shown are: (1) a panel of fully saturated colors; (2) Ishihara plate 3, in which a person with normal vision will see the number 29, while a person with red-green deficient vision may see the number 70; (3) a green shrub laden with red berries; and (4) a picture of the Pacific Ocean. All images are shown gamma corrected so that the details are not excessively dark, except for Gleam, Luma, and Lightness. The color panel contains fully saturated colors, which Value, Intensity, and Luster convert to the same shade of gray; however, humans do not perceive these colors as having equivalent brightness, which is a trait captured by Lightness and Luminance. Gleam, Intensity, Luminance, Lightness, and Decolorize all lose most of the chromatic contrast present in the Ishihara plate, while Luster and Value preserve it. The same pattern of chromatic contrast degradation is present in the fruit image, with the fruit becoming much more difficult to distinguish from the …]

Perhaps the simplest color-to-grayscale algorithm is Intensity [1]. It is the mean of the RGB channels:

    G_Intensity = (1/3)(R + G + B).                                  (1)

Although Intensity is calculated using linear channels, in practice gamma correction is often left intact when using datasets containing gamma corrected images. We call this method Gleam:

    G_Gleam = (1/3)(R' + G' + B').                                   (2)

In terms of pixel values, Intensity and Gleam produce very different results. Since C(t) is a concave function, Jensen's inequality [7] implies that Gleam will never produce a representation with values greater than gamma corrected Intensity, and it follows that

    G_Intensity <= G_Gleam <= C(G_Intensity).

When gamma corrected Intensity and Gleam are both applied to natural images, we found that Gleam produces pixel values around 20-25% smaller on average.

Unlike Intensity and Gleam, Luminance [8] is designed to match human brightness perception by using a weighted combination of the RGB channels:

    G_Luminance = 0.3R + 0.59G + 0.11B.                              (3)

Luminance does not try to match the logarithmic nature of human brightness perception, but this is achieved to an extent with subsequent gamma correction. Luminance is the standard algorithm used by image processing software (e.g., GIMP). It is implemented by MATLAB's "rgb2gray" function, and it is frequently used in computer vision (e.g., [9]). Luma is a similar gamma corrected form used in high-definition televisions (HDTVs) [1]:

    G_Luma = 0.2126R' + 0.7152G' + 0.0722B'.                         (4)

Lightness is a perceptually uniform grayscale representation used in the CIELAB and CIELUV color spaces [10]. This means an increment in Lightness should more closely correspond to human perception, which is achieved via a nonlinear transformation of the RGB color space [10],

    G_Lightness = (1/100)(116 f(Y) - 16),                            (5)

where Y = 0.2126R + 0.7152G + 0.0722B, and

    f(t) = t^(1/3)               if t > (6/29)^3,
           (1/3)(29/6)^2 t + 4/29  otherwise.                        (6)

We have normalized Lightness to range from 0 to 1, instead of the usual range of 0 to 100. The Lightness nonlinearity f(t) implements a form of gamma correction.

Value is the achromatic channel in the Hue, Saturation, and Value (HSV) color space, and it provides absolute brightness information. It is computed by taking the maximum of the RGB channels [10]:

    G_Value = max(R, G, B).                                          (7)

Since gamma correction is a monotonically increasing function, it follows that

    C(max(R, G, B)) = max(R', G', B').

HSV is occasionally used in image recognition (e.g., [9,11,12]), but Value is equally sensitive to changes in the brightness of one color channel as it is to changes to all color channels, so we expect it to perform poorly when significant brightness variation is present.

Luster is the L channel in the HLS (Hue, Lightness, and Saturation) color space [1].
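The conversions defined by Equations (1)-(7) are all simple per-pixel maps, and can be sketched in a few lines of NumPy. This is an illustrative re-implementation from the formulas in the text (function names are ours), not the authors' released code:

```python
import numpy as np

def gamma(t):
    # Standard gamma correction C(t) = t^(1/2.2) from [7].
    return t ** (1.0 / 2.2)

def intensity(img):
    # Eq. (1): mean of the linear R, G, B channels.
    return img.mean(axis=2)

def gleam(img):
    # Eq. (2): mean of the gamma-corrected channels.
    return gamma(img).mean(axis=2)

def luminance(img):
    # Eq. (3): perceptual weighting of the linear channels.
    return img @ np.array([0.30, 0.59, 0.11])

def luma(img):
    # Eq. (4): HDTV weighting of the gamma-corrected channels.
    return gamma(img) @ np.array([0.2126, 0.7152, 0.0722])

def lightness(img):
    # Eqs. (5)-(6): CIE Lightness, normalized to [0, 1].
    Y = img @ np.array([0.2126, 0.7152, 0.0722])
    f = np.where(Y > (6 / 29) ** 3,
                 np.cbrt(Y),
                 (29 / 6) ** 2 * Y / 3 + 4 / 29)
    return (116 * f - 16) / 100

def value(img):
    # Eq. (7): achromatic V channel of HSV.
    return img.max(axis=2)

# Toy linear RGB image with values in [0, 1].
img = np.random.default_rng(0).random((4, 4, 3))
for fn in (intensity, gleam, luminance, luma, lightness, value):
    g = fn(img)
    # Every method maps an n x m x 3 image to an n x m image in [0, 1].
    assert g.shape == (4, 4) and (g >= 0.0).all() and (g <= 1.0).all()
```

Each function takes an n x m x 3 array of linear RGB values in [0, 1] and returns an n x m grayscale array in [0, 1], matching the component-wise definition of G in the text.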
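The ordering between Intensity, Gleam, and gamma corrected Intensity that follows from Jensen's inequality can be checked numerically. A minimal sketch, assuming the same C(t) = t^(1/2.2) correction as the text:

```python
import numpy as np

def C(t):
    # Gamma correction C(t) = t^(1/2.2); concave on [0, 1], and C(t) >= t there.
    return t ** (1.0 / 2.2)

rng = np.random.default_rng(1)
rgb = rng.random((100_000, 3))        # random linear pixels in [0, 1]

g_intensity = rgb.mean(axis=1)        # Eq. (1)
g_gleam = C(rgb).mean(axis=1)         # Eq. (2)

# C(t) >= t on [0, 1] gives Intensity <= Gleam, and Jensen's inequality
# for the concave C gives Gleam <= C(Intensity).
assert (g_intensity <= g_gleam + 1e-12).all()
assert (g_gleam <= C(g_intensity) + 1e-12).all()
```

The two assertions hold for every sampled pixel, mirroring the inequality G_Intensity <= G_Gleam <= C(G_Intensity) stated above.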
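The observation that gamma correction commutes with the channel maximum used by Value can likewise be verified directly; again a sketch under the assumed C(t) = t^(1/2.2):

```python
import numpy as np

def C(t):
    # Monotonically increasing gamma correction.
    return t ** (1.0 / 2.2)

rgb = np.random.default_rng(2).random((10_000, 3))

# Because C is monotonically increasing, taking the channel maximum before
# or after gamma correction yields the same Value image:
# C(max(R, G, B)) = max(R', G', B').
lhs = C(rgb.max(axis=1))
rhs = C(rgb).max(axis=1)
assert np.allclose(lhs, rhs)
```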