Video and Image Bayesian Demosaicing with a Two Color Image Prior


Eric P. Bennett1, Matthew Uyttendaele2, C. Lawrence Zitnick2, Richard Szeliski2, and Sing Bing Kang2

1 The University of North Carolina at Chapel Hill, Chapel Hill, NC
2 Microsoft Research, Redmond, WA

Abstract. The demosaicing process converts single-CCD color representations of one color channel per pixel into full per-pixel RGB. We introduce a Bayesian technique for demosaicing Bayer color filter array patterns that is based on a statistically obtained two-color-per-pixel image prior. By modeling all local color behavior as a linear combination of two fully specified RGB triples, we avoid color fringing artifacts while preserving sharp edges. Our grid-less, floating-point pixel location architecture can process both single images and multiple images from video within the same framework, with multiple images providing denser color samples and therefore better color reproduction with reduced aliasing. An initial clustering is performed to determine the underlying local two color model surrounding each pixel. Using a product of Gaussians statistical model, the underlying linear blending ratio of the two representative colors at each pixel is estimated, while simultaneously providing noise reduction. Finally, we show that by sampling the image model at a finer resolution than the source images during reconstruction, our continuous demosaicing technique can super-resolve in a single step.

1 Introduction

Most digital cameras use a single sensor to record images and video. They use color filter arrays (CFAs) to capture one color band per pixel, and interpolate colors to produce full RGB per pixel. This interpolation process is known as demosaicing. The Bayer filter is the most popular type of CFA used today.
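To make the sampling pattern concrete, here is a minimal sketch (not from the paper) that simulates what a Bayer sensor records: given a full RGB image, it keeps exactly one channel per pixel according to a repeating 2x2 RGGB tile. The function name and interface are illustrative assumptions.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate Bayer (RGGB) sampling: keep one channel per pixel.

    rgb: H x W x 3 float array. Returns an H x W single-channel mosaic
    plus a same-shaped channel-index map (0=R, 1=G, 2=B).
    """
    h, w, _ = rgb.shape
    # Channel index for each position in the repeating 2x2 RGGB tile.
    tile = np.array([[0, 1],   # R G
                     [1, 2]])  # G B
    chan = tile[np.arange(h)[:, None] % 2, np.arange(w)[None, :] % 2]
    # Pick the sampled channel at every pixel; the other two are lost.
    mosaic = np.take_along_axis(rgb, chan[..., None], axis=2)[..., 0]
    return mosaic, chan
```

The demosaicer's job is to invert this lossy sampling: recover the two discarded values at every pixel from the surviving neighbors.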
Demosaicing a raw Bayer image requires an underlying image model to guide decisions for reconstructing the missing color channels. At every pixel only one color channel is sampled, so we must use that information, combined with that of nearby samples, to reconstruct plausible RGB triples. An image model provides a prior for reconstructing the missing colors based on patterns of the surrounding samples. Demosaicing algorithms differ in how local spatial changes in a single color channel are used to propagate information to the other channels.

Demosaicing is inherently underspecified because there are no complete RGB triples anywhere in the image to learn an image-specific prior from. Even worse, if every pixel in an image has a random ratio of red, green, and blue, there is no hope of reconstructing the image. Only by assuming some local coherence between channels can we reasonably reconstruct the original image.

In this paper, we reduce the problem's complexity by developing an underlying statistical image model that treats all colors in a local area as a linear combination of no more than two representative colors. To use this model, we estimate the two representative colors for the local area centered at each pixel and find the linear blending for each pixel that determines its color.

Our two color model is motivated by the need to reconstruct accurate colors at edges. Current demosaicing algorithms can accurately reconstruct colors in image areas where only low frequencies are present. However, yellow or purple color fringing can appear at high frequency edges due to the edges in multiple color channels not being aligned in the reconstruction.

(Research performed while an intern at Microsoft Research, Redmond. Published in: A. Leonardis, H. Bischof, and A. Pinz (Eds.): ECCV 2006, Part I, LNCS 3951, pp. 508–521, 2006. © Springer-Verlag Berlin Heidelberg 2006.)
By constraining the system to interpolate between fully specified RGB colors, there is less risk of misalignment. This constraint also provides noise reduction in smooth areas.

The underlying two colors at each pixel are estimated using K-Means clustering. The RGB colors used for clustering can come from any existing demosaicing algorithm. The final color at each pixel results from discovering the proper linear blending coefficient between the two representative colors.

Based on knowledge of a small set of CFA samples around each pixel, our problem is posed using Bayesian probabilities. Stating the problem statistically allows the model to include non-grid-aligned samples from multiple images or temporally adjacent video frames to increase color accuracy. Also, by sampling the demosaicer's output at an increased resolution, information from these additional samples exposes details between pixels, providing super-resolution in a single step.

2 Previous Work

There are many approaches to demosaicing. A simple technique for demosaicing a Bayer color filter array [1] (shown in Figure 1) is bilinear interpolation, which is able to reconstruct smooth and smoothly varying image areas. At edges, bilinear interpolation risks creating aliasing or "zippering" artifacts, where every other pixel along an edge alternates between being considered on or off the edge. Color fringing is the other significant artifact, where yellows, purples, and cyans appear along or on sharp edges. These artifacts result from bilinear interpolation incorrectly placing an edge in a color channel one pixel offset from the same edge in a different channel.

Fig. 1. The Bayer color filter array pattern.

Solving the color fringing and zippering issues was the focus of much subsequent research.
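The bilinear baseline discussed above is easy to state in code. The following sketch (an illustration, not the paper's method) fills the green channel of an RGGB mosaic by averaging the available axial green neighbors at each red or blue site; red and blue would be filled analogously.

```python
import numpy as np

def bilinear_green(mosaic):
    """Bilinearly fill the green channel of an RGGB Bayer mosaic.

    At each red or blue site, green is estimated as the mean of the
    green values among the four axial neighbors (fewer at borders).
    """
    h, w = mosaic.shape
    rows, cols = np.indices((h, w))
    # Green sites in an RGGB tiling are where (row + col) is odd.
    green_mask = (rows + cols) % 2 == 1
    g = np.where(green_mask, mosaic, 0.0)
    count = green_mask.astype(float)
    pad_g = np.pad(g, 1, mode="constant")
    pad_c = np.pad(count, 1, mode="constant")
    # Sum of the four axial neighbors' green values and green counts.
    nbr_sum = (pad_g[:-2, 1:-1] + pad_g[2:, 1:-1] +
               pad_g[1:-1, :-2] + pad_g[1:-1, 2:])
    nbr_cnt = (pad_c[:-2, 1:-1] + pad_c[2:, 1:-1] +
               pad_c[1:-1, :-2] + pad_c[1:-1, 2:])
    return np.where(green_mask, mosaic, nbr_sum / np.maximum(nbr_cnt, 1))
```

In smooth regions this works well; at a sharp edge the averaged neighbors straddle the edge, which is exactly where the zippering and fringing artifacts described above originate.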
One approach is to bilinearly interpolate the green channel and then interpolate the red:green and blue:green ratios for the remaining samples [2]. This assumes there exists no green detail smaller than two pixels and that red and blue locally vary in a fixed ratio with green. Median interpolation [3] assumes that bilinear interpolation can be repaired by median filtering the red-green and blue-green spaces. Both of these methods target fringing artifacts but can result in over-smoothing. Comparisons can be found in [4].

The approach for Vector Color Filter Array Demosaicing [5] uses filtering in a different way. It first generates pseudo-colors using the local combinations of R, G, and B CFA samples. The chosen color is the "median color" whose total distance to the other pseudo-colors is minimized.

Techniques sensitive to gradients were introduced to reduce over-smoothing by performing color interpolation only along sharp edges and not across them. Laroche and Prescott [6], Hamilton and Adams [7], and Chang et al. [8] presented algorithms with a chronologically increasing number of gradient directions evaluated and interpolated. Kimmel [9] modeled images as smooth surfaces separated by edge discontinuities which were enhanced with inverse diffusion.

There are also grid-based techniques that learn statistical image models. Malvar et al. [10] present a fast linear interpolation scheme with color-specific kernels, learned using a Wiener approach from the popular Kodak data set [11]. Another approach, independent of the color filter array pattern, is Assorted Pixels [12], which constructs a kernel based on any multi-spectral array. Hel-Or [13] modeled correlation between channels using Canonical Correlation Analysis.

Bayesian statistical methods were applied to the demosaicing problem by Brainard [14], who modeled the base image priors as a set of sinusoids.
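The median-interpolation idea above can be sketched compactly: after an initial demosaic, median filter the color-difference planes R-G and B-G, then add the green channel back. This is an illustrative reconstruction of the general technique, not the exact algorithm of [3]; the small hand-rolled median filter stands in for a library routine.

```python
import numpy as np

def median3x3(img):
    """3x3 median filter with edge replication (stand-in for a
    library median filter)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Gather the 9 neighbors of every pixel into one stacked array.
    stack = np.stack([p[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    return np.median(stack, axis=0)

def median_repair(r, g, b):
    """Repair an initial (e.g. bilinear) demosaic by median filtering
    the color-difference planes, in the spirit of median interpolation."""
    r2 = g + median3x3(r - g)
    b2 = g + median3x3(b - g)
    return r2, g, b2
```

Because fringing shows up as isolated outliers in the difference planes, the median suppresses it; the cost, as noted above, is over-smoothing of genuine chromatic detail.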
Closer to our two color approach is Bayesian Matting [15], which finds the foreground and background colors from the surrounding areas. Using a Bayesian system, it finds a linear blending ratio between these two colors.

To perform super-resolution enhancement, additional resolution information must be acquired from somewhere. Zomet and Peleg [16] use information in multiple images taken from different sensors, while Freeman et al. [17] inferred resolution from different resolution scalings of the same image. Demosaicing while providing super-resolution from multiple images was investigated by Fung and Mann [18]. Their approach places samples into a regularly spaced grid, and each output pixel component is found using a nearest neighbor search of the registered inputs. Gotoh and Okutomi [19] generalized earlier super-resolution approaches to directly process Bayer samples from many frames. Their results used primarily synthetic frames (>20), and did not give quantitative results or consider single-image demosaicing.

To evaluate our results, we are interested in using a more perceptually valid measure than merely SNR or MSE. The S-CIELAB [20, 21] model provides an extension to the CIE L*a*b* color space that is aware of local contrast and can hint if the human visual system (HVS) cannot detect errors due to masking. Also of interest is the iCAM [22] model, which predicts the color the HVS perceives in the presence of nearby colors. Although we do not use such a complex model, there are methods that measure perceptual error considering more aspects of the HVS, such as the Multiscale Adaptation Model [23] and the Visible Differences Predictor [24]. These and other models have proved useful in detecting HDR compression error and in allocating rendering tasks based on contrast [25, 26].
3 The Two Color Model

Central to our processing is the assumption that at most two representative colors exist within a local neighborhood. Every pixel within that neighborhood is either one of the representative colors or is a linear combination of both. This assumption is violated in areas where more than two different colors meet, but such occurrences are relatively rare. Assuming a Gaussian noise model, this distribution represents a cylindrical volume in color space which spans the two representative colors, as shown in Figure 2.

The two color model serves multiple purposes. Primarily, it serves as a constraint on the ill-conditioned demosaicing problem.
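Geometrically, each pixel's color is assumed to lie (up to noise) on the segment between the two representative colors, and its blending fraction is its position along that segment. The sketch below recovers that fraction by orthogonal projection; this illustrates the two-color geometry only, and is an assumption of this note: the paper itself estimates the blend with a product-of-Gaussians model over nearby CFA samples rather than a plain projection.

```python
import numpy as np

def blend_alpha(c, j, k):
    """Project color c onto the segment between representative colors
    j and k, returning alpha such that c ~ (1 - alpha) * j + alpha * k,
    clamped to [0, 1]."""
    c, j, k = (np.asarray(v, dtype=float) for v in (c, j, k))
    d = k - j
    denom = float(d @ d)
    if denom == 0.0:
        # Degenerate neighborhood: both representatives coincide.
        return 0.0
    alpha = float((c - j) @ d) / denom
    return min(max(alpha, 0.0), 1.0)
```

The residual distance from c to the segment is what the cylindrical Gaussian noise volume in Figure 2 absorbs; clamping alpha keeps reconstructed colors between the two representatives, which is the source of the model's resistance to fringing.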