Adaptive Homogeneity-Directed Demosaicing Algorithm

Thorsten Frommen

Contents

1 Introduction

2 Digital Cameras
   2.1 Design and Functionality
   2.2 Color Filters
       2.2.1 Bayer Filter
       2.2.2 RGBE Filter
       2.2.3 CYGM Filter
       2.2.4 Other Color Filters
   2.3 Image Sensors
       2.3.1 Mosaic Filter Sensor
       2.3.2 Foveon X3
       2.3.3 3CCD Sensor
       2.3.4 Other Image Sensors

3 Demosaicing
   3.1 Image Theory
   3.2 Interpolating

4 Artifacts and How to Deal with Them
   4.1 Aliasing
       4.1.1 Green Image Interpolation
       4.1.2 Red (Blue) Image Interpolation
   4.2 Misguidance Color Artifacts
   4.3 Interpolation Artifacts

5 The Algorithm
   5.1 Homogeneity-Directed Demosaicing Algorithm
   5.2 Adaptive Parametrization

6 Currently Used Demosaicing Techniques

7 Use in Medical Image Processing
   7.1 Digital Medical Photography
   7.2 Digital Microscopy
   7.3 Digital Fluorescence Microscopy

8 Conclusion

9 Implementing the Algorithm
   9.1 An Implementation
   9.2 Computed Results

Abstract

Digital cameras, for the most part, use only one CCD (charge-coupled device) with a preceding color filter array as image sensor. Thus, the output images feature a certain kind of pattern in which every pixel carries only single-color information, for instance red or cyan. The process of modifying the raw (image) material and reconstructing an image with full RGB color information at each pixel is called demosaicing. In this paper, one particular way of demosaicing will be explained and analyzed. Concluding, the importance and the use of demosaicing regarding medical image processing will be considered.

Keywords: demosaicing algorithm, color artifact, Bayer pattern, metric neighborhood model.

1 Introduction

This seminar paper is primarily based on Adaptive homogeneity-directed demosaicing algorithm, written by Hirakawa and Parks in 2005 [1], and an earlier version from 2003 [2]. In general, a digital still camera uses one sensor chip with a preceding color filter array. Since the most common type of digital camera features a mosaic filter sensor (see Section 2.3) in combination with a Bayer filter (see Section 2.2), Hirakawa and Parks focus on this specific camera type. Corresponding to the filter design, its raw images carry only single-color information at each pixel. In order to acquire RGB images without losing actual resolution at the same time, certain aspects need to be considered carefully. These considerations arise not only with respect to digital still cameras, but with respect to any digital optical apparatus that captures full-color images with a single mosaic filter sensor; regarding medical image processing, there is more than just one application, as we will see in Section 7.

2 Digital Cameras

To understand the topic properly, we first take a glimpse at the design and the functionality of a typical digital camera. In doing so, we will attend to the inside of the camera, in particular the color filter array and the sensor chip. Once the basics are plain to us, we will begin looking into the subject of demosaicing and all its stumbling blocks.

2.1 Design and Functionality

A common digital camera features a mosaic filter sensor that consists of a color filter array and a single CCD as image sensor. After passing the lens, the light beams encounter the filter complex, illustrated in Figure 1. Depending on the filter type, the light will be split into specific components which will be transmitted to the chip, which in turn converts the amount of incoming light into electric current. Finally, these electric signals are processed into image data, typically in JPEG compression.

Figure 1: Inner design of a typical digital camera.

In addition to this very common design of a digital camera, there are a lot of others. Some cameras, for example, use a CMOS (complementary metal oxide semiconductor) sensor in place of a CCD. Other cameras do not contain a mosaic filter and do the filtering in some different way. Yet another camera type splits the light via a prism assembly and thus features three CCDs instead of only one.

2.2 Color Filters

Figure 2: Bayer filter. Figure 3: RGBE filter. Figure 4: CYGM filter.

2.2.1 Bayer Filter

A very common color filter is the Bayer filter or so-called Bayer pattern [3]. It consists of three distinct filter types: the first one is sensitive to the green region of the spectrum, the second type transmits the red spectral range, and the third type is a blue filter. The filter types are arranged in a repeated 2 × 2 matrix (quincunx grid) that has one red and one blue component and, due to the fact that the human eye is more sensitive to green than to red and blue, two green components. Furthermore, this provides a better luminance representation. Figure 2 shows the original array from the definition given in the US patent, which has, in each quincunx grid, the two green elements placed in the diagonal from the upper left to the lower right. Often, patterns with other valid arrangements of these three filter types are called Bayer filters, too.

2.2.2 RGBE Filter

The RGBE filter contains four distinct filter types. In actual fact, it extends the Bayer filter by replacing, in each quincunx grid, one green component with one that transmits the emerald (cyan) spectral range, as illustrated in Figure 3. According to Sony Corporation, using this filter reduces the color reproduction error dramatically [4]. The RGBE filter, too, may occur with different arrangements of its components, for instance with green and cyan in one diagonal.

2.2.3 CYGM Filter

As can be seen in Figure 4, the CYGM filter uses, in contrast to the two aforementioned filter types that feature the primary colors red, green and blue, the subtractive or typesetting colors cyan, yellow and magenta plus green. This four-color array is said to provide more accurate luminance information while being less accurate regarding color information.

2.2.4 Other Color Filters

Apart from the three introduced color filters, there are diverse other ones in use. The Eastman Kodak Company, for example, is currently working on new color filter patterns that feature panchromatic or "clear" pixels in order to collect a higher amount of light and thus deliver higher-quality photos under low-light conditions [5]. Each and every mosaic filter has its own selection of color components and their specific arrangement; consequently, it needs its own demosaicing algorithm.

2.3 Image Sensors

Figure 5: Mosaic filter sensor. Figure 6: Foveon X3. Figure 7: 3CCD sensor.

2.3.1 Mosaic Filter Sensor

The most common image sensor is the mosaic filter sensor, shown in Figure 5. It consists of a mosaic pattern color filter (depicted in Section 2.2) and one chip. Each pixel element collects only single-color information, so the image material resembles the mosaic pattern of the filter. To obtain a full RGB color image, the collected data needs to be modified by attaining the missing color information (see Section 3).

2.3.2 Foveon X3

The Foveon X3, like the mosaic filter sensors, is made up of one chip. A major difference lies in the arrangement of its components, as the Foveon X3 has three layers of pixels that are placed one upon the other within the silicon (see Figure 6). This technique takes advantage of the fact that red, green and blue light penetrate the silicon to different depths [6]. Since every pixel is covered by all three distinct filter layers, the collected data represents an image with full RGB color information at each pixel.

2.3.3 3CCD Sensor

In contrast to the two above-listed sensors, the 3CCD sensor contains, as the name implies, three CCDs (see Figure 7). The prism setup in the center of the array refracts the light and directs the appropriate wavelength ranges to their respective CCDs [7]. The collected data is then processed to generate an RGB color image. Due to the spatial demands and the high expense of the prism assembly, this sensor type is not as common as a single-chip sensor, at least regarding digital still cameras. The main field of application of 3CCD sensors is digital video cameras, where size matters less.

2.3.4 Other Image Sensors

There are more than just these three sensor types in use, for example the Super CCD sensor, which has octagonal pixels instead of rectangular ones; but the sensors introduced here suffice for the purpose of a brief overview.

3 Demosaicing

When comparing the three sensor types (see Section 2.3), we notice that the mosaic filter sensor only provides raw image material with single-color information at each pixel. Assume without loss of generality that we intend to take RGB color images with a digital still camera using a mosaic filter sensor. Since the Bayer pattern is the most common color filter, we henceforth consider the image to be recorded with a mosaic filter sensor that applies this very pattern. Figure 8 shows a typical raw image obtained by a camera using the Bayer pattern; the right half of the image shows its camera-interpolated representation.

Figure 8: Camera interpolation (left) and raw data (right).

Figure 9: Magnified cut-out of both parts of the image.

Figure 10: Even more magnified cut-out.

Figures 9 and 10 give two magnifications of this raw image. As mentioned before, due to the design of the mosaic pattern, the image lacks the information of exactly two filter colors at each pixel. In order to obtain an RGB image with full color information, the known data has to be processed in a reasonable and efficient way. This process of modifying the raw image material and reconstructing an image with full RGB color information at each pixel is called demosaicing. Section 3.1 gives a theoretical representation of a color image. Then, in Section 3.2, we consider interpolating the image material in different ways and take a look at the results.
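To make the raw format concrete, the following sketch simulates a mosaic filter sensor in software: per pixel, only the channel selected by the color filter array survives. The concrete layout (green on the diagonal of each 2 × 2 cell, red in even rows, blue in odd rows) is an assumption for illustration, matching the diagonal-green arrangement described in Section 2.2.1; the function name and types are this paper's own.

#include <cstddef>
#include <vector>

// One RGB pixel; channel values in [0, 255].
struct RGB { unsigned char r, g, b; };
using RGBImage = std::vector<std::vector<RGB>>;

// Simulates a Bayer mosaic sensor: per pixel, only the channel selected
// by the CFA survives; the other two channels are zeroed out.
// Assumed layout: green where row and column parity match, red in even
// rows, blue in odd rows.
RGBImage applyBayerFilter(const RGBImage& in) {
    RGBImage out = in;
    for (std::size_t y = 0; y < out.size(); ++y)
        for (std::size_t x = 0; x < out[y].size(); ++x) {
            RGB& p = out[y][x];
            if ((y % 2) == (x % 2)) {   // green positions (diagonal)
                p.r = 0; p.b = 0;
            } else if (y % 2 == 0) {    // red positions in even rows
                p.g = 0; p.b = 0;
            } else {                    // blue positions in odd rows
                p.r = 0; p.g = 0;
            }
        }
    return out;
}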

3.1 Image Theory

A color image $f : X \to Y$ can be interpreted as a mapping between a set of 2-D locations $X$ and a set of color values $Y$. If considering RGB images, all $y \in Y$ are of the type $y = [R, G, B]^T$, where $R$, $G$ and $B$ indicate the red, green and blue tri-stimulus values, respectively.

3.2 Interpolating

Interpolation is, roughly speaking, a mathematical instrument for computing missing information, in this case color information, with the help of the information already available. The first and most obvious approach to demosaicing is to simply interpolate the image, in terms of color, by setting the color of each pixel to the average value of a certain number of its (up to eight) neighbors and itself, a form of bilinear interpolation. By means of a simple example image (Figure 11) and its representation according to a camera using the Bayer filter (Figure 12), the procedure will now be outlined.

Figure 11: Example image, kept in same-color nuances. Figure 12: Bayer filter applied to given image. Figure 13: Simple interpolation with average neighbor-color.

Since we do not consider the uneven occurrences of red, green and blue, respectively, the derived image (Figure 13) is fairly unsound. It is way too dark, the pixels that were surrounded by four red (blue) pixels are too intense, and the whole image shows a green cast. A considerably better result (Figure 14) can be accomplished when the average color is calculated with respect to the actual occurrences of each specific color.

Figure 14: Improved interpolation with occurrence weighting. Figure 15: Difference of original and simple interpolated image. Figure 16: Difference of original and weighted interpolated image.

Thus, the red value, for instance, is calculated by summing up all red values of the (up to nine) pixels and then dividing the sum by the number of just the red-valued pixels (and not the whole neighborhood). The difference images between the original image and both results are depicted in Figures 15 and 16; the lighter a pixel, the more accurate the procedure. The results of both methods applied to a real photograph (Figures 17, 18 and 19) give a considerably better impression.

Figure 17: Raw image (Bayer filter applied). Figure 18: Simple interpolation with average neighbor-color. Figure 19: Improved interpolation with occurrence weighting.
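A sketch of the occurrence-weighted variant just described: in contrast to the simple average, which divides by the size of the whole neighborhood, the sum is divided only by the number of pixels that actually carry a sample of the channel in question. The per-channel mask argument (nonzero where the channel was sampled) is an assumed convention, not part of the original method's notation.

#include <cstddef>
#include <vector>

using Plane = std::vector<std::vector<double>>;

// Occurrence-weighted interpolation of one color plane: average of all
// pixels in the 3x3 neighborhood (including the center) whose mask
// entry is nonzero, i.e. that actually carry a sample of this channel.
Plane interpolateWeighted(const Plane& samples, const Plane& mask) {
    const std::ptrdiff_t h = samples.size(), w = samples[0].size();
    Plane out(h, std::vector<double>(w, 0.0));
    for (std::ptrdiff_t y = 0; y < h; ++y)
        for (std::ptrdiff_t x = 0; x < w; ++x) {
            double sum = 0.0;
            int count = 0;   // occurrences of this channel only
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    std::ptrdiff_t yy = y + dy, xx = x + dx;
                    if (yy < 0 || yy >= h || xx < 0 || xx >= w) continue;
                    if (mask[yy][xx] == 0.0) continue;
                    sum += samples[yy][xx];
                    ++count;
                }
            if (count > 0) out[y][x] = sum / count;  // divide by occurrences
        }
    return out;
}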

4 Artifacts and How to Deal with Them

When interpolating an image, the artifacts illustrated in Figure 20 may occur.

Figure 20: Aliasing (left), misguidance color artifacts (middle), interpolation artifacts (right).

Section 4.1 will give an overview of aliasing and proposes the application of filterbank techniques to directional interpolation in order to reduce this type of artifact. A way of reducing the amount of misguidance color artifacts via homogeneity reviews of a metric neighborhood model will be explained in Section 4.2. Concluding, Section 4.3 will present an iterative approach to compensating interpolation artifacts.

4.1 Aliasing

Another interpretation of computing the missing pixels is the cancellation of the aliasing terms of the sampled signals.

4.1.1 Green Image Interpolation

Due to the rectangular arrangement of the Bayer filter, only interpolation in the horizontal and vertical directions is considered. Both interpolations are divided into two independent interpolations: one for the green-red rows and columns, the other one for the blue-green ones. Since all four interpolations are done in a similar way, we take a look at the horizontal interpolation for the green-red rows only.

For any color signal $P(x)$, let $P_0(x)$ and $P_1(x)$ be its even and odd sampled signals, respectively:

$$P_0(x) = \begin{cases} P(x), & x \text{ even} \\ 0, & x \text{ odd} \end{cases} \qquad P_1(x) = \begin{cases} 0, & x \text{ even} \\ P(x), & x \text{ odd} \end{cases} \tag{1}$$

As we can see, (1) implies that $P(x) = P_0(x) + P_1(x)$ and thus

$$G(x) = G_0(x) + G_1(x). \tag{2}$$

Let $h$ be a linear filter that meets the demand

$$G(x) = (h * G)(x). \tag{3}$$

With (2) and (3) we get to

$$G(x) = \begin{cases} G_0(x), & x \text{ even} \\ (h_0 * G_1)(x) + (h_1 * G_0)(x), & x \text{ odd} \end{cases} \tag{4}$$

where $h_0$ and $h_1$ denote the even and odd sampling of $h$, respectively. Furthermore, let

$$(h_0 * G_1)(x) \approx (h_0 * R_1)(x) \tag{5}$$

be the second demand on the filter $h$. By virtue of (5), the unknown value $G_1(x)$ in (4) can be replaced with the known signal $R_1(x)$:

$$G(x) = \begin{cases} G_0(x), & x \text{ even} \\ (h_0 * R_1)(x) + (h_1 * G_0)(x), & x \text{ odd} \end{cases} \tag{6}$$

After combining the even and odd cases of (6), with

$$G(x) = G_0(x) + (h_0 * R_1)(x) + (h_1 * G_0)(x), \tag{7}$$

the final equation for the horizontal interpolation of any green-red row is derived. Proceed likewise to obtain the three remaining interpolations. The filterbank design of (6) implies alias cancellation: the aliasing terms in $G_0$ are removed by the ones in $R_1$.
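As a concrete illustration of (7), here is a sketch of the horizontal green interpolation on a single green-red row, using the 5-tap filter $h = [-\frac{1}{4}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, -\frac{1}{4}]$ that Section 9.1 reports for the implementation. The even-green/odd-red layout and the clamped borders are simplifying assumptions of this sketch.

#include <cstddef>
#include <vector>

// Horizontal interpolation of the green channel on one green-red row,
// following (7). Assumption: even indices carry green samples, odd
// indices carry red samples; borders are handled by index clamping.
std::vector<double> interpolateGreenRow(const std::vector<double>& row) {
    static const double h[5] = {-0.25, 0.5, 0.5, 0.5, -0.25};
    const std::ptrdiff_t n = static_cast<std::ptrdiff_t>(row.size());
    std::vector<double> g(row.size());
    for (std::ptrdiff_t x = 0; x < n; ++x) {
        if (x % 2 == 0) {
            g[x] = row[x];  // G0: the green sample is already known
        } else {
            // At odd x, the 1/2 taps hit the green neighbors and the
            // -1/4, 1/2, -1/4 taps hit the red samples, so the estimate
            // is (G(x-1)+G(x+1))/2 + (2R(x)-R(x-2)-R(x+2))/4.
            double sum = 0.0;
            for (std::ptrdiff_t k = -2; k <= 2; ++k) {
                std::ptrdiff_t i = x + k;
                if (i < 0) i = 0;
                if (i >= n) i = n - 1;
                sum += h[k + 2] * row[i];
            }
            g[x] = sum;
        }
    }
    return g;
}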

4.1.2 Red (Blue) Image Interpolation

Let $R(\cdot)$, $G(\cdot)$ and $B(\cdot)$ now be defined on 2-D pixel positions. To calculate the red image $R$, use the known red values $R_S$ and either the horizontally or the vertically interpolated green image $G$. $G_S$ denotes the partial image of $G$ that contains the pixel positions of $R_S$; therefore $G_S \subset G$. The difference image $R - G$ is band-limited below the sampling rate [8]. Hence, this very difference image is reconstructed with

$$R - G = L * (R_S - G_S) \tag{8}$$

with the aid of a 2-D low-pass filter $L$. Use $G$ from (7) and solve (8) for $R$ to obtain

$$R = L * (R_S - G_S) + G. \tag{9}$$

Proceed likewise to generate the horizontal blue image, and finally, using the vertically interpolated green image, generate the vertical red and blue images. Please note that the result of using the horizontally interpolated image will differ from that of the vertically interpolated one. After having developed all six images, we simply combine the three images interpolated in the same direction into a new image $f_H$ and $f_V$, respectively.
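The following sketch renders (9), under the assumption that the red samples sit on a lattice with spacing two in both directions; the bilinear kernel $L$ is the one given as (20) in Section 9.1. Convolving the sparse difference image with this kernel reproduces the known differences and bilinearly fills in the missing ones, since the contributing weights always sum to one on such a lattice.

#include <cstddef>
#include <vector>

using Image = std::vector<std::vector<double>>;

// Reconstructs the red image via (9): R = L * (R_S - G_S) + G.
// Assumptions: `red` holds the known red samples (zero elsewhere),
// `mask` is nonzero at red sample positions, and `green` is the
// directionally interpolated green image from (7).
Image reconstructRed(const Image& red, const Image& mask, const Image& green) {
    static const double L[3][3] = {{0.25, 0.5, 0.25},
                                   {0.5,  1.0, 0.5 },
                                   {0.25, 0.5, 0.25}};
    const std::ptrdiff_t h = green.size(), w = green[0].size();
    Image out(h, std::vector<double>(w, 0.0));
    for (std::ptrdiff_t y = 0; y < h; ++y)
        for (std::ptrdiff_t x = 0; x < w; ++x) {
            double acc = 0.0;   // (L * (R_S - G_S)) at (y, x)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    std::ptrdiff_t yy = y + dy, xx = x + dx;
                    if (yy < 0 || yy >= h || xx < 0 || xx >= w) continue;
                    if (mask[yy][xx] == 0.0) continue;  // known samples only
                    acc += L[dy + 1][dx + 1] * (red[yy][xx] - green[yy][xx]);
                }
            out[y][x] = acc + green[y][x];  // add G back, cf. (9)
        }
    return out;
}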

4.2 Misguidance Color Artifacts

With the images $f_H$ and $f_V$ from Section 4.1, there are two choices for every pixel in the final image $f$. It seems wise to choose the pixel that misbehaves least, measuring misbehavior in terms of homogeneity regarding certain aspects. Assume without loss of generality that the image is designed for the human eye. Therefore, we convert the pixels from RGB into LAB using the mapping function $\pi : Y \to \hat{Y}$, $\pi([R, G, B]^T) = [L, a, b]^T$. Let $y_1, y_2 \in Y$, $\hat{y}_1 = \pi(y_1)$, $\hat{y}_2 = \pi(y_2)$, where $\hat{y}_i = [\hat{y}_{iL}, \hat{y}_{ia}, \hat{y}_{ib}]^T$, $i \in \{1, 2\}$ [9].

With the function $d_X(x_1, x_2) = \sqrt{(x_{1x} - x_{2x})^2 + (x_{1y} - x_{2y})^2}$ we create a spatial distance set

$$B(x, \delta) = \{\, p \in X \mid d_X(x, p) \le \delta \,\} \tag{10}$$

for each $x \in X$, so that every $p \in B(x, \delta)$ has a spatial distance to $x$ of at most $\delta$. Likewise, we create a luminance distance set

$$L_f(x, \epsilon_L) = \{\, p \in X \mid d_L(f(x), f(p)) \le \epsilon_L \,\} \tag{11}$$

by using the function $d_L(y_1, y_2) = |\hat{y}_{1L} - \hat{y}_{2L}|$, and a color distance set

$$C_f(x, \epsilon_C) = \{\, p \in X \mid d_C(f(x), f(p)) \le \epsilon_C \,\} \tag{12}$$

that uses the function $d_C(y_1, y_2) = \sqrt{(\hat{y}_{1a} - \hat{y}_{2a})^2 + (\hat{y}_{1b} - \hat{y}_{2b})^2}$. Every element in (11) differs from $x$ by at most $\epsilon_L$ concerning luminance and, on the other hand, each element in (12) equals $x$, in respect of color, with a maximum deviation of $\epsilon_C$ (see Figure 21).

Figure 21: Illustration of $L_f(\cdot) \cap C_f(\cdot)$ with certain values $\epsilon_L$ and $\epsilon_C$ [1].

When combining (10), (11) and (12), we obtain a new set, the metric neighborhood,

$$U_f(x, \delta, \epsilon_L, \epsilon_C) = B(x, \delta) \cap L_f(x, \epsilon_L) \cap C_f(x, \epsilon_C) \tag{13}$$

that contains the pixels residing in all three distance sets according to the parameters $\delta, \epsilon_L, \epsilon_C \in \mathbb{R}$ (see Section 5.2). Define a homogeneity map

$$H_f(x, \delta, \epsilon_L, \epsilon_C) = \frac{|U_f(x, \delta, \epsilon_L, \epsilon_C)|}{|B(x, \delta)|} \tag{14}$$

where $|\cdot| : 2^X \to \mathbb{R}$ provides the size of a set. With (14) and the spatial averaging kernel

$$A = \begin{bmatrix} \frac{1}{9} & \cdots & \frac{1}{9} \\ \vdots & \ddots & \vdots \\ \frac{1}{9} & \cdots & \frac{1}{9} \end{bmatrix} \tag{15}$$

the final image $f$ will be generated using

$$f(x) = \begin{cases} f_H(x), & \text{if } (A * H_{f_H})(x, \cdot) \ge (A * H_{f_V})(x, \cdot) \\ f_V(x), & \text{otherwise} \end{cases} \tag{16}$$

as selector.
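The following sketch computes the homogeneity value (14) at one pixel and the selection (16), under simplifying assumptions: $\delta = 1$ (so the spatial ball is the center plus its four direct neighbors), fixed $\epsilon$ values passed in by the caller, and the LAB conversion done elsewhere. The types and helper names are this paper's own, not Hirakawa and Parks' code.

#include <cmath>
#include <cstddef>
#include <vector>

struct Lab { double L, a, b; };   // one pixel in LAB space
using LabImage = std::vector<std::vector<Lab>>;
using Map = std::vector<std::vector<double>>;

// Homogeneity (14) at (y, x) with delta = 1: |U| / |B|, where B is the
// center plus its 4-neighbors (the only pixels with spatial distance <= 1).
double homogeneity(const LabImage& f, std::ptrdiff_t y, std::ptrdiff_t x,
                   double epsL, double epsC) {
    static const int off[5][2] = {{0,0},{-1,0},{1,0},{0,-1},{0,1}};
    const std::ptrdiff_t h = f.size(), w = f[0].size();
    int ball = 0, in = 0;
    for (const auto& o : off) {
        std::ptrdiff_t yy = y + o[0], xx = x + o[1];
        if (yy < 0 || yy >= h || xx < 0 || xx >= w) continue;
        ++ball;
        double dL = std::fabs(f[y][x].L - f[yy][xx].L);          // cf. (11)
        double dC = std::hypot(f[y][x].a - f[yy][xx].a,
                               f[y][x].b - f[yy][xx].b);         // cf. (12)
        if (dL <= epsL && dC <= epsC) ++in;
    }
    return static_cast<double>(in) / ball;
}

// Mean over the 3x3 window, i.e. the convolution with A from (15).
double boxMean(const Map& H, std::ptrdiff_t y, std::ptrdiff_t x) {
    const std::ptrdiff_t h = H.size(), w = H[0].size();
    double sum = 0.0; int n = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            std::ptrdiff_t yy = y + dy, xx = x + dx;
            if (yy < 0 || yy >= h || xx < 0 || xx >= w) continue;
            sum += H[yy][xx]; ++n;
        }
    return sum / n;
}

// Selector (16): take the horizontal candidate where its smoothed
// homogeneity is at least as large as the vertical one.
bool pickHorizontal(const Map& HH, const Map& HV,
                    std::ptrdiff_t y, std::ptrdiff_t x) {
    return boxMean(HH, y, x) >= boxMean(HV, y, x);
}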

4.3 Interpolation Artifacts

The result of the interpolation using (7) and (9) may, even if the directional selector is regarded as perfect, show color artifacts. Hirakawa and Parks refer to this phenomenon as interpolation artifacts; they associate it with limitations in the interpolation. A common approach to reducing these interpolation artifacts is, in pseudocode according to [10], as follows:

FOR m times
    R = median(R − G) + G
    B = median(B − G) + G
    G = (median(G − R) + R + median(G − B) + B) / 2
ENDFOR

Here, median(·) denotes the median filter operator. The procedure follows the assumption proposed in Section 4.1 that the difference images R − G and B − G are approximately constant over small regions [8]. The iterations suppress small variations in color while preserving the edges.
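A direct rendering of the pseudocode above as a sketch: median3x3 is a plain 3 × 3 median with clamped borders, and the update order (R and B first, then G from the updated channels) follows the listing. The image representation is this paper's own convention.

#include <algorithm>
#include <cstddef>
#include <vector>

using Image = std::vector<std::vector<double>>;

// Elementwise a - b and a + b.
static Image sub(const Image& a, const Image& b) {
    Image r = a;
    for (std::size_t y = 0; y < r.size(); ++y)
        for (std::size_t x = 0; x < r[y].size(); ++x) r[y][x] -= b[y][x];
    return r;
}
static Image add(const Image& a, const Image& b) {
    Image r = a;
    for (std::size_t y = 0; y < r.size(); ++y)
        for (std::size_t x = 0; x < r[y].size(); ++x) r[y][x] += b[y][x];
    return r;
}

// 3x3 median filter with clamped borders.
static Image median3x3(const Image& in) {
    const std::ptrdiff_t h = in.size(), w = in[0].size();
    Image out(h, std::vector<double>(w));
    for (std::ptrdiff_t y = 0; y < h; ++y)
        for (std::ptrdiff_t x = 0; x < w; ++x) {
            double win[9]; int n = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    std::ptrdiff_t yy = std::clamp<std::ptrdiff_t>(y + dy, 0, h - 1);
                    std::ptrdiff_t xx = std::clamp<std::ptrdiff_t>(x + dx, 0, w - 1);
                    win[n++] = in[yy][xx];
                }
            std::nth_element(win, win + 4, win + 9);
            out[y][x] = win[4];   // the median of the nine window values
        }
    return out;
}

// m iterations of the artifact-reduction loop from the pseudocode.
void reduceInterpolationArtifacts(Image& R, Image& G, Image& B, int m) {
    for (int i = 0; i < m; ++i) {
        R = add(median3x3(sub(R, G)), G);        // R = median(R - G) + G
        B = add(median3x3(sub(B, G)), G);        // B = median(B - G) + G
        Image gr = add(median3x3(sub(G, R)), R); // median(G - R) + R
        Image gb = add(median3x3(sub(G, B)), B); // median(G - B) + B
        for (std::size_t y = 0; y < G.size(); ++y)  // G = (gr + gb) / 2
            for (std::size_t x = 0; x < G[y].size(); ++x)
                G[y][x] = 0.5 * (gr[y][x] + gb[y][x]);
    }
}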

5 The Algorithm

Taking the three distinct types of artifacts into account, the algorithm, shown in Section 5.1, is simply the combination of the procedures developed above. In order to give the algorithm some kind of adaptive behavior (see Section 5.2), it is granted some latitude: only some of the required parameters are fixed, and the algorithm determines the remaining ones itself.

5.1 Homogeneity-Directed Demosaicing Algorithm

Having regard to the above-listed considerations, the attained algorithm is as follows:

1. Using (7) and (9), respectively, generate the interpolated images $f_H$ and $f_V$.

2. Determine the homogeneity map $H_f$ by applying (14).

3. By dint of (16), merge $f_H$ and $f_V$ into a new image $f$.

4. Reduce the interpolation artifacts of $f$ by using the iteration from Section 4.3.

5.2 Adaptive Parametrization

The homogeneity map, given in Section 4.2, uses certain parameters: $\delta$ for the spatial distance function, $\epsilon_L$ for the luminance distance function and $\epsilon_C$ for the color distance function. Hirakawa and Parks use $\delta = 1$ while letting the algorithm derive $\epsilon_L$ and $\epsilon_C$ for each $x \in X$. By means of $\epsilon_L$ we will see how this works.

If the orientation of the object boundary at $x$ is horizontal, we want the adjacent pixels directly to the left and to the right of $x$ to belong to the metric neighborhood (13) and thus, we want $f(x)$ to be chosen from $f_H$. We define

$$\epsilon_{L_H}(x) = \max_{x_0 \in \{x - [1,0]^T,\, x + [1,0]^T\}} d_L(f_H(x), f_H(x_0)). \tag{17}$$

Likewise, we define

$$\epsilon_{L_V}(x) = \max_{x_0 \in \{x - [0,1]^T,\, x + [0,1]^T\}} d_L(f_V(x), f_V(x_0)) \tag{18}$$

for a vertically oriented boundary at $x$. In order that the nearest pixel, in terms of luminance, will be chosen, we set the respective parameter to the minimum of (17) and (18):

$$\epsilon_L = \min\{\epsilon_{L_H}, \epsilon_{L_V}\}. \tag{19}$$

The color distance parameter $\epsilon_C$ can be derived in a similar way. For all pixels $x$, the homogeneity map will be computed using these adaptive parameters.
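A minimal sketch of the adaptive choice of $\epsilon_L$ per (17) to (19), for an interior pixel; the Lab/LabImage types are the same assumed convention as in the homogeneity sketch of Section 4.2, and border handling is left to the caller.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Lab { double L, a, b; };
using LabImage = std::vector<std::vector<Lab>>;

// Adaptive luminance parameter (19) at an interior pixel (y, x).
double adaptiveEpsL(const LabImage& fH, const LabImage& fV,
                    std::size_t y, std::size_t x) {
    // (17): luminance distance to the left/right neighbors in f_H
    double eH = std::max(std::fabs(fH[y][x].L - fH[y][x - 1].L),
                         std::fabs(fH[y][x].L - fH[y][x + 1].L));
    // (18): luminance distance to the upper/lower neighbors in f_V
    double eV = std::max(std::fabs(fV[y][x].L - fV[y - 1][x].L),
                         std::fabs(fV[y][x].L - fV[y + 1][x].L));
    return std::min(eH, eV);   // (19): keep the nearest pixel reachable
}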

6 Currently Used Demosaicing Techniques

To get hold of the demosaicing techniques currently in use, the author of this paper tried to get in touch with half a dozen producers of digital cameras, unfortunately in vain. Only two inquiries were responded to. Panasonic Deutschland replied that they were unaware of the full particulars of the software used in their products, since it is externally procured. The answer from Pentax Europe GmbH said that even the names of the applied techniques are regarded as a kind of corporate secret, so that most likely no company at all could, or rather would, be of any help. The latter statement was reinforced during personal communication with an acquainted freelancer of Canon Deutschland GmbH.

7 Use in Medical Image Processing

This section provides an overview of medical applications that are confronted with demosaicing, ranging from digital photography for medicinal purposes to domains of microbiology and cytology. Even though digital imaging plays a remarkable role in medicine, demosaicing is rarely a problem there. Altogether, any prevalent up-to-date demosaicing technique meets all requirements; but of course there are obvious exceptions.

7.1 Digital Medical Photography

Figure 22: Digital medical photographs [11].

With the widespread availability of computers and the increasing connectivity to the Internet, digital photography (see Figure 22) has become a powerful tool for physicians, surpassing the uses of conventional photography and opening new applications which would not have been possible with traditional photography on film [12]. Important advantages are, to give but a few, quickly transmitting images anywhere over (internal and external) networks, archiving a huge amount of information in marginal space, and retrieving perfect originals with a few keystrokes. The switch to digital imaging also carries crucial implications regarding time and money. In addition, digital examinations are inherently fast and turn out to be much more bearable.

Apart from the significance of digital photography in general, the role of demosaicing with respect to medical image processing is not that important here. A person viewing a medical photograph normally knows about the nature and the specifics of the recorded matter. Thus, even the fact that, for example, every five-hundredth pixel shows an abnormal color should not alienate any spectator. Photographs taken using an average demosaicing technique altogether suffice for the main fields of application, such as the comparison of certain aspects before and after a medical procedure, or the long-term observation of a rash or a burn.

Figure 23: Images obtained by a digital microscope [13].

7.2 Digital Microscopy

As mentioned before, not only digital still cameras but also the bigger part of digital microscopes is afflicted with the problems of demosaicing. Similar to digital medical photographs, the spectator generally knows the specifics of the observed objects and thus should not care about small incorrectnesses within the obtained images (see Figure 23). If even these negligible artifacts should interfere in some way, a wrong setting may be the cause.

7.3 Digital Fluorescence Microscopy

Figure 24: Images obtained by a digital fluorescence microscope [13].

During personal communication with a PhD student at the university hospital of Aachen, I gained knowledge of endocytosis. Research regarding this process includes comparing and assessing images, with the emphasis on colors (see Figure 24), obtained by a digital fluorescence microscope. In general, this type of microscope, too, has only one CCD or CMOS sensor using a mosaic filter. Here, a proper design of the applied demosaicing algorithm is indispensable.

8 Conclusion

In this paper the problem of demosaicing was introduced and discussed. The necessary basics were outlined beforehand; however, the main content was a specific demosaicing algorithm, developed by Hirakawa and Parks and presented in their paper Adaptive homogeneity-directed demosaicing algorithm [1], [2]. In this case, too, there is not the one course of action; various ways of demosaicing emphasize different aspects. Thus, the appropriate method arises from the circumstances.

9 Implementing the Algorithm

9.1 An Implementation

In order to gain practical experience, the author of this paper started implementing the algorithm himself. The program (named Ed – Easy demosaicing) is able to load and, if necessary, apply the Bayer filter to any image whose format is supported by SDL (Simple DirectMedia Layer). Various, in part modifiable, interpolation types, for example simple interpolation, weighted interpolation and horizontal interpolation, are ready to use. The filters can be applied simultaneously. They can also be switched from non-additive (standard) to additive mode, in order to apply more than one filter to one and the same region or, in particular, the whole image. The modifiable integrated zoom gives the user a better view of the subject. In addition, the user is able to save the created image (BMP format) at any time. Since the implementation of the algorithm is not as simple as it appears at first glance, it is still in progress. Up to now, both interpolations of the green image are generated correctly; the red and blue image interpolations, however, are incorrect.

Similar to the implementation of Hirakawa and Parks, the linear filter $h = \begin{bmatrix} -\frac{1}{4} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & -\frac{1}{4} \end{bmatrix}$, used for the interpolation of the green image, and $\delta = 1$ have been employed, while (20) serves as bilinear interpolator for the red and blue image interpolation:

$$L = \begin{bmatrix} \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\ \frac{1}{2} & 1 & \frac{1}{2} \\ \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \end{bmatrix} \tag{20}$$

To get the sources (C++), just write an e-mail to [email protected].

9.2 Computed Results

Figure 25: Raw image (Bayer filter applied). Figure 26: Red samples.

Figure 27: Green samples. Figure 28: Blue samples.

Figure 29: Horizontal green interpolation. Figure 30: Vertically interpolated green image.

Figure 25 shows an image of the UKAachen-Pferd obtained by a digital camera applying the Bayer filter. In Figures 26, 27 and 28 we see the sampled image for red, green and blue, respectively. The horizontal and vertical interpolations of the green image are depicted in Figures 29 and 30.

References

[1] Hirakawa K, Parks TW. Adaptive homogeneity-directed demosaicing algorithm. IEEE Trans Image Processing 2005;14(3):360-9.
[2] Hirakawa K, Parks TW. Adaptive homogeneity-directed demosaicing algorithm. Proc IEEE Int Conf Image Processing 2003;3:669-72.
[3] Bayer BE. Color imaging array. United States Patent 3 971 065, 1976.
[4] Sony Corporation. Realization of natural color reproduction in Digital Still Cameras, closer to the natural sight perception of the human eye. [Online]. 2003 [cited 2007 May 23]; Available from: URL:http://www.sony.net/SonyInfo/News/Press_Archive/200307/03-029E/
[5] Eastman Kodak Company. New KODAK Image Sensor Technology Redefines Digital Image Capture. [Online]. 2007 [cited 2007 July 16]; Available from: URL:http://www.kodak.com/eknec/PageQuerier.jhtml?pq-path=2709&pq-locale=en_US&gpcid=0900688a80720f9d
[6] Foveon. X3 Technology – Direct Image Sensors. [Online]. 2007 [cited 2007 June 20]; Available from: URL:http://www.foveon.com/article.php?a=67
[7] Duncan DB, Duncan JG, Leeson GJ. Apparatus for forming a plurality of subimages having different characteristics. United States Patent 6 215 597, 2001.
[8] Kimmel R. Demosaicing: Image Reconstruction from Color CCD Samples. IEEE Trans Image Processing 1999;8(9):1221-8.
[9] Central Bureau CIE. Colorimetry. CIE Pub 15(2).
[10] Dalbey J. Pseudocode Standard. [Online]. 2003 [cited 2007 May 27]; Available from: URL:http://www.csc.calpoly.edu/~jdalbey/SWE/pdl_std.html
[11] San Francisco General Hospital. Educational Clinical Images. [Online]. 2007; Available from: URL:http://sfghed.ucsf.edu/Education/ClinicImages/clinical_photos.htm
[12] Prasad S, Roy B. Digital Photography in Medicine. J Postgrad Med 2003;49(4):332-6.
[13] Florida State University. Nikon MicroscopyU – The Source for Microscopy Education. [Online]. 2007; Available from: URL:http://www.microscopyu.com
