pco.knowledge base PIXEL SIZE & SENSITIVITY

ARE LARGER PIXELS ALWAYS MORE SENSITIVE?

There is a common myth that larger pixel size image sensors are always more sensitive than smaller pixel size image sensors. This isn't always the case, though. To explain why this is more a myth than a fact, it is a good idea to look at how the pixel size of an image sensor has an impact on the image quality, especially on the overall sensitivity, which is determined by the quantum efficiency.

Quantum efficiency

The "quantum efficiency" (QE) of a photo detector or image sensor describes the ratio of incident photons to converted charge carriers which are read out as a signal from the device. In a CCD, CMOS, or sCMOS camera, it denotes how efficiently the camera converts light into electric charges, which makes it a very good parameter to compare the sensitivity of such systems. When light or photons fall onto a semiconductor - such as silicon - there are several loss mechanisms.

Figure 1: Semiconductor material with the thickness, d, and a light sensitive area or volume which converts photons into charge carriers. The graph shows what happens with the photon flux across the light penetration into the material¹.

Figure 1 shows the photons impinging on a semiconductor with a light sensitive area. The figure also shows a curve which represents the photon flux when the light hits the semiconductor and partially travels through it. As the orange curve of the photon flux shows, a part of it is lost at the surface via reflection; therefore a proper anti-reflective coating has a high impact on this loss contribution. Following this, a part of the photon flux is converted into charge carriers in the light sensitive part of the semiconductor, and a remaining part is transmitted. Using this illustration, the quantum efficiency can be defined as:

QE = (1 - R) ∙ ζ ∙ (1 - e^(-α∙d))

with:

(1 - R)          Reflection at the surface, which can be minimized by appropriate coatings
ζ                Part of the electron-hole pairs (charge carriers) which contribute to the photo current and which did not recombine at the surface
(1 - e^(-α∙d))   Part of the photon flux which is absorbed in the semiconductor; therefore the thickness d should be sufficiently large to increase that part

This is illustrated in figure 2, which shows the spectrally dependent absorption coefficient (see fig. 2, blue curve) of silicon as used in solar cells. The second curve (see fig. 2, red curve) depicts the penetration depth of light in silicon and is the inverse of the blue curve. In this scenario, it is very likely that the material used had no AR-coating. The paper by Green and Keevers² also measured the spectrally dependent reflectivity of silicon. This is shown in figure 3. The curve represents the factor R in the above equation for the quantum efficiency.

Figure 2: Measured absorption coefficients and penetration depth of silicon as material used in solar cells².

Figure 3: Measured reflectivity of silicon as material used in solar cells³.

Due to the different absorption characteristics of silicon, as the basic material for these image sensors, and due to the different structures of each image sensor, the QE is spectrally dependent. For camera systems, it is usually the QE of the whole camera system which is given. This includes non-material-related losses like fill factor and reflection of windows and cover glasses. In data sheets, this parameter is given as a percentage, such that a QE of 50 % means that on average two photons are required to generate one charge carrier (electron).

Pixel size & fill factor

The fill factor (see figure 4) of a pixel describes the ratio of light sensitive area versus total area of a pixel, since a part of the area of an image sensor pixel is always used for transistors, wiring, capacitors, or registers, which belong to the structure of the pixel of the corresponding image sensor (CCD, CMOS, sCMOS). Only the light sensitive part contributes to the light to charge carrier conversion which is detected.

Figure 4: Pixels with different fill factors (blue area corresponds to light sensitive area): [a] pixel with 75 % fill factor, [b] pixel with 50 % fill factor and [c] pixel with 50 % fill factor plus micro lens on top.

In case the fill factor is too small⁴, it is usually improved with the addition of micro lenses. The lens collects the light impinging onto the pixel and focuses the light to the light sensitive area of the pixel (see figure 5).

Figure 5: Cross sectional view of pixels with different fill factors (blue area corresponds to light sensitive area) and the light rays which impinge on them (orange arrows). Dashed light rays indicate that this light does not contribute to the signal: [a] pixel with 75 % fill factor, [b] pixel with 50 % fill factor and [c] pixel with 50 % fill factor plus micro lens on top.

Fill factor        | Light sensitive area  | Quantum efficiency | Signal
large              | large                 | high               | large
small              | small                 | small              | small
small + micro lens | larger effective area | high               | large

Table 1: Correspondences & relations

Although the application of micro lenses is always beneficial for pixels with fill factors below 100 %, there are some physical and technological limitations to consider (see Table 2).

Pixel size & optical imaging

Figure 6 demonstrates optical imaging with a simple optical system based on a thin lens. In this situation, the Newtonian imaging equation is valid:

x0 ∙ xi = f²
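The QE formula from the quantum efficiency section above can be evaluated numerically. The following is a minimal sketch with a hypothetical helper function; the values for R, ζ, α, and d are assumed for illustration only and are not measured silicon data:

```python
import math

def quantum_efficiency(R, zeta, alpha_per_um, d_um):
    """QE = (1 - R) * zeta * (1 - exp(-alpha * d)).

    R            reflectivity at the surface (0..1)
    zeta         fraction of charge carriers that did not recombine (0..1)
    alpha_per_um absorption coefficient in 1/um
    d_um         thickness of the light sensitive volume in um
    """
    return (1.0 - R) * zeta * (1.0 - math.exp(-alpha_per_um * d_um))

# Assumed example: R = 0.3 (no AR coating), zeta = 0.9,
# alpha = 0.1/um and d = 10 um, i.e. an absorbed fraction of 1 - e^-1.
qe = quantum_efficiency(R=0.3, zeta=0.9, alpha_per_um=0.1, d_um=10.0)
print(f"QE = {qe:.3f}")
```

The three factors multiply, so a poor AR coating (large R) caps the QE no matter how thick the light sensitive volume is.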

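The Newtonian imaging equation x0 ∙ xi = f² above can be cross-checked against the standard thin lens form. A short sketch; the focal length and object distance are assumed example values:

```python
# Newtonian imaging equation: x0 * xi = f^2, with x0 and xi measured
# from the focal points Fo and Fi. Example values are assumed.
f = 50.0          # focal length in mm
x0 = 450.0        # object distance from the front focal point in mm
xi = f**2 / x0    # image distance from the back focal point

# Cross-check with the equivalent thin lens form 1/a + 1/b = 1/f,
# where a = f + x0 and b = f + xi are measured from the lens itself.
a, b = f + x0, f + xi
assert abs(1.0 / a + 1.0 / b - 1.0 / f) < 1e-12

print(f"xi = {xi:.3f} mm")
```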

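The relations of Table 1 can be made concrete with a toy signal model. This is a sketch only, with a hypothetical helper and assumed numbers: the signal is taken as proportional to pixel area times the effective light sensitive fraction, and a micro lens raises that effective fraction (capped at 100 %):

```python
def relative_signal(pixel_pitch_um, fill_factor, micro_lens_gain=1.0):
    """Toy model: signal ~ pixel area * effective fill factor.

    micro_lens_gain is the assumed factor by which a micro lens increases
    the effective light collecting fraction, capped at 1.0 (100 %).
    """
    effective_ff = min(fill_factor * micro_lens_gain, 1.0)
    return pixel_pitch_um**2 * effective_ff

base = relative_signal(10.0, fill_factor=0.5)                     # 50 % fill factor
with_lens = relative_signal(10.0, fill_factor=0.5, micro_lens_gain=1.8)
print(f"{with_lens / base:.2f}")  # micro lens recovers most of the lost light
```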

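The data sheet interpretation stated above (a QE of 50 % means on average two photons per generated electron) is simply the reciprocal of the QE; a one-line sketch with a hypothetical helper:

```python
def photons_per_electron(qe_percent):
    """Average number of incident photons per generated charge carrier."""
    return 100.0 / qe_percent

print(photons_per_electron(50))   # -> 2.0: a QE of 50 % needs two photons on average
```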
Figure 6: Optical imaging with a simple optical system based on a thin lens and characterized by some geometrical parameters: f - focal length of lens, Fo - focal point of lens on object side, Fi - focal point of lens on image side, x0 - distance between Fo and object = object distance, xi - distance between Fi and image, Y0 - object size, Yi - image size.

Or the Gaussian lens equation:

1 / (f + x0) + 1 / (f + xi) = 1 / f

and the magnification, M, is given by the ratio of the image size Yi to the object size Y0:

M = Yi / Y0 = f / x0 = xi / f

Introducing the f-number f/# of an optical system as the ratio of the focal length f and the diameter of the lens aperture D:

f/# = f / D

Pixel size & depth of focus / depth of field

The image equation (chapter 2) determines the relation between object and image distances. If the image plane is slightly shifted or the object is closer to the lens system, the image is not rendered useless. Rather, it gets more and more blurred the larger the deviation from the distances given by the image equation becomes. The concepts of depth of focus and depth of field are based on the fact that for any given application only a certain degree of sharpness is required. For digital image processing, this is naturally given by the size of the pixels of an image sensor: it makes no sense to resolve smaller structures, but a certain blurring can be allowed. The blurring can be described using the image of a point object, as illustrated in figure 6. At the image plane, the point object is imaged to a point. With increasing distance Δx3 from the image plane it smears to a disk with the radius ε (see figure 7), which can be expressed as:

ε = Δx3 / (f/# ∙ (1 + d'/f))

where Δx3 is the distance from the (focused) image plane and d' is the image distance measured from the focal point Fi, so that d'/f = |M| in focus. The range of positions of the image plane, [d' - Δx3, d' + Δx3], for which the radius of the blur disk is lower than ε, is known as the depth of focus. The above equation can be solved for Δx3 and yields:

Δx3 = f/# ∙ (1 + d'/f) ∙ ε = f/# ∙ ε ∙ (1 + |M|)

where M is the magnification from chapter 2. This equation shows the critical role of the f-number for the depth of focus.

Figure 7: Illustration of the concept of depth of focus, which is the range of image distances in which the blurring of the image remains below a certain threshold.

Of even more importance for practical usage than the depth of focus is the depth of field. The depth of field is the range of object positions for which the radius of the blur disk remains below a threshold ε at a fixed image plane:

ε = - f² ∙ Δx3 / (f/# ∙ d0 ∙ (d0 + Δx3) ∙ (1 + |M|))

where d0 is the object distance measured from the focal point Fo and Δx3 is here the shift of the object. In the limit of |Δx3| ≪ d0 we obtain:

|Δx3| ≈ f/# ∙ ε ∙ (1 + |M|) / M²

If the depth of field includes infinite distance, the minimum object distance of the depth of field is given by:

d_min ≈ f² / (2 ∙ f/# ∙ ε)

Generally, the whole concept of depth of field and depth of focus is only valid for a perfect optical system. If the optical system shows any aberrations, the depth of field can only be used for blurring significantly larger than that caused by the aberrations of the system.

Figure 8: Illustration of the concept of depth of field. This gives the range of distances of the object in which the blurring of its image at a fixed image distance remains below a given threshold.

Size limitation for micro lenses: Although micro lenses up to 20 µm pixel pitch can be manufactured, their efficiency gradually decreases above 6-8 µm pixel pitch. The increased stack height, which is proportional to the covered area, is not favorable as well⁵.

Front illuminated CMOS pixels have limited fill factor: Due to semiconductor processing requirements for front-illuminated CMOS image sensors, there must always be a certain pixel area covered with metal, which reduces the maximum available fill factor.

Large light sensitive areas: Large areas are difficult to realize because the diffusion length (and therefore the probability for recombination) increases, although image sensors for special applications have been realized with 100 µm pixel pitch.

Table 2: Some limitations

Pixel size & comparison total area / resolution

If the influence of the pixel size on the sensitivity, dynamic range, and image quality of a camera should be investigated, there are various parameters that could be changed or kept constant. In this section, we'll follow different pathways to try to answer this question.

Constant area

constant => image circle, aperture, focal length, object distance & irradiance
variable => resolution & pixel size

For ease of comparison, we shall assume square shaped image sensors. These are fit into the image circle or imaging area of a lens. For comparison, a homogeneous illumination is assumed. Therefore, the radiant flux per unit area, also called irradiance E [W/m²], is constant over the area of the image circle. Let us assume that the left image sensor (fig. 9, sensor [a]) has smaller pixels but higher resolution, e.g. 2000 x 2000 pixels at 10 µm pixel pitch, while the right image sensor (fig. 9, sensor [b]) has 1000 x 1000 pixels at 20 µm pixel pitch.

Figure 9: Illustration of two square shaped image sensors within the image circle of the same lens, which generates the image of a tree on the image sensors. Image sensor [a] has pixels a quarter the area of the pixels of sensor [b].

The question would be: which sensor has the brighter signal, and which sensor has the better signal-to-noise ratio (SNR)? To answer this question, it is possible to either look at a single pixel, which neglects the different resolution, or to compare the same resolution with the same lens, which corresponds to comparing a single large pixel with 4 small pixels. Generally, it is also assumed that both image sensors have the same fill factor of their pixels. The small pixel measures the signal, m, which has its own readout noise, r0, and therefore a signal-to-noise ratio, s.

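Taking the depth-of-focus and depth-of-field relations of the previous chapter at face value (Δx3 = f/# ∙ ε ∙ (1 + |M|) on the image side, |Δx3| ≈ f/# ∙ ε ∙ (1 + |M|) / M² on the object side, and d_min ≈ f² / (2 ∙ f/# ∙ ε)), a short numeric sketch; all values are assumed for illustration:

```python
# Assumed example: a 50 mm lens at f/8, blur disk radius epsilon equal
# to a 5 um pixel pitch, magnification M = 0.1.
f_mm = 50.0
f_number = 8.0
eps_mm = 0.005     # 5 um blur disk radius in mm
M = 0.1

depth_of_focus = f_number * eps_mm * (1 + abs(M))           # image side shift
depth_of_field = f_number * eps_mm * (1 + abs(M)) / M**2    # object side shift
d_min_mm = f_mm**2 / (2 * f_number * eps_mm)                # DOF reaching infinity

print(f"depth of focus = {depth_of_focus:.4f} mm")
print(f"depth of field = {depth_of_field:.1f} mm")
print(f"d_min          = {d_min_mm:.0f} mm")
```

Note how the 1/M² factor makes the tolerable object-side shift much larger than the image-side shift at small magnifications.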

The SNR, s, can be determined for two important imaging situations: low light (s0), where the readout noise is dominant, and bright light (s1), where the photon noise is dominant.

Pixel type     | Signal | Readout noise | SNR low light | SNR bright light
Small pixel    | m      | r0            | s0            | s1
Large pixel    | 4 x m  | > r0          | > s0          | > s1
4 small pixels | 4 x m  | 2 x r0        | 2 x s0        | 2 x s1

Table 3: Consideration on signal and SNR for different pixel sizes, same total area

Still, the proportionality of SNR to pixel area at a constant irradiance is valid, meaning the larger the pixel size and therefore the area, the better the SNR will be. However, this ultimately means that one pixel with a total area that fits into the image circle has the best SNR:

Figure 10: Illustration of three resulting images which were recorded at different resolutions: a) high resolution, b) low resolution and c) 1 pixel resolution.

Assume three image sensors with the same total area, which all fit into the image circle of a lens, but all three have different resolutions. As seen in figure 10, a) shows the clearest image but has the worst SNR per pixel. In figure 10 b), the SNR per pixel is better, but due to the smaller resolution, the image quality is worse compared to a). Finally, figure 10 c) with a super large single pixel shows the maximum SNR per pixel, but unfortunately the image content is lost.

Constant resolution

constant => aperture & object distance
variable => pixel size, focal length, area & irradiance

Another possibility is to compare sensors whose number of pixels is the same at the same object distance. To test this, it would be necessary to change the focal length of the lens in front of the image sensor with the small pixels, since the information (energy), which before was spread over a larger image area, now has to be "concentrated" onto a smaller area (see figures 11 and 12).

Figure 11: Illustration of two square shaped image sensors within the image circle of the same lens type (e.g. F-mount). The tree is imaged to the same number of pixels, which requires lenses with different focal lengths. Image sensor [a] has pixels with 4 times the area of the pixels of sensor [b].

If, for example, the resolution of the image sensor is 1000 x 1000 pixels, and the sensor with the smaller pixels has a pixel pitch of half the dimension of the larger pixel sensor, the image circle diameter d_old of the larger pixel sensor amounts to d_old = √2 ∙ c, with c = width or height of the image sensor. Since the smaller pixel sensor has half the pitch, it also has half the width and height and half the diagonal of the "old" image circle, which gives:

d_new = (√2 / 2) ∙ c

To get an idea of the required focal length, it is possible to determine the new magnification M_new and subsequently the required focal length f_new of the lens (see chapter 2 and figure 9), since the object size, Y0, remains the same:

M_new = |Y_i,new| / |Y0| = M_old / 2  =>  f_new = f_old / 2 = 100 mm / 2 = 50 mm

Since the photon flux, Φ0, coming from the object or a light source is constant due to the same aperture of the lens (NOT the same f-number!), the new lens with the changed focal length spreads the same energy over a smaller area, which results in a higher irradiance. To get an idea of the new irradiance, we need to know the new area, A_new:

A_new / A_old = ((π/4) ∙ d_new²) / ((π/4) ∙ d_old²) = 1/4

With this new area it is possible to calculate how much higher the new irradiance, I_new, will be:

I_new = Φ0 / A_new = Φ0 / (A_old / 4) = 4 ∙ Φ0 / A_old = 4 ∙ I_old

Again the question would be: which sensor has the brighter signal, and which sensor has the better signal-to-noise ratio (SNR)? This time we look at the same number of pixels but with different size; due to the different lens at the same aperture, the irradiance for the smaller pixels is higher. Still, it is assumed that both image sensors have the same fill factor of their pixels.

Pixel type  | Signal | Readout noise | SNR low light | SNR bright light
Small pixel | m      | r0            | s0            | s1
Large pixel | m      | > r0          | < s0          | s1

Table 4: Consideration on signal and SNR for different pixel sizes, same resolution

The reason for the discrepancy between the argumentation and the results lies in the definition of the f-number:

f/# = f / D

The lenses are constructed such that the same f-number always generates the same irradiance in the same image circle area. Therefore, the attempt to focus the energy on a smaller area for the comparison is not accomplished by keeping the f-number constant. It is the real diameter of the aperture, D, which has to be kept constant. Therefore, the new f-number would be, if the old f-number was f/# = 16 (see example in figure 13):

f/#_new = f_new / D = (f_new / f_old) ∙ f/#_old = (50 / 100) ∙ 16 = 8

Figure 12: Illustration of two imaging situations which are required for comparison: [1] the original lens, which images completely to the large pixel sensor - [2] the lens with the changed focal length, which images to the smaller pixel size sensor.

If now the argument would be that the larger f-number will cause a different depth of field, which will in turn change the sharpness of the image and therefore the quality, the equation from chapter 3 can be used to look at the consequences. From the above example we have the focal lengths f_old = 100 mm and f_new = 50 mm, and we are looking for the right f-number to have the same depth of field. Therefore:

f_old² / (2 ∙ f/#_old ∙ ε) = f_new² / (2 ∙ f/#_new ∙ ε)  <=>  f/#_new = (f_new² / f_old²) ∙ f/#_old = (50² / 100²) ∙ 16 = 4

Since we have calculated an f-number of 8 for the correct comparison, the depth of field for the image sensor with the smaller pixels is even better.

Figure 13: Images of the same test chart with the same resolution, same object distance, and same fill factor, but different pixel size and different lens settings (all images have the same scaling for display): [a] pixel size 14.8 µm x 14.8 µm, f = 100 mm, f-number = 16; [b] pixel size 7.4 µm x 7.4 µm, f = 50 mm, f-number = 16; [c] pixel size 7.4 µm x 7.4 µm, f = 50 mm, f-number = 8.
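The rows of Table 3 follow from simple noise bookkeeping. A sketch under an assumed photon-transfer model (photon shot noise equals the square root of the signal in electrons, readout noise adds in quadrature); `snr` is a hypothetical helper and the numbers are illustrative:

```python
import math

def snr(signal_e, readout_noise_e):
    """SNR with photon shot noise sqrt(signal) and readout noise in quadrature."""
    return signal_e / math.sqrt(signal_e + readout_noise_e**2)

r0 = 10.0                      # assumed readout noise (electrons) of a small pixel
for m in (10.0, 10000.0):      # low light vs bright light signal of a small pixel
    s_small  = snr(m, r0)             # one small pixel
    s_large  = snr(4 * m, r0)         # one large pixel: 4x signal, at best same r0
    s_binned = snr(4 * m, 2 * r0)     # 4 small pixels summed: readout noise -> 2*r0
    print(f"m={m:>8}: small={s_small:8.2f}  large={s_large:8.2f}  binned={s_binned:8.2f}")
```

Summing four small pixels doubles the SNR exactly in this model (signal x4, total noise x2), while one large pixel of the same area does slightly better still, matching the "> s0" and "> s1" entries of Table 3.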

…determined for two important imaging situations: low light (s0), where the readout noise is dominant, and bright light (s1), where the photon noise is dominant.

Pixel type      | Signal | Readout noise | SNR low light | SNR bright light
Small pixel     | m      | r0            | s0            | s1
Large pixel     | 4 x m  | > r0          | > s0          | > s1
4 small pixels  | 4 x m  | 2 x r0 (7)    | 2 x s0        | 2 x s1

Table 3: consideration on signal and SNR for different pixel sizes at the same total area

Still, the proportionality of SNR to pixel area at a constant irradiance is valid, meaning the larger the pixel size and therefore the area, the better the SNR will be. However, this ultimately means that one pixel with a total area that fits into the image circle has the best SNR.

Assume three image sensors with the same total area, which all fit into the image circle of a lens, but with three different resolutions. As seen in figure 10, a) shows the clearest image but has the worst SNR per pixel. In figure 10 b), the SNR per pixel is better, but due to the smaller resolution the image quality is worse compared to a). Finally, figure 10 c), with a single super large pixel, shows the maximum SNR per pixel, but unfortunately the image content is lost.

Figure 10: Illustration of three resulting images which were recorded at different resolutions: a) high resolution, b) low resolution and c) 1 pixel resolution

Constant Resolution
constant => aperture & object distance
variable => pixel size, focal length, area & irradiance

Another possibility is to compare whether the number of pixels should be the same at the same object distance. To test this, it would be necessary to change the focal length of the lens in front of the image sensor with the small pixels, since the information (energy), which before was spread over a larger image area, now has to be "concentrated" onto a smaller area (see figures 11 and 12).

Figure 11: Illustration of two square shaped image sensors within the image circle of the same lens type (e.g. f-mount). The tree is imaged to the same number of pixels, which requires lenses with different focal lengths: image sensor [a] has pixels with 4 times the area of the pixels of sensor [b].

Again the question is which sensor has the brighter signal and which sensor has the better signal-to-noise ratio (SNR)? This time we look at the same number of pixels but with different pixel sizes; due to the different lenses at the same aperture, the irradiance on the smaller pixels is higher. Still, it is assumed that both image sensors have the same fill factor.

If, for example, the resolution of the image sensor is 1000 x 1000 pixels, and the sensor with the smaller pixels has a pixel pitch of half the dimension of the larger pixel sensor, the image circle diameter d_old of the larger pixel sensor amounts to

    d_old = √2 · c

with c = width or height of the image sensor. Since the smaller pixel sensor has half the pitch, it also has half the width and height, and therefore half the diagonal of the "old" image circle, which gives:

    d_new = (1/2) · √2 · c

To get an idea of the required focal length, it is possible to determine the new magnification M_new and subsequently the required focal length f_new of the lens (see chapter 2 and figure 9), since the object size, Y0, remains the same:

    M_new = |Y'_new| / |Y0| = (1/2) · |Y'_old| / |Y0| = (1/2) · M_old

    f_new = (M_new / M_old) · f_old = (1/2) · f_old

Figure 12: Illustration of the two imaging situations which are required for the comparison: [1] the original lens, which images completely onto the large pixel sensor; [2] the lens with the changed focal length, which images onto the smaller pixel size sensor

Since the photon flux, Φ0, coming from the object or a light source is constant due to the same aperture of the lens (NOT the same f-number!), the new lens with the changed focal length spreads the same energy over a smaller area, which results in a higher irradiance. To get an idea of the new irradiance, we need to know the new area, A_new:

    A_new = (π/4) · d_new² = (π/8) · c² = (1/4) · (π/2) · c² = (1/4) · A_old

With this new area it is possible to calculate how much higher the new irradiance, I_new, will be:

    I_new = Φ0 / A_new = 4 · Φ0 / A_old = 4 · I_old

Pixel type   | Signal | Readout noise | SNR low light | SNR bright light
Small pixel  | m      | r0            | s0            | s1
Large pixel  | m      | > r0          | < s0          | s1

Table 4: consideration on signal and SNR for different pixel sizes at the same resolution

The reason for the discrepancy between the argumentation and these results lies in the definition of the f-number f/#:

    f/# = f / D

The lenses are constructed such that the same f-number always generates the same irradiance in the same image circle area. Therefore, the attempt to focus the energy onto a smaller area for the comparison is not accomplished by keeping the f-number constant. It is the real diameter of the aperture, D, which has to be kept constant. Therefore, if the old f-number was f/# = 16 (see example in figure 13), the new f-number would be:

    f/#_new = f_new / D = f_new / (f_old / f/#_old) = (f_new / f_old) · f/#_old = (50 / 100) · 16 = 8

If now the argument would be that the larger f-number causes a different depth of field, which in turn changes the sharpness of the image and therefore the quality, the equation from chapter 3 can be used to look at the consequences. From the above example we have the focal lengths f_old = 100 mm and f_new = 50 mm, and we are looking for the right f-number to obtain the same depth of field. Therefore:

    2 · f/#_new · ε / f_new² = 2 · f/#_old · ε / f_old²
    <=> f/#_new = (f_new² / f_old²) · f/#_old = (50² / 100²) · 16 = 4

Since we have calculated an f-number of 8 for the correct comparison, the depth of field for the image sensor with the smaller pixels is even better.

Figure 13: Images of the same test chart with the same resolution, the same object distance and the same fill factor, but different pixel sizes and different lens settings (all images have the same scaling for display):
[a] pixel size 14.8 µm x 14.8 µm, f = 100 mm, f-number = 16
[b] pixel size 7.4 µm x 7.4 µm, f = 50 mm, f-number = 16
[c] pixel size 7.4 µm x 7.4 µm, f = 50 mm, f-number = 8
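The worked example above can be cross-checked numerically. The following Python sketch only re-uses the numbers given in the text and figure 13 (pixel pitches of 14.8 µm and 7.4 µm, 1000 x 1000 pixels, f_old = 100 mm at f/16) and verifies the irradiance gain, the aperture-preserving f-number, and the f-number that would merely keep the old depth of field:

```python
import math

# Numeric cross-check of the constant-resolution comparison. All values
# (pixel pitches, focal lengths, f-number) are the example numbers from the
# text and figure 13; this is a sketch, not a general model.

n_pixels = 1000                      # 1000 x 1000 pixel sensors
c_old = n_pixels * 14.8e-3           # width/height of the large-pixel sensor in mm
c_new = n_pixels * 7.4e-3            # half the pitch -> half the width

# Image circle diameter = sensor diagonal = sqrt(2) * c
d_old = math.sqrt(2) * c_old
d_new = math.sqrt(2) * c_new         # = d_old / 2

# Image circle areas: A = (pi/4) * d^2, so the new area is a quarter of the old
A_old = math.pi / 4 * d_old**2
A_new = math.pi / 4 * d_new**2

# The same photon flux through the same physical aperture D, spread over a
# quarter of the area, gives four times the irradiance.
irradiance_gain = A_old / A_new      # = 4.0

# Keep the aperture diameter D constant (NOT the f-number):
f_old, fnum_old = 100.0, 16.0        # mm, f/16
f_new = 50.0                         # mm (half the magnification)
D = f_old / fnum_old                 # 6.25 mm
fnum_new = f_new / D                 # = (f_new / f_old) * fnum_old = 8.0

# f-number that would merely reproduce the old depth of field instead:
fnum_same_dof = (f_new**2 / f_old**2) * fnum_old   # = 4.0

print(round(irradiance_gain, 6), fnum_new, fnum_same_dof)   # 4.0 8.0 4.0
```

Because the computed f/8 is larger than the f/4 that would only match the old depth of field, the small-pixel configuration in figure 13 [c] actually gains depth of field, as the text concludes.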


Pixel Size & Fullwell Capacity, Readout Noise, Dark Current

The fullwell capacity of a pixel of an image sensor is an important parameter which determines the general dynamic range of the image sensor and therefore also of the camera system. Although the fullwell capacity is influenced by various parameters like pixel architecture, layer structure, and well depth, there is a general correspondence also to the light sensitive area. This is also true for the electrical capacity of the pixel and for the dark current, which is thermally generated. Both dark current and capacity (8) add to the noise behavior, and therefore larger pixels also show larger readout noise.

Pixel type   | Light sensitive area | Fullwell capacity | Capacity | Dark current | Readout noise
Small pixel  | small                | small             | small    | small        | small
Large pixel  | large                | large             | large    | large        | large

Table 5: consideration on fullwell capacity, readout noise and dark current for different pixel sizes

How to compare with respect to pixel size & sensitivity

Generally, it is a good idea to image the same scene with each camera, which means:

- Keep the object distance and the illumination constant!
- Keep or adjust the same resolution for all cameras! If the cameras are to be compared, they should use the same resolution, which means either analogue binning, mathematical summation or averaging, or the usage of a region/area of interest.
- Select a corresponding lens with the appropriate focal length for each camera! The focal lengths should be chosen such that each camera sees the same image on the active image sensor area.
- Adjust the lens with the largest focal length to the largest useful f-number! Then adjust the f-numbers of the other lenses in such a way that the apertures of all lenses are equal (similar): use the equation for the f-number above, keep the aperture D constant, calculate the corresponding f-numbers for the other lenses, and adjust them as well as possible.
- Then compare the images - compare for sensitivity!

Are image sensors with larger pixels more sensitive than smaller pixels?

No, because the sensitivity has nothing to do with the size of the pixel. In a given optical set-up, the image sensor with the larger pixels will give a larger signal at a lower resolution, due to the larger collection area. But the parameter to look at, in case a more sensitive image sensor is required, is the quantum efficiency, and it is spectrally dependent. Therefore, the application defines which image sensor is the most sensitive.

END NOTES
1 Graphic has been inspired by figure 17.1-1 from "Fundamentals of Photonics", B.E.A. Saleh and M.C. Teich, John Wiley & Sons, Inc., 1991.
2 Data are taken from Green, M.A. and Keevers, M., "Optical properties of intrinsic silicon at 300 K", Progress in Photovoltaics, vol. 3, no. 3, pp. 189-192 (1995).
3 Data are taken from Green, M.A. and Keevers, M., "Optical properties of intrinsic silicon at 300 K", Progress in Photovoltaics, vol. 3, no. 3, pp. 189-192 (1995).
4 The fill factor can be small, for example, in the pixel of an interline transfer CCD image sensor, where 35 % of the pixel area is used for the shift register, or in the global shutter 5T or 6T pixel of a CMOS sensor, where the transistors and electrical leads cause a 40 % fill factor.
5 For some large pixels it still might be useful to use a less efficient micro lens if, for example, the lens prevents too much light from falling onto the periphery of the pixel.
6 This chapter comes from the book "Practical Handbook on Image Processing for Scientific and Technical Applications" by Prof. B. Jähne, CRC Press, chapter 4, Image Formation Concepts, pp. 131-132.
7 Since it is allowed to add the power of the readout noise, the resulting noise equals the square root of (4 x r0²) = 2 x r0.
8 The correspondence between pixel area, capacity and therefore kTC noise is a little bit simplified, because there are newer techniques from CMOS manufacturers like Cypress which overcome that problem. Nevertheless, the increasing dark current is correct, whereas the sum still follows the area size. Furthermore, in CCDs and newer CMOS image sensors most of the noise comes from the transfer nodes, which is cancelled out by correlated double sampling (CDS).

WHAT IS ALL THE HYPE ABOUT RESOLUTION AND MEGA PIXELS ANYWAY?

Benefit and Relevance for a Camera User

Assuming an image sensor, or a camera system with an image sensor, is generally used to detect images and objects, the critical question is what the influence of the resolution is on the image quality. First, if the resolution is higher, more spatial information is obtained, and as a consequence larger data files are generated. Second, the amount of information which can be obtained by a camera is inseparably connected to the applied imaging optics, and the optics are characterized by their own optical resolution or ability to resolve details and must also be considered.

Figure 1 illustrates the influence of resolution between its limits of "blurred" and "sharp", and the influence of contrast between its limits of "soft" and "brilliant", on the image quality: the higher the resolution of an optical system (consisting of a camera and imaging optics), the more the images become "sharp", while a "brilliant" contrast helps to distinguish the white and black lines even if they are not sharp.

Figure 1: The graph illustrates how a black/white line test image is influenced between the limits "blurred" and "sharp" in resolution (y-axis) and between the limits "soft" and "brilliant" in contrast (x-axis).

Image Sensors and Cameras

Resolution, in the context of an image sensor, describes the total number of pixels utilized to create an image. Characterizing the resolution of an image sensor simply refers to the pixel count and is generally expressed as the horizontal number of pixels multiplied by the vertical number of pixels:

    number of horizontal pixels x number of vertical pixels = total resolution

For instance, the sCMOS image sensor CIS2521 has the following resolution:

    (2560 x 2160) pixels

This calculation is typical for technical data sheets and marketing materials for scientific cameras, and it would seem that more pixels are always beneficial, but the title question remains: why does the pixel count matter?

Starting with the image sensor in a camera system, the modulation transfer function (MTF) describes the ability of the camera system to resolve fine structures. It is a variant of the optical transfer function (OTF), which mathematically describes how the system handles the optical information, or contrast, of the scene or the sample and transfers it onto the image sensor and then into a digital format for processing via computer. The resolution ability depends on both the number and the size of the pixels.

The maximum spatial resolution is described as the ability to separate patterns of black and white lines and is given in line pairs per millimeter ([lp/mm]). The theoretical limit is given in the literature and defines the maximum …
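The pixel-count arithmetic above can be made concrete with a short Python sketch. The CIS2521 resolution comes from the text; the 6.5 µm pixel pitch and the Nyquist-style limit of one line pair per two pixels are illustrative assumptions, not values quoted in this article:

```python
# Resolution bookkeeping for an image sensor, using the CIS2521 example
# from the text. The 6.5 um pixel pitch and the limit formula
# 1 / (2 * pitch) are illustrative assumptions, not article values.

h_pixels, v_pixels = 2560, 2160          # horizontal x vertical pixel count
total_resolution = h_pixels * v_pixels   # total number of pixels
megapixels = total_resolution / 1e6      # "megapixel" figure from data sheets

pixel_pitch_mm = 6.5e-3                  # assumed pixel pitch (6.5 um) in mm
nyquist_lp_per_mm = 1.0 / (2.0 * pixel_pitch_mm)  # one line pair needs >= 2 pixels

print(total_resolution, round(megapixels, 2))   # 5529600 5.53
print(round(nyquist_lp_per_mm, 1))              # 76.9
```

For this example the sensor geometry alone would allow roughly 5.5 megapixels and about 77 lp/mm; the MTF of the attached optics can only lower what is actually resolved, which is why the pixel count on its own does not tell the whole story.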

contact

europe (HQ)
PCO AG
Donaupark 11
93309 Kelheim, Germany
+49 9441 2005 50
[email protected]
pco.de

america
PCO-TECH Inc.
1000 N West Street, Suite 1200
Wilmington, DE 19801
+1 866 678 4566
[email protected]
pco-tech.com

asia
PCO Imaging Asia Pte.
3 Temasek Ave
Centennial Tower, Level 34
Singapore, 039190
+65 6549 7054
[email protected]
pco-imaging.com

china
Suzhou PCO Imaging Technology Co., Ltd.
Room A10, 4th Floor, Building 4
Ascendas Xinsu Square, No. 5 Xinghan Street
Suzhou Industrial Park, China 215021
+86 512 67634643
[email protected]
pco.cn

for application stories please visit our website

subject to changes without prior notice ©PCO AG, Kelheim | Are larger pixels always more sensitive? | v1.01 ISO 9001:2015