EMVA Standard 1288

Standard for Characterization and Presentation of Specification Data for Image Sensors and Cameras

Release A1.03

September 2006, © Copyright EMVA, 2005


Acknowledgements

Companies participating in the elaboration of the EMVA Standard 1288 and their representatives

Adimec: Jochem Herrmann
Asentics: Wolfgang Pomrehn
Aspect Systems: Marcus Verhoeven
ATMEL: Jacques Leconte
AWAIBA: Martin Wäny
Basler: Dr. Friedrich Dierks
DALSA: Prof. Dr. Albert Theuwissen
BOSCH: Dr. Uve Apel
IDT: Daniel Diezemann
JAI/Pulnix: Tue Moerk
MELEXIS: Dr. Arnaud Darmont
PCO: Dr. Gerhard Holst
Sony: Jérôme Avenel
Stemmer Imaging: Martin Kersting / Manfred Wütschner
TVI Vision: Kari Siren / Jari Löytömaki
University Heidelberg: Prof. Dr. Bernd Jähne
University Oldenburg: Dr. Heinz Helmers

Rights and Trademarks

The European Machine Vision Association owns the "EMVA, standard 1288 compliant" logo. Any company can obtain a license to use the "EMVA standard 1288 compliant" logo, free of charge, with product specifications measured and presented according to the definitions in EMVA standard 1288. The licensee guarantees that it meets the terms of use in the relevant version of EMVA standard 1288. Licensed users will self-certify compliance of their measurement setup, computation and representation with which the "EMVA standard 1288 compliant" logo is used. The licensee has to check compliance with the relevant version of EMVA standard 1288 regularly, at least once a year. When displayed online, the logo has to be featured with a link to the EMVA standardization web page. EMVA will not be liable for specifications not compliant with the standard and damage resulting therefrom. EMVA keeps the right to withdraw the granted license at any time and without giving reasons.


About this Standard

EMVA has started the initiative to define a unified method to measure, compute and present specification parameters and characterization data for cameras and image sensors used for machine vision applications. The standard does not define what nature of data should be disclosed. It is up to the component manufacturer to decide whether to publish typical data, data of an individual component, guaranteed data, or even guaranteed performance over the lifetime of the component. However, the component manufacturer shall clearly indicate what the nature of the presented data is.

The Standard is organized in different modules, each addressing a group of specification parameters, assuming a certain physical behavior of the sensor or camera under certain boundary conditions. Additional modules covering more parameters and a wider range of sensor and camera products will be added at a later date. For the time being it will be necessary for the manufacturer to indicate additional, component specific information, not defined in the standard, to fully describe the performance of image sensor or camera products, or to describe physical behavior not covered by the mathematical models of the standard.

The purpose of the standard is to benefit the Automated Vision Industry by providing fast, comprehensive and consistent access to specification information for cameras and sensors. Particularly it will be beneficial for those who wish to compare cameras or who wish to calculate system performance based on the performance specifications of an image sensor or a camera.


Contents

ACKNOWLEDGEMENTS

RIGHTS AND TRADEMARKS

ABOUT THIS STANDARD

CONTENTS

4 INTRODUCTION AND SCOPE

5 BASIC INFORMATION

6 GENERAL DEFINITIONS
6.1 ACTIVE AREA
6.2 NUMBER OF PIXELS
6.3 (GEOMETRICAL) PIXEL AREA

7 MODULE 1: CHARACTERIZING THE IMAGE QUALITY AND SENSITIVITY OF MACHINE VISION CAMERAS AND SENSORS
7.1 MATHEMATICAL MODEL
7.2 MEASUREMENT SETUP
7.3 MATCHING THE MODEL TO THE DATA
7.3.1 EXTENDED PHOTON TRANSFER METHOD
7.3.2 SPECTROGRAM METHOD
7.4 PUBLISHING THE RESULTS
7.4.1 CHARACTERIZING TEMPORAL NOISE AND SENSITIVITY
7.4.2 CHARACTERIZING TOTAL AND SPATIAL NOISE

8 REFERENCES


4 Introduction and Scope

The first version of this standard covers monochrome digital area scan cameras with linear photo response characteristics. Line scan and color cameras will follow. Analog cameras can be described according to this standard in conjunction with a frame grabber; similarly, image sensors can be described as part of a camera. The standard text is organized into separate modules. The first module covers noise and sensitivity. More modules will follow in future versions of the standard.

Fig 1: Elements of the Standard (camera, measurement data, mathematical model with its parameters, and matching the model to the data)

Each module defines a mathematical model for the effects to be described (see Fig 1). The model contains parameters which characterize the camera. The parameters are found by matching the model to measurement data. Each module consists of the following parts:

§ Description of the mathematical model
§ Description of the measurement setup
§ Description of how to match the model to the data and compute the parameters
§ Description of how the results are published

The standard can only be applied if the camera under test can actually be described by the mathematical model. To ensure this, each module contains a set of conditions which need to be fulfilled. If the conditions are not fulfilled, the computed parameters are meaningless with respect to the camera under test and thus the standard cannot be applied. The standard is intended to provide a concise definition and clear description of the measurement process. For a better understanding of the underlying physical and mathematical model of the camera please read [1], [2], [3], [5], or [7]. Measurement examples are contained in [1].


5 Basic Information

Before discussing the modules, this section describes the basic information which must be published for each camera:

§ Vendor name
§ Model name
§ Type of data presented: Typical; Guaranteed; Guaranteed over lifetime¹
§ Sensor type (CCD; CMOS; CID; etc.)
§ Sensor diagonal in [mm] (sensor length in the case of line sensors)
§ Indication of lens category to be used [inch]
§ Resolution of the sensor's active area (width x height in [pixels])
§ Pixel size (width x height in [µm])
§ Readout type (CCD only)
  - progressive scan
  - interlaced
§ Transfer type (CCD only)
  - interline transfer
  - frame transfer
  - full frame transfer
  - frame interline transfer
§ Shutter type (CMOS only)
  - Global: all pixels start exposing and stop exposing at the same time.
  - Rolling: exposure starts line by line with a slight delay between line starts; the exposure time for each line is the same.
  - Others: defined in the data sheet.
§ Overlap capabilities
  - Overlapping: readout of frame n and exposure of frame n+1 can happen at the same time.
  - Non-overlapping: readout of frame n and exposure of frame n+1 can only happen sequentially.
  - Others: defined in the data sheet.
§ Maximum frame rate at the given operation point (no change of settings permitted)
§ General conventions
§ Definition used for typical data (number of samples, sample selection)
§ Others (interface type etc.)

1 The type of data may vary for different parameters, e.g. guaranteed specifications for most of the parameters and typical data for some measurements not easily done in production (e.g. η(λ)). It then has to be clearly indicated which data is of what nature.


6 General Definitions

This section defines general terms used in the different modules.

6.1 Active Area

The Active Area of an image sensor or of a camera is defined as the array of light sensitive pixels that are functional² in normal operation mode.

6.2 Number of Pixels

The number of pixels is defined as follows:

The number of pixels is the number of separate, physically existing and light sensitive photosites in the Active Area³.

Stacked photosites⁴ are counted as a single pixel.

The number of pixels of a sensor/camera is indicated as number of columns x number of rows (e.g. 640 x 480).

6.3 (Geometrical) Pixel Area

The geometrical, not necessarily light sensitive, area of a pixel, given by horizontal pixel pitch x vertical pixel pitch.

2 Functional in this context means that the pixel values are given out.
3 Dark pixels are not counted.
4 Stacked pixels are sometimes used for colour separation.


7 Module 1: Characterizing the Image Quality and Sensitivity of Machine Vision Cameras and Sensors

This module describes how to characterize the temporal and spatial noise of a camera and its sensitivity to light.

7.1 Mathematical Model

This section describes the physical and mathematical model used for the measurements in this module (Fig 2 & Fig 3): A number of n_p photons hits the (geometrical) pixel area during the exposure time. These photons generate a number of n_e electrons, a process which is characterized by the total quantum efficiency η (the total quantum efficiency includes fill factor, microlenses, etc.; see equation (4)).

Fig 2: Physical model of the camera. A number of photons hitting the pixel area during exposure time creates a number of electrons, forming a charge which is converted by a capacitor to a voltage, which is then amplified and digitized, resulting in the digital gray value.

The electrons are collected⁵, converted into a voltage by means of a capacitor, amplified and finally digitized, yielding the digital gray value y, which is related to the number of electrons by the overall system gain K. All dark noise sources⁶ in the camera are referenced to the number of electrons in the pixel and described by a (fictive) number n_d of noise electrons added to the photon generated electrons.

5 The actual mechanism is different for CMOS sensors; however, the mathematical model for CMOS is the same as for CCD sensors.
6 Dark Noise = all noise sources present when the camera is capped; not to be confused with Dark Current Noise.


Fig 3: Mathematical model of a single pixel. The number of photons n_p is converted into the number of electrons n_e via the quantum efficiency η; the dark noise electrons n_d are added, and the system gain K yields the digital gray value y.

Spatial non-uniformities in the image are modeled by adding (fictive) spatial noise electrons to the photon generated electrons⁷. The following naming conventions are used for the mathematical model:

§ n_x denotes a number of things of type x; n_x is a stochastic quantity.
§ µ_x denotes the mean of the quantity x.
§ σ_x denotes the standard deviation and σ_x² the variance of the quantity x.
§ The index p denotes quantities related to the number of photons hitting the geometrical pixel during exposure time.
§ The index e denotes quantities related to the number of electrons collected in the pixel.
§ The index d denotes quantities related to the number of (fictive) dark noise electrons collected in the pixel.
§ The index y denotes quantities related to the digital gray values.

The mathematical model consists of the following equations:

Basic Model for Monochrome Light

µ_p = Φ_p T_exp    (1)

Φ_p = E A λ / (h c)    (2)

µ_y = K (µ_e + µ_d) = K (η µ_p + µ_d) = K η µ_p + µ_y.dark    (3)

(the term K η µ_p is the light induced part)

η = η(λ)    (4)

σ_y² = σ_y.total² = σ_y.temp² + σ_y.spat² = K² [ (η µ_p + σ_d²) + (S_g² η² µ_p² + σ_o²) ]    (5)

(the first term in brackets is the temporal part, the second the spatial part)

σ_y.temp² = K² η µ_p + σ_y.temp.dark²    (6)

(the term K² η µ_p is the light induced part)

7 Not shown in Fig 3.


σ_y.spat² = K² S_g² η² µ_p² + σ_y.spat.dark²    (7)

(the term K² S_g² η² µ_p² is the light induced part)

Saturation

µ_y → µ_y.sat (at saturation)    (8)

σ_y² → 0 (at saturation)    (9)

µ_e.sat = η µ_p.sat + µ_d ≈ η µ_p.sat    (10)

Model Extension for Dark Current Noise

µ_d = µ_d0 + N_d T_exp    (11)

σ_d² = σ_d0² + N_d T_exp    (12)

N_d = N_d30 · 2^((ϑ - 30°C) / k_d)    (13)

Model Extension for Non-White Noise

σ_y.full² = σ_y.white² + σ_y.nonwhite²    (14)

Derived Measures

SNR_y = η µ_p / √( (σ_d0² + σ_o²) + η µ_p + S_g² η² µ_p² )    (15)

µ_p.min = σ_d0 / η    (16)

DYN_in = µ_p.sat / µ_p.min (see footnotes 8, 9)    (17)

DYN_out = µ_e.sat / σ_d0    (18)

F = σ_y.full² / σ_y.white²    (19)

Alternative Measures¹⁰

PRNU_1288 = S_g    (20)

DSNU_1288 = σ_o    (21)

8 For linear sensors, DYN_in = DYN_out holds true.
9 The dynamic range must be present in the same image ("intra scene").
10 These measures are given for convenience. Note that for historical reasons several inconsistent definitions of these terms exist. Therefore the 1288 suffix should always be used.
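As an illustration of the derived measures, the following sketch evaluates equations (15) and (16). It is not part of the standard; function and argument names are my own choices.

```python
import math

def snr_y(mu_p, eta, sigma_d0, sigma_o, s_g):
    """Gray value signal-to-noise ratio, eq. (15).

    mu_p: mean photons per pixel, eta: total quantum efficiency,
    sigma_d0: temporal dark noise [e-], sigma_o: spatial offset
    noise [e-], s_g: spatial gain noise coefficient.
    """
    signal = eta * mu_p
    variance = sigma_d0**2 + sigma_o**2 + eta * mu_p + (s_g * eta * mu_p)**2
    return signal / math.sqrt(variance)

def absolute_sensitivity_threshold(sigma_d0, eta):
    """Absolute sensitivity threshold mu_p.min, eq. (16)."""
    return sigma_d0 / eta
```

For an ideal sensor (no dark or spatial noise) the expression reduces to the shot-noise limit SNR_y = √(η µ_p).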


using the following quantities, with their units given in square brackets:

A  area of the (geometrical) pixel [m²]
c  speed of light, c ≈ 3·10⁸ m/s
DYN_in  input dynamic range [1]
DYN_out  output dynamic range [1]
E  irradiance on the sensor surface [W/m²]
F  non-whiteness coefficient
h  Planck's constant, h ≈ 6.63·10⁻³⁴ Js
K  overall system gain [DN/e⁻]
k_d  doubling temperature of the dark current [°C]
N_d  dark current [e⁻/s]
N_d30  dark current for a housing temperature of 30°C [e⁻/s]
S_g²  variance coefficient of the spatial gain noise [%²]
PRNU_1288  photo response non-uniformity [%]
SNR_y  gray value's signal-to-noise ratio [%]
T_exp  exposure time [s]
η  total quantum efficiency¹¹ [e⁻/p~] = [1] = [%]
ϑ  housing temperature of the camera [°C]
λ  wavelength of light [m]
µ_e  mean number of photon generated electrons¹² [e⁻]
µ_e.sat  saturation capacity, i.e. mean equivalent electrons if the camera is saturated [e⁻]
µ_d  mean number of (fictive) temporal dark noise electrons [e⁻]
µ_d0  mean number of (fictive) dark noise electrons for exposure time zero [e⁻]
µ_p  mean number of photons collected by one pixel during exposure time¹³ [p~]
µ_p.min  absolute sensitivity threshold [p~]
µ_p.sat  mean number of photons collected if the camera is saturated [p~]
µ_y  mean gray value¹⁴ [DN]
µ_y.dark  mean gray value with no light applied [DN]
µ_y.sat  mean gray value if the camera is saturated [DN]
σ_d²  variance of the temporal distribution of the dark signal, referred to electrons [e⁻²]
σ_d0²  variance of the (fictive) temporal dark noise electrons for exposure time zero [e⁻²]
σ_o²  variance of the spatial offset noise [e⁻²]
DSNU_1288  dark signal non-uniformity [e⁻]
σ_y.full²  variance of the gray values' distribution including white noise and artifacts [DN²]
σ_y.nonwhite²  variance of the gray values' distribution including the non-white part of the noise only [DN²]
σ_y.spat²  variance of the spatial distribution of the gray values (spatial noise) [DN²]
σ_y.spat.dark²  variance of the spatial distribution of the dark signal (spatial dark noise) [DN²]
σ_y.temp²  variance of the temporal distribution of the gray values (temporal noise) [DN²]

11 Including the geometrical fill factor.
12 The unit [e⁻] = [e⁻²] = 1 denotes a number of electrons.
13 The unit [p~] = [p~²] = 1 denotes a number of photons.
14 The unit [DN] = [DN²] = 1 denotes digital numbers.


σ_y.temp.dark²  variance of the temporal distribution of the dark signal (temporal dark noise) [DN²]
σ_y.total²  variance of the distribution of the gray values (total noise) [DN²]
σ_y.white²  variance of the gray values' distribution including the white part of the noise only [DN²]
Φ_p  number of photons collected in the geometric pixel per unit exposure time [p~/s]

Throughout this document noise energy is described in terms of variance. An equivalent way would be using rms (root mean square) values; the relation variance = rms² holds true.

The model contains several important assumptions that need to be challenged during the qualification:

§ The amount of photons collected by a pixel depends on the product of irradiance and exposure time.
§ All noise sources are stationary and white with respect to time and space¹⁵. The parameters describing the noise are invariant with respect to time and space.
§ Only the total quantum efficiency is wavelength dependent. The effects caused by light of different wavelengths can be linearly superimposed.
§ Only the dark current is temperature dependent.

If these assumptions do not hold true and the mathematical model cannot be matched to the measurement data, the camera cannot be characterized using this standard.

7.2 Measurement Setup

The measurements described in the following section use dark and bright measurements. Dark measurements are performed while the camera is capped. Bright measurements are taken without a lens and in a dark room. The sensor is illuminated by a diffuse disk-shaped light source¹⁶ placed in front of the camera (see Fig. 4). Each pixel must "see" the whole disk¹⁷. No reflection shall take place¹⁸.

Fig. 4: Optical setup. A disk-shaped light source of diameter D is placed at distance d in front of the sensor; the mount, the sensor and the housing temperature sensor are indicated.

The f-number of this setup is defined as:

f_# = d / D    (22)

15 The spectrogram method (see section 7.3.2) is used to challenge this assumption.
16 This could be, for example, the port of an Ulbricht sphere. A good diffuser with a circular aperture would also do.
17 Beware: the mount forms an artificial horizon for the pixels and might occlude parts of the disk for pixels located at the border of the sensor.
18 Especially not on the mount's inside screw thread.


with the following quantities:

d  distance from sensor to light source [m]
D  diameter of the disk-shaped light source [m]

The f-number must be 8. If not otherwise stated, measurements are performed at a 30°C camera housing temperature. The housing temperature is measured by placing a temperature sensor at the lens mount. For cameras consuming a lot of power, measurements may be performed at a higher temperature.

Measurements are done with monochrome light. Use of the wavelength where the quantum efficiency of the camera under test is maximal is recommended. The wavelength variation must be ≤ 50 nm. The amount of light falling on the sensor is measured with an accuracy¹⁹ of better than ±5%. The characteristic of non-removable filters is taken as part of the camera characteristic. The number of photons hitting the pixel during exposure time is varied by changing the exposure time and computed using equations (1) and (2).

All camera settings (besides the variation of exposure time where stated) are identical for all measurements. For different settings (e.g., gain), different sets of measurements must be acquired, and different sets of parameters, containing all parameters²⁰ which may influence the characteristic of the camera, must be presented.
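The conversion from measured irradiance to a photon count per equations (1) and (2) can be sketched as follows. This is an illustration only; the constant values and argument names are my own.

```python
PLANCK_H = 6.626e-34   # Planck's constant [J s]
LIGHT_C = 2.998e8      # speed of light [m/s]

def mean_photons(irradiance, pixel_area, wavelength, t_exp):
    """Mean number of photons per pixel, eqs. (1)-(2):
    mu_p = Phi_p * T_exp, with Phi_p = E * A * lambda / (h * c)."""
    photon_flux = irradiance * pixel_area * wavelength / (PLANCK_H * LIGHT_C)
    return photon_flux * t_exp
```

Because µ_p is proportional to T_exp, varying the exposure time at fixed irradiance sweeps the photon count, exactly as the measurement procedure requires.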

7.3 Matching the Model to the Data

7.3.1 Extended Photon Transfer Method

The measurement scheme described in this section is based on the "Photon Transfer Method" (see [4]) and identifies those model parameters which deal with temporal noise. For a fixed set of camera settings, two series of measurements are performed with varying exposure times²¹ T_exp:

§ First a dark run is performed and the following quantities are determined (details below): T_exp, µ_y.dark, and σ_y.temp.dark².

§ Second a bright run is performed and the following quantities are determined: T_exp, µ_p, µ_y, and σ_y.temp².

Set up the measurement to meet the following conditions:

§ The number of bits is as high as possible.
§ The Gain setting of the camera is as small as possible but large enough to ensure that in darkness σ_y.temp.dark² ≥ 1 holds true²².
§ The Offset setting of the camera is as small as possible but large enough to ensure that the dark signal, including the temporal and spatial noise, is well above zero²³.
§ The range of exposure times used for the measurement series is chosen so that the series covers SNR_y = 1 and the saturation point.
§ Distribute the exposure time values used for measurement in a way that ensures that the results for minimum detectable light and for saturation bear the same exactness.

19 Typical pitfalls are: the inside of the lens mount reflects additional light onto the sensor; the measurement device has a different angular characteristic than the camera sensor.
20 Including, for example, whether the exposure time is programmed or defined by means of an external trigger signal.
21 Varying the exposure time is required for determining dark current and shutter efficiency.
22 Otherwise, the quantization noise will spoil the measurement.
23 Otherwise, asymmetric clipping of the noisy signal will spoil the measurement.


§ No automated parameter control (e.g., automated gain control) is enabled.

* * *

The mean of the gray values µ_y is computed according to the formula:

µ_y = (1/N) Σ_{i,j} y_ij    (23)

using the following quantities:

µ_y  mean gray value [DN]
y_ij  gray value of the pixel in the i-th row and j-th column [DN]
N  number of pixels [1]

All pixels in the active area²⁴ must be part of the computation²⁵.

* * *

The variance of the temporal distribution of the gray values, σ_y.temp² (and likewise σ_y.temp.dark²), is computed from the difference of two images A and B according to:

σ_y.temp² = (1/2) · (1/N) Σ_{i,j} (y_ij^A - y_ij^B)²    (24)

using the following quantities:

σ_y.temp²  variance of the temporal noise [DN²]
y_ij^A  gray value of the pixel in the i-th row and j-th column of image A [DN]
y_ij^B  gray value of the pixel in the i-th row and j-th column of image B [DN]
N  number of pixels [1]

All pixels in the active area are part of the computation. To avoid transient phenomena when the live grab is started, images A and B are taken in order from a live image series.

* * *

After performing the measurements, draw the following diagrams:

(a) µ_y versus µ_p
(b) σ_y.temp² versus µ_p
(c) µ_y.dark versus T_exp
(d) σ_y.temp.dark² versus T_exp
(e) σ_y.temp² - σ_y.temp.dark² versus µ_y - µ_y.dark
(f) µ_y - µ_y.dark versus µ_p
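The per-exposure statistics feeding these diagrams come from equations (23) and (24). A minimal sketch of the two estimators, assuming numpy and images given as 2-D arrays (illustrative, not normative):

```python
import numpy as np

def mean_gray_value(img):
    """Mean gray value over all active-area pixels, eq. (23)."""
    return float(np.mean(img))

def temporal_variance(img_a, img_b):
    """Temporal noise variance from two images A and B, eq. (24).

    The difference of two images taken under identical conditions
    cancels the fixed spatial pattern; the factor 1/2 compensates
    for the doubling of the temporal variance in the difference.
    """
    diff = np.asarray(img_a, dtype=np.float64) - np.asarray(img_b, dtype=np.float64)
    return 0.5 * float(np.mean(diff**2))
```

Both functions deliberately include every pixel of the array, matching the requirement that defective pixels must not be excluded.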

Select a contiguous range of measurements where all diagrams show a sufficiently linear correspondence²⁶. The range should cover at least 80% of the range between SNR_y = 1 and the maximum SNR_y.

24 See "General Definitions".
25 Defective pixels must not be excluded.


* * *

The overall system gain K is computed according to the mathematical model as:

K = (σ_y.temp² - σ_y.temp.dark²) / (µ_y - µ_y.dark)    (25)

which describes the linear correspondence in the diagram showing σ_y.temp² - σ_y.temp.dark² versus µ_y - µ_y.dark. Match a line starting at the origin to the linear part of the data in this diagram. The slope of this line is the overall system gain K.

* * *

The total quantum efficiency η is computed according to the mathematical model as:

η = (µ_y - µ_y.dark) / (K µ_p)    (26)

which describes the linear correspondence in the diagram showing µ_y - µ_y.dark versus µ_p. Match a line starting at the origin to the linear part of the data in this diagram. The slope of this line divided by the overall system gain K yields the total quantum efficiency η.

* * *
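Both K (eq. 25) and η (eq. 26) are slopes of lines forced through the origin. A least-squares sketch of that fit, assuming the linear measurement range has already been selected (illustrative only):

```python
import numpy as np

def slope_through_origin(x, y):
    """Least-squares slope b of the model y = b*x (line through origin)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return float(np.dot(x, y) / np.dot(x, x))

# K, eq. (25): slope of (temporal variance minus dark temporal variance)
#              versus (mean gray value minus dark mean).
# eta, eq. (26): slope of (mean gray value minus dark mean) versus
#              photon count, divided by K.
```

Forcing the line through the origin reflects the model: with no light-induced signal, both differences vanish by construction.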

The dark current N_d is computed according to the mathematical model as:

N_d = (µ_y.dark - K µ_d0) / (K T_exp)    (27)

which describes the linear correspondence in the diagram showing µ_y.dark versus T_exp. Match a line to the linear part of the data in this diagram. The slope of this line divided by the overall system gain K yields the dark current N_d as derived from the mean. The offset of the matched line divided by the overall system gain K yields the dark offset µ_d0. This quantity, however, is not of interest for characterizing a camera.

If a camera has a dark current compensation, the dark current is computed as:

N_d = (σ_y.temp.dark² - K² σ_d0²) / (K² T_exp)    (28)

which describes the linear correspondence in the diagram showing σ_y.temp.dark² versus T_exp. Match a line (with offset) to the linear part of the data in the diagram. The slope of this line divided by the square of the overall system gain K also yields the dark current N_d.

If the camera's exposure time cannot be set long enough to yield meaningful values for the dark current N_d and the doubling temperature k_d (see below), these two parameters, and only these, may be omitted when presenting the results; the raw measurement data, however, must be given.

The dark noise for exposure time zero, σ_d0², is found as the offset of the same line divided by the square of the overall system gain K.

* * *
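The dark current fit of equation (27) is an ordinary straight-line fit of µ_y.dark over T_exp. A sketch using numpy's polyfit (function and argument names are my own; illustrative only):

```python
import numpy as np

def dark_current_from_mean(t_exp, mu_y_dark, K):
    """Fit mu_y.dark = K*N_d*T_exp + K*mu_d0, eq. (27).

    Returns (N_d, mu_d0): the slope and offset of the matched
    line, each divided by the overall system gain K.
    """
    slope, offset = np.polyfit(t_exp, mu_y_dark, 1)
    return slope / K, offset / K
```

The variance-based fit of equation (28) has the same shape, with σ_y.temp.dark² as ordinate and K² as the divisor.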

26 If this is not possible, the camera does not follow the model and cannot be qualified using this standard.


The doubling temperature k_d of the dark current is determined by measuring the dark current as described above for different housing temperatures. The temperatures must vary over the whole range of the operating temperature of the camera.

Put a capped camera in a climate exposure cabinet and drive the housing temperature to the desired value for the next measurement. Before starting the measurement, wait at least 10 minutes with the camera reading out live images to make sure thermal equilibrium is reached. For each temperature ϑ, determine the dark current N_d by taking a series of measurements with varying exposure times as described above. Draw the following diagram:

(g) log2 N_d versus ϑ - 30°C

Check whether the diagram shows a linear correspondence and match a line to the linear part of the data. From the mathematical model, it follows that:

log2 N_d = (ϑ - 30°C) / k_d + log2 N_d30    (29)

and thus the inverse of the slope of the line equals the doubling temperature k_d, and 2 raised to the power of the offset equals the 30°C dark current N_d30.

* * *
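Equation (29) turns the exponential temperature dependence into a line fit in log2 space. A sketch (illustrative only; assumes numpy and strictly positive dark current values):

```python
import numpy as np

def doubling_temperature_fit(theta, n_d):
    """Fit log2(N_d) = (theta - 30)/k_d + log2(N_d30), eq. (29).

    Returns (k_d, N_d30): the inverse slope is the doubling
    temperature; 2 raised to the offset is the 30 degC dark current.
    """
    x = np.asarray(theta, dtype=np.float64) - 30.0
    slope, offset = np.polyfit(x, np.log2(np.asarray(n_d, dtype=np.float64)), 1)
    return 1.0 / slope, 2.0 ** offset
```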

The saturation point is defined as the maximum of the curve in the diagram showing σ_y.temp² versus µ_p. The abscissa of the maximum point is the number of photons µ_p.sat where the camera saturates. The full well capacity µ_e.sat in electrons is computed according to the mathematical model as:

µ_e.sat = η µ_p.sat    (30)

* * *

To determine the wavelength dependence of the total quantum efficiency, run a series of measurements²⁷ with monochrome light of different wavelengths λ, including the wavelength λ₀ where the quantum efficiency has been determined as described above. For each wavelength, adjust the light's intensity and the exposure time so that the same amount of photons µ_p hits the pixel during exposure time. Take a bright measurement and compute the mean µ_y of the image. Cap the camera, take a dark measurement and compute the mean µ_y.dark of the image. From the mathematical model, it follows that:

η(λ) = η(λ₀) · (µ_y(λ) - µ_y.dark) / (µ_y(λ₀) - µ_y.dark), with µ_p = const    (31)

which can be given as a table and/or graphic²⁸.

7.3.2 Spectrogram Method

The measurement scheme described in this section is based on the "Spectrogram Method" (see [1]) and identifies those model parameters which deal with total and spatial noise²⁹.

27 You can use a set of filters.
28 Note that this is different from the spectral distribution of the responsivity, which is determined by the same measurement but holding constant the irradiance instead of the number of photons collected.


The total noise is taken from a spectrogram of a single image. The spectrogram is computed by taking the mean of the amplitude of the Fourier transform of each line (details below). The white part of the total noise, as well as the total amount of noise including all kinds of artifacts such as stripes in the image, can be estimated from the spectrogram. For line scan cameras the spectrogram method is used on a "pseudo area" image made by combining 1000 consecutively acquired lines³⁰.

To describe the total noise, three measurements for different lighting conditions are made. For each measurement, the spectrogram, σ_y.full, and σ_y.white are measured. The measurements are done for a fixed set of camera settings.

The spatial noise is estimated in the same manner as the total noise, but the spectrogram is taken from an image resulting from low pass filtering (averaging) a live image stream. To describe the spatial noise, a bright and a dark run are performed. During the dark run, the following quantities are determined (details below): T_exp, µ_y.dark, and σ_y.spat.dark².

During the bright run, the following quantities are determined: T_exp, µ_p, µ_y, and σ_y.spat².

For computing σ_y.spat.dark² and σ_y.spat², the measure σ_y.full is used, which contains all noise parts.

* * *

Set up the measurement to meet the following conditions³¹. (Note: all settings must be equal for all measurements. For some cameras it may be useful to perform the measurements at several operating points in order to obtain meaningful values for all measured parameters.)

§ The number of bits per pixel is as high as possible.
§ The Gain setting of the camera is as small as possible but large enough to ensure that in darkness σ_y.temp² ≥ 1 and σ_y.spat² ≥ 1 hold true.
§ The Offset setting of the camera is as small as possible but large enough to ensure that the dark signal, including the temporal and spatial noise, is well above zero.
§ The range of exposure times used for the measurement series is chosen so that the series covers SNR_y = 1 and the saturation point.
§ No automated parameter control (e.g., automated gain control) is enabled.
§ Camera built-in offset and gain shading correction or any other correction (e.g. defect pixel correction) may be applied but must not be changed during a series of measurements³².

* * *

The spectrogram of an image is computed by the following steps:

§ Restrict the number of pixels per line so that the largest number N = 2^q (q ∈ ℕ) is less than or equal to the image width³³.

29 Spatial noise is often not really white but can, to a large extent, be dominated by periodic artifacts such as stripes in the image. To deal with this, the spatial noise parameters are estimated from the spectrogram. The spectrogram is the mean of the frequency spectrum of the image's lines.
30 The spectrogram computed in this way will lead to relatively higher spatial noise values compared to an area scan camera because the consecutive lines are correlated with respect to spatial noise.
31 It may be necessary to use a different gain setting than in the photon transfer measurement.
32 Applying shading correction during measurement might make it impossible to match the mathematical model and thus characterize the camera by the methods described in this standard.
33 Depending on the FFT implementation available, non-2^q based data lengths can also be used.

Page 17 of 23 © Copyright EMVA, 2005 Standard for Characterization and Presentation of Specification Data for Image Sensors and Cameras

September 06 Release A1.03

§ For each of the M lines of the image, compute the amplitude of the Fourier transform:

- Prepare an array y(k) of length 2N.

- Copy the pixels from the image to the first half of the array (0 ≤ k ≤ N−1).

- Compute the mean of the pixels in the first half of the array:

    ȳ = (1/N) Σ_{k=0}^{N−1} y(k)    (32)

- Subtract the mean from the values:

    y(k) := y(k) − ȳ    (33)

- Fill the second half of the array with zeros (N ≤ k ≤ 2N−1).

- Apply a (Fast) Fourier Transform to the array y(k):

    Y(n) = Σ_{k=0}^{2N−1} y(k) e^{j2π·nk/(2N)}    (34)

The frequency index n runs in the interval 0 ≤ n ≤ N, yielding N+1 complex result values. Verify that the first value Y(0) is zero. (This must be the case if the mean was subtracted from each line and the computation was done correctly.)

- Compute the amplitude of the Fourier transform as:

    |Y(n)| = √( Y(n) Y*(n) )    (35)

§ Take the squared mean of the amplitude values for all M lines of the image:

    Ȳ(n) = √( (1/M) Σ_j |Y_j(n)|² )    (36)

where |Y_j(n)| is the amplitude of the Fourier transform of the j-th line.
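As an illustration only (not part of the standard), the per-line procedure above can be sketched in Python with NumPy. The function name and array layout are my own assumptions, and the overall scaling follows the equations as printed:

```python
import numpy as np

def spectrogram(image):
    """Spectrogram of a 2-D image (M lines x width pixels), eqs. (32)-(36)."""
    M, width = image.shape
    # Largest N = 2^q less than or equal to the image width.
    N = 2 ** int(np.floor(np.log2(width)))
    acc = np.zeros(N + 1)
    for line in image[:, :N].astype(float):
        y = np.zeros(2 * N)
        y[:N] = line - line.mean()        # copy pixels, subtract the line mean
        Y = np.fft.fft(y)                 # length-2N transform of the padded array
        acc += np.abs(Y[:N + 1]) ** 2     # amplitude for 0 <= n <= N, squared
    return np.sqrt(acc / M)               # squared mean over the M lines
```

Note that NumPy's FFT uses the exponent sign convention e^{−j2πnk/(2N)}; the amplitude used here is unaffected by that choice.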

The N+1 values Ȳ(n) with 0 ≤ n ≤ N form the spectrogram of the image. It should be flat with occasional peaks only.

* * *

The mean of the squared transform is the variance of the noise, describing the total grey value noise, including all artifacts. It is computed according to:

    σ²_y.full = (1/(N+1)) Σ_n Ȳ(n)²    (37)

The square of the height of the flat part seen in the spectrogram curve is the variance describing the white part of the noise. It is estimated by taking the median of the spectrogram; sort the values Ȳ(n) and take the value with the index N/2.

    σ_y.white = sort( Ȳ(n) | n = 0, 1, 2, …, N )_{index = N/2}    (38)

* * *

To check if the total noise is white, take three spectrograms: one in darkness, one with the camera at 50% of the saturation capacity μ_e.sat and one with the camera at 90% saturation. Draw the three spectrograms in one diagram showing Ȳ(n)/(Kη) in [p~] versus n in [1/pixel]. All three curves should be flat with occasional sharp peaks only. Compute the non-whiteness coefficient34 for each curve:

    F = σ²_y.full / σ²_y.white    (39)

and check if it is approximately 1. If this parameter deviates significantly from 1, the spatial noise is not white, and the model may not be applied.

* * *

In order to gain a temporal low-pass filtered version of the camera image, the mean is computed from a set of N images taken from a live image stream. This can be done recursively by processing each pixel according to the following algorithm:

    ȳ_{k+1} = ( k·ȳ_k + y_{k+1} ) / (k+1)    (40)

    σ²_{k+1} = σ²_y.temp / (k+1)    (41)

where y_k is the pixel's value in the k-th image with 0 ≤ k ≤ N−1 and N is the total number of images processed.35 The temporal low-pass filtered image is formed by the pixel values ȳ_N which have a temporal variance of σ²_N. The total number N of images processed is determined by running the recursion until the following condition is met:

    σ_y.full(N) ≥ 10·σ_N    (42)

where σ_y.full(N) is the rms value of the total grey noise computed from the low-pass filtered image according to equation (37) using the spectrogram method, and σ_N is the rms value of the temporal noise of the low-pass filtered images computed according to equation (41).

* * *

Using temporal low-pass filtered images, run a series of dark measurements and a series of bright measurements. For each measurement, compute the spectrogram and determine σ_y.full. Use this quantity as σ_y.spat in the bright and σ_y.spat.dark in the dark measurement. Draw the following diagrams:

(h) √(σ²_y.spat − σ²_y.spat.dark) versus μ_y − μ_y.dark

(i) σ_y.spat.dark versus T_exp

Select a contiguous range of measurements where all diagrams show a sufficiently linear correspondence.36

* * *
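The recursive temporal averaging of equations (40)–(42) can be sketched as follows; this is a minimal illustration, not part of the standard, and the function and variable names are my own:

```python
import numpy as np

def temporal_lowpass(frames):
    """Recursive mean of a stream of frames, following eq. (40):
    mean_{k+1} = (k * mean_k + y_{k+1}) / (k + 1).
    Float accumulation avoids the rounding errors noted in footnote 35.
    """
    mean = None
    for k, frame in enumerate(frames):
        f = np.asarray(frame, dtype=float)
        mean = f.copy() if mean is None else (k * mean + f) / (k + 1)
    # Per eq. (41), the temporal variance of the averaged image falls as
    # sigma_N^2 = sigma_y.temp^2 / N; in practice one keeps averaging until
    # the stopping rule of eq. (42), sigma_y.full(N) >= 10 * sigma_N, holds.
    return mean
```

The recursion is algebraically identical to averaging all N frames at once, but needs only one frame of storage, which is why the standard phrases it as a per-pixel update on a live stream.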

34 This parameter indicates how well the camera / sensor matches the mathematical model.
35 To avoid rounding errors, the number format of ȳ_N must have sufficient resolution. A float value is recommended.
36 If this is not possible, the camera does not follow the model and cannot be qualified using this standard.


The variance coefficient of the spatial gain noise S²_g, or respectively its rms value S_g, is computed according to the mathematical model as:

    S²_g = ( σ²_y.spat − σ²_y.spat.dark ) / ( μ_y − μ_y.dark )²    (43)

which describes the linear correspondence in the diagram showing √(σ²_y.spat − σ²_y.spat.dark) versus μ_y − μ_y.dark. Match a line through the origin to the linear part of the data. The line's slope equals the rms value of the spatial gain noise S_g.

* * *

From the mathematical model, it follows that the variance of the spatial offset noise σ²_o should be constant and not dependent on the exposure time. Check that the data in the diagram showing σ_y.spat.dark versus T_exp forms a flat line. Compute the mean of the values in the diagram and square the result; this equals the variance of the spatial offset noise σ²_o.
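For illustration only (the function name and data layout are my own assumptions), the gain-noise slope fit of equation (43) and the offset-noise estimate can be sketched as:

```python
import numpy as np

def spatial_noise_parameters(mu_diff, var_spat, var_spat_dark, sigma_spat_dark):
    """Estimate spatial gain and offset noise from a measurement series.

    mu_diff         : mu_y - mu_y.dark per bright measurement [DN]
    var_spat        : sigma^2_y.spat per bright measurement [DN^2]
    var_spat_dark   : matching sigma^2_y.spat.dark values [DN^2]
    sigma_spat_dark : sigma_y.spat.dark over the dark series [DN]
    """
    x = np.asarray(mu_diff, dtype=float)
    y = np.sqrt(np.asarray(var_spat, dtype=float)
                - np.asarray(var_spat_dark, dtype=float))
    # Least-squares line through the origin: slope = sum(x*y)/sum(x*x) = S_g.
    S_g = np.sum(x * y) / np.sum(x * x)
    # Offset noise: square of the mean of the flat dark-noise curve [DN^2];
    # the standard publishes sigma_o referenced to electrons via the gain K.
    var_o = np.mean(np.asarray(sigma_spat_dark, dtype=float)) ** 2
    return S_g, var_o
```

Restricting the fit to the linear part of the data, as the standard requires, would amount to slicing the input arrays before calling this function.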

7.4 Publishing the Results

This section describes the information which must be published to characterize a camera according to this standard.

The published measurement data must be typical and/or guaranteed specification for the characterized camera type.37 The type of data must be clearly indicated. If only typical data is published, the definition of typical (sample selection; number of samples) must be indicated. As good practice the following convention is recommended: for all guaranteed specification data, typical, maximum and minimum values (respectively curves) are published. If for some parameters only one value or curve of data points is given, the data is typical and has to be acquired in accordance with the publisher's definition of "typical".

A camera's characteristics may change depending on the operating point described by settings such as gain, offset, digital shift, shading, etc. A camera manufacturer can publish multiple data sets for multiple operating points. Each data set must contain a complete description of the (fixed) camera settings during measurement as well as a complete set of model parameters. Some parameters may be left blank if correspondence with the model is not satisfied. The raw data and the diagrams, however, must still be plotted.

Use diagrams to show the measured data points.

7.4.1 Characterizing Temporal Noise and Sensitivity

The data described in this section can be published for multiple operating points. The following basic parameters are part of the mathematical model:

§ η(λ) : Total quantum efficiency in [%] for monochrome light versus wavelength of the light in [nm]. The FWHM of the illumination should be smaller than 50 nm. This data can be given as a table and/or graphic.

§ σ_d0 : Standard deviation of the temporal dark noise referenced to electrons for exposure time zero in [e−].

§ N_d30 : Dark current for a housing temperature of 30°C in [e−/s].

37 The type of data may vary for different parameters, e.g. guaranteed specification for most of the parameters and typical data for some measurements not easily done in production (e.g. η(λ)). It then has to be clearly indicated which data is of what nature.


§ k_d : Doubling temperature of the dark current in [°C].
§ 1/K : Inverse of the overall system gain in [e−/DN].

§ μ_e.sat : Saturation capacity referenced to electrons in [e−].

The following derived data is computed from the mathematical model using the basic parameters given above:

§ μ_p.min(λ) : Absolute sensitivity threshold in [p~] for monochrome light versus wavelength of the light in [nm].

§ SNR_y(μ_p) : Signal to noise ratio in [1] versus number of photons collected in a pixel during exposure time in [p~] for monochrome light with its wavelength given in [nm]. The wavelength should be near the maximum of the quantum efficiency. If this data is given as a diagram, it must be plotted with SNR_y on the y-axis using a double scale log2 [bit] / 20·log10 [dB] and μ_p on the x-axis using a single scale log2 [bit].

§ DYN_in = DYN_out : Dynamic range in [1].

The following raw measurement data is given in graphic form allowing the reader to estimate how well the camera follows the mathematical model. In all graphics, the linear part of the data used for estimating the parameters must be indicated.

§ μ_y(μ_p) : Mean gray value in [DN] versus number of photons collected in a pixel during exposure time in [p~].

§ σ²_y.temp(μ_p) : Variance of the temporal distribution of gray values in [DN²] versus number of photons collected in a pixel during exposure time in [p~].

§ μ_y.dark(T_exp) : Mean of the gray values' dark signal in [DN] versus exposure time in [s].

§ σ²_y.temp.dark(T_exp) : Variance of the gray values' temporal distribution in dark in [DN²] versus exposure time in [s].

§ [σ²_y.temp − σ²_y.temp.dark](μ_y − μ_y.dark) : Light induced variance of the temporal distribution of gray values in [DN²] versus light induced mean gray value in [DN].

§ [μ_y − μ_y.dark](μ_p) : Light induced mean gray value in [DN] versus the number of photons collected in a pixel during exposure time in [p~].

§ log2 N_d(ϑ − 30°C) : Logarithm to the base 2 of the dark current in [e−/s] versus deviation of the housing temperature from 30°C in [°C].

7.4.2 Characterizing Total and Spatial Noise

The data described in this section can be published for multiple operating points. The following basic parameters are part of the mathematical model:

§ σ_o : Standard deviation of the spatial offset noise referenced to electrons in [e−].

§ S_g : Standard deviation of the spatial gain noise in [%].


§ Ȳ(n)/(Kη) : Spectrogram referenced to photons in [p~] versus spatial frequency in [1/pixel] for no light, 50% saturation and 90% saturation. Indicate the non-whiteness coefficient F for each of the three graphs.

The following raw measurement data is given in graphic form allowing the reader to estimate how well the camera follows the mathematical model. In all graphics, the linear part of the data used for estimating the parameters must be indicated.

§ [√(σ²_y.spat − σ²_y.spat.dark)](μ_y − μ_y.dark) : Light induced standard deviation of the spatial noise in [DN] versus light induced mean of gray values in [DN].

§ σ_y.spat.dark(T_exp) : Standard deviation of the spatial dark noise in [DN] versus exposure time in [s].


8 References

[1] Dierks, Friedrich, "Sensitivity and Image Quality of Digital Cameras", download at http://www.baslerweb.com/
[2] Holst, Gerald, "CCD Arrays, Cameras, and Displays", JCD Publishing, 1998, ISBN 0-9640000-4-0
[3] Jähne, Bernd, "Practical Handbook on Image Processing for Scientific and Technical Applications", CRC Press, 2004
[4] Janesick, James R., "CCD Characterization Using the Photon Transfer Technique", Proc. SPIE Vol. 570, Solid State Imaging Arrays, K. Prettyjohns and E. Dereniak, Eds., pp. 7-19, 1985
[5] Janesick, James R., "Scientific Charge-Coupled Devices", SPIE Press Monograph Vol. PM83, ISBN 0-8194-3698-4
[6] Kammeyer, Karl; Kroschel, Kristian, "Digitale Signalverarbeitung", Teubner, Stuttgart, 1989, ISBN 3-519-06122-8
[7] Theuwissen, Albert J.P., "Solid-State Imaging with Charge-Coupled Devices", Kluwer Academic Publishers, 1995, ISBN 0-7923-3456-6
