
Image Processing System Applications: from Barbie Cams to the Space Telescope

Robert Kremens, Ph.D., Rochester Institute of Technology, Center for Imaging Science, Remote Sensing Group, and Pixelphysics, Inc.

May 2001

Outline

• Fundamentals of Image Processing for Digital Cameras

• Solid State Imagers - CCDs, CMOS, etc.

• System Requirements for Several Applications

– Break

• Hardware Analysis: The Jam Cam

• Hardware Analysis: The DC210

• Hardware Analysis: The Chandra Orbital X-ray Telescope

Fundamentals of Image Processing

Robert Kremens, May 2001

Image Processing Pipeline

[Pipeline figure: Analog Processing & A/D Conv. → White Balance → Scene Balance → Color Filter Array (CFA) Interpolation → Blurring* → Gamma Correction → RGB to YCC Conv. → Unsharp Masking (Edge Enh.) → Chroma Subsample → JPEG Compress → Finished File Format]

White Balance

• Usually performed on raw CFA data
• White balance attempts to adjust for variations in the illuminant (D65, Tungsten, etc.) by adjusting analog gain in the R, G, B channels
• What is white?
  – R = G = B = ~255

Implementing White Balance

• Method A - Predetermine the illuminant.
  – Acquire image.
    • Camcorder method - white cap pointed at source.
    • Can be a problem if scene has no white.
  – R, G and B adjustment values calculated to make them scale to ~255.
  – Subsequent images have incoming raw pixels multiplied by the adjustment values.
    • Can be done on the fly with hardware, or in analog stages (preferable).

[Figure: histogram of pixel count vs. level, 0 to 255]

Implementing White Balance (cont’d)

• Method B - Adjust each image after acquisition.
  – Find area with R~G~B at highest intensity.
    • Examine full image - time consuming.
    • Predetermined small image area - what if no white?
    • Subsampled image - faster.
  – Determine adjustment values for R, G and B.
  – If G isn’t high enough, there is no white.
    • Still possible to scale to gray by adjusting R and B?
    • Or just leave it alone.

• Known illumination removes much of the need for determining the adjustment parameters, since the color spectrum of the illumination source is known.

Digital Camera Image Processing

[Pipeline figure repeated; see the Image Processing Pipeline slide]

Scene Balance

• Adjusts the color balance of an image so that neutral areas are seen as neutral.
• Adjust color planes throughout their range; can use adjustment that is a function of pixel value.
• Unless subject is holding a neutral density chart, this is much more of an art than a science.

Implementing Scene Balance

• Look for areas of image with approximately equal R, G and B values.
  – Look at entire image - time consuming.
  – Look at sub-sampled image - can miss data.
  – Look at blocks of image.
    • Create areas of image by averaging over 20x20 pixels.
• Histogram and scene classification are basic methods - force color histogram to be ‘correct’.
• Adjustment (multiplication) values must not affect overall brightness of image.
  – If R and B need to be increased, G should be decreased also.
• Be careful of interaction w/ White Balance.
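The block-averaging approach can be sketched as below. The neutrality threshold and the brightness-preserving normalization are my own assumptions; only the 20x20 averaging comes from the slide.

```python
import numpy as np

def block_averages(img, block=20):
    """Average an (H, W, 3) float image over block x block tiles, per the
    slide's 20x20 suggestion, to get a grid of candidate areas."""
    h, w, _ = img.shape
    h, w = h - h % block, w - w % block          # drop any ragged edge
    tiles = img[:h, :w].reshape(h // block, block, w // block, block, 3)
    return tiles.mean(axis=(1, 3))

def scene_balance_gains(img, block=20):
    """Derive R, G, B gains from near-neutral blocks (sketch)."""
    avgs = block_averages(img, block).reshape(-1, 3)
    spread = avgs.max(axis=1) - avgs.min(axis=1)
    neutral = avgs[spread < 0.3 * avgs.mean(axis=1)]   # assumed threshold
    if len(neutral) == 0:
        return np.ones(3)                        # nothing neutral: no change
    ref = neutral.mean(axis=0)
    gains = ref.mean() / ref                     # pull channels to common mean
    return gains / gains.mean()                  # keep overall brightness fixed
```

The final normalization implements the slide's rule that the adjustment must not change overall brightness: if R and B go up, G comes down.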

Digital Camera Image Processing

[Pipeline figure repeated; see the Image Processing Pipeline slide]

CFA Interpolation

• CFA interpolation creates 3 separate bit planes for each pixel location of the image.

• Weighted averages are typical, but algorithms vary depending on filter array patterns.
• Can be clever and use adaptive algorithms to reduce sub-sampling effects and undesirable artifacts (‘zippers’).

Implementing CFA Interpolation (Median method)

• Mean Interpolation (Green)
  – G11 = (G01 + G10 + G12 + G21)/4
• Median Interpolation (Green)
  – G11 = [(G01 + G10 + G12 + G21) - MAX(G01, G10, G12, G21) - MIN(G01, G10, G12, G21)]/2

[Figure: 4x4 pixel index grid, positions 00-03 through 30-33]

• Red and Blue may be interpolated differently than green.
  – R11 = (R00 + R02 + R20 + R22)/4
  – B11 = B11
  – R12 = (R02 + R22)/2
  – B12 = (B11 + B13)/2
  – B22 = (B11 + B13 + B31 + B33)/4
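The two green-channel formulas differ only in how they treat outliers, which a scalar sketch makes plain (the function name is illustrative; the formulas are the slide's):

```python
def interp_green(n, s, e, w, method="mean"):
    """Fill in a missing green sample at a red/blue site from its four
    green neighbours, per the slide's mean and median formulas."""
    if method == "mean":
        return (n + s + e + w) / 4
    # Median method: discard the largest and smallest neighbour,
    # then average the two remaining (middle) values.
    return (n + s + e + w - max(n, s, e, w) - min(n, s, e, w)) / 2
```

With smooth data the two agree; across an edge, where one neighbour is an outlier, the median method ignores it instead of smearing it into the result.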

Some Observations on CFA Interpolation

• The math is not complicated (adds, compares and shifts).
  – Data re-organization is key to speed.
    • Barrel shifters and/or byte extraction instructions.
    • MMX style pack and unpack.
  – SIMD instructions (such as in MMX) can greatly accelerate math.
  – A good candidate for hardware acceleration.
    • Arithmetic compares, adders and shifters are easy to implement.
• Interpolation schemes can cause image artifacts.
  – Edges, corners and stripes present a problem.

Color Spaces and Standards

• How is the image represented in R,G,B space?
  – Attempt to maximize color gamut and psycho-visual quality while minimizing non-linear effects.
• CCIR 601 - now ITU-R BT.601
  – Standard Y’CrCb (Y’Cr’Cb’), 4:2:2 subsampling
    • Y excursion 0 - 219, offset = 16 (Y = 16 to 235)
    • Cx excursion +/- 112, offset = 128 (Cx = 16 to 240)
  – No assumptions about white point
• CCIR 709 - now ITU-R BT.709
  – HDTV Studio Standard, Y’CrCb Color Space
    • Y excursion 0 - 219, offset = 16 (Y = 16 to 235)
    • Cx excursion +/- 112, offset = 128 (Cx = 16 to 240)
  – Specifies White Point (x = .3127, y = .3290, z = .3583) (D65)
  – Specifies dark viewing conditions.

Color Spaces and Standards (cont’d)

• sRGB (called NIFRGB by Kodak)
  – Default color space for HP and Microsoft
  – Same as CCIR 709 except
    • Specifies DIM viewing environment
    • Full 0 - 255 encoding of YCrCb values
• Photo YCC
  – Also uses CCIR 709
    • White is 189 instead of 219
    • Results in RGB values from 0 - 346 when reconverted
    • Chroma channels are unbalanced (supposedly follows distribution of colors in a real scene)

Digital Camera Image Processing

[Pipeline figure repeated; see the Image Processing Pipeline slide]

Gamma Correction

• Gamma describes the nonlinear response of a display device (CRT) to an applied voltage.

L'(709) = 4.5 L                    for L <= 0.018
L'(709) = 1.099 L^0.45 - 0.099     for 0.018 < L <= 1

[Figure: video signal L' vs. scene luminance L transfer curve]

• RGB values must be corrected for Gamma before they are transformed into a video space.
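The Rec. 709 transfer function is a two-piece curve, trivial to express directly (a sketch of the standard piecewise formula; normalized luminance in, normalized video signal out):

```python
def gamma_709(L):
    """Rec. 709 opto-electronic transfer function: a linear segment near
    black, a 0.45-exponent power law above the 0.018 break point."""
    if L <= 0.018:
        return 4.5 * L
    return 1.099 * L ** 0.45 - 0.099
```

The constants are chosen so the two pieces meet at L = 0.018 and the curve reaches exactly 1.0 at L = 1.0.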

Digital Camera Image Processing

[Pipeline figure repeated; see the Image Processing Pipeline slide]

R’G’B’ to Y’CrCb Conversion

• Conversion to YCrCb occurs for 2 reasons:
  – Chroma can be subsampled.
    • Eye is more responsive to intensity changes (G) than color changes (R,B)
    • Can compress color channels (R,B) for smaller stored image
  – Video output potential.
• Well known conversion matrices convert Gamma Corrected RGB to Y’CrCb.

  | Y' |   |  0.257   0.504   0.098 |   | R' |   |  0* |
  | Cr | = |  0.439  -0.368  -0.071 | · | G' | + | 128 |
  | Cb |   | -0.148  -0.291   0.439 |   | B' |   | 128 |

  * 0 for UPF format, 16 for CCIR 601
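The matrix can be exercised directly to confirm the excursions quoted on the color-spaces slide (a sketch; function name and the `y_offset` parameter are illustrative):

```python
import numpy as np

# CCIR 601 matrix from the slide; input is gamma-corrected R'G'B' in 0-255.
M = np.array([[ 0.257,  0.504,  0.098],
              [ 0.439, -0.368, -0.071],
              [-0.148, -0.291,  0.439]])

def rgb_to_ycrcb(rgb, y_offset=16):
    """Y' offset is 16 for CCIR 601 (0 for the UPF variant noted on the
    slide); both chroma channels are offset by 128."""
    y, cr, cb = M @ np.asarray(rgb, dtype=float)
    return y + y_offset, cr + 128.0, cb + 128.0
```

White (255, 255, 255) lands at the top of the 16-235 luma range with both chroma channels at their neutral value of 128, as the standard requires.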

Implementing RGB - YCrCb Conversion

• Color space conversion can be implemented in several ways.
  – Straight software implementation
    • Flexible, but inefficient
  – Hardware assist
    • Single cycle MAC - 9 clock cycles
    • SIMD instructions - 3 clock cycles
  – Straight hardware
    • Fast - 3 clock cycles
    • Costly
  – 3 Dimensional Lookup Tables

Implementing Nonlinear Color Space Conversion - 3D Lookup Tables

• CMY Space is very device dependent and the conversions from RGB, L*a*b* or YCrCb are not linear.

[Figure: RGB color cube with corners labeled (255,0,0), (0,255,0), (0,0,255) and (255,255,255)]

Implementing Nonlinear Color Space Conversion - 3D Lookup Tables

• A 3-D Lookup Table defines the conversion for specific colors.
• A fully populated table would have over 16M entries (256x256x256).
• A subset of entries is chosen to populate the table.

[Figure: RGB color cube with a sparse lattice of table entries]

Implementing Nonlinear Color Space Conversion - 3D Lookup Tables

• The actual values for the conversion are interpolated using various mechanisms.
  – Tri-linear interpolation (10 mult/div, 7 add/sub)
  – Prism interpolation (8 mult/div, 5 add/sub)
  – Tetrahedral interpolation (6 mult/div, 3 add/sub)
  – Pyramid interpolation (7 mult/div, 4 add/sub)
  – Fuzzy Logic methods

• Simple table example:

  C,M,Y          Y,Cr,Cb
  0,0,0          219, 120.2, 128.0
  0,0,128        192, 120.5, 0.8
  0,0,255        .
  0,128,0        .
  0,128,128      .
  …              .
  255,255,255    0.2, 120.5, 127
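A straightforward (if not operation-count-optimal) version of tri-linear lookup is sketched below; the loop form costs more multiplies than the factored 10-multiply version the slide counts, but computes the same weighted corner blend. Names and the grid layout are illustrative.

```python
import numpy as np

def trilinear_lookup(lut, grid, r, g, b):
    """Tri-linear interpolation in a sparse 3-D color LUT.
    `lut` has shape (n, n, n, 3); `grid` holds the n sample positions
    along each axis (same positions on all three axes, an assumption)."""
    def locate(x):
        i = np.searchsorted(grid, x) - 1          # cell containing x
        i = int(np.clip(i, 0, len(grid) - 2))
        f = (x - grid[i]) / (grid[i + 1] - grid[i])  # fraction inside cell
        return i, f
    (i, fr), (j, fg), (k, fb) = locate(r), locate(g), locate(b)
    out = np.zeros(3)
    for di in (0, 1):                             # blend the 8 cell corners
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((fr if di else 1 - fr) *
                     (fg if dj else 1 - fg) *
                     (fb if dk else 1 - fb))
                out += w * lut[i + di, j + dj, k + dk]
    return out
```

An identity table reproduces its inputs exactly, which is a handy sanity check when populating a real CMY table.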

Digital Camera Image Processing

[Pipeline figure repeated; see the Image Processing Pipeline slide]

• The human eye is much more sensitive to intensity variations than color variations.
• Some color information can be discarded without loss of perceived quality.

[Figure: luminance (Y, RS-170) plus chrominance (I & Q) combine to form the full-color image]

Chroma Subsampling

• 4:2:2 and 4:2:0 are typical subsampling ratios.
  – 4:2:2 is typically used in video.
  – 4:2:0 is prevalent in still imaging.

[Figure: sampling grids for 4:4:4 (a Cb and Cr sample for every Y), 4:2:2 (Cb and Cr at half the horizontal Y rate) and 4:2:0 (Cb and Cr at half rate both horizontally and vertically)]

What about other image processing?

• In consumer cameras, a large amount of post-capture processing takes place to enhance the quality of the image.
• Blurring and subsequent unsharp-masking are common image improvements in processing chains.
• Blurring with a 3x3 convolution kernel after CFA interpolation reduces artifacts (Moire patterns, ‘zippers’, color banding).
• Implementation of 3x3 convolution with various kernels is well known.
• Unsharp masking:
  – Add an edge-enhanced image to the original image to sharpen lines: R = O + aS, where on a pixel-by-pixel basis the original image (O) is added to a fraction (a) of a sharpened or edge-extracted image (S).
  – Requires sharpened copy of image - can be performed in 3-row blocks for a 3x3 sharpening kernel.
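The R = O + aS step can be sketched directly; the 3x3 Laplacian used here as the edge extractor is an assumed example, not the slide's specific kernel, and the unoptimized loop stands in for the 3-row block scheme a camera would use:

```python
import numpy as np

def unsharp_mask(img, a=0.5):
    """R = O + a*S: add a fraction `a` of an edge-extracted image S to
    the original grayscale image O. Kernel choice is an assumption."""
    k = np.array([[0, -1, 0],
                  [-1, 4, -1],
                  [0, -1, 0]], dtype=float)       # Laplacian edge extractor
    h, w = img.shape
    edges = np.zeros((h, w))
    for y in range(1, h - 1):                     # leave a 1-pixel border
        for x in range(1, w - 1):
            edges[y, x] = (k * img[y - 1:y + 2, x - 1:x + 2]).sum()
    return np.clip(img + a * edges, 0, 255)
```

Flat regions pass through unchanged (the Laplacian is zero there), while pixels on either side of an edge are pushed apart, which is exactly the line-sharpening effect the slide describes.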

What is the future direction of the digital camera processing chain?

• Increased sensor size will require higher computing power to maintain image quality.
• Multiple output CCDs can use parallel processing for some image functions.
• Smaller pixel size and decreased latitude will require more accurate image processing to achieve accurate images.
• Sensor development will halt around 3-4 Mpixels barring surprises in process development.
• Movie modes (MPEG or AVI stream) may be desirable as storage devices increase in capacity.
• Appearance of high capacity random access magnetic recording, video, audio, HDTV, high speed radio packet networks could usher in a new class of devices.

Modern Sensor Characteristics

Solid State Imager Basics

Robert Kremens, May 2001

Outline

• What Are The Basic Characteristics of Solid State Imagers? • What are the Different Solid State Imager Implementations? • Where are We Headed in The Future?

Imager Characteristics: Pixel Size & Pitch

• Smaller pixels create marketing trade-offs
  – More pixels per specific die size (increased resolution)
  – Smaller die size for same number of pixels (lower cost)
• Creating smaller pixels is an engineering tradeoff
  – Smaller pixels have less sensitivity (total volume of charge collecting depletion region is smaller)
  – For some transport mechanisms, smaller pixels will be noisier.
  – Unless gate metalization is also shrunk, sensitivity will suffer (more of pixel is covered by metal)
• Current commercially available minimum pixel size is 3.6 µm (obtained in several ‘consumer’ 2-3 megapixel CCDs)

Imager Characteristics: Fill Factor

• Fill factor (aperture ratio) is the ratio of usable sensor area to the total pixel area.

[Figure: pixel pitch, total pixel area and active area]

Impediments to a good fill factor:
- Gate metalization
- Interline shift registers
- Active pixel circuitry
- Anti-blooming structures

• Current Fill Factors range from 25% up to near 100% depending mainly on the transfer technology used.

Imager Characteristics: Dark Current

• Dark current is thermally induced charge carriers generated by impurities and silicon defects.
• It manifests itself as a DC offset, which in turn lowers the dynamic range of the device.
• Dark current is non-uniform from pixel to pixel, resulting in fixed pattern noise - only continuously clocked CCDs avoid this effect.

Imager Characteristics: Noise and Defects

• Fixed Pattern Noise - constant “speckle” under uniform illumination conditions
  – Dependent on transfer method and usage
  – Not easily removed by process or design changes
• Thermal Noise - shot noise
  – Independent of transfer method (silicon is silicon)
• Readout noise - clocking noise, active circuitry noise
  – Shading
• Sensor Defects
  – Bad pixels, missing columns or rows, non-uniformity of response, mis-aligned color filter array
• Reset noise - (kTC noise) discharge resistance thermal noise
  – Not easily removed by process or design changes

Imager Characteristics: Reset and Fixed Pattern Noise Removal

• Correlated Double Sampling (CDS)

[Block diagram: the imager output feeds two sample-and-hold circuits; SH1 captures the reset level, SH2 captures the signal level, and a difference amplifier produces the output]

• Delayed Data Sampling (DDS)

[Block diagram: the imager output is differenced against a delayed copy of itself]
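Why CDS removes reset noise can be shown with a tiny simulation: the kTC offset is common to both samples and cancels in the difference, while uncorrelated amplifier noise does not. All noise magnitudes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def read_pixel_with_cds(signal, n=10000):
    """Simulate n reads of one pixel through a CDS stage."""
    reset_noise = rng.normal(0.0, 5.0, n)    # kTC reset noise, per read
    amp1 = rng.normal(0.0, 0.5, n)           # amplifier noise, sample 1
    amp2 = rng.normal(0.0, 0.5, n)           # amplifier noise, sample 2
    sh1 = reset_noise + amp1                 # SH1: reset level
    sh2 = reset_noise + signal + amp2        # SH2: signal level
    return sh2 - sh1                         # CDS output: reset noise gone
```

The residual noise in the output is only the (root-sum-square) amplifier noise of the two samples, an order of magnitude below the simulated reset noise.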

Imager Characteristics: Readout Speed

• Measured in pixels/sec - range 50K to 10M
• Integration Time
  – Too little - affects sensitivity (not enough electrons stored)
  – Too much - affects noise floor (thermal noise also accumulates)
• Read Out Rate
  – Too fast - affects charge transfer, burdens clock driver circuits
  – Too slow - increased integration time, can’t output standard video
• Quick Calculation
  – Want VGA (640x480, ~300 Kpixels) at 30 frames/sec
  – Assume 4 phase horizontal and 4 phase vertical clocks
  – 640 x 480 x 30 = 9.2 MHz pixel rate (clock generator would need 36 MHz)

Imager Characteristics: Dynamic Range

• Dynamic range is the pixel well capacity (in electrons) divided by the r.m.s. noise floor (in electrons).
• Sometimes expressed in dB: 8 bits = 48.2 dB, 10 bits = 60.2 dB
• Dynamic range is related to both processing (fixed) and voltage bias levels (easily screwed up)
• Typical Numbers
  – ‘Full well’ capacity 30,000 - 2,000,000 e-
  – Noise <5 - 300 e-
  – Dynamic Range 40 - 100 dB

Imager Characteristics: Blooming & Smearing

• Blooming occurs when a charge collector overflows into its neighbor.
  – More of a problem with CCD transfer mechanisms
  – Anti-blooming structures shunt excess charge away from active area

[Figure: excess charge spilling from a saturated pixel well into its neighbor]
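The dB figures quoted on the dynamic-range slide follow from 20·log10 of the well-capacity-to-noise ratio, which a two-line helper verifies:

```python
import math

def dynamic_range_db(full_well_e, noise_e):
    """Dynamic range in dB = 20*log10(full well capacity / rms noise),
    reproducing the slide's figures (8 bits -> 48.2 dB, 10 bits -> 60.2 dB)."""
    return 20.0 * math.log10(full_well_e / noise_e)
```

A sensor with 30,000 e- full well and 5 e- noise lands at about 75 dB, inside the 40-100 dB typical range quoted above.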

• Smearing is the effect of continued integration during the read out phase.
  – Can occur with all un-shuttered sensor/camera designs

Charge Coupled Devices (CCD)

• CCDs are bucket brigade devices
  – Multiphase clocks transfer the charge

[Figure: four-phase clocking (F1-F4) moving charge packets along one pixel at a time]

• Blooming is a problem with CCDs; anti-blooming structures are added, which increases processing complexity and cost.

Full Frame CCDs

• Full Frame CCDs transfer each row incrementally.

• Full Frame CCDs exhibit significant smearing if they are not shuttered during readout.
• A Full Frame CCD can have an excellent fill factor.

Frame Transfer CCDs

• Frame Transfer CCDs combat smearing by rapidly transferring pixels through the active area.

• Smearing is still an issue.
• Extra silicon is needed.
• Fill factor can be very good.
• Next image can be acquired while previous is being read out.

[Figure: frame-transfer CCD with shielded storage area]

Interline Transfer CCDs

• Interline Transfer CCDs have transfer shift registers along each column.

[Figure: interline-transfer CCD with shielded column shift registers]

• Smearing is eliminated at the cost of reduced Fill Factor.
• A second image can be acquired during readout.

Progressive Scan CCDs

• Camcorders and video are a significant market driver for CCDs - hence many CCDs have interlaced readout.
• Progressive scan devices are non-interlaced, but can be frame transfer, interline transfer or any other transfer method.

[Figure: interlaced vs. progressive scan readout]

Charge Injection Devices (CID)

• Charge Injection Devices have individually addressable pixels and non-destructive readout.
• CIDs do not have blooming problems.
• Fixed pattern noise is low.
• CIDs are RAD hard.
• Electronic windowing and zoom capabilities.

[Figure: CID array with horizontal and vertical scan addressing]

Passive Pixel CMOS

• Passive Pixel CMOS Sensors are very low cost.
• Fixed Pattern noise is very high, dynamic range is low.
• VERY low power.
• Small output signal level - charge is placed on entire row for readout (many pF).
• Electronic windowing and zooming.
• Fill Factor can be nearly 100%.
• On board clocking can greatly simplify interface.
• Not prone to blooming.

[Figure: passive-pixel CMOS array with horizontal and vertical scan addressing]

Active Pixel CMOS

• Active pixel CMOS sensors trade off Fill Factor for individual pixel amplification and reduced noise.
• Current Fill Factor ~35%.
• Same advantages as passive CMOS, but significantly less noise.
• On board electronics - amps, A/Ds, Processing!
• Excellent dynamic range.

[Figure: active-pixel CMOS array with horizontal and vertical scan addressing]

Active Pixel CMOS (cont’d)

• There are two approaches to solving the Active CMOS Fill Factor problem.
  – Make the array bigger. Causes problems with the optics, making image field wider, bad for fixed focus devices.
  – Fabricate microlenses over each pixel.

  – Microlenses have been manufactured, but are difficult to manufacture in high volume and add a significant cost.
  – Anti-reflection coating these highly curved surfaces is difficult.

Side by Side Comparison of the visible sensor technologies

                       CCD     CID    Passive Pixel   Active Pixel
                                      CMOS            CMOS
Pixel Size             ++      +      ++              +
Readout Noise          ++      --     --              +
Fill Factor            ++(1)   -      ++              -
Dynamic Range          +       -      -               +
RAD Hardness           --      ++     ++              +
Single Supply          --      -      ++              ++
System Power           --      +      ++              ++
System Volume          -       +      ++              ++
System Integration     --      --     ++              ++
System Noise           +       --     -               ++
System Cost            --      --     ++              +
Ease of Use            --      --     ++              ++
Electronic Shutter     +       -      ++              ++
Electronic Windowing   --      ++     ++              ++
Electronic Zoom        --      ++     ++              ++

(1) Frame transfer can be very good, interline transfer is poor.

CCDs are radiation soft, but otherwise tend to be the highest performance optical array sensors

CCD family deficiencies: radiation soft; low QE on shuttered devices.

CCD
  Strengths: Mature technology, mass production lessons; Large size available (4K X 4K); Very low noise; Widest experience base
  Weaknesses: Radiation soft (20 - 30 krad); Capacitive device, high power consumption (W to many W); Cannot operate cryo; Specialized production lines required
Full frame CCD
  Strength: Lowest noise devices
  Weakness: Smear with moving objects
Backthinned full frame CCD
  Strengths: Highest QE (~80%) of any visible detector; Very low noise (~2e-)
  Weakness: Difficult process, low yield

Interline CCD
  Strength: Smear eliminated
  Weakness: Low fill factor due to added structure
Frame transfer CCD
  Strengths: Smear eliminated; High frame rates possible
  Weakness: Up to 1/2 silicon area wasted in storage register
Interline CCD with microlens
  Strength: Improved fill factor (~60%)
  Weaknesses: Microlenses radiation soft (plastic); Microlens works best with high f/ system

CMOS sensors have not been manufactured in large sizes and lag CCDs in image performance

CMOS family deficiencies: Small devices (1K X 1K); Low QE, low fill factor; Noisy (read and fixed pattern); Limited shutter availability.

CMOS
  Strengths: Many lessons from CMOS manufacturing; Becoming highest volume sensor; On-chip integration with amps, CDS, A/D; Very low power (10 - 100 mW); Very high frame rates possible (60 MHz pixel clock X 8 outputs); Radiation hard (~200 - 1000 krad); Single power supply
  Weaknesses: High fixed pattern noise; High read noise; Low fill factor; Poor sensitivity due to gate structure on top of pixel; Operating parameters change with radiation exposure; Multiple outputs complicate electronics package; No shutter capability - smear
CMOS passive pixel
  Strengths: Simplest structure CMOS device; High sensitivity - large fill factor 70-80%
CMOS amp per pixel (APS)
  Strength: High signal output, lower read noise
  Weakness: Fixed pattern noise problems due to amplifier mismatch
CMOS APS with microlens
  Strength: Improved fill factor (~40 - 50%) for higher sensitivity
  Weakness: Microlens works best with high f/ system
CMOS amp per row (APR)
  Strength: Higher signal output than PPS, simpler, better fill factor (~50%), less fixed pattern noise
  Weakness: Signal still small because of large column capacitance

Several CMOS alternatives exist

CID
  Strengths: Simple structure eases manufacturing; Extendable to large arrays; Very radiation hard (~1000 krad); Random addressable; High readout speed
  Weaknesses: Low signal output; High fixed pattern noise
Hybrid (CMOS bonded to detector panel)
  Strengths: High readout speed; 100% fill factor, high QE; Multiple wavelengths by altering photosensor plane
  Weaknesses: Large scale devices only recently produced; Difficult to butt?

CMOS sensors have advantages in radiation tolerance and readout speed

• Natural extension of the original Reticon readout arrays.
• Each pixel may have active circuitry to amplify and buffer the signal. This is very desirable from a S/N standpoint.
• No charge transfer across the sensor.
• APS / CMOS can be cooled to low temperatures to reduce dark current - no CTE limitations. Passive cooling in space provides ample noise reduction.
• Since there is no long-range charge transport, dislocations from massive charged particles do not degrade sensor performance.
• CID (w/o active pixel) has high read noise on the order of 300 e-. Increased sensor size will increase read noise (due to column/row capacitance increase). The CID is a passive pixel sensor.

CMOS sensors still have process-related image quality problems

• Widely variable, poor sensitivity across the array in many early CMOS sensors. CMOS sensors got a ‘bad rap’.
• Non-uniformity of response and resultant fixed pattern noise due to differences in size of sensing and storage structures.
• Recent designs with smaller features may allow improved fill factor, spectral response and sensitivity for a given pixel pitch (Motorola and others).
  – 0.35 µm design rules currently used in US on 6 and 8 inch wafers.
  – 7.8 µm pixel pitch with pinned photodiode.
  – Electronic exposure including rolling mode (usual for CMOS) and fully shuttered mode.
  – QE ~ 22%, fill factor 35%

CMOS sensor performance issues can be solved today with more transistors per pixel

• 5-transistor per pixel sensors have similar noise performance to CCDs, but (presently) low fill factor.
• Need larger pixels (20 µm) and smaller feature size (0.18 µm design rules).

[Figure: 4-transistor-per-pixel cell]

CMOS noise performance may not be an issue

• Most noisy sensors are of the ‘commercial’ variety - computer cams, Barbie cameras, etc.
• Read noise can be reduced, and fixed pattern noise corrected for post-capture.
• When feature size is reduced, design must pay attention to maximum signal from pixel to avoid overload.
• Also needed: a plot of number of transistors vs. fill factor (or QE), parameterized for pixel size.

CCD vs. Active Pixel CMOS

• CCD imagers are still the reigning champion in everything except low cost, low quality (toys & security) and RAD hard (space) applications.
• Active pixel imagers are a revolution in progress.
  – Powerhouses in the imaging, microprocessor and memory fields are all throwing gobs of money into Active Pixel R&D.
• CCDs may soon give way to AP CMOS imagers in the consumer arena, but CCDs will remain dominant in high end scientific applications.
  – What will happen to all those specialized CCD fabs when the consumer market dries up?

System Requirements: Consumer Cameras to the Space Telescope

Bob Kremens, May 2001

The electronic image systems in use today present a conflicting set of design parameters for sensors and electronics

• Consumer Cameras (Sony Mavica series, Kodak DCS 2XX, etc.)
  – Low cost
  – Large number of relatively small pixels
    • Need high resolution to rival film
    • Small pixels OK for 8 bit dynamic range (more DR desirable, but…)
  – Low power for long battery life
  – Rapid readout for shortened click-to-click time

• Professional digital cameras (Kodak DCS660, Fuji …)
  – Cost not as significant an issue
  – Very large number of larger pixels
    • Need high resolution (to match high end 35mm and 70mm formats) and high dynamic range
    • 12 bit color planes (‘36 bit images’) are de facto standard
    • Need high readout speed to suit pro shooting style

Scientific applications are not cost sensitive but present other sets of challenges

• Scientific applications
  – Cost usually not an issue
  – Require highest dynamic range and lowest noise
  – Astronomical (ground and space based) imaging
    • Huge number of pixels desirable (replace 20-30 cm film)
    • Extremely low noise for extended exposures
      – Cooling to LN2 temperatures or less ‘expected’
    • Wide, flat spectral response
    • Non-destructive readout
      – Allows longer exposures on dimmer objects
    • Freedom from blooming and adjacent pixel overload effects
      – Bright objects often next to dim objects of interest
  – Radiation hardness necessary for space applications

The electronic image systems in use today present a conflicting set of design parameters for sensors and electronics (2)

• Remote Sensing applications (satellites and aircraft)
  – Can use linear or array sensors
    • Aircraft or satellite moves, can scan the area like a ‘pushbroom’
  – Require sensitivity to reduce constraints on optics, and stability to avoid constant re-calibration
  – May require radiation hardness in space applications

Some example sensing systems: Astronomy

Ground based astronomy: Lincoln Laboratory 2K X 4K 3-side edge buttable CCD
• Can be made into an 8K X 2NK array (N = 1 to …)
• Backthinned (no front-surface structure) for ultra high quantum efficiency (~80%)
• Optimized for low noise and slow speed readout with 16 bit digitizer systems

Some example sensing systems: Astronomy

Sloan Digital Sky Survey instrument

Uses 30 2K x 2K SITe backthinned arrays.

Arrays cooled to -80 C.

That’s 126 megapixels per frame! Some example sensing systems: Professional camera back

Phase One LightPhase

16 Mpixel large pixel Phillips CCD

14 bit digitizer for ~42 bit ‘deep’ images

Firewire IEEE 1394 readout to PC or Mac

16 megapixel camera back

System Hardware: A Digital Consumer Camera - The JamCam

Bob Kremens, May 2001

The KB-Gear JamCam versions have sold close to millions (?) of units

• VGA resolution CMOS sensor camera
• RS-232 and USB data communications
• Fixed focus, fixed iris, no shutter
• Simple ‘camera’ type interface
• TWAIN driver, MS Picture-It!, ArcSoft PhotoFantasy bundled software

What are the basic components of a digital camera?

• Imaging Optics - lens or mirror and mount
• Image Sensor
• Power Supply - potentially 7 - 9 power supplies for CCD camera with flash and LCD/backlighter
• Processing electronics - CPU / DSP, memory, program store
• Exposure and focus control - shutter, iris
• (Removable media)
• Camera communication - serial, USB, IrDA, Firewire, radio

The future market and profit leaders may not be today’s camera/electronics companies

• Fine details of image quality (where great IP is required) are not necessarily an issue with low-end cameras.
• All present low-end cameras have ‘acceptable’ images for the use intended.
• These cameras have a fundamentally different architecture from previous ‘high-end’ cameras.
• The popularity of these cameras modifies some fundamental digital camera ‘truths’ that have been held since 1995.
• Ample opportunity for other camera ideas/modalities to come to dominate the electronic marketplace.
• A 1.3 Mpixel camera produces quite acceptable 4” X 6” prints. (Why the need for more pixels?)

The present camera architecture makes some assumptions which may be invalid

• Assumptions
  – Removable storage, in-camera processing, finished files, image review on camera.

[Block diagram: lens/focus/aperture assembly and image sensor feed a timing generator and analog front end; a microprocessor with image processing ASICs, system ROM and system RAM sits on a memory bus with USB, RS-232 and RS-170 video interfaces, motor control, an image display LCD, and removable flash media for image storage]

The architecture of the JamCam is similar to older digital cameras produced by first-line camera companies

[Block diagram: fixed focus, fixed aperture lens assembly on an integrated CMOS sensor with digital output; a microprocessor with embedded USB/RS-232 controller, SRAM, system ROM, a fixed Flash ROM image store, a status display LCD, and a USB physical driver]

• Off-camera image processing, fixed storage, no image display.

The JamCam provides a totally acceptable camera experience - and this is all the consumer asks for!

• Extremely simple to use
  – Shutter button, mode switch
  – Mode switch selects resolution
  – Modes:
    • Capture (normal power-on)
    • Delete (all)
    • Set resolution (VGA, 1/2 VGA, 1/4 VGA)
  – Display indicates number of remaining pictures a la throw-away camera
  – RS232 and USB interfaces
  – Fast turn on (~1.5 seconds)
  – Fast image store (~1.5 - 7 seconds)
  – ‘Infinite’ battery life with 9V cell

The computer-centric JamCam model has the PC as the center of entertainment.

• Computer-centric world has home CPU performing functions of DVD, MP3, still image player, audio/radio/CD player, etc.
• Single I/O device moves to living room with display unit (HDTV).
• Uses high CPU power of PC device to perform entertainment and web tasks simultaneously.
• Radio/IR links to cameras, remote I/O devices.

In the present digital camera model, the portable device is (or can be) an ‘entertainment center’.

• The camera, as a powerful portable computing device with large local memory, can also be a:
  – Full featured still camera
  – Audio player (MP3)
  – Video player (short MPEGs)
  – Radio
  – Cell/radio telephone
  – Audio recorder
• Will the future consumer desire an all-in-one portable ‘entertainment center’?

The JamCam is composed of six major components

• The CMOS sensor minimizes the number of analog front-end components.
• Flash ROM, DRAM and ROM memory components couple with the processor.
• One Atmel microprocessor component provides ample computing power for this size sensor and the limited processing done on-board.
• Two PLD ‘glue’ parts interface the LCD display and pushbuttons.
• Double sided SMT PCB!!
• PCB-mounted image sensor and lens assembly.

The JamCam is memory rich and processor-poor

• 2 MByte Flash image storage
• 2 MByte ROM
• 256 KByte static RAM
• One low power, 8 bit RISC processor with embedded USB and other interface functionality

Limiting the number of mechanical components increases ruggedness, reduces power consumption and cost

• Shutter- and iris-less lens assembly
• Adequate exposure range is provided by electronic shuttering of sensor
• This system is similar to a primitive ‘box’ camera, with some exceptions:
  – Short lens provides large depth of field
  – Large exposure latitude provided by wide range electronic shutter on sensor
• Image acquisition limitations:
  – Fixed depth of field
    • Not an issue in consumer cameras
  – Limited focus range
    • 2’ - infinity: similar to best AF film point and shoot cameras
  – Limited exposure range

The component count in the JamCam may be nearing a minimum

• Double sided PCB
• One ‘major’ CPU component
• Processing time (to compressed finished file): 38.5 kpixels/sec.

Component     Vendor    Function                                      Number
AT43320       Atmel     AVR RISC Core with USB Hub and Embedded USB   1
AT49LV1614    Atmel     1M X 16 sectored flash RAM                    1
AT27BV1024    Atmel     1M X 16 non-window EPROM                      1
ATF16LV8      Atmel     200 gate EPLD                                 2
TC55V2001     Toshiba   2 Mbit (256K X 8) SRAM                        1
DS14C232                RS232 driver                                  1
74HC273                 Glue                                          4
74HC4040                Glue                                          1
74HC00                  Glue                                          1
74HC244                 Glue                                          1
CD4013                  Glue                                          1
LCD           ?         Display                                       1
Resistors                                                             22
Capacitors                                                            29
Transistors                                                           4
Gates                   Individual gates                              2
12 MHz Xtal                                                           1
Switches                SPST                                          2
USB conn                USB connector                                 1
1/8" Mini               1/8" mini stereo phone                        1
HDCS-2000     HP        CMOS 640 X 480 image sensor                   1
TOTAL                                                                 78

System Hardware: A Mid-Range Consumer Digital Camera - Kodak DC210

Bob Kremens, May 2001

The Kodak DC210 was a ‘second generation’ 1 Mpixel digital camera that sold well

2:1, f/4 auto-focus zoom lens

Powerful flash unit

Flash and ambient light exposure sensors

Consumer digital cameras integrate optics, image sensors and image processing electronics

• Extremely compact three-dimensional packaging

• Flex board, double side SMT essential for density on mid-range and high-end cameras

• Separation of boards according to function is common to reduce noise
  – Flash unit almost always separate
  – CCD/sensor PCB attaches to rear of optics package with flex cable
  – LCD and backlighter (discharge lamp) separate

What are the basic components of a digital camera?

• Imaging Optics - lens or mirror and mount
• Image Sensor
• Power Supply - potentially 7 - 9 power supplies for CCD camera with flash and LCD/backlighter
• Processing electronics - CPU / DSP, memory, program store
• Exposure and focus control - shutter, iris
• (Removable media)
• Camera communication - serial, USB, IrDA, Firewire, radio

The optics/imager/focus/zoom package is a wonder of miniaturization

• Sensor attaches to rear of optics package - fairly universal in this arena

The goal of the image processing package is the production of a 'finished' file on removable media.

There are several ways to assemble these components, from basic components to custom ASICs and devices

[Block diagram: Image Sensor → Analog Preprocessing → Basic Image Creation → Image Intellectual Property; Main CPU; Aperture, Zoom and Focus Control]

Depending on performance level of camera and size of image sensor, a single CPU may perform all camera functions

[Block diagram, continued: JPEG or Other Compression; Camera Control; Focus and Exposure Sensors; User Input (Buttons, Dials); Display]

[Block diagram, continued: Removable Media]

The image processing electronics must be inexpensive, low power and provide adequate processing

• Consumer cameras are still low (no?) margin products - pennies count!

• Short battery life has been a serious user complaint for first and second generation digital cameras - much effort has been expended to increase battery life

• The requirements of high processing horsepower and low power consumption tend to be mutually exclusive - designers are often forced to compromise and idle components when not in use.

The electronic architecture must also be familiar enough to develop the camera application easily

• Product lifetime is very short in this market (< 1 year)
• Development cycle is correspondingly short, so exotic architectures or components are less popular even when providing superior performance
• Complex interaction between system elements and processors may require a real time operating system (RTOS), multi-tasking capability and complex development system support
• Large amounts of code necessary for implementation of company intellectual property require source code reuse and control
• Good development tools required!
• Often, the best development tools are available for 'general purpose' processors, so these are found in many camera designs (Motorola PowerPC, Hitachi SH-DSP, SPARC, etc.)

The camera electronics performs other important internal functions in addition to image processing

These systems are incredibly complex and the image processor is only a part of the 'picture' (multi-processor)

[Block diagram: Ready Light; IrDA; Motor Drive; Lens Drive; Rear Position Switch; CCD; Shutter Carrier]

Much near-real-time processing for auto-focus, auto-exposure, power management and flash control

[Block diagram, continued: Processor (1); Processor (2); CCD Driver / Interface; LCD Interface; Exposure Sensor; Power/Interface]

Camera must remain responsive to user input in spite of other operations occurring simultaneously (multi-tasking)

[Block diagram, continued: Analog (RF, serial, power, flashlamp, DC-DC converter); RTOS; Status LCD; Flash; Photoflash Cap.; Batteries; Lamp]

Kodak has chosen a 'computer-centric' architecture for most of their consumer cameras

It is clear that the Kodak 'computer-centric' approach is one that facilitates change, code control and ease of development. Ancillary camera functions are relegated to an 8 bit micro with a link to the main processor.

[Block diagram: 1160 X 872 CCD → Gain, A/D, CCD Timing Generation, Input Signal Conditioning → Control and Routing ASICs (Data Routing and Camera Management); 0.5 MB Dual Port DRAM (LCD and Composite Video Buffer); 4 MB CompactFlash Memory Card (Removable Picture Storage); LCD Rear Panel Display; 8 Bit Microcontroller (Exposure control, Control inputs, voltage monitor, Status LCD driver); Status LCD]

Lots of memory - like a real computer!

[Block diagram, continued: DSP / RISC Microprocessor (Image Processing, Compression); 2 MB Working DRAM (Local Storage, Working Memory); 0.5 MB Flash RAM (CCD Parameters, System Code)]

System Hardware
The Chandra Orbital X-Ray Observatory

Bob Kremens
May 2001

Astronomers study the full spectrum of electromagnetic radiation from the cosmos

Chandra is quite a different camera, but has the same systems and requirements as the Jam Cam

Basic camera components:

• Imaging Optics
• Image Sensor
• Power Supply
• Processing electronics
• Exposure and focus control
• (Removable media)
• Camera communication

X-rays are energetic photons, and can be detected by most solid state optical sensors

• Incident X-ray deposits energy in pixel via the photoelectric effect.

• Chandra observations are centered in the region under 10 keV

• The detector will generate a charge carrier for each 2.6 eV of deposited energy, e.g. a 2.6 keV X-ray generates 1000 electron-hole pairs

• The incident flux is very low: incoming photons can be counted. The detectors are read out rapidly and the image ‘accumulated’. Each photon’s ‘signature’ (energy and position) can be precisely known if the detector is energy sensitive
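The energy-to-charge arithmetic above can be sketched directly. A minimal example using the slide's figure of one charge carrier per 2.6 eV deposited (the function names are illustrative, not from any real instrument software):

```python
EV_PER_CARRIER = 2.6  # slide's figure: one charge carrier per 2.6 eV deposited

def electrons_from_photon(energy_kev):
    """Charge packet size produced by a single absorbed X-ray."""
    return energy_kev * 1000.0 / EV_PER_CARRIER

def photon_energy_kev(n_electrons):
    """Invert the measurement: charge collected -> photon energy."""
    return n_electrons * EV_PER_CARRIER / 1000.0

print(electrons_from_photon(2.6))   # ~1000 electrons, as in the slide
print(photon_energy_kev(2000))      # a 2000-electron packet -> ~5.2 keV photon
```

Because the charge packet scales linearly with deposited energy, measuring the charge from each isolated hit yields the photon's energy -- this is what makes the detector energy sensitive.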

• This is a very different situation from most optical photography applications, where zillions of photons impinge on the detector simultaneously and the detector is read once per image

X-ray optics are operated at glancing incidence

Chandra’s optics are ultra- precise cylindrical mirrors of parabolic and hyperbolic section

Aperture of telescope ~ 1.2 m

It was very difficult to provide the accuracy necessary for these concentric cylindrical optics (Eastman Kodak)

This instrument was characterized before orbital insertion!!!

Chandra has two focal plane instruments

• High Resolution Camera - (HRC) high spatial resolution and very high sensitivity, but limited energy resolution – Stacked microchannel plates and crossed wire grid readout array – Pulse from incident X-ray impinges at intersection of two wires, defining location of incident photon

• Advanced CCD Imaging Spectrometer (ACIS) - simultaneous energy and spatial location using CCDs

The CCD in the ACIS focal plane is read out rapidly

Most CCDs in astronomical applications are read out infrequently

The 10 CCDs in Chandra's ACIS instrument are read out rapidly and continuously when on target to ensure only a single photon will be captured in each pixel

This readout method assures that energy and spatial information may be obtained for each photon - photons are rare!!!

The CCDs are operated in 'single hit' mode and create spectra and images simultaneously

Eight frame transfer CCDs are used in the 0.5° focal plane

Quick frame transfer to storage area - then readout to determine energy and position
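The 'single hit' accumulation described above can be sketched in a few lines: each readout contributes at most one photon per pixel, the hit position builds the image, and the hit energy builds the spectrum. The event-tuple format, detector geometry and names below are hypothetical, chosen only to illustrate the idea:

```python
# Sketch of 'single hit' accumulation: position builds an image,
# energy builds a spectrum, from the same stream of photon events.
from collections import Counter

WIDTH, HEIGHT = 1024, 1024           # assumed detector geometry
image = [[0] * WIDTH for _ in range(HEIGHT)]
spectrum = Counter()                 # photon counts per energy bin (keV)

def accumulate(events):
    """events: iterable of (x, y, energy_keV) tuples, one per photon."""
    for x, y, energy_kev in events:
        image[y][x] += 1                     # spatial information
        spectrum[round(energy_kev, 1)] += 1  # energy information

accumulate([(10, 20, 1.5), (10, 20, 2.6), (500, 7, 1.5)])
print(image[20][10], spectrum[1.5])  # prints: 2 2
```

Because each pixel holds at most one photon per frame, every event keeps both its position and its energy, which is what lets a single detector produce images and spectra simultaneously.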

Total pixels:

The block diagram for the ACIS instrument looks similar to a consumer digital camera!

• Except for thermal control in Chandra, a CCD camera is a CCD camera

The data stream is processed by the Earth station

• Not much image processing is done on the 'camera' - the raw data stream is telemetered to Earth for subsequent image re-creation

Chandra is proving to be as spectacular an instrument as the HST

• This is the first decent glimpse of the universe at these wavelengths, uncovering new physics and providing a host of surprises
• This instrument has tag-teamed with the Hubble Space Telescope to probe the universe over a wide spectral range
• Especially important are the differences in the visual and X-ray images of the same object

Compact galaxy group HCG 62 in a 50,000 second exposure from Chandra