A CMOS MULTI-MODAL CONTACT-IMAGING SCANNING MICROSCOPE

by

Arshya Feizi

A thesis submitted in conformity with the requirements for the degree of Master of Applied Science

Graduate Department of Electrical and Computer Engineering
University of Toronto

Copyright © 2014 by Arshya Feizi

A CMOS Multi-Modal Contact-Imaging Scanning Microscope
Arshya Feizi
Master of Applied Science
Graduate Department of Electrical and Computer Engineering
University of Toronto
2014

Abstract

This thesis presents the design, implementation and partial experimental characterization of a multi-modality scanning contact microscope (SCM) with applications in biomedical imaging.

Bench-top light microscopes are bulky and expensive and provide only one imaging modality. The SCM's imaging component is a custom-made CMOS imager fabricated in the AMS 0.35µm imaging process. Six pixel types are integrated into the imager, enabling the SCM to support six imaging modalities. For sub-pixel-resolution imaging, a specialized pixel layout allows the system to support a super-resolution algorithm, which takes multiple images with sub-pixel shifts as its input and generates a single high-resolution image.

Each pixel type may generate an output voltage or current, depending on whether it is active or passive. A low-power dual-input 2nd-order ΔΣ ADC with an SNR of 78dB is implemented to accommodate both current and voltage inputs while preserving noise-shaping characteristics for both.

Acknowledgements

Despite the fact that this work solely bears my name, it would not have been possible without the help of many significant individuals. First, I would like to thank my parents for their unconditional love and emotional support through the ups and downs of this project. I owe additional appreciation to my father, Prof. M. Reza Feyzi, who also provided professional support and insight for this work. I would like to thank my dear friend, Saba Rahimi, who helped make Toronto feel like home; and my peers at the University of Toronto, especially Hossein Kassiri, Aynaz Vatankhah, Saeid Rezaei, and Javid Musayev, for their presence and friendship during both rain and shine. Finally, I would like to thank my supervisors, Prof. Roman Genov and Prof. Glenn Gulak, for providing the motivation and academic resources for this work. I would also like to thank my course instructors, Prof. Tony Chan Carusone and Prof. Richard Schreier, for teaching me many aspects of advanced circuit design.

Contents

List of Tables vii

List of Figures viii

List of Acronyms xii

1 Introduction ...... 1
  1.1 Domains of Biological Imaging ...... 1
    1.1.1 Body Imaging ...... 2
    1.1.2 Organ/Tissue Imaging ...... 3
    1.1.3 Cell Imaging ...... 3
    1.1.4 Intra-cellular and Molecular Imaging ...... 4
  1.2 Microscopy ...... 4
    1.2.1 Light (Lens-based) Microscopy ...... 5
      Bright-field/Dark-field microscope: ...... 5
      Fluorescence microscope: ...... 6
      Confocal microscope: ...... 7
      Polarization microscopy: ...... 8
      Summary of light microscopy ...... 10
    1.2.2 Contact (Lens-less) Imaging and Microscopy ...... 10
      Sub-pixel resolution Imaging: ...... 11
      Fluorescence Imaging ...... 13
      Polarization Imaging ...... 14

  1.3 Motivation ...... 14
  1.4 Thesis Organization ...... 16

2 Sub-pixel-Resolution Scanning Contact-Imaging Microscope ...... 17
  2.1 Introduction ...... 17
  2.2 Sub-pixel Resolution Imaging ...... 18
    2.2.1 Super resolution algorithm ...... 18
    2.2.2 Super-resolution algorithm hardware ...... 19
      Static sample ...... 19
      Moving sample ...... 20
  2.3 Principle of Line Scanning ...... 21
    2.3.1 Line sensors vs. 2D pixel arrays ...... 21
    2.3.2 Staggered line sensor ...... 23
    2.3.3 Staggering and implementation of the super-resolution algorithm ...... 24
    2.3.4 Folded-staggering and multi-modality imaging ...... 25
  2.4 VLSI Architecture ...... 26
    2.4.1 Multi-modality Pixels ...... 27
      Targeted Modalities ...... 27
      Staggered Pixel Array ...... 37
      Static 2D Imaging Pixel Array ...... 40
      System specifications ...... 41
    2.4.2 Readout Circuit ...... 41
      ADC Bank ...... 42
      Output Multiplexer ...... 44
  2.5 Summary ...... 45

3 Dual-mode Current/Voltage-input ∆Σ ADC ...... 47
  3.1 Introduction ...... 47
  3.2 Readout Architecture ...... 49
  3.3 ADC Design ...... 51
    3.3.1 Background ...... 51

    3.3.2 System Level Design ...... 52
  3.4 CMOS Implementation of ADC ...... 54
    3.4.1 Integrators ...... 54
    3.4.2 Opamp design ...... 57
    3.4.3 Quantizer ...... 58
    3.4.4 Complete circuit ...... 58
    3.4.5 Decimation Filter ...... 59
    3.4.6 Clock Generator ...... 60

4 Experimental Results ...... 63
  4.1 ADC characterization ...... 63
    4.1.1 Voltage-input ADC ...... 63
    4.1.2 Current-input ADC ...... 64
    4.1.3 Comparison ...... 66
    4.1.4 Summary of ADC performance ...... 66
  4.2 Microscope Setup ...... 67
    4.2.1 Image Sensor ...... 67
    4.2.2 Sample movement ...... 69
    4.2.3 Back-end and Post Processing ...... 71
  4.3 Experimental Image Sequence ...... 73
    4.3.1 Data Acquisition ...... 73
    4.3.2 Image Reconstruction ...... 74
      "Zig-zag" outline ...... 76
      Incorrect aspect ratio ...... 76

5 Conclusions ...... 79
  5.1 Thesis Contributions ...... 79
  5.2 Future Work ...... 80

References 81

List of Tables

1.1 Summary of light microscopes ...... 10
1.2 Table of envisioned specifications ...... 15

2.1 Summary of pixel types ...... 42
2.2 Summary of ADC specifications ...... 44

3.1 Summary of pixel outputs ...... 49

4.1 Comparative analysis of the ADC ...... 69

List of Figures

1.1 Domains of biological imaging based on sample size ...... 2
1.2 CT-Scanner [7] ...... 2
1.3 MRI Machine [7] ...... 3
1.4 Optics of a bright-field and dark-field microscope ...... 5
1.5 Image comparison in a bright-field and dark-field microscope ...... 6
1.6 Optics of a fluorescence microscope ...... 7
1.7 Optics of a confocal microscope ...... 8
1.8 Optics of a polarization microscope ...... 9
1.9 Fundamental of contact imaging [28] ...... 11
1.10 Opto-fluidic microscope conceptual illustration [37], reused with permission from author ...... 12
1.11 Utilization of small apertures on pixels to increase resolution ...... 13

2.1 Illustration of the super resolution algorithm ...... 19
2.2 (a) Photodiode structure of a static-sample 2D imager (b) Shifting the photodiode electronically to acquire sub-pixel shifted images ...... 20
2.3 Image acquisition and reconstruction in a 2D pixel array ...... 21
2.4 Image acquisition and reconstruction using a line scanner ...... 22
2.5 Concept of staggering pixels ...... 23
2.6 Increase in resolution using the concept of staggering pixels ...... 24
2.7 Folded staggering of pixels ...... 24
2.8 Illustration of the folded-staggered formation and rows inserted between adjacent pixels ...... 25

2.9 Problem of wasted silicon area between lines ...... 26
2.10 Combination of folded-staggering and multi-modality imaging, implementable with a super resolution algorithm (three modalities in this illustrative figure) ...... 26
2.11 Chip architecture ...... 27
2.12 Chip micrograph, 7.5mm×3.2mm, AMS 0.35µm imaging process ...... 28
2.13 3T pixel circuit and layout ...... 29
2.14 3T pixel timing diagram ...... 30
2.15 Schematic and layout of the high-resolution imaging pixels ...... 30
2.16 Timing diagram for the high-resolution pixel ...... 31
2.17 Layout of the polarization pixels, photodiode area of 25µm² ...... 33
2.18 Phase difference between excitation and emission light in a fluorescent marker ...... 33
2.19 Circuit diagram and layout of staggered array passive pixel ...... 34
2.20 Structure of the wavelength-sensitive diode used for spectrum sensing [49], reused here with permission by author ...... 34
2.21 Light attenuation of poly-silicon with respect to its thickness [49], reused here with permission by author ...... 35
2.22 Layout of the spectrum-sensitive pixel, photodiode area of 164µm² ...... 36
2.23 Timing diagram of the spectrum-sensing pixels ...... 37
2.24 Detailed drawing of the multi-modality folded-staggered pixel array ...... 37
2.25 Layout of one pixel period ...... 39
2.26 Microscopic image of the staggered pixel array ...... 39
2.27 Schematic of the passive pixel for static 2D imaging ...... 40
2.28 Layout of the passive pixel for static 2D imaging ...... 40
2.29 Timing diagram of static 2D imager passive pixels ...... 41
2.30 Readout architecture ...... 42
2.31 Using a TIA to convert photocurrent into voltage ...... 43
2.32 Block diagram of the ∆Σ ADC ...... 44
2.33 Architecture of the output multiplexer ...... 45

3.1 Chip block diagram ...... 50

3.2 Block diagram of a passive and active pixel ...... 50
3.3 General block diagram of a ∆Σ modulator ...... 51
3.4 Noise spectrum after performing oversampling and noise shaping ...... 52
3.5 Block diagram of the proposed ∆Σ modulator ...... 53
3.6 First stage integrator ...... 54
3.7 Simplified diagram of the 1st stage integrator for voltage and current input ...... 55
3.8 Schematic diagram of the second-stage integrator ...... 57
3.9 Schematic diagram of the opamp ...... 58
3.10 Schematic diagram of the comparator ...... 58
3.11 Complete circuit of the modulator ...... 59
3.12 General block diagram of a decimation filter ...... 59
3.13 Schematic of the decimation filter ...... 60
3.14 Non-overlapping clock generator ...... 60
3.15 Timing diagram of the non-overlapping clock generator ...... 60
3.16 Using one inverter to generate complementary clocks ...... 61
3.17 Complementary clock generator ...... 61
3.18 Timing diagram of the complementary clock generator ...... 62

4.1 Output spectrum of the ADC with voltage input ...... 64
4.2 SNDR vs. voltage amplitude ...... 65
4.3 Output spectrum of the ADC with current input ...... 65
4.4 SNDR vs. current amplitude ...... 66
4.5 PSD comparison of the hybrid and discrete-time ∆Σ modulator ...... 67
4.6 PSD comparison of the hybrid and discrete-time ∆Σ modulator ...... 68
4.7 System experimental setup ...... 68
4.8 Bonding of chip to package ...... 70
4.9 Chip mounted on PCB ...... 70
4.10 Usage of a lens to project the moving sample's image onto the chip ...... 71
4.11 Picture of the stepper motor ...... 72
4.12 Diagram of the live data transfer hardware ...... 72

4.13 Timing of the pixel array readout with generated control signals ...... 73
4.14 Block diagram of data organization with software CDR ...... 74
4.15 One block of 16-bit data serialized by the FPGA ...... 74
4.16 Sample image used to verify the system functionality ...... 75
4.17 Sample image as it is entering the field-of-view (a), in the middle of imaging (b), and on exiting the field-of-view (c) ...... 75
4.18 Reconstructed image ...... 76
4.19 Reconstructed image for four step sizes: 2840 (a), 2740 (b), 2640 (c), 2540 (d) ...... 77
4.20 Final image stretched from its height ...... 77

List of Acronyms

ADC Analog-to-digital converter

APS Active pixel sensor

PPS Passive pixel sensor

I/O Input/Output

PSNDR Peak signal-to-noise-and-distortion ratio

PSNR Peak signal-to-noise ratio

CMOS Complementary metal oxide semiconductor

DAC Digital-to-analog converter

MOM Metal-on-metal

PCB Printed circuit board

PD Photodiode

SNR Signal-to-noise ratio

VLSI Very large-scale integration

NTF Noise Transfer Function

STF Signal Transfer Function

FPGA Field-programmable gate array

sps Samples per second

fps Frames per second

USB Universal serial bus

FPN Fixed pattern noise

Chapter 1

Introduction

This thesis presents the design, implementation and partial experimental characterization of a multi-modality lens-less microscope with applications in biomedical imaging such as cell/DNA imaging and cell counting. The primary component of this system, a CMOS (Complementary Metal Oxide Semiconductor) image sensor, is presented in detail.

This chapter discusses the underlying motivation for going towards lens-less microscopy by investigating different microscopy techniques and imaging modalities and their pros and cons. Section 1.1 discusses the different domains in biological imaging and explains the area that is targeted in contact imaging. Section 1.2 discusses the classic microscopy techniques and their imaging modalities. Section 1.3 provides a list of specifications for the envisioned microsystem.

1.1 Domains of Biological Imaging

As shown in Figure 1.1 [1, 2, 3, 4, 5, 6, 7], biological imaging can be classified into five categories based on the size of the sample of interest: 1. Body, 2. Organ/Tissue, 3. Cellular, 4. Intra-cellular, 5. Molecular. Each of these categories is briefly described below.



Figure 1.1: Domains of biological imaging based on sample size

1.1.1 Body Imaging

In this category, the sample of interest is the entire body (or a large portion) of a species whose size is on the order of meters, such as a human or large mammal. Methods such as Computerized Tomography Scan (CT-Scan) and/or Magnetic Resonance Imaging (MRI) are used for this purpose [8, 9].

A CT-scan device, shown in Fig. 1.2 [7], utilizes x-rays for imaging. The x-rays travel through soft body tissue and are scattered upon reaching hard tissues such as bones. Therefore, CT is used for viewing bone fractures or lung/chest problems [11].

Figure 1.2: CT-Scanner [7]

Figure 1.3: MRI Machine [7]

An MRI machine, shown in Fig. 1.3 [7], works by placing the body in a strong magnetic field and applying RF pulses to the body. The hydrogen atoms inside the body act as magnets in the presence of an RF pulse and line up with the external magnetic field. In the absence of the RF pulse, the hydrogen atoms return to their original state, releasing energy. The released energy is picked up by the machine, yielding an image. MRI is used for viewing details of soft tissues such as ligaments, spinal cord injuries, or brain tumors [10].

1.1.2 Organ/Tissue Imaging

Organs and tissue samples range from millimeters to tens of centimeters in size. Imaging a particular organ or tissue is especially useful in detection of diseases striking a particular location of the body, such as cancer [12]. MRI is used to image large organs or tissues. For small samples, light microscopes are used [1]. Light microscopy will be discussed in detail in section 1.2.

1.1.3 Cell Imaging

In cell imaging, the sample sizes vary from several micrometers to hundreds of micrometers. In-vivo cell imaging is used for non-invasive research of diseases and for in-vivo characterization and measurement of biological processes at the cellular and molecular level [14]. On a larger scale, cell counting is used in analytical studies which require an accurate number of cells, such as quantitative PCR and cell proliferation [13]. It must be emphasized that in the latter case, an image of a "cell cluster" is of interest, not the physical features of a single cell.

The common technique for cell imaging is light microscopy which is described in section 1.2.

1.1.4 Intra-cellular and Molecular Imaging

In sub-cellular imaging, the interest lies in understanding the physical features of a single cell or an exceptionally small organism on the order of hundreds of nanometers, such as pollen or bacteria. Scanning electron microscopes (SEM) [15] are mainly used for this application. This domain of imaging is beyond the scope of this work and will not be covered in this thesis.

The main focus of this work is to develop a CMOS contact-imaging scanning microscope. For low-cost portable devices, the size of the CMOS die is on the order of a few millimeters to a few centimeters. Therefore, it can be used in contact imaging of samples of the same order of size, such as cells. As mentioned earlier, light microscopes have primarily been used for this purpose. In the following section, details of conventional light microscopes are provided, followed by state-of-the-art lens-less imaging devices with their corresponding pros and cons.

1.2 Microscopy

In this section, the working mechanism of light microscopes is explained, followed by the latest developments in lens-less microscopes.

(a) Bright-field Microscope (b) Dark-field Microscope

Figure 1.4: Optics of a bright-field and dark-field microscope

1.2.1 Light (Lens-based) Microscopy

Bright-field/Dark-field microscope:

The diagram of a bright-field microscope is shown in Fig. 1.4a. The specimen is illuminated from below by focusing the output of a light source, such as a lamp, onto the sample using a condenser lens. The emitted light from the sample is then magnified and guided to the eye using a collection of objective, projection, and ocular lenses. Despite its simplicity, a bright-field microscope suffers from low contrast [17].

(a) Sample image from a bright-field microscope (b) Sample image from a dark-field microscope

Figure 1.5: Image comparison in a bright-field and dark-field microscope

A dark-field microscope, shown in Fig. 1.4b, enhances the contrast by placing a "dark-field patch stop" in front of the light source. The contrast is enhanced by darkening the region surrounding the sample. Sample images from these microscopes are shown in Fig. 1.5 [16].

Methods such as phase-contrast microscopy, dispersion staining, oblique illumination, and differential interference contrast have structures similar to bright-field/dark-field microscopes, with minor modifications to enhance contrast and/or resolution or to highlight specific sample features. These microscopes can achieve resolutions down to a few microns. The limiting factors in resolution for these microscopes are the numerical aperture of their lenses [17] and the diffraction limit of the white light used for illumination [18].
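As a quantitative reference (standard optics, not derived in the thesis), the lens-imposed resolution limit is commonly expressed through the Abbe diffraction formula, where $d$ is the smallest resolvable feature, $\lambda$ the illumination wavelength, and $\mathrm{NA}$ the numerical aperture:

```latex
d = \frac{\lambda}{2\,\mathrm{NA}}
```

For white light centered around $\lambda \approx 550$ nm and a typical dry objective with $\mathrm{NA} \approx 0.3$, this gives $d \approx 0.9$ µm, consistent with the micron-scale resolution quoted above; high-NA oil-immersion objectives ($\mathrm{NA} \approx 1.4$) push this limit to roughly 200 nm.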

Fluorescence microscope:

In fluorescence microscopy, the sample of interest is labeled with a fluorescent dye (fluorescent biomarker) which absorbs photons at one wavelength and emits photons at a longer wavelength [19]. Typical fluorescent dyes include rhodamine, fluorescein, and green fluorescent protein (GFP). The significance of fluorescence microscopy is that only the region of interest in an organism may be shown in a fluorescent image. Furthermore, since different fluorescent biomarkers emit at different wavelengths, each dye can be used to stain a particular structure, allowing for simultaneous detection of multiple organisms [20].

The structure of a fluorescence microscope is shown in Fig. 1.6. An excitation filter selects the wavelength corresponding to the excitation wavelength of the biomarkers. The sample is then illuminated with the excitation light. The fluorescent dyes then emit light at a longer wavelength, which is picked up by an image sensor after passing through a dichroic mirror and an emission filter tuned to block the excitation light [20].

Figure 1.6: Optics of a fluorescence microscope

A fluorescence microscope provides extremely high contrast and selectivity at the cost of expensive hardware and the potentially sophisticated chemistry required to attach biomarkers to different samples [21].

Confocal microscope:

Despite their relative simplicity, the microscopes described so far are only suitable for 2D imaging. A confocal microscope solves this problem by taking pictures at multiple depths and later reconstructing the acquired images [22]. Its structure is depicted in Fig. 1.7. A confocal microscope consists of a laser which shines light through a pinhole and onto an objective lens using a dichroic mirror. The objective lens is adjusted for a particular focal plane with a narrow depth of field. The reflected light from the sample is viewed through a confocal pinhole which blocks all light from every plane except the focal plane.

Figure 1.7: Optics of a confocal microscope

In confocal microscopy, the illumination wavelength can be blue, which is the shortest wavelength in visible light. This maximizes the image resolution by minimizing the diffraction limit of light [23].

Polarization microscopy:

The aforementioned microscopes can only detect the intensity and color of a specimen. However, understanding the polarity of the light emitted from the sample provides a richer description of the specimen's structure [24]. Polarization imaging is used to measure the polarization properties of scattering media such as tissue [25]. Moreover, it can distinguish between isotropic and anisotropic substances [27], which is used in optical mineralogy to obtain information about the composition and structure of materials [26].

The diagram of a polarization microscope is shown in Fig. 1.8. The light source is polarized and applied to the sample. The light components from the sample are then passed through the analyzer, a second polarizer oriented at 90 degrees with respect to the first. The stage may be rotated by the user to modify the orientation of the sample with respect to the incoming light.

Figure 1.8: Optics of a polarization microscope

The polarization microscope provides additional information about the sample, with the following limitations. Similar to the bright-field microscope, the resolution of a polarization microscope is limited by light diffraction. In similar lighting conditions, the SNR of a polarization microscope is half that of a bright-field microscope, because the light transmitted to the sample is halved by the initial polarizer. Also, specialized strain-free lenses are required to prevent image artifacts due to stress induced in the objective glass during assembly [27].
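The factor-of-two light loss can be made precise with Malus's law (standard optics, not derived in the thesis): a polarizer transmits a fraction $\cos^2\theta$ of light polarized at angle $\theta$, and unpolarized light is a uniform mixture over all angles, so the average transmission is

```latex
\langle T \rangle \;=\; \frac{1}{2\pi}\int_0^{2\pi} \cos^2\theta \, d\theta \;=\; \frac{1}{2}
```

i.e. only half of the illumination power reaches the sample.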

Summary of light microscopy

A summary of the microscopy techniques described in this section is provided in Table 1.1. Despite their high resolution and contrast, commercial light microscopes are bulky and expensive and are optimized for only one imaging modality.

                  Bright-field  Dark-field  Polarization  Confocal   Fluorescence
Resolution        few µm        few µm      few µm        0.25 µm    <1 µm
Contrast          Low           High        High          Very High  Very High
3D imaging        No            No          No            Yes        Yes
Sample staining   Yes/No        Yes/No      No            Yes        Yes
Size              Benchtop      Benchtop    Benchtop      Benchtop   Benchtop
Weight            ~10 kg        ~10 kg      ~15 kg        ~30 kg     ~30 kg
Price ($)         ~5K           ~5K         ~5K           ~200K      ~100K

Table 1.1: Summary of light microscopes

Contact imaging and lens-less microscopy are techniques for implementing low-cost portable devices whose performance is comparable to commercial microscopes.

1.2.2 Contact (Lens-less) Imaging and Microscopy

The microscopes mentioned in the previous section are bulky and expensive and cannot be used for field applications. Furthermore, they are fragile and require careful handling. This section introduces the latest innovations in lens-less imaging systems and their pros and cons.

The structure of a contact imaging microsystem is shown in Fig. 1.9 [28]. The specimen is placed directly on top of a CMOS/CCD image sensor. A light source is placed above the image sensor to illuminate the sample. Most specimens of interest are translucent after staining [31]. Therefore, a shadow of the sample is projected onto the image sensor. As discussed in section 1.1, since the size of CMOS dies is on the order of a few millimeters, contact imaging is a viable tool for cell/tissue imaging. The resolution of such a system is limited by the pixel size of the image sensor, ranging from 2.25µm to 175µm [32, 33, 34, 35].


Figure 1.9: Fundamental of contact imaging [28]

Sub-pixel resolution Imaging:

To overcome the pixel-size limitation, sub-pixel resolution imaging methods have been introduced. One such method is to oversample the image spatially by taking multiple images of the same specimen with shifts smaller than one pixel. This data is then fed into an algorithm which uses the redundant data to approximate a high-resolution image [39]. The algorithm is described in section 2.2. Here, we focus on the techniques for spatial oversampling.
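The spatial-oversampling idea can be illustrated with a short simulation (hypothetical Python, not from the thesis): a fine-grained scene is sampled by a coarse pixel grid at several known sub-pixel offsets, producing the set of shifted low-resolution frames that a super-resolution algorithm takes as input. The function name and the box-averaging pixel model are illustrative assumptions.

```python
import numpy as np

def capture_lr_frames(scene, factor, shifts):
    """Simulate a coarse pixel grid sampling `scene` at sub-pixel offsets.

    scene  : 2D array, the fine-grained ground truth
    factor : integer down-sampling factor (one LR pixel covers factor x factor scene cells)
    shifts : list of (dy, dx) offsets in scene-cell units, each smaller than `factor`,
             i.e. each shift is a fraction of one LR pixel
    """
    h, w = scene.shape
    frames = []
    for dy, dx in shifts:
        # Shift the scene under the sensor by (dy, dx) scene cells
        shifted = np.roll(np.roll(scene, -dy, axis=0), -dx, axis=1)
        # Each LR pixel integrates (here: averages) a factor x factor block
        lr = shifted[:h - h % factor, :w - w % factor]
        lr = lr.reshape(lr.shape[0] // factor, factor,
                        lr.shape[1] // factor, factor).mean(axis=(1, 3))
        frames.append(lr)
    return frames

# Four frames shifted by one scene cell (a quarter of an LR pixel) in each direction
scene = np.zeros((16, 16))
scene[6:10, 6:10] = 1.0
frames = capture_lr_frames(scene, 4, [(0, 0), (0, 1), (1, 0), (1, 1)])
```

Because each frame sees the scene at a slightly different offset, the frames carry non-redundant information despite having identical (coarse) dimensions.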

An imaging system fundamentally consists of three components: light source, sample, and image sensor. Therefore, to achieve the sub-pixel shifting, at least one of these components must be slightly shifted multiple times and the corresponding images saved for later reconstruction. Different methods have been reported for this purpose. In [36], the light source is moved using mechanical coils and actuators to slightly manipulate the location of an LED which serves as the light source for the imaging system. This results in additional power wasted in the actuators and a relatively unstable setup due to moving system components. This problem is alleviated in [37] by making the sample itself the moving component of the imaging system. An Opto-Fluidic Microscope (OFM) setup is used, where sample flow is inherent in the system. As shown in Fig. 1.10, the specimen is moved inside microfluidic channels which are placed on the image sensor. With a high enough frame rate, images with sub-pixel shifts are acquired. However, this approach mandates that the velocity of the fluid be strictly controlled and that the sample not rotate within the microchannels, which is extremely difficult to guarantee in fluids.

Figure 1.10: Opto-fluidic microscope conceptual illustration [37], reused with permission from author

Using the same concept of sample flow, in [40] the requirement for a sub-pixel algorithm is eliminated by coating the pixels of a CMOS image sensor with an opaque material and creating small apertures in the pixels, such that as the sample flows along the sensory region, only a small portion of it is exposed to each pixel (Fig. 1.11). Thus, after image reconstruction, the resolution of the system is increased to the size of the aperture, as opposed to the size of the pixel itself. Nevertheless, since most of the pixel's area is covered with opaque material, the SNR of the system drops significantly because the ratio of light signal to pixel dark current decreases. Moreover, the integration time must be increased to give the sensor enough time to accumulate sufficient light signal from the sample, reducing the frame rate and thus increasing the imaging time by orders of magnitude. Also, the microfluidic channels must be placed accurately at an angle of 0.05 radians with respect to the aperture array [40], which increases the need for precision and requires sophisticated tools. Finally, this approach relies on post-processing of the CMOS image sensor, adding cost and complexity to the system.

Figure 1.11: Utilization of small apertures on pixels to increase resolution

The sub-pixel imaging techniques of [37, 40] carry the advantages and disadvantages discussed earlier. We introduce the "Scanning Contact Microscope" (SCM), which achieves sub-pixel resolution imaging in CMOS without the need for post-processing or cumbersome alignment methods. The SCM incorporates a specialized pixel layout in which a sample is simultaneously spatially oversampled, similar to [37], and a high-resolution image is acquired using exceptionally small photodiodes. This way, the microsystem is capable of sub-pixel-resolution imaging in low-light conditions by feeding oversampled data into a super-resolution algorithm, and of producing an inherently sub-pixel-resolution image in medium-to-high light conditions. In addition, the SCM includes pixels for polarized light imaging, spectrum sensing, and fluorescence lifetime imaging (FLIM). The issue of sample rotation in microfluidic channels is temporarily addressed in this work by using a stepper motor to move the sample in precise steps instead of allowing it to flow. These items are further discussed in sections 4.2 and 2.4.

Fluorescence Imaging

In [33], an image sensor optimized for fluorescent contact imaging is presented. A specialized photodiode is implemented to determine the wavelength of the input light in order to distinguish between different biomarkers, allowing for simultaneous detection of multiple specimens. An optical notch filter tuned for 450nm was placed on the die to block the excitation light. This structure could perform high-throughput fluorescence imaging at the cost of low photodiode sensitivity and low resolution due to its large 175µm×175µm photodiode size. In [32], a contact chemiluminescence/fluorescence sensor is shown. Despite its high functionality, this system also suffers from low resolution because an additional electrode for charge sensing is placed in each pixel.

In [41], another fluorescence imaging technique, FLIM, is shown. Single-photon avalanche diodes (SPADs) are used to convert incoming fluorescence light into a train of pulses corresponding to the lifetime of the fluorescent dye. Similar to the previous cases, this system lacks spatial resolution because SPADs are implemented in large round multi-well structures; therefore, they are typically a few times larger than minimum-sized pixels in CMOS.

Polarization Imaging

A technique for measuring light polarity in CMOS is shown in [24]. To implement on-chip polarization filters, the photodiodes were covered with metal wire gratings using the METAL-1 layer of the CMOS process. This sensor was used in polarized navigation sensing. The performance of metal gratings for light polarization was demonstrated; however, the pixel array was used for ambient light sensing, not as a tool for high-resolution imaging. In this work, this polarization technique is used as one imaging feature of a contact microscope.

1.3 Motivation

The goal of this research is to design a low-cost portable multi-modality microscope for cell and DNA imaging. Commercial microscopes are single-modality (they enable only one type of imaging), bulky, and expensive. The resolution of most conventional lens-less microscopes is limited by their pixel size, which makes them unsuitable for applications demanding high resolution, such as cell imaging. The proposed microsystem can perform high-resolution multi-modality imaging of samples, enabling the user to acquire most of the necessary sensory information from a single device, as opposed to using an application-specific microscope to acquire a particular type of information from the sample. Six imaging modalities are envisioned in this work: high-resolution imaging, fluorescence imaging, fluorescence lifetime imaging microscopy (FLIM), spectrum sensing, polarized light microscopy, and sub-pixel resolution imaging for static samples. Therefore, the proposed microsystem will integrate most bench-top microscopes into a single low-cost portable device, reducing the cost of such sensory applications by orders of magnitude.

The proposed microsystem consists of two sensory regions: a section of multi-modality 1D line scanners, and a 2D pixel array to image an object without the use of motion. The 1D line-scanning section can accommodate high-resolution imaging using a specialized pixel layout that enables the acquisition of multiple images of a specimen with sub-pixel shifts for use in a super-resolution algorithm. Table 1.2 shows the envisioned specifications of this system.

Volume                  < 2 in³
Weight                  ~50 g
Cost                    < $5
System
  Imaging platform      1D multi-modality line scanners; 2D pixel array for static imaging
  Spatial resolution    300-500 nm
Electronics
  ADC resolution        10 bit
  SNR                   > 10 dB

Table 1.2: Table of envisioned specifications

1.4 Thesis Organization

In this chapter, an introduction to commercial microscopy and lens-less imaging was provided, along with a description of the goal and motivation of this work. The rest of the thesis is organized as follows:

• Chapter 2 provides a detailed explanation of the underlying theory of this work and the system-level implementation of the ideas on CMOS;

• Chapter 3 presents the system-level and circuit-level implementation of the on-chip ADC, used to digitize a 2D collection of pixels;

• Chapter 4 shows system-level results of the implemented system and circuit-level characterizations of the readout blocks of the custom CMOS chip;

• Chapter 5 discusses some shortcomings of this work and the path for future work to build upon in order to achieve the desired specifications.

Chapter 2

Sub-pixel-Resolution Scanning Contact-Imaging Microscope

2.1 Introduction

As mentioned in chapter 1, in light microscopes the resolution is mainly constrained by the numerical aperture of the lenses and, ultimately, by the inevitable light diffraction phenomenon, limiting the resolution to hundreds of nanometers. In contact imaging systems, the limiting factor is usually the pixel size, which is on the order of 2.25µm - 175µm. Contact imaging systems thus lack high resolution but offer low cost and portability. One method to overcome the pixel-size limitation in contact imaging systems is to use sub-pixel resolution techniques to increase the spatial resolution beyond the physical pixel size. The remainder of the chapter is organized as follows: section 2.2 provides a background on sub-pixel resolution techniques, section 2.3 describes the line-scanning technique used in this work, and section 2.4 presents the VLSI architecture of the system.


2.2 Sub-pixel Resolution Imaging

In this section, first the super resolution algorithm is described, followed by a discussion on the required hardware for implementing this algorithm.

2.2.1 Super resolution algorithm

In conventional imaging systems, the ultimate bottleneck in increasing the spatial resolution is the pixel size. Image-processing algorithms have previously been explored to address this limitation [29, 30]. However, those algorithms apply to cases where enhancement of a single image is desired: they use filters and transforms to enhance the resolution using data from one image alone. In this work, the super-resolution algorithm combines multiple low-resolution images into a single high-resolution image, given that the low-resolution images are spatially shifted with respect to one another by a fraction of a pixel [36, 37, 38].

As shown in Fig. 2.1 [39], the red shaded region represents the physical pixels of the image sensor and the gray grid behind it represents the target high-resolution image. hk and vk denote uniform horizontal and vertical shifts, respectively; both are a fraction of the physical pixel size.

Equations (2.1)–(2.2) show the concept behind the super-resolution algorithm.

\[
\tilde{x}_{k,i} = \sum_{j=1}^{N} W_{k,i,j}(h_k, v_k)\, y_j \qquad (2.1)
\]

\[
C(Y) = \frac{1}{2} \sum_{k=1}^{p} \sum_{i=1}^{M} \left( x_{k,i} - \tilde{x}_{k,i} \right)^2 + \frac{\alpha}{2} \left( Y_{fil}^{T} \cdot Y_{fil} \right) \qquad (2.2)
\]

Here, X_k = [x_{k,1}, x_{k,2}, ..., x_{k,M}] denotes the k-th low-resolution frame actually acquired from the image sensor, where x_{k,i} are its pixel values; \tilde{x}_{k,i} is a calculated low-resolution pixel value; Y = [y_1, y_2, ..., y_N] is the target high-resolution image; W_{k,i,j} is a physical weighting coefficient; Y_{fil} is the high-frequency component of Y; and \alpha is the weight of these high frequencies.

Figure 2.1: Illustration of the super-resolution algorithm

In (2.2), C(Y) is a cost function which must be minimized over a number of iterations to maximize the resolution of the resulting high-resolution image. The minimization estimates y_j such that the low-resolution image produced from it after applying W_{k,i,j} is as close to x_{k,i} as possible. This is achieved by minimizing the error term (x_{k,i} - \tilde{x}_{k,i})^2 for each pixel value.
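As a concrete illustration, the minimization of (2.2) can be sketched as plain gradient descent on Y. The 1D sketch below is not the implementation used in this work: the weight matrices, the first-difference stand-in for the high-pass filter producing Y_fil, and the step size are all illustrative assumptions.

```python
import numpy as np

def super_resolve(frames, weights, alpha=0.001, lr=0.5, iters=3000):
    """Minimize the cost C(Y) of Eq. (2.2) by gradient descent.

    frames  : list of K low-resolution frames, each flattened to length M_k
    weights : list of K (M_k x N) matrices W_k mapping the high-resolution
              image Y (length N) to each low-resolution frame, Eq. (2.1)
    alpha   : weight of the high-frequency penalty term
    """
    N = weights[0].shape[1]
    y = np.zeros(N)
    # Circular first-difference operator standing in for the high-pass
    # filter that produces Y_fil (an assumption of this sketch).
    D = np.eye(N) - np.roll(np.eye(N), 1, axis=1)
    for _ in range(iters):
        grad = alpha * (D.T @ D @ y)           # penalty-term gradient
        for x, W in zip(frames, weights):
            grad -= W.T @ (x - W @ y)          # data-term gradient
        y -= lr * grad
    return y

# Illustrative use: two 2x-downsampled views of an 8-sample signal,
# shifted by one high-resolution sample (half a low-resolution pixel).
truth = np.array([0., 0., 1., 1., 0., 0., 1., 0.])
W0 = np.zeros((4, 8))
W1 = np.zeros((3, 8))
for i in range(4):
    W0[i, 2 * i:2 * i + 2] = 0.5      # frame 0 averages pairs (0,1),(2,3),...
for i in range(3):
    W1[i, 2 * i + 1:2 * i + 3] = 0.5  # frame 1: same windows, shifted by one
frames = [W0 @ truth, W1 @ truth]
y_hat = super_resolve(frames, [W0, W1])
```

The recovered estimate reproduces both shifted low-resolution frames, which is exactly the property the error term of the cost function enforces.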

2.2.2 Super-resolution algorithm hardware

The super-resolution algorithm explained in the previous section may be applied to static (stationary) or moving samples. Each case is described below.

Static sample

As mentioned in the previous section, the requirement for sub-pixel resolution imaging is that a number of frames with uniform sub-pixel shifts be acquired and fed into a super-resolution algorithm. In the SCM, which relies on sample movement, this is done using the concept described in section 2.3.3. In [36], this is done by mechanically manipulating a light source and taking multiple shadow images from slightly different angles. However, that method relies on the mechanical movement of an LED, which leads to system complication, extra power, and additional components (coils). In this work, a novel method is introduced to manipulate the image with no mechanical movement.

As shown in Fig. 2.2a, the photodiode is divided into 16 interleaved fingers which are grouped into four regions (shown with different colors). Each region acts as one "fingered photodiode" during image acquisition. On each frame, one photodiode group is selected as the sensing element. As depicted in Fig. 2.2b, the selected photodiode group is cycled over four consecutive frames. As a result, the group's location is electronically shifted vertically by the height of one finger, Hf, and horizontally by the width of one finger extension, Wex, on each frame. The four acquired images serve as the input to the super-resolution algorithm.
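The effect of cycling the selected group can be emulated in one dimension: each frame integrates the same fine-grained scene over a window shifted by one fine step, a stand-in for the Hf/Wex shifts of Fig. 2.2 (the pixel and group counts below are illustrative, not the chip's values).

```python
import numpy as np

def acquire_shifted_frames(scene, pixel=4, groups=4):
    """Emulate the electronic sub-pixel shift: on each of four consecutive
    frames a different photodiode group senses, shifting the effective
    sampling window by one fine step of the scene."""
    frames = []
    for g in range(groups):                      # one group per frame
        n = (len(scene) - g) // pixel
        frames.append(np.array([scene[g + i * pixel:g + (i + 1) * pixel].mean()
                                for i in range(n)]))
    return frames

frames = acquire_shifted_frames(np.arange(20.))
```

Consecutive frames see the scene shifted by one quarter of a pixel, which is the sub-pixel diversity the super-resolution algorithm needs.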

Figure 2.2: (a) Photodiode structure of a static-sample 2D imager (b) Shifting the photodiode electronically to acquire sub-pixel shifted images

Moving sample

As described in chapter 1, spatial oversampling is achieved in an opto-fluidic microscope by moving a sample over the pixel area and taking multiple images of the sample with sub-pixel shifts. In this work, a customized pixel layout, introduced and described in section 2.3.2, achieves sub-pixel shifting along the axis perpendicular to the movement of the specimen.

To aid motion estimation and avoid sample rotation, time dependence has been removed from the system by using controlled displacement methods. A linear stepper motor provides precise displacements along one axis and therefore, it is used as the actuator in the proposed system.

Future generations of this work may benefit from motion-estimation algorithms to eliminate the need for an additional actuator. Descriptions of the imaging sequence and stepper motor are given in section 4.2.2.

2.3 Principle of Line Scanning

In this section, first the concept of line scanning vs. 2D imaging is explained. Next, the transformation of a horizontal set of pixels into a staggered (diagonal) line is explained, followed by its extension to sub-pixel resolution and multi-modality imaging.

2.3.1 Line sensors vs. 2D pixel arrays

In the image sensors used in cameras or conventional contact-imaging microsystems, a two-dimensional array of pixels is implemented to image a sample which is projected onto the sensor via a lens or, in contact imaging, placed directly on top of the sensor. As shown in Fig. 2.3, in this type of imaging the subject must be held still during the frame-acquisition time.

Figure 2.3: Image acquisition and reconstruction in a 2D pixel array

In line sensors, a one-dimensional line of pixels is used and the image is acquired by scanning the sample. As shown in Fig. 2.4a, this is done by either moving the pixels over the sample or vice versa. An example of a line scanner is a conventional desktop scanner. During each readout of the pixels, one "row" of the frame is acquired and the sample (or line of pixels) is moved. The frame is completed once the entire area of the object has been scanned. As shown in Fig. 2.4b, the captured rows are later placed beside each other to reconstruct the 2D image.

(a) Sequence of images using a line scanner

(b) Image reconstruction in a line scanner

Figure 2.4: Image acquisition and reconstruction using a line scanner
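The row-by-row acquisition and reconstruction can be summarized in a few lines (a toy sketch, assuming the sample advances exactly one row per readout):

```python
import numpy as np

def line_scan(sample):
    """Emulate a line scanner: at readout t the 1D sensor line sees row t
    of the moving sample; stacking the captured rows rebuilds the frame."""
    captured = [sample[t].copy() for t in range(sample.shape[0])]
    return np.vstack(captured)

image = line_scan(np.arange(20.).reshape(5, 4))
```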

2.3.2 Staggered line sensor

In the concept of line scanning, the vertical placement of the pixels which make up a row is not important. As shown in Fig. 2.5, the pixels of each column may be pushed vertically to form a staggered layout while the theory of line scanning remains valid. The reason is as follows:

Assuming that the sample passes completely over the sensor with a constant velocity, each point of the sample is scanned by the sensor at a known time and location. The only difference is that each time the line is scanned, it represents a diagonal line of the sample, not a horizontal line. Although staggering the line pixels slightly increases the image-reconstruction complexity, [40] exploited this fact to remove the resolution limitation dictated by the pixel size. As shown in Fig. 2.6, first, the pixels of one column of a commercial 2D CMOS image sensor were coated with an opaque material. Then, small apertures with a diameter of DA were etched onto each pixel of the column. The apertures were aligned with the movement direction of the sample such that they resembled a staggered set of pixels, as described above. This modification increased the final image resolution to the aperture size, as opposed to the pixel size.

Figure 2.5: Concept of staggering pixels
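Reconstruction from a staggered line only has to undo the per-column offset along the scan direction. A minimal sketch follows (offsets in whole readout periods, which assumes the sample advances one image row per readout):

```python
import numpy as np

def destagger(frames, v_offset):
    """Rebuild a 2D image from staggered line readouts.

    frames[t, c] is the reading of staggered pixel c at readout t; because
    column c sits v_offset[c] rows further along the scan direction, it
    sees image row r at readout t = r + v_offset[c]."""
    T, C = frames.shape
    rows = T - max(v_offset)            # rows covered by every column
    img = np.empty((rows, C))
    for c in range(C):
        img[:, c] = frames[v_offset[c]:v_offset[c] + rows, c]
    return img

# Synthetic check: a 4-row image seen by 3 columns with offsets 0, 1, 2.
truth = np.arange(12.).reshape(4, 3)
off = [0, 1, 2]
frames = np.zeros((4 + max(off), 3))
for c in range(3):
    frames[off[c]:off[c] + 4, c] = truth[:, c]
```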

Nevertheless, in [40] the field of view is limited to DA × Npixel, where Npixel is the number of pixels in a line. This limitation is removed in the proposed work by introducing the concept of folded-staggered pixels. As shown in Fig. 2.7, once the pixels have reached a particular height, the rest of the pixels of the same line can be shifted down to start a new column without disrupting the line-sensing concept.

In theory, the height at which the pixels are folded is arbitrary. However, in section 2.4.1, a design strategy is introduced to determine the height of the line in practice.

Figure 2.6: Increase in resolution using the concept of staggering pixels

Figure 2.7: Folded staggering of pixels

2.3.3 Staggering and implementation of the super-resolution algorithm

As explained in section 2.2, the targeted super-resolution algorithm uses a set of low-resolution images which are shifted with respect to each other by less than a pixel size. The algorithm effectively approximates a target high-resolution image which can reproduce each of the input low-resolution images by locally averaging its pixels. This method of sub-pixel resolution imaging can be performed in a staggered line sensor. As shown in Fig. 2.8, in addition to folded-staggering (Fig. 2.7), the pixels of each column are placed on top of each other such that the horizontal shift between two pixels of adjacent rows is less than one pixel size. In addition, the frame rate of the sensor is increased such that the time gap between two consecutive frames is less than the time it takes for the sample to completely pass over one pixel (an analogous method is to decrease the movement speed of the sample). Thus, multiple images with sub-pixel shifts are obtained. These slightly shifted frames serve as raw data for the super-resolution algorithm in post-processing.

Figure 2.8: Illustration of the folded-staggered formation and rows inserted between adjacent pixels

2.3.4 Folded-staggering and multi-modality imaging

In Fig. 2.7, it was shown how a single line of staggered pixels may be folded and the line height adjusted to fit within certain area constraints without sacrificing the field of view of the imager. However, as shown in Fig. 2.9a, the silicon area between the lines is "wasted" because no sensors are present in those locations and the readout circuitry is column-parallel. Area optimization is an important factor in the scalability of the pixel array. Layout techniques may be introduced to fit all circuitry which would normally be column-level, including the ADC, within the unused area; one example is shown in Fig. 2.9b. However, these optimization techniques require the available area to be strictly known; therefore, the ADC design/layout must be modified for different layouts. Also, this may result in an unorthodox layout of the ADC, potentially degrading its performance.

In this work, the wasted area is used to add different types of pixels, each for a particular modality of imaging, depicted as different colors in Fig. 2.10. The complete array forms a periodic sequence of folded-staggered pixels in which each pixel type is used for a particular imaging modality.

(a) Wasted area between staggered lines of pixels (b) Image reconstruction in a line scanner

Figure 2.9: Problem of wasted silicon area between lines

Not only does this method systematically use the wasted area, but it also provides additional imaging capabilities based on the number of pixel types. Details about the design and layout of each pixel are provided in the following section.

Figure 2.10: Combination of folded-staggering and multi-modality imaging, implementable with a super resolution algorithm (three modalities in this illustrative figure)

2.4 VLSI Architecture

The CMOS prototype was fabricated in the AMS 0.35µm imaging process. The chip architecture is depicted in Fig. 2.11. It contains four main regions. The two light-sensitive pixel regions generate outputs proportional to the input light intensity; these are converted to digital by the ADC bank. The row decoder generates the control signals for the pixels, while the clock generator block generates the clocks for the ADCs and controls the output multiplexer. The output multiplexer guides the digitized pixel data to the chip outputs. The die micrograph is shown in Fig. 2.12. Each of these blocks is explained below.


Figure 2.11: Chip architecture

2.4.1 Multi-modality Pixels

In section 2.3, the principle of folded-staggering of pixels and multi-modality imaging was explained. Here, the different types of pixels which represent the key imaging modalities are discussed. Next, the combination of the pixels in their complete array is presented. Finally, a novel method of sub-pixel imaging is introduced and explained.

Targeted Modalities

In chapter 1, the imaging modalities of interest to biologists and scientists were explained, and the conventional microscopes currently used in laboratories were described. In summary, all conventional bench-top instruments are bulky and expensive, and their optics are suited for only one particular type of imaging. For example, a bright-field microscope cannot be used to sense the light polarity of a sample. The folded-staggered multi-modality image sensor array includes different pixel types suitable for various imaging modalities while providing a resolution comparable to a conventional microscope.

Figure 2.12: Chip micrograph, 7.5mm×3.2mm, AMS 0.35µm imaging process

(a) Circuit topology of a standard 3T pixel (b) Layout of a 3T pixel, photodiode area of 25µm2

Figure 2.13: 3T pixel circuit and layout

The included imaging modalities are as follows:

High-sensitivity fluorescence imaging: Fluorescence imaging is primarily used in analytical biology studies in which it is desired to measure whether a particular substance is present in a sample [42]. Therefore, the sensitivity of the photodiode is an important characteristic of the pixel. A 3T active pixel is used with an n-well/p-sub photodiode for high sensitivity [43]. The circuit topology and layout for this pixel are shown in Fig. 2.13a and Fig. 2.13b, respectively. The timing diagram for a 3T pixel with rolling-mode readout is shown in Fig. 2.14. The rows are reset one at a time; therefore, the integration time for the pixel is equivalent to 1/FrameRate.

High-resolution bright-field imaging: In bright-field imaging, a high-power light source is assumed for illuminating the sample. Therefore, the specialized pixel for this modality can be optimized for resolution, while sacrificing sensitivity. The circuit topology of this pixel is shown in Fig. 2.15a and its layout is shown in Fig. 2.15b.

Figure 2.14: 3T pixel timing diagram

(a) Circuit topology (b) Layout, photodiode area of 0.09µm2

Figure 2.15: Schematic and layout of the high-resolution imaging pixels

The photodiode is an n+/p-sub diode, sized 300nm×300nm. It is designed to be small to minimize the pixel's light-sensitive region and thereby maximize resolution. The photodiode's capacitance (Cpd) is on the order of 100aF, which by itself cannot dominate the capacitance on the photodiode node (X); therefore, an additional 15fF MOS capacitor is added to compensate for the loss in electron-well capacity. Furthermore, an electronic shutter is used to control the integration of this photodiode, because with a 3T-pixel rolling readout the photodiode's small size would make the image more prone to motion artifacts.
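The need for the extra capacitor follows directly from the well-capacity relation N = C·ΔV/q. The sketch below assumes a ~1 V usable swing (not stated in the text) to compare the bare 100 aF junction with the 15 fF-augmented node:

```python
Q_E = 1.602e-19  # electron charge, coulombs

def well_capacity(c_node, v_swing):
    """Electrons that can be integrated on the photodiode node: N = C*dV/q."""
    return c_node * v_swing / Q_E

n_bare = well_capacity(100e-18, 1.0)     # only a few hundred electrons
n_comp = well_capacity(15.1e-15, 1.0)    # tens of thousands with the MOS cap
```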

The timing for this pixel is shown in Fig. 2.16. During the integration time, SHT and Sample are high and the generated photocurrent is integrated on Cc. Once the integration time is over, the shutter is closed by switching SHT low. When a row is selected, Cc is connected to the output by turning on the Sample switch, followed by a reset of the entire row.

Figure 2.16: Timing diagram for the high-resolution pixel

Polarization imaging: As mentioned in chapter 1, observing the polarity of the light emitted from a specimen provides a richer description of its structure. The Stokes vector, (2.3), is used to express the polarization state of an electromagnetic wave (e.g. light) [24].

\[
\vec{S} = \begin{bmatrix} I \\ Q \\ U \\ V \end{bmatrix}, \qquad (2.3)
\]

where I is the light intensity, Q is the polarization along the linear 0/90-degree orientation, U is the amount of radiation polarized in the 45-degree orientation, and V is the ellipticity.

The degree of linear polarization (DOLP), (2.4), is used to show the partial polarization of the electromagnetic wave.

\[
DOLP = \frac{Q}{I} \qquad (2.4)
\]

where Q is calculated as:

\[
Q = I_{90} - I_{0} \qquad (2.5)
\]

where I_0 is the light intensity after passing through a horizontally oriented polarizer and I_90 is the light intensity after passing through a vertically oriented polarizer. Therefore, to obtain the DOLP and the Stokes parameters, the light polarity must be measured along three orientations: 0 degrees, 90 degrees, and 45 degrees. The light intensity (I) is separately measured. In this work, three pixel types were allocated to implement on-chip polarization filters that measure the polarity of the input light along these three orientations. Each polarization filter is composed of periodic gratings, with a width and pitch comparable to the wavelength of the input light, placed over the photosensor.

To maximize the polarization properties of the grating, Λ/λ must be minimized, where λ is the wavelength of the input light and Λ is the grating period [44]. In this work, a grating period of 0.2λ was implemented. In spite of DRC violations, a metal grating with a width and pitch of 100nm was implemented using the METAL-1 layer, which makes this polarizer suitable for input light in the green (λ = 532nm) to red (λ = 620nm) region of the visible spectrum.

Three pixels identical to the pixel of Fig. 2.13 were laid out beside each other, and a grating was placed on the photodiode of each pixel. As shown in the pixel layout of Fig. 2.17, each grating is rotated to implement one of the three orientations. Combined with the original pixel of Fig. 2.13, which measures the light intensity, these provide the data needed to calculate the Stokes parameters.
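Combining the four readings is then a few arithmetic steps. In this sketch, Q and DOLP follow (2.4)-(2.5); the expression for U is the common convention U = 2·I45 − I, which the text does not state explicitly and is assumed here:

```python
def linear_stokes(i0, i45, i90, i_total):
    """Linear Stokes parameters from the three polarizer pixels plus the
    unfiltered intensity pixel."""
    q = i90 - i0                 # Eq. (2.5), sign convention of the text
    u = 2.0 * i45 - i_total      # assumed convention for the 45-degree term
    dolp = q / i_total           # Eq. (2.4)
    return q, u, dolp

# Fully vertically polarized light of unit intensity:
q, u, dolp = linear_stokes(0.0, 0.5, 1.0, 1.0)
```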

Figure 2.17: Layout of the polarization pixels, photodiode area of 25µm2

Fluorescence life-time imaging (FLIM): As explained in chapter 1, FLIM is a type of fluorescence imaging in which the decay time of the light emitted from fluorescent dyes is measured. Two approaches are used for this purpose. One is the use of Single Photon Avalanche Diodes (SPADs) [45, 46, 47]. Despite their high resolution, SPADs require a high bias voltage and double n-wells for implementation. Furthermore, their relatively large hexagonal geometry makes them extremely difficult to scale in large pixel arrays. The second method for FLIM is sinusoidal excitation of the fluorophores and measurement of the phase difference between the emitted light and the excitation source [48]. As shown in Fig. 2.18, upon receiving a sinusoidal excitation, a fluorescent marker will emit a sine wave at its emission wavelength with a phase delay corresponding to its life-time. This approach was chosen since it can be implemented by measuring the photocurrent of n-well/p-sub photodiodes and comparing its phase with that of the input excitation light.

Figure 2.18: Phase difference between excitation and emission light in a fluorescent marker
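Extracting the lifetime from the phase lag reduces to demodulating the emission against the excitation reference and applying tan(φ) = 2π·f_mod·τ. The sketch below assumes an ideal zero-phase cosine excitation and a uniformly sampled photocurrent; it illustrates the measurement principle, not the on-chip signal chain:

```python
import numpy as np

def flim_lifetime(emission, fs, f_mod):
    """Frequency-domain FLIM: recover tau from the phase lag of the emitted
    sinusoid, using tan(phi) = 2*pi*f_mod*tau."""
    t = np.arange(len(emission)) / fs
    # IQ demodulation against the (assumed zero-phase) excitation reference
    i = 2.0 * np.mean(emission * np.cos(2 * np.pi * f_mod * t))
    q = 2.0 * np.mean(emission * np.sin(2 * np.pi * f_mod * t))
    phi = np.arctan2(q, i)                 # phase lag of the emission
    return np.tan(phi) / (2 * np.pi * f_mod)

# Synthetic emission: 3 ns lifetime, 10 MHz modulation, 1 GS/s sampling.
fs, f_mod, tau = 1e9, 1e7, 3e-9
t = np.arange(10000) / fs
emission = np.cos(2 * np.pi * f_mod * t - np.arctan(2 * np.pi * f_mod * tau))
```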

A passive pixel was implemented for this purpose. The pixel schematic is shown in Fig. 2.19a and its layout in Fig. 2.19b. To minimize crosstalk and substrate noise, the photodiode was enclosed within a ground ring for enhanced isolation.

(a) Circuit topology (b) Layout, photodiode area of 22.5µm2

Figure 2.19: Circuit diagram and layout of staggered array passive pixel

Spectrum sensing: As described in chapter 1, one advantage of fluorescence imaging is the simultaneous detection of multiple specimens in a sample by using various biomarkers which emit at different wavelengths and spectrally multiplexing them. For this, the wavelength-sensitive photodiode of [33] was implemented as the sensory element. As shown in Fig. 2.20, adopted from [49], a PMOS transistor with its source and drain connected together is used as the photo-sensitive element.

Figure 2.20: Structure of the wavelength-sensitive diode used for spectrum sensing [49], reused here with permission by author

The wavelength-dependent optical transmittance of the polysilicon gate in the 0.35µm technology was simulated in [49] and is shown in Fig. 2.21. The poly-silicon gate acts as a wavelength-dependent optical attenuator for incoming light.

Figure 2.21: Light attenuation of poly-silicon with respect to its thickness [49], reused here with permission by author

Longer wavelengths pass through the gate with less attenuation than shorter wavelengths. The outer p+ ring serves as a regular p+/n-well photodiode. By changing the well bias voltage, VB, the electron storage capacity of the region below the gate is modified. For detecting N wavelengths, λ1, λ2, ..., λN, the pixel is read out with N different Vgate voltages. The following N equations are then used to calculate the intensities of the input wavelengths.

\[
I_1 = k_{11}\phi_1 + k_{12}\phi_2 + \dots + k_{1N}\phi_N \qquad (2.6)
\]
\[
I_2 = k_{21}\phi_1 + k_{22}\phi_2 + \dots + k_{2N}\phi_N \qquad (2.7)
\]
\[
\vdots
\]
\[
I_N = k_{N1}\phi_1 + k_{N2}\phi_2 + \dots + k_{NN}\phi_N \qquad (2.8)
\]

where I_i is the detector's response for Vgate = V_i, k_{ij} are coefficients relating the effect of λ_j on I_i at Vgate = V_i, and φ_i is the unknown intensity of λ_i. These equations are solved for φ_1, φ_2, ..., φ_N; details are provided in [49]. Since the sensitivity of the pixel is degraded by the extra poly-silicon layer over the photodiode, this is compensated for by increasing the photodiode size by a factor of 4 compared to the other pixel types.
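With the coefficients k_ij characterized beforehand, recovering the wavelength intensities is a single linear solve of (2.6)-(2.8). The 2-wavelength coefficient values below are made up for illustration:

```python
import numpy as np

def unmix(K, readings):
    """Solve I = K @ phi (Eqs. (2.6)-(2.8)) for the unknown intensities."""
    return np.linalg.solve(np.asarray(K, float), np.asarray(readings, float))

K = [[1.0, 0.3],   # reading 1: response to (lambda_1, lambda_2)
     [0.4, 1.0]]   # reading 2, taken at a different Vgate
phi_true = np.array([2.0, 1.0])
readings = np.asarray(K) @ phi_true      # the N detector responses
phi = unmix(K, readings)
```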

The pixel circuit is a 3T pixel similar to Fig. 2.13a with an additional gate-bias voltage. Fig. 2.22 shows the layout of the 3T spectrum-sensing pixel.

Figure 2.22: Layout of the spectrum-sensitive pixel, photodiode area of 164µm2

The timing diagram of this pixel is shown in Fig. 2.23. It must be noted that if N wavelengths are to be measured, the frame rate for the spectrum-sensing modality is decreased by a factor of N compared to the other modalities, since N independent measurements are required to solve (2.6)-(2.8).

Figure 2.23: Timing diagram of the spectrum-sensing pixels

Staggered Pixel Array

In section 2.3.2, the concept of the folded-staggered line scanner was discussed, and its application in multi-modality imaging was shown in section 2.3.4. In this section, the complete design and layout of the staggered pixel array are discussed. Fig. 2.24, repeated here for convenience, shows the architecture of a folded-staggered multi-modality line sensor.

Figure 2.24: Detailed drawing of the multi-modality folded-staggered pixel array

Each two adjacent pixels of a line are separated by the number of rows between them, nrow, which is determined by the horizontal shift between rows (HS). HS also determines the number of sub-pixel-shifted images which are fed to the super-resolution algorithm. As highlighted in Fig. 2.24, the condition for not losing data when folding a line (i.e. starting a new period) is that the first pixel of the Nth period be aligned with the last pixel of the (N−1)th period such that it corresponds to a vertical shift of the pixel which would be the next pixel in the (N−1)th period. This requires each pixel type to cross the entire width of the period diagonally. Therefore, the height of the array is calculated as follows:

\[
W_{Period} = M \times W, \qquad N_{Height} = \frac{W_{Period}}{HS}
\]

\[
H_{Array} = N_{Height} \times H \qquad (2.9)
\]

where M is the number of modalities (pixel types), W is the width of the pixel, H is the pixel height, HS is the horizontal shift, W_Period is the width of a pixel period, N_Height is the number of pixels required to cover one period width with a diagonal stack, and H_Array is the required height of the pixel array. Equation (2.9) shows that the necessary height of the array is directly affected by the number of pixel types: adding one pixel type to the period increases the number of rows by W/HS. Decreasing HS also increases the height, since it then takes more rows to cover the entire width of the period. Fig. 2.25 shows how the five pixel types of section 2.4.1 are arranged beside each other to form the building block of a period.

The total width of the period is 63.2µm. For the small pixels, HS = 300nm and their widths are 6.9 µm - 10.2 µm. The spectrum sensing pixel’s height is 2x larger than the other pixels in the period. Therefore, HS = 600nm and its width is 15.2 µm.
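Plugging the quoted numbers into (2.9) reproduces the array dimensions to within the layout details. The ~11.7 µm row pitch below is inferred from the stated 214 rows and 2.5 mm height, so it is an assumption of this sketch:

```python
def staggered_array_geometry(w_period, hs, pixel_h):
    """Eq. (2.9): rows needed for one staggered line to cross a full period
    width in steps of HS, and the resulting array height."""
    n_height = w_period / hs
    return n_height, n_height * pixel_h

# 63.2 um period width, HS = 300 nm, ~11.7 um row pitch (assumed).
n_rows, h_array = staggered_array_geometry(63.2e-6, 300e-9, 11.7e-6)
```

This gives roughly 211 rows and a ~2.5 mm array height, consistent with the figures quoted in the text (the small difference comes from the taller spectrum-sensing pixels, which use HS = 600nm).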

The complete array consists of 214 rows and 256 columns. The array height is 2.5mm.

Since HS directly affects the required height of the pixel array, it was chosen based on the maximum allowable aspect ratio of the chip. Therefore, if the width of the pixel array were increased, HS could be made smaller to increase the number of inputs to the super-resolution algorithm, if necessary. Fig. 2.26 shows a microscopic image of part of the staggered array. In this design, nrow = 18.

Figure 2.25: Layout of one pixel period

Figure 2.26: Microscopic image of the staggered pixel array

Static 2D Imaging Pixel Array

Fig. 2.27a shows the circuit topology of the static-imaging method described in section 2.2.2. A passive pixel structure with n+/p-sub photodiodes was used to minimize the pixel area. Fig. 2.27b shows how each of the four photodiodes is arranged. A separate 93×96 array was allocated for this pixel type. The layout of this pixel is shown in Fig. 2.28.

Figure 2.27: Schematic of the passive pixel for static 2D imaging

Figure 2.28: Layout of the passive pixel for static 2D imaging

Fig. 2.29 shows the timing diagram of this pixel. On each frame, one of the S1-S4 control signals is selected. Four frames are acquired by cycling from S1 to S4. This results in four images with sub-pixel shifts, as described in section 2.4.

Figure 2.29: Timing diagram of static 2D imager passive pixels

Experimental validation of the functionality of this pixel and implementation of the super-resolution algorithm using this pixel will be presented in future work.

System specifications

In the previous sections, all of the pixel types were explained. A summary of the pixel types, including their sizes, is given in Table 2.1.

2.4.2 Readout Circuit

Fig. 2.30 shows the readout architecture of the chip for digitizing the analog pixel outputs and sending them off-chip. The readout stage is composed of two sections: the ADC bank and the output multiplexer. The ADCs are column-parallel, organized as 8 banks of 16 ADCs per bank; therefore, there are a total of 128 ADCs for 256 pixel columns. For reading out each row, first the odd columns are digitized and sent off-chip, followed by the even columns. Details about the specifications of the ADCs and output multiplexer are as follows.

#  Application        Output   Size (µm)   Photodiode area (µm2)   Type     Units in pixel period*
1  High-resolution    Voltage  6.9×8.8     0.09                    Active   1×1
2  Fluorescence       Voltage  10.2×10.2   25                      Passive  1×1
3  Polarization       Voltage  24.5×10.2   3×25                    Active   3×1
4  FLIM               Current  7.5×10.2    22.5                    Passive  1×1
5  Spectrum sensing   Voltage  15.2×22     164                     Active   2×2
6  Static imaging     Current  21×21       4×109                   Passive  NA

* Each unit pixel corresponds to a pixel occupying a unit 8µm×11µm area.

Table 2.1: Summary of pixel types

Figure 2.30: Readout architecture

ADC Bank

The ADC must accommodate the analog outputs from the pixel array. As described in section 2.4.1, the output of the pixel array may be a voltage or a current. For voltage-type pixels, the output varies between 0.5V and 2.8V, obtained by simulation. For passive pixels, the output photocurrent may be up to 100nA, based on information from the process datasheet. One method of accommodating both input types is to use two ADCs, each with a particular input type; due to area constraints, this method is not feasible. Another solution is to use one ADC with a voltage input and a trans-impedance amplifier (TIA) on each column to convert the outputs of all current-type pixels to voltages, as shown in Fig. 2.31. However, since the current from the pixels is at most on the order of 100nA, a resistor on the order of hundreds of kΩ would be needed to convert the current to a usable voltage for the ADC, and a resistor of that size is too large to fit within a pixel pitch. An alternative to a TIA is to integrate the current onto a capacitor using a capacitive trans-impedance amplifier (CTIA) [50]. This method is also unfeasible in this work, since multiple pixel types with voltage or current outputs are to be read simultaneously. Because a CTIA would be required only for current-type pixels, it would add delay in digitizing their outputs, as time must be allocated for the CTIA to integrate the photocurrent. This would result in uneven readout, or unsynchronized control signals in the row decoders and output multiplexers.

Figure 2.31: Using a TIA to convert photocurrent into voltage

One solution to the aforementioned issues is to use a configurable ADC which can accept voltage or current inputs with minimum circuitry overhead. A 2nd order ∆Σ ADC was chosen for this purpose, shown in Fig. 2.32.

The first stage of the ADC is configured to accept a current or voltage input and the second stage is shared between the two inputs. The ADC was chosen to have 10 bits of accuracy based on the noise calculations of [51], and a full-scale range to accommodate the voltage/current outputs of the pixels. Also, the conversion speed of the ADC must meet the frame rate requirement, determined by the desired moving speed of the sample. Table 2.2 summarizes the specifications of the ADC. Design details of the ADC are given in chapter 3.

Figure 2.32: Block diagram of the ∆Σ ADC

ADC architecture      2nd order ∆Σ
Sampling rate         1MHz
Power                 60µW
Bandwidth             4kHz
Resolution            10-bit
Full scale (voltage)  0.4V-2.85V
Full scale (current)  320nA
Conversion rate       7800 samples/sec

Table 2.2: Summary of ADC specifications

Output Multiplexer

The output multiplexer guides the digitized columns of a particular row to the outputs of the chip. As mentioned earlier, the ADC bank consists of 8 banks with 16 ADCs per bank. Highlighted in Fig. 2.33, the output multiplexer consists of two stages: stage-1 multiplexes the ADCs in each bank while stage-2 multiplexes the ADC banks. The desired pixel type is selected by setting the TypeSelect control signal to the desired value. Also, since the pixels are laid out as folded-staggered line sensors, the size of the sample determines what region of the pixel array is used. For a known sample size, power is saved by selecting only the ADC banks which correspond to the chip region over which the sample is moving. This is conveniently done by setting the range of the BankSel control signal. If the desired modality and sample size are known, readout is performed with no switching in the multiplexer, reducing the total power consumption by approximately 30%.
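The bank-gating described above can be sketched as a small helper. The 8-bank × 16-ADC organization and the odd/even column sharing are from the text; the function name and exact index mapping are illustrative assumptions.

```python
# Illustrative sketch: map the span of columns covered by a sample to the
# range of ADC banks that must stay enabled via BankSel. The 8-bank x 16-ADC
# organization is from the text; the mapping details are assumptions.
ADCS_PER_BANK = 16

def banks_for_columns(first_col, last_col):
    """Odd and even columns are time-multiplexed into one ADC, so column c
    maps to ADC c // 2. Returns the inclusive bank range to enable."""
    first_bank = (first_col // 2) // ADCS_PER_BANK
    last_bank = (last_col // 2) // ADCS_PER_BANK
    return first_bank, last_bank
```

For instance, a sample spanning columns 64-127 would only require banks 2-3; the remaining banks can stay idle.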

Figure 2.33: Architecture of the output multiplexer

2.5 Summary

In this chapter, the idea behind sub-pixel imaging using folded-staggered line scanning pixels and static 2D pixels was discussed. The block diagram of the implemented CMOS chip for executing these ideas was presented. In the next chapter, the readout blocks are discussed in detail.

Chapter 3

Dual-mode Current/Voltage-input ∆Σ

ADC

A low power ∆Σ ADC with a configurable voltage/current input is presented. This was designed as the readout block for a pixel array consisting of different types of pixels which may generate a voltage or current output. A dual-input 2nd order ∆Σ ADC was designed to accommodate both current and voltage inputs. The ADC noise shaping characteristics are preserved for both input types. An SNDR of 56.6dB is achieved while consuming 60µW from a 3.3V supply.

3.1 Introduction

Many multi-channel sensory applications require low-power ADCs with bandwidths on the order of kHz to MHz and 9-14 bits of resolution [54]. For this work, multiple pixel types with voltage or current outputs are to be digitized by the ADC. Therefore, the ADC must be configurable to accept both voltage and current inputs with minimum circuitry overhead for column-level integration. ∆Σ ADCs benefit from oversampling and noise shaping; therefore, for this range of resolution and bandwidth, they out-perform their Nyquist-rate counterparts in terms of power and area. Also, similar to cyclic and SAR ADCs, the ∆Σ feedback loop contains a DAC which may be conveniently configured to accept a reference voltage or current, making it a suitable choice of ADC architecture for this work.

∆Σ ADCs have received much attention in recent years [55, 56, 57, 58] for image sensors due to their convenient system-level integration capability with high resolution. In [59], a 1st order ∆Σ ADC was implemented at the pixel level in 0.18µm CMOS. Despite its minimal area, it increased the pixel area by a factor of 4 compared to imagers using similar technology [60, 61]. Utilization of an in-pixel ADC was not a viable approach in this work because, as discussed in chapter 2, the minimum number of rows of the pixel array heavily depends on the pixel width. According to section 2.4.1, one additional row must be added to the array for every HS increase in pixel width. Including in-pixel circuitry for all pixel types prohibitively increases the array height. In [63], a 128×128 pixel array with a charge-based 1st order ∆Σ ADC is presented. Here, the ∆Σ loop is split between the pixel and its respective column. A charge-based DAC is implemented inside the pixel which injects controlled charge packets into the photodiode capacitance whenever the integrated photodiode voltage crosses a certain threshold, determined by the column-level comparator. However, this method is difficult to implement for a large pixel array since the column-level comparator must operate at a higher frequency to maintain a certain frame rate while the capacitive load it must drive increases due to the increased column length.

In [35], a 2nd order column-level ∆Σ ADC with a switched-capacitor implementation is presented. It employed logic inverters as integrators, significantly reducing the power consumption of the ADC. This architecture displayed reliable low-power performance with 10-bit resolution. However, it is necessary to set the amplifiers' bias points on each clock cycle. Therefore, it cannot be used for continuous-time current integration, which is required in this work for reading out passive pixels.

In this work, a configurable 2nd order ∆Σ architecture is designed to digitize a voltage or current while preserving the signal transfer function (STF) and noise transfer function (NTF).

Application       Output      Range        Type
High-resolution   DC voltage  0.5V - 2.8V  APS
Fluorescence      DC voltage  0.5V - 2.8V  APS
Polarization      DC voltage  0.5V - 2.8V  APS
FLIM              AC current  up to 100nA  PPS
Spectrum sensing  DC voltage  0.5V - 2.8V  APS
Static imaging    DC current  up to 100nA  PPS

Table 3.1: Summary of pixel outputs

This is done without the use of explicit current-to-voltage converters, which consume power and area resources. The rest of this chapter is organized as follows: In section 3.2, a block diagram of the imaging system is shown. In section 3.3, the system-level design of the ADC is explained. Finally, in section 3.4, the CMOS implementation of the ADC is presented.

3.2 Readout Architecture

The block diagram of the image sensor chip is shown in Fig. 3.1. As described in the previous chapter, one pixel array is arranged as a multi-modality "line scanner". To acquire an image, the sample is moved over the pixel array with constant velocity. Each acquired frame from this array represents one row of the final image. The rows are later put together to form the final 2D image. The second pixel array is a 2D array of passive pixels for imaging a stationary object.

A rolling shutter readout is used to connect each pixel to its respective column-level ADC [62].

The ADCs consist of column-parallel ∆Σ modulators followed by decimation filters. The

clock generator block generates the required clock phases and control signals for these blocks.

The row decoder selects a row to be read out.

Table 3.1 summarizes the outputs of the pixel types explained in the previous chapter.

Regardless of the pixel type, the block diagrams of a PPS and an APS are consistent and are shown in Fig. 3.2a and Fig. 3.2b, respectively.

Figure 3.1: Chip block diagram

(a) Passive pixel sensor (b) Active pixel sensor

Figure 3.2: Block diagram of a passive and active pixel

An APS generates a voltage output while a PPS generates a photocurrent output. Whenever RS is driven HIGH by the row decoder, the pixel output is connected to the column for readout.

Depending on whether the selected pixel is active or passive, the column-level ADC must be configured to accept a voltage or current input. As explained in the previous chapter, current-to-voltage conversion using a TIA is not a viable solution for current readouts, since the maximum photocurrents are on the order of 100nA, meaning a feedback resistor on the order of hundreds of kΩ would be required to generate a voltage in the ADC range. To remove this requirement,

a ∆Σ ADC with a configurable input was used. The modulator sets a reference current or voltage depending on the input type, removing the need for any additional current-to-voltage converter. The modulator output is then filtered by the decimation filter block and directed to the output.

3.3 ADC Design

3.3.1 Background

The general block diagram of a ∆Σ ADC is shown in Fig. 3.3. H(z)/H(s) represents the transfer

function of the loop filter for a discrete/continuous-time modulator.

Figure 3.3: General block diagram of a ∆Σ modulator

Taking H(z) as the loop filter, the transfer function from the input to the output of the modulator may be written as:

Y(z) = STF(z)·X(z) + NTF(z)·E(z)

STF(z) = H(z) / (1 + H(z))

NTF(z) = 1 / (1 + H(z))

where STF and NTF are the signal transfer function and noise transfer function, respectively.

∆Σ ADCs benefit from oversampling and noise shaping. Oversampling is defined as the ratio of the rate at which the input is sampled to the Nyquist rate and may be written as:

OSR = fs / (2·fB)

Figure 3.4: Noise spectrum after performing oversampling and noise shaping

where fs is the sampling frequency and fB is the bandwidth of the input. Oversampling reduces the noise power in the signal band by a factor of OSR because it spreads the total noise power over a wider range of frequencies. Noise shaping further reduces the quantization noise in the signal band by shifting the noise power to high frequencies [64]. The decimation filter removes the out-of-band noise and generates a clean output. These effects are shown in the noise spectrum of Fig. 3.4.

A decimation filter of the same order as the modulator downsamples the modulator output by a factor of OSR. Therefore, each conversion takes OSR clock cycles, giving a conversion rate of fs/OSR samples per second.
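For illustration, the combined effect of oversampling and 2nd order noise shaping can be seen in a minimal behavioural model of a generic single-bit modulator; the integrator gains of 0.5 are illustrative and are not the coefficients of the implemented design.

```python
# Behavioural model of a generic 2nd order delta-sigma modulator with a
# 1-bit quantizer (a sketch; the 0.5 integrator gains are illustrative).
def dsm2(x, n):
    """Modulate a constant input x (|x| well below 1) for n cycles;
    return the stream of +/-1 quantizer decisions."""
    i1 = i2 = 0.0
    bits = []
    for _ in range(n):
        y = 1.0 if i2 >= 0 else -1.0   # 1-bit quantizer decision
        i1 += 0.5 * (x - y)            # first integrator, with DAC feedback
        i2 += 0.5 * (i1 - y)           # second integrator, with DAC feedback
        bits.append(y)
    return bits

bits = dsm2(0.3, 125_000)
# crude decimation: averaging the bit stream recovers the DC input
avg = sum(bits) / len(bits)
```

Because the integrator states stay bounded, the running average of the single-bit output converges to the DC input, which is exactly what the decimation filter exploits.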

3.3.2 System Level Design

All pixels from table 3.1 except the FLIM pixels generate a DC voltage or current. The output range of active and passive pixels is 0.5V to 2.8V and <100nA, respectively. According to the noise analysis of [51], 10 bit accuracy is desired for the ADC.

A first order modulator is sensitive to idle tones for DC inputs [35]. Also, it requires a high OSR to provide a 10-bit SQNR; therefore, a second order modulator was chosen. The block diagram of the proposed modulator is shown in Fig. 3.5.

Figure 3.5: Block diagram of the proposed ∆Σ modulator

The first stage is a discrete-time integrator for voltage inputs and a continuous-time integrator for current inputs. The second stage is a discrete-time integrator and is shared by both inputs. Therefore, the modulator is discrete-time for voltage inputs and hybrid for current inputs. For a voltage input, M is set to 1 so H1(z) and VDAC are selected. For a current input, M is set to 0 so H1(s) and IDAC are selected. H1 was chosen to be continuous-time for current inputs since current integration is "continuous" in nature. The gains of each stage were scaled to adjust the maximum output swing of each stage to 0.75FS for efficiency. Due to the low input current levels, the signal was effectively doubled by not halving it during gain scaling. This does not affect the NTF but doubles the STF gain at DC.

The NTFs of the discrete-time modulator and the hybrid modulator are designed to be equivalent. Equations (3.1) and (3.2) show the transfer functions from input to output.

Ycur(z) = 2·Iin(z) + (1 − z⁻¹)² E(z)    (3.1)

Yvol(z) = Vin(z) + (1 − z⁻¹)² E(z)    (3.2)

where Ycur(z) and Yvol(z) denote the output for Iin and Vin, respectively. Taking the sample movement speed to be 10µm/sec, a frame rate of 15fps is sufficient. Since a rolling-shutter readout is used, the frame rate is equal to 1/(Nrow × trow), where Nrow is the number of rows and trow is the time it takes to read each row. Since the odd and even columns are time-multiplexed into the ADC, it takes two ADC conversions for each row. There are 214 rows in the line-scanner array; therefore, the time for each conversion must be approximately 100µs. To achieve an SNR of 60dB from an ideal 2nd order ∆Σ modulator with a 1-bit quantizer, the OSR must be at least 32. However, 30dB of margin was allocated to thermal noise and harmonics (especially since a single-ended design is used to save area). From this, the OSR was chosen to be 125; therefore, fs = 1MHz.
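The OSR choice can be sanity-checked against the textbook peak-SQNR estimate for an ideal L-th order modulator with an N-bit quantizer; this is an idealized bound, not the thesis noise budget.

```python
import math

# Peak SQNR of an ideal L-th order delta-sigma modulator with an N-bit
# quantizer (standard textbook estimate):
# SQNR = 6.02*N + 1.76 - 10*log10(pi^(2L)/(2L+1)) + (2L+1)*10*log10(OSR)
def peak_sqnr_db(osr, order=2, nbits=1):
    return (6.02 * nbits + 1.76
            - 10 * math.log10(math.pi ** (2 * order) / (2 * order + 1))
            + (2 * order + 1) * 10 * math.log10(osr))
```

At OSR = 32 the ideal 2nd order, 1-bit estimate already exceeds 60dB (roughly 70dB), and raising the OSR to 125 provides the additional margin allocated to thermal noise and harmonics.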

3.4 CMOS Implementation of ADC

In this section, each block in Fig. 3.5 is first discussed individually. Then, the complete circuit implementation of the modulator is presented.

3.4.1 Integrators

First stage integrator: The first integrator is shown in detail in Fig. 3.6. VDAC is implemented as a capacitor charged to Vref+ or Vref− on one phase and integrated on the next phase. IDAC is implemented as a current source pushing or pulling Iref from Cint1.

Figure 3.6: First stage integrator

(a) 1st stage integrator in voltage mode (b) 1st stage integrator in current mode

Figure 3.7: Simplified diagram of the 1st stage integrator for voltage and current input

As mentioned in section 3.3.2, there are two modes of operation. As simplified in Fig. 3.7a, when M is set to 1, this integrator becomes a discrete-time integrator for voltage inputs. In this operation mode, IDAC is disconnected from the opamp and its current is directed to GND. On φ1, the input is sampled onto Cs and the opamp offset is sampled onto Coff. On φ2, the charge is then transferred onto the integration capacitor, Cint1. The transfer function from the input to the output becomes:

Vout(z) = −(Cs/Cint1) · z/(z − 1) · [Vin(z) − Sgn(y)·Vref(z)]    (3.3)
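Equation (3.3) corresponds to the simple difference equation below. The gain Cs/Cint1 = 0.5 matches the scaling implied by the ∆V calculation later in this section; the helper name is ours.

```python
# Difference-equation sketch of the first-stage switched-capacitor
# integrator, eq. (3.3): vout[n] = vout[n-1] - gain*(vin[n] - s[n]*vref),
# where s[n] = Sgn(y) is the DAC feedback decision on that cycle.
def sc_integrator(vin, fb_sign, vref, gain=0.5):
    """vin and fb_sign are per-cycle sequences; returns the output samples."""
    vout, y = [], 0.0
    for v, s in zip(vin, fb_sign):
        y = y - gain * (v - s * vref)   # charge transfer on phase phi2
        vout.append(y)
    return vout
```

When the feedback decision tracks the input (v = s·vref), the integrator output stays put, which is exactly the charge-balancing behaviour the loop relies on.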

When M is set to 0, the integrator simplifies to Fig. 3.7b. Cs and Coff are bypassed and the input is directly connected to the negative terminal of the opamp. This way, the input current is continuously integrated onto Cint1. Also, as shown in Fig. 3.5, the input current is not halved, as opposed to its voltage counterpart, doubling the STF gain. The IDAC value is decided on the rising edge of φ1. Therefore, Iref is constant for one complete cycle and is integrated for 1/fs seconds before it is subject to change. The transfer function of the integrator in continuous-time mode is:

Vout(s) = −(1/(Cint1·s)) · [Iin(s) − Sgn(y)·Iref(s)]

It is desired to design the current-domain continuous-time integrator such that the second stage does not see any difference compared to the discrete-time voltage input, in order to preserve the loop transfer function. For this, we first calculate ∆V at the output of the opamp from integrating the VDAC voltage on each integration phase. The full-scale voltage of the ADC is 0.4V-2.85V. Therefore, on each integration phase the voltage change at the output of the integrator is:

∆V = (1/2 × (2.85 − 0.4)) × (C1/Cint1) = 0.6125V

Next, we use the charge transfer equation to find the equivalent current, Iref, which will generate the same ∆V in one clock period, taking into account the integration time, 1/fs = 1µs, and Cint1 = 520fF:

Iref · ∆t = C · ∆V

Therefore, Iref = 312nA. Since the current may be pushed or pulled from the node, IFS = 624nA. Therefore, the maximum input photocurrent of 100nA is a signal at -10dB with respect to the ADC full scale (taking into account that the STF gain is 2). Nevertheless, this was taken into account during the system-level design to meet the SQNR requirement for a current input. One may avoid losing 10dB of SQNR by decreasing the clock frequency and/or Cint1 such that a lower Iref delivers the required voltage change at the output of the opamp. It is also possible to adjust the gain parameters of the second stage to take into account the gain loss in the first stage. However, since this was not required in this design, these methods were not considered.
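The charge-balance arithmetic can be reproduced directly from the quoted values; note that the raw product lands slightly above the 312nA design value, the small difference coming from rounding of ∆V.

```python
# Back-of-the-envelope check of the current-DAC sizing, using the values
# quoted in the text (a sketch, not the exact design calculation).
C_int1 = 520e-15   # integration capacitor, F
dV = 0.6125        # per-phase output step of the voltage DAC, V
dt = 1e-6          # one clock period at fs = 1 MHz, s

I_ref = C_int1 * dV / dt   # from Iref * dt = C * dV  ->  ~318.5 nA
I_fs = 2 * I_ref           # current may be pushed or pulled from the node
```

With these numbers the balance gives roughly 318nA, close to the 312nA used in the design.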

Second stage integrator: The second stage integrator is a conventional discrete-time switched-capacitor integrator with an offset cancellation capacitor, shown in Fig. 3.8. The output of this integrator is given in equation (3.4).

Vout(z) = −(Cs/Cint2) · z/(z − 1) · [Vin(z) − Sgn(y)·Vref(z)]    (3.4)

Figure 3.8: Schematic diagram of the second-stage integrator

3.4.2 Opamp design

The design specifications of the opamps are derived from the following considerations:

1. Gain requirement: The opamp gain must be sufficiently high to ensure that the loop filter's zero shifts due to finite gain are negligible. A linear gain on the order of A ≥ OSR−1 [64] ensures this. Therefore, A ≥ 128 ≡ 42dB. Nevertheless, since the opamp output contains signal components, a higher gain of 70dB was chosen to lower harmonic distortion [66].

2. Settling time requirement: The opamp settling time must be low enough that the opamp settles to 10-bit accuracy in Ts/2 seconds. Taking half this time for worst-case slewing and half for linear settling, Islew ≥ 1.8µA and UGB ≥ 4MHz.

3. Swing requirement: The opamp must deliver a 0.75FS swing. Taking into account the ADC's voltage range, this translates to a swing of 0.75V to 2.55V from a 3.3V power supply.

4. Noise requirement: The thermal noise of a switched capacitor integrator is given in [65].

Cint1 and gm are chosen such that thermal noise dominates the total noise of the modulator for a power efficient design.

Taking these considerations into account, the schematic of the opamp is given in Fig. 3.9. A two-stage opamp was used for its high gain and swing. The bias current for meeting the slew requirement also provides sufficient gain-bandwidth to meet the settling requirement.

Figure 3.9: Schematic diagram of the opamp

3.4.3 Quantizer

The 1-bit quantizer is implemented as a latched comparator [53]. The schematic of the comparator is shown in Fig. 3.10. The outputs are reset on one clock phase and the decision is made on each rising edge. To minimize metastability errors, the transistors were sized such that the comparator resolves inputs as small as 1µV within 100ns.

Figure 3.10: Schematic diagram of the comparator

3.4.4 Complete circuit

The circuit-level implementation of the modulator is shown in Fig. 3.11.

The signal path is composed of the two integrators followed by a 1-bit quantizer. For a voltage input, the IDAC currents are directed to VCM.

Figure 3.11: Complete circuit of the modulator

3.4.5 Decimation Filter

A second order decimation filter implementing a Sinc filter sufficiently attenuates the noise outside the pass-band of a 2nd order ∆Σ modulator. As shown in Fig. 3.12, a decimation filter normally consists of an accumulator and a differentiator [52]. In this work, since the inputs of the modulator are usually DC, the differentiator may be omitted. FLIM imaging is the only case where the modulator input is a single-tone sine wave. For this case, the output of the integrating block is differentiated in post-processing.

Figure 3.12: General block diagram of a decimation filter

The two-stage integrator, shown in Fig. 3.13, consists of a counter and an accumulator. Since the output is downsampled by OSR = 125, neither block may saturate within 125 clock cycles. Therefore, a 7-bit counter and a 14-bit accumulator were used.
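The counter-plus-accumulator integrator can be sketched in a few lines; names are ours, and the 7-bit and 14-bit width choices follow from the worst case 125×126/2 = 7875 < 2¹⁴.

```python
# Behavioural sketch of the counter + accumulator decimator (Sinc-style,
# differentiator omitted for DC inputs, as in the text).
def decimate(bits, osr=125):
    """bits: 1-bit modulator output (0/1). Returns one word per osr cycles."""
    out = []
    for i in range(0, len(bits) - osr + 1, osr):
        count = acc = 0
        for b in bits[i:i + osr]:
            count += b   # first-stage integrator (7-bit counter in hardware)
            acc += count # second-stage integrator (14-bit accumulator)
        out.append(acc)
    return out
```

For an all-ones stream the output reaches the worst-case value of 7875, which is why a 14-bit accumulator is sufficient.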

After the ADC conversion is done, its data is stored in a memory for readout.

Figure 3.13: Schematic of the decimation filter

3.4.6 Clock Generator

The non-overlapping clock generator is shown in Fig. 3.14 [52]. Delays I0-I3 are included to allow non-overlap time adjustment.

Figure 3.14: Non-overlapping clock generator

The timing diagram of the non-overlapping clock generator is shown in Fig. 3.15.

Figure 3.15: Timing diagram of the non-overlapping clock generator

Since complementary switches are used in the modulator, the inverted phases are also required. Using an inverter configured as in Fig. 3.16 injects additional distortion into the system because path2 includes an additional delay, due to the added inverter, compared to path1. As a result, on each clock transition one switch transistor toggles instantly while the other remains unchanged for td. The delays on the two paths must therefore be matched.

Figure 3.16: Using one inverter to generate complementary clocks

Fig. 3.17 depicts how the complementary clocks were generated. The input clock, clkin, is delayed by ∆T (8× longer than a single inverter delay). SP1 and SP2 are non-overlapping clocks with non-overlap times equivalent to 2∆T. SP1 and SP2 are used to connect S1 to the output buffers. As shown in Fig. 3.18, since the delay on SP1 and SP2 is longer than an inverter delay, the signals at Q and Q̄ are ready before any of the output switches are activated, removing the delay between Q and Q̄.

Figure 3.17: Complementary clock generator

Figure 3.18: Timing diagram of the complementary clock generator

Chapter 4

Experimental Results

4.1 ADC characterization

The chip was fabricated in a 0.35µm CMOS imaging technology. Each modulator occupies an area of 15µm×1.1mm, double the column pitch. The decimation filter occupies 36µm×235µm. The primary source of area usage in the modulator is the capacitors, implemented as poly capacitors. The area would be significantly reduced if MOM capacitors could be used, which were not available in this kit.

Each input mode was independently characterized by setting the ADC mode to its respective input (M=1 for voltage, M=0 for current). In both cases the sampling rate was set to 1MHz with a bandwidth of 4kHz. The power consumption of the ADC for both inputs is 60µW. An input signal of 1.325kHz was applied to the input of the ADC. This frequency corresponds to the 53rd bin of a 40000-point FFT. The frequency was chosen such that the effects of the 2nd and 3rd harmonics are captured in the total SNDR.
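The bin arithmetic behind this choice (coherent sampling, so the tone and its low-order harmonics land exactly on FFT bins and no spectral leakage smears the SNDR estimate) is straightforward:

```python
# Coherent-sampling check for the test tone, using the values in the text.
fs = 1_000_000   # sampling rate, Hz
n_fft = 40_000   # FFT length
k = 53           # bin index of the fundamental

f_in = k * fs / n_fft   # applied input tone, Hz
f_h2 = 2 * f_in         # 2nd harmonic also lands exactly on a bin
```

Placing the fundamental on an integer (and odd) bin guarantees that an integer number of signal periods fits in the FFT record, so the 2nd and 3rd harmonics also fall on exact bins.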

4.1.1 Voltage-input ADC

The output spectrum of the ADC for a voltage input at -6dBFS is shown in Fig. 4.1. It is seen that the SNDR is limited by the 2nd harmonic.


[Figure: output spectrum in dBFS vs. normalized frequency; fundamental at 0.001325 (−6.02dBFS), 2nd harmonic at 0.00265 (−63.9dBFS); SNDR = 55.8dB @ OSR = 125]

Figure 4.1: Output spectrum of the ADC with voltage-input

The SNDR vs. input amplitude is shown in Fig. 4.2. The peak SNDR for the voltage input is 56.6dB with a 0.6Vpp input.

4.1.2 Current-input ADC

The output spectrum of the ADC for a current input at -6dBFS is shown in Fig. 4.3. Similar to the voltage input, the SNDR is limited by the 2nd harmonic.

The SNDR vs. input amplitude is shown in Fig. 4.4. The SNDR for the current input is

limited by harmonics starting from 122nApp. The maximum SNDR is 56.5dB at a 240nApp input.

The maximum input photocurrent for the system is considered to be 100nA, or -10dBFS. It is seen from this plot that at Iin = −8.5dBFS, the SNR is 56.4dB, which is the SNR limit of the ADC. Therefore, the fact that the maximum photocurrent is at -10dBFS does not cause a significant sacrifice in the ADC resolution.

[Figure: SNDR (dB) vs. input voltage amplitude (V), logarithmic amplitude axis]

Figure 4.2: SNDR vs. voltage amplitude

[Figure: output spectrum in dBFS vs. normalized frequency; fundamental at 0.001325 (−6.03dBFS), 2nd harmonic at 0.00265 (−63.7dBFS); SNR after layout = 56.2dB @ OSR = 125]

Figure 4.3: Output spectrum of the ADC with current-input

[Figure: SNDR (dB) vs. input current amplitude (nA), logarithmic amplitude axis]

Figure 4.4: SNDR vs. current amplitude

4.1.3 Comparison

As mentioned in chapter 3, the hybrid design dictated that Iref = 312nA. The measured value of Iref that preserves the 2nd order NTF is 324nA. The PSD comparison is shown in Fig. 4.5.

The SNDR vs. input amplitude relative to full-scale for the current and voltage inputs is shown in Fig. 4.6. In the linear region, the noise level of the current-mode input is higher by 0.85dB. This difference can be attributed to noise contributions of the measurement devices and the current DAC, which inject noise directly into the signal path.

4.1.4 Summary of ADC performance

Table 4.1 shows a summary of the ADC performance and how it compares to recent ADCs used in image sensors in the same technology.

[Figure: output spectra in dBFS vs. normalized frequency for the voltage and current inputs, overlaid]

Figure 4.5: PSD comparison of the hybrid and discrete-time ∆Σ modulator

4.2 Microscope Setup

As shown in Fig. 4.7, the multi-modality microscope is composed of three components: a

sensor which acts as the imaging core, a stepper motor configured to move the sample in

precisely controlled steps, and a post-processing stage to organize the data stream from the

sensor and perform the super-resolution algorithm on the acquired images. These components

are described as follows.

4.2.1 Image Sensor

A custom-made CMOS image sensor in AMS0.35 µm imaging process was used. The sensor

is set up to enable lens-less contact imaging which requires the sample to be moved directly on

top of the image sensor. As shown in Fig. 4.8, the bonding wires of the chip were arranged such

that they are on only two opposite sides, clearing the path on top of the CMOS die so the sample

[Figure: SNDR (dB) vs. input amplitude (dBFS) for the current and voltage inputs; curves offset by 0.85dB in the linear region]

Figure 4.6: SNDR vs. input amplitude for the current and voltage inputs

Figure 4.7: System experimental setup

                       This Work  [63]   [67]               [68]          [69]
ADC architecture       ∆Σ         ∆Σ     Single Slope + ∆Σ  Single Slope  Self-reset
Sampling rate (MHz)    1          1.97   NA                 NA            10
Power                  60µW       NA     NA                 250µW*        34µW*
Bandwidth              4kHz       4kHz   NA                 NA            20kHz
Conversion rate (sps)  7800       7800   480                NA            40000
PSNR (dB)              78         NA     NA                 NA            NA
PSNDR (dB)             56.6       52     50.9               46.2          48
Area (µm²)             25500      NA     10500              58000*        NA
Dual input             YES        NO     NO                 NO            NO

* Based on calculations

Table 4.1: Comparative analysis of the ADC

can freely move along the chip without contacting the bonding wires. The packaged CMOS chip is mounted onto a PCB using a custom-made plexiglass manifold to secure the connection of the chip to the PCB. The mounted setup of the chip is shown in Fig. 4.9. Soldering was avoided so that the chip can be exchanged easily, if necessary. To acquire preliminary results from this work, instead of placing the sample directly on top of the CMOS die and moving it, an 8mm F1.3 lens was used to create a projected image of the sample, placed above the PCB, onto the image sensor. This is highlighted in Fig. 4.10. Movement of the sample shifts the projected image by ∆X′ = ∆X/M, where ∆X is the physical sample movement and M is the lens's magnification.

4.2.2 Sample movement

As mentioned in section 2.2.2, in [40, 37] the sample is moved through microfluidic channels, which usually results in rotation of the specimen, invalidating the final result. To avoid such

Figure 4.8: Bonding of chip to package

Figure 4.9: Chip mounted on PCB

issues and acquire preliminary results, the "Parker ZETA57-102-MO" biphasic stepper motor was used to move the sample. Fig. 4.11 shows the motor, taken from the supplier datasheet.

The internal gearing of the motor is designed for linear movement of its shaft with a precision of ±8µm. Since the motor moves the sample along the optical imaging axis of the chip, correct positioning of the motor with respect to the chip is important. As opposed to [40, 37], where aligning the microfluidic channels with the sensor is cumbersome, in this work the orientation of the motor shaft and CMOS die should simply be exactly 90 degrees, which is easily achieved.

Figure 4.10: Usage of a lens to project the moving sample’s image onto the chip

4.2.3 Back-end and Post Processing

Each acquired frame from the image sensor is sent to a laptop using a live data stream interface. Fig. 4.12 shows a schematic of the technique used to transfer the data to the laptop. The data from each frame consists of 214 rows × 256 columns × 10 bits, which is serialized inside the

FPGA and sampled by the oscilloscope, then transferred to the PC using the VISA interface.

The timing diagram for reading out the pixel array and transferring it to the PC is shown in Fig. 4.13. The pixel array is read row by row. In each row, there are two sets of data: one corresponding to readout from the odd columns, and the other corresponding to the even

Figure 4.11: Picture of the stepper motor

Figure 4.12: Diagram of the live data transfer hardware columns. At the end of each data set, the End-Of-Data (EOD) pulse is triggered, toggling the

ODD and EVEN signals. Also, at the beginning of every frame, a Start Frame (SF) signal issues the reset for the row decoder.

Since each frame is a large serial stream of data, it is subject to drift and misalignment of the data and clock at the receiver end. To remedy this problem, a clock-data-recovery (CDR) algorithm is implemented to synchronize the data with the sampling clock. The block diagram of data organization including CDR is shown in Fig. 4.14 and is described as follows:

As shown in Fig. 4.15, each serialized data block contains 16 bits: D[9:0] are used for the pixel data, D[15] is allocated to the SF bit, and D[14:13] are allocated to ODD and EVEN for parity checking (D[12:10] are unused). The oscilloscope sampling rate is 5× the bit rate of the FPGA; therefore, each bit transferred to the laptop contains five samples. The reconstruction and CDR algorithm first uses the SF signal to determine the first pixel of each frame and aligns the reconstruction clock such that it samples directly in the middle of each bit. Upon detection of a toggle in ODD or EVEN, the CDR script resynchronizes the reconstruction clock with the middle of each bit.

Figure 4.13: Timing of the pixel array readout with generated control signals
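The deserialization and mid-bit sampling described above can be sketched as follows; the field layout follows the text, while the helper names and the assumption that words arrive MSB-first are ours.

```python
# Software sketch of the frame deserializer. Each bit is oversampled 5x by
# the scope; each 16-bit word carries SF (D15), ODD/EVEN (D14:13) and ten
# pixel bits (D9:0). D12:10 are unused.
OVERSAMPLE = 5

def recover_bits(samples, start):
    """Sample the middle of each 5-sample bit cell, starting at 'start'
    (the offset the CDR step re-aligns on each ODD/EVEN toggle)."""
    mid = start + OVERSAMPLE // 2
    return samples[mid::OVERSAMPLE]

def unpack_word(bits16):
    """Split one 16-bit word (assumed MSB-first) into (SF, ODD, EVEN, pixel)."""
    sf, odd, even = bits16[0], bits16[1], bits16[2]
    pixel = 0
    for b in bits16[6:16]:   # D9:0 carry the 10-bit pixel value
        pixel = (pixel << 1) | b
    return sf, odd, even, pixel
```

Sampling at the centre of each five-sample cell gives the maximum tolerance to the clock drift the CDR step is correcting for.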

4.3 Experimental Image Sequence

4.3.1 Data Acquisition

A two-dimensional symmetrical sample image was used to verify the functionality of the multi-modality line scanner. Shown in Fig. 4.16, the sample image is an opaque circle with a translucent center placed on a transparent background. The sample was moved above the chip along the imaging axis via the stepper motor. Dark frame subtraction was performed to attenuate fixed pattern noise (FPN).

Fig. 4.17 shows three frames of the sequence; Fig. 4.17a corresponds to the image as it is entering the frame, Fig. 4.17b shows the frame in which half of the complete data is acquired, and finally, Fig. 4.17c shows the frame in which the sample is exiting the frame and the data acquisition is complete.

As seen from these images, Fig. 4.16 is distorted in the frames and there are black columns in between the brighter columns. These columns correspond to the pixel types which were not

Figure 4.14: Block diagram of data organization with software CDR

sensitive enough to produce an output, given the integration time and lighting conditions.

Figure 4.15: One block of 16-bit data serialized by the FPGA

4.3.2 Image Reconstruction

The raw data from moving the sample optically over the chip surface are combined to reconstruct the final image. The stepper motor moves the sample over the image sensor after each frame is successfully acquired. The stepper motor's step size is set such that on each step, the image projected onto the chip is displaced by one row; therefore, each image acquisition session consists of 428 frames. For image reconstruction, the rows of each frame which correspond to two adjacent pixels in the staggered line are placed beside each other to form one horizontal row of the final image. Performing this for all the frames produces the complete collection of horizontal rows, which are then stacked vertically to form the final image. Fig. 4.18 shows the final reconstructed image for the 8µm×11µm nwell/psub active pixel type.

Figure 4.16: Sample image used to verify the system functionality

Figure 4.17: Sample image as it is entering the field-of-view (a), in the middle of imaging (b), and on exiting the field-of-view (c)
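The row-folding reconstruction described above can be sketched as follows. The row indices and the assumption that each frame contributes exactly two staggered rows are illustrative; the actual mapping between staggered pixels and frame rows is hardware-specific.

```python
# Sketch of the reconstruction: for every acquired frame, the two rows that
# correspond to adjacent pixels in the staggered line are concatenated side
# by side, yielding one horizontal row of the final image; stacking these
# rows over all frames produces the final image.

def reconstruct(frames, row_a_idx, row_b_idx):
    """Build the final image, one output row per frame."""
    image = []
    for frame in frames:
        # adjacent staggered pixels end up side by side in the output row
        image.append(frame[row_a_idx] + frame[row_b_idx])
    return image
```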

It is observed that the reconstructed image does not contain the periodic black columns of Fig. 4.17, and the "O" shape with a lighter center, corresponding to the translucent region of the sample image, is produced. The remaining FPN on the reconstructed image is a result of misalignment of pixels during folding and of the fact that every frame contributes to each reconstructed row. Therefore, any gradient in lighting conditions or inconsistency in projection when the sample is at different angles with respect to the lens leads to FPN. The image contains additional artifacts, which are described as follows:

Figure 4.18: Reconstructed image

”Zig-zag” outline

Alignment inaccuracy between the sections of the reconstructed image causes a zig-zag effect on the edges. This is caused by a constant error in the step size of the stepper motor. The step-size error is cumulative because the same step is used to acquire all the frames. If the step size is less than a row height, the resulting image exhibits negative slopes on the edges; likewise, if the step size is greater than a row height, positive-slope edges are seen in the final image. Fig. 4.19 shows the resulting output image for four different step sizes: 2840, 2740, 2640, and 2540 unit steps. The optimal result is obtained for 2640 unit steps.
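Because the same step is repeated for every frame, a constant per-step error grows linearly with the frame count, which is why the edges slope. A back-of-the-envelope sketch (the ideal step of 2640 and the frame count of 428 are taken from the text; everything else is illustrative):

```python
# With a constant per-step error, the misalignment after n frames is simply
# the error times n, so the edge offset grows linearly across the image.

def cumulative_offset(step_size, ideal_step, n_frames):
    """Total misalignment (in unit steps) accumulated over n_frames."""
    return (step_size - ideal_step) * n_frames

ideal = 2640  # step size that gave the best experimental result
offsets = {s: cumulative_offset(s, ideal, 428) for s in (2840, 2740, 2640, 2540)}
# positive offset -> edges slope one way, negative -> the other, 0 -> straight
```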

Incorrect aspect ratio

The second image artifact is the incorrect aspect ratio of the image. Recall from Fig. 4.16 that the intended image was a perfect circle, while the reconstructed image is oval. This comes from the fact that the reconstruction algorithm assumes square pixels, whereas the pixels of the image sensor are 8µm×11µm. One solution is to stretch the final image vertically by a factor of 11/8 to account for this. This is done in Fig. 4.20; it is observed that this transformation effectively transforms the oval back into a circle.

Figure 4.19: Reconstructed image for four step sizes: 2840 (a), 2740 (b), 2640 (c), 2540 (d)
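The 11/8 vertical stretch can be sketched as a row-resampling step. The thesis does not specify the interpolation method, so nearest-neighbor resampling is assumed here for illustration.

```python
# Sketch of the aspect-ratio correction: resample the rows by 11/8 so that
# an 8um x 11um pixel renders with a square footprint.

def stretch_vertically(image, factor=11 / 8):
    """Nearest-neighbor vertical resampling by the given factor."""
    out_h = round(len(image) * factor)
    return [image[min(int(y / factor), len(image) - 1)] for y in range(out_h)]
```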

Figure 4.20: Final image stretched along its height

Chapter 5

Conclusions

5.1 Thesis Contributions

The complete design, implementation, and partial experimental validation of a multi-modality scanning contact microscope was presented in this thesis. The SCM's hardware includes a custom-made CMOS image sensor in the AMS 0.35µm imaging process. The imager is compatible with a super-resolution algorithm which generates a high-resolution image from spatially oversampled input data.

Two independent pixel arrays were designed and implemented for moving and static samples. For static samples, spatial oversampling is achieved by shifting the photodiode under the sample electronically by a fraction of a pixel. For moving samples, the technique of folded-staggering of pixels was introduced and exploited to accommodate spatial oversampling. The folded-staggered pixel array includes five types of pixels, corresponding to five imaging modalities. It is capable of acquiring 2D images from each modality independently. Experimental imaging results were obtained from a test sample using one pixel type in the folded-staggered pixel array.

A column-parallel dual-input 2nd-order ∆Σ ADC was designed, implemented, and experimentally characterized. The ADC accommodates current and voltage inputs, corresponding to the outputs of the different pixel types, with minimal circuitry overhead. The ∆Σ ADC's NTF was designed to be consistent for both input types.
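As a generic illustration of the noise shaping summarized above, the following is a minimal behavioral model of a textbook 2nd-order ∆Σ modulator with a 1-bit quantizer. This is not the thesis's dual-input circuit; the loop topology and coefficients are standard textbook assumptions.

```python
# Behavioral sketch of a 2nd-order delta-sigma modulator (double-integration
# loop, 1-bit quantizer). The average of the output bitstream tracks the DC
# input, while quantization noise is pushed to high frequencies.

def dsm2(u, n):
    """Run n samples of a 2nd-order DS modulator on a constant input u."""
    x1 = x2 = 0.0
    out = []
    for _ in range(n):
        v = 1.0 if x2 >= 0 else -1.0   # 1-bit quantizer decision
        out.append(v)
        x1 += u - v                    # first integrator with feedback
        x2 += x1 - v                   # second integrator with feedback
    return out
```

Averaging the bitstream recovers the input, which is the property the dual-input ADC must preserve for both current and voltage front-ends.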

5.2 Future Work

This thesis demonstrated the functionality of the SCM. However, a stepper motor and lens were used in this work as two components of the system. This contradicts the notion of a "low-power portable contact microscope", which is the ultimate goal of this work. Therefore, the prototype imager must be used in a modified system setup which is capable of moving a sample on the surface of the chip without using an external motor. Flowing the sample through a microfluidic channel in close proximity to the pixels is one option.

Imaging results were shown for one pixel type. The other pixel types are intended for polarized light microscopy, FLIM, spectrum sensing, and high-resolution imaging. Their imaging results may be obtained separately in future work.

In this work, five pixel types were placed beside each other to form one period of the multi-modality line scanner pixel array. Although area considerations were made throughout the design of this pixel array, the pixel period may be further optimized for area and imaging performance, reducing the required height of the array.

The theory of the proposed SCM shows how spatial oversampling is achieved for both moving and static samples. Nevertheless, imaging results after applying the super-resolution algorithm to the raw data are left for future exploration of this work.
