
Development of Accessible Hyperspectral Imaging Architectures Towards Biomedical Applications

by

C. Harrison Brodie

A Thesis presented to The University of Guelph

In partial fulfilment of requirements for the degree of Master of Applied Science in Engineering

Guelph, Ontario, Canada

© C. Harrison Brodie, December, 2018

ABSTRACT

DEVELOPMENT OF ACCESSIBLE HYPERSPECTRAL IMAGING ARCHITECTURES TOWARDS BIOMEDICAL APPLICATIONS

C. Harrison Brodie
University of Guelph, 2018
Advisor: Dr. Christopher M. Collier

Hyperspectral imaging combines the attributes of imaging (detecting physical features) and spectroscopy (detecting chemical features) and is a technology with great potential in many applications. However, to facilitate widespread adoption of hyperspectral imaging, such systems require enhanced accessibility (i.e., being inexpensive and uncomplicated) and can be developed as novel hyperspectral imaging instrumentation architectures. This thesis presents the design, development, and evaluation of an accessible hyperspectral imaging instrumentation architecture, with snapshot operation, based on the integration of readily-available components and frequency multiplexing with Fourier analyses. This is achieved through the identification of incident spatial image channels with frequency encoding from unique dynamic binary codes. Comparison to data from a commercial spectrometer reveals the performance of the hyperspectral imaging instrumentation architecture. Overall, the hyperspectral imaging instrumentation architecture compares favourably to commercially available products and can be adapted for two-dimensional operation. The presented hyperspectral imaging instrumentation architecture can provide benefit for regions of the world that have limited financial resources and have a need for accessible hyperspectral imaging technologies.

ACKNOWLEDGEMENTS

I wish to extend my sincere gratitude to my advisor, Dr. Christopher Collier. Thank you for your guidance and leadership. Your encouraging mentorship has been a positive influence on my life.

Jasen Devasagayam, thank you for your enthusiasm and collaboration on this project.

Thank you to my parents, Donna and Keith. You have loved and supported me throughout my entire life.


TABLE OF CONTENTS

Abstract

Acknowledgements

Table of Contents

1 Introduction
1.1 Hyperspectral Imaging Background
1.1.1 Optoelectronic Devices
1.1.2 Imaging
1.1.3 Spectroscopy
1.2 Hyperspectral Imaging
1.2.1 Hyperspectral Imaging Applications
1.2.2 Hyperspectral Imaging Development
1.2.3 Photonic Innovations and Low-cost Opportunities for Hyperspectral Imaging
1.3 Contributions of this Thesis
1.4 Thesis Scope
1.5 Dissemination of Results

2 Background Scientific Knowledge
2.1 Fourier Series and Transform
2.1.1 Fourier Series
2.1.2 Fourier Transform
2.2 Temporal Fourier Transform: Lock-in Detection
2.3 Spatial Fourier Transform: Diffraction and Fourier Optics

3 Instrumentation Development
3.1 Instrumentation Design
3.2 Instrumentation Analysis

4 Results
4.1 Instrumentation Evaluation
4.2 Comparison to Other Instrumentation

5 Conclusions
5.1 Hyperspectral Imaging Instrumentation Architecture
5.2 Future Work

Copyright Notice

Bibliography

Appendix
APPENDIX A: Component Selection and Experimental Challenges
APPENDIX B: MATLAB Script for HSI Instrumentation Architecture


LIST OF FIGURES

Figure 1: The electromagnetic spectrum is shown with decreasing wavelength and increasing frequency progressing over radio, microwaves, terahertz, infrared, visible light, ultraviolet, X-rays, and gamma-rays (γ-rays).

Figure 2: Whiskbroom HSI acquisition is shown. To populate the datacube, time must be used.

Figure 3: Pushbroom HSI acquisition is shown. To populate the datacube, time must be used.

Figure 4: Wavelength scan HSI acquisition is shown. To populate the datacube, time must be used.

Figure 5: Snapshot HSI acquisition is shown. To populate the datacube, time is not required.

Figure 6: The Fourier Series approximations (blue) of a periodic square wave function (green), g_SW(x), are shown in the primary (temporal or spatial) domain in subfigures (a)-(e) for n = 1, 2, 5, 10, and 100. The corresponding frequency domain representations are shown in respective subfigures (f)-(j) for the aforementioned Fourier Series approximations.

Figure 7: Periodic and aperiodic time domain signals (a)-(d) are shown along with the corresponding frequency domain representations (e)-(h).

Figure 8: A standard implementation for an optical chopper is shown.

Figure 9: Transmission diffraction gratings with (a) large diffraction grating pitch and (b) small diffraction grating pitch are shown.

Figure 10: The schematic of the HSI instrumentation architecture is shown. The inset shows scanning electron microscope images of the diffractive element, being the polycarbonate layer of a CD. © 2018 IEEE.

Figure 11: The dynamic coded aperture is shown. The spatial image channels are identified in the inset. The reference channel is also identified. © 2018 IEEE.

Figure 12: The Fourier amplitude spectra are shown as a surface plot versus the argument variables, being CCD pixel, j, and frequency, f. The DC Fourier amplitude spectrum can be seen at f = 0 Hz. The pixel Fourier amplitude spectra associated with spatial image channels i = 1, 2, and 3 can be seen at f = F1 = 19.3 Hz, f = F2 = 15.4 Hz, and f = F3 = 11.6 Hz, respectively. © 2018 IEEE.

Figure 13: The amplitude of the Fourier spectra for each spatial image channel is shown as a function of CCD pixel, j (first axis). The encoded frequency bins are plotted as light grey, dark grey, and black lines for A(j, f = F1), A(j, f = F2), and A(j, f = F3), respectively. The summation of the Fourier spectra, Asum, is plotted as a function of CCD pixel, j (second axis). The DC Fourier spectrum, ADC, is plotted as a function of CCD pixel, j (third axis). © 2018 IEEE.

Figure 14: The amplitude of the Fourier spectra A(λ, f = F1) (first axis), A(λ, f = F2) (second axis), and A(λ, f = F3) (third axis) are shown as a function of wavelength, λ, for the first, second, and third spatial image channels (i = 1, 2, and 3), respectively. The reference spectrum, Aref(λ) (fourth axis), is shown for comparison. © 2018 IEEE.

1 Introduction

1.1 Hyperspectral Imaging Background

1.1.1 Optoelectronic Devices

The technological field of electronics emerged near the end of the industrial revolution [1].

This technological field has fundamentally changed the human existence. Evidence of this fundamental change can be seen in day-to-day life through the use of cell phones, televisions, internet, and computers. In more recent years, the technological field of photonics has emerged.

Photonics refers to the technology based around the use of photons (the fundamental particles of light), rather than electrons (the fundamental particles of electricity), and holds great potential for continued improvement of the human existence.

Simple photonic devices include eye glasses and telescopes, however, greater potential exists through the combination of photonics and electronics, through optoelectronic devices [2].

Optoelectronic devices can be found in many sectors, including military applications (e.g., light detection and ranging (LIDAR) [3], optical detection of trace explosives [4], adaptive camouflage systems [5]), manufacturing applications (e.g., machine vision systems [6], photolithography [7], and automated surface inspection systems [8]), telecommunication applications (e.g., fiber-optic systems [9], free space optical communications [10], electro-optic network switches [11]), and biomedical applications (e.g., optical identification of diseased biomarkers [12], laser in-situ keratomileusis (LASIK) eye surgery [13], real-time in-vivo endoscopy [14], and optoelectronic point-of-care devices [15]). These biomedical applications are of particular interest—given their ability to directly save and improve human lives—and have manifested themselves through imaging [16] and spectroscopy [17].


1.1.2 Imaging

Imaging is the capturing of physical (i.e., spatial) data, and it is ubiquitous in biomedical technologies through its use in histopathology [18] and radiology [19]. However, imaging technologies have evolved over time from humble beginnings. Until the advent of digital image sensors like the charge-coupled device (CCD), conventional imaging (i.e., photography) required one-time-use films to store spatial information from incident light exposure. The spatial information stored on film was not accessible until the exposed film was developed, a chemical process that required at least fifteen minutes to produce a finished photograph.

Since this point, digital imaging has emerged as a momentous advancement in the field of photography. The advancement of digital imaging eliminates the need to store spatial information onto film and the associated film exposure/development time. Digital image sensors like the CCD can acquire optical intensities at many discrete locations within a single field of view, with the output of each location being saved digitally as pixels.

1.1.3 Spectroscopy

In contrast to imaging, spectroscopy is the capturing of chemical (i.e., spectral) data, and it is ubiquitous in biomedical technologies through its use in detection of biomarkers possessing unique spectral signatures [20]. Spectroscopy technologies have been developed over each part of the electromagnetic spectrum (shown in Fig. 1).


Figure 1: The electromagnetic spectrum is shown with decreasing wavelength and increasing frequency progressing over radio, microwaves, terahertz, infrared, visible light, ultraviolet, X-rays, and gamma-rays (γ-rays).

At the highest energy, gamma-ray spectroscopy is used in gamma-camera-based cancer imaging systems [21], prompt gamma spectroscopy (PGS) for regulation of radiotherapy particle beams [22], and PGS for in-vivo supervision of boron based cancer therapies [23]. At the next highest photon energy, X-ray spectroscopy is used in the characterization of low solubility pharmaceutical drugs [24], mammography dose calculations [25], and low dose X-ray phase imaging of soft tissues [26]. Ultraviolet spectroscopy is used in solar UV exposure estimation [27], auto-fluorescence imaging of colon growths [28], and fluorescence microscopy of tooth calculus [29]. Visible light spectroscopy is used to acquire the cellular metabolism of tissues for inferencing various pathologies [30], diagnosing chronic fatigue syndrome from blood serum [31], and determining influenza infection from nasal aspirates [32]. Infrared spectroscopy is used in assessing cellular interactions of new cancer drugs [33], intra-operative detection of lymph node cancer metastasis [34], and identification of irregular colon polyps for cancer screening [35].

Terahertz spectroscopy, making use of time-domain terahertz pulses [36]-[38], is used in point-of-care microfluidic diagnostics [39], which can detect malignant cancer cells [40]-[41], with potential for use as non-invasive prenatal testing [42]. Microwave and radio spectroscopy are not typically used in biomedical applications.

These various spectroscopy devices all have unique features that make them suitable for their respective biomedical applications; however, they are expensive and can be difficult to use (requiring training). As such, spectroscopy devices are often used in developed countries, saving many lives, but rarely used in developing countries, costing many lives [43].

1.2 Hyperspectral Imaging

The medical benefits of imaging and spectroscopy devices are numerous. However, when used in isolation, these technologies often leave a medical professional without a complete diagnosis. For example, a spectroscopy device for detection of skin cancer would provide spectral signatures of cancerous tissue; however, it would only be applied in a few areas and yield little information on the location of affected areas. On the other hand, an imaging device for detection of skin cancer would provide a large imaging area but would not provide the required measurement of the spectral signature of the cancer. As such, it is desirable to combine the attributes of imaging and spectroscopy devices, and this can be achieved through hyperspectral imaging (HSI) devices.

Hyperspectral imaging devices incorporate the spatial information acquired from digital imaging, with the spectral information obtained from spectroscopy. These HSI devices have evolved over time. Initially, imaging devices were not hyperspectral and were based around monochrome imaging, with information on optical intensity but not wavelength. This then led to the development of polychrome imaging, with information on optical intensity and discrete colour wavelengths. Finally, full HSI was obtainable, with information on optical intensity over continuous wavelengths [44]-[47].


1.2.1 Hyperspectral Imaging Applications

As HSI, and its associated instrumentation, serves applications which require identification through the measurement of spectral features, rather than just spatial features, HSI lends itself to biomedical applications requiring both spatial and spectral features, such as skin cancer detection [48]-[50], assessing retina function [51], and monitoring oxygenation of brain tissue [52].

However, as the complexity of applications has increased, so too has the complexity (and associated costs) of HSI systems. As such, developing countries have a need for the development of HSI instrumentation architectures that are accessible, being both inexpensive and uncomplicated. This challenge has been noted in the literature [43], [53].

1.2.2 Hyperspectral Imaging Development

Hyperspectral imaging systems have progressed from whiskbroom, to pushbroom, to wavelength scan, to snapshot instrumentation architectures, with each iteration becoming progressively more inaccessible due to the increased complexity and associated cost. This complexity comes about because HSI instrumentation architectures must store wavelength information in addition to spatial information in a datacube, which is a multidimensional array.

Thus, in a HSI instrumentation architecture, the sensor must possess capabilities to capture one dimension beyond that of the image. This forms an image/sensor dimensional relationship of scalar/vector for whiskbroom instrumentation architectures, vector/matrix for pushbroom instrumentation architectures, matrix/tensor for wavelength scan instrumentation architectures, and matrix/tensor for snapshot instrumentation architectures.


Figure 2: Whiskbroom HSI acquisition is shown. To populate the datacube, time must be used.

Whiskbroom instrumentation architectures, at the simplest level, have a single image channel (scalar) with a one-dimensional data array sensor (vector), as shown in Fig. 2. Here, the initial datacube is shown after a single image acquisition. The spectral dimension of a single image pixel is resolved and mapped to the Y dimension of the image sensor. The remaining image pixel spectra are obtained through additional acquisitions over time. This instrumentation architecture can be implemented through an imaging fiber-optic cable coupled to a diffraction grating and linear optical sensor (corresponding to one spectral dimension), e.g., the Thorlabs CCS200 spectrometer.

Whiskbroom instrumentation architectures can be undesirable due to their reliance on time.


Figure 3: Pushbroom HSI acquisition is shown. To populate the datacube, time must be used.

Pushbroom instrumentation architectures, at the next level of complexity, have a one-dimensional image channel array (vector) coupled with a two-dimensional data array sensor (matrix), as shown in Fig. 3. This instrumentation architecture is implemented through linear imaging coupled to a diffraction grating and a two-dimensional optical sensor [54] (with one spatial and one spectral dimension). To image over the next spatial dimension and form a two-dimensional image (matrix), the pushbroom instrumentation architecture scans line-by-line over time, forming a three-dimensional datacube (tensor). Pushbroom instrumentation architectures can be undesirable due to their reliance on time.


Figure 4: Wavelength scan HSI acquisition is shown. To populate the datacube, time must be used.

Wavelength scan instrumentation architectures, at a similar level of complexity to pushbroom instrumentation architectures, have a two-dimensional image channel array (matrix) with a two-dimensional data array sensor (matrix), as shown in Fig. 4. Spectral information is obtained by applying different colour filters one after another over time, forming a three-dimensional datacube (tensor). Wavelength scan instrumentation architectures can be undesirable due to their reliance on time.


Figure 5: Snapshot HSI acquisition is shown. To populate the datacube, time is not required.

Snapshot instrumentation architectures, at the highest level of complexity, have a two-dimensional image channel array (matrix) with a two-dimensional data array sensor (matrix), as shown in Fig. 5. However, in contrast to the pushbroom and wavelength scan instrumentation architectures, the snapshot instrumentation architecture does not rely on time to form the remaining (spatial or wavelength) dimension. With this in mind, snapshot instrumentation architectures are desirable.

Solutions to this significant challenge in instrumentation and measurement include image mapping spectrometry [55], image-replicating imaging spectrometry [56], and coded aperture snapshot spectral imaging (CASSI) [57] systems. Generally, these techniques require a large format image sensor to allow sufficient spatial separation of the three measured dimensions (two spatial and one wavelength) [58], with cost and complexity inhibiting widespread adoption [43], [53]. The CASSI system incorporates a static coded aperture which avoids the large format image sensor, thereby lowering cost. The CASSI system and its subsequent iterations, however, require iterative software algorithms and sometimes a mechanical-piezo translation stage for improved performance [59], both of which maintain a high level of complexity. With this in mind, there is a great need for accessible (inexpensive and uncomplicated) HSI instrumentation architectures with snapshot operation. Meeting this need is a central motivation for this thesis.

1.2.3 Photonic Innovations and Low-cost Opportunities for Hyperspectral Imaging

At the same time as the developments described above, innovations in optical chopper instrumentation and technologies have been explored for optical measurements as an effective means to encode light with a periodic amplitude modulation at a defined frequency. This optical chopper instrumentation is very effective at reducing a majority of pink and white noise (i.e., 1/f and background noise, respectively) from a measured signal [60] and represents an opportunity for use in HSI. Chaudry et al. describe the separation of an optical beam into discrete channels through optical chopper instrumentation and spatial encoding [61]. The spatial information from the measured optical signals is reconstructed using frequency domain analysis.

The concept described above has been expanded to include diffractive elements and amplitude modulation of different wavelengths [62]. These works in instrumentation and measurement have enabled and inspired investigations towards measurement of chemical signatures from voltage perturbations of optical disc drive photodetectors [63]. With this in mind, utilizing optical disc technologies can allow for a hybrid adaptation of both an optical chopper and a diffraction grating into one instrumentation system. This combination has great promise to be a low-cost and accessible method to resolve light into component wavelengths and simultaneously encode spatial information onto incident light for channel separation.

1.3 Contributions of this Thesis

This thesis introduces an accessible HSI instrumentation architecture and shows its design, development, and evaluation. The HSI instrumentation architecture achieves snapshot operation using readily-available components (e.g., optical discs) and uncomplicated Fourier (frequency-domain) analyses. The HSI instrumentation architecture takes inspiration from the CASSI system and incorporates a dynamic coded aperture (in contrast to the static coded aperture of the CASSI system), whereby spatial image channels are encoded with a dynamic binary code at different repetition periods which can be multiplexed using Fourier analyses. As such, each spatial image channel is assigned a frequency bin based on the dynamic coded aperture. (This concept has similarities to the instrumentation and working principle of Hadamard transform imagers [64].)

Each channel of a one-dimensional image channel array (vector) is captured on a one-dimensional data array sensor (vector). A two-dimensional sensor is avoided as Fourier analyses are used to extract the data associated with each spatial image channel. (This leaves the second dimension of a CCD sensor available for spatial data and for future development of a two-dimensional HSI instrumentation architecture.) Ultimately, the HSI instrumentation architecture is made accessible through incorporation of ubiquitous compact disc (CD) technology (i.e., optical disc technology) and easy-to-apply Fourier analyses. Additionally, the dynamic coded aperture mitigates low frequency noise, as such noise is not present at the frequency bins of the dynamic binary codes.


1.4 Thesis Scope

The rest of this thesis is laid out in the following way. Chapter 2 provides the background scientific knowledge. Chapter 3 presents the instrumentation development of the HSI device. Chapter 4 presents the results of the HSI device. Chapter 5 provides concluding remarks and recommendations for future work.

1.5 Dissemination of Results

Elements of the results from this thesis are disseminated as a journal article in IEEE Transactions on Instrumentation and Measurement [65] and as presentations at the Photonics North 2018 Conference and CSBE/SCGAB Annual General Meeting and Technical Conference.


2 Background Scientific Knowledge

This chapter investigates the background knowledge that is required for understanding of this thesis. Specifically, temporal and spatial Fourier transforms are ultimately required for the system via optical lock-in detection and optical diffraction, respectively. The analysis begins with a description of the Fourier Series and its relationship to the Fourier Transform.

2.1 Fourier Series and Transform

2.1.1 Fourier Series

The Fourier Series is first considered. The Fourier Series describes how periodic functions can be represented by (and broken down into) underlying cosine and sine functions at integer multiples of the fundamental frequency. This is described generally through

g_{periodic}(x) = \frac{1}{2} a_0 + \sum_{n=1}^{\infty} a_n \cos\left(\frac{2\pi n}{T} x\right) + \sum_{n=1}^{\infty} b_n \sin\left(\frac{2\pi n}{T} x\right),    (1)

where g_{periodic}(x) is the periodic function, a_0 = a_{n=0} is the DC offset, a_n is each of the (even) cosine function amplitudes, b_n is each of the (odd) sine function amplitudes, and n indexes the integer multiples of the fundamental frequency, being T^{-1} for this general case of period, T. The argument x is a placeholder that can later be replaced with either time, t, or space, z. The DC offset, cosine function amplitudes, and sine function amplitudes take different forms when describing different periodic functions, and are found through

a_n = \frac{2}{T} \int_{-T/2}^{T/2} g_{periodic}(x) \cos\left(\frac{2\pi n}{T} x\right) dx,    (2)


which describes both DC offset and cosine function amplitudes, and

b_n = \frac{2}{T} \int_{-T/2}^{T/2} g_{periodic}(x) \sin\left(\frac{2\pi n}{T} x\right) dx,    (3)

which describes sine function amplitudes. Here, the periodic function is represented more and more closely as n approaches infinity.

The Fourier Series can be further illustrated through consideration of a periodic square wave function, g_SW(x). Shown in Fig. 6, the periodic square wave function is represented more and more closely by a Fourier Series with the progression of n = 1, n = 2, n = 5, n = 10, and n = 100 in Fig. 6(a)-(e), respectively. This progression is represented within the frequency-domain in Fig. 6(f)-(j), and the addition of more and more harmonics can be observed as n increases.
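As a worked instance of Eqs. (2) and (3) (assuming, consistent with the sine-only reconstructions in Fig. 6, a zero-mean, unit-amplitude, odd square wave), the cosine amplitudes vanish, a_n = 0, and the sine amplitudes evaluate to

b_n = \frac{2}{\pi n}\left[1 - \cos(\pi n)\right] = \begin{cases} 4/(\pi n), & n \text{ odd}, \\ 0, & n \text{ even}, \end{cases}

so only odd harmonics appear in the frequency domain, with amplitudes decaying as 1/n.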


Figure 6: The Fourier Series approximations (blue) of a periodic square wave function (green), g_SW(x), are shown in the primary (temporal or spatial) domain in subfigures (a)-(e) for n = 1, 2, 5, 10, and 100. The corresponding frequency domain representations are shown in respective subfigures (f)-(j) for the aforementioned Fourier Series approximations.

2.1.2 Fourier Transform

The Fourier Series is a tremendous aid in analysing periodic signals. However, for aperiodic signals, the Fourier Series can be modified. Such aperiodic signals are considered in Fig. 7.

Figure 7: Periodic and aperiodic time domain signals (a)-(d) are shown along with the corresponding frequency domain representations (e)-(h).

Here, as the period approaches infinity, the periodic signal becomes aperiodic. This has important consequences for the spacing between the corresponding harmonic frequencies, as this spacing has a reciprocal relationship with the period. That is to say, the resolution between harmonic frequencies becomes infinitesimally small as the period becomes infinite (i.e., the frequency-domain becomes continuous). Frequency representations of this sort manifest themselves through the Fourier Transform, where the continuous frequencies describing an aperiodic signal are

G(X) = \int_{-\infty}^{\infty} g(x) e^{-j2\pi Xx} dx.    (4)

The concept of the Fourier Transform is applied in this thesis with both temporal arguments, through the lock-in detection, and spatial arguments, through diffraction and Fourier optics.

2.2 Temporal Fourier Transform: Lock-in Detection

The Fourier Transform can be applied with a temporal argument to achieve lock-in detection in optical systems. Such an optical system is shown in Fig. 8, where an optical chopper is used to modulate two optical signals through a periodic spacing of holes. Each optical beam forms a temporal square wave with a positive DC offset, measured on the photodiode. The temporal Fourier Transform can be applied to the signal that is measured on the photodiode, and the contribution from each beam will appear at its corresponding frequency. By selecting either modulation frequency with a lock-in detector (in relation to the frequency reference signal), the noise and background signal can be removed. This concept of an optical chopper and lock-in detector is used in this thesis for a low-cost optical implementation whereby HSI channels are separated through Fourier Transform analyses.
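As a minimal numerical sketch of this frequency-domain selection (the sampling rate, record length, and noise level below are illustrative assumptions, not the thesis apparatus), a chopped beam can be simulated and its amplitude recovered from the Fourier bin at the chopping frequency:

% Minimal lock-in-style sketch (assumed values, base MATLAB only): a beam
% chopped at Fc forms a square wave with a DC offset, and its amplitude is
% recovered from the Fourier bin at Fc despite additive broadband noise.
Fs = 1000;                              % sampling frequency (Hz)
t = (0:1/Fs:10-1/Fs)';                  % 10 s record, giving 0.1 Hz bins
Fc = 19.3;                              % chopping frequency (Hz)
beam = 0.5 + 0.5*sign(sin(2*pi*Fc*t));  % chopped beam (square wave, DC offset)
meas = beam + 0.3*randn(size(t));       % measured photodiode signal with noise
S = fft(meas)/numel(meas);              % normalized discrete Fourier transform
f = (0:numel(meas)-1)'*Fs/numel(meas);  % frequency axis (Hz)
[~, k] = min(abs(f - Fc));              % Fourier bin nearest Fc
A_Fc = 2*abs(S(k));                     % single-sided amplitude recovered at Fc

Broadband noise spreads its power across all frequency bins, while the chopped beam concentrates at the chopping frequency (and its odd harmonics), which is why the recovered amplitude is largely insensitive to the added noise.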


Figure 8: A standard implementation for an optical chopper is shown.

2.3 Spatial Fourier Transform: Diffraction and Fourier Optics

The Fourier Transform can be applied with a spatial argument to achieve controlled spatial diffraction in optical systems. This occurs through the diffraction grating shown in Fig. 9. Here, a near-field spatial pattern will form a far-field spatial pattern that is the Fourier Transform of the near-field spatial pattern. For the diffraction grating, the near-field pattern is the diffraction grating and the far-field pattern is a repetitive pattern of many rainbows. The Fourier Transform relationship is seen as there is a reciprocal relationship between the spacing in the near-field and the spacing in the far-field. This is illustrated in Fig. 9(a), which shows a large near-field spacing of the pitch between tines on the grating, with a smaller far-field spacing of the pitch between the (diffracted) rainbows, while Fig. 9(b) shows the opposite. This reciprocal relationship can be seen from the Fourier Transform pair

G\left(\frac{X}{K}\right) = \mathcal{F}\{g(Kx)\},    (5)

where K is a constant. The exact wavelength can be found (for a transmission grating with normal incidence) when the diffraction angle, Θ, is known through

\lambda = \frac{d \sin\Theta}{m},    (6)

where d is the diffraction grating pitch, m is the order of diffraction, and λ is the wavelength of the diffracted light.
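As a worked instance of Eq. (6) (borrowing, for illustration, the a_1 = 1.50 μm groove pitch of the compact disc grating adopted in Chapter 3), a first-order (m = 1) beam diffracted at Θ = 20° corresponds to

\lambda = \frac{(1.50\ \mu\text{m}) \sin(20°)}{1} \approx 513\ \text{nm},

i.e., green light; for a fixed pitch and order, larger diffraction angles map to longer wavelengths.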

The concept of diffraction gratings (and Fourier optics) is used in this thesis for spatial separation of HSI signals into constituent wavelengths.

Figure 9: Transmission diffraction gratings with (a) large diffraction grating pitch and (b) small diffraction grating pitch are shown.


3 Instrumentation Development

This chapter discusses the required instrumentation development and analysis techniques for the experimental HSI instrumentation architecture.

3.1 Instrumentation Design

The experimental instrumentation design schematic of our HSI instrumentation architecture is shown in Fig. 10. The HSI instrumentation architecture takes an image beam and separates it into three spatial image channels which then undergo encoding and diffraction from the dynamic coded aperture. These diffracted beams are then directed onto a charge-coupled device (CCD) sensor which acts as the one-dimensional data array sensor. (A Toshiba TCD1304 is selected as the CCD sensor as it is inexpensive, readily-available, and commonly found within commercial spectrometers.) The diffracted beams spatially overlap on the CCD sensor; however, this challenge is overcome as the spectra are resolved through filtering at specific frequency bins corresponding to the unique dynamic binary codes for each spatial image channel. These details are expanded upon below. The image beam, which is incident on the dynamic coded aperture, is separated into separate spatial image channels identified as i = 1-3. Each channel is encoded with a different dynamic binary code of frequency f = F1 = 5f0 = 19.3 Hz, F2 = 4f0 = 15.4 Hz, and F3 = 3f0 = 11.6 Hz, while the dynamic coded aperture is rotated by the motor at a fundamental frequency of f0 = 3.85 Hz.


Figure 10: The schematic of the HSI instrumentation architecture is shown. The inset shows scanning electron microscope images of the diffractive element, being the polycarbonate layer of a CD. © 2018 IEEE.

The chosen fundamental frequency, f0, is interwoven in the system design for acquisition of data, such that both aliasing and harmonic interference are avoided. These performance reducing phenomena are mitigated by ensuring the frequency of the outer channel is below one half of the sampling frequency, Fsamp. With this in mind, the fundamental frequency of the motor must satisfy the criterion of f0 ≤ Fsamp/(4N − 2), where N = 3 is both the number of channels and the number of binary increments on the inner channel. For Fsamp = 120 Hz, a fundamental motor frequency of 3.85 Hz is chosen as it falls well within the upper limit of 12 Hz.
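To make this criterion concrete: f0 ≤ Fsamp/(4N − 2) = 120 Hz/10 = 12 Hz, and the chosen f0 = 3.85 Hz places the encoded channels at F1 = 5f0 ≈ 19.3 Hz, F2 = 4f0 = 15.4 Hz, and F3 = 3f0 ≈ 11.6 Hz, all comfortably below the Nyquist frequency of Fsamp/2 = 60 Hz.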

The dynamic coded aperture, constructed to maintain the aforementioned inequality, is divided into only three spatial image channels to facilitate the experimentation of this work; however, many additional spatial image channels can be added as required for future HSI applications.

A brushless DC motor is used to rotate the dynamic coded aperture. Diffracted light leaving the dynamic coded aperture that is incident on the CCD sensor is collected using an electronic shutter function with a sample and hold circuit. This sample and hold circuit successively measures each of the CCD sensor's discrete photoreceptive pixels at Fsamp = 120 Hz. The continuous electrical output of the CCD sensor is a time-varying voltage, with the optical intensity of each CCD pixel being output as a voltage magnitude for a duration equal to the reciprocal of the total number of CCD pixels multiplied by the reciprocal of the CCD sampling frequency. A National Instruments 6343 X series data acquisition system is used to digitize the continuous electrical output of the CCD signal for streamlined data reconstruction using uncomplicated Fourier analyses.


The dynamic coded aperture is shown in Fig. 11. The spatial image channels can be seen and are identified in the inset as i = 1-3. Each spatial image channel is assigned a dynamic binary code with a fifty percent duty cycle. The three spatial image channels have a channel pitch of a2 = 800 μm and channel width of w = 400 μm. The reference channel is also identified in Fig. 11. The fundamental frequency is monitored by the reference beam (low intensity laser) and the photodiode (Thorlabs DET36A) (as shown in Fig. 10). To minimize the optical propagation distance between the dynamic coded aperture and the diffractive element, these components are bonded together.

The design of the dynamic coded aperture is printed (with high-resolution) onto a polyethylene film and bonded to the diffractive element, being a rotational diffraction grating. The selected diffractive element is the polycarbonate layer of a CD. The polycarbonate layer of a CD has grooves which inherently spiral to form a rotational diffraction grating with pitch of a1 = 1.50 μm.

This pitch is measured through scanning electron microscope images, as shown in the Fig. 10 insets with progressive magnification (scale bars of 50 μm, 5 μm, and 1 μm from left to right). The use of the polycarbonate layer of a CD avoids expensive fabrication of a custom rotational diffraction grating. As CDs are a ubiquitous product nearing obsolescence, they are an accessible and sustainable optical tool to be recycled, repurposed, and integrated into emerging instrumentation and technologies.


Figure 11: The dynamic coded aperture is shown. The spatial image channels are identified in the inset. The reference channel is also identified. © 2018 IEEE.


3.2 Instrumentation Analysis

In order to interpret results from the CCD sensor, the connection must be made between each spatial image channel on each CCD pixel and the corresponding wavelength. This connection can be developed and analysed by investigating the experimental instrumentation geometry of the HSI instrumentation architecture. To begin this geometry investigation, one can consider that each channel on the image beam encounters the dynamic coded aperture located at Cartesian coordinates of (x_i, y_i) where

x_i = 0,    (7)

and

y_i = a_2(1 - i).    (8)

These Cartesian coordinates are positioned relative to the (0, 0) origin located at the i = 1 channel.

Each section of the image beam passes through the dynamic coded aperture, undergoes diffraction from the rotational diffraction grating, and exits at a diffraction angle of

\theta_{diff} = \tan^{-1}\left(\frac{y_j - y_i}{x_j - x_i}\right),    (9)

and strikes the CCD sensor at Cartesian coordinates of (x_j, y_j) where

x_j = b - (j - 1) a_3 \sin(\theta_{rot}),    (10)

and

y_j = c + (j - 1) a_3 \cos(\theta_{rot}).    (11)

Here a_3 = 8 μm is the pitch of the CCD pixels on the CCD sensor and a Cartesian coordinate of (b, c) = (6 cm, 3 cm) is the location of the j = 1 pixel on the CCD sensor. The CCD sensor is rotated about the (b, c) Cartesian coordinate at an angle of \theta_{rot} = -8°. To continue this geometry investigation, one can consider the wavelength Fourier amplitude spectrum of the wavelengths of the i-th channel, A(λ, f = F_i), to be equal to the pixel Fourier amplitude spectrum of the corresponding i-th spatial image channel, A(j, f = F_i), where the relationship between wavelength, λ, and diffraction angle is

\lambda = a_1[\sin(\theta_{inc}) - \sin(\theta_{diff})],    (12)

where \theta_{inc} = 70° is the angle of oblique incidence of the image beam with respect to the normal to the dynamic coded aperture (i.e., the horizontal axis as shown in Fig. 10). Eq. (12) can be used to convert between the j-th CCD pixel and its corresponding wavelength for the i-th spatial image channel. This conversion is of great importance to the operation of the HSI instrumentation architecture.
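The following minimal sketch collects Eqs. (7)-(12) into a pixel-to-wavelength mapping. It is a numerical illustration using the parameter values quoted in this chapter (it is not the Appendix B script, and the variable names are chosen here for readability):

% Sketch of the Eq. (7)-(12) geometry: map CCD pixel j to wavelength for a
% chosen spatial image channel i, using the Chapter 3 parameter values.
a1 = 1.50e-6;                                   % diffraction grating pitch (m)
a2 = 800e-6;                                    % spatial image channel pitch (m)
a3 = 8e-6;                                      % CCD pixel pitch (m)
b = 0.06;  c = 0.03;                            % location of pixel j = 1 (m)
theta_rot = -8*pi/180;                          % CCD rotation angle (rad)
theta_inc = 70*pi/180;                          % oblique incidence angle (rad)
j = (1:3500)';                                  % CCD pixel indices
i = 1;                                          % spatial image channel
xi = 0;                                         % Eq. (7)
yi = a2*(1 - i);                                % Eq. (8)
xj = b - (j - 1)*a3*sin(theta_rot);             % Eq. (10)
yj = c + (j - 1)*a3*cos(theta_rot);             % Eq. (11)
theta_diff = atan((yj - yi)./(xj - xi));        % Eq. (9)
lambda = a1*(sin(theta_inc) - sin(theta_diff)); % Eq. (12), wavelength (m)

With these values, the mapping spans roughly the 400-700 nm window evaluated in Chapter 4, with the exact endpoints set by the channel index i and the sensor placement (b, c).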


4 Results

4.1 Instrumentation Evaluation

The fundamental operation of the HSI instrumentation architecture is tested experimentally.

Here, it must be confirmed that the overlapping spectra on the CCD sensor can be resolved by applying the Fourier transform and searching for each dynamic binary code at the corresponding frequency bin. Representative experimental results of the HSI instrumentation architecture are shown in Fig. 12, with the spatial image channels all being set to the same white light emitting diode (LED) spectrum. In this situation, the spatial image channels will overlap on the CCD sensor, and CCD pixels will have superposition and interference of three different wavelengths from the three spatial image channels. In the Fig. 12 experimental results, the Fourier transform is applied to the time-domain data of each pixel for every spatial image channel of the image beam. The Appendix provides the MATLAB script used in this analysis, as well as the rationale for the final experimental configuration and component selection. The results of this Fourier transform are depicted as the Fourier amplitude spectrum, A(j, f), being a function of frequency, f, and CCD pixel, j, which are shown on the remaining axes. The reconstructed spectra are found within three distinct frequency bins at five, four, and three times the fundamental encoding frequency, corresponding to spatial image channels of i = 1, 2, and 3. These frequency bins are at f = F1 = 19.3 Hz, F2 = 15.4 Hz, and F3 = 11.6 Hz, respectively, and a unity-amplitude spectrum is reconstructed at each frequency bin as a function of the CCD pixel, j. The remainder of the surface plot of the Fourier amplitude spectra corresponds to frequencies that are not within the designated frequency bins as encoded by the dynamic coded aperture.
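The per-pixel analysis just described can be summarized with the following sketch, a simplified and hypothetical stand-in for the Appendix B script (the function name and structure are assumptions for illustration):

function [Achan, ADC] = demuxChannels(V, Fsamp, F)
% DEMUXCHANNELS  Simplified, hypothetical stand-in for the Appendix B
% script: demultiplex a CCD data matrix V (rows: time samples; columns:
% CCD pixels) sampled at Fsamp (Hz), given encoding frequencies F (Hz).
Nt = size(V, 1);                      % number of time samples
A = abs(fft(V, [], 1))/Nt;            % Fourier amplitude spectra, A(f, j)
f = (0:Nt-1)'*Fsamp/Nt;               % frequency axis (Hz)
Achan = zeros(size(V, 2), numel(F));  % one pixel spectrum per channel
for i = 1:numel(F)
    [~, k] = min(abs(f - F(i)));      % frequency bin nearest F(i)
    Achan(:, i) = 2*A(k, :).';        % single-sided pixel spectrum, channel i
end
ADC = A(1, :).';                      % DC Fourier amplitude spectrum (f = 0)
end

Called as, for example, demuxChannels(V, 120, [19.3 15.4 11.6]) on an acquired time-by-pixel matrix V, the three columns of Achan correspond to the frequency bins of Fig. 12, and their sum can be compared against ADC as discussed below.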


Figure 12: The Fourier amplitude spectra are shown as a surface plot versus the argument variables, being CCD pixel, j, and frequency, f. The DC Fourier amplitude spectrum can be seen at f = 0 Hz. The pixel Fourier amplitude spectra associated with spatial image channels i = 1, 2, and 3 can be seen at f = F1 = 19.3 Hz, f = F2 = 15.4 Hz, and f = F3 = 11.6 Hz, respectively. © 2018 IEEE.

Each frequency bin isolates amplitude contributions of the Fourier spectrum that are encoded at that frequency. This is shown in Fig. 13 on the first axis, where the three distinct Fourier spectra are shown separately, labeled as A(j, f = F1) (light grey line), A(j, f = F2) (dark grey line), and A(j, f = F3) (black line). It is clear that each spectrum has been isolated and spatial interference on the CCD sensor has been mitigated.

There is, however, a significant low frequency (i.e., DC) component of each signal (as the transmitted light through the dynamic coded aperture will always represent a positive measurement on the CCD sensor). This low frequency component reveals insights into the interference that would be present on the CCD sensor without the encoding from the dynamic coded aperture. In fact, these DC components combine to form a DC Fourier amplitude spectrum, being

A_{DC} = A(j, f = 0),    (13)

at a frequency bin at f = 0 Hz, and this DC Fourier amplitude spectrum should be approximately equal to the summation of each of the reconstructed pixel Fourier amplitude spectra, being the summation Fourier amplitude spectrum,

A_{sum} = \sum_{i=1}^{3} A(j, f = F_i).    (14)

(This suggests that A_{DC} ≈ A_{sum} should hold true.) This connection is shown in Fig. 13. The summation of each of the reconstructed amplitude spectra is shown in Fig. 13 on the second axis (middle), and it is approximately the same as the DC Fourier amplitude spectrum at a frequency bin of f = 0, shown on the third axis (bottom) of Fig. 13. It is quite clear that the three pixel Fourier amplitude spectra have been superimposed and interfered together on the CCD sensor, and this is equivalent to what would be measured if the three spatial image channels were not separated.


Figure 13: The amplitude of the Fourier spectra for each spatial image channel is shown as a function of CCD pixel, j (first axis). The encoded frequency bins are plotted as light grey, dark grey, and black lines for A(j, f = F1), A(j, f = F2), and A(j, f = F3), respectively. The summation of the Fourier spectra, Asum, is plotted as a function of CCD pixel, j (second axis). The DC Fourier spectrum, ADC, is plotted as a function of CCD pixel, j (third axis). © 2018 IEEE.


This is similar to what would be observed if the three individual spatial image channels formed a single (and three times wider) spatial image channel (e.g., the operation of a single-channel whiskbroom spectrometer). In this case, the increased width of the spatial image channel would result in an artificially blurred spectrum with much lower wavelength resolution, and only one spatial image channel.

The fundamental challenge with HSI instrumentation architectures is apparent from Fig. 13. On the first axis, each curve overlaps with its neighbours, with both spatial and spectral information being mapped onto the CCD sensor. It is clear that the spectra would not be resolved without the dynamic binary code. With each pixel Fourier amplitude spectrum now separated according to the corresponding frequency bin, the independent variable of the pixel Fourier amplitude spectra, being the CCD pixel, j, can now be converted to wavelength, λ, to produce the true wavelength Fourier amplitude spectra, A(λ, f = F_i). This conversion between CCD pixel and wavelength is applied and the results are shown in Fig. 14 with wavelength Fourier amplitude spectra versus wavelength. Here, the CCD pixel values for the amplitude curves (as in the first axis of Fig. 13) are converted to wavelength through Eq. (12). These are shown as A(λ, f = F1), A(λ, f = F2), and A(λ, f = F3) on the first, second, and third axes, respectively, of Fig. 14. Here, the offset spectra have been shifted and each amplitude curve is approximately equal.


Figure 14: The amplitude of the Fourier spectra A(λ, f = F1) (first axis), A(λ, f = F2) (second axis), and A(λ, f = F3) (third axis) are shown as a function of wavelength, λ, for the first, second, and third spatial image channels (i = 1, 2, and 3), respectively. The reference spectrum, Aref(λ) (fourth axis), is shown for comparison. © 2018 IEEE.

It should be noted that each spatial image channel produces a shifted spectrum, and the extreme ends of each spectrum may not be fully encompassed by the CCD sensor unless it is placed correctly. This would result in a loss of spectral information and should be considered when extending the HSI instrumentation architecture to have additional spatial image channels. This loss of spectral information can be mitigated by moving the CCD image sensor closer to the dynamic coded aperture; however, this will be at the expense of a wider spectral sampling interval (i.e., fewer CCD pixels will capture the same wavelength range). These trade-offs can be addressed with a geometrical analysis. The geometrical analysis can approximate the spectral sampling interval given a set spectral window of 700 nm − 400 nm = 300 nm (comparable to the spectral window of other spectrometers [66]), an incidence angle of θ_inc = 70°, a pitch and number of spatial image channels of a_2 = 800 μm and N = 3, respectively, and a pitch and number of CCD pixels of a_3 = 8 μm and n = 3500, respectively. The spectrum associated with each spatial image channel must be encompassed by the length of the CCD image sensor, being a_3 × n, plus the a_2 × N image channels. The spectral sampling interval can be estimated as the specified spectral window divided by n. These parameters dictate the approximate distance to place the CCD image sensor from the dynamic coded aperture. In the demonstrated system, with a_2 = 800 μm and N = 3 spatial image channels, the required distance of the CCD image sensor from the dynamic coded aperture is found to be approximately 6 cm and the corresponding spectral sampling interval is found to be slightly smaller than 0.1 nm/pixel. Increasing the number of spatial image channels to 10, while maintaining the same pitch of the spatial image channels, yields a spectral sampling interval that is slightly larger than 0.1 nm/pixel. This same spectral sampling interval can be maintained by decreasing the pitch of the spatial image channels to 80 μm and increasing the number of spatial image channels to 100. In this way, the placement of the CCD image sensor and the pitch of the spatial image channels can be balanced to produce a desired combination of a specified number of spatial image channels and a specified spectral sampling interval.
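As a worked instance of this estimate: the spectral sampling interval is approximately (700 nm − 400 nm)/3500 pixels ≈ 0.086 nm/pixel, matching the "slightly smaller than 0.1 nm/pixel" figure quoted above, and the length over which each spectrum must fit is a_3 × n = 8 μm × 3500 = 28 mm for the sensor, plus the a_2 × N = 800 μm × 3 = 2.4 mm span of the image channels.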

Ultimately, the results of Fig. 14 show that each spatial image channel of the HSI instrumentation architecture accurately and simultaneously measures the white LED spectrum for reliable snapshot operation over a spectral window from 400 to 700 nm. This successful operation is evaluated with an experimentally-measured reference spectrum, Aref(λ), as shown on the fourth axis of Fig. 14. The reference spectrum is acquired using a commercial spectrometer, being a Thorlabs CCS200 spectrometer, with a 200 bin centered moving average filter applied to approximate the resolution of the spatial image channel width of w = 400 μm. This reference spectrum is seen to be approximately equal to each of the wavelength Fourier amplitude spectra. As such, it can be concluded that each spatial image channel of the HSI instrumentation architecture is able to provide accurate measurements of a unique spectral signature, even though this one-dimensional array of spatial image channels is captured on the one-dimensional CCD sensor.

The HSI instrumentation architecture can be extended for a matrix of spatial image channels captured by a two-dimensional CCD sensor. However, this modification to the HSI instrumentation architecture has potential challenges. For example, it would require careful control and mapping of pixels on the two-dimensional CCD sensor. The spatial image channels, defined by the binary phase mask, would become elongated spatial image columns. As such, the vertical pixels on the two-dimensional image sensor could identify the vertical position on the binary phase mask. The horizontal pixels could identify the wavelength, with Fourier analyses identifying the spatial image pixel (from the corresponding frequency bin).


Additionally, the size of the image would be limited, as the diffraction grating on the optical disc should not be used for a vertical image area beyond what is approximately straight. Upon successfully navigating these potential challenges, this work can provide an accessible two-dimensional HSI instrumentation architecture with snapshot operation—of importance for various applications in developing countries.

As the HSI instrumentation architecture is developed to make use of accessible techniques, with readily-available components (e.g., CD diffractive elements, printed dynamic coded aperture, inexpensive linear CCD sensor) and uncomplicated analyses (i.e., Fourier analyses), it is envisioned that this HSI instrumentation architecture can contribute to widespread adoption of HSI. This is particularly applicable to regions of the world with limited financial resources that can benefit from biomedical and biological spectral technologies.

4.2 Comparison to Other Instrumentation

It is beneficial to compare and contrast our optical-disc-based HSI instrumentation architecture with state-of-the-art hyperspectral instrumentation and measurement systems. The HSI instrumentation architecture of this thesis will be compared to other instrumentation that makes use of optical chopping techniques, and to commercial spectrometers in terms of the relevant technical indices of wavelength accuracy and wavelength repeatability.

In terms of the comparison to other optical chopping techniques, the dynamic coded aperture mask of our system is rotated on a transparent optical disc with three distinct spatial channels encoded onto the acquired signal, while Chaudry et al. developed an optical-chopper-based measurement system with 10 channels [61]. Our HSI was developed with only three channels, as this number is sufficient for demonstrating the proof-of-principle concept of our HSI instrumentation architecture in terms of resolving spatially overlapped spectra. It is envisioned that our HSI instrumentation architecture could easily be adapted for operation with many more channels than 10, particularly if micron scale features are incorporated into the dynamic coded aperture, as in other instrumentation works [67]. In our HSI instrumentation architecture, uncomplicated Fourier analyses are performed to extract wavelength information from spatially overlapping spectra.

In terms of the comparison to commercial spectrometers, the technical indices of particular importance are wavelength accuracy, being the similarity of a measured spectrum to a known spectrum, and wavelength repeatability, being the drift of the wavelength measurement. Our HSI instrumentation architecture can be compared to the commercial Thorlabs CCS200 spectrometer.

In our work, wavelength accuracy can be assessed through a comparison of the measured spectra to the known reference spectra of a white light emitting diode (LED) measured with the commercial Thorlabs CCS200 spectrometer. Specifically, the wavelength accuracy is measured by comparing the wavelength of local maxima from the violet peak of the reference LED spectra and that of the three spatial image channels. It is found that the presented HSI instrumentation architecture reproduces the measured spectra with a wavelength accuracy of 4 nm. In comparison, the Thorlabs CCS200 spectrometer itself has a wavelength accuracy of 2 nm. Wavelength repeatability is typically compared before and after a period of 60 seconds. Over such a period of 60 seconds, our HSI instrumentation architecture showed no significant spectral drift (less than 1 nm) on the three spatial image channels. This is comparable to the Thorlabs CCS200 spectrometer, which showed strong wavelength repeatability (also less than 1 nm spectral drift) over the standard 60 seconds of measurement.


Overall, our HSI instrumentation architecture has comparable performance to other instrumentation.


5 Conclusions

This final chapter provides a review of the main contributions presented in this thesis. The conclusions of this experimental work are provided along with remarks for future work.

5.1 Hyperspectral Imaging Instrumentation Architecture

This thesis described the development and performance of an accessible hyperspectral imaging instrumentation architecture as explored through analysis and experimentation.

The first chapter provided background information on HSI—an emerging modality that is a hybrid of spectroscopy and digital imaging. As hundreds of spectral bands can be acquired across a large area, HSI is seen to be significant to many applications. Hyperspectral imaging was shown to be uniquely positioned for biomedical applications involving the initial detection of diseased tissue biomarkers. Snapshot operation was shown to be the ideal mode of acquisition for HSI applications in the biomedical domain (i.e., all spectral and spatial information is acquired simultaneously), and other snapshot implementations (e.g., CASSI) were shown to be too expensive and complex—limiting accessibility for use in developing countries. To address this need, this thesis introduced a HSI instrumentation architecture capable of accessible snapshot operation.

The second chapter outlined the background scientific knowledge required in this thesis, being the Fourier Series, Fourier Transform, and Fourier optics. These fundamental concepts of temporal and spatial Fourier transformations were shown to be interwoven in the applications of optical chopping/lock-in detection and optical diffraction, respectively.

The third chapter illustrated the HSI instrumentation architecture, and how it was made accessible by incorporating repurposed optical disc technology. A rotational diffraction grating (optical disc) was affixed with a binary pattern (i.e., dynamic coded aperture) and used to simultaneously provide optical chopping and optical diffraction. This avoided fabrication of expensive and customized optical components. The operation of this architecture entailed an incident image beam divided into spatial image channels, each with an assigned dynamic binary code via a dynamic coded aperture. This dynamic coded aperture was constructed from repurposed diffractive optical disc technologies and was patterned with strategic opaque and transparent regions. When rotated by a motor, dynamic binary codes were used, along with Fourier analyses, to identify the diffraction of each spatial image channel. The spatially overlapped spectra from the diffraction were directed onto a charge-coupled device sensor.

The fourth chapter showed that the interference from the measured spatially overlapping spectra, corresponding to each spatial image channel, can be distinguished and decoupled using the Fourier analyses. The resulting Fourier amplitude spectra were transformed into corresponding functions of wavelength, and this transformation was based on the experimental instrumentation geometry. The performance of the HSI instrumentation architecture was found to be comparable to a commercial spectrometer.

Ultimately, this work showed that the presented proof-of-principle HSI instrumentation architecture is capable of snapshot operation—made possible using accessible and uncomplicated technologies. The increased accessibility of the HSI instrumentation architecture was intended to facilitate adoption of HSI in regions of the world with limited financial resources that can benefit from biological and biomedical spectral technologies.


5.2 Future Work

Our proof-of-principle experimental work is limited in scope of application by the number of image pixels, each of the three comprising a continuous spectrum acquired along a single spatial dimension (1D). Biomedical applications, like real-time hyperspectral endoscopy, benefit most from acquiring all three dimensions simultaneously (i.e., x, y, and λ) across many image pixels. Thus, further refinement of this work can be done by increasing the number of image pixels and extending the spatial acquisition capability to a second spatial dimension (2D). To achieve this, repurposing liquid crystal display (LCD) technology, as an alternative accessible technology, could perform the necessary frequency modulation across two spatial dimensions in unison. It is envisioned that accessible HSI systems capable of snapshot operation will utilise LCD technology for biomedical applications in developing countries.


COPYRIGHT NOTICE

Part of this thesis is © 2018 IEEE. Reprinted, with permission, from C. Harrison Brodie, Jasen Devasagayam, and Christopher M. Collier, “A Hyperspectral Imaging Instrumentation Architecture Based on Accessible Optical Disc Technology and Frequency-Domain Analyses,” IEEE Trans. Instrum. Meas., Sept. 2018.


BIBLIOGRAPHY

[1] M. Guarnieri, “The early history of radar [historical],” IEEE Industrial Electronics Magazine, vol. 4, no. 3, pp. 36-42, Sep. 2010.

[2] N. Sabri, S. A. Aljunid, M. S. Salim, R. B. Ahmad, and R. Kamaruddin, “Toward optical sensors: Review and applications,” J. Phys. Conf. Ser., vol. 423, no. 1, p. 012064, Apr. 2013.

[3] R. Sabatini, M. A. Richardson, A. Gardi, and S. Ramasamy, “Airborne laser sensors and integrated systems,” Prog. Aerosp. Sci., vol. 79, pp. 15-63, Nov. 2015.

[4] L. Guo, Z. Yang, and X. Dou, “Artificial olfactory system for trace identification of explosive vapors realized by optoelectronic Schottky sensing,” Adv. Mater., vol. 29, no. 5, p. 1604528, Feb. 2017.

[5] C. Yu et al., “Adaptive optoelectronic camouflage systems with designs inspired by cephalopod skins,” Proc. Natl. Acad. Sci., vol. 111, no. 36, pp. 12998-13003, Sep. 2014.

[6] S. Liu, Z. Xing, Z. Wang, S. Tian, and F. R. Jahun, “Development of machine-vision system for gap inspection of muskmelon grafted seedlings,” PLoS One, vol. 12, no. 12, p. e0189732, Dec. 2017.

[7] J. Seo, H. Cho, J. K. Lee, J. Lee, A. Busnaina, and H. Lee, “Double oxide deposition and etching nanolithography for wafer-scale nanopatterning with high-aspect-ratio using photolithography,” Appl. Phys. Lett., vol. 103, no. 3, p. 033105, Jul. 2013.

[8] A. Yazaki et al., “Ultrafast dark-field surface inspection with hybrid-dispersion laser scanning,” Appl. Phys. Lett., vol. 104, no. 25, p. 251106, Jun. 2014.

[9] J. Liu et al., “Tensile strained Ge p-i-n photodetectors on Si platform for C and L band telecommunications,” Appl. Phys. Lett., vol. 87, no. 1, p. 011110, Jul. 2005.

[10] C. M. Collier, X. Jin, J. F. Holzman, and J. Cheng, “Omni-directional characteristics of composite retroreflectors,” J. Opt. A Pure Appl. Opt., vol. 11, no. 8, p. 085404, Aug. 2009.

[11] A. Bianco, D. Camerino, D. Cuda, and F. Neri, “Optics vs. electronics in future high-capacity switches/routers,” 2009 International Conference on High Performance Switching and Routing, Paris, pp. 1-6, Jun. 2009.

[12] C. Krafft and V. Sergo, “Biomedical applications of Raman and infrared spectroscopy to diagnose tissues,” Spectroscopy, vol. 20, no. 5-6, pp. 195-218, Dec. 2006.

[13] M. Abahussin, S. Hayes, H. Edelhauser, D. G. Dawson, and K. M. Meek, “A microscopy study of the structural features of post-LASIK human corneas,” PLoS One, vol. 8, no. 5, p. e63268, May 2013.

[14] D. R. Rivera et al., “Compact and flexible raster scanning multiphoton endoscope capable of imaging unstained tissue,” Proc. Natl. Acad. Sci., vol. 108, no. 43, pp. 17598-17603, Oct. 2011.

[15] S. Lee et al., “Flexible opto-electronics enabled microfluidics systems with cloud connectivity for point-of-care micronutrient analysis,” Biosens. Bioelectron., vol. 78, pp. 290-299, Apr. 2016.

[16] R. Melo, J. P. Barreto, and G. Falcão, “A new solution for camera calibration and real-time image distortion correction in medical endoscopy-initial technical evaluation,” IEEE Trans. Biomed. Eng., vol. 59, no. 3, pp. 634-644, Mar. 2012.

[17] P. V. Butte, A. N. Mamelak, M. Nuno, S. I. Bannykh, K. L. Black, and L. Marcu, “Fluorescence lifetime spectroscopy for guided therapy of brain tumors,” Neuroimage, vol. 54, suppl. 1, pp. S125-S135, Jan. 2011.

[18] M. Veta, J. P. W. Pluim, P. J. van Diest, and M. A. Viergever, “Breast cancer histopathology image analysis: A review,” IEEE Trans. Biomed. Eng., vol. 61, no. 5, pp. 1400-1411, May 2014.

[19] E. Conti et al., “Use of a CMOS image sensor for an active personal dosimeter in interventional radiology,” IEEE Trans. Instrum. Meas., vol. 62, no. 5, pp. 1065-1072, May 2013.

[20] F. Gao, X. Feng, Y. Zheng, and C.-D. Ohl, “Photoacoustic resonance spectroscopy for biological tissue characterization,” J. Biomed. Opt., vol. 19, no. 6, p. 067006, Jun. 2014.

[21] A. C. Sharma et al., “Design and development of a high-energy gamma camera for use with NSECT imaging: Feasibility for breast imaging,” IEEE Trans. Nucl. Sci., vol. 54, no. 5, pp. 1498-1505, Oct. 2007.

[22] P. M. Martins et al., “Prompt gamma spectroscopy for range control with CeBr3,” Curr. Dir. Biomed. Eng., vol. 3, no. 2, pp. 113-117, Jan. 2017.

[23] P. M. Munck af Rosenschöld, W. F. A. R. Verbakel, C. P. Ceberg, F. Stecher-Rasmussen, and B. R. R. Persson, “Toward clinical application of prompt gamma spectroscopy for in vivo monitoring of boron uptake in boron neutron capture therapy,” Med. Phys., vol. 28, no. 5, pp. 787-795, May 2001.

[24] R. Abreu-Villela, C. Adler, I. Caraballo, and M. Kuentz, “Electron microscopy/energy dispersive X-ray spectroscopy of drug distribution in solid dispersions and interpretation by multifractal geometry,” J. Pharm. Biomed. Anal., vol. 150, pp. 241-247, Feb. 2018.

[25] X. Duan, J. Wang, L. Yu, S. Leng, and C. H. McCollough, “CT scanner x-ray spectrum estimation from transmission measurements,” Med. Phys., vol. 38, no. 2, pp. 993-997, Jan. 2011.

[26] Z. Izadifar, A. Honaramooz, S. Wiebe, G. Belev, X. Chen, and D. Chapman, “Low-dose phase-based X-ray imaging techniques for in situ soft tissue engineering assessments,” Biomaterials, vol. 82, pp. 151-167, Mar. 2016.

[27] J. Turner, A. V. Parisi, D. P. Igoe, and A. Amar, “Detection of ultraviolet B radiation with internal smartphone sensors,” Instrum. Sci. Technol., vol. 45, no. 6, pp. 618-638, Nov. 2017.

[28] T. E. Renkoski et al., “Ratio images and ultraviolet C excitation in autofluorescence imaging of neoplasms of the human colon,” J. Biomed. Opt., vol. 18, no. 1, p. 016005, Jan. 2013.

[29] J. E. Schoenly, W. Seka, and P. Rechmann, “Pulsed laser ablation of dental calculus in the near ultraviolet,” J. Biomed. Opt., vol. 19, no. 2, p. 028003, Feb. 2014.

[30] Q. Liu et al., “Compact point-detection fluorescence spectroscopy system for quantifying intrinsic fluorescence redox ratio in brain cancer diagnostics,” J. Biomed. Opt., vol. 16, no. 3, p. 037004, Mar. 2011.

[31] A. Sakudo, H. Kuratsune, T. Kobayashi, S. Tajima, Y. Watanabe, and K. Ikuta, “Spectroscopic diagnosis of chronic fatigue syndrome by visible and near-infrared spectroscopy in serum samples,” Biochem. Biophys. Res. Commun., vol. 345, no. 4, pp. 1513-1516, Jul. 2006.

[32] A. Sakudo, K. Baba, and K. Ikuta, “Discrimination of influenza virus-infected nasal fluids by Vis-NIR spectroscopy,” Clin. Chim. Acta, vol. 414, pp. 130-134, Dec. 2012.

[33] A. Derenne, R. Gasper, and E. Goormaghtigh, “The FTIR spectrum of prostate cancer cells allows the classification of anticancer drugs according to their mode of action,” Analyst, vol. 136, no. 6, pp. 1134-1141, Mar. 2011.

[34] N. Haj-Hosseini, P. Behm, I. Shabo, and K. Wårdell, “Fluorescence spectroscopy using indocyanine green for lymph node mapping,” Proc. SPIE 8935, Advanced Biomedical and Clinical Diagnostic Systems XII, San Francisco, p. 893504, Feb. 2014.

[35] X. Shao, W. Zheng, and Z. Huang, “Near-infrared autofluorescence spectroscopy for in vivo identification of hyperplastic and adenomatous polyps in the colon,” Biosens. Bioelectron., vol. 30, no. 1, pp. 118-122, Dec. 2011.

[36] C. M. Collier, T. J. Stirling, I. R. Hristovski, J. D. A. Krupa, and J. F. Holzman, “Photoconductive terahertz generation from textured semiconductor materials,” Sci. Rep., vol. 6, no. 1, p. 23185, Sep. 2016.

[37] C. M. Collier, M. H. Bergen, T. J. Stirling, M. A. Dewachter, and J. F. Holzman, “Optimization processes for pulsed terahertz systems,” Appl. Opt., vol. 54, no. 3, pp. 535-545, Jan. 2015.

[38] C. M. Collier and J. F. Holzman, “Ultrafast photoconductivity of crystalline, polycrystalline, and nanocomposite ZnSe material systems for terahertz applications,” Appl. Phys. Lett., vol. 104, no. 4, p. 042101, Jan. 2014.

[39] R. Al-hujazy and C. M. Collier, “Design considerations for integration of terahertz time-domain spectroscopy in microfluidic platforms,” Photonics, vol. 5, no. 1, p. 5, Mar. 2018.

[40] A. Salim and S. Lim, “Review of recent metamaterial microfluidic sensors,” Sensors, vol. 18, no. 1, p. 232, Jan. 2018.

[41] S. Sugumaran, M. F. Jamlos, M. N. Ahmad, C. S. Bellan, and D. Schreurs, “Nanostructured materials with plasmonic nanobiosensors for early cancer detection: A past and future prospect,” Biosens. Bioelectron., vol. 100, pp. 361-373, Feb. 2018.

[42] S. K. Pavuluri, R. Lopez-Villarroya, E. McKeever, G. Goussetis, M. P. Y. Desmulliez, and D. Kavanagh, “Integrated microfluidic capillary in a waveguide resonator for chemical and biomedical sensing,” J. Phys. Conf. Ser., vol. 178, p. 012009, Jul. 2009.

[43] N. Thekkek and R. Richards-Kortum, “Optical imaging for cervical cancer detection: Solutions for a continuing global problem,” Nat. Rev. Cancer, vol. 8, pp. 725-731, Sep. 2008.

[44] M. Zucco, V. Caricato, A. Egidi, and M. Pisani, “A hyperspectral camera in the UVA band,” IEEE Trans. Instrum. Meas., vol. 64, no. 6, pp. 1425-1430, Jun. 2015.

[45] C. Zhang, Q. Wang, F. Liu, Y. He, and Y. Xiao, “Rapid and non-destructive measurement of spinach pigments content during storage using hyperspectral imaging with chemometrics,” Measurement, vol. 97, pp. 149-155, Feb. 2017.

[46] G. Lu and B. Fei, “Medical hyperspectral imaging: A review,” J. Biomed. Opt., vol. 19, no. 1, p. 010901, Jan. 2014.

[47] P. Mishra, M. S. M. Asaari, A. Herrero-Langreo, S. Lohumi, B. Diezma, and P. Scheunders, “Close range hyperspectral imaging of plants: A review,” Biosyst. Eng., vol. 164, pp. 49-67, Dec. 2017.

[48] S. Vyas, J. Meyerle, and P. Burlina, “Non-invasive estimation of skin thickness from hyperspectral imaging and validation using echography,” Comput. Biol. Med., vol. 57, pp. 173-181, Feb. 2015.

[49] D. T. Dicker et al., “Differentiation of normal skin and melanoma using high resolution hyperspectral imaging,” Cancer Biol. Ther., vol. 5, no. 8, pp. 1033-1038, Aug. 2006.

[50] M. Salmivuori, N. Neittaanmäki, I. Pölönen, L. Jeskanen, E. Snellman, and M. Grönroos, “Hyperspectral imaging system in the delineation of ill-defined basal cell carcinomas: A pilot study,” J. Eur. Acad. Dermatology Venereol., pp. 1-8, Jun. 2018.

[51] D. J. Mordant, I. Al-Abboud, G. Muyo, A. Gorman, A. R. Harvey, and A. I. McNaught, “Oxygen saturation measurements of the retinal vasculature in treated asymmetrical primary open-angle glaucoma using hyperspectral imaging,” Eye, vol. 28, no. 10, pp. 1190-1200, Oct. 2014.

[52] L. Giannoni, F. Lange, and I. Tachtsidis, “Hyperspectral imaging solutions for brain tissue metabolic and hemodynamic monitoring: Past, current and future developments,” J. Opt., vol. 20, no. 4, p. 044009, Apr. 2018.

[53] K. D. Shepherd and M. G. Walsh, “Infrared spectroscopy – enabling an evidence-based diagnostic surveillance approach to agricultural and environmental management in developing countries,” J. Near Infrared Spectrosc., vol. 15, no. 1, pp. 1-19, Feb. 2007.

[54] D. Wu and D. Sun, “Advanced applications of hyperspectral imaging technology for food quality and safety analysis and assessment: A review – Part I: Fundamentals,” Innov. Food Sci. Emerg. Technol., vol. 19, pp. 1-14, Jul. 2013.

[55] L. Gao, R. T. Kester, N. Hagen, and T. S. Tkaczyk, “Snapshot image mapping spectrometer (IMS) with high sampling density for hyperspectral microscopy,” Opt. Express, vol. 18, no. 14, pp. 14330-14344, Jun. 2010.

[56] A. Gorman, D. W. Fletcher-Holmes, and A. R. Harvey, “Generalization of the Lyot filter and its application to snapshot spectral imaging,” Opt. Express, vol. 18, no. 6, pp. 5602-5608, Mar. 2010.

[57] A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt., vol. 47, no. 10, pp. B44-B51, Apr. 2008.

[58] N. Hagen, R. T. Kester, L. Gao, and T. S. Tkaczyk, “Snapshot advantage: A review of the light collection improvement for parallel high-dimensional measurement systems,” Opt. Eng., vol. 51, no. 11, p. 111702, Jun. 2012.

[59] D. Kittle, K. Choi, A. Wagadarikar, and D. J. Brady, “Multiframe image estimation for coded aperture snapshot spectral imagers,” Appl. Opt., vol. 49, no. 36, p. 6824, 2010.

[60] S. Zhang, G. Li, J. Wang, D. Wang, Y. Han, H. Cao, L. Lin, and C. Diao, “Note: Demodulation of spectral signal modulated by optical chopper with unstable modulation frequency,” Rev. Sci. Instrum., vol. 88, no. 10, p. 106104, Oct. 2017.

[61] J. Chaudry, R. J. Seethaler, and J. F. Holzman, “Multichannel spatial encoding and extraction for low-noise laser applications,” IEEE Trans. Instrum. Meas., vol. 58, no. 7, pp. 2283-2289, Jul. 2009.

[62] K. Duerr, R. J. Seethaler, and J. F. Holzman, “Experimental design and algorithmics for a multichannel spectral data acquisition system,” J. Dyn. Syst. Meas. Control, vol. 131, no. 5, p. 051011, Aug. 2009.

[63] S. Schaefer and K. J. Chau, “Improving the signal visibility of optical-disk-drive sensors by analyte patterning and frequency-domain analysis,” Meas. Sci. Technol., vol. 22, no. 4, p. 045302, Mar. 2011.

[64] R. D. Swift, R. B. Wattson, J. A. Decker, R. Paganetti, and M. Harwit, “Hadamard transform imager and imaging spectrometer,” Appl. Opt., vol. 15, no. 6, pp. 1595-1609, Jun. 1976.

[65] C. H. Brodie, J. Devasagayam, and C. M. Collier, “A hyperspectral imaging instrumentation architecture based on accessible optical disc technology and frequency-domain analyses,” IEEE Trans. Instrum. Meas., vol. PP, pp. 1-8, Sep. 2018.

[66] R. L. Bostick and G. P. Perram, “Spatial and spectral performance of a chromotomosynthetic hyperspectral imaging system,” Rev. Sci. Instrum., vol. 83, p. 033110, Mar. 2012.

[67] J. F. Holzman and A. Y. Elezzabi, “A dispersion-free high-speed beam chopper for ultrafast-pulsed-laser applications,” Meas. Sci. Technol., vol. 14, pp. N41-N44, Jul. 2003.

APPENDIX

APPENDIX A: Component Selection and Experimental Challenges

This thesis presented an experimental snapshot HSI instrumentation architecture. The final configuration of this architecture was determined after overcoming various logistical challenges.

Many of the challenges were avoided by the careful selection of the optical and electronic components. The first component to be selected was the linear CCD sensor (Toshiba TCD1304). This component was selected as it is commonly available, low-cost, and used in commercial spectrometers. An ARM-based microcontroller was considered to drive and sample the CCD sensor; however, the supporting documentation for such a solution was of limited availability. The Arduino microcontroller platform was therefore selected to drive the sensor, as the required documentation is freely available online. The Arduino platform was not, however, sufficiently capable of sampling the CCD output. This challenge was surmounted by using a National Instruments data acquisition (DAQ) unit to sample the output voltage of the CCD sensor.
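A minimal sketch of such an acquisition in MATLAB is given below for illustration; the experiment itself used LabVIEW, and the device ID 'Dev1', channel 'ai0', and the legacy Data Acquisition Toolbox session interface are assumptions.

% Minimal sketch, not the LabVIEW acquisition used in this work.
% Assumes an NI device 'Dev1' with the CCD output wired to channel 'ai0'.
s = daq.createSession('ni');
addAnalogInputChannel(s,'Dev1','ai0','Voltage');  % CCD output voltage
s.Rate = 439586;                   % matches Fs_DAQ in the Appendix B script
s.DurationInSeconds = 20;          % 20 s record, as in the imported CSV
data = s.startForeground();        % blocking read of the full record
csvwrite('ccd_capture.csv',data);  % save for the Appendix B analyses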

Basic hardware and soldering were required to mount and connect the CCD sensor to the microcontroller according to the available documentation. It was best practice to first mount the CCD sensor at an arbitrary position on the surface of an optical breadboard; this allowed the remaining components to be mounted and configured in reference to the CCD sensor. The next component to be chosen was an appropriate CD. The CD chosen for this experiment was a recordable CD (CD-R). The CD-R consists of a polycarbonate disc with diffraction grooves on the underside and a reflective film coated on the label side, which was to be removed. Care should be taken to keep both sides of the CD-R clean of oils, particulates, and mechanical damage.

The CD-R was scored with a knife around the entire circumference of the label. Adhesive tape was then applied to cover the CD-R label. Removing the adhesive tape as one entire piece eliminated the reflective film from the label side of the CD. It should be noted that a DVD-R was not chosen, as the DVD-R was found to have its reflective film sandwiched between two polycarbonate discs; splitting the DVD-R in half and removing the film proved impractical for this experiment.

Next, the appropriate binary photomask was designed in illustration software and printed onto an overhead transparency. The photomask was cut and applied to the label surface of the CD-R using an adhesive. Strategic application of the adhesive is necessary to ensure that no spatial image channels are obstructed. It was important to mount the photomask concentric with the CD-R to avoid channel drift during rotation.
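The sketch below illustrates the idea behind such a photomask. The aperture counts and rotation rate are assumed values, chosen only so that the resulting channel frequencies roughly match the 11.6 Hz, 15.4 Hz, and 19.3 Hz tones recovered in Appendix B.

% Minimal sketch of binary channel codes for the photomask (assumed values).
% Channel i has n(i) apertures per revolution, so spinning the disc at
% f_rot hertz modulates that channel at n(i)*f_rot hertz.
theta = linspace(0,2*pi,3600);   % angular position around the disc
n     = [6 8 10];                % apertures per channel (assumed)
codes = sin(theta(:)*n) > 0;     % 3600-by-3 binary open/closed pattern
f_rot = 1.93;                    % disc rotation rate in Hz (assumed)
f_ch  = n*f_rot                  % approx. 11.6, 15.4, and 19.3 Hz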

A brushless direct current (BLDC) motor from an optical disc drive was chosen to spin the dynamic coded aperture (photomask plus CD-R). This motor was selected as it comes with the necessary shaft-mounting hardware to hold a CD. The selected motor was driven by an appropriate BLDC motor controller; a widely available hobby-grade motor controller was sufficient to drive the compact BLDC motor. The motor's ribbon cable must be soldered to wires with bullet connectors to safely attach the BLDC motor to the motor controller. Heat shrink tubing was essential for insulating the exposed metal between the solder joints and the exterior of the bullet connectors.

The selection of the illumination source was carefully considered. A cool white LED was chosen to provide a unique reference spectrum for this experiment. Initially, the white LED was driven by the LED driver circuit from a consumer-grade flashlight. However, doing so caused the illumination source to constantly flash on and off. This undesirable characteristic was not apparent for some time, as the frequency of the LED flashing was beyond the limit of human perception. It was later verified with an oscilloscope that the reference spectrum was being measured inconsistently by the linear CCD sensor; that is, the LED repetition frequency was causing interference within the data, rendering it unsuitable for further frequency analyses. This challenge was overcome by driving the illumination source with a DC power supply, with an appropriate current-limiting resistor connected in series with the LED. Specifically, in this instance, four volts and 170 milliamps were read on the power supply while the LED was driven in series with a seven-ohm resistor. Care should be taken to keep the chosen LED and current-limiting resistor within their maximum current and voltage ratings.
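A worked check of this operating point follows; the inferred forward voltage is an estimate from the quoted readings, consistent with a typical white LED.

% Worked check of the LED drive point quoted above.
V_supply = 4.0;      % V, read on the DC power supply
I_led    = 0.170;    % A, read on the DC power supply
R_series = 7;        % ohm, current-limiting resistor
V_led = V_supply - I_led*R_series   % approx. 2.8 V across the white LED
P_res = I_led^2*R_series            % approx. 0.2 W dissipated in the resistor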

The LED source and dynamic coded aperture must be aligned to impart a diffraction maximum onto the CCD sensor. The CCD sensor is sensitive to low light levels, such that non-diffracted light will saturate the sensor output. Beam blocks should be used to prevent excess white light from the LED source from projecting onto the CCD. With this in mind, the LED and corresponding image beam were aligned to interact with the dynamic coded aperture at oblique incidence. The oblique incidence angle of 70 degrees was chosen to prevent the zeroth-order diffraction (white light) from passing through the rotational diffraction grating and saturating the CCD sensor. The oblique incidence allows only the m = -1 diffraction orders of the spatial image channels to be projected onto the CCD sensor. Furthermore, all final measurements with the CCD should be done with the room lighting turned off, as light from the overhead fixtures will also saturate the CCD sensor.
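The sketch below estimates this geometry from the grating equation. The 1.6 µm pitch is the nominal CD track pitch, and the sign convention is an assumption, so the angles are indicative only.

% Minimal sketch: m = -1 diffraction angles for a CD-R grating at 70-degree
% incidence (nominal 1.6 um track pitch; sign convention assumed).
d       = 1.6e-6;             % groove pitch (m)
theta_i = 70;                 % incidence angle (degrees)
lambda  = (400:50:700)*1e-9;  % visible wavelengths (m)
m       = -1;                 % diffraction order projected onto the CCD
theta_m = asind(sind(theta_i) + m*lambda/d)  % approx. 30 to 44 degrees
% The zeroth order continues at 70 degrees, well away from the CCD.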

The illumination intensity at the CCD sensor must be calibrated before conducting the experiment. An oscilloscope was used to measure the CCD sensor output while the dynamic coded aperture was stationary and the LED light was being diffracted. The dynamic coded aperture should be held in the position in which the greatest number of spatial image channels pass light (i.e., the maximum illumination intensity position). The LED intensity should then be varied, by changing the DC power supply voltage, to reach the maximum intensity that produces no clipping/saturation on the oscilloscope. This ensures that no possible combination of channel illumination for the dynamic coded aperture can saturate the CCD sensor, which is a requirement for accurate data acquisition.
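A minimal sketch of this clipping check is given below; the full-scale value is an assumed placeholder for the actual CCD output ceiling.

% Minimal sketch of the saturation check; full_scale is an assumed value.
full_scale = 3.0;                              % V, assumed CCD output ceiling
capture = importdata('120hz 20 sec - 3.csv');  % capture at max illumination
if max(abs(capture)) >= full_scale
    warning('CCD output is clipping; reduce the DC supply voltage.');
else
    fprintf('Headroom: %.2f V below full scale.\n', full_scale - max(abs(capture)));
end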

Once the final configuration of the experimental components and the calibration of the LED intensity had been verified, a data acquisition unit could be used to obtain the data. A National Instruments DAQ was used with LabVIEW software to acquire the data from the CCD sensor, and the data were saved as CSV files for further analyses.


APPENDIX B: MATLAB Script for HSI Instrumentation Architecture

% ----------------------------------------------------------------- %
% MATLAB script for the hyperspectral imaging instrumentation
% architecture
% ----------------------------------------------------------------- %

% ----------------------------------------------------------------- %
% C. Harrison Brodie
% Applied Optics and Microsystems Laboratory
% University of Guelph
% ----------------------------------------------------------------- %

% ----------------------------------------------------------------- %
% -------------------------- Import Data -------------------------- %
% ----------------------------------------------------------------- %
clear all; close all;

Freq_Encode = importdata('120hz 20 sec - 3.csv');

Freq_Encode_1 = -1*(Freq_Encode);                    % Flip data
Freq_Encode_1 = Freq_Encode_1 - min(Freq_Encode_1);  % Remove baseline offset

% ----------------------------------------------------------------- %
% ---------------------- Initialize variables --------------------- %
% ----------------------------------------------------------------- %

% DAQ sample frequency (Hz)
Fs_DAQ = 439586;

% CCD sample and hold frequency (Hz)
Fs_CCD = 119;

Data = Freq_Encode_1;
[Clean_Data,D1,D2] = Remove_Outliers(Data);

% ----------------------------------------------------------------- %
% ---------- Noise Filter & Remove Outliers - Function ------------ %
% ----------------------------------------------------------------- %
function [Clean_Data,D1,D2] = Remove_Outliers(Data)  % output args / input args

% Find length of data
L = length(Data);

D1 = abs(diff(Data));  % Derivative
D1 = D1/max(D1);       % Normalize amplitude

% Remove outliers using d/dx thresholding
for k = 11:L-11
    if D1(k) >= 0.305
        for j = 1:10
            Data(k) = ((Data(k-10)+Data(k+10))/2);
            k = k+1;
            if j == 9
                k = k-9;
            end
        end
    end
end

% Derivative
D2 = abs(diff(Data));
D2 = D2/max(D2);
% Remove baseline offset
Data = Data - min(Data);
% Normalize amplitude
Data = Data/(max(Data));
% Function output
Clean_Data = Data;
end
% -------------------- Return from Function ----------------------- %

D_Freq_Encode_1 = D1;
D_Fcode = D2;
Fcode = Clean_Data;
Data_F = Fcode;

[Find_Amount,Frame_Starts,Frame_Lengths,WW] = Find_Frames(Data_F,Fs_CCD,Fs_DAQ);

% ----------------------------------------------------------------- %
% ------------------- Find CCD Spectra - Function ----------------- %
% ----------------------------------------------------------------- %
function [Find_Amount,Frame_Starts,Frame_Lengths,WW] = Find_Frames(Data_F,Fs_CCD,Fs_DAQ)  % output args / input args

% Flip data
Data = -1*Data_F;
% Remove baseline offset
Data = Data - min(Data);
% Normalize amplitude
Data = Data/max(Data);
% Derivative
Data = diff(Data);
% Find length of data
L = length(Data);
% Find number of whole spectra in the sample time series
Find_Amount = (L/Fs_DAQ)*(Fs_CCD);
Find_Amount = fix(Find_Amount);

% Find where frame starts
for k = 1:L-1
    if Data(k) < -0.08
        Data(k) = 10;
    end
end

Data = Data - min(Data);  % Remove baseline offset
Data = Data/(max(Data));  % Normalize amplitude

% Find max point in frame intervals (i.e. frame start)
for k = 1836:L-1836
    if Data(k) == max(Data(k-1835:k+1835))
        Data(k) = 10;
        k = k + 3670;  % Increment loop by approx. frame size
    end
end
% Skip first frame as spectrum could be incomplete
for k = 1836:L-1836
    if Data(k) == 10
        Data(k) = 9;
        k = k + 500;
    else
        Data(k) = 0;
    end
end

% Make matrix of size required
Frame_Starts = find(Data == 9) + 270;  % Get indices of frame starts

% Find indices of frame lengths and put into vector
for k = 2:length(transpose(Frame_Starts)-1)
    Frame_Lengths(k) = (Frame_Starts(k)) - (Frame_Starts(k-1));
end
WW = 1;
end
% -------------------- Return from Function ----------------------- %

Frame_Starts_2 = Frame_Starts;
Frame_Lengths_2 = Frame_Lengths;
Find_Amount_2 = Find_Amount;

[M,Frame_Min_Length] = Grab_Frames(Frame_Starts_2,Frame_Lengths_2,Find_Amount_2,Data_F);

% ----------------------------------------------------------------- %
% ------------------- Grab CCD Spectra - Function ----------------- %
% ----------------------------------------------------------------- %
function [M,Frame_Min_Length] = Grab_Frames(Frame_Starts_2,Frame_Lengths_2,Find_Amount_2,Data_F)

Data = Data_F;
Data = transpose(Data);
Frame_Starts_2 = transpose(Frame_Starts_2);

% Find length of data
L = length(Data);

% Zero pad first index value of frame start vector
Frame_Starts_3 = padarray(Frame_Starts_2,[0 1],'symmetric','pre');
Frame_Starts_3(1) = 0;

% Pad new variable to length proportional to its parent
Frame_Lengths_3(1:length(Frame_Lengths_2)-1) = Frame_Lengths_2(2:length(Frame_Lengths_2));
Frame_Lengths_4(length(Frame_Lengths_3)+1:length(Frame_Lengths_3)+1) = 0;

% Find minimum and maximum frame size
Frame_Min_Length = min(Frame_Lengths_3);
Frame_Max_Length = max(Frame_Lengths_2);
L2 = length(Frame_Lengths_2);

% Make matrix of appropriate dimension:
% (Rows x Columns)
M = zeros(Find_Amount_2-1+0,L);

% Counter
k = 0;
% Fill matrix M with CCD frames data
for i = 2:length(Frame_Lengths_2)-1
    for j = 1:L-1
        k = k+1;  % Counter
        M(i,j) = Data(j+Frame_Starts_3(i));
        if k > Frame_Lengths_3(i)  % Stop if counts larger than frame
            break
        end
    end
    k = 0;  % Reset counter
end

% Truncate matrix rows to min CCD frame length
M(:,Frame_Min_Length:end) = [];
M(:,3501:end) = [];
M(1,:) = [];
M(end-3:end,:) = [];
end
% -------------------- Return from Function ----------------------- %

[M_Shift_Re_Abs,W3] = FFT_Frames(M,Fs_CCD,Frame_Min_Length);

% ----------------------------------------------------------------- %
% -------------------- FFT CCD Spectra - Function ----------------- %
% ----------------------------------------------------------------- %
function [M_Shift_Re_Abs,W3] = FFT_Frames(M,Fs_CCD,Frame_Min_Length)

% Find FFT of each spectra at the pixel level over time
M_fft = M;
M_shift = M;

for i = 1:3500
    M_fft(:,i) = fft(M_fft(:,i));  % FFT over time
    M_shift(:,i) = real(fftshift(M_fft(:,i)));
end

[L,W] = size(M_fft);
N = L;                     % Samples of DTFT per pixel ~ length of matrix
W = 2*pi * (0:(N-1)) / N;  % From here...
W2 = fftshift(W);
W3 = unwrap(W2 - 2*pi);    % ...to here: convert x axis to rad/sample
W3 = W3*(Fs_CCD/(2*pi));   % Convert x axis from rad/sample to Hz
W3 = transpose(W3);
M_Shift_Re_Abs = abs(M_shift);
end
% -------------------- Return from Function ----------------------- %

M_Data_F = M_Shift_Re_Abs;

% ----------------------------------------------------------------- %
% --------------------------- Plot Data --------------------------- %
% ----------------------------------------------------------------- %

AA = M_Data_F(139,:);  % DC component
AA = AA - min(AA);     % Remove baseline offset

AA1 = padarray(AA,[0 0],'symmetric','pre');
AA1 = AA1/max(AA1);

BB = M_Data_F(184,:);  % 19.3 Hz
BB = BB - min(BB);     % Remove baseline offset
BB = BB/max(BB);       % Normalize amplitude

AA2 = padarray(AA,[0 0],'symmetric','pre');
AA2 = AA2/max(AA2);

CC = M_Data_F(175,:);  % 15.4 Hz
CC = CC - min(CC);     % Remove baseline offset
CC = CC/max(CC);       % Normalize amplitude

AA3 = AA;
AA3 = padarray(AA,[0 0],'symmetric','pre');
AA3 = AA3/max(AA3);    % Normalize amplitude

DD = M_Data_F(166,:);  % 11.6 Hz
DD = DD - min(DD);     % Remove baseline offset
DD = DD/max(DD);       % Normalize amplitude

Mat_f = zeros(3500,4);
Mat_f(:,1) = transpose(M_Data_F(139,:));  % 'DC'
Mat_f(:,2) = transpose(M_Data_F(184,:));  % 19.3 Hz
Mat_f(:,3) = transpose(M_Data_F(175,:));  % 15.4 Hz
Mat_f(:,4) = transpose(M_Data_F(166,:));  % 11.6 Hz

% Plot figure
figure(1);
mesh(M);
title('Grab frames at dc');

% Plot figure
figure(2);
mesh(M_Data_F);
title('fft DC data at pixel level');

% Plot figure
figure(3);
subplot(4,1,1);
plot(AA1,'r');  % DC
axis([0 3500 0 1])
hold on
plot(BB,'b')
title('AC data - DTFT');
legend('DC','19.3 Hz');
subplot(4,1,2);
plot(AA2,'r');
axis([0 3500 0 1])
hold on
plot(CC,'k')
title('AC data - DTFT');
legend('DC','15.4 Hz');
subplot(4,1,3);
plot(AA3,'r');
hold on
plot(DD,'g')
axis([0 3500 0 1])
title('AC data - DTFT');
legend('DC','11.6 Hz');
subplot(4,1,4);
plot(AA3,'r');
hold on
plot(BB,'b')
plot(CC,'k')
plot(DD,'g')
title('AC data - DTFT');
legend('DC','19.3 Hz','15.4 Hz','11.6 Hz');
axis([0 3500 0 1])
