<<

Marine Snow Tracking Stereo Imaging System by Junsu Jang B.Sc., Carnegie Mellon University (2018) Submitted to the Program in Media Arts and Sciences, School of Architecture and Planning in partial fulfillment of the requirements for the degree of Master of Science in Media Arts and Sciences at the Massachusetts Institute of Technology September 2020 © Massachusetts Institute of Technology 2020. All rights reserved.

Author...... Program in Media Arts and Sciences August 17th, 2020

Certified by...... Joseph A. Paradiso Professor of Media Arts and Sciences

Accepted by ...... Tod Machover Academic Head, Program in Media Arts and Sciences Marine Snow Tracking Stereo Imaging System by Junsu Jang

Submitted to the Program in Media Arts and Sciences, School of Architecture and Planning on August 17th, 2020, in partial fulfillment of the requirements for the degree of Master of Science in Media Arts and Sciences

Abstract The transport of particles of organic from the ’s surface to its bottom plays a key role in the global and . Quantifying the rate of this Biological Carbon Pump – the size and velocity distribution of falling particles below the mixing layer, for example – is thus of considerable importance. The complexity of this Pump, however, together with systematic biases in available measurement methodologies and vast spatial and temporal undersampling, makes this quantification difficult. In this thesis I set out to design and build a low-cost underwater stereo-imaging system to remotely measure the flux of sinking particles in the mid-ocean. By record- ing time-lapsed images of marine snow falling through the imaging volume over day- to-week timescales, we can estimate both the size distributions and, via 3D particle tracking velocimetry, their velocity distributions too. This allows us to di- rectly estimate the net flux. Making the system low-cost and compact enables large- scale observations capable of resolving relevant length and time-scales over which this flux likely varies in the ocean. The hardware design is thus primarily constrained by the target depth, expected particle size distribution, expected sinking rates, deploy- ment duration, and cost. The resulting prototype was then tested in the lab and, computationally, against simulated data in preparation for eventual deployment the Minion platform, a Lagrangian float designed to quantitatively explore the Biological Carbon Pump. An evaluation of the system’s efficacy in estimating particle concen- tration and sinking rate, and ultimately estimate the particle flux, indicates a good match to our target specifications.

Thesis Supervisor: Joseph A. Paradiso Professor of Media Arts and Sciences This thesis has been reviewed and approved by the following committee members

Joseph A. Paradiso ...... Professor of Media Arts and Sciences The MIT Media Lab

Allan W. Adams...... Principal Investigator of the Future Ocean Lab at MIT MIT Comparative Media Studies / Writing

Jennifer S. Chow ...... Education and Program Manager of Open Ocean Initiative The MIT Media Lab Acknowledgments

First, I would like to thank my advisor, Allan Adams, who gave me the op- portunity to work on this project. He always prioritized students’ well-being and growth as an academic and as a person. I am deeply grateful that I could have been a part of his mission: to design and build accessible ocean technologies that enable us to tell accurate and impactful stories about the ocean. He also taught us that communication and collaboration with people from different disciplines is the key to solve the problems regarding the ocean. There are too many valuable lessons that I have learned from working with him and the Future Ocean Lab. I hope to take these lessons with me and to spread our mission and values to the people I encounter next. I am grateful for my academic advisor, Joe Paradiso. He took me on when I was having a challenging time at the MIT Media Lab and allowed me to work with Allan on this project. He provided me with insightful comments throughout the project that brought my attention on important details. I appreciate the welcoming atmosphere of Responsive Environments and being a part of such an inspiring group of people. I would also like to thank Jenni Chow, who regularly checked up on my well-being as well as project status throughout the pandemic. Her bright personality as well as resilience inspired me to stay focused and push on despite challenging situations. In addition, her expertise in biological carbon pump helped me understand and tackle the problem at hand. I am lucky to have been a part of the Future Ocean Lab, with whom I could share such a warm camaraderie. Jacob Bernstain provided tremendous help whenever I was stuck with technical problems and would review hardware design with care- ful analysis. I am also thankful for Charlene Xia and Evan Denmark for their supportive, fun and inspiring conversations throughout our academic journey. The members of the Open Ocean Initiatives, Katy Croff Bell, Jenni Chow and Daniel Novy, fueled me with positivity. Although National Geographic and Lindblad expedition to Alaska in 2019 was not intended for this specific project, the experience helped me think more practically about the future deployments. Addi- tionally, the seminars and conversations during afternoon tea expanded my horizon and inspired me to move forward. On a personal note, I could complete this thesis during the pandemic thanks to my housemates in November House: Karla Haiat Sasson, Ayokunle Akinyemi, Haiyan Xu, Carly Nix, and Annelise Rittberg. On top of their generous offer to share personal items and space to enable my experiments in a domestic environment, they have provided so much care and support for my well-being. Finally, I would like to thank my loving family and friends, who believed in me and provided endless support. I could not have come this far without them. Contents

1 Introduction 1

2 Related Works 4 2.1 Sediment Traps ...... 5 2.2 Geotracers ...... 6 2.3 Optics ...... 7 2.4 Imaging ...... 8

3 Minions 9 3.1 Challenge ...... 10 3.2 Design Goals ...... 11 3.3 Particle Flux ...... 12

4 Single Imaging Optics 13 4.1 Overview ...... 13 4.2 Effective Focal Length Underwater ...... 14 4.2.1 Depth of Field and Imaging Volume ...... 17 4.2.2 Lens ...... 19 4.2.3 Setup ...... 20 4.3 Evaluation ...... 21

5 Stereo Design 25 5.1 Baseline and Disparity ...... 25 5.2 Depth Error ...... 28 5.3 Correspondence search area and probability of confusion ...... 31 5.4 Imaging Volume ...... 31 5.5 Design Choice and Evaluation ...... 33 5.6 Stereo Calibration ...... 35 5.6.1 Setup ...... 37 5.6.2 Results ...... 37

6 Design 40 6.1 Calculation Setup ...... 41 6.2 Ray Tracing from the Light Source to the Camera ...... 42 6.3 Simulation ...... 47 6.3.1 Projection onto spheres ...... 47 6.3.2 Results ...... 51 6.4 Evaluation of optical elements ...... 56

7 Electrical System 59 7.1 Embedded System ...... 59 7.1.1 Camera and LED Synchronization ...... 61 7.1.2 Image compression and retrieval ...... 64 7.1.3 Mission Programming and Data Retrieval ...... 65 7.1.4 Peripheral Electronics ...... 66 7.1.5 Internal Logging ...... 66 7.1.6 Wireless Communication Underwater ...... 66 7.2 Circuit ...... 67 7.2.1 Mainboard ...... 67 7.2.2 LED Driver ...... 68 7.2.3 Circuit Isolation and Short Protection ...... 69 7.3 Power ...... 70 7.3.1 Power Consumption ...... 70 7.3.2 Battery Protection ...... 71 8 Mechanical Design 72 8.1 Housing ...... 72 8.2 Float-Fluid Interaction ...... 73 8.2.1 The Motion of the Float ...... 73 8.2.2 Heat ...... 78 8.3 Buoyancy ...... 79

9 Particle Tracking Velocimentry (PTV) 81 9.1 Image pre-processing ...... 82 9.2 Particle tracking ...... 83 9.3 3D coordinate Estimation ...... 83 9.4 Simulation ...... 84 9.4.1 Bokeh Effect ...... 85 9.5 Evaluation ...... 90 9.5.1 Detection ...... 90 9.5.2 Tracking Error ...... 92 9.6 Future Work ...... 96 9.6.1 Further Evaluation ...... 98

10 Conclusion 101 10.1 Summary ...... 101 10.2 Discussions and Future Works ...... 103 10.2.1 Hardware ...... 103 10.2.2 Software ...... 105 10.2.3 Beyond Stereo Imaging System ...... 105

A Circuit Schematic and Layout 107

B Software 112

C Optics Performance Comparison 113 C.1 Camera Lens ...... 113 C.2 Light Optics ...... 114

D Bill of Materials 118 List of Figures

1-1 The flowchart of the biological carbon pump. Biological debris includ- ing aggregates, and once dense enough, start to sink. Some gets consumed by marine organisms, but approximately 1% is known to reach the seafloor. The figure is from [1] ...... 2

2-2 POC is comprised of various forms of biological debris. The third row shows the marine snow, faecal pellets (center row) and carcasses of planktonic organisms. Not many of these particles seem as round as a sphere. The images are from from [2] ...... 7

3-1 Summary of how Minion floats will be operated in the field and their functionalities[3] ...... 9

4-1 Photos of the imaging system for testing in air and in water...... 14 4-2 A diagram that shows the geometrical relations on computing the ef- fective field of view and working distance. The ray of light travels through three different media. The diagram is exaggerated for visibility. 15 4-3 We built an optical rail to test each camera against a Siemens star calibration target in air and in water...... 20 4-4 Images of the Siemens Star calibration target during test underwater. 21 4-5 A graph of contrast as a function of the spatial resolution in air and underwater. The minimum resolution at 20% contrast is noted. Ambi- ent light scattering seems to reduce the maximum contrast underwater, thereby decreasing the minimum resolution...... 23 5-1 Two-dimensional view of the intersection of two identical circular cones, which represent the field of view of the cameras. Unlike common stereo- imaging systems, the depth of field restricts the closest intersection point to the cameras and thus the maximum disparity...... 26 5-2 The geometry and relevant variables involved in computing the depth errors. Note that the depth is measured along the d-axis, which is the optical axis of the left camera...... 29 5-3 The computed probability of another particle present in the search area of the a particle as a function of the baseline. The two outer lines indicate the results of a change of pan angle by ±0.5°...... 32 5-4 Graph showing the overlapping imaging volume in which the particles 3D positions can be tracked. The red dot indicates the result of cur- rently chosen pan angle, and the dotted lines show how the overlapping imaging volume varies with a change of ±0.5° ...... 33 5-5 A graph displaying the maximum disparity and depth error as a func- tion of the baseline. The two outer lines indicate the result of a change of pan angle by ±0.5°...... 34 5-6 Visualization of the extrinsic camera parameters in the water by taking the center of the left imaging sensor as the origin of the 3D space coordi- nate system. The multiple orientations of the planes of the calibration target have been re-projected as well. The x-axis is the horizontal dis- tance from the left camera, the y-axis is vertical, and the z-axis is the depth. The empty space in the graph has been omitted for visibility. . 36 5-7 The setup of a stereo calibration rig in the water...... 38

6-1 The geometrical setup of the cameras, and the target volume seen from the top...... 41 6-2 Measured angular profile of the combination of Cree XLamp XP-E2 photored LED and Carclo 10412 lens. This specific combination has a FHWM of 16°in the air...... 43 6-3 A diagram to show the relations between each vector and angle involved on the surface of the target area...... 45 6-4 A camera cannot see a full disc of a sphere. We can compute whether a surface point lies in the visible or invisible side by solving for the plane where all of the points whose tangent lines pass through point A lay...... 47 6-5 An example of an illuminated sphere whose radius is 1mm. A much bigger sphere was used to demonstrate the changes due to different light angles...... 48 6-6 Simulation example of 9 particles spread over the imaging volume. The ESD of these spheres are 500µm, and the light angle is 55°, 300mm away from the center of the imaging volume...... 50 6-7 A geometrical setup of the simulation. There are two LEDs that are angled so that the central ray faces the center of the target volume. They are also symmetrically placed about the x-z plane...... 51 6-8 Detection offset of the centers of particles on the image in x- and y-axis at given ESD of spheres in pixels. For bigger spheres, the offsets are more significant...... 52 6-9 Standard deviation of the detection offset of the centroids of particles in x- and y-axis. To reduce error for calibration, it is optimal to increase the light angle...... 52 6-10 A graph showing the Voltage at the pixel due to illumination at vary- ing angles for different sizes of spheres. Regardless of the size ofthe spheres, the voltage is reduced as the light angle increases. The dis- tance between the target and the light was kept the same at 35cm. . 53 6-11 A graph showing the pixel voltage depending on the light angle and the distance of the lights from the spheres of ESD=100µm. Farther the distance, the lower the illumination as expected...... 54 6-12 Experiment setup and an example outcome using a single LED and a reflector to measure its FWHM ...... 56 6-13 Two optical elements setup to verify that the design choice leads to expected results ...... 58

7-1 The overall electrical system architecture separated by each of their respective housings...... 60 7-2 The diagram shows the flow of how two clients synchronize their clocks with that of the server by using TPSN. Communication happens in an arbitrary order. In this case, client A communicates with the server first and then client B is synchronized...... 62 7-3 Output from an oscilloscope when each SBC sets a GPIO high at the same time based on its clock. A client could be behind in time com- pared to the server. To avoid triggering camera before the LED strobe, we intentionally delay the clock of the cameras by 1ms, which is longer than the maximum measured synchronization error, 300us ...... 63 7-4 Experiment setup for TPSN synchronization and wireless communica- tion feasibility in saltwater ...... 64 7-5 An example I-V curves of Cree XLamp XP-E2 LED series[4]. The for- ward current varies significantly with a small change in voltage across the LEDs...... 68 7-6 A high-level circuit schematic of the LED driver. The MOSFET and BJT transistor form a feedback loop that prevents the over-current draw that can happened due to variation in temperature or voltage. . 69

8-1 Partly exploded front view of the housing model for Maka-Niu[5] . . . 73

9-1 PTV algorithm overview ...... 82 9-2 An example of left and right images of particles in simulation. The epipolar lines that correspond to the circled left particles are drawn on the right image...... 83 9-3 Images of simulated particles projected onto left and right image planes respectively using the camera calibration parameters. The Bokeh effect based on the depth of field computation is applied...... 85 9-4 The experiment setup and an example image of the calibration target for analyzing the Bokeh effect...... 86 9-5 Radial intensity profile across a circular disc on a calibration target at varying distances offset from the working distance (focal plane). The images on the top row show the actual image and the bottom row the corresponding profiles that are observed and simulated. . . . . 87 9-6 Close up comparison of the intensity profiles of the target at 7mm (a) and 20mm (b) and their corresponding images ...... 88 9-7 Image of the target located 30mm and 40mm forward from the focal plane. The blurriness of the target becomes very difficult to character- ize starting at 30mm...... 89 9-8 Example images with different noise levels ...... 90 9-9 Imaging volume in which the particles of different ESD can be de- tected at varying noise standard deviation. The maximum overlapping volume with the simulated stereo parameter is 99mL. Particles bigger than ESD=320µm could all be detected within the maximum depth of field and noise level...... 91 9-10 Example traces of the particles settling in different directions imaged on the right camera...... 93 9-11 Apparatus for testing against actual microbeads potted inside an epoxy. The xyz-translational microstage can be used to control the motion of the particles...... 99 9-12 (a) shows the DeepPIV system and the vortex-ring generator atop an ROV, and (b) shows an example of ring generated from the vortex-ring generator. Image from [6] ...... 99 10-1 Final design configuration of the optical setup of the stereo imaging system...... 102

A-1 Mainboard schematic: peripherals and power (1/2) ...... 108 A-2 Mainboard schematic: LED driver (2/2) ...... 109 A-3 Top (left) and bottom (right) layout of the mainboard evaluation circuit110 A-4 Schematic (left) and layout (right) of the LED evaluation PCB for CREE XLamp XP-E2 ...... 111

C-1 Comparison of lens performance among three lenses considered. While the lenses approach a similar asymptote at higher resolution, the per- formance varies at lower resolution. In particular, it is important to note that the lens from Edmund Optics has higher contrast as the target resolution gets smaller. This is the determining factor. The comparison at lower resolution is highlighted in the zoomed in graph on the right. The plots have been filtered to smooth out the noise for clarity...... 114 C-2 Radial Intensity using Cree XLampe XP-E2 LEDs and different optics 115 C-3 Radial Intensity using Lumileds Luxeon Rebel LEDs and different optics116 C-4 Example light beam using XLamp XP-E2 LED and Carclo 10412 lens 117 C-5 Example light beam using XLamp XP-E2 LED and Lideon C11347 Regina reflector ...... 117 List of Tables

3.1 Parameters containing the design phase of the system. The highest priority is to resolve and track the particles ...... 11

4.1 Table of variables and their values to be used to calculate the DoF of the optical system. Some of the values such as the working distance and the focal length are computed in Sec.4.2...... 18 4.2 Minimum resolution of two cameras in air and underwater ...... 24

5.1 Variables and constants involved in computing the maximum disparity. 27 5.2 Variables involved in computing the depth error ...... 28 5.3 Design specifications for the mechanical design of the stereo imaging pair ...... 35

6.1 The air FWHM measurement results with three different combinations of LED and reflectors/lens. The FWHM in water was calculated based on the refractive index...... 57

8.1 Mass of the electronics housing ...... 80 8.2 Mass of the LED housing ...... 80

9.1 The analysis of PTV performance for particles settling vertically (mo- tion A) ...... 94 9.2 The analysis of PTV performance for particles settling vertically and farther away from the cameras (motion B) ...... 95

10.1 Summary of the system performance ...... 103 C.1 Contrast at required resolution for different lenses ...... 113

D.1 Bill of Material ...... 122 Chapter 1

Introduction

One of the primary ways that the ocean impacts our climate is by the transport of organic carbon to its bottom, a phenomenon known as the biological carbon pump (BCP). On the surface of the ocean, biological of and zoo- plankton form an aggregate known as marine snow or particlulate organic carbon (POC). When they become dense enough, they sink to the seafloor. Some of the at- mospheric carbon consumed by these plankton is now converted into an organic form and stays within the ocean. The cycle is displayed in Fig. 1-1. However, scientists still have difficulties modeling the impact of the ocean o the global climate withcer- tainty. To study the BCP, scientists have devised and used various instruments and methods. Naturally, the difference in methodologies lead to collection of data thatdo not agree concretely[7][8]. Furthermore, the instruments involved are quite expensive, preventing researchers from deploying more than a few of them simultaneously. This lead to data that is both temporally and spatially aliased. This thesis aims to address these issues by engineering a mass-deployable (i.e. low-cost) imaging sensor system for estimating the POC flux more accurately.

1 Figure 1-1: The flowchart of the biological carbon pump. Biological debris including plankton aggregates, and once dense enough, start to sink. Some gets consumed by marine organisms, but approximately 1% is known to reach the seafloor. The figure is from [1]

In this project, I develop an underwater stereographic imaging system that will take time-lapse images of settling POC in a target volume of water in the ocean. This will enable us to measure the particle motion in three geometric axes with their concentration to estimate the particle flux. The first part of the project is todefine the design goals and implementing the hardware that meets these goals to the best ability. The second part is to implement an analysis software tool to compute the particle concentration and settling velocity based on the images by applying a particle tracking velocimetry (PTV) algorithm. Current methods for measuring POC include the use of sediment traps[9], radio

2 geotracers[7], optical, remote-sensing system (e.g., lasers and satellite imaging)[10][11], and various in-situ sensing systems[8]. The POC cycle is known to vary greatly both in short-term (e.g., within a day) and long-term (e.g., seasonal) [12][13]. By deploying instruments that collect data at sub-second intervals over days to weeks, we should be able to observe both of these temporal scales. Furthermore, by making instruments low-cost, we will be able to cover spatial variability. Achieving this would provide more holistic insight into the BCP and help us better model the impact of the ocean on . Ultimately, this imaging system will be mounted along with other low-cost sensors from collaborators on the project [6] on a Lagrangian (i.e., neutrally buoyant) float called the Minion (Miniature isopynal).

Summary

In this thesis, we present a stereographic imaging system that is able to resolve POC as small as 70µm. The fiducial volume available for counting total particle fluxis roughly 150mL; the sub-volume over which each particle’s 3D motion can be tracked is roughly 100mL, though this depends on the noise level of the images. The system can operate down to a depth of 2000m for 18 continuous hours. The final cost of a full system1is estimated to be $797 and $565 if mass produced (>1000). The current PTV software is able to track particles that are within ±30mm range of the working distance with an average 3D tracking error of 30µm/px.

1This excludes some parts due to incomplete mechanical designs, and missing parts are listed in Appendix D

3 Chapter 2

Related Works

Researchers have devised various instruments and methods to study the biological carbon pump. In this chapter, we discuss some of the commonly used instruments for measuring the POC flux. They include sediment traps, geotracers, optical (remote and in-situ) sensing, and imaging. On a related note, there are other sensors for general study of small marine or- ganisms and . While they have not been used for studying the POC, similar techniques could be applied in the future. They often involve use of optics and acoustic sensors. Some of these works include: ZooGlider[14] and ZOOPS-O2[15]. Additionally, hologram systems[16][17][2] have been used to study shapes and forma- tion of marine organisms and particulates.

4 (b) A representative ratio of 234Th to (a) Three forms of deploy- 238U as a function of depth. The dise- ment. To reduce the hydrodynamic bias quilibrium is proportional to the amount at the entrance of the funnel, the neu- of particle sediments at the givenT trally buoyant trap is recommended[9] depth.[18]

(d) A Video Plankton Recorder that is (c) A transmissiometer of path length used to study the concentration of parti- 10cm. Image from [19] cles as a function of depth. image from [8]

2.1 Sediment Traps

Sediment traps are containers that collects settling POC in the ocean, and it is one of the most commonly used instruments for studying the marine snow[20], since it allows for direct sampling. The trap channels the settling particles into vials which are programmed to be swapped at various depths. However, there are some limitations

5 to using sediment traps to study the particle flux. First, when POC is gathered into a vial, it attracts surrounding , also referred to as the “swimmers”, that eat and alter the samples. Furthermore, the particle aggregates get modified and become a part of the overlying supernatant[9]. The phenomenon is referred to as the “solubilization”. Finally, the opening of the traps introduces hydrodynamic bias to the samples collected. For this reason, scientists have been using the neutrally buoyant sediment traps instead of those that are moored or tethered to a buoy as shown in Fig. 2-1a. While each of these limitations have been acknowledged and mitigated for, the cost of sediment traps remain very high ($75,000)[3]. This prevents us from deploying more than a few, leading to data that is temporally and spatially aliased. [9][20]

2.2 Geotracers

Thorium-234 (234Th) is a naturally occurring radionuclide produced in the ocean from an alpha-decay of its parent, Uranium-238 (238U). 234Th is prone to attaching or “scavenging” itself to particles[18]. There is a secular equilibrium ratio between 238U and 234Th, given the long half-life of 238U (4.5billion years) and that of 234Th(24.1 days)[21][18]. When 234Th attaches onto a POC created in the euphotic zone and sinks together, there is a disequilibrium of 238U and 234Th ratio on the surface. By comparing the 234Th flux in the upper ocean and the ratio ofPOC/234Th on the POC at varying depths, one can compute the carbon flux leaving from that depth horizon. For this reason, 234Th has been used as a geotracer to indirectly estimate the carbon settling in the ocean. A representative comparison of 234Th activity to that of 238U is shown in Fig. 2-1b. The major challenge of this approach is that in order to obtain the information on the POC/234Th ratio within particles, samples must be taken from the water through sediment traps or pump filters[7].

6 2.3 Optics

Researchers have also found correlation between the attenuation coefficient and ocean color information from satellite imaging to estimate the surface POC over wider region of the ocean [22][23]. Satellite imaging, however, is restricted to shallow measurement (at most 200m[24]), and understanding POC flux at deeper levels where no imaging cannot be done remotely needs an alternative solution. The transmissiometer (e.g., SeaTech transmissiometer[25]), shown in Fig. 2-1c, is a device that measure the attenuation of 660nm laser beam in a 10cm to 25cm path. The particles within that path length will absorb and scatter the laser, and the attenuation is correlated with the concentration and characteristics of the particles present. While empirical studies have shown such correlation, they are incapable of drawing conclusions on the size distribution and types of particles present in the path. The consistency of the measurement is very low because of the spatial and temporal variability of the suspended particles[8][10][11].

Figure 2-2: POC is comprised of various forms of biological debris. The third row shows the marine snow, faecal pellets (center row) and carcasses of planktonic organ- isms. Not many of these particles seem as round as a sphere. The images are from from [2]

7 2.4 Imaging

There are numerous imaging systems that have been developed to observe marine organisms and marine snow. In particular, researchers have been using the Video Plankton Recorder (VPR) (e.g. Seascan, Inc.[26]) shown in Fig. 2-1d or underwater vision profiler (UVP)[27] to study particles and plankton in the sea. For example, VPR has only been used to measure the concentration of the POC at different depths as the camera was reeled up 30m min−1[8][28]. In such a case, settling rate has been indirectly measured by applying Stoke’s law[29], which assumes spherical particles. However, as seen in Fig. 2-2, the particles are rarely spherical. Hence, application of Stoke’s law can be lead to improper flux estimation[12]. Our approach is also based on imaging, but we plan to measure not only the par- ticle concentration, but also the settling velocities directly. Furthermore, unlike UVP which is 11m tall[27], my system is more modular and mass-deployable to capture the spatial variabilitiy more accurately. There was a recent work[30], whose goal and idea closely aligns with those of the thesis. It uses a stereo imaging pair to conduct PTV to measure particle flux and sedimentation rate. Their system uses a pair of Go Pro Hero 4 digital cameras. The cameras are operated inside a GoPro dual wreck, which is enclosed with another acyrlic housing with LED strips attached. Right outside of the acrylic box (i.e., 8cm away from the lens) the smallest particles that can be measured are 0.17mm long. They capture images of a sampling volume of about 45cm×25cm×24cm (27L). They used a multi-object tracking algorithm[31], based on the use of Kalman filter[32] and Munkres assignment algorithm[33]. The PTV software will generally follow the software setup of this work. This is, however, intended for shallow water studies (rated for depth of 60m) and not a Lagrangian float.

8 Chapter 3

Minions

Figure 3-1: Summary of how Minion floats will be operated in the field and their functionalities[3]

An alternative proposal to study BCP of the ocean is to build mass-deployable isopycal floats with sensors that can directly monitor the quantities and behaviors of POC with appropriate temporal and spatial resolution. Miniature Isopycnal (Minion) floats is a collaborative effort by ocean scientists and engineers funded byNational Ocean Partnership Program (NOPP) to devise such low-cost sensor systems that can be mass-deployed into the ocean. Attached to the float are hardware and sensors

9 that include: upward facing POC flux camera, gel-based sediment trap, dissolved oxygen sensor, photosynthetically active radiation (PAR) sensor, a commercial GPS tracker for recovery, an acoustic receiver, a burn-wire, ballast, and a pair of side- looking stereo imaging systems[6]. This thesis is about designing and implementing the stereo imaging system pair for deployment in the Minion platform. The goal is to deploy these floats at the same time at various depths. First, the density of the at target depth is measured by a conductivity, temperature and depth sensors. Then, the density of each Minion float will be matched to the density of seawater of depth of interest. The floats will be deployed for days to weeks, after which the burn-wire will drop the ballast and the float will surface. The floats are retrieved so that the sensor data can be analyzed. The operation of Minion float deployment is summarized in Fig. 3-1.

3.1 Challenge

Our aim is to devise an instrument that enables investigation of the POC size spectrum, along with their flux and relative particle-float motion in the ocean. POC is defined as organic particles whose equivalent spherical diameter (ESD) is bigger than 0.2µm[34]. Those smaller are considered as [35]. In the literature, it is observed that the smaller the particles, the higher the concentration. Specifically, small and medium-sized particles, whose ESD ranges between 50µm to 1000µm, are often studied[8][36]. Many POCs are aggregated and undergo changes in euphotic zones (200m)[37]. Beyond that level, in the , the POCs are expected to settle with consistent settling rate. Some of the fast settling particles are estimated to settle as fast as 200m/day[8]. In addition, our system will be atop an existing float, The aforementioned Minion, whose body causes dominant float-fluid motion around the platform. The boundary layer is estimated to be around 5cm (see Sec.8.2.1). Therefore, we need to design and build a stereographic imaging system that can resolve and track undisturbed particles in the mesopelagic zone.

10 3.2 Design Goals

Parameter Value Unit POC ESD 50 - 1000 µm Max. POC settling Velocity 200 m/day Imaging Distance 20 cm Duration 1 day Depth 2000 m

Table 3.1: Parameters containing the design phase of the system. The highest priority is to resolve and track the particles

The goal of this project is to measure the size spectrum and settling rate of POC. We are interested in studying particles sized 50 - 1000µm. This requires an imaging system with a resolution of 25µm/px based on the Shannon-Nyquist sam- pling theorem[38]. To allow some margin, my imaging system will have resolution of 20µm/px. The fast settling particles are estimated to sink 200m/day[8][12], which corresponds to 115px/s given the resolution above. In addition, the bobbing of the float and the heat conduction are expected to introduce a hydrodynamic boundary layer to its surrounding fluid. Therefore, the imaging target volume will be imaged at least 20cm away from the cameras. Once the system is optimized for the above design parameters, it can be fur- ther optimized for other parameters. Additional parameters include imaging volume, power-consumption (i.e., battery), pressure tolerance, and cost. At the moment, the target maximum depth is 2000m deep, which will be taken into account when picking out the housing. The current goal for the power consumption is to allow the system to last 1 day, solely on the battery. Finally, the budget for an individual device is approximately $1000. The summary of the parameters to consider during the design phase are shown in Table. 3.1. These are optimal goals that we strive for, and in fact, the current system cannot achieve the aforementioned imaging resolution. The system performance is evaluated and discussed in later sections.

11 3.3 Particle Flux

We are interested in determining the relationship among particle flux size distri-

−2 −1 −1 −3 −1 bution (F, No. m d µm ), concentration size distribution (퐶푖, No. m µm ) and settling velocity (W, m d−1). They follow the relationship:

퐹푖 = 퐶푖 * 푊

According to McDonell et. al.[12], the flux of particles is dependent on multiple factors, such as the density, geometry and composition. Therefore, particles cannot be assumed to follow Stoke‘s law[29] where the bigger particles would be modelled to sink faster than the smaller ones. It is imperative that at least two of the three aforementioned variables are simultaneously measured to compute a spatially and temporally appropriate estimate of the particle behavior. With the imaging pair, we are able to directly observe the concentration and the settling rate to compute the particle flux.

12 Chapter 4

Single Imaging Optics

4.1 Overview

As described in Section 3.2, the imaging system has the following design goals: to provide 20µm/px resolution, take images of water volume at least 20cm away from the optical port, take images at 1FPS. We would also like to minimize the weight, cost and power consumption of the system. Given these considerations, we decided to use a USB 2.0 board-level camera, DMM 72BUC02-ML[39] from TheImagingSource, which uses an MT9P031[40] monochrome imaging sensor from ON Semiconductor. As for the lens, a 25mm focal length Blue Lens Series M12 lens[41], with f-stop 8 aperture, from Edmund Optics was used. The total cost per imaging system when purchased in single unit is $119 (camera) + $95 (lens) = $214 and estimates in bulk (i.e., 1000 units) is $83.3 + $52.5 = $135.8.

13 (a) Assembled board camera with M12 (b) Camera housed inside an acrylic hous- lens for tests in air. ing for underwater tests

Figure 4-1: Photos of the imaging system for testing in air and in water.

RaspberryPi Cameras[42] and ArduCams[43] are very appealing to use for our purpose. They are much cheaper (Less than $50) and have Camera Serial Interface (MIPI CSI-2)[44] interface, which consume lower power and transfer data faster than USB2.0. The cameras and their intended platform, Raspberry Pi, also have a great community support. However, the pixel sizes of these cameras were so small that unfortunately, we were limited by the ability of the lenses to resolve at the required resolution. We could bin 2x2 pixels as one pixel, but the imaging area would be- come much smaller (quarter of the original). However, ArduCam is open for camera customization, so in the next iteration, it would be worthwhile to search for a sen- sor compatible with MIPI CSI-2 that can be customized through ArduCam. Other venders such as XIMEA[45], Basler[46] and LeopardImaging[47] offered great cam- eras, but their price was above our budget ( $200 >).

4.2 Effective Focal Length Underwater

During our deployment, light reflected from the POC travels through three differ- ent media and therefore refracts before arriving at the imaging sensor. We need to

14 Figure 4-2: A diagram that shows the geometrical relations on computing the effective field of view and working distance. The ray of light travels through three different media. The diagram is exaggerated for visibility.

consider the refractive index of seawater[48], (푛푤 = 1.34), sapphire port (푛푠 = 1.75) and air (푛푎 = 1). Fig. 4-2 shows the variables and geometrical diagram of the prob- lem of computing the effective field of view and thereby the effective focal lengthin the seawater. Specifically, a ray of light forms a triangle in each medium, and weare interested in finding the effective field of view, focal length and working distance.

Given the resolution of 20µm/px resolution, the width of the target area, 푤푡푎푟푔푒푡 is

푤푡푎푟푔푒푡 = 2592px × 0.02mm/px = 51.84mm

Similarly, the width of the sensor, 푤푠푒푛푠표푟 is

푤푠푒푛푠표푟 = 2592px × 0.0022mm/px = 2.8512mm

15 Provided the focal length, 푓, the field of view in air is

푤 휃 = tan−1( 푠푒푛푠표푟 ) 푎 2푓 2.8512mm = tan−1( ) 25mm = 6.51°

Hence, given the distance between the sapphire port and the lens, 푑푎, the height of

the triangle formed by the ray of light in air, ℎ푎 is,

ℎ푎 = 푑푎 tan(휃푎)

Rearranging Snell’s law[49],

−1 푛푎 휃푠 = sin ( 푠푖푛(휃푎)) 푛푠

Given the thickness of the sapphire port, 푑푠, we can also compute the height of the triangle formed by the ray of light in sapphire port, ℎ푠,

ℎ푠 = 푑푠 tan(휃푠)

. We compute the height of the triangle formed by the the ray of light in the water,

ℎ푤, 푤 ℎ = 푡푎푟푔푒푡 − (ℎ + ℎ ) 푤 2 푎 푠

Furthermore, we can compute 휃푤 using Snell’s law again. Then, we can compute the shortest distance between the target and the sapphire port, 푑푤

ℎ푤 푑푤 = tan(휃푤)

Assuming the following values for each variables: 푑푎 = 5mm and 푑푠 = 5mm, using

16 Snell’s law, 휃푤 = 4.85° and finally, 푑푤 = 294.88mm. The total distance, 푑푡표푡푎푙, is

푑푡표푡푎푙 = 푑푎 + 푑푠 + 푑푤 = 304.88mm

Thus, the effective field of view underwater, 휃푓표푣,푤 is,

−1 푤푡푎푟푔푒푡 휃푓표푣,푤 = tan ( ) 2푑푡표푡푎푙 25.92mm = tan−1( ) 304.88mm = 4.86°

The resultant effective focal length underwater is 33.52mm.

4.2.1 Depth of Field and Imaging Volume

One of the key objectives of this system is to enable estimation of the particle concentration. Below and including the mesopelagic zone, the concentration of parti- cles are expected to be very low. Very roughly, we expect 0.1 to 10 particles present within our imaging volume [12][50][51]. Therefore, it is important to be able to cap- ture as much volume as possible. With a 20 (µm/px) resolution and the dimension of 2592 (px) × 1944 (px), the width and length of the imaging frame is 5.184cm x 3.888cm (area of 20cm2). Therefore, to maximize the imaging volume, the depth of field (DoF) needs to be maximized. Depth of Field (DoF) is the range of depth across which an object is focused on an imaging plane[52]. An object is “focused” on an image plane if its projection (i.e., blur spot) is smaller than the “acceptable” circle of confusion (CoC). When out of focus, the object become blurry, and this effect is referred to as Bokeh Effect in photography[52]. Depending on the application, the criteria for an “acceptable” CoC and thus DoF varies. A very conservative criteria for DoF is that as long as an object is within the DoF, the image does not change. This requires that the CoC remains very small (e.g., <2px) despite the changes. On the other hand, for our application, small change in depth results in noticeable change in CoC. This is acceptable as long

17 as the object can be identified and tracked. This, however, requires most ofthe particle sizes to be inferred rather than directly measured. We draw a distinction between an extremely conservative definition of DoF estimation and a more realistic one for our application.

Extremely Conservative DoF Estimation

We are interested in learning about the size of individual particles to compute the particle flux for different sizes of particles. Ideally, all of the particles present in the image are focused and resolvable so that their sizes can be measured directly. However, that is not the case for our application, and we want to know the amount of volume in which that is possible. Specifically, we set the diameter of maximum circle of confusion to be 2px wide (i.e., the blur does not exceed 2px), and find the corresponding DoF.

Variable Value Unit Description 푐 4.4 µm diameter of circle of confusion (2px) 푁 8 - F-stop number 퐷 304.88 mm working distance in water 푓 33.52 mm effective focal length in water

Table 4.1: Table of variables and their values to be used to calculate the DoF of the optical system. Some of the values such as the working distance and the focal length are computed in Sec.4.2.

According to [52], using a thin symmetrical lens model, the DoF in front of the focused plane is 푁푐퐷(퐷 − 푓) 퐷표퐹 = (4.1) 푓푟표푛푡 푓 2 + 푁푐(퐷 − 푓)

푁푐퐷(퐷−푓) and similarly, the DoF beyond is 퐷표퐹푏푎푐푘 = 푓 2−푁푐(퐷−푓) . Using the values in Table 4.1 and 4.2, the computed values are 퐷표퐹푓푟표푛푡 = 2.30mm and 퐷표퐹푏푎푐푘 = 2.35mm. The total DoF to be 퐷표퐹 = 퐷표퐹푓푟표푛푡 +퐷표퐹푏푎푐푘 = 4.65mm. The DoF with resolvable and focused particles is only 4.65mm. This only provides an imaging volume of 9.4mL.

18 Realistic DoF Estimation for Our Application

9.4mL is indeed a very little volume of water where the particle size can be resolved with high accuracy. However, we know that more particles at wider depth will appear in the image frame. They will be blurry, but it is possible to infer their sizes if we can calibrate the blurriness as a function of its depth and sizes. In Section 9.4.1, we show that blurred discs with ±30mm of the working distance could be identified. If we consider the range of depth in which the particles can still be identified and tracked, the effective imaging volume of each camera increases significantly to 120.9mL. From hereon, DoF will be referred to as the range of depth through which the target particles can be identified and tracked, but not necessarily focused.

4.2.2 Lens

After the imaging sensor was chosen, the appropriate lens is determined. Multiple M12 lenses were purchased from various vendors to find the best lens at a cost below $100. We intentionally sought for a high f-stop number to minimize the aperture so that the DoF could be maximized. It is important to test the lens performance in com- bination with the sensor. These lenses could claim certain resolution (e.g., 10 Mega Pixels), but without knowing the application, the information is not helpful. Further- more, data provided by the manufacturer was inappropriate for our purpose, since manufacturers often measure at different working distance. Even among the same series of lens, individual lenses would have varying focal length with sub-millimeter inaccuracy.

19 (a) The optical rail for testing (b) Camera resolution test inside a bath- tub

Figure 4-3: We built an optical rail to test each camera against a Siemens star calibration target in air and in water.

4.2.3 Setup

A Siemens star pattern[53][54]1 with 32 stripes engraved as chrome on glass from Thorlabs, Inc.[56] shown in Fig. 4-4 was used to test and focus the lens. To fix the relative distance between the camera and the target, I built an optical rail using 80/20 aluminum bars as shown in Fig. 4-3a. To test underwater, the camera is mounted onto a laser-cut acrylic chassis inside an acrylic housing as shown in Fig. 4-1b. For in-lab testing purposes, I am using the housing kit from BlueRobotics 2” housing kit[57] with a transparent optical port. The entire optical rail along with the housed camera was dunked into a water tank as shown in Fig. 4-3b. Focusing the lens Each lens was hand-screwed onto a lens-mount screwed onto the imaging board. It was first focused on an optical rail to take images of a Siemens star pattern resolution target (Fig. 4-4a). Initially, the camera is placed at an ex- pected working distance (227.3cm in air for 25mm focal length lens). I implemented

1Camera resolution performance is usually provided as a Modulation Transfer Function (MTF) curve, which compares contrast against the resolution in a unit of line pair per pixel. On top of that, contrast for Sagital and Meridonial lines are distinguisheds[53][55]. They are diagonal lines that pass the center of the sensor and connect two diagonal corners. As we are more concerned with distinguishing among more rounded shaped targets, the combined effect of the Sagital and Meridonial lines are more relevant. A Siemens star allows such a measurement, where we take the contrast of lines around a specific radius.

20 a simple code that streams the full resolution of the image for visual feedback and measures the contrast. The distance between the lens and the camera are re-adjusted so that the Siemens star pattern, whose exterior diameter is 1cm, fills the 500px (1mm / 20µm/px = 500px) wide region of interest. The fact that the distance needed to be re-adjusted means that the focal length is not exactly 25mm. This is later calibrated using a different target described in 5.6. At this point, an image of the target istaken and processed to output a resolution performance curve. The minimal resolution at 20% contrast for each lens is noted2.

4.3 Evaluation

(a) An image of the Siemens star pattern (b) The frame is divided into 9 regions target. The minimum resolution that can and minimum resolution in each region is be resolved using this target is 4.36 um evaluated. The performance around the per stripe. Our goal is to resolve 20um corners tend to be worse than in the cen- per line. ter.

Figure 4-4: Images of the Siemens Star calibration target during test underwater.

Lenses from the following vendors were surveyed: CN-AICO[58], Edmund Optics, Scorpion Vision[59] and Lensation[60]. Initially, some CS-mount lenses were consid- ered, but they weighed more and took more space than the M12 lenses. Among them, the Blue Series M12 Lenses from Edmund Optics provided the minimum res-

2This is not to be confused with the metric that we use for computation of the DoF. Here, we are interested in learning about how much the image is distorted by the lens when particles are focused. As for DoF, we are estimating how much the image is distorted depending on the location of the object.

21 olution. The lens together with the imaging sensor provided at most 27.2µm/px and 34.9µm/px resolution at 20% contrast in air and in water, respectively. The performance comparison of the lenses are made in Appendix. C.1. With the selection of the sensor and lens made, the optical system was more thoroughly investigated. It was found that the lens performance deteriorates around the corners of the lens. We investigated the lens performance underwater. While seawater and are very different in terms of their constituent chemicals, the optical properties are similar (refractive index of 1.34[48]). We divided the lens into 9 different regions within the frame as shown in Fig. 4-4b, and the minimum resolution at 20% contrast were measured with the target in each region of the frame. The system was tested both in air and in water. Since the target is transparent glass, the contrast ratio depends highly on the back illumination. White paper on a cardboard and a white cutting board were placed behind the target in air and in water, respectively. Bright white light from a stand was used to illuminate directly above the calibration target. However, in an underwater situation, the ambient light was scattering such that the dark stripes of the targets appeared relatively bright in the images. For example, in air, using a 0 to 255 gray intensity scale, the dark stripes would have an intensity around 25, but in the water, this would be around 80. This problem was resolved by turning off the ambient light and illuminating with a narrow beam using a red LED and a reflector. As a result, in the air, the maximum contrast is approximately 0.8 and in the water, the maximum contrast rose to 0.75, as opposed to 0.65 with an ambient light. The graph in Fig. 4-5 shows an example of measured contrast between the black and white strips on the target as a function of the spatial resolution on the Siemens star. At each of the divided regions within the frame, we note the spatial resolution at which the contrast was measured to be at least 20%. The performance of two pairs of our imaging system is outlined in Table. 4.2. Based on our observations, minimum resolution that the system can provide is 34.9µm/px. Hence, it would be able to resolve particles that are as small as 70µm in diameter. The original design goal was to be able to resolve 50µm diameter.

22 Figure 4-5: A graph of contrast as a function of the spatial resolution in air and underwater. The minimum resolution at 20% contrast is noted. Ambient light scat- tering seems to reduce the maximum contrast underwater, thereby decreasing the minimum resolution.

It seems that we are currently limited by the resolution test apparatus. Chromium on glass is not perfectly absorptive[61][62] (e.g., 11% reflectivity at 436nm), and the transparent soda-lime glass makes the overall contrast sensitive to how the target is illuminated. Furthermore, in the water, the ambient light introduces scattering that reduces the maximum contrast. A more appropriate Siemens target with opaque background and necessary resolution could not be found. However, an alternative target for this study could be a negative (opaque) USAF 1951 resolution target[63], which provides resolution as low as 4.4 µm/px, from II-VI

23 Resolution at 20µm/px Resolution at 20µm/px Region Air Water Region Air Water 1 22.7 33.2 1 24.4 27.9 2 22.7 31.4 2 26.2 34.9 3 22.7 31.4 3 24.4 31.4 4 22.7 27.9 4 22.7 29.7 5 24.4 27.2 5 22.7 29.7 6 22.7 29.7 6 24.4 34.9 7 22.7 26.2 7 24.4 29.7 8 24.4 29.7 8 24.4 31.4 9 22.7 26.2 9 22.7 27.9 (a) Left Camera (b) Right Camera

Table 4.2: Minimum resolution of two cameras in air and underwater

24 Chapter 5

Stereo Design

We need to decide the baseline and the pan angle of the two cameras to design a stereographic imaging system. We define three dimensional axis: x-axis is the line between two vertices at the center of each lens, y-axis is coming out of the page, and the z-axis is orthogonal to both of these axis, looking towards the target. The term ”pan angle”, 휃, will be used to describe the angle between optical axis of the camera and the x-axis. In our design, we set the pan angle such that the optical axis intersects with z-axis at distance of working distance, W (i.e., 휃 = cos−1(퐵/2푊 )). There exists a trade-off among the depth measurement error, the maximum correspondence disparity on an image, and the imaging volume. This is further discussed in Sec. 5.5. In computing the baseline and pan angle, we use the effective focal length and working distance underwater, which was derived in 4.2.

5.1 Baseline and Disparity

In choosing the baseline, we want to minimize the depth error and the maximum correspondence disparity between the particles on the left and right images. The maximum disparity will occur at the closest point, 푃푚푖푛, within the overlapping areas.

In our case, since the depth of field is narrow, 푃푚푖푛 is different from the intersection of the nearest field of view. We need to find 푃푚푖푛 and then compute the corresponding disparity, Δ푥.

25 Figure 5-1: Two-dimensional view of the intersection of two identical circular cones, which represent the field of view of the cameras. Unlike common stereo-imaging systems, the depth of field restricts the closest intersection point to the cameras and thus the maximum disparity.

We work in the two-dimensional x-z plane where y=0. We assume that the cameras are only rotated around the y-axis. This ensures that even in three-dimensions, the closest point is still point 푃푚푖푛 and hence the maximum disparity. The center of the imaging sensor is set to be the origin of the sensor. 푥푙 and 푥푟 are shifts relative to that origin. The left shift is negative and right shift is positive. The variables involved in calculation are listed in Table 5.1 and shown in Fig. 5-1. Knowing the pan angle from above and the depth of field, 퐷표퐹 , the distance along the z-axis, 푉푚푖푛, can be calculated.

푉 = 푊 sin(휃)

26 Variable Value Unit Description 푊 303.17 mm working distance underwater 푓 33.349 mm focal length underwater 푝푖푡푐ℎ푝푥 0.0022 mm pixel pitch 푓푝푥 15158.64 px focal length in units of pixel 퐷표퐹 30 mm depth of field 푥푙 - px pixel on the left sensor 푥푟 - px pixel on the right sensor 퐵 - mm baseline 휃 - ° pan angle Δ푥 - px disparity 푃푚푖푛 - - intersection point of overlapping regions closest to x-axis 푉 - mm distance from x-axis to focal point 푉푚푖푛 - mm distance from x-axis to point P 휃푚푖푛 - ° pan angle between x-axis and the camera to point P Table 5.1: Variables and constants involved in computing the maximum disparity.

퐷표퐹 푉 = 푉 − (5.1) 푚푖푛 cos(90° − 휃) 퐷표퐹 = 푊 sin(휃) − (5.2) sin(휃)

With the baseline and 푉푚푖푛, we can compute the angle between x-axis and the line joining the optical center, 푂퐿 , and 푃푚푖푛.

2푉 휃 = tan−1( 푚푖푛 ) (5.3) 푚푖푛 퐵

The projection of point P on left and right camera lands on 푥푙 and 푥푟 respectively. The angle between these two points and the lens center relative to the optical axis is equal to 90° − 휃푚푖푛. 푥푙 and 푥푟 are the same except for their signs.

푥푙 = −푓푝푥 tan(휃 − 휃푚푖푛)

푥푟 = 푓푝푥 tan(휃 − 휃푚푖푛)

27 The disparity, Δ푥 is defined as the difference between 푥푙 and 푥푟, and so,

Δ푥 = 푥푟 − 푥푙 (5.4)

= 2푓푝푥 tan(휃 − 휃푚푖푛) (5.5)

We substitute 5.3 into 5.5 to find the disparity as a function of the baseline andthe pan angle.

−1 2 ∴ Δ푥 = 2푓푝푥 tan(휃 − tan ( )) (5.6) 퐵(푊 sin(휃) − 퐷표퐹/sin(휃))

5.2 Depth Error

Variable Value Unit Description 푟 0.020 mm spatial resolution 휃+ - ° divergence angle shift due to a pixel shift to the right 휃− - ° divergence angle shift due to a pixel shift to the left 푑+ - mm depth shift due to a pixel shift to the right 푑− mm depth shift due to a pixel shift to the left 푥− -1 px pixel shift from 푥푚푖푛 to the left 푥+ 1 px pixel shift from 푥푚푎푥 to the right 휓− ° angle between z-axis and a line through 푃− and lens center 휓+ ° angle between z-axis and a line through 푃+ and lens center Table 5.2: Variables involved in computing the depth error

We also calculate the maximum depth error as a function of the baseline. On top of the variables defined in Table 5.1, we define a few more variables to calculate thedepth error in Table. 5.2. The geometry is displayed in Fig. 5-2. In our PTV computation, we consider the depth as the distance along the optical axis of the left camera. This is different from z-axis defined previously. The left optical axis is hereafter referred to as the d-axis. The depth uncertainty is the amount of uncertainty along the d-axis corresponding to the spatial resolution of the system, which is 20µm/px.

We compute the amount of change in depth, 푑− − 푑+, as a result of a shift in a

28 Figure 5-2: The geometry and relevant variables involved in computing the depth errors. Note that the depth is measured along the d-axis, which is the optical axis of the left camera.

29 pixel on the right image sensor. The depth error, 휖 is defined as:

1 휖 = (푑 − 푑 ) (5.7) 2 − +

We first compute the angles, 휓− and 휓+, between the d-axis and a line through 푃− and 푃+ with trigonometry. We know from the triangle (푂퐿, 푂푅, 푃−) that

휓− = 180° − 휃 − (휃 + 휃−) (5.8)

= 180° − 2휃 − 휃− (5.9)

Equivalently, 휓+ = 180° − 2휃 + 휃+. We can the compute the divergence angle as a

result of a pixel shift, 휃+ and 휃−

−1 푥− −1 1 휃− = tan (− ) = tan (− ) (5.10) 푓푝푥 푓푝푥 and likewise, 휃 = tan−1( 푥+ ) = tan−1( +1 ). + 푓푝푥 푓푝푥

Given that the arcs from P− and P+ to P subtend very narrow angles, we can approximate them as straight lines of length r. Technically, the d-axis and the line joining P− and O_R are not perpendicular. However, given that θ− is very narrow (≈ 0.0038°), we treat this angle as perpendicular. In this case, we can compute d− and d+ as follows:

d_- = W - \frac{r}{\sin(\psi_-)}    (5.11)

    = W - \frac{r}{\sin(180° - 2\theta - \theta_-)}    (5.12)

    = W - \frac{r}{\sin(2\theta + \theta_-)}    (5.13)

Similarly, d+ = W + r/sin(2θ − θ+). Using their difference, we can now compute the final depth error.

\therefore \epsilon = \frac{1}{2}\left(\frac{r}{\sin(2\theta + \theta_-)} + \frac{r}{\sin(2\theta - \theta_+)}\right)    (5.14)
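Eq. 5.14 is equally simple to evaluate numerically. A minimal MATLAB sketch, assuming the design pan angle and the 20µm spatial resolution from Table 5.2, gives a depth error of roughly 39µm per pixel, in line with Sec. 5.5:

```matlab
% Depth error (Eq. 5.14) for a one-pixel shift on the right sensor.
r     = 0.020;      % spatial resolution [mm/px]
f_px  = 15158.64;   % focal length [px]
theta = 74.7;       % pan angle [deg] (design value)

theta_m = atand(1/f_px);   % divergence angle of a one-pixel shift (Eq. 5.10) [deg]
theta_p = atand(1/f_px);

eps_depth = 0.5*( r/sind(2*theta + theta_m) + r/sind(2*theta - theta_p) );  % [mm]
fprintf('Depth error: %.1f um per pixel of disparity\n', eps_depth*1e3);    % ~39 um
```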

5.3 Correspondence Search Area and Probability of Confusion

We would like to estimate the likelihood, p, of finding two particles in the same search area. We use the maximum concentration reported in McDonnell's PhD thesis[8], which corresponds to at most approximately 10 particles in a frame. It is assumed that the particles are uniformly distributed within an image frame. We can apply the probability calculation for finding two particles in a specified volume given a larger volume [64]; however, we assume that the imaging volume is a cuboid of constant depth so that we can work with the imaging area instead. We compare the number of pixels in the search area to the total number of pixels in the frame, A = 2592 × 1944. In our correspondence search algorithm, we search around the epipolar line with a ±10px (total of 20px) buffer. In addition, we impose the condition that one particle is already in the search area; hence, we look at the probability of exactly one other particle falling within the same search area. In this case, the total number of potential particles is N = 9, the anticipated number of particles is n = 1, and the search area is a = Δx × 20. We directly use the maximum disparity computed for each potential baseline and map the likelihood of two particles being present in the search area. The final equation for p becomes:

p = \binom{N}{n}\left(\frac{a}{A}\right)^n\left(1 - \frac{a}{A}\right)^{N-n}    (5.15)

The effect of changing the baseline on the likelihood of confusion between two particles within the search area is shown in Fig. 5-3. The two outer lines show the impact of changing the pan angle by ±0.5°.

Figure 5-3: The computed probability of another particle being present in the search area of a given particle as a function of the baseline. The two outer lines indicate the results of a change of pan angle by ±0.5°.
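Eq. 5.15 can be checked directly. The MATLAB sketch below assumes the maximum disparity of Table 5.3 and the frame size stated above; it returns roughly 3%, matching the value quoted for B = 160mm in Sec. 5.5:

```matlab
% Probability of a second particle falling in the correspondence search area (Eq. 5.15).
A      = 2592*1944;   % total pixels in a frame
N      = 9;           % other particles potentially in the frame
n      = 1;           % particles we ask to land in the search area
dx_max = 904;         % maximum disparity [px] (Table 5.3)
buffer = 20;          % +/-10 px band around the epipolar line

a = dx_max*buffer;                                   % search area [px]
p = nchoosek(N,n) * (a/A)^n * (1 - a/A)^(N-n);       % binomial probability
fprintf('Probability of confusion: %.1f %%\n', 100*p);   % ~3.1 %
```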

5.4 Imaging Volume

According to [65], the overlapping volume of two congruent cones whose axes intersect symmetrically becomes larger as the pan angle, θ, gets bigger. This holds for cones that are infinitely tall. In our case, we are limited by the optics to a narrow depth of field, which puts a further constraint on the overlapping volume. Therefore, the overlapping imaging volume is computed numerically using the alphaShape[66] functionality of MATLAB. In particular, 2D coordinates of the image frame are projected into 3D space at varying depths using the stereo camera parameters selected for the design. Fig. 5-4 shows the resulting relationship between the overlapping imaging volume and the pan angle. We verify that the chosen pan angle (74.7°) is close to the maximum volume. We also see that even a small change in pan angle can have a significant effect, not only on the volume, but also on the maximum correspondence disparity and the depth error (Fig. 5-5).
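The volume computation itself is not reproduced in the text, but its structure is straightforward: sample candidate 3D points, keep those that fall within both cameras' fields of view and depth of field, and wrap the survivors with alphaShape. The MATLAB sketch below illustrates that structure with a simplified pinhole model; the grid spacing, the sampling bounds, and the rectangular-sensor check are illustrative choices, not the thesis implementation.

```matlab
% Rough numerical estimate of the overlapping imaging volume (cf. Fig. 5-4).
W = 303.17; DoF = 30; f_px = 15158.64;     % [mm], [mm], [px] (Table 5.1)
B = 160; theta = 74.7;                     % [mm], [deg] (Table 5.3)
nx = 2592; ny = 1944;                      % sensor size [px]

C  = [-B/2 0 0;  B/2 0 0];                 % camera centers on the x-axis
ax = [ cosd(theta) 0 sind(theta);          % left optical axis, panned toward +z
      -cosd(theta) 0 sind(theta)];         % right optical axis
up = [0 1 0];

[gx, gy, gz] = meshgrid(-40:40, -35:35, 255:335);   % candidate points [mm]
P = [gx(:) gy(:) gz(:)];

inBoth = true(size(P,1),1);
for c = 1:2
    rightv = cross(up, ax(c,:));           % in-image horizontal direction
    v  = P - C(c,:);
    d  = v*ax(c,:)';                       % distance along the optical axis
    u  = f_px*(v*rightv')./d;              % horizontal image coordinate [px]
    w  = f_px*(v*up')./d;                  % vertical image coordinate [px]
    ok = abs(d - W) <= DoF/2 & abs(u) <= nx/2 & abs(w) <= ny/2;
    inBoth = inBoth & ok;
end

shp = alphaShape(P(inBoth,1), P(inBoth,2), P(inBoth,3));
fprintf('Overlapping imaging volume: %.1f mL\n', volume(shp)/1000);  % mm^3 -> mL
```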

Figure 5-4: Graph showing the overlapping imaging volume in which the particles' 3D positions can be tracked. The red dot indicates the result for the currently chosen pan angle, and the dotted lines show how the overlapping imaging volume varies with a change of ±0.5°.

5.5 Design Choice and Evaluation

We compute the maximum correspondence disparity, depth error, likelihood of detection confusion, and overlapping imaging volume as functions of the baseline between the two cameras. We also observe how they change if the pan angle varies by just ±0.5°, shown with the outer lines in each of the graphs.

Consider the baselines B1 = 160mm and B2 = 260mm, which lead to approximate depth errors of 40µm/px and 25µm/px respectively. By compromising the depth accuracy by 15µm/px, we reduce the chance of two particles coinciding in the search area from 5.4% to 3.1%, a factor of 1.7; reducing the maximum disparity is therefore valuable. On the other hand, the accuracy of the depth matters for computing the motion of the particles, not for measuring the particle size itself. Hence, we do not have to impose a strict accuracy of 25µm/px for the motion along the d-axis.

Figure 5-5: A graph displaying the maximum disparity and depth error as a function of the baseline. The two outer lines indicate the result of a change of pan angle by ±0.5°.

With all of the calculations and metrics discussed above, we design the stereo camera with a baseline of 160mm and a pan angle, θ, of 74.7°. This results in a maximum disparity of 910 pixels and a depth error of 39.3µm/px. The probability of finding another particle within the search area in this case is 3.1%. The summary is shown in Table 5.3.

Variable | Value | Unit
Baseline | 160 | mm
Pan Angle | 74.7 | °
Maximum Disparity | 904 | px
Depth Error | 39.5 | µm/px
Probability of correspondence confusion | 3.1 | %

Table 5.3: Design specifications for the mechanical design of the stereo imaging pair.

Finally, we would like to emphasize that a small misalignment of the two cameras due to mechanical variation significantly affects the quality of observation. We are observing a very small volume of water at a relatively long distance because of the float-fluid interaction, so a small change in pan angle leads to a significant change in system specifications. In particular, by allowing a pan angle tolerance of ±0.5°, the maximum correspondence disparity swings between 650 and 1150px. This also leads to a depth error between 38.4 and 40.7µm/px, while the imaging volume can vary by 5mL. The change in correspondence disparity is the most pronounced, and thus very careful shimming of the lens within the housing is required for the final system to meet such a tight and critical tolerance. The result of a small misalignment is visible in Fig. 5-8.

5.6 Stereo Calibration

In order to correctly track the particles in 3D space, the cameras need to be calibrated. Parameters to be calibrated include the intrinsic and extrinsic calibration parameters as well as any distortion coefficients. The intrinsic parameters of a camera refer to the focal length and the image center point; they pertain solely to the camera itself and do not depend on the rest of the world view. The extrinsic parameters relate the position and orientation of each camera to the 3D space. With the intrinsic and extrinsic parameters, one can map a 3D point to 2D pixel coordinates on each camera, and vice versa. In our camera model, we only consider radial distortion up to the second degree[67]. A common approach to stereo calibration is to use a set of images of a 2D plane with a known grid of features (e.g., a checkerboard) at various orientations, taken by both the left and right cameras. By knowing the accurate relative positions of the features in 3D space, one can infer the camera parameters by a least-squares method[68][69]. This functionality is provided by both OpenCV and MATLAB.

Figure 5-6: Visualization of the extrinsic camera parameters in the water, taking the center of the left imaging sensor as the origin of the 3D coordinate system. The multiple orientations of the calibration target planes have been re-projected as well. The x-axis is the horizontal distance from the left camera, the y-axis is vertical, and the z-axis is the depth. The empty space in the graph has been omitted for visibility.

Another stereo calibration method, used by OpenPTV, requires knowledge of the accurate relative depths among calibration targets. In particular, they suggest using a 3D calibration block on which the features are accurately marked at different known relative depths. Since the 3D target required by OpenPTV was not available, we decided to proceed with the OpenCV/MATLAB approach using a 2D plane calibration target.

In any calibration, the physical features must be resolved well enough to be accurately located by the detection algorithm. On the first trial, a checkerboard calibration target printed on an HP inkjet printer was used. The detector algorithm looks for the corners where the edges of the checkers cross. However, the print quality was not good enough for accurate calibration to take place: the edges around the corners were blurred, leading to errors in the detection algorithm. After all, the cameras were meant for resolutions as fine as 50µm/px, and the resulting corner detection error was more than 20px. Therefore, we decided to purchase a professionally made calibration target from II-VI Aerospace & Defense[70]. The target is a 26×26 grid of dots, each 250µm in diameter and separated by 1mm. This proved to give much higher detection resolution, and the final calibration errors were 0.34px in air and 0.42px in water. The calibrated extrinsic parameters are visualized in Fig. 5-6.

In addition, it is important to note that at greater depths underwater, the entire mechanical rig will start to compress due to pressure. This will alter the pan angles of the cameras and ultimately introduce error into the calibration. It will be an important step to characterize in a pressure chamber how the mechanical structure changes, so that the calibration data can be appropriately adjusted per deployment.

5.6.1 Setup

A stereo camera rig was framed with 1”-wide 80/20 aluminum bars. The cameras were installed on a chassis inside of a 2” acrylic tube with an acrylic port purchased from BlueRobotics[57]. Each camera was screwed onto a standoff on top of a rotating platform [71], and the rotating platform was screwed onto an 80/20 frame roller so that the baseline of the stereo pair could be adjusted. While most of the software is written in MATLAB, MATLAB does not include a package to detect circular grids. Hence, we use simple blob detection in OpenCV to detect the circular grid: during the calibration process, a C++ program saves the detected coordinates of the grid, and MATLAB loads those coordinates for calibration.
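The calibration call itself is standard. A minimal MATLAB sketch, assuming the C++ blob detector saved one CSV of dot centers per image (the file names, the number of image pairs, and the loading code are illustrative):

```matlab
% Stereo calibration from circle-grid centers detected by the OpenCV blob detector.
% imagePoints must be M-by-2-by-numImagePairs-by-2 (last index: 1 = left, 2 = right).
gridSize  = [26 26];          % dots per row/column
spacing   = 1;                % dot pitch [mm]
numPairs  = 20;               % number of calibration image pairs (illustrative)

[gc, gr]    = meshgrid(0:gridSize(2)-1, 0:gridSize(1)-1);
worldPoints = [gc(:) gr(:)]*spacing;          % planar target coordinates [mm]

M = prod(gridSize);
imagePoints = zeros(M, 2, numPairs, 2);
for i = 1:numPairs
    imagePoints(:,:,i,1) = readmatrix(sprintf('left_%02d.csv',  i));   % assumed files
    imagePoints(:,:,i,2) = readmatrix(sprintf('right_%02d.csv', i));
end

% Two radial distortion coefficients, no tangential terms (Sec. 5.6).
stereoParams = estimateCameraParameters(imagePoints, worldPoints, ...
    'NumRadialDistortionCoefficients', 2, 'EstimateTangentialDistortion', false, ...
    'WorldUnits', 'mm');
fprintf('Mean reprojection error: %.2f px\n', stereoParams.MeanReprojectionError);
```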

5.6.2 Results

1 This is an empirical example of how a small misalignment in our rig resulted in a noticeable vertical shift between the left and right cameras. This was in a controlled situation, and we can only expect more challenges during a real deployment.

Figure 5-7: The setup of the stereo calibration rig in the water.

The tests were conducted both in air and in water using the 26×26 circular grid target[70]. Images of the target from the left and right cameras are shown in Fig. 5-8. Upon calibration, each point on the grid was mapped to a 3D point based on the calibration results and re-projected onto the 2D image frame. The difference between the observed grid points and the re-projected grid points was measured as the re-projection error. In air, the mean error was 0.34px and in water, 0.42px.

Figure 5-8: An example of a pair of calibration target images (left and right cameras, respectively).1

Furthermore, to verify the design calculation, we compare the results from the underwater calibration to our calculation. In our underwater setup (see Fig. 5-7), we used a 140mm baseline, and when the cameras were oriented to view a common target, the pan angle was 12°. The distance between the focused target and the principal point on the imaging sensor, read from the extrinsic visualization tool, was 324mm. The distance between the lens aperture and the sensor is approximately 34mm, making the working distance 290mm. Re-running the calculation of the effective working distance using the calibrated parameters gives an expected effective working distance of 289.4mm. There were many sources of human error in the calibration process, but the calculation and the empirical results agree.

Chapter 6

Light Design

The mesopelagic zone where the Minions will be deployed has little ambient light[72]; therefore, a lighting system is necessary. First, we need to select the LEDs and their quantities. Then, the best locations for the light sources must be identified. Most of the light should be focused on the target volume, while the optical path between the target and the camera should receive as little illumination as possible to minimize scattering (i.e., noise). Each light source will be housed in a separate cylindrical housing.

It was important to survey the LEDs and optics available on the market to guide the preliminary computation. The target volume is very small and cost should be minimized, so the search focused on off-the-shelf LEDs and narrow-beam reflectors. Among the power-efficient and cost-effective LEDs were the XLamp XP-E2 (Cree) and Luxeon Rebel (Lumileds) series. We found that the narrow-beam reflectors provided a full width at half maximum (FWHM) angle of at most 16°. Note that the illumination distribution depends on the combination of LED and reflector.

Since the light sources consume a significant amount of power, it is worth thoroughly analyzing and optimizing for power-efficient lighting given a power-restricted environment. The illumination depends on many factors, including the distance and the angle between the target and the light source, the spatial distribution of the illumination, and the wavelength. Therefore, we use details from the data sheets of the LEDs and reflectors, as well as empirical results, to estimate the required LED power and optics. We employ ray tracing[73] to model the optical relations: we trace how light propagates from the light source to the camera.

6.1 Calculation Setup

Figure 6-1: The geometrical setup of the cameras, lights and the target volume seen from the top.

In 3D space, we are interested in the physical and optical relations among three elements: the camera, the target volume, and the light source. From Sec. 5, the positions and orientations of the cameras and the target volume are well-defined. Again, we take the same world-coordinate system from Sec. 5; the geometrical relationship is shown in Fig. 6-1. The origin of the coordinate system is the mid-point between the two cameras. We assume that the left and right cameras are on the same x-z plane where y = 0. We take the point, P, where the optical axis intersects the z-axis as the center of the target volume.

To provide a similar light distribution for both the left and right cameras, the light source moves only within the y-z plane where x = 0 and rotates around point P. The illumination is modelled as a directional light instead of a spotlight, despite the narrow beam angle, because the particles are much smaller than the distance between the particle and the light source (by approximately three orders of magnitude).

6.2 Ray Tracing from the Light Source to the Camera

The outline of the calculation is as follows: (1) compute the total radiant flux (Watts) of the LEDs; (2) given a reflector, compute the radiant intensity (Watts/steradian) at a given ray angle; (3) compute the irradiance on the surface of the particle across a 20µm × 20µm area; (4) assuming the radiance of a Lambertian surface that diffuses light into a hemisphere, compute the radiance of the same area; (5) compute the irradiance onto the aperture of the lens; and finally, (6) compute the voltage response of a pixel to the given ray and normalize by the supply voltage, which gives the digital output.

(1) LED Radiant Flux The LED's output is specified either as luminous flux or as radiant flux, measured in lumens and Watts, respectively. The former depends on the sensitivity of human eyes (683 lm/W at 555nm) and the luminous efficacy[74], η(λ), which scales the sensitivity at other wavelengths relative to the sensitivity at 555nm. The conversion from radiant flux is given in Eq. 6.1, which can be reversed to convert lumens to Watts in Eq. 6.2[75].

L_{Lumen} = 683\,\eta(\lambda)\,L_{Watt}    (6.1)

L_{Watt} = \frac{L_{Lumen}}{683\,\eta(\lambda)}    (6.2)

Even though the central wavelength is the dominant wavelength, the total flux is distributed across a narrow bandwidth (approximately 20nm). This detail is also provided in the datasheet, and its shape can be modelled as a Gaussian distribution. We scale the spectral distribution by the total flux to compute the flux at each wavelength. The result is weighted by the luminous efficacy profile to output the radiant flux profile, which can be summed to give the total radiant flux.
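As an illustration, the MATLAB sketch below distributes an assumed datasheet luminous flux over a Gaussian emission spectrum and applies Eq. 6.2 bin by bin. The efficacy values are coarse CIE photopic points, and the 660nm/20nm spectral parameters are stand-ins for the datasheet numbers.

```matlab
% Convert a datasheet luminous flux to radiant flux (Eqs. 6.1-6.2).
L_lumen   = 100;                        % total luminous flux [lm] (assumed)
lambda    = 620:1:700;                  % wavelength grid [nm]
spectrum  = exp(-0.5*((lambda-660)/(20/2.355)).^2);   % Gaussian, FWHM ~20 nm at 660 nm
spectrum  = spectrum/sum(spectrum);     % normalize to unit area

% Photopic luminous efficacy eta(lambda); coarse CIE V(lambda) points.
eta = interp1([600 620 640 660 680 700], ...
              [0.631 0.381 0.175 0.061 0.017 0.0041], lambda, 'pchip');

L_lm_per_nm = L_lumen*spectrum;         % luminous flux per wavelength bin [lm]
L_W_per_nm  = L_lm_per_nm./(683*eta);   % Eq. 6.2 applied bin by bin [W]
L_watt      = sum(L_W_per_nm);          % total radiant flux [W]
fprintf('Radiant flux: %.3f W\n', L_watt);
```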

Figure 6-2: Measured angular profile of the combination of a Cree XLamp XP-E2 photored LED and a Carclo 10412 lens. This specific combination has a FWHM of 16° in air.

(2) LED and reflector radiant intensity The spatial distribution of the illumination changes depending on the combination of the LED and the reflector. Fig. 6-2 shows the intensity of the light reflected on white paper across the apex angles of the projected light.1 Upon evaluating multiple combinations of LEDs and reflectors, we selected the Cree XLamp XP-E2 photored LED and the Carclo 10412 optical lens. Considering the refractive index, the result is a beam whose FWHM angle in water is 12.4°. Any results shown from here on are based on this specific intensity profile. From the intensity profile, we can compute the radiant intensity at a given angle. We want to convert the normalized intensity to radiant intensity because it makes the irradiance on the target surface easy to compute.

1 The angle on the profile is referred to as the apex angle because the projection of the light forms a cone with this apex angle.

The steradian is the unit of solid angle. Given a unit sphere, one steradian subtends 1/(4π) of the surface area of the sphere[75, p. 254]. The radiant intensity, measured in Watts per steradian, is useful to compute because if we know the surface area being irradiated and the distance, we know that the surface subtends (area/distance²) steradians from the source. Hence, we can apply the inverse square law: dividing the radiant intensity by the square of the distance and multiplying by the surface area of interest gives the irradiance.

In the case where the luminous flux (lumens) is provided instead of the radiant flux (Watts), the intensity profile is converted to a radiant intensity profile. The profile only shows the relative intensity averaged over the apex angle. It also does not follow Lambert's law, so we cannot simply apply the inverse square law. Hence, it needs to be projected onto a 2D surface area to calculate the radiant intensity at each apex angle. We compute the surface area of the spherical cap subtended by a cone whose apex angle is 2φ. Given radius R and angle φ, the surface area of the cap[75, p. 257] is:

S(\varphi, R) = 2\pi R^2(1 - \cos(\varphi))    (6.3)

Since our goal is to find the radiant intensity, which is a quantity per steradian, we set the sphere's radius R = 1. This allows us to work with cones that subtend a unit sphere: if we find the flux per unit area, it is equivalent to the flux per steradian. We separate the apex angles into 1°-wide bins up to n degrees. Each bin is assigned an index i = 1, ..., n and a corresponding apex angle, φ_i. With every increment of the apex angle, S increases by A_i, the surface area of a ring (i.e., A_i = S(φ_i) − S(φ_{i−1})). Then the relative flux, L_rel,i, of the ith ring is the product of the ring area and the relative intensity, F_rel,i:

L_{rel,i} = F_{rel,i}\,A_i    (6.4)

We divide L_Watt by the sum of the relative flux to find the radiant flux per unit relative intensity. Now, for each bin i, this value is multiplied by the relative flux, L_rel,i, and divided by A_i to calculate the radiant flux per unit area. As stated previously, this is equivalent to the radiant intensity, I_L(φ_i), whose unit is radiant flux per steradian. We also apply the efficiency of the reflector, η_reflector. Putting it all together,

I_L(\varphi_i) = \eta_{reflector}\,\frac{L_{rel,i}}{A_i}\left(\frac{L_{Watt}}{\sum_{i=1}^{n} L_{rel,i}}\right)    (6.5)
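The binning translates directly into a few lines of MATLAB. In the sketch below, the relative profile is a made-up Gaussian stand-in for the measured curve of Fig. 6-2, and the total flux and reflector efficiency are assumed values.

```matlab
% Radiant intensity per apex-angle bin from a relative intensity profile (Eqs. 6.3-6.5).
L_watt        = 0.5;                              % total radiant flux [W] (assumed)
eta_reflector = 0.9;                              % reflector efficiency (assumed)
phi           = 1:30;                             % apex-angle bins [deg]
F_rel         = exp(-0.5*(phi/(12.4/2.355)).^2);  % stand-in for the measured profile

S   = @(p) 2*pi*(1 - cosd(p));                    % spherical cap area, R = 1 (Eq. 6.3)
A_i = S(phi) - S([0 phi(1:end-1)]);               % ring area per bin [sr]
L_rel = F_rel.*A_i;                               % relative flux per ring (Eq. 6.4)

I_L = eta_reflector*(L_rel./A_i)*(L_watt/sum(L_rel));  % radiant intensity [W/sr] (Eq. 6.5)
plot(phi, I_L); xlabel('apex angle [deg]'); ylabel('I_L [W/sr]');
```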

Figure 6-3: A diagram to show the relations between each vector and angle involved on the surface of the target area.

(3) Irradiation on the particle We are primarily concerned with a 20µm × 20µm patch of area (A_patch = (20 × 10⁻⁶)² m²) on the particle because it corresponds to one pixel. Let us call the 3D center of this patch C. Assuming that the particle is a sphere and its center point is known, we can find the normal vector, n_C, of each point on the surface. As mentioned previously, we also assume a directional light whose direction vector we define ourselves. Let us call this vector from the light source to point C, v_LC. If v_LC and n_C form an angle φ_LC, the light is spread over a larger area, which reduces the irradiation by cos(φ_LC)[75, p. 258]. There is further loss due to absorption and scattering by the seawater. We employ the Beer-Lambert law with an empirically determined attenuation coefficient, α(λ)[76, Chapter 3][77], to compute the attenuation of the light at a given wavelength, λ. Finally, we use the angle between the target and the light source, φ, to find the initial radiant intensity. Therefore, the irradiance, H_patch, on the surface of interest is:

H_{patch}(\varphi, \lambda) = e^{-\alpha(\lambda)\,d_{LC}}\,\cos(\varphi_{LC})\,\frac{A_{patch}}{d_{LC}^2}\,I_L(\varphi)    (6.6)

(4) Radiant intensity of the patch The patch reflects, or re-emits upon absorption, some light back toward the imaging sensor. According to [78], the single-scattering albedo, ω_p, of marine particles in the visible spectrum is at least 0.8. We assume that the surface is highly diffusive and radiates into a hemisphere. In this case, the radiant intensity is I_patch = (ω_p/π) H_patch.

(5) Irradiance at the lens aperture Now we need to compute how much light is reflected into the aperture of the lens. To do so, a vector, v_AC, between the center of the lens and the point C is computed. We multiply by the area of the aperture (A_aperture = π(focal length / f-stop)²), divide by the square of the distance, d_AC, and multiply by the cosine of the angle, φ_AC, between the reflected light vector and v_AC. Finally, we also take into account the light attenuation in the seawater. The irradiance at the aperture is:

H_{aperture}(\lambda) = \frac{1}{d_{AC}^2}\,e^{-\alpha(\lambda)\,d_{AC}}\,A_{aperture}\,\cos(\varphi_{AC})\,I_{patch}    (6.7)

(6) Sensor output We need to convert the irradiance on the pixel into a digital output. Assuming that the lens converges all of its irradiance, H_aperture, into one pixel, we only need to consider the light fall-off [54, Chapter 3]: with increasing field angle, δ_field, the luminous intensity decreases by a factor of cos⁴, getting darker toward the periphery.

Unfortunately, given the proprietary nature of the sensor itself, it is not possible to derive the exact conversion from irradiance to voltage. However, we work with the best estimate using the available information. In particular, the responsivity of the sensor is ρ_lm = 1.4V/lux-sec at 550nm given the RGB Bayer pattern[40]. Recall that lux is a unit of illuminance, in lm/m². We need to divide by the area of the pixel, A_px = (2.2µm)², later. We convert the responsivity into volts per J/m²; hence, ρ_w = 1.4/(683 lm/W) = 2.05 V per J/m². Based on the monochrome spectral sensitivity provided by the camera manufacturer[39], the quantum efficiency (QE) is approximately 50% at 550nm, while the QE from 600nm to 650nm is around 40%. The efficiency is reduced by 20%, and thus we also reduce the responsivity by 20%. We generalize the responsivity as a function of wavelength, ρ_w(λ). Lastly, we multiply by the exposure time, t_exp, and the gain, g.

V_{px}(\lambda) = \rho_w(\lambda)\,g\,t_{exp}\,\cos^4(\delta_{field})\,\frac{H_{aperture}(\lambda)}{A_{px}}    (6.8)

Finally, to digitize the value between 0 and 1, we divide by the nominal supply voltage, 3.1V.
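Steps (3) through (6) chain together as in the MATLAB sketch below. All the scene values (distances, angles, attenuation coefficient, f-stop, gain) are illustrative placeholders rather than the simulation's actual inputs, and the responsivity is taken at the 550nm value stated above.

```matlab
% Irradiance chain from one LED to one pixel (Eqs. 6.6-6.8), illustrative values.
alpha   = 0.4;                  % attenuation coefficient near 660 nm [1/m] (assumed)
I_phi   = 2.0;                  % LED radiant intensity toward the patch [W/sr] (assumed)
A_patch = (20e-6)^2;            % 20 um x 20 um patch [m^2]
d_LC    = 0.35;  phi_LC = 30;   % light-to-patch distance [m], incidence angle [deg]
d_AC    = 0.30;  phi_AC = 10;   % patch-to-lens distance [m], viewing angle [deg]
omega_p = 0.8;                  % single-scattering albedo of the particle
f_lens  = 33.349e-3; N = 8;     % focal length [m] and f-stop (assumed f-number)
A_px    = (2.2e-6)^2;           % pixel area [m^2]
rho_w   = 2.05;                 % responsivity [V per J/m^2] at 550 nm, as stated in Sec. 6.2
g       = 8; t_exp = 1e-3;      % gain and exposure time [s]
delta_f = 5;                    % field angle of the pixel [deg]

H_patch = exp(-alpha*d_LC)*cosd(phi_LC)*(A_patch/d_LC^2)*I_phi;        % Eq. 6.6
I_patch = (omega_p/pi)*H_patch;                                        % Lambertian re-emission
A_apert = pi*(f_lens/N)^2;                                             % aperture area (Sec. 6.2)
H_apert = exp(-alpha*d_AC)*A_apert*cosd(phi_AC)*I_patch/d_AC^2;        % Eq. 6.7
V_px    = rho_w*g*t_exp*cosd(delta_f)^4*H_apert/A_px;                  % Eq. 6.8
digital = min(V_px/3.1, 1);                                            % normalize by the 3.1 V supply
fprintf('Pixel output (0-1): %.3g\n', digital);
```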

6.3 Simulation

6.3.1 Projection onto spheres

Figure 6-4: A camera cannot see the full disc of a sphere. We can compute whether a surface point lies on the visible or the invisible side by solving for the plane on which all of the points whose tangent lines pass through point A lie.

Now that we can project each patch of area on a particle onto an imaging pixel, we can simulate how the entire sphere would look given its location and the light. First, a camera does not see the full disc of the sphere, as shown in Fig. 6-4. It can only see up to the points whose tangent lines pass through point O. Such points form the base of a cone whose apex angle is 2(90° − φ) and whose axis goes through O and C. Since we know the 3D positions of point O (the lens location) and C (the chosen location of the sphere), we simply subtract these two points to get the normal vector, n, of the plane on which the cone's base lies. The calculation is then straightforward: we need to find the point B that lies on the cone axis and is Δd away from point C along that axis. Taking any plane containing the cone's axis, we can use simple geometry to compute Δd: specifically, cos(φ) = r/D = Δd/r, so Δd = r²/D. To determine whether a surface point, P_S, lies on the visible side or not, the dot product between n and the vector from B to P_S is taken. If the sign is positive, the point is on the visible side; if negative, it is not visible. Fig. 6-5 demonstrates the effects of the light at different angles on the particle illumination.
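A minimal MATLAB sketch of this visibility test, with an arbitrary lens position, sphere, and surface point (all values illustrative):

```matlab
% Visibility test for a surface point on a sphere (Sec. 6.3.1); values illustrative.
O  = [0 0 0];                 % lens center
Cs = [0 0 300];               % sphere center [mm]
r  = 0.5;                     % sphere radius [mm]

n   = O - Cs;  D = norm(n);  n = n/D;   % unit normal pointing from the sphere to the camera
dlt = r^2/D;                            % offset of the visibility plane from Cs
Bp  = Cs + dlt*n;                       % point B on the cone axis

Ps = Cs + r*n;                          % sample surface point (here: the one nearest the camera)
visible = dot(n, Ps - Bp) > 0;          % positive -> on the visible side
fprintf('Point visible: %d\n', visible);
```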

Figure 6-5: An example of an illuminated sphere whose radius is 1mm. A much bigger sphere was used to demonstrate the changes due to different light angles.

Using the sphere rendering, we simulate a scene of multiple particles. The goal of this simulation is to place the lights at different locations and orientations to find the optimal positioning of the lights. To minimize mechanical structure and complexity, two LED light sources are placed symmetrically about the x-z plane (see Fig. 6-7). From here on, we refer to the light angle, φ_L, as the angle between the y-axis and the vector from the light source to the center of the target volume. Similarly, the distance refers to the norm of that vector. Particular attention is paid to the corners of the imaging volume, since they are the most susceptible to light fall-off, the Bokeh effect, and light attenuation. Hence, nine particles were placed within the imaging volume (one at each of the eight corners and one in the center). An example with enlarged spheres of 500µm ESD is shown in Fig. 6-6.


Figure 6-6: Simulation example of 9 particles spread over the imaging volume. The ESD of these spheres is 500µm, and the light is at an angle of 55°, 300mm away from the center of the imaging volume.

6.3.2 Results

Figure 6-7: A geometrical setup of the simulation. There are two LEDs that are angled so that the central ray faces the center of the target volume. They are also symmetrically placed about the x-z plane.

We evaluate how different light positions affect the measurements. Since lights at varying angles do not illuminate the entire body of the particles, the detected centroid of a particle differs from its actual centroid[79]. Hence, we evaluate the detected centroid offset and its error (standard deviation) as a function of the light angle. In addition, we are interested in maximizing the intensity of the light reflected from the particles, so we also study how the light distance affects the irradiance of the particles.

Figure 6-8: Detection offset of the centroids of particles on the image, in pixels, for given sphere ESDs: (a) offset across the x-axis; (b) offset across the y-axis. For bigger spheres, the offsets are more significant.

Figure 6-9: Standard deviation of the detection offset of the centroids of particles: (a) across the x-axis; (b) across the y-axis. To reduce error for calibration, it is optimal to increase the light angle.

We evaluated the detection offset on the images and found that with smaller light angles, more illumination is focused on the frontal face of the spheres, offsetting the detected centroid of the spheres. We noticed that the distance does not have a significant effect on the offset, so the offsets were averaged for the same ESD. Fig. 6-8 shows the detected centroid offsets along the x-axis and y-axis compared to the ground-truth centroids. For small particles, the offset is less than 0.5px, while for bigger particles (e.g., ESD = 1000µm) the offset is highly dependent on the angle of the lights.

However, what matters more is the standard deviation of these detection offsets. We can calibrate the offsets physically and estimate the correct centroid location given the size of the particles; however, depending on the locations of the particles, the offsets can vary. Fig. 6-9 shows the standard deviation of these offsets, and we want to minimize this value by increasing the light angle as much as possible.

Figure 6-10: A graph showing the voltage at the pixel due to illumination at varying angles for different sizes of spheres. Regardless of the size of the spheres, the voltage is reduced as the light angle increases. The distance between the target and the light was kept the same at 35cm.

We now look at how different angles affect the intensity of the irradiance on the particles. Fig. 6-10 shows the per-pixel voltage profile for spheres of different ESD as the light angle changes. The light distance was kept the same at 250mm from the center of the imaging volume. As we can see, the voltage decreases as the angle gets bigger. Therefore, if we only considered the effectiveness of the illumination, the lights should be as close as possible to the target and illuminate at the lowest angle possible. However, we have shown above that the smaller the light angle, the larger the offset error in particle detection. Considering all of these factors, 45° seemed reasonable: we lose at most 1/3 of the intensity but reduce the detection error by 1/2.

Figure 6-11: A graph showing the pixel voltage depending on the light angle and the distance of the lights from spheres of ESD = 100µm. The farther the distance, the lower the illumination, as expected.

Finally, we have to decide how far the light needs to be from the target. It is clear from Fig. 6-11 that the farther the distance, the lower the intensity per pixel. However, depending on the FWHM angle of the optics, the minimum distance at which the illumination over the target volume is flat varies. Specifically, we would like to illuminate the periphery with at least 50% of the maximum intensity. Given a 45° tilt in the light angle, the longest half-diagonal of the imaging volume is 37.8mm. This was the deciding factor for choosing the Carclo 10412 lens over the Roithner CLP17CR. The CLP17CR gives 7.8° of FWHM in water, so the lights would have to be 527mm away from the target volume (distance_min = 37.8mm / tan(7.8°/2) = 526.7mm). On the other hand, the Carclo 10412 lens offers 12° of FWHM, which requires a minimum distance of 347.3mm. Also, recall that we intend to use one LED on top and one LED at the bottom, so the form factor of the entire mechanical structure becomes much bigger if we use the CLP17CR: with the Carclo 10412 lens, the distance between the two LEDs (i.e., the height of the entire system) is 456mm, while with the CLP17CR it would be 745mm.

Having selected a particular combination of LED and reflector, we selected an illumination distance of 350mm and an angle of 45°. Particles within the target volume will be evenly illuminated and visible as long as the gain is controlled. The resulting maximum detection offset is 2px with a standard deviation of 0.3px for 1mm ESD spheres, which are expected to be rare. The light is also approximately 217mm (247.5mm − 30mm) away from the column of water that we are interested in studying, which is far beyond the boundary layer computed in Sec. 8.2.1.

6.4 Evaluation of Optical Elements

Figure 6-12: Experiment setup and an example outcome using a single LED and reflector to measure its FWHM: (a) the test setup for measuring the intensity profile of various LED and reflector combinations (a red-orange LED is shown); (b) illumination 10cm away from the wall using the combination of the XP-E2 photored LED and the Carclo 10412 lens.

The FWHM angle of a bare Cree XLamp XP-E2 or Luxeon Rebel LED is very wide (approximately 130°) and thus wasteful if used directly for our application. Therefore, we attach a reflector, which focuses the beam of light into a much narrower angle.

Several combinations of LEDs and reflectors were tested using the setup shown in Fig. 6-12b. We used our camera system to take images of the illumination. For LEDs, the red-orange and photored variants of both the Cree XLamp XP-E2 and the Lumileds Luxeon Rebel were first considered. As for the optics, we tried the OPC11COL from Dialight[80], the C11347_REGINA from Ledil[81], the 10412 lens from Carclo[82], and the CLP17CR from Roithner LaserTechnik[83]. A very simple PCB with mechanical adaptations (see Appendix A) for the various reflectors was used. Radial profiles of various combinations of LEDs and reflectors/lenses are shown in Appendix C.2. Among them, the combination of the Carclo 10412 lens and the XP-E2 photored LED provided the most power-efficient illumination with an appropriate FWHM angle (12.4°). An example reflection on white paper using this optical combination 10cm away from the paper is shown in Fig. 6-12a. Its angular intensity profile was provided in Fig. 6-2; its FWHM is 16.6° in air and 12.4° in water.

LED | Reflector | Air FWHM (°) | Water FWHM (°)
XP-E2 Photored | CLP17CR | 10.4 | 7.8
XP-E2 Photored | Carclo 10412 | 16.0 | 12.0
Rebel Photored | OPC11C0L | 13.6 | 10.2

Table 6.1: The air FWHM measurement results with three different combinations of LED and reflectors/lens. The FWHM in water was calculated based on the refractive index.

The analysis of each combination is shown in Table 6.1. The Ledil reflector left a noticeable black mark at the center, so it was not considered further. Mechanically, the combination of the Rebel photored and the OPC11C0L would have been optimal. However, the XP-E2 photored LED was 20% more efficient than the Rebel photored, so we chose the other combination. In addition, red-orange LEDs were not used because they are at least 30% less efficient than photored LEDs of the same series, and the benefit of lower attenuation in water is minimal: at a distance of 50cm, the attenuation factor due to salt water absorption is 0.86 at 610nm (red-orange) and 0.82 at 660nm (photored)[48].

Figure 6-13: Two-LED optical setup used to verify that the design choice leads to the expected results: (a) the setup for empirically measuring the performance of two XP-E2 photored LEDs and Carclo 10412 lenses with the intended design specifications;2 (b) illumination of a white paper on a wall using the setup in (a), with an exposure of 1ms and a gain of 8. Light fall-off is expected towards the periphery, and in the field, the gain could be adjusted higher if necessary.

We decided to place the LEDs 350mm away from the target volume and at a 45° angle in the water. In air, this corresponds to 270mm away from the target. We set up an experiment to check that the illumination of the two LEDs onto a wall at this distance from the target would indeed illuminate the area of interest without significant light fall-off. Fig. 6-13a shows the physical setup, and the outcome is displayed in Fig. 6-13b.

2 Thanks, COVID-19, for pushing our limits to be creative so that we could still get our research done. This specific setup involved scissors and wooden chopsticks from Mu Lan in Cambridge.

Chapter 7

Electrical System

7.1 Embedded System

Since we are building a stereographic imaging system, having three separate units seemed most reasonable. A central computing unit that handles all of the imaging and peripherals sounds ideal. However, if both cameras were housed inside one housing, the stereo geometry would require a very special housing whose fabrication is costly. If only the cameras were housed separately, the power and data communication with the central unit would require numerous wires through penetrators, which are expensive and risky with regard to leaks and failure. Hence, we came to the conclusion that three housings with a separate controlling unit is the best solution for our needs. The question then becomes synchronization. We decided to take a wireless approach using WiFi instead of a wire. This is further discussed in Section 7.1.1, which describes the approach we have taken to avoid the problem of severe RF attenuation in salt water.

As shown in the system architecture diagram in Fig. 7-1, there are three separate housings, each containing a single board computer (SBC) and either a camera or an LED driver. These three SBCs form a small local sensor network. They communicate wirelessly, thereby reducing the chance of water penetration by eliminating a penetrator for a wired connection. This is further discussed in Section 7.1.6. A NanoPi Duo2[84] from FriendlyElec is installed in each camera housing, and a Raspberry Pi Zero W (RPI0W)[85] is installed in the LED housing.

Figure 7-1: The overall electrical system architecture, separated by respective housing.

The Duo2 runs FriendlyCore OS[84], based on Ubuntu with Linux kernel 4.14. The RPI0W runs Raspberry Pi OS[86] with Linux kernel 4.19. The Duo2 plays the following roles: it wirelessly communicates with and synchronizes its time to that of the RPI0W, controls the camera trigger, compresses and saves images, and internally logs relevant events. The RPI0W plays the following roles: it serves the Duo2 with its time, hosts a WiFi access point, measures pressure and temperature, timestamps, drives the LEDs, and internally logs relevant events. The primary reason these SBCs were chosen is that the cameras come with an open-source SDK[87], provided by the manufacturer, that requires a Debian/Ubuntu operating system. Given its small form factor, low price, online support, and wireless communication capability, the RPI0W was a good fit for driving the LED and acting as the master among the wireless nodes. The Duo2 has similar features to the RPI Zero plus a quad-core 1.2GHz clock speed, making it suitable for handling image processing and storage. The Duo2 uses cpuset[88] and chrt[89] to run the application code on 3 CPUs at a 672MHz clock frequency.

7.1.1 Camera and LED Synchronization

When the cameras and LEDs are not connected through a wire, we need a means of synchronizing the two cameras and the LED strobe. There are two options1: (1) installing photodiodes close to the optical port of the cameras, which trigger the cameras upon each strobe; or (2) wirelessly synchronizing the time of each SBC to trigger the LED and cameras at the same time. Option (1) is relatively simple, but there is a high chance that the target volume is empty, such that no light is reflected into the photodiodes, which makes it unreliable. Therefore, we decided to take approach (2), synchronizing time wirelessly. In this section, the camera modules are referred to as "client nodes", or clients, and the LED module is referred to as the "server node", or server.

We apply existing sensor time synchronization techniques. The Timing-sync Protocol for Sensor Networks (TPSN)[90] synchronizes the clocks of the clients to that of the server. Alternatively, in Reference Broadcast Synchronization (RBS)[91], a server broadcasts its time, and the clients share their reception times with each other to compute the time skew. However, [90] shows that TPSN is twice as accurate as RBS. Furthermore, broadcasting caused random delays leading to highly inaccurate synchronization. Therefore, we decided to use TPSN.

TPSN involves 1:1 communication between the server and the client. In particular, the client creates a timestamp T1 and sends a message to the server. Upon receiving the message, the server stamps T2 and, after some delay, stamps T3. The server immediately sends T2 and T3 to the client, which receives them at T4. The clock skew (difference) between the client and the server is then computed as ((T2 − T1) − (T4 − T3))/2. However, as described in [90], there are many places where errors and jitter can arise, leading to an inaccurate skew. The noise is modelled as a Gaussian distribution in [91]; therefore, the skew is averaged over multiple computations, in this case 50. This computation is repeated between the other client and the server. When both clients are synchronized with the server, each client can account for its own clock skew. The flow of communication is depicted in Fig. 7-2. We use the TCP protocol for the 1:1 communication[92].

1 One may be able to use sound, but it is physically inefficient to transmit sound through media of significantly different densities.

Figure 7-2: The diagram shows the flow of how two clients synchronize their clocks with that of the server using TPSN. Communication happens in an arbitrary order; in this case, client A communicates with the server first and then client B is synchronized.

As an experiment to verify the accuracy of the synchronization, a GPIO on each SBC was set high at the "same" time based on its local clock and observed on an oscilloscope[93]. An example of the triggers is displayed in Fig. 7-3. The triggers were initially separated from each other by at most 35µs, which is sufficient for our needs: since each strobe is 10ms and the camera has an exposure of 1ms, the triggers only need to be synchronized to within 9ms. Another issue, however, is that the clocks on each SBC drift at different rates. We also verified that temperature affects the rate of drift by running an experiment inside a mini-fridge and at room temperature (see Fig. 7-4a). Inside the fridge, the drift was repeatedly observed to be 400µs/min, while in room air it was only 350µs/min.

Figure 7-3: Output from an oscilloscope when each SBC sets a GPIO high at the "same" time based on its clock. A client could be behind in time compared to the server. To avoid triggering the cameras before the LED strobe, we intentionally delay the clock of the cameras by 1ms, which is longer than the maximum measured synchronization error of 300µs.

In our deployment, temperature is expected to vary, so we need to synchronize the time dynamically. The client's clock is synchronized every 10 minutes, which gives us the time skew, T_skew1. One minute after each synchronization, we compute the drift rate by re-measuring the skew, T_skew2, and comparing it to the skew at the time of synchronization. We can compute one second of the server with respect to the clock of the client as 60/(T_skew2 − T_skew1).

To verify that the images continue to stay in synchronization, continuous images under synchronized strobing were taken for one hour, multiple times, inside a dark room. The cameras and LEDs were operated remotely as if they were underwater. The LEDs strobed for 10ms every second, and the cameras were triggered every second based on the synchronized local clocks to take images. The clocks were re-synchronized every 10 minutes. The drop rate of the frames proved to be approximately 1 out of 3000 images (once every 50 minutes) upon taking approximately 20,000 images. The rest of the frames were images of the wall under bright strobe light, with the cameras and the LED in sync.

Figure 7-4: Experiment setup for TPSN synchronization and wireless communication feasibility in saltwater: (a) setup for the clock drift test in a mini fridge; (b) WiFi communication feasibility test in saltwater.

The synchronization operation of the three nodes is as follows. Let us call the camera nodes nodeC and the LED node nodeL: (1) nodeC connects to the local Wi-Fi hosted by nodeL; (2) nodeC synchronizes with the LED node based on TPSN by correcting its clock with the skew time averaged over 50 samples; (3) after one minute, it queries nodeL for the skew time to compute the drift; (4) the trigger period is reset taking into account the drift rate. Every 30 minutes, steps (2) to (4) are repeated to re-synchronize and calibrate out any temperature effect. The code is available in Appendix B.

7.1.2 Image compression and retrieval

Each image is 5MB in size if saved as a raw 8-bit array of data. This means that for 24-hour-long deployments at 1FPS, we need to store 5MB/s × 24hr/day × 3600s/hr = 432GB. This is an exceedingly large amount of data to store on an SD card, and naively dumping the binary files onto the SD card is not going to work.

Thankfully, we expect the images to be mostly empty. Therefore, we employ the lossless compression algorithm PackBits[94], which uses Run-Length Encoding (RLE)[95], to compress our images significantly. The images are saved as TIFF using LibTIFF[96]. We should also consider noise in the images. Noiseless images can be compressed from 5MB down to as little as 80kB, taking 150ms on average. Yet, a pitch-black image with white Gaussian noise of variance 0.0004 compresses to only ~4MB. Though not an equivalent comparison, to get a sense of the time necessary to compress noisy images, we measured the time for the SBC to compress images with many features: approximately 400ms. Therefore, it is vital to apply some preliminary filtering as each image comes in to reduce noise. This allows us to store the required amount of data and run much faster.

Since the majority of the image is empty background, the average of the pixels is close to the value of the background noise. Hence, when each frame is available, pixels below a threshold of the average plus three standard deviations are set to the average value. Assuming a Gaussian noise distribution, this should set 99.7% of the pixels to the same value. Particles are expected to be very bright compared to the noise, and their concentration is low (even ten 1mm ESD spheres would only take up 0.4% of the 5 million pixels), so this compression can be very effective and fast. Realistically, we cannot compute the average and variance for every frame. Since we expect the background not to vary much, the average and variance of the first 50 images are used to set the threshold, and the pixels of the images thereafter are set to the average value if they fall below it.
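A MATLAB sketch of this pre-compression filtering; the file naming and the way the first-50-frame statistics are accumulated are illustrative, not the on-board implementation:

```matlab
% Background suppression before PackBits/TIFF compression (Sec. 7.1.2).
calibMean = zeros(50, 1);  calibVar = zeros(50, 1);
for k = 1:50
    frame = imread(sprintf('frame_%04d.tif', k));     % assumed file naming
    calibMean(k) = mean(frame(:));
    calibVar(k)  = var(double(frame(:)));
end
bg  = mean(calibMean);                 % background level
thr = bg + 3*sqrt(mean(calibVar));     % mean + 3*sigma threshold from the first 50 frames

frame = imread('frame_0051.tif');
mask  = frame < thr;                   % pixels indistinguishable from background
frame(mask) = uint8(round(bg));        % flatten them so RLE compresses well
imwrite(frame, 'frame_0051_filtered.tif', 'Compression', 'packbits');
```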

7.1.3 Mission Programming and Data Retrieval

Mission programming will be done by transferring a configuration file using "scp"[97]. Similarly, the data are retrieved from each node by using the "scp" command at the terminal. Given that each camera produces approximately 100kB per image, the total amount of data to be retrieved from a 24-hour deployment is 8.64GB. The empirical Wi-Fi transfer speed is 1MB/s (the RPI0W uses a Cypress wireless chipset[98] that only allows 802.11n with 20MHz bandwidth[99]). This would take 8640 seconds, or 2.4hrs, for each camera, and a total of 4.8hrs for the stereo pair.

7.1.4 Peripheral Electronics

During the deployment, the system needs to log the depth and temperature, as well as the time. The Keller 7LD[100], rated for 200bar (i.e., 2000m depth), is a pressure sensor with a pressure measurement accuracy of ±0.3bar and a temperature accuracy of ±2°C. Accurate timestamps come from a real-time clock (RTC), the DS3231[101]. These devices communicate over I2C[102]. For a Raspberry Pi to communicate with the Keller 7LD, an Arduino library[103] developed by BlueRobotics was modified to work in C++. As for the RTC, we used the Linux i2c-tools package[104], following a set of bash scripts written by WittyPi[105].

7.1.5 Internal Logging

The DS3231[101] RTC has an accuracy of ±2ppm, yielding an offset of at most ±173ms over 24 hours. The clocks on the SBCs, however, drift much faster (~200ppm), depending on temperature. To compute accurate particle settling velocities, we need to be able to track this drift by comparing against the timestamp of the DS3231. Therefore, the RTC time is logged every 30 minutes.

7.1.6 Wireless Communication Underwater

If we use wireless communication, the chance of water penetration is significantly reduced and the endcap can host other peripherals (e.g., a wireless charging coil). However, radio is rarely used underwater because the ocean behaves as a conductor that absorbs electromagnetic waves[106]. Wireless communication at 2.46GHz (the Wi-Fi center frequency) in seawater with a conductivity of 3S/m was studied in [107]: the received signal is attenuated by 70dB when the antennas are merely 5cm apart. However, our housings are interconnected by a dielectric mechanical structure that is transparent to radio signals. This enables wireless communication between the camera board and the LED board.

A preliminary experiment was carried out to verify the feasibility of wireless communication. Unfortunately, given the pandemic, it had to be carried out at home with available domestic materials. An RPI0W and a NanoPi Duo2 were each placed inside a small waterproof container and powered by a commodity portable charger. On boot, the RPI0W hosts Wi-Fi, and the NanoPi connects to it if accessible. Once connected, a simple message exchange takes place every 5 seconds. To mimic a seawater environment, salt water of 35ppt concentration was made by mixing 1.4kg of salt with 40L of tap water. The setup is shown in Fig. 7-4b. The first experiment aimed to find the maximum communication distance through saltwater between the two containers; the maximum distance was only 3cm. Another experiment was conducted by placing a plastic spacer (a rectangular column of LEGO blocks) of varying length and width between the two containers inside the salt water. In this setup, communication was logged up to a distance of 15cm, the maximum length tested, using a column of width 4cm. Obviously, an additional experiment is required in the field with an actual mechanical structure. However, this verified that wireless communication through a plastic medium is feasible. We expect the communicating modules to be at most 15cm apart from each other.

7.2 Circuit

7.2.1 Mainboard

To fully evaluate the operation of the electronic system, an evaluation circuit board was designed and tested. Since we are using two different embedded boards (the NanoPi Duo2 and the RPI0W), whose header dimensions differ, the board was designed such that the headers from either board could be connected. Particular attention was given to ensuring that no pins (e.g., 3.3V, control pins) overlapped. The layout accommodates all of the components: the DC-DC converter, RTC, LED-driver circuit, headers, and connectors for peripherals (power, pressure sensor, LEDs, and camera trigger, as well as USB pins). The schematic and layout are presented in Appendix A. If the proposed end-cap from Sec. 8.1 is used, components like the RTC, pressure sensor, and DC-DC converter do not have to be present. In that case, the motherboard will only serve as a hub connecting the LEDs, camera, power, and peripherals to the main embedded board.

7.2.2 LED Driver

Figure 7-5: Example I-V curves of the Cree XLamp XP-E2 LED series[4]. The forward current varies significantly with a small change in voltage across the LEDs.

We will drive two low-power LEDs periodically, for sub-millisecond durations. There are numerous power supply circuit designs for driving LEDs depending on the application; namely, one could use a linear power supply or a switch-mode supply design. The former is less sophisticated but less efficient than the latter[108]. We decided to employ a linear constant-current power supply circuit, where the current is set by a resistor in series with the LEDs. However, as shown in the example current-voltage (I-V) curves of an LED in Fig. 7-5, a small change in voltage leads to a noticeable difference in current. In particular, the I-V characteristic of the LED shifts with temperature, which is bound to rise due to power dissipation. This can quickly overheat the LED and lead to thermal runaway[108]. Therefore, to add thermal protection as well as short-circuit protection, we added an extra pair of transistors to act as a feedback loop.

Figure 7-6: A high-level circuit schematic of the LED driver. The MOSFET and BJT form a feedback loop that prevents the over-current draw that can happen due to variations in temperature or voltage.

To drive the LEDs, we designed a simple yet stable current source with a MOSFET and a BJT. The BJT and MOSFET form a feedback loop that maintains a constant current despite changes in temperature and voltage[109]. At a high level, if the current through R2 increases, the voltage across the base of the BJT increases, turning the transistor on. This, in turn, adjusts the gate voltage of the MOSFET, reducing the current from drain to source. Appropriate parts were chosen such that each LED is driven at 2.15V and 500mA. The schematic and layout are shown in Appendix A.

7.2.3 Circuit Isolation and Short Protection

Since the ocean is conductive and corrosive, we need to isolate the electronics from any metal that touches the ocean. Otherwise, a housing with an electrochemical potential relative to the ocean will undergo electrolysis and ultimately corrode. The electronics are therefore not grounded to the chassis. Another potential failure is a short in parts exposed to the water, such as the LEDs, which could damage the electronics and the batteries. Hence, for the LEDs, there is a short-circuit protection IC programmed to shut down beyond 1A, and the battery protection circuit disconnects the batteries when more than 2A is drawn by the system.

7.3 Power

7.3.1 Power Consumption

To measure the power consumption of the entire electrical system, the voltage across a 1Ω power resistor in series between the power source and the PCB was measured. A Siglent SPD3303X-E power supply was used to power the electronics, and an SDS1104X-E oscilloscope was used to measure the voltage continuously. When the imaging system was running at full power (i.e., measuring sensor data, communicating wirelessly, and taking images), it consumed 1.95W on average. The LED electronics consumed 0.9W on average. We decided to use LG MJ1 18650 Li-Ion batteries because of their dimensions and power efficiency. When the power consumption was broken down by module, most of the consumption came from running the camera library (0.75W when idle).

To run at 1.95W for 24 hours, we need 46.8W-hr worth of batteries. The MJ1 Li-Ion batteries have a nominal capacity of 12.72W-hr, so to run the camera for 24 hours we need 4 of these batteries; the LED electronics only require 2. For the first prototype, we will use 3 batteries per housing instead of 4. The housing proposed in Sec. 8.1 has been designed specifically for charging three batteries. There is space on top of the embedded board to place one or two more batteries; however, the fact that the effective density is already above that of seawater favors fewer batteries, and having 4 batteries adds cost and engineering complications to the existing power system. It would thus be advisable to work on reducing power consumption instead.

70 7.3.2 Battery Protection

Another important aspect of the battery electronics is protection. While Li-Ion batteries are known to be safe if used appropriately, they can fail catastrophically when under pressure[110]. To reduce risk, a certain level of protection circuitry is necessary. The level of protection varies widely: simple circuits protect batteries from over/under voltage and over-current, while sophisticated protection allows for calibrating, diagnosing, and reporting the health of individual cells in battery packs. Compared to the automobile industry or electric scooters, our system has only three batteries in the pack and discharges them at low power (approx. 2W) in a non-human-occupied environment. It is sufficient to have minimal safety protection, where any over/under voltage or short circuit on the load or batteries leads to an immediate shutdown. Therefore, we decided to use the PCM-SM3 from AA Portable Power Corp[111].

Chapter 8

Mechanical Design

8.1 Housing

Each camera and its relevant electronics are housed inside a custom-built Delrin plastic cylinder housing with a 40mm inner diameter (ID) that is 246.5mm long. One of its end-caps is a sapphire optical port, 5mm thick. This housing is designed for a pressure tolerance of at most 2000m depth, and we intend to use it down to 1500m. There is a chassis inside to affix the camera, electronics, and batteries. The other end-cap hosts a wireless charger, a pressure sensor, and a PCB with a charging circuit, GPS, DC-DC converter, and RTC. This housing is a joint effort with the Maka Niu project, done in collaboration with the MIT Media Lab Open Ocean Initiative[5].

Figure 8-1: Partly exploded front view of the housing model for Maka-Niu[5]

We use two more housings for the LEDs; these are identical to those described above but shorter (140 mm). One housing holds batteries and a light source; the other houses an LED driver and a light source. They are connected by a wire through a 4-pin penetrator (SubConn Micro Circular-4) on each end-cap. All four housings will be rigidly held together by a plastic structure with 3D-printed holders to meet the design specifications set out in the previous sections. Future work includes iterating on the mechanical structure and on the wireless communication capability mentioned in Section 7.1.6.

8.2 Float-Fluid Interaction

8.2.1 The Motion of the Float

Let us consider the interaction of the fluid and the float during deployment. The floats will be deployed at depths ranging from 200 m to 1500 m. The interior of the ocean is vertically stratified according to the seawater's density (isopycnals), which is affected by temperature, pressure, and salinity. Internal waves are generated by the

ocean's interaction with the atmosphere and the barotropic tidal flow[112]. Depending on the physical properties of the internal waves, their amplitudes can exceed 50 m and their periods can range from minutes to hours[113][112]. When a float settles onto its matched isopycnal stratum, its motion will be affected by the internal waves. A neutrally buoyant float responds perfectly to horizontal motion[114]. However, its vertical response depends on local properties of the seawater as well as on the relative physical properties of the float and the seawater (i.e., the differences in compressibility and thermal expansion coefficients). These differences lead to a change in buoyancy and hence to a different restoring force[115] on the float compared to the water parcel in motion. This causes the float to oscillate relative to the seawater motion, and thus the fluid around the float is disturbed. We would like to study the behavior of particles least affected by the disturbed fluid. Therefore, we need to understand the boundary layer of the fluid around the float. To do so, we have to approximate the float's motion relative to the seawater parcel motion, find the float's oscillating displacement amplitude, compute the Reynolds number to determine the behavior of the boundary layer, and finally find the thickness of that boundary layer. Based on the literature, the motion of the float can be approximated by a driven damped harmonic oscillator[114][115]:

\[
\frac{d^2 Z_r}{dt^2} + \sigma \frac{dZ_r}{dt} + \omega_0^2 Z_r = \omega_1^2 Z_w \tag{8.1}
\]

where Z_r is the vertical position of the float relative to that of the seawater displaced by the internal wave, σ is a constant that encapsulates drag properties, and ω_0 and ω_1 are the natural frequencies of oscillation caused by the change in the float's buoyancy relative to the water parcel and by the absolute adiabatic change in buoyancy due to the water, respectively.

Let us consider an internal wave whose vertical displacement amplitude is A_w, oscillating at frequency ω (i.e., Z_w = A_w cos(ωt)). We substitute Z_w in Eq. 8.1 and solve the equation, knowing that the solution takes the form Z_r = A_r cos(ωt), where A_r is the float's amplitude. This leads to the float's relative amplitude,

\[
A_r = \frac{\omega_1^2 A_w}{\left[(\omega_0^2 - \omega^2)^2 + \sigma^2\omega^2\right]^{1/2}} \tag{8.2}
\]

Now we consider two different cases: “near-equilibrium” and “near-resonance”.

Near Equilibrium

If the timescale of the internal wave is much longer than that of the natural frequency, and if the diffusivity of the surrounding fluid can be ignored, we are in the "near-equilibrium" regime.

In this case, ω ≪ ω_0 and σω ≪ ω_0², and Eq. 8.2 can be simplified to

\[
A_r = \frac{\omega_1^2 A_w}{\omega_0^2} \tag{8.3}
\]

Alternatively, if the timescale of the vertical motion of the internal wave is close to the resultant natural frequency of oscillation of the float, ω_0, the motion of the float is said to be in the "near-resonance" regime[114]. In the near-equilibrium case, Eq. 8.3 can be rearranged to define the response ratio,
\[
r = \frac{A_r}{A_w} = \frac{\omega_1^2}{\omega_0^2} \tag{8.4}
\]

In [114], both ω_0 and ω_1 are expressed in terms of the physical properties of the water and the float. Furthermore, the impact of the salinity gradient is assumed to be negligible. With this, we can find the response ratio, which is approximated as

\[
r \approx \frac{1 - s}{1 - s + (N/\Omega)^2} \tag{8.5}
\]

where s is the ratio of the compressibility of the float, γ_f, to that of water, γ_w (i.e., s = γ_f/γ_w), N is the buoyancy frequency, ranging between 0.0001 and 0.01[112], and Ω is the characteristic frequency defined as Ω² = ρ_0 g² γ_w/(1 − α_f/α_w), with α_f the thermal expansivity of the float, α_w that of water, ρ_0 the density of the local seawater, and g the gravitational acceleration.

Given the range of physical properties of interest, r can range from 0 to 1.¹ That means that regardless of the relative compressibility or the relative thermal expansivity between the float and the water, 0 ≤ A_r ≤ A_w. Hence, the maximum vertical speed of the float relative to the water is u_max = 2 A_r ω.

To find an approximate upper bound for the vertical speed of the float, we use the relationship defined in the Garrett-Munk (GM) internal wave model[116][117] in the spectral continuum regime. Namely, we use the maximum vertical internal-wave displacement A_w ≈ 100 m at a frequency ω_in = 0.000028 Hz. Based on the model, the vertical displacement is inversely proportional to the frequency of the wave[117], so waves of higher frequencies would yield slower relative vertical speeds. If we let A_r = A_w = 100 m, then u_max = 2(100 m)(0.000028 s⁻¹) = 5.6 mm/s.

With this, we compute the Reynolds number, Re = l u_max/ν, where l is the characteristic length of the float in meters and ν = 1.5 × 10⁻⁶ m²/s is the kinematic viscosity of seawater[118][119]. The characteristic length is the ratio of the volume of the float to its surface area. Goodman et al. [114] present a further derivation to compute the average relative displacement amplitude, b_r. Finally, the Reynolds number is found as Re = |dZ_r/dt| L ν⁻¹ = L b_r ω_0 ν⁻¹, where L is the height of the cylindrical float and ν is the kinematic viscosity of seawater. Since the body of the Minion float is much larger than the housings of our cameras, we consider the impact of the motion of the float itself on the surrounding fluid. The float's radius is 7.5 cm and its height is 60 cm. Plugging in all of the values, we find Re = 123. This is a worst-case result, and we expect the actual impact of the float on its surroundings to be smaller. Hence, since Re ≪ 1000, we can conclude that the float's motion will cause laminar flow around the body[120]. Finally, we would like to find the boundary layer. We approximate this calculation by modelling the surface of the cylinder as a flat surface and applying Stokes' first

¹ Ideally, we would like r = 0 so that the float moves with the fluid and the particles of interest. However, realistically, with the possible range of physical properties of the water and the float, r can easily be as big as 1. This adds a complication to extracting the settling speed of the particles.

problem. From [120], the thickness of the diffusive layer due to an infinite plate at time t is defined as the distance at which the initial velocity falls by 99%. The solution is
\[
\delta_{99} \sim 3.64\sqrt{\nu t} \tag{8.6}
\]

For an infinite plate, more fluid is set in motion as time progresses. However, in our situation, the float has a finite length, L. The time for the float to move by its own length is t_f = L/u_max, which yields t_f = 108 s. Plugging these values into Eq. 8.6, the boundary layer thickness becomes δ_99 = 0.046 m. Given this boundary layer, in which most of the fluid motion resides in an even thinner layer, it was appropriate to apply the infinite-plate model to the cylinder motion.
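The worst-case near-equilibrium numbers above can be reproduced with a few lines of MATLAB. This is a minimal sketch using the quantities quoted in the text (float length 0.6 m, radius 0.075 m, viscosity 1.5e-6 m²/s); small differences from the quoted values of Re are rounding only.

% Worst-case near-equilibrium estimate of the disturbed flow around the float.
A_r   = 100;        % m, worst-case relative displacement amplitude (A_r = A_w)
omega = 0.000028;   % 1/s, GM internal-wave frequency used above
nu    = 1.5e-6;     % m^2/s, kinematic viscosity of seawater
L     = 0.6;        % m, height of the Minion float
R     = 0.075;      % m, float radius

u_max = 2 * A_r * omega;                          % ~5.6 mm/s
l     = (pi*R^2*L) / (2*pi*R*L + 2*pi*R^2);       % characteristic length = V/A
Re    = l * u_max / nu;                           % ~123, laminar regime

t_f      = L / u_max;                             % ~108 s to travel one body length
delta_99 = 3.64 * sqrt(nu * t_f);                 % ~0.046 m, Stokes first-problem layer
fprintf('u_max = %.4f m/s, Re = %.0f, delta_99 = %.3f m\n', u_max, Re, delta_99);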

Near Resonance

If the internal wave oscillates at ω = ω_0, the float can resonate with the motion. In this case, Eq. 8.2 simplifies to

\[
A_r = \frac{\omega_1^2 A_w}{\sigma\omega_0} = \frac{r\,\omega_0 A_w}{\sigma} \tag{8.7}
\]

where we further replaced ω_1² with rω_0² using Eq. 8.4. In conventional harmonic oscillation analysis, the quality factor Q would be defined as Q = ω_0/σ. Using the derivation from [114], the natural frequency of the float oscillation and the quality factor are as follows:
\[
\omega_0 = \sqrt{N^2\,(1 - \alpha_f/\alpha_w) + \Omega^2\,(1 - s)} \tag{8.8}
\]

\[
Q = \frac{0.21\, R^4 L^2 \omega_0^3}{\omega_i\, a^2 \nu^2 r^2} \tag{8.9}
\]

where ω_i = 0.04 cph = 1.11 × 10⁻⁵ Hz is the canonical internal inertial wave frequency from [121] and a = 100/√2 = 70.8 m is the RMS internal wave displacement. Assuming r = 1 for the worst-case scenario, plugging in all of the values leads to Q = 1.25.

Therefore, A_r = 125 m and thus the speed is u_max = 0.00694 m/s. Following the calculation from Section 8.2.1, the maximum Reynolds number is 308.6. We are still

within the laminar regime, and thus the boundary layer thickness is δ_99 = 0.0414 m.²

8.2.2 Heat

We first identify whether the fluid around the housing would be transferring heat via conduction or convection. Then, we simulate to verify that the target volume is much farther away from our housings than the thickness of the boundary layer. First we compute the change in temperature at a distance far from the system. Let us consider a sphere whose volume is equivalent to the volume of our cylindrical housing (approximately 696 mL). The equivalent spherical diameter is then 11 cm, and its radius r_1 = 5.5 cm. Furthermore, its shell constantly conducts heat, and the temperature inside the shell is T_1. The system runs at a maximum power of 3 W, and this is directly transferred as heat (i.e., Q = 3 W). We assume that this sphere is floating in seawater at 10 °C and 35 ppt salinity. From Fourier's law of heat conduction, the heat conducted from the sphere to a radial distance r_2, at temperature T_2, is:

\[
Q = \frac{4\pi k\, r_1 r_2}{r_2 - r_1}\,(T_2 - T_1) \tag{8.10}
\]

where k is the thermal conductivity of the seawater, which is 0.6 mW/K under the given conditions [118][119]. We are interested in the temperature difference (ΔT = T_2 − T_1) when r_2 is very far away (i.e., r_2 ≫ r_1). In that case, r_1 in the denominator can be ignored and r_2 cancels, so the equation simplifies to ΔT = Q/(4πk r_1). When we plug in the values, ΔT = 1.3 mK. With this information, we can compute the Rayleigh number to determine whether our system will cause conduction or convection. When the Rayleigh number is bigger

² One might expect the boundary layer to be thicker for faster floats. Faster float motion would stir up the surrounding fluid more, but it also passes by the region of interest much faster. The boundary layer is also defined by a 99% decay of the initial speed. If we computed the boundary layer so that the fluid speed fell below an absolute threshold, the result would be different.

than 10³, we use convection in our simulation. The Rayleigh number is computed as:

\[
R_a = \frac{g\,\Delta\rho\, L^3}{\mu\,\alpha} = \frac{g\,\beta\,\Delta T\, L^3}{\nu\,\alpha} \tag{8.11}
\]

where R_a is the Rayleigh number, g is the gravitational acceleration, Δρ is the change in density, L is the characteristic length, μ is the dynamic viscosity, ν is the kinematic viscosity, and α is the thermal diffusivity. We take the characteristic length to be the ratio of the cylinder's volume to its surface area, which is approximately L = 0.01 m. We again use [118][119] to find the relevant constants (ν = 1.36 × 10⁻⁶ m²/s, α = 1.43 × 10⁻⁷ m²/s,

β = 1.72 × 10⁻⁴ K⁻¹). Plugging in all of these values, the Rayleigh number, R_a = 11.3, is much less than 1000. Hence, we can conclude that the heat is transferred by conduction and that the resulting flow remains laminar. The heat can also be interpreted as introducing buoyancy to the fluid around the float. A computational fluid dynamics model showed that the resulting boundary layer is of sub-millimeter thickness. However, if the instantaneous power consumption increases by an order of magnitude, the buoyancy of the surrounding fluid can change significantly, affecting the motion of the float. Therefore, it is important that the power consumption of the system is kept low.
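The Rayleigh-number check above reduces to one line of arithmetic. The MATLAB sketch below uses the far-field temperature difference and fluid constants quoted in the text; the ~10³ threshold is the convection-onset criterion used above.

% Rayleigh-number check for the heated housing.
g     = 9.81;       % m/s^2
beta  = 1.72e-4;    % 1/K, thermal expansion coefficient
dT    = 1.3e-3;     % K, far-field temperature difference computed above
L     = 0.01;       % m, characteristic length (volume / surface area)
nu    = 1.36e-6;    % m^2/s, kinematic viscosity
alpha = 1.43e-7;    % m^2/s, thermal diffusivity

Ra = g * beta * dT * L^3 / (nu * alpha);   % ~11, well below the ~1e3 threshold
fprintf('Ra = %.1f -> heat is carried by conduction, not convection\n', Ra);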

8.3 Buoyancy

In order to compute the effective density of the hardware, we need both the volume and the mass of each component. It would have been ideal to measure the mass of the entire system, but unfortunately some mechanical components were not available, so as a first pass an estimated maximum mass is computed. There are two Delrin[122] housings (Table 8.1) for the cameras and two similar but shorter Delrin housings for the light sources (Table 8.2). The effective density of the system, excluding the backbone structure and the synthetic foam, is ρ_eff = (2 × 724 g + 2 × 432 g)/(2 × 630 cm³ + 2 × 330 cm³) = 1.2 g/cm³. This is much more than 1.022 g/cm³[118][119], which is the expected maximum density of the

seawater during deployment. Note again that this is an estimate; once the full system, including the mechanical backbone structure, is built, its weight should be measured. Then, an appropriate synthetic foam structure can be designed such that the system is neutrally buoyant at the target density.

Part                          Mass (g)
Sapphire Port                 46
Housing Hold-Down (long)      36
Housing Hold-Down (short)     24
Housing Body (tube)           354
End-cap puck                  64
Chassis and electronics       200
Total                         724
(Volume = 630 cm³)

Table 8.1: Mass of the electronics housing

Part                          Mass (g)
Optics                        3
Sapphire Port                 46
Housing                       231
End-cap puck                  64
Chassis and electronics       88
Total                         432
(Volume = 330 cm³)

Table 8.2: Mass of the LED housing
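The effective-density estimate above follows directly from Tables 8.1 and 8.2. The MATLAB lines below are a minimal sketch of that arithmetic (backbone structure and foam excluded, as in the text).

% Effective-density estimate from the housing mass and volume tables.
m_camera = 724;    % g, one camera/electronics housing (Table 8.1)
m_led    = 432;    % g, one LED housing (Table 8.2)
V_camera = 630;    % cm^3
V_led    = 330;    % cm^3

rho_eff = (2*m_camera + 2*m_led) / (2*V_camera + 2*V_led);   % ~1.20 g/cm^3
rho_sw  = 1.022;                                             % g/cm^3, max seawater density
fprintf('rho_eff = %.2f g/cm^3 (seawater: %.3f) -> negatively buoyant\n', rho_eff, rho_sw);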

Chapter 9

Particle Tracking Velocimetry (PTV)

The particle tracking velocimetry (PTV) algorithm tracks the trajectory of individual particles in a series of images. Typically, it is used to study the behavior of fluid flow, but in our case it is applied to directly track the POC. There are a few open-source projects implementing PTV systems, including OpenPTV[123] and another open-source package from Stanford[124]. While these have been used as references, our software primarily follows the multiple-object-tracking algorithm described in the MATLAB documentation[31]. A similar method has been applied in [30] at IGB-Berlin.

Figure 9-1: PTV algorithm overview

Our approach is to detect and track particles in the 2D image space of each camera[125][126]. Upon detecting particles in 2D, they are matched between the left and right camera frames to compute particle tracks in 3D object space. We apply a Kalman filter[32] to predict the trajectory of individual particles and a Munkres assignment algorithm[33] to assign detected particles to a track. A 3D estimate is computed based on epipolar geometry and correspondence matching. The overview of the PTV algorithm is summarized in Fig. 9-1.

9.1 Image pre-processing

As explained in Sec. 7.1.2, the images will already have been filtered with a threshold. However, some noise is still expected, so we first apply a low-pass filter using a Gaussian filter. Secondly, another threshold is computed from the image mean and standard deviation to eliminate further noise. Thirdly, we take the base-2 logarithm of the image intensity values. Since the DoF of the cameras is very shallow (4.65 mm), many of the particles present in the images are expected to be blurry and have low intensity. Therefore, it is important to maximize the dynamic range within the image. The reason we can take the logarithm of the images is that we expect a black background with only a few bright POC; hence, we do not lose any information by taking a base-2 logarithm. Finally, we binarize the image using a global threshold computed via Otsu's method[127] to prepare for blob detection.
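The chain above can be summarized in a few lines of MATLAB. This is a minimal sketch, assuming a grayscale frame normalized to [0, 1] and the Image Processing Toolbox; the file name, filter size, and threshold weight (mean + 3σ) are illustrative choices, not the values in the actual implementation.

% Pre-processing sketch: Gaussian low-pass, statistical threshold, log2
% stretch, and Otsu binarization.
I = im2double(imread('frame_left_0001.png'));   % hypothetical input frame

I = imgaussfilt(I, 1.0);                        % 1) Gaussian low-pass filter
thr = mean(I(:)) + 3*std(I(:));                 % 2) mean + k*std noise threshold
I(I < thr) = 0;
I = log2(1 + I);                                % 3) log2 stretch (1+I avoids log of 0)
I = I / max(I(:));                              % renormalize to [0, 1]
bw = imbinarize(I, graythresh(I));              % 4) Otsu global threshold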

9.2 Particle tracking

First, particles are detected in the pre-processed images using an adaptive blob-analysis tool. Each particle detected in an image is assigned to a track based on the Munkres algorithm[33]. Its position in the following frame is predicted by the Kalman filter[32]. When the next frame is obtained, the vicinity of the predicted position is searched to identify the assigned particle. The detection that is closest to the prediction (i.e., has the least Lagrangian acceleration) is considered to be the same particle, and the traces are formed. The Kalman filter is then updated with the newly detected particle locations.
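The loop below is a minimal sketch of this predict-assign-correct cycle, assuming the Computer Vision Toolbox (configureKalmanFilter, assignDetectionsToTracks). detectParticles stands in for the blob-analysis step and is hypothetical, as are the noise parameters and the gating cost; track deletion is omitted for brevity.

% Predict / assign / correct tracking sketch.
tracks = struct('kf', {}, 'centroid', {});       % active 2D tracks
costOfNonAssignment = 20;                        % px, gate for new/lost tracks

for f = 1:numFrames
    centroids = detectParticles(frames{f});      % N-by-2 [x y] blob centroids

    % Predict every track forward and build the assignment cost matrix.
    cost = zeros(numel(tracks), size(centroids, 1));
    for i = 1:numel(tracks)
        predict(tracks(i).kf);
        cost(i, :) = distance(tracks(i).kf, centroids);
    end

    % Munkres assignment of detections to tracks.
    [assignments, ~, unassignedDetections] = ...
        assignDetectionsToTracks(cost, costOfNonAssignment);

    % Correct assigned tracks with their new measurements.
    for k = 1:size(assignments, 1)
        i = assignments(k, 1);  j = assignments(k, 2);
        tracks(i).centroid = correct(tracks(i).kf, centroids(j, :));
    end

    % Start a constant-velocity Kalman filter for each unassigned detection.
    for j = reshape(unassignedDetections, 1, [])
        kf = configureKalmanFilter('ConstantVelocity', centroids(j, :), ...
                                   [200, 50], [100, 25], 10);
        tracks(end+1) = struct('kf', kf, 'centroid', centroids(j, :)); %#ok<AGROW>
    end
end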

9.3 3D Coordinate Estimation

Figure 9-2: An example of left and right images of particles in simulation. The epipolar lines that correspond to the circled left particles are drawn on the right image.

Once the cameras are calibrated, the calibration parameters can be used to find the 3D coordinates of objects that are visible in both the left and right cameras. To reduce the computational complexity of the correspondence problem, the left and right cameras are usually rectified based on these parameters. This transforms the images as if the cameras were viewing the scene in horizontally parallel planes (i.e., the epipolar lines are parallel to one another). For this to work, the rectified perspectives of the cameras need to overlap, which requires the baseline to be of a certain length and the angle of view to be wide. However, our camera system has a very narrow field of view, and with the given baseline of 160 mm, the rectified images are no longer overlapping. Hence, a more direct approach to correspondence is used. If an object in the left image is also visible in the right image, it will appear along the corresponding epipolar line. While an epipolar line is infinitely long, we can restrict our search by noting that the depths at which the particles lie are within ±30 mm of the working distance (see Fig. 9-2). We also need to allow for calibration errors and noise, so we search for corresponding objects within a rectangle forming a 10 px wide buffer centered on the epipolar line. The particle whose centroid is closest to the epipolar line is taken as the corresponding particle in the right image frame. We can then triangulate the 3D position of the particle based on the cameras' calibration parameters.
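A minimal sketch of this narrow-band epipolar search and triangulation is shown below, assuming the Computer Vision Toolbox and a calibrated stereoParameters object. The function name is illustrative, and the ±30 mm depth gate is omitted for brevity.

% Correspondence by distance to the epipolar line, then triangulation.
function XYZ = matchAndTriangulate(ptsL, ptsR, stereoParams)
    F = stereoParams.FundamentalMatrix;           % from the stereo calibration
    lines = epipolarLine(F, ptsL);                % lines in the right image, [a b c]
    XYZ = nan(size(ptsL, 1), 3);

    for i = 1:size(ptsL, 1)
        a = lines(i, 1);  b = lines(i, 2);  c = lines(i, 3);
        % Perpendicular distance of every right-image centroid to this line.
        d = abs(a*ptsR(:, 1) + b*ptsR(:, 2) + c) ./ hypot(a, b);
        [dmin, j] = min(d);
        if dmin <= 5                              % centroid inside the 10 px wide band
            % Triangulate into the left-camera frame (millimetres).
            XYZ(i, :) = triangulate(ptsL(i, :), ptsR(j, :), stereoParams);
        end
    end
end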

9.4 Simulation

In order to verify the PTV software, settling POC is simulated based on the size distribution and flux provided by [8]. An example is shown in Fig. 9-3. Particles were randomly generated within the visible volume, and their 3D positions were projected onto the left and right imaging planes using the calibration parameters. To provide some diversity in the sample, 3D particles were represented by cylinders and spheres at random orientations. It would have been optimal to use the 3D shapes of actual POC, but that development would have taken much longer. Since the imaging volume includes particles that are out of focus in the planes being evaluated, simulated particles were also placed at depths beyond the DoF.

Figure 9-3: Images of simulated particles projected onto the left and right image planes respectively, using the camera calibration parameters. The Bokeh effect based on the depth-of-field computation is applied.

From Section 4.2.1, the DoF in which the circle of confusion (CoC) is less than 2 px was only 4.65 mm. However, since particles outside that DoF are still visible, albeit blurry, the 3D particles were placed within a depth range of ΔZ = ±25 mm. When the 3D points were projected onto the image planes, the Bokeh effect was applied using a Gaussian filter. The simulated Bokeh effect on a circular disc was compared to actual blurry discs in an image. The best-fitting filter used a standard deviation 1.67 times the CoC.
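The following is a minimal sketch of how such a depth-dependent blur can be generated: the CoC diameter follows from thin-lens geometry and is applied as a Gaussian blur with the 1.67 factor found in Section 9.4.1. The focal length, working distance, and pixel pitch below are illustrative placeholders, not the final design values, and sim_particle.png is a hypothetical in-focus render.

% Simulated Bokeh: CoC from thin-lens geometry, applied as a Gaussian blur.
f  = 25;         % mm, focal length (illustrative)
N  = 8;          % f-number
s  = 167;        % mm, focus (working) distance (illustrative)
px = 0.0022;     % mm, pixel pitch (illustrative)
dz = 15;         % mm, particle offset from the focus plane

s2    = s + dz;                                     % actual particle distance
coc   = (f^2 / (N * (s - f))) * abs(s2 - s) / s2;   % CoC diameter, mm
cocPx = coc / px;                                   % CoC diameter, pixels

Isharp = im2double(imread('sim_particle.png'));
sigma  = max(1.67 * cocPx, 0.1);                    % Gaussian std dev, pixels
Iblur  = imgaussfilt(Isharp, sigma);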

9.4.1 Bokeh Effect

To ensure that the simulation reflects the actual measurements, the blurriness due to the Bokeh effect was observed empirically in the actual system and compared with the simulation results. As shown in Fig. 9-4a, the same setup as that used for the optics resolution measurement was employed. However, instead of a Siemens star pattern, the target was the circular grid used for the stereo calibration. Each point is a 250 µm diameter circle, and the circles are spaced 1 mm apart. The distance between the camera and the target was adjusted manually on the rail with sub-millimeter accuracy and fine-tuned on the manual linear stage.

(a) Experiment setup for measuring the Bokeh effect (b) Focused image of the target at the working distance

Figure 9-4: The experiment setup and an example image of the calibration target for analyzing the Bokeh effect.


(a) offset = 7.5mm (b) offset = 15mm (c) offset = 20mm (d) offset = 30mm

Figure 9-5: Radial intensity profile across a circular disc on a calibration target at varying distance offsets from the working distance (focal plane). The images on the top row show the actual images, and the bottom row shows the corresponding observed and simulated profiles.

(a) offset = 7.5 mm (b) offset = 20 mm

Figure 9-6: Close-up comparison of the intensity profiles of the target at 7.5 mm (a) and 20 mm (b) offsets and their corresponding images

In each of the images taken at varying distances, the lens distortion was undone based on a calibration. Then, the intensity around the center of each circle at increasing radii was measured.¹ As a simulation, the sharp image taken at the working distance was blurred with a Gaussian filter with the computed CoC. The results from actual images and simulation were compared and are shown in Fig. 9-5.² In this figure, we can see how the intensity and blur profile change as the targets are moved farther away from the working distance. A close-up view for comparison is shown in Fig.

¹ We are measuring the Bokeh effect on a flat 2D plane. Spheres would show different behavior. However, we are evaluating how well the current simulation approach reflects the physical measurements. In the actual PTV simulation, spheres under the given lighting are blurred with the same method.
² The close proximity of two dots relative to the blur size led to an overlap of blurred discs. As a result, there is a bump at a radius of 25 px in the intensity profile of the 30 mm distance offset in Fig. 9-5.

9-6. This study shows that a Gaussian filter whose standard deviation is 1.67 times the diameter of the expected CoC is a reasonable model. However, the physical measurements show a reduction in intensity at the center compared to the simulation. Depending on the experiment setup, the intensity varies, and thus the simulation does not scale with the particles appropriately. It is adequate for examining the tracking capability of the PTV implementation, but calibrating the size of the particles based on intensity would be difficult using the current simulation.

(a) 30mm (b) 40mm

Figure 9-7: Image of the target located 30mm and 40mm forward from the focal plane. The blurriness of the target becomes very difficult to characterize starting at 30mm.

The above study reveals an important physical limitation of the optical system. Beyond ±30 mm, a point is blurred out and its intensity is so close to the background that it cannot be distinguished. The result is shown in Fig. 9-7. Interestingly, the intensity profile at 30 mm is not too promising; however, if one looks at the actual image, it is clear that individual discs are still identifiable. At 40 mm, on the other hand, it is very hard to distinguish the discs. Therefore, while the simulation may show particles beyond ±30 mm of the working distance, it is unlikely that an actual field test will show such an outcome.

9.5 Evaluation

PTV was applied to the simulated timelapse images of the settling particles. We evaluate particle settling in different directions. With respect to the calibrated 3D coordinate frame in Sec. 5.6, the particles settle vertically (along the y-axis), vertically and away from the camera (y- and z-axes), or along all three axes (x-, y-, and z-axes). White Gaussian noise of different standard deviations was also applied to see the impact on the detection and tracking errors. In particular, the noise standard deviations were 0.002, 0.005, 0.02, and 0.05. Examples of the same scene with different noise levels are shown in Fig. 9-8. We took 1800 frames (i.e., a 30-minute-long timelapse) based on the concentration data provided in [8].

Figure 9-8: Example images with different noise levels

9.5.1 Detection

Depending on the noise level, distance, and size of the particles, the volume in which the particles can be detected varies. The farther away from the working distance, the less visible the particles are due to the Bokeh effect. Additionally, the noisier the image, the less likely the particles are to be detected. The flux calculation requires knowledge of the concentration and thus of the volume. Since the observation volume is very small (approximately 100 mL), we need to know exactly how much of the volume we are actually seeing to properly calculate the concentration.

Figure 9-9: Imaging volume in which the particles of different ESD can be detected at varying noise standard deviation. The maximum overlapping volume with the simulated stereo parameter is 99mL. Particles bigger than ESD=320µm could all be detected within the maximum depth of field and noise level.

We simulate the depths at which particles of different ESD are detectable at varying noise levels to finally compute the imaging volume. A 4×4 grid divides the imaging frame. In each region, a particle was placed at the center and projected into 3D space coordinates based on the simulated camera parameters. The Bokeh effect was applied accordingly, with the depth increased or decreased in 5 mm steps. Then, white Gaussian noise of varying standard deviation was applied on top of the blurry images. We note the combination of depth and noise level at which the particles could no longer be detected correctly. These depths were used as an acceptable DoF to compute the overlapping volume between the two cameras. Fig. 9-9 plots the relative volume in which the particles can be detected at a given ESD and noise level. The maximum overlapping volume with the given stereo parameters is 99 mL. ESDs bigger than 320 µm could be detected beyond the maximum depth of field (30 mm) at noise standard deviations as high as 0.05. It would be ideal to calibrate this in the field. To achieve this, we propose potting particles of varying sizes and moving them through different depths to measure how visible they are. It is much harder to mimic noise, so measurements in the ocean could provide a more realistic noise level to simulate against.
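A condensed sketch of this sweep is shown below, for a single ESD value esd. renderParticle and cocAtDepth are hypothetical helpers standing in for the simulation's particle renderer and CoC model, and the area threshold used as the detection criterion is illustrative.

% Maximum detectable depth offset per noise level (sketch).
noiseLevels = [0.002 0.005 0.02 0.05];
depths      = 0:5:30;                       % mm offsets from the working distance
maxDepth    = zeros(size(noiseLevels));

for n = 1:numel(noiseLevels)
    for d = depths
        I     = renderParticle(esd);                        % in-focus synthetic particle
        sigma = max(1.67 * cocAtDepth(d), 0.1);             % blur grows with depth offset
        I     = imgaussfilt(I, sigma);
        I     = I + noiseLevels(n) * randn(size(I));        % white Gaussian noise
        I     = mat2gray(I);
        bw    = imbinarize(I, graythresh(I));
        stats = regionprops(bw, 'Area');
        if ~isempty(stats) && max([stats.Area]) > 2          % particle still detected
            maxDepth(n) = d;                                 % largest detectable offset so far
        else
            break                                            % deeper offsets only get worse
        end
    end
end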

9.5.2 Tracking Error

The PTV software is evaluated by comparing the expected detections in 2D and 3D to what is detected and tracked by the PTV system. We evaluate two scenarios. In motion A, the particles settle vertically, only along the y-axis (Fig. 9-10a). In motion B, the particles settle vertically along the y-axis while also moving farther away along the z-axis (Fig. 9-10b). Table 9.1 and Table 9.2 summarize the performance for the respective motions. Specifically, we measure the number of particles detected in either the left or right image and the number whose 3D position could be estimated, the average and standard deviation of the 2D track offsets (x- and y-axes, in pixels), and the average and standard deviation of the 3D track offsets (x-, y-, and z-axes, in µm).

(a) Traces of particles settling along the y-axis

(b) Traces of particles settling along y-axis and z-axis

Figure 9-10: Example traces of the particles settling in different directions imaged on the right camera.

Noise level, σ_noise                           0.002           0.005           0.02            0.05
No. particles detected                         30              28              22              17
No. particles tracked in 3D                    16              15              11              6
No. false correspondences                      0               0               0               0
Avg. 2D tracking error [x, y] (px)             [0.17, 0.11]    [0.17, 0.12]    [0.46, 0.26]    [0.25, 0.20]
Std. dev. 2D tracking error [x, y] (px)        [0.46, 0.34]    [0.14, 0.26]    [4.19, 1.67]    [0.43, 0.33]
Avg. 3D tracking error [x, y, z] (µm)          [23, 20, 7]     [23, 20, 7]     [23, 20, 8]     [23, 20, 15]
Std. dev. 3D tracking error [x, y, z] (µm)     [3, 4, 16]      [3, 4, 16]      [6, 3, 9]       [6, 5, 8]

Table 9.1: The analysis of PTV performance for particles settling vertically (motion A)

Noise level, σ_noise                           0.002           0.005           0.02            0.05
No. particles detected                         57              57              49              36
No. particles tracked in 3D                    34              33              33              22
No. false correspondences                      6               6               4               0
Avg. 2D tracking error [x, y] (px)             [0.19, 0.14]    [0.21, 0.17]    [0.36, 0.33]    [0.44, 0.44]
Std. dev. 2D tracking error [x, y] (px)        [0.42, 0.89]    [0.57, 0.93]    [1.54, 1.80]    [1.49, 2.29]
Avg. 3D tracking error [x, y, z] (µm)          [23, 20, 8]     [23, 20, 8]     [23, 20, 8]     [23, 20, 9]
Std. dev. 3D tracking error [x, y, z] (µm)     [3, 3, 98]      [3, 3, 99]      [4, 3, 8]       [5, 3, 9]

Table 9.2: The analysis of PTV performance for particles settling vertically and farther away from the cameras (motion B)

In motion A, there were in total 33 particles present in the images, out of which 26 could be tracked in 3D. Our software detected 30 of them in either the left or right image, but only 16 could be tracked in 3D. In motion B, there were 59 particles, out of which 34 could be tracked in 3D. The number of particles tracked in 3D is lower than expected because particles were not consistently present in both the left and right images during the process. In motion B, however, more 3D tracks were missed due to incorrect 3D correspondence matching. Based on the results in Sec. 9.5.1, when the noise standard deviation is 0.002, all of the particles should have been detectable. However, in either the left or right images, we missed 3 and 2 particles in motion A and motion B, respectively. These were particles whose ESD was smaller than 75 µm and that lay farther than 16 mm from the working distance. There are two possible explanations. One is that, with a combination of blurry and focused particles of varying sizes, the detection algorithm's global threshold filters out the particles with weak signals. Additionally, upon detection, the use of the Kalman filter could be filtering out these particles as well. There is thus either a need for improvement in the PTV software or for further calibration of the effective imaging volume per size and noise level beyond the detection study.

9.6 Future Work

A PTV algorithm based on the Kalman filter and the Munkres assignment algorithm has been implemented. It is evaluated against a simple simulation in which we assume that the float moves perfectly with the motion of the fluid and the particles. However, further work remains to ensure that PTV can track the particles accurately under more realistic assumptions.

We have seen in Section 8.2.1 that it is very unlikely that the float will move perfectly with the fluid. It is expected to oscillate relative to the fluid. Furthermore, it could rotate and accelerate unexpectedly. The system therefore incorporates an inertial measurement unit (IMU) as well as pressure and temperature sensors. The next iteration of PTV should incorporate data streams from these sensors to account for relative motions of the float within the fluid.

In addition, one of the main goals of the Minions stereo imaging system is to estimate the flux of POC. This requires knowledge of their concentration and settling velocity for different groups of POC sizes. Although the current implementation of PTV is able to detect and track most of the particles, it is not sufficient to analyze the quantities of interest. As previously discussed, the depth of field is very narrow, and thus many of the particles appear blurry. This makes the size estimation of each particle challenging. In addition, the volume in which particles of different sizes are detectable varies depending on the noise, the particle size, and the filters used. Therefore, one needs to calibrate further with actual POC of known sizes.

One could calibrate the radial intensity profile of blurred POC based on the distance and size of the particles. Once the particles have been detected using the blob analyzer, the average intensity across the radius can be measured. It can then be matched to a dictionary of known radial profiles at different depths and particle sizes. For example, the yellow lines in Fig. 9-5 represent the radial intensity of a disc of radius 62.5 µm at different depths. Given the 3D positions of these discs, we could estimate their actual sizes.

Another improvement that could be made is to infer the size and 3D motion of particles in the non-overlapping volume of water. Each camera images approximately 125 mL of water. Considering the overlapping volume of 100 mL, a total of 150 mL of water is currently being observed, yet 3D tracking is only performed in the 100 mL of overlapping volume. We are therefore discarding one third of the data with regard to 3D tracking. Using the calibration method mentioned above, and assuming that most particles settle in similar directions, one could estimate the settling velocity and sizes of the particles in the non-overlapping images.

If a size estimation algorithm is devised, one could then compute the concentration and the settling velocity per size class. Specifically, the concentration computation will be done in the following way. The number of particles detected in each size class bin for each frame is summed over all of the image frames. We also multiply the imaging volume of each of the particle sizes by the number of frames.

Then the total number of particles can be divided by the total imaging volume to compute the average concentration per particle size class. There are also more sophisticated approaches to the PTV implementation that could improve the tracking performance. Right now, the particles are only being tracked within the image plane. The justification is that the expected abundance of co-occurring particles is low, such that a simple implementation is sufficient. However, as seen in the previous results, there are still errors in our analysis. Therefore, if improvements are required, one could take the approach of OpenPTV, which is to compare tracking results in 2D and 3D simultaneously and to correct past mistakes or ambiguities based on recent observations. The approach of [123] is especially effective at separating particles on crossing paths and is more robust against arbitrary motion of the particles.
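As a minimal sketch of the average-concentration bookkeeping described two paragraphs above (the variable names are illustrative, not those used in the repository): countsPerFrame is assumed to be an nFrames-by-nClasses matrix of per-frame detections per ESD class, and volumePerClass the detectable volume (mL) per class taken from Fig. 9-9.

% Average concentration per size class over the whole timelapse.
nFrames       = size(countsPerFrame, 1);
totalCounts   = sum(countsPerFrame, 1);                     % 1-by-nClasses
concentration = totalCounts ./ (volumePerClass * nFrames);  % particles per mL, per class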

9.6.1 Further Evaluation

After the simulation evaluation, we investigated how to track 150-180 µm diameter red glass microbeads[128] potted in an epoxy. As a prototype, we designed and laser-cut an acrylic mold to pot an epoxy (Opti-tec 4200[129]) whose optical refractive index closely matches that of water. Two female standoffs were potted together at the top so that the target could be mechanically screwed to an xyz microstage (3 translational degrees of freedom)[130]. The prototype is shown in Fig. 9-11b along with the setup to manipulate the target with the xyz microstage on a mechanical rig (Fig. 9-11a). The first prototype was too narrow (1 cm deep) to fully examine the blurry particles, and most of the microbeads sank to the bottom. We also discovered that stirring the microbeads into the epoxy in the presence of a release agent introduced air bubbles, despite the mixture being degassed in a vacuum chamber[131] and potted inside a pressure chamber[132] at 50 psi. In the next prototype, a 6 cm deep acrylic mold would be used. Instead of stirring in the microbeads, a small portion of microbeads could be sprinkled on top of a layer of epoxy and covered by another layer.

(a) An example setup for testing against potted microbeads (b) Microbeads potted with an epoxy

Figure 9-11: Apparatus for testing against actual microbeads potted inside an epoxy. The xyz-translational microstage can be used to control the motion of the particles.

Figure 9-12: (a) shows the DeepPIV system and the vortex-ring generator atop an ROV, and (b) shows an example of a ring generated by the vortex-ring generator. Image from [6]

Finally, the original plan was to test our system against an existing particle image velocimetry (PIV) system. The Laser Lab at the Monterey Bay Aquarium Research Institute (MBARI) has built and has been using a PIV system called DeepPIV[133], shown in Fig. 9-12a, that quantifies the motion of a sheet of fluid. It does so by shining a sheet of laser light onto the fluid and analyzing the movement of reflective particles within that sheet. They also have a well-understood vortex-ring generator, shown in Fig. 9-12b. Both DeepPIV and the vortex-ring generator can be mounted onto an ROV. The plan prior to the Covid-19 pandemic was to bring our system to MBARI, an oceanographic research center located in California, and compare its performance against that of DeepPIV. There are two major challenges that need to be addressed prior to conducting this experiment. To image the same motion of the fluid, DeepPIV and the stereo camera need to alternate taking images between frames, because their lighting is different; this may require communication between the two devices. Additionally, the position of the stereo camera relative to DeepPIV would need to be calibrated.

Chapter 10

Conclusion

10.1 Summary

A stereographic imaging system for tracking particulate organic carbon (POC) in the mesopelagic zone of the ocean has been designed and evaluated through simulation. The goal is to deploy several of these imaging systems at different depths of the ocean to measure the size distribution and settling rate of POC in situ. We explored a variety of low-cost, off-the-shelf optics that best suit our design goals. The final configuration of the optical setup is displayed in Fig. 10-1. Two cameras and two light sources are housed inside four separate housings and are wirelessly synchronized, which reduces the chance of water leakage. A summary of the final design performance of the system is provided in Table 10.1.

(a) Top View

(b) Side View

Figure 10-1: Final design configuration of the optical setup of the stereo imaging system.

Attribute                                              Value     Unit
Resolution                                             35        µm/px
Overlapping imaging volume                             99        mL
Depth error                                            39.5      µm
Max. correspondence disparity                          904       px
Max. deployment duration                               18        hrs
Max. operational depth                                 2000      m
Bulk price per full system, excluding some parts¹      564.63    $

Table 10.1: Summary of the system performance

10.2 Discussion and Future Work

10.2.1 Hardware

There were three key factors that made the hardware design process challenging. The initial goal of resolving 50 µm particles pushed us toward a very short working distance and therefore a limited DoF. This creates a need to estimate particle size from blurry images of particles that lie outside the DoF. Furthermore, the cost constraint made us focus on M12 lenses, and finding a lens with a suitable resolution was difficult; hence, we could not resolve at 20 µm/px. Similarly, the camera that we use drew too much power to fulfill our goal of 24-hour-long operation. Alternative cameras and embedded boards that allow for lower-level development may be useful to bring the power consumption down significantly. Lastly, the float-fluid interaction required some minimum distance; this, combined with the tens-of-micrometers resolution scale, made the system performance susceptible to small mechanical changes (i.e., pan angle or baseline).

One of the limiting factors of the electronics system has been power and computational speed. Batteries are a valuable and expensive resource; they add significant physical weight and cost to the system.

¹ The parts that are missing are listed in Appendix D.

When the cameras were surveyed, the DMM 72BUC02-ML camera was the optimal solution in terms of resolution, communication, and price. However, we realized later, during the evaluation, that the camera and image processing consumed 1.15 W of the total power consumption of 1.95 W. This reduced our running time to 18 hrs. Furthermore, a powerful (4-core) embedded board had to be used to handle the 1 FPS data rate and the image processing, which increased the cost as well as the power consumption. However, it is worth noting that the 24 hr running duration is not a strict requirement, and therefore one more battery could be added², its weight compensated by extra synthetic foam.

That being said, a greater variety of cameras could be tested. As long as the scientifically important specifications (i.e., resolution and frame rate) are met, power efficiency is the pressing concern. Ideally, one would collaborate with a company like Arducam to customize a board-level camera and produce low-cost monochrome cameras whose pixel pitch and dimensions meet our needs. Leopard Imaging[47] offers multiple MIPI-based color cameras, and an inquiry about monochrome sensor customization would also be worth a try. The more expensive board-level cameras from XIMEA[45] or Flir[134] are unfortunately only offered with USB3 and are rated for similar or higher power consumption.

If development time is not a limiting factor, a deeper dive into other sets of MCUs and cameras is recommended. The current camera and embedded boards were chosen because of their USB 2.0 communication and the camera library's requirement of a Linux-based operating system; this saved development time significantly. However, the system could be much more lightweight with an imaging signal processing unit (e.g., STMIPID02) and an MCU (e.g., STM32F427 or Teensy 4.1) communicating with custom cameras whose data is transmitted over MIPI CSI-2. WiFi synchronization could be handled by a wireless chip such as the ESP8266. Image data transfer by USB is also viable, as long as not too much power is consumed. More engineering effort would be required, but the power optimization could be superior.

² This requires a re-design of the existing battery charging mechanism and protection circuit.

10.2.2 Software

Unfortunately, the recent pandemic restricted us to evaluating our system mostly in simulation, and thus sufficient empirical data could not be drawn from the field. It was also difficult to prepare physical prototypes for deployments. The design and implementation of the mechanical backbone structure to hold everything together remains incomplete. Multiple iterations of the prototype would have been very helpful for better understanding the practical limits of the system.

There is also a need for improvements in analyzing the data in order to accurately estimate the particle flux. This requires information on both the concentration and the settling rate of POC. The narrow DoF has been limiting the computation of the concentration: the volume in which particles of different sizes are detectable varies as a function of their size as well as the noise level. Particle sizes, moreover, require comprehensive calibration to determine the relationship among blur size, distance, and size. As for the settling rate, the current PTV is able to track particles with an average error of at most 0.5 px in 2D and 25 µm in 3D. However, this does not take into account the relative motion of the float with respect to the fluid. Based on the float-fluid interaction analysis, the motion of the float often will not follow the motion of the fluid under an internal wave. Therefore, we are capturing the impact of the relative motion of the float in the fluid together with the settling of the particles. To separate these two components, we would want either to attach a velocity sensor or to find a statistical method of analyzing the motion of the settling particles. The former option poses the problem that finding or designing a sensor sensitive enough to capture the slow internal wave would be challenging.

10.2.3 Beyond Stereo Imaging System

Finally, the flux beyond our detectable particle size spectrum should be further investigated. Specifically, the density[8] and the shape of the particles[2] vary significantly, as seen in Fig. 2-2. Furthermore, the chemical composition of these particles is very complex, even more so due to the introduction of anthropogenic particulates, such as micro-plastics and black carbon from combustion, into the ocean[34]. Therefore, relying on ESD alone would result in uncertainty when calculating the carbon flux. Such a study should be done in conjunction with measurements of the aforementioned properties. The small sediment trap on the Minion float would offer some insight into the particle composition. One might also be able to estimate some physical properties of the POC based on albedo, given well-understood lighting, or via spectroscopy. An instrument that might be particularly useful for understanding the shape of the particles is a holographic imaging system[17][135]. We are hopeful that the design and simulation tools developed in this thesis prove useful and instructive for the next steps in readying the system for future deployment.

Appendix A

Circuit Schematic and Layout

The purpose of the mainboard is to facilitate the communication between the main MCU and the peripheral sensors as well as to supply power. There are multiple connectors that connect to peripherals (Fig. A-1). The LED driver and protection circuit are shown in Fig. A-2. The overall layout of the PCB is displayed in Fig. A-3. Finally, the schematic and layout of the LED breakout board for testing different optics are shown in Fig. A-4.


Figure A-1: Mainboard schematic: peripherals and power (1/2)

Figure A-2: Mainboard schematic: LED driver (2/2)

Figure A-3: Top (left) and bottom (right) layout of the mainboard evaluation circuit

Figure A-4: Schematic (left) and layout (right) of the LED evaluation PCB for CREE XLamp XP-E2

Appendix B

Software

All of the code developed and tested is available at the following URL: https://github.com/futureoceanlab/Minions. The repository also contains the circuit schematics, and the mechanical designs will be added to it. The code is separated into firmware, PTV, simulation, and optical analysis. There are two firmware programs (C++): the LED server and the camera client. The PTV code (MATLAB) contains both the simulation and the evaluation of settling particles. The evaluations of the stereo design and the light design are contained in the optical analysis (MATLAB) directory, which also provides the resolution and calibration measurement and analysis code (C++ and MATLAB). More details can be found in each directory's README as well as in the header comments of individual code files.

Appendix C

Optics Performance Comparison

C.1 Camera Lens

Below is a table showing the maximum contrast of the imaging target at a resolution of 22.7 µm/px. To maximize the depth of field, only lenses with an F-number of 8 are listed here. There were also two lenses from Lensation as well as some more from the same vendors, but their largest F-number was 2.5. cnAICO informed us that their lens F-number could be tuned even higher, but the lens resolution was not the best.

Vendor             Lens            Contrast (%) at 22.7 µm/px    Price ($)
Edmund Optics      83-955          27.2                          95
cnAICO             ACH2580MAC      6.2                           70
Scorpion Vision    SVL-2508SMAC    8.6                           68

Table C.1: Contrast at required resolution for different lenses

Figure C-1: Comparison of lens performance among the three lenses considered. While the lenses approach a similar asymptote at higher resolution, the performance varies at lower resolution. In particular, it is important to note that the lens from Edmund Optics has higher contrast as the target resolution gets smaller; this is the determining factor. The comparison at lower resolution is highlighted in the zoomed-in graph on the right. The plots have been filtered to smooth out the noise for clarity.

C.2 Light Optics

We have tested in-house two models of LEDs in two different colors: the Cree XLamp XP-E2 photored/orange and the Lumileds Luxeon Rebel photored/orange. We tried four different optics whose specifications seemed to match our requirements: the OPC11COL from Dialight[80], the C11347_REGINA from Ledil[81], the CLP17CR from Roithner Technologik[83], and the 10412 lens from Carclo[82]. The images were captured using the camera and lens selected in Section 4.

Figure C-2: Radial Intensity using Cree XLamp XP-E2 LEDs and different optics

Figure C-3: Radial Intensity using Lumileds Luxeon Rebel LEDs and different optics

We focused on the following combinations of LEDs and optical elements. Cree XLamp XP-E2 LEDs were matched with the CLP17CR and the Carclo 10412, and Lumileds Luxeon Rebel LEDs were matched with the OPC11COL and the Carclo 10412. Fig. C-2 shows the intensity profiles of the XP-E2 LEDs and Fig. C-3 displays those of the Rebel LEDs. An example of the light beam using the XP-E2 photored LED and the Carclo 10412 lens is shown in Fig. C-4. On the other hand, the C11347_REGINA from Ledil could not be used because it created a blind spot in the middle, as shown in Fig. C-5. It is worth noting that the curves in the following graphs are average intensities taken over each radius away from the center of the light beam in order to filter out noise. Interestingly, on close inspection the intensity near the center is not as smooth as the tails of the graph; there seems to be some inherent distortion at the center of the light.

An example image of a good light distribution

Figure C-4: Example light beam using XLamp XP-E2 LED and Carclo 10412 lens

An example image of a bad light distribution

Figure C-5: Example light beam using XLamp XP-E2 LED and Ledil C11347 Regina reflector

Appendix D

Bill of Materials

Below is a table listing the parts that were used to build the hardware. Both the unit and bulk prices are listed for comparison. Since some mechanical parts as well as some electronics units are still in the design phase, their prices have not been included. Namely, the list is missing the costs of the acetal Delrin tube, housing, sapphire ports, penetrators, wireless charging system, an IMU, and a fabrication fee. Note that the camera model listed is different from the model used for testing. This model has a USB micro-A connector instead of a ribbon-cable connector. It is $20 less expensive than the latter model, and the current housing is able to accommodate either model without any issue.

Part                          Device                   Manufacturer                  Qty   Unit ($)  Bulk ($)  Total unit ($)  Total bulk ($)

Embedded Boards
NanoPi Duo2                   -                        FriendlyElec                  2     17.99     17        35.98           34
RPI0W                         -                        RaspberryPi                   1     10        10        10              10
2x20 Female Headers           2222                     Adafruit                      2     1.5       1.5       3.0             3.0
1x16 Female Header            PPTC161LFBN-RC           Sullins Connector Solutions   2     0.84      0.51      1.68            1.02
PCB                           -                        PCBWay                        3     6.7       0.36      20.1            10.8
Total: Embedded Boards                                                                                         70.76           58.82

Peripherals
Camera                        DMM 72BUC02-ML           The Imaging Source            2     119       83.3      238             166.6
M12 Lens Holder               TLH 102s                 The Imaging Source            2     16.8      11.76     33.6            23.52
Lens                          83-955                   Edmund Optics                 2     95        55        190             110
Pressure Sensor               7LD                      Keller America                1     127       104.14    127             104.14
Coin Battery Holder           3000                     Keystone Electronics          1     0.44      0.24      0.44            0.24
Coin Battery                  CR1220                   Panasonic-BSG                 1     0.85      0.49      0.85            0.49
RTC                           DS3231S#T&R              Maxim Integrated              1     5.52      4.55      5.52            4.55
4pin PicoBlade Connector      0530480410               Molex                         4     0.39      0.184     1.56            0.74
6pin PicoBlade Connector      0530480610               Molex                         2     0.42      0.21      0.84            0.42
Assembled wires (4pin)        0151340402               Molex                         2     2.63      1.67      5.26            3.34
Assembled wires (6pin)        0151340600               Molex                         2     3.26      2.07      6.52            4.14
Total: Peripherals                                                                                             609.59          418.18

Power
Buck Converter                LMZ14202HTZX/NOPB        Texas Instruments             3     8.59      8.26      25.77           24.78
Battery                       MJ1                      LG                            9     5.35      3.75      48.15           33.75
Battery Protection Board      PCM-SM3                  AA Portable Power Corp        3     6.55      6.55      19.65           19.65
C_FF                          C1608X7R1H223K080AE      TDK Corporation               3     0.16      0.03      0.48            0.09
C_IN_BAT                      35ZLH100MEFC6.3X11       Rubycon                       3     0.26      0.06      0.78            0.18
C_OUT_5                       EMK316BBJ476ML-T         Taiyo Yuden                   3     0.66      0.22      1.98            0.66
C_SS                          C0603C472J5RACTU         KEMET                         3     0.09      0.01      0.27            0.03
2pin 3mm Conn. Header         0436500200               Molex                         3     0.82      0.45      2.46            1.35
20-24 AWG Crimp               0430300001               Molex                         6     0.1       0.05      0.6             0.30
2pin 3mm Conn. Receptacle     0436450200               Molex                         3     0.28      0.13      0.84            0.39
Total: Power                                                                                                   100.98          81.18

Light
Photored LED                  XPEBPR-L1-0000-00D01     Cree Inc.                     2     1.6       1.27      3.2             2.54
LED Lens                      10412                    Carclo Technical Plastics     2     1.19      0.48      2.38            0.96
Load Switch                   AP22653W6-7              Diodes Incorporated           1     0.36      0.13      0.36            0.13
BJT                           FJV1845FMTF              ON Semiconductor              1     0.18      0.04      0.18            0.04
MOSFET                        MGSF1N02LT1G             ON Semiconductor              1     0.38      0.11      0.38            0.11
2pin 1mm Conn. Header         5015680207               Molex                         2     0.44      0.27      0.88            0.54
Assembled Cable               0151330202               Molex                         1     2.29      1.46      2.29            1.46
C_LED                         CL10F104ZO8NNNC          Samsung Electro-Mechanics     2     0.09      0.01      0.18            0.02
LED PCB                       -                        PCBWay                        2     2.3       0.27      4.6             0.54
Total: Lights                                                                                                  15.33           6.45

Total                                                                                                          796.66          564.63

Table D.1: Bill of Materials

Bibliography

[1] The Science Education Resource Center at Carleton College, “ and the carbon cycle.” https://serc.carleton.edu/eslabs/carbon/6a.html, 2017.

[2] A. Bochdansky, M. Clouse, and G. Herndl, "Dragon kings of the deep sea: Marine particles deviate markedly from the common number-size spectrum," Scientific Reports, vol. 6, p. 22633, 03 2016.

[3] V. LaCapra, "Chasing ocean 'snowflakes'." https://www.whoi.edu/oceanus/feature/chasing-ocean-snowflakes/, March 2019.

[4] CREE, "Cree XLamp XP-E2 LEDs." https://www.cree.com/led-components/media/documents/XLampXPE2.pdf, 2019. Rev. 11D.

[5] K. C. Bell and D. Novy, “Maka niu a low-cost oceanographic camera and sensor platform.” https://www.media.mit.edu/projects/maka-niu/overview/, 2020.

[6] M. Omand, "Minions: A low-cost float for distributed, Lagrangian observations of the biological carbon pump," 2018.

[7] K. Buesseler, C. Benitez-Nelson, S. Moran, A. Burd, M. Charette, J. Cochran, L. Coppola, N. Fisher, S. Fowler, W. Gardner, L. Guo, O. Gustafsson, C. Lamborg, P. Masque, J. Miquel, U. Passow, P. Santschi, N. Savoye, G. Stewart, and T. Trull, "An assessment of particulate organic carbon to thorium-234 ratios in the ocean and their impact on the application of 234Th as a POC flux proxy," Marine Chemistry, vol. 100, no. 3, pp. 213-233, 2006. Future Applications of 234Th in Aquatic Ecosystems (FATE).

[8] A. Mcdonnell, Marine particle dynamics: sinking velocities, size distributions, fluxes, and microbial degradation rates. PhD thesis, 08 2011.

[9] K. O. Buesseler, A. N. Antia, M. Chen, S. W. Fowler, W. D. Gardner, Ö. Gustafsson, K. Harada, A. F. Michaels, M. M. R. van der Loeff, M. Sarin, D. K. Steinberg, and T. W. Trull, “An assessment of the use of sediment traps for estimating upper ocean particle fluxes,” 2007.

[10] J. K. Bishop, “Transmissometer measurement of poc,” Deep Sea Research Part I: Oceanographic Research Papers, vol. 46, no. 2, pp. 353 – 369, 1999.

123 [11] I. Cetinić, M. J. Perry, N. T. Briggs, E. Kallin, E. A. D’Asaro, and C. M. Lee, “Particulate organic carbon and inherent optical properties during 2008 north atlantic bloom experiment,” Journal of Geophysical Research: Oceans, vol. 117, no. C6, 2012.

[12] A. McDonnell and K. Buesseler, “Variability in the average sinking velocity of marine particles,” and , vol. 55, no. 5, pp. 2085–2096, 2010.

[13] R. Bol, S. A. Henson, A. Rumyantseva, and N. Briggs, “High-frequency vari- ability of small-particle carbon export flux in the northeast atlantic,” Global Biogeochemical Cycles, vol. 32, no. 12, pp. 1803–1814.

[14] M. D. Ohman, R. E. Davis, J. T. Sherman, K. R. Grindley, B. M. Whitmore, C. F. Nickels, and J. S. Ellen, “Zooglider: An autonomous vehicle for optical and acoustic sensing of zooplankton,” Limnology and Oceanography: Methods, vol. 17, no. 1, pp. 69–86, 2019.

[15] C. Briseño-Avena, P. L. Roberts, P. J. Franks, and J. S. Jaffe, “ZOOPS-O2: A broadband echosounder with coordinated stereo optical imaging for observing plankton in situ,” Methods in Oceanography, vol. 12, pp. 36 – 54, 2015.

[16] Sequoia, “LISST-Holo2: Holographic Camera for Flocs and Plankton.” http://www.sequoiasci.com/product/lisst-holo/, 2020.

[17] H. Sun, P. Benzie, N. Burns, D. Hendry, M. Player, and J. Watson, “Underwater digital holography for studies of marine plankton,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 366, no. 1871, pp. 1789–1806, 2008.

[18] C. R. Benitez-Nelson, K. Buesseler, M. Dai, M. Aoyama, N. Casacuberta, S. Charmasson, A. Johnson, J. M. Godoy, V. Maderich, P. Masqué, W. Moore, P. J. Morris, M. Rutgers van der Loeff, and J. N. Smith, “Radioactivity in the marine environment: Uranium-thorium decay series,” Limnology and Oceanography e-Lectures, vol. 8, no. 1, pp. 59–113, 2018.

[19] WHOI, “Transmissometer.” https://www.whoi.edu/what-we-do/explore/instruments/instruments-sensors-samplers/transmissometer/.

[20] WHOI, “Sediment trap.” https://www.whoi.edu/what-we-do/explore/instruments/instruments-sensors-samplers/sediment-trap/.

[21] U.S. Environmental Protection Agency, “Radioactive decay.” https://www.epa.gov/radiation/radioactive-decay, 2019.

[22] D. Stramski, R. A. Reynolds, M. Kahru, and B. G. Mitchell, “Estimation of particulate organic carbon in the ocean from satellite remote sensing,” Science, vol. 285, no. 5425, pp. 239–242, 1999.

[23] W. Gardner, A. Mishonov, and M. Richardson, “Global POC concentrations from in-situ and satellite data,” Deep Sea Research Part II: Topical Studies in Oceanography, vol. 53, no. 5, pp. 718 – 740, 2006. The US JGOFS Synthesis and Modeling Project: Phase III.

[24] C. Le, J. C. Lehrter, C. Hu, H. MacIntyre, and M. W. Beck, “Satellite observation of particulate organic carbon dynamics on the Louisiana continental shelf,” Journal of Geophysical Research: Oceans, vol. 122, no. 1, pp. 555–569, 2017.

[25] Unique Group, “Sea Tech 20cm Transmissometer.” https://www.uniquegroup.com.

[26] Eca Group, “SeaScan.” https://www.ecagroup.com/en/solutions/seascan-mk2, 2019.

[27] L. Guidi, L. Stemmann, L. Legendre, M. Picheral, L. Prieur, and G. Gorsky, “Vertical distribution of aggregates (>110 µm) and mesoscale activity in the northeastern Atlantic: Effects on the deep vertical export of surface carbon,” Limnology and Oceanography, vol. 52, no. 1, pp. 7–18, 2007.

[28] K. Möller, M. John, A. Temming, J. Floeter, A. Sell, J.-P. Herrmann, and C. Möllmann, “Marine snow, zooplankton and thin layers: Indications of a trophic link from small-scale sampling with the video plankton recorder,” Marine Ecology Progress Series, vol. 468, pp. 57–69, 11 2012.

[29] L. Guidi, S. Chaffron, L. Bittner, D. Eveillard, A. Larhlimi, S. Roux, Y. Darzi, S. Audic, L. Berline, J. Brum, L. P. Coelho, J. C. Ignacio Espinoza, S. Malviya, S. Sunagawa, C. Dimier, S. Kandels-Lewis, M. Picheral, J. Poulain, S. Searson, and G. Gorsky, “Plankton networks driving carbon export in the oligotrophic ocean,” Nature, vol. 532, 02 2016.

[30] S. Simoncelli, G. Kirillin, A. P. Tolomeev, and H.-P. Grossart, “A low-cost underwater particle tracking velocimetry system for measuring in-situ particle flux and sedimentation rate in low-turbulence environments,” Limnology and Oceanography: Methods, vol. 17, no. 12, pp. 665–681, 2019.

[31] MATLAB, “Multiple Object Tracking Tutorial.” https://www.mathworks.com/help/driving/examples/multiple-object-tracking-tutorial.html, 2020.

[32] R. E. Kalman, “A new approach to linear filtering and prediction problems,” ASME Journal of Basic Engineering, 1960.

[33] J. R. Munkres, “Algorithms for the assignment and transportation problems,” 1957.

[34] J. J. Kharbush, H. G. Close, B. A. S. Van Mooy, C. Arnosti, R. H. Smittenberg, F. A. C. Le Moigne, G. Mollenhauer, B. Scholz-Böttcher, I. Obreht, B. P. Koch,

K. W. Becker, M. H. Iversen, and W. Mohr, “Particulate organic carbon deconstructed: Molecular and chemical composition of particulate organic carbon in the ocean,” Frontiers in Marine Science, vol. 7, p. 518, 2020.

[35] S. Wagner, F. Schubotz, K. Kaiser, C. Hallmann, H. Waska, P. E. Rossel, R. Hansman, M. Elvert, J. J. Middelburg, A. Engel, T. M. Blattmann, T. S. Catalá, S. T. Lennartz, G. V. Gomez-Saez, S. Pantoja-Gutiérrez, R. Bao, and V. Galy, “Soothsaying DOM: A current perspective on the future of oceanic dissolved organic carbon,” Frontiers in Marine Science, vol. 7, p. 341, 2020.

[36] P. J. Lam, S. C. Doney, and J. K. B. Bishop, “The dynamic ocean biological pump: Insights from a global compilation of particulate organic carbon, CaCO3, and opal concentration profiles from the mesopelagic,” Global Biogeochemical Cycles, vol. 25, no. 3, 2011.

[37] M. Omand, R. Govindarajan, J. He, and A. Mahadevan, “Sinking flux of particulate organic matter in the oceans: Sensitivity to particle characteristics,” Scientific Reports, vol. 10, 12 2020.

[38] H. J. Landau, “Sampling, data transmission, and the nyquist rate,” Proceedings of the IEEE, vol. 55, no. 10, pp. 1701–1706, 1967.

[39] TheImagingSource, “DMM 72BUC02-ML, USB 2.0 monochrome board camera.” https://www.theimagingsource.com, 2020.

[40] ON Semiconductor, “MT9P031: 5MP CMOS Digital Image Sensor.” https://www.onsemi.com/pub/Collateral/MT9P031-D.PDF, 01 2017. Rev. 10.

[41] Edmund Optics, “Blue series m12 imaging lens.” https://www.edmundoptics.com/p/25mm-fl-f8-blue-series-m12-mu-videotrade-imaging-lens/27056/.

[42] RaspberryPi, “Camera module.” https://www.raspberrypi.org/documentation/hardware/camera/, 2020.

[43] ArduCam, “Cameras for RaspberryPi.” https://www.arducam.com/product-category/cameras-for-raspberrypi/, 2020.

[44] “MIPI Camera Serial Interface 2 (MIPI CSI-2).” https://www.mipi.org/specifications/csi-2, 2020.

[45] XIMEA, “Board Level camera modules - whole range.” https://www.ximea.com/en/products/application-specific-oem-and-custom/board-level-cameras, 2020.

[46] Basler AG, “Basler Cameras for Embedded Vision Applications.” https://www.baslerweb.com/en/embedded-vision/embedded-vision-portfolio/embedded-vision-cameras/, 2020.

[47] LeopardImaging, “CSI-2 MIPI Module.” http://www.leopardimaging.com/product/, 2019.

[48] R. Austin and G. Halikas, “The index of refraction of seawater,” SIO Ref. No. 76-1, Scripps Inst. Oceanogr., La Jolla, p. 124, 01 1976.

[49] W. Chu, “Snell’s Law.” https://www.math.ubc.ca/~cass/courses/m309-01a/chu/Fundamentals/snell.htm, 2001.

[50] L. T. Bach, T. Boxhammer, A. Larsen, N. Hildebrandt, K. G. Schulz, and U. Riebesell, “Influence of plankton community structure on the sinking velocity of marine aggregates,” Global Biogeochemical Cycles, vol. 30, no. 8, pp. 1145–1165, 2016.

[51] E. Cavan, M. Trimmer, F. Shelley, and R. Sanders, “Remineralization of particulate organic carbon in an ocean oxygen minimum zone,” Nature Communications, vol. 8, 04 2017.

[52] J. Conrad, “Depth of field in depth,” 07 2018.

[53] C. Loebich, D. Wueller, B. Klingen, and A. Jaeger, “Digital camera resolution measurement using sinusoidal siemens stars,” SPIE, vol. 6502, 02 2007.

[54] T. Luhmann, S. Robson, S. Kyle, and J. Boehm, Close-Range Photogrammetry and 3D Imaging. De Gruyter STEM, De Gruyter, 2019.

[55] EdmundOptics, “Modulation transfer function (MTF) and MTF curves,” 2020.

[56] Thorlabs, Inc., “R1L1S2P-Positive Sector Star Test Target.” https://www.thorlabs.com/.

[57] BlueRobotics, “Watertight Enclosure for ROV/AUV (2” Series).” https://bluerobotics.com.

[58] AICO Electronics Limited, “M12/S Mount Lens.” https://www.aico-lens.com/products/megapixel-fixed-focal-lenses/, 2020.

[59] Scorpion Vision Limited, “M12 Lenses.” https://www.scorpionvision.co.uk/m12-lenses.

[60] Lensation, “S-Mount/M12 Lenses.” https://www.lensation.de/product-category/s-mount-lenses/, 2020.

[61] Applied Image Inc., “Substrate and Coating Info.” https://www.appliedimage.com/reference-info/substrate-and-coating-info/, 2019.

[62] Dot Vision, “Chrome on soda lime glass (float glass).” https://www.dot-vision.com/CustomizationMaterial/20190512162248.html, 2020.

[63] II-VI Aerospace & Defense, “USAF 1951 Resolution Target 2”X2” NEG.” https://www.maxlevy.com/product/DA004.html, 2020.

[64] R. Mauri, Non-Equilibrium Thermodynamics in Multiphase Flows. 01 2013.

[65] W. Beyer, L. Fawcett, R. Mauldin, and B. Swartz, “The volume common to two congruent circular cones whose axes intersect symmetrically,” Journal of Symbolic Computation, vol. 4, no. 3, pp. 381 – 390, 1987.

[66] MATLAB, “alphaShape.” https://www.mathworks.com/help/matlab/ref/alphashape.html, 2020.

[67] MATLAB, “estimateCameraParameters.” https://www.mathworks.com/help/vision/ref/estimatecameraparameters.html, 2020.

[68] Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000.

[69] J. Heikkila and O. Silven, “A four-step camera calibration procedure with implicit image correction,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1106–1112, 1997.

[70] II-VI Aerospace & Defense, “Dot Array 25mm x 25mm, .25mm Dots on 1.0mm Centers, White Ivory.” https://www.maxlevy.com/product/FA145.html, 2020.

[71] Optics Focus, “Manual Rotary Platform.” http://www.optics-focus.com/manual-rotary-platform-p-770.html#.XxaFrChKg-U, 2020.

[72] Woods Hole Oceanographic Institution, “Ocean twilight zone.” https://www.whoi.edu/know-your-ocean/ocean-topics/ocean-life/ocean-twilight-zone/, 2020.

[73] D. J. Eck, “Introduction to Lighting.” http://math.hws.edu/graphicsbook/c4/s1.html#gl1light.1.4, 2018.

[74] R. Hunt and M. Pointer, Measuring Colour. The Wiley-IS&T Series in Imaging Science and Technology, Wiley, 2011.

[75] W. J. Smith, Modern Optical Engineering: The Design of Optical Systems, 4th ed. McGraw-Hill Professional, 2008.

[76] H. U. Sverdrup, M. W. Johnson, and R. H. Fleming, The Oceans: Their Physics, Chemistry, and General Biology. New York: Prentice-Hall, Inc., 1942.

[77] A. A. Mazin, “Characteristics of optical channel for underwater optical wireless communication system,” IOSR Journal of Electrical and Electronics Engineering, vol. 10, pp. 1–9, 03 2015.

[78] M. Babin, A. Morel, V. Fournier-Sicre, F. Fell, and D. Stramski, “Light scattering properties of marine particles in coastal and open ocean waters as related to the particle mass concentration,” Limnology and Oceanography, vol. 48, no. 2, pp. 843–859, 2003.

[79] T. Luhmann, “Eccentricity in images of circular and spherical targets and its impact to 3d object reconstruction,” ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XL-5, pp. 363–370, 06 2014.

[80] Dialight, “Reflective Optic.” http://www.dialightsignalsandcomponents.com/Assets/Brochures_And_Catalogs/Illumination/MDEXLUMREB10MM.pdf.

[81] LEDiL, “Regina.” https://www.ledil.com/data/prod/Regina/11347/11347-ds.pdf. C11347_REGINA.

[82] Carclo Optics, “10.0mm Narrow Spot Plain TIR (10412).” http://www.carclo-optics.com/optic-10412. 10412.

[83] Roithner LaserTechnik, “CLP series Reflectors for Lambertian.” http://www.roithner-laser.com/datasheets/led_optics/clp17cr_clp20cr.pdf.

[84] FriendlyARM, “NanoPi Duo2.” http://wiki.friendlyarm.com/wiki/index.php/NanoPi_Duo2.

[85] RaspberryPi, “RaspberryPi Zero W.” https://www.raspberrypi.org/products/raspberry-pi-zero/, 2017.

[86] RaspberryPi, “Raspberry Pi OS.” https://www.raspberrypi.org/documentation/raspbian/, 05 2020.

[87] TheImagingSource, “The Linux SDK for The Imaging Source cameras.” https://github.com/TheImagingSource/tiscamera, 03 2020.

[88] “CPUSET(7) Linux Programmer’s Manual.” https://man7.org/linux/man-pages/man7/cpuset.7.html, 06 2020.

[89] “CHRT(1) User Commands.” https://man7.org/linux/man-pages/man1/chrt.1.html, 01 2016.

[90] S. Ganeriwal, R. Kumar, and M. B. Srivastava, “Timing-sync protocol for sensor networks,” in Proceedings of the 1st International Conference on Embedded Networked Sensor Systems, SenSys ’03, (New York, NY, USA), p. 138–149, Association for Computing Machinery, 2003.

[91] J. Elson, L. Girod, and D. Estrin, “Fine-grained network time synchronization using reference broadcasts,” SIGOPS Oper. Syst. Rev., vol. 36, p. 147–163, Dec. 2003.

[92] V. G. Cerf and R. E. Kahn, “A protocol for packet network intercommunication,” SIGCOMM Comput. Commun. Rev., vol. 35, p. 71–82, Apr. 2005.

[93] SIGLENT, “SDS1104X-E (100 MHz).” https://siglentna.com/product/sds1104x-e-100-mhz/, 2020.

[94] Apple Inc., “Technical Note TN1023: Understanding PackBits.” http://developer.apple.com/technotes/tn/tn1023.html, 1990.

[95] D. Salomon, G. Motta, and D. Bryant, Data Compression: The Complete Reference. Springer London, 2007.

[96] F. Warmerdam, A. Kiselev, M. Welles, and D. Kelly, “Libtiff - tiff library and utilities.” http://www.libtiff.org/, 2019.

[97] “SCP(1) BSD General Commands Manual.” https://man7.org/linux/man-pages/man1/scp.1.html, 04 2020.

[98] Cypress Semiconductor Corporation, “Raspberry pi selects cypress’ wireless connectivity solution for industry-leading iot boards.” https://www.cypress.com/news/raspberry-pi-selects-cypress-wireless-connectivity-solution-industry-leading-iot-boards, 03 2017.

[99] Cypress Semiconductor Corporation, “CYW43438 Single-Chip IEEE 802.11 b/g/n MAC/Baseband/Radio with Integrated Bluetooth 4.2.” https://www.cypress.com/file/298076/download, 07 2018. Document Number: 002-14796 Rev. *L.

[100] Keller, “Pressure transmitters with I2C interface and embedded signal conditioning series.” https://www.kelleramerica.com/pdf-library/Piezoresistive%20OEM%20Pressure%20Transmitters%20with%20I2C%20interface%20and%20embedded%20signal%20conditioning.pdf, 05 2020.

[101] Maxim Integrated, “DS3231 Extremely Accurate I2C-Integrated RTC.” https://datasheets.maximintegrated.com/en/ds/DS3231-DS3231S.pdf, 03 2015.

[102] Texas Instruments, Inter-Integrated Circuit (I2C) Module User’s Guide, 2009.

[103] “BlueRobotics KellerLD Library.” https://github.com/bluerobotics/BlueRobotics_KellerLD_Library, 2018.

[104] Kernel.org, “I2C Tools.” https://i2c.wiki.kernel.org/index.php?title=I2C_Tools&oldid=1667, 2017.

[105] UUGear, “Witty-Pi-3.” https://github.com/uugear/Witty-Pi-3, 2020.

[106] R. K. Moore, “Radio communication in the sea,” IEEE Spectrum, vol. 4, no. 11, pp. 42–51, 1967.

[107] F. Le Pennec, C. Gac, H. Guarnizo Mendez, and C. Person, “2.4 GHz radio transmission measurements in a basin filled with sea water,” 10 2013.

[108] S. Winder, Power Supplies for LED Driving. Elsevier Science, 2011.

[109] K. Macaland, “Fundamentals to automotive led driver circuits.” http://www.ti.com/lit/wp/slyy163/slyy163.pdf?ts=1591745377257, 05 2019.

[110] Battery University, “BU-304a: Safety Concerns with Li-ion.” https://batteryuniversity.com/learn/article/safety_concerns_with_li_ion, 2019.

[111] AA Portable Power Corp., “Protection Circuit Module (PCB) for 11.1v Li-Ion 9.6V LFP Battery Pack (2.0A Limit).” https://www.batteryspace.com/protectioncirciusmodulepcbfor74vli-ionbatterypack6-8alimit-2.aspx, 2019.

[112] T. Gerkema, “An introduction to internal waves,” 01 2008.

[113] Earth Search Online, “Oceanic internal waves.” https://earth.esa.int/web/guest/missions/esa-operational-eo-missions/ers/instruments/sar/applications/tropical/-/asset_publisher/tZ7pAG6SCnM8/content/oceanic-internal-waves, 2020.

[114] L. Goodman and E. R. Levine, “Vertical Motion of Neutrally Buoyant Floats,” Journal of Atmospheric and Oceanic Technology, vol. 7, pp. 38–49, 02 1990.

[115] H. Rossby, E. Levine, and D. Connors, “The isopycnal swallow float—a simple device for tracking water parcels in the ocean,” Progress in Oceanography, vol. 14, pp. 511 – 525, 1985.

[116] C. Garrett and W. Munk, “Space-time scales of internal waves,” Geophysical Fluid Dynamics, vol. 3, no. 3, pp. 225–264, 1972.

[117] C. Garrett and W. Munk, “Space-time scales of internal waves: A progress report,” Journal of Geophysical Research (1896-1977), vol. 80, no. 3, pp. 291– 297, 1975.

[118] M. H. Sharqawy, J. H. Lienhard V, and S. M. Zubair, “Thermophysical properties of seawater: A review of existing correlations and data,” Desalination and Water Treatment, vol. 16, pp. 354–380, April 2010.

[119] K. G. Nayar, M. H. Sharqawy, L. D. Banchik, and J. H. Lienhard V, “Thermophysical properties of seawater: A review and new correlations that include pressure dependence,” Desalination, vol. 387, pp. 1–24, July 2016.

[120] P. K. Kundu, I. M. Cohen, and D. R. Dowling, Fluid Mechanics. Academic Press, 2012.

[121] C. Garrett and W. Munk, “Space-time scales of internal waves: A progress report,” Journal of Geophysical Research (1896-1977), vol. 80, no. 3, pp. 291–297, 1975.

[122] DuPont, “Delrin: The high-performance acetal resin.” https://www.dupont.com/products/delrin.html?src=om-gg_ti-delrin_acetal-delrin#Resources, 2020.

[123] OpenPTV Consortium, “OpenPTV.” https://www.openptv.net/.

[124] N. T. Ouellette, “Lagrangian particle tracking.” https://web.stanford.edu/~nto/LPT.shtml, 2019.

[125] D. Engelmann, “3d-flow measurement by stereo imaging,” 01 2000.

[126] J. Willneff, “A spatio-temporal matching algorithm for 3D particle tracking velocimetry,” 2003.

[127] N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.

[128] Corpuscular, “Colored glass microspheres and microbeads.” https://www.microspheres-nanospheres.com/Microspheres/Inorganic/Glass/red%20class%20beads.htm, 2011.

[129] Intertronics, “Opti-tec 4200 optically clear polyurethane encapsulant & potting compound.” https://www.intertronics.co.uk/product/opt4200-optically-clear-polyurethane-encapsulant-potting-compound/, 2020.

[130] Optics Focus, “Metric XYZ Translation Stage.” http://www.optics-focus.com/metric-xyz-translation-stage-p-989.html#.XxaFvihKg-U, 2020.

[131] Best Value Vac, “15 gallon aluminum vacuum and degassing chamber.” https://www.amazon.com/Aluminum-Degassing-Assembled-domestic-components/dp/B019QT7IEG.

[132] Smooth-On Inc., “Pressure chamber.” https://www.smooth-on.com/products/pressure-chamber/, 2020.

[133] K. Katija, R. E. Sherlock, A. D. Sherman, and B. H. Robison, “New technology reveals the role of giant larvaceans in oceanic carbon cycling,” Science Advances, vol. 3, no. 5, 2017.

[134] Flir, “Blackfly S Board Level.” https://www.flir.com/products/blackfly-s-board-level/?model=BFS-U3-88S6M-BD2, 2020.

[135] T. Zimmerman, N. Antipa, D. Elnatan, A. Murru, S. Biswas, V. Pastore, M. Bonani, L. Waller, J. Fung, G. Fenu, and S. Bianco, “Stereo in-line holographic digital microscope,” bioRxiv, 2019.
