Experiments and Simulations on the Incompressible, Rayleigh-Taylor Instability with Small Wavelength Initial Perturbations

Item Type text; Electronic Dissertation

Authors Roberts, Michael Scott

Publisher The University of Arizona.

Rights Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.


Link to Item http://hdl.handle.net/10150/265355

EXPERIMENTS AND SIMULATIONS ON THE INCOMPRESSIBLE, RAYLEIGH-TAYLOR INSTABILITY WITH SMALL WAVELENGTH INITIAL PERTURBATIONS

by

Michael Scott Roberts

A Dissertation Submitted to the Faculty of the

AEROSPACE AND MECHANICAL ENGINEERING DEPARTMENT

In Partial Fulfillment of the Requirements For the Degree of

DOCTOR OF PHILOSOPHY WITH A MAJOR IN MECHANICAL ENGINEERING

In the Graduate College

THE UNIVERSITY OF ARIZONA

2012

THE UNIVERSITY OF ARIZONA GRADUATE COLLEGE

As members of the Dissertation Committee, we certify that we have read the dissertation prepared by Michael Scott Roberts entitled Experiments and simulations on the incompressible, Rayleigh-Taylor instability with small wavelength initial perturbations and recommend that it be accepted as fulfilling the dissertation requirement for the Degree of Doctor of Philosophy.

Date: November 7 2012 Edward Kerschen

Date: November 7 2012 Hermann Fasel

Date: November 7 2012 Arthur Gmitro

Final approval and acceptance of this dissertation is contingent upon the candidate’s submission of the final copies of the dissertation to the Graduate College. I hereby certify that I have read this dissertation prepared under my direction and recommend that it be accepted as fulfilling the dissertation requirement.

Date: November 7 2012 Dissertation Director: Jeffrey Jacobs

STATEMENT BY AUTHOR

This dissertation has been submitted in partial fulfillment of requirements for an advanced degree at the University of Arizona and is deposited in the University Library to be made available to borrowers under rules of the Library.

Brief quotations from this dissertation are allowable without special permission, provided that accurate acknowledgment of source is made. Requests for permission for extended quotation from or reproduction of this manuscript in whole or in part may be granted by the head of the major department or the Dean of the Graduate College when in his or her judgment the proposed use of the material is in the interests of scholarship. In all other instances, however, permission must be obtained from the author.

SIGNED: Michael Scott Roberts

ACKNOWLEDGEMENTS

The road to my PhD has been a long one, but well worth it. I must of course thank my family for always believing in me. Even when I sometimes doubted myself, they did not. What they lacked in truly understanding what I was going through in my studies, they made up for with unwavering support. Luckily, I have good friends that I am able to talk to about my academic woes, and for this I am grateful as well. In recent years Kung Fu class helped me grow as a person and gave me the tools to deal with stressful situations. Although times may become stressful, it is important to live in the moment without overwhelming oneself with what might be or might have been. The words of Si Gong taught me to remain calm even when the river around me becomes turbulent. I have learned a lot over the years, and using this knowledge in a way that helps others would be a noble and satisfying vision for myself. My advisor, Dr. Jacobs, always had meaningful input at every step of the way along my journey. While other professors often force their students to fend for themselves, he did not, and for this he has earned my complete respect. I am also thankful to Bill Cabot, my mentor while I was at LLNL. He always had insight into solving problems that I would often encounter with my simulations and was willing to assist even after I left LLNL and there was no longer any obligation for him to do so. The people I have worked with through the years were an important part of my research. Many problems would not have been solved without their input, and for this I am grateful and humbled. Also, the help with editing this massive document is much appreciated; I would not have been able to finish in the desired time frame without key editors.

“We must acknowledge that the key is to be patient and make it to the end”

“Knowing others is wisdom, knowing yourself is Enlightenment” -Lao Tzu

This research was supported by Lawrence Livermore National Laboratory and by DOE NNSA under its Stewardship Science Academic Alliance program.

TABLE OF CONTENTS

LIST OF FIGURES ...... 7

LIST OF TABLES ...... 10

ABSTRACT ...... 11

CHAPTER 1 INTRODUCTION ...... 12
1.1 Background and Motivation ...... 12
1.2 Previous Research ...... 23
1.3 Proposed Research ...... 36
1.4 Self-Similarity ...... 38

CHAPTER 2 EXPERIMENTAL APPARATUSES ...... 41
2.1 Weight and Pulley Drop Tower ...... 41
2.1.1 Drop Tower and Test Sled ...... 41
2.1.2 Release Mechanism ...... 44
2.1.3 Acceleration Production ...... 45
2.1.4 Data Acquisition and Timing ...... 48
2.2 Linear Induction Motor Drop Tower ...... 53
2.2.1 Drop Tower and Test Sled ...... 53
2.2.2 Release Mechanism ...... 53
2.2.3 Acceleration Production ...... 54
2.2.4 Data Acquisition and Timing ...... 56
2.3 Fluid Tanks ...... 57

CHAPTER 3 EXPERIMENTAL LIQUIDS AND IMAGING ...... 61
3.1 Refractive Index Mismatch ...... 62
3.1.1 Liquid Combinations ...... 62
3.1.2 Mixing Layer Imaging ...... 67
3.1.3 Mixing Layer Imaging Artifacts ...... 73
3.2 Absorption Imaging Liquids ...... 91
3.3 Other Imaging Concepts ...... 101

CHAPTER 4 INITIAL PERTURBATIONS ...... 105
4.1 Forced Initial Perturbations ...... 105
4.2 Background Noise Induced Initial Perturbations ...... 113
4.2.1 Background Noise ...... 114

CHAPTER 5 NUMERICAL SIMULATIONS ...... 121

CHAPTER 6 RESULTS AND DISCUSSION ...... 126
6.1 Experimental Qualitative Results ...... 126
6.2 Experimental Quantitative Results ...... 139
6.2.1 Mixing Width and Plots ...... 143
6.2.2 Growth Parameter Plots ...... 145
6.3 Numerical Qualitative Results ...... 156
6.4 Numerical Quantitative Results ...... 159
6.4.1 Growth Parameter Plots ...... 160
6.5 Comparison ...... 167

CHAPTER 7 CONCLUSION ...... 171

APPENDIX A MATHEMATICAL DERIVATIONS ...... 174
A.1 Viscous Linear Stability Theory ...... 174
A.2 Viscous Effects ...... 189
A.3 Parametric Excitation with Viscous Damping ...... 190
A.4 Uncertainty Analysis and Absorption Analysis ...... 197
A.5 Spherical Cap Implementation ...... 204

APPENDIX B COMPUTER PROGRAMS ...... 207
B.1 Tank Lid Spherical Cap G-Code Program ...... 207
B.2 Matlab FFT Polarization Program ...... 207
B.3 Matlab Triangular Tank Beer's Law Uncertainty Program ...... 208
B.4 Matlab Gradient Refractive Index Model Program ...... 211
B.5 Matlab Interfacial Tension Calculation Program ...... 213
B.5.1 myfun.f ...... 214
B.6 Duff and Harlow Solution Fortran Program ...... 214
B.7 Java Image Analysis Program ...... 220
B.7.1 Main.java ...... 220
B.7.2 Excel.java ...... 243
B.8 Java Stack Ensemble Average Program ...... 259

REFERENCES ...... 264

LIST OF FIGURES

1.1 Explanation of buoyancy instability for general fluid particle ...... 13
1.2 Explan of RTI where the deformed intf displaces a fluid particle ...... 14
1.3 Rayleigh-Taylor explanation ...... 16
1.4 Depiction of indirect drive ICF ...... 18
1.5 RTI in ICF capsule ...... 19
1.6 Evolution of the Rayleigh-Taylor instability ...... 21

2.1 WP System, large range acceleration output ...... 43
2.2 WP release mechanism rendering ...... 44
2.3 WP system acceleration plot ...... 46
2.4 Renderings of the Weight and pulley system ...... 47
2.5 WP test sled rendering ...... 47
2.6 Imaging system diagram ...... 50
2.7 A computer generated image of the LIM drop tower ...... 54
2.8 Acceleration plot for LIM system ...... 56
2.9 SolidWorks Drawing of WP Tank ...... 58
2.10 WP, 0.5 Atwood num, miscible unforced exp with small balloons ...... 59
2.11 WP, 0.5 Atwood num, miscible unforced exp with large balloon ...... 60
2.12 WP, 0.5 Atwood num, miscible unforced exp with external balloon ...... 60

3.1 Polarization gated unmatched refractive index images ...... 70
3.2 Povray mixing region rendering ...... 71
3.3 Povray 3D view rendering with mismatched RI ...... 72
3.4 Miscible, 0.5 Atw num WP exp displaying three regions ...... 73
3.5 Miscible, 0.5 Atw num LIM exp displaying three regions ...... 74
3.6 Gradient refractive index model ...... 78
3.7 Sim of parallel light rays of refractive mismatched combination ...... 79
3.8 Sim of 27 degree light rays of refractive mismatched combination ...... 80
3.9 Sim of ±14 degree light rays of refractive mismatched combination ...... 81
3.10 Experimental investigation of mirage effect ...... 84
3.11 Test displaying obvious mirage effect ...... 85
3.12 Response of mirage effect from stirring ...... 86
3.13 Three-axis accelerometer measurements on WP system ...... 88
3.14 Possible explanation for three regions ...... 90
3.15 Triangular tank depiction ...... 94
3.16 SolidWorks triangular tank drawing ...... 95
3.17 Beer's law verification for experiment using anethole ...... 96
3.18 Beer's law contrast comparison plot ...... 101
3.19 LST heavy liquid nD and isopropyl alcohol nD vs. wavelength ...... 102

4.1 Non-uniformity of amp from horizontal forcing of small wavelength ...... 107
4.2 Rendering of the vertically constrained weight/motor system ...... 110
4.3 Resonant box rendering ...... 110
4.4 Small wavelengths produced by vertical oscillation ...... 111
4.5 Test experiment with vertical oscillation ...... 112
4.6 0.56 Atw num exp displaying RT growth w/o forcing in rect tank ...... 113
4.7 Diffusion thickness measurement ...... 116
4.8 Experiment with hollow glass spheres without clumping ...... 119
4.9 Experiment with hollow glass spheres with clumping ...... 120

5.1 Simulation convergence plots ...... 122
5.2 Simulation grid size convergence plots ...... 124

6.1 Immiscible, unforced 0.48 Atwood number experiment ...... 127
6.2 Immiscible, unforced 0.48 Atw num experiment w/o surfactant ...... 129
6.3 Immiscible, forced 0.48 Atwood number experiment ...... 131
6.4 Miscible, unforced 0.48 Atwood number experiment ...... 132
6.5 Miscible, 25 Hz forced 0.48 Atwood number experiment ...... 134
6.6 Forced, unforced, miscible and immiscible exps comparison ...... 135
6.7 Immiscible matched RI hvy liquid / anethole montage ...... 137
6.8 Immiscible, unforced 0.48 Atw number exp on LIM apparatus ...... 138
6.9 Intensity profile and cutoffs for experiment ensemble ...... 142
6.10 h for 0.48 Atwood number experiments ...... 144
6.11 Reynolds number data for 0.48 Atwood number experiments ...... 145
6.12 α for 0.48 Atw num exps calc using √h vs. t√Ag ...... 147
6.13 α for 0.48 Atw num exps calculated using C&C method ...... 151
6.14 α for 0.48 Atw num exps calc using C&C meth w/ local parabola ...... 152
6.15 α for 0.48 Atw num exps calc using local parabola fit ...... 153
6.16 α from anethole / hvy Liquid exps calc w/ C&C & √h vs. t√Ageff ...... 154
6.17 α for 0.48 Atw num exps calc w/ C&C & √h vs. t√Ageff for LIM ...... 155
6.18 0.48 Atwood number nanometer IC simulation laplacian images ...... 158
6.19 0.48 Atwood number nanometer IC simulation laplacian images ...... 159
6.20 α plots of 0.48 Atwood num nm IC sim for laplacian images ...... 161
6.21 α plots of 0.48 Atw nm IC sim for dens with 90% & 10% thresh ...... 162
6.22 α plots of 0.48 Atw num nm IC sim for dens with 95% & 5% thresh ...... 162
6.23 α plots of 0.48 Atwood num µm IC sim for laplacian images ...... 163
6.24 α plots of 0.48 Atwood num µm IC LIM sim for lapl images ...... 164
6.25 α plots of 0.57 Atwood num extrap IC sim for laplacian images ...... 165
6.26 α plots of 0.48 Atwood num Fdy IC sim for laplacian images ...... 166

A.1 Depiction of interface for viscous RT derivation ...... 174
A.2 Calculated stability curves for parametric forcing ...... 196
A.3 Natural log of I/I0 for triangular tank ...... 199
A.4 Beer's law least squares error ...... 200
A.5 Beer's law concentration plots ...... 201
A.6 Beer's law contrast plots ...... 203
A.7 Tank lid spherical cap derivation ...... 205

LIST OF TABLES

3.1 Refractive index mismatch liquid combinations ...... 66
3.2 Matched refractive index liquid combinations ...... 98

6.1 α comparison table ...... 170

ABSTRACT

The Rayleigh-Taylor instability is a buoyancy driven instability that takes place in a stratified fluid system with a constant acceleration directed from the heavy fluid into the light fluid. In this study, both experimental data and numerical simulations are presented. Experiments are performed primarily using a lithium-tungstate aqueous solution as the heavy liquid, but sometimes a calcium nitrate aqueous solution is used for comparison purposes. Experimental data is obtained for both miscible and immiscible fluid combinations. For the miscible experiments the light liquid is either ethanol or isopropanol, and for the immiscible experiments either silicone oil or trans-anethole is used. The resulting Atwood number is either 0.5 when the lithium-tungstate solution is used or 0.2 when the calcium nitrate solution is used. These fluid combinations are either forced or left unforced. The forced experiments have an initial perturbation imposed by vertically oscillating the liquid-containing tank to produce Faraday waves at the interface. The unforced experiments rely on random interfacial fluctuations, due to background noise, to seed the instability. The liquid combination is partially enclosed in a test section that is accelerated downward along a vertical rail system, causing the Rayleigh-Taylor instability. Accelerations of approximately 1g (with a weight and pulley system) or 10g (with a linear induction motor system) are experienced by the liquids. The tank is backlit and digitally recorded with high-speed video cameras. These experiments are then simulated with the incompressible Navier-Stokes code Miranda. The main focus of this study is the growth parameter (α) of the mixing region produced by the instability after it has become apparently self-similar and turbulent. The measured growth parameters are compared to determine the effects of miscibility and initial perturbations (of the small wavelength, finite bandwidth type used here).
It is found that while initial perturbations do not affect the instability growth, miscibility does.

CHAPTER 1

INTRODUCTION

1.1 Background and Motivation

The Rayleigh-Taylor instability (RTI) is a buoyancy driven fluid instability that occurs when the light fluid in a stratified two-fluid system is accelerated into the heavier one, often by means of a pressure gradient. In order for the instability to develop, ∇P · ∇ρ < 0 (where P is pressure and ρ is density); that is, the pressure increases in the direction from the more dense to the less dense fluid. Rayleigh-Taylor instability is buoyancy driven, as opposed to the mathematically similar Kelvin-Helmholtz instability, which is shear driven [30]. The most notable example of the Rayleigh-Taylor instability is when a heavy fluid lies atop a light one while in the presence of a downward acting gravitational field. The purpose of this study is to determine how the mixing region (which develops due to the instability) grows in time, and how it is affected by altering experimental parameters. An easy way to understand how we may have a Rayleigh-Taylor (RT) stable (or unstable) stratified configuration is by considering the situation in which there

is an acceleration geff acting downward (in the negative z direction) on a stratified fluid system as depicted in figure 1.1. Considering a fluid particle, we can look at the forces acting on it in reference to the coordinate system in which z is directed upward. The acceleration produces a pressure gradient ∂P/∂z = −ρgeff inside the fluid which may create a force imbalance upon the fluid particle. If we choose a fluid particle in the upper fluid with density ρ2, we see that the force, due to pressure, at the lower surface of this particle would be [P0 − ρ1geffℓ − ρ2geff(z − ℓ)]A (where A is the area) and would be [P0 − ρ1geffℓ − ρ2geff(z − ℓ + ∆z)]A for the upper surface. We have chosen the geometry of the fluid particle here to simplify the equations. The force due to gravity on the fluid particle is −ρ2Vgeff = −ρ2∆zAgeff (where V is the volume).

Figure 1.1: A fluid particle in the upper fluid is interchanged with one from the lower fluid in a stratified system with downward acting acceleration geff. Once displaced to the bottom fluid, the force balance on the particle (net pressure force + force due to particle mass = mass × acceleration) yields ρ1geff∆zA − ρ2geff∆zA = (ρ2V + ρ1β) d²z/dt². Here, ρ is the fluid density, A is area, V is the volume and z is the vertical displacement. Therefore, if ρ2 > ρ1, the fluid particle is accelerated further downward and the system is unstable. If ρ2 < ρ1, the fluid particle is pushed back across the interface and the system is stable.

Writing out Newton's second law we have (lower pressure force − upper pressure force + gravity force = mass × acceleration),

∑F = ρ2geff∆zA − ρ2geff∆zA = (ρ2V + ρ1β) d²z/dt² = 0,

where we have also included the added mass ρ1β to account for the other fluid that must be accelerated away with the fluid particle. For this configuration, the fluid particle does not move, which is expected. If we interchange the fluid particle with one from the lower fluid where the density is ρ1, from Newton's second law, for the initial particle, we obtain (noting that the pressure forces have changed since we are in the lower fluid),

{[P0 − ρ1geffz]A − [P0 − ρ1geff(z + ∆z)]A} − ρ2geff∆zA = ρ1geff∆zA − ρ2geff∆zA = (ρ2V + ρ1β) d²z/dt².

From this, if ρ1 > ρ2, the fluid particle is pushed back to where it came from (the system is stable). However, if ρ2 > ρ1, the fluid particle is pushed further away from where it originated and the system is unstable. This concept of a fluid particle moving across the interface resulting in instability can be extended to the deflection of an interface in the Rayleigh-Taylor instability, and it illustrates the necessity of an initial perturbation on the interface, since without one there is no mechanism to interchange a fluid particle across the interface. An example of a simple interface is shown in figure 1.2, where the coordinate system is the same as in figure 1.1. The interface has been deformed, simulating perturbations on the interface. For simplicity the geometry of the interface deformation has been chosen to be rectangular (the derivation here can be generalized to an individual Fourier mode so that any interface deformation would follow the same behavior). The fluid


Figure 1.2: Here an interface is shown, with downward acting acceleration geff, where a fluid particle is displaced from the lower to upper fluid by means of interface deformation. Once displaced to the upper fluid, the force balance (net pressure force + force due to particle mass = mass × acceleration) yields ρ2geff∆zA − ρ1geff∆zA = (ρ1V + ρ2β) d²z/dt². Here, ρ is the fluid density, A is area, V is the volume and z is the vertical displacement. Therefore, if ρ2 > ρ1, the system is unstable (the fluid particle moves up farther from the center, further deforming the interface). If ρ2 < ρ1, the fluid particle is moved back toward the center and the system is stabilized.
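The stability criterion contained in this force balance can be checked with a short numerical sketch. The function name and all parameter values below are hypothetical, chosen only for illustration:

```python
# Minimal check of the displaced-particle force balance (sketch only).
# A particle of density rho1 displaced upward into fluid of density rho2
# obeys (rho2 - rho1)*geff*dz*A = (rho1*V + rho2*beta)*d2z/dt2, so the
# sign of the resulting acceleration decides stability.

def particle_acceleration(rho1, rho2, g_eff, dz, A, V, beta):
    net_force = (rho2 - rho1) * g_eff * dz * A   # net pressure force minus weight
    inertia = rho1 * V + rho2 * beta             # particle mass plus added mass
    return net_force / inertia

# Heavy fluid above (rho2 > rho1): positive acceleration, so the particle
# keeps moving away from the interface -> unstable.
a_unstable = particle_acceleration(1000.0, 1500.0, 9.81, 0.01, 1e-4, 1e-6, 1e-6)
# Light fluid above (rho2 < rho1): negative, pushed back -> stable.
a_stable = particle_acceleration(1500.0, 1000.0, 9.81, 0.01, 1e-4, 1e-6, 1e-6)
print(a_unstable > 0, a_stable < 0)  # True True
```

Only the sign matters here; the magnitudes depend entirely on the made-up geometry.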

particle relocation is caused by deformation of the interface. The pressure force on the fluid particle's lower surface is [P0 − ρ1geffℓ]A and is [P0 − ρ1geffℓ − ρ2geff∆z]A for the upper surface. The force due to the weight of the fluid particle (which has density ρ1) is −ρ1Vgeff = −ρ1∆zAgeff. Note that once again we have chosen the interface deformation shape to simplify the calculations. We can then form the equation for the force balance as,

ρ2geff∆zA − ρ1geff∆zA = (ρ1V + ρ2β) d²z/dt².

In this arrangement, if ρ1 > ρ2, the fluid particle is pushed back to its original position (and thus the interface is brought back to equilibrium, so the system is stable). However, if ρ2 > ρ1, the fluid particle is pushed further away from where it originated (deforming the interface further) and thus the system is unstable. Another way to understand the instability is from the baroclinic torque present at the stratified, perturbed interface. This baroclinic torque is created from the misalignment of the pressure and density gradients at the perturbed interface. When in the unstable configuration, for a particular harmonic component of the initial perturbation, this torque between the two fluids will create vorticity. This vorticity will impose a velocity field that will tend to increase the misalignment of the gradient vectors, which in turn will create additional vorticity, leading to more misalignment. This becomes more obvious if we consider the two-dimensional, incompressible, inviscid vorticity equation,

Dω/Dt = (1/ρ²) ∇ρ × ∇P,   (1.1)

where ω is vorticity, ρ is density and P is the pressure [12]. The pressure gradient here is produced by the acceleration. This concept is presented in figure 1.3, where it is observed that the two counter-rotating vortices with strength ω have velocity fields that sum at the peak and trough of the perturbed interface. In the stable configuration the vorticity, and thus the induced velocity field, will be in a direction that decreases the misalignment and therefore stabilizes the system.

Figure 1.3: Visualization of an unstable Rayleigh-Taylor configuration where baroclinic torque at the interface creates vorticity and induces a velocity field that increases the baroclinic torque. Here ω is vorticity, P is pressure, ρ is density, u is velocity and g is gravity. The thick circular arrows represent the velocity field created by the vortex.
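The sign pattern of this vortex pair can be reproduced by evaluating the baroclinic term of equation (1.1) on a model interface. The fields and parameter values below are illustrative assumptions, not the configuration used in the experiments:

```python
import numpy as np

# Sketch (made-up values): evaluate the 2D baroclinic source term
# (1/rho^2)(grad rho x grad P) for a diffuse sinusoidal interface with the
# heavy fluid on top and a hydrostatic pressure field dP/dz = -rho*g
# (z upward, acceleration acting downward).

rho1, rho2, g = 1000.0, 1500.0, 9.81            # light below, heavy above
k, a, delta = 2 * np.pi / 0.05, 0.002, 0.001    # wavenumber, amplitude, thickness

x = np.linspace(-np.pi / k, np.pi / k, 201)      # one wavelength, crest at x = 0
z = np.linspace(-0.01, 0.01, 201)
X, Z = np.meshgrid(x, z)

# Density: smooth jump from rho1 to rho2 across the perturbed interface.
rho = 0.5 * (rho1 + rho2) + 0.5 * (rho2 - rho1) * np.tanh((Z - a * np.cos(k * X)) / delta)

drho_dx = np.gradient(rho, x, axis=1)
dP_dz = -rho * g                                 # hydrostatic; dP/dx neglected

# y-component of (1/rho^2) grad(rho) x grad(P) with dP/dx = 0.
baroclinic = -drho_dx * dP_dz / rho**2

# Vorticity production has opposite signs on the two sides of the crest,
# consistent with the counter-rotating vortex pair of figure 1.3.
mid = len(z) // 2
left, right = baroclinic[mid, 50], baroclinic[mid, 150]
print(left < 0, right > 0)   # True True
```

The opposite signs on either side of the crest are what drive the crest upward and the trough downward in the unstable configuration.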

RTI occurs often in both natural and man-made systems. An inverted glass of water is one such example, where the heavy fluid is water and the light fluid is air. A very important application (and one from which much of the funding for this study has come) is Inertial Confinement Fusion (ICF). In ICF, a capsule containing a Deuterium / Tritium (DT) mixture is bombarded with energy originating from high-powered lasers with the purpose of causing a fusion reaction to take place; the

two isotopes fuse, producing He4, a neutron and energy [37]. ICF experiments are currently being performed at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory (LLNL), but ignition (a net output of energy from the fusion reaction) has not yet been achieved. At NIF, 192 frequency-tripled Nd:Glass lasers are used to irradiate the target. The system is designed to output 1.8 MJ of energy and 500 TW of power [59]. The ICF capsule is a sphere comprised of three main layers. The outer shell is an ablator material made from plastic doped with other elements such as Beryllium or Germanium. Interior to that is a layer of DT ice surrounding DT gas. There are two main types of ICF: direct and indirect drive. In direct drive, lasers directly irradiate the target. In indirect drive, lasers enter a hohlraum which has the capsule in the center. The hohlraum is a hollow cylinder that is composed of a high Z (large atomic number) material, such as gold. The lasers irradiate the inside of the hohlraum, which re-emits the energy as x-rays. In the indirect drive method, a more uniform energy distribution is deposited on the ablator layer. The energy deposited on the ablator causes it to blow off and, by Newton's third law, PdV work is done on the interior of the capsule. The compression of the DT gas region results in an increase in pressure at the center of the capsule, causing very high temperatures to develop. In addition, shocks (caused by the ablation) pass into the DT gas region which also add to the pressure and temperature rise. The pressure rise at the center eventually acts to decelerate the initially accelerating implosion until a stagnation point is reached [8]. This "hot spot" will reach the conditions for thermonuclear burn if a high enough temperature is achieved. The process is depicted in figure 1.4, which is from a summary talk given by Basko [4].
During this process, there are two ways in which the Rayleigh-Taylor instability can develop, both of which act to hinder ignition and decrease total yield. Firstly, RTI can occur at the interface of the outer ablator shell (after becoming a plasma) and the DT ice layer during the initial implosion of the target. In this configuration, the smaller density of the outer ablator plasma layer and the larger density DT ice inner layer create an inward acting density gradient. This, in conjunction with the outward acting pressure gradient, results in an RT unstable configuration. By choosing layers of gradually varying density with different dopants such as Germanium, for the ablator material, the Atwood number can be decreased; thus, decreasing RT growth. Also, by using indirect drive (to produce a more uniform energy deposition), the effect of RTI can be minimized as well [11]. The second way that RTI can occur is during the deceleration phase between the high temperature, high pressure DT gas and the outer, colder DT ice layer. Here, the pressure gradient is directed inward and the density gradient is directed outward, which is also an RT unstable configuration. Pictorial depictions of both RTI during the implosion phase and RTI during the deceleration phase are shown in figure 1.5. The RTI generated in both these instances causes mixing. This mixing brings cold fuel from the outer layer into the center "hot spot", lowering the temperature and decreasing the reaction rate; this

Figure 1.4: A depiction of indirect drive inertial confinement fusion. The DT capsule is first irradiated by x-rays created by lasers focused on the inside of the hohlraum. This causes the outer ablator material to blow off which results in implosion of the fuel. Eventually, a large enough pressure and temperature is reached in the core causing ignition. This figure is from Basko [4].

process may prevent ignition altogether [62, 79]. By more fully understanding this instability, more efficient capsules can be designed [11]. Simulations are performed to compare different capsule designs. However, the simulations cannot resolve the small turbulent length scales and therefore models are required. Experiments are then needed to validate these models. In general, RTI is affected by both interfacial tension (or specifically surface tension if there is a free surface) and viscosity. Interfacial tension acts to suppress smaller wavelengths: there is some critical wavelength below which perturbations are no longer unstable. The stabilizing effect of surface tension can be observed in a liquid-filled straw, where the top is covered and the straw is removed from the liquid. The dimension of the straw limits the maximum wavelength that can exist such that it is below the critical wavelength. Only very vigorous oscillation of the straw, causing non-linear effects and/or effectively increasing the acceleration, will

(a) RTI between the ablation layer and the DT ice layer from the implosion phase (Pabl. > PDTice, ρabl. < ρDTice). (b) RTI between the DT gas and the DT ice layers from the deceleration phase (PDTgas > PDTice, ρDTgas < ρDTice).

Figure 1.5: This figure represents the two instances of the Rayleigh-Taylor instability during the ignition process of an ICF capsule. First (a), during the initial deposition of energy on the ablation layer, the ablator turns into a plasma and blows off. This blow-off causes the capsule to implode. During this implosion phase, what is left of the low density plasma ablation material is at a higher pressure than the higher density DT ice and thus this configuration is RT unstable. Imperfections in the energy deposition or ICF capsule create perturbations that seed the instability. Second (b), during the deceleration phase, where the inner core has reached tremendous pressures (from both shock waves and compression), there is another RT unstable configuration. The inner, lower density DT gas core has a higher pressure than the outer, higher density DT ice layer. Again, imperfections in the core and non-uniform energy deposition seed the instability.

cause the fluid to fall out. Viscosity also suppresses small wavelengths. However, its effect is to damp and not stabilize the shorter wavelengths. A lava lamp displays the Rayleigh-Taylor instability in a configuration where viscous damping plays a major role. In a lava lamp, the two fluids used are often paraffin wax and water. The paraffin (when cool) is more dense than water, but becomes less dense than water when heated from the incandescent bulb at the bottom of the lamp. The heated paraffin wax at the bottom is in an RT unstable configuration with the water above it, therefore the instability develops. Once at the top (and detached from the bottom paraffin layer), the paraffin cools and creates an RT unstable configuration in the opposite direction. The cycle begins again when the paraffin is once again heated from below. The viscosity of paraffin is many times that of water and therefore we only observe large wavelength perturbations. The evolution of the Rayleigh-Taylor instability follows four main stages. Initially, if the perturbation amplitudes are small when compared to wavelength, the growth is exponential. By assuming small amplitudes the equations of motion can be linearized, and this results in exponential growth for the instability; this is the first stage. At the limit of linear stability theory, we observe the ubiquitous mushroom shaped spikes (fluid structures of heavy fluid growing into light fluid) and bubbles (fluid structures of light fluid growing into heavy fluid). The growth of these structures can be modeled by using a buoyancy drag model and the growth is linear in time (the velocity is constant); this is the second stage [71, 39]. At this time, non-linear terms in the equations of motion can no longer be ignored and mode-coupling will begin to play a role.
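The competing effects of buoyancy and interfacial tension in the first (linear) stage can be sketched with the classical inviscid dispersion relation — a textbook result (e.g. Chandrasekhar's treatment), not the viscous theory derived in appendix A: σ² = Akg − Tk³/(ρ1 + ρ2), with critical wavelength λc = 2π√(T/((ρ2 − ρ1)g)) below which perturbations do not grow. The property values below are illustrative:

```python
import numpy as np

# Classical inviscid linear-theory estimates with interfacial tension T
# (textbook formulas; illustrative property values for water over air).

def critical_wavelength(rho_heavy, rho_light, tension, g=9.81):
    """Wavelengths shorter than this are stabilized by interfacial tension."""
    return 2 * np.pi * np.sqrt(tension / ((rho_heavy - rho_light) * g))

def growth_rate(k, rho_heavy, rho_light, tension, g=9.81):
    """Exponential growth rate sigma; zero beyond the tension cutoff."""
    A = (rho_heavy - rho_light) / (rho_heavy + rho_light)   # Atwood number
    s2 = A * k * g - tension * k**3 / (rho_heavy + rho_light)
    return np.sqrt(s2) if s2 > 0 else 0.0

# Water over air: lambda_c comes out near 1.7 cm, larger than a drinking
# straw's diameter, which is why the covered straw holds the water in.
lam_c = critical_wavelength(998.0, 1.2, 0.072)
print(lam_c)                                              # ~0.017 m
print(growth_rate(2 * np.pi / lam_c, 998.0, 1.2, 0.072))  # ~0 at the cutoff
```

This also motivates the experimental constraint noted at the end of this chapter: for immiscible fluids, forced wavelengths must stay above λc or the perturbation will not grow.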
Then, the spikes and bubbles interact with each other through bubble merging and competition, where fluid structures merge to create larger structures and larger structures envelop smaller ones respectively; this is the third stage. This eventually develops into a region of turbulent mixing, which is the fourth and final stage. The mixing region that develops is believed to be self-similar and turbulent if the Reynolds number is large enough [29]. It is of particular interest to examine RTI after the flow has evolved to the fourth stage. Figure 1.6 represents the evolution of the Rayleigh-Taylor instability from small wavelength perturbations at the interface.


Figure 1.6: This figure represents the evolution of the Rayleigh-Taylor instability from small wavelength perturbations at the interface (a) which grow into the ubiquitous mushroom shaped spikes (fluid structures of heavy into light fluid) and bubbles (fluid structures of light into heavy fluid) (b) and these fluid structures interact due to bubble merging and competition (c) eventually developing into a mixing region (d). Here ρ2 represents the heavy fluid and ρ1 represents the light fluid. Gravity is acting downward and the system is RT unstable.

The turbulent mixing that takes place represents active-scalar, level 2 mixing where the mixing is coupled to the flow dynamics [29]. The flow is postulated to

follow the model h = αAgt², where h is the mixing layer width, A ≡ (ρ2 − ρ1)/(ρ2 + ρ1) (the density contrast) is the Atwood number, g is the acceleration and t is time [107]. Under the self-similar hypothesis, the flow at different times has the same geometry and there is no obvious temporally constant length scale for the mixing region to be scaled with; the mixing layer width is only coupled to the length scales within the mixing region. Thus, the mixing layer width and the internal wavelengths increase in time and must grow proportionally with each other. Eventually, the range of scales within the mixing region forms a sufficient inertial range for fully developed turbulence to be assumed. A derivation, through dimensional analysis, of this self-similarity is presented in section 1.4. A fully developed turbulent flow implies self-similarity, but since a self-similar flow does not necessarily imply turbulence, turbulence cannot be assumed without quantifying the statistical properties of the flow or by making comparisons to other studies where flow statistics are quantified. A byproduct of self-similarity is that the flow loses its memory of the initial conditions. The dependence of α on the initial conditions would therefore allow one to infer whether self-similarity is achieved. One of the goals of the present research is to test this hypothesis by varying the initial conditions and observing the effect on the constant of proportionality in the model. It has also been hypothesized that this model holds for larger Atwood numbers as well as smaller ones. However, results from previous experiments with large Atwood numbers are limited. The large Atwood number experiments performed in the past were almost always immiscible, which adds an additional experimental parameter. Also, past experiments often had only a few images per experiment, giving a limited amount of data.
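A minimal sketch of the model quoted above. The function names and numerical values are ours and purely illustrative; the α value is merely representative of the range reported in the literature.

```python
def atwood_number(rho_heavy, rho_light):
    """Density contrast A = (rho2 - rho1) / (rho2 + rho1)."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def mixing_layer_width(alpha, atwood, g, t):
    """Self-similar model h = alpha * A * g * t**2 [107]."""
    return alpha * atwood * g * t * t

# Hypothetical fluid pair: water (1000 kg/m^3) over a light oil (700 kg/m^3),
# with alpha = 0.06 taken from the experimentally reported range.
A = atwood_number(1000.0, 700.0)            # ~0.176
h = mixing_layer_width(0.06, A, 9.81, 0.5)  # width after 0.5 s, in meters
```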
In this dissertation, we will present results with a large Atwood number in both miscible and immiscible configurations. The goal of these experiments is to better understand how the self-similar model applies to RTI. It is hypothesized that the initial conditions will have an influence on the instability until the mixing layer is much larger than the longest wavelength [2], and that the flow becomes self-similar beyond this limit. To increase the amount of data we can collect in the self-similar region, it is desirable to have the smallest wavelengths possible for the initial perturbations in our experiment, so that the flow transitions to stage four as quickly as possible. However, we must not produce waves smaller than the critical wavelength when using immiscible fluids, or the instability will not grow.
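The critical wavelength mentioned above follows from linear stability theory with interfacial tension: surface tension stabilizes wavenumbers above a cutoff. The sketch below is our own (illustrative values, not the fluid pairs used in this work).

```python
import math

def critical_wavelength(rho_heavy, rho_light, g, surface_tension):
    """Shortest unstable wavelength for immiscible RT.

    Linear theory with interfacial tension T gives the cutoff wavenumber
    k_c = sqrt((rho2 - rho1) * g / T); perturbations with wavelength
    lambda < lambda_c = 2*pi/k_c are stabilized by surface tension.
    """
    k_c = math.sqrt((rho_heavy - rho_light) * g / surface_tension)
    return 2.0 * math.pi / k_c

# Illustrative air/water values: T ~ 0.072 N/m gives lambda_c ~ 1.7 cm.
lam_c = critical_wavelength(1000.0, 1.2, 9.81, 0.072)
```

Forced perturbations in an immiscible experiment must therefore stay above λ_c, which limits how small the initial wavelengths can be made.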

1.2 Previous Research

After its first discovery by Lord Rayleigh in 1883 [84] and its rediscovery by Geoffrey Taylor in 1949 [99], the Rayleigh-Taylor instability has received much attention over the years. In this section much of the previous research will be outlined, restricted mostly to studies pertaining to experimentation. Lord Rayleigh analyzed the stability of a non-uniform density fluid in the presence of a gravitational field. He considered both the case where two fluids of different densities are stratified and the case of an exponentially varying density distribution. To perform his analysis he used the incompressible, 3D equations of motion. In addition, he included a kinematic condition to match the fluid interface velocity with that of the flow. He allowed the dependent variables to be perturbed (by a small amount) from the base state and neglected squares of small terms. He then performed a normal mode analysis in which individual Fourier modes were tested for stability, allowing him to determine an eigenvalue relationship between growth rate and wavenumber. His analysis did not, however, account for the effects of viscosity or surface tension. Harrison [40] considered the motion of superposed fluids in the context of investigating the natural frequencies of the fluids when oscillated. His analysis does allow for the RT unstable case (although he does not mention it). He also included the effects of viscosity and interfacial tension. His analysis was performed by assuming potential flow, but he added an extra term to represent viscous flow using the stream function. Because of his choice of formulation, his analysis was two-dimensional. As did Rayleigh, he neglected nonlinear terms and performed a normal mode analysis. He did not consider an exponentially varying density (only a discontinuous density jump), but he did include the effect of a finite domain for one of the fluids.
Almost 70 years later, in 1950, seemingly unaware of the analysis of Lord

Rayleigh, Taylor [99] re-derived the equations for RTI. His analysis started with potential flow; therefore, the form of the solution follows from the solution to Laplace's equation by separation of variables. He proceeded with his solution in two dimensions, where he substituted the solution for individual wavenumbers into the unsteady Bernoulli equation. He neglected nonlinear terms and matched pressure across the interface. He also used the kinematic condition to match the interface velocity with that of the flow and matched boundary conditions. He obtained the same relationship that Rayleigh had found previously, and he also included the effects of finite fluid depth and a body force other than that of gravity. At the same time that Taylor published his theoretical analysis, Lewis [56] was the first to investigate the Rayleigh-Taylor instability experimentally. All of the fluid combinations used by Lewis were immiscible, with a range of Atwood numbers. Experiments were performed on liquid-liquid interfaces with Atwood numbers of either 0.065 or 0.228. Also, a few experiments were performed with gas-liquid interfaces using air as the light fluid, all of which had an Atwood number of approximately one. The initial perturbations at the interface were created by using a moving paddle to excite the interface, always with fewer than five waves across the tank. From these experiments, Lewis concluded that there were two main stages of the growth of the instability: an exponential stage (dictated by linear stability theory) and a constant velocity stage that verified the findings of Davies and Taylor [20], who studied the rising of bubbles in stratified fluids. In 1951, Bellman and Pennington [6] re-analyzed the Rayleigh-Taylor instability including the effects of viscosity and surface tension. They also went a step further than Harrison and calculated a few numerical examples to show the effects of surface tension and viscosity on the growth rate.
Chandrasekhar [9] performed a more complete study where he derived the equations for the Rayleigh-Taylor instability, including both interfacial tension and viscosity, for a three-dimensional system starting from the full incompressible equations of motion. Since interfacial tension and viscosity are often important, a better understanding of their impacts on the Rayleigh-Taylor instability is very useful.

Layzer [54], in 1955, expanded upon Taylor's analysis to include nonlinear effects. Layzer introduced the concept of bubble competition: when fully developed bubbles grow, they compete with each other and larger ones eventually incorporate smaller ones. Later, Fermi and Von Neumann [35] re-derived Taylor's results using a Lagrange formulation. In their nonlinear analysis it was noted that the Kelvin-Helmholtz instability may develop due to the presence of fluid shear between the different fluids. Emmons, Chang and Watson [33] investigated the instability of an air-liquid combination using a paddle to excite an initial sinusoidal perturbation with large wavelengths (by which we mean here only a few waves across the tank width). Their results were compared with the theory of Bellman and Pennington and good agreement was found. Additionally, they derived a weakly nonlinear solution for the incompressible Rayleigh-Taylor instability including surface tension. Also, the phenomena of bubble competition and Kelvin-Helmholtz rollups were observed; both of these were predicted by Fermi and Von Neumann. Chandrasekhar [9] then published a book in which he compiled Rayleigh's and Taylor's similar derivations (along with viscous and surface tension effects) and forever identified the instability with their names. In addition, Chandrasekhar also considered the instability when a magnetic field is present. In 1973, Cole and Tankin [13] published RTI experiments using air and water with a single mode sinusoidal initial perturbation present (created in a way similar to Emmons, Chang and Watson). A comparison was made with theory as well as with the experiments of Emmons, Chang and Watson. Cole and Tankin found that their data compared well with the results of linear stability theory.
However, they did not have good quantitative agreement with theory when attempting to verify the growth rate of the second order wave in the analysis of Emmons, Chang and Watson. Also in 1973, Ratafia [83] published RT experiments that consisted of a 2D tank filled with two liquids having an initial perturbation present (produced by oscillating the tank rather than using a paddle). The fluids used yielded an Atwood number of 0.095. The fluids were immiscible, and for this reason a surfactant was added to decrease the surface tension. Ratafia was able to carry his experiments into late time and observe the transition from the linear to the nonlinear regime. In 1974, Plesset and Whipple developed a simplified expression for the fastest growing wavelength for viscous RTI using the viscous wave damping model of Lamb [53], yielding λfg = 4π(ν²/geff)^(1/3), where ν is the average kinematic viscosity [76]. This simplification is very useful because the equations developed by Bellman and Pennington are quite complex and must be solved numerically. In 1979, Popil and Curzon published their experimental work that consisted of a 2D air-water RT experiment with an initial perturbation produced by high voltage electrodes. An advantage of their perturbation method was that, since no mechanical device was utilized, very complicated waveforms should be possible. Also, creating a perfectly stationary waveform was possible. Sharp [93], in 1984, compiled a summary of all of the progress on the Rayleigh-Taylor instability. Sharp was the first to clearly define the different stages of RTI. First, with amplitudes small compared to the wavelength, the instability develops exponentially as dictated by linear stability theory. After this, mushroom shaped bubbles (structures of light into heavy fluid) and spikes (structures of heavy into light fluid) form. This stage is strongly affected by three-dimensionality.
The next stage consists of nonlinear interactions between the bubbles and spikes. Merging of the bubbles and spikes takes place and the Kelvin-Helmholtz instability develops. The final stage is the break-up of the spikes, penetration of the bubbles into a finite thickness slab of fluid and the development of turbulent mixing. Youngs [107], also in 1984, published results in which he found that, given long enough time, the RTI appears to become turbulent, insensitive to its initial conditions and self-similar. By using dimensional arguments and assuming the flow is self-similar, he derived that the mixing layer width is proportional to Ageff t². For 2D simulations, he found the proportionality constant, α, to lie between 0.04 and 0.05 and to be relatively independent of Atwood number. At the same time, Read [85] verified this result in experiments that utilized a tank which was accelerated downward on a set of vertical rails by the use of rocket motors. The system was capable of producing accelerations up to 75g. Only the small random initial perturbations that are present without forcing the tank (owing to background noise in the system) were used as the initial perturbation for the experiment. The fluids were imaged using a 200 fps 35 mm camera in conjunction with an array of light bulbs behind a diffuser acting as a backlight. Experiments were performed for both 2D and 3D RTI (by altering the aspect ratio of the fluid tank) using air-liquid interfaces and immiscible liquid-liquid interfaces, achieving Atwood numbers in a range of approximately 0.25 to 1. Surfactant was also added to lower the interfacial tension between the various fluids. Because the fluid combinations did not have matched refractive indices, the only information that could be obtained was the evolution of the mixing layer width in time.
Since these early experiments lacked a means of measuring the characteristics of turbulence within the mixing regions, emphasis was placed on the measurement of the growth rate, α. According to the self-similar model, α should

not depend on A or geff. When considering the various fluid combinations used by

Read, it was observed that the growth rate of the spike (αs) depends on A, while

the growth rate of the bubble (αb) does not. Also, the simulations performed by

Youngs resulted in smaller values of αb (0.04 to 0.05) as opposed to the experiments (0.06 to 0.07). These results illustrated a lack of understanding; the difference in α between experiments and simulations raises questions about the validity of the results. In the unmatched refractive index experiments of Read, the width of the mixing region was determined from its edges. Simulations, however, output fractions of one fluid mixed into the other. To account for this, the comparison between experiments and simulations was made with horizontally averaged simulation volume fraction bounds of 1 to 99% in an attempt to mimic the edge tracking method used in the experiments. After his groundbreaking work, Youngs did more work refining his simulations and performing experiments with different configurations [108, 109, 110]. He also co-authored a number of other papers pertaining to the RTI, often performing numerical simulations to compare with the simulations of others. Linden and Redondo [57] then, in 1991, published RT experimental work that used two miscible liquids with a very small Atwood number. Their experimental apparatus was a simple one that consisted of a barrier, separating two fluids in a tank, that was removed by hand to initiate the fluid flow and create the initial perturbations. They used dye to investigate the mixed width. The refractive index was not matched, so the mixing layer width was determined by edge detection. Numerical simulations were also performed to model their experiments, including the large wavelength introduced by removal of the barrier. When comparing the experimental and numerical results, good agreement was found for the growth rate [58]. Kucherenko et al. [52] investigated the RTI by means of an experimental apparatus in which a gas gun was used to fire a container housing two immiscible fluids downward at a rate greater than gravity. Experiments were performed with accelerations of 100g.
The experiments were imaged using either a pulsed x-ray technique on the EKAP system or a light technique on the SOM system. Experiments were performed with either a mercury / water combination, a water / xenon combination or a petrol / zinc chloride aqueous solution combination. The mercury experiments could not be performed using the light technique, so only the x-ray technique was used. Their results display growth rates similar to those of the rocket rig experiments of Youngs. In 1997, experiments were then performed with a range of accelerations up to 700g. These experiments used three different immiscible liquid combinations: either an aqueous NaCl solution / petrol combination, an aqueous ZnCl solution / petrol combination or a Clerici liquid / petrol combination. These liquid combinations gave Atwood numbers of 0.2, 0.5 and 0.71, respectively. The x-ray imaging technique was used for the two largest Atwood number cases and the light imaging technique was used for the two smaller Atwood number cases. Clerici liquid is a poisonous liquid that has a large density, allowing for large Atwood numbers in a liquid-liquid configuration. The α value for the bubble was found here to be 0.07. The separation of the liquids back into a stable configuration was also studied [48, 49, 51]. In 2001, the size of individual structures within the RT turbulent mixing zone was investigated using three liquids. The heaviest liquid was an aqueous solution of Na2SO3 with glycerin added, the middle layer was water with glycerin added and the top layer was petrol. The glycerin was added to match the refractive indices of the three layers. The structures were imaged using a light sheet, where gelatin was added to the middle layer to create small particles with which to image the scattering. The size of such structures was found to be approximately 1 mm [50].
Dalziel (beginning in 1993) [19, 18, 47] performed RT experiments in which he matched refractive indices, which allowed for better mixing width measurements as well as investigation into the nature of the flow. He used an experimental apparatus similar to that of Linden and Redondo. In these experiments, a rectangular Plexiglas tank was used to contain the instability, which was visualized primarily using light induced fluorescence. Images of a slice of the mixing region were captured using a high resolution CCD camera in conjunction with a 1/100 s mechanical shutter. The initially unstable fluid configuration was kept in place by a nylon covered barrier. Once this barrier was removed, the instability developed under the acceleration of gravity. The liquids used were a salt water aqueous solution as the heavy liquid and an isopropanol aqueous solution as the light liquid, producing an Atwood number of 2 × 10⁻³. The fluorescent dye was added to one liquid so that intensity measurements taken at the camera could be related to volume fractions. Although α was investigated in the experiments of Dalziel, a particular value was not obtained due to the ambiguity in defining the bounds. The underlying assumption of turbulent RTI is that the flow is self-similar or self-preserving. When looking at turbulent flows, the self-preservation property is usually attributed to homogeneous, isotropic turbulence. Although RTI is globally anisotropic, RTI may be assumed isotropic locally based on the experimentally observed behavior. It is natural to investigate whether or not the flow is indeed turbulent and whether a self-similar model can be used. Examining the spectrum and whether or not it obeys the Kolmogorov −5/3 law is one way of determining the turbulence condition.
Although it is well accepted that fully developed turbulence displays the k^(−5/3) dependence for the velocity spectrum, the same wavenumber dependence will be present in an initially smooth scalar field that is disturbed by turbulent velocity fluctuations [100]. This is an important point to make, since the lack of velocity fluctuation data from the experiments of Dalziel necessitated a different way of determining how fully developed the flow is. For the experiments of Dalziel [18], a normalized power spectrum was determined by taking an FFT of the concentration profiles. A least squares fit of these data was then taken and compared to the Kolmogorov −5/3 law. In addition, the slope versus time of the normalized power spectrum was also extracted. It was observed that at early times the slope was approximately −2, which was believed to be due to both the large scales introduced by the barrier removal and the fact that a state of fully developed turbulence had not yet been reached. At later times, the slope became larger due to the finite extent of the tank and the globally stable stratification that was taking place. Also, as a way to characterize the structures within the turbulence and demonstrate self-similarity, the fractal dimension was determined. The fractal dimension was determined by counting the number of boxes, N, that an iso-concentration contour passes through when a region (in this case the width of the tank) was broken up into boxes of size ε. From this, a fractal dimension D(ε) was determined, where N(ε) ∝ ε^(−D(ε)). If D is independent of ε then the contour is a fractal, and this implies that the flow is self-similar. Dalziel did indeed determine that D was independent of the box size, at a value of 1.47, which was consistent between the experiments and simulations and for different iso-concentration contours.
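The box-counting procedure described above can be sketched in a few lines. This is our own minimal implementation, not Dalziel's code; a straight line is used as a sanity check, since a smooth (non-fractal) curve should return D = 1.

```python
import math

def box_count(points, eps):
    """Number of eps-sized boxes that a set of contour points touches."""
    return len({(math.floor(x / eps), math.floor(y / eps)) for x, y in points})

def fractal_dimension(points, epsilons):
    """Least-squares slope of log N(eps) vs log(1/eps), since N(eps) ~ eps**-D."""
    xs = [math.log(1.0 / e) for e in epsilons]
    ys = [math.log(box_count(points, e)) for e in epsilons]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Sanity check on a densely sampled straight line from (0,0) to (1,1):
line = [(i / 10000.0, i / 10000.0) for i in range(10001)]
D = fractal_dimension(line, [0.1, 0.05, 0.025, 0.0125])  # ~1
```

In practice the points would be an iso-concentration contour extracted from an image, and D independent of ε over a range of box sizes is the self-similarity indicator discussed above.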
The velocity power spectrum for a 2D flow field is known to follow the relationship Pu(k) ≈ k^(D−2), and it has been suggested [96] that a similar relationship will hold for 3D flow, where D − 1 would be the exponent. With these relationships, the Kolmogorov −5/3 law was realized for the velocity power spectrum. The Reynolds number for this flow was found to be approximately 2500, where Re ≡ HḢ/ν (H here represents the full mixing layer width). In 1994, Snider and Andrews [95] published small Atwood number experimental work utilizing a water channel. A splitter plate was used to separate two liquids having different densities that are flowed at the same velocity. The heavier liquid was positioned on top of the lighter liquid, creating a Rayleigh-Taylor unstable configuration. The mixing layer width grows in space under the acceleration of gravity, thus allowing a statistically steady flow. Cold water atop hot water was used as the working fluid combination in the initial experiments. The growth, although monitored in space, can be related to time by use of Taylor's hypothesis, t = x/U. This relation holds only if u′ << Ū, where u′ is the velocity fluctuation and Ū is the mean velocity; this was indeed demonstrated to be the case. The α value found was 0.07, where the values are practically identical for bubble and spike in these small Atwood number cases. These experiments were followed by experiments using the same water channel in 2003 and 2009, in which statistics to characterize the turbulent nature of the flow were obtained. The combinations used in the later studies were either a cold water / hot water combination or a salt water / fresh water combination. The Atwood numbers for these experiments were between 5.5 × 10⁻⁴ and 7.5 × 10⁻⁴.
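Combining the mixing-layer Reynolds number Re ≡ HḢ/ν quoted above with the self-similar width h = αAgt² shows how rapidly such a flow can transition: Re grows like t³. The sketch and its parameter values are our own, not taken from the experiments described here.

```python
def mixing_reynolds_number(alpha, atwood, g, t, nu):
    """Re = H * Hdot / nu with H = alpha*A*g*t**2, so Hdot = 2*alpha*A*g*t.

    Under the self-similar model Re grows like t**3, so a flow that is
    laminar early in an experiment can still transition at late time.
    """
    H = alpha * atwood * g * t ** 2
    H_dot = 2.0 * alpha * atwood * g * t
    return H * H_dot / nu

# Hypothetical water-like experiment: alpha = 0.06, A = 0.5, nu = 1e-6 m^2/s.
Re = mixing_reynolds_number(0.06, 0.5, 9.81, 0.3, 1.0e-6)  # after 0.3 s
```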
To track both the mixing layer width and the degree of mixing, Andrews' group utilized either a Nigrosine dye [65], light scattering particles [81] or the pH indicator phenolphthalein [65]. For the pH indicator experiments, the two fluids were maintained at different pH values while the indicator was mixed into only one of the liquids. When molecular mixing takes place a color change can be observed, and the corresponding degree of light absorption is related to the volume fraction of one liquid in the other. This has a benefit over dye absorption in that it distinguishes molecular mixing from stirring. Also, PIV measurements to obtain velocities, and temperature measurements for calibration purposes, were utilized. Here, α was determined within bounds of 5 to 95% horizontally averaged volume fractions. A value of 0.07 to 0.085 is obtained, depending on the particular experiment. The experiments performed by the Andrews group allow both spectra from concentration profiles and spectra from velocity fluctuations to be obtained. In the particular experiment that utilized cold water atop hot water, a PIV-S system that allowed for the simultaneous measurement of velocities and fluid concentrations was employed. The colder fluid was seeded with approximately twice as many particles as the hotter fluid. The local average intensity of scattered light could then be related to the concentration of the fluid. The velocity was determined by cross-correlating two subsequent images. With this information, FFTs were taken of both the concentration profiles and the velocity fluctuations. The concentration profile versus wavenumber log-log plot displayed an inertial subrange in which the slope is indeed −5/3. At later times, the Kolmogorov slope was evident for nearly two decades of frequencies. The velocity fluctuations also showed similar behavior, having a −5/3 slope in the inertial subrange, thus implying fully developed turbulence was reached.
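The spectral check described above, an FFT of a profile followed by a least squares fit of the log-log slope, can be sketched as follows. This is our own illustration on a synthetic profile constructed to have a k^(−5/3) power spectrum, not the experimental data.

```python
import cmath, math

def power_spectrum(signal):
    """Direct DFT power spectrum P(k) = |F_k|**2 for k = 1 .. N//2 - 1."""
    N = len(signal)
    return [abs(sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) ** 2
            for k in range(1, N // 2)]

def loglog_slope(spectrum):
    """Least-squares slope of log P versus log k (spectrum[i] is mode i+1)."""
    xs = [math.log(i + 1) for i in range(len(spectrum))]
    ys = [math.log(p) for p in spectrum]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

# Synthetic "concentration profile": mode amplitudes k**(-5/6), so the
# power spectrum follows the Kolmogorov k**(-5/3) law by construction.
N = 128
profile = [sum(k ** (-5.0 / 6.0) * math.cos(2 * math.pi * k * n / N)
               for k in range(1, N // 2)) for n in range(N)]
slope = loglog_slope(power_spectrum(profile))  # ~ -5/3
```

A real analysis would use an FFT library and fit only over the inertial subrange, but the slope-extraction step is the same.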
Andrews’ group also computed the Reynolds number of the flow in an attempt to further characterize the flow’s state of turbulence. They measured a Taylor Reynolds

number of v′λ/ν = 60, where v′ is the velocity fluctuation, λ is a characteristic length scale and ν is the kinematic viscosity. It is believed that a Taylor Reynolds number of 100 is required for transition to turbulence [28]. Also, the Reynolds number proposed by Cook and Dimotakis [14], HḢ/ν (where H represents the full mixing layer width), was calculated to be 1000. Dimonte et al. (in 1996) performed experiments that used a Linear Electric Motor (LEM) rig. With this apparatus, an initially RT stable fluid combination was added to a transparent Delrin and Lexan tank. The tank was accelerated downward by linear electric motors at a rate greater than gravity. The LEM apparatus was capable of variable acceleration up to approximately 70g for constant acceleration and 800g for impulsive acceleration. RT experiments were performed using the constant acceleration, while experiments in which the impulsively driven Richtmyer-Meshkov instability was studied utilized the impulsive acceleration. Backlit imaging was used to visualize the fluids, where a xenon strobe was used as the light source and either CCD cameras or 35 mm film was used to acquire the images. In this apparatus, the imaging system did not move with the tank and therefore each image required its own camera and backlight; this limited the number of images which could be obtained for each experiment to four [22]. Experiments were performed with Schneider [23] in an immiscible configuration with freon and water having an Atwood number of 0.22. In 1998, Schneider, Dimonte and Remington [90] performed experiments in which they matched the refractive indices of the two fluids utilized. The fluids used were decane and salt water, with glucose added to match the refractive index and

AOT added to lower the interfacial tension; the Atwood number for the combination is 0.34. Along with backlit photography, Laser Induced Fluorescence (LIF) was performed by adding a fluorescent dye to the salt water. For the LIF RTI experiments, α values of 0.054 and 0.062 were found for the bubble and spike, respectively. It was found that α for the LIF experiments was 10% smaller than that for the backlit experiments. Later on, this work was continued with many different immiscible fluid combinations spanning a range of Atwood numbers. Laser induced fluorescence was utilized when the Atwood number was less than 0.33, but only backlit photography was utilized for larger Atwood number experiments owing to the inability to match refractive indices. For the bubbles, the average α value was found to be 0.051, with only slight dependence on Atwood number. For the spikes, the α value was found to follow αS ∼ αB R^0.33 [24]. Also, work was done by Dimonte looking at the effect of initial perturbations [21] and non-constant acceleration profiles [27]. Dimonte also investigated the RTI in relation to its dependence on Atwood number [25] and how α varies across different studies [26]. Waddell, Niederhaus and Jacobs (in 2001) [46] then published work on the incompressible RT instability. A relatively safe and inexpensive approach was used to generate the acceleration on a tank containing the liquids. A system of weights and pulleys was used to accelerate an initially RT stable configuration at an acceleration greater than gravity, thus producing an unstable RT configuration. The fluid combinations used were either a calcium nitrate aqueous solution / 70% isopropyl alcohol combination, producing a miscible configuration with an Atwood number of 0.15, or a calcium nitrate aqueous solution / heptane combination that produced an immiscible configuration with an Atwood number of 0.336.
These experiments were found to agree well with existing analysis; unfortunately, the late time turbulent growth regime was not reached. In 2006, Roberts [88] introduced the use of a safe, aqueous liquid that could produce Atwood numbers of 0.56 in a miscible liquid combination. It was observed in Rayleigh-Taylor experiments performed without a forced initial perturbation that the self-similar regime appeared to be reached. These experiments produced an α value of 0.03. At the same time, experiments were performed by Olson [70] that utilized parametric excitation of a tank containing miscible liquids to create small wavelength perturbations at the interface. Olson performed experiments reaching an Atwood number of 0.215, which produced an αb value of 0.04. In parallel with the experimental investigations described above, computer simulations were also performed. Due to computational limitations, simulations of the full 3D turbulent Rayleigh-Taylor instability have only begun to be undertaken in the past 20 years. The instability was first computed at Los Alamos in a 2D framework that was single mode and performed using an Eulerian scheme [17]. Since then, many other computational studies have been performed using a variety of methods [31]. Not until 1990 was the instability computed in three dimensions [101], and there was a difference in α, revealing the need for more 3D simulations. Youngs was the first to propose and simulate multimode self-similar RT growth [107, 109, 110]. From simulations, Youngs obtained α values for bubble growth of approximately 0.03, which are less than those of his experiments. A number of computer programs have been developed and utilized since Youngs. Notably, in 2001, simulations were performed that used front tracking methods, which yielded larger α values. Glimm et al. [38] found a bubble α of 0.07, and Oron et al. [71] found a bubble α value of 0.05 and a spike α value of 0.07.
Oron also derived a drag-buoyancy model describing the bubble and spike late time, single mode, terminal velocities. Dimonte [26] compared the results of many of these codes. However, he only compared programs which do not fully resolve the small scales, using a MILES technique. From this study, α values in the range of 0.02 to 0.04 were obtained when front tracking was not employed, which again do not match previous experiments of a similar nature. Miranda, a computer program from Lawrence Livermore National Lab [15, 2], is a DNS code that computes all the scales of the flow, including the diffusive ones. Using a compact finite difference scheme [55], a high resolution 3D DNS RT simulation performed using Miranda has consistently displayed similarly small α values [103]. This numerical simulation only further reveals the need to better understand the discrepancies between simulations and experiments. In 2005,

Ramaprabhu et al. [82] performed LES simulations in which the initial conditions were varied. It was found that, depending on the initial conditions, the α values could be altered. Self-similarity could be achieved either through nonlinear mode-coupling to create larger scales (bubble merger) or through the growth and saturation of initially present modes (bubble competition). With large wavelengths present (a broadband spectrum), bubble competition will occur and the α value tends to be larger at late time. If only small wavelengths are present (bubble

merger limit), a lower bound of αb = 0.03 was found. Although in the bubble merger limit initial conditions do not affect α, initial conditions do affect α when bubble competition is involved. This may explain some of the discrepancies between simulations and experiments. In summary, the study of the RTI began with the single mode growth of bubbles and spikes. It was observed that nonlinear effects as well as bubble competition and merging will eventually take place. If the initial wavelength is small enough that nonlinear effects occur early in the experiment's development, a mixing layer will develop. This mixing layer is believed to be self-similar and turbulent, and a model was proposed to characterize it. This model has an unknown proportionality constant, and this constant has been a point of contention in many studies, both numerical and experimental. Experiments have been improving over time. Initially, the number of images per experiment was small and the images were of low quality. More recent experiments allowed more images (and thus more data) to be acquired. Also, with the use of different imaging techniques, it became possible to measure the internal statistics of the mixing region. Numerical simulations have also improved over time. With more computational power, it is becoming possible to perform DNS simulations in which the full Navier-Stokes equations are solved in 3D without the need to introduce artificial smoothing factors and without the need to limit resolution.

1.3 Proposed Research

Single mode RTI is already well understood; however, there are still discrepancies in the value of α for the turbulent RTI. Future studies of the RTI will therefore pertain to the self-similar, turbulent mixing layer. For experiments, this requires that the wavelength of the initial perturbation be small enough that there is sufficient experimental time for the turbulent mixing layer to develop. Since the mixing layer width is proportional to the dominant wavelength within it, the width must become many times larger than the initial wavelength to develop a large enough range of scales in the mixing region for turbulence to occur. It is also of interest to verify whether the initial conditions are indeed forgotten, as in the bubble merger limit [82]; this requires the ability to control the perturbations produced so that only small wavelengths with a finite bandwidth are initially present. In addition, the effects of interfacial tension, and how it may contaminate experimental results, must be determined for both immiscible and miscible fluid combinations that have very closely matched properties. Another complication is that, depending on the particular experimental acceleration, an Atwood number must be chosen that is large enough to achieve a measurable mixing layer growth in the limited experimental time. Large Atwood number liquid combinations that are safe are not easy to find, especially in the miscible case.

In our experiments, there are two ways to produce the short wavelengths needed. In preliminary experiments it was observed that small wavelength perturbations sometimes appear when an experiment is performed without forcing any initial perturbation, allowing perturbations to develop on their own. In this case, what appears is an irregular interface with an obvious dominant wavelength that is many times smaller than the tank dimensions.
Very short wavelengths, which would tend to grow fastest in the absence of viscosity, are damped by viscosity, so the initial growth presents itself at a fastest growing wavelength. This wavelength is predicted by, and can be calculated from, viscous linear stability theory as described in the Appendix (sec. A.1). Another method of achieving small wavelength perturbations is to use parametric (Faraday) forcing of the tank. The non-constant coefficient (acceleration varying with time) in this parametric excitation creates ranges of parameters for which the system is stable or unstable. Therefore, under certain circumstances waves may not appear, which makes it difficult to observe results without the proper forcing; a better understanding of the system is therefore needed. Thus, we must model the system (including viscosity) to determine the parameters necessary to observe the desired results, as described in the Appendix (sec. A.3). Alongside the physical experiments, numerical simulations were also performed to help choose the experimental parameters and to help explain the experimental results. The numerical simulations revealed that waves often do not present themselves immediately and patience is required; eventually, however, the interface becomes diffuse and the waves vanish. The late time growth rates of the parametrically forced experiments were compared with the numerics to verify the self-similar model's effectiveness.

Experiments were performed mainly with an Atwood number of approximately 0.5, but some 0.2 Atwood number experiments were also performed because they are more easily visualized. The larger Atwood number experiments have visualization issues associated with the difficulty of matching refractive indices between the light and heavy liquids.
This causes some ambiguity in the measurements, which can be reduced by making comparisons with the smaller Atwood number experiments that allow more freedom in refractive index matching. Experiments were performed mainly on an apparatus that produces approximately 1g of acceleration; however, the smaller Atwood number fluid combinations sometimes necessitated the use of a larger apparatus, producing approximately 10g of acceleration, in order to reach the self-similar regime.

With all the different parameters varied in these experiments, useful conclusions can be inferred about the self-similarity assumption for the late time Rayleigh-Taylor instability. This gives a clearer picture of how the late time growth is affected by altering the different parameters discussed previously, providing a better understanding of the discrepancy between the growth rates determined from numerical simulations and those obtained from experiments. Also, by comparing our Reynolds numbers to those obtained by Andrews' and Dalziel's groups, we can draw conclusions about the flow's turbulent nature.
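The fastest growing wavelength mentioned above follows from viscous linear stability theory (sec. A.1). As a rough illustration only, a commonly quoted approximation for the most unstable wavelength is λ_m ≈ 4π(ν²/(A g_eff))^(1/3); both this prefactor and the water-like fluid properties used below are assumptions for the sketch, not values taken from this work. The sketch also illustrates why the 10g apparatus helps: λ_m scales as g_eff^(−1/3).

```python
import math

# Rough estimate of the fastest-growing Rayleigh-Taylor wavelength in the
# viscous regime, using the commonly quoted scaling
#   lambda_m ~ 4*pi*(nu^2 / (A*g_eff))^(1/3).
# The prefactor and the fluid properties below are illustrative assumptions,
# not numbers from this dissertation; the exact value requires the full
# dispersion relation of sec. A.1.

def fastest_wavelength(nu, atwood, g_eff):
    """Approximate fastest-growing wavelength in meters."""
    return 4.0 * math.pi * (nu**2 / (atwood * g_eff)) ** (1.0 / 3.0)

nu = 1.0e-6        # assumed kinematic viscosity (water-like), m^2/s
A = 0.5            # Atwood number used in most of the experiments
g = 9.81           # m/s^2

lam_1g = fastest_wavelength(nu, A, g)        # WP drop tower, ~1g
lam_10g = fastest_wavelength(nu, A, 10 * g)  # LIM drop tower, ~10g

print(f"lambda_m at  1g: {lam_1g * 1e3:.2f} mm")
print(f"lambda_m at 10g: {lam_10g * 1e3:.2f} mm")
print(f"ratio: {lam_1g / lam_10g:.2f}")      # 10^(1/3) ~ 2.15
```

Whatever the exact prefactor, the g^(−1/3) scaling means a tenfold acceleration increase shrinks the dominant initial wavelength by a factor of about 2.15, hastening the onset of non-linear growth.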

1.4 Self-Similarity

After some growth, the Rayleigh-Taylor instability can no longer be described by the linearized equations of fluid motion derived in section A.1. This breakdown is due to the assumption, used to derive the linearized equations, that perturbation amplitudes and their derivatives are small. One consequence of this assumption is that the product of wavenumber and amplitude, ka, must be small as well; this quantity is effectively the slope of the interface. In the linearized theory we expanded a Taylor series about the center of the interface and linearized by retaining only the first term. If ka is finite, linearization is no longer valid; therefore, we require ka ≪ 1 for the linear analysis to hold. In the experiments presented here we observe a mixing region develop whose width is greater than the wavelength, so the linear analysis does not hold.

There are some characteristics that allow us to deduce that the flow may be self-similar. This is often connected with the idea of fully developed turbulence, because fully developed turbulence implies a self-similar flow [100]. Besides the internal length scales and the large scale mixing layer width, there is no other obvious length scale with which to characterize the flow. Also, the initial conditions no longer appear recognizable in later images, implying that they have been forgotten. Using dimensional analysis and the Buckingham Pi Theorem [73], the similarity relationship can be derived. This derivation follows previous ones by Youngs [108], Ristorcelli & Clark [87] and Jacobs & Dalziel [47]. Assuming viscous and drag effects to be negligible, there are five parameters that govern the system. The viscous effects can be neglected here because we assume the mixing region has grown beyond the scale where viscosity has an effect; this is discussed in the Appendix (sec. A.2).

A mixing layer of width h grows in time t due to the acceleration geff that acts upon the two stratified fluids with densities ρ1 and ρ2. A dimensional matrix can be created, where M is mass, L is length and T is time:

         t     ρ1    ρ2    h     geff
    M    0     1     1     0     0
    L    0    −3    −3     1     1
    T    1     0     0     0    −2

The number of dimensional variables minus the rank of the dimensional matrix gives 5 − 3 = 2 dimensionless groups, f(Π1, Π2) = 0. We will choose t, ρ1 and geff as our repeating variables and h and ρ2 as non-repeating. For the first Π group we choose h as the non-repeating variable,

Π1 = h t^α ρ1^β geff^γ.

This gives: for the M exponent, 0 = β; for the T exponent, 0 = α − 2γ; and for the L exponent, 0 = 1 − 3β + γ. Thus, β = 0, γ = −1 and α = −2. The first non-dimensional variable is therefore

Π1 = h / (geff t²). (1.2)

For the second Π group we choose ρ2 as the non-repeating variable,

Π2 = ρ2 t^α ρ1^β geff^γ.

This gives: for the M exponent, 0 = 1 + β; for the T exponent, 0 = α − 2γ; and for the L exponent, 0 = −3 − 3β + γ. Thus, β = −1, γ = 0 and α = 0. The second non-dimensional variable is therefore

Π2 = ρ2 / ρ1. (1.3)

We realize that we can recast the two linearly independent density variables, ρ1 and ρ2, as two different, also linearly independent variables, ρ2 + ρ1 and ρ2 − ρ1. Since these new variables have the same dimensions as the original variables, the second Pi group will take a similar form,

Π2 = (ρ2 − ρ1) / (ρ2 + ρ1). (1.4)

Equation (1.4) is the previously defined Atwood number A. With these two Π groups we can construct a functional form,

h / (geff t²) = f((ρ2 − ρ1) / (ρ2 + ρ1)).

Rewriting this, we can obtain (as one possibility) the self-similarity model first defined by Youngs [107],

h = α A geff t², (1.5)

where α is a constant of proportionality. From dimensional analysis alone, nothing dictates the functional form of f(A). However, an energy conservation analysis, as performed by Jacobs and Dalziel [47], shows that the linear dependence on A presented in equation (1.5) is the correct one.
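The exponent algebra above is mechanical and easy to verify. The sketch below represents each governing quantity by its (M, L, T) dimension exponents and confirms that the two Π groups derived above are indeed dimensionless; the dictionary-based bookkeeping is an illustrative implementation choice, not part of the original derivation.

```python
# Dimensional check of the two Pi groups derived above.  Each quantity is
# represented by its exponents of (M, L, T); a product of powers is
# dimensionless when the summed exponents are all zero.

dims = {
    "t":     (0, 0, 1),    # time
    "rho1":  (1, -3, 0),   # density of fluid 1
    "rho2":  (1, -3, 0),   # density of fluid 2
    "h":     (0, 1, 0),    # mixing layer width
    "g_eff": (0, 1, -2),   # effective acceleration
}

def dimension(powers):
    """Total (M, L, T) exponents of the product quantity**power."""
    total = [0, 0, 0]
    for name, p in powers.items():
        for i, e in enumerate(dims[name]):
            total[i] += e * p
    return tuple(total)

pi1 = {"h": 1, "t": -2, "g_eff": -1}   # Pi_1 = h / (g_eff t^2)
pi2 = {"rho2": 1, "rho1": -1}          # Pi_2 = rho2 / rho1

print(dimension(pi1))   # (0, 0, 0) -> dimensionless
print(dimension(pi2))   # (0, 0, 0) -> dimensionless
```

Any alternative exponent choice (for example γ = +1 in Π1) fails this check, which is a quick way to catch sign errors in Buckingham Pi bookkeeping.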

CHAPTER 2

EXPERIMENTAL APPARATUSES

The two experimental apparatuses used to perform the Rayleigh-Taylor instability experiments that were examined in this study are often referred to as drop towers. A test sled containing the data producing components is constrained between two vertical rails while allowed to move freely in the vertical direction. Depending on the particular apparatus, either a weight and pulley system or a set of linear induction motors is used to accelerate the test sled downward at a rate greater than gravity. This produces an upward body force on the fluids in the direction opposite of gravity; this effectively switches the direction of gravity and possibly amplifies it. Performing the experiment in this way allows us to initially add the two liquids to a clear Plexiglas tank (the denser of the two first) in a hydrostatically stable configuration and then create the unstable stratification in a controlled manner. The weight and pulley (WP) drop tower was designed and first implemented by Charles Niederhaus and Jesse Waddell [46] and has seen much use since then, including the initial work for this study [88]. The Linear Induction Motor (LIM) drop tower was designed and first implemented by Nicholas Yamashita and Garrett Johnson [106, 44]. For both of these experimental systems, an Advanced Illumination backlight illuminates the fluid containing tank from behind and a high speed camera captures images of the instability development. Both systems also include on-board accelerometers that acquire acceleration measurements during the experiment.

2.1 Weight and Pulley Drop Tower

2.1.1 Drop Tower and Test Sled

This drop tower consists of two 102 mm × 102 mm, 3.05 meter long, thick-walled vertical steel beams on which are mounted precision linear rails. Four Thompson Roundway linear roller bearings (two for each guide rail) are attached to a test sled (housing the liquid tank and imaging system), which travels down the rails. The bearings provide a low friction mechanical connection to minimize acceleration fluctuations caused by friction drag [102]. The test sled is made from Techno-Isel 30 mm × 250 mm extruded aluminum panels that have a system of slots allowing easy fastening of laboratory instruments [67]. An I-beam attached to the bottom of the test sled provides the main support through which the cable directs its force. At the bottom of the drop tower are two Enidine shock absorbers to stop the sled. They were chosen to keep the acceleration below 70g, the camera's maximum acceleration limit. It was, however, noticed that when maximum acceleration was used, the peak acceleration exceeded the 70g maximum specified for the camera. To solve this problem, clay was used in conjunction with the shocks to absorb some of the impact. A plot of the data output from a ±100g accelerometer, which is mounted to the tank lid, is shown in figure 2.1. It is observed that the acceleration stays below 60g when the clay is implemented. Another small improvement over previous experiments in this apparatus was a change to the way the rubber stoppers are affixed to the I-beam. In the past, they were attached with Velcro. However, the use of clay and various experimental oils complicated matters: the clay would clog the spaces between the Velcro hooks, preventing it from adhering, and the oils would dissolve the glue holding the Velcro to the I-beam. To remedy this, new rubber stoppers were formed with strong neodymium magnets impregnated within them. This allows attachment of the rubber stoppers without degradation of the holding force.

Figure 2.1: Acceleration output after the implementation of clay, shown to be within the 70g camera specification limit. The acceleration is represented in g's (the ordinate) and time in milliseconds (the abscissa).

2.1.2 Release Mechanism

Since timing is of great importance to the success of these experiments, a method is needed to quickly and repeatably release the sled, initiated by an electronic trigger. To accomplish this, a release mechanism with an unstable linkage secures a 1/4 inch bolt attached to the test sled. The unstable linkage quickly releases the bolt when a small force is applied by a Dormeyer Industries P10-201L solenoid. In the past a 1/4 inch shoulder bolt was used; however, after a few occurrences of the bolt shearing apart (posing a safety risk), it was replaced with a straight bolt of the same diameter. This doubles the diameter under shear, producing only a quarter of the shear stress that was present in the shoulder bolt. Since only a small force is required for release, the entire experimental apparatus poses a safety hazard. For this reason a second, P6-101L "safety" solenoid is used, which effectively stabilizes the unstable linkage and must be disengaged before the test sled can be released. A SolidWorks rendering of the release mechanism is depicted in figure 2.2. More details of this mechanism can be found in the thesis of Jesse Waddell [102].

Figure 2.2: SolidWorks rendering of the WP system release mechanism.

2.1.3 Acceleration Production

In order to accelerate the test sled downward at a rate greater than that of gravity, and thus create an unstable RT configuration, an ingenious and simple device was constructed by Waddell [102] that uses a weight and pulley system. SolidWorks renderings of the setup are shown in figure 2.4. A cable is fed through 5 pulleys so that the downward acceleration of the weight sled, which is much more massive than the test sled, pulls the test sled downward. The pulley system is designed so that the tension in the cable creates a larger force on the test sled than that of gravity. This is very similar to a block and tackle system in reverse: the small distance traversed by the large mass is translated into a larger distance at the test sled, which requires a larger velocity and acceleration. The mass of the weight sled can be altered by changing the number of lead bricks contained in it, and the corresponding weight can be increased up to 1000 lbs, yielding a test sled acceleration of approximately 2g. Subtracting the 1g downward acceleration of gravity from the 2g upward inertial acceleration experienced by the liquids as the cable pulls the test sled downward, the body force experienced by the liquids at maximum acceleration corresponds to a net effective acceleration of approximately 1g. A typical acceleration profile measured by a ±5g accelerometer attached to the tank lid is shown in figure 2.3. A line fit through the main acceleration region, between approximately 100 and 400 milliseconds, yields a 1g acceleration. A rendering of the test sled with the experimental equipment attached is shown in figure 2.5.
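The 1g net effective acceleration quoted above follows from viewing the liquids in the non-inertial frame of the falling sled, where an upward body force of (a_sled − g) appears. A minimal sketch of that bookkeeping:

```python
# Effective acceleration felt by the liquids in the falling-sled frame.
# A sled accelerating downward at a_sled subjects its contents to an upward
# inertial body force of (a_sled - g); the numbers mirror the ~2g sled
# acceleration quoted for the WP tower.

g = 9.81                 # m/s^2
a_sled = 2.0 * g         # downward sled acceleration from the weight and pulley

g_eff = a_sled - g       # upward effective acceleration on the stratified liquids
print(f"g_eff = {g_eff / g:.1f} g")   # -> 1.0 g, the unstable configuration
```

The same relation explains the LIM tower: a 10g sled acceleration yields roughly 9g of upward effective acceleration on the liquids, though the text quotes the nominal 10g figure.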

Figure 2.3: Typical acceleration profile during an experiment for the WP apparatus. The acceleration is represented in g's (the ordinate) and time in milliseconds (the abscissa).

Figure 2.4: Renderings of the weight and pulley system.

Figure 2.5: Rendering of the WP test sled with the experimental equipment attached.

2.1.4 Data Acquisition and Timing

Some noteworthy improvements have been made to the apparatus since previous experiments performed on the same system. These improvements were mainly on the data acquisition side of the experiment, while the general operation of the acceleration producing mechanism remains unchanged. The 60 fps rate of the analog camera used in earlier studies limited the amount of data that could be acquired. For the faster developing, larger Atwood number and larger wavenumber experiments, this resulted in approximately ten frames of data acquired per experiment. In addition to this poor temporal resolution, the camera had a grayscale resolution of only 8 bits, which was a limitation as well. It was deemed necessary to replace the camera with one that has a faster frame rate and is therefore better suited to the types of fluid combinations under consideration for this work. In the past, separate computers were used to acquire image data and to record accelerometer measurements, which created an added complication: the synchronization of the images with the acceleration data had to be performed manually, increasing the possibility of error. It was decided that using LabVIEW to unify all data acquisition would be ideal and would allow further automation. A Mac Pro computer with two quad-core Intel processors and 6 GB of RAM was used for this implementation due to its ability to run both the Windows and Mac operating systems when required. A 200 fps Pulnix TM-6740CL camera with a CameraLink interface, in conjunction with a National Instruments PCIe-1427 frame grabber, was chosen. This gave a more than threefold increase in camera frame rate, thus increasing the temporal resolution of the data that could be acquired. This camera also has a bit depth of 10 rather than 8 (a factor of four increase in the number of grayscale levels).
Also, the newer CCD sensor provides nearly a factor of two increase in sensitivity with no loss in SNR. It was desired to have the entire illuminated area of the tank in the field of view of the camera. The illuminated area of the backlight has dimensions of 5 inches across by 6 inches high. To determine the proper focal length lens to use, the thin lens equation written in the following form was used [41],

y_i / f = y_o / (s_o − f), (2.1)

where y_o and y_i represent the object (the backlight, when considering the field of view) and image (sensor) dimensions, s_o is the distance from the lens to the object (taken from the middle of the tank) and f is the necessary lens focal length. Considering that the closest distance of the tank to the camera is 11 inches, the maximum extent of the backlight is 6 inches and the maximum extent of the sensor is 4.74 mm, it was determined from equation (2.1) that an 8 mm focal length lens would be suitable. A Schneider lens with a minimum f-number of f/1.4 was chosen. An illustration of the imaging system is depicted in figure 2.6, which makes clear that many of the light rays emanating from the diffuse backlight do not reach the camera. To clarify the imaging system further, two points are imaged with the principal ray and the marginal ray drawn.

For high speed imaging, only two types of camera interface are suitable: the Gigabit Ethernet (GigE) interface and the CameraLink interface. The GigE interface can accommodate cables up to 100 meters in length and has hardware compatibility with most computers, which means no frame grabber is required. Its major drawback is latency in data transmission, because the images are broken up into packets; this latency is problematic in our setup, where real time triggers are needed to synchronize the strobed backlight. The CameraLink interface's two major drawbacks are its cable length limit of 10 meters and its need for a dedicated frame grabber; however, given the ease of real time triggering, it is ideal and was selected.

Acceleration measurements are also acquired during each experiment. Two accelerometers were used: a Silicon Designs model 2210-005 capacitive ±5g accelerometer and a PCB model JQ353-B32 constant-current piezoelectric ±100g accelerometer.
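Returning to the lens selection above, equation (2.1) can be solved for f and checked numerically with the dimensions quoted in the text:

```python
# Focal length estimate from equation (2.1), y_i/f = y_o/(s_o - f),
# rearranged to f = y_i * s_o / (y_o + y_i).  The numbers are those quoted
# in the text: 11 in working distance, 6 in backlight extent, 4.74 mm
# sensor extent.
IN_TO_MM = 25.4

y_o = 6.0 * IN_TO_MM     # object (backlight) extent, mm
y_i = 4.74               # image (sensor) extent, mm
s_o = 11.0 * IN_TO_MM    # lens-to-object distance, mm

# y_i * (s_o - f) = y_o * f  =>  f = y_i * s_o / (y_o + y_i)
f = y_i * s_o / (y_o + y_i)
print(f"required focal length: {f:.1f} mm")   # ~8.4 mm, so an 8 mm lens fits
```

An 8 mm lens slightly over-fills the field of view relative to the 8.4 mm ideal, ensuring the full 6 inch backlight extent lands on the sensor.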
These are necessary to understand the experimental results and to monitor the sudden deceleration at the end of the experiment. Both of these accelerometers

Figure 2.6: Illustration of the imaging system

have low output impedance (in the ones to tens of ohms) and therefore interface well with the data acquisition board.

It became necessary to acquire not just vertical acceleration, but also transverse accelerations, which can affect the experimental outcomes as well. To accomplish this, an inexpensive 3-axis ±6g analog capacitive accelerometer was chosen (Freescale model MMA7361L). The inexpensive nature of this accelerometer meant that its output impedance was higher (32 kΩ), and therefore, when interfaced with the data acquisition board, ghosting between channels took place. The high impedance of the accelerometer, coupled with the small sampling capacitance of the data acquisition card, creates an RC circuit whose decay time constant equals the impedance multiplied by that capacitance. If the impedance is too large, this time constant becomes appreciable. To correct this issue, the interchannel delay between successive measurements from different accelerometers and accelerometer channels was increased. With a signal acquisition frequency of 1000 Hz and the interchannel delay for the 5 channels set to 1/5000 s, the ghosting effect was reduced to less than 1%. When only the low impedance accelerometers were utilized, the interchannel delay was set to the minimum, corresponding to practically simultaneous measurements between channels. For our purposes, simultaneous measurements of all accelerometer signals are not necessary; only the times of the signal acquisitions are required.

Data acquired from the accelerometers must be synchronized with the camera, and a National Instruments data acquisition (DAQ) card was chosen for this task. National Instruments frame grabbers and data acquisition boards can be internally connected via a Real Time System Integration (RTSI) bus using an IDE cable. This allows the exchange of trigger signals between the two boards, making it ideal for real time data acquisition and timing.
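The RC settling argument above can be made concrete. The sketch below uses the 32 kΩ source impedance from the text, but the ADC sampling capacitance is an assumed, typical figure (not given in the dissertation), and a single-pole RC is only a rough model of multiplexer ghosting:

```python
import math

# Settling-time sketch for the multiplexed DAQ input.  The source impedance
# of the 3-axis accelerometer (32 kOhm, from the text) charges the sampling
# capacitance of the ADC front end; the capacitance value below is an
# assumed, typical figure rather than one given in the dissertation.

R = 32e3          # accelerometer output impedance, ohms (from the text)
C = 100e-12       # assumed ADC sampling capacitance, farads
tau = R * C       # RC time constant of the input network

delay = 1.0 / 5000.0                 # interchannel delay used, seconds
residual = math.exp(-delay / tau)    # single-pole estimate of leftover signal

print(f"tau = {tau * 1e6:.1f} us, interchannel delay = {delay * 1e6:.0f} us")
print(f"residual ghosting after the delay: {residual:.2e}")
```

Under these assumptions τ ≈ 3.2 µs, so the 200 µs interchannel delay allows many dozens of time constants of settling, comfortably consistent with the sub-1% ghosting reported in the text.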
The PCIe-6251 multifunction data acquisition board was chosen for its versatility. It provides an upgrade from the 12 bit resolution of the board used previously to 16 bits. In addition, the multifunction DAQ has extra functionality, such as counters and delay generators, which proved useful. The output of the frame grabber is fed into the DAQ card so that the frame rate is accurately measured. Also, since the shutter of the camera opens at the end of its cycle, whereas the TTL synchronization signal is transmitted at the beginning, a delay was configured in the DAQ board to delay the trigger pulse sent to the backlight so that it strobes at the correct time.

The system is also designed to run a stepper motor to oscillate the tank horizontally, using a Galil DMC-1500 motion controller and stepper motor controller. The motion controller was configured to activate both the release and safety solenoids that release the test sled and initiate the experiment. This aspect of the experiment was not changed from previous studies; however, it was incorporated into LabVIEW to provide more versatility. The particular program that the motion controller uses to control the stepper motor and activate the solenoids is user selectable in LabVIEW, which creates a text file (based on user entered parameters) that is the program the controller runs. When using the stepper motor, it is ideal to have the motion controller send a trigger signal to LabVIEW when it has completed its operation; for simplicity, the same technique is employed when the stepper motor is not used. For this study, the stepper motor is not used.

During some tests, the test sled would release and no images would be acquired. It was determined that the experiment was proceeding before the camera was ready. A failsafe needed to be added so that a failure of camera readiness would not result in the loss of an experiment.
Recovery of the LST heavy liquid is time consuming, so a wasted miscible experiment wastes a large amount of time. Empty images are buffered before they are acquired, and the buffering time is variable, depending on the number of programs running in the background, since system memory is used. Initially, a delay was set to give enough time for an image buffer to be created. Sometimes, however, the experiment would start before all the images were buffered and therefore no images would be acquired. To remedy this, a software trigger was implemented in LabVIEW that allows the solenoid release part of the program to run only once all the images are buffered. This is usually accomplished (as shown in the LabVIEW example files) by terminating the LabVIEW program after all images are acquired. In later experiments, camera connections became problematic and would loosen at impact before all images were acquired. This meant that although all the experimentally important images were acquired, the last few images would never be received and the program would hang. To accommodate this failure scenario, a method of acquiring images one at a time (while still buffering all the images first), without requiring all the images to contain valid data, was implemented.

In summary, the new system works as follows. First, all devices are configured to work in real time. The frame grabber is configured to acquire images when it receives a trigger signal from the DMC-1500 motion controller. It is typically configured to acquire 500 images (a number that can be changed via the LabVIEW control panel) and to send out a pulse, through the DAQ board, every time it acquires an image. A hardware delay is implemented on the DAQ card to send out a pulse a defined amount of time after it receives a pulse from the frame grabber. This delayed pulse is sent to the controller of the strobing LED backlight so that it is in sync with
Also, the DAQ board is configured to begin acquiring accelerometer output signals from the 5g, the 100g and the 3-axis 6g accelerometers when ± ± ± it receives its first pulse from the hardware delay that is sent for the first frame. In addition to this, the DAQ card is configured to count the time between images, giving the true frame rate. This value is verified to be very close to 200 fps. All of this data is saved in an Excel file. In order to maintain the interactive nature (including access to the formulas) of the Excel file that is created, LabVIEW is programmed to create a Visual Basic Script that is executed after the raw data is filled. The Visual Basic Script opens up Excel and fills in the formulas. This method was used because LabVIEW is unable to create an Excel file in a way that would preserve the formulas used for the cell calculations.

2.2 Linear Induction Motor Drop Tower

2.2.1 Drop Tower and Test Sled

The linear induction motor drop tower is an 8.2 meter tall structure built primarily from 4 inch × 4 inch hollow steel tubing with 1/4 inch thick walls. Very High Molecular Weight (VHMW) plastic guides are attached to the two vertical steel tubes that span the entire tower; the test sled follows along these plastic guides. The test sled is an aluminum plate to which all the data gathering components are attached. After being accelerated, the plate is substantially slowed by permanent magnet brakes; after passing through these brakes, two shock absorbers bring the plate to a complete stop. A computer generated image of the tower is shown in figure 2.7. More information about the LIM drop tower can be found in the theses of Nicholas Yamashita and Garrett Johnson [106, 44].

2.2.2 Release Mechanism

The aluminum plate must be suspended within the Linear Induction Motors at the top of the drop tower in order to proceed with an experiment. The plate is initially held in place using an electromagnet at the top of the drop tower attached to a

rectangular piece of steel that is affixed to the top of the aluminum plate. The electromagnet allows for adjustment (depending on the weight of the test sled) so that only the minimum force needed to hold the plate in place is used. The large force created by the LIM itself is what releases the plate.

Figure 2.7: A computer generated image of the LIM drop tower.

2.2.3 Acceleration Production

Similar to the weight and pulley system, the aluminum plate of the LIM drop tower is accelerated downward by the linear induction motors at a rate greater than gravity, producing an upward body force on the fluid system. Induction motors have no direct electrical connection between the stator and rotor, so such connections cannot erode over time. A linear induction motor can be understood by imagining the unrolling of a standard circular induction motor. Several laws of electromagnetism interact to allow such a system to work. The aluminum plate (analogous to the rotor of a circular motor) is accelerated downward by the Lorentz force, in which a current within the plate interacts with an external magnetic field to produce motion. This current within the aluminum plate is induced by a changing magnetic field, per Faraday's law of induction. The changing magnetic field is created, per Ampère's law, by AC current carrying windings in the two linear induction motors (analogous to the stator of a circular motor) affixed to the top of the vertical rail system. This is the same changing magnetic field that creates the Lorentz force used to propel the plate downward. The induction motors used here are three phase, which eliminates the need for the special starting mechanism that would otherwise be required. The deceleration of the plate is accomplished by permanent magnet brakes, as mentioned earlier, which operate in a way similar to the linear induction motors. The brakes are a Halbach array of magnets, creating a very strong, permanent system of alternating magnetic fields. As the aluminum plate passes through this magnetic field, it experiences a time dependent magnetic flux and, by Faraday's law of induction, currents (often referred to as eddy currents) are induced in it.
These currents interact with the magnetic field to produce a Lorentz force that opposes the motion, consistent with Lenz's law. This system currently produces accelerations of approximately 10g; however, it is theoretically possible to achieve accelerations as large as 100g with these motors. These accelerations are a factor of 10 greater than those of the WP drop tower, which is the reason this tower was designed and built. With this larger acceleration there is more exponential growth of the instability in the linear regime, which is extremely useful for the unforced initial condition experiments. In the WP apparatus, the sub-micron amplitude perturbations of the unforced experiments grow to the point of non-linear interactions, and thus form a visible mixing region, only with Atwood numbers of approximately 0.5 and greater. With this drop tower, growth is observable at smaller Atwood numbers. Also, the larger acceleration of this drop tower translates into a smaller fastest growing wavelength, as dictated by viscous linear stability theory (sec. A.1), and these smaller wavelengths help reach the non-linear regime faster as well. A plot of acceleration versus time for a typical experiment is shown in figure 2.8. Useful experimental data is acquired only until approximately 120 ms, after which the acceleration is no longer constant. Note that the average acceleration is almost exactly 10g (negative because the tank is accelerated downward).

Figure 2.8: Typical acceleration profile during a LIM experiment. The acceleration is represented in g's (the ordinate) and time is represented in milliseconds (the abscissa).

2.2.4 Data Acquisition and Timing

With this apparatus, the fluids are imaged through a Plexiglas tank using a white LED Advanced Illumination backlight in conjunction with a Phantom 1200 fps, 12-bit monochrome camera. Here a white backlight is used, as opposed to the red one used in the WP drop tower. Acceleration measurements are acquired using a G-logger in conjunction with a ±100g Silicon Design capacitive accelerometer. The G-logger, camera and backlight comprise a battery powered system that is not tethered to a computer. Synchronization of the system was accomplished using an extra channel on the G-logger (since there are three accelerometer inputs and only one accelerometer). Once the test sled is fired, a weighted switch momentarily engages (due to its inertia) and triggers the camera to start taking images and the G-logger to start recording. Most of the synchronization and optimization on the LIM drop tower was performed by Matthew Mokler.

2.3 Fluid Tanks

The experiments presented here were performed using square Plexiglas tanks to contain the liquids. The inside dimensions of the tanks are 3 in. x 3 in. as viewed from the top. The exact height depends on the particular apparatus used, but it is greater than double the horizontal dimension. To prevent air bubbles from entering during filling, air is allowed to escape from a valve in the center of the lid. The tank lid has an O-ring groove machined into it, and the WP drop tower lid also has a spherical cap cut into it to help guide air bubbles into the valve (the derivation and implementation of the spherical cap CNC program can be found in section A.5 of the Appendix). Before an experiment, the lid is secured in place and a chemical resistant O-ring keeps the system sealed. A SolidWorks drawing of the tank used for the WP apparatus (before the valve was added to the lid) is shown in figure 2.9, where mirrors have been added to the bottom and sides to aid in visualization. It was noticed that when miscible liquid combinations are used, small bubbles would form during an experiment and the volume of the liquid would decrease. From this, it became clear that the assumption that an ideal solution [68] was being created during mixing was not valid. The volume change caused regions of low pressure to form and bubbles to be created. These bubbles are not wanted in

Figure 2.9: A SolidWorks drawing of the Plexiglas fluid tank used in the WP apparatus. The valve in the tank lid is not depicted in this drawing; the experimental fluids, however, are.

experiments because they obscure the mixing region. In order to allow the volume of the tank to change to accommodate the miscible liquid volume decrease (discussed further in chapter 3), different methods were tested. First, two small air-filled balloons glued to the inside of the lid were tested. However, it was observed that small air bubbles would remain in the region between the balloons during the filling process. To minimize this effect, a single larger air-filled balloon was tested. This was sometimes effective at removing air bubbles; however, having a larger balloon inside the tank raises questions about its effect on the instability development. Such an effect was possibly observed, and this method was eliminated. By attaching a balloon filled with the light liquid to the open valve on the outside of the tank, we mitigate this issue and still account for the volume change. When the liquid volume in the tank decreases, the external balloon acts as a source of liquid to compensate for the change. This technique, complemented with injecting light liquid through the valve below the surface, keeps the number of bubbles, both from boiling and trapped air, to a minimum. Image sequences of experiments employing a 0.48 Atwood number, diluted LST heavy liquid / 90% ethyl alcohol - 10% water liquid combination for the small internal balloons, large internal balloon and external balloon, respectively, are displayed below (figures 2.10, 2.11 and 2.12). It is observed that bubbles are present in both internal balloon cases but are not present in the external balloon case. Also, the late time mixing region development is not as uniform across the tank when internal balloons are used as when the external balloon is used.

Figure 2.10: An experiment performed with LST heavy liquid and 90% ethyl alcohol - 10% water, on the WP drop tower with an Atwood number of 0.48. Small balloons are affixed to the inside top of the tank to account for the volume change. Bubbles are present (circled in the last image) and at late time the mixing region development is not constant across the tank (excluding side roll-ups).

Figure 2.11: An experiment performed with LST heavy liquid and 90% ethyl alcohol - 10% water, on the WP drop tower with an Atwood number of 0.48. One large balloon is affixed to the inside top of the tank to account for the volume change. Bubbles are present (circled in the last image) and at late time the mixing region development is not constant across the tank (excluding side roll-ups).

Figure 2.12: An experiment performed with LST heavy liquid and 90% ethyl alcohol - 10% water, on the WP drop tower with an Atwood number of 0.48. A light liquid filled balloon is affixed to the valve on the outside of the tank to account for the volume change. Few bubbles are present (circled in the last image) and the mixing region development is constant across the tank (excluding side roll-ups).

CHAPTER 3

EXPERIMENTAL LIQUIDS AND IMAGING

A number of different liquids have been utilized in this study. Since the effect of miscibility on the self-similarity hypothesis is investigated here, liquid combinations that facilitate this are necessary; the instability is studied in both miscible and immiscible configurations. Smaller Atwood number combinations are also used for comparison purposes when necessary. Much of the difficulty associated with this study involved determining a heavy liquid to use in the larger Atwood number experiments. For most liquids, as the density increases the refractive index increases as well. This creates difficulty when trying to visualize the flow. The large refractive index associated with the large density of the heavy liquid is difficult to match with a less dense liquid that would yield an Atwood number of approximately 0.5. Matching the refractive indices is desirable as it eliminates artifacts associated with the refractive index mismatch. Matching the refractive index for the miscible case was deemed not practical since no safe light liquids could be found, so an attempt was made to determine how the refractive index mismatch affects the visual output of the experiments. Experimental results from liquid combinations where the refractive indices did not match were compared to results where the refractive index was matched. For these large Atwood number experiments, it was desirable to find a liquid that was heavier than those previously used, safe enough to be used in a standard laboratory environment, and of a kinematic viscosity not significantly greater than that of water. In addition to these constraints, it was necessary that a miscible two-fluid combination could be found. The heavy fluid eventually chosen was “LST Heavy Liquid” produced by Central Chemical Consulting in Australia. This liquid has a proprietary formula consisting of an aqueous solution of lithium heteropolytungstates.
The other likely choice for a heavy liquid is “LMT Heavy Liquid”. Both of these liquids are aqueous solutions of inorganic lithium-tungsten salts with slightly different molecular configurations. The LMT salt solution has a kinematic viscosity more than four times that of the LST liquid at a specific gravity of 2.85 (12.5 cSt versus 3.9 cSt). Since we often desire small wavelength perturbations to develop, the smaller viscosity is necessary to prevent damping of those perturbations. The LST heavy liquid is also more thermally stable. The high price (approximately $700 per liter) of the liquid requires that it be reused. The pure salt is recovered by evaporating all the liquid in a rotary evaporator, reconstituting with distilled water and then filtering to regain purity. Depending on the light liquid's boiling point, temperatures up to 200°C are needed to evaporate all the liquid in a timely way. The LMT liquid decomposes above 100°C, while the LST liquid decomposes above 140°C. The price of the LST liquid is, however, double that of its LMT counterpart. Even at the higher cost, the LST liquid is the better choice. For the smaller Atwood number experiments, the heavy liquid used was a calcium nitrate aqueous solution. At a specific gravity of 1.2, its kinematic viscosity is 1.1 cSt. This particular fluid was chosen because it is inexpensive and has been used extensively in our laboratory.
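The Atwood numbers quoted throughout this chapter follow directly from the fluid densities via A = (ρ₁ − ρ₂)/(ρ₁ + ρ₂). As a quick check (a Python sketch; the light liquid specific gravities of 1.00 for water and 0.786 for isopropyl alcohol are nominal handbook values, not measurements from this study):

```python
def atwood(rho_heavy, rho_light):
    # A = (rho1 - rho2) / (rho1 + rho2), with rho1 the heavier fluid
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

print(round(atwood(2.85, 1.00), 2))   # pure LST heavy liquid / water  -> 0.48
print(round(atwood(1.20, 0.786), 2))  # calcium nitrate soln / isopropyl alcohol -> 0.21
```

These reproduce the A = 0.48 and A = 0.21 values used for the large and small Atwood number experiments, respectively.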

3.1 Refractive Index Mismatch

3.1.1 Liquid Combinations

When LST heavy liquid is used in combination with isopropyl alcohol, an Atwood number of 0.57 can be achieved in a miscible configuration. However, an Atwood number of 0.48 (which corresponds to the pure LST heavy liquid and pure water combination) was chosen. This Atwood number is more practical since both immiscible and miscible liquid combinations are needed, and reaching larger Atwood numbers for all combinations is not always possible. Also, in past simulations [2, 109], an Atwood number of approximately 0.5 was commonly used, presenting the possibility of comparison with our experiments. In the past, isopropyl alcohol was used [88] as the light liquid. It was later observed that the solubility of the LST salt in isopropyl alcohol is not high enough for the salt to remain fully dissolved during an experiment. When the salt is not completely dissolved, a suspension of salt particles appears in the mixing region, creating Mie scattering and resulting in a dark region when using backlit imaging or a white mixing region when observed with ambient light. This Mie scattering adds another layer of complexity to the visualization. The solubility of the LST salt in ethyl alcohol, however, was observed to be greater, making it an acceptable choice. It was found, though, that the denaturing agent in ethyl alcohol reacts with the LST liquid, altering its light absorption characteristics; therefore, care must be taken to use pure ethyl alcohol. In a proportion of 90% ethyl alcohol and 10% water, complete mixing appears to take place with a diluted heavy liquid combination producing an Atwood number of 0.48. As an added complication, the approximately 1 to 10% decrease in volume associated with alcohol and water mixing (as discussed in section 2.3) creates a partial vacuum which causes the pressure in the liquid to drop below the saturated liquid vapor pressure, resulting in cavitation.
This effect was observed with ethyl alcohol, but not when isopropyl alcohol was used. After further investigation, it was determined that the decrease in volume associated with mixing heavy liquid with pure isopropyl alcohol to a 50% mixture is 3%, whereas for pure ethyl alcohol it is 7.5% [69, 74]. The light liquid will cavitate to fill this space until the pressure in the void rises above the saturated liquid vapor pressure. Using the known saturated liquid vapor pressures of the alcohols, the volume change, the density of the liquid and values for the gas constant of the alcohol vapor, an estimate of the liquid volume required to relieve the pressure was calculated using the ideal gas equation. It was determined that for the isopropyl case 0.003 mL is required, whereas for the ethanol case 0.008 mL is needed. The ethyl alcohol mixture requires almost three times the volume of the isopropyl alcohol mixture, which explains why cavitation may be seen with ethyl alcohol and not with isopropyl alcohol. These effects were greatly reduced by allowing the alcohol to flow in from the external balloon to fill the void created during mixing. For the immiscible configuration with an Atwood number of 0.48, diluted LST heavy liquid is used in conjunction with Clearco silicone oil as the light liquid. Silicone oil was chosen because it has a low viscosity and is non-toxic. There are a number of different Clearco silicone oils with different viscosities and densities, which permits closer matching of experimental parameters between miscible and immiscible experiments (made as close as possible here). The fastest growing wavelength is essentially the initial condition for the unforced case; therefore, the fastest growing wavelength is an additional parameter that is matched, in addition to the Atwood number.
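The ideal gas estimate described above can be sketched as follows. The temperature, vapor pressures and void volumes below are illustrative placeholder values chosen only to demonstrate the method; the dissertation's own inputs, which produced the 0.003 mL and 0.008 mL figures, are not reproduced here. Python stands in for the original hand calculation.

```python
R, T = 8.314, 293.0  # universal gas constant (J/(mol K)) and temperature (K)

def liquid_volume_to_fill_void(p_sat, void_m3, molar_mass, rho_liq):
    """Liquid volume (m^3) that must evaporate so that the vapor, treated as
    an ideal gas at its saturation pressure, fills the cavitation void."""
    n_mol = p_sat * void_m3 / (R * T)        # ideal gas law: p V = n R T
    return n_mol * molar_mass / rho_liq      # convert moles of vapor to liquid volume

# Assumed: 3% (isopropyl) vs 7.5% (ethanol) contraction of a nominal 100 mL
# mixed volume, and room-temperature saturation pressures of ~4.2 and ~5.8 kPa.
v_iso = liquid_volume_to_fill_void(4.2e3, 3.0e-6, 0.0601, 786.0)
v_eth = liquid_volume_to_fill_void(5.8e3, 7.5e-6, 0.0461, 789.0)
# ethanol requires roughly 2-3 times more liquid, consistent with the text's
# observation that the ethanol mixture cavitates more readily
```

The absolute numbers depend on the assumed void volume, but the ethanol-to-isopropyl ratio of roughly three is insensitive to it, matching the comparison made above.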
Without considering surface tension, the fastest growing wavelength is calculated using equation (A.30) and the fluid combination in closest agreement with the miscible combination is chosen. It was found that the fastest growing wavelength of the 5 cSt silicone oil / LST heavy liquid combination matches that of the miscible combination to within 10%. Surfactant was also added to the liquid combination to reduce the interfacial tension. The amount of surfactant added was based primarily on the saturation point of the surfactant; as much as possible was added without it coming out of solution (less than 1 g of AOT per liter for the LST heavy liquid). Both AOT and Polysorbate 80 were tested. AOT is a solid and requires heating the liquid in order to dissolve, which is time consuming. Polysorbate 80 has the advantage of being a liquid, so it is easily dissolved. In addition to shortening the liquid preparation time, Polysorbate 80 has the added benefit of being removable by evaporation, allowing full recovery of the heavy liquid. Unfortunately, it was noticed that Polysorbate 80 forms an emulsion more readily, which often contaminates the interface with bubbles. AOT does not have this effect and was chosen for this study. The calcium nitrate aqueous solution is the other heavy liquid used, which, when combined with isopropyl alcohol, results in an Atwood number of 0.21 in a miscible configuration. With these smaller Atwood number experiments there was no indication that the calcium nitrate was not fully dissolving in the mixing region, and therefore either isopropyl or ethyl alcohol can be used. In the past, isopropyl alcohol was used almost exclusively. Isopropyl alcohol is not typically denatured, so contamination was not a concern. An immiscible, small Atwood number combination of calcium nitrate aqueous solution / silicone oil was also used.
The fastest growing wavelength (characterized by the viscosities and densities) of the miscible calcium nitrate solution / isopropyl alcohol combination can be matched to within 10% when a calcium nitrate solution is used in conjunction with 2 cSt silicone oil. Surfactant was also added to this combination in the same manner as for the larger Atwood number configuration. The refractive index mismatched liquid combinations used in this study are listed in table 3.1.

              A = 0.48 (forced and unforced)            A = 0.21

Miscible      lithium polytungstate salt aqueous        calcium nitrate salt aqueous
              soln. / 90% ethanol - 10% water           soln. / isopropyl alcohol
              mixture

Immiscible    lithium polytungstate salt aqueous        calcium nitrate salt aqueous
              soln. with AOT as surfactant /            soln. with AOT as surfactant /
              low viscosity silicone oil                low viscosity silicone oil

Table 3.1: The fluid combinations used for the refractive index mismatched case.

3.1.2 Mixing Layer Imaging

When experiments with unmatched refractive indices were initially performed, it was observed that once a mixing region develops at the interface between the two fluids it becomes darker than the quiescent fluid, making it visible to the camera. Since both fluids are nearly transparent, the only way the mixing region can become visible is if the redirection of light by the refractive index gradients within the turbid mixing region prevents light from the backlight from reaching the camera. This is related to the shadowgraph principle when an extended source is considered [91]. There are obvious issues associated with imaging by means of the refractive index mismatch of the two liquids. One such issue is that if some of the light reaching the camera has been highly scattered, the edges of the mixing layer may be distorted. Biological imaging, because it often deals with highly scattering media, can give us guidance about the process taking place in our experiment. In the biological imaging literature, photons transmitted through turbid media are characterized as three types: ballistic, snake and diffuse [78]. Ballistic photons pass directly through without many refraction and scattering events (this would correspond to unrefracted light mainly unaffected by the mixing layer). Snake photons are scattered slightly but maintain a mainly forward direction of travel (this may be light interacting with the edge of the mixing region). Diffuse photons experience many refraction events. Diffuse photons are our major concern and have the greatest potential to corrupt our data, so understanding the degree to which these photons are captured by the lens is important. There are some interesting properties of diffuse photons that present methods for filtering them out. The most obvious property is that many diffuse photons emerge from the turbid media off axis. Spatial filtering can be used to remove these photons.
To accomplish this, a confocal setup is often used in which both the light source and the aperture have pinholes to filter out the large angle scattered light. This is usually employed in a microscope setup, and implementation in our apparatus would be problematic. A basic method to remove off-axis light was tested instead. An interference bandpass filter centered at the backlight wavelength of 660 nm should have the effect of filtering out off-axis light. An interference filter essentially works by having a transparent material with a thickness equal to half the desired wavelength, bounded on either side by semi-reflective coatings. Light rays reflect within the filter and interfere; there is destructive interference such that only the proper wavelength exits. Changing the angle of incidence effectively changes the optical path difference between interfering rays, and therefore this light is not passed through; thus, off-axis light is removed. An experiment was performed with this type of filter placed over the camera lens. The images acquired did not show any noticeable difference when compared to images acquired without the filter present. This implies that most of the light rays entering the camera do not have large angles of incidence (an angle of incidence of zero represents a light ray normal to the lens) and are therefore not diffuse. Diffuse photons also have a longer path than ballistic and snake photons and therefore take a longer time to reach the camera. Time gating is often implemented by pulsing the light source and quickly opening and closing an optical gate; this requires timing on the order of nanoseconds and is therefore not feasible here. Another property of diffuse photons is that during a scattering or refraction event, the polarization of the light changes. If enough of these events take place, the polarization information of the source is lost.
One method of exploiting this phenomenon, called polarization gating, is to place a linear polarizer at the source and then collect the light through a similarly aligned linear polarizer. The highly scattered diffuse light becomes unpolarized, and therefore a fraction of it is filtered out by the linear polarizer at the lens. By implementing polarization gating we can determine the degree of scattering of the photons reaching the camera and remove the highly scattered ones, thus characterizing the amount of corruption taking place in our images. This method of using two linear polarizers was tested but did not show conclusive results. This is expected because, in this configuration, only a fraction of the unpolarized diffuse photons would be filtered out.

A more sophisticated approach to polarization gating, which removes all of the diffuse light, is polarization modulation. In the paper of Mujumdar [66], a rotating linear polarizer was used at the light source; the light would then propagate through the turbid media and be collected through a stationary linear polarizer. In our experiments it was more feasible to rotate the polarizer at the camera lens instead of at the light source, but the effect is the same. The light collected from ballistic and snake photons maintains its initial polarization state after passing through the turbid mixing region. This light then passes through the rotating polarizer at the lens, and the collected light has a time dependent intensity that varies at the rotational frequency of the polarizer. Light from diffuse photons becomes unpolarized and is therefore unaffected by the spinning polarizer; its intensity is not time dependent. By taking a temporal FFT of the acquired images, the oscillatory and mean components can be extracted. This was accomplished using a program written in Matlab, which is contained in the Appendix (sec. B.2). The oscillatory component corresponds to ballistic and snake photons, while the mean component corresponds to diffuse photons. Using this method during an experimental run does present difficulty. Firstly, the rotating polarizer (which spins on a small DC motor) would have to be affixed to the test sled; since there is a large impact at the end of the experiment, this connection would need to be extremely robust. Secondly, and more importantly, having the intensity vary in time means that more than half of the experimental data would be lost, which is unacceptable. Tests were therefore done to determine the degree to which our mixing region measurements would be altered by the diffuse photons.
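The temporal FFT demodulation step can be sketched as follows (in Python, standing in for the Matlab program of Appendix B.2; the frame count, modulation bin and intensity levels are synthetic, chosen to echo the 218/160 mean/oscillatory split reported for figure 3.1):

```python
import numpy as np

# Synthetic image stack: 64 frames of a 4x4 pixel region. Ballistic/snake light
# is modulated by the rotating polarizer (here at temporal FFT bin k = 8);
# diffuse light contributes only a constant (DC) level.
T, k = 64, 8
t = np.arange(T)
dc, osc_amp = 218.0, 160.0
frames = dc + osc_amp * np.cos(2.0 * np.pi * k * t / T)
stack = np.tile(frames[:, None, None], (1, 4, 4))   # shape (time, y, x)

spec = np.fft.rfft(stack, axis=0)        # temporal FFT, pixel by pixel
mean_component = np.abs(spec[0]) / T     # DC part: attributed to diffuse photons
osc_component = 2.0 * np.abs(spec[k]) / T  # modulated part: ballistic/snake photons
```

Note that in a real implementation the modulation appears at twice the polarizer's rotation rate (the cos² dependence of Malus's law), and part of the polarized light also lands in the DC bin; the sketch above ignores both effects for clarity.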
The tank was filled as it would be during an experiment and the liquids were stirred vigorously to produce a mixing layer similar to the one created during an experiment; images of this were acquired and analyzed. Figure 3.1 shows that in the stirred LST heavy liquid / water combination images there are some diffuse photons present. The average pixel intensity of the liquid region is 370 for the raw image, 218 for the mean component image representing the diffuse photons and 160 for the oscillatory component image representing the snake and ballistic photons. Even though there are slightly more

Figure 3.1: Polarization gated images of a stirred stratification of LST heavy liquid on the bottom and water on the top. The left image is uncorrected, the middle image is the highly scattered (mean) component comprised of diffuse photons and the right image is the weakly scattered (oscillatory) component comprised of snake and ballistic photons.

diffuse photons than snake and ballistic ones, the only noticeable effect is to make the image blurrier; the diffuse photons do not change the observed mixing layer width and do not alter the content of the image. We must also realize that this polarization technique does not provide much insight into how diffuse the photons are; extremely scattered photons are of main concern to us, but mildly scattered ones (even though they too become unpolarized and therefore register as diffuse) are of little concern. Even snake photons become unpolarized to some degree, so the fact that we observe many highly scattered photons should be of little concern since they do not appear to affect the mixing layer width, which is the quantity we are interested in measuring. As another exercise, raytracing was performed using POVRay. The Plexiglas tank, LST heavy liquid region, isopropyl alcohol region and the mixing region were modeled, with care being taken to accurately match the refractive indices to the experimental values. The observed mixing region was then compared to the mixing region without any refraction effects, as shown in figure 3.2. It was determined that there is not a significant difference between the measured mixing layer width and the actual one; therefore, the effect due to refractive index mismatch is small. A 3D raytraced image is also presented in figure 3.3.

Figure 3.2: A Povray rendering of a mixing region with mismatched refractive indices (left) and without mismatched refractive indices (right).

By using polarization and raytracing techniques we were able to demonstrate that the mixing layer width measurements should not be affected significantly by a mismatch in refractive index. This, however, only holds for an ideal situation as modeled here. During an experiment there are edge effects (other than the mixing region) and bubbles that refract light in unpredictable ways, and some such effects were noticed.

Figure 3.3: A 3D view of a Povray rendering of a mixing region with mismatched refractive indices.

3.1.3 Mixing Layer Imaging Artifacts

When experiments were performed with LST heavy liquid as the heavy liquid and water as the light liquid (Atwood number of 0.48), a mixing region develops, but not always as expected. In this miscible liquid combination, three distinct mixing regions would sometimes initially be observed, which would grow and eventually merge into one mixing region. This can be observed in figure 3.4. To rule out the possibility that this effect was unique to the WP drop tower, an experiment with the same liquid combination was performed on the LIM drop tower (fig. 3.5), and it indeed showed the same effect. Experiments with the same Atwood number and acceleration were performed with various liquid combinations in an attempt to determine the cause of these artifacts. Interfacial tension, diffusion and refractive index all vary significantly with the choice of liquid; by altering these fluid properties individually, conclusions can hopefully be drawn about the cause of these artifacts.

Figure 3.4: An experiment performed with LST heavy liquid (bottom liquid) and water (top liquid), on the WP drop tower, showing three distinct mixing regions.

Figure 3.5: An experiment performed with LST heavy liquid (bottom liquid) and water (top liquid), on the LIM drop tower, showing three distinct mixing regions.

The effect of interfacial tension on the observance of the three region development was investigated by performing LST heavy liquid / silicone oil combination experiments. In these experiments the three region effect was never observed. In the miscible case, one might also expect that wetting effects from the interaction with the Plexiglas caused the artifacts. Experiments were performed with an LST heavy liquid / water combination where surfactant was added to the heavy liquid, and an experiment was performed with an LST heavy liquid / water combination in which the tank was coated with oil to alter its wetting properties. Since interfacial tension did not seem to affect the presence of the three regions, it was concluded that interfacial tension is not the cause of the observed phenomenon. This would also rule out Korteweg stress as the cause. Korteweg stresses produce an effect similar to interfacial tension due to gradients in density and diffusion across an interface, but are very small [45]. Attempts to measure the interfacial tension between LST heavy liquid and alcohol using a spinning drop tensiometer were unsuccessful because of the small value of the interfacial tension. This also leads us to believe that this effect is negligible in the experiments.

Whether the refractive index mismatch could be causing the 3-region effect must also be determined. Three sets of unforced experiments were performed in which the refractive index ratio was altered. The experiments used an LST heavy liquid / distilled water combination, an LST heavy liquid / ethanol combination and an LST heavy liquid / dimethyl sulfoxide combination. The refractive index ratios of these three cases are 1.17, 1.09 and 1.05, respectively; the Atwood number of all three is approximately 0.5. The mass diffusivity and kinematic viscosity are of the same order for all three cases; since the 3-region effect occurs after the turbulent mixing region has developed, we would not expect diffusion or viscosity to play a role in the large scale effect we are observing. The effect of viscosity on RTI is discussed in section A.2 of the Appendix; diffusion acts on an even smaller scale than viscosity, so if viscous effects are negligible, diffusion effects must be negligible as well. It was observed that the 3-region effect is indeed affected by the change in refractive index ratio: the effect is most pronounced with the LST heavy liquid / distilled water experiment and least pronounced with the LST heavy liquid / ethanol experiment. Therefore, we believe that the refractive index mismatch is the cause of the 3-region effect, but this must be verified. If refractive index mismatch is the cause, we could be observing a reproduction of a single mixing region due to a mirage effect. It is well known that mirages can produce many distorted virtual images. In a miscible configuration, a gradient in refractive index is present at the top and bottom of the mixing region; this gradient tends to bend light. In our configuration, with the larger refractive index on the bottom and the lower one on the top, light tends to bend downward. This may explain the upper, darker mixing region, but not the lower one.
However, the fact that the backlight emits light at many angles complicates the issue. In order to evaluate this effect, the system of a miscible LST heavy liquid / isopropyl alcohol fluid combination is modeled at a time when a clear mixing region has developed. Light rays are followed along the path from the backlight to the exit of the system, and the deviation from the ideal is noted. Ideally, light rays would travel in a straight line from the backlight

to the camera, where, as a ray traverses the mixing region, some of the light would be lost to scattering and refraction events. From Mamola (1991) [61], the path of a light ray in the presence of a vertical (y) gradient in refractive index (n) is given by

\frac{d^2 y}{dx^2} = \frac{1}{n}\frac{dn}{dy}\left[1 + \left(\frac{dy}{dx}\right)^2\right]. \qquad (3.1)

Here y represents the vertical tank dimension (the coordinate in which the mixing region grows and thus has a refractive index gradient), and x represents the direction through the tank from the backlight to the camera. In this model, 3D effects are not accounted for. This equation was solved using ODE45 in Matlab. In order to solve this 2nd order ODE, equation (3.1) had to be recast as two 1st order ODEs:

\frac{dy_1}{dx} = y_2, \qquad \frac{dy_2}{dx} = \frac{1}{n}\frac{dn}{dy}\left[1 + y_2^2\right]. \qquad (3.2)

The diffuse mixing layer was modeled by having the boundaries of the mixing layer described by error functions (to take diffusion effects into account), in which the refractive index varies continuously from 1.38 at the top to 1.465 in the middle of the mixing region, and then from 1.465 to 1.55 at the bottom. This represents the situation in which LST heavy liquid is at the bottom and isopropyl alcohol is at the top, as during an experiment. Also, a slight non-uniformity from front to back is modeled using exponential functions for the boundary of the mixing region, to account for slight tank misalignment effects. The refractive index through the tank is assumed to be

$$n = \frac{(1.55+1.38)/2 - 1.38}{2}\left[-\,\mathrm{erf}\!\left(\frac{2}{D/2}\left(y - A_1 e^{B_1 x}\right)\right)\right] + \frac{(1.55+1.38)/2 + 1.38}{2} \qquad (3.3)$$

for the top region and

$$n = \frac{1.55 - (1.55+1.38)/2}{2}\left[-\,\mathrm{erf}\!\left(\frac{2}{D/2}\left(y - A_2 e^{B_2 x}\right)\right)\right] + \frac{(1.55+1.38)/2 + 1.55}{2} \qquad (3.4)$$

for the bottom liquid, where n is the refractive index; A1, B1, A2 and B2 are constants that adjust the exponential functions mimicking the tank misalignment; and D is the thickness of the mixing region. In addition to accounting for the bending of light rays by the index gradient, the loss of light due to scattering in the mixing region was accounted for by adding an absorption equation to the model following Beer's law,

$$\frac{dI}{d\ell} = -\sigma(y)\,I, \qquad (3.5)$$

where σ is the absorption coefficient, I is the intensity and ℓ is the arc length along the light path. To put this in a form that can be utilized in the Matlab ODE45 algorithm along with equation (3.1), dℓ is recast in terms of x and y:

$$d\ell = \sqrt{dx^2 + dy^2} = \sqrt{\left(\frac{dy}{dx}\right)^2 + 1}\;dx.$$

Equation (3.5) then takes the form

$$\frac{dI}{dx} = -\sqrt{\left(\frac{dy}{dx}\right)^2 + 1}\;\sigma(y)\,I. \qquad (3.6)$$

The absorption coefficient through the mixing region is modeled as proportional to $1 - \mathrm{erf}^2\!\left(\frac{2}{D/2}\,y\right)$. This function gives the largest absorption at the center of the mixing region (which is what we observe during an experiment). With this, we have three first-order differential equations that are solved along light rays through the tank model:

$$\frac{dy_1}{dx} = y_2, \qquad \frac{dy_2}{dx} = \frac{1}{n}\frac{dn}{dy}\left(1+y_2^2\right), \qquad \frac{dy_3}{dx} = -0.02\left[1 - \mathrm{erf}^2\!\left(\frac{2}{D/2}\,y_1\right)\right]\sqrt{y_2^2+1}\;y_3, \qquad (3.7)$$

where y3 represents the light intensity and the constant 0.02 was chosen empirically so that the mixing-region absorption mimics that of the experiment. Starting at the left with an intensity of one, light paths are traced through the mixing region; the intensity decreases as light is scattered by the mixing region, depending on the vertical position of the ray. After leaving the tank (the width of the tank is 75 mm), the angle of each light ray is algebraically negated and the same distance is traversed again. This represents recreation of the object with unity magnification, as it would appear during an experiment, due to the bending of light rays. A depiction of the model is shown in figure 3.6 and the Matlab program used is in the Appendix (sec. B.4).
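As a cross-check of the procedure above, the ODE system (3.7) can be sketched in a few lines of Python using SciPy's solve_ivp (an adaptive Runge-Kutta integrator comparable to Matlab's ODE45). The layer thickness D, the placement of the erf transitions at y = ±D/2, and the omission of the misalignment terms A·exp(B·x) are illustrative assumptions, not the values used in the actual model:

```python
# Minimal Python sketch of the ray-tracing model, eqs. (3.2)-(3.7).
# Assumptions (not from the text): D = 20 mm, erf transitions centered at
# y = +/- D/2, misalignment terms A*exp(B*x) set to zero.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import erf

D = 20.0                     # mixing-region thickness [mm] (illustrative)
n_top, n_bot = 1.38, 1.55    # refractive indices of the two pure liquids
n_mid = 0.5 * (n_top + n_bot)

def n_of_y(y):
    """Refractive index profile, eqs. (3.3)-(3.4) without misalignment terms."""
    if y >= 0.0:  # upper half: blend from n_mid up to n_top
        return (n_mid - n_top) / 2.0 * (-erf(2.0 / (D / 2.0) * (y - D / 2.0))) \
               + (n_mid + n_top) / 2.0
    # lower half: blend from n_bot up to n_mid
    return (n_bot - n_mid) / 2.0 * (-erf(2.0 / (D / 2.0) * (y + D / 2.0))) \
           + (n_bot + n_mid) / 2.0

def dndy(y, h=1e-4):
    # numerical derivative of the index profile
    return (n_of_y(y + h) - n_of_y(y - h)) / (2.0 * h)

def rhs(x, s):
    """State s = [y1, y2, y3] = [height, slope, intensity], eq. (3.7)."""
    y1, y2, y3 = s
    sigma = 0.02 * (1.0 - erf(2.0 / (D / 2.0) * y1) ** 2)
    return [y2,
            (1.0 / n_of_y(y1)) * dndy(y1) * (1.0 + y2 ** 2),
            -sigma * np.sqrt(y2 ** 2 + 1.0) * y3]

# Trace one initially horizontal ray entering 5 mm above mid-depth,
# across the 75 mm tank, starting with unit intensity.
sol = solve_ivp(rhs, (0.0, 75.0), [5.0, 0.0, 1.0], max_step=0.5)
y_exit, slope_exit, intensity_exit = sol.y[:, -1]
```

Because the index increases downward (dn/dy < 0), an initially horizontal ray bends toward the heavy liquid, and the intensity decays fastest near the center of the mixing region, mirroring the behavior described above.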

Figure 3.6: A depiction of the gradient refractive index scenario modeled in Matlab.

The paths of light rays emitted from the backlight, entering the back of the tank (left) and exiting the front of the tank (right) at various angles, were investigated. The backlight is not a collimated source, so many light-ray angles need to be considered. Given the absorption profile through the mixing region, the fraction of light absorbed is investigated as well. These simulations are shown in figures 3.7, 3.8 and 3.9. The plots on the left show the light ray paths (light travels from left to right) and the plots on the right show the light intensity at the exit of the tank.

[Plot panels: light ray path (y [mm] versus x [mm]) and exit intensity (I/I0) for a lower resolution simulation (top) and a higher resolution simulation (bottom).]

Figure 3.7: Plots of the Matlab simulation of light ray paths (left) and intensity distribution (right) for parallel light rays through the refractive index mismatch configuration. For the ray path plots, light starts at the backlight at x = 0. Some light rays passing through the interface are observed to end up at the bottom of the observation plane.

[Plot panels: light ray path and exit intensity for the +27 degree simulation (top) and the −27 degree simulation (bottom).]

Figure 3.8: Plots of the Matlab simulation of light ray paths (left) and intensity distribution (right) for approximately ±27 degree light rays through the refractive index mismatch configuration. For the ray path plots, light starts at the backlight at x = 0. These rays do not deviate much from the actual interface location.

[Plot panels: light ray path and exit intensity for the +14 degree simulation (top) and the −14 degree simulation (bottom).]

Figure 3.9: Plots of the Matlab simulation of light ray paths (left) and intensity distribution (right) for approximately ±14 degree light rays through the refractive index mismatch configuration. For the ray path plots, light starts at the backlight at x = 0. Light rays passing through the interface at negative angles are spread away below the interface with gaps of coverage; this could give an effect of bright and dark bands.

Some interesting effects are observed from this exercise. There is obvious bunching of light rays due to the refractive index gradient, which presents itself as gaps in illumination; this is especially obvious in the 14 degree negative angle case. This simulation shows that it is possible for strange optical effects to take place. Light exiting the backlight at different angles will be refracted differently, and this could account for an apparent reproduction of the interface above and below the actual interface location. Looking at the results, light rays exiting the backlight horizontally or at a downward angle are altered more than rays exiting at an upward angle. Light exiting the tank has been redirected from the upper to the lower region. This effect explains the upper, darker apparent mixing region better than the lower one, since we do not observe a decrease in light for the lower region as we do for the upper one. Of course, this simulation only takes refraction into account; a fraction of the incident light will be reflected away from the mixing region, and this could account for the lower apparent mixing region. This exercise makes it clear that an experimental investigation of this particular phenomenon should be performed.

The 3-region phenomenon was investigated experimentally by performing calcium nitrate / isopropanol or silicone oil experiments on the LIM drop tower. With a calcium nitrate solution it is possible to match refractive index in both miscible and immiscible configurations, allowing the exploration and comparison of different configurations while matching Atwood number. Since the Atwood number is smaller with these fluid combinations, the experiments had to be performed with larger accelerations (thus the LIM was used) to observe instability growth without forcing.
These experiments were performed using a 3M transparency film attached to the backlight, printed with a pattern of numbers and letters so that any mirage effects would be observable (the numbers and letters appear backwards in this particular set of experiments; this is not an optical effect). The transparency film was affixed to the outside of the tank directly in front of the backlight, so any optical effects due to the mixing region occur past this plane. The experiments for the unmatched miscible, unmatched immiscible and matched miscible configurations are shown in figure 3.10.

Some mirage effects were observed, but whether this is the only effect at work is inconclusive, because the alphanumeric grid is no longer visible in the later stages of development. Mirage tests with the unmatched refractive index, miscible, calcium nitrate / isopropanol combination were also performed in a more controlled manner. In those tests, an alphanumeric grid was used and the stratified, light-over-heavy liquid combination was periodically stirred to increase the width of the interface. The image displayed in figure 3.11 shows a very pronounced mirage effect. The numbers 5, 6, 7, which are at the lower end of the interface, have two repeated images: an inverted, mildly shrunken version above them and a non-inverted, strongly shrunken version above that. One can observe three regions showing the same numbers, and this can be compared to the three-region effect visible in figure 3.4, where the effect was first observed during an experiment. Also, the letter/number combination z, 1, 2 is in approximately the correct place (at the interface), but it is highly shrunken and difficult to recognize. Recalling that the polarization stirring tests performed with LST heavy liquid (described in figure 3.1) did not show obvious mirage effects, another stirring test was performed in which a diffusion region was allowed to develop first and polarization effects were not accounted for. In figure 3.12 we observe a progression of images in time in which a rod is used to stir the interface. A mirage effect of the alphanumeric grid is visible at first, but as the interface is stirred it disappears. Thus, there is no obvious mirage effect due to the mixing region itself. This leads us to believe that the scattering effect of the mixing region would mask any mirage effect that may occur.

Miscible, Unmatched

Immiscible, Unmatched

Miscible, Matched

Figure 3.10: An experimental investigation of the mirage effect where a calcium nitrate aqueous solution is the heavy, bottom liquid and either isopropanol or silicone oil is the light, top liquid. These experiments were performed on the LIM apparatus. The alphanumeric grid is reversed in these pictures; this is not an optical effect.

Figure 3.11: Obvious mirage effect (especially when considering the numbers 5, 6, 7) when using an unmatched refractive index combination in which the bottom liquid is a calcium nitrate aqueous solution and the top fluid is isopropanol.

Figure 3.12: Aqueous calcium nitrate solution / isopropanol mirage test where stirring appears to make the mirage effect disappear.

From these stirring mirage tests we can conclude that mirage effects were not necessarily the only cause of mixing-region distortion. Because no mirage effect was visualized during the stirring phase of the tests, another theory was developed involving large-scale rollups due to misalignment in the experiment. In order to prevent perturbations from being produced at the tank walls, the tank must be aligned both vertically with gravity and with the experimentally produced acceleration. Vertical misalignment would cause an initial interfacial amplitude disturbance, while misalignment with the acceleration would cause a jarring effect that creates an initial interfacial velocity disturbance. Since both of these misalignment effects create rollups, they will be lumped together. As is usually the case, it is very difficult to align the tank perfectly. This is observed most obviously in the side rollups that sometimes appear during an experiment. These rollups would precede the main instability growth since they are at the boundaries and are therefore propelled by their image vortices [5]. Once sufficient mixing takes place, it can be assumed that there is no longer a clear image vortex and the propagation is primarily due to the instability. Front-to-back rollups are also expected, and the degree to which they are observable depends on the liquids used. It is believed that jarring of the tank from front to back and side to side is what causes these rollups, because the test sled appears to be level with gravity prior to an experiment's initiation. Attempts were made to align the system with the imposed acceleration to decrease this effect. This jarring motion can be observed in acceleration measurements acquired using the three-axis accelerometer, shown in figure 3.13.
From this plot, one can conclude that front-to-back and left-to-right accelerations are present, but only at the beginning of the acceleration. Also, the magnitude of the front-to-back accelerations is approximately 50% less than that of the left-to-right and up-down accelerations (which have approximately the same magnitude). After alignment, the front-back and left-right accelerations are decreased by 50%. It is important to note that although these rollups are present during an RT experiment, they would not be present for the stirring tests, which might explain why we did not always observe optical artifacts during those tests.

(a) (b)

Figure 3.13: Acceleration measurements taken from the three-axis accelerometer of the WP apparatus both before alignment (a) and after alignment (b). Accelerations for the side-to-side and front-to-back motion are depicted; these accelerations are presumed to be the cause of the front-back and left-right rollups. The accelerations are represented in g's (the ordinate) and time is represented in milliseconds (the abscissa).

These large wavelength rollups at the front and back of the tank would be expected to grow at a rate similar to single-mode Rayleigh-Taylor instability. The edges of these fluid structures would eventually become random while the portion closer to the mixing region remained smooth. In this situation light rays would travel smoothly through the bulk of the rollup unaltered; however, rays would be scattered away at the turbid peak, creating the darker region that we observe. Since the front and back rollups are essentially single-mode Rayleigh-Taylor instability, they would grow with constant velocity at late time [88], whereas the velocity of the mixing region would increase linearly in time according to the self-similar model presented in Section 1.2. Therefore, later in time the center portion of the mixing region would overtake and appear to merge with the rollups. The level of refractive index mismatch dictates the degree to which this effect is visible; if the refractive indices are matched, the effect would be invisible. In the immiscible, unmatched refractive index case, light rays will be refracted even in the bulk of the rollup, not just at the peak, whereas in the miscible case diffusion smooths the jump in refractive index in the less turbid areas (and thus less light is refracted away in those areas). This smoothing mechanism does not exist in the immiscible case. Figure 3.14 illustrates this explanation of the three visible regions for the miscible liquid case. In this illustration, front and back rollups and a refractive index gradient at the rollup peaks are assumed. Light travels from the backlight through the mixing region and rollups, where it is bent by the refractive index gradients. More turbid regions produce larger gradients, and thus more ray deflection takes place.
The intensity of the light reaching the camera would also decrease through some of the more highly turbid regions, but this effect is not depicted here. To experimentally test the rollup hypothesis, attempts were made to align and adjust the experimental setup. At first, purposely de-aligning and re-aligning the system showed no change. However, these tests were carried out using an immiscible combination in an attempt to conserve experimental fluids. Considering that we are observing a combination of a gradient refractive index in conjunction with the rollups,

[Illustration: two panels, at an earlier time t1 (left) and a later time t2 (right).]

Figure 3.14: A depiction of a possible explanation of the three distinct regions visible in a miscible, unmatched refractive index experiment. The arrows represent light rays traveling from right (back of tank, from the backlight) to left (front of tank). The right drawing represents a later time in the progress of the mixing process. It can be observed that the deflection of the light rays by regions of gradient index at the boundaries between the fluids creates dark regions that become more pronounced at later times.

it is expected that no change would be noticed in the immiscible case, owing to the lack of a refractive index gradient; miscible experiments are indeed necessary to observe this effect. Once a miscible experiment was eventually performed, the optical artifacts were visible but less pronounced, leading us to believe that adjusting the alignment worked and that the front-to-back rollups are at least part of the phenomenon. It can also be observed in figure 3.4 that the upper apparent mixing region has a different geometry than the lower one (beyond the intensity difference, which can be explained by a mirage-type effect): the upper region does not appear to extend to the sides of the tank while the lower one does. This would be expected if we are observing front and back rollups where the top region is the back rollup and the bottom region is the front rollup. Light rays passing through the rollup edges at the back wall have a smaller angle of incidence with the camera lens, because they are farther from the lens, than rays passing through the front wall rollup edges. This would produce the effect we are observing, whereas a mere mirage-type effect would not. With refractive index mismatch imaging, we therefore have two possible, and possibly complementary, explanations for the three observed mixing regions. These effects cannot easily be removed from all the experiments.
One way of accounting for this effect is to perform experiments in which we can easily control the refractive index mismatch (using Ca(NO3)2 solution in conjunction with isopropanol) and determine how the extracted data are affected by the mismatch. This information can then be used to determine a correction for the LST heavy liquid experiments, where this parameter is not easily controlled.
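The overtaking argument above (rollups growing like single-mode RTI at constant late-time velocity, while the mixing zone grows self-similarly as h = αAgt², so its edge velocity 2αAgt increases linearly in time) can be sketched with illustrative, not measured, parameter values:

```python
# Sketch of the overtaking argument: front/back rollups move at an assumed
# constant single-mode velocity, while the self-similar mixing zone
# accelerates, h = alpha * A * g * t**2. All values below are illustrative.
A = 0.5          # Atwood number (value from the text)
g = 9.81         # effective acceleration [m/s^2] (illustrative)
alpha = 0.06     # self-similar growth constant (typical literature range)
v_rollup = 0.05  # assumed constant late-time rollup velocity [m/s]

def mixing_velocity(t):
    # d/dt (alpha * A * g * t**2): grows linearly in time
    return 2.0 * alpha * A * g * t

# Time at which the mixing-zone edge overtakes the rollups:
t_overtake = v_rollup / (2.0 * alpha * A * g)
```

Whatever the assumed values, the linear-in-time mixing-zone velocity guarantees an overtaking time exists, consistent with the merging described above.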

3.2 Absorption Imaging Liquids

From the preceding discussion, it is obvious that imaging with a refractive index mismatch has shortcomings. Having another, better understood imaging method would therefore give a reference experiment against which to better understand the unmatched refractive index case. When used properly, an absorbing liquid (often made by the addition of a dye) is known to follow the Beer-Lambert law, in which the light intensity decays exponentially through the liquid and depends on the concentration of absorbing media. By using absorption as an imaging technique, we have a known comparison for better interpreting the refractive index mismatch imaging results. Although we cannot match refractive index (a necessity for using absorption imaging) for all fluid configurations used here, we can for some. However, by matching refractive index we can no longer closely match the theoretical fastest growing wavenumber to that of the unmatched case, since matching both viscosity and refractive index is often impossible or very difficult. Refractive index depends on the density of the liquids, and since we already need to match the Atwood number, we can only match one other parameter; in this case, refractive index.

With the large Atwood number experiments, matching refractive index is not trivial. It was found that by using trans-anethole in conjunction with LST heavy liquid, the refractive index can be matched in an immiscible combination. Trans-anethole has the property of having a larger refractive index than other liquids of similar density (its density is close to that of water). However, trans-anethole reacts with most materials that would be used to construct a tank. It was also discovered that the anethole polymerizes when mixed with LST heavy liquid, forming a layer of polymerized fluid that must be removed after each experiment.
This did not seem to alter the development of the instability and was therefore judged acceptable. It was, however, noticed that another chemical reaction takes place between the two liquids, causing the LST heavy liquid to become a dark maroon color. Besides using fresh heavy liquid after each experiment (which is unacceptable because of the high cost), nothing can be done to remove this color. Therefore, it was decided to use this coloring effect as the dye for the absorption measurements. One issue is that if the liquid is too dark, Beer's law may no longer apply, so this needed to be verified. To test Beer's law, one can use a triangular Plexiglas tank filled with uniformly dyed liquid, so that distance across the tank sets the distance traversed by the light ray, $\ell(x)$, in the equation $I(x) = I_0\, e^{-(\mu/\rho)\rho\,\ell(x)}$, where $I$ is the light intensity, $\mu/\rho$ is the mass attenuation coefficient and $\rho$ is the dye concentration. With the combination of the approximately 45 degree angle and the anethole / Plexiglas / air interface, total internal reflection would take place, and this was observed. Even if the refractive indices and angles were such that total internal reflection did not occur, the experimental system would have to be altered to view the outgoing light at an angle. Therefore, a square tank with a diagonal wall separating two triangular compartments was used. If the two compartments are filled with liquids that have the same refractive index, these effects can be corrected for (producing only a slight shift in the exiting light that does not matter for our purposes here). Total internal reflection is still possible if the Plexiglas and liquid have a large difference in refractive index, but this is very unlikely and does not take place with the fluid combinations used here.
An illustration of the tank is depicted in figure 3.15, where dashed lines represent possibilities of emerging light if the front compartment is not present, and a SolidWorks drawing is shown in figure 3.16. For the anethole / LST heavy liquid configuration, the back compartment was filled with the absorbing heavy liquid and the front compartment with the non-absorbing, refractive index matched anethole. Linearity of the natural log of $I/I_0$ indicates that Beer's law is indeed obeyed. Figure 3.17 displays a plot of $\ln(I/I_0)$ versus horizontal position in pixels. The plot is approximately linear in the range of 60 pixels to 320 pixels, which represents a thickness of 76 mm to 15 mm. The non-linearity near the sides can be attributed to parallax effects. Since the rectangular tank used for the experiments has a width of 76 mm, figure 3.17 shows reasonable agreement with Beer's law for the experimental conditions.
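The linearity check behind figure 3.17 can be illustrated with synthetic data: in the triangular tank the path length varies linearly with position across the tank, so ln(I/I0) must be linear in x if Beer's law holds. The attenuation coefficient and geometry below are assumed values, not the measured ones:

```python
# Sketch of the Beer's-law linearity check with the triangular tank.
# Path length l(x) varies linearly across the tank, so ln(I/I0) is linear
# in x when Beer's law holds. k and the geometry are illustrative.
import numpy as np

L = 76.0                         # tank width [mm]
x = np.linspace(5.0, 70.0, 50)   # stay away from the walls (parallax effects)
path = L - x                     # triangular compartment: path shrinks with x
k = 0.02                         # assumed attenuation [1/mm]
I0 = 1.0
I = I0 * np.exp(-k * path)       # Beer's law intensity across the tank

logratio = np.log(I / I0)        # = -k*(L - x) = k*x - k*L, linear in x
slope, intercept = np.polyfit(x, logratio, 1)
residual = np.max(np.abs(np.polyval([slope, intercept], x) - logratio))
```

A nonzero `residual` (beyond numerical noise) on real data would signal a departure from Beer's law, as would happen when the dye is too concentrated.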

Figure 3.15: An illustration of the top view of the two-compartment tank used for verifying Beer's law, in which each section is a triangle. Without the front section (bottom of drawing), light would either undergo total internal reflection, as with the anethole / LST heavy liquid combination, or emerge at an angle; this is displayed by the dashed arrows. With the front compartment in use, the light emerges parallel to the incoming light, as represented by the solid arrows.

Figure 3.16: A SolidWorks drawing of the fluid tank that has two triangular compartments. This tank was used to test the validity of Beer's law. A dyed liquid was put in the rear compartment and a transparent liquid with the same refractive index was put in the front compartment.

[Plot ordinate label: $\ln(I/I_0)$]

Figure 3.17: Verification of Beer's law for the colored LST heavy liquid used in the anethole / LST heavy liquid combination.

There is no safe, miscible, large Atwood number liquid combination with matched refractive index that has been found. One promising liquid combination, which would yield an Atwood number of 0.44 (only 10% less than our goal of 0.48), is picoline and diluted LST heavy liquid. However, the solubility of the LST salt in picoline is low enough that when the two liquids come into contact the salt comes out of solution, making the combination unusable. The only other index matched, miscible liquid combination with a large Atwood number is dimethyl sulfoxide (DMSO) and diluted LST heavy liquid. This combination achieves an Atwood number of 0.34, which is 30% from our goal. The lower Atwood number, in addition to the difficulty of recovering pure heavy liquid once mixed (due to the high, approximately 200°C, boiling point of DMSO), makes this combination impractical. For the smaller Atwood number experiments, an index matched configuration can be formed by adjusting the concentration of calcium nitrate. For the miscible case, adjusting the calcium nitrate concentration of the heavy liquid and using isopropyl alcohol as the light liquid matches the refractive index. Index matching can also be accomplished in an immiscible combination by using a mixture of 2 cSt and 0.65 cSt silicone oils, while also adjusting the calcium nitrate concentration to match the Atwood number of the miscible case. Care had to be taken when selecting the dye, especially for the immiscible case, to ensure that it would dissolve only in the heavy liquid and not in the silicone oil. Since the liquids are reused for immiscible experiments, contamination of the silicone oil with dye would increase with time and decrease the contrast of the liquid combination. It was observed that Blue-Green #621 (which is often used in our laboratory) did not meet this requirement, while standard food coloring did.
A table displaying the matched refractive index liquid combinations used is shown in table 3.2. With the index matched LST heavy liquid / anethole case, the optical absorption characteristics were fixed, because the chemical reaction that colors the heavy liquid cannot be changed. Therefore, we needed to verify that the liquid obeyed Beer's law reasonably well. With the small Atwood number case we can control the amount of food coloring added. Therefore, work was done to find the optimum

                 A = 0.48 (forced)                              A = 0.21 (unforced)

Miscible         --                                             calcium nitrate salt aqueous soln. /
                                                                isopropyl alcohol

Immiscible       lithium polytungstate salt aqueous soln.       calcium nitrate salt aqueous soln.
                 with AOT as surfactant / trans-anethole        with AOT as surfactant /
                                                                low viscosity silicone oil mixture

Table 3.2: The fluid combinations used for the refractive index matched cases.

dye concentration that obeyed Beer's law while also maximizing contrast. Higher dye concentration gives more contrast, but also decreases the agreement with Beer's law. The contrast was calculated as

$$C = \frac{I_0 - I}{I_0 + I}, \qquad (3.8)$$

where I is the vertically averaged intensity taken at various dye concentrations and I0 is the same measurement with no dye added. The agreement with Beer's law was determined by taking the natural log of the absorption through the colored fluid,

$$I(x) = I_0(x)\, e^{-(\mu/\rho)\rho\,\ell(x)} \;\Longrightarrow\; \frac{I(x)}{I_0(x)} = e^{-(\mu/\rho)\rho\,\ell(x)} \;\Longrightarrow\; \ln\!\left(\frac{I(x)}{I_0(x)}\right) = -\left(\frac{\mu}{\rho}\right)\rho\,\ell(x), \qquad (3.9)$$

where $\ell(x)$ is the absorption distance as a function of distance across the tank and $\mu$ is the absorption coefficient. Then, a least squares linear fit was performed, in which the sum of the squared errors was used to determine agreement with Beer's law. It would also be advantageous to directly compare the agreement with Beer's law against the intensity contrast. To do this we need to use a formulation of Beer's law that has the number of drops of dye as a parameter,

$$I(x;\rho) = I_0(x)\, e^{-(\mu/\rho)\rho\,\ell(x)}, \qquad (3.10)$$

where $\rho$ is the number of drops of diluted food coloring (the drop density) and $\mu/\rho$ is the mass attenuation coefficient in terms of drops instead of actual mass. At a known drop concentration and distance along the triangular tank, we can obtain the mass attenuation coefficient by rearranging eq. (3.9),

$$\frac{\mu}{\rho} = -\frac{\ln\!\left(\dfrac{I(x_L;\rho_*)}{I_0}\right)}{\rho_*\, L}, \qquad (3.11)$$

where $\ell = L$ represents the distance traversed by a light ray at the horizontal distance $x_L$ and $\rho_*$ is the known drop concentration. Here $x_L$ does not represent the

wall location because of parallax effects from the Plexiglas that need to be avoided. Using this expression, we can obtain the Beer’s law calculated drop density by combining equations (3.9) and (3.11),

$$\rho = \rho_*\, \frac{L\, \ln\!\left(\dfrac{I(x;\rho)}{I_0}\right)}{\ell(x)\, \ln\!\left(\dfrac{I(x_L;\rho_*)}{I_0}\right)}.$$

In an ideal situation, the fractional expression would equal unity. Letting $x = x_L$ and $\ell = L$, where $L$ corresponds to the approximate width of the square experimental tank (75 mm), we obtain

$$\rho = \rho_*\, \frac{\ln\!\left(\dfrac{I(\rho)}{I_0}\right)}{\ln\!\left(\dfrac{I(\rho_*)}{I_0}\right)}. \qquad (3.12)$$

We can use this expression to evaluate how the deviation between the measured concentration and the actual concentration changes with intensity according to Beer's law. We can also compare the contrast obtained by experimental measurement with that predicted by Beer's law. By combining equations (3.11), (3.10) and (3.8), we obtain

$$\frac{I_0 - I}{I_0 + I} = \frac{I_0 - I_0\, e^{-(\mu/\rho)\rho L}}{I_0 + I_0\, e^{-(\mu/\rho)\rho L}} = \frac{1 - e^{\frac{\rho}{\rho_*}\ln\left(\frac{I(\rho_*)}{I_0}\right)}}{1 + e^{\frac{\rho}{\rho_*}\ln\left(\frac{I(\rho_*)}{I_0}\right)}} = \frac{1 - \left(\dfrac{I(\rho_*)}{I_0}\right)^{\rho/\rho_*}}{1 + \left(\dfrac{I(\rho_*)}{I_0}\right)^{\rho/\rho_*}}. \qquad (3.13)$$

To better understand the problem at hand, an uncertainty analysis should also be performed on these equations. The analysis to determine the ideal dye concentration and its uncertainty is continued in the Appendix (sec. A.4), and the Matlab program used to facilitate this analysis is in section B.3 of the Appendix. From figure 3.18, it was determined that the ideal food coloring concentration (with units in grams of food coloring) is

$$\rho_{F.C.} = \frac{150\ \text{pipette drops dil. soln.}}{500\ \text{mL soln.}} = \frac{0.0180\ \text{g F.C.}}{500\ \text{mL soln.}}. \qquad (3.14)$$

[Plot: contrast $(I_0 - I)/(I_0 + I)$ versus number of drops, comparing the measured contrast with the Beer's law calculated contrast.]

Figure 3.18: Measured contrast and Beer's law calculated contrast, with the uncertainty of the calculated value (dashed lines).
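Equation (3.13) lets the contrast at any drop concentration be predicted from a single reference measurement. A small sketch, with hypothetical reference values (ρ* = 150 drops, I(ρ*)/I0 = 0.4, both assumptions, not measured data):

```python
# Sketch of the contrast prediction, eq. (3.13): given one reference
# measurement I(rho*)/I0 at drop count rho*, Beer's law fixes the contrast
# at every other drop count rho. Reference values are hypothetical.
import math

def contrast(rho, rho_star, ratio_star):
    # ratio_star = I(rho*)/I0; r = (I(rho*)/I0)**(rho/rho*) = exp(-(mu/rho)*rho*L)
    r = ratio_star ** (rho / rho_star)
    return (1.0 - r) / (1.0 + r)

rho_star, ratio_star = 150.0, 0.4
rho = 300.0

# Cross-check against direct Beer's law with (mu/rho)*L from eq. (3.11):
muL = -math.log(ratio_star) / rho_star
direct = (1.0 - math.exp(-muL * rho)) / (1.0 + math.exp(-muL * rho))
```

The two routes agree identically, and the contrast increases monotonically with drop count, which is why the optimum in figure 3.18 is set by the competing loss of Beer's-law linearity rather than by the contrast itself.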

3.3 Other Imaging Concepts

During the process of determining which liquids to use, the possibility of using other imaging techniques was also investigated. An imaging method that determines the fraction of one liquid in the other throughout the mixing layer, for a larger selection of liquid configurations, is desired. The large refractive index of the heavy liquid is a major difficulty for optical techniques; therefore, either a way to match this refractive index or a way to remove refractive index effects from the imaging system would be ideal. The refractive index of liquids is not constant at all wavelengths, and this can be exploited. Samples of liquids were sent to J.A. Woollam Co., Inc., where, using an M2000 ellipsometer, the refractive index versus wavelength was determined for pure and diluted LST heavy liquid as well as isopropyl alcohol. A plot of refractive index versus wavelength is shown in figure 3.19.

Figure 3.19: A plot of refractive index versus wavelength for isopropyl alcohol, pure LST heavy liquid and diluted LST heavy liquid.

It was determined that at approximately 200 nm the indices of refraction of isopropyl alcohol and diluted LST heavy liquid match, allowing an Atwood number of 0.5 to be achieved in a miscible configuration. Light at 200 nm can be created using a deuterium lamp; however, CCD cameras do not respond to light at this wavelength, so a scintillator would be needed. The scintillator would need to be an electronic one with a refresh rate fast enough to accommodate the high frame rate needed during these experiments. To complicate matters further, at this wavelength (UV-C) there is absorption by air, so care would have to be taken to minimize imaging through air. X-ray imaging has been used in past studies and was a concept considered in this study. Previous investigations that used x-ray imaging in a drop tower setup similar to the one used here acquired only a limited number of frames [49, 22]. We use a 200 fps camera for imaging and would like to maintain this frame rate. The ideal setup would have approximately 10 flash x-ray sources that fire progressively as the test sled crosses each source's path. A large x-ray detector (computed radiography detector) imaging plate

would be used behind the test sled and scanned after an experiment. The feasibility of this approach was first tested using the x-ray sources in the laboratory of Ricardo Bonazza at the University of Wisconsin [104]. It is also necessary to determine the x-ray source energy required to provide large contrast while still providing a large enough signal to noise ratio. To accomplish this, the x-ray mass attenuation coefficient is needed. This was determined for the LST heavy liquid at Lawrence Livermore National Laboratory using precise x-ray sources. The primary x-ray absorber in the heavy liquid is tungsten; therefore, the mass attenuation coefficient of tungsten at the appropriate x-ray energy can be utilized, yielding an effective tungsten density for the LST heavy liquid. The effective tungsten density was determined to be ρ_eff = 1.755 g/cm³. With this information, the fraction of transmitted x-rays is determined at various x-ray energies from I/I₀ = e^(−(μ/ρ) ρ_eff t), where μ/ρ is the mass attenuation coefficient at a particular energy, t is the thickness of the liquid to be penetrated and I/I₀ is the

fraction of photons transmitted [3]. A similar calculation was performed for isopropyl alcohol, and the contrast between it and LST heavy liquid was determined at various x-ray energies as (|I/I₀|_LST − |I/I₀|_IPA)/(|I/I₀|_LST + |I/I₀|_IPA). The calculated contrasts for 300 keV, 450 keV and 1 MeV tubes are 0.95, 0.66 and 0.22, respectively. Although the 300 keV source has the highest contrast, only 1% of the photons are transmitted through the heavy liquid; this increases to 10% for the 450 keV tube. The main limiting factor in implementing an x-ray system is cost. With 10 x-ray tubes the system would cost approximately $800,000, excluding any infrastructure costs.

Another way to circumvent the refractive index mismatch problem is to measure something other than electromagnetic radiation absorption. Since one of the liquids is a salt water solution and the other is an alcohol, the electrical resistance and conductance of the liquids can be measured. One technique that uses electrical resistance for imaging is electrical resistance tomography. In this technique, current sources and voltage probes are affixed to the inside walls of the tank. Current is sent from each source individually and the voltage at each probe is then measured. This is done for every current source.
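The transmission and contrast calculations above follow directly from the Beer-Lambert law and can be reproduced with a short script. This is a sketch only: ρ_eff is the value quoted in the text, while the mass attenuation coefficients and path length below are illustrative placeholders, not the measured values.

```python
import math

def transmitted_fraction(mu_over_rho, rho_eff, t):
    """Beer-Lambert law: I/I0 = exp(-(mu/rho) * rho_eff * t).

    mu_over_rho : mass attenuation coefficient (cm^2/g) at the tube energy
    rho_eff     : effective absorber density of the liquid (g/cm^3)
    t           : path length through the liquid (cm)
    """
    return math.exp(-mu_over_rho * rho_eff * t)

def contrast(f_lst, f_ipa):
    """Relative contrast between the two transmitted fractions."""
    return abs(f_lst - f_ipa) / (f_lst + f_ipa)

# Illustrative values: rho_eff = 1.755 g/cm^3 from the text; the
# attenuation coefficients and thickness here are assumptions.
f_heavy = transmitted_fraction(mu_over_rho=0.25, rho_eff=1.755, t=10.0)
f_alcohol = transmitted_fraction(mu_over_rho=0.02, rho_eff=0.785, t=10.0)
print(contrast(f_heavy, f_alcohol))
```

With real energy-dependent coefficients (e.g., tabulated values for tungsten) this reproduces the trade-off described above: lower tube energy raises contrast but suppresses the transmitted photon count.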

By discretizing the fluid domain, a system of linear equations can then be formed. Ideally, the system could be solved by matrix methods, giving the resistivity throughout the fluid (which represents the density field). This would give the full density field at a single instant in time, so the procedure would have to be repeated at every time step. The main difficulty with this technique is that the current does not travel in straight lines (unlike in x-ray tomography), and therefore the voltage field within the fluid created by a single source depends on the entire domain. This creates an ill-posed problem that has more unknowns than equations and is therefore non-invertible. There are methods to solve this type of problem by utilizing prior information, but usually at the cost of spatial resolution [42]. There is an open-source project, "EIDORS" [1], written in MATLAB, which implements this technique. The difficulty is that with larger systems the program becomes numerically unstable and crashes. This turned out to be an issue in our application because millimeter resolution is needed to resolve the dynamics of the mixing region, and the resulting model is large compared to the models EIDORS was designed for. Therefore, this method was not pursued. A possible way to implement such a system is to be less ambitious and perform 2D or 1D rather than 3D tomography.
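The role of prior information in stabilizing the inversion can be illustrated with a toy version of the problem: fewer measurements than unknowns, solved with Tikhonov (ridge) regularization. The sensitivity matrix below is a random stand-in, not an actual ERT forward model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed problem: 20 boundary measurements, 50 unknown resistivities.
A = rng.standard_normal((20, 50))   # stand-in sensitivity matrix
x_true = np.zeros(50)
x_true[20:25] = 1.0                 # a small resistive anomaly
b = A @ x_true                      # noiseless "measurements"

# Tikhonov regularization: larger lam -> more stable but smoother solution,
# i.e., prior information is bought at the cost of spatial resolution.
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(50), A.T @ b)
```

The regularized normal equations are always invertible even though the raw problem is not, which is exactly the trade-off described above.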

CHAPTER 4

INITIAL PERTURBATIONS

For the Rayleigh-Taylor instability to develop, it is necessary to have initial perturbations present on the interface between the two fluids. Without initial perturbations there will be no misalignment of the density and pressure gradients and therefore no baroclinic torque to drive the instability (fig. 1.3). Also, since we intend to study the instability in the self-similar regime, wavelengths that are small compared to the tank width are required [2]. The self-similar flow is characterized by a progression in time from smaller to larger scales as the mixing width grows. Scales cannot become larger than the tank, and self-similarity therefore ends when the largest scale reaches the tank width. Thus, the initial perturbation wavelength needs to be small enough to allow sufficient time to study the self-similar flow. In the experiments presented here, the small initial perturbations on the interface are either forced or left unforced, in which case background noise acts as the perturbation.

4.1 Forced Initial Perturbations

In the past, initial perturbations were created in our laboratory by horizontally oscillating the tank, which was affixed to a set of horizontal bearings [105, 102]. This method works well for producing large wavelength waves (up to approximately 5 wavelengths per tank width), but it has difficulty creating smaller wavelengths. The problem owes to the fact that waves are created at the side boundary and must propagate inward. An attempt to conceptualize this is made here. For deep water waves, as we have here, smaller wavelength waves travel more slowly and therefore have more time for viscous effects to decrease their amplitude. This creates a non-uniformity of amplitude (the wave amplitude is smaller in the center than on

the boundary). The dispersion relation for gravity waves at an interface is ω = √(tanh(kH) Agk) [12]. When kH = 2, tanh(kH) > 0.96; thus, when H (the depth of the fluid) is greater than 0.32λ we have tanh(kH) ≈ 1. All the experiments performed in this study have an initial wavelength of at most 10 mm and a vertical tank extent of at least 150 mm. Therefore, for all of the small wavelength experiments considered here, we can confidently use the deep water wave relationship. Then ω = √(Agk) and, since the wave speed is defined as c = ω/k, we have c = √(Ag/k). Viscous effects can be included by using the model given by Lamb [53], where viscous damping is added to the water wave differential equation through a damping term of 4νk². After inserting this into the simplified interfacial wave equation, as can be found in equation (A.54), we obtain (where the effects of non-constant gravity and interfacial tension have been omitted),

ä + 4νk²ȧ + Agka = 0.    (4.1)

The form of the solution of the interface (in two dimensions) is η = a(t)e^(ikx). The solution of equation (4.1) is a(t) = A₁,₂ e^((−2νk² ± √(4ν²k⁴ − Akg)) t) = A₁,₂ e^(−2νk²t) e^(±i√(Akg − 4ν²k⁴) t). After accounting for the no slip condition at the boundary (since the wave propagates from the wall), the interface displacement becomes,

η = a₀ e^(−2νk²t) cos(kx − √(Akg − 4ν²k⁴) t),    (4.2)

which shows that for a traveling wave generated at the boundary, the interface amplitude decays as the wave travels inward. This viscous damping of amplitude was observed in calcium nitrate aqueous solution / isopropyl alcohol experiments performed by Wilkonson [105]. As observed in figure 4.1, the amplitude of the waves decreases away from the side walls when the number of wavelengths per tank width is made greater than approximately five.
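The decay implied by equation (4.2) can be evaluated numerically. This is a sketch: the 76 mm tank width matches the 38 mm half-width used below in the text, but the Atwood number and kinematic viscosity are assumed values, so the resulting ratios are illustrative rather than the quoted ones.

```python
import math

def center_amplitude_ratio(n_waves, width=0.076, A=0.48, g=9.81, nu=2.7e-6):
    """Ratio a_hat/a0 of the wave amplitude at the tank center to that at
    the wall, from the damped traveling-wave solution (eq. 4.2)."""
    k = 2 * math.pi * n_waves / width            # wavenumber
    c = math.sqrt(A * g / k - 4 * nu**2 * k**2)  # damped deep-water wave speed
    t = (width / 2) / c                          # travel time to the center
    return math.exp(-2 * nu * k**2 * t)

for n in (1.5, 3.5, 5.5):
    print(n, center_amplitude_ratio(n))
```

The trend is the important result: the ratio falls rapidly as the number of wavelengths per tank width increases, because shorter waves are both more heavily damped and slower.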

Figure 4.1: Images of Jeff Wilkonson's experiment of horizontally forced, small wavelength, Rayleigh-Taylor instability. Obvious non-uniformity due to viscosity is present.

An estimate of the ratio of the theoretical interface amplitude at the center of the tank to that initially at the wall can be obtained from equation (4.2). The center of the tank is at a distance of 38 mm from the wall. For a particular wavelength, the time until the wave reaches the center of the tank can be calculated using the wave speed c = √(Ag/k − 4ν²k²). This time is then used in the expression,

â/a₀ = e^(−2νk²t),    (4.3)

where the decrease in interface amplitude (â) is determined. From equation (4.3), for 1.5 waves: â/a₀ = 0.968, for 3.5 waves: â/a₀ = 0.761 and for 5.5 waves: â/a₀ = 0.428. It is obvious from this that something other than horizontal shaking is needed to produce a uniform small wavelength initial perturbation.

Another way to produce waves at an interface is to parametrically excite the two stratified fluids (where gravity is the parameter being varied) to produce the perturbation uniformly across the interface. Any damping effects traveling through the liquids should occur uniformly across the interface. This method was first implemented by Olson [70] using a large stepper motor to oscillate the entire tank

containing the liquids in the vertical direction. The production of large wavenumber disturbances requires higher frequency oscillations, which proved very difficult to accomplish with Olson's setup. Also, the large mass of the motor produced large forces on connecting joints under the nearly 70g acceleration at the end of the experiment. Because of these difficulties, different mechanisms were considered and tested. One particular concept was to oscillate only the liquids instead of the entire tank, which would reduce the force needed. Both piezoelectric elements and a more aggressive mechanical design, similar to a diaphragm pump placed inside the tank, were tested. These methods did not produce usable parametric waves at the interface due to a variety of implementation difficulties. The piezoelectric elements produced very little amplitude at the interface, not nearly enough to cause parametric waves. The diaphragm pump idea, although promising, was plagued with leaks; in addition, there was not enough distance between the actuator and the interface to yield a uniform velocity distribution. Thus, it was concluded that for practical purposes the entire tank must be oscillated. However, a method to produce high frequency tank oscillations with large displacement and a relatively low mass actuator is required. As observed by Michael Faraday [34] on an air-water interface, the production of small amplitude, small wavelength waves requires only low amplitude vibration. This led to the concept of vibrating the tank with a motor carrying an off-center weight. This method was tested and did show promise, but did not produce observable perturbations as desired. It was realized that more appreciable tank oscillation amplitudes were necessary. This was verified by an analysis of the Mathieu-Hill equation (including viscous effects). This derivation is shown in the Appendix (sec. A.3).
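The competition between parametric forcing and viscous damping can also be seen by integrating the damped Mathieu form of the interface equation directly. This is a sketch, not the Appendix derivation: the wavelength, viscosity and forcing amplitudes are illustrative, and the tank acceleration is modeled as g(t) = g₀ + F cos(ωt) at the subharmonic resonance ω = 2√(kAg₀).

```python
import math

def parametric_gain(F, A=0.5, g0=9.81, lam=0.01, nu=2.0e-6, periods=200):
    """Integrate a'' + 4 nu k^2 a' + k A (g0 + F cos(w t)) a = 0 with RK4
    and return the peak |a|/a0 reached over the given number of periods."""
    k = 2 * math.pi / lam
    w = 2 * math.sqrt(k * A * g0)   # subharmonic (Faraday) resonance
    steps = 100 * periods
    dt = periods * (2 * math.pi / w) / steps

    def f(t, a, v):
        return v, -4 * nu * k * k * v - k * A * (g0 + F * math.cos(w * t)) * a

    a, v, t, peak = 1.0, 0.0, 0.0, 1.0
    for _ in range(steps):
        k1a, k1v = f(t, a, v)
        k2a, k2v = f(t + dt / 2, a + dt / 2 * k1a, v + dt / 2 * k1v)
        k3a, k3v = f(t + dt / 2, a + dt / 2 * k2a, v + dt / 2 * k2v)
        k4a, k4v = f(t + dt, a + dt * k3a, v + dt * k3v)
        a += dt / 6 * (k1a + 2 * k2a + 2 * k3a + k4a)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        peak = max(peak, abs(a))
    return peak
```

Below a forcing threshold set by the viscous damping the perturbation simply decays; above it, the amplitude grows exponentially, which is the behavior the shaker must exceed.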
It was determined that oscillation amplitudes of order 1 mm are required due to the damping effects of viscosity. Larger oscillation amplitudes can be achieved if resonance is utilized. To achieve this, the tank and backlight were mounted on an open enclosure that is affixed on the sides to the test sled using crossed roller bearings, so that it is constrained to move only in the vertical direction. The system uses three springs

in parallel which are held in place on the bottom of the shaker and the test sled. These parallel springs provide a large enough spring constant to achieve the desired approximately 20 wavelengths across the tank width. The equation for Faraday wave resonance, as derived in the Appendix (sec. A.3), is ω_w = √(4(2π/λ)Ag), and this was used to size the springs needed, where ω_s = √(K/m) is the resonance frequency for a spring and K represents the spring constant. This information, along with the availability of springs that would fit the system, was used to choose particular spring combinations. Without including interfacial tension effects, we can compare how changing the mechanical resonance alters the fluid resonance for Faraday waves. By equating the spring resonance equation and the wave resonance equation, we obtain an equation that describes how changing the spring constant affects the wavenumber created,

ω = √(K/m) = √(4kAg)  ⇒  k = K/(4Agm),    (4.4)

where K is the spring constant, ω is the shaking frequency, A is the Atwood number, g is gravity and m is the mass of the system. Small counterweights, free to move front to back to help balance the system and allow for resonance adjustment, are placed on the sides of the resonant box as well. A later design improvement used plastic bottles as counterweights, allowing finer adjustment of resonance by adding lead or tungsten shot. This allows up to approximately a 10% variation in the resonance frequency. The resonant box was initially coupled to a system in which a motor, using a double linkage arm to oscillate a weight vertically, shakes the system (figure 4.2). This system did succeed in producing small wavelength waves on the interface in a uniform manner. However, it was later improved with the use of a voice coil to produce the force that drives the spring-mass system. The voice coil circumvents the restriction that, with the motor, the applied force is a function of frequency.
In the new system, the acceleration amplitude can easily be increased by increasing the current passing through the coil without changing the frequency. A rendering of the shaker system is shown in figure 4.3.
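The spring sizing implied by equation (4.4) amounts to one line of arithmetic. In this sketch the tank width and oscillating mass are assumed round numbers, not the measured rig values.

```python
import math

def spring_constant(n_waves, width=0.1, A=0.56, g=9.81, m=10.0):
    """Total spring constant K (N/m) that places the spring-mass resonance
    at the Faraday resonance for n_waves across the tank width.
    From eq. (4.4): k = K / (4*A*g*m), with k = 2*pi*n_waves/width."""
    k_wave = 2 * math.pi * n_waves / width
    return 4 * A * g * m * k_wave
```

Because K is linear in the wavenumber, quadrupling the number of waves across the tank requires quadrupling the total spring constant, which is why springs were stacked in parallel.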

Figure 4.2: Rendering of the vertically constrained weight/motor system.

Figure 4.3: Rendering of the resonant box used to create the vertical oscillations used for the parametric excitation. 111

Parametrically forced experiments were performed on the WP drop tower to test the resonant system. The fluids used were a pure LST heavy liquid / isopropyl alcohol combination. They were forced at 18 Hz, which corresponds to the resonant frequency of the vertically oscillating shaker box with a particular set of springs. The system was first parametrically forced to create an initial perturbation at the interface and was then made RT unstable with an effective acceleration of approximately 1g. Two different views are presented in figure 4.4 and figure 4.5, where refractive index mismatch was used as the imaging technique. One view displays the three-dimensionality of the instability. It is observed that when small wavelength perturbations are forced, the instability quickly develops and appears to become turbulent and self-similar, as desired.

Figure 4.4: Sequence of images of a 0.56 Atwood number experiment with vertical oscillation to produce the initial perturbation viewed at a downward angle of approximately 20 degrees from above to show the three-dimensionality.

Figure 4.5: Sequence of images of a 0.56 Atwood number experiment, with vertical oscillation to produce the initial perturbation, allowed to progress in time to display the apparent turbulent mixing.

4.2 Background Noise Induced Initial Perturbations

In previous experiments in our laboratory [88] it was observed that under certain circumstances, without forced initial perturbations, a small wavelength disturbance would develop into a turbulent Rayleigh-Taylor instability. These original unforced initial perturbation experiments were performed in a rectangular tank, where the smaller of the two cross-sectional dimensions is still many times larger than the wavelength of the perturbations that develop. These experiments were performed using a combination of isopropyl alcohol and pure LST heavy liquid yielding an Atwood number of 0.56 with an effective acceleration of 0.8g. A sequence of images of one of these experiments is shown in figure 4.6. It is observed from these images

Figure 4.6: Sequence of images of a 0.56 Atwood number experiment performed without forcing initial conditions displaying growth of the instability using a rectangular Plexiglas tank.

that the Rayleigh-Taylor instability develops from what appears to be a perfectly flat interface. Eventually, a small scale disturbance appears at approximately t =

150 ms, with a dominant wavelength that compares favorably to the fastest growing wavelength calculated from viscous linear stability theory, as presented in section A.1 of the Appendix. The initial perturbation for these experiments is believed to arise from background noise, which is elaborated on in the next section. The flow quickly becomes turbulent, as evidenced by the development of a large range of scales in time. Since the only length scales present are the dominant scale within the mixing region and the width of the mixing region itself, and since these are coupled (both increasing in time), the flow does appear to have characteristics of self-similarity.

4.2.1 Background Noise

Determining the size and cause of the initial perturbations that drive the instability during an unforced experiment is useful and may present a possibility to control them. At an early time in the experiment's progression, at the point where perturbations first become visible, the wavelength and amplitude were noted. Using viscous linear stability theory, the growth rate was calculated for the corresponding wavenumber, fluid properties and acceleration using the eigenvalue relationship presented in equation (A.30). Using the measured wavenumber, computed growth rate and the amplitude of the first visible perturbation at its corresponding time, an approximate initial amplitude is then determined by extrapolating backwards in time

using the solution from inviscid linear stability theory, a = C₁e^(√(kAg_eff) t) + C₂e^(−√(kAg_eff) t) [88]. Here only the positive exponential was used, because if the coefficients are assumed to be of the same order, the first term is exponentially larger than the second: e^(√(kAg_eff) t)/e^(−√(kAg_eff) t) = e^(2√(kAg_eff) t). With a growth rate of √(kAg_eff) = 100 s⁻¹, the first term is already over 100 times larger after 10 percent of the experimental time has passed, and observable perturbation growth does not occur until 50 percent of the time has passed. The initial amplitude was determined to be of nanometer scale using this method.

Nanometer scale initial amplitude suggests the initial perturbations originate from molecular motion and therefore should be temperature dependent. One way

of possibly proving or disproving this hypothesis would be to vary the temperature of the liquids to determine whether there is a noticeable time difference in the onset of growth of these initially unobservable perturbations. To estimate the initial perturbation amplitude, the kinetic theory of liquids was used. It is presumed that the thermal velocity at the interface is what perturbs the interface and establishes the initial amplitude. The thermal velocity of liquids is proportional to e^(−W/KT), where W is a measure of the molecular energy, K is the Boltzmann constant and T is temperature [36]. Since the thermal velocity varies exponentially with temperature, it seems reasonable that a small change in temperature may show a noticeable change in growth rate, thus allowing us to verify this effect. Experiments were performed at −10, 0 and 10 degrees Celsius, where changes in density due to temperature were accounted for by matching Atwood numbers after the liquids were cooled. However, there was no noticeable difference in the time of appearance of observable perturbations. This disproves the hypothesis that the initial perturbations are due to thermal motion of the molecules.

It is known that diffusion reduces the initial growth of the Rayleigh-Taylor instability, as shown by Duff, Harlow and Hirt [32]. This smaller growth rate has to be accounted for, or our calculated value for the initial amplitude would be incorrect. An initial diffusion thickness of 0.5 to 1 mm was obtained by considering the development of the diffusion region, which grows as √(Dt), assuming that 5 to 10 min pass from when diffusion begins until the experiment is performed. The diffusion coefficient used here (1.12 × 10⁻¹⁰ m²/s [77]) is approximated by taking the average of the diffusion coefficients for ethanol into water and water into ethanol.
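The diffusion-layer estimate is a one-line calculation. Note the prefactor: √(Dt) alone sets the diffusive length scale, while the apparent thickness of an error-function profile is conventionally several times that; a factor of 4 is assumed here as one such convention.

```python
import math

D = 1.12e-10  # m^2/s, averaged ethanol/water diffusion coefficient [77]

def diffusion_thickness(minutes, prefactor=4.0):
    """Estimated diffusion-layer thickness (m): prefactor * sqrt(D * t).
    The prefactor (an assumption here) reflects how the edges of the
    erf concentration profile are defined."""
    return prefactor * math.sqrt(D * minutes * 60.0)
```

With this convention, 5 to 10 minutes of diffusion gives a thickness in the sub-millimeter to millimeter range, consistent with the 0.5 to 1 mm estimate above.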
In order to verify this approximate diffusion thickness, measurements were taken by sweeping a laser beam from above to illuminate the interface. Because of the large refractive index difference between the liquids, a light channel effect at the interface was observed, and therefore the diffusion thickness could be measured assuming that only this width is illuminated (fig. 4.7). Essentially, the light rays were partially trapped inside the diffusion region (due to total internal reflection) and thus only the diffusion region is visible. The camera was positioned

Figure 4.7: Image in which a sweeping laser illuminates the interface causing a light channel effect allowing the diffusion thickness to be measured. 117

lower than the interface, creating a slight 5.5° angle to the interface, since the small thickness would be difficult to resolve otherwise. Calculations were performed to account for this angle. There is some ambiguity as to whether the front or back of the diffusion region is observed, so calculations were performed for all possibilities, but the differences were minimal. The resulting thickness was approximately 0.5 mm, which is consistent with our previously approximated result.

Using the diffusion thickness determined above, calculations were performed following the theory of Duff, Harlow and Hirt. The boundary value problem,

d/dσ[(1 + AQ) dw/dσ] = w[a²(1 + AQ) − aψ dQ/dσ],    (4.5)

is solved, where a, σ and Q are analogous to the values of wavelength, vertical displacement and density, rescaled to include a finite diffusion zone thickness. The value of Q is calculated by integrating through the diffusion region, and the factor ψ decreases the growth rate n, as

n² = kAg/ψ.    (4.6)

Equation (4.6) is a representation of the inviscid RT growth rate from linear stability theory with a scaling factor included. Including parameters to model the effect of viscosity, the growth rate takes the form,

n = √(kAg_eff/ψ + ν²k⁴) − νk².    (4.7)

Equation (4.5) is solved for the parameter ψ. A program was written in Fortran to solve equation (4.5) using the shooting method; it is contained in section B.6 of the Appendix. When considering an initial diffusion thickness of 0.5 mm to 1 mm, an initial amplitude of 1 × 10⁻⁷ m to 1 × 10⁻⁶ m is calculated.

The initial amplitude was also determined for an immiscible experiment. Since the experiment is immiscible, there is no diffusion. Without the effects of diffusion

affecting our result, one can extrapolate backwards from the point at which a perturbation first becomes visible to obtain an initial amplitude (as was done for the miscible experiments). The resulting initial amplitude is 1 × 10⁻⁷ m to 1 × 10⁻⁶ m, the same initial amplitude determined from the miscible experiments when diffusion was taken into account. These initial perturbations are more believable than the nanometer scale ones calculated earlier.

Perturbations of micrometer scale may be caused by background vibrations. If we were to consider them to be caused by sound, they would again be too small, as can be concluded from a quick calculation [75]. If we assume a background noise level of 70 dB (the average for busy traffic, which

will be assumed for our laboratory environment), we calculate p_rms = p_ref 10^(L_p/20) = 0.063 Pa, where L_p is the decibel level and p_ref is the reference pressure (2 × 10⁻⁵ Pa). From this we can deduce an RMS velocity, v = p_rms/(ρc) ≈ 2 × 10⁻⁸ m/s, where ρ (the density) was taken as the average of the LST heavy liquid and isopropyl alcohol densities and c (the speed of sound) was taken as 1500 m/s, corresponding to that of water. An amplitude can be deduced from this velocity by assuming a

sinusoidal perturbation, A_rms = V_rms/ω ≈ 2 × 10⁻¹⁰ m. Here ω was calculated by assuming gravity waves are produced, thus ω = √(kAg), where k corresponds to the fastest growing wavelength observed for this particular calculation. We must also consider that sound will not couple well to the fluid in our tank due to the very large impedance mismatch between air and the Plexiglas tank. The acoustic impedances of air, Plexiglas and water are 0.00033 × 10⁶, 3.15 × 10⁶ and 1.48 × 10⁶ Pa s m⁻¹, respectively [80]. The amount of sound reflected is given by,

R = ((Z₂ − Z₁)/(Z₂ + Z₁))²,    (4.8)

where Z₁ is the acoustic impedance of the first material, Z₂ is the acoustic impedance of the second material, and the transmitted fraction is given by T = 1 − R. Therefore, less than 1% of the sound will be transmitted into the liquids. From this approximation it is clear that sound waves could not produce the required initial perturbation.
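The chain of acoustic estimates above can be collected into a short script. The impedances are the approximate values quoted in the text; the mean liquid density used here is an assumed round number.

```python
def p_rms(level_db, p_ref=2e-5):
    """RMS pressure (Pa) from a sound pressure level in dB."""
    return p_ref * 10 ** (level_db / 20)

def transmitted_fraction(z1, z2):
    """Fraction of incident acoustic energy transmitted across an interface,
    with the reflected fraction R from eq. (4.8)."""
    r = ((z2 - z1) / (z2 + z1)) ** 2
    return 1.0 - r

p = p_rms(70)                    # ~0.063 Pa for a 70 dB background
v = p / (1800.0 * 1500.0)        # rms fluid velocity; assumed mean density
T = transmitted_fraction(0.33e3, 3.15e6)  # air into Plexiglas
```

Both numbers point the same way: the in-fluid velocity is orders of magnitude too small, and the impedance mismatch blocks all but a fraction of a percent of the incident sound.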

Vibration in the equipment is therefore the most likely cause.

Other ways to produce small amplitude perturbations were also explored. Adding particles to the interface is one way to possibly alter the initial perturbation. Hollow glass spheres, with a diameter of approximately 12 micrometers and a density between that of the alcohol and the heavy liquid, were added to the interface. Two experiments were performed. In one experiment, a spray mister containing a suspension of these particles in LST heavy liquid was used to avoid clumping (fig. 4.8). In the other experiment, the particles were sprinkled on the interface in a manner that allowed them to clump, causing larger perturbations (fig. 4.9). It was observed

Figure 4.8: Experimental images progressing in time in which a diluted LST heavy liquid / 90% ethanol - 10% water fluid combination giving an Atwood number of 0.48 was used. Hollow glass spheres were sprayed on the interface such that very little clumping takes place.

that in the experiments where the particles were sprayed on, there is no obvious difference from the experiments where no particles were introduced onto the interface, as displayed in figure 2.12. This is expected, since initial amplitudes on the order of micrometers were estimated (which is approximately the sphere size) and wavelengths less than a few millimeters would be highly damped by viscosity. The more interesting case is when the particles are allowed to clump together, which creates much larger amplitudes and wavelengths. In this case there was indeed a noticeable effect: some regions along the interface showed noticeable, larger wavelength perturbations. This is

Figure 4.9: Experimental images progressing in time in which a diluted LST heavy liquid / 90% ethanol - 10% water fluid combination giving an Atwood number of 0.48 was used. Hollow glass spheres were sprinkled on the interface such that clumping occurred, causing an inhomogeneous perturbation distribution.

understandable, since some of the clumped particles will effectively produce a larger wavelength perturbation with an amplitude larger than 12 microns, which would affect RT growth. These experiments verify that small amplitude perturbations at the interface do affect the initial growth of the instability.

CHAPTER 5

NUMERICAL SIMULATIONS

Numerical simulations were performed to closely match the experiments at a reasonable computational cost. These simulations were carried out remotely on Lawrence Livermore National Laboratory computers using the incompressible side of the Direct Numerical Simulation (DNS) capable, parallelized Miranda code, which solves the Navier-Stokes equations in an Eulerian framework [60],[14]. The computers used for the calculations are Appro clusters that mostly use AMD Opteron processors. Simulations were done primarily on Hera, Atlas and Zeus. Eventually, Zeus was converted to RZZeus, which uses Intel Xeon processors. All the processors operate at either 2.3 or 2.4 GHz. Processor usage was usually kept at 400 processors so that jobs could be transferred between all three clusters depending on the resources available. Since this study works with liquids, and thus small diffusivities, we are unable to fully resolve the diffusive scales without using unreasonable amounts of computational power. For this reason we can consider this a partial DNS in which all but the diffusive scales are resolved; filtering is used to smooth only the sub-viscous smaller scales. Convective and viscous length scales are resolved, while numerical values for diffusivity are introduced in order to control numerical stability. The smallest viscous scales present will appear in the self-similar turbulent mixing region. One such measure of this length scale is given by Chertkov [10] as l ∼ (ν³/(A²g_eff² t))^(1/4). This scale is based on the late time self-similar Rayleigh-Taylor turbulent mixing regime. In our configuration this yields a viscous length scale of approximately 0.05 mm, giving a starting point for the grid size needed. The physical Schmidt number, Sc ≡ ν/D, is of order 10,000, calculated using an approximate diffusion coefficient and the average kinematic viscosity.
The diffusion coefficient is calculated by averaging the two diffusion coefficients of ethanol into

water and water into ethanol, both at infinite dilution, yielding 1.12 × 10⁻¹⁰ m²/s [77]. The average kinematic viscosity of ethyl alcohol and diluted LST heavy liquid is calculated as 2.69 × 10⁻⁶ m²/s. It was found that for Sc ≈ 10,000 the simulation becomes numerically unstable at a grid size of 0.06 mm. For this reason, the convergence of the growth rate versus Schmidt number was investigated using 2D simulations. Figure 5.1 (a) shows the growth rate, α, determined by fitting a line

through the late time √h_b versus t√(Ag_eff) plots, as was done previously to determine α for the experiments. Besides one outlier (most likely due to errors introduced by restarting simulation runs), the plot shows little dependence on Schmidt

number. The bubble portion of the mixing layer width (h_b) versus time for various Schmidt numbers was also plotted in figure 5.1 (b). This plot shows convergence as the Schmidt number increases. A Schmidt number of 10 appears to give reasonable convergence at early time and good convergence at late time; therefore, this value was used in this study.


Figure 5.1: Late time bubble growth parameter vs. Schmidt number (a) and hb vs. time (b), all with a grid size of 0.06 mm.

The solution procedure is as follows. First, the continuity equation is used

to obtain the density field at the next time step. The next step is to solve the momentum equations. This is done in two steps (similar to pressure projection methods). First, an intermediate step is taken in time in which the pressure is neglected. The second step, which completes the time step, is done by taking the divergence of the remaining pressure term, yielding a Poisson equation [97]. Since periodic boundary conditions are used in the horizontal directions, an FFT of the Poisson equation is first taken, after which it is solved and then transformed back into physical space. This newly calculated pressure field is then used to correct the velocities [63]. The spatial finite difference scheme used is a 10th order accurate compact scheme [55]. Either a 3rd order Adams-Bashforth-Moulton (ABM) predictor-corrector or a 4th order accurate Runge-Kutta (RK) method is used to advance the equations in time. Initially, the ABM method was used with no issues with numerical stability; it is preferred to the RK method for its decreased computational cost. It was, however, discovered that when liquids with a larger density difference (yielding a stiffer problem) were simulated, numerical stability became problematic with the ABM method. This was remedied by switching to the more costly but more stable RK method.

The main objective of these simulations is to compare three-dimensional simulations with their corresponding experiments. The main attribute compared is the late time growth rate, α. However, 3D simulations are costly, and thus 2D simulations are first performed for a particular configuration to make success more likely. Due to time and resource constraints, we must make certain to run at a resolution which is fully resolved, but not overly so. This was done by performing convergence tests to determine the appropriate resolution. Figure 5.2 (a) shows the growth rate, α, for various resolutions, determined by fitting a line through the

late time √hb versus t√Ageff plot. This plot shows a decrease in α with decreasing grid size, but then a small jump at a grid size of 0.03 mm. The jump is mostly likely due to restarts of the simulation for this high resolution, computationally expensive case. We can conclude that a grid size of 0.06 mm shows reasonable convergence.

Also, h_b versus time for various grid sizes is shown in figure 5.2 (b). This plot shows convergence as grid size decreases. A grid size of 0.06 mm appears to produce reasonably resolved results at early time and good convergence at late time, so this grid size was chosen.
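The FFT-based pressure Poisson solve described above can be illustrated with a minimal one-dimensional periodic sketch. This is an assumption-laden reconstruction in Python, not the actual solver (which also employs 10th order compact differences and operates in three dimensions with the FFT applied only in the periodic horizontal directions):

```python
import numpy as np

def solve_poisson_periodic(f, L):
    """Solve d2p/dx2 = f on a periodic domain of length L via FFT.

    The k = 0 (mean) mode of a periodic Poisson problem is
    indeterminate and is set to zero here.
    """
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers
    f_hat = np.fft.fft(f)
    p_hat = np.zeros_like(f_hat)
    nonzero = k != 0.0
    p_hat[nonzero] = -f_hat[nonzero] / k[nonzero] ** 2
    return np.fft.ifft(p_hat).real

# Manufactured solution p = sin(x), for which p'' = -sin(x)
L = 2.0 * np.pi
x = np.linspace(0.0, L, 128, endpoint=False)
p = solve_poisson_periodic(-np.sin(x), L)
assert np.allclose(p, np.sin(x), atol=1e-10)
```

The spectral solve is exact for resolved modes, which is why the manufactured single-mode solution is recovered to round-off.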


Figure 5.2: Late time bubble growth parameter vs. grid size (a) and hb vs. time (b), all with a Schmidt number of 10.

The temporal resolution is determined by the Courant-Friedrichs-Lewy condition, in which the factor of safety could be adjusted. The safety factor often had to be adjusted depending on the specific configuration. Too small a value caused the time step to become extremely small, wasting resources; too large a value caused excessive ringing in the simulation and an eventual crash.

As described in section 4.2, early in this study it was realized that when large Atwood number experiments were performed without forcing initial perturbations at the interface, turbulent Rayleigh-Taylor growth was observed after some time. By extrapolating backwards using linear stability theory, it was determined that the initial perturbations were of micrometer to nanometer scale. Attempts were then made to numerically simulate the results to see if the numerics would agree with the experiments. Initial attempts to directly perform the simulations using white noise of nanometer scale as the initial perturbation proved to be problematic. RTI growth was eventually observed in these simulations; however, the time of appearance was later than in the experiments. Simulations with micrometer scale white noise initial conditions were also performed, and these appeared to match the experiments better.

Another method for calculating the growth of the very small perturbations was also developed. Using the viscous linear stability analysis presented in section A.1 of the Appendix, the initial nanometer scale perturbations were advanced in time to a point where linear stability theory still holds, since there is no reason to waste computational time on something linear stability theory can accomplish. The eigenvalue relationship (eq. A.30) was used to determine growth rates for all wavenumbers, and these were then used in the inviscid relationship for Rayleigh-Taylor growth.
This is not entirely correct, since only the initial growth rate, and not the full viscous flow field, is determined; however, this was done due to time constraints. A fully derived velocity field from viscous linear stability theory was also calculated, and this will be used in future simulations.
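The per-wavenumber precomputation described above can be sketched as follows. Because the eq. (A.30) eigenvalue relation is not reproduced in this chapter, the sketch substitutes an assumed stand-in growth rate, σ(k) = √(kAg_eff) − 2k²ν (inviscid growth with viscous damping); the actual code used the viscous eigenvalue solution, and the wavenumber band, amplitudes, and viscosity below are illustrative:

```python
import numpy as np

def advance_spectrum(a0, k, A, g_eff, nu, t):
    """Grow each mode's amplitude a0(k) forward in time using an
    assumed growth rate sigma(k) = sqrt(k*A*g_eff) - 2*k**2*nu,
    a stand-in for the eq. (A.30) viscous eigenvalue."""
    sigma = np.sqrt(k * A * g_eff) - 2.0 * k ** 2 * nu
    return a0 * np.exp(sigma * t)

# Nanometer-scale white noise over a band of wavenumbers (assumed values)
rng = np.random.default_rng(0)
k = np.linspace(1.0e2, 1.0e4, 64)     # wavenumbers, 1/m
a0 = 1.0e-9 * rng.random(64)          # initial amplitudes, m
a = advance_spectrum(a0, k, A=0.48, g_eff=9.81, nu=1.0e-6, t=0.05)
assert np.all(a >= a0)                # every unstable mode has grown
```

The advanced spectrum would then seed the nonlinear simulation, avoiding the expense of computing the strictly linear phase numerically.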

CHAPTER 6

RESULTS AND DISCUSSION

6.1 Experimental Qualitative Results

Experiments were performed primarily with large Atwood number liquid combinations on the weight and pulley (WP) drop tower. Experiments were performed with both forced and unforced initial conditions and with both miscible and immiscible liquid combinations. Experiments were also performed on the linear induction motor (LIM) drop tower to provide data at larger accelerations; these larger accelerations are required to observe RTI in unforced, smaller Atwood number experiments.

For the immiscible experiments, the fluids were initially added to the tank with care taken to remove all air bubbles. Experiments were performed repeatedly without emptying the tank. Enough time (approximately 10 minutes) was allowed to pass between experiments for emulsion activity at the interface to dissipate. Displayed in figure 6.1 is a sequence of images of a typical experiment in which the heavy liquid is diluted LST heavy liquid (with AOT added as a surfactant) and the light liquid is 5 cSt silicone oil, yielding an Atwood number of 0.48. The refractive index mismatch imaging technique was used here, as with all the LST heavy liquid cases except when anethole is used. It is observed that a range of scales develops at late time. The initially small structures (having multiple wavelengths) merge into larger and larger structures as time progresses. Since a range of small scales is initially present, once the linear regime is no longer valid, non-linear mode-coupling will allow the creation of additional wavelengths, and this is observed. As the mixing layer width becomes larger, so do the scales present in the mixing region, which is consistent with the concept of self-similarity. By using the eigenvalue expression derived from the viscous linear stability analysis including interfacial tension (eq. A.31), we can calculate the interfacial tension of the fluid combination

Figure 6.1: Experimental images progressing in time in which a diluted LST heavy liquid with AOT as a surfactant (bottom liquid) / 5 cSt silicone oil (top liquid) combination giving an Atwood number of 0.48 was used.

by using the observed dominant wavelength at the onset of the instability. At a time of approximately 100 ms, the instability is still in the linear regime and a dominant wavelength of approximately 4.3 mm is clearly visible. Using this wavelength and the known viscosities of the two liquids, it was determined (using a program written in Matlab, contained in section B.5 of the Appendix) that the interfacial tension is 2.1 mN/m. This value is consistent with the value of 1.6 mN/m found for the heptane / calcium nitrate solution combination with AOT added, as used by Waddell in 1999 [102].

Although it was expected that surfactant was needed, an experiment was performed without surfactant to verify this assumption. In figure 6.2 an experiment with the same parameters is presented, where the only difference is the lack of surfactant. In this experiment only a few wavelengths across the tank were observed, and the domain is not large enough to display a large range of scales. Once again, an attempt was made to determine the interfacial tension using the experimentally observed wavelength. Although it was difficult to determine a dominant wavelength, it was approximated to be 11 mm, which yields an approximate interfacial tension of 20 mN/m. Thus, adding surfactant produces a factor of 10 decrease in interfacial tension.

It is insightful to examine the forces associated with buoyancy and interfacial tension in light of these observations. The buoyancy force on an individual bubble or spike is approximately (ρ_2 − ρ_1) g_eff V, and the equilibrium restoring force of interfacial tension is γλ. Taking the ratio of the buoyancy force to that of interfacial tension yields,

\frac{(\rho_2 - \rho_1)\, g_{\mathrm{eff}}\, V}{\gamma \lambda} = \frac{(\rho_2 - \rho_1)\, g_{\mathrm{eff}}\, \lambda^2}{\gamma}, \qquad (6.1)

which should tell us how the two forces compare and whether we are observing what is expected. From this expression, holding the ratio at order one, an increase in the interfacial tension by a factor of 10 (by removing the surfactant) increases the dominant wavelength by a factor of approximately 3 (i.e. √10). This agrees with what is observed in the experiments. It can also be noted that using the LIM (with a ten-fold increase in acceleration)

Figure 6.2: Experimental images progressing in time in which a diluted LST heavy liquid without a surfactant (bottom liquid) / 5 cSt silicone oil (top liquid) combination giving an Atwood number of 0.48 was used. These experiments are unforced and were performed on the WP apparatus.

resulted in the same decrease in wavelength as that produced by adding surfactant.

Parametrically forced experiments with this liquid combination were also performed. The smallest wavelength perturbations that could be produced with this setup were approximately 3 mm, at a forcing frequency of approximately 37 Hz. One of these experiments is shown as a sequence of images in figure 6.3. It is observed that there are initially many wavelengths across the interface, and there does appear to be a range of scales present at late time. However, this is not as apparent as in the unforced case. By forcing a particular wavelength, we are possibly decreasing the amount of mode coupling that takes place, which may affect the self-similar regime; thus, the effects of forcing on α must be investigated further. It is also evident that the forced experiments develop into the turbulent, self-similar regime earlier than the unforced ones. With a larger initial amplitude, the forced experiments would be expected to progress through the four RTI stages more rapidly. However, once the experiment is in the self-similar regime, the comparison of α for all experimental cases should still be valid. To verify this, α versus time must be calculated.

Unforced, miscible experiments with the same Atwood number were also performed. Figure 6.4 shows a sequence of images in which diluted heavy liquid is used in conjunction with a 90% ethanol - 10% water mixture, yielding an Atwood number of 0.48. In these experiments it is noticed that the first observable interfacial waves appear after twice as much time has passed compared to the immiscible experiments. The initial perturbations do appear to be smaller than their immiscible counterparts and are of varying scale. At late time the development of a range of scales is also observed.
Another interesting difference observed between miscible and immiscible experiments is that the mixing region appears darker in the miscible experiments than in the immiscible ones. A darker mixing region implies more light lost, which occurs at refractive index jumps, and is thus indicative of the number of wavelengths encountered. This implies that smaller wavelengths are present in the miscible experiments. Without the stabilizing effect of interfacial tension on small wavelengths, a larger range of wavelengths is present at late time. It is therefore very

Figure 6.3: Parametrically forced WP experimental images progressing in time in which a diluted LST heavy liquid with surfactant (bottom liquid) / 5 cSt silicone oil (top liquid) combination giving an Atwood number of 0.48 was used.

Figure 6.4: Miscible, unforced liquid experiments performed on the WP apparatus with diluted heavy liquid (bottom liquid) / 90% ethanol - 10% water (top liquid) having an Atwood number of 0.48.

promising that the miscible experiments have achieved true self-similarity because of the larger range of scales achieved.

Forced, miscible experiments with the same Atwood number as the unforced ones were also performed. Figure 6.5 shows a sequence of images in which diluted heavy liquid is used in conjunction with a 90% ethanol - 10% water mixture, yielding an Atwood number of 0.48. In this case the tank was oscillated vertically at approximately 25 Hz, producing Faraday waves for the initial conditions. In these experiments it is observed that the mixing region develops nearly immediately, as was observed in the forced, immiscible experiments. At late time we also observe the development of a range of scales. As with the unforced experiments, the mixing region appears darker in these experiments than in the immiscible ones; the darker mixing region again implies the presence of smaller wavelengths. It is also interesting to note that while the miscible, unforced experiment shows a slight 3-region effect, the miscible, forced experiment does not. This further supports our hypothesis that front-back rollups are part of the cause of the 3-region effect: the small amplitude, large wavelength perturbation does not have a chance to grow in the forced case before it is drowned out by the early development of the mixing region.

Shown in figure 6.6 is a comparison of the late time behavior of all four cases. Experimental images are compared at a time at which the mixing layer width is the same. As mentioned previously, the unforced experiments take longer to develop than their forced counterparts, and the miscible experiments take longer to develop than their immiscible counterparts. The forced experiments develop faster than the unforced ones, most likely due to the larger initial amplitude, and the miscible experiments develop slower, most likely due to diffusion effects.
When comparing the two immiscible experiments, it is observed that the dominant wavelength appears similar. This is consistent with the assumption that forcing small wavelengths does not alter how the flow evolves once self-similarity is reached and that initial conditions are forgotten in a self-similar flow. Comparing the dominant length scales between the two miscible experiments yields a similar conclusion. However, it is observed that

Figure 6.5: Miscible, forced liquid experiments performed on the WP apparatus with diluted heavy liquid (bottom liquid) / 90% ethanol - 10% water (top liquid) having an Atwood number of 0.48. The forcing frequency here is 25 Hz.

the dominant length scale of the immiscible case is not the same as the dominant length scale of the miscible case. Also, the miscible experiments have darker mixing regions implying the presence of smaller scales as described previously. From this qualitative comparison, it appears that forcing does not affect the late-time flow, while miscibility does.

Immiscible, Unforced    Miscible, Unforced

Immiscible, Forced    Miscible, Forced

Figure 6.6: Comparison between forced, unforced, miscible and immiscible experiments performed on the Weight and Pulley apparatus for the 0.48 Atwood number case. Images were chosen at times at which the mixing layer width is approximately matched between experiments.

In an attempt to better understand the refractive index mismatch technique and whether it corrupts the data, a set of experiments was performed in which anethole was used as the light liquid and a diluted LST heavy liquid solution was used as the heavier liquid. The Atwood number in these experiments was 0.49 and the two fluids had matched refractive indices. It is observed that the development of the mixing region, as shown in figure 6.7, is similar to that of the immiscible, unforced,

unmatched refractive index case.

Experiments were also performed on the LIM apparatus. A montage of images of one such experiment is shown in figure 6.8, in which diluted LST heavy liquid with AOT as a surfactant is the bottom liquid and 5 cSt silicone oil is the top liquid. The Atwood number for these immiscible, unforced experiments is 0.48, the same as in the WP experiments. A wide range of scales develops, and a progression to larger and larger scales appears at later times. The large contrast of the mixing region at later time also suggests the presence of very small scales at late time (as occurs with the miscible WP experiments). From visual inspection of the mixing region, it appears as though self-similarity is achieved here. From observation of the dominant wavelength that develops while still in the linear regime, we conclude that the fastest growing wavelength is approximately 2.7 mm. From a quick calculation, this is indeed what is expected when compared to the 4.3 mm fastest growing wavelength from the corresponding WP experiment. From inviscid linear stability analysis we know that an individual wavenumber grows as e^{√(kAg_eff) t} [88], and viscous damping causes waves to decay as e^{−2k²νt} [53]. With the viscous damping approximation, RTI growth takes the form,

a = a_0\, e^{\sqrt{k A g_{\mathrm{eff}}}\, t}\, e^{-2 k^2 \nu t}. \qquad (6.2)

From equation (6.2) the maximum of the exponent (with respect to k) is found from,

\frac{\partial}{\partial k}\left(\sqrt{k A g_{\mathrm{eff}}}\, t - 2 k^2 \nu t\right) = 0, \qquad (6.3)

from which it is determined that,

k^3 \sim \frac{A g_{\mathrm{eff}}}{\nu^2}. \qquad (6.4)

Thus, a factor of 10 increase in acceleration (as is the case with the LIM) yields a factor of 10^{1/3}, or 2.15, decrease in wavelength, which is roughly consistent with what is observed. The remaining discrepancy owes to the fact that the effect of interfacial tension was not included in this quick calculation.
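Carrying the maximization of equation (6.2) through exactly (the text quotes only the scaling of eq. 6.4) gives k³ = Ag_eff/(64ν²). A minimal sketch, using an assumed illustrative kinematic viscosity and neglecting interfacial tension as in the quick calculation above, confirms the 10^{1/3} wavelength reduction for a ten-fold acceleration increase:

```python
import math

def fastest_growing_wavelength(A, g_eff, nu):
    """Wavelength maximising exp(sqrt(k*A*g_eff)*t - 2*k**2*nu*t).

    Setting the k-derivative of the exponent to zero gives
    k^3 = A*g_eff / (64*nu**2); the wavelength is 2*pi/k.
    """
    k = (A * g_eff / (64.0 * nu ** 2)) ** (1.0 / 3.0)
    return 2.0 * math.pi / k

# Illustrative values: A = 0.48, nu = 1e-6 m^2/s (an assumed viscosity)
lam_1g = fastest_growing_wavelength(0.48, 9.81, 1.0e-6)
lam_10g = fastest_growing_wavelength(0.48, 98.1, 1.0e-6)
# A ten-fold acceleration increase shortens the wavelength by 10**(1/3)
assert abs(lam_1g / lam_10g - 10.0 ** (1.0 / 3.0)) < 1e-9
```

Because λ ∝ g_eff^(−1/3), the ratio of wavelengths is independent of the assumed viscosity value.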

Figure 6.7: Immiscible experiment in which LST heavy liquid is the bottom fluid and trans-anethole is the top one. The Atwood number here is 0.49; this is a matched refractive index experiment performed for comparison with the LST heavy liquid / silicone oil unmatched experiment.

Figure 6.8: Immiscible liquid experiments performed with diluted heavy liquid (bottom liquid) / 5 cSt silicone oil (top liquid) with AOT added as a surfactant, having an Atwood number of 0.48. These experiments were performed on the LIM apparatus without forcing.

6.2 Experimental Quantitative Results

In this study the main quantity of importance is the width, h, of the mixing region, and it is desired to determine how this quantity progresses in time. The mixing layer width can be divided into that of the bubble (the portion in which the less dense mixing region grows into the heavy liquid), h_b, and that of the spike (the portion in which the more dense mixing region grows into the light liquid), h_s. The widths of the bubble and spike regions are found by subtracting the locations of the spike/bubble extents from the initial interface location. Image analysis was performed using programs written in Java utilizing the ImageJ and JExcel libraries as the main components (sec. B.7 and B.8). This choice was not arbitrary. ImageJ is an open source, Java (and therefore platform independent) image analysis program. It is a good choice for image analysis because it allows one to interactively manipulate images with little required programming and to create macros and plugins to perform advanced functions. The mixing layer widths from the images were then input into an Excel spreadsheet where the data were analyzed. Excel was chosen over Matlab because it allows one to easily follow and alter individual calculations without having to recompile. To automate the process (thus eliminating some human error), the program was written in Java, calling individual libraries to run the ImageJ subroutines and then inputting the derived data into an Excel spreadsheet, using the JExcelAPI library, and performing calculations within it. Programming in Java (and utilizing OpenOffice for post processing) allows any platform to be used without the necessity of licensing.

The first step in the process was to obtain an intensity average across each row of the usable mixing region area for each frame. The usable mixing region area was determined by neglecting structures created by interaction with the walls of the tank.
Depending on the particular imaging technique, slightly different methods were utilized to extract bubble and spike widths. For both absorption and refractive index imaging techniques, the measurements had to be taken while accounting for inhomogeneities in the tank and in the backlight. Ideally, one could divide each

experimental image by an image in which the tank was empty to account for the non-uniformities. However, in addition to the backlight intensity being non-uniform in space, it was also found to be non-uniform in time (the backlight brightness changes throughout the day as its temperature increases). For this reason, all of the experimental images for a particular run were divided by the first image, and then the top and bottom fluid regions were re-multiplied by an average of a representative area of the corresponding fluid. This area was taken close to the interface to account for the backlight intensity variations towards the extremities of each fluid.

For the absorption images, boundaries on the mixing region were determined directly from the concentration of the dyed fluid by using Beer's law as described previously (sec. 3.2). First, after the initial rescaling process, an average representative area in the lighter liquid (near the interface, to minimize vertical inhomogeneity effects) is used as a constant I_0.

This value is used to obtain I/I_0 by dividing it through all the images. The natural log is then taken, leaving the exponent in Beer's law. The next step is to subtract an average representative area in the darker, heavier fluid and divide by an average representative area in the lighter fluid,

C(x, y) = \frac{\ln\frac{I}{I_0} - \left.\ln\frac{I}{I_0}\right|_{\mathrm{drk}}}{\left.\ln\frac{I}{I_0}\right|_{\mathrm{lght}}}. \qquad (6.5)

Equation (6.5) yields the concentration of the lighter fluid in the heavier, darker one (since the data have been normalized on a [0,1] scale, the absorption coefficient in the exponent of Beer's law has been removed). The data from each image are then input into an array and row averages are taken. This creates a concentration profile for each frame. To determine the mixing layer width, the bubble region width and the spike region width, it is necessary to determine the extent of the mixing region. Marching through the averaged concentration array from the heavy to the light liquid, every value is compared to that of the darker, heavier liquid. When

this value matches the desired percentage of the pure heavy liquid concentration, the vertical pixel position is noted and converted to millimeters, giving the width. Three different thresholds for the mixing region extent were obtained, defined as (95 to 5)%, (90 to 10)% and (80 to 20)%, where the larger percentage represents the bubble and the smaller percentage the spike. These different extent thresholds were compared to determine which is the most consistent between different experimental runs. It was found that first taking ensemble averages of the raw images of many experiments gives smoother data, so this was the method primarily used.

For the refractive index mismatch imaging experiments, since concentration measurements are not obtained, a consistent way of extracting the mixing layer widths needed to be developed. After the initial rescaling process has been performed, the data from each image are put into an array and row averages are taken. Using representative quiescent regions in the light and heavy liquids, we march towards the interface from each of the fluids separately for the bubble and spike widths, comparing the row-averaged intensity value to the average intensity value of the quiescent area. This technique is similar to that used for the absorption measured images; however, the data here are raw intensity values and do not represent concentration. The edge of the mixing layer is determined when the intensity (I) drops below a threshold percentage value (P) defined by,

I_{\mathrm{thresh}} = I_{\mathrm{drk}} + (I_{\mathrm{lght}} - I_{\mathrm{drk}}) \times \frac{P}{100}. \qquad (6.6)

It was found that a threshold value of 80% produces the most consistent results. A number of different methods were also attempted to smooth the data between individual experiments. In figure 6.9 a montage of images for an ensemble of refractive index mismatch experiments is shown. Superimposed on each image are a scaled row-averaged profile and horizontal lines displaying 70%, 80% and 90% of the quiescent fluid intensity. Also superimposed is a horizontal line showing the intersection with the quiescent intensities of a least squares line fit through the 60% to 80% thresholded region. The evolution in time of the profiles does appear to have the characteristics of a self-similar flow; as the profile becomes wider with time, its shape remains similar.
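The threshold-and-march extraction described above can be sketched as follows. This is an illustrative Python reconstruction, not the Java/ImageJ implementation; the synthetic profile, intensity values, and array orientation (heavy/dark fluid at index 0, light fluid at the end) are assumed:

```python
import numpy as np

def intensity_threshold(I_drk, I_lght, P=80.0):
    """Eq. (6.6): I_thresh = I_drk + (I_lght - I_drk) * P / 100."""
    return I_drk + (I_lght - I_drk) * P / 100.0

def spike_edge(row_avg, I_drk, I_lght, P=80.0):
    """March from the light (bright) end of a row-averaged intensity
    profile towards the interface; the mixing layer edge is the first
    row where the intensity drops below the eq. (6.6) threshold."""
    thresh = intensity_threshold(I_drk, I_lght, P)
    for i in range(len(row_avg) - 1, -1, -1):
        if row_avg[i] < thresh:
            return i
    return None

# Synthetic smooth interface profile between I_drk = 0.2 and I_lght = 1.0
rows = np.arange(100)
profile = 0.2 + 0.8 * 0.5 * (1.0 + np.tanh((rows - 50) / 5.0))
edge = spike_edge(profile, 0.2, 1.0, P=80.0)
assert 50 <= edge <= 55   # edge sits just on the bright side of midplane
```

The bubble edge would be found symmetrically, marching from the dark quiescent region; the returned pixel index is then converted to millimeters as in the text.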

Figure 6.9: A sequence of images where the images of an ensemble averaged refractive index mismatch experiment have been post-processed. Horizontally averaged intensity values are superimposed on the images. Also on the images are horizontal lines representing different bubble/spike extent measurements so they can be compared.

6.2.1 Mixing Width and Reynolds Number Plots

Mixing layer width plots are shown in figure 6.10. There is a very obvious difference between the immiscible experiments and the miscible ones: the miscible experiments grow significantly less than the immiscible ones. This is to be expected, since with diffusion the growth rate in the early time regime dictated by linear stability theory is reduced. The parametrically forced experiments attain a larger mixing layer width than their unforced counterparts. This larger mixing layer width is expected, since forcing produces larger amplitude initial perturbations that develop into the self-similar regime earlier.

We would like to apply the self-similar model and observe how the different cases compare in terms of the growth parameter α. Since the self-similar model requires that fully developed turbulence has been reached, and we cannot directly determine this in our case, we will compare the Reynolds number in these experiments with the results of Andrews [81] and Dalziel [47]. The Reynolds number obtained by Andrews' group was approximately 1000 and that obtained by Dalziel's group was approximately 2500. Both of these values are calculated using,

Re = \frac{H \dot{H}}{\nu}, \qquad (6.7)

where H represents the full mixing layer width and \dot{H} is its temporal derivative. In this study, bubble and spike Reynolds numbers are calculated individually,

Re_{b/s} = \frac{h_{b/s}\, \dot{h}_{b/s}}{\nu}, \qquad (6.8)

so the corresponding Reynolds numbers of Andrews and Dalziel become a factor of four smaller, yielding 250 and 625 respectively. Plots of Reynolds number versus time for the four cases studied here are shown in figure 6.11. The approximate maximum values, neglecting the scatter towards the ends of the experiments, are 1000, 3000, 1500 and 4000 respectively, where the values for the spikes are approximately 20% larger than those for the bubbles. It is observed that the Reynolds number for the

(a) Immiscible, Unforced  (b) Immiscible, Forced

(c) Miscible, Unforced  (d) Miscible, Forced

[vertical axes: h (mm)]

Figure 6.10: Mixing layer width for ensemble averaged experiments where 80% of the quiescent fluid intensity was taken as the mixing layer width cutoff. Data is for LST heavy liquid experiments for the four cases discussed previously.

forced experiments is twice that for the unforced ones. Since these values are larger than those obtained by Andrews and Dalziel (which were found to be self-similar and turbulent by internal mixing layer measurements) for all four cases, we will assume our experiments to be self-similar.
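The bubble/spike Reynolds number of equation (6.8) can be computed directly from a measured width history; a minimal sketch, with assumed illustrative values for the growth law and viscosity, is:

```python
import numpy as np

def reynolds_number(h, t, nu):
    """Eq. (6.8): Re = h * dh/dt / nu, with dh/dt approximated by
    central differences (np.gradient)."""
    h = np.asarray(h, dtype=float)
    t = np.asarray(t, dtype=float)
    return h * np.gradient(h, t) / nu

# Illustrative self-similar growth h = alpha*A*g_eff*t^2 (assumed values)
t = np.linspace(0.1, 1.0, 50)
h = 0.05 * 0.48 * 9.81 * t ** 2
Re = reynolds_number(h, t, nu=1.0e-6)
assert np.all(Re > 0.0) and Re[-1] > Re[0]   # Re grows with the layer
```

For the self-similar growth law, Re grows as t³, which is why the late-time values quoted above comfortably exceed the turbulence thresholds of Andrews and Dalziel.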

(a) Immiscible, Unforced (b) Immiscible, Forced

(c) Miscible, Unforced (d) Miscible, Forced

Figure 6.11: Reynolds number data for ensemble averaged experiments where 80% of the quiescent fluid intensity was taken as the mixing layer width cutoff. Data is for the four cases of the LST heavy liquid experiments.

6.2.2 Growth Parameter Plots

Under the premise that we have a large enough Reynolds number to justify self-similarity, we will proceed with a comparison of the growth factor α in the model h = αAg_eff t². One way to measure α is to plot √h versus t√(Ag_eff) and fit a straight line (by the method of least squares) through the part of the curve that appears linear. Squaring the slope of this line yields an averaged value of α. This method for determining α was proposed by Dimonte [25]. It is often difficult to determine the region to which to apply the curve fit, so consistency is important. The fitting region was chosen to be 5 √m < t√(Ag_eff) < 25 √m for the immiscible experiments, and 15 √m < t√(Ag_eff) < 25 √m for the miscible experiments due to the difficulty of measurement at early time. The curve fits are shown in figure 6.12. The α values obtained in all four cases are smaller than those obtained in past experiments. However, α is much smaller for the miscible experiments than for their immiscible counterparts. This implies that interfacial tension has the effect of increasing the growth rate, which may help explain the disparity between past immiscible experiments and simulations that do not include interfacial tension effects. Forcing the immiscible experiments with a wavelength smaller than the fastest growing wavelength and larger than the cutoff wavelength (as is done here) yields a similar α value when compared to the unforced, immiscible experiments. This implies that both cases are achieving self-similarity. The larger amplitude of the forced case gives more experimental time in the non-linear regime for the flow to evolve, but the similar α values imply the extra non-linear evolution time was not necessary because self-similarity had been reached in both cases. Differences in the spike to bubble ratios of the growth rates are also observed. For the immiscible unforced, immiscible forced, miscible unforced and miscible forced cases, the ratio α_s/α_b was found to be 1.26, 1.30, 1.81 and 0.74 respectively. The α_s/α_b ratios for the immiscible cases compare well with past experiments having similar Atwood number, where Youngs [16] found a ratio of 1.3, Dimonte [24] found a ratio of 1.26 and Kucherenko [52] found a ratio of 1.27. The ratios for both immiscible sets are within 10% of each other, whereas the miscible, unforced case has an almost 50% larger ratio, and the forced, miscible ratio is inverted. The discrepancies in the miscible ratios can possibly be attributed to imaging limitations. The measured values of α are displayed on the plots.
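The square-root fitting procedure can be sketched as follows; this is an illustrative reconstruction, and the window bounds and synthetic data below are assumed rather than taken from the experiments:

```python
import numpy as np

def alpha_from_sqrt_fit(t, h, A, g_eff, x_lo, x_hi):
    """Dimonte's method: fit a straight line to sqrt(h) versus
    t*sqrt(A*g_eff) over the window [x_lo, x_hi]; alpha is the
    squared slope, since sqrt(h) = sqrt(alpha)*t*sqrt(A*g_eff)."""
    x = np.asarray(t) * np.sqrt(A * g_eff)
    y = np.sqrt(np.asarray(h))
    mask = (x >= x_lo) & (x <= x_hi)
    slope = np.polyfit(x[mask], y[mask], 1)[0]
    return slope ** 2

# Synthetic data obeying h = alpha*A*g_eff*t^2 recovers alpha exactly
A, g_eff, alpha_true = 0.48, 9.81, 0.05
t = np.linspace(0.01, 2.0, 200)
h = alpha_true * A * g_eff * t ** 2
assert abs(alpha_from_sqrt_fit(t, h, A, g_eff, 0.5, 4.0) - alpha_true) < 1e-8
```

With real data the fit is applied only to the late time portion of the curve, where the self-similar model is expected to hold.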

(a) Immiscible, Unforced  (b) Immiscible, Forced

(c) Miscible, Unforced  (d) Miscible, Forced

[axes: √h versus t√(Ag_eff)]

Figure 6.12: Measurements of α determined by fitting a straight line to the √h vs. t√(Ag_eff) data for ensemble averaged experiments where 80% of the quiescent fluid intensity was taken as the mixing layer width cutoff. Data is for the four LST heavy liquid cases.

Cabot and Cook [2] proposed determining α using,

\alpha = \frac{\dot{h}^2}{4 A g_{\mathrm{eff}}\, h}. \qquad (6.9)

This time dependent α is plotted in figure 6.13, where the derivative of h is determined using a central difference approximation. A weighted average over 20% of the data is also performed (represented as solid and dashed lines on the plot) to smooth the data. This smoothing performs a non-linear regression using a locally weighted regression method, where more weight is given to the central data point. Also, a dashed line is drawn representing the average α value obtained by the square-root method, showing that it agrees well with Cabot and Cook's method at late time. It is observed that α is smallest for the forced, miscible case, whereas the forced and unforced, immiscible cases show similar but larger values than those of the miscible results. At the end of the unforced, immiscible experiments α is nearly constant, with a value of approximately 0.055 for the spike and 0.047 for the bubble. The forced, immiscible experiments have much smoother curves; for this case, α is approximately 0.057 for the spike and 0.044 for the bubble. For the unforced, miscible experiments, α is approximately 0.047 for the spike and 0.030 for the bubble. These smaller values of α compare better with simulations than both the immiscible experiments performed in this study and similar experiments of previous RTI work at this Atwood number. Read and Youngs [16] obtained α values of 0.086 and 0.066 for spike and bubble respectively, Kucherenko [52] obtained values of 0.070 and 0.055, and Dimonte [24] obtained values of 0.063 and 0.050. The forced, miscible case displays even smaller α values for spike and bubble (0.017 and 0.023 respectively). Here the value of α is larger for the bubble than the spike, which is not consistent with previous experiments; however, this may be expected owing to limitations in the imaging technique and the close values of α_s and α_b.
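The time dependent α of equation (6.9) is evaluated with a central difference derivative; a minimal sketch with synthetic (assumed) data is:

```python
import numpy as np

def alpha_cabot_cook(t, h, A, g_eff):
    """Eq. (6.9): alpha(t) = hdot^2 / (4*A*g_eff*h), with hdot from a
    central difference approximation (np.gradient)."""
    t = np.asarray(t, dtype=float)
    h = np.asarray(h, dtype=float)
    hdot = np.gradient(h, t)
    return hdot ** 2 / (4.0 * A * g_eff * h)

# For ideal self-similar data h = alpha*A*g_eff*t^2 the interior points
# recover alpha (central differences are exact for a quadratic in time).
A, g_eff, alpha_true = 0.48, 9.81, 0.05
t = np.linspace(0.1, 2.0, 400)
h = alpha_true * A * g_eff * t ** 2
a = alpha_cabot_cook(t, h, A, g_eff)
assert np.allclose(a[1:-1], alpha_true, rtol=1e-6)
```

Unlike the square-root fit, this estimator is local in time, which is why it is noisier on real data and benefits from the locally weighted smoothing described above.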
The α value for this case displays the most similarity to the simulation of Cabot and Cook [2], where α values of 0.02 and 0.017 were obtained for spike and bubble respectively. The larger amplitude of forcing used in the experiments permits growth into the non-linear regime much sooner, allowing the most certainty that self-similarity has indeed been reached.

In an attempt to obtain smoother α curves, different methods of obtaining time dependent α measurements were tested. We can determine h and its temporal derivative by using a

local parabolic fit to the mixing layer width data and then using these values in the method of Cabot and Cook. It was found that besides filtering short time scale information, not much was gained by applying this smoothing. Also, the different cases required different numbers of points for the parabolic fit (depending on the amount of noise in the data) and this created inconsistency since α is dependent on the number of points chosen. Plots using this method are shown in figure 6.14, where approximately 20 points was used for the fit. Another method to determine α is to locally fit a parabola to the plot of h versus time and to directly extract α from the quadratic term. Plots using this technique are shown in figure 6.15. Here we have used 20 points (10 forward and 10 backwards) to fit a least squares polynomial of order two. We see here that the curves show similar trends to the method of Cabot and Cook. However, unlike Cabot and Cook’s technique, this method has large fluctuations at late time and thus does not achieve a constant value. By adjusting the number of points used for the local curve fit, the fluctuations can be decreased. However, the optimum number of points is not the same across all experimental cases and therefore this method yields inconsistent results. One conclusion that can be drawn from this analysis is that the different methods for calculating α do not yield large differences in its value. The different methods primarily perform different smoothing and filtering operations. For the time de- pendent α calculations, equation (6.9) gives the best results, where h˙ is calculated using a central difference approximation. This method was chosen because it yields

more consistent results between experimental cases. Plotting √h versus t√Ageff and fitting a line to the late time linear part gives an overall averaged value of α which is useful for validating the Cabot and Cook result. The unforced, immiscible experiments performed using an LST heavy liquid / anethole combination on the WP apparatus were also analyzed. Mixing layer width measurements were extracted using the Cabot and Cook time dependent α

method and the √h vs. t√Ageff method described previously. These experiments act as a comparison to determine the effects of the method of visualization on the

measured values of α. The thresholds used in this absorption configuration are 90% and 10%. These dye concentration thresholds were chosen primarily because noise in the data, caused by low light transmission in the heavy liquid, prevented smooth results at other thresholds; these particular thresholds gave the smoothest results. The plots are shown in figure 6.16. The values of α for these experiments are 0.046 for the spike and 0.039 for the bubble. These values are smaller than those of the unforced, immiscible experiments discussed previously. This leads us to believe that refractive index mismatch imaging tends to yield larger values of α than absorption imaging. Thus, the differences in α between the experiments of the present study and previous studies cannot be attributed to the method of imaging; a smaller α value would only act to make the miscible α values agree better with past simulations. The experiments performed using the LIM apparatus are consistent with those on the WP apparatus. The immiscible, unforced experiments were reproduced (using the same liquid combination) and are shown in figure 6.17. These experiments were performed to verify that acceleration does not alter the values of α, as should be expected from the self-similar model. The acceleration here is approximately 10 times larger than that of the WP apparatus. The larger acceleration will produce a dominant initial wavelength of approximately half that of the WP apparatus. With this perturbation, the instability evolves more quickly and a mixing region develops almost immediately. Thus, we can be more confident that self-similarity has been reached. The α values for the LIM experiments are 0.056 and 0.041 for the spike and bubble, respectively. These values deviate very little from those of the corresponding experiments on the WP apparatus, and therefore this experiment acts to further validate the WP results.
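The threshold-based width extraction described above can be sketched as follows, assuming a laterally averaged, normalized heavy-fluid fraction profile with the heavy fluid initially above z0; the names and orientation conventions here are assumptions, not the actual Java routine:

```python
import numpy as np

def mixing_widths(z, c, z0, hi=0.9, lo=0.1):
    """Spike and bubble penetration depths from a laterally averaged,
    normalized heavy-fluid fraction profile c(z): 1 = pure heavy fluid,
    0 = pure light fluid, heavy fluid initially above z = z0 (so spikes
    penetrate downward and bubbles rise)."""
    z, c = np.asarray(z), np.asarray(c)
    z_spike = z[c >= lo].min()    # deepest point still containing heavy fluid
    z_bubble = z[c <= hi].max()   # highest point already diluted below hi
    return z0 - z_spike, z_bubble - z0
```

Changing `hi`/`lo` to 0.8/0.2 or 0.95/0.05 reproduces the different threshold choices compared in this chapter.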

(a) Immiscible, Unforced (b) Immiscible, Forced

(c) Miscible, Unforced (d) Miscible, Forced

Figure 6.13: Measurements of α determined by the method of Cabot and Cook for ensemble averaged experiments, where 80% of the quiescent fluid intensity was taken as the mixing layer width cutoff. Here ḣ was calculated using a central difference approximation. Data are for the four LST heavy liquid cases. The horizontal line represents α obtained from the square-root method and is drawn to show the consistency between methods of determining α.

(a) Immiscible, Unforced (b) Immiscible, Forced

(c) Miscible, Unforced (d) Miscible, Forced

Figure 6.14: Measurements of α determined by the method of Cabot and Cook using a local parabola fit to determine h and ḣ, for ensemble averaged experiments where 80% of the quiescent fluid intensity was taken as the mixing layer width cutoff. Here approximately 20 points were used for the local parabola fit. Data are for the four LST heavy liquid cases.

(a) Immiscible, Unforced (b) Immiscible, Forced

(c) Miscible, Unforced (d) Miscible, Forced

Figure 6.15: Measurements of α determined by fitting a local parabola for ensemble averaged experiments and extracting α directly from the quadratic term, where 80% of the quiescent fluid intensity was taken as the mixing layer width cutoff. Here 10 points forward and 10 points back were used for the local parabola fit. Data are for the four LST heavy liquid cases.


Figure 6.16: Measurements of α, determined by fitting a straight line to the √h vs. t√Ageff data for ensemble averaged experiments where 90% and 10% of the quiescent fluid intensity were taken as the mixing layer width thresholds, are shown in (b). α determined by the method of Cabot and Cook is shown in (a). Data are for immiscible, unforced LST heavy liquid / anethole experiments using the WP apparatus.


Figure 6.17: Measurements of α, determined by fitting a straight line to the √h vs. t√Ageff data for ensemble averaged experiments where 80% of the quiescent fluid intensity was taken as the mixing layer width cutoff, are shown in (b). α determined by the method of Cabot and Cook is shown in (a). Data are for immiscible, unforced LST heavy liquid experiments using the LIM apparatus.

6.3 Numerical Qualitative Results

To compare with the unmatched refractive index experiments, the Laplacian of the density field from the numerical simulations was taken to mimic shadowgraph experimental images [92]. More precisely, √((∇²ρ)²) was used. This allowed the simulation data to be analyzed using the same ImageJ and Java routines as used in analyzing the experiments, to ensure consistency. A number of numerical simulations were performed, and although the results are similar, they do not precisely match the experiments because the fluid parameters were never exactly matched. This is due to the fact that the simulations take many computational hours to run (which translates to months in real time) while the exact experimental parameters were constantly evolving; problems would arise and then be resolved (often by changing experimental liquids), or more was learned about the experiment (such as the realization that we have micrometer, rather than nanometer, amplitude initial perturbations). In one 3D simulation, white noise initial perturbations with amplitudes of 4.1 × 10⁻⁹ m were used. The domain size is 760 grid points in x and y with 0.06 mm per grid point, which results in approximately 2/3 of the actual experimental tank size. This simulation mimics the unforced, miscible experiment with an Atwood number of 0.48. This simulation was begun before the effect of diffusion was accounted for in the initial perturbation amplitude calculation. The fluids used in this simulation are LST heavy liquid with a density of 2.85 g/cc and distilled water. For the experiments many fluids were utilized (such as ethanol, isopropanol, anethole, ...), but the Atwood number is the same and the viscosity is similar. A sequence of images from this particular numerical simulation is shown in figure 6.18. It is observed in figure 6.18 that the simulation develops slowly, which is to be expected since we initialize with nanometer scale initial perturbations. Comparing this to the experimental miscible images (fig.
6.4), we see that the experiments begin to develop at approximately 175 ms and the simulations begin to develop at 200 ms; these two times are fairly close. We do observe a progression in time from small to larger scales as the mixing layer width grows. Shown in figure 6.19 is a sequence of images of the same simulation where a 3D view of the density field is depicted. The progression to a range of scales is more obvious in this figure. This development leads us to believe that the simulation is in the self-similar regime (comparing well to its experimental counterpart), but this hypothesis must be investigated further.
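The shadowgraph-mimicking post-processing, √((∇²ρ)²), can be sketched with simple finite differences; this sketch assumes periodic wrap at the edges for brevity, and is not the actual ImageJ/Java pipeline:

```python
import numpy as np

def shadowgraph_proxy(rho, dx):
    """sqrt((laplacian rho)^2) = |laplacian rho| on a uniform grid,
    as a shadowgraph-like contrast field; second-order central
    differences with periodic wrap (np.roll) at the edges."""
    lap = np.zeros_like(rho, dtype=float)
    for axis in range(rho.ndim):
        lap += (np.roll(rho, -1, axis) - 2.0 * rho + np.roll(rho, 1, axis)) / dx**2
    return np.abs(lap)
```

Because the same thresholding routines are then applied to this field as to the experimental images, any bias of the edge-detection method affects experiment and simulation alike.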

Figure 6.18: Sequence of Laplacian images of a numerical simulation in which the Atwood number is 0.48 and nanometer scale white noise initial perturbations are used.

Figure 6.19: Sequence of 3D density images of a numerical simulation in which the Atwood number is 0.48 and nanometer scale white noise initial perturbations are used.

6.4 Numerical Quantitative Results

The numerical simulations were analyzed with the same Java program as the experiments, using the same thresholding technique to determine hb and hs (sec. 6.2). In addition to the method of taking the Laplacian of the

data to mimic the refractive index mismatched experimental images, mixing region edge thresholds were also determined using the density field directly, to mimic experimental dye absorption measurements. These two different methods were then compared.

6.4.1 Growth Parameter Plots

Plots where the Laplacian method was used to extract the mixing layer widths are shown in figure 6.20. Here 80% thresholds were used to determine the mixing width, where the Laplacian method explained previously creates an image that is analyzed in the same manner as the refractive index mismatch experiments. For this simulation, a white noise initial perturbation (with an amplitude of 4.1 × 10⁻⁹ m) was used. The domain size is 760 × 760 grid points in the transverse direction and is allowed to expand as needed in the vertical direction. The grid spacing is 0.06 mm. Both the grid spacing and domain size are the same for all simulations performed in this study. The vertical acceleration here (which produces RTI) is 11.77 m/s². Plots of

α versus time using the method of Cabot and Cook, and of √h versus t√Ageff, are shown for different thresholds. Lines were least-squares fit to the plots of √h

versus t√Ageff at late time, and the value of the slope was then squared to obtain an average α; this method is consistent with that performed for the experiments. A horizontal line was then drawn through the time dependent α plot so that the two methods could be compared. The values for α here are 0.019, for both bubble and spike, at late time. These small values are consistent with those obtained by other simulations, such as that of Cabot and Cook [103]. The plots where thresholds were determined directly from the density data (normalized on a [0,1] scale) are shown in figure 6.21, where the data was thresholded at 90% and 10% of the more dense fluid, and figure 6.22, where the data was thresholded at 95% and 5% of the more dense fluid. The mixing widths were extracted here in a manner analogous to the method used for the absorption experiments. The resulting α values are similar to those from the Laplacian post-processing method. Therefore, being consistent with the experiments, we will adopt the method where


Figure 6.20: Measurements of α, determined by both the method of Cabot and Cook (a) and fitting a straight line to the √h vs. t√Ageff data (b), for a numerical simulation where white noise nanometer scale initial perturbations were used with an Atwood number of 0.48. 80% of the quiescent fluid intensity was taken as the mixing layer width. The Laplacian of the images was taken here.

we use the Laplacian method from here on. From these plots, it can be observed that, as with the experiments, there is good agreement between the two different methods of obtaining α. The α values obtained from the two different thresholding percentages show little deviation from each other. It is interesting to note that

αb is greater than αs in the density thresholded plots. We observed this with the forced, miscible experimental case as well; however, this disagrees with all previous experiments and simulations. It was originally believed that this effect was primarily due to imperfections in the experiment. However, since it is also observed in the simulations, that explanation must be ruled out. The effect can possibly be attributed to the mixing width measurement method, and since we are consistent in the thresholding technique used, we can be more confident in our experimental results. It must also be realized that for all the cases in which the spike to bubble ratio is less than one, the values of α are very similar, so we may simply be observing the uncertainty in α.
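Since αs and αb are so close in these cases, part of the spike/bubble ratio behavior may indeed be fit uncertainty. As an illustration (no error bars are quoted in the original analysis), the standard error of the fitted slope of √h vs. t√(A g_eff) can be propagated to α = slope²; the function name and interface are illustrative:

```python
import numpy as np

def alpha_from_sqrt_fit(t, h, atwood, g_eff, fit_from=0):
    """Average alpha from a straight-line fit of sqrt(h) against
    t*sqrt(A*g_eff), with a propagated 1-sigma uncertainty: since
    alpha = slope**2, sigma_alpha ~ 2*|slope|*sigma_slope.
    Illustrative only; the original analysis reports alpha without
    error bars."""
    x = np.asarray(t)[fit_from:] * np.sqrt(atwood * g_eff)
    y = np.sqrt(np.asarray(h)[fit_from:])
    (slope, intercept), cov = np.polyfit(x, y, 1, cov=True)
    sigma_slope = np.sqrt(cov[0, 0])
    return slope**2, 2.0 * abs(slope) * sigma_slope
```

If the resulting uncertainty band overlaps for spike and bubble, a ratio slightly below one cannot be distinguished from one.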


Figure 6.21: Measurements of α, determined by both the method of Cabot and Cook (a) and fitting a straight line to the √h vs. t√Ageff data (b), for a numerical simulation where white noise nanometer scale initial perturbations were used with an Atwood number of 0.48. 90% and 10% of the heavier fluid in the normalized images were taken as the mixing layer width thresholds.


Figure 6.22: Measurements of α, determined by both the method of Cabot and Cook (a) and fitting a straight line to the √h vs. t√Ageff data (b), for a numerical simulation where white noise nanometer scale initial perturbations were used with an Atwood number of 0.48. 95% and 5% of the heavier fluid in the normalized images were taken as the mixing layer width thresholds.

In another simulation, a white noise initial perturbation spectrum was utilized with an initial amplitude of 1 × 10⁻⁷ m. This initial perturbation was implemented to match that determined from the experiments once diffusion effects were accounted for. Here the acceleration was once again 11.77 m/s². The plots using the method

of Cabot and Cook for α and √h versus t√Ageff are shown in figure 6.23, where Laplacian data was analyzed. We see that once again the two methods for obtaining α show similar values at late time. The values for α found here (0.020 for the bubble and 0.021 for the spike) are approximately the same as those obtained from the simulation where nanometer scale initial perturbations were used. This would be an expected result if the flow is self-similar, since the initial conditions (of the type used in this study) would then have no effect.
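The insensitivity to initial amplitude can be seen directly from the self-similar law: integrating ḣ = 2√(α A g_eff h), the ODE implied by h = α A g_eff t², gives h(t) = (√h₀ + √(α A g_eff) t)², so the seed width h₀ is quickly forgotten. A small check, with illustrative parameter values:

```python
import numpy as np

def width(t, h0, alpha=0.02, atwood=0.48, g=11.77):
    """Closed-form mixing width from integrating hdot = 2*sqrt(alpha*A*g*h),
    the ODE implied by h = alpha*A*g*t^2; shows the seed width h0 being
    forgotten at late time. Parameter values are illustrative."""
    return (np.sqrt(h0) + np.sqrt(alpha * atwood * g) * t) ** 2
```

Seeding with 10⁻⁹ m versus 10⁻⁷ m amplitudes changes the late-time width by well under a percent, consistent with the nearly identical α values reported for the two simulations.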


Figure 6.23: Measurements of α, determined by both the method of Cabot and Cook (a) and fitting a straight line to the √h vs. t√Ageff data (b), for a numerical simulation where white noise 1 × 10⁻⁷ m scale initial perturbations were used with an Atwood number of 0.48. 80% of the quiescent fluid intensity was taken as the mixing layer width. The Laplacian of the images was taken here.

One other white noise initial perturbation simulation was performed and analyzed. In this simulation, the acceleration was set to match that of the LIM drop tower, 122.63 m/s². This simulation again utilized a white noise initial perturbation

spectrum, where the initial amplitude was 1 × 10⁻⁷ m. Again, the plots using the method of Cabot and Cook for α and √h versus t√Ageff are shown in figure 6.24, where Laplacian data was analyzed. Good agreement between the two methods of obtaining α is once again observed. Also, when comparing this simulation with the others performed so far, there is little difference in the values of α. These similar results lead us to further believe that we are indeed within the self-similar regime.
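The earlier claim that the roughly tenfold acceleration of the LIM apparatus halves the dominant initial wavelength is consistent with the classical viscous scaling for the fastest-growing RT mode, λ_max ∝ (ν²/(A g))^(1/3); for a fixed fluid pair the prefactor cancels and the ratio depends only on the accelerations:

```python
def wavelength_ratio(g1, g2):
    """Ratio of fastest-growing viscous RT wavelengths for the same fluid
    pair at two accelerations, from lambda_max ~ (nu^2/(A*g))**(1/3);
    viscosity and Atwood number cancel in the ratio."""
    return (g1 / g2) ** (1.0 / 3.0)
```

With g1 = 11.77 m/s² (WP) and g2 = 122.63 m/s² (LIM) this gives about 0.46, i.e. roughly half the WP wavelength.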


Figure 6.24: Measurements of α, determined by both the method of Cabot and Cook (a) and fitting a straight line to the √h vs. t√Ageff data (b), for a numerical simulation where white noise 1 × 10⁻⁷ m scale initial perturbations were used with an Atwood number of 0.48. Here the acceleration is set to approximately 10g so as to match the LIM apparatus. 80% of the quiescent fluid intensity was taken as the mixing layer width. The Laplacian of the images was taken here.

Another simulation was performed to match the weight and pulley apparatus, having an Atwood number of 0.57 with an acceleration of 7.85 m/s². The parameters for this simulation were originally chosen to parallel the experimental parameters for the LST heavy liquid / isopropyl alcohol configuration, with a smaller acceleration due to the heavier experimental fluid system. A nanometer scale initial perturbation spectrum was extrapolated forward in time using the growth rate determined by the eigenvalue equation from viscous linear stability theory (eq. A.30), which was then

used in the inviscid linear stability theory for RT growth. The result from the inviscid linear stability theory analysis is advanced forward in time until ka ≈ 1. The perturbation spectrum at this point in time is then used as the initial condition for the simulation. Plots in which α was determined using the Laplacian thresholding technique are shown in figure 6.25. Again, the plots using the method of Cabot and Cook for α versus time and √h versus t√Ageff yield consistent results.
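The spectrum-extrapolation step can be sketched as follows. Equation A.30 is not reproduced in this chunk, so this sketch substitutes a commonly used approximate viscous dispersion relation, n(k) ≈ √(A g k + ν²k⁴) − νk², and grows the amplitudes until the steepest mode reaches ka ≈ 1:

```python
import numpy as np

def advance_to_nonlinearity(k, a0, atwood, g, nu, dt=1e-4, ka_stop=1.0):
    """Grow an amplitude spectrum a(k) with an approximate viscous RT
    dispersion relation n(k) = sqrt(A*g*k + (nu*k^2)^2) - nu*k^2 (a common
    stand-in for eigenvalue relation A.30, which is not reproduced here),
    until the steepest mode reaches k*a ~ ka_stop."""
    n = np.sqrt(atwood * g * k + (nu * k**2) ** 2) - nu * k**2
    a = np.array(a0, dtype=float)
    t = 0.0
    while np.max(k * a) < ka_stop:
        a *= np.exp(n * dt)
        t += dt
        if t > 1e3:  # safety cap for the sketch
            break
    return a, t
```

The spectrum returned at ka ≈ 1 plays the role of the simulation's initial condition in the procedure described above.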


Figure 6.25: Measurements of α, determined by both the method of Cabot and Cook (a) and fitting a straight line to the √h vs. t√Ageff data (b), for a numerical simulation where nanometer scale initial perturbations were extrapolated forward in time using the growth rate determined by viscous linear stability theory, applied through the inviscid linearized RT relationships. The Atwood number used here is 0.57. 80% of the quiescent fluid intensity was taken as the mixing layer width. The Laplacian of the images was taken here.

A simulation was also performed in which the acceleration was initially oscillated in time so as to match the experimental case where parametric forcing was used. A white noise initial condition (with an amplitude of 4.1 × 10⁻⁹ m) was imposed to seed the parametric waves. The acceleration was oscillated about 1g at 18 Hz, with an amplitude corresponding to a 1.2 × 10⁻³ m tank displacement. Then, after waves developed, the system was made RT unstable by imposing an acceleration of 11.77 m/s² in the opposite direction. This simulation was analyzed as well. The plots using the

method of Cabot and Cook for α and √h versus t√Ageff are shown in figure 6.26, where Laplacian data was analyzed. The amount of computational time needed for the first phase of this simulation (the Faraday forcing) did not allow sufficient time for the second phase (the RT unstable portion) to develop fully. Because of this, α has not yet reached its asymptotic value. We are still able to extract an α value at the later times of the simulation, and it agrees well with the other simulations.
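A linear, inviscid, single-mode cartoon of this two-phase protocol (Faraday forcing, then RT growth) is the Mathieu-type equation ä = k A G(t) a. The sketch below is illustrative only and is not the Miranda model; all parameter names are assumptions:

```python
import numpy as np

def single_mode_amplitude(k, a0, atwood, t_force, t_rt,
                          g_stable=-9.81, f=18.0, disp=1.2e-3,
                          g_rt=11.77, dt=1e-5):
    """Linear, inviscid single-mode model a'' = k*A*G(t)*a.
    Phase 1 (t < t_force): G = g_stable + (2*pi*f)**2 * disp * cos(2*pi*f*t),
    a Mathieu-type parametric forcing; phase 2: G = g_rt (RT unstable).
    Semi-implicit Euler time stepping; all parameters are illustrative."""
    g_amp = (2.0 * np.pi * f) ** 2 * disp
    a, adot, t = float(a0), 0.0, 0.0
    while t < t_force + t_rt:
        if t < t_force:
            G = g_stable + g_amp * np.cos(2.0 * np.pi * f * t)
        else:
            G = g_rt
        adot += k * atwood * G * a * dt
        a += adot * dt
        t += dt
    return a
```

During phase 1 the mean acceleration is stabilizing and the mode merely oscillates (or grows parametrically inside a Mathieu tongue); switching to G = g_rt produces the exponential RT growth analyzed above.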


Figure 6.26: Measurements of α, determined by both the method of Cabot and Cook (a) and fitting a straight line to the √h vs. t√Ageff data (b), for a numerical simulation where Faraday forced initial perturbations were used with an Atwood number of 0.48. 80% of the quiescent fluid intensity was taken as the mixing layer width. The Laplacian of the images was taken here.

All of the simulations presented in this section show similar α values. In addition, they all seem to reach steady state and do not show significant differences between spike and bubble values. This compares well with the miscible experiments also performed in this study; however, they do not compare well with the immiscible experiments. This may be expected since Miranda does not account for interfacial/surface tension. The α values for our simulations are also similar to those of Cabot and Cook [2]. It is important to recognize that unlike

those performed by Cabot and Cook, our simulations did not go out as far in time. In addition, the spatial resolution used here is not as fine as theirs (ours has 760 grid points in the transverse direction, while theirs has 3072). The initial conditions are different as well (theirs is a Gaussian spectrum, and none of our cases match this). From the large assortment of simulations we performed with differing initial conditions, we can conclude that α does not appear to depend on initial conditions. As with the experiments, all the initial conditions here followed the bubble merger limit [82], since once non-linear effects began, only small wavelengths were present.

6.5 Comparison

The results from the experiments and simulations performed here, along with results from past studies, are shown in table 6.1. When comparing the α values from the weight and pulley experiments, where the refractive index mismatch was imaged, good agreement is shown between the forced and unforced immiscible values. Even though there may be more mode coupling in the unforced case (due to the larger bandwidth), this does not appear to strongly alter the self-similar behavior, since the α values are in good agreement between cases. The values from the miscible experiments do not show the same level of agreement between the forced and unforced cases. This discrepancy may be expected, owing to difficulties when imaging the miscible, unforced experiments as discussed in section 3.1.3. The α values determined from the matched refractive index case, where absorption imaging was used, are similar to those of the corresponding unmatched refractive index experiments. This similarity allows us to conclude that the two imaging methods yield consistent results. The linear induction motor experiments also show similar α values, and we can therefore conclude that the WP apparatus is not adding unknown artifacts and that acceleration does not affect α. When comparing our spike to bubble ratios to those from past experiments of similar type, the ratios are similar. It should also be noted that the unforced, immiscible cases display smaller α values than the experiments reported

in the literature. However, previous experiments could be in error, owing to the limited number of experimental images acquired during each experiment. Since the same imaging technique and mixing width extraction methods are used throughout the experiments presented here, a direct comparison can be confidently performed between our experiments. When we compare miscible to immiscible experiments, it is observed that the miscible experiments have smaller α values. This effect is especially obvious in the forced experiments, but is not seen as strongly in the unforced experiments, most likely due to the difficulties in imaging the unforced, miscible case. Also, in the unforced case, it is possible that since the long wavelength front-back rollup is not drowned out by the mixing region until late in the RT development, it may have an effect on α (due to a small amount of bubble competition that could take place). From the comparison of our experimental α values, in light of the LIM experiments and the matched refractive index experiments that were performed to validate our findings, forcing initial perturbations does not appear to play a significant role in the value of α (when considering the small wavelength, finite bandwidth initial conditions utilized here), whereas miscibility does. From the simulations (which do not account for interfacial tension effects) we can further support the hypothesis that interfacial tension effects play a role. The values of α from past experiments are often larger than those found in past simulations. Simulations that use front tracking methods do show larger values [71, 38], but simulations that do not use a front tracking method show smaller values [109, 103]. The simulations performed in this study all display α values that are small when compared to those past experiments.
It is observed from the table that values of α from the miscible experiments are similar to those of the simulations, while those from the immiscible experiments are not. Therefore, miscibility effects are important for the growth of the turbulent RT mixing region. Simulations with many different initial conditions were performed in this study. Initial conditions were utilized that had a white noise spectrum starting with either nanometer or micrometer amplitudes (to mimic the unforced experiments), a spectrum imposed

by viscous linear stability theory (also to mimic the unforced experiments), and an imposed dominant wavelength (to mimic the parametrically forced experiments). All of these initial conditions develop into a small wavelength, finite bandwidth spectrum before a mixing layer develops. These simulations display similar α values. Thus, as with the experiments, the initial conditions (of the type used in this study) do not appear to affect the value of α. It is also important to note that the values of α from the simulations performed here are similar to simulation results from the literature.

Forced:
  Miscible:
    WP, Ref            αSpk = 0.017   αBub = 0.023   αRatio = 0.74
    WP, Sim, Ref       αSpk = 0.018   αBub = 0.022   αRatio = 0.82
  Immiscible:
    WP, Ref            αSpk = 0.057   αBub = 0.044   αRatio = 1.30

Unforced:
  Miscible:
    WP, Ref            αSpk = 0.047   αBub = 0.030   αRatio = 1.57
    WP, Sim, nm, Ref   αSpk = 0.019   αBub = 0.019   αRatio = 1.0
    WP, Sim, µm, Ref   αSpk = 0.021   αBub = 0.020   αRatio = 1.05
    LIM, Sim, µm, Ref  αSpk = 0.021   αBub = 0.014   αRatio = 1.5
    WP, Sim, LST, Ref  αSpk = 0.022   αBub = 0.018   αRatio = 1.22
    WP, Sim, nm, Abs   αSpk = 0.014   αBub = 0.019   αRatio = 0.74
    C & C, Sim         αSpk = 0.02    αBub = 0.017   αRatio = 1.2
    Youngs, Sim        αSpk = 0.042   αBub = 0.035   αRatio = 1.2
  Immiscible:
    WP, Ref            αSpk = 0.059   αBub = 0.047   αRatio = 1.26
    WP, Abs            αSpk = 0.046   αBub = 0.039   αRatio = 1.18
    LIM, Ref           αSpk = 0.056   αBub = 0.041   αRatio = 1.37
    Youngs             αSpk = 0.086   αBub = 0.066   αRatio = 1.3
    Kucherenko         αSpk = 0.070   αBub = 0.055   αRatio = 1.27
    Dimonte            αSpk = 0.063   αBub = 0.050   αRatio = 1.26

Table 6.1: Comparison of α for experiments and simulations with Atwood number approximately 0.5. Here WP represents the weight and pulley experimental apparatus (1g) and LIM represents the linear induction motor experimental apparatus (10g). Simulations performed in this study (represented by Sim) are done to mimic either the WP or LIM apparatus. The simulation in which linear stability theory dictated the initial perturbation is represented by LST. Imaging is done using either the refractive index mismatch technique (Ref) or the dye absorption technique (Abs); when a simulation is performed here, data is extracted to mimic a particular imaging technique. For the unforced simulations performed here, nm represents a nanometer scale white noise initial perturbation and µm represents a micrometer scale white noise initial perturbation. Data from previous research are also compiled in the table [52, 16, 24, 109, 103], where C & C represents the simulation of Cabot and Cook [103].

CHAPTER 7

CONCLUSION

The Rayleigh-Taylor instability is a buoyancy driven instability that occurs at a stratified fluid interface when ∇P · ∇ρ < 0 in the presence of initial perturbations on the interface. In this study, a system containing two stratified liquids that was accelerated downward at a rate greater than gravity produced the instability. After the instability has progressed far enough in time, a mixing region develops. This mixing layer is believed to be self-similar and turbulent. Past experiments and simulations have shown discrepancies in the growth parameter (α) of this mixing layer. In order to reconcile these differences, a range of experiments in a large Atwood number configuration have been performed. Four main sets of experiments were carried out. The initial conditions of these experiments were either forced to produce small wavelength perturbations or left unforced (allowing small wavelengths, close in size to the fastest growing wavelength, to grow); in both cases we obtain a small wavelength, finite bandwidth initial perturbation consistent with the bubble merger, mode-coupling case presented by Ramaprabhu [82]. The experiments were performed with either a miscible or immiscible fluid combination with an Atwood number of approximately 0.5. Experiments were primarily performed on the weight and pulley apparatus, which operates at approximately 1g. However, in order to verify our results, a set of experiments was also performed on the linear induction motor apparatus, which operates at approximately 10g. Owing to the large refractive index of the denser liquid for these large Atwood number experiments, the refractive indices could not always be matched. Thus, during an experiment the refractive index mismatch is imaged. To validate this technique, a set of experiments was also performed for the immiscible case in a matched refractive index combination, using light absorption to obtain more quantitative results to check for consistency.
Also, attempts were made to better understand the refractive index mismatch technique

and to determine the degree to which the mixing width measurements were affected by this method of visualization. Simulations were also performed in which attempts were made to match the experimental parameters. These simulations were run with various initial conditions, all of which became dominated by small wavelengths early in the instability's development. The simulations were used to better understand the importance of experimental parameters, and the experiments in turn helped to validate the simulations. In addition, the analyses of parametric waves and viscous linear stability theory often gave a starting point for both the experiments and the simulations. By examining the experimental images of the mixing region versus time, we were able to qualitatively conclude that the experiments were indeed in the self-similar regime. This conclusion was drawn by observing that the dominant scales appear to evolve in time proportionally with the mixing layer width. Also, the Reynolds number was measured and compared to those obtained from past experiments in which it could more confidently be concluded that self-similarity was reached (from quantitative internal mixing region measurements). Our Reynolds numbers were found to be larger, implying that our flow was also self-similar. Using the assumption of self-similarity, α values were obtained in a consistent manner for all the experiments and simulations presented here. In this study we experimentally showed that imposing an initial, small wavelength perturbation on the interface does not significantly alter the value of α when compared to the case where a small wavelength spectrum (from background noise perturbations) was allowed to develop due to viscous effects in the linear regime. It was also shown that all of the simulations performed here, which used a range of initial conditions, have similar α values, thus validating our experimental results.
Experiments in the literature (with large Atwood number) were all immiscible, and their α values often did not match those obtained from simulations. However, the previous simulations that produced the smaller α values were miscible, therefore warranting experimental investigation of this effect. It was determined that our miscible experiments have α values that are nearer to those of past simulations and less than those of the immiscible experiments. Thus, it is concluded that miscibility may be the reason for the lack of agreement between previous simulations and previous experiments. This effect may be due to the decrease in local Atwood number that would occur because of diffusion, which is present in miscible fluid combinations but not immiscible ones. The simulations performed here show consistent values when compared to the simulation of Cabot and Cook [103] and our miscible experiments. In conclusion, this study indicates that miscibility may have an effect on the growth rate of the turbulent, self-similar Rayleigh-Taylor instability, whereas the initial perturbation does not, as would be expected if the flow is truly self-similar and the instability growth is dominated by bubble merging and not bubble competition [82].

APPENDIX A

MATHEMATICAL DERIVATIONS

A.1 Viscous Linear Stability Theory

The derivation of the viscous Rayleigh-Taylor instability contained within a tank of rectangular cross section starts with the description of both the problem and all the fluid quantities, where past derivations were used as references [40, 9].

Figure A.1: Depiction of the interface, where $\rho$, $\mu$, $u$, $v$ and $w$ represent density, viscosity and the $x$, $y$, $z$ velocities respectively. Also, $\eta$ represents the perturbation of the interface from $z = 0$ and $g_{\mathrm{eff}}$ is the acceleration taken in the negative $z$ direction.

Now, we continue by formulating the equations of motion in the context described by the above figure. Since there is no initial motion, we need only concern ourselves with the perturbed velocities (in $x$, $y$ and $z$): $u(x,y,z)$, $v(x,y,z)$ and $w(x,y,z)$. Pressure, however, does have a base state associated with it, namely the hydrostatic pressure distribution $\bar p = -\rho g_{\mathrm{eff}}\,z + C$, which is derived from the base state momentum equation considering that there is only a pressure gradient in the $z$ direction, $\frac{\partial\bar p}{\partial z} = -\rho g_{\mathrm{eff}}$. Thus, we also have $\frac{\partial\bar p}{\partial x} = 0$ and $\frac{\partial\bar p}{\partial y} = 0$. Such an equation for $\bar p$ can be written for both the lower and upper fluids as follows:

\[ \bar p_1 = -\rho_1 g_{\mathrm{eff}}\,z + C_1, \qquad \bar p_2 = -\rho_2 g_{\mathrm{eff}}\,z + C_2. \tag{A.1} \]

By realizing that the base state is a system without perturbation, we can say that

at $z = 0$ the base pressures must match, yielding $C_1 = C_2$. The continuity equation is
\[ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} = 0 \tag{A.2} \]
and the momentum equations are:

\[ \frac{Du}{Dt} = -\frac{1}{\rho}\frac{\partial(p+\bar p)}{\partial x} + \nu\left(\frac{\partial^2u}{\partial x^2} + \frac{\partial^2u}{\partial y^2} + \frac{\partial^2u}{\partial z^2}\right) \tag{A.3} \]
\[ \frac{Dv}{Dt} = -\frac{1}{\rho}\frac{\partial(p+\bar p)}{\partial y} + \nu\left(\frac{\partial^2v}{\partial x^2} + \frac{\partial^2v}{\partial y^2} + \frac{\partial^2v}{\partial z^2}\right) \tag{A.4} \]
\[ \frac{Dw}{Dt} = -\frac{1}{\rho}\frac{\partial(p+\bar p)}{\partial z} - g_{\mathrm{eff}} + \nu\left(\frac{\partial^2w}{\partial x^2} + \frac{\partial^2w}{\partial y^2} + \frac{\partial^2w}{\partial z^2}\right). \tag{A.5} \]
Subtracting the base state momentum equation from these equations, we arrive at our perturbation equations (the continuity equation is unchanged),

\[ \frac{Du}{Dt} = -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu\left(\frac{\partial^2u}{\partial x^2} + \frac{\partial^2u}{\partial y^2} + \frac{\partial^2u}{\partial z^2}\right) \tag{A.6} \]
\[ \frac{Dv}{Dt} = -\frac{1}{\rho}\frac{\partial p}{\partial y} + \nu\left(\frac{\partial^2v}{\partial x^2} + \frac{\partial^2v}{\partial y^2} + \frac{\partial^2v}{\partial z^2}\right) \tag{A.7} \]
\[ \frac{Dw}{Dt} = -\frac{1}{\rho}\frac{\partial p}{\partial z} + \nu\left(\frac{\partial^2w}{\partial x^2} + \frac{\partial^2w}{\partial y^2} + \frac{\partial^2w}{\partial z^2}\right). \tag{A.8} \]
These equations will be solved for the two regions using a normal mode analysis. Therefore, solutions are assumed to be of the form $u = \tilde u(z)e^{ik_xx+ik_yy+nt}$, $v = \tilde v(z)e^{ik_xx+ik_yy+nt}$, $w = \tilde w(z)e^{ik_xx+ik_yy+nt}$, $p = \tilde p(z)e^{ik_xx+ik_yy+nt}$. Using equations (A.2, A.6--A.8) with our normal mode analysis, and after neglecting the non-linear perturbed terms due to their smallness (namely the convective terms), we obtain the equations:

\[ ik_x\tilde u + ik_y\tilde v = -\frac{\partial\tilde w}{\partial z} \tag{A.9} \]

\[ n\tilde u = -\frac{ik_x}{\rho}\tilde p + \nu\left(-k_x^2 - k_y^2 + \frac{\partial^2}{\partial z^2}\right)\tilde u \tag{A.10} \]
\[ n\tilde v = -\frac{ik_y}{\rho}\tilde p + \nu\left(-k_x^2 - k_y^2 + \frac{\partial^2}{\partial z^2}\right)\tilde v \tag{A.11} \]
\[ n\tilde w = -\frac{1}{\rho}\frac{\partial\tilde p}{\partial z} + \nu\left(-k_x^2 - k_y^2 + \frac{\partial^2}{\partial z^2}\right)\tilde w. \tag{A.12} \]
Now, by multiplying equation (A.10) by $-ik_x$, multiplying equation (A.11) by $-ik_y$ and adding them together, we can eliminate the $x$ and $y$ velocities by utilizing equation (A.9). This effectively takes the derivatives, which has become much simpler with the exponential form that we have assumed. First, after adding the two equations we obtain
\[ -n(ik_x\tilde u + ik_y\tilde v) = -\frac{(k_x^2+k_y^2)}{\rho}\tilde p + \nu\left(ik_x^3\tilde u + ik_xk_y^2\tilde u + ik_y^3\tilde v + ik_x^2k_y\tilde v - ik_x\frac{\partial^2\tilde u}{\partial z^2} - ik_y\frac{\partial^2\tilde v}{\partial z^2}\right). \]
By grouping terms and using continuity we are left with

\[ n\frac{\partial\tilde w}{\partial z} = -\frac{(k_x^2+k_y^2)}{\rho}\tilde p + \nu\left(-\frac{\partial\tilde w}{\partial z}(k_x^2+k_y^2) + \frac{\partial^3\tilde w}{\partial z^3}\right). \tag{A.13} \]
With this, in addition to the $z$-momentum equation, we can finally derive an equation in which $\tilde w$ can be explicitly solved. Taking the $z$-derivative of equation (A.13) and adding it to equation (A.12) after the $x$, $y$ Laplacian ($\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} = -(k_x^2+k_y^2)$) has been taken of it, we rid ourselves of the pressure dependence and are left with

\[ n\left(\frac{\partial^2\tilde w}{\partial z^2} - (k_x^2+k_y^2)\tilde w\right) = \nu\left(-(k_x^2+k_y^2)\frac{\partial^2\tilde w}{\partial z^2} + (k_x^2+k_y^2)^2\tilde w + \frac{\partial^4\tilde w}{\partial z^4} - (k_x^2+k_y^2)\frac{\partial^2\tilde w}{\partial z^2}\right). \tag{A.14} \]

This can then be simplified to

\[ \left[n - \nu\left(\frac{\partial^2}{\partial z^2} - (k_x^2+k_y^2)\right)\right]\left(\frac{\partial^2}{\partial z^2} - (k_x^2+k_y^2)\right)\tilde w = 0 \]
\[ \Longrightarrow\ \left[1 - \frac{\nu}{n}\left(\frac{\partial^2}{\partial z^2} - (k_x^2+k_y^2)\right)\right]\left(\frac{\partial^2}{\partial z^2} - (k_x^2+k_y^2)\right)\tilde w = 0. \tag{A.15} \]
When the equation is put in this form, the solution becomes obvious: a superposition of four exponential functions,

\[ \tilde w = Ae^{\sqrt{k_x^2+k_y^2}\,z} + Be^{-\sqrt{k_x^2+k_y^2}\,z} + Ce^{\sqrt{k_x^2+k_y^2+\frac{n}{\nu}}\,z} + De^{-\sqrt{k_x^2+k_y^2+\frac{n}{\nu}}\,z}. \tag{A.16} \]
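As a quick consistency check (not part of the dissertation), each of the four exponentials in (A.16) can be verified symbolically to satisfy the factored operator in (A.15); a sketch using SymPy, with illustrative symbol names:

```python
# Verify that e^{+-sqrt(k^2) z} and e^{+-sqrt(k^2 + n/nu) z} each satisfy
# the factored fourth-order operator of (A.15). This is a check sketch,
# not code from the dissertation.
import sympy as sp

z, nu, n, kx, ky = sp.symbols('z nu n k_x k_y', positive=True)
k2 = kx**2 + ky**2

def operator(w):
    """Apply (1 - (nu/n)(d^2/dz^2 - k^2)) (d^2/dz^2 - k^2) to w."""
    inner = sp.diff(w, z, 2) - k2 * w
    return inner - (nu / n) * (sp.diff(inner, z, 2) - k2 * inner)

roots = [sp.sqrt(k2), -sp.sqrt(k2), sp.sqrt(k2 + n/nu), -sp.sqrt(k2 + n/nu)]
residuals = [sp.simplify(operator(sp.exp(r * z))) for r in roots]
assert all(res == 0 for res in residuals)
```

Each residual simplifies to zero, confirming that the general solution is the four-term superposition given in (A.16).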

Now, in the case we are studying with our experiments, we have an upper liquid and a lower liquid with different densities and viscosities. For this reason we can use the solution for the general incompressible case and apply it to each of our fluid regions individually. And since we can fully solve for the velocity in the $z$ direction, we can then use this information to obtain solutions for all the other velocities and the interfacial amplitude. For the lower fluid region, since our tank height is greater than the wavelength, the interfacial disturbances are not affected by the bottom of the tank (from deep water wave theory), so the lower boundary can be taken at negative infinity. Also, to obtain realistic solutions, we require the interfacial disturbance to vanish at negative infinity. This restricts our solution and thus we obtain

\[ \tilde w_1 = A_1e^{\sqrt{k_x^2+k_y^2}\,z} + B_1e^{\sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}}\,z}. \tag{A.17} \]

A similar scenario goes for the upper liquid,

\[ \tilde w_2 = A_2e^{-\sqrt{k_x^2+k_y^2}\,z} + B_2e^{-\sqrt{k_x^2+k_y^2+\frac{n}{\nu_2}}\,z}. \tag{A.18} \]

Also, let us note that the coefficients here have been reorganized to reflect our new simplifications and for clarity. Now we can look at our boundary conditions.

At the interface we must have continuity of velocities and stresses. These interfacial conditions require us to match the condition for each fluid at the interface. To do this, we will expand the variable to be matched around the point $z = 0$ in a Taylor series and take the value at the interface position $\eta$. For the continuity of $\tilde w$, we have
\[ \tilde w_1\big|_{z=0} + \frac{\partial\tilde w_1}{\partial z}\bigg|_{z=0}\eta + \ldots = \tilde w_2\big|_{z=0} + \frac{\partial\tilde w_2}{\partial z}\bigg|_{z=0}\eta + \ldots \]

From this we will neglect obvious non-linear terms. Our definition of the interface yields
\[ z = \eta \;\Longrightarrow\; \frac{dz}{dt} = \frac{\partial\eta}{\partial t} + \vec u\cdot\nabla\eta \;\Longrightarrow\; w = \frac{\partial\eta}{\partial t} + \vec u\cdot\nabla\eta. \]
Now, using the normal mode analysis above, integrating and substituting, we are left with $\eta = \tilde w/n$ after non-linear terms have been neglected. In our above interfacial condition this gives (where we have already neglected $\eta^2$ terms)

\[ \tilde w_1\big|_{z=0} + \frac{\partial\tilde w_1}{\partial z}\bigg|_{z=0}\frac{\tilde w}{n} = \tilde w_2\big|_{z=0} + \frac{\partial\tilde w_2}{\partial z}\bigg|_{z=0}\frac{\tilde w}{n}. \]

Now, the other non-linear term is obvious and we will neglect it to give our interfacial condition the form
\[ \tilde w_1\big|_{z=0} = \tilde w_2\big|_{z=0}. \tag{A.19} \]
Plugging in our solutions for $\tilde w$ from the upper and lower fluids (eq. A.18 and A.17) we are left with

\[ A_1 + B_1 = A_2 + B_2. \tag{A.20} \]

For the continuity of $\partial\tilde w/\partial z$ across the interface at $z = \eta$ we have,

\[ \sqrt{k_x^2+k_y^2}\,A_1 + \sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}}\,B_1 = -\sqrt{k_x^2+k_y^2}\,A_2 - \sqrt{k_x^2+k_y^2+\frac{n}{\nu_2}}\,B_2. \tag{A.21} \]
Again, this condition was obtained by expanding in a Taylor series around $z = 0$ as above and matching on either side of the interface whilst neglecting non-linear terms. In addition to these two relations defining what happens across the interface, we

also need to have continuity of tangential viscous stresses as follows,

\[ \tau_{xz} = \mu\left(\frac{\partial u}{\partial z} + \frac{\partial w}{\partial x}\right) = \mu\left(\frac{\partial u}{\partial z} + ik_xw\right) \tag{A.22} \]
\[ \tau_{yz} = \mu\left(\frac{\partial v}{\partial z} + \frac{\partial w}{\partial y}\right) = \mu\left(\frac{\partial v}{\partial z} + ik_yw\right). \tag{A.23} \]
These two equations can be combined to remove the $u$ and $v$ dependence (in a way similar to that done with the $u$ and $v$ momentum equations),

\[ i(k_x\tau_{xz} + k_y\tau_{yz}) = \mu\left[\frac{\partial}{\partial z}(ik_xu + ik_yv) - (k_x^2+k_y^2)w\right] = -\mu\left[\frac{\partial^2}{\partial z^2} + (k_x^2+k_y^2)\right]w. \tag{A.24} \]
Now, we can say that this relation is continuous across the interface at $z = \eta$; following the same procedure as above, this yields

\[ \mu_1\left[2(k_x^2+k_y^2)A_1 + \left(2(k_x^2+k_y^2)+\frac{n}{\nu_1}\right)B_1\right] = \mu_2\left[2(k_x^2+k_y^2)A_2 + \left(2(k_x^2+k_y^2)+\frac{n}{\nu_2}\right)B_2\right]. \tag{A.25} \]
Now we will look at how the normal forces must be continuous across the interface. From the momentum integral, by considering a small arbitrary volume around the interface and neglecting non-linear terms, we have $(\tau_{nn} - P)_1 = (\tau_{nn} - P)_2$ for the normal forces at the interface. Let us also note that there is a base state for pressure, and it is included in the pressure term. We will now expand the perturbed part of this in a Taylor series around $z = \eta$ and neglect non-linear terms. We have an explicit equation for the base state, so it does not need to be expanded. The expansion is

\[ (\tau_{nn} - \tilde p)_1\big|_{z=0} + \frac{\partial(\tau_{nn}-\tilde p)_1}{\partial z}\bigg|_{z=0}\eta + \ldots - \bar p_1\big|_{z=\eta} = (\tau_{nn} - \tilde p)_2\big|_{z=0} + \frac{\partial(\tau_{nn}-\tilde p)_2}{\partial z}\bigg|_{z=0}\eta + \ldots - \bar p_2\big|_{z=\eta}. \]

Again, realizing that $\eta = w/n$ and that $\tau_{nn}$ contains velocity terms by definition, we can neglect obvious non-linear terms and are left with

\[ (\tau_{zz} - \tilde p)_1\big|_{z=0} - \frac{\partial\tilde p_1}{\partial z}\bigg|_{z=0}\eta + \ldots - \left(-\rho_1g_{\mathrm{eff}}\,\eta + C_1\right) = (\tau_{zz} - \tilde p)_2\big|_{z=0} - \frac{\partial\tilde p_2}{\partial z}\bigg|_{z=0}\eta + \ldots - \left(-\rho_2g_{\mathrm{eff}}\,\eta + C_2\right). \]

Also, we have used the fact that $\tau_{nn} = \tau_{zz}$ at $z = 0$. Now, we can formulate this interfacial condition (after using our normal mode analysis) by noting that $\tau_{zz} = 2\mu\,\frac{\partial w}{\partial z}$ and making use of our known representations of $\frac{\partial\tilde p}{\partial z}$ from equation (A.12) and $\tilde p$ from equation (A.13). Also, we have replaced $\eta$ with $\tilde w/n$. This gives,

\[ \begin{aligned}
&2\mu_1\frac{\partial\tilde w_1}{\partial z}\bigg|_{z=0} - \left[-\frac{n\rho_1}{k_x^2+k_y^2}\frac{\partial\tilde w_1}{\partial z} + \frac{\mu_1}{k_x^2+k_y^2}\frac{\partial^3\tilde w_1}{\partial z^3} - \mu_1\frac{\partial\tilde w_1}{\partial z}\right]_{z=0} \\
&\qquad - \left[-n\rho_1\tilde w_1 + \mu_1\frac{\partial^2\tilde w_1}{\partial z^2} - \mu_1(k_x^2+k_y^2)\tilde w_1\right]_{z=0}\frac{\tilde w_1}{n}\bigg|_{z=0} - \left(-\rho_1g_{\mathrm{eff}}\frac{\tilde w_1}{n} + C_1\right) \\
&= 2\mu_2\frac{\partial\tilde w_2}{\partial z}\bigg|_{z=0} - \left[-\frac{n\rho_2}{k_x^2+k_y^2}\frac{\partial\tilde w_2}{\partial z} + \frac{\mu_2}{k_x^2+k_y^2}\frac{\partial^3\tilde w_2}{\partial z^3} - \mu_2\frac{\partial\tilde w_2}{\partial z}\right]_{z=0} \\
&\qquad - \left[-n\rho_2\tilde w_2 + \mu_2\frac{\partial^2\tilde w_2}{\partial z^2} - \mu_2(k_x^2+k_y^2)\tilde w_2\right]_{z=0}\frac{\tilde w_2}{n}\bigg|_{z=0} - \left(-\rho_2g_{\mathrm{eff}}\frac{\tilde w_2}{n} + C_2\right).
\end{aligned} \tag{A.26} \]

After neglecting non-linear terms (we have also expanded $\tilde w/n$ in its Taylor series about the interface) we have,

\[ \begin{aligned}
&2\mu_1\frac{\partial\tilde w_1}{\partial z}\bigg|_{z=0} - \left[-\frac{n\rho_1}{k_x^2+k_y^2}\frac{\partial\tilde w_1}{\partial z} + \frac{\mu_1}{k_x^2+k_y^2}\frac{\partial^3\tilde w_1}{\partial z^3} - \mu_1\frac{\partial\tilde w_1}{\partial z}\right]_{z=0} - \rho_1g_{\mathrm{eff}}\frac{\tilde w_1}{n} - [C_1 - C_2] \\
&= 2\mu_2\frac{\partial\tilde w_2}{\partial z}\bigg|_{z=0} - \left[-\frac{n\rho_2}{k_x^2+k_y^2}\frac{\partial\tilde w_2}{\partial z} + \frac{\mu_2}{k_x^2+k_y^2}\frac{\partial^3\tilde w_2}{\partial z^3} - \mu_2\frac{\partial\tilde w_2}{\partial z}\right]_{z=0} - \rho_2g_{\mathrm{eff}}\frac{\tilde w_2}{n}.
\end{aligned} \]

We can now substitute in our expressions for $\tilde w$ and $\partial\tilde w/\partial z$ (eq. A.20, eq. A.21). The term in brackets disappears because we know these two constants from the base pressure term are equal, as determined earlier. Also, there is a major simplification in the first three terms in parentheses: the $B$ coefficients in our expressions for $\tilde w$ completely cancel, and much of what involves the $A$ coefficients cancels as well. This gives,

\[ \begin{aligned}
&2\mu_1\left(A_1\sqrt{k_x^2+k_y^2} + B_1\sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}}\right) + \frac{n\rho_1}{\sqrt{k_x^2+k_y^2}}A_1 + \frac{\rho_1}{n}g_{\mathrm{eff}}(A_1+B_1) \\
&= -2\mu_2\left(A_2\sqrt{k_x^2+k_y^2} + B_2\sqrt{k_x^2+k_y^2+\frac{n}{\nu_2}}\right) - \frac{n\rho_2}{\sqrt{k_x^2+k_y^2}}A_2 + \frac{\rho_2}{n}g_{\mathrm{eff}}(A_2+B_2).
\end{aligned} \]
We will now simplify a bit more so that we can easily understand the result, by writing it in terms of density differences and viscosity differences as well as other reorganizations so that we can see where the well-known inviscid case comes from. We will start by multiplying the entire equation by $k$, where $k = \sqrt{k_x^2+k_y^2}$, and dividing by $n$. Also, we form the equation so that it is written in terms of the Atwood number. The Atwood number is a non-dimensional number giving a relationship

between the two different densities for the different fluids. It is defined as $A = \frac{\rho_1-\rho_2}{\rho_1+\rho_2}$.

To do this we will divide our equation by $\rho_1 + \rho_2$ and make use of the fact that $A_1 + B_1 = A_2 + B_2$ as in equation (A.20). Then we will make use of the fact that
\[ A_1\sqrt{k_x^2+k_y^2} + B_1\sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}} = -A_2\sqrt{k_x^2+k_y^2} - B_2\sqrt{k_x^2+k_y^2+\frac{n}{\nu_2}} \]
from equation (A.21), in order to rewrite the equation to include a scaled viscosity term similar to our scaled density term. This yields,

\[ \frac{2\sqrt{k_x^2+k_y^2}}{n}\frac{\mu_1-\mu_2}{\rho_1+\rho_2}\left(A_1\sqrt{k_x^2+k_y^2} + B_1\sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}}\right) + \frac{\rho_1-\rho_2}{\rho_1+\rho_2}\frac{\sqrt{k_x^2+k_y^2}}{n^2}g_{\mathrm{eff}}(A_1+B_1) = -\frac{\rho_1}{\rho_1+\rho_2}A_1 - \frac{\rho_2}{\rho_1+\rho_2}A_2. \tag{A.27} \]

Now, we will rewrite our interfacial conditions (eq. A.20, A.21, A.25 and A.27) in matrix form so that we can begin to solve our system,

\[ \begin{pmatrix}
1 & 1 & -1 & -1 \\[4pt]
\sqrt{k_x^2+k_y^2} & \sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}} & \sqrt{k_x^2+k_y^2} & \sqrt{k_x^2+k_y^2+\frac{n}{\nu_2}} \\[4pt]
2\mu_1(k_x^2+k_y^2) & 2\mu_1(k_x^2+k_y^2)+n\rho_1 & -2\mu_2(k_x^2+k_y^2) & -2\mu_2(k_x^2+k_y^2)-n\rho_2 \\[4pt]
A\frac{\sqrt{k_x^2+k_y^2}}{n^2}g_{\mathrm{eff}} + \frac{\rho_1}{\rho_1+\rho_2} + \frac{2(k_x^2+k_y^2)}{n}\frac{\mu_1-\mu_2}{\rho_1+\rho_2} & A\frac{\sqrt{k_x^2+k_y^2}}{n^2}g_{\mathrm{eff}} + \frac{2\sqrt{k_x^2+k_y^2}\sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}}}{n}\frac{\mu_1-\mu_2}{\rho_1+\rho_2} & \frac{\rho_2}{\rho_1+\rho_2} & 0
\end{pmatrix}\begin{pmatrix} A_1 \\ B_1 \\ A_2 \\ B_2 \end{pmatrix} = 0. \tag{A.28} \]

The determinant of this matrix must be zero so that we have a solution other than the trivial one. We can simplify things a bit, however, by reducing the determinant to that of a $3\times3$ matrix. To do this, we subtract the first column from the second, the third from the fourth, and then lastly add the first column to the third. This leaves the first row as $\begin{pmatrix} 1 & 0 & 0 & 0 \end{pmatrix}$, which simplifies the determinant. This simplification yields,

\[ \begin{vmatrix}
\sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}} - \sqrt{k_x^2+k_y^2} & 2\sqrt{k_x^2+k_y^2} & \sqrt{k_x^2+k_y^2+\frac{n}{\nu_2}} - \sqrt{k_x^2+k_y^2} \\[4pt]
n\rho_1 & 2(k_x^2+k_y^2)(\mu_1-\mu_2) & -n\rho_2 \\[4pt]
\frac{2\sqrt{k_x^2+k_y^2}\,(\mu_1-\mu_2)}{n(\rho_1+\rho_2)}\left(\sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}} - \sqrt{k_x^2+k_y^2}\right) - \frac{\rho_1}{\rho_1+\rho_2} & A\frac{\sqrt{k_x^2+k_y^2}}{n^2}g_{\mathrm{eff}} + \frac{2(k_x^2+k_y^2)}{n}\frac{\mu_1-\mu_2}{\rho_1+\rho_2} + 1 & -\frac{\rho_2}{\rho_1+\rho_2}
\end{vmatrix} = 0. \tag{A.29} \]

This gives the eigenvalue relationship for n as a function of kx and ky as,

\[ \begin{aligned}
&n\rho_2(q_1-k)\left[\frac{\rho_1-\rho_2}{\rho_1+\rho_2}\frac{k}{n^2}g_{\mathrm{eff}} + \frac{2k^2}{n}\frac{\mu_1-\mu_2}{\rho_1+\rho_2} + 1\right] - 2k\left[n\rho_2 + k(\mu_1-\mu_2)(q_2-k)\right]\left[\frac{2k(\mu_1-\mu_2)}{n(\rho_1+\rho_2)}(q_1-k) - \frac{\rho_1}{\rho_1+\rho_2}\right] \\
&\quad + (q_2-k)\,n\rho_1\left[\frac{\rho_1-\rho_2}{\rho_1+\rho_2}\frac{k}{n^2}g_{\mathrm{eff}} + \frac{2k^2}{n}\frac{\mu_1-\mu_2}{\rho_1+\rho_2} + 1\right] - \frac{2k\rho_2}{\rho_1+\rho_2}\left[k(\mu_1-\mu_2)(q_1-k) - n\rho_1\right] = 0.
\end{aligned} \tag{A.30} \]

In the above equation $k = \sqrt{k_x^2+k_y^2}$ and $q_{1,2} = \sqrt{k_x^2+k_y^2+\frac{n}{\nu_{1,2}}}$. We can arrive at the well-known inviscid solution by allowing $\mu$ to approach zero for each fluid. Doing this forces the $O\!\left(\frac{n}{\nu}\right)$ terms to approach infinity and therefore dominate the equation.

Comparing terms of this order of magnitude and neglecting smaller-order terms gives us
\[ \left(\frac{g_{\mathrm{eff}}\,k}{n^2}\frac{\rho_1-\rho_2}{\rho_1+\rho_2} + 1\right)\left(q_1n\rho_2 + q_2n\rho_1\right) = 0. \]
This expression is only valid if the first term in parentheses equals 0. This gives

\[ n^2 = \frac{\rho_2-\rho_1}{\rho_1+\rho_2}\,k\,g_{\mathrm{eff}}, \]

which is the inviscid result. Although we neglected interfacial tension for this result, it does come in handy when considering the immiscible experiments. We can easily introduce it into the eigenvalue equation (eq. A.30) by following the results of Roberts [88] in the linear stability theory derivation section. It appears in the equations such that it acts to diminish the RT growth by adding a term $-\frac{\gamma k^3}{\rho_2+\rho_1}$ to the $kAg_{\mathrm{eff}}$ term, where $\gamma$ is the interfacial or surface tension. This yields,

\[ \begin{aligned}
&n\rho_2(q_1-k)\left[\frac{\rho_1-\rho_2}{\rho_1+\rho_2}\frac{k}{n^2}g_{\mathrm{eff}} + \frac{\gamma k^3}{n^2(\rho_2+\rho_1)} + \frac{2k^2}{n}\frac{\mu_1-\mu_2}{\rho_1+\rho_2} + 1\right] - 2k\left[n\rho_2 + k(\mu_1-\mu_2)(q_2-k)\right]\left[\frac{2k(\mu_1-\mu_2)}{n(\rho_1+\rho_2)}(q_1-k) - \frac{\rho_1}{\rho_1+\rho_2}\right] \\
&\quad + (q_2-k)\,n\rho_1\left[\frac{\rho_1-\rho_2}{\rho_1+\rho_2}\frac{k}{n^2}g_{\mathrm{eff}} + \frac{\gamma k^3}{n^2(\rho_2+\rho_1)} + \frac{2k^2}{n}\frac{\mu_1-\mu_2}{\rho_1+\rho_2} + 1\right] - \frac{2k\rho_2}{\rho_1+\rho_2}\left[k(\mu_1-\mu_2)(q_1-k) - n\rho_1\right] = 0. \tag{A.31}
\end{aligned} \]
This result is consistent with the results of Chandrasekhar [9]. We now have the ability to solve the viscous eigensystem depicted in equation (A.28). We will once again omit interfacial tension here. We can rewrite the solution for $\tilde w$ in a way that makes the eigenvectors obvious, and therefore we will include

arbitrary constants which will be multiplied by the eigenvectors,

\[ \tilde w = \sum_{j=1}^{4}\begin{pmatrix} e^{\sqrt{k_x^2+k_y^2}\,z} & e^{\sqrt{k_x^2+k_y^2+\frac{n_j}{\nu_1}}\,z} & e^{-\sqrt{k_x^2+k_y^2}\,z} & e^{-\sqrt{k_x^2+k_y^2+\frac{n_j}{\nu_2}}\,z} \end{pmatrix}\begin{pmatrix} A_1^{n_j} \\ B_1^{n_j} \\ A_2^{n_j} \\ B_2^{n_j} \end{pmatrix} C_j\,e^{n_jt}\,e^{i(k_xx+k_yy)}, \tag{A.32} \]
with one term for each of the four eigenvalues $n_j$ of a given $k$. We will now use this result to develop solutions for the $x$ and $y$ velocities. Plugging the solved $w$ velocity into equation (A.13), we derive an equation for the pressure perturbation. The pressure perturbation equation is then used in the $x$ and $y$ momentum equations (eq. A.10 and A.11). We are left with an ODE for $u$

and v that can be solved. For the lower fluid, the pressure equation becomes:

\[ \begin{aligned}
\tilde p_1 &= \frac{\nu_1\rho_1}{k_x^2+k_y^2}\frac{\partial^3\tilde w_1}{\partial z^3} - \frac{n\rho_1}{k_x^2+k_y^2}\frac{\partial\tilde w_1}{\partial z} - \nu_1\rho_1\frac{\partial\tilde w_1}{\partial z} \\
&= -\frac{n\rho_1}{\sqrt{k_x^2+k_y^2}}A_1e^{\sqrt{k_x^2+k_y^2}\,z} + \rho_1B_1\sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}}\left[\frac{\nu_1}{k_x^2+k_y^2}\left(k_x^2+k_y^2+\frac{n}{\nu_1}\right) - \frac{n}{k_x^2+k_y^2} - \nu_1\right]e^{\sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}}\,z},
\end{aligned} \tag{A.33} \]
where the viscous terms multiplying $A_1$ have cancelled. This will be plugged into the $x$-momentum equation, which can be written in a form in which the ODE is more apparent,

\[ \frac{ik_x\tilde p_1}{\rho_1} = \frac{\partial^2\tilde u_1}{\partial z^2} - \left[n + \nu_1(k_x^2+k_y^2)\right]\tilde u_1. \tag{A.34} \]
To solve this we will solve first for the homogeneous solution and then for the particular solution using the method of undetermined coefficients.

Homogeneous: $\tilde u_{1,\mathrm{Homo.}} = D_1e^{\sqrt{n+\nu_1(k_x^2+k_y^2)}\,z}$

Particular: $\tilde u_{1,\mathrm{Part.}} = E_1e^{\sqrt{k_x^2+k_y^2}\,z} + F_1e^{\sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}}\,z}$

Substituting the particular form and $\tilde p_1$ from (A.33) into (A.34), and matching the coefficients of each exponential, gives
\[ E_1 = \frac{ik_xn}{-\sqrt{k_x^2+k_y^2}\left[(k_x^2+k_y^2) - \left(n+\nu_1(k_x^2+k_y^2)\right)\right]}A_1; \qquad F_1 = \frac{ik_x\sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}}\left[\frac{\nu_1}{k_x^2+k_y^2}\left(k_x^2+k_y^2+\frac{n}{\nu_1}\right) - \frac{n}{k_x^2+k_y^2} - \nu_1\right]}{\left(k_x^2+k_y^2+\frac{n}{\nu_1}\right) - \left(n+\nu_1(k_x^2+k_y^2)\right)}B_1. \tag{A.35} \]

The solution for the x velocity then takes the form of

\[ \tilde u_1 = \tilde u_{1,\mathrm{Homo.}} + \tilde u_{1,\mathrm{Part.}} = D_1e^{\sqrt{n+\nu_1(k_x^2+k_y^2)}\,z} + E_1e^{\sqrt{k_x^2+k_y^2}\,z} + F_1e^{\sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}}\,z}, \]

where only $D_1$ is still unknown. We can do the same derivation for the upper fluid. The pressure equation becomes,

\[ \begin{aligned}
\tilde p_2 &= \frac{\nu_2\rho_2}{k_x^2+k_y^2}\frac{\partial^3\tilde w_2}{\partial z^3} - \frac{n\rho_2}{k_x^2+k_y^2}\frac{\partial\tilde w_2}{\partial z} - \nu_2\rho_2\frac{\partial\tilde w_2}{\partial z} \\
&= \frac{n\rho_2}{\sqrt{k_x^2+k_y^2}}A_2e^{-\sqrt{k_x^2+k_y^2}\,z} + \rho_2B_2\sqrt{k_x^2+k_y^2+\frac{n}{\nu_2}}\left[-\frac{\nu_2}{k_x^2+k_y^2}\left(k_x^2+k_y^2+\frac{n}{\nu_2}\right) + \frac{n}{k_x^2+k_y^2} + \nu_2\right]e^{-\sqrt{k_x^2+k_y^2+\frac{n}{\nu_2}}\,z}, \tag{A.36}
\end{aligned} \]
where again the viscous terms multiplying $A_2$ have cancelled. This is now plugged into the $x$-momentum,

\[ \frac{ik_x\tilde p_2}{\rho_2} = \frac{\partial^2\tilde u_2}{\partial z^2} - \left[n + \nu_2(k_x^2+k_y^2)\right]\tilde u_2. \tag{A.37} \]
To solve this we will again solve first for the homogeneous solution and then for the particular solution using the method of undetermined coefficients.

Homogeneous: $\tilde u_{2,\mathrm{Homo.}} = D_2e^{-\sqrt{n+\nu_2(k_x^2+k_y^2)}\,z}$

Particular: $\tilde u_{2,\mathrm{Part.}} = E_2e^{-\sqrt{k_x^2+k_y^2}\,z} + F_2e^{-\sqrt{k_x^2+k_y^2+\frac{n}{\nu_2}}\,z}$

Substituting the particular form and $\tilde p_2$ from (A.36) into (A.37), and matching the coefficients of each exponential, gives
\[ E_2 = \frac{ik_xn}{\sqrt{k_x^2+k_y^2}\left[(k_x^2+k_y^2) - \left(n+\nu_2(k_x^2+k_y^2)\right)\right]}A_2; \qquad F_2 = \frac{ik_x\sqrt{k_x^2+k_y^2+\frac{n}{\nu_2}}\left[-\frac{\nu_2}{k_x^2+k_y^2}\left(k_x^2+k_y^2+\frac{n}{\nu_2}\right) + \frac{n}{k_x^2+k_y^2} + \nu_2\right]}{\left(k_x^2+k_y^2+\frac{n}{\nu_2}\right) - \left(n+\nu_2(k_x^2+k_y^2)\right)}B_2. \tag{A.38} \]

The solution for the x velocity for the upper fluid then takes the form of

\[ \tilde u_2 = \tilde u_{2,\mathrm{Homo.}} + \tilde u_{2,\mathrm{Part.}} = D_2e^{-\sqrt{n+\nu_2(k_x^2+k_y^2)}\,z} + E_2e^{-\sqrt{k_x^2+k_y^2}\,z} + F_2e^{-\sqrt{k_x^2+k_y^2+\frac{n}{\nu_2}}\,z}, \]

where only D2 is still unknown.

We can now go about solving for the coefficients D1 and D2 by enforcing that the velocity and its derivatives be continuous across the interface (like we have done for the w velocity previously).

At $z = 0$, we have $\tilde u_1 = \tilde u_2$:

\[ D_1 + E_1 + F_1 = D_2 + E_2 + F_2. \tag{A.39} \]

At $z = 0$, we also have $\frac{\partial\tilde u_1}{\partial z} = \frac{\partial\tilde u_2}{\partial z}$,

\[ \begin{aligned}
&\sqrt{n+\nu_1(k_x^2+k_y^2)}\,D_1 + \sqrt{k_x^2+k_y^2}\,E_1 + \sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}}\,F_1 \\
&\quad = -\sqrt{n+\nu_2(k_x^2+k_y^2)}\,D_2 - \sqrt{k_x^2+k_y^2}\,E_2 - \sqrt{k_x^2+k_y^2+\frac{n}{\nu_2}}\,F_2.
\end{aligned} \tag{A.40} \]
We can solve for $D_1$ and $D_2$ by use of Cramer's rule. Putting the two equations and two unknowns in matrix form we obtain,

\[ \begin{pmatrix} 1 & -1 \\ \sqrt{n+\nu_1(k_x^2+k_y^2)} & \sqrt{n+\nu_2(k_x^2+k_y^2)} \end{pmatrix}\begin{pmatrix} D_1 \\ D_2 \end{pmatrix} = \begin{pmatrix} E_2 + F_2 - E_1 - F_1 \\[4pt] -\left(\sqrt{k_x^2+k_y^2}\,E_1 + \sqrt{k_x^2+k_y^2}\,E_2 + \sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}}\,F_1 + \sqrt{k_x^2+k_y^2+\frac{n}{\nu_2}}\,F_2\right) \end{pmatrix}. \tag{A.41} \]

From equation (A.41), the solution for D1 is

\[ D_1 = \frac{\begin{vmatrix} E_2+F_2-E_1-F_1 & -1 \\[4pt] -\left(\sqrt{k_x^2+k_y^2}\,E_1 + \sqrt{k_x^2+k_y^2}\,E_2 + \sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}}\,F_1 + \sqrt{k_x^2+k_y^2+\frac{n}{\nu_2}}\,F_2\right) & \sqrt{n+\nu_2(k_x^2+k_y^2)} \end{vmatrix}}{\begin{vmatrix} 1 & -1 \\ \sqrt{n+\nu_1(k_x^2+k_y^2)} & \sqrt{n+\nu_2(k_x^2+k_y^2)} \end{vmatrix}} \tag{A.42} \]

and the solution for D2 is

\[ D_2 = \frac{\begin{vmatrix} 1 & E_2+F_2-E_1-F_1 \\[4pt] \sqrt{n+\nu_1(k_x^2+k_y^2)} & -\left(\sqrt{k_x^2+k_y^2}\,E_1 + \sqrt{k_x^2+k_y^2}\,E_2 + \sqrt{k_x^2+k_y^2+\frac{n}{\nu_1}}\,F_1 + \sqrt{k_x^2+k_y^2+\frac{n}{\nu_2}}\,F_2\right) \end{vmatrix}}{\begin{vmatrix} 1 & -1 \\ \sqrt{n+\nu_1(k_x^2+k_y^2)} & \sqrt{n+\nu_2(k_x^2+k_y^2)} \end{vmatrix}}. \tag{A.43} \]
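In practice, the Cramer's-rule quotients (A.42) and (A.43) need not be evaluated symbolically: once $E_1$, $F_1$, $E_2$, $F_2$ are known, the $2\times2$ system (A.41) can be solved numerically. A sketch with placeholder parameter values (the numbers below are illustrative, not the experimental ones):

```python
# Numerically solve the 2x2 system (A.41) for D1 and D2. All parameter
# values are illustrative placeholders; the coefficients E and F would
# come from (A.35) and (A.38) in a full calculation.
import numpy as np

n = 10.0 + 0j                       # growth rate (assumed)
nu1, nu2 = 1.0e-6, 2.0e-6           # kinematic viscosities, m^2/s (assumed)
k2 = (2.0 * np.pi / 0.01) ** 2      # k_x^2 + k_y^2 for a 1 cm wavelength
E1, F1, E2, F2 = 0.1, 0.02, -0.08, 0.01   # assumed particular-solution amplitudes

M = np.array([[1.0, -1.0],
              [np.sqrt(n + nu1 * k2), np.sqrt(n + nu2 * k2)]], dtype=complex)
rhs = np.array([E2 + F2 - E1 - F1,
                -(np.sqrt(k2) * (E1 + E2)
                  + np.sqrt(k2 + n / nu1) * F1
                  + np.sqrt(k2 + n / nu2) * F2)], dtype=complex)

D1, D2 = np.linalg.solve(M, rhs)
```

By construction `np.linalg.solve` returns the same $D_1$, $D_2$ as the determinant quotients in (A.42)–(A.43).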

Now, back to the viscous result. From our solution for $w$ we had four unknown coefficients, namely $A_1$, $B_1$, $A_2$ and $B_2$. From our matrix equation we were able to create an eigenvalue equation, but when solving for our eigenvectors we must note that one coefficient is taken as arbitrary. For instance, we can take $B_1$ as arbitrary and rewrite our four coefficients as $\tilde A_1C$, $C$, $\tilde A_2C$ and $\tilde B_2C$ respectively, where we could solve for $C$ by prescribing an initial $z$ velocity at the interface. Let us now look a little closer at the interface. We can write down the full equation for the interface as follows. We know that
\[ \frac{\partial\eta}{\partial t} = w\big|_{z=0} = (\tilde A_1C + C)e^{ik_xx+ik_yy}\left(e^{n_1t} + e^{n_2t} + e^{n_3t} + e^{n_4t}\right). \]
We have made use of the fact that, since our eigenvalue equation originates from a $4\times4$ relation, there will be 4 values of $n$ for every $k$. The equation for the interface is found by integrating the above equation

and it yields,

\[ \eta = (\tilde A_1C + C)e^{ik_xx+ik_yy}\left(\frac{1}{n_1}e^{n_1t} + \frac{1}{n_2}e^{n_2t} + \frac{1}{n_3}e^{n_3t} + \frac{1}{n_4}e^{n_4t}\right) + C_2 = C_3\,e^{ik_xx+ik_yy}\left(\frac{1}{n_1}e^{n_1t} + \frac{1}{n_2}e^{n_2t} + \frac{1}{n_3}e^{n_3t} + \frac{1}{n_4}e^{n_4t}\right) + C_2, \tag{A.44} \]
where we have made another substitution, $C_3 = \tilde A_1C + C$, clarifying the equation and making it apparent that we have two unknowns here, which are solved for from the initial interface velocity and interface displacement.

A.2 Viscous Effects

The effects of viscosity act only at small scales in RTI and therefore act to select particular wavelengths over others. Since viscosity only acts at small scales, its effect can be neglected once the instability has become larger than these scales. This can be understood by comparing the RT growth term with that of viscous damping. First, the RT growth term from inviscid theory is $e^{\sqrt{kAg_{\mathrm{eff}}}\,t}$ [88] and that of viscous damping is $e^{-2\nu k^2t}$ [53]. It is of interest here to see when the RT growth term is much larger than the viscous term,

\[ e^{\sqrt{kAg_{\mathrm{eff}}}\,t} \gg e^{-2k^2\nu t}. \tag{A.45} \]

This yields,

\[ k \ll \left(\frac{Ag_{\mathrm{eff}}}{4\nu^2}\right)^{1/3}. \tag{A.46} \]
Equation (A.46) tells us that for viscous effects to be neglected in the first stage of RTI (for our experiments), the wavenumbers must be less than approximately 11,000. This translates to a wavelength of approximately 0.5 mm. The dominant scales that we are measuring are certainly larger than this, so in this regime we can neglect viscous effects. Next, for completeness, the RT growth from the self-similar model will be compared to the viscous damping term. Again, let us examine when

the RT growth is a lot larger than the viscous damping,

\[ \alpha Ag_{\mathrm{eff}}t^2 \gg e^{-2k^2\nu t}. \tag{A.47} \]

This yields,
\[ k \ll \sqrt{\frac{\ln\left(\alpha Ag_{\mathrm{eff}}t^2\right)}{2\nu t}}. \tag{A.48} \]
By assuming an approximate $\alpha$ value of 0.05 and a time of 300 ms (this time corresponds to the beginning of the measurable mixing region development in our experiments), we conclude that the wavenumbers should be less than approximately 2500 for viscous effects to be neglected. This equates to a wavelength of approximately 2.5 mm. Measurements are acquired at wavelengths larger than this, and the mixing layer width is definitely much larger than this, so we can confidently neglect viscous effects here as well.
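The two cutoff estimates can be evaluated directly. The sketch below uses assumed stand-in values for $\nu$, $A$ and $g_{\mathrm{eff}}$ (only $\alpha$ and $t$ are taken from the text), so the numbers it prints will differ from the approximately 11,000 and 2500 quoted above:

```python
# Evaluate the viscous-cutoff estimates (A.46) and (A.48). The fluid
# properties below are assumed placeholders, not the experimental values.
import math

nu = 1.0e-6        # kinematic viscosity, m^2/s (water-like, assumed)
A = 0.5            # Atwood number (assumed)
g_eff = 1000.0     # effective acceleration, m/s^2 (assumed)
alpha = 0.05       # self-similar growth coefficient (from the text)
t = 0.300          # time, s (from the text)

# Linear-regime cutoff, eq. (A.46): k << (A g_eff / (4 nu^2))^(1/3)
k_linear = (A * g_eff / (4.0 * nu**2)) ** (1.0 / 3.0)

# Self-similar-regime cutoff, eq. (A.48); requires alpha*A*g_eff*t^2 > 1
k_selfsim = math.sqrt(math.log(alpha * A * g_eff * t**2) / (2.0 * nu * t))

for name, k in (("linear regime", k_linear), ("self-similar regime", k_selfsim)):
    print(f"{name}: k << {k:.0f} 1/m (wavelength >> {2e3 * math.pi / k:.2f} mm)")
```

Note that (A.48) only makes sense once $\alpha Ag_{\mathrm{eff}}t^2 > 1$, i.e. once the self-similar growth has exceeded the initial amplitude scale; the assumed values above satisfy this.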

A.3 Parametric Excitation with Viscous Damping

We will take a somewhat different approach here, in that there is a non-linearity representing the physics that must be solved, so we do not wish to make the equations too complicated and will make simplifications where appropriate. Also, since the Mathieu equation has been solved by other authors in the past [72, 7], both with and without damping, we wish to use these as comparisons and will therefore try to be consistent with those references. Other sources were also used as references [43, 94]. We will first consider inviscid flow by assuming that viscosity does not introduce rotationality into the bulk of the fluid, just damping. We will consider only small amplitude waves; therefore rotationality would, for the most part, only be introduced at the interface. For this derivation we are mainly concerned with the wavelengths that arise, not the specifics of the flow field, so the irrotationality assumption should be acceptable. Aside from the lack of viscosity here, much of the derivation of the previous section is applicable, where we introduce perturbed fields, linearize and do a normal mode analysis. Of course, differences in the stress tensor, which cause the interfacial conditions to be different, must be accounted for. We will now introduce a potential function since we are considering inviscid flow at first. The potential function $\phi$ is such that $u = \frac{\partial\phi}{\partial x}$, $v = \frac{\partial\phi}{\partial y}$ and $w = \frac{\partial\phi}{\partial z}$. With this we can write down the unsteady Bernoulli equation as,

\[ \frac{\partial\phi}{\partial t} + \frac{P}{\rho} + \frac{1}{2}(\nabla\phi)^2 - (g - f\cos\omega_0t)\,z = F(t) = 0. \tag{A.49} \]

We will use this to determine a kinematic condition at the interface. Also, we should note that both the base and perturbed states are included, since the equation will be used as an interfacial condition where all quantities need to be matched, not just the perturbed ones. Now, as in the previous section (A.1), we will linearize this equation (therefore neglecting the squared potential-function term) and then match the pressure across the interface, while also including a surface tension term $\gamma$ (the interfacial tension between the two liquids). The derivation of the surface tension term can be found in the Linear Stability Theory chapter of Michael Roberts' Master's Thesis [88]. Since we are not including viscosity here, only pressure forces are present in the stress tensor. We will expand the pressure in a Taylor series around $z = 0$, take the value at $\eta$, and neglect non-linear terms, following the same procedure as was used previously. This yields,

\[ P_1\big|_{z=\eta} - \gamma\nabla^2_{x,y}\eta = -\rho_1\frac{\partial\phi_1}{\partial t}\bigg|_{z=0} + \rho_1\left(g - f\cos\omega_0t\right)\eta - \gamma\nabla^2_{x,y}\eta = P_2\big|_{z=\eta} = -\rho_2\frac{\partial\phi_2}{\partial t}\bigg|_{z=0} + \rho_2\left(g - f\cos\omega_0t\right)\eta. \tag{A.50} \]

For this part of the analysis we will utilize the results from the previous section, accounting for the small differences in the problem at hand. In the previous analysis for the viscous Rayleigh-Taylor instability we arrived at a solution for $\tilde w$ (eq. A.16). For the inviscid equations that we are considering here, we have only a second order equation instead of a fourth order one. Therefore, only the two terms that do not involve viscosity are obtained. Also, we are considering the same experimental setup,

so we will not allow the solution to “blow up” at infinity. This gives us, for the lower and upper fluids respectively,

\[ \tilde w_1 = A_1e^{\sqrt{k_x^2+k_y^2}\,z}e^{ik_xx+ik_yy} \quad\text{and}\quad \tilde w_2 = A_2e^{-\sqrt{k_x^2+k_y^2}\,z}e^{ik_xx+ik_yy}. \tag{A.51} \]

We will assume the equation for the interface to be of the form

\[ \eta = a(t)\,e^{ik_xx+ik_yy}. \tag{A.52} \]

With this information, and the fact that $\frac{\partial\eta}{\partial t} = \tilde w\big|_{z=0}$ after linearization, we can conclude that $\dot a(t)e^{ik_xx+ik_yy} = A_{1,2}\,e^{ik_xx+ik_yy}$ at the interface. Therefore, $A_{1,2} = \dot a$. Also, we know that $\frac{\partial\phi}{\partial z} = w$ and that $\tilde w_1 = \tilde w_2$ (from equation A.19). This allows us to solve for $\phi$ for the lower and upper fluids respectively as,

\[ \phi_1 = \frac{\dot a}{\sqrt{k_x^2+k_y^2}}\,e^{\sqrt{k_x^2+k_y^2}\,z}e^{ik_xx+ik_yy} \quad\text{and}\quad \phi_2 = -\frac{\dot a}{\sqrt{k_x^2+k_y^2}}\,e^{-\sqrt{k_x^2+k_y^2}\,z}e^{ik_xx+ik_yy}. \tag{A.53} \]
Plugging this into equation (A.50), and defining $f$ (the amplitude of the forcing acceleration) to be $\zeta_0\omega_0^2$, which would arise if the forcing displacement were $-\zeta_0\cos(\omega_0t)$, we obtain,

\[ \rho_1\frac{\ddot a}{\sqrt{k_x^2+k_y^2}}\,e^{ik_xx+ik_yy} + \left[\rho_1\left(g - \zeta_0\omega_0^2\cos(\omega_0t)\right) + \gamma(k_x^2+k_y^2)\right]a\,e^{ik_xx+ik_yy} = -\rho_2\frac{\ddot a}{\sqrt{k_x^2+k_y^2}}\,e^{ik_xx+ik_yy} + \rho_2\left(g - \zeta_0\omega_0^2\cos(\omega_0t)\right)a\,e^{ik_xx+ik_yy}. \]

which after simplification yields,

\[ \frac{\ddot a}{\sqrt{k_x^2+k_y^2}}(\rho_2+\rho_1) = \left[(\rho_2-\rho_1)\left(g - \zeta_0\omega_0^2\cos(\omega_0t)\right) - \gamma(k_x^2+k_y^2)\right]a. \]
After reorganizing this equation we obtain the Mathieu equation for parametrically excited waves in superposed fluids, where we see the Atwood number (which was

defined previously) present in the equation.

\[ \ddot a + \left[-\sqrt{k_x^2+k_y^2}\,\frac{\rho_2-\rho_1}{\rho_2+\rho_1}\left(g - \zeta_0\omega_0^2\cos(\omega_0t)\right) + \frac{\gamma\left(k_x^2+k_y^2\right)^{3/2}}{\rho_2+\rho_1}\right]a = 0. \tag{A.54} \]
Now, we will add a term for viscous damping. In this analysis we are still assuming irrotational flow, which is acceptable because, as long as the waves do not become non-linear, viscous effects will be confined to the interface; there will still be damping of the waves associated with the viscosity. We will add a damping term to the Mathieu equation in the form of $2b$, in accordance with [89], where $b$ is the friction term derived by Lamb [53], defined as $b \equiv 2\nu\left(k_x^2+k_y^2\right)$. Equation (A.54) thus becomes,

\[ \ddot a + 4\nu(k_x^2+k_y^2)\,\dot a + \left[-\sqrt{k_x^2+k_y^2}\,\frac{\rho_2-\rho_1}{\rho_2+\rho_1}\left(g - \zeta_0\omega_0^2\cos(\omega_0t)\right) + \frac{\gamma\left(k_x^2+k_y^2\right)^{3/2}}{\rho_2+\rho_1}\right]a = 0. \tag{A.55} \]
This equation can be solved by taking a Fourier series expansion of the solution and then solving for the coefficients. We will assume the form of the solution as $a(t) = \sum_{n=-\infty}^{\infty}C_ne^{i\omega_0nt}$, where we will allow $n$ to take half values such as $n = [-\infty, \ldots, -4/2, -3/2, -2/2, -1/2, 0, 1/2, 2/2, 3/2, 4/2, \ldots, \infty]$. This form comes from the fact that, by Floquet theory, we will have two sets of solutions, a periodic and an anti-periodic case, which correspond to solutions with the period of our forcing function and with double its period [86, 64]. The expansion we have used accounts for all of these frequencies. After insertion into equation (A.55) we arrive at,

\[ -\sum_{n=-\infty}^{\infty}\omega_0^2n^2C_ne^{i\omega_0nt} + i4\nu(k_x^2+k_y^2)\,\omega_0\sum_{n=-\infty}^{\infty}nC_ne^{i\omega_0nt} + \left[\omega^2 - \frac{(-A)\omega_0^2\zeta_0\sqrt{k_x^2+k_y^2}}{2}\left(e^{i\omega_0t} + e^{-i\omega_0t}\right)\right]\sum_{n=-\infty}^{\infty}C_ne^{i\omega_0nt} = 0, \tag{A.56} \]

where $\omega^2 = -\frac{\rho_2-\rho_1}{\rho_2+\rho_1}\,g\sqrt{k_x^2+k_y^2} + \frac{\gamma\left(k_x^2+k_y^2\right)^{3/2}}{\rho_2+\rho_1}$, and it is recognized and substituted

sometimes, for simplicity, that the Atwood number $A \equiv \frac{\rho_2-\rho_1}{\rho_2+\rho_1}$. Let us also realize that, unlike the viscous Rayleigh-Taylor derivation performed previously, here the Atwood number will be negative. Now we will form a recursion relationship allowing us to build a matrix representation of the problem at hand. Once in matrix form, we are able to solve the system using numerical linear algebra packages, where we can choose to include or neglect higher order harmonics depending on the accuracy we require. Let us now group the terms into their harmonics,

\[ \sum_{n=-\infty}^{\infty}\left(-\omega_0^2n^2C_n + i4\nu(k_x^2+k_y^2)\,\omega_0nC_n + \omega^2C_n\right)e^{i\omega_0nt} - \sum_{n=-\infty}^{\infty}\frac{(-A)\omega_0^2\zeta_0\sqrt{k_x^2+k_y^2}}{2}C_ne^{i\omega_0(n+1)t} - \sum_{n=-\infty}^{\infty}\frac{(-A)\omega_0^2\zeta_0\sqrt{k_x^2+k_y^2}}{2}C_ne^{i\omega_0(n-1)t} = 0. \tag{A.57} \]

We will now use the substitutions $n' = n+1 \Rightarrow n = n'-1$ and $n' = n-1 \Rightarrow n = n'+1$ in the last two terms of the above equation, respectively. This gives,

\[ \sum_{n=-\infty}^{\infty}\left[C_n\left(-\omega_0^2n^2 + i4\nu(k_x^2+k_y^2)\,\omega_0n + \omega^2\right) - \frac{(-A)\omega_0^2\zeta_0\sqrt{k_x^2+k_y^2}}{2}\left(C_{n-1} + C_{n+1}\right)\right]e^{i\omega_0nt} = 0, \tag{A.58} \]

where we have also made use of the infinite limits to recast $n'$ as $n$. Since the exponential term cannot be zero unless we want a trivial solution (and we do not), using orthogonality we can obtain a recursion relationship as such:

\[ \int e^{-i\omega_0mt}\sum_{n=-\infty}^{\infty}\left[C_n\left(-\omega_0^2n^2 + i4\nu(k_x^2+k_y^2)\,\omega_0n + \omega^2\right) - \frac{(-A)\omega_0^2\zeta_0\sqrt{k_x^2+k_y^2}}{2}\left(C_{n-1} + C_{n+1}\right)\right]e^{i\omega_0nt}\,dt = 0. \]

Using the fact that \(\int e^{-i\omega_0 m t}\,e^{i\omega_0 n t}\,dt = \begin{cases}0, & m \neq n\\ 1, & m = n\end{cases}\), we arrive at the recursion relationship
\[
C_n\left(-\omega_0^2 n^2 + i4\nu\left(k_x^2+k_y^2\right)\omega_0 n + \omega^2\right) + \frac{(-A)\,\omega_0^2\,\zeta_0\sqrt{k_x^2+k_y^2}}{2}\left(C_{n-1} + C_{n+1}\right) = 0. \qquad (A.59)
\]
This is recognized as an eigenvalue problem and can be rewritten to show this more obviously,

\[
-4C_n\left(n^2 - i\,\frac{4\nu\left(k_x^2+k_y^2\right)}{\omega_0}\,n\right) + 2(-A)\,\zeta_0\sqrt{k_x^2+k_y^2}\left(C_{n-1} + C_{n+1}\right) = -\frac{4\omega^2}{\omega_0^2}\,C_n, \qquad (A.60)
\]
where we have also multiplied through by \(4/\omega_0^2\) so that we can easily compare our results with reference [89], and to put the equations into non-dimensional form. This is an eigenvalue problem of the form \([M]\vec C = \lambda\vec C\). In this form the stability curves for the system can easily be found numerically, and the number of terms included in the solution expansion sets the accuracy that we achieve. A FORTRAN code was written using the LAPACK package to solve the system and to create plots of the stability regions for varying parameters. There are stable regions, which will not give us waves at the interface, and there are unstable regions, in which the small perturbations always present at the interface in an experimental configuration will grow and give us the waves we expect and desire. With viscosity present we do not necessarily have waves developing for every forcing condition; for smaller Atwood numbers it takes larger forcing amplitudes to see any waves at the interface at all. Also, we must realize that we have not worried about the exact value of wave amplitude that is achieved here; this analysis was performed using linear stability theory, and in that theory an unstable wave grows without bound. This growth, however, will be suppressed by non-linear effects. With this knowledge we can deduce that the amplitude of our waves will saturate when linear theory no longer holds, or \(O\left(\sqrt{k_x^2+k_y^2}\,a\right) = 1\). Therefore, we will see waves with an amplitude

comparable to the wavelength, and this is indeed what we see. Figure A.2 shows a plot of the calculated stability curves for different values of damping, where \(c\) represents the damping and is defined as \(c = \frac{8\nu k^2}{\omega_0}\), \(q\) is a representation of the tank displacement and is defined as \(q = 2(-A)\,\zeta_0 k\), and
\[
p = \frac{4}{\omega_0^2}\left[(-A)\,g\sqrt{k_x^2+k_y^2} + \frac{\gamma\left(k_x^2+k_y^2\right)^{3/2}}{\rho_2+\rho_1}\right]
\]
represents the frequency. Both \(p\) and \(q\) will be positive numbers here because the Atwood number, \(A\), is negative.

Figure A.2: Calculated stability curves for parametric forcing.
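The author's FORTRAN/LAPACK stability solver is not reproduced here, but the matrix construction that equation (A.60) implies can be sketched in NumPy. This is an illustrative stand-in: the function name, the truncation `n_max`, and the use of `numpy.linalg.eigvals` are our choices, not the dissertation's implementation.

```python
import numpy as np

def floquet_eigenvalues(q, c, n_max=8):
    """Eigenvalues p = 4*omega^2/omega_0^2 of the truncated Floquet matrix
    implied by equation (A.60).

    n runs over the half-integers -n_max, ..., n_max in steps of 1/2, so both
    the periodic (integer n) and anti-periodic (half-odd n) solution branches
    are represented; the C_{n-1}, C_{n+1} couplings therefore link entries
    two index slots apart.  q = 2*(-A)*zeta0*k is the forcing parameter and
    c = 8*nu*k^2/omega_0 the damping, as defined in the text.
    """
    ns = np.arange(-2 * n_max, 2 * n_max + 1) / 2.0
    N = len(ns)
    M = np.zeros((N, N), dtype=complex)
    # diagonal from (A.60): 4*(n^2 - i*(c/2)*n) = 4*n^2 - 2i*c*n
    M[np.arange(N), np.arange(N)] = 4.0 * ns**2 - 2j * c * ns
    # off-diagonal coupling to C_{n-1} and C_{n+1}: n differs by 1,
    # i.e. two slots away in the half-integer ordering
    idx = np.arange(N - 2)
    M[idx, idx + 2] = -q
    M[idx + 2, idx] = -q
    return np.linalg.eigvals(M)
```

Scanning the eigenvalues over a range of `q` for fixed `c` traces out stability boundaries of the kind shown in figure A.2; with no forcing (`q = 0`) and no damping (`c = 0`) the matrix is diagonal and the eigenvalues collapse to the unforced values \(4n^2\).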

In addition to viscosity preventing waves from forming, diffusion could also prevent the onset of waves. Diffusion can be presumed to decrease the local Atwood number which, in accordance with figure A.2, would shift the wavelength and increase the required forcing amplitude. In addition to this, it can be deduced that if the waves we are expecting to observe have a wavelength close

to or smaller than the diffusion thickness, they would be smeared out and would never appear. So far, this derivation was carried out for an infinite domain, which suffices to provide a general understanding of the waves produced at the interface. In our case, we are actually constrained by the finite size of our liquid-containing tank. Although this does not affect our calculations in the vertical dimension (since the wavelengths are within the deep water wave limit), the transverse dimensions of the tank restrict the possible wavenumbers to discrete values. Using the continuity equation in conjunction with the velocity potentials already defined, we will have the Laplace equation,

\[
\frac{\partial^2\phi}{\partial x^2} + \frac{\partial^2\phi}{\partial y^2} + \frac{\partial^2\phi}{\partial z^2} = 0. \qquad (A.61)
\]

With this equation we arrive at \(k_x = \frac{q\pi}{L_x}\) and \(k_y = \frac{l\pi}{L_y}\), where \(q\) and \(l\) are positive integers starting at 1. This result puts another constraint on our equations. Whereas before the observed wavelength would be the most unstable one for the specific excitation amplitude, frequency, densities, and viscosities, now it must also be an "allowed" wavelength.
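The selection of an allowed wavelength can be sketched as follows; this is our own illustrative helper (the function names and the choice of searching over the mode magnitudes are assumptions, not part of the dissertation's codes):

```python
import numpy as np

def allowed_wavenumbers(Lx, Ly, q_max=10, l_max=10):
    """Discrete wavenumber magnitudes sqrt(kx^2 + ky^2) permitted by a tank
    of transverse dimensions Lx, Ly, with kx = q*pi/Lx, ky = l*pi/Ly and
    q, l = 1..q_max, 1..l_max."""
    kx = np.arange(1, q_max + 1) * np.pi / Lx
    ky = np.arange(1, l_max + 1) * np.pi / Ly
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    return np.sqrt(KX**2 + KY**2)

def nearest_allowed(k_target, Lx, Ly, q_max=10, l_max=10):
    """Allowed |k| closest to the most-unstable infinite-domain wavenumber."""
    k = allowed_wavenumbers(Lx, Ly, q_max, l_max)
    return k.flat[np.abs(k - k_target).argmin()]
```

In practice one would take `k_target` from the infinite-domain stability calculation and then read off the nearest discrete tank mode.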

A.4 Uncertainty Analysis and Absorption Analysis

This uncertainty analysis was performed in accordance with the concepts of propagation of uncertainties as outlined by Taylor [98]. Let us first calculate the uncertainty associated with the average measured intensity at a particular horizontal location, the vertical intensity average,

\[
\bar I(x) = \frac{1}{H}\sum_y \left(I \pm 1\right) = \frac{1}{H}\sum_y I \pm \frac{1}{H}\sqrt{H \times 1^2} = \frac{1}{H}\sum_y I \pm \frac{1}{\sqrt H}, \qquad (A.62)
\]

where an intensity uncertainty of one pixel bin level has been assumed. Incorporat- ing this into the contrast equation yields,

\[
\frac{\left(I_0 \pm \frac{1}{\sqrt H}\right) - \left(I \pm \frac{1}{\sqrt H}\right)}{\left(I_0 \pm \frac{1}{\sqrt H}\right) + \left(I \pm \frac{1}{\sqrt H}\right)}.
\]
Then, by applying similar uncertainty principles, where the uncertainty of each mathematical operation is expanded, we obtain,

\[
\frac{(I_0 - I) \pm \frac{\sqrt{2(1)^2}}{\sqrt H}}{(I_0 + I) \pm \frac{\sqrt{2(1)^2}}{\sqrt H}}.
\]
Combining the uncertainties once more yields the intensity contrast,
\[
C_{meas.} = \frac{I_0 - I}{I_0 + I} \pm \frac{I_0 - I}{I_0 + I}\sqrt{\left(\frac{\sqrt 2}{\sqrt H\,(I_0 - I)}\right)^2 + \left(\frac{\sqrt 2}{\sqrt H\,(I_0 + I)}\right)^2} = \frac{I_0 - I}{I_0 + I} \pm \frac{I_0 - I}{I_0 + I}\,\frac{\sqrt 2}{\sqrt H}\sqrt{\frac{1}{(I_0 - I)^2} + \frac{1}{(I_0 + I)^2}}. \qquad (A.63)
\]
The uncertainty of the Beer's law agreement calculation was also determined.
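The propagation of the assumed \(\pm 1\)-bin intensity uncertainty into the measured contrast, eq. (A.63), can be written out directly; NumPy here stands in for the MATLAB used in appendix B, and the function name is illustrative:

```python
import numpy as np

def contrast_with_uncertainty(I0, I, H):
    """Measured contrast (I0 - I)/(I0 + I) and its propagated uncertainty
    per eq. (A.63), assuming a +/-1 bin intensity uncertainty so that each
    height-averaged intensity carries +/- 1/sqrt(H)."""
    C = (I0 - I) / (I0 + I)
    dC = (abs(C) * np.sqrt(2.0) / np.sqrt(H)
          * np.sqrt(1.0 / (I0 - I)**2 + 1.0 / (I0 + I)**2))
    return C, dC
```

For representative 8-bit intensities and a measurement region a few hundred pixels tall, the uncertainty is of order \(10^{-4}\), consistent with the claim below that it is barely noticeable.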

Taking the natural logarithm of the ratio of the intensities, we thus obtain,

\[
\ln\left(\frac{I(x) \pm \frac{1}{\sqrt H}}{I_0 \pm \frac{1}{\sqrt H}}\right) = -\mu\,\ell(x) = C_1 x,
\]
where once again an intensity uncertainty of 1 pixel level has been assumed, and the right hand side is set to be a linear function in \(x\), as predicted by Beer's law

(the actual value of \(C_1\) is not of importance here). Each individual mathematical operation within the argument of the natural logarithm function can be expanded in terms of uncertainties to yield,

\[
\ln\left[\frac{I(x)}{I_0(x)} \pm \frac{I(x)}{I_0(x)}\sqrt{\left(\frac{1}{\sqrt H\,I(x)}\right)^2 + \left(\frac{1}{\sqrt H\,I_0(x)}\right)^2}\,\right] = C_1 x.
\]

Now we wish to find the uncertainty associated with taking the natural logarithm. The uncertainty of a function is essentially given by the terms of its Taylor series expansion, beyond the first, that represent a deviation from the mean; the root mean square of all the terms except the first is taken. This results in,

\[
f(A, B) = f(\bar A, \bar B) \pm \sqrt{\left(\frac{\partial f}{\partial A}\,\delta A + \frac{1}{2}\frac{\partial^2 f}{\partial A^2}(\delta A)^2 + \ldots\right)^2 + \left(\frac{\partial f}{\partial B}\,\delta B + \frac{1}{2}\frac{\partial^2 f}{\partial B^2}(\delta B)^2 + \ldots\right)^2}.
\]
Applying this to our specific case we obtain,

\[
\ln\left(\frac{I(x)}{I_0(x)}\right) \pm \sqrt R, \quad\text{where } R = \left[\sqrt{u} - \frac{1}{2}\,u + \frac{1}{3}\,u^{3/2} + \ldots\right]^2, \quad u = \left(\frac{1}{\sqrt H\,I(x)}\right)^2 + \left(\frac{1}{\sqrt H\,I_0(x)}\right)^2, \qquad (A.64)
\]
where we will only use the first three terms of the Taylor series expansion. As can be observed in figure A.3, the uncertainty is very small when compared with the values of \(\ln\frac{I}{I_0}\) and can be neglected.


Figure A.3: The natural logarithm of \(I/I_0\) across the tank for various food coloring concentrations (a) and its corresponding maximum uncertainty (b).

As mentioned in section 3.2, to check the linearity of Beer's law we performed

a linear least squares regression fit to \(\ln\frac{I}{I_0}\) across the tank and determined the root mean squared error (RMSE). This value was determined for various dye concentrations. Also of interest here is the uncertainty in these measurements. We know the maximum uncertainty of the height-averaged \(\ln\frac{I}{I_0}\) at every horizontal pixel location. This uncertainty can be added to or subtracted from each horizontal value, creating \(2^{(\text{number of pixels across})}\) possible states for each dye concentration. A linear least squares fit would then be performed for each state, and the maximum calculated RMSE over all the states would be taken as the maximum uncertainty. The large number of calculations represents a computational difficulty; therefore the data was re-sampled so that there were only 10 data points across the tank (yielding only \(2^{10} = 1024\) least squares fits that must be performed for each food coloring concentration). To give a better understanding of the error, the RMSE was normalized by the average of the data over which the fit was performed. Both of these plots are shown in figure A.4.
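The brute-force procedure just described can be sketched compactly in Python; this is an illustrative stand-in for the MATLAB program in section B.3, and note one deliberate simplification: MATLAB's `fit` reports an RMSE normalized by the fit's degrees of freedom, whereas this sketch divides by the number of points.

```python
import itertools
import numpy as np

def max_rmse_over_uncertainty(x, y, dy):
    """Bound the least-squares RMSE by shifting each (resampled) data point
    to y + dy or y - dy, fitting a line to every one of the 2**len(x) sign
    combinations, and returning the largest RMSE found."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    dy = np.asarray(dy, float)
    worst = 0.0
    for signs in itertools.product((-1.0, 1.0), repeat=len(x)):
        yy = y + np.asarray(signs) * dy
        coeffs = np.polyfit(x, yy, 1)          # linear least squares fit
        resid = yy - np.polyval(coeffs, x)
        worst = max(worst, float(np.sqrt(np.mean(resid**2))))
    return worst
```

With 10 resampled points the loop runs 1024 fits per dye concentration, exactly the count quoted above.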


Figure A.4: Root mean squared error associated with a linear least squares fit of \(\ln\frac{I}{I_0}\) (a) and its normalized counterpart (b), where uncertainties are represented with dashed lines.

It is observed from the normalized RMSE in figure A.4 that the error decreases and becomes smoother up to approximately 150 drops of diluted food coloring and then

increases. This plot shows that there is indeed an optimum dye concentration at approximately 150 drops. Also, as observed from the dashed lines, the uncertainty is very small. Utilizing equation (A.64), the uncertainty associated with the Beer's law calculated concentration (eq. 3.12) can now be determined. For the calculated drop concentration we obtain,

\[
\rho = \rho_*\,\frac{\ln\left[\frac{I(x_L;\rho)}{I_0(x_L)}\right]}{\ln\left[\frac{I(x_L;\rho_*)}{I_0(x_L)}\right]} \pm \rho_*\,\frac{\ln\left[\frac{I(x_L;\rho)}{I_0(x_L)}\right]}{\ln\left[\frac{I(x_L;\rho_*)}{I_0(x_L)}\right]}\sqrt{\left(\frac{\rho_*/40}{\rho_*}\right)^2 + \left(\frac{\delta\ln\left[\frac{I(x_L;\rho)}{I_0(x_L)}\right]}{\ln\left[\frac{I(x_L;\rho)}{I_0(x_L)}\right]}\right)^2 + \left(\frac{\delta\ln\left[\frac{I(x_L;\rho_*)}{I_0(x_L)}\right]}{\ln\left[\frac{I(x_L;\rho_*)}{I_0(x_L)}\right]}\right)^2}, \qquad (A.65)
\]

where we have assumed that for every 40 drops added there is an uncertainty of 1 drop associated with human error. A plot of the actual concentration and the Beer's law calculated concentration, along with the error associated with it, is shown in figure A.5.
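Equation (A.65) amounts to scaling a reference concentration by the ratio of optical depths and combining the relative uncertainties in quadrature. A minimal sketch, with an illustrative function name of our own, is:

```python
import numpy as np

def drops_from_beers_law(log_ratio, log_ratio_ref, n_ref=40.0,
                         d_log=0.0, d_log_ref=0.0):
    """Drops inferred from Beer's law: optical depth ln(I/I0) scales linearly
    with concentration, so n = n_ref * ln(I/I0) / ln(I_ref/I0).  The relative
    uncertainties (the assumed 1-drop-per-40 pipetting error plus the two
    log-intensity uncertainties) are combined in quadrature per eq. (A.65)."""
    n = n_ref * log_ratio / log_ratio_ref
    dn = abs(n) * np.sqrt((1.0 / 40.0)**2
                          + (d_log / log_ratio)**2
                          + (d_log_ref / log_ratio_ref)**2)
    return n, dn
```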


Figure A.5: Actual dye concentration and the Beer’s law calculated number of drops (a) and the error associated with their deviation from each other (b).

It is observed in figure A.5 (a) that the calculated concentration deviates from the actual concentration (which is depicted as a dashed-dotted diagonal line);

this deviation becomes noticeable once 150 drops is surpassed. The error of this deviation is shown in the right plot of the aforementioned figure. Below 150 drops the measurement is noisy, owing to the fact that there are so few drops of dye and therefore a low signal to noise ratio; above this the curve becomes smoother, but the error increases. The uncertainty associated with the calculated contrast can also be obtained. However, to do this the uncertainty of the exponential function must first be calculated, in a way similar to that done for the natural logarithm function. The expanded, unsimplified uncertainty of the exponential function becomes,

\[
e^{-\left(\frac{\mu}{\rho}\right)\rho L} \pm e^{-\left(\frac{\mu}{\rho}\right)\rho L}\sqrt R,
\]
\[
\text{where } R = \left[e^{-\left(\frac{\mu}{\rho}\right)\rho L}\,u + \frac{1}{2}\,e^{-\left(\frac{\mu}{\rho}\right)\rho L}\,u^2 + \frac{1}{6}\,e^{-\left(\frac{\mu}{\rho}\right)\rho L}\,u^3 + \ldots\right]^2, \quad u = \left(\frac{\mu}{\rho}\right)\rho L\sqrt{\left(\frac{\delta\left[(\mu/\rho)L\right]}{(\mu/\rho)L}\right)^2 + \left(\frac{\rho/40}{\rho}\right)^2}. \qquad (A.66)
\]
With this we can calculate the uncertainty associated with the Beer's law calculated contrast, which becomes,
\[
C_{calc.} = \frac{1 - e^{-\left(\frac{\mu}{\rho}\right)\rho L}}{1 + e^{-\left(\frac{\mu}{\rho}\right)\rho L}} \pm \frac{1 - e^{-\left(\frac{\mu}{\rho}\right)\rho L}}{1 + e^{-\left(\frac{\mu}{\rho}\right)\rho L}}\sqrt{2\left(\frac{\delta\left[e^{-\left(\frac{\mu}{\rho}\right)\rho L}\right]}{e^{-\left(\frac{\mu}{\rho}\right)\rho L}}\right)^2}. \qquad (A.67)
\]

Also, the uncertainty of the error associated with the observed and calculated value is obtained as,

\[
\frac{\left|C_{meas.} - C_{calc.}\right|}{C_{meas.}} \pm \frac{\left|C_{meas.} - C_{calc.}\right|}{C_{meas.}}\sqrt{\left(\frac{\sqrt{\delta[C_{meas.}]^2 + \delta[C_{calc.}]^2}}{C_{meas.} - C_{calc.}}\right)^2 + \left(\frac{\delta[C_{meas.}]}{C_{meas.}}\right)^2}. \qquad (A.68)
\]



Figure A.6: Intensity calculated contrast and Beer's law calculated contrast (a) and the error associated with their deviation from each other (b).

A plot of the observed and Beer's law calculated contrast, along with the error associated with it, is shown in figure A.6. As can be observed, the contrast does deviate from the behavior predicted by Beer's law at approximately 150 drops. From the plot of the error, approximately 150 drops is clearly ideal: anything lower adds noise due to the small contrast, while the error remains less than 5%. This gives a contrast of roughly 0.2. It is observed that the uncertainty for the Beer's law calculated contrast becomes quite large as the number of drops increases. This is expected, since we assumed an uncertainty of 1 drop per 40 when experimentally adding drops; in actuality this is probably less. As can be observed, the uncertainty of the contrast itself is barely noticeable (due to the assumed intensity uncertainty of only 1 bin). Specifying the concentration as "150 drops per 500 mL of solution, delivered with a 1 mL VWR disposable pipette, of a dilute solution of 1 drop of food coloring in 10 mL of liquid" is not very descriptive, and it would be difficult to make large batches of different solutions without a more universal method of measurement. For this reason all measurements were converted to grams using the known density of water. It was determined that 150 pipette drops of the diluted solution per 500 mL of solution corresponds to,

\[
\rho_{F.C.} = \frac{150\ \text{Ppte Drops}_{\text{Dil Soln}}}{500\ \text{mL Soln}}\times\frac{0.0270\ \text{g}_{\text{wtr}}}{1\ \text{Ppte Drop}_{\text{wtr}}}\times\frac{1\ \text{Drop Food Coloring}}{10\ \text{mL}_{\text{wtr}}}\times\frac{1\ \text{mL}_{\text{wtr}}}{1\ \text{g}_{\text{wtr}}} = \frac{0.4050\ \text{Drops Food Coloring}}{500\ \text{mL Soln}}\times\frac{0.0445\ \text{g}_{F.C.}}{1\ \text{Drop}_{F.C.}} = \frac{0.0180\ \text{g}_{F.C.}}{500\ \text{mL Soln}}.
\]

A.5 Spherical Cap Implementation

With miscible liquid experiments, the tank is refilled after every experiment, and when it is filled care is taken to prevent bubbles of air from remaining in the tank when the lid is affixed. The lip on the tank lid accomplishes this well. However, when using immiscible liquids the tank is not emptied after each experiment. After an experiment, small trapped bubbles are often dislodged and a small amount of liquid will leak out. In this scenario air bubbles float to the top and need to be removed. They can be eliminated without removing the lid through a small valve in the top center of the lid, where more light fluid can be added. It was noticed that the bubbles did not necessarily migrate to the valve and therefore needed to be guided there. To accomplish this, a spherical cap was cut into the lid using a CNC machine. The canned command to mill a circle, G12, was used in increments in which the depth was increased and the radius decreased. A cross-sectional view of the problem, showing the parameters that must be determined, is depicted in figure A.7.

Figure A.7: A cross-sectional view of the spherical cap cut into the tank lid. Here \(\theta_1\) represents the start angle, \(\theta_2\) represents the end angle, \(R\) is the radius of the spherical cap, \(r\) is the radius of the specific circle that will be milled, \(D\) is the depth of the cap, and \(W\) is its width. In order to find representations for \(z\) and \(r\) (which are required by the CNC) we must first calculate \(\theta_1\) and \(R\) based on \(D\) and \(W\).

From the above diagram we realize that to create a complete spherical cap \(\theta_2 = 90^\circ\). However, \(\theta_1\) is unknown and must be derived. We obtain the two expressions:

\[
z + R - D = R\sin(\theta_1) \;\Rightarrow\; z = R\sin(\theta_1) - R + D, \qquad r = R\cos(\theta_1). \qquad (A.69)
\]

In order to solve for \(\theta_1\) we can substitute the expression for \(R\) from the second equation in eq. (A.69) into the first at the known location \(z = 0\), \(r = W/2\). The expression for \(R\) then comes from the second equation in eq. (A.69) directly. This yields,
\[
D = \frac{W/2}{\cos(\theta_1)} - \frac{W/2}{\cos(\theta_1)}\sin(\theta_1) \;\Rightarrow\; D = \frac{W/2}{\cos(\theta_1)} - \frac{W}{2}\tan(\theta_1), \qquad R = \frac{W/2}{\cos(\theta_1)}.
\]

From these equations \(\theta_1\) and \(R\) can be solved for the desired spherical cap dimensions. These equations were then implemented in a G-code program for use on a CNC machine; the code is available in section B.1.
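Eliminating \(R\) between the two relations gives a closed form for \(\theta_1\), which can serve as a cross-check of the values hard-coded in the G-code of section B.1. The following sketch is our own illustration (the function name and the assumed cap dimensions \(D = 0.125\) in, \(W = 4.25\) in are not stated in the text):

```python
import math

def cap_geometry(D, W):
    """Start angle theta1 (degrees) and sphere radius R for a spherical cap
    of depth D and width W.  From eq. (A.69), D = R*(1 - sin(theta1)) and
    W/2 = R*cos(theta1); eliminating R gives
        sin(theta1) = ((W/2)^2 - D^2) / ((W/2)^2 + D^2).
    """
    a = W / 2.0
    sin_t1 = (a**2 - D**2) / (a**2 + D**2)
    theta1 = math.asin(sin_t1)
    R = a / math.cos(theta1)
    return math.degrees(theta1), R
```

With the assumed dimensions this reproduces, to within rounding, the radius `#1=[18.13]` and start angle `#2=[83.27]` used in the G-code program.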

APPENDIX B

COMPUTER PROGRAMS

B.1 Tank Lid Spherical Cap G-Code Program

G69
G20 (Inch)
G90 (ABSOLUTE MODE)
G90.1 (ABSOLUTE IJK MODE)
G0 G49 G40 G17 G80 G50 G64
M03 S3500
F10
G0 X0 Y0
G1 Z0

#1=[18.13] (Radius)
#2=[83.27] (Start angle)
#3=[.125] (depth of bowl)
#4=[.25] (tool diameter)

#100=[#2] (initialize angle)
#101=[90] (end angle)
#102=[100] (number of steps)

M98 P1 L[#102]

G1 Z1
M5 (stop spindle)
M2 (end program)

O1
G90
G1 Z[0.0-[#1*sin[#100]-#1+#3]]
G90.1
G12 I[-#1*cos[#100]] F10
#100=[#100+[#101-#2]/#102]
M99

B.2 Matlab FFT Polarization Program

%%%% Written by: Michael Roberts
NumofFrames=50;
FrameRate=200;

ImageData=uint16(zeros(640,480,NumofFrames));
for frame = 1:NumofFrames
    eval(['ImageData(:,:,' num2str(frame) ')=imread(''D:\ExperimentData\SmallDropTower\MirageTest\CaNO3_08_03_11_14_20SpinningPolarizer\SpinningPolarizer\' num2str(frame) '.tiff'');'])
    disp(['frame# ', num2str(frame)])
end

n=length(ImageData(300,200,:));
freqbase=FrameRate*(mod(((0:n-1)+floor(n/2)),n)-floor(n/2))/n; % calculate the frequency corresponding to each FFT bin; this includes negative frequencies!
maskACOnly=(abs(freqbase)>10); % build a mask [... 0 0 0 1 1 ... 1 1 0 0 0 ...], with one entry per frequency
maskACOnly=maskACOnly'; % convert to column vector (the signal is also a column vector)
maskDCOnly=(abs(freqbase)<10); % complementary low-frequency mask
maskDCOnly=maskDCOnly'; % convert to column vector
DCImageData=uint16(zeros(640,480,NumofFrames));
ACImageData=uint16(zeros(640,480,NumofFrames));
fftImageDataSavedCol225=zeros(640,NumofFrames);
fftACImageDataSavedCol225=zeros(640,NumofFrames);
fftDCImageDataSavedCol225=zeros(640,NumofFrames);
for col = 1:480
    disp(['col# ', num2str(col)])
    for row = 1:640
        fftImageData=fft(ImageData(row,col,:)); % to frequency domain
        fftACImageData=fftImageData(:).*maskACOnly; % apply filter
        fftDCImageData=fftImageData(:).*maskDCOnly; % apply filter
        ACImageData(row,col,:)=(ifft(fftACImageData)); % back to time domain
        DCImageData(row,col,:)=(ifft(fftDCImageData)); % back to time domain
        if col == 225
            fftImageDataSavedCol225(row,:)=fftImageData(:);
            fftACImageDataSavedCol225(row,:)=fftACImageData;
            fftDCImageDataSavedCol225(row,:)=fftDCImageData;
        end
    end
end

for frame = 1:NumofFrames
    eval(['imwrite(ACImageData(:,:,' num2str(frame) '),''D:\ExperimentData\SmallDropTower\MirageTest\CaNO3_08_03_11_14_20SpinningPolarizer\AC\ACImageData' num2str(frame) '.tif'');'])
    eval(['imwrite(DCImageData(:,:,' num2str(frame) '),''D:\ExperimentData\SmallDropTower\MirageTest\CaNO3_08_03_11_14_20SpinningPolarizer\DC\DCImageData' num2str(frame) '.tif'');'])
    eval(['imwrite(ImageData(:,:,' num2str(frame) '),''D:\ExperimentData\SmallDropTower\MirageTest\CaNO3_08_03_11_14_20SpinningPolarizer\AC AND DC\ImageData' num2str(frame) '.tif'');'])
end

B.3 Matlab Triangular Tank Beer’s Law Uncertainty Program

clear all
DrpUncertainty=1/40;

for i = 0:40
    CommandToRun = strcat('fileToRead1=''D:\ExperimentalData\SmallDropTower\BlueFoodColoringWaterTriangTankBeersLaw\TextFilesOfImages\', sprintf('%0.4d',i), '.txt'';');
    eval(CommandToRun);
    PteDrpOfOneDrpPer10mL(:,:,i+1) = importdata(fileToRead1);
end

NumDrops=linspace(0,400,41);
PteDrpOfOneDrpPer10mLDivideNoDrops=PteDrpOfOneDrpPer10mL(:,:,:)./repmat(PteDrpOfOneDrpPer10mL(:,:,1),[1 1 41]);

logOfPteDrpOfOneDrpPer10mLDivideNoDrops=log(PteDrpOfOneDrpPer10mLDivideNoDrops);

MeasurementRegionTop=342;
MeasurementRegionBot=488;
MeasurementRegionLeft=132;
MeasurementRegionRight=413;
MeasurementRegionHeight=MeasurementRegionBot-MeasurementRegionTop+1;
MeasurementRegionWidth=MeasurementRegionRight-MeasurementRegionLeft+1;

%%%%%%% For Contrast
AvgI_0=squeeze(mean(PteDrpOfOneDrpPer10mL(MeasurementRegionTop:MeasurementRegionBot,MeasurementRegionLeft:MeasurementRegionRight,1),1));
AvgI=squeeze(mean(PteDrpOfOneDrpPer10mL(MeasurementRegionTop:MeasurementRegionBot,MeasurementRegionLeft:MeasurementRegionRight,:),1));

Contrast=(AvgI_0(10)-AvgI(10,:))./(AvgI_0(10)+AvgI(10,:));
ContrastUncertainty=Contrast*sqrt(2)*1.0/(sqrt(MeasurementRegionHeight)).*sqrt(1./(AvgI_0(10)-AvgI(10,:)).^2 + 1./(AvgI_0(10)+AvgI(10,:)).^2);
%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%% Beer's Law Stuff

%%%%%%%%%%% For Least Squares Log Error
HeightAvgI=mean(PteDrpOfOneDrpPer10mL(MeasurementRegionTop:MeasurementRegionBot,MeasurementRegionLeft:MeasurementRegionRight,:),1);
HeightAvgLogIOverI0=mean(logOfPteDrpOfOneDrpPer10mLDivideNoDrops(MeasurementRegionTop:MeasurementRegionBot,MeasurementRegionLeft:MeasurementRegionRight,:),1);
PixelPositionAcross=linspace(MeasurementRegionLeft,MeasurementRegionRight,MeasurementRegionWidth);
PixelPositionAcrossEveryN=PixelPositionAcross(1:30:end);

AvgI_0RecastForAvgI=repmat(AvgI_0(:),[1 41]);
LogUncertainty=sqrt( ( sqrt( (1.0./(sqrt(MeasurementRegionHeight)*AvgI(:,:))).^2 + (1.0./(sqrt(MeasurementRegionHeight)*AvgI_0RecastForAvgI(:,:))).^2 ) ...
    -1.0/2.0*( (1.0./(sqrt(MeasurementRegionHeight)*AvgI(:,:))).^2 + (1.0./(sqrt(MeasurementRegionHeight)*AvgI_0RecastForAvgI(:,:))).^2 ) ...
    +1.0/3.0*( (1.0./(sqrt(MeasurementRegionHeight)*AvgI(:,:))).^2 + (1.0./(sqrt(MeasurementRegionHeight)*AvgI_0RecastForAvgI(:,:))).^2 ).^(3.0/2.0) ...
    ).^2 );
LogUncertaintyMax=max(LogUncertainty,[],1);

DropsExtincCoeffTimesL=-HeightAvgLogIOverI0(1,10,5)/NumDrops(5);
DropsExtincCoeffTimesLUncertain=DropsExtincCoeffTimesL*sqrt( (LogUncertainty(10,5)/HeightAvgLogIOverI0(1,10,5))^2 + ((NumDrops(5)*DrpUncertainty)/NumDrops(5))^2 );
BeersLawCalcContrast=(1 - exp(-DropsExtincCoeffTimesL*NumDrops(1,:)))./(1 + exp(-DropsExtincCoeffTimesL*NumDrops(1,:)));
ExponentUncertainty=DropsExtincCoeffTimesL*NumDrops(1,:).*sqrt( (DropsExtincCoeffTimesLUncertain/DropsExtincCoeffTimesL)^2 + ((NumDrops(1,:)*DrpUncertainty)./NumDrops(1,:)).^2 );
ExponentialUncertainty=exp(-DropsExtincCoeffTimesL*NumDrops(1,:)).*sqrt( ( exp(-DropsExtincCoeffTimesL*NumDrops(1,:)).*ExponentUncertainty + 1/2*exp(-DropsExtincCoeffTimesL*NumDrops(1,:)).*ExponentUncertainty.^2 ...
    +1/6*exp(-DropsExtincCoeffTimesL*NumDrops(1,:)).*ExponentUncertainty.^3 ).^2 );
BeersLawCalcContrastUncertainty=BeersLawCalcContrast.*sqrt( 2*(ExponentialUncertainty./exp(-DropsExtincCoeffTimesL*NumDrops(1,:))).^2 );
ContrastErrorUncertainty=abs(Contrast-BeersLawCalcContrast)./Contrast.*sqrt( ( sqrt(ContrastUncertainty.^2 + BeersLawCalcContrastUncertainty.^2) ./ (Contrast-BeersLawCalcContrast) ).^2 + (ContrastUncertainty./Contrast).^2 );

CalcDropsFromBeersLaw=NumDrops(5)*HeightAvgLogIOverI0(1,10,:)/HeightAvgLogIOverI0(1,10,5);
CalcDropsFromBeersLawUncertainty=squeeze(CalcDropsFromBeersLaw)'.*sqrt( (NumDrops(1,5)*DrpUncertainty/NumDrops(1,5))^2 + (LogUncertainty(10,:)./squeeze(HeightAvgLogIOverI0(1,10,:))').^2 ...
    +(LogUncertainty(10,5)/HeightAvgLogIOverI0(1,10,5)).^2 );

NumDropsErrorUncertainty=abs(NumDrops(1,:)-squeeze(CalcDropsFromBeersLaw)')./NumDrops(1,:).*sqrt( ( sqrt((NumDrops(1,:)*DrpUncertainty).^2 + CalcDropsFromBeersLawUncertainty.^2) ./ ...
    abs(NumDrops(1,:)-squeeze(CalcDropsFromBeersLaw)') ).^2 + ((NumDrops(1,:)*DrpUncertainty)./NumDrops(1,:)).^2 );

for i = 1:length(HeightAvgLogIOverI0(1,1,:))
    [fitobject,gof]=fit( (linspace(MeasurementRegionLeft,MeasurementRegionRight,MeasurementRegionWidth))',(HeightAvgLogIOverI0(1,:,i))','poly1');
    curvefitcoefficients = coeffvalues(fitobject);

    sse(i)=gof.sse;
    rsquare(i)=gof.rsquare;
    rmse(i)=gof.rmse;
    Normalized_rmse(i)=rmse(i)/abs(mean(HeightAvgLogIOverI0(1,:,i)));
    ValuesFromCurveFit(:,i)=curvefitcoefficients(1)*(linspace(MeasurementRegionLeft,MeasurementRegionRight,MeasurementRegionWidth))'+curvefitcoefficients(2);
    AbsError(:,i)=abs( (ValuesFromCurveFit(:,i)-(HeightAvgLogIOverI0(1,:,i))') ./ (HeightAvgLogIOverI0(1,:,i))' );
    MaximumError(i)=max(AbsError(:,i));

    %%%%%%%%% Build matrix of coefficients for Log of Data at uncertainty
    %%%%%%%%% extremes and sample b/c too many points
    disp(strcat('Build Coeff Matrix of Log Data With Uncertainty for Frame #',num2str(i)));
    CoeffsIncludUncertain=[HeightAvgLogIOverI0(1,:,i)'+LogUncertaintyMax(1,i), HeightAvgLogIOverI0(1,:,i)'-LogUncertaintyMax(1,i)];
    CoeffsIncludUncertainEveryN=CoeffsIncludUncertain(1:30:end,:);

    HeightAvgLogIOverI0EveryN=HeightAvgLogIOverI0(1,1:30:end,i);
    %%%%%%%%%%%%%%%
    %%%%%%%%%%%%%% Form all combinations of Log of Data with uncertainty
    disp(strcat('Build all combos of Log Data for Frame #',num2str(i)));
    StringForAllCombExpr='HeightAvgLogIOverI0WithUncertaintiesEveryN=allcomb(CoeffsIncludUncertainEveryN(1,:)';
    for count=2:length(CoeffsIncludUncertainEveryN(:,1))
        StringForAllCombExpr=strcat(StringForAllCombExpr,',CoeffsIncludUncertainEveryN(',num2str(count),',:)');
    end
    StringForAllCombExpr=strcat(StringForAllCombExpr,');');
    eval(StringForAllCombExpr);
    %%%%%%%%%%%%%%%%%%
    %%%%%%%%% calculate least squares for combinations of sampled Data at
    %%%%%%%%% uncertainty extremes
    disp(strcat('Perform all Least Squares on Uncertainties for Frame #',num2str(i)));
    Max_sseFromUncertainty(i)=0.0;
    Max_rmseFromUncertainty(i)=0.0;
    Min_rmseFromUncertainty(i)=10000.0;
    for count=1:length(HeightAvgLogIOverI0WithUncertaintiesEveryN(:,1))
        [fitobject,gof]=fit( (PixelPositionAcrossEveryN)',(HeightAvgLogIOverI0WithUncertaintiesEveryN(count,:))','poly1');
        if gof.sse > Max_sseFromUncertainty(i)
            Max_sseFromUncertainty(i)=gof.sse;
        end
        %disp(gof.rmse);
        if gof.rmse > Max_rmseFromUncertainty(i)
            Max_rmseFromUncertainty(i)=gof.rmse;
            Normalized_Max_rmseFromUncertainty(i)=Max_rmseFromUncertainty(i)/abs(mean(HeightAvgLogIOverI0WithUncertaintiesEveryN(count,:)));
        end
        if gof.rmse < Min_rmseFromUncertainty(i)
            Min_rmseFromUncertainty(i)=gof.rmse;
            Normalized_Min_rmseFromUncertainty(i)=Min_rmseFromUncertainty(i)/abs(mean(HeightAvgLogIOverI0WithUncertaintiesEveryN(count,:)));
        end
    end
    %%%%%%%%%%%%%%%%
    %%%%%%%%%%%%% Calculate least squares for resampled raw log data for
    %%%%%%%%%%%%% comparison
    [fitobject,gof]=fit( (PixelPositionAcrossEveryN)',(HeightAvgLogIOverI0EveryN)','poly1');
    sseFromDataEveryN(i)=gof.sse;
    rmseFromDataEveryN(i)=gof.rmse;
    Normalized_rmseFromDataEveryN(i)=rmseFromDataEveryN(i)/abs(mean(HeightAvgLogIOverI0EveryN));
end

%figure(8)
%plot(NumDrops,rmse)
%xlabel('Number of Drops'); ylabel('RMSE'); title('Linear Least Squares of Log Of I/I_0');

%figure(1)
%plot(NumDrops,Normalized_rmse)
%xlabel('Number of Drops'); ylabel('Normalized RMSE'); title('Linear Least Squares of Log Of I/I_0');

%figure(3)
%plot(NumDrops,Normalized_Max_rmseFromUncertainty); %%%squeeze removes extra dimension with just 1 in length
%xlabel('Number of Drops'); ylabel('Max Normalized RMSE'); title('Linear Least Squares of Resampled Log Of I/I_0 + Uncertainty in Many Combos');

figure(13)
plot(NumDrops,rmseFromDataEveryN,NumDrops,Max_rmseFromUncertainty,'r:', ...
    NumDrops,Min_rmseFromUncertainty,'r:');
xlabel('Number of Drops'); ylabel('RMSE'); title('Linear Least Squares of Resampled Log Of I/I_0');

figure(4)
plot(NumDrops,Normalized_rmseFromDataEveryN,NumDrops,Normalized_Max_rmseFromUncertainty,'r:', ...
    NumDrops,Normalized_Min_rmseFromUncertainty,'r:');
xlabel('Number of Drops'); ylabel('Normalized RMSE'); title('Linear Least Squares of Resampled Log Of I/I_0');

figure(5)
plot(NumDrops,squeeze(Contrast),'r',NumDrops,squeeze(BeersLawCalcContrast),'b',NumDrops,squeeze(Contrast)+ContrastUncertainty,'r:', ...
    NumDrops,squeeze(Contrast)-ContrastUncertainty,'r:',NumDrops,squeeze(BeersLawCalcContrast)-BeersLawCalcContrastUncertainty,'b:', ...
    NumDrops,squeeze(BeersLawCalcContrast)+BeersLawCalcContrastUncertainty,'b:');
legend('Contrast','BeersLawCalcContrast');
xlabel('Number of Drops'); ylabel('$\frac{I_0-I}{I_0+I}$','Interpreter','Latex','fontsize',14); title('Contrast');

%figure(6)
%plot(NumDrops,squeeze(ContrastUncertainty));
%xlabel('Number of Drops'); ylabel('Contrast Uncertainty'); title('Contrast Uncertainty');
%figure(10)
%plot(NumDrops,BeersLawCalcContrastUncertainty);
%xlabel('Number of Drops'); ylabel('Contrast Uncertainty'); title('Beers Law Calc Contrast Uncertainty');

figure(9)
plot(NumDrops,abs((squeeze(Contrast)-squeeze(BeersLawCalcContrast))./squeeze(Contrast)),NumDrops, ...
    abs((squeeze(Contrast)-squeeze(BeersLawCalcContrast))./squeeze(Contrast))-squeeze(ContrastErrorUncertainty),'r:', ...
    NumDrops,abs((squeeze(Contrast)-squeeze(BeersLawCalcContrast))./squeeze(Contrast))+squeeze(ContrastErrorUncertainty),'r:');
xlabel('Number of Drops'); ylabel('Contrast Error'); title('Contrast Error');

figure(10)
plot(NumDrops,abs((NumDrops-squeeze(CalcDropsFromBeersLaw)')./NumDrops),NumDrops, ...
    abs((NumDrops-squeeze(CalcDropsFromBeersLaw)')./NumDrops)-squeeze(NumDropsErrorUncertainty),'r:', ...
    NumDrops,abs((NumDrops-squeeze(CalcDropsFromBeersLaw)')./NumDrops)+squeeze(NumDropsErrorUncertainty),'r:');
xlabel('Number of Drops'); ylabel('NumDrops Error'); title('NumDrops Error');

%figure(11)
%plot(NumDrops,squeeze(ContrastErrorUncertainty));
%xlabel('Number of Drops'); ylabel('Uncertainty'); title('Contrast Error Uncertainty');

figure(12)
plot(NumDrops,squeeze(CalcDropsFromBeersLaw),NumDrops,squeeze(CalcDropsFromBeersLaw)-CalcDropsFromBeersLawUncertainty','r:' ...
    ,NumDrops,squeeze(CalcDropsFromBeersLaw)+CalcDropsFromBeersLawUncertainty','r:', ...
    NumDrops,NumDrops,'k-.');
%errorbar(NumDrops,squeeze(CalcDropsFromBeersLaw),CalcDropsFromBeersLawUncertainty,'r');
xlabel('Actual Number of Drops'); ylabel('Calculated Drops'); title('Beers Law Calculated Drops');

figure(2)
plot(NumDrops,squeeze(LogUncertaintyMax)); %%%squeeze removes extra dimension with just 1 in length
xlabel('Number of Drops'); ylabel('Uncertainty Max'); title('Max Uncertainty for Log Of I/I_0');

figure(7)
clf(7);
hold on;
for i = 1:length(HeightAvgLogIOverI0(1,1,:))
%for i = 1:5
    plot(PixelPositionAcross,squeeze(HeightAvgLogIOverI0(1,:,i)),PixelPositionAcross,squeeze(HeightAvgLogIOverI0(1,:,i))'-LogUncertainty(:,i),'r:', ...
        PixelPositionAcross,squeeze(HeightAvgLogIOverI0(1,:,i))'+LogUncertainty(:,i),'r:');
end
xlabel('Pixel Position'); ylabel('Log I/I_0'); title('Log I/I_0');
hold off;

B.4 Matlab Gradient Refractive Index Model Program

%%%%%%%%%%%%%%%%Wr i t t e n by : M i c h a e l R o b e r t s %%%%%%%%%%Main .m%%%%%%%%%%% clear all

%InitPos=0; InitAng=.25; DifWdth=1; MixingWdth=2; RollupHeight =3; HvyIOR=1.55; LghtIOR=1.38; MidIOR=(1.55+1.38) /2; TnkWdth=75; A1=MixingWdth/2/1; B1=log ( RollupHeight /A1)/TnkWdth; A2= RollupHeight /1; B2=−log( MixingWdth/2/A2) /TnkWdth; I0=1; − %%%%%%%%%%%%%%%%%%p l o t i n i t i t a l s t u f f hold on; figure (8) ; plot (1 ,MixingWdth/2 , ’ r ’) hold on; ∗ plot (1, MixingWdth/2 , ’ r ’) figure (8)− ; ∗ plot (1,RollupHeight , ’ r ’) hold on; ∗ plot (1, RollupHeight , ’ r ’) hold on;− ∗ figure (10) ; plot (1 ,MixingWdth/2 , ’ r ’) hold on; ∗ plot (1, MixingWdth/2 , ’ r ’) figure (10)− ; ∗ plot (1,RollupHeight , ’ r ’) hold on; ∗ plot (1, RollupHeight , ’ r ’) − ∗ fo r (x=0:1:75) n=@(y ) (MidIOR LghtIOR)/2 ( e r f (2/( DifWdth/2) ( y A1 exp (B1 x) ) ) )+(MidIOR+LghtIOR)/2 (LghtIOR +.01 LghtIOR)− ; ∗ − ∗ − ∗ ∗ − y=fzero (n,1)∗ ; hold on; 212

    figure(5);
    plot(x, y, 'r*')
    hold on;
    figure(9);
    plot(x, y, 'r*')
end
for x=0:1:75
    n=@(y) (HvyIOR-MidIOR)/2*(erf(2/(DifWdth/2)*(y-A2*exp(B2*x))))+(HvyIOR+MidIOR)/2-(HvyIOR-.01*HvyIOR);
    y=fzero(n,-1);
    hold on;
    figure(5);
    plot(x, y, 'r*')
    hold on;
    figure(9);
    plot(x, y, 'r*')
end
%%%%%%%%%%%%%%%%%% plot initial stuff

for InitPos=-5:.1:5
    %I0=1*(heaviside(InitPos+1)-heaviside(InitPos-1));
    %%for InitAng=0:.1:0
    %hold on;
    %figure(1);
    %plot(I0,InitPos,'*');
    [x,y]=ode45('F',[0,TnkWdth],[InitPos,InitAng,I0]);
    hold on;
    figure(5);
    plot(x,y(:,1))
    hold on;
    figure(9);
    plot(x,y(:,1))
    %hold on;
    %figure(6);
    %plot(x,y(:,3))
    %hold on;
    %figure(7);
    %plot3(x,(y(:,1)),(y(:,3)))
    hold on;
    figure(8);
    plot((y(length(y),3)),(y(length(y),1)),'+')
    hold on;
    figure(10);
    if (x(length(x))>=TnkWdth)
        [xBack,yBack]=ode45('FBack',[TnkWdth,2*TnkWdth],[y(length(y),1),-(y(length(y),2)),(y(length(y),3))]);
        hold on;
        figure(9);
        plot(xBack,(yBack(:,1)))
        %figure(6);
        %plot(xBack,yBack(:,3))
        %figure(7);
        %plot3(xBack,(yBack(:,1)),(yBack(:,3)))
        hold on;
        figure(10);
        plot((yBack(length(yBack),3)),(yBack(length(yBack),1)),'+')
    end
    %%end InitPos
end
hold off;
%%%%%%%%%%%% Main.m %%%%%%%%

%%%%%%%%%%% F.m %%%%%%%%%%%%
function yp=F(x,y,DifWdth,HvyIOR,LghtIOR,TnkWdth)
DifWdth=1;
MixingWdth=2;
RollupHeight=3;
HvyIOR=1.55;
LghtIOR=1.38;
MidIOR=(1.55+1.38)/2;
TnkWdth=75;
A1=MixingWdth/2/1;
B1=log(RollupHeight/A1)/TnkWdth;
A2=RollupHeight/1;
B2=-log(MixingWdth/2/A2)/TnkWdth;

yp=zeros(3,1); % since output must be a column vector
yp(1)=y(2);

%%%%%%%%%%%%% 2 erf mixing region with wall rollup
if y(1)>=0
    n=(MidIOR-LghtIOR)/2*(erf(2/(DifWdth/2)*(y(1)-A1*exp(B1*x))))+(MidIOR+LghtIOR)/2;
    dn_dy=(MidIOR-LghtIOR)/2*(2/sqrt(pi))*(2/(DifWdth/2))*exp(-(2/(DifWdth/2))^2*(y(1)-A1*exp(B1*x))^2);
    yp(2)=1/n*dn_dy*(1+(y(2))^2);

    %yp(3)=-0.02*(1-erf(2/(DifWdth/2)*(y(1)-A1*exp(B1*x)))^2)*sqrt(y(2)^2+1)*y(3);
else
    n=(HvyIOR-MidIOR)/2*(erf(2/(DifWdth/2)*(y(1)-A2*exp(B2*x))))+(HvyIOR+MidIOR)/2;
    dn_dy=(HvyIOR-MidIOR)/2*(2/sqrt(pi))*(2/(DifWdth/2))*exp(-(2/(DifWdth/2))^2*(y(1)-A2*exp(B2*x))^2);
    yp(2)=1/n*dn_dy*(1+(y(2))^2);
    %yp(3)=-0.02*(1-erf(2/(DifWdth/2)*(y(1)-A2*exp(B2*x)))^2)*sqrt(y(2)^2+1)*y(3);
end
%%%%%%%%%%%%% 2 erf mixing region with wall rollup
yp(3)=-0.02*(1-erf(2/(MixingWdth/2)*y(1))^2)*sqrt(y(2)^2+1)*y(3);
%%%%%%%%%%%%% F.m %%%%%%%%%%%

%%%%%%%%%%%% FBack.m %%%%%%%%%
function yp=F(x,y,DifWdth,HvyIOR,LghtIOR)

yp=zeros(2,1); % since output must be a column vector
yp(1)=y(2);
n=1;
dn_dy=0;
yp(2)=1/n*dn_dy*(1+(y(2))^2);
yp(3)=0;
%yp(2)=1*(1+(y(2))^2);
%%%%%%%%%%%%% FBack.m %%%%%%%%%%
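The two files above integrate the ray equation y'' = (1/n)(dn/dy)(1 + y'^2) across the tank width. A self-contained Python sketch of the same ODE for a simple linear index profile (an assumption for illustration; the program above uses the erf-based two-layer profiles):

```python
def ray_rhs(x, state, n0, dndy):
    """Ray equation y'' = (1/n) * dn/dy * (1 + y'^2) for a linear index
    profile n(y) = n0 + dndy*y. State: (height y, slope y')."""
    y, yp = state
    n = n0 + dndy * y
    return (yp, (dndy / n) * (1.0 + yp * yp))

def rk4(f, x0, state, h, steps, *args):
    """Plain fixed-step RK4 integrator for a 2-component state."""
    x, s = x0, list(state)
    for _ in range(steps):
        k1 = f(x, s, *args)
        k2 = f(x + h / 2, [s[i] + h / 2 * k1[i] for i in range(2)], *args)
        k3 = f(x + h / 2, [s[i] + h / 2 * k2[i] for i in range(2)], *args)
        k4 = f(x + h, [s[i] + h * k3[i] for i in range(2)], *args)
        s = [s[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]
        x += h
    return s
```

A ray launched horizontally bends toward increasing refractive index; with a zero gradient it travels straight, which is the behavior `FBack.m` encodes with n=1, dn_dy=0.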

B.5 Matlab Interfacial Tension Calculation Program

%%%%%%%%%%%%%%%%% Program Written By: Michael Roberts %%%%%%%%%%%%
clear all;

accelz=11.02;
%accelz=122.625;
%accelz=11.77;
%accelz=686.7;
rho_ligh=918.; %%%5Cst Oil
rho_heav=2613.;

mu_ligh=.0046; %%%5Cst Oil
mu_heav=.0063; %%%%%Dil LST 2.61 S.G.
%mu_ligh=0.00001;
%mu_heav=0.00001;
%Tens=0;
%Tens=0.030;
%Tens=0.0720;
alpha1=rho_ligh/(rho_ligh+rho_heav);
alpha2=rho_heav/(rho_ligh+rho_heav);

n0 = 100+100i; % Make a starting guess at the solution

Tens=0.017;

k_To_Match=2*3.14159/(76*10^-3/298*45);
k_maxTens=10000;

while ( (k_maxTens>(k_To_Match+.001*k_To_Match) || k_maxTens<(k_To_Match-.001*k_To_Match)) ...
        && Tens<1 )
    Tens=Tens+0.0001;
    k_max=10000; %initialize k_max for each loop finding k_max at different Tensions
    n_max=-10000; %initialize n_max for each loop finding k_max at different Tensions
    k_val=500;
    k_val_prev=k_val-1;
    deriv=100;
    deriv_prev=90;
    n_prev=0;
    n=100;
    fid = fopen('SolveTensFastestGrowing.txt','w');

    %while (k_val<8000 && (real(sign(deriv_prev*deriv))==1))
    while (k_val<4000 && n > 0)
        %Tens, k_val, n
        f=@(n)myfun(n,k_val,alpha1,alpha2,accelz,rho_ligh,rho_heav,mu_ligh,mu_heav,Tens);
        %options=optimset('Display','iter','MaxFunEvals',1000); % Option to display output
        options=optimset('Display','off','MaxFunEvals',1000);
        [n,fval,exitflag,output] = fsolve(f,n0,options); % Call solver
        %disp(['k_val=',num2str(k_val),' n=',num2str(n)])

        fprintf(fid,'%6.2f %12.8e %12.8e %s\n', k_val, real(n), imag(n), 'i');
        %figure(1);
        %hold on;
        %plot(k_val,real(n));

        if (real(n)>real(n_max))
            n_max=n;
            k_max=k_val;
        end

        deriv_prev=deriv;
        deriv=(n-n_prev)/(k_val-k_val_prev);
        k_val_prev=k_val;
        n_prev=n;
        k_val=k_val+100*erf(abs(deriv))+.1; % adaptively adjust spacing between k values depending on slope; used the error function to impose a maximum
    end

    disp(['Tens=',num2str(Tens),' fastest k_max=',num2str(k_max),' fastest n=',num2str(n_max)])
    fprintf(fid,'%s\n','');
    fprintf(fid,'%6.2f %12.8e %12.8e %s\n', k_max, real(n_max), imag(n_max), 'i');
    k_maxTens=k_max;

    fclose(fid);
end

disp(['Tension=',num2str(Tens),' fastest k_maxTens=',num2str(k_max),' fastest n=',num2str(n_max)])

B.5.1 myfun.m

function CHANDRAS_EQUA = myfun(n,k_val,alpha1,alpha2,accelz,rho1,rho2,mu1,mu2,Tens)
CHANDRAS_EQUA = (-accelz*k_val/(n^2)*((alpha1-alpha2)+k_val^2*Tens/(accelz*(rho1+rho2)))+1.0)* ...
        (alpha2*(sqrt(k_val^2+n/(mu1/rho1)))+alpha1*(sqrt(k_val^2+n/(mu2/rho2)))-k_val) ...
    - 4.0*k_val*alpha1*alpha2 ...
    + 4.0*k_val^2/n*(alpha1*(mu1/rho1)-alpha2*(mu2/rho2))* ...
        (alpha2*(sqrt(k_val^2+n/(mu1/rho1)))-alpha1*(sqrt(k_val^2+n/(mu2/rho2)))+k_val*(alpha1-alpha2)) ...
    + 4.0*k_val^3/n^2*(alpha1*(mu1/rho1)-alpha2*(mu2/rho2))^2* ...
        ((sqrt(k_val^2+n/(mu1/rho1)))-k_val)*((sqrt(k_val^2+n/(mu2/rho2)))-k_val);
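The outer loop of the program adjusts the tension until the fastest-growing wavenumber of this viscous dispersion relation matches a measured value. In the inviscid limit the growth rate reduces to n^2 = A g k - T k^3/(rho1+rho2), which has a closed-form fastest-growing wavenumber. A Python sketch of that limit (illustrative only; it is not the viscous Chandrasekhar calculation the program performs):

```python
import math

def inviscid_growth_rate_sq(k, atwood, g, tension, rho_sum):
    """n^2 = A g k - T k^3 / (rho1 + rho2): inviscid Rayleigh-Taylor
    dispersion relation with surface tension T."""
    return atwood * g * k - tension * k ** 3 / rho_sum

def fastest_wavenumber(atwood, g, tension, rho_sum):
    """Setting d(n^2)/dk = 0 gives k_max = sqrt(A g (rho1+rho2) / (3 T))."""
    return math.sqrt(atwood * g * rho_sum / (3.0 * tension))
```

The viscous program finds the analogous k_max numerically because viscosity shifts it away from this closed form.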

B.6 Duff and Harlow Solution Fortran Program

!!!!!!!!!!!!!Written by Michael Roberts!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!MAIN
program DuffHarlow
IMPLICIT NONE
DOUBLE PRECISION, PARAMETER :: pi = 3.141592653589793238462643383279502884197D0
DOUBLE PRECISION :: step,usrstp
DOUBLE PRECISION :: err(1000000),shterr,shterrprev
DOUBLE PRECISION :: final,x1,xnmain
DOUBLE COMPLEX :: alpha,bc,psigus(100),shot(100),exact(1000000),psigus1,psigus2,psigus3,shot1,shot2,shot3,psi
DOUBLE COMPLEX, allocatable :: y(:,:),ynand1(:),yn(:),ynmain(:),eqn(:,:)
DOUBLE PRECISION :: x(1000000),xn,Atw,epsDif,a,k,wavelength
INTEGER n,nplus1,u,v,length,num,flag,j,i,b,knd,iteration,eqn_order
CHARACTER(LEN=32) :: iteration_method,numerical_method,kind_of_problem,bvp_iteration_type
DOUBLE COMPLEX :: bc_iteration
LOGICAL :: is_eigen
eqn_order=2
allocate(y(eqn_order,1000000),ynand1(eqn_order),yn(eqn_order),ynmain(eqn_order),eqn(eqn_order,11))
open(97,FILE='input.dat',STATUS='OLD')
read(97,*) usrstp
read(97,*) final
read(97,*) x(1)
read(97,*) y(2,1) !y
read(97,*) y(1,1) !y prime
read(97,*) bc
read(97,*) knd
close(97)
usrstp=.00002
final=.0040
x(1)=-.0040
iteration_method="SECANT" !!SECANT or FALSE_POSITION or BISECTION iteration method for BVP

numerical_method="RK4_MERSON" !!RK4 or RK4_MERSON or FORWARD_EULER or BACKWARD_EULER or HEUN or TRAPEZOID or RICHARDSON
kind_of_problem="BVP" !!IVP or BVP
bvp_iteration_type="PARAMETER" !!!!PARAMETER or PSI_BC_ITERATED
alpha=(-.1,.5)
exact(1)=y(2,1)

is_eigen=.FALSE. !!!!Is this an eigenvalue problem or not (more so, are we solving for more than one eigenvalue through a loop). wavelength is the eigenvalue here

u=98
v=99
open(u,FILE='wave_k_Psi.out',STATUS='REPLACE') !write values to file

wavelength=2.98D-3
!wavelength=1.5652D-3 !!!Keshav
!Atw=0.215
Atw=0.56
epsDif=0.5D-3 !!!diffusion thickness
!epsDif=2.08D-3
x(1)=x(1)/epsDif
final=final/epsDif
usrstp=usrstp/epsDif
length=((final-x(1))/step)
Do !!!!!!!For the specific case: an eigenvalue problem with another parameter k, which is wavenumber here
  k=2.*pi/wavelength
  a=1./(epsDif*k)
  !a=4.5 !!!!Jacobs numbers
  !psi=1.132 !!!!!!
  !Atw=.616 !!!!

!!!!!!!!!!!!!!Initial conditions or LEFT Boundary Conditions; if one is to be iterated, leave blank and define with psi
y(2,1)=Dexp((x(1))/a)
y(1,1)=y(2,1)/a
!!!!!!!!!!!!!!Initial conditions or LEFT Boundary Conditions

SELECT CASE (kind_of_problem)
CASE("BVP")
  !!!!!!!!!!!!!!for iteration if BVP
  bc=Dexp(-(final)/a) !!!!!what we will use to compare and converge to (must define the specific BC we are dealing with in the bc_iteration function at bottom)
  !!!Psi is what we will iterate until we converge to the bc we want as defined by bc. Psi can be a parameter as defined in the equation or a left bc
  psigus1=1D0 !!!left guess for psi used for bisection and false position
  psigus(1)=1D0 !!left guess for psi used for secant
  psigus3=10D0 !!right guess for psi used for bisection and false position
  psigus(2)=1.2D0 !!right guess for psi used for secant
  psi=psigus1
  !!!!!!!!!!!!!!!!!for iteration if BVP
CASE("IVP")
  !!!!!!!!!!!!!!IVP
  psi=1.132 !!!!!!!!can leave blank if everything is defined in the equation and IC all defined
  !!!!!!!!!!!!!!!IVP
END SELECT

write(*,*) y(2,1), y(1,1), bc
shterr=1
iteration=1
DO
SELECT CASE (bvp_iteration_type)
CASE("PSI_BC_ITERATED")
  call psi_bc_iterated(y,psi,eqn_order)
END SELECT
open(v,FILE='velocity.out',STATUS='REPLACE')
nplus1=2
n=1
step=usrstp
240 continue
!write(*,*) 1
if ((x(n)) .LT. (final-.00000001)) then
  !write(*,*) 2
  n=nplus1-1
  if ((final-x(n)) .LT. step) then
    step=(final-x(n))
  end if
  x(nplus1)=x(n)+step
  xn=x(n)
  xnmain=x(n)
  do i=1,eqn_order
    ynmain(i)=y(i,n)
    yn(i)=y(i,n)

  end do
  SELECT CASE (numerical_method)
  CASE("RICHARDSON")
    call rich(xnmain,xn,ynmain,ynand1,yn,eqn,step,nplus1,n,Atw,epsDif,a,psi,eqn_order)
  CASE("FORWARD_EULER")
    call feuler(xnmain,xn,ynmain,ynand1,yn,eqn,step,nplus1,n,Atw,epsDif,a,psi,eqn_order)
  CASE("BACKWARD_EULER")
    call beuler(xnmain,xn,ynmain,ynand1,yn,eqn,step,nplus1,n,num,Atw,epsDif,a,psi,eqn_order)
  CASE("TRAPEZOID")
    call trap(xnmain,xn,ynmain,ynand1,yn,eqn,step,nplus1,n,num,Atw,epsDif,a,psi,eqn_order)
  CASE("HEUN")
    call heun(xnmain,xn,ynmain,ynand1,yn,eqn,step,nplus1,n,Atw,epsDif,a,psi,eqn_order)
  CASE("RK4")
    call rk4(xnmain,xn,ynmain,ynand1,yn,eqn,step,nplus1,n,Atw,epsDif,a,psi,eqn_order)
  CASE("RK4_MERSON")
    call merson(xnmain,xn,ynmain,ynand1,yn,eqn,step,nplus1,n,err,flag,Atw,epsDif,a,psi,eqn_order)
  CASE DEFAULT
    write(*,*) "Numerical method variable not defined correctly"
    STOP
  END SELECT
  !write(*,*) 3
  do i=1,eqn_order
    y(i,nplus1)=ynand1(i) !!!!!!!!puts next value along x of y in for all the systems of equations (does not affect y(i,1), left bc or ic)
  end do
  !write(98,*) ynand1
  !exact(nplus1)=2.5*(x(nplus1))**(-1)+3.5*x(nplus1)-5
  !write(*,*) x(n),y(2,n),(err(n)),exact(n)
  !write(*,*) x(n),REAL(y(2,n)),REAL(y(1,n))
  write(v,*) x(n),REAL(y(2,n)),REAL(y(1,n))
  !write(*,*) 4
  !write(*,*) "what"
  nplus1=nplus1+1
  goto 240
endif
!write(*,*) num,x(1),y(2,1),step,eqn(11)

SELECT CASE (kind_of_problem) !!!!!!!!!!!!!!!Depending on kind of problem do different things
CASE("BVP")
  !!!!!!!!!!!IF it is a boundary value problem we will shoot, compare the shot with the BC, and change the parameter psi
  !shot(iteration)=y(2,n)
  shot(iteration)=bc_iteration(y(:,n),eqn_order) !!!put in value of bc to check if we have converged there; used for secant method (n is the last value here, because end of loop)
  shterr=abs((bc-shot(iteration))/bc)
  shterrprev=abs((bc-shot(iteration-1))/bc)
  if (iteration .EQ.1) then
    shot1=bc_iteration(y(:,n),eqn_order) !!!put in value of bc to check if we have converged there; used for bisection and false position methods (n is the last value here, because end of loop)
    psi=psigus3
    shterrprev=10.
  endif
  if (iteration .EQ.2) then
    shot3=bc_iteration(y(:,n),eqn_order)
    SELECT CASE (iteration_method)
    CASE("SECANT")
      psigus(iteration+1)=(bc-shot(iteration))*(psigus(iteration)-psigus(iteration-1)) &
          /(shot(iteration)-shot(iteration-1)) &
          +psigus(iteration)
      psi=psigus(iteration+1)

    CASE("BISECTION")
      psigus2=(psigus1+psigus3)/2.
      psi=psigus2

    CASE("FALSE_POSITION")
      psigus2=(bc-shot1)*(psigus3-psigus1)/(shot3-shot1) +psigus1
      psi=psigus2

    END SELECT
  endif
  if (iteration .GE.3) then
    shot2=bc_iteration(y(:,n),eqn_order)
    SELECT CASE (iteration_method)
    CASE("SECANT")

      psigus(iteration+1)=(bc-shot(iteration))*(psigus(iteration)-psigus(iteration-1)) &
          /(shot(iteration)-shot(iteration-1)) &
          +psigus(iteration)
      if (abs(shterr-shterrprev) .GE.10D-12) then !!!only put new value of psigus in psi if the error between them is not too small; otherwise
        psi=psigus(iteration+1) !!!the next iteration will have very small slope in the secant method
      end if

    CASE("FALSE_POSITION")
      if (REAL(bc-shot2)*REAL(bc-shot3) .LE.1D-14) then
        psigus1=psigus2
        shot1=shot2
      else
        psigus3=psigus2
        shot3=shot2
      endif
      psigus2=(bc-shot1)*(psigus3-psigus1)/(shot3-shot1) +psigus1
      psi=psigus2

    CASE("BISECTION")
      if (REAL(bc-shot2)*REAL(bc-shot1) .LE.1D-14) then
        psigus3=psigus2
        shot3=shot2
        !write(*,*) "BISECTION",REAL(bc-shot(iteration))*REAL(bc-shot(iteration-1)),psigus(iteration+1)
      else
        psigus1=psigus2
        shot1=shot2
      endif
      psigus2=(psigus1+psigus3)/2.
      psi=psigus2

    END SELECT
  endif
  iteration=iteration+1

CASE("IVP")
  EXIT !!if an IVP, just do loop once
END SELECT

close(v)
write(*,*) shterr,iteration,psi
if ((shterr .LT.(10D-10)) .OR.(iteration .GT.500) .OR.(abs(shterr-shterrprev) .LT.10D-12)) EXIT
END DO

write(u,*) wavelength,k,real(psi)
wavelength=wavelength+0.1D-3
if ((wavelength .GT. 5D-2).OR.(.NOT.(is_eigen))) EXIT !!!Exit main eigenvalue loop if conditions are met
END DO !!!!!!!this do is for the specific case: an eigenvalue problem with another parameter k, which is wavenumber here

close(u)
end
!!!!!!!!!!!!!!!!!!!!!!!!!!!!Main

subroutine feuler(xnmain,xn,ynmain,ynand1,yn,eqn,step,nplus1,n,Atw,epsDif,a,psi,eqn_order)
...
end

subroutine beuler(xnmain,xn,ynmain,ynand1,yn,eqn,step,nplus1,n,num,Atw,epsDif,a,psi,eqn_order)
...
end

subroutine rich(xnmain,xn,ynmain,ynand1,yn,eqn,step,nplus1,n,Atw,epsDif,a,psi,eqn_order)
...
end

subroutine trap(xnmain,xn,ynmain,ynand1,yn,eqn,step,nplus1,n,num,Atw,epsDif,a,psi,eqn_order)
...
end

subroutine heun(xnmain,xn,ynmain,ynand1,yn,eqn,step,nplus1,n,Atw,epsDif,a,psi,eqn_order)
...
end

subroutine rk4(xnmain,xn,ynmain,ynand1,yn,eqn,step,nplus1,n,Atw,epsDif,a,psi,eqn_order)
...
end
!!!!!!!!!!!!!!!!!!!!!!!!!!!Rk4 Merson
subroutine merson(xnmain,xn,ynmain,ynand1,yn,eqn,step,nplus1,n,err,flag,Atw,epsDif,a,psi,eqn_order)
INTEGER :: nplus1,n,flag,tmp,eqn_order
DOUBLE PRECISION :: xn,xi,xnmain

DOUBLE COMPLEX :: ynmain(eqn_order),eqn(eqn_order,11),ynand1(eqn_order),yn(eqn_order),yi(eqn_order),psi
DOUBLE PRECISION :: step,Atw,epsDif,a
DOUBLE PRECISION :: stpchg
DOUBLE COMPLEX :: yn3rd1(eqn_order),yn3rd2(eqn_order),ynhaf1(eqn_order),yprm(eqn_order),yprm3d(eqn_order)
DOUBLE COMPLEX :: yprmhf(eqn_order),yprmn1(eqn_order),yprime(eqn_order),yand(eqn_order),ynan1(eqn_order),yand1(eqn_order),yscal
DOUBLE PRECISION :: err(1000000),eps

xnmain=xn
ynmain=yn
step=step/3
call feuler(xnmain,xn,ynmain,ynand1,yn,eqn,step,nplus1,n,Atw,epsDif,a,psi,eqn_order) !Euler Predictor
yn3rd1=ynand1
step=step*3
xi=xnmain
yi=ynmain
call eqatn(eqn,yi,xi,Atw,epsDif,a,psi,eqn_order)
call yprimee(yi,xi,yprime,eqn,eqn_order) !Trap Corrector
yprm=yprime
xi=xi+step/3
yi=yn3rd1
call eqatn(eqn,yi,xi,Atw,epsDif,a,psi,eqn_order)
call yprimee(yi,xi,yprime,eqn,eqn_order)
yprm3d=yprime
yn3rd2=ynmain+step/6*(yprm+yprm3d)
yi=yn3rd2
call eqatn(eqn,yi,xi,Atw,epsDif,a,psi,eqn_order)
call yprimee(yi,xi,yprime,eqn,eqn_order)
yprm3d=yprime
!Adams-Bashforth Pred. half step
ynhaf1=ynmain+step/8*(yprm+3*yprm3d)
xi=xn+step/2
yi=ynhaf1
call eqatn(eqn,yi,xi,Atw,epsDif,a,psi,eqn_order)
call yprimee(yi,xi,yprime,eqn,eqn_order)
!Adams-Bashforth Predictor Full Step
yprmhf=yprime
ynan1=ynmain+step/2*(yprm-3*yprm3d+4*yprmhf)
xi=xn+step
yi=ynan1
call eqatn(eqn,yi,xi,Atw,epsDif,a,psi,eqn_order)
call yprimee(yi,xi,yprime,eqn,eqn_order)
!Simpson Rule Corrector
yprmn1=yprime
ynand1=ynmain+step/6*(yprm+4*yprmhf+yprmn1)
yand1=ynand1
yand=ynand1
err(n)=maxval(abs(ynan1-yand1))
tmp=maxloc(abs(ynan1-yand1),DIM=1)
xn=xnmain
yn=ynmain
eps=10.0**(-6)
!write(*,*) ynan1(11), ynan1(12)
if (err(n) .GT.1E-12) then
  !write(*,*) 'what'
  call feuler(xnmain,xn,ynmain,ynand1,yn,eqn,step,nplus1,n,Atw,epsDif,a,psi,eqn_order)
  yscal=ynand1(tmp)
  stpchg=((abs(eps*yscal))/err(n))**(.1D0) !increase step size
  !write(*,*) stpchg
  if (stpchg < 1) then
    stpchg=((abs(eps*yscal))/err(n))**(.3D0) !decrease step size
  end if
  step=step*stpchg
endif
ynand1=yand
return
end
!!!!!!!!!!!!!!!!!!!!!Rk4 Merson
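The error control at the end of `merson` follows the standard adaptive pattern: compare a predictor against a corrector, then rescale the step by a power of (tolerance/error), with a gentler exponent when growing the step than when shrinking it. A minimal Python sketch of that controller (exponents 0.1 and 0.3 as in the listing; the function name is ours, for illustration):

```python
def adjust_step(step, err, yscale, eps=1e-6):
    """Rescale an integration step from the predictor/corrector discrepancy.
    Growth uses exponent 0.1, shrinkage the more aggressive 0.3."""
    if err <= 1e-12:           # error negligible: leave the step alone
        return step
    ratio = abs(eps * yscale) / err
    factor = ratio ** 0.1      # tentative increase
    if factor < 1.0:
        factor = ratio ** 0.3  # error too large: decrease step faster
    return step * factor
```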

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!yprime
!!!!!!!!!!!!!This just assembles the equation at a specific value of xi and gets the values for all equations of the system
subroutine yprimee(yi,xi,yprime,eqn,eqn_order)
INTEGER :: eqn_order
DOUBLE PRECISION :: xi
DOUBLE COMPLEX :: eqn(eqn_order,11),yprime(eqn_order),yi(eqn_order)
do i=1,eqn_order
  yprime(i)=eqn(i,1)*yi(i)+eqn(i,2)*yi(i)**2+eqn(i,3)*yi(i)*xi+eqn(i,4)*xi+eqn(i,5)*(xi)**2+eqn(i,6)*(xi)**3+eqn(i,7)+ &
      eqn(i,8)*exp(-(xi-2)**2/(2*(.075)**2))+eqn(i,9)*yi(i)**3*2*xi*sin(xi**2)+eqn(i,10)*xi**(-3)+eqn(i,11)*(4-(12*xi)+ &
      12*(xi**2)-4*(xi**3))*yi(i)
end do
!write(98,*) yprime, yi, xi, eqn(11)
return
end
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!yprime

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!dgdy
!!!!!!!!!!!!this is used in the implicit methods backward euler and trapezoid for newton-raphson iteration
!!!!!!!!g=yguess-ynmain-step*yprime
!!!!!!!!!!!WE might have needed to, in eqatn, define it using y(i,1) for the multiplication by y(i), instead of defining all in eqn(i,7), if using BE or TRAP
subroutine dgdyfn(yi,xi,dgdy,eqn,step,eqn_order)
INTEGER :: eqn_order
DOUBLE PRECISION :: xi,step
DOUBLE COMPLEX :: eqn(eqn_order,11),dgdy(eqn_order),yi(eqn_order)
do i=1,eqn_order
  dgdy(i)=1-step*(eqn(i,1)+2*eqn(i,2)*yi(i)+eqn(i,3)*xi+0*eqn(i,4)*xi+0*eqn(i,5)*(xi)**2+0*eqn(i,6)*(xi)**3+0*eqn(i,7)+ &
      0*eqn(i,8)*exp(-(xi-2)**2/(2*(.075)**2))+3*eqn(i,9)*yi(i)**2*2*xi*sin(xi**2)+0*eqn(i,10)*xi**(-3)+eqn(i,11)*(4-(12*xi)+ &
      12*(xi**2)-4*(xi**3)))
  !write(98,*) dgdy
end do
return
end
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!dgdy

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!PROBLEM SPECIFIC STUFF!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!equations as a system of 1st order equations
!eqn defines the equation for yprime, y1prime, ...
!It is in the form yprime = eqn(i,1)*y+eqn(i,2)*y^2+eqn(i,3)*y*x +
!eqn(i,4)*x+eqn(i,5)*x^2+eqn(i,6)*x^3+eqn(i,7)+eqn(i,8)*exp(-(x-2)**2/(2*(.075)**2))
!+eqn(i,9)*y^3*2*x*sin(x^2)+eqn(10)*x^(-3)
!!!!!!!!!!!!!!!here eqn(1,:) is the highest order derivative, eqn(2,:) is one order less, ...
!!!!!!!!!!!!for example, for a second order equation y(2)'=y(1), y(1)'=....y(2) + ......y(1) ==> y(2)''=.....y(2) + .....y(2)'
!!!!eqn(i,7) is the most general one where anything can be entered; eqn(i,1) multiplies the current i eqn by y(i); you can do it all manually in eqn(i,7) or not
subroutine eqatn(eqn,yi,xi,Atw,epsDif,a,psi,eqn_order)
IMPLICIT NONE
INTEGER :: eqn_order
DOUBLE COMPLEX :: eqn(eqn_order,11),yi(eqn_order),psi
DOUBLE PRECISION :: xi,Atw,epsDif,a
DOUBLE PRECISION :: fcn_dQdsig,fcn_Q
DOUBLE PRECISION, PARAMETER :: pi = 3.141592653589793238462643383279502884197D0
!write(*,*) 'eqatn ', 1
!eqn(1,1)=(-10.,0)
!write(*,*) 'eqatn xi/epsDif = ', xi/epsDif,erf(xi/epsDif)*pi**2.
!write(*,*) 'eqatn fcn_Q= ', f
!eqn(1,1)=-(Atw*fcn_dQdsig(xi))/(1D0+Atw*fcn_Q(xi))
!write(*,*) xi,eqn(1,1)
eqn(1,1)=(0D0,0D0)
eqn(1,3)=(0D0,0D0)
eqn(1,4)=(0D0,0D0)
eqn(1,5)=(0D0,0D0)
eqn(1,6)=(0D0,0D0)
eqn(1,7)=yi(2)/a**2*(1D0+Atw*fcn_Q(xi)-a*psi*fcn_dQdsig(xi))/(1D0+Atw*fcn_Q(xi)) &
    -yi(1)*((Atw*fcn_dQdsig(xi))/(1D0+Atw*fcn_Q(xi)))
!eqn(1,7)=yi(2)/a**2*(1D0+Atw*fcn_Q(xi)-a*psi*fcn_dQdsig(xi))/(1D0+Atw*fcn_Q(xi))
!write(*,*) eqn(1,7)
!write(*,*) xi, fcn_Q(xi)
!eqn(1,7)=(-39.47835,0)*yi(2)
!eqn(1,7)=-.1*yi(2)
eqn(2,7)=yi(1)

return
end
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!equation
DOUBLE COMPLEX FUNCTION bc_iteration(y,eqn_order) !!!!!this defines which bc we will be using for comparison (y, y prime....which equation from sys. of equations)
IMPLICIT NONE
INTEGER :: eqn_order
DOUBLE COMPLEX :: y(eqn_order)
bc_iteration=y(2) !!!!!!!!!!!here, for a second order equation, 2 is the y value as in eqatn
END FUNCTION bc_iteration

subroutine psi_bc_iterated(y,psi,eqn_order) !!!!if psi is a left bc then we must specify which one it is; that is done here
IMPLICIT NONE
INTEGER :: eqn_order
DOUBLE COMPLEX :: y(eqn_order,1000000),psi

y(1,1)=psi !!!!for a second order equation y(1,1) is the bc for the first derivative y' following the eqatn function; y(2,1) would be y
return
end

DOUBLE PRECISION FUNCTION fcn_Q(value)
IMPLICIT NONE
DOUBLE PRECISION, PARAMETER :: pi = 3.141592653589793238462643383279502884197D0
DOUBLE PRECISION :: value

fcn_Q=Derf(value)
!write(*,*) 'fcn_Q = ', fcn_Q, value
END FUNCTION fcn_Q

DOUBLE PRECISION FUNCTION fcn_dQdsig(value)
IMPLICIT NONE
DOUBLE PRECISION, PARAMETER :: pi = 3.141592653589793238462643383279502884197D0
DOUBLE PRECISION :: value
!fcn_dQdsig=(2.*pi**2.*exp(-(value**2.)))-(2.*pi**2.*exp(-(0.**2.)))
fcn_dQdsig=2./sqrt(pi)*Dexp(-(value**2.))
!write(*,*) 'fcn_dQdsig = ', fcn_dQdsig, value
END FUNCTION fcn_dQdsig
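The driver above is a shooting method: integrate from the left boundary, compare the arrival value with the right boundary condition `bc`, and update the parameter psi by secant (or bisection/false-position) iteration. A self-contained Python sketch of the secant shooting loop on a simple linear test problem (y'' = -y, which is not the Duff-Harlow equation; chosen only because its exact answer is known):

```python
import math

def shoot(psi, steps=2000):
    """Integrate y'' = -y from x=0 to 1 with y(0)=0, y'(0)=psi
    (explicit midpoint steps) and return the arrival value y(1)."""
    h = 1.0 / steps
    y, yp = 0.0, psi
    for _ in range(steps):
        ym = y + 0.5 * h * yp        # midpoint state
        ypm = yp - 0.5 * h * y
        y, yp = y + h * ypm, yp - h * ym
    return y

def secant_shoot(target, p0=1.0, p1=1.2, tol=1e-10, itmax=50):
    """Secant iteration on the shooting parameter, as in the Fortran driver."""
    s0, s1 = shoot(p0), shoot(p1)
    for _ in range(itmax):
        if abs(s1 - target) < tol:
            break
        p0, p1, s0 = p1, p1 + (target - s1) * (p1 - p0) / (s1 - s0), s1
        s1 = shoot(p1)
    return p1
```

For this linear test problem y(1) = psi*sin(1), so the converged psi can be checked against target/sin(1); the Fortran program applies the same loop to its complex-valued eigenvalue problem.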

B.7 Java Image Analysis Program

B.7.1 Main.java

package runimagej ;

import java.lang.Math;
import java.awt.*;
import java.util.*;
import java.awt.event.*;
import java.io.*;
import java.net.*;
import java.awt.image.*;
import javax.swing.event.*;
import javax.swing.table.TableModel;
import java.awt.BorderLayout;
import javax.swing.JFrame;
import javax.swing.JScrollPane;
import javax.swing.JTable;
import javax.swing.JPanel;
import javax.swing.JOptionPane;
import javax.swing.ImageIcon;
import javax.swing.Icon;

import ij.gui.*;
import ij.gui.Roi.*;
import ij.process.*;
import ij.io.*;
import ij.plugin.*;
import ij.plugin.filter.*;
import ij.plugin.frame.*;
import ij.text.*;
import ij.io.Opener;
import ij.util.*;
import ij.*;
import ij.IJ;
import ij.plugin.PlugIn;

import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartPanel;
import org.jfree.chart.JFreeChart;
import org.jfree.chart.renderer.xy.XYLineAndShapeRenderer;
import org.jfree.data.xy.XYDataset;
import org.jfree.data.xy.XYSeries;
import org.jfree.data.xy.XYSeriesCollection;
import org.jfree.chart.axis.NumberAxis;
import org.jfree.ui.ApplicationFrame;
import org.jfree.ui.RefineryUtilities;

import edu.emory.mathcs.jtransforms.fft.*;
import edu.emory.mathcs.jtransforms.*;
import ij.measure.CurveFitter;

import org.javatuples.*;
import static fj.data.Array.array;
///////////////////////////Program by: Mike Roberts///////////////////////
public class Main {
    public static void main(String[] args) {
        Main main = new Main();
        OverloadMaxAndMin MinMax = new OverloadMaxAndMin();
        DrawOnImages Draw = new DrawOnImages();
        Excel excel = new Excel();
        MultiDimArray MultiDimArray = new MultiDimArray();
        LinearFit LinearFit = new LinearFit();
        Analysis Analysis = new Analysis();

        new ImageJ();

        IJ.error("Hello Sir or Madam!");
        IJ.resetEscape();

        OpenDialog od = new OpenDialog("Select Path For Image Sequence or file to open",
            "/media/MIKESUSB/Research/OtherNotSharedWithDropbox/ExperimentDataLarge/",
            "Image Sequence");

        String Path = od.getDirectory();
        String Filename = od.getFileName();
        String FolderName = (new File(Path)).getName();

        GenericDialog gd = new GenericDialog("Open What?");
        gd.addMessage("Do we open an image sequence or a stack?");
        String[] choice = new String[] {"Image Sequence", "Stack"};
        gd.addChoice("Image Sequence or Stack", choice, "Image Sequence");
        gd.showDialog();
        String WhatToOpen = gd.getNextChoice();

        String ForImageSequence = "";
        if (WhatToOpen.equals("Image Sequence")) {
            String SeqString = ".*" + IJ.getString("If needed enter a regexp character sequence (like .*Dif.*) for image sequence to go before .tif", "");
            ForImageSequence = "open=" + Path + " number=505 starting=1 increment=1 scale=100 file=[] or=" + SeqString + ".*\\.tif.* sort";
        } else {
            ForImageSequence = Path + Filename;
        }
        gd = new GenericDialog("Use Previous Inputs?");
        gd.addMessage("Should we use previous inputs that have been saved in file");
        gd.addCheckbox("Use Previous Inputs?", false);
        gd.showDialog();
        boolean UsePrevInputs = gd.getNextBoolean();

        Object[] ReadInputReturned = new Object[14];
        try {
            ReadInputReturned = main.ReadInput();
        } catch (Exception FileNotFoundException) {
            UsePrevInputs = false;

        }
        String DoWeUseTurboReg = (String) ReadInputReturned[0];
        boolean Invert_Image = (Boolean) ReadInputReturned[1];
        boolean Invert_LUT = (Boolean) ReadInputReturned[2];
        String TemplateTypeChoice = (String) ReadInputReturned[3];
        double tank_widthmm = (Double) ReadInputReturned[4];
        double TimeBetwFrames = (Double) ReadInputReturned[5];
        double Atwood = (Double) ReadInputReturned[6];
        double Accel = (Double) ReadInputReturned[7];
        double AvgKinemVisc = (Double) ReadInputReturned[8];
        Integer pixel_bins = (Integer) ReadInputReturned[9];
        String DoWeUseNormalized = (String) ReadInputReturned[10];
        boolean TakeNaturalLog = (Boolean) ReadInputReturned[11];
        boolean UseEmptyTank = (Boolean) ReadInputReturned[12];
        String TypeOfExp = (String) ReadInputReturned[13];

        ImagePlus MainStack = main.GetStack(ForImageSequence, WhatToOpen, UsePrevInputs,
            (Boolean) ReadInputReturned[1], (Boolean) ReadInputReturned[2]);
        Object[] MakeSubStackReturned = main.MakeSubStack(MainStack, UsePrevInputs,
            (Integer) ReadInputReturned[9], (String) ReadInputReturned[10],
            (Boolean) ReadInputReturned[11], (Boolean) ReadInputReturned[12],
            (String) ReadInputReturned[13]);
        ImagePlus MainSubStack = (ImagePlus) MakeSubStackReturned[0];
        DoWeUseNormalized = (String) MakeSubStackReturned[1];
        pixel_bins = (Integer) MakeSubStackReturned[2];
        int interface_loc = (Integer) MakeSubStackReturned[3];
        int weird_interface_region_top = (Integer) MakeSubStackReturned[4];
        int weird_interface_region_bot = (Integer) MakeSubStackReturned[5];
        int startslice = (Integer) MakeSubStackReturned[6];
        int endslice = (Integer) MakeSubStackReturned[7];
        int tank_width = (Integer) MakeSubStackReturned[8];
        TypeOfExp = (String) MakeSubStackReturned[9];

        TakeNaturalLog = (Boolean) MakeSubStackReturned[10];
        UseEmptyTank = (Boolean) MakeSubStackReturned[11];

        Object[] AlignNormReturned = main.AlignAndNormalize(MainSubStack, DoWeUseNormalized,
            pixel_bins, interface_loc, TypeOfExp, TakeNaturalLog, UseEmptyTank, UsePrevInputs,
            DoWeUseTurboReg);
        ImagePlus AdjusImages = (ImagePlus) AlignNormReturned[0];
        ImagePlus CropAdjusImages = (ImagePlus) AlignNormReturned[1];
        double average_bub = (Double) AlignNormReturned[2];
        double average_spike = (Double) AlignNormReturned[3];
        ImagePlus EdgeImages = (ImagePlus) AlignNormReturned[4];
        ImagePlus CropEdgeImages = (ImagePlus) AlignNormReturned[5];
        int analysisarea_top = (Integer) AlignNormReturned[6];
        int analysisarea_bot = (Integer) AlignNormReturned[7];
        int analysisarea_left = (Integer) AlignNormReturned[8];
        int analysisarea_right = (Integer) AlignNormReturned[9];
        AdjusImages.getProcessor().setMinAndMax(
            AdjusImages.getStatistics().mean - 3 * AdjusImages.getStatistics().stdDev,
            AdjusImages.getStatistics().mean + 3 * AdjusImages.getStatistics().stdDev);
        AdjusImages.updateImage();

        ///////////For Threshold
        Object[] ReturnedFromImageProfiling = Analysis.ImageProfiling(AdjusImages,
            CropAdjusImages, analysisarea_left, analysisarea_right, analysisarea_top,
            analysisarea_bot, weird_interface_region_top, weird_interface_region_bot,
            DoWeUseNormalized, average_spike, Path, pixel_bins, "PassedValue");
        double[][] RowAverageImageArraySpk = (double[][]) ReturnedFromImageProfiling[0];
        double[][] RowAverageImageArrayBub = (double[][]) ReturnedFromImageProfiling[1];
        double[][] RowAverageSmoothImageArraySpk = (double[][]) ReturnedFromImageProfiling[2];
        double[][] RowAverageSmoothImageArrayBub = (double[][]) ReturnedFromImageProfiling[3];
        float[][][] CropImageArray = (float[][][]) ReturnedFromImageProfiling[4];
        double maxvalueForDrawThresh = (Double) ReturnedFromImageProfiling[5];
        ReturnedFromImageProfiling = null;

        ////////////////Into Excel File
        String[] HeaderInfoBlank = main.HeaderForExcel("");
        Object[] CreateInitialExcelProfileReturned = excel.CreateInitialExcelProfileFile(
            Path + "ImageJProfile" + FolderName + ".xls", HeaderInfoBlank,
            RowAverageImageArraySpk, RowAverageImageArrayBub,
            RowAverageImageArraySpk.length, RowAverageImageArrayBub.length,
            RowAverageImageArrayBub[0].length, FolderName);
        int[] ProfileExcelColIndex = (int[]) CreateInitialExcelProfileReturned[0];
        String ProfileExcelTemplateTypeChoice = (String) CreateInitialExcelProfileReturned[1];
        excel.ReopenExcelProfileAndCreateColumns(Path + "ImageJProfile" + FolderName + ".xls",
            FolderName, RowAverageImageArrayBub[0].length);
        ////////////////////Into Excel File

        ///////////////For Edge detection
        ReturnedFromImageProfiling = Analysis.ImageProfiling(EdgeImages, CropEdgeImages,
            analysisarea_left, analysisarea_right, analysisarea_top, analysisarea_bot,
            weird_interface_region_top, weird_interface_region_bot, DoWeUseNormalized,
            average_spike, Path, 65536, "MaxForAll");
        double[][] RowAverageEdgeImageArraySpk = (double[][]) ReturnedFromImageProfiling[0];
        double[][] RowAverageEdgeImageArrayBub = (double[][]) ReturnedFromImageProfiling[1];
        double[][] RowAverageSmoothEdgeImageArraySpk = (double[][]) ReturnedFromImageProfiling[2];
        double[][] RowAverageSmoothEdgeImageArrayBub = (double[][]) ReturnedFromImageProfiling[3];
        double maxvalueForDrawEdge = (Double) ReturnedFromImageProfiling[5];
        ReturnedFromImageProfiling = null;
        ////////////////////

//////////////////Get Amplitudes

        ///////////////////
        if (TypeOfExp.equals("Refrac")) { ////if statement for type image
        ///////////////Amplitudes For Threshold Images
        ///////Bubble
        double ThreshError = (double) IJ.getNumber("Error for bubble/spike threshold", 0.01);

        double[][] BubArrayMin = MinMax.getMinValue(RowAverageImageArrayBub);
        double[] ArrayOfAverageBub = new double[BubArrayMin[0].length];
        Arrays.fill(ArrayOfAverageBub, average_bub); //need same value for all slices
        double[] Bub70Perc = main.PercParse(RowAverageImageArrayBub, BubArrayMin[1],
            ArrayOfAverageBub, "Forward", "Positive", 70, ThreshError);
        double[] Bub80Perc = main.PercParse(RowAverageImageArrayBub, BubArrayMin[1],
            ArrayOfAverageBub, "Forward", "Positive", 80, ThreshError);
        double[] Bub90Perc = main.PercParse(RowAverageImageArrayBub, BubArrayMin[1],
            ArrayOfAverageBub, "Forward", "Positive", 90, ThreshError);

Draw.DrawDataOnImage(AdjusImages, Bub70Perc, pixel_bins, DoWeUseNormalized, weird_interface_region_bot, 0);

Draw.DrawDataOnImage(AdjusImages, Bub80Perc, pixel_bins, DoWeUseNormalized, weird_interface_region_bot, 0);
Draw.DrawDataOnImage(AdjusImages, Bub90Perc, pixel_bins, DoWeUseNormalized, weird_interface_region_bot, 0);

/////////////// Spike
double[][] SpkArrayMin = MinMax.getMinValue(RowAverageImageArraySpk);
double[] ArrayOfAverageSpk = new double[SpkArrayMin[0].length];
Arrays.fill(ArrayOfAverageSpk, average_spike); //need same value for all slices
double[] Spk70Perc = main.PercParse(RowAverageImageArraySpk, SpkArrayMin[1], ArrayOfAverageSpk, "Backward", "Negative", 70, ThreshError);
double[] Spk80Perc = main.PercParse(RowAverageImageArraySpk, SpkArrayMin[1], ArrayOfAverageSpk, "Backward", "Negative", 80, ThreshError);
double[] Spk90Perc = main.PercParse(RowAverageImageArraySpk, SpkArrayMin[1], ArrayOfAverageSpk, "Backward", "Negative", 90, ThreshError);

Draw.DrawDataOnImage(AdjusImages, Spk70Perc, pixel_bins, DoWeUseNormalized, analysisarea_top, 0);
Draw.DrawDataOnImage(AdjusImages, Spk80Perc, pixel_bins, DoWeUseNormalized, analysisarea_top, 0);
Draw.DrawDataOnImage(AdjusImages, Spk90Perc, pixel_bins, DoWeUseNormalized, analysisarea_top, 0);
IJ.selectWindow(AdjusImages.getID());
////////////////////////// Slope Stuff
new WaitForUserDialog("Figure out Lower/Upper Percentages for Slope Intersec of Adjusted Images (lines are for 70/80/90)").show();
int ThreshLowerPerc = (int) IJ.getNumber("Lower Percentage For Slope Intersec (Middle is zero)", 60);
int ThreshUpperPerc = (int) IJ.getNumber("Upper Percentage For Slope Intersec (Middle is zero)", 80);

double[] BubSlopeLowerIndex = main.PercParse(RowAverageSmoothImageArrayBub, BubArrayMin[1], ArrayOfAverageBub, "Forward", "Positive", ThreshLowerPerc, ThreshError);
double[] BubSlopeUpperIndex = main.PercParse(RowAverageSmoothImageArrayBub, BubArrayMin[1], ArrayOfAverageBub, "Forward", "Positive", ThreshUpperPerc, ThreshError);
Object[] ReturnedFromValueFromSlopeBub = LinearFit.ValueFromSlope("BubForFits", RowAverageSmoothImageArrayBub, BubSlopeLowerIndex, BubSlopeUpperIndex, ArrayOfAverageBub);
double[] BubSlope = (double[]) ReturnedFromValueFromSlopeBub[0];
double[][] CurveFittedIndexArrayBub = (double[][]) ReturnedFromValueFromSlopeBub[1];
double[][] CurveFittedAverageArrayBub = (double[][]) ReturnedFromValueFromSlopeBub[2];
Draw.DrawCircleOnImage(AdjusImages, CurveFittedIndexArrayBub, CurveFittedAverageArrayBub, pixel_bins, DoWeUseNormalized, weird_interface_region_bot, maxvalueForDrawThresh, pixel_bins - 1, 1, 4);
Draw.DrawDataOnImage(AdjusImages, BubSlope, pixel_bins, DoWeUseNormalized, weird_interface_region_bot, 0);

double[] SpkSlopeLowerIndex = main.PercParse(RowAverageSmoothImageArraySpk, SpkArrayMin[1], ArrayOfAverageSpk, "Backward", "Negative", ThreshUpperPerc, ThreshError);
double[] SpkSlopeUpperIndex = main.PercParse(RowAverageSmoothImageArraySpk, SpkArrayMin[1], ArrayOfAverageSpk, "Backward", "Negative", ThreshLowerPerc, ThreshError);
Object[] ReturnedFromValueFromSlopeSpk = LinearFit.ValueFromSlope("SpkForFits", RowAverageSmoothImageArraySpk, SpkSlopeLowerIndex, SpkSlopeUpperIndex, ArrayOfAverageSpk);
double[] SpkSlope = (double[]) ReturnedFromValueFromSlopeSpk[0];
double[][] CurveFittedIndexArraySpk = (double[][]) ReturnedFromValueFromSlopeSpk[1];
double[][] CurveFittedAverageArraySpk = (double[][]) ReturnedFromValueFromSlopeSpk[2];
Draw.DrawCircleOnImage(AdjusImages, CurveFittedIndexArraySpk, CurveFittedAverageArraySpk, pixel_bins, DoWeUseNormalized, analysisarea_top, maxvalueForDrawThresh, pixel_bins - 1, 1, 4);
Draw.DrawDataOnImage(AdjusImages, SpkSlope, pixel_bins, DoWeUseNormalized, analysisarea_top, 0);
IJ.saveAs(AdjusImages, ".tif", Path + "AdjustedImages");
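`LinearFit.ValueFromSlope` is called above but not defined in this listing. The general slope-intersection idea — least-squares fit a line through the profile points between two threshold indices, then solve for where that line crosses the plateau value — can be sketched as follows. The class name `SlopeIntersectSketch` and its single-profile signature are hypothetical simplifications of the actual routine, which operates on whole stacks of profiles:

```java
// Hypothetical sketch of a slope-intersection amplitude estimate:
// fit y = slope*x + icept through profile[lo..hi] by least squares,
// then return the (fractional) index where the fit equals the plateau.
public class SlopeIntersectSketch {
    public static double intersect(double[] profile, int lo, int hi,
                                   double plateau) {
        int n = hi - lo + 1;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = lo; i <= hi; i++) {
            sx += i;
            sy += profile[i];
            sxx += (double) i * i;
            sxy += (double) i * profile[i];
        }
        // Standard least-squares slope and intercept
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double icept = (sy - slope * sx) / n;
        return (plateau - icept) / slope; // index where the fit == plateau
    }
}
```

The sub-pixel result is what makes this estimate less sensitive to noise than a single-threshold crossing.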

///////Data to be used in excel
double SpkOffset = weird_interface_region_top - analysisarea_top; //to find amplitude subtract above value, but need to first find value to be subtracted since array

double[] SpkAmps70Perc = main.AddScalarToArray(Spk70Perc, SpkOffset, true); //find diff between extracted
//amp and interface (offset for diff region) to get spike
double[] SpkAmps80Perc = main.AddScalarToArray(Spk80Perc, SpkOffset, true);
double[] SpkAmps90Perc = main.AddScalarToArray(Spk90Perc, SpkOffset, true);
double[] SpkAmpsSlope = main.AddScalarToArray(SpkSlope, SpkOffset, true);

double BubOffset = 0; //Since spk and bub are broken into two sections starting at 0, we don't need to subtract intfc loc

//for bub, but we do need to subtract half diffusion thick
double[] BubAmps70Perc = main.AddScalarToArray(Bub70Perc, BubOffset, false);
double[] BubAmps80Perc = main.AddScalarToArray(Bub80Perc, BubOffset, false);
double[] BubAmps90Perc = main.AddScalarToArray(Bub90Perc, BubOffset, false);
double[] BubAmpsSlope = main.AddScalarToArray(BubSlope, BubOffset, false);
/////////////////Amp for Thresh
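The comments above describe converting extracted positions (measured from the top of each sub-array) into amplitudes relative to the interface. Since `main.AddScalarToArray` itself is not reproduced here, a plausible sketch of that conversion is shown below; the class name `OffsetSketch` and the exact meaning of the boolean flag are assumptions inferred from how the routine is called for spikes (`true`) versus bubbles (`false`):

```java
// Hypothetical sketch of the offset correction in main.AddScalarToArray.
// Spikes grow upward from the interface, so their amplitude is assumed to
// be (offset - position); bubbles are measured downward from index 0, so
// their positions are simply shifted by the offset.
public class OffsetSketch {
    public static double[] addScalar(double[] positions, double offset,
                                     boolean subtractFromOffset) {
        double[] out = new double[positions.length];
        for (int i = 0; i < positions.length; i++) {
            out[i] = subtractFromOffset ? offset - positions[i]  // spike case (assumed)
                                        : positions[i] + offset; // bubble case (assumed)
        }
        return out;
    }
}
```

With `BubOffset = 0` the bubble branch reduces to the identity, consistent with the comment that the bubble sub-array already starts at the interface.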

//////////Amplitude For Edge Images
//////Bubble
double[][] BubArrayEdgeMax = MinMax.getMaxValue(RowAverageSmoothEdgeImageArrayBub);
double[] ArrayOfZeroEdgeBub = new double[BubArrayEdgeMax[0].length];
Arrays.fill(ArrayOfZeroEdgeBub, 0);
double[] EdgeBub30Perc = main.PercParse(RowAverageSmoothEdgeImageArrayBub, BubArrayEdgeMax[1], ArrayOfZeroEdgeBub, "Backward", "Negative", 30, 0.1);
Draw.DrawDataOnImage(EdgeImages, EdgeBub30Perc, 65536, DoWeUseNormalized, weird_interface_region_bot, 65536 - 1);
///////////Spike
double[][] SpkArrayEdgeMax = MinMax.getMaxValue(RowAverageSmoothEdgeImageArraySpk);
double[] ArrayOfZeroEdgeSpk = new double[SpkArrayEdgeMax[0].length];
Arrays.fill(ArrayOfZeroEdgeSpk, 0);
double[] EdgeSpk30Perc = main.PercParse(RowAverageSmoothEdgeImageArraySpk, SpkArrayEdgeMax[1], ArrayOfZeroEdgeSpk, "Forward", "Positive", 30, 0.1);
Draw.DrawDataOnImage(EdgeImages, EdgeSpk30Perc, 65536, DoWeUseNormalized, analysisarea_top, 65536 - 1);
IJ.selectWindow(EdgeImages.getID());

////////////////////////// Slope Stuff for Edge
new WaitForUserDialog("Figure out Lower/Upper Percentages for Slope Intersec of Edge Images (line is for 30)").show();
int EdgeLowerPerc = (int) IJ.getNumber("Lower Percentage For Edge Slope Intersec (Middle is zero)", 20);
int EdgeUpperPerc = (int) IJ.getNumber("Upper Percentage For Edge Slope Intersec (Middle is zero)", 40);

double[] EdgeBubSlopeLowerIndex = main.PercParse(RowAverageSmoothEdgeImageArrayBub, BubArrayEdgeMax[1], ArrayOfZeroEdgeBub, "Backward", "Negative", EdgeLowerPerc, ThreshError);
double[] EdgeBubSlopeUpperIndex = main.PercParse(RowAverageSmoothEdgeImageArrayBub, BubArrayEdgeMax[1], ArrayOfZeroEdgeBub, "Backward", "Negative", EdgeUpperPerc, ThreshError);
Object[] ReturnedFromValueFromSlopeEdgeBub = LinearFit.ValueFromSlope("EdgeBubForFits", RowAverageSmoothEdgeImageArrayBub, EdgeBubSlopeLowerIndex, EdgeBubSlopeUpperIndex, ArrayOfZeroEdgeBub);
double[] EdgeBubSlope = (double[]) ReturnedFromValueFromSlopeEdgeBub[0];
double[][] CurveFittedIndexArrayEdgeBub = (double[][]) ReturnedFromValueFromSlopeEdgeBub[1];
double[][] CurveFittedAverageArrayEdgeBub = (double[][]) ReturnedFromValueFromSlopeEdgeBub[2];
Draw.DrawCircleOnImage(EdgeImages, CurveFittedIndexArrayEdgeBub, CurveFittedAverageArrayEdgeBub, 65536, DoWeUseNormalized, weird_interface_region_bot, maxvalueForDrawEdge, 65536 - 1, 1, 4);
Draw.DrawDataOnImage(EdgeImages, EdgeBubSlope, 65536, DoWeUseNormalized, weird_interface_region_bot, 65536 - 1);

double[] EdgeSpkSlopeLowerIndex = main.PercParse(RowAverageSmoothEdgeImageArraySpk, SpkArrayEdgeMax[1], ArrayOfZeroEdgeSpk, "Forward", "Positive", EdgeUpperPerc, ThreshError);
double[] EdgeSpkSlopeUpperIndex = main.PercParse(RowAverageSmoothEdgeImageArraySpk, SpkArrayEdgeMax[1], ArrayOfZeroEdgeSpk, "Forward", "Positive", EdgeLowerPerc, ThreshError);
Object[] ReturnedFromValueFromSlopeEdgeSpk = LinearFit.ValueFromSlope("EdgeSpkForFits", RowAverageSmoothEdgeImageArraySpk, EdgeSpkSlopeLowerIndex, EdgeSpkSlopeUpperIndex, ArrayOfZeroEdgeSpk);
double[] EdgeSpkSlope = (double[]) ReturnedFromValueFromSlopeEdgeSpk[0];
double[][] CurveFittedIndexArrayEdgeSpk = (double[][]) ReturnedFromValueFromSlopeEdgeSpk[1];
double[][] CurveFittedAverageArrayEdgeSpk = (double[][]) ReturnedFromValueFromSlopeEdgeSpk[2];
Draw.DrawCircleOnImage(EdgeImages, CurveFittedIndexArrayEdgeSpk, CurveFittedAverageArrayEdgeSpk, 65536, DoWeUseNormalized, analysisarea_top, maxvalueForDrawEdge, 65536 - 1, 1, 4);
Draw.DrawDataOnImage(EdgeImages, EdgeSpkSlope, 65536, DoWeUseNormalized, analysisarea_top, 65536 - 1);
IJ.saveAs(EdgeImages, ".tif", Path + "EdgeDetectedImages");
////////Arrays shifted to be really for bubble / spike amps (need for excel)
double[] EdgeBubAmps30Perc = main.AddScalarToArray(EdgeBub30Perc, BubOffset, false);
double[] EdgeBubAmpsSlope = main.AddScalarToArray(EdgeBubSlope, BubOffset, false);
double[] EdgeSpkAmps30Perc = main.AddScalarToArray(EdgeSpk30Perc, SpkOffset, true);
double[] EdgeSpkAmpsSlope = main.AddScalarToArray(EdgeSpkSlope, SpkOffset, true);

//////////////////////////////// Edge Amps

////////////////Into Excel File

Object[] ReturnedDataForExcel = main.DataForExcel(BubAmpsSlope, SpkAmpsSlope, BubAmps70Perc, SpkAmps70Perc, BubAmps80Perc, SpkAmps80Perc, BubAmps90Perc, SpkAmps90Perc, EdgeBubAmps30Perc, EdgeSpkAmps30Perc, EdgeBubAmpsSlope, EdgeSpkAmpsSlope); //Bubbles then Spikes
double[][] MainValues = (double[][]) ReturnedDataForExcel[0];
int NumOfRowsForExcel = (Integer) ReturnedDataForExcel[1];
int NumOfColumnsForExcel = (Integer) ReturnedDataForExcel[2];
String[] HeaderInfo = main.HeaderForExcel("BubAmpsSlope", "SpkAmpsSlope", "BubAmps70Perc", "SpkAmps70Perc", "BubAmps80Perc", "SpkAmps80Perc", "BubAmps90Perc", "SpkAmps90Perc", "EdgeBubAmps30Perc", "EdgeSpkAmps30Perc", "EdgeBubAmpsSlope", "EdgeSpkAmpsSlope");
////////////////Into Excel File
///////////////Reopen and Manipulate Excel
Object[] CreateInitialExcelReturned = excel.CreateInitialExcelFile(Path + "ImageJ" + FolderName + ".xls", HeaderInfo, MainValues, NumOfRowsForExcel, NumOfColumnsForExcel, FolderName, UsePrevInputs, TemplateTypeChoice);
int[] AmpColIndex = (int[]) CreateInitialExcelReturned[0];
TemplateTypeChoice = (String) CreateInitialExcelReturned[1];
excel.ReopenExcelAndCreateColumns(Path + "ImageJ" + FolderName + ".xls", HeaderInfo, MainValues, NumOfRowsForExcel + 1, NumOfColumnsForExcel, startslice, endslice, interface_loc - analysisarea_top, tank_width, FolderName, AmpColIndex, weird_interface_region_bot - weird_interface_region_top, (String) ReadInputReturned[3], UsePrevInputs, (Double) ReadInputReturned[4], (Double) ReadInputReturned[5], (Double) ReadInputReturned[6], (Double) ReadInputReturned[7], (Double) ReadInputReturned[8]);
///////////////Reopen and Manipulate Excel

} else if (TypeOfExp.equals("Absorp")) { ///////if statement for type of image
///////////Amps for Thresh
double ThreshError = (double) IJ.getNumber("Error for bubble / spike threshold", 0.01);

/////// Bubble
double[][] SpkArrayMin = MinMax.getMinValue(RowAverageImageArraySpk);
double[][] BubArrayMin = MinMax.getMinValue(RowAverageImageArrayBub);
double[] ArrayOfAverageBub = new double[BubArrayMin[0].length];
Arrays.fill(ArrayOfAverageBub, average_bub); //need same value for all slices
double[] ArrayOfAverageSpk = new double[SpkArrayMin[0].length];
Arrays.fill(ArrayOfAverageSpk, average_spike); //need same value for all slices
double[] Bub95Perc = main.PercParse(RowAverageImageArrayBub, ArrayOfAverageSpk, ArrayOfAverageBub, "Forward", "Negative", 95, ThreshError);
double[] Bub80Perc = main.PercParse(RowAverageImageArrayBub, ArrayOfAverageSpk, ArrayOfAverageBub, "Forward", "Negative", 80, ThreshError);
double[] Bub90Perc = main.PercParse(RowAverageImageArrayBub, ArrayOfAverageSpk, ArrayOfAverageBub, "Forward", "Negative", 90, ThreshError);

Draw.DrawDataOnImage(AdjusImages, Bub95Perc, pixel_bins, DoWeUseNormalized, weird_interface_region_bot, 1);
Draw.DrawDataOnImage(AdjusImages, Bub80Perc, pixel_bins, DoWeUseNormalized, weird_interface_region_bot, 1);
Draw.DrawDataOnImage(AdjusImages, Bub90Perc, pixel_bins, DoWeUseNormalized, weird_interface_region_bot, 1);
Draw.DrawDataOnImage(CropAdjusImages, Bub95Perc, pixel_bins, DoWeUseNormalized, weird_interface_region_bot - analysisarea_top, 1);
Draw.DrawDataOnImage(CropAdjusImages, Bub80Perc, pixel_bins, DoWeUseNormalized, weird_interface_region_bot - analysisarea_top, 1);
Draw.DrawDataOnImage(CropAdjusImages, Bub90Perc, pixel_bins, DoWeUseNormalized, weird_interface_region_bot - analysisarea_top, 1);

/////////////// Spike
double[] Spk5Perc = main.PercParse(RowAverageImageArraySpk, ArrayOfAverageSpk, ArrayOfAverageBub, "Backward", "Negative", 5, ThreshError);
double[] Spk20Perc = main.PercParse(RowAverageImageArraySpk, ArrayOfAverageSpk, ArrayOfAverageBub, "Backward", "Negative", 20, ThreshError);
double[] Spk10Perc = main.PercParse(RowAverageImageArraySpk, ArrayOfAverageSpk, ArrayOfAverageBub, "Backward", "Negative", 10, ThreshError);

Draw.DrawDataOnImage(AdjusImages, Spk5Perc, pixel_bins, DoWeUseNormalized, analysisarea_top, 0);
Draw.DrawDataOnImage(AdjusImages, Spk20Perc, pixel_bins, DoWeUseNormalized, analysisarea_top, 0);
Draw.DrawDataOnImage(AdjusImages, Spk10Perc, pixel_bins, DoWeUseNormalized, analysisarea_top, 0);
Draw.DrawDataOnImage(CropAdjusImages, Spk5Perc, pixel_bins, DoWeUseNormalized, 0, 0);
Draw.DrawDataOnImage(CropAdjusImages, Spk20Perc, pixel_bins, DoWeUseNormalized, 0, 0);
Draw.DrawDataOnImage(CropAdjusImages, Spk10Perc, pixel_bins, DoWeUseNormalized, 0, 0);

IJ.selectWindow(AdjusImages.getID());
////////////////////////// Slope Stuff
new WaitForUserDialog("Figure out Lower/Upper Percentages for Slope Intersec of Adjusted Images (lines are for 5/10/20/80/90/95)").show();
int ThreshLowerPerc = (int) IJ.getNumber("Lower Percentage For Slope Intersec", 80);
int ThreshUpperPerc = (int) IJ.getNumber("Upper Percentage For Slope Intersec", 20);

double[][] SpkIndexArray = MultiDimArray.MakeArrayOfIndices2D(0, weird_interface_region_top - analysisarea_top, endslice);
double[][] BubIndexArray = MultiDimArray.MakeArrayOfIndices2D(weird_interface_region_bot - analysisarea_top, analysisarea_bot - analysisarea_top, endslice);
double[][] IndexImageArrayAll = MultiDimArray.AppendArray2D(SpkIndexArray, BubIndexArray, 2);
double[][] RowAverageImageArrayAll = MultiDimArray.AppendArray2D(RowAverageImageArraySpk, RowAverageImageArrayBub, 2);
double[][] RowAverageSmoothImageArrayAll = main.SmoothArraySlicesVarAvg(RowAverageImageArrayAll, 7); //need to smooth all at once or will have zeros in middle from smoothed spk
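The smoothing step above applies a window of 7 samples, and the trailing comment warns that smoothing the spike and bubble half-profiles separately would inject spurious zeros at the seam. A minimal sketch of a centered moving average in the spirit of `main.SmoothArraySlicesVarAvg` is given below; the class name `SmoothSketch` and the end-clipping behavior are assumptions, since the actual routine is not reproduced in this appendix:

```java
// Hypothetical sketch of a centered moving-average smoother. The window
// is clipped at the array ends rather than zero-padded; zero-padding each
// half-profile separately is what produces the spurious zeros the comment
// above warns about when spike and bubble are smoothed independently.
public class SmoothSketch {
    public static double[] smooth(double[] data, int window) {
        int half = window / 2;
        double[] out = new double[data.length];
        for (int i = 0; i < data.length; i++) {
            int lo = Math.max(0, i - half);
            int hi = Math.min(data.length - 1, i + half);
            double sum = 0;
            for (int j = lo; j <= hi; j++) sum += data[j];
            out[i] = sum / (hi - lo + 1); // average over the clipped window
        }
        return out;
    }
}
```

Concatenating the two half-profiles before smoothing, as the code does, keeps the seam between spike and bubble inside a single valid window.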

double[] SlopeLowerIndex = main.PercParse(RowAverageImageArrayAll, ArrayOfAverageSpk, ArrayOfAverageBub, "Forward", "Negative", ThreshLowerPerc, ThreshError, weird_interface_region_bot - analysisarea_top - 1);
double[] SlopeUpperIndex = main.PercParse(RowAverageImageArrayAll, ArrayOfAverageSpk, ArrayOfAverageBub, "Backward", "Negative", ThreshUpperPerc, ThreshError, weird_interface_region_top - analysisarea_top + 1);
Object[] ReturnedFromValueFromSlopeBub = LinearFit.ValueFromSlope("BubForFits", RowAverageImageArrayAll, IndexImageArrayAll, SlopeUpperIndex, SlopeLowerIndex, ArrayOfAverageBub);
double[] BubSlope = (double[]) ReturnedFromValueFromSlopeBub[0];
double[][] CurveFittedIndexArrayBub = (double[][]) ReturnedFromValueFromSlopeBub[1];
double[][] CurveFittedAverageArrayBub = (double[][]) ReturnedFromValueFromSlopeBub[2];
ReturnedFromValueFromSlopeBub = null;
Draw.DrawCircleOnImage(AdjusImages, CurveFittedIndexArrayBub, CurveFittedAverageArrayBub, pixel_bins, DoWeUseNormalized, analysisarea_top, maxvalueForDrawThresh, 0, 1, 5);
Draw.DrawDataOnImage(AdjusImages, BubSlope, pixel_bins, DoWeUseNormalized, analysisarea_top, 1);
Draw.DrawCircleOnImage(CropAdjusImages, CurveFittedIndexArrayBub, CurveFittedAverageArrayBub, pixel_bins, DoWeUseNormalized, 0, maxvalueForDrawThresh, 0, 1, 5);
Draw.DrawDataOnImage(CropAdjusImages, BubSlope, pixel_bins, DoWeUseNormalized, 0, 1);

Object[] ReturnedFromValueFromSlopeSpk = LinearFit.ValueFromSlope("SpkForFits", RowAverageImageArrayAll, IndexImageArrayAll, SlopeUpperIndex, SlopeLowerIndex, ArrayOfAverageSpk);
double[] SpkSlope = (double[]) ReturnedFromValueFromSlopeSpk[0];
double[][] CurveFittedIndexArraySpk = (double[][]) ReturnedFromValueFromSlopeSpk[1];
double[][] CurveFittedAverageArraySpk = (double[][]) ReturnedFromValueFromSlopeSpk[2];
ReturnedFromValueFromSlopeSpk = null;
Draw.DrawCircleOnImage(AdjusImages, CurveFittedIndexArraySpk, CurveFittedAverageArraySpk, pixel_bins, DoWeUseNormalized, analysisarea_top, maxvalueForDrawThresh, 0, 1, 5);
Draw.DrawDataOnImage(AdjusImages, SpkSlope, pixel_bins, DoWeUseNormalized, analysisarea_top, 0);
Draw.DrawCircleOnImage(CropAdjusImages, CurveFittedIndexArraySpk, CurveFittedAverageArraySpk, pixel_bins, DoWeUseNormalized, 0, maxvalueForDrawThresh, 0, 1, 5);

Draw.DrawDataOnImage(CropAdjusImages, SpkSlope, pixel_bins, DoWeUseNormalized, 0, 0);
IJ.saveAs(AdjusImages, ".tif", Path + "AdjustedImages");

///////Data to be used in excel

double SpkOffset = weird_interface_region_top - analysisarea_top;
double[] SpkAmps5Perc = main.AddScalarToArray(Spk5Perc, SpkOffset, true);
double[] SpkAmps20Perc = main.AddScalarToArray(Spk20Perc, SpkOffset, true);
double[] SpkAmps10Perc = main.AddScalarToArray(Spk10Perc, SpkOffset, true);
double[] SpkAmpsSlope = main.AddScalarToArray(SpkSlope, SpkOffset, true);

double BubOffset = 0;
double[] BubAmps95Perc = main.AddScalarToArray(Bub95Perc, BubOffset, false);
double[] BubAmps80Perc = main.AddScalarToArray(Bub80Perc, BubOffset, false);
double[] BubAmps90Perc = main.AddScalarToArray(Bub90Perc, BubOffset, false);
double[] BubAmpsSlope = main.AddScalarToArray(BubSlope, BubOffset, false);
BubAmpsSlope = main.AddScalarToArray(BubAmpsSlope, -(weird_interface_region_top - analysisarea_top), false); //need to subtract distance of spike,
//since slope was calculated with both spike and bubble data at once
////////////////Amplitude for Threshold Images

//////////Amplitude For Edge Images
//////Bubble
double[][] BubArrayEdgeMax = MinMax.getMaxValue(RowAverageSmoothEdgeImageArrayBub);
double[] ArrayOfZeroEdgeBub = new double[BubArrayEdgeMax[0].length];
Arrays.fill(ArrayOfZeroEdgeBub, 0);

double[] EdgeBub30Perc = main.PercParse(RowAverageSmoothEdgeImageArrayBub, BubArrayEdgeMax[1], ArrayOfZeroEdgeBub, "Backward", "Negative", 30, 0.1);
Draw.DrawDataOnImage(EdgeImages, EdgeBub30Perc, 65536, DoWeUseNormalized, weird_interface_region_bot + analysisarea_top, 65536 - 1);
Draw.DrawDataOnImage(CropEdgeImages, EdgeBub30Perc, 65536, DoWeUseNormalized, weird_interface_region_bot - analysisarea_top + analysisarea_top, 65536 - 1);
///////////Spike
double[][] SpkArrayEdgeMax = MinMax.getMaxValue(RowAverageSmoothEdgeImageArraySpk);
double[] ArrayOfZeroEdgeSpk = new double[SpkArrayEdgeMax[0].length];
Arrays.fill(ArrayOfZeroEdgeSpk, 0);
double[] EdgeSpk30Perc = main.PercParse(RowAverageSmoothEdgeImageArraySpk, SpkArrayEdgeMax[1], ArrayOfZeroEdgeSpk, "Forward", "Positive", 30, 0.1);
Draw.DrawDataOnImage(EdgeImages, EdgeSpk30Perc, 65536, DoWeUseNormalized, 0, 65536 - 1);
Draw.DrawDataOnImage(CropEdgeImages, EdgeSpk30Perc, 65536, DoWeUseNormalized, 0 - analysisarea_top, 65536 - 1);
IJ.selectWindow(EdgeImages.getID());

////////////////////////// Slope Stuff for Edge
new WaitForUserDialog("Figure out Lower/Upper Percentages for Slope Intersec of Edge Images (line is for 30)").show();
int EdgeLowerPerc = (int) IJ.getNumber("Lower Percentage For Edge Slope Intersec (Middle is zero)", 20);
int EdgeUpperPerc = (int) IJ.getNumber("Upper Percentage For Edge Slope Intersec (Middle is zero)", 40);

double[] EdgeBubSlopeLowerIndex = main.PercParse(RowAverageSmoothEdgeImageArrayBub, BubArrayEdgeMax[1], ArrayOfZeroEdgeBub, "Backward", "Negative", EdgeLowerPerc, ThreshError);
double[] EdgeBubSlopeUpperIndex = main.PercParse(RowAverageSmoothEdgeImageArrayBub, BubArrayEdgeMax[1], ArrayOfZeroEdgeBub, "Backward", "Negative", EdgeUpperPerc, ThreshError);
Object[] ReturnedFromValueFromSlopeEdgeBub = LinearFit.ValueFromSlope("EdgeBubForFits", RowAverageSmoothEdgeImageArrayBub, EdgeBubSlopeLowerIndex, EdgeBubSlopeUpperIndex, ArrayOfZeroEdgeBub);
double[] EdgeBubSlope = (double[]) ReturnedFromValueFromSlopeEdgeBub[0];
double[][] CurveFittedIndexArrayEdgeBub = (double[][]) ReturnedFromValueFromSlopeEdgeBub[1];
double[][] CurveFittedAverageArrayEdgeBub = (double[][]) ReturnedFromValueFromSlopeEdgeBub[2];
ReturnedFromValueFromSlopeEdgeBub = null;
Draw.DrawCircleOnImage(EdgeImages, CurveFittedIndexArrayEdgeBub, CurveFittedAverageArrayEdgeBub, 65536, DoWeUseNormalized, weird_interface_region_bot, maxvalueForDrawEdge, 65536 - 1, 1, 4);
Draw.DrawDataOnImage(EdgeImages, EdgeBubSlope, 65536, DoWeUseNormalized, weird_interface_region_bot, 65536 - 1);

double[] EdgeSpkSlopeLowerIndex = main.PercParse(RowAverageSmoothEdgeImageArraySpk, SpkArrayEdgeMax[1], ArrayOfZeroEdgeSpk, "Forward", "Positive", EdgeUpperPerc, ThreshError);
double[] EdgeSpkSlopeUpperIndex = main.PercParse(RowAverageSmoothEdgeImageArraySpk, SpkArrayEdgeMax[1], ArrayOfZeroEdgeSpk, "Forward", "Positive", EdgeLowerPerc, ThreshError);
Object[] ReturnedFromValueFromSlopeEdgeSpk = LinearFit.ValueFromSlope("EdgeSpkForFits", RowAverageSmoothEdgeImageArraySpk, EdgeSpkSlopeLowerIndex, EdgeSpkSlopeUpperIndex, ArrayOfZeroEdgeSpk);
double[] EdgeSpkSlope = (double[]) ReturnedFromValueFromSlopeEdgeSpk[0];
double[][] CurveFittedIndexArrayEdgeSpk = (double[][]) ReturnedFromValueFromSlopeEdgeSpk[1];
double[][] CurveFittedAverageArrayEdgeSpk = (double[][]) ReturnedFromValueFromSlopeEdgeSpk[2];
ReturnedFromValueFromSlopeEdgeSpk = null;
Draw.DrawCircleOnImage(EdgeImages, CurveFittedIndexArrayEdgeSpk, CurveFittedAverageArrayEdgeSpk, 65536, DoWeUseNormalized, analysisarea_top, maxvalueForDrawEdge, 65536 - 1, 1, 4);
Draw.DrawDataOnImage(EdgeImages, EdgeSpkSlope, 65536, DoWeUseNormalized, analysisarea_top, 65536 - 1);
IJ.saveAs(EdgeImages, ".tif", Path + "EdgeDetectedImages");
////////Arrays shifted to be really for bubble / spike amps (need for excel)
double[] EdgeBubAmps30Perc = main.AddScalarToArray(EdgeBub30Perc, BubOffset, false);
double[] EdgeBubAmpsSlope = main.AddScalarToArray(EdgeBubSlope, BubOffset, false);
double[] EdgeSpkAmps30Perc = main.AddScalarToArray(EdgeSpk30Perc, SpkOffset, true);
double[] EdgeSpkAmpsSlope = main.AddScalarToArray(EdgeSpkSlope, SpkOffset, true);

//////////////////////////////// Edge Thresh
////////////////Into Excel File
Object[] ReturnedDataForExcel = main.DataForExcel(BubAmpsSlope, SpkAmpsSlope, BubAmps95Perc, SpkAmps5Perc, BubAmps80Perc, SpkAmps20Perc, BubAmps90Perc, SpkAmps10Perc, EdgeBubAmps30Perc, EdgeSpkAmps30Perc, EdgeBubAmpsSlope, EdgeSpkAmpsSlope); //Bubbles then Spikes
double[][] MainValues = (double[][]) ReturnedDataForExcel[0];
int NumOfRowsForExcel = (Integer) ReturnedDataForExcel[1];
int NumOfColumnsForExcel = (Integer) ReturnedDataForExcel[2];
ReturnedDataForExcel = null;
String[] HeaderInfo = main.HeaderForExcel("BubAmpsSlope", "SpkAmpsSlope", "BubAmps95Perc", "SpkAmps5Perc", "BubAmps80Perc", "SpkAmps20Perc", "BubAmps90Perc", "SpkAmps10Perc", "EdgeBubAmps30Perc", "EdgeSpkAmps30Perc", "EdgeBubAmpsSlope", "EdgeSpkAmpsSlope");
////////////////Into Excel File

///////////////Reopen and Manipulate Excel
Object[] CreateInitialExcelReturned = excel.CreateInitialExcelFile(Path + "ImageJ" + FolderName + ".xls", HeaderInfo, MainValues, NumOfRowsForExcel, NumOfColumnsForExcel, FolderName, UsePrevInputs, TemplateTypeChoice);
int[] AmpColIndex = (int[]) CreateInitialExcelReturned[0];
TemplateTypeChoice = (String) CreateInitialExcelReturned[1];
CreateInitialExcelReturned = null;
excel.ReopenExcelAndCreateColumns(Path + "ImageJ" + FolderName + ".xls", HeaderInfo, MainValues, NumOfRowsForExcel + 1, NumOfColumnsForExcel, startslice, endslice, interface_loc - analysisarea_top, tank_width, FolderName, AmpColIndex, weird_interface_region_bot - weird_interface_region_top, (String) ReadInputReturned[3], UsePrevInputs, (Double) ReadInputReturned[4], (Double) ReadInputReturned[5], (Double) ReadInputReturned[6], (Double) ReadInputReturned[7], (Double) ReadInputReturned[8]);
///////////////Reopen and Manipulate Excel

} ////End if for type of images: refrac, absorp

main.WriteInput((String) ReadInputReturned[0], (Boolean) ReadInputReturned[1], (Boolean) ReadInputReturned[2], (String) ReadInputReturned[3], (Double) ReadInputReturned[4], (Double) ReadInputReturned[5], (Double) ReadInputReturned[6], (Double) ReadInputReturned[7], (Double) ReadInputReturned[8], (Integer) ReadInputReturned[9], (String) ReadInputReturned[10], (Boolean) ReadInputReturned[11], (Boolean) ReadInputReturned[12], (String) ReadInputReturned[13]);

new WaitForUserDialog("When Ready to Exit Click Okay").show();
System.exit(0);
}

public ImagePlus GetStack(String ForImageSequence, String WhatToOpen, boolean UsePrevInputs, boolean Invert_Image, boolean Invert_LUT) {
    if (WhatToOpen.equals("Image Sequence")) {
        IJ.run("Image Sequence... ", ForImageSequence);
    } else {
        ImagePlus Images = IJ.openImage(ForImageSequence);
        Images.show();
    }
    GenericDialog gd = new GenericDialog("Do We invert?");
    //gd.create("Do We invert?");
    gd.addMessage("Should we invert the image and LUT (Do both if image says inverting LUT)");
    gd.addCheckbox("Invert Image", false);
    gd.addCheckbox("Invert LUT", false);
    if (UsePrevInputs == false) {
        gd.showDialog();
        Invert_Image = gd.getNextBoolean();
        Invert_LUT = gd.getNextBoolean();

    }
    ImagePlus imp = WindowManager.getCurrentImage();
    if (Invert_Image == true) {
        IJ.run(imp, "Invert", "stack");

    }
    if (Invert_LUT == true) {
        IJ.run(imp, "Invert LUT", "");

    }
    imp = WindowManager.getCurrentImage();

    return imp;
}

public Object[] MakeSubStack(ImagePlus MainStack, boolean UsePrevInputs, Integer pixel_bins, String DoWeUseNormalized, Boolean TakeNaturalLog, Boolean UseEmptyTank, String TypeOfExp) {
    Object[] MakeSubStackReturned = new Object[12];
    IJ.run("Clear Results");
    IJ.run("Set Scale ...", "distance=0 known=0 pixel=1 unit=pixel"); //set scale to pixels
    int width = MainStack.getWidth();
    int height = MainStack.getHeight();
    new WaitForUserDialog("Figure out Start (First Frame Jump " + "\n" + "or 0 g mark often on NonAvg (from accel time, (time / frame period) + 1))" + "\n" + "and End Slice and Remember for a sec").show();
    int startslice = (int) IJ.getNumber("Experiment Start Slice (First Frame Jump " + "\n" + "or 0 g mark often on NonAvg (from accel time, (time / frame period) + 1))", 30);
    int endslice = (int) IJ.getNumber("Last Slice", 180);

    IJ.run("Substack Maker", "slices=" + startslice + "-" + endslice);
    ImagePlus MainSubStack = WindowManager.getCurrentImage();
    MainSubStack.setTitle("main substack");

    if (UsePrevInputs == false) {
        pixel_bins = (int) IJ.getNumber("How many pixel bins are there", 1024);
    }
    GenericDialog gd = new GenericDialog("Use Normalized?");
    gd.addMessage("Should we use fully normalized data for analyzing? Often the bottom liquid has different absorption characteristics," + "\n" + "if this is true say no; even though the images are not normalized as a whole, they will be rescaled separately");
    String[] choice = new String[] {"Yes", "No", "Don't even Rescale"};
    gd.addChoice("Yes, No or Don't even Rescale", choice, "No");
    gd.addCheckbox("Take Natural Log after divide, but before fit range?", false);
    gd.addCheckbox("Use Empty tank to remove Tank inhomogeneities? (leave unchecked to use first image)", false);
    if (UsePrevInputs == false) {
        gd.showDialog();
        DoWeUseNormalized = gd.getNextChoice();
        TakeNaturalLog = gd.getNextBoolean();
        UseEmptyTank = gd.getNextBoolean();

    }
    JFrame frame = new JFrame("Frame");
    Object[] Possibilities = {"Refrac", "Absorp"};
    ImageIcon icon = new ImageIcon("images/middle.gif");
    if (UsePrevInputs == false) {
        TypeOfExp = (String) JOptionPane.showInputDialog(frame, "Kind of Experiments for Amp Extraction", "Kind of Exp", JOptionPane.PLAIN_MESSAGE, icon, Possibilities, "Refrac");
    }
    IJ.run("Select None");
    int norm_slice = 2;
    IJ.selectWindow("main substack");
    IJ.run("Substack Maker", "slices=" + norm_slice); //make duplicate slice for normalization
    ImagePlus AlignNormStack = WindowManager.getCurrentImage();
    AlignNormStack.setTitle("stack for alignment normalize");

    IJ.setTool("line");
    AlignNormStack.getProcessor().setMinAndMax(AlignNormStack.getStatistics().mean - 3 * AlignNormStack.getStatistics().stdDev, AlignNormStack.getStatistics().mean + 3 * AlignNormStack.getStatistics().stdDev);
    AlignNormStack.updateImage();
    new WaitForUserDialog("select interface location and tank width if (line length)").show();
    int tank_width = (int) AlignNormStack.getRoi().getBounds().getWidth();
    int interface_loc = (int) AlignNormStack.getRoi().getBounds().getY();
    IJ.setTool("rectangle");
    IJ.wait(100);
    new WaitForUserDialog("Select weird interface region to exclude from analysis").show();
    int weird_interface_region_top = (int) AlignNormStack.getRoi().getBounds().getY();
    int weird_interface_region_bot = weird_interface_region_top + (int) AlignNormStack.getRoi().getBounds().getHeight();

    MakeSubStackReturned[0] = MainSubStack;
    MakeSubStackReturned[1] = DoWeUseNormalized;
    MakeSubStackReturned[2] = pixel_bins;
    MakeSubStackReturned[3] = interface_loc;
    MakeSubStackReturned[4] = weird_interface_region_top;
    MakeSubStackReturned[5] = weird_interface_region_bot;
    MakeSubStackReturned[6] = startslice;
    MakeSubStackReturned[7] = endslice;
    MakeSubStackReturned[8] = tank_width;
    MakeSubStackReturned[9] = TypeOfExp;
    MakeSubStackReturned[10] = TakeNaturalLog;
    MakeSubStackReturned[11] = UseEmptyTank;
    return MakeSubStackReturned;
}

public Object[] AlignAndNormalize(ImagePlus MainSubStack, String DoWeUseNormalized, int pixel_bins, int interface_loc, String TypeOfExp, Boolean TakeNaturalLog, Boolean UseEmptyTank, boolean UsePrevInputs, String DoWeUseTurboReg) {
    Object[] AlignNormReturned = new Object[10];

    IJ.selectWindow("stack for alignment normalize");

    GenericDialog gd = new GenericDialog("Use TurboReg?");

    gd.addMessage("Do we do Image Registration (alignment) using TurboReg?");
    String[] choice = new String[] {"Yes", "No"};
    gd.addChoice("Yes or No", choice, "No");
    if (UsePrevInputs == false) {
        gd.showDialog();
        DoWeUseTurboReg = gd.getNextChoice();

    }
    IJ.selectWindow("stack for alignment normalize");
    ImagePlus AlignNormStack = WindowManager.getCurrentImage();

    if (DoWeUseTurboReg.equals("Yes")) {
        if (UseEmptyTank) { ///Must append empty tank image to end of main substack
            JFrame f = new JFrame("Select EmptyTank Image");
            JOptionPane.showMessageDialog(f, "Select Image to use for empty tank");
            ImagePlus EmptyTankImage = IJ.openImage();
            EmptyTankImage.show();
            EmptyTankImage.setTitle("EmptyTankImage");
            IJ.run(MainSubStack, "Concatenate ... ", "stack1=main substack stack2=EmptyTankImage title=main substack");
            MainSubStack = WindowManager.getCurrentImage();
        }
        IJ.selectWindow("main substack");
        new WaitForUserDialog("select area (mixing region) to be excluded from image registration (alignment) algorithm").show();
        Rectangle ExcludeTurboReg = (MainSubStack.getRoi().getBounds());
        IJ.selectWindow("stack for alignment normalize");
        AlignNormStack.setRoi(ExcludeTurboReg);
        IJ.run("Add Slice");
        AlignNormStack.getStack().getProcessor(2).setColor(0);
        IJ.run("Make Inverse");
        IJ.run("Set...", "value=100 slice");

IJ .selectWindow (”main substack”) ; IJ.run(”TurboReg ”); new WaitForUserDialog(”Now you will need to set ’main substack’ as the source ” + ” n” \ + ”and ’stack for alignment normalize’ as the target” + ” n” + ” and then click on Accurate and Rigid body and Batch,\ click on OK here when the process is finished”).show();

IJ.wait(100) ; AlignNormStack. getStack () . deleteLastSlice () ; IJ.wait(100) ; IJ .selectWindow (”Registered”) ; ImagePlus AlignedImages = WindowManager. getCurrentImage(); AlignedImages . setTitle(”aligned images”); else } IJ.wait(100){ ; IJ .selectWindow (”main substack”) ; IJ.run(”Duplicate ...”, ”title=aligned images duplicate stack”); IJ .selectWindow (”aligned images”); IJ . run(”32 bit”); − } IJ.wait(100) ; IJ .selectWindow (”aligned images”); ImagePlus AlignedImages = WindowManager. getCurrentImage(); AlignedImages . setTitle(”aligned images”); int imagewidth = AlignedImages .getWidth() ; int imageheight = AlignedImages.getHeight() ; IJ.run(”Clear Results”); IJ . makeRectangle (( int ) Math.round(imagewidth / 4), ( int ) Math.round(imageheight / 10), 2 ( int ) Math.round(imagewidth / 4), 8 ( int ) Math.round(imageheight / 10)); IJ .selectWindow∗ (”aligned images”); ∗ AlignedImages . getProcessor () .setMinAndMax(AlignedImages. getStatistics () .mean 3 AlignedImages . getStatistics () .stdDev − ∗ ,AlignedImages. getStatistics ().mean + 3 AlignedImages. getStatistics () . stdDev) ; ∗ AlignedImages . updateImage() ; new WaitForUserDialog(”Select area for measurement”).show( ); int analysisarea left = ( int ) (AlignedImages.getRoi() .getBounds() .getX()); int analysisarea top = ( int ) (AlignedImages .getRoi() .getBounds() .getY()); int analysisarea right = ( int ) (analysisarea left + AlignedImages.getRoi() .getBounds() . getWidth() ) ; int analysisarea bot = ( int ) (analysisarea top + AlignedImages .getRoi() .getBounds() . getHeight()); int analysisarea width = ( int ) (AlignedImages .getRoi() .getBounds() .getWidth()); int analysisarea height = ( int ) (AlignedImages.getRoi() .getBounds() .getHeight());

    StackStatistics stat = new StackStatistics(AlignedImages);
    double minval = stat.min;
    IJ.selectWindow("aligned images");
    IJ.run("Select None");

    IJ.run("Duplicate ...", "title=EdgeDetected images duplicate stack");
    IJ.selectWindow("EdgeDetected images");
    ImagePlus EdgeImages = WindowManager.getCurrentImage();
    IJ.run(EdgeImages, "Find Edges", "stack");
    EdgeImages.show();

    IJ.selectWindow("aligned images");
    ImageStatistics imgstat = AlignedImages.getStatistics();

    if (DoWeUseNormalized.equals("Don't even Rescale")) {
        IJ.run("32-bit");
        AlignedImages.getProcessor().setMinAndMax(0, pixel_bins - 1);
        IJ.wait(100);
        IJ.selectWindow("aligned images");
        IJ.run("Select None"); // Must always select all before duplicate; all will crop also.
        IJ.run("Duplicate ...", "title=normalized images duplicate stack");
        IJ.selectWindow("normalized images");
        ImagePlus NormImages = WindowManager.getCurrentImage();
    } else {
        IJ.selectWindow("aligned images");
        IJ.run("Select None");
        if (UseEmptyTank) {
            IJ.run("Substack Maker", "slices=" + AlignedImages.getNSlices() + "-" + AlignedImages.getNSlices());
            ImagePlus AlignedEmptyTank = WindowManager.getCurrentImage();
            MainSubStack.setTitle("AlignedEmptyTank");
            IJ.selectWindow("aligned images");
            AlignedImages.setSlice(AlignedImages.getNSlices());
            IJ.run(AlignedImages, "Delete Slice", "");
            ImageCalculator ic = new ImageCalculator();
            ImagePlus NormImages = ic.run("divide create stack 32-bit equalize", AlignedImages, AlignedEmptyTank);
            NormImages.setTitle("normalized images");
            NormImages.show();
            IJ.run("Select None");
            NormImages.setRoi(analysisarea_left, analysisarea_top, analysisarea_width, analysisarea_height);
            NormImages.getProcessor().setMinAndMax(
                    NormImages.getStatistics().mean - 3 * NormImages.getStatistics().stdDev,
                    NormImages.getStatistics().mean + 3 * NormImages.getStatistics().stdDev);
            IJ.run("Select None");
            new WaitForUserDialog("Select area for average in lighter divide").show();
            imgstat = NormImages.getStatistics();
            double LighterDivide = imgstat.mean;
            IJ.run("Select None");
            IJ.run("Divide...", "value=" + LighterDivide + " stack");
            NormImages.setRoi(analysisarea_left, analysisarea_top, analysisarea_width, analysisarea_height);
            NormImages.getProcessor().setMinAndMax(
                    NormImages.getStatistics().mean - 3 * NormImages.getStatistics().stdDev,
                    NormImages.getStatistics().mean + 3 * NormImages.getStatistics().stdDev);
            IJ.run("Select None");

            if (TakeNaturalLog) { // For Beer law corrected images
                IJ.selectWindow("normalized images");
                IJ.run("Select None");
                IJ.run(NormImages, "Log", "stack");
            }
            IJ.selectWindow("normalized images");
            NormImages.setRoi(analysisarea_left, analysisarea_top, analysisarea_width, analysisarea_height);
            NormImages.getProcessor().setMinAndMax(
                    NormImages.getStatistics().mean - 3 * NormImages.getStatistics().stdDev,
                    NormImages.getStatistics().mean + 3 * NormImages.getStatistics().stdDev);
            IJ.run("Select None");
        } else {
            ////////////// Divide to get rid of tank inhomogeneities
            // ImagePlus ForNormalizeFromRegistered = new ImagePlus();
            IJ.selectWindow("aligned images");
            IJ.run("Select None");
            ImagePlus ForNormalizeFromRegistered = new Duplicator().run(AlignedImages, 1, 1);
            ForNormalizeFromRegistered.show();
            ForNormalizeFromRegistered.setTitle("ForNormalizeFromRegistered");
            ImageCalculator ic = new ImageCalculator();
            ImagePlus NormImages = ic.run("divide create stack 32-bit", AlignedImages, ForNormalizeFromRegistered);
            NormImages.setTitle("normalized images");
            NormImages.show();

            IJ.selectWindow("ForNormalizeFromRegistered");

            new WaitForUserDialog("Select area for average in darker liquid (close to interface) to rescale").show();
            imgstat = ForNormalizeFromRegistered.getStatistics();
            double DarkerMultiplyBack = imgstat.mean;
            new WaitForUserDialog("Select area for average in lighter liquid (close to interface) to rescale").show();
            imgstat = ForNormalizeFromRegistered.getStatistics();
            double LighterMultiplyBack = imgstat.mean;

            IJ.selectWindow("normalized images");
            //// Multiply back by average top and bottom fluid without tank inhomogeneities present
            IJ.makeRectangle(0, interface_loc + 1, imagewidth, (imageheight - (interface_loc + 1)));
            IJ.run("Multiply...", "value=" + DarkerMultiplyBack + " stack");
            IJ.makeRectangle(0, 0, imagewidth, interface_loc);
            IJ.run("Multiply...", "value=" + LighterMultiplyBack + " stack");
            IJ.run("Select None");
            IJ.run("Duplicate ...", "title=RescaledImages duplicate stack");

            if (TakeNaturalLog) { // For Beer law corrected images
                IJ.selectWindow("normalized images");
                IJ.run("Select None");
                IJ.run("Divide...", "value=" + LighterMultiplyBack + " stack"); // Divide by I_0 taken as lighter liquid
                IJ.run(NormImages, "Log", "stack");
                IJ.selectWindow("aligned images");
                IJ.run("Duplicate ...", "title=LogNoTankCorrection duplicate stack");
                IJ.selectWindow("LogNoTankCorrection");
                ImagePlus LogNoTankCorrection = WindowManager.getCurrentImage();
                LogNoTankCorrection.setRoi(analysisarea_left, analysisarea_top, analysisarea_width, analysisarea_height);
                LogNoTankCorrection.getProcessor().setMinAndMax(
                        LogNoTankCorrection.getStatistics().mean - 3 * LogNoTankCorrection.getStatistics().stdDev,
                        LogNoTankCorrection.getStatistics().mean + 3 * LogNoTankCorrection.getStatistics().stdDev);
                LogNoTankCorrection.updateImage();
                IJ.run("Select None");
                IJ.run("Divide...", "value=" + LighterMultiplyBack + " stack"); // Divide by I_0 taken as lighter liquid
                IJ.run(LogNoTankCorrection, "Log", "stack");
                NormImages.setRoi(analysisarea_left, analysisarea_top, analysisarea_width, analysisarea_height);
                NormImages.getProcessor().setMinAndMax(
                        NormImages.getStatistics().mean - 3 * NormImages.getStatistics().stdDev,
                        NormImages.getStatistics().mean + 3 * NormImages.getStatistics().stdDev);
                NormImages.updateImage();
                IJ.run("Select None");
            }
            IJ.selectWindow("normalized images");
            NormImages.setRoi(analysisarea_left, analysisarea_top, analysisarea_width, analysisarea_height);
            NormImages.getProcessor().setMinAndMax(
                    NormImages.getStatistics().mean - 3 * NormImages.getStatistics().stdDev,
                    NormImages.getStatistics().mean + 3 * NormImages.getStatistics().stdDev);
            IJ.run("Select None");

        }
    }

    if (DoWeUseNormalized.equals("Yes")) {
        IJ.selectWindow("normalized images");
        ImagePlus NormImages = WindowManager.getCurrentImage();
        new WaitForUserDialog("Select area for lowest in darker liquid (close to interface) to rescale (subtract) to zero").show();
        imgstat = NormImages.getStatistics();
        double SubtractedValue = imgstat.mean;
        IJ.run("Select None");
        IJ.run("Subtract...", "value=" + SubtractedValue + " stack"); // Rescale to zero with darker fluid

        new WaitForUserDialog("Select area in lighter liquid (close to interface) for average to Normalize to").show();
        imgstat = NormImages.getStatistics();
        double NormalizeValue = imgstat.mean;
        IJ.run("Select None");
        IJ.run("Divide...", "value=" + NormalizeValue + " stack"); // normalize dark value with lowest value of all images
        NormImages.getProcessor().setMinAndMax(0.000000000, 1.000000000);

        // NormImages.setTitle("normalized images");
        NormImages.show(); // Have to make visible for subtract step
    }

    //////// rename images to adjusted images
    IJ.wait(100);
    IJ.selectWindow("normalized images");
    ImagePlus NormImages = WindowManager.getCurrentImage();
    NormImages.setTitle("adjusted images");
    IJ.run("Select None");


    IJ.selectWindow("adjusted images");
    ImagePlus AdjusImages = WindowManager.getCurrentImage();
    IJ.run("Duplicate ...", "title=cropped adjusted images duplicate stack");
    IJ.selectWindow("cropped adjusted images");
    ImagePlus CropAdjusImages = WindowManager.getCurrentImage();
    CropAdjusImages.setRoi(analysisarea_left, analysisarea_top, analysisarea_width, analysisarea_height);
    IJ.run(CropAdjusImages, "Crop", "");
    CropAdjusImages.show();

    IJ.selectWindow("EdgeDetected images");
    IJ.run("Duplicate ...", "title=cropped EdgeDetected images duplicate stack");
    IJ.selectWindow("cropped EdgeDetected images");
    ImagePlus CropEdgeImages = WindowManager.getCurrentImage();
    CropEdgeImages.setRoi(analysisarea_left, analysisarea_top, analysisarea_width, analysisarea_height);
    IJ.run(CropEdgeImages, "Crop", "");
    IJ.run("32-bit", "");
    CropEdgeImages.show();

    IJ.wait(100);
    IJ.selectWindow("cropped adjusted images");
    CropAdjusImages.getProcessor().setMinAndMax(
            CropAdjusImages.getStatistics().mean - 3 * CropAdjusImages.getStatistics().stdDev,
            CropAdjusImages.getStatistics().mean + 3 * CropAdjusImages.getStatistics().stdDev);
    CropAdjusImages.updateImage();
    new WaitForUserDialog("select lower box for average value on cropped Adjusted Images").show();
    imgstat = CropAdjusImages.getStatistics();
    double average_bub = imgstat.mean;
    IJ.wait(100); /// kept getting duplicate call; this seems to work, putting a delay between two same commands
    new WaitForUserDialog("select upper box for average value on cropped Adjusted Images").show();
    imgstat = CropAdjusImages.getStatistics();
    double average_spike = imgstat.mean;

    AlignNormReturned[0] = AdjusImages;
    AlignNormReturned[1] = CropAdjusImages;
    AlignNormReturned[2] = average_bub;
    AlignNormReturned[3] = average_spike;
    AlignNormReturned[4] = EdgeImages;
    AlignNormReturned[5] = CropEdgeImages;
    AlignNormReturned[6] = analysisarea_top;
    AlignNormReturned[7] = analysisarea_bot;
    AlignNormReturned[8] = analysisarea_left;
    AlignNormReturned[9] = analysisarea_right;
    return AlignNormReturned;
}

public float[][][] StackToArray(ImagePlus imp) {
    int dimension = imp.getWidth() * imp.getHeight();
    int ImageWidth = imp.getWidth();
    int ImageHeight = imp.getHeight();
    int NumOfSlices = imp.getStackSize();
    float[] pixels = new float[dimension];
    float[][][] ImageArray = new float[ImageWidth][ImageHeight][NumOfSlices];

    ImageStack stack = imp.getStack();
    for (int k = 0; k < (NumOfSlices); k++) {
        pixels = (float[]) stack.getPixels(k + 1);
        for (int i = 0; i <= imp.getWidth() - 1; i++) {
            for (int j = 0; j <= imp.getHeight() - 1; j++) {
                if ((ImageArray[i][j][k] != Float.POSITIVE_INFINITY)
                        || (ImageArray[i][j][k] != Float.NEGATIVE_INFINITY)
                        || (ImageArray[i][j][k] != Float.NaN)) {
                    ImageArray[i][j][k] = pixels[ImageWidth * j + i];
                }
            }
        }
    }
    return ImageArray;
}

public double[][] RowAverageTheImagesArray(float[][][] ImagesArray, int bottom, int top, boolean TimesRowMax) {
    int NumOfColumns = ImagesArray.length; // index in horizontal
    System.out.println(NumOfColumns);
    int NumOfRows = ImagesArray[0].length; /// index in vertical
    System.out.println(NumOfRows);
    int NumOfSlices = ImagesArray[0][0].length; /// index of slice
    System.out.println(NumOfSlices);
    double[][] RowAverageImagesArray = new double[bottom - top + 1][NumOfSlices];
    OverloadMaxAndMin MinMax = new OverloadMaxAndMin();
    double[][][] RowMax = new double[2][bottom - top + 1][NumOfSlices];
    if (TimesRowMax == true) {
        RowMax = MinMax.getMaxValue(ImagesArray);
    }
    double max = 1; ////// If not multiplied by max of row, just set to 1
    for (int k = 0; k < (NumOfSlices); k++) {
        for (int j = top; j <= bottom; j++) {
            double sum = 0;
            for (int i = 0; i <= NumOfColumns - 1; i++) {
                sum = ImagesArray[i][j][k] + sum;
            }
            if (TimesRowMax == true) {
                max = RowMax[1][j][k];
            }
            double AverageRow = sum / NumOfColumns;
            RowAverageImagesArray[j - top][k] = AverageRow * max;
        }
    }
    return RowAverageImagesArray;
}

public double[][] SmoothArraySlicesVarAvg(double[][] RowAverageImagesArray, int NumToAvg) {
    int ArrayColumns = RowAverageImagesArray.length; /// index in vertical
    System.out.println(ArrayColumns);
    int ArrayDepth = RowAverageImagesArray[0].length; /// index of slice
    System.out.println(ArrayDepth);
    double[][] SmoothArraySlices = new double[ArrayColumns][ArrayDepth];

    for (int k = 0; k <= (ArrayDepth - 1); k++) {
        for (int j = 0 + (int) Math.floor(NumToAvg / 2.0); j < ArrayColumns - (int) Math.floor(NumToAvg / 2.0); j++) {
            double sum = 0;
            for (int count = -(int) Math.floor(NumToAvg / 2.0); count <= (int) Math.floor(NumToAvg / 2.0); count++) {
                sum = sum + RowAverageImagesArray[j + count][k];
            }
            if (NumToAvg % 2 == 0) {
                sum = sum - RowAverageImagesArray[j][k]; // if even num to avg, need to not have center value in sum
            }
            SmoothArraySlices[j][k] = sum / NumToAvg;
        }
    }
    return SmoothArraySlices;
}

public double[] PercParse(double[][] SmoothArraySlices, double[] MinForPerc, double[] MaxForPerc,
        String Direction, String Slope, double PercDesired, double ErrorThresh) {
    int NumOfRows = SmoothArraySlices.length; /// index in vertical
    System.out.println(NumOfRows);
    int NumOfSlices = SmoothArraySlices[0].length; /// index of slice
    System.out.println(NumOfSlices);
    double[] PercAmp = new double[NumOfSlices];

    for (int k = 0; k < (NumOfSlices); k++) {
        PercAmp[k] = 0;
        boolean PercFlag = false;
        if (Direction.equals("Backward") && Slope.equals("Negative")) {
            for (int j = (NumOfRows - 2); j >= 0; j--) {
                double PercValue = (MinForPerc[k] + (MaxForPerc[k] - MinForPerc[k]) * PercDesired / 100);
                if (((SmoothArraySlices[j + 1][k] < (PercValue + PercValue * ErrorThresh))
                        && (SmoothArraySlices[j][k] >= (PercValue - PercValue * ErrorThresh)))
                        && (j <= NumOfRows - 3) && (PercFlag != true)) {
                    PercAmp[k] = 1 / (SmoothArraySlices[j + 1][k] - SmoothArraySlices[j][k])
                            * (PercValue - SmoothArraySlices[j][k]) + j; //// interpolate between two points
                    PercFlag = true;
                }
            }
        } else if (Direction.equals("Backward") && Slope.equals("Positive")) {
            for (int j = (NumOfRows - 2); j >= 0; j--) {
                double PercValue = (MinForPerc[k] + (MaxForPerc[k] - MinForPerc[k]) * PercDesired / 100);
                if (((SmoothArraySlices[j + 1][k] > (PercValue - PercValue * ErrorThresh))
                        && (SmoothArraySlices[j][k] <= (PercValue + PercValue * ErrorThresh)))
                        && (j <= NumOfRows - 3) && (PercFlag != true)) {
                    PercAmp[k] = 1 / (SmoothArraySlices[j + 1][k] - SmoothArraySlices[j][k])
                            * (PercValue - SmoothArraySlices[j][k]) + j; //// interpolate between two points
                    PercFlag = true;
                }
            }
        } else if (Direction.equals("Forward") && Slope.equals("Positive")) {
            for (int j = 1; j < (NumOfRows); j++) {
                double PercValue = (MinForPerc[k] + (MaxForPerc[k] - MinForPerc[k]) * PercDesired / 100);
                if (((SmoothArraySlices[j][k] > (PercValue - PercValue * ErrorThresh))
                        && (SmoothArraySlices[j - 1][k] <= (PercValue + PercValue * ErrorThresh)))
                        && (j >= 1) && (PercFlag != true)) {
                    PercAmp[k] = 1 / (SmoothArraySlices[j][k] - SmoothArraySlices[j - 1][k])
                            * (PercValue - SmoothArraySlices[j][k]) + j; //// interpolate between two points
                    PercFlag = true;
                }
            }
        } else if (Direction.equals("Forward") && Slope.equals("Negative")) {
            for (int j = 1; j < (NumOfRows); j++) {
                double PercValue = (MinForPerc[k] + (MaxForPerc[k] - MinForPerc[k]) * PercDesired / 100);
                if (((SmoothArraySlices[j][k] < (PercValue + PercValue * ErrorThresh))
                        && (SmoothArraySlices[j - 1][k] >= (PercValue - PercValue * ErrorThresh)))
                        && (j >= 1) && (PercFlag != true)) {
                    PercAmp[k] = 1 / (SmoothArraySlices[j][k] - SmoothArraySlices[j - 1][k])
                            * (PercValue - SmoothArraySlices[j][k]) + j; //// interpolate between two points
                    PercFlag = true;
                }
            }
        } else {
            System.out.println("Error with Perc Parse Direction");
        }
    }
    double[] Amps = new double[NumOfSlices];
    Amps = PercAmp;
    return Amps;
}

public double[] PercParse(double[][] SmoothArraySlices, double[] MinForPerc, double[] MaxForPerc,
        String Direction, String Slope, double PercDesired, double ErrorThresh, int StartIndex) {
    int NumOfRows = SmoothArraySlices.length; /// index in vertical
    System.out.println(NumOfRows);
    int NumOfSlices = SmoothArraySlices[0].length; /// index of slice
    System.out.println(NumOfSlices);
    double[] PercAmp = new double[NumOfSlices];

    for (int k = 0; k < (NumOfSlices); k++) {
        PercAmp[k] = 0;
        boolean PercFlag = false;
        if (Direction.equals("Backward") && Slope.equals("Negative")) {
            for (int j = (StartIndex); j >= 0; j--) {
                double PercValue = (MinForPerc[k] + (MaxForPerc[k] - MinForPerc[k]) * PercDesired / 100);
                if (((SmoothArraySlices[j + 1][k] < (PercValue + PercValue * ErrorThresh))
                        && (SmoothArraySlices[j][k] >= (PercValue - PercValue * ErrorThresh)))
                        && (j <= NumOfRows - 3) && (PercFlag != true)) {
                    PercAmp[k] = 1 / (SmoothArraySlices[j + 1][k] - SmoothArraySlices[j][k])
                            * (PercValue - SmoothArraySlices[j][k]) + j; //// interpolate between two points
                    PercFlag = true;
                }
            }
        } else if (Direction.equals("Backward") && Slope.equals("Positive")) {
            for (int j = (StartIndex); j >= 0; j--) {
                double PercValue = (MinForPerc[k] + (MaxForPerc[k] - MinForPerc[k]) * PercDesired / 100);
                if (((SmoothArraySlices[j + 1][k] > (PercValue - PercValue * ErrorThresh))
                        && (SmoothArraySlices[j][k] <= (PercValue + PercValue * ErrorThresh)))
                        && (j <= NumOfRows - 3) && (PercFlag != true)) {
                    PercAmp[k] = 1 / (SmoothArraySlices[j + 1][k] - SmoothArraySlices[j][k])
                            * (PercValue - SmoothArraySlices[j][k]) + j; //// interpolate between two points
                    PercFlag = true;
                }
            }
        } else if (Direction.equals("Forward") && Slope.equals("Positive")) {
            for (int j = StartIndex; j < (NumOfRows); j++) {
                double PercValue = (MinForPerc[k] + (MaxForPerc[k] - MinForPerc[k]) * PercDesired / 100);
                if (((SmoothArraySlices[j][k] > (PercValue - PercValue * ErrorThresh))
                        && (SmoothArraySlices[j - 1][k] <= (PercValue + PercValue * ErrorThresh)))
                        && (j >= 1) && (PercFlag != true)) {
                    PercAmp[k] = 1 / (SmoothArraySlices[j][k] - SmoothArraySlices[j - 1][k])
                            * (PercValue - SmoothArraySlices[j][k]) + j; //// interpolate between two points
                    PercFlag = true;
                }
            }
        } else if (Direction.equals("Forward") && Slope.equals("Negative")) {
            for (int j = StartIndex; j < (NumOfRows); j++) {
                double PercValue = (MinForPerc[k] + (MaxForPerc[k] - MinForPerc[k]) * PercDesired / 100);
                if (((SmoothArraySlices[j][k] < (PercValue + PercValue * ErrorThresh))
                        && (SmoothArraySlices[j - 1][k] >= (PercValue - PercValue * ErrorThresh)))
                        && (j >= 1) && (PercFlag != true)) {
                    PercAmp[k] = 1 / (SmoothArraySlices[j][k] - SmoothArraySlices[j - 1][k])
                            * (PercValue - SmoothArraySlices[j][k]) + j; //// interpolate between two points
                    PercFlag = true;
                }
            }
        } else {
            System.out.println("Error with Perc Parse Direction");
        }
    }

    double[] Amps = new double[NumOfSlices];
    Amps = PercAmp;
    return Amps;
}

public Object[] DataForExcel(double[]... ColumnsForExcel) {
    int NumOfColumnsForExcel = ColumnsForExcel.length;
    int NumOfRowsForExcel = ColumnsForExcel[0].length;
    double[][] MainValues = new double[NumOfRowsForExcel][NumOfColumnsForExcel];
    for (int i = 0; i < (NumOfRowsForExcel); i++) { // Basically transpose
        for (int j = 0; j < (NumOfColumnsForExcel); j++) {
            MainValues[i][j] = ColumnsForExcel[j][i];
        }
    }
    Object[] ReturnedDataForExcel = new Object[3];
    ReturnedDataForExcel[0] = MainValues;
    ReturnedDataForExcel[1] = NumOfRowsForExcel;
    ReturnedDataForExcel[2] = NumOfColumnsForExcel;
    return ReturnedDataForExcel;
}

public Object[] DataForExcel2D(double[][] ColumnsForExcel2D) {
    int NumOfColumnsForExcel = ColumnsForExcel2D.length;
    int NumOfRowsForExcel = ColumnsForExcel2D[0].length;
    double[][] MainValues = new double[NumOfRowsForExcel][NumOfColumnsForExcel];
    for (int i = 0; i < (NumOfRowsForExcel); i++) { // Basically transpose
        for (int j = 0; j < (NumOfColumnsForExcel); j++) {
            MainValues[i][j] = ColumnsForExcel2D[j][i];
        }
    }
    Object[] ReturnedDataForExcel2D = new Object[3];
    ReturnedDataForExcel2D[0] = MainValues;
    ReturnedDataForExcel2D[1] = NumOfRowsForExcel;
    ReturnedDataForExcel2D[2] = NumOfColumnsForExcel;
    return ReturnedDataForExcel2D;
}

public String[] HeaderForExcel(String... ColumnsForExcel) {
    int NumOfColumnsForExcel = ColumnsForExcel.length;
    String[] HeaderForExcel = new String[NumOfColumnsForExcel];
    for (int i = 0; i < NumOfColumnsForExcel; i++) {
        HeaderForExcel[i] = ColumnsForExcel[i];
    }
    return HeaderForExcel;
}

public double[] AddScalarToArray(double[] array, double scalar, boolean NegArray) {
    double[] result = new double[array.length];
    for (int i = 0; i < array.length; i++) {
        if (NegArray == true) {
            result[i] = -1.0 * array[i] + scalar;
        } else {
            result[i] = array[i] + scalar;

        }
    }
    return result;
}

public Object[] ReadInput() throws FileNotFoundException {
    Object[] ReadInputReturned = new Object[14];
    try {
        FileInputStream f = new FileInputStream("Variables.txt");
        Scanner input = new Scanner(f);
        String DoWeUseTurboReg = input.next();
        boolean Invert_Image = input.nextBoolean();
        boolean Invert_LUT = input.nextBoolean();
        String TemplateTypeChoice = input.next();
        double tank_widthmm = input.nextDouble();
        double TimeBetwFrames = input.nextDouble();
        double Atwood = input.nextDouble();
        double Accel = input.nextDouble();
        double AvgKinemVisc = input.nextDouble();
        Integer pixel_bins = input.nextInt();
        String DoWeUseNormalized = input.next();
        boolean TakeNaturalLog = input.nextBoolean();
        boolean UseEmptyTank = input.nextBoolean();
        String TypeOfExp = input.next();
        f.close();

        ReadInputReturned[0] = DoWeUseTurboReg;
        ReadInputReturned[1] = Invert_Image;
        ReadInputReturned[2] = Invert_LUT;
        ReadInputReturned[3] = TemplateTypeChoice;
        ReadInputReturned[4] = tank_widthmm;
        ReadInputReturned[5] = TimeBetwFrames;
        ReadInputReturned[6] = Atwood;
        ReadInputReturned[7] = Accel;
        ReadInputReturned[8] = AvgKinemVisc;
        ReadInputReturned[9] = pixel_bins;
        ReadInputReturned[10] = DoWeUseNormalized;
        ReadInputReturned[11] = TakeNaturalLog;
        ReadInputReturned[12] = UseEmptyTank;
        ReadInputReturned[13] = TypeOfExp;
        return ReadInputReturned;
    } catch (Exception FileNotFoundException) {
        System.out.println("File for input variables not found");
        ReadInputReturned[0] = "null";
        ReadInputReturned[1] = false;
        ReadInputReturned[2] = false;
        ReadInputReturned[3] = "null";
        ReadInputReturned[4] = 76.0;
        ReadInputReturned[5] = 5.0;
        ReadInputReturned[6] = 0.481;
        ReadInputReturned[7] = 11.77;
        ReadInputReturned[8] = 1E-6;
        ReadInputReturned[9] = 1024;
        ReadInputReturned[10] = "null";
        ReadInputReturned[11] = false;
        ReadInputReturned[12] = false;
        ReadInputReturned[13] = "null";
        return ReadInputReturned;
    }
}

public void WriteInput(String DoWeUseTurboReg, boolean Invert_Image, Boolean Invert_LUT,
        String TemplateTypeChoice, Double tank_widthmm, Double Atwood, Double TimeBetwFrames,
        Double Accel, Double AvgKinemVisc, Integer pixel_bins, String DoWeUseNormalized,
        Boolean TakeNaturalLog, Boolean UseEmptyTank, String TypeOfExp) {
    try {
        File file = new File("Variables.txt");
        FileWriter fw = new FileWriter(file);
        BufferedWriter bw = new BufferedWriter(fw);
        PrintWriter pw = new PrintWriter(bw);

        if (!file.exists()) {
            file.createNewFile();
        }
        pw.print(DoWeUseTurboReg); pw.println();
        pw.print(Invert_Image); pw.println();
        pw.print(Invert_LUT); pw.println();
        pw.print(TemplateTypeChoice); pw.println();
        pw.print(tank_widthmm); pw.println();
        pw.print(TimeBetwFrames); pw.println();
        pw.print(Atwood); pw.println();
        pw.print(Accel); pw.println();
        pw.print(AvgKinemVisc); pw.println();
        pw.print(pixel_bins); pw.println();
        pw.print(DoWeUseNormalized); pw.println();
        pw.print(TakeNaturalLog); pw.println();
        pw.print(UseEmptyTank); pw.println();
        pw.print(TypeOfExp); pw.println();
        pw.close();
        bw.close();
        fw.close();

    } catch (IOException e) {
        System.out.println("Problem Writing to File");
    }
}
}

class MultiDimArray {
    public double[] Make1DArrayCopy(double[] Array, int LowerIdx, int UpperIdx) {
        double[] ArrayCopy = new double[UpperIdx - LowerIdx + 1];
        for (int i = LowerIdx; i <= UpperIdx; i++) {
            ArrayCopy[i - LowerIdx] = Array[i];
        }
        return ArrayCopy;
    }

    public double[][] Make2DArrayCopy(double[][] Array, int LowerIdxFirst, int UpperIdxFirst,
            int LowerIdxSecond, int UpperIdxSecond) {
        double[][] ArrayCopy = new double[UpperIdxFirst - LowerIdxFirst + 1][UpperIdxSecond - LowerIdxSecond + 1];
        for (int i = LowerIdxFirst; i <= UpperIdxFirst; i++) {
            for (int j = LowerIdxSecond; j <= UpperIdxSecond; j++) {
                ArrayCopy[i - LowerIdxFirst][j - LowerIdxSecond] = Array[i][j];
            }
        }
        return ArrayCopy;
    }

    public double[] Make2DArrayCopy(double[][] Array, boolean JustOne, int IdxFirst,
            int LowerIdxSecond, int UpperIdxSecond) {
        double[] ArrayCopy = new double[UpperIdxSecond - LowerIdxSecond + 1];
        if (JustOne == true) {
            for (int j = LowerIdxSecond; j <= UpperIdxSecond; j++) {
                ArrayCopy[j - LowerIdxSecond] = Array[IdxFirst][j];
            }
            return ArrayCopy;
        } else {
            return ArrayCopy;
        }
    }

    public double[] Make2DArrayCopy(double[][] Array, int LowerIdxFirst, int UpperIdxFirst,
            boolean JustOne, int IdxSecond) {
        if (UpperIdxFirst - LowerIdxFirst > 0) {
            double[] ArrayCopy = new double[UpperIdxFirst - LowerIdxFirst + 1];
            if (JustOne == true) {
                for (int i = LowerIdxFirst; i <= UpperIdxFirst; i++) {
                    ArrayCopy[i - LowerIdxFirst] = Array[i][IdxSecond];
                }
                return ArrayCopy;
            } else {
                return ArrayCopy;
            }
        } else {
            System.out.println("Lower Index Larger than Upper");
            double[] ArrayCopy = new double[1];
            return ArrayCopy;
        }
    }

    public double[][] Make2DArrayCopy(double[][] Array) {
        double[][] ArrayCopy = new double[Array.length][Array[0].length];
        for (int i = 0; i < Array.length; i++) {
            for (int j = 0; j < Array[0].length; j++) {
                ArrayCopy[i][j] = Array[i][j];
            }
        }
        return ArrayCopy;
    }

    public double[][][] Make3DArrayCopy(double[][][] Array, int LowerIdxFirst, int UpperIdxFirst,
            int LowerIdxSecond, int UpperIdxSecond, int LowerIdxThird, int UpperIdxThird) {
        double[][][] ArrayCopy = new double[UpperIdxFirst - LowerIdxFirst + 1][UpperIdxSecond - LowerIdxSecond + 1][UpperIdxThird - LowerIdxThird + 1];
        for (int i = LowerIdxFirst; i <= UpperIdxFirst; i++) {
            for (int j = LowerIdxSecond; j <= UpperIdxSecond; j++) {
                for (int k = LowerIdxThird; k <= UpperIdxThird; k++) {
                    ArrayCopy[i - LowerIdxFirst][j - LowerIdxSecond][k - LowerIdxThird] = Array[i][j][k];
                }
            }
        }
        return ArrayCopy;
    }

    public double[][][] Make3DArrayCopy(double[][][] Array) {
        double[][][] ArrayCopy = new double[Array.length][Array[0].length][Array[0][0].length];
        for (int i = 0; i < Array.length; i++) {
            for (int j = 0; j < Array[0].length; j++) {
                for (int k = 0; k < Array[0][0].length; k++) {
                    ArrayCopy[i][j][k] = Array[i][j][k];

                }
            }
        }
        return ArrayCopy;
    }

    public double[][] AppendArray2D(double[][] Array1, double[][] Array2, int DimCommon) {
        if (DimCommon == 2) {
            double[][] NewArray = new double[Array1.length + Array2.length][Array1[0].length];
            for (int i = 0; i < Array1.length; i++) {
                for (int j = 0; j < Array1[0].length; j++) {
                    NewArray[i][j] = Array1[i][j];
                }
            }
            for (int i = 0; i < Array2.length; i++) {
                for (int j = 0; j < Array2[0].length; j++) {
                    NewArray[i + Array1.length][j] = Array2[i][j];
                }
            }
            return NewArray;
        } else {
            double[][] NewArray = new double[Array1.length][Array1[0].length + Array2[0].length];
            for (int i = 0; i < Array1.length; i++) {
                for (int j = 0; j < Array1[0].length; j++) {
                    NewArray[i][j] = Array1[i][j];
                }
            }
            for (int i = 0; i < Array2.length; i++) {
                for (int j = 0; j < Array2[0].length; j++) {
                    NewArray[i][j + Array1[0].length] = Array2[i][j];
                }
            }
            return NewArray;
        }
    }

public double[] MakeArrayOfIndices(int FirstIdxLower, int FirstIdxUpper) {
    if (FirstIdxUpper - FirstIdxLower > 0) {
        double[] ArrayOfIndices = new double[FirstIdxUpper - FirstIdxLower + 1];
        for (int i = FirstIdxLower; i <= FirstIdxUpper; i++) {
            ArrayOfIndices[i - FirstIdxLower] = i;
        }
        return ArrayOfIndices;
    } else {
        System.out.println("Lower Index Larger than Upper");
        double[] ArrayOfIndices = new double[1];
        return ArrayOfIndices;
    }
}

public double[][] MakeArrayOfIndices2D(int FirstIdxLower, int FirstIdxUpper, int SecondDim) {
    if (FirstIdxUpper - FirstIdxLower > 0) {
        double[][] ArrayOfIndices = new double[FirstIdxUpper - FirstIdxLower + 1][SecondDim];
        for (int i = FirstIdxLower; i <= FirstIdxUpper; i++) {
            for (int j = 0; j < SecondDim; j++) {
                ArrayOfIndices[i - FirstIdxLower][j] = i;
            }
        }
        return ArrayOfIndices;
    } else {
        System.out.println("Lower Index Larger than Upper");
        double[][] ArrayOfIndices = new double[1][1];
        return ArrayOfIndices;
    }
}
}

class DrawOnImages {

void DrawDataOnImage(ImagePlus StackOfImages, double[][] SmoothArraySlices, int pixel_bins,
        String DoWeUseNormalized, int StartArrayPosOnImg, double maxvalue) {
    int NumOfSlices = SmoothArraySlices[0].length; /// index of slice
    int NumOfRows = SmoothArraySlices.length;
    int imagewidth = StackOfImages.getWidth();
    if (DoWeUseNormalized.equals("Yes")) {
        for (int k = 0; k < NumOfSlices; k++) {
            ImageProcessor imp = StackOfImages.getImageStack().getProcessor(k + 1);
            for (int j = 0; j < NumOfRows; j++) {
                int DataFitForImage = (int) Math.round(SmoothArraySlices[j][k]
                        * (imagewidth - 2 * imagewidth * 0.1) / maxvalue + imagewidth * 0.1);
                imp.putPixelValue(DataFitForImage, j + StartArrayPosOnImg, flip_pixel_value(
                        imp.getPixel(DataFitForImage, j + StartArrayPosOnImg), 0.5));
                imp.putPixelValue(DataFitForImage + 1, j + StartArrayPosOnImg, flip_pixel_value(
                        imp.getPixel(DataFitForImage + 1, j + StartArrayPosOnImg), 0.5));
                imp.putPixelValue(DataFitForImage - 1, j + StartArrayPosOnImg, flip_pixel_value(
                        imp.getPixel(DataFitForImage - 1, j + StartArrayPosOnImg), 0.5));
                imp.putPixelValue(DataFitForImage + 2, j + StartArrayPosOnImg, flip_pixel_value(
                        imp.getPixel(DataFitForImage + 2, j + StartArrayPosOnImg), 0.5));
                imp.putPixelValue(DataFitForImage - 2, j + StartArrayPosOnImg, flip_pixel_value(
                        imp.getPixel(DataFitForImage - 2, j + StartArrayPosOnImg), 0.5));
            }
        }
    } else {
        for (int k = 0; k < NumOfSlices; k++) {
            ImageProcessor imp = StackOfImages.getImageStack().getProcessor(k + 1);
            for (int j = 0; j < NumOfRows; j++) {
                int DataFitForImage = (int) Math.round(SmoothArraySlices[j][k]
                        * (imagewidth - 2 * imagewidth * 0.1) / maxvalue + imagewidth * 0.1);
                imp.putPixelValue(DataFitForImage, j + StartArrayPosOnImg, flip_pixel_value(
                        imp.getPixel(DataFitForImage, j + StartArrayPosOnImg),
                        (int) ((pixel_bins - 1) / 2)));
                imp.putPixelValue(DataFitForImage + 1, j + StartArrayPosOnImg, flip_pixel_value(
                        imp.getPixel(DataFitForImage + 1, j + StartArrayPosOnImg),
                        (int) ((pixel_bins - 1) / 2)));
                imp.putPixelValue(DataFitForImage - 1, j + StartArrayPosOnImg, flip_pixel_value(
                        imp.getPixel(DataFitForImage - 1, j + StartArrayPosOnImg),
                        (int) ((pixel_bins - 1) / 2)));
                imp.putPixelValue(DataFitForImage + 2, j + StartArrayPosOnImg, flip_pixel_value(
                        imp.getPixel(DataFitForImage + 2, j + StartArrayPosOnImg),
                        (int) ((pixel_bins - 1) / 2)));
                imp.putPixelValue(DataFitForImage - 2, j + StartArrayPosOnImg, flip_pixel_value(
                        imp.getPixel(DataFitForImage - 2, j + StartArrayPosOnImg),
                        (int) ((pixel_bins - 1) / 2)));
            }
        }
    }
}

void DrawDataOnImage(ImagePlus StackOfImages, double[][] SmoothArraySlices, int pixel_bins,
        String DoWeUseNormalized, int StartArrayPosOnImg, double maxvalue, int color) {
    int NumOfSlices = SmoothArraySlices[0].length; /// index of slice
    int NumOfRows = SmoothArraySlices.length;
    int imagewidth = StackOfImages.getWidth();
    for (int k = 0; k < NumOfSlices; k++) {

        ImageProcessor imp = StackOfImages.getImageStack().getProcessor(k + 1);
        imp.setColor(color);
        for (int j = 0; j < NumOfRows; j++) {
            int DataFitForImage = (int) Math.round(SmoothArraySlices[j][k]
                    * (imagewidth - imagewidth * 0.1) / maxvalue);
            imp.drawLine(DataFitForImage, j + StartArrayPosOnImg,
                    DataFitForImage + 5, j + StartArrayPosOnImg);
        }
    }
}

void DrawDataOnImage(ImagePlus StackOfImages, double[] SpkAmps70Perc, int pixel_bins,
        String DoWeUseNormalized, int StartArrayPosOnImg, int color) {
    int NumOfSlices = SpkAmps70Perc.length; /// index of slice
    int imagewidth = StackOfImages.getWidth();

    for (int k = 0; k < NumOfSlices; k++) {
        ImageProcessor imp = StackOfImages.getImageStack().getProcessor(k + 1);
        imp.setLineWidth(1);
        imp.setColor(color);
        imp.drawLine(0, (int) Math.round(SpkAmps70Perc[k]) + StartArrayPosOnImg,
                imagewidth, (int) Math.round(SpkAmps70Perc[k]) + StartArrayPosOnImg);
    }
}

void DrawCircleOnImage(ImagePlus StackOfImages, double[][] IndexArray, double[][] ValueArray,
        int pixel_bins, String DoWeUseNormalized, int StartArrayPosOnImg, double maxvalue,
        int Color, int LineWidth, int Diameter) {
    int NumOfSlices = IndexArray.length; /// index of slice
    int imagewidth = StackOfImages.getWidth();

    for (int k = 0; k < NumOfSlices; k++) {
        ImageProcessor imp = StackOfImages.getImageStack().getProcessor(k + 1);
        imp.setLineWidth(LineWidth);
        imp.setColor(Color);
        for (int j = 0; j < ValueArray[k].length; j++) {
            int DataFitForImage = (int) Math.round(ValueArray[k][j]
                    * (imagewidth - 2 * imagewidth * 0.1) / maxvalue + imagewidth * 0.1);
            imp.drawOval(DataFitForImage - Diameter / 2,
                    (int) Math.round(IndexArray[k][j]) + StartArrayPosOnImg - Diameter / 2,
                    Diameter, Diameter);
        }
    }
}

public int flip_pixel_value(double value, double cutoff) {
    double new_value = 0;
    if (value >= cutoff) {
        new_value = 0;
    } else {
        new_value = cutoff * 2.0;
    }
    return (int) new_value;
}
}

class LinearFit {

public Object[] ValueFromSlope(String ForFits, double[][] SmoothArraySlices, double[] LowerIndex,
        double[] UpperIndex, double[] ValueForInterpolation) {
    MultiDimArray MultiDimArray = new MultiDimArray();
    Object[] ReturnedFromValueFromSlope = new Object[3];
    int NumOfRows = SmoothArraySlices.length; /// index in vertical
    int NumOfSlices = SmoothArraySlices[0].length; /// index of slice
    double[] ValueFromSlope = new double[NumOfSlices];
    double[][] CurveFittedIndexArray = new double[NumOfSlices][];
    double[][] CurveFittedAverageArray = new double[NumOfSlices][];
    for (int k = 0; k < NumOfSlices; k++) {
        ValueFromSlope[k] = 0;
        CurveFittedAverageArray[k] = MultiDimArray.Make2DArrayCopy(SmoothArraySlices,
                (int) Math.round(LowerIndex[k]), (int) Math.round(UpperIndex[k]), true, k);
        CurveFittedIndexArray[k] = MultiDimArray.MakeArrayOfIndices(
                (int) Math.round(LowerIndex[k]), (int) Math.round(UpperIndex[k]));

        if (CurveFittedIndexArray[k].length > 5) {
            CurveFitter Fit = new CurveFitter(CurveFittedAverageArray[k], CurveFittedIndexArray[k]);
            Fit.doFit(0); // 0 is for straight line fit
            ValueFromSlope[k] = Fit.f(Fit.getParams(), ValueForInterpolation[k]);
        }
    }
    ReturnedFromValueFromSlope[0] = ValueFromSlope;
    ReturnedFromValueFromSlope[1] = CurveFittedIndexArray;
    ReturnedFromValueFromSlope[2] = CurveFittedAverageArray;
    return ReturnedFromValueFromSlope;
}

public Object[] ValueFromSlope(String ForFits, double[][] SmoothArraySlices,
        double[][] SmoothArraySlicesIndices, double[] LowerIndex, double[] UpperIndex,
        double[] ValueForInterpolation) {
    MultiDimArray MultiDimArray = new MultiDimArray();
    Object[] ReturnedFromValueFromSlope = new Object[3];
    int NumOfRows = SmoothArraySlices.length; /// index in vertical
    int NumOfSlices = SmoothArraySlices[0].length; /// index of slice
    double[] ValueFromSlope = new double[NumOfSlices];

    double[][] CurveFittedIndexArray = new double[NumOfSlices][];
    double[][] CurveFittedAverageArray = new double[NumOfSlices][];
    for (int k = 0; k < NumOfSlices; k++) {
        ValueFromSlope[k] = 0;
        CurveFittedAverageArray[k] = MultiDimArray.Make2DArrayCopy(SmoothArraySlices,
                (int) Math.round(LowerIndex[k]), (int) Math.round(UpperIndex[k]), true, k);
        CurveFittedIndexArray[k] = MultiDimArray.Make2DArrayCopy(SmoothArraySlicesIndices,
                (int) Math.round(LowerIndex[k]), (int) Math.round(UpperIndex[k]), true, k);

        if (CurveFittedIndexArray[k].length > 5) { //// If too few points don't even bother
            CurveFitter Fit = new CurveFitter(CurveFittedAverageArray[k], CurveFittedIndexArray[k]);
            Fit.doFit(0); // 0 is for straight line fit
            ValueFromSlope[k] = Fit.f(Fit.getParams(), ValueForInterpolation[k]);
        }
    }
    ReturnedFromValueFromSlope[0] = ValueFromSlope;
    ReturnedFromValueFromSlope[1] = CurveFittedIndexArray;
    ReturnedFromValueFromSlope[2] = CurveFittedAverageArray;
    return ReturnedFromValueFromSlope;
}
}

class OverloadMaxAndMin {

double getMaxValueForAll(double[][] RowAverageImagesArray) {
    int ArrayDepth = RowAverageImagesArray[0].length;
    double maxValue = 0;

    for (int k = 0; k <= ArrayDepth - 1; k++) {
        for (int j = 1; j < RowAverageImagesArray.length; j++) {
            if (RowAverageImagesArray[j][k] > maxValue) {
                maxValue = RowAverageImagesArray[j][k];
            }
        }
    }

    return maxValue;
}

double getMaxValueForAll(double[][] RowAverageImagesArray, double[][] RowAverageImagesArray2) {
    int ArrayDepth = RowAverageImagesArray[0].length;
    double maxValue = 0;

    for (int k = 0; k <= ArrayDepth - 1; k++) {
        for (int j = 1; j < RowAverageImagesArray.length; j++) {
            if (RowAverageImagesArray[j][k] > maxValue) {
                maxValue = RowAverageImagesArray[j][k];
            }
        }
        for (int j = 1; j < RowAverageImagesArray2.length; j++) {
            if (RowAverageImagesArray2[j][k] > maxValue) {
                maxValue = RowAverageImagesArray2[j][k];
            }
        }
    }

    return maxValue;
}

double[][] getMaxValue(double[][] RowAverageImagesArray) {
    int ArrayDepth = RowAverageImagesArray[0].length;
    double[][] maxValue = new double[2][ArrayDepth];

    for (int k = 0; k <= ArrayDepth - 1; k++) {
        maxValue[1][k] = RowAverageImagesArray[0][k];
        maxValue[0][k] = 0;
        for (int j = 1; j < RowAverageImagesArray.length; j++) {
            if (RowAverageImagesArray[j][k] > maxValue[1][k]) {
                maxValue[1][k] = RowAverageImagesArray[j][k];
                maxValue[0][k] = j;
            }
        }
    }

    return maxValue; /////// 0 index is index of max, 1 index is max values. Done for all slices
}

double[][] getMinValue(double[][] RowAverageImagesArray) {
    int ArrayDepth = RowAverageImagesArray[0].length;
    double[][] minValue = new double[2][ArrayDepth];

    for (int k = 0; k <= ArrayDepth - 1; k++) {
        minValue[1][k] = RowAverageImagesArray[0][k];
        minValue[0][k] = 0;
        for (int j = 1; j < RowAverageImagesArray.length; j++) {
            if (RowAverageImagesArray[j][k] < minValue[1][k]) {
                minValue[1][k] = RowAverageImagesArray[j][k];
                minValue[0][k] = j;
            }
        }
    }

    return minValue;
}

double[][][] getMaxValue(double[][][] ImagesArray) {
    int NumOfSlices = ImagesArray[0][0].length;
    int NumOfRows = ImagesArray[0].length;

    double[][][] maxValue = new double[2][NumOfRows][NumOfSlices];

    for (int k = 0; k <= NumOfSlices - 1; k++) {
        for (int j = 1; j < NumOfRows; j++) {
            maxValue[1][j][k] = ImagesArray[0][j][k];
            maxValue[0][j][k] = 0;
            for (int i = 1; i < ImagesArray.length; i++) {
                if (ImagesArray[i][j][k] > maxValue[1][j][k]) {
                    maxValue[1][j][k] = ImagesArray[i][j][k];
                    maxValue[0][j][k] = i;
                }
            }
        }
    }

    return maxValue; /////// 0 index is index of max, 1 index is max values. Done for all slices
}

double[][][] getMaxValue(float[][][] ImagesArray) {
    int NumOfSlices = ImagesArray[0][0].length;
    int NumOfRows = ImagesArray[0].length;
    double[][][] maxValue = new double[2][NumOfRows][NumOfSlices];

    for (int k = 0; k <= NumOfSlices - 1; k++) {
        for (int j = 1; j < NumOfRows; j++) {
            maxValue[1][j][k] = ImagesArray[0][j][k];
            maxValue[0][j][k] = 0;
            for (int i = 1; i < ImagesArray.length; i++) {
                if (ImagesArray[i][j][k] > maxValue[1][j][k]) {
                    maxValue[1][j][k] = ImagesArray[i][j][k];
                    maxValue[0][j][k] = i;
                }
            }
        }
    }

    return maxValue; /////// 0 index is index of max, 1 index is max values. Done for all slices
}

double[][][] getMinValue(double[][][] ImagesArray) {
    int NumOfSlices = ImagesArray[0][0].length;
    int NumOfRows = ImagesArray[0].length;
    double[][][] minValue = new double[2][NumOfRows][NumOfSlices];

    for (int k = 0; k <= NumOfSlices - 1; k++) {
        for (int j = 1; j < NumOfRows; j++) {
            minValue[1][j][k] = ImagesArray[0][j][k];
            minValue[0][j][k] = 0;
            for (int i = 1; i < ImagesArray.length; i++) {
                if (ImagesArray[i][j][k] < minValue[1][j][k]) {
                    minValue[1][j][k] = ImagesArray[i][j][k];
                    minValue[0][j][k] = i;
                }
            }
        }
    }

    return minValue;
}
}

class Analysis {

Object[] ImageProfiling(ImagePlus UnCroppedImages, ImagePlus CroppedImages, int Left, int Right,
        int Top, int Bot, int IntfTop, int IntfBot, String DoWeUseNormalized,
        double maxvalueForDrawThresh, String Path, int BitDepth, String MaxValueType) {
    Main main = new Main();
    DrawOnImages Draw = new DrawOnImages();
    OverloadMaxAndMin MinMax = new OverloadMaxAndMin();

    Object[] ReturnedFromImageProfiling = new Object[6];

    float[][][] CropImageArray = main.StackToArray(CroppedImages);
    double[][] RowAverageImageArraySpk = main.RowAverageTheImagesArray(CropImageArray,
            IntfTop - Top, 0, false);
    double[][] RowAverageImageArrayBub = main.RowAverageTheImagesArray(CropImageArray,
            CropImageArray[0].length - 1, IntfBot - Top, false);
    double[][] RowAverageSmoothImageArraySpk = main.SmoothArraySlicesVarAvg(RowAverageImageArraySpk, 7);
    double[][] RowAverageSmoothImageArrayBub = main.SmoothArraySlicesVarAvg(RowAverageImageArrayBub, 7);
    ImageStatistics imgstat = UnCroppedImages.getStatistics();
    UnCroppedImages.setRoi(Left, Top, Right - Left, Bot - Top);

    if (MaxValueType.equals("MaxForAllUnlessNorm")) {
        maxvalueForDrawThresh = MinMax.getMaxValueForAll(RowAverageImageArraySpk,
                RowAverageImageArrayBub);
    }
    if (DoWeUseNormalized.equals("Yes")) {
        maxvalueForDrawThresh = 1.0;
    }
    if (MaxValueType.equals("MaxForAll")) {
        maxvalueForDrawThresh = MinMax.getMaxValueForAll(RowAverageImageArraySpk,
                RowAverageImageArrayBub);
    }

    IJ.saveAs(UnCroppedImages, ".tif", Path + UnCroppedImages.getShortTitle() + "PreProfiled");
    Draw.DrawDataOnImage(UnCroppedImages, RowAverageImageArraySpk, BitDepth, DoWeUseNormalized,
            Top, maxvalueForDrawThresh);
    Draw.DrawDataOnImage(UnCroppedImages, RowAverageImageArrayBub, BitDepth, DoWeUseNormalized,
            IntfBot, maxvalueForDrawThresh);
    Draw.DrawDataOnImage(CroppedImages, RowAverageImageArraySpk, BitDepth, DoWeUseNormalized,
            0, maxvalueForDrawThresh);
    Draw.DrawDataOnImage(CroppedImages, RowAverageImageArrayBub, BitDepth, DoWeUseNormalized,
            IntfBot - Top, maxvalueForDrawThresh);

    ReturnedFromImageProfiling[0] = RowAverageImageArraySpk;
    ReturnedFromImageProfiling[1] = RowAverageImageArrayBub;
    ReturnedFromImageProfiling[2] = RowAverageSmoothImageArraySpk;
    ReturnedFromImageProfiling[3] = RowAverageSmoothImageArrayBub;
    ReturnedFromImageProfiling[4] = CropImageArray;
    ReturnedFromImageProfiling[5] = maxvalueForDrawThresh;
    return ReturnedFromImageProfiling;
}
}
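The LinearFit class in the listing above delegates its straight-line fit to ImageJ's CurveFitter (Fit.doFit(0) selects the straight-line model, and Fit.f(Fit.getParams(), x) evaluates the fit). As a standalone illustration of that same fit-and-evaluate step, a minimal sketch follows; the class and method names (LineFitSketch, fitLine, evalLine) are illustrative only and are not part of the plugin:

```java
// Minimal sketch of the straight-line fit-and-evaluate step that
// LinearFit.ValueFromSlope performs via ImageJ's CurveFitter.
// LineFitSketch, fitLine, and evalLine are illustrative names only.
public class LineFitSketch {

    // Least-squares fit of y = a + b*x; returns {a, b}.
    static double[] fitLine(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i];
            sy += y[i];
            sxx += x[i] * x[i];
            sxy += x[i] * y[i];
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx); // slope
        double a = (sy - b * sx) / n;                         // intercept
        return new double[] { a, b };
    }

    // Evaluate the fitted line at xq, analogous to CurveFitter.f(params, x).
    static double evalLine(double[] p, double xq) {
        return p[0] + p[1] * xq;
    }

    public static void main(String[] args) {
        double[] x = { 0, 1, 2, 3, 4 };
        double[] y = { 1, 3, 5, 7, 9 }; // lies exactly on y = 1 + 2x
        double[] p = fitLine(x, y);
        System.out.println(evalLine(p, 2.5)); // prints 6.0
    }
}
```

The plugin additionally skips windows with five or fewer points before fitting; the same guard would apply here before calling fitLine.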

B.7.2 Excel.java

package runimagej;

import jxl.demo.*;
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.*;
import java.io.*;
import java.math.*;
import javax.swing.JCheckBox;
import jxl.write.WritableCell;
import jxl.Cell;
import jxl.*;
import jxl.Workbook;
import jxl.Sheet;
import jxl.CellReferenceHelper.*;
import jxl.biff.CellReferenceHelper;
import jxl.biff.formula.ExternalSheet;
import javax.swing.*;
import jxl.write.*;
import jxl.write.Number;

import ij.*;
import ij.gui.*;
import ij.gui.Roi.*;
import ij.IJ;

//////// @author Mike Roberts
public class Excel {

public Object[] CreateInitialExcelFile(String OutputFileName, String[] HeaderInfo,
        double[][] MainValues, int NumOfRowsForExcel, int NumOfColumnsForExcel,
        String FolderName, boolean UsePrevInputs, String TemplateTypeChoice) {

    Object[] CreateInitialExcelReturned = new Object[2];
    String str;
    int[] AmpColIndex = new int[2];

    GenericDialog TemplateType = new GenericDialog("Select Template Type");
    TemplateType.addMessage("Full Template only inputs raw data" + "\n"
            + "Plot Template does all cell calculations but uses plots" + "\n"
            + "No Template uses no template");
    String[] TemplateChoice = new String[] {"Full", "Plot", "None"};
    TemplateType.addChoice("Full, Plot or None", TemplateChoice, "Full");
    if (UsePrevInputs == false) {
        TemplateType.showDialog();
        TemplateTypeChoice = TemplateType.getNextChoice();
    }
    try {
        if (TemplateTypeChoice.equals("None")) {
            WritableWorkbook workbook = Workbook.createWorkbook(new File(OutputFileName));
            WritableSheet s1 = workbook.createSheet(
                    FolderName.substring(0, Math.min(20, FolderName.length())), 0);
            WritableSheet s2 = workbook.createSheet(
                    FolderName.substring(0, Math.min(20, FolderName.length())) + " ForFit", 1);
            AmpColIndex = ArraysToExcel(HeaderInfo, MainValues, s1,
                    NumOfRowsForExcel, NumOfColumnsForExcel);
            workbook.write();
            workbook.close();
        } else {
            JFrame f = new JFrame("Select Template File");
            JOptionPane.showMessageDialog(f, "Select File to use for Excel Data template if desired");
            JFileChooser Chooser = new JFileChooser((new File(OutputFileName)).getParent().toString());
            int returnVal = Chooser.showOpenDialog(null);
            if (returnVal == JFileChooser.APPROVE_OPTION) {
                File TemplateFile = Chooser.getSelectedFile();
                Workbook InputTemplate = Workbook.getWorkbook(TemplateFile);
                WritableWorkbook workbook = Workbook.createWorkbook(new File(OutputFileName), InputTemplate);
                WritableSheet s1 = workbook.getSheet(0);
                s1.setName(FolderName.substring(0, Math.min(20, FolderName.length())));
                WritableSheet s2 = workbook.getSheet(1);
                s2.setName(FolderName.substring(0, Math.min(20, FolderName.length())) + " ForFit");
                AmpColIndex = ArraysToExcel(HeaderInfo, MainValues, s1,
                        NumOfRowsForExcel, NumOfColumnsForExcel);
                workbook.write();
                workbook.close();
                InputTemplate.close();
            } else {
                WritableWorkbook workbook = Workbook.createWorkbook(new File(OutputFileName));
                WritableSheet s1 = workbook.createSheet(
                        FolderName.substring(0, Math.min(20, FolderName.length())), 0);
                WritableSheet s2 = workbook.createSheet(
                        FolderName.substring(0, Math.min(20, FolderName.length())) + " ForFit", 1);
                AmpColIndex = ArraysToExcel(HeaderInfo, MainValues, s1,
                        NumOfRowsForExcel, NumOfColumnsForExcel);
                workbook.write();
                workbook.close();
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }

    CreateInitialExcelReturned[0] = AmpColIndex;
    CreateInitialExcelReturned[1] = TemplateTypeChoice;
    return CreateInitialExcelReturned;

}

public Object[] CreateInitialExcelProfileFile(String OutputFileName, String[] HeaderInfo,
        double[][] MainValues1, double[][] MainValues2, int NumOfRowsForExcel1,
        int NumOfRowsForExcel2, int NumOfColumnsForExcel, String FolderName) {

    Object[] CreateInitialExcelProfileReturned = new Object[2];
    String str;
    int[] AmpColIndex = new int[2];
    try {
        JFrame f = new JFrame("Select Excel Profile Template File");
        JOptionPane.showMessageDialog(f, "Select File to use for Excel Profile template if desired For Chart");
        JFileChooser Chooser = new JFileChooser((new File(OutputFileName)).getParent().toString());
        int returnVal = Chooser.showOpenDialog(null);
        if (returnVal == JFileChooser.APPROVE_OPTION) {
            File TemplateFile = Chooser.getSelectedFile();
            Workbook InputTemplate = Workbook.getWorkbook(TemplateFile);
            WritableWorkbook workbook = Workbook.createWorkbook(new File(OutputFileName), InputTemplate);
            WritableSheet s1 = workbook.getSheet(0);
            s1.setName(FolderName.substring(0, Math.min(20, FolderName.length())));
            AmpColIndex = ArraysToExcelProfile(HeaderInfo, MainValues1, MainValues2, s1,
                    NumOfRowsForExcel1, NumOfRowsForExcel2, NumOfColumnsForExcel);
            workbook.write();
            workbook.close();
            InputTemplate.close();
        } else {
            WritableWorkbook workbook = Workbook.createWorkbook(new File(OutputFileName));
            WritableSheet s1 = workbook.createSheet(
                    FolderName.substring(0, Math.min(20, FolderName.length())), 0);
            AmpColIndex = ArraysToExcelProfile(HeaderInfo, MainValues1, MainValues2, s1,
                    NumOfRowsForExcel1, NumOfRowsForExcel2, NumOfColumnsForExcel);
            workbook.write();
            workbook.close();
        }
    } catch (Exception e) {
        e.printStackTrace();
    }

    CreateInitialExcelProfileReturned[0] = AmpColIndex;
    return CreateInitialExcelProfileReturned;
}

public int[] ArraysToExcel(String[] HeaderInfo, double[][] MainValues, WritableSheet s1,
        int NumOfRows, int NumOfCols)

{
    int[] AmpColIndex = new int[2];
    int ColIndex[] = new int[1];
    try {
        ColIndex[0] = 4;
        while (ColIndex[0] < HeaderInfo.length + 4) {
            /// First put labels for raw amp cols
            Label lblName = new Label(ColIndex[0], 0, HeaderInfo[ColIndex[0] - 4]);
            s1.addCell(lblName);
            ColIndex[0]++;
        }

        int RowIndex = 1;
        while (RowIndex <= NumOfRows) {
            ColIndex[0] = 4;
            while (ColIndex[0] < NumOfCols + 4) {
                /// Next fill raw amp data
                Number n = new Number(ColIndex[0], RowIndex, MainValues[RowIndex - 1][ColIndex[0] - 4]);
                s1.addCell(n);
                ColIndex[0]++;
            }

            RowIndex++;
        }

    } catch (Exception e) {
        e.printStackTrace();
    }

    AmpColIndex[0] = 4;
    AmpColIndex[1] = ColIndex[0] - 1;
    return AmpColIndex;
}

public int[] ArraysToExcelProfile(String[] HeaderInfo, double[][] MainValues1, double[][] MainValues2,
        WritableSheet s1, int NumOfRows1, int NumOfRows2, int NumOfCols)
{
    int[] AmpColIndex = new int[2];
    int ColIndex[] = new int[1];
    try {
        ColIndex[0] = 4;
        while (ColIndex[0] < HeaderInfo.length + 4) {
            Label lblName = new Label(ColIndex[0], 0, HeaderInfo[ColIndex[0] - 4]);
            s1.addCell(lblName);
            ColIndex[0]++;

        }

        // Bubbles and spikes are separate, so two different mainvalue write blocks
        // (interior of both loops restored following the pattern of ArraysToExcel)
        int RowIndex = 1;
        while (RowIndex <= NumOfRows1) {
            ColIndex[0] = 4;
            while (ColIndex[0] < NumOfCols + 4) {
                Number n = new Number(ColIndex[0], RowIndex,
                        MainValues1[RowIndex - 1][ColIndex[0] - 4]);
                s1.addCell(n);
                ColIndex[0]++;
            }
            RowIndex++;
        }

        while (RowIndex <= NumOfRows1 + NumOfRows2) {
            ColIndex[0] = 4;
            while (ColIndex[0] < NumOfCols + 4) {
                Number n = new Number(ColIndex[0], RowIndex,
                        MainValues2[RowIndex - NumOfRows1 - 1][ColIndex[0] - 4]);
                s1.addCell(n);
                ColIndex[0]++;
            }
            RowIndex++;
        }

    } catch (Exception e) {
        e.printStackTrace();
    }

    AmpColIndex[0] = 4;
    AmpColIndex[1] = ColIndex[0] - 1;
    return AmpColIndex;
}

public void ReopenExcelAndCreateColumns(String OutputFileName, String[] HeaderInfo,
        double[][] MainValues, int NumOfRows, int NumOfCols, int startslice, int endslice,
        int IntfLoc, int tank_widthPxl, String FolderName, int[] AmpColIndex, int IntfThick,
        String TemplateTypeChoice, boolean UsePrevInputs, double tank_widthmm,
        double TimeBetwFrames, double Atwood, double Accel, double AvgKinemVisc) {
    try {
        Workbook w1 = Workbook.getWorkbook(new File(OutputFileName));

        WritableWorkbook w2 = Workbook.createWorkbook(new File(OutputFileName), w1);
        // truncate Foldername because sheet name is limited to 31 chars
        WritableSheet s1 = w2.getSheet(FolderName.substring(0, Math.min(20, FolderName.length())));
        WritableSheet s2 = w2.getSheet(FolderName.substring(0, Math.min(20, FolderName.length())) + " ForFit");

        int VirtOriginFrame = 10;

        if (UsePrevInputs == false) {
            tank_widthmm = IJ.getNumber("Tank Width (mm)", 76.0);
            TimeBetwFrames = IJ.getNumber("Time Between Frames (ms)", 5);
            Atwood = IJ.getNumber("Atwood Number", 0.481);
            AvgKinemVisc = IJ.getNumber("Avg. Kinematic Viscosity, for instance 1E-6 (m/s^2)", 1E-6);
        }
        Accel = IJ.getNumber("RT Accel. (m/sec^2)", 11.77);
        double yCal = tank_widthPxl / tank_widthmm;
        int[] ColIndex = new int[1];
        ColIndex[0] = NumOfCols + 4 + 1; // we need to offset 4 because inputed acceleration before Data Columns

        int[] AccelColIndex = AccelCol(OutputFileName, s1, ColIndex);
        int SlicesColIndex = Slices(s1, ColIndex, NumOfRows, startslice);
        int SubStackSlicesColIndex = SubStackSlices(s1, ColIndex, NumOfRows);
        StringBuffer yCalAbsAddress = yCalCell(s1, ColIndex, yCal, tank_widthPxl, tank_widthmm, yCal);
        int ExpInfoColIndex = CellReferenceHelper.getColumn(yCalAbsAddress.toString());
        StringBuffer AtwoodAbsAddress = AtwoodCell(s1, ColIndex, Atwood);
        StringBuffer AccelAbsAddress = AccelCell(s1, ColIndex, Accel);
        StringBuffer IntfLocAbsAddress = IntfLocCell(s1, ColIndex, IntfLoc);
        StringBuffer IntfThickAbsAddress = IntfThickCell(s1, ColIndex, IntfThick);
        StringBuffer AvgKinemViscAbsAddress = AvgKinemViscCell(s1, ColIndex, AvgKinemVisc);

        int TimeMsColIndex = TimeMsCol(s1, NumOfRows, ColIndex, TimeBetwFrames);
        int TimeSColIndex = TimeSCol(s1, NumOfRows, ColIndex, TimeMsColIndex);

        // Unless using a full template, we fill all the cells
        if (TemplateTypeChoice.equals("Plot") || TemplateTypeChoice.equals("None")) {
            int TimeSRenormColIndex = TimeSRenormCol(s1, NumOfRows, ColIndex, TimeSColIndex);
            int TimeSSquaredColIndex = TimeSSquaredCol(s1, NumOfRows, ColIndex, TimeSRenormColIndex);
            int tSqrtAgColIndex = tSqrtAgCol(s1, NumOfRows, ColIndex, TimeSRenormColIndex,
                    AtwoodAbsAddress, AccelAbsAddress);
            int[] mmAmpColIndex = mmAmpColumn(s1, NumOfRows, NumOfCols, HeaderInfo, ColIndex,
                    yCalAbsAddress, IntfLocAbsAddress, AmpColIndex, IntfThickAbsAddress);
            int[] SQRTAmpColIndex = SqrtAmpColumn(s1, NumOfRows, ColIndex, mmAmpColIndex);
            int ColIdxShtForFit[] = new int[1];
            Fit1Sqrt(s1, s2, NumOfRows, ColIdxShtForFit, SQRTAmpColIndex, tSqrtAgColIndex,
                    startslice, endslice - 20, endslice - 15);
            int[] TSqrtMinusVirtOriginFromFitColIndex = TSqrtMinusVirtOriginFromFitColumn(s1, s2,
                    NumOfRows, ColIndex, mmAmpColIndex, tSqrtAgColIndex);

            int[] DerivColIndex = DerivativeColumn(s1, NumOfRows, ColIndex, mmAmpColIndex,
                    TimeMsColIndex);
            int[] HdotSqO4AgHColIndex = HdotSquOver4AgH(s1, NumOfRows, ColIndex, mmAmpColIndex,
                    DerivColIndex, AtwoodAbsAddress.toString(), AccelAbsAddress.toString());

            int[] TSquaredMinusVirtOriginFromFitColIndex = TSquaredMinusVirtOriginFromFitColumn(s1, s2,
                    NumOfRows, ColIndex, mmAmpColIndex, tSqrtAgColIndex,
                    AtwoodAbsAddress.toString(), AccelAbsAddress.toString());

            int[] RunningParbolaFitLeastSquaresAlphaColIndex = RunningParbolaFitLeastSquaresAlpha(s1,
                    NumOfRows, ColIndex, mmAmpColIndex, TimeSRenormColIndex, TimeSSquaredColIndex,
                    AtwoodAbsAddress.toString(), AccelAbsAddress.toString());

            int[] ReynoldsNumberColIndex = ReynoldsNumber(s1, NumOfRows, ColIndex, mmAmpColIndex,
                    DerivColIndex, AvgKinemViscAbsAddress.toString());

            MetaDataCol(s1, ColIndex, AmpColIndex, ExpInfoColIndex, NumOfRows, AccelColIndex,
                    SlicesColIndex, SubStackSlicesColIndex, TimeMsColIndex, TimeSColIndex,
                    TimeSRenormColIndex, TimeSSquaredColIndex, tSqrtAgColIndex, mmAmpColIndex,
                    SQRTAmpColIndex, DerivColIndex, HdotSqO4AgHColIndex,
                    RunningParbolaFitLeastSquaresAlphaColIndex, TSqrtMinusVirtOriginFromFitColIndex,
                    TSquaredMinusVirtOriginFromFitColIndex, ReynoldsNumberColIndex);

        }
        w2.write();
        w2.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}

public void ReopenExcelProfileAndCreateColumns(String OutputFileName, String FolderName, int NumOfCols) {
    try {
        Workbook w1 = Workbook.getWorkbook(new File(OutputFileName));
        WritableWorkbook w2 = Workbook.createWorkbook(new File(OutputFileName), w1);
        // truncate Foldername because sheet name is limited to 31 chars
        WritableSheet s1 = w2.getSheet(FolderName.substring(0, Math.min(20, FolderName.length())));
        int NumOfRows = s1.getRows();

        int[] ColIndex = new int[1];
        ColIndex[0] = 4; // we need to offset 4 because inputed acceleration before Data Columns

        FillProfileHeaderInfo(s1, NumOfCols + 4, ColIndex);
        w2.write();
        w2.close();
        w1.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}

private void FillProfileHeaderInfo(WritableSheet s1, int NumOfCols, int[] ColIndex) {
    ColIndex[0] = ColIndex[0] - 1;
    Label lbl = new Label(ColIndex[0], 0, "Frame");
    try {
        s1.addCell(lbl);
        for (int FillProfHeaderColIndex = ColIndex[0] + 1; FillProfHeaderColIndex <= NumOfCols - 1;
                FillProfHeaderColIndex++) {
            Number num = new Number(FillProfHeaderColIndex, 0, FillProfHeaderColIndex - 4);
            s1.addCell(num);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

public int SubStackSlices(WritableSheet s1, int[] ColIndex, int NumOfRows) {
    ColIndex[0] = ColIndex[0] + 1;
    int SubStackSlicesColIndex = ColIndex[0];
    Label lbl = new Label(ColIndex[0], 0, "SubStackSlice");
    try {
        s1.addCell(lbl);
        for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
            Number num = new Number(ColIndex[0], RowIndex, RowIndex);
            s1.addCell(num);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return SubStackSlicesColIndex;
}

public int Slices(WritableSheet s1, int[] ColIndex, int NumOfRows, int StartSlice) {
    ColIndex[0] = ColIndex[0] + 1;
    int SlicesColIndex = ColIndex[0];
    Label lbl = new Label(ColIndex[0], 0, "Slice");
    try {
        s1.addCell(lbl);
        for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
            Number num = new Number(ColIndex[0], RowIndex, StartSlice + RowIndex - 1);
            s1.addCell(num);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return SlicesColIndex;
}

private StringBuffer yCalCell(WritableSheet s1, int[] ColIndex, double ycal, int tankwidthPxl,
        double tankwidthmm, double yCal) {
    ColIndex[0] = ColIndex[0] + 1;
    int yCalColIndex = ColIndex[0];
    Label lbl = new Label(ColIndex[0], 0, "y-calib(pixels/mm):");
    StringBuffer sb = new StringBuffer();
    try {
        s1.addCell(lbl);
        Formula fmla = new Formula(ColIndex[0], 1, "" + yCal + "");
        s1.addCell(fmla);
        CellReferenceHelper.getCellReference(yCalColIndex, true, 1, true, sb);
    } catch (Exception e) {
        e.printStackTrace();

    }
    StringBuffer yCalAbsAddress = sb;
    return yCalAbsAddress;
}

private StringBuffer AtwoodCell(WritableSheet s1, int[] ColIndex, double Atwood) {
    Label lbl = new Label(ColIndex[0], 2, "Atwood Number:");
    StringBuffer sb = new StringBuffer();
    try {
        s1.addCell(lbl);
        Number num = new Number(ColIndex[0], 3, Atwood);
        s1.addCell(num);
        CellReferenceHelper.getCellReference(ColIndex[0], true, 3, true, sb);
    } catch (Exception e) {
        e.printStackTrace();
    }
    StringBuffer AtwoodAbsAddress = sb;
    return AtwoodAbsAddress;
}

private StringBuffer AccelCell(WritableSheet s1, int[] ColIndex, double Accel) {
    Label lbl = new Label(ColIndex[0], 4, "Accel:");
    StringBuffer sb = new StringBuffer();
    try {
        s1.addCell(lbl);
        Number num = new Number(ColIndex[0], 5, Accel);
        s1.addCell(num);
        CellReferenceHelper.getCellReference(ColIndex[0], true, 5, true, sb);
    } catch (Exception e) {
        e.printStackTrace();
    }
    StringBuffer AccelAbsAddress = sb;
    return AccelAbsAddress;
}

private StringBuffer IntfLocCell(WritableSheet s1, int[] ColIndex, int IntfLoc) {
    Label lbl = new Label(ColIndex[0], 6, "IntfLoc:");
    StringBuffer sb = new StringBuffer();
    try {
        s1.addCell(lbl);
        Number num = new Number(ColIndex[0], 7, IntfLoc);
        s1.addCell(num);
        CellReferenceHelper.getCellReference(ColIndex[0], true, 7, true, sb);
    } catch (Exception e) {
        e.printStackTrace();
    }
    StringBuffer IntfLocAbsAddress = sb;
    return IntfLocAbsAddress;
}

private StringBuffer VirtOriginFrameCell(WritableSheet s1, int[] ColIndex, int VirtOriginFrame) {
    Label lbl = new Label(ColIndex[0], 8, "VirtOriginFrame:");
    StringBuffer sb = new StringBuffer();
    try {
        s1.addCell(lbl);
        Number num = new Number(ColIndex[0], 9, VirtOriginFrame);
        s1.addCell(num);
        CellReferenceHelper.getCellReference(ColIndex[0], true, 9, true, sb);
    } catch (Exception e) {
        e.printStackTrace();
    }

    StringBuffer VirtOriginFrameAbsAddress = sb;
    return VirtOriginFrameAbsAddress;
}

private StringBuffer IntfThickCell(WritableSheet s1, int[] ColIndex, int IntfThick) {
    Label lbl = new Label(ColIndex[0], 10, "IntfThick:");
    StringBuffer sb = new StringBuffer();
    try {
        s1.addCell(lbl);
        Number num = new Number(ColIndex[0], 11, IntfThick);
        s1.addCell(num);
        CellReferenceHelper.getCellReference(ColIndex[0], true, 11, true, sb);
    } catch (Exception e) {
        e.printStackTrace();
    }

    StringBuffer IntfThickAbsAddress = sb;
    return IntfThickAbsAddress;
}

private StringBuffer AvgKinemViscCell(WritableSheet s1, int[] ColIndex, double AvgKinemVisc) {
    Label lbl = new Label(ColIndex[0], 12, "AvgKinemVisc:");
    StringBuffer sb = new StringBuffer();
    try {
        s1.addCell(lbl);
        Number num = new Number(ColIndex[0], 13, AvgKinemVisc);
        s1.addCell(num);
        CellReferenceHelper.getCellReference(ColIndex[0], true, 13, true, sb);
    } catch (Exception e) {
        e.printStackTrace();
    }

    StringBuffer AvgKinemViscAbsAddress = sb;
    return AvgKinemViscAbsAddress;

}

private int[] AccelCol(String OutputFileName, WritableSheet s1, int[] ColIndex) {
    int[] AccelColReturned = new int[2];

    try {
        JFrame f = new JFrame("Accel. Columns");
        f.setVisible(true);

        JOptionPane.showMessageDialog(f, "Select File to Extract Acceleration Columns");
        JFileChooser Chooser = new JFileChooser(OutputFileName);

        int returnVal = Chooser.showOpenDialog(null);

        if (returnVal == JFileChooser.APPROVE_OPTION) {
            String AccelTimeColString = JOptionPane.showInputDialog("Accel. Time Col. (first Col. is 0)? ", 12);
            String AccelDataColString = JOptionPane.showInputDialog("Accel. Data Col. (first Col. is 0)? ", 10);
            Label lbl = new Label(0, 0, "Accel time(S)");
            s1.addCell(lbl);
            lbl = new Label(1, 0, "Accel Data(g)");
            s1.addCell(lbl);
            File fileToOpen = Chooser.getSelectedFile();
            String Path = fileToOpen.getParent();
            String FolderName = ((new File(Path)).getName());

            Workbook AccelSourceDocument = Workbook.getWorkbook(fileToOpen);
            WritableWorkbook writableTempAccelSource = Workbook.createWorkbook(new File("temp2.xls"),
                    AccelSourceDocument);
            WritableSheet AccelSourceSheet = writableTempAccelSource.getSheet(0);
            int numrows = AccelSourceSheet.getRows();

            for (int RowIndex = 1; RowIndex < numrows; RowIndex++) {
                WritableCell readCell = AccelSourceSheet.getWritableCell(
                        Integer.parseInt(AccelTimeColString), RowIndex);
                WritableCell newCell = readCell.copyTo(0, RowIndex);
                s1.addCell(newCell);
                String ContentsOfCell = AccelSourceSheet.getWritableCell(
                        Integer.parseInt(AccelDataColString), RowIndex).getContents();
                Number num = new Number(1, RowIndex, Double.parseDouble(ContentsOfCell));
                s1.addCell(num);
            }
        }
        AccelColReturned[0] = 0;
        AccelColReturned[1] = 1;
    } catch (Exception e) {
        e.printStackTrace();
    }

    return AccelColReturned;
}

private int TimeMsCol(WritableSheet s1, int NumOfRows, int[] ColIndex, double TimeBetwFrames) {
    ColIndex[0] = ColIndex[0] + 1;
    int TimeMsColIndex = ColIndex[0];
    Label lbl = new Label(ColIndex[0], 0, "time(ms)");
    Formula fmla = new Formula(ColIndex[0], 1, "0");
    try {
        s1.addCell(lbl);
        s1.addCell(fmla);
        for (int RowIndex = 2; RowIndex <= NumOfRows - 1; RowIndex++) {
            fmla = new Formula(ColIndex[0], RowIndex,
                    CellReferenceHelper.getCellReference(ColIndex[0], RowIndex - 1)
                    + "+" + Double.toString(TimeBetwFrames));
            s1.addCell(fmla);
        }
    } catch (Exception e) {
        e.printStackTrace();

}return TimeMsColIndex; } private int TimeSCol(WritableSheet s1 , int NumOfRows , int [] ColIndex , int TimeMsColIndex) ColIndex[0]=ColIndex[0]+1; { int TimeSColIndex=ColIndex [0]; Label lbl= new Label(ColIndex[0] , 0, ”time(s)”);

try s1.addCell(lbl);{ fo r ( int RowIndex=1; RowIndex<=NumOfRows 1; RowIndex++) Formula fmla=new Formula(ColIndex[0]− , RowIndex, CellReferenceHelper.{ ge tCellReference ( TimeMsColIndex ,RowIndex)+”/1000” ) ; s1 .addCell(fmla); catch (Exception e) }e.printStackTrace } () ; {

}return TimeSColIndex; 250

}

private int TimeSRenormCol(WritableSheet s1, int NumOfRows, int[] ColIndex, int TimeSColIndex) {
    ColIndex[0] = ColIndex[0] + 1;
    StringBuffer sb = new StringBuffer();
    CellReferenceHelper.getCellReference(TimeSColIndex, true, 1, true, sb);
    StringBuffer TimeSFrstAbsAddress = sb;
    int TimeSRenormColIndex = ColIndex[0];
    Label lbl = new Label(ColIndex[0], 0, "time(s) Renorm");

    try {
        s1.addCell(lbl);
        for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
            Formula fmla = new Formula(ColIndex[0], RowIndex,
                    CellReferenceHelper.getCellReference(TimeSColIndex, RowIndex) + "-" + TimeSFrstAbsAddress);
            s1.addCell(fmla);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return TimeSRenormColIndex;
}

private int TimeSSquaredCol(WritableSheet s1, int NumOfRows, int[] ColIndex, int TimeSRenormColIndex) {
    ColIndex[0] = ColIndex[0] + 1;
    StringBuffer sb = new StringBuffer();
    int TimeSSquaredColIndex = ColIndex[0];
    Label lbl = new Label(ColIndex[0], 0, "time(s) Squared");

    try {
        s1.addCell(lbl);
        for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
            Formula fmla = new Formula(ColIndex[0], RowIndex,
                    "POWER(" + CellReferenceHelper.getCellReference(TimeSRenormColIndex, RowIndex) + ",2)");
            s1.addCell(fmla);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return TimeSSquaredColIndex;
}

private int TimeSVirtOriginCol(WritableSheet s1, int NumOfRows, int[] ColIndex, int TimeSRenormColIndex,
        int SubStackSlicesColIndex, StringBuffer VirtOriginFrameAbsAddress) {
    ColIndex[0] = ColIndex[0] + 1;
    StringBuffer sb = new StringBuffer();
    int TimeSVirtOriginColIndex = ColIndex[0];
    Label lbl = new Label(ColIndex[0], 0, "time(s) Virt Origin");

    try {
        s1.addCell(lbl);
        for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
            // Formula fmla = new Formula(ColIndex[0], RowIndex, CellReferenceHelper.getCellReference(TimeSRenormColIndex, RowIndex)
            //         + "-INDIRECT(ADDRESS(" + VirtOriginFrameAbsAddress + "-"
            //         + CellReferenceHelper.getCellReference(SubStackSlicesColIndex, 2) + "+3," + ColIndex[0] + "))");
            Formula fmla = new Formula(ColIndex[0], RowIndex,
                    CellReferenceHelper.getCellReference(TimeSRenormColIndex, RowIndex)
                    + "-INDIRECT(ADDRESS(" + VirtOriginFrameAbsAddress + "-"
                    + CellReferenceHelper.getCellReference(SubStackSlicesColIndex, 2)
                    + "+3," + (TimeSRenormColIndex + 1) + "))");
            s1.addCell(fmla);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return TimeSVirtOriginColIndex;
}

private int tSqrtAgCol(WritableSheet s1, int NumOfRows, int[] ColIndex, int TimeSRenormColIndex,
        StringBuffer AtwoodAbsAddress, StringBuffer AccelAbsAddress) {
    ColIndex[0] = ColIndex[0] + 1;
    int tSqrtAgColIndex = ColIndex[0];
    Label lbl = new Label(ColIndex[0], 0, "t*SQRT(Ag) (SQRTmm)");
    try {
        s1.addCell(lbl);
        for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
            Formula fmla = new Formula(ColIndex[0], RowIndex,
                    CellReferenceHelper.getCellReference(TimeSRenormColIndex, RowIndex)
                    + "*SQRT(" + AtwoodAbsAddress + "*" + AccelAbsAddress + "*1000)");
            s1.addCell(fmla);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return tSqrtAgColIndex;
}

private int[] mmAmpColumn(WritableSheet s1, int NumOfRows, int NumOfCols, String[] HeaderInfo,
        int[] ColIndex, StringBuffer yCalAbsAddress, StringBuffer IntfLocAbsAddress,
        int[] AmpColIndex, StringBuffer IntfThickAbsAddress)

{
    ColIndex[0] = ColIndex[0] + 2;
    int[] mmAmpColIndex = new int[2];
    mmAmpColIndex[0] = ColIndex[0];
    try {
        // Need the 4 because things are offset by 4 to make room for the accel. data before.
        for (int i = AmpColIndex[0]; i

            ColIndex[0] = ColIndex[0] + 3;
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    mmAmpColIndex[1] = ColIndex[0] - 1;
    return mmAmpColIndex;
}

private int[] SqrtAmpColumn(WritableSheet s1, int NumOfRows, int[] ColIndex, int[] mmAmpColIndex)

{
    ColIndex[0] = ColIndex[0] + 2;
    int SQRTAmpColIndex[] = new int[10];
    SQRTAmpColIndex[0] = ColIndex[0];
    try {
        for (int i = mmAmpColIndex[0]; i <= mmAmpColIndex[1]; i++) {
            Label lbl = new Label(ColIndex[0], 0,
                    "SQRT" + s1.getWritableCell(i, 0).getContents().replaceAll("ampmm", "SQRT(mm)"));
            s1.addCell(lbl);
            for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
                Formula fmla = new Formula(ColIndex[0], RowIndex,
                        "SQRT(" + CellReferenceHelper.getCellReference(i, RowIndex) + ")");
                s1.addCell(fmla);
            }

            ColIndex[0] = ColIndex[0] + 1;
        }
        SQRTAmpColIndex[1] = ColIndex[0] - 1;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return SQRTAmpColIndex;
}

private int[] TSqrtMinusVirtOriginFromFitColumn(WritableSheet s1, WritableSheet s2, int NumOfRows,
        int[] ColIndex, int[] mmAmpColIndex, int tSqrtAgColIndex)

{
    ColIndex[0] = ColIndex[0] + 2;
    int TSqrtMinusVirtOriginFromFitColIndex[] = new int[10];
    TSqrtMinusVirtOriginFromFitColIndex[0] = ColIndex[0];
    try {
        for (int i = mmAmpColIndex[0]; i <= mmAmpColIndex[1]; i++) {
            Label lbl = new Label(ColIndex[0], 0,
                    "TSqrtMinusVirtOri" + s1.getWritableCell(i, 0).getContents().replaceAll("ampmm", "(s)"));
            s1.addCell(lbl);
            for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
                Formula fmla = new Formula(ColIndex[0], RowIndex,
                        CellReferenceHelper.getCellReference(tSqrtAgColIndex, RowIndex)
                        + "-'" + s2.getName() + "'!"
                        + CellReferenceHelper.getCellReference(3 * (i - mmAmpColIndex[0]) + 6, 7));
                s1.addCell(fmla);
            }

            ColIndex[0] = ColIndex[0] + 1;
        }
        TSqrtMinusVirtOriginFromFitColIndex[1] = ColIndex[0] - 1;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return TSqrtMinusVirtOriginFromFitColIndex;
}

private int[] TSquaredMinusVirtOriginFromFitColumn(WritableSheet s1, WritableSheet s2, int NumOfRows,
        int[] ColIndex, int[] mmAmpColIndex, int tSqrtAgColIndex,
        String AtwoodAbsAddress, String AccelAbsAddres)

{
    ColIndex[0] = ColIndex[0] + 2;
    int TSquaredMinusVirtOriginFromFitColIndex[] = new int[10];
    TSquaredMinusVirtOriginFromFitColIndex[0] = ColIndex[0];
    try {
        for (int i = mmAmpColIndex[0]; i <= mmAmpColIndex[1]; i++) {
            Label lbl = new Label(ColIndex[0], 0,
                    "TSquaredMinusVirtOri" + s1.getWritableCell(i, 0).getContents().replaceAll("ampmm", "(s)"));
            s1.addCell(lbl);
            for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
                Formula fmla = new Formula(ColIndex[0], RowIndex,
                        "POWER((" + CellReferenceHelper.getCellReference(tSqrtAgColIndex, RowIndex)
                        + "-'" + s2.getName() + "'!"
                        + CellReferenceHelper.getCellReference(3 * (i - mmAmpColIndex[0]) + 6, 7) + ")"
                        + "/SQRT(" + AtwoodAbsAddress + "*" + AccelAbsAddres + "*1000),2)");
                s1.addCell(fmla);
            }

            ColIndex[0] = ColIndex[0] + 1;
        }
        TSquaredMinusVirtOriginFromFitColIndex[1] = ColIndex[0] - 1;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return TSquaredMinusVirtOriginFromFitColIndex;
}

private int[] DerivativeColumn(WritableSheet s1, int NumOfRows, int[] ColIndex, int[] mmAmpColIndex, int TimeMsColIndex)

{
    ColIndex[0] = ColIndex[0] + 2;
    int DerivColIndex[] = new int[10];
    DerivColIndex[0] = ColIndex[0];
    try {
        for (int i = mmAmpColIndex[0]; i <= mmAmpColIndex[1]; i++) {
            Label lbl = new Label(ColIndex[0], 0,
                    "Deriv" + s1.getWritableCell(i, 0).getContents().replaceAll("ampmm", "m/s"));
            s1.addCell(lbl);
            for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
                StringBuffer TimePlus1 = new StringBuffer();
                StringBuffer TimeMinus1 = new StringBuffer();
                CellReferenceHelper.getCellReference(TimeMsColIndex, true, RowIndex + 1, false, TimePlus1);
                CellReferenceHelper.getCellReference(TimeMsColIndex, true, RowIndex - 1, false, TimeMinus1);
                // Centered difference: (h[i+1] - h[i-1]) / (t[i+1] - t[i-1])
                Formula fmla = new Formula(ColIndex[0], RowIndex,
                        "(" + CellReferenceHelper.getCellReference(i, RowIndex + 1) + "-"
                        + CellReferenceHelper.getCellReference(i, RowIndex - 1) + ")/("
                        + TimePlus1 + "-" + TimeMinus1 + ")");
                s1.addCell(fmla);
            }

            ColIndex[0] = ColIndex[0] + 1;
        }
        DerivColIndex[1] = ColIndex[0] - 1;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return DerivColIndex;
}

private int[] HdotSquOver4AgH(WritableSheet s1, int NumOfRows, int[] ColIndex, int[] mmAmpColIndex,
        int[] DerivColIndex, String AtwoodAbsAddress, String AccelAbsAddress)

{
    ColIndex[0] = ColIndex[0] + 2;
    int HdotSqO4AgHColIndex[] = new int[10];
    HdotSqO4AgHColIndex[0] = ColIndex[0];
    try {
        for (int i = mmAmpColIndex[0]; i <= mmAmpColIndex[1]; i++) {

            Label lbl = new Label(ColIndex[0], 0,
                    "HdotSqO4AgH" + s1.getWritableCell(i, 0).getContents().replaceAll("ampmm", ""));
            s1.addCell(lbl);
            for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
                Formula fmla = new Formula(ColIndex[0], RowIndex,
                        "POWER(" + CellReferenceHelper.getCellReference(DerivColIndex[0] + (i - mmAmpColIndex[0]), RowIndex)
                        + ",2)/(4*" + AtwoodAbsAddress + "*" + AccelAbsAddress + "/1000*"
                        + CellReferenceHelper.getCellReference(i, RowIndex) + ")");
                s1.addCell(fmla);
            }

            ColIndex[0] = ColIndex[0] + 1;
        }
        HdotSqO4AgHColIndex[1] = ColIndex[0] - 1;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return HdotSqO4AgHColIndex;
}

private int[] HdotSquOver4AgHVirtOrigin(WritableSheet s1, int NumOfRows, int[] ColIndex, int[] mmAmpColIndex,
        int[] DerivColIndex, int SubStackSlicesColIndex, String AtwoodAbsAddress, String AccelAbsAddress,
        String VirtOriginFrameAbsAddress)

{
    ColIndex[0] = ColIndex[0] + 2;
    int HdotSqO4AgHVirtOriginColIndex[] = new int[10];
    HdotSqO4AgHVirtOriginColIndex[0] = ColIndex[0];
    try {
        for (int i = mmAmpColIndex[0]; i <= mmAmpColIndex[1]; i++) {
            Label lbl = new Label(ColIndex[0], 0,
                    "HdotSqO4AgHVirtOrigin" + s1.getWritableCell(i, 0).getContents().replaceAll("ampmm", ""));
            s1.addCell(lbl);
            for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
                Formula fmla = new Formula(ColIndex[0], RowIndex,
                        "POWER(" + CellReferenceHelper.getCellReference(DerivColIndex[0] + (i - mmAmpColIndex[0]), RowIndex)
                        + ",2)/(4*" + AtwoodAbsAddress + "*" + AccelAbsAddress + "/1000*("
                        + CellReferenceHelper.getCellReference(i, RowIndex)
                        + "-INDIRECT(ADDRESS(" + VirtOriginFrameAbsAddress + "-"
                        + CellReferenceHelper.getCellReference(SubStackSlicesColIndex, 2)
                        + "+3," + (i + 1) + "))))");
                s1.addCell(fmla);
            }

            ColIndex[0] = ColIndex[0] + 1;
        }
        HdotSqO4AgHVirtOriginColIndex[1] = ColIndex[0] - 1;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return HdotSqO4AgHVirtOriginColIndex;
}

private int[] RunningParbolaFitAlpha(WritableSheet s1, int NumOfRows, int[] ColIndex, int[] mmAmpColIndex,
        int TimeSSquaredColIndex, String AtwoodAbsAddress, String AccelAbsAddress,
        int[] TSquaredMinusVirtOriginFromFitColIndex)

{
    ColIndex[0] = ColIndex[0] + 2;
    int RunningParbolaFitAlphaColIndex[] = new int[10];
    RunningParbolaFitAlphaColIndex[0] = ColIndex[0];
    try {
        Label lbl = new Label(ColIndex[0], 0, "Box Size");
        s1.addCell(lbl);
        Number num = new Number(ColIndex[0], 1, 5);
        s1.addCell(num);
        StringBuffer sb = new StringBuffer();
        CellReferenceHelper.getCellReference(ColIndex[0], true, 1, true, sb);
        StringBuffer AbsAddressParabFitBoxSize = sb;
        ColIndex[0] = ColIndex[0] + 1;
        for (int i = mmAmpColIndex[0]; i <= mmAmpColIndex[1]; i++) {
            lbl = new Label(ColIndex[0], 0,
                    "ParbolaFitAlpha" + s1.getWritableCell(i, 0).getContents().replaceAll("ampmm", ""));
            s1.addCell(lbl);
            for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
                Formula fmla = new Formula(ColIndex[0], RowIndex,
                        "SLOPE(OFFSET(" + CellReferenceHelper.getCellReference(i, RowIndex)
                        + ",-FLOOR(" + AbsAddressParabFitBoxSize + "/2,1),0,FLOOR("
                        + AbsAddressParabFitBoxSize + "/2,1)*2+1),OFFSET("
                        + CellReferenceHelper.getCellReference(
                                TSquaredMinusVirtOriginFromFitColIndex[0] + i - mmAmpColIndex[0], RowIndex)
                        + ",-FLOOR(" + AbsAddressParabFitBoxSize + "/2,1),0,FLOOR("
                        + AbsAddressParabFitBoxSize + "/2,1)*2+1))/(" + AtwoodAbsAddress + "*"
                        + AccelAbsAddress + "*1000)");
                s1.addCell(fmla);
            }

            ColIndex[0] = ColIndex[0] + 1;
        }
        RunningParbolaFitAlphaColIndex[1] = ColIndex[0] - 1;

    } catch (Exception e) {
        e.printStackTrace();
    }
    return RunningParbolaFitAlphaColIndex;
}

private int[] RunningParbolaFitLeastSquaresAlpha(WritableSheet s1, int NumOfRows, int[] ColIndex,
        int[] mmAmpColIndex, int TimeSRenormColIndex, int TimeSSquaredColIndex,
        String AtwoodAbsAddress, String AccelAbsAddress)

{
    ColIndex[0] = ColIndex[0] + 2;
    int RunningParbolaFitLeastSquaresAlphaColIndex[] = new int[10];
    RunningParbolaFitLeastSquaresAlphaColIndex[0] = ColIndex[0];
    try {
        Label lbl = new Label(ColIndex[0], 0, "Box Size");
        WritableCellFeatures features = new WritableCellFeatures();
        features.setComment("This uses a running parabolic least squares fit in excel.\n"
                + "To do this we use a command like LINEST(C5:C9,A5:A9^{1,2}), however\n"
                + "I had trouble in open office just taking one term. =LINEST(C4:C8,A4:A8:B4:B8,1,1),\n"
                + "seems to work. {1,2} essentially is time array raised to the 1 : then raised to the 2.\n"
                + "The extra 1,1 at the end adds calculate b TRUE (default) and calc extra statistics,\n"
                + "index 3,1 is R^2.\n"
                + "We can use INDEX(LINEST(),1,1), but here we only want first term, so it's ommited.\n"
                + "OFFSET command determines range of cells for fit, FLOOR is used for the box size/2, to\n"
                + "prevent it from being non-integer.", 5, 15); // This adds a comment to the cell in excel
        lbl.setCellFeatures(features);
        s1.addCell(lbl);

        Number num = new Number(ColIndex[0], 1, 5);
        s1.addCell(num);
        StringBuffer sb = new StringBuffer();
        CellReferenceHelper.getCellReference(ColIndex[0], true, 1, true, sb);
        StringBuffer AbsAddressParabFitLeastSquaredBoxSize = sb;
        ColIndex[0] = ColIndex[0] + 1;
        for (int i = mmAmpColIndex[0]; i <= mmAmpColIndex[1]; i++) {
            lbl = new Label(ColIndex[0], 0,
                    "ParbolaFitLeastSquaresAlpha" + s1.getWritableCell(i, 0).getContents().replaceAll("ampmm", ""));
            s1.addCell(lbl);
            for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
                Formula fmla = new Formula(ColIndex[0], RowIndex,
                        "LINEST(OFFSET(" + CellReferenceHelper.getCellReference(i, RowIndex)
                        + ",-FLOOR(" + AbsAddressParabFitLeastSquaredBoxSize + "/2,1),0,FLOOR("
                        + AbsAddressParabFitLeastSquaredBoxSize + "/2,1)*2+1),"
                        + "OFFSET(" + CellReferenceHelper.getCellReference(TimeSRenormColIndex, RowIndex)
                        + ",-FLOOR(" + AbsAddressParabFitLeastSquaredBoxSize + "/2,1),0,FLOOR("
                        + AbsAddressParabFitLeastSquaredBoxSize + "/2,1)*2+1)"
                        // Just to separate things a bit
                        + ":OFFSET(" + CellReferenceHelper.getCellReference(TimeSSquaredColIndex, RowIndex)
                        + ",-FLOOR(" + AbsAddressParabFitLeastSquaredBoxSize + "/2,1),0,FLOOR("
                        + AbsAddressParabFitLeastSquaredBoxSize + "/2,1)*2+1),1,1)"
                        + "/(" + AtwoodAbsAddress + "*" + AccelAbsAddress + "*1000)");
                s1.addCell(fmla);
            }

            ColIndex[0] = ColIndex[0] + 1;
        }
        RunningParbolaFitLeastSquaresAlphaColIndex[1] = ColIndex[0] - 1;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return RunningParbolaFitLeastSquaresAlphaColIndex;
}

private int[] ReynoldsNumber(WritableSheet s1, int NumOfRows, int[] ColIndex, int[] mmAmpColIndex,
        int[] DerivColIndex, String AvgKinemViscAbsAddress)

{
    ColIndex[0] = ColIndex[0] + 2;
    int ReynoldsNumberColIndex[] = new int[10];
    ReynoldsNumberColIndex[0] = ColIndex[0];
    try {
        for (int i = mmAmpColIndex[0]; i <= mmAmpColIndex[1]; i++) {
            Label lbl = new Label(ColIndex[0], 0,
                    "ReynoldsNumber" + s1.getWritableCell(i, 0).getContents().replaceAll("ampmm", ""));
            s1.addCell(lbl);
            for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {

                Formula fmla = new Formula(ColIndex[0], RowIndex,
                        CellReferenceHelper.getCellReference(i, RowIndex) + "/1000*"
                        + CellReferenceHelper.getCellReference(DerivColIndex[0] + (i - mmAmpColIndex[0]), RowIndex)
                        + "/" + AvgKinemViscAbsAddress);
                s1.addCell(fmla);
            }

            ColIndex[0] = ColIndex[0] + 1;
        }
        ReynoldsNumberColIndex[1] = ColIndex[0] - 1;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return ReynoldsNumberColIndex;
}

private void ReplicateToAllData(WritableSheet s1, WritableSheet s2, int NumOfRows,
        int[] ColIdxShtAllData, int[] HdotSqO4AgHColIndex)

{
    ColIdxShtAllData[0] = 1;
    try {
        for (int i = 0; i <= HdotSqO4AgHColIndex[1]; i++) {
            Label lbl = new Label(ColIdxShtAllData[0], 0, s1.getWritableCell(i, 0).getContents());
            s2.addCell(lbl);
            for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
                Formula fmla = new Formula(ColIdxShtAllData[0], RowIndex,
                        "IF(NOT(ISNUMBER('" + s1.getName() + "'!"
                        + CellReferenceHelper.getCellReference(i, RowIndex) + ")),\"\",'" + s1.getName() + "'!"
                        + CellReferenceHelper.getCellReference(i, RowIndex) + ")");
                s2.addCell(fmla);
            }

            ColIdxShtAllData[0] = ColIdxShtAllData[0] + 1;
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

private void Fit1Sqrt(WritableSheet s1, WritableSheet s2, int NumOfRows, int[] ColIdxShtForFit,
        int[] SQRTAmpColIndex, int tSqrtAgColIndex, int StartSlice, int Fit1SliceBegin, int Fit1SliceEnd)

{
    ColIdxShtForFit[0] = 0;
    try {
        int Fit1RowRangeTop = Fit1SliceBegin - StartSlice + 2;
        int Fit1RowRangeBot = Fit1SliceEnd - StartSlice + 2;
        Label lbl = new Label(ColIdxShtForFit[0], 0,
                "Fit1 " + s1.getWritableCell(tSqrtAgColIndex, 0).getContents());
        s2.addCell(lbl);
        for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
            Formula fmla = new Formula(ColIdxShtForFit[0], RowIndex, "'" + s1.getName() + "'!"
                    + CellReferenceHelper.getCellReference(tSqrtAgColIndex, RowIndex));
            s2.addCell(fmla);
        }

        int tSqrtAgColIndexForFit = ColIdxShtForFit[0];

        ColIdxShtForFit[0] = ColIdxShtForFit[0] + 1;

        ColIdxShtForFit[0] = ColIdxShtForFit[0] + 1;
        lbl = new Label(ColIdxShtForFit[0], 1, "Range:");
        s2.addCell(lbl);
        StringBuffer AbsAddressRangeLower = new StringBuffer();
        Number num = new Number(ColIdxShtForFit[0], 2, Fit1RowRangeTop);
        s2.addCell(num);
        CellReferenceHelper.getCellReference(ColIdxShtForFit[0], true, 2, true, AbsAddressRangeLower);
        StringBuffer AbsAddressRangeUpper = new StringBuffer();
        num = new Number(ColIdxShtForFit[0], 3, Fit1RowRangeBot);
        s2.addCell(num);
        CellReferenceHelper.getCellReference(ColIdxShtForFit[0], true, 3, true, AbsAddressRangeUpper);

        ColIdxShtForFit[0] = ColIdxShtForFit[0] + 2;
        for (int i = SQRTAmpColIndex[0]; i <= SQRTAmpColIndex[1]; i++) {
            lbl = new Label(ColIdxShtForFit[0], 0, "Fit1 " + s1.getWritableCell(i, 0).getContents());
            s2.addCell(lbl);
            lbl = new Label(ColIdxShtForFit[0] + 1, 0, "Fit1 " + s1.getWritableCell(i, 0).getContents()
                    .replaceAll("SQRT", "SQRTDerivTSqrtAg"));
            s2.addCell(lbl);
            for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
                Formula fmla = new Formula(ColIdxShtForFit[0], RowIndex, "'" + s1.getName() + "'!"
                        + CellReferenceHelper.getCellReference(i, RowIndex));
                s2.addCell(fmla);
                StringBuffer ScaledTimePlus1 = new StringBuffer();
                StringBuffer ScaledTimeMinus1 = new StringBuffer();
                CellReferenceHelper.getCellReference(tSqrtAgColIndexForFit, true, RowIndex + 1, false, ScaledTimePlus1);
                CellReferenceHelper.getCellReference(tSqrtAgColIndexForFit, true, RowIndex - 1, false, ScaledTimeMinus1);
                fmla = new Formula(ColIdxShtForFit[0] + 1, RowIndex,
                        "(" + CellReferenceHelper.getCellReference(ColIdxShtForFit[0], RowIndex + 1) + "-"
                        + CellReferenceHelper.getCellReference(ColIdxShtForFit[0], RowIndex - 1) + ")/("
                        + ScaledTimePlus1 + "-" + ScaledTimeMinus1 + ")");
                s2.addCell(fmla);
            }

            lbl = new Label(ColIdxShtForFit[0] + 2, 0,
                    s2.getWritableCell(ColIdxShtForFit[0], 0).getContents() + "vs. Sqrt(Ag)t");
            s2.addCell(lbl);

            lbl = new Label(ColIdxShtForFit[0] + 2, 4, "SLOPE, R^2, X-INTERCEPT, Y-INTERCEPT AND Alpha:");
            s2.addCell(lbl);
            Formula fmla = new Formula(ColIdxShtForFit[0] + 2, 5, "SLOPE(OFFSET("
                    + CellReferenceHelper.getCellReference(ColIdxShtForFit[0], 0) + ","
                    + AbsAddressRangeLower + "-1,0,"
                    + AbsAddressRangeUpper + "-" + AbsAddressRangeLower + "+1,1),OFFSET("
                    + CellReferenceHelper.getCellReference(tSqrtAgColIndexForFit, 0) + ","
                    + AbsAddressRangeLower + "-1,0,"
                    + AbsAddressRangeUpper + "-" + AbsAddressRangeLower + "+1,1))");
            s2.addCell(fmla);
            fmla = new Formula(ColIdxShtForFit[0] + 2, 6, "RSQ(OFFSET("
                    + CellReferenceHelper.getCellReference(ColIdxShtForFit[0], 0) + ","
                    + AbsAddressRangeLower + "-1,0,"
                    + AbsAddressRangeUpper + "-" + AbsAddressRangeLower + "+1,1),OFFSET("
                    + CellReferenceHelper.getCellReference(tSqrtAgColIndexForFit, 0) + ","
                    + AbsAddressRangeLower + "-1,0,"
                    + AbsAddressRangeUpper + "-" + AbsAddressRangeLower + "+1,1))");
            s2.addCell(fmla);

            fmla = new Formula(ColIdxShtForFit[0] + 2, 7, "-"
                    + CellReferenceHelper.getCellReference(ColIdxShtForFit[0] + 2, 8) + "/"
                    + CellReferenceHelper.getCellReference(ColIdxShtForFit[0] + 2, 5));
            s2.addCell(fmla);

            fmla = new Formula(ColIdxShtForFit[0] + 2, 8, "INTERCEPT(OFFSET("
                    + CellReferenceHelper.getCellReference(ColIdxShtForFit[0], 0) + ","
                    + AbsAddressRangeLower + "-1,0,"
                    + AbsAddressRangeUpper + "-" + AbsAddressRangeLower + "+1,1),OFFSET("
                    + CellReferenceHelper.getCellReference(tSqrtAgColIndexForFit, 0) + ","
                    + AbsAddressRangeLower + "-1,0,"
                    + AbsAddressRangeUpper + "-" + AbsAddressRangeLower + "+1,1))");
            s2.addCell(fmla);

            fmla = new Formula(ColIdxShtForFit[0] + 2, 9, "POWER("
                    + CellReferenceHelper.getCellReference(ColIdxShtForFit[0] + 2, 5) + ",2)");
            s2.addCell(fmla);

            ColIdxShtForFit[0] = ColIdxShtForFit[0] + 3;
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

private void Fit2Sqrt(WritableSheet s2, WritableSheet s3, int NumOfRows, int[] ColIdxShtForFitAllData,
        int[] SQRTAmpColIndex, int tSqrtAgColIndex, int StartSlice, int Fit1SliceBegin, int Fit1SliceEnd)

{
    ColIdxShtForFitAllData[0] = ColIdxShtForFitAllData[0] + 5;
    try {
        int Fit1RowRangeTop = Fit1SliceBegin - StartSlice + 2;
        int Fit1RowRangeBot = Fit1SliceEnd - StartSlice + 2;
        Label lbl = new Label(ColIdxShtForFitAllData[0], 0,
                "Fit2All " + s2.getWritableCell(tSqrtAgColIndex + 1, 0).getContents());
        s3.addCell(lbl);
        for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
            Formula fmla = new Formula(ColIdxShtForFitAllData[0], RowIndex, "'" + s2.getName() + "'!"
                    + CellReferenceHelper.getCellReference(tSqrtAgColIndex + 1, RowIndex));
            s3.addCell(fmla);
        }

        int tSqrtAgColIndexForFitAll = ColIdxShtForFitAllData[0];

        ColIdxShtForFitAllData[0] = ColIdxShtForFitAllData[0] + 1;

        ColIdxShtForFitAllData[0] = ColIdxShtForFitAllData[0] + 1;
        lbl = new Label(ColIdxShtForFitAllData[0], 1, "Range:");
        s3.addCell(lbl);
        StringBuffer AbsAddressRangeLower = new StringBuffer();
        Number num = new Number(ColIdxShtForFitAllData[0], 2, Fit1RowRangeTop);
        s3.addCell(num);
        CellReferenceHelper.getCellReference(ColIdxShtForFitAllData[0], true, 2, true, AbsAddressRangeLower);
        StringBuffer AbsAddressRangeUpper = new StringBuffer();
        num = new Number(ColIdxShtForFitAllData[0], 3, Fit1RowRangeBot);
        s3.addCell(num);

        CellReferenceHelper.getCellReference(ColIdxShtForFitAllData[0], true, 3, true, AbsAddressRangeUpper);

        ColIdxShtForFitAllData[0] = ColIdxShtForFitAllData[0] + 2;
        for (int i = SQRTAmpColIndex[0] + 1; i <= SQRTAmpColIndex[1] + 1; i++) { // +1 because Offset ne column for data set label
            lbl = new Label(ColIdxShtForFitAllData[0], 0, "Fit2All " + s2.getWritableCell(i, 0).getContents());
            s3.addCell(lbl);
            lbl = new Label(ColIdxShtForFitAllData[0] + 1, 0, "Fit2All " + s2.getWritableCell(i, 0).getContents()
                    .replaceAll("SQRT", "SQRTDerivTSqrtAg"));
            s3.addCell(lbl);
            for (int RowIndex = 1; RowIndex <= NumOfRows - 1; RowIndex++) {
                Formula fmla = new Formula(ColIdxShtForFitAllData[0], RowIndex, "'" + s2.getName() + "'!"
                        + CellReferenceHelper.getCellReference(i, RowIndex));
                s3.addCell(fmla);
                StringBuffer ScaledTimePlus1 = new StringBuffer();
                StringBuffer ScaledTimeMinus1 = new StringBuffer();
                CellReferenceHelper.getCellReference(tSqrtAgColIndexForFitAll, true, RowIndex + 1, false, ScaledTimePlus1);
                CellReferenceHelper.getCellReference(tSqrtAgColIndexForFitAll, true, RowIndex - 1, false, ScaledTimeMinus1);
                fmla = new Formula(ColIdxShtForFitAllData[0] + 1, RowIndex,
                        "(" + CellReferenceHelper.getCellReference(ColIdxShtForFitAllData[0], RowIndex + 1) + "-"
                        + CellReferenceHelper.getCellReference(ColIdxShtForFitAllData[0], RowIndex - 1) + ")/("
                        + ScaledTimePlus1 + "-" + ScaledTimeMinus1 + ")");
                s3.addCell(fmla);
            }

            lbl = new Label(ColIdxShtForFitAllData[0] + 2, 0,
                    s3.getWritableCell(ColIdxShtForFitAllData[0], 0).getContents() + "vs.Sqrt(Ag)t");
            s3.addCell(lbl);

            lbl = new Label(ColIdxShtForFitAllData[0] + 2, 4, "Line Fit and R^2: ");
            s3.addCell(lbl);
            Formula fmla = new Formula(ColIdxShtForFitAllData[0] + 2, 5, "SLOPE(OFFSET("
                    + CellReferenceHelper.getCellReference(ColIdxShtForFitAllData[0], 0) + ","
                    + AbsAddressRangeLower + "-1,0,"
                    + AbsAddressRangeUpper + "-" + AbsAddressRangeLower + "+1,1),OFFSET("
                    + CellReferenceHelper.getCellReference(tSqrtAgColIndexForFitAll, 0) + ","
                    + AbsAddressRangeLower + "-1,0,"
                    + AbsAddressRangeUpper + "-" + AbsAddressRangeLower + "+1,1))");
            s3.addCell(fmla);
            fmla = new Formula(ColIdxShtForFitAllData[0] + 2, 6, "RSQ(OFFSET("
                    + CellReferenceHelper.getCellReference(ColIdxShtForFitAllData[0], 0) + ","
                    + AbsAddressRangeLower + "-1,0,"
                    + AbsAddressRangeUpper + "-" + AbsAddressRangeLower + "+1,1),OFFSET("
                    + CellReferenceHelper.getCellReference(tSqrtAgColIndexForFitAll, 0) + ","
                    + AbsAddressRangeLower + "-1,0,"
                    + AbsAddressRangeUpper + "-" + AbsAddressRangeLower + "+1,1))");
            s3.addCell(fmla);
            fmla = new Formula(ColIdxShtForFitAllData[0] + 2, 9, "POWER("
                    + CellReferenceHelper.getCellReference(ColIdxShtForFitAllData[0] + 2, 5) + ",2)");
            s3.addCell(fmla);

            ColIdxShtForFitAllData[0] = ColIdxShtForFitAllData[0] + 3;
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

private void MaskCells(WritableSheet s1, int StartColRange, int EndColRange, int StartRowRange, int EndRowRange)

{
    try {
        for (int i = StartColRange; i <= EndColRange; i++) {
            for (int RowIndex = StartRowRange; RowIndex <= EndRowRange; RowIndex++) {
                // Prefix the cell contents with "*=" so UnMaskCells can restore them with substring(2).
                Label lbl = new Label(i, RowIndex, "*=" + s1.getWritableCell(i, RowIndex).getContents());
                s1.addCell(lbl);
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

private void UnMaskCells(WritableSheet s1, int StartColRange, int EndColRange, int StartRowRange, int EndRowRange)

{
    try {
        for (int i = StartColRange; i <= EndColRange; i++) {
            for (int RowIndex = StartRowRange; RowIndex <= EndRowRange; RowIndex++) {
                Formula fmla = new Formula(i, RowIndex,
                        s1.getWritableCell(i, RowIndex).getContents().substring(2));
                s1.addCell(fmla);
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

private void MetaDataCol(WritableSheet s1, int[] ColIndex, int[] AmpColIndex, int ExpInfoColIndex,
        int NumOfRows, int[] AccelColIndex, int SlicesColIndex, int SubStackSlicesColIndex,
        int TimeMsColIndex, int TimeSColIndex, int TimeSRenormColIndex, int TimeSSquaredColIndex,
        int tSqrtAgColIndex, int[] mmAmpColIndex, int[] SQRTAmpColIndex, int[] DerivColIndex,
        int[] HdotSqO4AgHColIndex, int[] RunningParbolaFitLeastSquaresAlphaColIndex,
        int[] TSqrtMinusVirtOriginFromFitColIndex, int[] TSquaredMinusVirtOriginFromFitColIndex,
        int[] ReynoldsNumberColIndex) {
    try {
        Label lbl = new Label(ColIndex[0], 0, "Meta Data:"); s1.addCell(lbl);
        lbl = new Label(ColIndex[0], 1, "NumOfDataRows:"); s1.addCell(lbl);
        Number num = new Number(ColIndex[0], 2, NumOfRows); s1.addCell(num);
        lbl = new Label(ColIndex[0], 3, "AmpColIndexBegin:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 4, AmpColIndex[0]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 5, "AmpColIndexEnd:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 6, AmpColIndex[1]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 7, "ExpInfoColIndex:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 8, ExpInfoColIndex); s1.addCell(num);
        lbl = new Label(ColIndex[0], 9, "AccelColIndexBegin:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 10, AccelColIndex[0]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 11, "AccelColIndexEnd:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 12, AccelColIndex[1]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 13, "SlicesColIndex:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 14, SlicesColIndex); s1.addCell(num);
        lbl = new Label(ColIndex[0], 15, "SubStackSlicesColIndex:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 16, SubStackSlicesColIndex); s1.addCell(num);
        lbl = new Label(ColIndex[0], 17, "TimeMsColIndex:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 18, TimeMsColIndex); s1.addCell(num);
        lbl = new Label(ColIndex[0], 19, "TimeSColIndex:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 20, TimeSColIndex); s1.addCell(num);
        lbl = new Label(ColIndex[0], 21, "TimeSRenormColIndex:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 22, TimeSRenormColIndex); s1.addCell(num);
        lbl = new Label(ColIndex[0], 23, "TimeSSquaredColIndex:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 24, TimeSSquaredColIndex); s1.addCell(num);
        lbl = new Label(ColIndex[0], 25, "tSqrtAgColIndex:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 26, tSqrtAgColIndex); s1.addCell(num);
        lbl = new Label(ColIndex[0], 27, "mmAmpColIndexBegin:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 28, mmAmpColIndex[0]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 29, "mmAmpColIndexEnd:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 30, mmAmpColIndex[1]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 31, "SQRTAmpColIndexBegin:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 32, SQRTAmpColIndex[0]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 33, "SQRTAmpColIndexEnd:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 34, SQRTAmpColIndex[1]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 35, "DerivColIndexBegin:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 36, DerivColIndex[0]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 37, "DerivColIndexEnd:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 38, DerivColIndex[1]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 39, "HdotSqO4AgHColIndexBegin:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 40, HdotSqO4AgHColIndex[0]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 41, "HdotSqO4AgHColIndexEnd:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 42, HdotSqO4AgHColIndex[1]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 43, "RunningParbolaFitLeastSquaresAlphaColIndexBegin:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 44, RunningParbolaFitLeastSquaresAlphaColIndex[0]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 45, "RunningParbolaFitLeastSquaresAlphaColIndexEnd:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 46, RunningParbolaFitLeastSquaresAlphaColIndex[1]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 47, "TSqrtMinusVirtOriginFromFitColIndexBegin:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 48, TSqrtMinusVirtOriginFromFitColIndex[0]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 49, "TSqrtVirtOriginFromFitColIndexEnd:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 50, TSqrtMinusVirtOriginFromFitColIndex[1]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 51, "TSquaredMinusVirtOriginFromFitColIndexBegin:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 52, TSquaredMinusVirtOriginFromFitColIndex[0]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 53, "TSquaredVirtOriginFromFitColIndexEnd:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 54, TSquaredMinusVirtOriginFromFitColIndex[1]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 55, "ReynoldsNumberColIndexBegin:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 56, ReynoldsNumberColIndex[0]); s1.addCell(num);
        lbl = new Label(ColIndex[0], 57, "ReynoldsNumberColIndexEnd:"); s1.addCell(lbl);
        num = new Number(ColIndex[0], 58, ReynoldsNumberColIndex[1]); s1.addCell(num);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
}

B.8 Java Stack Ensemble Average Program
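The program listed below ensemble-averages many experimental image stacks pixel by pixel (optionally normalizing and aligning each stack first) and writes the result out as a TIFF. The core operation it performs, stripped of the ImageJ and file-dialog machinery, is a minimal pixel-wise average over a list of equally sized frames. The sketch below is illustrative only and not part of the dissertation's program; the class and method names are hypothetical.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the pixel-wise ensemble average the plugin computes,
// reduced to plain 2D double arrays (one array per experimental realization).
public class EnsembleAverageSketch {
    // Average a list of equally sized "images" pixel by pixel.
    static double[][] average(List<double[][]> stacks) {
        int rows = stacks.get(0).length, cols = stacks.get(0)[0].length;
        double[][] avg = new double[rows][cols];
        for (double[][] img : stacks)
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++)
                    avg[r][c] += img[r][c] / stacks.size();
        return avg;
    }

    public static void main(String[] args) {
        List<double[][]> stacks = Arrays.asList(
                new double[][]{{1, 2}, {3, 4}},
                new double[][]{{3, 2}, {1, 0}});
        double[][] avg = average(stacks);
        System.out.println(avg[0][0] + " " + avg[1][1]); // prints "2.0 2.0"
    }
}
```

The actual program below applies the same idea to full ImageJ `ImageStack`s (a 3D array of slice, row, column) rather than single frames.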

package stackensembleaverage;

import java.lang.Math;
import java.awt.*;
import java.util.*;
import java.awt.event.*;
import java.io.*;
import java.net.*;
import java.awt.image.*;
import javax.swing.event.*;
import javax.swing.table.TableModel;
import java.awt.BorderLayout;
import javax.swing.*;
import ij.gui.*;
import ij.gui.Roi.*;
import ij.process.*;
import ij.io.*;
import ij.plugin.*;
import ij.plugin.filter.*;
import ij.plugin.frame.*;
import ij.text.*;
import ij.io.Opener;
import ij.util.*;
import ij.*;
import ij.IJ;

import ij.plugin.PlugIn;

// @author Mike Roberts
public class Main {

    public static void main(String[] args) {
        Main main = new Main();

        new ImageJ();

        IJ.error("Hello Sir or Madam!");

        JFrame f = new JFrame("Ensemble Stack directory");
        JOptionPane.showMessageDialog(f,
                "Select random file in directory for Averaged Stack file to be stored");

        JFileChooser Chooser = new JFileChooser();
        int returnVal = Chooser.showOpenDialog(null);
        String PathWhereAllDataWillBe = Chooser.getCurrentDirectory().toString();
        System.out.println(PathWhereAllDataWillBe);

        GenericDialog gd = new GenericDialog("Do We invert?");

        gd.addMessage("How should we manipulate each stack before adding");
        gd.addCheckbox("Normalize", true);
        gd.addCheckbox("Align Vertical With Interface", true);
        gd.addCheckbox("Make Substack of images to include", false);
        gd.addCheckbox("Register (Align) the Images", false);
        gd.addCheckbox("Use a typical Average as opposed to dividing the image sum by a different number "
                + "(true is Typical average)", true);

        gd.showDialog();
        boolean Normalize = gd.getNextBoolean();
        boolean AlignVertical = gd.getNextBoolean();
        boolean MakeSubstack = gd.getNextBoolean();
        boolean RegisterImages = gd.getNextBoolean();
        boolean TypicalAverage = gd.getNextBoolean();

        try {
            Object[] AddEachStackReturned = main.AddEachStack(PathWhereAllDataWillBe,
                    Normalize, AlignVertical, MakeSubstack, RegisterImages);
            int NumOfStacks = (Integer) AddEachStackReturned[0];
            ArrayList FolderNames = (ArrayList) AddEachStackReturned[1];
            ArrayList Stack = (ArrayList) AddEachStackReturned[2];

            double[][][] AverageArray = main.AverageArray(Stack, TypicalAverage);
            ImageStack AverageStack = main.Double3DToStack(AverageArray);
            ImagePlus AverageStackPlus = new ImagePlus("EnsembleAverage", AverageStack);
            AverageStackPlus.show();
            IJ.saveAs(AverageStackPlus, ".tif", PathWhereAllDataWillBe + "/Ensemble");

        } catch (Exception e) {
            e.printStackTrace();
        }

        new WaitForUserDialog("When Ready to Exit Click OKay").show();

        System.exit(0);
    }

    public Object[] AddEachStack(String PathWhereAllDataWillBe, boolean Normalize,
            boolean AlignVertical, boolean MakeSubstack, boolean RegisterImages) {
        Main main = new Main();
        Object[] OpenEachStackReturned = new Object[3];

        JFrame f = new JFrame("Select Stacks");
        JOptionPane.showMessageDialog(f,
                "Select Stacks to be Ensemble Averaged (press cancel when done selecting)");

        JFileChooser Chooser = new JFileChooser(PathWhereAllDataWillBe);
        Chooser.setMultiSelectionEnabled(true);

        int StackIndex = 0;
        ArrayList FolderName = new ArrayList();
        ArrayList Stack = new ArrayList();
        int GlobalIntfLoc[] = new int[1];
        GlobalIntfLoc[0] = 0;

        try {
            int returnVal = Chooser.showOpenDialog(null);
            while (returnVal == JFileChooser.APPROVE_OPTION) { // loop and keep opening file selection windows
                File[] fileToOpen = Chooser.getSelectedFiles();
                for (int i = 0; i < fileToOpen.length; i++) {
                    try {
                        String Path = fileToOpen[i].getPath();

                        FolderName.add((new File(Path)).getName());

                        Object[] OpenAndManipulateStackReturned = main.OpenAndManipulateStack(
                                fileToOpen[i], StackIndex, Normalize, AlignVertical,
                                GlobalIntfLoc, MakeSubstack, RegisterImages);
                        ImagePlus ManipulatedStack = (ImagePlus) OpenAndManipulateStackReturned[0];

                        Stack.add(ManipulatedStack);
                        StackIndex++;

                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
                returnVal = Chooser.showOpenDialog(null);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        OpenEachStackReturned[0] = StackIndex;
        OpenEachStackReturned[1] = FolderName;
        OpenEachStackReturned[2] = Stack;
        return OpenEachStackReturned;
    }

    public Object[] OpenAndManipulateStack(File fileToOpen, int StackIndex, boolean Normalize,
            boolean AlignVertical, int[] GlobalIntfcLoc, boolean MakeSubstack, boolean RegisterImages) {
        Object[] OpenAndManipulateStackReturned = new Object[1];

        ImagePlus Stack = IJ.openImage(fileToOpen.toString());
        Stack.show();

        if (MakeSubstack) {
            new WaitForUserDialog("Figure out Start (First Frame Jump " + "\n"
                    + "or 0 g mark often on NonAvg (from accel time, (time / frame period) + 1))" + "\n"
                    + "and End Slice and Remember for a sec").show();
            int startslice = (int) IJ.getNumber("Experiment Start Slice (First Frame Jump " + "\n"
                    + "or 0 g mark often on NonAvg (from accel time, (time / frame period) + 1))", 30);
            int endslice = (int) IJ.getNumber("Last Slice", 180);
            IJ.run("Substack Maker", "slices=" + startslice + "-" + endslice);
            Stack.changes = false;
            Stack.close();
            Stack = WindowManager.getCurrentImage();
            Stack.setTitle("Substack " + Integer.toString(StackIndex));
        }

        if (RegisterImages) {
            IJ.run("Select None");
            int norm_slice = 2;
            IJ.selectWindow(Stack.getID());
            IJ.run("Substack Maker", "slices=" + norm_slice); // make duplicate slice for normalization
            ImagePlus AlignNormStack = WindowManager.getCurrentImage();
            AlignNormStack.setTitle("stack for alignment normalize");
            IJ.selectWindow(Stack.getID());
            new WaitForUserDialog("select area (mixing region) to be excluded from image "
                    + "registration (alignment) algorithm").show();
            Rectangle ExcludeTurboReg = (Stack.getRoi().getBounds());
            IJ.selectWindow("stack for alignment normalize");
            AlignNormStack.setRoi(ExcludeTurboReg);
            IJ.run("Add Slice");
            AlignNormStack.getStack().getProcessor(2).setColor(0);
            IJ.run("Make Inverse");
            IJ.run("Set...", "value=100 slice");

            IJ.selectWindow(Stack.getID());
            IJ.run("TurboReg ");

            new WaitForUserDialog("Now you will need to set 'main substack' as the source " + "\n"
                    + "and 'stack for alignment normalize' as the target" + "\n"
                    + "and then click on Accurate and Rigid body and Batch, "
                    + "click on OK here when the process is finished").show();

            IJ.wait(100);
            Stack.changes = false;
            Stack.close();
            AlignNormStack.changes = false;
            AlignNormStack.close();
            IJ.selectWindow("Registered");
            Stack = WindowManager.getCurrentImage();
            Stack.setTitle("Substack " + Integer.toString(StackIndex));
        }

        if (AlignVertical) {
            IJ.setTool("line");
            new WaitForUserDialog("select interface location").show();
            int interface_loc = (int) Stack.getRoi().getBounds().getY();
            if (StackIndex == 0) {
                GlobalIntfcLoc[0] = interface_loc;
            }
            IJ.run(Stack, "Translate...", "x=0 y=" + (double) (GlobalIntfcLoc[0] - interface_loc)
                    + " interpolation=None stack");
        }

        if (Normalize) {
            IJ.run("Select None");
            IJ.setTool("rect");
            new WaitForUserDialog("Select area for lowest in darker liquid to rescale (subtract) to zero").show();
            ImageStatistics imgstat = Stack.getStatistics();
            double SubtractedValue = imgstat.min;
            IJ.run("Select None");
            IJ.run("Subtract...", "value=" + SubtractedValue + " stack"); // rescale to zero with darker fluid
            Stack.getProcessor().setMinAndMax(
                    Stack.getStatistics().mean - 3 * Stack.getStatistics().stdDev,
                    Stack.getStatistics().mean + 3 * Stack.getStatistics().stdDev);
            Stack.updateImage();

            new WaitForUserDialog("Select area in lighter liquid for average to Normalize to").show();
            imgstat = Stack.getStatistics();
            double NormalizeValue = imgstat.mean;
            IJ.run("Select None");
            IJ.run("Divide...", "value=" + NormalizeValue + " stack"); // normalize dark value with lowest value of all images
            Stack.getProcessor().setMinAndMax(0.000000000, 1.000000000);
            Stack.getProcessor().setMinAndMax(
                    Stack.getStatistics().mean - 3 * Stack.getStatistics().stdDev,
                    Stack.getStatistics().mean + 3 * Stack.getStatistics().stdDev);
            Stack.updateImage();
        }

        Stack.setTitle("ManipulatedStack" + Integer.toString(StackIndex));
        Stack.show();
        ImagePlus ManipulatedStack = WindowManager.getCurrentImage();
        OpenAndManipulateStackReturned[0] = ManipulatedStack;

        return OpenAndManipulateStackReturned;
    }

    public double[][][] AverageArray(ArrayList Stack, boolean TypicalAverage) {
        int NumOfSlices = 1000;
        int ImageWidth = 0;
        int ImageHeight = 0;
        // find the largest width/height (and the shortest slice count) over all input stacks
        for (int m = 0; m < Stack.size(); m++) {
            ImagePlus imp = (ImagePlus) Stack.get(m);
            if (imp.getStackSize() < NumOfSlices) {
                NumOfSlices = imp.getStackSize();
            }
            if (imp.getWidth() > ImageWidth) {
                ImageWidth = imp.getWidth();
            }
            if (imp.getHeight() > ImageHeight) {
                ImageHeight = imp.getHeight();
            }
        }
        double[][][] AverageArray = new double[ImageWidth][ImageHeight][NumOfSlices];

        System.out.println("Average Start");
        // In Java all elements are initialised to 0 by default.
        int NumToDivideTotalBy = Stack.size();
        if (!TypicalAverage) {
            NumToDivideTotalBy = (int) IJ.getNumber(
                    "What number should the image sum be divided by", Stack.size());
        }

        // Average The Slices
        for (int m = 0; m < Stack.size(); m++) {
            ImagePlus imp = (ImagePlus) Stack.get(m);
            float[][][] ImageArray = StackToArray(imp);
            for (int k = 0; k < NumOfSlices; k++) {
                for (int i = 0; i < imp.getWidth(); i++) {
                    for (int j = 0; j < imp.getHeight(); j++) {
                        AverageArray[i][j][k] += ImageArray[i][j][k] / NumToDivideTotalBy;
                    }
                }
                System.out.println("Average Slice " + k + ", Input Stack " + m + " Done");
            }
        }
        System.out.println("Average Done");
        return AverageArray;

    }

    public float[][][] StackToArray(ImagePlus imp) {
        int dimension = imp.getWidth() * imp.getHeight();
        int ImageWidth = imp.getWidth();
        int ImageHeight = imp.getHeight();
        int NumOfSlices = imp.getStackSize();
        float[] pixels = new float[dimension];
        float[][][] ImageArray = new float[ImageWidth][ImageHeight][NumOfSlices];

        ImageStack stack = imp.getStack();
        for (int k = 0; k < NumOfSlices; k++) {
            pixels = (float[]) stack.getPixels(k + 1);
            for (int i = 0; i <= imp.getWidth() - 1; i++) {
                for (int j = 0; j <= imp.getHeight() - 1; j++) {
                    float p = pixels[ImageWidth * j + i];
                    if (!Float.isInfinite(p) && !Float.isNaN(p)) { // skip infinite or NaN pixel values
                        ImageArray[i][j][k] = p;
                    }
                }
            }
        }
        return ImageArray;
    }

    public ImageStack Double3DToStack(double[][][] StackArray) {
        int ImageWidth = StackArray.length;
        int ImageHeight = StackArray[0].length;
        int NumOfSlices = StackArray[0][0].length;

        ImageStack Stack = new ImageStack(StackArray.length, StackArray[0].length,
                StackArray[0][0].length);
        float[][] PixelsForSlice1D = new float[NumOfSlices][ImageWidth * ImageHeight];
        System.out.println("Put In Stack Begin");
        for (int k = 0; k < NumOfSlices; k++) {
            for (int i = 0; i <= ImageWidth - 1; i++) {
                for (int j = 0; j <= ImageHeight - 1; j++) {
                    PixelsForSlice1D[k][ImageWidth * j + i] = (float) StackArray[i][j][k];
                }
            }
            Stack.setPixels(PixelsForSlice1D[k], k + 1);
            System.out.println("Put In Stack Slice " + k + " Done");
        }
        System.out.println("Put In Stack End");
        return Stack;
    }

    //////////////////// the end ////////////////////////
    // 27 29 43 52 55 110 103 58 55 37 42 52 53 55 56 65 67 84 106 120 113 109 81 77 63 14 12 7
    // 71 80 18 3 13 58 76 96 99 102 104 71 54 48 18 15 13 14 15 45 63 85 112 114 78 74 41 31 26 58
    /////////////////// the end /////////////////////////
}
