
Summer 2014 – Spring 2015

HSGC Report Number 15-24

Compiled in 2015 by

HAWAI‘I SPACE GRANT CONSORTIUM

The Hawai‘i Space Grant Consortium is one of the fifty-two National Space Grant Colleges supported by the National Aeronautics and Space Administration (NASA).

Material in this volume may be copied for library, abstract service, education, or personal research; however, republication of any paper or portion thereof requires the written permission of the authors as well as appropriate acknowledgment of this publication.

This report may be cited as

Hawai‘i Space Grant Consortium (2015) Undergraduate Fellowship Reports. HSGC Report No. 15-24. Hawai‘i Space Grant Consortium, Honolulu.

Individual articles may be cited as

Author, A.B. (2015) Title of article. Undergraduate Fellowship Reports, pp. xx-xx. Hawai‘i Space Grant Consortium, Honolulu.

This report is distributed by:

Hawai‘i Space Grant Consortium
Hawai‘i Institute of Geophysics and Planetology
University of Hawai‘i at Mānoa
1680 East West Road, POST 501
Honolulu, HI 96822

Table of Contents

Foreword………………………………………………………………………………… i

REPORTS

RADIO FREQUENCY OVER OPTICAL FIBER DESIGN AND IMPLEMENTATION FOR THE EXAVOLT ANTENNA …………………………….……………………….. 1 James L. Bynes III University of Hawai‘i at Mānoa

DIGITAL IMAGERY AND GEOLOGIC MAP OF CHEGEM CALDERA, RUSSIA …9 Christina N. Cauley University of Hawai‘i at Hilo

DESIGN OF A WIRELESS POWER TRANSFER SYSTEM UTILIZING MICROWAVE FREQUENCIES ……..……………………………………..………… 17 Steven S. Ewers University of Hawai‘i at Mānoa

MULTI-WALLED CARBON NANOTUBE NANOFORESTS AS GAS DIFFUSION LAYERS FOR PROTON EXCHANGE MEMBRANE FUEL CELLS…..…………….25 Kathryn Hu University of Hawai‘i at Mānoa

CORRECTING SPECTRAL DATA FROM EXTRAGALACTIC STAR-FORMING REGIONS FOR ATMOSPHERIC DISPERSION………………………………………34 Casey M. Jones University of Hawai‘i at Hilo

DEVELOPMENT OF SAMPLE MOUNTING FOR STARDUST INTERSTELLAR CANDIDATES ………………………………………………………..…..…………….40 Logan K. Magad-Weiss University of Hawai‘i at Mānoa

DESIGN AND DEVELOPMENT OF A SUSPENSION SYSTEM USED IN ROUGH-TERRAIN VEHICLE CONTROL FOR VIBRATION SUPPRESSION IN PLANETARY EXPLORATION …………………………...…………...………………47 Arvin R. Niro University of Hawai‘i at Mānoa

ESTIMATION OF DAYTIME SLEEPINESS IN SPACEFLIGHT SUBJECTS...... 56 Roberto F. Ramilo Jr. University of Hawai‘i at Mānoa

DEVELOPING A SOFTWARE TOOL TO ENABLE CREATION OF COMMAND SCRIPTS FOR A SATELLITE MISSION ………………………………………...64 Erik K. Wessel University of Illinois at Urbana-Champaign

AN EMBEDDED MICROPROCESSOR DESIGN FOR THE PULSE SHAPE DISCRIMINATION OF A PLASTIC SCINTILLATING NEUTRON DETECTOR ……………...………………………………………………………………………….....72 Marcus J. Yamaguchi Kaua‘i Community College

STUDY OF THE MOST HARMFUL SOLAR ENERGETIC PARTICLE FOR SHIELDING NEXT HUMAN SPACE FLIGHTS…..……...…………...………………80 Bryan K. Yamashiro University of Hawai‘i at Mānoa

SUMMER 2014 REPORTS

NASA MARSHALL SPACE FLIGHT CENTER - INTERNSHIP PROGRAM

MODELING OF H2O ADSORPTION ON ZEOLITES ………………………….…….88 David H. Harris University of Hawai‘i at Mānoa

NASA WALLOPS FLIGHT FACILITY AND PACIFIC MISSILE RANGE FACILITY - INTERNSHIP PROGRAM

LOW DENSITY SUPERSONIC DECELERATOR ……………………………………98 Kolby M.K. Javinar University of Hawai‘i at Mānoa

LOW DENSITY SUPERSONIC DECELERATOR ….…………………………….....105 Jacob J. Matutino University of Hawai‘i at Mānoa

Foreword

This volume contains fourteen reports from Hawai‘i Space Grant Undergraduate Fellows at the University of Hawai‘i at Mānoa, the University of Hawai‘i at Hilo, and Kaua‘i Community College. The students worked on their projects during the Summer 2014, Fall 2014, and Spring 2015 semesters under the guidance of their faculty mentors and supervisors. We congratulate all of the students for their outstanding reports and warmly thank their faculty mentors and supervisors for generously supporting the Hawai‘i Space Grant Consortium Undergraduate Fellowship & Internship Programs.

The Hawai‘i Space Grant Consortium is supported by NASA through its National Space Grant College and Fellowship Program with matching funds from the University of Hawai‘i. The goal of the program is to strengthen the national capabilities in space-related science, technology, engineering, and mathematics (STEM) and to prepare the next generation of space scientists. All of the students’ projects are related to the goals of NASA’s Strategic Plan.

For more information about the Fellowship Program, please visit our website: http://www.spacegrant.hawaii.edu/fellowships.html

Edward R.D. Scott
Associate Director, Fellowships


RADIO FREQUENCY OVER OPTICAL FIBER DESIGN AND IMPLEMENTATION FOR THE EXAVOLT ANTENNA

James Lamar Bynes III
Department of Physics
University of Hawai‘i at Mānoa
Honolulu, HI 96822

ABSTRACT

The ExaVolt Antenna (EVA) is a planned ultra-high energy (UHE) particle observatory under development for NASA's suborbital super-pressure balloon program in Antarctica. EVA will use an antenna array to capture UHE events from deep space and transfer this information to a payload. In order to reduce weight and mitigate signal attenuation, RF signals will be transmitted across the balloon to the payload over an RF over optical fiber link, in place of traditional coaxial cables. A fiber transmitter and receiver pair is evaluated in this report to determine whether it will be reliable enough for the critical role it would play within EVA. The design and implementation of three test boards allowed careful evaluation of the fiber transmitter and receiver pair. It was concluded that, with careful microwave circuit design and controlled modulation of the Fabry-Perot laser within the transmitter, the link will be sufficient to send data within EVA to a payload. This project is currently under review for NASA's suborbital super-pressure balloon program and also complies with NASA Objective 1.6, with the overall goal of understanding the distant sources of UHE particles.

1 INTRODUCTION

1.1 Motivation

EVA is a planned NASA balloon-borne particle observatory capable of measuring the absolute flux levels and energy spectral characteristics of the UHE cosmogenic neutrino flux [1]. UHE neutrinos carry energies in the exavolt range (10^18 eV or higher) and can propagate across vast galactic distances without attenuation. Studying the universe through these particles will open the door to understanding the behavior of distant sources, allowing breakthroughs currently not possible with particle accelerators, such as probing the origins of the universe and furthering our understanding beyond the Standard Model of particle physics. Two methods of detection by EVA are of interest: the Askaryan effect due to UHE neutrino interaction with the Antarctic ice, and gyrosynchrotron emission due to UHE cosmic rays interacting with the atmosphere [2]. Both methods portray information as radio waves; thus an antenna array will be implemented on EVA.

EVA will employ a suborbital super-pressure balloon (SPB) 115 meters in diameter, which is currently under consideration for a NASA SPB mission in Antarctica [3]. An RF reflective layer 10 meters high will be mounted on the outer membrane of EVA and positioned in such a way as to allow a synoptic view of the Antarctic ice sheet during flight. An inner feed antenna, supported by tendons hanging from the inside of the balloon, will act as the focal point for the outer RF reflective band. Figure 1 shows a full-scale model of the SPB with the RF reflective band and an inner feed antenna array. Power will be provided by strategically placed photovoltaic panels on the balloon.

Figure 1: EVA full-scale model of SPB.

1.2 Radio Frequency over Optical Fiber

In order to transfer the RF signals to a payload, standard coaxial cables cannot be used due to weight constraints. Instead, a network of internal RF over optical fiber (RFoF) links will be implemented. The AFBR-1310Z fiber transmitter and AFBR-2310Z fiber receiver from Avago Technologies were chosen for evaluation for possible use on EVA [4][5]. The Avago fiber pair was chosen because it incorporates a linear, wide-bandwidth InGaAsAl/InP Fabry-Perot laser diode (FLD) and a floating monitor photodiode (MPD) for closed-loop operation, and because it is able to operate in an unstable temperature environment while maintaining optimal performance. A number of channels utilizing the RFoF system will be implemented within the balloon, creating strict power and weight constraints.

2 SETUP & METHODS

2.1 Design & Fabrication of Evaluation Boards

This project began with just the bare components of the Avago AFBR-1310Z fiber transmitter and AFBR-2310Z fiber receiver, shown in Figure 2. The FLD accepts a laser bias in milliamps, which is used to maintain a steady optical power output for the data signal modulated over the fiber pigtail; the transmitted power is expressed in dB (Figure 3). The MPD, which is adjacent to the FLD, outputs a current proportional to the optical power of the FLD. Figure 4 shows the schematic diagram of the internal circuitry of the transmitter.

Figure 2: Avago fiber pair. Fiber receiver (left), fiber transmitter (right).

Figure 3: Equation used for transmitted power.

Figure 4: Internal circuitry of the Avago AFBR-1310Z fiber transmitter.

The first step in characterizing the Avago fiber optic pair was to design and fabricate boards suited for such tasks. The required circuit boards were designed with Mentor Graphics PADS. Three boards were fabricated: a controller, a transmitter, and a receiver. The controller board contains all the components necessary to monitor the current in milliamps from the MPD as well as to provide the laser bias for the FLD. To determine the current from the MPD, a transimpedance amplifier circuit converts the current to a voltage. The voltage is then read by an on-board dsPIC microcontroller (MCU) through an external analog-to-digital converter (ADC) chip. The MCU also provides a current to the FLD through a digital-to-analog converter (DAC) chip followed by a voltage-to-current converter circuit. The MCU communicates with both chips via the SPI protocol, and all embedded software was written in C. Figure 5 shows how these circuits were designed before they were finalized for fabrication of the controller board.
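The embedded software itself was written in C; the following Python sketch reproduces the analog-chain arithmetic for reference. The feedback resistor, converter resolutions, references, and stage gain are assumed values for illustration, not the actual board parameters:

    # Sketch of the controller-board conversions (all component values assumed).
    ADC_BITS, ADC_VREF = 12, 3.3    # assumed external ADC: 12-bit, 3.3 V reference
    DAC_BITS, DAC_VREF = 12, 3.3    # assumed external DAC
    R_FEEDBACK = 1.0e3              # assumed transimpedance feedback resistor (ohms)
    V_TO_I_GAIN = 0.025             # assumed MOSFET V-to-I stage gain (A per volt)

    def mpd_current_ma(adc_counts):
        """Convert an ADC reading of the transimpedance output to MPD current (mA)."""
        v = adc_counts * ADC_VREF / (2**ADC_BITS - 1)   # ADC counts -> volts
        return (v / R_FEEDBACK) * 1e3                   # volts -> amps -> mA

    def bias_dac_code(bias_ma):
        """Convert a requested FLD laser bias (mA) to a DAC code for the V-to-I stage."""
        v = (bias_ma / 1e3) / V_TO_I_GAIN               # required control voltage
        return min(2**DAC_BITS - 1, round(v * (2**DAC_BITS - 1) / DAC_VREF))

    print(mpd_current_ma(2048), bias_dac_code(60.0))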

The controller board connects directly to the transmitter board to provide a direct link for the laser bias as well as the MPD. In designing the transmitter board, Mentor Graphics HyperLynx was used to properly match the transmission line at 50 Ω. This is necessary to mitigate reflections of the injected RF signal and deliver optimal signal power to the transmitter board. The receiver board did not have any special connections besides an RF output connector for the signal. Overall, the boards allowed RF data to be sent over the fiber pigtail while providing total control over modulation of the FLD.

Figure 5: Transimpedance current-to-voltage circuit (left), MOSFET voltage-to-current circuit (right).

2.2.1 Preliminary Testing of Evaluation Boards

Once fabricated, a preliminary test was performed using a network analyzer to verify the matching of the 50 Ω transmission lines. This test ensured that data collected from this point on was valid, by obtaining S-parameter plots for S(2,1), S(2,2), and S(1,1). Next, the input and output RF power of the transmission lines was measured for swept frequencies from 100 MHz to 6 GHz. From these measurements, a gain (dB) vs. frequency (Hz) Bode plot was created to supplement the data obtained from the network analyzer. This was done for three different temperatures: 25°C, 50°C, and 75°C. Per the datasheet, the gain temperature dependence of the Avago fiber optic pair should vary no more than ±2 dB from room temperature to 85°C.

2.2.2 Secondary Tests: Obtaining a Lookup Table for the FLD

The next set of tests determined how the transmitted power can be kept stable as a function of the laser bias and temperature. In a normal fiber optic system, a thermoelectric cooler is usually used to keep the temperature of the FLD stable; this way, the only fluctuation observed in the transmitted and optical powers comes from drifts in the laser bias. Because of the power budget of EVA, as well as the number of potential fiber links in the entire system, a thermoelectric cooler for each fiber link would not be realizable. Therefore, the RFoF system must compensate for temperature fluctuations and their effect on the transmitted power of the FLD. With these measurements, a lookup table (LUT) can be implemented by the microcontroller: a temperature sensor provides the input data, and a corresponding laser bias output controls the FLD so that a steady power transmission is maintained over the fiber pigtail.
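As a sketch of this compensation scheme (with made-up table entries, since the real values come from the calibration sweeps described below), the lookup and interpolation can be expressed as:

    # Illustrative LUT: temperature (deg C) -> laser bias (mA) that holds the
    # transmitted power steady. Real entries come from per-pair calibration.
    LUT = [(20, 52.0), (25, 55.0), (30, 58.5), (35, 62.0), (40, 66.0)]

    def bias_for_temperature(temp_c):
        """Linearly interpolate the laser bias between calibrated LUT points."""
        if temp_c <= LUT[0][0]:
            return LUT[0][1]
        for (t0, b0), (t1, b1) in zip(LUT, LUT[1:]):
            if temp_c <= t1:
                return b0 + (b1 - b0) * (temp_c - t0) / (t1 - t0)
        return LUT[-1][1]

    print(bias_for_temperature(27.5))   # -> 56.75, between the 25 and 30 deg C entries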


Figure 6 Fabricated controller board (left) and fiber transmitter/receiver pair with fiber pigtail (right).

The test setup includes the following:
• Agilent E4432B 250 MHz to 6 GHz sine wave generator
• Tektronix TDS6804B 8 GHz digital storage oscilloscope
• Micro Climate −70°C to 175°C temperature chamber
• The fabricated controller, transmitter, and receiver boards (Figure 6)
• Other various RF equipment

Figure 7 shows the test setup used for most of the conducted tests. Because the output power of the fiber optic system varies greatly over temperature, all tests were conducted in a temperature-controlled environment. A 500 MHz signal at 0 dBm (224 mV) was injected into the system and transmitted over the fiber pigtail; the received data was then analyzed using an oscilloscope. Both the transmitter and receiver boards were positioned inside the temperature-controlled environment, while the controller board remained at room temperature during all tests. Two tests were conducted in order to obtain this data. The first set of data was obtained by conducting tests over a range of temperatures and laser bias currents. The laser bias was varied from 50 mA to 85 mA while keeping the temperature constant; this was then repeated at intervals of 5 degrees Celsius from 25°C to 85°C. All parameters were recorded, including the MPD current (mA) and the transmitted power (dB) of the signal. The data provided insightful graphs describing the relationship between temperature and the transmitted power due to the laser bias. These graphs are shown and explained in the results section.

Figure 7: Test setup showing fabricated board within temperature chamber.
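The nested sweep just described has a simple structure. The sketch below records the same quantities; the instrument-control helper functions passed in are hypothetical placeholders, not a real equipment API:

    # Sketch of the characterization sweep: temperature outer loop, laser-bias
    # inner loop, logging MPD current and transmitted power at each point.
    import csv

    def run_sweep(set_chamber_temp, set_laser_bias, read_mpd_ma, read_power_db):
        # All four helpers are hypothetical callables wrapping the bench gear.
        with open("fld_sweep.csv", "w", newline="") as f:
            w = csv.writer(f)
            w.writerow(["temp_c", "bias_ma", "mpd_ma", "power_db"])
            for temp_c in range(25, 90, 5):        # 25°C to 85°C in 5°C steps
                set_chamber_temp(temp_c)
                for bias_ma in range(50, 90, 5):   # 50 mA to 85 mA laser bias
                    set_laser_bias(bias_ma)
                    w.writerow([temp_c, bias_ma, read_mpd_ma(), read_power_db()])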

3 RESULTS

3.1 Results from Preliminary Tests

According to the S-parameters on the left of Figure 8, the S(1,1) plot for the transmitter shows an unstable amount of return loss. This plot does not drop below −10 dB until about the 3.5 GHz range, where the power then dips to −25 dB at 3.88 GHz, which is desirable. It is expected that the amount of return loss will drop at higher frequencies since FR4 was used as the substrate for the boards, as FR4 absorbs power at higher frequencies. The S(2,2) plot exhibits the same behavior except that there is no large dip at 3.88 GHz; still, the FR4 substrate absorbs the signal at higher frequencies. The S(2,1) plot starts at around −6 dB and then drops to about −16 dB at 6 GHz. Overall, a loss of at least 10 dB is expected because a 10 dB attenuator was used on the fiber pigtail.

Figure 8 S-parameters plot for fiber optic system (left). Bode plot of same fiber optic system (right).

The right of Figure 8 shows the Bode plot for the frequency test. Each shade denotes the same test at a different temperature. The test done at 75°C has lower overall power than the other two tests because the corresponding laser bias at that temperature is too low. The output power at 100 MHz already shows approximately 7.5 dB of loss at the lower temperatures, due to the 10 dB attenuator in the fiber pigtail. Finally, a 3 dB cutoff occurs very early in the plot. According to the datasheet, the transmitter and receiver pair should not experience this 3 dB cutoff before 5.5 GHz. There are several reasons why the power begins to roll off earlier than expected in the frequency domain. The main reasons are shown in the S-parameter plots, where non-ideal matching creates excessive reflection within the circuit, along with the decision to use FR4 substrate, which attenuates higher frequencies as previously discussed.


3.2 Results from FLD Stabilization (LUT)

Figure 9 Transmitted power vs laser bias, temperature range from 20°C to 75°C.

Figure 9 shows the graph relating the transmitted power to the provided laser bias. Each line in this figure corresponds to a change in temperature of 5 degrees Celsius, with 20°C as the uppermost line and 75°C as the lowest. The laser bias was swept from 45 mA to 85 mA in 5 mA increments; these bounds were chosen because any laser bias below 45 mA resulted in no transmitted power, and 100 mA is the absolute maximum current allowed through the FLD per the datasheet. Figure 9 shows that as the laser bias increases, the transmitted power initially increases until it peaks and then begins to roll off, where the peak and roll-off points are functions of temperature. This confirms a non-linear relationship between the laser bias and transmitted power. For example, the transmitted power at 75°C will always be less than the transmitted power at 20°C, which suggests that the transmitted power may only be stable over certain intervals of temperature.

Figure 10 Transmitted power vs temperature showing intervals of stable transmitted power.


Figure 10 displays the intervals over which the transmitted power may be held stable at different temperatures. Due to variance in manufacturing and other real-world effects, the results obtained for the final system will differ, but it will largely use the same method described here. For the temperature intervals of 20°C-32°C, 33°C-46°C, 47°C-68°C, and 69°C-75°C, the laser bias will be adjusted using a LUT, as described in Section 2.2.2, so that a steady transmitted power is maintained throughout the fiber optic system.

Figure 11: Lookup table from 20°C to 29°C (left), stable transmitted power as a result of using the lookup table (right).

Figure 11 shows a LUT (left) for the interval from 20°C to about 29°C; the corresponding graph of the transmitted power, shown on the right, demonstrates that the power can be held stable. Further calibration must be done to provide a flatter transmitted power, although small fluctuations should not matter much in the final system. During production of each fiber optic link for the final system, an automated test will be in place to determine a LUT per fiber optic pair, since not all pairs will have exactly the same characteristics. To transition from one temperature interval to the next, a software PID controller will be implemented to prevent unexpected spikes and ensure a smooth transition between intervals.
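The report does not specify the controller gains or loop rate; a minimal sketch of such a software PID loop, with all constants assumed for illustration, is shown below:

    # Minimal software PID sketch for smoothing the laser bias toward the LUT
    # target when crossing temperature intervals. Gains and timing are assumed.
    KP, KI, KD = 0.5, 0.1, 0.02
    BIAS_MIN, BIAS_MAX = 45.0, 85.0    # mA, matching the sweep limits above

    def make_pid():
        state = {"integral": 0.0, "prev_err": 0.0}
        def step(target_ma, current_ma, dt):
            err = target_ma - current_ma
            state["integral"] += err * dt
            deriv = (err - state["prev_err"]) / dt
            state["prev_err"] = err
            out = current_ma + KP * err + KI * state["integral"] + KD * deriv
            return max(BIAS_MIN, min(BIAS_MAX, out))   # clamp to a safe FLD range
        return step

    pid = make_pid()
    bias = 55.0
    for _ in range(5):                  # bias ramps smoothly toward the new target
        bias = pid(62.0, bias, dt=0.1)
        print(round(bias, 2))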

4 CONCLUSION

The design and fabrication of the fiber transmitter, receiver, and controller boards allowed further evaluation of the decision to implement the Avago fiber transmitter/receiver pair in the design of EVA. Due to the non-linearity of the transmitted power with respect to the laser bias and temperature, a standard transfer function was not realizable. Although a LUT may be used for the final system, the question remains whether it would be practical to control and maintain a steady transmitted power over each fiber optic link via software. Much effort will also be needed to design an automated system for calibrating multiple fiber optic links and their corresponding LUTs over different temperature intervals. Careful consideration will determine whether the Avago transmitter/receiver pair can be used for EVA.

ACKNOWLEDGEMENTS

I would like to thank Dr. Gary Varner and Dr. Peter Gorham for giving me the opportunity to conduct this project evaluating the Avago fiber optic pair for a potentially upcoming NASA-sponsored mission. I would also like to thank the Hawai‘i Space Grant Consortium for their continued support in making this opportunity possible. Finally, I would like to thank my colleagues Khanh Le and Steven Ewers for their mentoring on microwave circuit design in previous semesters.

REFERENCES

[1] P. W. Gorham, et al., “The ExaVolt Antenna: A Large-Aperture, Balloon-embedded Antenna for Ultra-high Energy Particle Detection”, University of Hawai‘i , 9 Aug 2011

[2] D. Saltzberg, P. Gorham, D. Walz, et al., “Observation of the Askaryan Effect: Coherent Microwave Cherenkov Emission from Charge Asymmetry in High Energy Particle Cascades,” Phys. Rev. Lett., 86, 2802 (2001)

[3] H. M. Cathey, “The NASA Super Pressure Balloon. A Path to Flight,” Advances in Space Research Vol. 44, Issue 1, 1 July 2009, pages 23-38

[4] Avago Technologies, “AFBR-1310Z/AFBR-1310xZ Fiber Optic Transmitter for Multi GHz Analog Links Data Sheet.” (2013) http://www.avagotech.com/docs/AV02-3184EN

[5] Avago Technologies, “AFBR-2310Z Fiber Optic Receiver for Multi GHz Analog Links Data Sheet”, (2011) http://www.avagotech.com/docs/AV02-3183EN


DIGITAL IMAGERY AND GEOLOGIC MAP OF CHEGEM CALDERA, RUSSIA

Christina Cauley
Department of Geology and Anthropology
University of Hawai‘i at Hilo
Hilo, HI 96720

ABSTRACT

The late Pliocene Chegem Caldera in the North Caucasus, Russia, is the youngest, most deeply dissected, resurgent caldera complex in the world. The caldera system contains close to three kilometers of vertical exposure, including the resurgent intrusion. Field mapping of this remarkable system, done during the summers of 1990 and 1991, remains unpublished and exists only on photographs of Russian topographic maps. The primary project goals were to register and digitize the maps onto a topographic base derived from an ASTER global DEM and to use multispectral imagery to confirm and extend the geologic mapping of this remote site. Most of the caldera lies above tree line, making it well suited for satellite imaging. Utilizing data from the LANDSAT and ASTER satellites in conjunction with ground-based research conducted in 1990 and 1991, a publishable geologic map of the caldera was produced.

INTRODUCTION

The Chegem Caldera complex is notable not only for containing a diverse suite of units but also for the degree of downcutting that makes the complex the youngest, most deeply dissected, resurgent caldera in the world. The current size of Chegem Caldera is 11 x 15 km in diameter. The original topographic size of the Chegem caldera was on the order of 15 x 20 km, with outflow sheets extending at least 40-50 km from the caldera margins. Intra-caldera ash-flow tuffs are more than two kilometers thick within the source caldera (Lipman et al. 1993).

The Chegem Caldera complex was active 2.8 million years ago, during the late Pliocene continental collisional event between the northward-moving Arabian plate and the Eurasian plate (Lipman et al. 1993, Gazis et al. 1995). Recent rates of uplift in the northern Caucasus are as great as four millimeters per year, which, along with denudation, has resulted in a 2.3 km thick section of caldera fill being exposed in deeply dissected river drainages (Lipman et al. 1993, Gazis et al. 1995). The region is surrounded by thrust and reverse faults, along with trending transverse faults in the northeast zone of the caldera. Four sequential orogenic belts are co-located around the Chegem caldera system (Lipman et al. 1993).


Figure 1: The Kum Tyube section of the thick rhyolitic tuffs capped by dacite tuffs, glacial till and andesite lavas. Photo taken from Mt. Likarilgi, facing the southwest, by Dr. Ken Hon in 1991.

A variety of volcanic and intrusive events have formed a compositionally diverse suite of units, including basaltic andesites, andesites, granodiorites, granites, and welded and nonwelded dacite and rhyolite tuffs. These overlie Jurassic-Cretaceous sedimentary rocks and Precambrian to Paleozoic crystalline basement rocks that are predominantly schists. Glacial till has been deposited both within and on top of these sequences, and fossil talus breccia was identified in the caldera collapse features. The intra-caldera ash-flow sequences are characterized by the absence of complete cooling breaks or other evidence for interruptions in deposition. The southeastern portion of the caldera complex was shaped by simple piston uplift of a resurgent block. Overall, the whole mass appears to have formed over a period of less than 50,000 years (Lipman et al. 1993, Gazis et al. 1995, 1996).

The field conditions in the Chegem Caldera region are extremely rugged and in places not conducive to fieldwork. Portions of the contacts between units remained poorly constrained due to difficulty accessing the contacts in the field, especially in the northern and western regions. Additionally, the glacial meltwaters of the Chegem River and its tributaries make crossings hazardous (Lipman et al. 1993). A geologic map for the caldera was compiled from pre-existing topographic maps and mapping done by a joint Russian Akademia Nauk and U.S. Geological Survey team led by Peter Lipman during the summers of 1990 and 1991. The map remains unpublished, with regions of uncertainty where the landscape was not traversable. Current social conflicts in the area make future fieldwork in the region difficult. Remote sensing imagery was applied in an effort to refine the mapped geologic structures and contacts where possible, in light of these physical restrictions on access to the area, as well as to produce a digital map with a rectified topographic base.

METHODS

A LANDSAT 7 ETM+ image and two ASTER Global digital elevation models (DEMs) were procured and processed. These datasets were essential to geo-registering and transforming the original geologic maps to an accurate topographic base.


The DEM datasets and the Landsat image were processed in ENVI and ArcGIS. The ASTER DEMs used were collected in October 2011; no ASTER data other than DEMs were available online for this region. The two DEMs were reprojected, mosaicked, and cropped. The mosaicked DEM was then imported into ArcGIS and used to produce 100 m and 200 m contour lines. The resulting DEM was processed in ArcHydro to generate an accurate stream drainage system. A network of points identifiable on the original map, the stream drainage map, the Landsat image, and 1 m satellite imagery available within ENVI was chosen.
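The processing here was done in ENVI and ArcGIS; for reference, an equivalent open-source sketch of the mosaic and contour steps, assuming the rasterio and matplotlib packages and using hypothetical file names, would be:

    import rasterio
    from rasterio.merge import merge
    import matplotlib.pyplot as plt

    # Mosaic the two ASTER GDEM tiles (file names are hypothetical).
    tiles = [rasterio.open(p) for p in ("ASTGDEM_N43E042.tif", "ASTGDEM_N43E043.tif")]
    dem, transform = merge(tiles)        # mosaicked elevation array + geotransform
    elev = dem[0]                        # first (only) band

    # Generate 100 m and 200 m contour lines from the mosaicked DEM.
    plt.contour(elev, levels=range(0, int(elev.max()) + 100, 100),
                colors="gray", linewidths=0.3)
    plt.contour(elev, levels=range(0, int(elev.max()) + 200, 200),
                colors="black", linewidths=0.6)
    plt.gca().invert_yaxis()             # image row order runs north to south
    plt.savefig("chegem_contours.png", dpi=300)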

The completed ArcGIS feature classes drew from a total of ten map images. Creating a stable topographic base to ensure the alignment of the field and satellite data was crucial, as this provided the backbone for the analysis. The topographic maps were initially geo-rectified using 44 manually created control points that were identifiable in all three data sets, then further refined by registering points at the edge of one image to the same point in another. By comparing the map elevation contour lines against the independently created contour lines from the ASTER DEM, the accuracy of the geo-rectified map was tested. The high degree of agreement between the data assured that the transformed base was accurate. The best fit for the entire map had to be tested and the results considered when defining the primary geologic contacts.

Figure 2: The high degree of agreement between the ASTER-derived contour lines and the topographic map contours assured that the geo-rectification of the base maps was accurate. NASA ASTER Program (2011).
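A first-order geo-rectification of this kind amounts to fitting an affine transform to the control points by least squares; a minimal numpy sketch with made-up coordinates illustrates the idea:

    import numpy as np

    # Control points: (col, row) pixel coordinates on the scanned map and the
    # corresponding (easting, northing) ground coordinates. Values are made up.
    pixel = np.array([[120, 340], [1510, 295], [1480, 1620], [160, 1655]], float)
    ground = np.array([[295000, 4795000], [310200, 4795600],
                       [309900, 4781200], [295400, 4780700]], float)

    # Solve ground = [col, row, 1] @ coeffs for the six affine coefficients.
    design = np.hstack([pixel, np.ones((len(pixel), 1))])
    coeffs, residuals, *_ = np.linalg.lstsq(design, ground, rcond=None)

    def map_to_ground(col, row):
        """Apply the fitted affine transform to a map pixel coordinate."""
        return np.array([col, row, 1.0]) @ coeffs

    print(map_to_ground(800, 1000))      # interior point, for a quick sanity check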

The LANDSAT 7 ETM+ image was taken in August 1999. The image was cropped, and Gram-Schmidt pan sharpening and a QUAC atmospheric correction were applied. Several decorrelation stretches were applied to the LANDSAT image to enhance the visual differences between the geologic units. The stretches were done using band combinations 754, 542, and 731. These combinations were selected to make use of three key bands: the shortwave IR bands 7 and 5 are useful for general mineral and rock discrimination, and the near-IR band 4 provides an accurate delineation of vegetation cover. Once the map contacts were matched to units visible in the LANDSAT image, the spectral signature of each unit was calculated.
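A decorrelation stretch rotates the selected bands into their principal components, equalizes the component variances, and rotates back. The numpy sketch below illustrates the method on a synthetic three-band stack standing in for a Landsat band combination:

    import numpy as np

    def decorrelation_stretch(stack):
        """Decorrelation stretch of a (rows, cols, 3) band stack."""
        rows, cols, nb = stack.shape
        flat = stack.reshape(-1, nb).astype(float)
        mean = flat.mean(axis=0)
        cov = np.cov(flat, rowvar=False)
        evals, evecs = np.linalg.eigh(cov)          # principal components
        # Rotate into PC space, equalize variances, rotate back, restore scale.
        whiten = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
        out = (flat - mean) @ whiten * np.sqrt(cov.diagonal().mean()) + mean
        return out.reshape(rows, cols, nb)

    # Demonstration on synthetic correlated bands in place of bands 7, 5, 4:
    rng = np.random.default_rng(0)
    shared = rng.normal(size=(100, 100, 1))
    stack = np.concatenate(
        [shared * 30 + rng.normal(size=(100, 100, 1)) * s for s in (2, 3, 4)],
        axis=2)
    stretched = decorrelation_stretch(stack)
    print(np.corrcoef(stretched.reshape(-1, 3), rowvar=False).round(2))  # ~identity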

RESULTS AND DISCUSSION

The spectral signatures of the volcanic units were distinct from those of the limestone and glacial till (Fig. 3). There is a significant amount of overlap between the volcanic and intrusive units within the caldera, and plots demonstrated that they were highly correlated. Several decorrelation stretches were applied that accurately discriminated most of the mapped units in the Landsat image. Attempts at classification were hampered by shadowed regions in deep valleys, large amounts of talus that bleed the contacts downhill, and dense vegetation, particularly in the valleys. Even so, most of the geologic units were clear in the decorrelated images. In particular, the decorrelation stretches were successful in highlighting the resurgent intrusion, the hornfelsed tuff in the resurgent block, and large areas of potential tuff plastered against the topographic caldera wall outside the structural caldera that were not identified in the field.

Figure 3: Spectral signatures of the limestone and igneous geologic units. NASA Landsat Program (1999).

Glacial till deposits were readily identifiable in the LANDSAT image, although we had initially assumed they would be the most difficult unit to identify spectrally due to their mixed composition. The till appears to be composed predominantly of limestone parent material, which clearly delineates it from the surrounding igneous deposits.

In the red and infrared bands, the granodiorite intrusion and the lower rhyolite tuff were clearly delineated from all other units. Contact metamorphism from the intrusion of the granodiorite caused partial melting and recrystallization of the tuff into hornfels, creating the unique spectral signature of this unit. The LANDSAT images clearly support the field team's initial inferences about the overall structure, such as the fault bounding of the resurgent block in the northeast section of Chegem Caldera. Much of the area of the resurgent block lies in inaccessible terrain, so the confirmation of the geology from the remote sensing data is extremely valuable.

Vegetation cover was a larger issue in the northeast region of Chegem Caldera than we had assumed it would be at the start of the project. The hornfelsed lower rhyolite tuff identified in the field on the Hobetayeen mountain block was not detectable in the LANDSAT image, largely due to extensive grass, brush, and trees in this area. The unit extent was therefore determined based on the original field notes and the bounds imposed by the surrounding granodiorite unit, decreasing the extent of this unit compared to the original geologic maps.


CONCLUSIONS

Ten geologic maps were accurately geo-rectified and compiled into a single digital map. This allowed the successful use of satellite imagery to confirm geologic units and the structure of the caldera system, including the shape of the resurgent block as well as several large areas of potential Chegem tuff outside the structural margins. Future work in this region would include analyzing a larger footprint of the area to identify the geographic extents of the ash-flow sheets described by Lipman et al. (1993).

ACKNOWLEDGEMENTS

Many thanks to my mentor, Dr. Ken Hon, for giving me this opportunity and providing the topographic maps for this project. I would also like to thank Dr. Ryan Perroy for introducing me to ENVI, and to the University of Hawai‘i at Hilo for access to the software used in this project.


REFERENCES

Adamia S., Zakariadze G., Chkhotua T., Sadradze N., Tsereteli N., Chabukiani A., and Gventsadze A. (2011). Geology of the Caucasus: A Review. Turkish J. Earth Sci. 20, 489–544.

Gazis C. A., Lanphere M., Taylor H. P., and Gurbanov A. (1995). 40Ar/39Ar and 18O/16O studies of the Chegem ash-flow caldera and the Eldjurta Granite: Cooling of two late Pliocene igneous bodies in the Greater Caucasus Mountains, Russia. Earth Planet Sci. Lett. 134, 377–391.

Gazis C., Taylor H. P., Hon K., and Tsvetkov A. (1996). Oxygen isotopic and geochemical evidence for a short-lived, high-temperature hydrothermal event in the Chegem caldera, Caucasus Mountains, Russia. J. Volcanol. Geotherm. Res. 73, 213–244.

Hon K. (Photographer) (1991). Kum Tyube Ridge, Chegem Caldera, Russia [photograph]. Private Collection.

Lipman P. W., Bogatikov O. A., Tsvetkov A. A., Gazis C., Gurbanov A. G., Hon K., Koronovsky N. V., Kovalenko V. I., and Marchev P. (1993). 2.8-Ma ash-flow caldera at Chegem River in the northern Caucasus Mountains (Russia), contemporaneous granites, and associated ore deposits. J. Volcanol. Geotherm. Res. 57, 85–124.

NASA ASTER Program (2011). ASTER Global DEM scene, ASTERGDMV2_0N43E042, USGS, Sioux Falls, 10/17/2011.

NASA ASTER Program (2011). ASTER Global DEM scene, ASTERGDMV2_0N43E043, USGS, Sioux Falls, 10/17/2011.

NASA Landsat Program (1999). Landsat 7 ETM+ scene, L71171030_03019990818, level-1G, SLC-On. USGS, Sioux Falls, 08/08/1999.




DESIGN OF A WIRELESS POWER TRANSFER SYSTEM UTILIZING MICROWAVE FREQUENCIES

Steven Shane Ewers
Department of Electrical Engineering
University of Hawai‘i at Mānoa
Honolulu, HI 96822

ABSTRACT

This report describes a system, loosely based on a model originally proposed by Dr. William C. Brown in 1961, which theoretically would be able to transmit significant wireless power, with precise directivity, over a distance using a frequency of 2.45 GHz. A large-scale model of this system could be very valuable to NASA in the form of a solar-powered satellite that could wirelessly transmit its power to a base station on Earth. The system outlined herein is composed of a transmitter and a receiver. A helical (Kraus) antenna is the targeted radiator, intended to transmit 10 W (40 dBm) of RF power over 4 meters to a receiver. Microstrip matching networks are utilized for impedance matching on both ends. This theoretical design is mainly conceptual, including idealized calculations for transmission efficiency, power, and radiation patterns.

INTRODUCTION

Nikola Tesla described the ability to transmit power to a device wirelessly as "an all-surpassing importance to man." In our world of portable technology, wireless communication, and so-called "smart devices," it seems slightly out of place to have such a dense, wired power grid. Why, if we have the ability to send dense packets of information around the globe, has mankind not been able to free the power grid from the burdensome stitches of wires running thousands of miles around the planet? It is toward this purpose that the following research has been performed. This report will demonstrate a design whose concept was proven over 50 years ago. In 1961, William C. Brown published a paper that described a system which could send energy through the air in the form of microwaves. In 1964 he demonstrated a design that could power a model helicopter entirely from a microwave beam. There are several papers outlining his block diagram of the system [1]. The research done herein closely follows this model. A large-scale version could be very valuable to NASA. Solar-powered satellites (SPS) already orbit our planet. The concept is that a large array of solar panels, at high Earth orbit, would receive sunlight 99% of the year. This satellite would collect energy from the sun's rays and beam it to a location on the Earth's surface. Transmission would occur only when a pilot beam from the receiving station is aligned with the SPS. Of course, many safety concerns arise with this type of system, not the least of which is maintaining direct alignment to prevent interaction of the beam with organisms on the planet. The merits of such a system can easily be seen in the form of continuous power from space, consuming comparatively no space on the Earth's surface [2].

It is worthwhile to elaborate on the concept of wireless power transfer (WPT), in order to have a comprehensive viewpoint prior to explaining the system governing it. Two general methods of transmitting power have been demonstrated: near-field and far-field techniques. In order to understand the difference, one must look at Maxwell's equations for EM wave propagation for a plane wave detaching from an oscillating source. Basically expressed, the propagation of the electric field from a short current radiator can be described by Equation 1.

$E_\theta = \dfrac{I\,dl}{4\pi j\omega\varepsilon_0}\,\sin\theta\left(-\dfrac{\beta^2}{r}+\dfrac{j\beta}{r^2}+\dfrac{1}{r^3}\right)e^{-j\beta r}$    (Eq. 1)

This equation shows that the field depends on r, the distance from the source to the observation point. As r approaches 0, the 1/r^3 term dominates, defining the near field as the region less than about one wavelength from the source. The far field is dominated by the 1/r term, beginning approximately two wavelengths from the source. The near field is reactive, dominated by capacitive or inductive characteristics, and is non-radiative [3]. Near-field (non-radiative) methods include inductive coupling, resonant inductive coupling (RIC), and air ionization. The first can be seen in electric toothbrush chargers and generally has a very short effective distance. RIC is similar to classic induction, with the difference that the two circuits are coupled via strong magnetic fields at a particular frequency. This method has been demonstrated with the most success and has the lowest potential for negative effects, as only objects within the specific coupling regime are affected. Air ionization is the breakdown of air, causing a conduction current similar to lightning; as can be imagined, this is the least safe. Far-field (radiative) techniques involve microwave power transmission (MPT) and laser transmission. The former is accomplished by converting energy to microwave power, beaming this energy through a directive antenna to a rectenna, and converting it to a conventional power form. The latter method requires direct line of sight and is considered dangerous due to the high-power laser [1].

Although RIC would be cheap, efficient, and fairly simple, the distance of transfer would be limited. Therefore, far-field radiative techniques were more attractive for this particular design. For legal reasons, the ISM range of frequencies, which is reserved by the FCC for industrial, scientific, and medical purposes, was considered. Because of high attenuation above 6 GHz, and the excessively large physical structures required at frequencies below 2 GHz, the operating frequency was chosen to be 2.45 GHz, which has a wavelength of 12.24 cm, as given by $\lambda = c/f$.

METHODOLOGY

Whereas many designs follow a simple A-B-C progression, this was a non-linear process. Following fundamental research, the nominal frequency was chosen, as well as the method of transmission (radiative far field). It was decided that a received power of 10 Watts across a distance of 4 meters was significant. This led to the application of the Friis transmission formula to aid in the design of the antenna parameters. After calculating the path loss and transmit/receive powers, the remainder of the circuit could be designed. The transmitting circuitry consists of a microwave power source, a coaxial-waveguide adapter, an input matching network, and a transmitting antenna. The receiving network, known as a rectenna (rectifier + antenna), provides a means of converting RF directly into DC [5]. It is composed of a receiving antenna on the front end, an output matching network, rectifying circuitry, and the load. Figure 1 shows a block diagram of the overall system.

Fig 1: Block diagram showing transmitting system and rectenna components.

CIRCUIT MODEL

The power source would be a magnetron, commonly found in any household microwave oven. This source was chosen because it is cheap, easy to obtain, and emits microwaves, typically at a center frequency of 2.45 GHz. A single magnetron can easily output 1000 W of power, and more if several are wired together [2].

Because the magnetron emits microwaves into free space, it would need to be coupled to the system. To solve this problem, a COTS waveguide adapter was chosen as the means of doing so. Such a device would allow for free-space waves to be collected inside a geometrically-suited waveguide, whereby the energy would be converted at the probe to electrical energy, and allowed to propagate through the system, via a coaxial coupler.

The next challenge was impedance matching this coaxial line to the transmitting antenna for optimal transmission efficiency. It is well known in RF design that an impedance mismatch will cause EM power to be reflected, resulting in overall transmission power loss. This can be prevented using a microstrip technique known as quarter-wave matching, which simply uses a two-layer board with a microstrip trace whose length equals a quarter of the transmission frequency's wavelength. To practically apply this design, a network analyzer would be needed to determine the impedance of the source and the real resistance of the load (the antenna in this case); the propagation constant must also be known.
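As an illustration of the quarter-wave calculation (the effective permittivity below is an assumed value, since the actual number would come from the board stack-up and measurement):

    import math

    Z_SOURCE = 50.0       # coaxial line impedance (ohms)
    Z_LOAD = 44.0         # helix input resistance from Table 1 (ohms)
    F = 2.45e9            # operating frequency (Hz)
    EPS_EFF = 3.3         # assumed effective permittivity of the FR4 microstrip

    # A quarter-wave transformer matches two real impedances with a line whose
    # characteristic impedance is their geometric mean.
    z_match = math.sqrt(Z_SOURCE * Z_LOAD)
    guided_wavelength = 3e8 / (F * math.sqrt(EPS_EFF))
    trace_length_mm = guided_wavelength / 4 * 1e3

    print(f"Z_match = {z_match:.1f} ohms, trace length = {trace_length_mm:.1f} mm")
    # -> Z_match = 46.9 ohms, trace length = 16.8 mm (for the assumed permittivity)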

If connecting directly to the antenna (load), the use of a balun could be eliminated. For the transmitting antenna, a Kraus, or helical, geometry was chosen. After simulating the directivity achievable from dipole and patch models, the Kraus antenna provided the highest directivity. Transmitting in axial (T1) mode, it can produce a highly directive beam, allowing for higher gain and lower path loss. This antenna style produces a circularly polarized wave; therefore, the receiving antenna must be wound in the same direction.

The receiving antenna design was identical to the transmitting antenna. Again, impedance matching played a key role at the interface with the rectifying circuitry. For the matching network, a matched-stub filter was the design of choice. This consisted of a microstrip design using two stubs of equal length to act as filters and to prevent unwanted return loss. When properly designed for the center receiving frequency, the circuit acts as a filter centered on the desired frequency, rejecting parasitic frequencies. Although this is more crucial for signal reception, the rectifying circuit could act unpredictably if parasitic frequencies were allowed to pass.

Once the wave is received, the RF power must be converted to usable DC power. This was done using a classic full-wave rectifier circuit, which converts a sinusoidal AC wave to a DC signal. Schottky barrier diodes work best here due to their low turn-on voltage and low junction capacitance [4]. The rectifying circuit in Figure 2 shows a classic full-wave rectifier outputting a stable DC voltage.

Fig 2: Full-wave rectifier circuit using Schottky diodes

The value for C would be determined from the current at the load, which in turn depends on the load's resistance. Once rectified, the power could either be used directly or be stored, for example in fuel cells for transportation. The method of use characterizes the load of the device, which should ideally be a real resistance.
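One common way to size C is the standard full-wave ripple approximation, ΔV ≈ I_load/(2fC); the sketch below uses assumed load and ripple values purely for illustration:

    # Full-wave rectifier ripple approximation: dV = I_load / (2 * f * C),
    # so C = I_load / (2 * f * dV). Load current and ripple target are assumed.
    F_RF = 2.45e9         # rectified frequency (Hz); full-wave doubles the ripple rate
    I_LOAD = 0.5          # assumed DC load current (A)
    DV_RIPPLE = 0.05      # assumed allowable peak-to-peak ripple (V)

    c_farads = I_LOAD / (2 * F_RF * DV_RIPPLE)
    print(f"C = {c_farads * 1e12:.0f} pF")   # -> about 2041 pF for these assumptions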

PROCEDURE

The first step in designing the system was to model each element of the circuit. This was a non-linear process, as many elements had to be re-tuned as others changed value. The crucial part of the system centered on the antennas. The transmitting (TX) antenna must be highly directive, have high power-handling capacity, and be physically realizable. Directivity describes the power density relative to an isotropic source (spherical, unity gain) and is based on the physical geometry of the radiator itself. It was found from research that the Kraus antenna design would provide reasonable directivity and size. It also provided better power-handling capability than the two previously mentioned layouts, due to self-resonance and lower potential for dielectric breakdown. By writing a MATLAB script, a basic model for the helical design was quantified, based on equations written by Kraus for his antennas [6]. This included solving the Friis transmission formula (Eq. 2), which showed how much power would need to be radiated by the TX antenna in order to overcome the path loss through an atmospheric medium and be received at the other end.

$P_T = \dfrac{P_R}{G_T G_R}\left(\dfrac{4\pi R}{\lambda}\right)^2$    (Eq. 2)

Here, P_T and P_R are the transmitted and received powers, respectively, and G_T and G_R are the respective gains of each antenna. The squared term is the path loss, where R is the distance between the two antennas. For these calculations, 100% efficiency was assumed, allowing directivity to equal gain; this simplified calculations, as no efficiency data was available for this antenna. Also, because the TX and RX antennas are identical, G_T = G_R. Using simple calculations for a helical radiator, the starting point for the antenna system is defined by the variables in Table 1.

Table 1: Helical Antenna Parameters (here λ is wavelength)

Antenna Variables                Calculated Antenna Parameters         Power Transfer Values
Wavelength: 12.23 cm             Gain: 16.4 dBi                        Pr: 40 dBm
Circumference: 1.1 λ             Characteristic Impedance: 143.0 Ω     Path Loss: -104.5 dBm
Number of Turns: 12              Input Resistance: 44 Ω                Gt: 16.4 dBi
Spacing Between Coils: ¼ λ       HPBW: 34.1°                           Gr: 16.4 dBi
Diameter: 3.9 cm                 BWFN: 60.4°                           Pt: 111.7 dBm
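For reference, the calculated parameters in Table 1 can be reproduced with closed-form axial-mode helix design equations of the kind given by Kraus and implemented by the calculator in [6]. The coefficients below were chosen to match the tabulated values; the most widely quoted Kraus forms use somewhat different constants (for example, 140·Cλ for the characteristic impedance and 52° in the HPBW formula), so this is a sketch of one calculator's conventions rather than a definitive formula set:

    import math

    WAVELENGTH = 0.1223   # m, at 2.45 GHz
    C_LAMBDA = 1.1        # helix circumference in wavelengths
    N_TURNS = 12
    S_LAMBDA = 0.25       # turn spacing in wavelengths

    # Axial-mode helix design equations; coefficients chosen to match Table 1.
    gain_dbi = 10 * math.log10(12 * C_LAMBDA**2 * N_TURNS * S_LAMBDA)
    impedance = 130 * C_LAMBDA                                  # ohms
    hpbw = 65 / (C_LAMBDA * math.sqrt(N_TURNS * S_LAMBDA))      # degrees
    bwfn = 115 / (C_LAMBDA * math.sqrt(N_TURNS * S_LAMBDA))     # degrees

    print(f"gain {gain_dbi:.1f} dBi, Z {impedance:.0f} ohms, "
          f"HPBW {hpbw:.1f} deg, BWFN {bwfn:.1f} deg")
    # -> gain 16.4 dBi, Z 143 ohms, HPBW 34.1 deg, BWFN 60.4 deg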

For the complex calculations involved, a MATLAB script was written, and the design software HFSS aided in 3D representations. Figures 3 and 4 show the model for a Kraus antenna based on the above parameters, along with details of the feed point.


Fig 3: Kraus simulation model
Fig 4: Detail of feed point

A radiation pattern was produced from these calculations, showing the high directivity of the design (Fig. 5).

Fig 5: Radiation field pattern for the Kraus antenna design

When dealing with high frequencies, impedance matching is an all-important consideration. In order to deal with this, impedance-matching input and output networks are mandatory. These would be based entirely on the final impedances of the circuits to be interfaced and would be handled in the implementation stage of the design. I intend to use microstrip designs with quarter-wave and stub-matching geometries. These are compact, double-sided substrate circuits whose dimensions are determined by frequency and dielectric material, and they can be designed simply using a Smith chart. This tool allows one to take S-parameters, obtained using a network analyzer, and convert them to impedance values. A schematic example of a quarter-wavelength matching network was simulated in Agilent's ADS® software package for verification of this technique. For the matched-stub network on the rectenna side, a similar example was simulated and can be seen in Figure 6.


Fig 6: ADS schematic for a matched-stub filter, center frequency 7GHz

RESULTS

From this research, a system design for a WPT network has been produced. Based on calculations and theoretical models, the delivery of 10 W (40 dBm) of received power would require about 111 dBm (roughly 126 MW) of RF power at the transmitting antenna. The path loss amounted to 104 dB, giving a transmission efficiency of ~35%, which is very low for a small-scale system. This is a theoretical model based on a 100% efficiency assumption. A realistic model would involve obtaining measurements of the real system's impedance, or equivalently, modeling each individual component with its parameters and material composition, which could not be done at this level. It should be noted, however, that the physical dimensions of the proposed design are realizable.

DISCUSSION

As this is research on a design, many aspects are left as variables, depending heavily on the final physical parameters. More work is necessary to complete the theoretical design and simulations and carry them on to the testing phase. No small portion of this is quantifying the safety hazards present. RF power at this frequency is known to be dangerous; such a high transmission power is extremely hazardous, and expert-level certification would be necessary before this could be realized. A concern is dielectric breakdown at the transmitter due to the high power, which could be ameliorated by a different design. The low efficiency is common with wireless systems; however, in a large-scale SPS scenario with unlimited solar power, this would be a small concern. Radiated power density typically falls off with the square of the distance from the source.

Future work on the project is needed. Halfway through the design, the computer running the HFSS simulations crashed, so a 3D representation could not be produced. This should be done to tune for optimum directivity, which would increase overall efficiency. Another effort could also be made toward a more suitable antenna, such as a corner reflector.

CONCLUSION

This research has allowed me to expand upon my basic knowledge of wireless energy. Though no breakthrough has come to light, it is hoped that the concepts expanded upon herein can be taken to the next level in future work and study. If such a design for SPS could someday be a reality, it could alleviate almost all dependency on fuel sources on our planet.

ACKNOWLEDGEMENTS

I would like to thank the HSGC for supporting my research, the University of Hawai‘i for providing me with an outstanding education, Drs. Z.Q. Yun, Magdy Iskander, Wayne Shiroma, and David Garmire for their talented illustrations of EM techniques and all they have taught me.

REFERENCES

[1] Rajen Biswa (2012), Feasibility of Wireless Power Transmission, Retrieved from Academia.edu website: https://www.academia.edu/1561057/Feasibility_of_Wireless_Power_Transmission

[2] Sagolsem Kripachariya Singh, T.S. Hasaramani, R.M. Holmukhe, “Wireless Transmission of Electrical Power Overview of Recent Research & Development”, International Journal of Computer and Electrical Engineering, vol. 4, no. 2, April 2012

[3] Umenei, A. E. (2011), “Understanding Low Frequency Non-Radiative Power Transfer”, Fulton Innovation

[4] Christopher R. Valenta, and Gregory D. Durgin, “Harvesting Wireless Power”, IEEE Microwave Magazine, vol. 15, no. 4, pp. 108-120, June 2014

[5] M. Venkateswara Reddy, K. Sai Hemanth, CH. Venkat Mohan, “Microwave Power Transmission - A Next Generation Power Transmission System”, IOSR-JEEE, vol. 4, no. 5, pp. 24-28, January 2013

[6] “Helical Antenna Design Calculator”, Internet: http://www.daycounter.com/Calculators/Helical-Antenna-Design-Calculator.phtml, Dec. 24, 2014


MULTI-WALLED CARBON NANOTUBE NANOFORESTS AS GAS DIFFUSION LAYERS FOR PROTON EXCHANGE MEMBRANE FUEL CELLS

Kathryn Hu
Department of Mechanical Engineering
University of Hawai‘i at Mānoa
Honolulu, HI 96822

ABSTRACT

This research focuses on evaluating the potential to implement carbon nanotube nanoforests (CNNs) as gas diffusion layers (GDLs) in proton exchange membrane fuel cells (PEMFCs) in order to increase fuel cell performance in terms of stability, humidity tolerance, power density, and operating efficiency, while lowering the weight, size, and cost. The multi-walled carbon nanotubes (MWCNTs) used in this research have been proven to transport large currents with low resistance and to have extremely high hydrophobicity and inherent oxidation resistance, all of which make the potential application of MWCNTs as GDLs very promising for a variety of applications. The process, from growing carbon nanotube nanoforests and characterizing the growth to removing the CNNs from the substrate, is discussed in this report. Solutions were found to overcome the most significant barriers to the use of CNN GDLs, growth and removal, thereby demonstrating the feasibility of future fuel cell testing.

INTRODUCTION

Proton exchange membrane fuel cells (PEMFCs) are emerging as competitive candidates for power conversion devices for stationary, automotive, and portable applications compared to other types of fuel cells [1, 2]. PEMFCs operate at elevated temperatures to improve the conductivity of the electrolyte and enhance the kinetics of the electrode reactions, resulting in higher operating efficiencies. However, operation at elevated temperatures requires external humidification to fully humidify the reactant gases and avoid the low proton conductivity that results from membrane dehydration [3, 4]. If the PEMFC's humidification needs are reduced or removed, systems can be designed with simplified or no auxiliary humidification units, leading to lower fuel cell operating costs.

Gas diffusion layers (GDLs) have been developed to manage water as well as to promote gas distribution to the active catalyst regions in an attempt to obtain higher power density across all current density regions. For several years, carbon paper or carbon cloth substrates with polytetrafluoroethylene (PTFE) based microporous layer coatings have been the major choice for GDLs [5-9]. While this approach is effective, the microporous layer reduces conduction due to the presence of PTFE, substantially increasing the resistance of the cell. In practice, this can lead to an increase in fuel cell stack size and higher weight and capital costs.

Electro-osmotic drag causes membrane dehydration when protons moving from anode to cathode simultaneously transport water [10]. The PEMFCs' sensitivity to hydration has initiated extensive research and analysis to monitor and optimize water management in these systems [11-14]. The results of these studies identify operating conditions that result in water balance for the proposed PEMFC model by controlling factors such as operating temperature, pressure, reactant gas temperatures, and the humidity of the reactant gases. In addition to modeling, different types of PEMFC humidification systems have also been studied, the most common including direct injection [15], which adds water directly to the gas inlets, and external membrane humidifiers [16], which flow gas over a water-permeable membrane, causing the gas to absorb water. Due to MWCNTs' ability to transport large currents with low resistance and their extremely high hydrophobicity and corrosion resistance, implementing MWCNTs as GDLs has the potential to significantly reduce or eliminate the need for complicated humidification systems.

EXPERIMENTAL METHODS

Carbon Nanotube Nanoforest (CNN) Growth

CNN growth in this research is achieved by a chemical vapor deposition (CVD) process involving the decomposition of a carbon-containing gas. Gas-phase processes are known to produce nanotubes with fewer impurities and can be more easily implemented for large-scale processing. To grow the MWCNTs, the six-inch diameter CVD furnace was first purged with argon gas while heating to operating temperature. A precursor mixture of ferrocene and xylene was vaporized and injected into the furnace, whose temperature was held steady at 750°C. The ferrocene was deposited onto the surface of silicon dioxide wafers, upon which the MWCNTs grew following a base-growth model, depicted in Figure 1.

Figure 1. (a) Tip-growth and (b) Base-growth models for MWCNT CVD growth [17].

The growth was kept in the inert argon environment until the entire volume of precursor had been injected into the furnace and the furnace had cooled to below 200°C. Obtaining consistent, good-quality growth required a considerable amount of effort to ensure the stable operation of the furnace and to determine the best gas and precursor flow parameters.


CNN Characterization Using Electron Microscopy

Characterization was achieved via scanning and transmission electron microscopy (SEM and TEM, respectively). These visualization methods verified the quality and characteristics of the CNN growths to be used for GDLs. SEM provided a means to directly observe the MWCNTs' size, shape, and structure. TEM allowed for measurement of the MWCNT inner and outer diameters, and thus the approximate number of walls [18].

CNN Removal from Substrate

In order to be used as a GDL, the CNN growth needed to be isolated from the silicon dioxide wafer substrate and catalyst layer. However, given that the CNN is held together only by van der Waals forces, the growth was extremely fragile and difficult to handle. Scraping the growth from the substrate caused the layer to crumble or break, as can be seen in Figure 2.

Figure 2. Broken pieces of CNN resulting from attempting to scrape the CNN off the substrate.

In previous studies, the CNN was first removed from the substrate by dissolving the oxide layer using Sigma Aldrich Ceramic Etchant A or Alumiprep 33 [19]. Both aforementioned chemicals are hydrofluoric acid replacements, intended to reduce the safety risks of using hydrofluoric acid. Multiple rinses alternating between hydrochloric acid and deionized water were then used to remove the ferrocene catalyst layer from the CNN, leaving only the pure nanoforest.

Despite the process having been tested and proven (Patent WO 2011106109 A2, "Nanotape and nanocarpet materials"), removal was not so simple at the GDL-required thickness of the CNN growth. Obtaining a full 3”x3” sheet of CNN was extremely difficult due to the fragility of the material and its susceptibility to breaking while being handled.

RESULTS AND DISCUSSIONS

Carbon Nanotube Nanoforest (CNN) Growth

It was confirmed that there is a direct correlation between the amount of ferrocene-xylene precursor injected and the amount of growth on the substrate. This was an expected result given that the ferrocene and xylene mixture contains the building blocks for the MWCNTs. Several tests also established the ideal hydrogen-to-argon gas flow rate ratio to be 20%, since hydrogen carries the precursor through the furnace. Through trial and error, a set of parameters yielding consistent, uniform, high-quality CNN growth was achieved. Figures 3 and 4 below show some of the different growth results obtained throughout testing.


Figure 3. (a) SiO2 wafer prior to CNN growth and (b) Ideal, uniform, good quality CNN growth.

Figure 4. Examples of other growths on SiO2 resulting from different parameters.

CNN Characterization Using Electron Microscopy

Using microscopes from the Pacific Biosciences Research Center (PBRC) Biological Electron Microscope Facility, information about the good quality growths was obtained from both the SEM and TEM and can be seen below.


Figure 5. SEM side-view at 900 times magnification of CNN growth with height measurement.

Figure 5, an image obtained from the SEM of a good-quality CNN growth, verified the intended verticality of the overall nanoforest. It also confirmed that the growth was the appropriate height needed for use as a GDL membrane (50–90 μm). Figure 6 below is a further-magnified view of the side of the growth in Figure 5. It shows the more interwoven, tangled arrangement of the actual CNN; however, the overall verticality shown in Figure 5 is the aspect that matters for the GDL.

Figure 6. SEM side-view at 30,000 times magnification of CNN growth.

Images taken from the TEM verified the quality of MWCNT growth by allowing the visualization of distinct inner and outer walls of the CNT, as can be seen in Figure 7 below.


Figure 7. TEM image of single MWCNT sampled from the CNN.

From Figure 7, the total MWCNT thicknesses could be measured; they averaged about 10 nm. The darker spot in the image is the ferrocene catalyst within the MWCNT. In comparison to other CNN growths brought to the PBRC for analysis by other departments, the growth obtained for this research is significantly higher in quality, as evidenced by the clear cylinder and wall definition.

CNN Removal from Substrate

Comparative testing of Alumiprep 33 versus Ceramic Etchant A showed that Alumiprep is more effective at removing the CNN from the substrate. It was found that the time needed for the CNN layer to separate from the SiO2 wafer was inversely proportional to the thickness of the growth, requiring 16 to 27 minutes to fully separate rather than the predicted 90 seconds. Separation initiated from any exposed SiO2 surfaces; therefore, if the growth was not uniform in thickness, spots of thinner growth would lead to ruptures through the growth, as can be seen in Figure 8. Further testing raised the issue of handling the separated CNN pieces, which would quickly tear if not fully supported. Upon rinsing with deionized water and setting aside to dry, the pieces of growth obtained from the removal step cracked or broke into even smaller pieces when dry and then stuck to the petri dish.


Figure 8. CNN separating from the SiO2 wafer after being submerged in Alumiprep 33.

In order to control the separation process and yield full sheets of CNN, the edges of the SiO2 wafer were scraped so that approximately 1/8” of wafer was exposed around the perimeter. A 4”x4” square of 5% teflonized carbon paper was placed under the wafer to more easily lift the CNN layer after separation. Alumiprep was poured around the edges of the wafer first to facilitate the edges' removal, and then the entire wafer was covered to a depth of approximately 0.5 inches. Once the CNN fully separated from the SiO2 wafer, the wafer was carefully slid out from between the CNN layer and the carbon paper. With the CNN centered on the carbon paper, it could be fully supported as it was slowly raised out of the Alumiprep. Keeping the entire CNN surface flat on the carbon paper, the growth was rinsed in several consecutive deionized water baths. The result of this process is pictured in Figure 9.

Figure 9. Complete CNN growth removed from SiO2 substrate with Alumiprep 33 and dried on 5% teflonized carbon paper.

CONCLUSIONS & FUTURE WORK

The successful demonstration of high-quality CNN growth at the appropriate GDL thickness, and the ability to separate large pieces of CNN from the substrate, suggest that it is feasible to implement a MWCNT CNN as a GDL. Given that the growth, removal, and handling of the CNN are the primary roadblocks limiting the use of these GDLs in large-scale applications, the solutions presented in this research open the way to promising improvements in fuel cell technologies. Based on prior research, it is expected that this CNN GDL will substantially improve the performance of a large-scale PEMFC. Additional research will be conducted to test our optimized CNN as a GDL in a large-scale industrial PEMFC apparatus.

ACKNOWLEDGEMENTS

My deepest gratitude to Dr. Nejhad for allowing me to conduct this research and giving me the opportunity to explore an area of research closely aligned with my career interests. Thank you to Brenden Minei, Kyle Wong, Caton Gabrick, Adrian DeLeon, Sterling Gascon, and Vamshi Gudapati for their help and insights during this research. Also, many thanks to Tina Carvalho at the Pacific Biosciences Research Center for training me on and allowing me to use the TEM and SEM.

REFERENCES

[1] Whittingham, M. Stanley, Thomas Zawodzinski, and Robert F. Savinell. "Introduction: Batteries and Fuel Cells." Chemical Reviews 104.10 (2004): 4243-4244. Print.

[2] Service, R. F. "FUEL CELLS: Shrinking Fuel Cells Promise Power in Your Pocket." Science 296.5571 (2002): 1222-1224.

[3] Zawodzinski, Thomas A., et al. "Water Uptake by and Transport Through Nafion® 117 Membranes." Journal of The Electrochemical Society 140.4 (1993): 1041. Print.

[4] Büchi, Felix N., and Supramaniam Srinivasan. "Operating Proton Exchange Membrane Fuel Cells Without External Humidification of the Reactant Gases." Journal of The Electrochemical Society 144.8 (1997): 2767.

[5] Dicks, Andrew L. "The Role of Carbon in Fuel Cells." Journal of Power Sources 156.2 (2006): 128.

[6] Kannan, Arunachala M., Vinod P. Veedu, Lakshmi Munukutla, and Mehrdad N. Ghasemi-Nejhad. "Nanostructured Gas Diffusion and Catalyst Layers for Proton Exchange Membrane Fuel Cells." Electrochemical and Solid-State Letters 10.3 (2007): B47.

[7] Kannan, A.M., L. Cindrella, and L. Munukutla. "Functionally Graded Nano-porous Gas Diffusion Layer for Proton Exchange Membrane Fuel Cells under Low Relative Humidity Conditions." Electrochimica Acta 53.5 (2008): 2416-2422.

[8] Chen-Yang, Y.W., T.F. Hung, J. Huang, and F.L. Yang. "Novel Single-layer Gas Diffusion Layer Based on PTFE/carbon Black Composite for Proton Exchange Membrane Fuel Cell." Journal of Power Sources 173.1 (2007): 183-188.

[9] Park, Gu-Gon, et al. "Adoption of Nano-Materials for the Micro-Layer in Gas Diffusion Layers of PEMFCs." Journal of Power Sources 163.1 (2006): 113-118.


[10] Zawodzinski, Thomas A. "A Comparative Study of Water Uptake By and Transport Through Ionomeric Fuel Cell Membranes." Journal of the Electrochemical Society 140.7 (1993): 1981.

[11] Bernardi, Dawn M. "Water-Balance Calculations for Solid-Polymer-Electrolyte Fuel Cells." Journal of the Electrochemical Society 137.11 (1990): 3344.

[12] Sridhar, P., Ramkumar Perumal, N. Rajalakshmi, M. Raja, and K.S. Dhathathreyan. "Humidification Studies on Polymer Electrolyte Membrane Fuel Cell." Journal of Power Sources 101.1 (2001): 72-78.

[13] Ren, Xiaoming, and Shimshon Gottesfeld. "Electro-osmotic Drag of Water in Poly(perfluorosulfonic Acid) Membranes." Journal of the Electrochemical Society 148.1 (2001): A87.

[14] Chiang, M-S, and H-S Chu. "Effects of Temperature and Humidification Levels on the Performance of a Proton Exchange Membrane Fuel Cell." Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy 220.5 (2006): 435-448.

[15] Wood, David L., Jung S. Yi, and Trung V. Nguyen. "Effect of Direct Liquid Water Injection and Interdigitated Flow Field on the Performance of Proton Exchange Membrane Fuel Cells." Electrochimica Acta 43.24 (1998): 3795-3809.

[16] Chen, D., Li, W., and Peng, H. An Experimental Study and Model Validation of a Membrane Humidifier for PEM Fuel Cell Humidification Control.

[17] Kumar, Mukul (2011). Carbon Nanotube Synthesis and Growth Mechanism. In: Carbon Nanotubes - Synthesis, Characterization, Applications, Dr. Siva Yellampalli (Ed.), ISBN: 978-953-307-497-9, InTech, DOI: 10.5772/19331. Available from: http://www.intechopen.com/books/carbon-nanotubes-synthesis-characterization-applications/carbon-nanotube-synthesis-and-growth-mechanism.

[18] Aqel, Ahmad, Kholoud M. M. Abou El-Nour, Reda A. A. Ammar, and Abdulrahman Al-Warthan. "Carbon Nanotubes, Science and Technology Part (I) Structure, Synthesis and Characterisation." Arabian Journal of Chemistry 5.1 (2012): 1-23.

[19] Ghasemi-Nejhad, Mehrdad N., and Vamshi M. Gudapati. Nanotape and Nanocarpet Materials. Patent WO 2011106109 A2. 1 Sept. 2011.


CORRECTING SPECTRAL DATA FROM EXTRAGALACTIC STAR-FORMING REGIONS FOR ATMOSPHERIC DISPERSION

Casey Jones Department of Astronomy and Physics University of Hawai‘i at Hilo Hilo, HI 96720

ABSTRACT

Star-formation rates for 148 star-forming regions in sixteen low-redshift galaxies are calculated from 33,300 spectra taken with the SuperNova Integral Field Spectrograph (SNIFS) on the University of Hawai‘i's 2.2-meter telescope on Mauna Kea. This requires correcting for atmospheric dispersion so that, after the Hydrogen alpha and Hydrogen beta line fluxes are calculated and images generated from these values, the light extinction in the regions studied can be computed. The star-formation rates can then be calculated and comparisons made between low- and high-redshift objects.

INTRODUCTION

In collaboration with my mentor, I investigated the effect of atmospheric dispersion on data from 148 regions of star formation across sixteen nearby galaxies, obtained with the SuperNova Integral Field Spectrograph on the University of Hawai‘i's 2.2-meter telescope. The images can then be corrected for atmospheric dispersion and the star-formation rate (SFR) of each region calculated. Calculating the star-formation rates of these regions involves computing the Hydrogen alpha and Hydrogen beta line fluxes, as these are used to calculate the amount of light lost to dust in the star-forming region. Once the amount of light extinction has been calculated, the true flux of the Hydrogen alpha line can be found and, in conjunction with the redshift of the region, the Hydrogen alpha luminosity of the region can be calculated. From this the star-formation rate can be determined using the Kennicutt (1998) relation (Eq. 1).

$$\mathrm{SFR}\ (M_{\odot}\ \mathrm{yr}^{-1}) = 7.9\times10^{-42}\ L(\mathrm{H}\alpha)\ (\mathrm{erg\ s}^{-1}) = 1.08\times10^{-53}\ Q(\mathrm{H}^{0})\ (\mathrm{s}^{-1})$$

Equation 1. The Kennicutt relation between the Hydrogen alpha luminosity and the local star-formation rate of the star-forming region, where $L(\mathrm{H}\alpha)$ is the luminosity of the Hydrogen alpha line.
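As a quick numerical illustration of Eq. 1 (ours; the luminosity value below is made up, not taken from the report):

    # Minimal sketch of Eq. 1 (Kennicutt 1998): star-formation rate from
    # the H-alpha luminosity. The example luminosity is illustrative only.
    def sfr_from_halpha(L_halpha_erg_per_s):
        """Star-formation rate in solar masses per year."""
        return 7.9e-42 * L_halpha_erg_per_s

    # A region emitting 1e40 erg/s in H-alpha forms ~0.079 solar masses/yr.
    print(sfr_from_halpha(1e40))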

The atmospheric dispersion must be calculated to determine the star-formation rate because it causes the same spaxel in the red and blue channels to not be spectra of the same region of the sky. This is caused by the fact that the atmosphere refracts incoming light from a star, galaxy, or other celestial object much like a prism. The net effect is that the light is dispersed along a straight line at the parallactic angle, the angle between the azimuthal and equatorial coordinate grids at the location of the object (Figure 1). This dispersion shifts the data from the telescope as a function of wavelength, with images at shorter wavelengths closer to the zenith than those at longer wavelengths (Figure 2). This means that the atmospheric dispersion must be corrected before the calculation of the extinction values can occur; otherwise the spaxel being calculated will give erroneous extinction values.


Figure 1. Example of how the parallactic angle is measured from North through East. The dashed lines represent the equatorial coordinate grid and the solid lines represent the azimuthal coordinate grid.

Figure 2. Example of how the atmosphere refracts and disperses incoming light. When traced backwards the dispersed data has bluer images closer to zenith compared to red images. Solid black line represents incoming light before interacting with the atmosphere, the dark grey lines represent blue light with the solid being light as it travels and the dashed being how it would look on the sky when traced straight back. This is the same for the light grey lines that represent red light.


METHODS

To calculate the effect of atmospheric dispersion on the data, both the magnitude and the direction of the dispersion must be determined. The magnitude of the atmospheric dispersion is calculated using equations two and three of Szokoly (2005), reproduced here as Eqs. 2 and 3. The information required for this calculation is the outside temperature in Kelvin, the atmospheric pressure in Pascals, the two wavelengths being used (in this case Hydrogen alpha, 0.6563 μm, and Hydrogen beta, 0.4861 μm), the reciprocals of the wavelengths (σ), and the object's angle from zenith.

$$10^{8}\,(n_{\mathrm{air}} - 1) = \frac{5792105\ \mu\mathrm{m}^{-2}}{238.0185\ \mu\mathrm{m}^{-2} - \sigma^{2}} + \frac{167917\ \mu\mathrm{m}^{-2}}{57.362\ \mu\mathrm{m}^{-2} - \sigma^{2}}$$

Equation 2. Szokoly 2005 relation for calculating the index of refraction of air, $n_{\mathrm{air}}$, for a given wavelength, where $\sigma$ denotes the reciprocal of the wavelength being studied.

$$\Delta\alpha(\lambda) \approx \left[\,n_{\mathrm{air}}(\lambda) - n_{\mathrm{air}}(\lambda_{0})\,\right]\tan(z)$$

Equation 3. Szokoly 2005 relation for calculating the magnitude of atmospheric dispersion for a given wavelength compared to another wavelength. This requires using equation two to calculate both indices of refraction, knowing the atmospheric temperature in Kelvin and pressure in Pascals at the time of observation, the zenith angle, and the standard values for both the temperature and pressure at sea level.

The calculation of the magnitude of the atmospheric dispersion requires knowledge of both the local atmospheric temperature and pressure. While the temperature is recorded in the data, the pressure is not. To solve this problem, thirteen years of pressure data from the nearby Canada-France-Hawai‘i Telescope were averaged to create a value to be used for these calculations.

The calculation of the direction of the atmospheric dispersion involves calculating the parallactic angle of the object using equation nine of Filippenko (1982), reproduced here as Eq. 4. This formula requires the hour angle (h) and zenith angle (z) of the object in addition to the latitude of the observatory (ɸ). All of this information was recorded and included in the data files at the time of observation.

$$\sin(\eta) = \frac{\sin(h)\,\sin\!\left(\frac{\pi}{2} - \phi\right)}{\sin(z)}$$

Equation 4. Filippenko 1982 relation for calculating the parallactic angle (η) of an object based on the hour angle (h) and zenith angle (z) of the object and the latitude at which the observation took place (ɸ), with all angles in radians.
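To make Eqs. 2-4 concrete, here is a minimal Python sketch (ours, not the authors' pipeline; in particular, scaling the refractivity by (P/P0)(T0/T) to the observing conditions is our assumption of the standard correction, as the report only names the quantities involved):

    import numpy as np

    T0, P0 = 288.15, 101325.0  # standard sea-level temperature (K), pressure (Pa)

    def n_air_minus_1(wavelength_um, T, P):
        """Eq. 2 (Szokoly 2005) refractivity, scaled to T (K) and P (Pa)."""
        sigma2 = (1.0 / wavelength_um) ** 2  # reciprocal wavelength squared, um^-2
        n_std = (5792105.0 / (238.0185 - sigma2)
                 + 167917.0 / (57.362 - sigma2)) * 1e-8
        return n_std * (P / P0) * (T0 / T)  # assumed standard T/P correction

    def dispersion_rad(lam_um, lam0_um, z_rad, T, P):
        """Eq. 3: dispersion of lam relative to lam0, in radians."""
        return (n_air_minus_1(lam_um, T, P)
                - n_air_minus_1(lam0_um, T, P)) * np.tan(z_rad)

    def parallactic_angle(h_rad, z_rad, lat_rad):
        """Eq. 4 (Filippenko 1982): parallactic angle, in radians."""
        return np.arcsin(np.sin(h_rad) * np.sin(np.pi / 2 - lat_rad)
                         / np.sin(z_rad))

    # Example: H-alpha (0.6563 um) vs. H-beta (0.4861 um) at 45 deg from
    # zenith, using the averaged pressure quoted in the report and an
    # assumed summit temperature of 273.15 K.
    shift = dispersion_rad(0.6563, 0.4861, np.radians(45.0), 273.15, 61583.0)
    print(np.degrees(shift) * 3600.0)  # dispersion in arcseconds

    eta = parallactic_angle(np.radians(30.0), np.radians(45.0), np.radians(19.8))
    print(np.degrees(eta))  # 19.8 deg N is roughly Mauna Kea's latitude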

Once the parallactic angle of the object and the magnitude of atmospheric dispersion have been calculated, the flux values for each spaxel are calculated using the Pyraf routine splot to measure the local continuum level around the Hydrogen alpha and Hydrogen beta lines, thus allowing the calculation of the line fluxes and the generation of new images from these fluxes. Once these new images have been created, the Hydrogen beta image is shifted by the calculated amount in the direction of the parallactic angle, and new values are generated using a fast Fourier transform to accommodate the sub-spaxel shift amounts. Finally, to determine whether the shift was calculated successfully, the point spread functions of the Hydrogen alpha, Hydrogen beta, and shifted Hydrogen beta images are calculated and visually inspected.
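A sub-spaxel shift of this kind is commonly implemented as a phase shift in the Fourier domain; a minimal illustrative sketch using SciPy (not the authors' code):

    import numpy as np
    from scipy.ndimage import fourier_shift

    # Illustrative Fourier-domain sub-pixel shift (not the authors' code).
    # The shift is applied as a phase ramp on the image's FFT, which
    # naturally accommodates sub-spaxel (fractional-pixel) amounts.
    def subpixel_shift(image, dy, dx):
        shifted_fft = fourier_shift(np.fft.fft2(image), shift=(dy, dx))
        return np.real(np.fft.ifft2(shifted_fft))

    # Example: shift a toy H-beta image by 0.37 spaxels along one axis.
    img = np.random.rand(15, 15)
    print(subpixel_shift(img, 0.37, 0.0).shape)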

DATA AND RESULTS

Using thirteen years of weather data from the Canada-France-Hawai‘i Telescope, the average atmospheric pressure near the University of Hawai‘i's 2.2-meter telescope was calculated to be ~61583 Pa for use in calculating the amount of atmospheric dispersion in the data, in addition to the standard sea-level values of 288.15 K and 101325 Pa. This allowed the original images to be shifted, but it was determined after calculating their point spread functions that the shift caused the image to become more offset than it was originally (Figure 3). This led to using the calculated fluxes from a previously compiled database, where it was determined that the fluxes needed to be recalculated, as the Hydrogen beta fluxes were in some cases orders of magnitude higher than the Hydrogen alpha fluxes.

Figure 3. Point spread function of one column of an image of the center of NGC 6946, showing that the peak of the shifted Hydrogen beta image is more offset from the Hydrogen alpha peak than the initial Hydrogen beta image.


When the fluxes were recalculated, it was found that they now satisfied the requirement that the Hydrogen beta line be at least 2.8 times weaker than the Hydrogen alpha line, allowing the generation of new flux-based images.

DISCUSSION AND FUTURE RESEARCH

The recalculated Hydrogen beta fluxes led to a large number of non-detections of the Hydrogen beta line, notably reducing the number of usable flux values for the shifting procedure (Figure 4). This adds difficulty in determining whether the shift yields new valid flux values, as well as in determining whether the shift is successful.

Figure 4. Hydrogen beta flux image created from the calculated fluxes. All white squares represent non-detections of the Hydrogen beta line, whereas the black squares represent actual flux values, except for the top left, top right, and the two pixels adjoining the top right pixel, which are bad pixels on the detector and thus read as zero.

These new flux-based images will be used to try to determine the source of the large offsets seen in the shifted images. Once the cause is determined, fluxes can be generated for the shifted image and used to determine the extinction values for each spaxel. This will allow the calculation of the star-formation rate for each region.


CONCLUSION

Despite setbacks due to the as-yet-undetermined cause of the original image shifts moving the image too far, as well as problems with the flux values, new flux-based images have been created for future use in correcting the atmospheric dispersion in the data, from which star-formation rates for the studied regions will be determined. Combined with star-formation rates at higher redshifts, comparisons can then be made between these nearby galaxies and those much farther away, allowing the creation of a timeline of galactic evolution from the early universe to now.

ACKNOWLEDGEMENTS

I would like to thank the Hawai‘i NASA Space Grant Consortium for allowing me the opportunity to take my first steps into scientific research in my field of study and allowing me to gain many new skills that will help me in my future endeavors. I would also like to thank my mentor, Dr. Marianne Takamiya, for all of the help, advice, and expertise that she has provided during this project.

REFERENCES

Filippenko, A.V. (1982) The Importance of Atmospheric Differential Refraction in Spectrophotometry. Publications of the Astronomical Society of the Pacific, 94, 713-721.

Kennicutt, R.C. Jr. (1998) Star Formation in Galaxies Along the Hubble Sequence. Annual Review of Astronomy and Astrophysics, 36.

Szokoly, G.P. (2005) Optimal Slit Orientation for Long Multi-object Spectroscopic Exposures. Astronomy and Astrophysics, 443, 703-707.


DEVELOPMENT OF SAMPLE MOUNTING FOR STARDUST INTERSTELLAR CANDIDATES

Logan Magad-Weiss Department of Geology and Geophysics University of Hawai‘i at Mānoa Honolulu, HI 96822

ABSTRACT

At least two particles collected in aerogel by the Stardust Interstellar Dust Collector are suspected to be of interstellar origin, with several more interstellar candidates likely to be identified soon. Oxygen isotopic analysis by secondary ion mass spectrometry will provide more insight into the origin of these particles, and possibly definitive proof of their interstellar origin. We developed sample-preparation techniques to extract the interstellar candidates from the aerogel collector and prepared them for isotope analysis. By using grains extracted from meteorites with non-terrestrial oxygen isotopic composition, we will investigate the reliability, accuracy, and precision of our sample preparation and analytical techniques.

INTRODUCTION

NASA's Stardust Mission was launched in 1999 to collect cometary and interstellar dust and return the samples to Earth for study in the lab. Samples from the Jupiter-family comet 81P/Wild 2 have been extensively studied since they returned to Earth in 2006. One of the most surprising results from the Stardust cometary collection is that some of the rocky component of this comet originated in the inner solar system (Ogliore et al. 2012 ApJ 745 L19). Analyses of potential interstellar dust collected by Stardust are far more challenging due to its scarcity and small size. The Stardust Interstellar Dust Collector (SIDC) was exposed to the interstellar dust stream for 195 days during two periods between 2000 and 2002. Like the Stardust cometary collector, the SIDC captured high-velocity dust particles in low-density silica aerogel tiles supported by an aluminum frame. The 0.1 m² collector is composed of 85% aerogel tiles and 15% aluminum foils. Hypervelocity particles impacting the aluminum foils create craters and are severely melted, though some residue of the impacting particle can be retained within the crater. Particles impacting the aerogel are more gently captured and can retain their original mineralogy as well as their trajectory, which can be compared with the expected direction of the interstellar dust stream. Preliminary, nondestructive analyses of particles captured by the SIDC were recently published (Westphal et al. 2014 Science 345, 786). At least two particles captured in the aerogel were determined to be probable interstellar dust. The oxygen isotopic composition of the interstellar dust captured by Stardust can help distinguish true interstellar dust, which formed outside the solar system, from interplanetary dust, which formed inside the solar system. The 17O/16O and 18O/16O ratios of solar-system material vary by only ~5 percent, whereas presolar grains, which formed around other stars, vary in these ratios by up to a factor of 10. However, some grains that formed outside the solar system have approximately solar oxygen isotopic composition, so normal oxygen isotope ratios do not necessarily disprove an interstellar origin. Conversely, however, highly anomalous oxygen isotope ratios do prove that the particle formed outside the solar system.

Figure 1: Stardust interstellar candidate “Hylabrook”. A) Phase map of Mg-bearing olivine and amorphous phases, from synchrotron STXM data. B) X-ray diffraction pattern of Hylabrook, showing crystalline phases. Figure is from Westphal et al. 2014 Science 345, 786.

The technique best suited to precisely measure the oxygen isotopic composition of candidate interstellar grains is secondary ion mass spectrometry (SIMS), or ion probe. The ion probe is destructive: it consumes the sample to make the isotopic measurement. Since there are only a few grains identified so far that are suspected to be of interstellar origin, it is imperative to develop reliable, robust techniques to extract and prepare the particles for SIMS analysis.

METHODS

Steve Jones (JPL) has fabricated aerogel embedded with spinels from the Allende meteorite. These spinels have oxygen isotopic compositions off the terrestrial fractionation line, and so should be easily distinguishable from terrestrial contaminants and the surrounding aerogel by SIMS. In order to extract the spinel grains in aerogel keystones, a keystoning technique was implemented. This technique was developed by Andrew Westphal's lab at U. C. Berkeley (Westphal et al. 2004 MAPS 39, 1375), and was adapted to the microscope, stage, and micromanipulators available in HIGP. This was a significant resource for the University of Hawai‘i, since previously only U. C. Berkeley and Johnson Space Center could extract aerogel keystones. Dr. Ryan Ogliore has experience extracting keystones at U. C. Berkeley, and aided in the safe extraction of the keystone.

CONSTRUCTION OF A SURGICAL TABLE

A “surgical table” had to be constructed to hold the silica aerogel still during keystone extraction. A device to secure it in place was needed because of the aerogel's extremely low density and tendency to move easily by static charging. The aerogel is very fragile and fractures easily, so the surgical table had to hold the aerogel block in place during extraction without damaging the aerogel or the spinels within it. The surgical table (Figure 2) is a plastic box with plastic wrap that is tightened from the corners by fishing line pulled by guitar tuners. The aerogel block is placed on a Petri dish between the surface of the plastic box and the plastic wrap. The guitar tuners pull the plastic wrap taut, holding the aerogel block in place without exerting too much force on the aerogel. A hole in the plastic wrap provides access to the aerogel while the rest of the aerogel is protected from dirt and dust in the lab. To construct the box, four holes were drilled on each side of a plastic rectangular box. From there, guitar tuning heads were fitted to the holes and held in place using bolts, washers, and nuts. Four more holes were drilled above the ends of the guitar tuning heads so that fishing line could be fed into the box. A rectangle of plastic wrap was then cut out, and aluminum tape was folded over the edges. Plastic rods were also clamped down within the aluminum tape around the plastic wrap. Holes were punctured through the aluminum tape and plastic wrap so that fishing line could also be fed through there. The fishing line was then tied around the plastic rod through the hole so that it would not come undone later in the experiment. A Petri dish was then placed on top of the box, and the plastic wrap was laid down on top of it. Once the fishing line was fed through all of the holes above the guitar tuning heads, it was pushed through the hole within each tuning head, wrapped around, and knotted. All of the tuning heads were then turned until each corner of the plastic wrap was taut and the center of the plastic wrap was flat. This part was critical because the silica aerogel is very light and fragile, and a flat surface was needed that could hold it down without crushing it. The final step in the construction of the stage was to burn a hole ~1-4 mm wide in the center of the plastic wrap. This was accomplished using a soldering iron, which successfully created a hole that enabled us to see the aerogel while operating the microscope during extraction. Following the creation of this hole, the aerogel was successfully placed on the Petri dish under the plastic wrap and held down taut.

Figure 2: Surgical table built for safely holding the silica aerogel


THE CODE, AND THE CUTS

A Matlab script (originally written by Andrew Westphal) was used to send coordinates for cuts to the micromanipulator. The Matlab code writes simple script files of coordinate values that are sent via a serial interface to the micromanipulator. Once we were able to communicate with the micromanipulator through the serial port, we modified the Matlab script to function in an Ubuntu-Linux environment. We modified a vise grip so that it could hold a pulled-glass needle snugly in the micromanipulator. Next, we aligned the needle in roll, pitch, and yaw, and centered the micromanipulator's coordinate system above a spinel grain in the aerogel block. We then executed three extraction scripts for a 90-degree-bent glass needle (left cut, top cut, bottom cut) and one for a 45-degree-bent glass needle (the undercut).
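As a rough illustration of the scripted-cut idea (ours; the report does not give the micromanipulator's command format, so the port name, baud rate, and coordinate syntax below are hypothetical):

    import serial  # pyserial

    # Hypothetical sketch of streaming cut coordinates to a micromanipulator
    # over a serial link. The port name, baud rate, and "x,y,z" line format
    # are illustrative assumptions, not the instrument's documented protocol.
    def send_cut(port_name, coords):
        with serial.Serial(port_name, baudrate=9600, timeout=1) as port:
            for x, y, z in coords:
                port.write(f"{x:.3f},{y:.3f},{z:.3f}\n".encode("ascii"))

    # Example: three waypoints of a (hypothetical) left-cut path, in microns.
    send_cut("/dev/ttyUSB0", [(0.0, 0.0, 0.0), (0.0, 50.0, 0.0), (0.0, 100.0, 0.0)])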

Figure 3: Top view of the keystone cuts made by the Matlab code

Figure 4: Side view of the keystone cuts made by the Matlab code


Figure 5: Micromanipulator setup showing the 90° needle

THE KEYSTONE

The keystone was removed using a stereoscope to observe the aerogel and the thin bristles of a paint brush to get the aerogel to attach to them. Silica aerogel has very low density and is moved mostly by electrostatics rather than by gravity. The keystone was transferred onto a welled glass slide and quickly covered with another slide. The slides were then clamped together, and using a stereoscope and compound microscope, it was confirmed that the keystone was successfully extracted. We then located the spinel grain of interest inside the keystone, which was now in a different orientation than when it was first extracted. Despite this success, there were difficulties encountered, such as loss of the first attempted keystone. Removal of the keystone was first attempted using a single bristle of a paint brush held on a toothpick with epoxy. Problems arose when the aerogel was unable to attach to the bristle. The keystone could be moved, but not picked up and placed on a slide for safekeeping. In addition, the single bristle had a tendency to catch on the aerogel and bend until reaching a point that caused it to spring outward. This flicked the keystone multiple times, making it difficult to find, and eventually led to it being lost. Correcting for the mistakes made in the first extraction attempt led to a successful extraction on the second attempt.

ACQUIRING A STANDARD AND SECURING THE PARTICLES

In order to accurately measure the oxygen isotope composition of the Allende spinel grain, there needed to be a standard against which to compare the oxygen isotope composition. The standard used was a terrestrial grain of Burma spinel, courtesy of Dr. Aurelien Thomen at HIGP. Burma spinel has a known oxygen isotope value, and would permit the ion probe to switch between the two particles to measure the differences in oxygen isotope composition. To do so, a large piece of Burma spinel was removed using a diamond saw and placed in an ion probe mount. The Burma spinel was then epoxied into the ion probe mount using an epoxy recipe at HIGP, and cured in an oven at 70 °C for 72 hours. Once the epoxy had cured, the keystone was then placed in the ion probe mount and epoxied in as well. Using diamond paste, the sample mount could then be polished down until the Allende spinel grain was exposed. A larger Burma spinel grain was used so that, when polishing down to the Allende spinel, the Burma spinel would merely be shaved down, leaving the two particles side by side so the ion probe can switch between them during testing.

Figure 6: Extracted silica aerogel keystone


Figure 7: Ion probe mount displaying Burma spinel grain and silica aerogel keystone


CONCLUSION

Removal of this keystone has confirmed that the Matlab code, as well as the stage and micromanipulator, are capable of both cutting and removing a keystone safely. We are currently polishing the keystone with diamond paste in order to expose the Allende spinel grain and the spinel standard for ion probe analysis using the Cameca ims 1280 in the basement of POST at the University of Hawai‘i at Mānoa. The polishing should be completed in the coming week. Comparison of the Allende grain's oxygen isotope composition against that of a terrestrial standard particle of known composition will determine whether or not this technique of oxygen isotope measurement is accurate and precise. If the particle's oxygen isotope composition returns a value within the known range for the Allende meteorite, it will indicate that these methods are both precise and accurate, and a step closer to use on the suspected interstellar dust candidates.

REFERENCES

Westphal A. J. et al. (2014) Evidence for interstellar origin of seven dust particles collected by the Stardust spacecraft. Science 345, 786; DOI: 10.1126/science.1252496

Ogliore R. C., Nagashima K., Huss G. R., Westphal A. J., Gainsforth Z., and Butterworth A. L. (2015) Oxygen isotopic composition of coarse- and fine-grained material from Comet 81P/Wild 2. Geochimica et Cosmochimica Acta 166, 74-91; DOI: 10.1016/j.gca.2015.04.028


DESIGN AND DEVELOPMENT OF A SUSPENSION SYSTEM USED IN ROUGH- TERRAIN VEHICLE CONTROL FOR VIBRATION SUPPRESSION IN PLANETARY EXPLORATION

Arvin Niro College of Engineering University of Hawai‘i at Mānoa Honolulu, HI 96822

ABSTRACT

Interplanetary exploration has long been central to NASA's drive to explore what lies beyond our atmosphere, and NASA's rovers have allowed us to do just that. They are equipped with state-of-the-art technology that allows us to see and measure whether life can exist elsewhere. In this project, we aim to design and build a suspension system for a rover constructed last semester at Kapi‘olani Community College by Eric Caldwell and Lee Do. This rover features wireless technology and a mecanum drive system that make it extremely maneuverable. However, the mecanum wheels generate uncontrollable vibration in the rover. To suppress this vibration, a double-wishbone design with a spring and damper system was used to reduce and dampen the vibration generated by the mecanum wheels. As a result, a suspension system was designed and optimized for this specific rover using SolidWorks and finite element analysis to settle the overall design of the system.

INTRODUCTION

Suspension systems are commonly seen in automotive vehicles, where they dampen and reduce the amount of vibration transmitted from the ground to the vehicle. This allows passengers to enjoy the ride without having to worry about getting hurt going over potholes or speed bumps. By using a combination of springs and dampers, the vehicle controls how much force the chassis sees. Applying a suspension system to a rover exploring other planets is a natural extension, because the sensors and equipment onboard may be sensitive to vibration, which can cause imperfections in the recorded data. This was the case with the rover that I helped build at Kapi‘olani Community College.
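To illustrate the spring-damper principle, here is a toy single-degree-of-freedom model in Python (all parameters are made up for illustration, not the rover's actual values):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy chassis model: m*x'' = -c*(x' - y') - k*(x - y), where y(t) is
    # the wheel displacement caused by rollers hitting the ground.
    # Every number below is an illustrative assumption.
    m, c, k = 5.0, 40.0, 2000.0   # chassis mass (kg), damping, spring stiffness
    f_roller = 8.0                # assumed roller impact frequency (Hz)
    amp = 0.005                   # assumed 5 mm excitation amplitude

    def y(t):
        return amp * np.sin(2 * np.pi * f_roller * t)

    def ydot(t):
        return amp * 2 * np.pi * f_roller * np.cos(2 * np.pi * f_roller * t)

    def rhs(t, state):
        x, v = state
        a = (-c * (v - ydot(t)) - k * (x - y(t))) / m
        return [v, a]

    sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0], max_step=1e-3)
    # Softer springs (smaller k) move the resonance below the roller
    # frequency, reducing how much vibration reaches the chassis.
    print(f"peak chassis displacement: {np.abs(sol.y[0]).max() * 1000:.2f} mm")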

In the fall of 2013, I was part of a group that designed and built a rover (Figure 1) at KCC. The rover featured an onboard router that allowed for wireless control and video feed through the onboard webcam, wired through a CAN bus system. The rover also featured mecanum wheels: individually driven wheels with individual rollers mounted at 45° around the wheel. These wheels allow the rover to maneuver in three directions: forward and back, rotation, and side to side. When programmed, the wheels move in a certain direction to allow the rover to achieve what is called holonomic motion, in which the rover is capable of controlling all of its degrees of freedom, in this case three. One of the problems the group mentioned when they tested the rover was how shaky the video feed was while driving, as well as how much the vibration interfered with the sensors mounted onboard. This was due to the design of the mecanum wheels: because each wheel features individual rollers mounted around its entire circumference, the rover vibrates each time a roller hits the ground. For this reason, I was tasked with designing a suspension system for the rover to reduce the amount of vibration generated by the wheels.

Figure 1 – Rover built with mecanum wheels (Webcam not shown)

METHOD

Before designing the suspension, research needed to be conducted on the different types of suspension designs currently in use. Due to the design of the mecanum wheels, we decided to go with a double wishbone suspension design (Figure 2), most commonly seen in Formula One and off-road vehicles. The beauty of this design is that it allows each wheel to be independent of the other wheels on the vehicle, thus allowing fine tuning and design of an individual wheel without affecting the others. Because our rover features a single gearbox and motor for each wheel, a double wishbone setup provided a perfect solution.

Figure 2 – Double Wishbone Design. Photo provided by carbibles.com

The gearboxes on the rover are from AndyMark, a company specializing in robotic components. Using the downloadable CAD files of the AndyMark Toughbox, we were able to determine the exact dimensions of the front plate, as shown in Figure 3. SolidWorks, a 3D computer-aided design package, allowed us to design the double wishbone suspension as shown in Figure 4. The first design featured equal-length upper and lower control arms to allow for even vertical movement. An upright connects the two control arms on the outside and houses the ball bearing of the output shaft that connects to the mecanum wheel. This all connects to the redesigned faceplate that will replace the current plate on the Toughbox. Universal joints were designed to allow the drive shaft to reach from the gearbox to the wheel, and to allow vertical movement while still transmitting power from the gearbox at various angles.


Figure 3 – SolidWorks rendering of AndyMark Toughbox

Figure 4 – SolidWorks rendering of the first wishbone design for the rover

Our designs went through finite element analysis (FEA) using the built-in tool provided by SolidWorks. The part I was most interested in was the new faceplate that would replace the current faceplate on the gearbox. A force of 100 lbs. was applied at a 45° angle to the shaft to simulate the force of the shock when mounted. Using the Von Mises and deflection graphs shown in Figures 5 and 6, we were able to redesign the parts to ensure that they could withstand the force. Once completed, parts were 3D printed (Figure 7) at KCC to allow us to visually inspect our design and get a feel for how it would actually behave when mounted onto the rover. When the 3D-printed parts were test-fitted on the rover, there were clearance problems and design issues with the first design; therefore a second design was made.

Figure 7 – ABS 3D Printed parts of the first wishbone design

RESULTS

The second design still utilized equal-length control arms, but they were extended to allow for clearance as well as mounting of the shocks and dampers. The first design did not account for the mounting points of a shock and damper system; these types of components are not common among rovers and therefore had to be sourced. For the shock, we found mountain bike shocks that would fit within the design of our suspension system. They are approximately 110 mm (4.33 in) long, with a 550 lb. steel spring, as shown in Figure 8. These were the stock springs, but there are plans to reduce the stiffness. The dampers were purchased from McMaster-Carr, which had dampers small enough to fit (Figure 9, left). They are approximately 5.28” when extended and 4.3” when compressed, and can handle up to 112 lbs. of force. Also purchased from McMaster-Carr were new connectors for the dampers, since the original connectors featured a ball-joint end, which was not ideal for our setup (Figure 9, right).

Figure 8 – Mountain bike shock

Figure 9 – Damper with ball joint connection (left) and easy adapt clevis connection (right)

One component that was not accounted for in the original design was the universal joints. They were originally designed specifically for the first design, but due to the high cost of manufacturing custom universal joints, we decided to use off-the-shelf ones. The problem was that they were too long (Figure 10, left); to fit them in the design, the control arms had to be lengthened and the universal joints machined down (Figure 10, right). 1/2-inch-diameter rods with 1/8-inch keyways connected the universal joints to the output shaft of the gearbox and the upright of the design (Figure 11). The keyway allowed power to be transmitted through the universal joints to provide the rotational movement of the wheels, while still allowing for vertical movement during rotation.

Figure 10 – Original universal joint (left) and machined universal joint (right)

Figure 11 – 1/2” diameter shafts with 1/8” keyway and key


Once all the new designs were drawn and analyzed (Figures 12-15), the parts were taken to be 3D printed. All of the components were then test-fitted into the printed design, as shown in Figure 16.

Figure 16 – ABS 3D Printed parts of the second wishbone design with all components installed

From here, the designs will be machined out of 6061 aluminum. 6061 was chosen because it is a common material widely used in automotive and aerospace applications, and the FEA tests demonstrated that it can withstand the applied force.

CONCLUSION

As a result, I was able to design and prototype a suspension system for the rover. As mentioned before, the last step will be to machine the parts out of 6061 aluminum and install them on the rover. Once that is complete, tests can be conducted to see how the system behaves under different loading conditions and what should be changed. One likely change is the stiffness of the spring: because the current spring is so stiff, the system behaves much as it would without a suspension, since the force required to compress the spring is too high. For that reason, the springs will be replaced with softer ones.

Although I was not able to build a complete working suspension system for the rover, I was able to go through the engineering process of designing and prototyping my designs. This allowed me to see that there are pros and cons to every decision made, and to understand the importance of using software-based analysis to save time and money when designing a prototype. This project also allowed me to further develop my fabrication and SolidWorks skills, which I had some experience with but wanted to improve. Understanding how FEA works, and learning how to change my designs based on its results, was one of the new skills I developed. Also key was measuring twice and cutting once, to ensure perfect fits of the components during assembly.


ACKNOWLEDGEMENTS

I would like to thank my mentor, Dr. Aaron Hanai, for all his help, knowledge, and assistance in everything that I did on this project, whether it was designing parts in SolidWorks or understanding the physics and math behind the concepts, and also for helping with the purchase of some parts that I needed. I would also like to thank the Kapi‘olani Community College STEM program for allowing me to use the facility to conduct my research. Thank you to Lee Do and Eric Caldwell, past fellows of the Hawai‘i Space Grant who were also part of the project in the fall of 2013, for additional help with the rover. Thank you to everyone else who gave input on my designs and helped me find parts (William Kaeo). Lastly, thank you to the Hawai‘i Space Grant Consortium for all your monetary support in funding this project.

REFERENCES

Longhurst, Chris. Coil Spring Type 1. Digital image. The Suspension Bible. N.p., 13 Apr. 2014. Web. 1 May 2014.


FIGURES

Figure 5 – Von Mises graph of the first design

Figure 6 – Displacement graph of the first design


Figure 12 – Von Mises graph of the second design

Figure 13 – Displacement graph of the second design


Figure 14 – Von Mises graph of the second design lower control arm

Figure 15 – Displacement graph of the second design lower control arm


ESTIMATION OF DAYTIME SLEEPINESS (DS) IN SPACEFLIGHT SUBJECTS

Roberto F. Ramilo Jr Department of Kinesiology and Rehab Science University of Hawai‘i at Mānoa Honolulu, HI 96822

ABSTRACT

Daytime sleepiness (DS) is very common among college students. Its consequences in young college students range from lower levels of academic performance to higher risks of motor vehicle accidents and poorer health. Unfortunately, the currently accepted quantitative measure of DS is time consuming and expensive, as it requires the subject to go through a lengthy diagnostic tool conducted in a clinical setting, the Multiple Sleep Latency Test (MSLT). This study uses college students as a baseline to develop improved methods for the efficient, objective, and reliable detection of DS as applied to astronauts during spaceflight or when living on the space station for a prolonged period of time. The purpose of this research is to develop a novel, reliable, and quick quantitative method of estimating DS in astronauts using pupillometry. Subjects first complete a health questionnaire and are asked to wear an actigraph watch for three consecutive nights, in order to ensure healthy subjects with a regular sleep/wake pattern and an adequate amount of continuous sleep. When returning to the lab, subjects then take two qualitative sleep surveys: the Epworth Sleepiness Scale (ESS) to estimate daytime sleepiness, and the Stanford Sleepiness Scale (SSS) to estimate their sleepiness at that moment. To verify the outcomes of these qualitative tests, a quantitative measurement of the subject's pupil diameter is performed over a period of 15 minutes. Pupil diameter fluctuations are analyzed using time series analysis tools such as the standard deviation and the Pupil Stability Index (PSI). Preliminary results from 42 subjects suggest that pupillometry is a reliable measurement to validate qualitative results obtained through sleep surveys and to successfully identify subjects affected by DS.

INTRODUCTION

There have been a myriad of studies on the importance of astronauts in spaceflight maintaining a normal sleep-wake pattern relative to Earth's light/dark cycle at 1 G. During spaceflight, both exposure to the main synchronizer of the human circadian timing system, i.e., the light/dark cycle, and exposure to gravitational forces are altered dramatically (Dijk et al., 2001). In addition, maintaining optimum human health, performance, and safety during spaceflight requires adequate sleep duration and synchrony with the circadian pacemaker (Barger et al. 2012). Astronauts may be required to perform technical operations during spaceflight missions that require 24-hour monitoring; however, a mismatch between scheduled work time and the environmental light/dark cycle can lead to substantial loss of sleep duration during sleep episodes (Barger et al., 2014). Therefore, a quantitative measure of physiological sleepiness will help to monitor astronaut health and to ensure proper shift-work scheduling. This study will help to establish a baseline that can be applied to astronauts in spaceflight by measuring daytime sleepiness in college students in Hawai‘i. Reports from college students indicate that Daytime Sleepiness (DS) among them is very high. Unfortunately, adequate quantitative data concerning this problem is lacking, in large part because the currently accepted quantitative measure of daytime sleepiness is operationally cumbersome, very expensive, and time consuming. The current method of assessing DS has a patient participate in the Multiple Sleep Latency Test (MSLT), the 'gold standard' for the objective measurement of physiologic sleepiness (Merritt et al., 2004). The MSLT is a lengthy diagnostic exam conducted in a clinical setting over 24 to 48 hours. This pilot project is aimed at addressing the above problems. It will assess DS in Hawai‘i's college students using pupil fluctuations, based on standard deviation changes of pupil diameter over time and the Pupil Stability Index (PSI) that has been developed for this research. The data acquired from our subjects will include two types of continuously recorded, objective physiological variables strongly influenced by the autonomic nervous system: the stability of pupil diameter (pupillometry) and actigraphy (an accelerometer worn on subjects' wrists for 3 nights). In addition, subjects will be asked to fill out two qualitative sleep surveys; the subjective surveys used in this study are currently used by clinicians and researchers as reliable subjective metrics for sleepiness. The surveys being used are the Epworth Sleepiness Scale (ESS) to estimate daytime sleepiness, and the Stanford Sleepiness Scale (SSS) to confirm the ESS test. Each pupillometry recording will be analyzed using standard statistical measures as well as the PSI. The results from the quantitative (pupillometry) and qualitative (sleep surveys) methods will be compared, and an assessment made of their potential use as multivariate discriminators of sleepiness. The proposed project is expected to lead to the development of improved methods for the efficient, objective, and reliable detection of DS as applied to astronauts undergoing spaceflight or living on the space station for a prolonged period of time.

BACKGROUND

Colloquial definitions of sleepiness (e.g., lethargy, general fatigue, feeling tired) do not enable researchers to quantitatively measure sleepiness. This study defines physiologic (objective) sleepiness as how rapidly an individual falls asleep under sleep-promoting situations (Carskadon and Dement, 1982). To physiologically measure how rapidly an individual falls asleep, pupil fluctuations are used as indicators. The Stanford School of Medicine (n.d.) describes the anatomy and physiology of pupil fluctuation as:

The physiology behind a "normal" pupillary constriction is a balance between the sympathetic and parasympathetic nervous systems.

Figure 1 Pupillary Control, The Basics, from http://stanfordmedicine25.stanford.edu/t

Parasympathetic innervation leads to pupillary constriction. A circular muscle called the sphincter pupillae accomplishes this task. The fibers of the sphincter pupillae encompass the pupil. The pathway of pupillary constriction begins at the Edinger-Westphal nucleus near the oculomotor nerve nucleus. The fibers enter the orbit with CN III nerve fibers and ultimately synapse at the ciliary ganglion. Sympathetic innervation leads to pupillary dilation. Dilation is controlled by the dilator pupillae, a group of muscles in the peripheral 2/3 of the iris.


Sympathetic innervation begins at the cortex with the first synapse at the ciliospinal center (also known as Budge's center after German physiologist Julius Ludwig Budge). Postsynaptic neurons travel down through the brain stem and finally exit through the cervical sympathetic chain and the superior cervical ganglion. They synapse at the superior cervical ganglion, where third-order neurons travel through the carotid plexus and enter the orbit through the first division of the trigeminal nerve. Lowenstein et al. (1963) stated that under constant accommodation and lighting conditions, a healthy iris undergoes small changes in size during wakefulness due to the varying neuronal stimulation of the two iris muscles. In dark or low-light settings, an alert individual's pupils become large and show little fluctuation (Kardon, 1997). As an individual becomes sleepy, central and hypothalamic centers cease to function in an orderly manner, central inhibition of the Edinger-Westphal nucleus decreases, sympathetic tone is steadily lost, and a preponderance of parasympathetic activity occurs that is reflected in decreasing pupil size and large, slow pupillary fluctuations (Merritt et al., 2004).

PURPOSE

The purpose of this research is to use pupillometry to develop a reliable and quick quantitative method of estimating daytime sleepiness. Specifically, this research will show that pupil stability, as expressed by the fluctuation of the standard deviation of the subjects' pupil diameters over time and by the Pupil Stability Index (PSI), is a reliable indicator for estimating daytime sleepiness.

HYPOTHESIS

Pupil stability is a reliable indicator to estimate Daytime Sleepiness (DS) and will be lower in sleepy subjects.

METHOD

To date there have been 42 subjects (N=42), 13 female and 29 male, with an average age of 23 (± 4.92) and an average BMI of 26.2 (± 5.99); 11 were excluded for medical reasons. The average amount of sleep for our subjects was 5.98 hours (± 57.82 minutes), with an average sleep efficiency of 79.2 (± 4.19). The 31 remaining subjects were split into two categories, "Sleepy" and "Alert," based on the ESS and SSS survey scores. The ESS rates subjects by having them score their likelihood of dozing off in eight hypothetical situations, from "0," would never doze, to "3," high chance of dozing. Subjects whose scores totaled 1-6 were categorized as "Alert," while those who scored 9 or higher were categorized as "Sleepy." For the SSS, subjects self-rated their alertness on a scale from 1-7. Subjects who reported themselves as "1" or "2" were categorized as "Alert," while those who self-rated at "3" or more were categorized as "Sleepy." For each subject, an initial screening survey was administered to ensure healthy subjects. Subjects must be free of sleep medications that could alter normal sleep patterns. Actigraphy watches are worn for three consecutive nights to verify proper sleep habits. Subjects sleeping less than 6 consecutive hours per night are also excluded from the study. When returning to the lab on day four, each subject takes the ESS and SSS sleep surveys and undergoes pupil measurement for 15 minutes. The first five minutes are used for acclimation, and only the last 10 minutes of measurement are analyzed.


All subjects were recruited from the University of Hawai‘i college system, and this research has been approved by the University of Hawai‘i Human Studies Program (IRB CHS#20902) to conduct experiments on human subjects.

As human subjects blink during the data acquisition process, zero-value entries appear in the time series when the system loses recognition of the subject's pupil. This can occur more often in sleepy subjects, so an interpolation method is implemented to estimate the missing data points. The interpolation is done in MATLAB using a weighted-average interpolation. Once the missing data points were interpolated, the subjects were sorted based on their ESS and SSS survey scores, and the standard deviation analysis and PSI were used to quantify diameter stability and identify "Sleepy" subjects. The first 5 minutes of the recording session are omitted due to pupillary environmental adjustment. The remaining ten-minute time series is then divided into ten one-minute intervals. For each interval, the standard deviation of the diameter is calculated for subjects in each grouping. The means of the standard deviations were plotted over time with a linear regression. Along with the mean of the standard deviations, a new metric developed in this study was used: the Pupil Stability Index. We defined the Pupil Stability Index (PSI) as

$$\mathrm{PSI} = \frac{1}{\bar{X}} \sum_{i=0}^{N/\nu - 2} \left|\ \frac{1}{\nu}\sum_{k=1}^{\nu} X_{i\nu + k} \;-\; \frac{1}{\nu}\sum_{k=1}^{\nu} X_{(i+1)\nu + k}\ \right|$$

where N is the number of data points, $\bar{X}$ is the average of all the data points, and ν is the frequency of acquisition. This is a dimensionless quantity that represents the fluctuation of the pupil diameter. A simple example of how this index is calculated is shown below for the case where the frequency is 3 and the number of data points in the time series is 18.

As shown in Figure 2, the first step is to calculate the average of the first 3 data points, then the average of the subsequent 3 data points, and so on until the end of the time series. Each of these averages represents the pupil diameter over a one-second interval. We then take the absolute value of the difference of two adjacent averages until the whole set is spanned, which represents how much the diameter averages fluctuate. We complete the calculation by adding all these differences and normalizing the sum by the average of all the data points in the set. The higher the fluctuations of the pupil diameter, the higher the absolute values of the differences, and therefore the higher the value of the PSI. Hence, we hypothesize that the higher the PSI, the sleepier the subject.

Figure 2. Example of the PSI calculation
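For readers who prefer code to notation, the following is a minimal C++ sketch of the PSI computation under the definition above; it is our illustrative translation, not the study’s MATLAB implementation, and the toy data values are invented.

#include <cmath>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

// PSI sketch: average the series in groups of nu samples, sum the
// absolute differences of adjacent group averages, and normalize by
// the mean of the whole series.
double pupilStabilityIndex(const std::vector<double>& x, std::size_t nu) {
    std::size_t groups = x.size() / nu;          // number of one-second averages
    std::vector<double> avg(groups);
    for (std::size_t j = 0; j < groups; ++j)
        avg[j] = std::accumulate(x.begin() + j * nu,
                                 x.begin() + (j + 1) * nu, 0.0) / nu;

    double fluctuation = 0.0;                    // sum of |adjacent differences|
    for (std::size_t j = 0; j + 1 < groups; ++j)
        fluctuation += std::fabs(avg[j] - avg[j + 1]);

    double mean = std::accumulate(x.begin(), x.end(), 0.0) / x.size();
    return fluctuation / mean;                   // dimensionless PSI
}

int main() {
    // Toy diameter series mirroring the example in the text: nu = 3, N = 18.
    std::vector<double> d = {5.1, 5.0, 5.2, 5.3, 5.2, 5.4,
                             5.0, 4.9, 5.1, 5.2, 5.3, 5.1,
                             5.4, 5.5, 5.3, 5.0, 5.1, 5.2};
    std::cout << "PSI = " << pupilStabilityIndex(d, 3) << "\n";
}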

Figure 3. Mean sleep length for subjects using SSS scores

RESULTS

The mean sleep lengths for subjects categorized as “Sleepy” or “Alert” by their SSS scores show that “Alert” subjects had a greater mean sleep length than “Sleepy” subjects, as seen in Figure 3. Subjects sorted by ESS scores had similar results, but there were no statistically significant differences in either grouping.

Figure 4. Mean sleep efficiency for subjects using SSS scores

Figure 4 shows sleep efficiency data from the actigraphy watch: “Alert” subjects sorted by their SSS scores had a higher mean sleep efficiency than subjects who scored as “Sleepy.” Again, the results were similar with the ESS sorting; however, there was no statistically significant difference using either sorting method.

Regarding onset latency, how long (in minutes) a subject took to fall asleep, subjects who self-reported as “Alert” on their SSS scoring had a lower mean onset latency than subjects who self-reported as “Sleepy,” as seen in Figure 5. A statistically significant difference was nearly established with the SSS sorting method, as calculated by a t-test (p = 0.0678). The results were similar for subjects sorted by ESS scores, but the ESS group did not show a statistically significant difference.

Figure 5. Mean onset latency for subjects using SSS scores

These results suggest that for healthy subjects, sleep length, sleep efficiency, and sleep onset latency are not useful indicators for estimating daytime sleepiness, as they would have been had the subjects had sleep pathologies. We therefore used other techniques of analysis to estimate DS. Figure 6 shows the means of the standard deviations of pupil diameters over one-minute intervals, which represent pupil fluctuation during the recording period. “Alert” subjects, sorted by self-reported SSS scores, have a lower mean standard deviation (pupil fluctuation) than subjects who reported as “Sleepy.” The linear trend lines show, for both “Alert” and “Sleepy” subjects, a decrease in the amount of pupil fluctuation over time. The mean value for each subject in the SSS group was statistically different between the “Alert” and “Sleepy” subjects (p<0.05). Subjects sorted by ESS scores had similar results, but no statistically significant difference was found.

Figure 6. Means of standard deviations of pupil diameters vs. time

The mean PSI values for subjects sorted by their SSS scores are shown in Figure 7. For the “Sleepy” and “Alert” groups, a statistically significant difference was present (p=0.022), with the “Alert” subjects having a lower mean PSI value than the “Sleepy” subjects. The ESS group had results similar to the SSS group, but no statistically significant difference was present.

Figure 7. Mean PSI for subjects using SSS scores

These results suggest that both the mean of the standard deviation calculated over one-minute intervals during the last ten minutes of recording and our newly developed indicator (PSI) are reliable quantitative methods for estimating DS using the SSS survey.

DISCUSSION

The use of the qualitative ESS and SSS surveys has been problematic for this research. First, neither rating system includes a clear distinction between “Sleepy” and “Alert,” so subjective decisions had to be made to identify the limit separating the two categories. Second, the scoring of subjects was not consistent between the surveys, which suggests that one may be more reliable than the other (in our case, the SSS survey). Even though both the ESS and the SSS have been used and validated in the literature, based on this research the SSS seems more appropriate for pupillometric analysis. To alleviate such subjectivity, the same research protocol and analysis should be duplicated, but this time compared with scoring obtained from the Multiple Sleep Latency Test, which has been clinically validated and currently represents the gold standard for DS estimation. An additional limitation of this study is the lack of a robust sample size from the Hawai‘i college student population. Future studies are therefore necessary to obtain stronger statistical differences and confirm the efficacy of the PSI.

CONCLUSION

This research aimed to demonstrate that pupil stability, as expressed by the fluctuation of the standard deviation of the subjects’ pupil diameters over time and by the Pupil Stability Index, is a reliable indicator for estimating Daytime Sleepiness. Results showed that sleepy subjects did have less pupil stability by both the means of the standard deviations and the Pupil Stability Index. If a pupillography recording device could be mounted on a space station, astronauts would only need to sit for 15 minutes to determine the effect DS has on them. The analysis can be automated, or monitored by mission controllers: astronauts with higher values of standard deviation would be instructed to get more sleep, whereas standard deviations in the lower ranges would indicate an astronaut fit for duty. Further research would involve reproducing this study and comparing our novel PSI with results from the MSLT rather than with more subjective baselines such as sleep surveys.

ACKNOWLEDGEMENTS

First, I would like to thank my wife, Jasmine, and my family for supporting and encouraging me to pursue this research. I would like to acknowledge Dr. Hervé Collin, my Faculty Advisor, without whom I would not have been able to pursue this research and who has been supportive no matter the circumstance, as well as my consulting advisor Dr. Sheryl Shook, for introducing me to Sleep Science. I would also like to thank the Hawai‘i Space Grant Consortium for funding my research, and the STEM program at Kapi‘olani Community College for the use of their facilities. Finally, mahalo to my team member, Mitch Mikami.


REFERENCES

Barger, L. K., Sullivan, J. P., Vincent, A. S., Fiedler, E. R., McKenna, L. M., Flynn-Evans, E. E., et al. (2012). Learning to Live on a Mars Day: Fatigue Countermeasures during the Phoenix Mars Lander Mission. SLEEP, 35(10), 1423-1435.

Barger, L. K., Wright Jr., K. P., Burke, T. M., Chinoy, E. D., Ronda, J. M., Lockley, S. W., et al. (2014). Sleep and cognitive function of crewmembers and mission controllers working 24-h shifts during a simulated 105-day spaceflight mission. Acta Astronautica(93), 230-242.

Bitsios, P., Schiza, S. E., Giakoumaki, S. G., Savidou, K., Alegakis, A. K., & Siafakas, N. (2006). Pupil Miosis Within 5 Minutes in Darkness Is a Valid and Sensitive Quantitative Measure of Alertness: Application in Daytime Sleepiness Associated With Sleep Apnea. SLEEP, 29(11), 1482-1488.

Carskadon, M., & Dement, W. (1982). The Multiple Sleep Latency Test: what does it measure? Sleep(5), S67-S72.

Dijk, D.-J., Neri, D. F., Wyatt, J. K., Ronda, J. M., Riel, E., Ritz-De Cecco, A., et al. (2001). Sleep, performance, circadian rhythms, and light-dark cycles during two space shuttle flights. The American Journal of Physiology - Regulatory, Integrative and Comparative Physiology, 281, 1647-1664.

Griofa, M. O., Blue, R. S., Cohen, K. D., & O'Keeffe, D. T. (2011, April). Sleep Stability and Cognitive Function in an Arctic Martian Analogue. Aviation, Space, and Environmental Medicine, 82(4), 434-441.

Gundel, A., Polyakov, V. V., & Zulley, J. (1997). The alteration of human sleep and circadian rhythms during spaceflight. Journal of Sleep Research, 6, 1-8.

Lowenstein, O., & Loewenfeld, I. (1962). The pupil. In H. Davison (Ed.), The Eye (Vol. 3, pp. 231-267). New York: Academic Press.

Lowenstein, O., Feinberg, R., & Loewenfeld, I. (1963). Pupillary movements during acute and chronic fatigue. Investigative Ophthalmology(2), 138-157.

Mallis, M. M., & DeRoshia, C. W. (2005). Circadian Rhythms, Sleep, and Performance in Space. Aviation, Space, and Environmental Medicine, 76, B94-107.

Merritt, S. L., Schnyders, H. C., Patel, M., Basner, R. C., & O'Neill, W. (2004). Pupil staging and EEG measurement of sleepiness. International Journal of Psychophysiology(52), 97-112.

Mitler, M., & Miller, J. (1996). Methods of testing for sleepiness. Behavioral Medicine(21), 171-183.


Prasad, B., Choi, Y. K., Weaver, T. E., & Carley, D. W. (2011). Pupillometric assessment of sleepiness in narcolepsy. Frontiers in Psychiatry, 2(35), 1-7.

Szabadi, E., & Bradshaw, C. (1996). Autonomic pharmacology of a2-adenoceptors. Journal of Psychopharmacology(10), 6-18.

Verghese, A., Kugler, J., Ozdalga, E., Chi, J., Hosamani, P., Elder, A., et al. (n.d.). #22 Pupillary Responses. Retrieved October 18, 2014, from Stanford Medicine 25: http://stanfordmedicine25.stanford.edu/the25/pupillary.html

Yamamoto, K., Kobayashi, F., Hori, R., Arita, A., Sasanabe, R., & Shiomi, T. (2013). Association between pupillometric sleepiness measures and sleep latency derived by MSLT in clinically sleepy patients. Environmental Health and Preventive Medicine, 18, 361-367.


DEVELOPING A SOFTWARE TOOL TO ENABLE CREATION OF COMMAND SCRIPTS FOR A SATELLITE MISSION

Erik Keoni Wessel
Department of Physics
University of Illinois at Urbana-Champaign
Urbana, IL 61801-3080

INTRODUCTION

From mid-May to August 2014, I worked for the Hawai‘i Space Flight Laboratory (HSFL) at the University of Hawai‘i at Mānoa as a programmer for the upcoming Hiakasat satellite mission. During my internship, I helped to develop COSMOS (Comprehensive Open-architecture Space Mission Operations System), a suite of software systems and tools that the Hiakasat mission will use both on the ground and on the satellite itself (Sorenson et al., 2012). My primary project was the creation of a new software tool to support the COSMOS flight software. This application, called the Command Assembler Tool (CAT), is crucial for the generation of command scripts, files that are sent to the satellite to control it. It can also be used to view logs sent back from the satellite.

SKILLS LEARNED

This project gave me the opportunity to hone my existing computer science skills and acquire several new ones. I gained hands-on experience with the COSMOS flight software that CAT supports by writing test routines for satellite hardware. I learned how COSMOS monitors and controls the satellite. I also gained experience with Qt, a cross-platform application and GUI framework (http://qt-project.org/), while writing the application in C++ and Qt’s QML language.

My first task was to write test programs for the satellite. Being able to test the satellite’s systems regularly as it is prepared for launch is crucial for the mission. Each piece of hardware requires its own test program, and at the time my internship started a program was needed to test the spacecraft’s magtorque (magnetic torque) rods. Magtorque rods are a set of three perpendicular electromagnets that help control the rotation of the satellite by interacting with the Earth’s magnetic field. To test them, the program I wrote ramped the current in each magtorque rod up and down and compared the magnetic field measured by internal sensors with the expected flux values at regular intervals (a sketch of this ramp-and-compare logic appears at the end of this section). Additionally, I wrote a bash script that runs all the test programs in sequence, allowing quick verification that the satellite is in working order. Testing the satellite hardware provided perspective and hands-on knowledge about the flight software.

Understanding how the COSMOS flight software systems work was essential. A collection of programs on the flight computer is responsible for controlling and monitoring the satellite. While the satellite is still on the ground we can manually run these programs and view their output, but after launch this will no longer be practical. Instead, a special program called “agent_exec” (which is set to start running as soon as the flight computer boots up) runs and monitors all the other programs for us. Agent_exec itself then takes commands and relays output through files sent to and from the satellite.
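The following is the promised sketch of the ramp-and-compare test logic. All hardware calls here are invented stand-ins (the real test program talks to the flight computer through COSMOS device routines), and the tolerance and flux model are illustrative only.

#include <cmath>
#include <cstdio>

// Invented stand-ins for hardware access.
static double g_current[3] = {0.0, 0.0, 0.0};

double set_rod_current(int rod, double amps) {   // returns expected flux
    g_current[rod] = amps;
    return amps * 2.0e-5;                        // toy linear flux model (tesla)
}

double read_magnetometer(int rod) {              // pretend sensor readout
    return g_current[rod] * 2.0e-5 * 1.01;       // 1% simulated sensor error
}

// Ramp each of the three perpendicular rods up and down, comparing the
// measured field against the expected flux at each step.
bool test_magtorque_rods() {
    const double kTol = 0.05;                    // 5% relative tolerance (a guess)
    bool pass = true;
    for (int rod = 0; rod < 3; ++rod) {
        for (int step = 0; step <= 20; ++step) {
            double amps = (step <= 10) ? 0.1 * step : 0.1 * (20 - step); // up, then down
            double expected = set_rod_current(rod, amps);
            double measured = read_magnetometer(rod);
            if (std::fabs(measured - expected) > kTol * std::fabs(expected) + 1e-12) {
                std::printf("rod %d failed at %.1f A\n", rod, amps);
                pass = false;
            }
        }
    }
    return pass;
}

int main() { return test_magtorque_rods() ? 0 : 1; }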

In order to communicate with agent_exec, there must be a way to describe any events that can happen in a general sense, be they commands that might be executed or other events that have occurred or will occur on the satellite. COSMOS describes all types of events with a single data structure called an “eventstruc,” detailed in the table below (Figure 1.).

double utc The event’s universal time coordinate (UTC)

double utcexec The time the event was actually triggered

char node [] COSMOS Node for the event

char name [] Name of the event

char user [] User of the event

uint32_t flag Event flags

uint32_t type Event type

double value Current value of the condition

double dtime Initial time consumed

double ctime Continuous time consumed

float denergy Initial energy consumed

float cenergy Continuous energy consumed

float dmass Initial mass consumed

float cmass Continuous mass consumed

float dbytes Initial bytes consumed

float cbytes Continuous bytes consumed

jsonhandle handle The handle of the condition that caused the event (NULL if timed)


char data [] Data associated with the event

char condition [] Condition that triggers the event (NULL if timed)

Figure 1. eventstruc structure members.

(http://www.hsfl.hawaii.edu//cosmos/documentation/structeventstruc.html)

COSMOS has functions that can convert any COSMOS data structure into a text representation in JSON (JavaScript Object Notation), and vice versa. This allows eventstrucs to be written to and read from text files which can be transferred over the radio between the satellite and ground stations. Two lists of events are sent to the satellite, which agent_exec constantly monitors: a queue of pending commands and a dictionary of physical events that can happen (such as going into shadow behind the Earth). Events can be triggered in two ways. If an event does not have a condition specified, then agent_exec will trigger it as soon as the current time passes the event’s UTC. If an event has a condition string, that string is evaluated every cycle after the event’s UTC is reached, and the event is triggered if the condition is true. Events that have been triggered temporarily remain on the list with their flag set to “true” (so other COSMOS software can see them) and are then copied to an event log and removed from the list (unless their flag is set to “repeat”). The event log is transmitted back to ground regularly. Eventstrucs representing commands have a unix terminal command in their associated “data” string, which agent_exec will run when they are triggered, allowing remote control of the satellite.

The Qt framework and QML were chosen to aid the development of CAT (details to follow). Qt is an open-source, cross-platform C++ application and user interface framework, in which all COSMOS GUI tools have been written so far (http://qt-project.org/). It allows GUI software applications to be written once and then built to run on Mac, Windows, or Linux machines. Applications can be written in C++ and/or QML (Qt Meta-object Language), a declarative language with embedded JavaScript snippets that is highly suited to user interface development (http://developer.ubuntu.com/apps/qml/). Several days were spent researching QML before even beginning work on the project, and the Qt documentation proved an invaluable resource while developing this software tool. Knowledge of both Qt and COSMOS was foundational to being able to build CAT, and through the project I got better at software design, honed my C++, and learned to write QML code.
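To make the two triggering rules above concrete, here is a simplified model in C++. It is an illustrative sketch only: the real agent_exec works on full eventstrucs and uses COSMOS’s own condition evaluator, for which a stub stands in here.

#include <functional>
#include <string>
#include <vector>

// Reduced event: the real eventstruc carries many more fields (Figure 1).
struct SimpleEvent {
    double utc;                 // scheduled trigger time
    std::string condition;      // empty = purely timed event
    std::string data;           // command to run when triggered
    bool triggered = false;
};

// Stub for COSMOS's condition-string evaluator.
bool evaluateCondition(const std::string& /*cond*/) { return true; }

// One monitoring cycle over the command queue, applying both rules:
// timed events fire once 'now' passes their UTC; conditional events are
// evaluated each cycle after their UTC and fire when the condition holds.
void scanQueue(std::vector<SimpleEvent>& queue, double now,
               const std::function<void(const SimpleEvent&)>& runCommand) {
    for (auto& ev : queue) {
        if (ev.triggered || now < ev.utc) continue;          // not due yet
        if (ev.condition.empty() || evaluateCondition(ev.condition)) {
            ev.triggered = true;   // flag stays visible to other software
            runCommand(ev);        // execute the unix command in 'data'
        }
    }
}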

SOFTWARE DESIGN & IMPLEMENTATION

Creating the CAT in support of the COSMOS flight software was an exercise in efficient, goal-oriented software design and programming. Before beginning, it was very important to consider which needs the CAT was addressing, and how it would do so. Once the goals of the project had been decided, major design decisions had to be made to guide development. Finally, the minor details of the implementation were fleshed out during the coding process.

The CAT was meant to be a simple tool that would let COSMOS event logs be viewed, and COSMOS command queues and physical event dictionaries be viewed, edited, and created. The MPST specifications stated that the MPST would contain a horizontal timeline running from left to right to provide a visual overview of event sequences. While CAT is distinct from the MPST at present, implementing a QML version of the timeline not only makes CAT more useful, but furthers the development of COSMOS as a whole by laying the groundwork for a QML-based MPST. CAT also needed to give users the ability to manage lists of events, adding, moving, and reordering them as need be. Every property of every event would also have to be quickly viewable and editable via the GUI. Finally, the application needed to be able to load and save JSON text files containing COSMOS event lists.

Having determined the goals of the project, my task was then to use the tools provided by COSMOS and the Qt framework to achieve them. Several key decisions guided the development of the application. As mentioned previously, I chose to build the application in C++ and QML within the Qt framework. QML makes building GUIs much quicker, since the language is designed for that task, and since it is interpreted at runtime by the QML engine there is no need to recompile during development in order to test changes (http://developer.ubuntu.com/apps/qml/). However, C++ code would still be needed to configure and launch the QML engine, and to use the COSMOS libraries and services. I decided to structure the application with a straightforward division of labor between the C++ and QML sides: QML would handle the application GUI and C++ would manage the back end. Thus, the visual interface and all calculations and objects relating to it would be managed and defined in QML, while C++ would take on file I/O, the storage of the event lists, and all processing related to them. This structure makes the program better organized, easier to debug, and easier to update with new features. The only drawback is that the program is not likely to be as performance-optimized as it could otherwise be, but this is not a concern for realistic cases. With a clear outline of the CAT completed, it was time to start writing actual code, discovering and solving any new problems in a practical, hands-on manner.

The key to interaction between C++ and QML is the Qt Meta-object system and the associated QObject class included in Qt. QObjects are a C++ class, instances of which can, within the framework of Qt, be accessed from both QML and C++ if passed into the QML engine. Qt Meta-object properties can be defined on QObject-derived classes by defining property accessor functions (functions to get and set the property’s value), and these properties can then be accessed in QML just like properties on any QML object. Functions on QObjects can also be made callable from QML via the Qt Meta-object system. Upon startup, the C++ main function creates two objects: a QML engine and a “CommandAssembler” object (a custom QObject-derived class); a reference to the latter is then passed to the engine. The CommandAssembler object provides the QML GUI access to the C++ back end of the application.
Since the application relies on users being able to modify COSMOS events, eventstrucs must be accessible to QML, so a QObject eventstruc wrapper class called “Event” was created that made each member of an embedded eventstruc accessible as a Qt Meta-object property. I wrote a third class, called “SharedObjectList,” to allow lists of objects to be shared between C++ and QML, something not yet supported by default in the Qt framework. Using COSMOS’s JSON parsing functions, a function on the CommandAssembler can parse an event file into eventstrucs and store them as Event objects inside a SharedObjectList that can be edited by the GUI. Another function can extract the eventstrucs from the SharedObjectList of Events and save them to an output file.
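As an illustration of the wrapper pattern described above, here is a minimal sketch exposing a single eventstruc member (“utc”) as a Qt Meta-object property; the actual Event class wraps every member of an embedded eventstruc, and the exact declarations here are ours, not the project’s.

#include <QObject>

// Minimal QObject wrapper exposing one field to QML. In the real Event
// class the backing store is an embedded COSMOS eventstruc rather than
// a plain double.
class Event : public QObject {
    Q_OBJECT
    Q_PROPERTY(double utc READ utc WRITE setUtc NOTIFY utcChanged)
public:
    explicit Event(QObject* parent = nullptr) : QObject(parent) {}
    double utc() const { return m_utc; }
    void setUtc(double value) {
        if (value != m_utc) {
            m_utc = value;
            emit utcChanged();   // lets QML bindings update automatically
        }
    }
signals:
    void utcChanged();
private:
    double m_utc = 0.0;
};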


As with any software project, there were more minor design choices made and small problems fixed during the implementation stage than it is possible to detail in a paper, but ultimately the major goals of the project were achieved.

RESULTS

As it stands, the CAT has all the functionality required to adequately address the need for a simple command-script writing and event-log reviewing tool for COSMOS. There are numerous ways to view and edit event lists and individual events within the application. The main window is divided into two major sections: the timeline and the event list views (Figure 2).

Figure 2. Screenshot of the finished Command Assembler Tool


Across the top half of the window is a multi-lane horizontal timeline. The timeline can be zoomed in and out via the scroll wheel, which controls the exponent of a horizontal scale multiplier to enable fine control over large scale ranges. Holding down shift enables side-to-side scrolling with the scroll wheel, which can also be achieved with the scroll bar at the bottom of the timeline. The timeline is divided into 5 lanes. A label area on the left lists the types of events displayed in each lane, and the labels are color coded to match the color of the events they correspond to. Pairs of enter and exit events (such as entering and exiting the Earth’s shadow) are drawn as a colored bar stretching along the timeline. Conditional events have a blur extending to their right down the timeline, which illustrates visually the possibility that they might not be executed immediately when their UTC is passed (Figure 3, previous page). When the mouse is moved over the timeline, a vertical green line appears, and text appears in a box at the top that displays the cursor’s exact UTC.

The bottom half of the window contains two tabbed regions. The left set of tabs displays table views of the command queue, a list of projected future events (at this point the application does not yet predict future events based on information about the satellite and its trajectory, but this is a planned feature), or a list of archival events. Events in these three lists are also displayed in the timeline. The right set of tabs contains table views of the physical event dictionary and a dictionary of command events that can be easily copied into the command queue. These two event dictionary lists are not drawn on the timeline. All lists can be loaded from or saved to event files via buttons above the lists, or via corresponding items in the main menu (Figure 4).

Figure 4. Saving the command queue to a file


Right-clicking on the events or the timeline brings up a menu that allows new events to be created, either by copying them from a template in one of the dictionaries or by creating a new blank event (Figure 5). Additionally, there is copy/paste functionality allowing events to be copied between or within any of the 5 event lists the application manages. Clicking and dragging on events, or on the bar connecting an enter/exit event pair, will change those events’ UTCs. A context menu lets the user bring up an event properties dialog for any event (Figure 3), where all of the event’s properties can be viewed and edited. Events can be selected in either the timeline or the lists, and will be highlighted in dark blue in both when selected. Events can be selected, and subsequently copied, pasted, deleted, or edited, in groups.

Figure 5. Context menu options for creating an event based on those in the command dictionary list.

CONCLUSION

At this point the Command Assembler Tool has the necessary functionality to be used to make command scripts for the satellite. Lists of commands can be made quickly, aided by the ability to copy, paste, and replicate template commands from a command dictionary list. It is possible to load, edit, and save any of the 5 lists the Command Assembler manages (the command queue, projected list, archival list, command dictionary, and event dictionary), and the files output are suitable for commanding the flight software. There is still room for improvement: the next step would be to add the ability to simulate the satellite to a limited extent, predicting future events and modeling resource consumption. However, as it stands the software application has achieved its primary purpose.

This internship has given me experience writing software applications. I have gotten better at working within the Qt framework, and learned how to use the QML language to build advanced GUIs. I have also learned about the COSMOS flight software systems, and been able to do hands-on work writing software to control hardware on a real spacecraft. I feel I have gained many useful skills during this internship, and have contributed something of value to the COSMOS and Hiakasat projects.

ACKNOWLEDGEMENTS

I would like to thank Eric Pilger for mentoring me during the project and answering lots of questions about COSMOS and Hiakasat, Trevor Sorenson for explaining the design of COSMOS’s MPST program and the format of timelines needed for mission planning, and Ethan Kastner for tolerating sharing an office with me this summer and answering numerous technical questions. I would also like to thank the Hawai‘i Space Grant Consortium for making this all possible, and everyone at HSFL for their hard work on the coolest engineering project I have ever had the privilege of being a small part of.

REFERENCES

Sorenson, T. C., Pilger, E. J., Wood, M. S., Nunes, M. A., & Yost, B. D. (2012). A University-developed Comprehensive Open-architecture Space Mission Operations System (COSMOS) to Operate Multiple Space Vehicles. SpaceOps 2012 conference, 11-15 June, Stockholm, Sweden.

Qt Project, http://qt-project.org/, Accessed 8/15/2014.

QML, http://developer.ubuntu.com/apps/qml/, Accessed 8/15/2014.

COSMOS Software Documentation, http://cosmos-project.org/docs/software-documentation, Accessed 8/15/2014.

Eventstruc Struct Reference, http://www.hsfl.hawaii.edu//cosmos/documentation/structeventstruc.html, Accessed 8/15/2014.

Longeventstruc Struct Reference, http://www.hsfl.hawaii.edu//cosmos/documentation/structlongeventstruc.html, Accessed 8/15/2014.


AN EMBEDDED MICROPROCESSOR DESIGN FOR THE PULSE SHAPE DISCRIMINATION OF A PLASTIC SCINTILLATING NEUTRON DETECTOR

Marcus Yamaguchi
Department of Physics
Kaua‘i Community College
Lihue, HI 96766

ABSTRACT

Kaua‘i Community College (Kaua‘i CC) is presently participating in an effort to develop a space industry in Hawai‘i. This effort includes future launches of University of Hawai‘i-supported micro-class satellites from the Pacific Missile Range Facility at Barking Sands, Kaua‘i. Kaua‘i CC’s participation is represented by operation and maintenance of a ground station, located on campus, to support command and control and data acquisition for UH micro-satellites. Kaua‘i CC has also proposed a plastic scintillating neutron detector as a potential micro-class satellite payload for the detection of solar accelerated neutrons. The dual focus of this Fellowship was to support the operation and maintenance of the Kaua‘i CC ground station and to develop a digital signal processing (DSP) framework for neutron detection that is compatible with a Field Programmable Gate Array (FPGA). Ground station support includes the installation of an S-band dish and the installation of new components on the existing ultra-high frequency (UHF) and very-high frequency (VHF) antennas. The FPGA design effort focuses on developing a system capable of discriminating neutrons from other sources of accelerated particles and meeting the timing specifications necessary for characterizing neutron flux as a count rate. This paper presents the results and accomplishments of these two distinct goals in separate sections.

I. KAUA‘I CC GROUND STATION

UH micro-satellite Hiaka-Sat is scheduled to launch from PMRF in the near future. In order to support data acquisition for Hiaka-Sat, an S-band dish is required at the Kaua‘i CC ground station. The UHF and VHF antenna systems require additional components to improve the signal-to-noise ratio and a new azimuth motor to replace a mechanically failing unit.

An M2 A2000 S-band dish was successfully installed onto the roof of the Kaua‘i CC Daniel K. Inouye building (Fig. 1). The dish is mounted onto a base of two galvanized steel I-beams, which are mounted onto the north-eastern perimeter wall and an adjacent parapet wall. The I-beams were installed by a contractor. The dish was installed by Marcus Yamaguchi, with assistance from Kaua‘i CC’s facility manager and the fortuitous presence of a roofing contractor with a crane working on a nearby building. Control lines for the azimuth and elevation rotors and an LMR 400 cable were run in the ceiling space of the building to the HSFL ground station control room, again by Marcus Yamaguchi. After installation, the function of the rotors was verified by properly terminating the cable with terminal lugs, connecting it to the motor controller, and then commanding the dish azimuth and elevation position, which confirmed a successful mechanical assembly and installation of the dish.


Figure 1. Kaua‘i CC S-band dish

Following completion of the dish installation, a new rotor was installed to replace the malfunctioning one on the UHF/VHF antenna system. This required lowering the antennas, dismounting the UHF/VHF antennas from the tower, and removing and replacing the motor. At that time, new LNAs (low-noise amplifiers) were to be installed, but the proper N-connector adaptors had not yet been received at the time of this writing. After the installation of the new rotor, the antennas were recalibrated and the new rotor was tested to verify successful installation. Future work will be to install the S-band receiving equipment in the ground station and the two LNAs on the VHF/UHF antenna system.

II. DESIGN FOR NEUTRON DETECTION

According to ref. [1], interactions of ions accelerated in solar flare magnetic loops produce neutrons that escape from the Sun and survive the transit to Earth, where they can be directly detected with instruments in orbit. The purpose of studying solar accelerated neutrons is to analyze solar mechanisms (such as magnetic reconnection and particle transport) by characterizing the original conditions of their acceleration [2]. The primary purpose of the detector in this effort will be to characterize the escaping neutron flux and time history as a relative count rate that can be correlated with related data on solar radiation to convey information about the accelerated ions responsible for their production.

Plastic scintillators used for radiation detection are known to have an identical pulse response for various accelerated particles; thus neutrons, gammas, and alphas all require a method for particle identification. In this effort, mathematical programs were developed in Mathematica that model the optical pulses produced by the scintillators from various particle impacts (Marrone method), digital methods of detecting the time and amplitude of the optical pulse peak (the Constant Fraction Discriminator, CFD), and a tail-to-charge integration method of particle discrimination. After modeling, this effort additionally begins the process of implementing the design in hardware.


Traditionally, neutron detection has been implemented using analog circuits, but advances in FPGA technology and very fast, high-precision Analog-to-Digital Converters (ADCs) allow an equivalent design to be implemented digitally. In comparison to analog methods, digital signal processing provides a greater degree of design customization and system stability. For the detector design of this effort, an Altera DE2 FPGA was used for digitally processing the pulse data. The FPGA design was accomplished using a suite of software tools supplied by the FPGA manufacturer Altera. This suite includes the System on a Programmable Chip (SOPC) builder, which allows the user to specify components of the design (such as logic elements, embedded microprocessors, and memory elements) with the use of a Graphical User Interface (GUI), shown in Fig. 2. Using this GUI, VHDL (VHSIC Hardware Description Language) or Verilog HDL (Hardware Description Language) code was generated to implement a system design with given specifications, including the assignment of base addresses, clock rates, interrupt requests, memory storage, bus sizes, etc.

Development of a mathematical behavioral model: Prior to designing the FPGA, a mathematical model for simulating the pulse response of a scintillator to neutrons and γ-particles, obtained from ref. [3], was implemented in Mathematica. This model (hereafter Marrone’s model) was used to help develop and optimize the DSP techniques chosen for our FPGA design. Marrone’s model sums three exponential components (scintillator optical response, analog circuit pulse response, and particle decay) to reproduce the pulse shape with the functional form of eq. 1.

$$L(t) = A\left(e^{-\theta(t-t_0)} - e^{-\lambda_s(t-t_0)}\right) + B\left(e^{-\theta(t-t_0)} - e^{-\lambda_l(t-t_0)}\right) \qquad (\mathrm{eq.}\ 1)$$

Here θ, λ_s, and λ_l are the exponential decay constants of the scintillator optical response, the analog pulse response, and the particle decay, and A and B are normalization constants. Table 1 shows the parameters used for neutrons and γ-particles. The exponential decay components, the normalization constants, and the initial time (t_0) were empirically determined in ref. [3]. Fig. 3 shows plots generated using Marrone’s model and Mathematica for a neutron and a γ-particle. The normalization constant A may be varied to produce signals of various amplitudes. The difference in the decay rate of the two responses differentiates the scintillator response for neutrons and γ-particles. Marrone’s model is used in the next two sections to develop the digital CFD and the tail-to-charge method of integration.

Figure 3. Simulated scintillator response for γ-particles and a neutron using Marrone’s model, normalized to 1 (amplitude vs. time in nanoseconds).
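The functional form of eq. 1 is straightforward to port outside Mathematica. The following C++ sketch simply evaluates the model for a given parameter set (values would come from Table 1), leaving any amplitude normalization, as in Fig. 3, to the caller.

#include <cmath>

// Parameters of Marrone's model (eq. 1); values come from Table 1.
struct MarroneParams {
    double theta, lambda_s, lambda_l;   // decay constants
    double A, B;                        // normalization constants
    double t0;                          // initial time
};

// Evaluate the scintillator pulse model at time t (same units as t0).
double marronePulse(double t, const MarroneParams& p) {
    double dt = t - p.t0;
    if (dt < 0.0) return 0.0;                            // nothing before t0
    double fast = std::exp(-p.theta * dt) - std::exp(-p.lambda_s * dt);
    double slow = std::exp(-p.theta * dt) - std::exp(-p.lambda_l * dt);
    return p.A * fast + p.B * slow;                      // eq. 1
}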


Table 1. Marrone’s model parameters for γ-particles and neutrons (ref. [3]).

Parameter   γ-particles   neutrons
θ           4.378         4.325
λ_s         3.537         3.547
λ_l         11.69         38.55
A           11.03         11.48
B           0.015         0.015
t_0         0.295         0.331

Pulse Triggering and Peak Determination: In our neutron detector design, the optical pulse response of a plastic scintillator is detected by a photodiode, and the generated current is integrated by a charge-sensitive amplifier to generate a voltage pulse proportional to the detected current across the photodiode. This pulse is scaled to the voltage input of an ADC and digitized. To determine the pulse triggering threshold for the leading edge of the digitized pulse signal, a simple threshold discriminator may be implemented to compare the input pulse voltage to a predetermined voltage value. When the compared voltage pulse exceeds the predetermined voltage threshold, the threshold discriminator outputs a logic high corresponding to the leading edge of the input pulse signal, which can be used for timing purposes (a minimal sketch follows below). For pulse signals with a constant rise time and varying amplitudes, a threshold discriminator may give inaccurate timing information due to an error walk (shown in Fig. 4), since the timing threshold is dependent on the pulse amplitude. A digital implementation of a Constant Fraction Discriminator (CFD) was chosen for pulse triggering in our digital design because of its ability to minimize the effect of error walk, as shown in [4].
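Here is the promised sketch of the simple threshold discriminator, for comparison with the CFD that follows; the function name and return convention are ours.

#include <cstddef>
#include <vector>

// Simple threshold discriminator: report the first sample whose value
// exceeds a fixed threshold (the leading edge), or -1 if none does.
// This is the scheme that suffers from error walk for varying amplitudes.
long thresholdTrigger(const std::vector<double>& pulse, double threshold) {
    for (std::size_t k = 0; k < pulse.size(); ++k)
        if (pulse[k] > threshold)
            return static_cast<long>(k);
    return -1;
}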

In our Mathematica model, a linear interpolation was applied to the simulated pulse generated with Marrone’s model to determine the time location of the pulse peak. The time location of the pulse peak is used to divide the signal, isolating the decay component from the entire pulse for an application of the tail-to-charge method of discrimination, which is discussed in the next section. In practical applications, the detection of the pulse peak can be influenced by noise fluctuations. A CFD may also be used to determine the timing threshold for the pulse peak, minimizing the effect of the noise fluctuations. Equation 2, obtained from ref. [5], is used to implement the CFD on the simulated pulse. The CFD splits the input signal into two parts. One part is attenuated to a fraction (F) of the original amplitude, and the other part is delayed by a factor (D) and inverted. These two signals are summed to form the constant-fraction timing signal. The fraction (F) is chosen so that the leading edge of the delayed signal will correspond to the original leading edge/amplitude of the signal.

$$V_{CFD}[k] = \sum_{i=1}^{L}\left(F \cdot V_{k-i} - V_{k-i-D}\right) \qquad (\mathrm{eq.}\ 2)$$

V_CFD[k] is the output of the implemented CFD equation at a given sample k, and V_k represents the original simulated input pulse. L refers to the number of samples used in the pulse model, which can be considered the signal length. This method of CFD was applied to the simulated signal in Mathematica (V_k is equal to the simulated pulse output at a given sample k) to determine optimal figures for the delay (D) and the fraction (F). Fig. 5 shows a plot of the original signal for the scintillator response compared with the signal with the applied CFD algorithm, where F = 0.3, D = 4 ns, and L = 50 samples.
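The elementary operation inside eq. 2 (attenuate one copy by F, delay another by D samples, invert it, and sum) can be sketched per sample as follows; this is our simplified per-sample form, with delays expressed in samples and a zero-padded history.

#include <cstddef>
#include <vector>

// Per-sample constant-fraction trace: F*v[k] minus the D-sample-delayed
// copy of v. The zero crossing of the output near the pulse peak gives
// an amplitude-independent time pickoff.
std::vector<double> cfdTrace(const std::vector<double>& v, double F, std::size_t D) {
    std::vector<double> out(v.size(), 0.0);
    for (std::size_t k = 0; k < v.size(); ++k) {
        double delayed = (k >= D) ? v[k - D] : 0.0;   // zero-padded history
        out[k] = F * v[k] - delayed;                  // attenuated minus delayed
    }
    return out;
}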

Particle Identification: Presently available plastic scintillators have a similar amplitude response for neutrons and γ-particles. Since the decay rate of the optical pulse response in plastic scintillators is different for neutrons and γ-particles, one method for particle identification is a tail-to-charge integration. This method is implemented by calculating the ratio of the integrated charge of the signal decay component (the tail) to the integrated charge of the entire signal (eq. 3), which is distinctly unique for each detected particle type. The system response can be compared to this ratio to identify detected neutrons.

$$\mathrm{PSD\ factor} = \frac{\int Q_{tail}}{\int Q_{full\ signal}} \qquad (\mathrm{eq.}\ 3)$$

This method of discrimination was applied to our simulated signal. Calculating the tail-to-charge ratio gives a figure of 0.939891. When the amplitude of the simulated signal is varied by ±5 percent of the original amplitude, the tail-to-charge ratio remains unchanged from the original figure, indicating that the ratio is independent of the pulse amplitude. To implement the tail-to-charge discrimination method digitally, the integrals can be rewritten as the recursive function of eq. 4.

$$y[L] = \sum_{i=1}^{L}\,(t_{i+1} - t_i)\cdot x\!\left(\frac{t_i + t_{i+1}}{2}\right) \qquad (\mathrm{eq.}\ 4)$$

The integral is calculated by dividing the signal into a series of step responses and summing the responses. The width is defined as the time interval t_{i+1} − t_i, whose resolution is determined by the sampling rate. The height is determined by calculating the midpoint of the pulse response between the times t_i and t_{i+1}, given by the input signal evaluated at x((t_i + t_{i+1})/2).
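Under the assumption of uniformly spaced samples (so the common interval width cancels out of the ratio), eqs. 3 and 4 reduce to a short loop; in this sketch the split index would come from the CFD peak determination above, and the midpoint height is approximated by the average of adjacent samples.

#include <cstddef>
#include <vector>

// Tail-to-charge ratio (eq. 3) with midpoint-rule integration (eq. 4).
// 'peak' is the sample index of the pulse peak; for uniform sampling the
// common interval width cancels out of the ratio.
double tailToCharge(const std::vector<double>& pulse, std::size_t peak) {
    double tail = 0.0, total = 0.0;
    for (std::size_t i = 0; i + 1 < pulse.size(); ++i) {
        double mid = 0.5 * (pulse[i] + pulse[i + 1]);   // midpoint height
        total += mid;
        if (i >= peak) tail += mid;                     // decay component only
    }
    return tail / total;
}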

signal x[n] = . 푖+1 푖 푖 푖+1 푡 푎푎푎 푡 푡 +푡 2 FPGA Development푥 � : � Once the design is determined by modeling, HDL code must be developed by the FPGA programmer to specify the system input/outputs and how they are processed. For this project, significant time was spent developing the mathematical model for the received pulses and the CFD algorithm, completing tutorials on FPGA design, block diagramming the FPGA design, and programming the FPGA with portions of the DSP design. Shown in Fig. 6 is a block diagram of the developed FPGA system for digital signal processing. The system digitizes the input pulse signal, which is then passed to DSP blocks containing the CFD and particle identification algorithm. The CFD provides the timing information for particle identification. Once a neutron pulse is discriminated the event is recorded in the pulse counter with a time stamp from the ADC, and the processed information is sent to the SDRAM controller where it waits to be sent to memory storage. Buffers are provided for the ADC sampling, pulse processing, and the SDRAM read and write to prevent data loss during real-time signal processing.


III. CONCLUSIONS

Kaua‘i CC has repaired the UHF and VHF antennas and verified their operation. Mechanical installation and operation of the newly installed S-band dish have been completed. Before the launch of Hiaka-Sat, additional components for signal receiving must be installed and tested for the S-band dish. The model developed in this project provided the opportunity to understand and optimize the CFD and particle identification algorithms with an approximated signal response for a scintillator detector. The model enabled a first-pass implementation of an FPGA design on our Altera DE2 board. While the mathematical model aided in the design process, the FPGA design developed during this project must still be optimized empirically with the scintillator that will be implemented onboard a satellite mission. Other aspects of the FPGA design, such as digital pulse shaping and data compression algorithms, were not addressed in this project and will be important areas of development for the progression of our FPGA design.

ACKNOWLEDGMENTS

Hawai‘i Space Grant Consortium, Hawai‘i Space Flight Lab, NASA.

REFERENCES

[1] Murphy, R. J., Kozlovsky, B., Share, G. H., Hua, X. M., & Lingenfelter, R. E. (2007). Using gamma-ray and neutron emission to determine solar flare accelerated particle spectra and composition and the conditions within the flare magnetic loop. The Astrophysical Journal Supplement Series, 168(1), 167.

[2] Lawrence, D. J. et al., E. Using Solar Neutrons to Understand Solar Acceleration Processes. (unpublished article on www. Nationalacademies.org.)

[3] Marrone, S. et al., (2002). Pulse shape analysis of liquid scintillators for neutron studies. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 490(1), 299-307.

[4] Kilpelä, A., Ylitalo, J., Määttä, K., & Kostamovaara, J. (1998). Timing discriminator for pulsed time-of-flight laser range finding measurements. Review of Scientific Instruments, 69(5), 1978-1984.

[5] Peng, H., Olcott, P. D., Foudray, A. M. K., & Levin, C. S. (2007, October). Evaluation of free-running ADCs for high resolution PET data acquisition. In Nuclear Science Symposium Conference Record, 2007. NSS'07. IEEE (Vol. 5, pp. 3328-3331). IEEE.


Figure 2. SOPC Builder system shown with a CPU component, USB interface, PLL, SSRAM and DDR SDRAM.

Figure 4. Two simulated neutron pulses with different amplitudes, shown with an approximated threshold (set to a fraction 0.3 of the normalized amplitude) for pulse triggering and time pickoff. The two pulse leading edges have similar starting times, but the set threshold triggers at different times, producing an error walk. (Axes: amplitude normalized to 1 vs. time in nanoseconds.)


Figure 5. Original signal response generated using Marrone’s model and the signal with the CFD algorithm applied. Using a delay of D = 4 ns and a fraction F = 0.3, the zero-crossing point of the CFD trace corresponds with the pulse amplitude of the original signal. (Axes: amplitude vs. time in nanoseconds.)

Figure 6. FPGA design for a neutron detector with CFD and particle identification blocks. Once particles are discriminated, the counts are passed into an accumulator. (Block diagram: the ADC sampling buffer feeds the digital signal processing blocks (CFD, particle identification, time stamp, pulse counter/accumulator), whose output passes through data buffers and FIFOs to the SDRAM controller with read/write buffers and on to memory storage.)


STUDY OF THE MOST HARMFUL SOLAR ENERGETIC PARTICLE FOR SHIELDING NEXT HUMAN SPACE FLIGHTS

Bryan Yamashiro
Department of Physics & Astronomy
University of Hawai‘i at Mānoa
Honolulu, HI 96822

ABSTRACT

Solar events such as solar flares and coronal mass ejections eject immense amounts of energetic particle flux daily toward the Earth. Particle detectors in space and on Earth constantly monitor these potentially harmful particles, aiming for a better understanding of the mechanisms that drive the particles to extreme energies. Utilizing powerful particle detectors in space, this study analyzes the most energetic solar storms for deeper insight into the behavior of energetic solar particles. Various solar event parameters and methods of analysis are employed to portray an array of results for two major solar events. These results support models for predicting high-energy solar events that are hazardous to society and, more importantly, to human life on Earth and in space.

INTRODUCTION AND OVERVIEW

Heliophysics is the study of the Sun and its interaction with Earth and the Solar System. The Sun exhibits 11-year solar cycles during which its magnetic field becomes very complicated. Periods of fewer and smaller sunspots and solar flares are called solar minima, and conversely periods of larger, more prominent, and more numerous sunspots are called solar maxima [1]. Sunspots are the source regions of solar activity, such as flares and coronal mass ejections (CMEs), thus solar activity increases during solar maxima. Solar flares and CMEs emit large amounts of photons, particles, and plasma into the Solar System.

Solar flares and CMEs occur almost daily. Solar flares are intense bursts of radiation at the surface of the Sun. They are classified with the letters A, B, C, M, and X, from the least energetic to the most energetic events, assigned according to their peak X-ray flux. CMEs are expulsions of plasma and magnetic fields into the Heliosphere that travel with velocities ranging from 400 to 2,500 km/s [2]. Both phenomena are initiated by the release of magnetic energy associated with sunspots and result in the ejection of photons and high-energy solar energetic particles (SEPs) [3].

History has shown the effects of large solar storms, the largest recorded being the Carrington Event of 1859. This event was powerful enough to create auroras at equatorial latitudes, far from the magnetic poles. An event similar to the Carrington Event could be highly detrimental in modern society; the world’s high-tech infrastructure could grind to a halt [4]. The massive amount of particle interference could cause radio disruptions and GPS and satellite failures, and trigger large-scale blackouts. The radiation exposure from energetic particles is also a danger to human life, especially in unprotected space outside the atmosphere onboard the International Space Station (ISS), and will be a significant problem for long-duration manned space flight missions.


PARTICLE DETECTION

Spacecraft detectors continuously monitor solar activity, measuring photons and particles emitted by solar flares and CMEs. SEPs, having mass and electromagnetic charge, move along a path dictated by the interplanetary magnetic field filling the Heliosphere. The journey of these particles to Earth can take from less than thirty minutes to a few days depending on the particle energy, the intensity of the solar event, and the location of the event on the Sun. Particle detectors such as the Solar and Heliospheric Observatory (SOHO) [6], the Geostationary Operational Environmental Satellite-13 (GOES-13) [5], the ISS-installed Alpha Magnetic Spectrometer (AMS-02) [3], and many more capture and measure data from solar emissions. The data from these spacecraft allow for analysis of SEPs at specific points in time and location.

The AMS-02 particle detector is primarily directed at high-energy physics studies such as dark matter and cosmic rays; however, it is also the largest solar energetic particle spectrometer ever flown. AMS-02 provides observations of solar protons and helium from a few hundred to a few thousand MeV, including precise measurements of intensity, spectra, and anisotropy and their temporal evolution. This very high energy range is poorly understood at present due to a lack of precision measurements; however, it is also the most potentially dangerous to astronauts [7]. AMS-02 observed 18 different SEP events from May 2011 to February 2014. These specific events are important since observation by AMS-02 requires SEPs to reach extremely high energies, above 125 MeV at low Earth orbit.

RELATION TO NASA GOALS

One of NASA’s main strategic objectives is to understand the Sun and its interactions with Earth and the Solar System, including space weather [8]. A deeper knowledge of Heliophysics is being stressed by agencies such as NASA, with a focus on monitoring and predicting changes on the Sun [9]. For all these reasons, it is imperative for the success of the next human space missions to identify methods of predicting large solar storms. The goal of the AMS-02 group at the University of Hawai‘i at Mānoa is to provide an understanding of the flux variation in cosmic rays due to solar modulation and to study the most energetic SEP events emitted by the Sun.

METHOD

SEP events were studied in detail, focusing on characteristics such as particle intensity, maximum particle energies, frequency and duration of events, and their origin locations on the Sun. Full energy spectra of SEP events were plotted with data from the three satellites SOHO, GOES-13, and AMS-02. From the start of the AMS-02 mission in May 2011 to the middle of 2014, AMS-02 collected particle data for 18 major solar events, from which the events suitable for this research were determined.

Excess particles were measured in the proton-counting rate for days associated with flares, providing plots of SEPs measured by AMS-02. SEP flux data for the dates of interest were collected from GOES-13 and SOHO; these data are available online and can be downloaded from the satellites’ respective websites. A C++ program was modified to graph the data from each of these events using the CERN graphing program ROOT [10].
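As an illustration of this workflow, a minimal ROOT macro of the kind that could produce such a graph and fit is sketched below; the file name, text format, fit range, and starting parameters are invented for the example, not taken from the actual analysis code.

// Minimal ROOT macro: read (rigidity, flux, error) points from a text
// file and fit a power law over a restricted range.
#include <cstdio>
#include "TF1.h"
#include "TGraphErrors.h"

void fitSepSpectrum() {
    TGraphErrors g("sep_spectrum.txt", "%lg %lg %lg");  // hypothetical data file
    TF1 f("powerlaw", "[0]*pow(x,[1])", 0.09, 0.4);     // fit window in GV
    f.SetParameters(1.0, -3.0);                          // starting guesses
    g.Fit(&f, "R");                                      // "R" = respect the range
    std::printf("spectral index = %g +/- %g\n",
                f.GetParameter(1), f.GetParError(1));
    g.Draw("AP");                                        // points with axes
}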

Flare Date          Flare Class   Flare Start Time   CME Velocity (km/s)   AMS-02 Max Energy (MeV)
August 9, 2011      X6.9          07:48              1610                  910
February 25, 2014   X4.9          00:39              2147                  1220

TABLE I. Initial parameters of the two SEP events. Flare class, start time of the flare, corresponding CME velocity, and maximum AMS-02 observed energies are included to distinguish between the two events.

The main research study involved two high-class SEP events detected by AMS-02. The events correspond to the August 9, 2011 X6.9-class flare and the February 25, 2014 X4.9-class flare. These flares were chosen due to their high intensity, which makes the SEP flux increase well defined. Another prime aspect of the flares was their timing, as both occurred in the middle of the day, which allowed for a full analysis due to the presence of the rise and fall of SEP flux propagation.

FIG.1. Full SEP proton flux propagation of particles for an entire day. Various vertical lines represent the different time intervals used for analyzing the SEP event. The top graphs represent flux from GOES while the bottom graphs portray SOHO proton flux.

The spectra in Figure 1 show the incoming proton flux for an entire day; each horizontal trace represents a different energy bin. Higher traces represent lower energy bins, and conversely, lower traces represent higher energy bins. Higher-energy particles were less abundant than lower-energy particles, as seen in the GOES flux. The spectra showed consistent flux until the start of the SEP events, where defined increases in flux were observed. Figure 1 includes time intervals for the proton flux of the two events from GOES and SOHO: the flare start time (first vertical line), the time of the first fit for high energies (second vertical line), the time of the first fit for low energies (third vertical line), and the time at which the highest-energy SEP flux was consistent with zero (fourth vertical line). The SEP event of February 25, 2014, as shown for GOES and SOHO, had a more gradual flux increase and an extended SEP signal at high energies compared to August 9. The flux was integrated in 10-30 minute intervals to generate the SEP spectra. The SEP spectra were generated using background subtraction, which was done by taking the particle flux of the SEP event and subtracting the particle flux of the previous day. For the two SEP events, the previous days were quiet, without solar activity; therefore the spectra represent the events exclusively. SOHO measures protons from 4 to 53 MeV and GOES measures from 0.8 to 500 MeV, allowing the creation of spectra over a broad energy range.
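The background subtraction step is simple enough to show directly. This sketch assumes one flux value per energy bin for the event day and for the quiet previous day; clamping negative excesses at zero is our choice for the illustration.

#include <cstddef>
#include <vector>

// Per-bin background subtraction: event-day flux minus the quiet
// previous day's flux, clamped at zero so bins without SEP excess
// do not go negative.
std::vector<double> subtractBackground(const std::vector<double>& eventFlux,
                                       const std::vector<double>& quietFlux) {
    std::vector<double> sep(eventFlux.size(), 0.0);
    for (std::size_t i = 0; i < eventFlux.size() && i < quietFlux.size(); ++i) {
        double excess = eventFlux[i] - quietFlux[i];
        sep[i] = excess > 0.0 ? excess : 0.0;
    }
    return sep;
}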

FIG. 2. The far-left graphs show SEP proton flux spectra including both GOES and SOHO data. The following graphs show power-law fits for SOHO (second column), GOES at low energy (third column), and GOES at high energy (fourth column) in defined energy ranges.

Rather than using a full-day interval (0:00-23:59), the time intervals were optimized for in-depth flare analysis. The complete SEP proton spectra are shown in Figure 2 for a single time interval for August 9 and February 25. The plots indicate higher proton flux magnitudes for low-energy particles and fewer high-energy particles. The spectra were divided into low- and high-energy ranges for this analysis, and each energy range was fit with a power law. In Figure 2, the last histograms for both events represent the split between the low- and high-energy regions. For the lower-energy portion of the spectrum, SOHO and GOES at low energy were fit between 0.09 and 0.4 GV. GOES at high energy was fit between 0.3 and 2.0 GV for the higher-energy portion of the spectrum. The spectra were divided into two energy regions because spectral breaks are often observed in this energy range and appear to be present for these two events. Spectral breaks are important in power-law observations, providing information about the acceleration mechanisms that drive SEP events.

RESULTS

Data analysis was completed for the 18 SEP events from 2011 to 2014 that AMS-02 determined to be sufficiently energetic. For each event, data were gathered from the online graphical user interfaces of the NASA satellites SOHO, GOES-13, and GOES-15. Using these data sets, proton and X-ray flux were plotted for every event. X-ray flux was plotted with data from the GOES-15 satellite detectors in the two wavelength bands the onboard detector measures, 0.05-0.4 nm and 0.1-0.8 nm, as seen in Figure 3. Each X-ray time-versus-flux plot showed a sharp peak marking the initial phenomenon of each SEP event. X-ray data are important since the detected rays reach Earth more quickly than the energetic particles, allowing for predictions before particle arrival.

FIG.3. X-ray flux for the August 9, 2011 SEP event. The largest spike in flux represents the X6.9 class flare and the associated detection time.


Proton flux was plotted for a low-energy-range detector (SOHO) and a low-to-high-energy-range detector (GOES-13) on individual histograms. Each event consisted of multiple time intervals that saw an increase in proton flux, and all of the intervals were plotted for every event. These individual graphs were then combined into a single spectrum to portray the low-to-high-energy proton flux spectrum of the two different satellite detectors. The data showed similar trends and a correlation between the two detectors.

Slopes of the power-law fits were recorded for each time interval and plotted for August 9 and February 25 using the generated SEP flux spectra in Figure 4. Each slope was plotted versus the interval start time. To search for a trend in the evolution of the event, a line was fit to the slopes. Fewer time intervals were recorded for August 9, 2011 than for February 25, 2014, since the high-energy SEP proton flux decreased to zero more quickly. SEP flux decreasing to zero meant that the activity of the SEP event had returned to the normal background state. The power-law slopes measured in the low-energy range were consistent within the error bars between SOHO and GOES; however, SOHO was somewhat steeper than GOES.

Each fit characterized the SEP event, and having a generalized flare fit allowed for predictions for SOHO, GOES at low energy, and GOES at high energy. For the August 9, 2011 SEP event, SOHO had a slope of −4.73×10⁻⁴ ± 8.69×10⁻⁵, GOES at low energy had a slope of −3.95×10⁻⁴ ± 8.07×10⁻⁵, and GOES at high energy had a slope of −1.54×10⁻⁵ ± 2.01×10⁻⁵. The fits for both the February event and the August event are listed in Table II. Each generalized fit showed a negative slope. The linear fits to the power-law slopes show a steepening of the low-energy region of the SEP spectrum for both events. The linear fit to the high-energy part of the SEP spectrum on August 9 results in a slope consistent with zero, indicating that the spectrum is not changing. February 25, however, does show a steepening in the high-energy part of the spectrum.

FIG.4. Spectral indicies found by power law fits corresponding to the defined time intervals for each event. The top graphs were generated from the low energy range data from SOHO and GOES. The two bottom graphs are illustrated using the high energy range data from GOES.

85

Flare Date Energy Range Slope August 9, 2011 SOHO Low Energy −4.73−4 ± 8.69−5 GOES Low Energy −3.95−4 ± 8.07−5 GOES High Energy −1.54−5 ± 2.01−5 February 25, 2014 SOHO Low Energy −3.94−5 ± 1.81−6 GOES Low Energy −4.79−5 ± 6.38−6 GOES High Energy −2.23−5 ± 2.75−6

TABLE II. Respective generalized fits for the two SEP events. Each energy range region included correlating slopes for SOHO and GOES.

CONCLUSIONS

The spectra created from GOES and SOHO data proved compatibility for both the August 2011 and February 2014 events. This phenomenon was seen in the power law fits as both satellite detectors showed a difference between the error bars in the low-energy spectral fits. Differences between the two events occurred at the higher energy range as deviations were off by almost a magnitude. The discrepancy led to a model to find trends between the event parameters and a potential prediction catalog.

The X4.9 flare, although smaller than the X6.9 flare, had a higher CME velocity and max AMS-02 energy. Even with the higher observed energy and velocity the maximum SEP flux was lower, showing that the higher flare class event generated more SEPs regardless of event parameters. The steeper high energy range slope was observed in the X4.9 flare, promoting a prediction that CME velocity and max AMS-02 energies support a more elevated fit slope. The steep slope represented fewer high-energy SEPs since the difference in flux would be greater. Higher SEP flux from the August 2011 event resulted in more high-energy SEPs relative to the February 2014 event even with far lower CME velocities and max AMS-02 observed energies.

ACKNOWLEDGEMENTS

I would like to thank the Hawai‘i NASA Space Grant Consortium for providing me with the opportunity to conduct scientific research. The knowledge I attained from this research period will allow me to transition into my future aspirations. Along with the program, I would like to thank Dr. Veronica Bindi for cultivating my knowledge in advanced physics and providing valuable instruction throughout the entirety of my research.

REFERENCES

[1] Metcalf, T. ”The Magnetic Sun.” The Magnetic Sun. .

[2] Kahler, S. W. Solar flares and coronal mass ejections. Annual Review of Astronomy and Astrophysics, 30(1):113141, 1992. doi:10.1146/annurev.aa.30.090192.000553.

86

[3] Bindi, V. The Alpha Magnetic Spectrometer AMS-02: soon in space. Nuclear Instruments and Methods in Physics Research Section A, 617(1-3):462, 2010. doi:10.1016/j.nima.2009.10.090.

[4] Lovett, R. ”What If the Biggest Solar Storm on Record Happened Today?” National Geographic. National Geographic Society, 2 March 2011. Web. .

[5] Geostationary Operational Environmental Satellite. Available from: http://www.swpc.noaa.gov/Data/goes.html.

[6] Poland, A.I. The SOHO mission. Sun-Earth Plasma Connections, Geophysical Monograph Series,109:1117, 1999. doi:10.1029/GM109p0011.

[7] Whitman, K. ”Filling in the Energy Gap: The Direct Detection of the Highest Energy SEPs in Space.”:n. pag. Print.

[8] http://www.nasa.gov/sites/default/files/files/FY2014_NASA_SP_508c.pdf

[9] “Heliophysics Research - NASA Science.” Heliophysics Research - NASA Science. 2010. Web. .

[10] “ROOT - A Data Analysis Framework.” ROOT - A Data Analysis Framework. Web. .

87

MODELING OF H2O ADSORPTION ON ZEOLITES

David Harris Department of Biological Engineering University of Hawai‘i at Mānoa Honolulu, HI 96822

Jim Knox1 NASA Marshall Space Flight Center, Huntsville, AL, 35812

Hernando Gauto2 NASA Marshall Space Flight Center, Huntsville, AL, 35812

Carlos Gomez3 NASA Marshall Space Flight Center, Huntsville, AL, 35812

NOMENCLATURE

3 ci = Concentration of sorbent, mol/m ui = interstitial velocity, m/s ɛ = Void Number q = pellet loading, mol/m3 3 q* = equilibrium pellet loading, mol/ m 3 ρg = density of gas mixture, kg/m 3 ρs = density of sorbent bed, kg/m cpg = heat capacity of gas mixture, J/kgK cps = heat capacity of sorbent bed, J/kgK kg = Thermal Conductivity of gas mixture, W/mK ks = Thermal Conductivity of sorbent bed, W/mK 2 hg = gas heat transfer coefficient of sorbent bed, W/m K Tg = Temperature of gas mixture, K Ts = Temperature of sorbent

I. ABSTRACT

Zeolites are common adsorbents used in industry. Their unique molecular structure allows them to behave like sieves trapping molecules within their structure. Their ability to adsorb molecules is dependent on pressure, temperature, surface geometry and packing arrangements. This study determines the breakthrough curve, the time it takes for sorbents to reach a saturation point, via COMSOL Multiphysics TM. This will inform researchers how long to run a hydrothermal stability test. The model accounts for heat transfer, mass transport, the geometry of the pellets and the initial inlet partial vapor pressures of 183.98 Pa, 93.33 Pa, and 1.60 Pa. This study finds the breakthrough curve for RK 38 pellets (spheres), and ASRT 2005 pellets (cylinders).

1 Principle Investigator, Environmental Control and Life Support Systems, ES 62, Marshall Space Flight Center. 2 Co-Principle Investigator, Environmental Control and Life Support Systems, ES 62, Marshall Space Flight Center. 3 AST Heat Transfer, EV34, Marshall Space Flight Center.

88

II. INTRODUCTION

Estimates state that an astronaut produces 1.00 kg of CO2 and 2.28 kg of H2O from respiration and perspiration(1). A buildup of these products can cause electrical problems, brain damage and even death. CO2 and H2O are scrubbed from the air in the ISS by zeolites, aluminosilicate, micro porous minerals that have a repeating molecular structure(2). This pattern gives them the ability to behave like molecular sieves, trapping smaller molecules within the pores (see figure 1) (3). These small molecules cannot be released back into the atmosphere unless exposed to high temperatures or by a chemical reaction.

Figure 1: Artist rendition of zeolites depicting the repeated pattern of uniform sized hole for trapping molecules

Zeolites are imbedded in clay pellets to form sorbents. The porosity of the clay allows the zeolites to be exposed to the gaseous mixture even while imbedded. Clay sorbents come in a variety of shapes and sizes, but cylinders or spheres are most common. The sorbent shape effects packing density and hence the surface area available for adsorption.

An important aspect of the design is the packing arrangement of the pellets. The way the pellets are packed in a canister determines the void fraction, or the amount of empty space left after the canister is filled. The void fraction determines how much gas or fluid will move through the canister and interact with the sorbents. Figure 2 below depicts a side view of spheres in different arrangements.

Figure 2: 2 Dimensional representations of spheres packed in two different arrangements.

A cylinder with the same diameter will have a different volume and will pack differently than spheres. The challenge in designing the sorbent canisters is picking an optimum shape and size of pellet.

It is possible to measure the effectiveness of each size and shape of a pellet computationally via the finite element method. The finite element method finds approximate

89

solutions to complex differential equations by solving for the equation in a small subdomain, and connecting them to a find an approximate value for a complex equation over a larger domain. (4) This is similar to the idea of connecting a series of small straight lines to form a smooth curve (see figure 3). As each finite element gets smaller, the approximate value becomes closer to the actual solution.

Figure 3: An example of a few small straight lines forming an arc.

Many software packages include the finite element method, but the software most useful for this project is COMSOL MultiphysicsTM. COMSOLTM requires minimal coding and can solve for many partial differential equations simultaneously. It is used industry to model problems in heat and mass transfer, stress and strain, and fluid mechanics. Figure 4 depicts a typical COMSOLTM interface.

Figure 4: COMSOL MultiphysicsTM interface depicting different temperature gradients for different time steps

In the case of sorbents, the finite element method is useful to determine the breakthrough curve, or the time it takes for the sorbents to reach a saturation point for water. This will inform future engineers on the best shape, size and packing arrangement for sorbents to use on the ISS.

III. MATERIALS AND METHODS

A list of materials used for this project is shown below: 1. Test article canister- 165.10 mm in length and 6.35 mm radius 2. Snow storm funnel

90

3. Packing rod- 6.35 mm radius 4. COMSOL MultiphysicsTM 5. RK 38 and ASRT 2005 Sorbent pellets 6. Dry glove box 7. Ruler 8. Scale 9. 50 mL beaker 10. Dry Glove box

The two sorbents used were RK 38 and ASRT 2005. RK 38 pellets are spherical in shape and a have a diameter of 2.1 mm. ASRT 2005 pellets are cylindrical in shape with a diameter of 2.05 mm and with varying heights (see figures 4-5).

Figure 4-5: RK 38 pellets (left) and ASRT 35 pellets (right).

The test article was held up vertically by a clamp, rod and stand. The sorbents were poured into the test article through a snowstorm funnel by tapping a beaker against the edge of the funnel. This method ensured a uniform packing arrangement in the cylinder. The process continued until the packing rod would fit inside the test article and only the area below the groove would be covered.

Once the right amount of pellets was in the test article, they were poured into a 50 mL beaker. Seven small vials were filled with 75 pellets each, extracted by the sorbents from the test article. The remaining pellets were poured back in the test article using the same procedure as before. The packing rod was placed in the test article and was pushed down until it made contact with the sorbents. A blue marker was used to mark the depth of the packing rod. A set of 75 pellets from one of the vials was poured in the test article and the depth of the packing rod was measured again. This process was repeated until all of the sorbents were poured back in the test article (see figure 5-6).

91

Figure 5-6: Packing rod in test article (left) and packing rod with markings (right)

The markings were measured using a ruler to find the difference in height that 75 pellets filled. This was important in finding the average volume of 75 pellets, and for designing spacers to replace 75 when absent. The average difference in height was approximately 0.25 inch.

The next step in the procedure was to determine the mass of 75 pellets after a test. Eight empty canisters were pre-weighed and filled with sorbents. Four were filled with ASRT 2005 and the other four were filed with RK 38. After activation, the sorbents were poured in a sieve, sifted for dust and weighed again. 75 pellets were extracted, and the mass was measured again. After subtracting the weight of the dust from the weight of the remaining sorbents, the final mass of 75 pellets was found.

The sorbents were poured back into their respective canisters. A spacer with a quarter inch diameter and a quarter inch length were placed in the canister to replace the volume lost from the 75 pellets. The canisters were put back in the testing system.

The Model

The equations used were mass balance (eq. 1), mass transfer rate (eq. 2), fluid phase heat balance (eq. 3), and sorbent bed heat balance (eq. 4):

( ) ( ) PDE Mass Balance: + + = 0 풊 (1)흏흏 흏 풖 � ퟏ−휺 흏흏 흏흏 흏흏 휺 흏흏 PDE Mass Transfer Rate: = ( ) 휕휕 ∗ (2) 푚 휕휕 푘 푞 − 푞 ( ) Fluid Phase Heat Balance: + = + 휕푇푔 휕푇푔 휕푇푔 (3) 휕 1 − 휀 6 푔 푔 휕휕 푔 푖 푔 휕휕 푔 휕휕 휕휕 휀휀 푔 푠 푔 휀휌 푐푐 휀휌 푢 푐푐 휀푘 � � ℎ �푇 − 푇 � ( ) Sorbent Bed Heat Balance: (1 ) = (1 ) + 휕푇푠 휕 휕푇푠 1−휀 6 (1 ) 푠 푠 푠 푔 푔 푠 (4) − 휀 휌 푐푐 휕휕 − 휀 푘 휕휕 � 휕휕 � 휀휀 ℎ �푇 − 푇 � − 휕휕 푑ℎ 휕휕In COMSOL− 휀 MultiphysicsTM, the input parameters came from a proposed test, The Hydrothermal Stability Test, by Jim Knox. In this test, canisters filled with sorbents will be connected in series and exposed to a flow of nitrogen gas with four set partial vapor pressures (4). (see figure 7).

92

Figure 7: The test system with eight test articles bundled in insulation and connected in series

The paper recommended the same void fraction for both ASRT 2005 and RK 38 at 0.40. Both sorbents had a density of 1201 kg/m3. The flow rate of gas was assumed to be 1.384 L/min, the initial temperature of gas at 24 C, the initial pressure at the inlet, to be 100.8 [kPa], change in pressure was 0.36 [Pa], and the LDF to be 0.000625 [1/s].

Even though the study assumed a void fraction of 0.40, another method to calculate the void fraction was used*. Dividing the total mass of pellets by the particle density yielded the sorbent volume. The sorbent volume was divided by the canister volume to get the void number, which was subtracted by one. The void fractions were 0.44 for RK 38 and 0.435 for ASRT 2005.

The model for this test ran for 3600 minutes with time steps of 30 seconds. It took two to four minutes for the computer solve each problem

*The calculated void fractions were included in the project because of the suggestion that not all sorbents will have the same void fraction. If there was a way to measure void fractions that is the

IV. RESULTS

For the packing experiment, the average height difference for the both ASRT 2005 and RK 38 are in Table 1:

Table 1 : Average height of 75 pellets in the test article

Sorbent Inches meters

Average RK 38 for 75 pellets 0.22544643 0.005726

Average ASRT 2005 for 75 0.24776786 0.006293 pellets

The average height was 5.7 mm for RK 38 and 6.3 mm for ASRT 2004. This is why quarter inch spacers were used.

93

The average mass of 75 pellets of both ASRT 2005 and RK 38 are:

Table 2: Average weights of 75 pellets

Bulk weight of RK 38 in g in kg Bulk Weight of in g in kg ASRT 2005 A7 13.42 A4 13.02 A5 13.58 A6 13.76 A3 13.52 A8 13.86 A1 13.41 A2 13.69 Average 13.4825 0.0134825 Average 13.5825 0.0135825

The curve was found for both ASRT 2005 and RK 38 at three different inlet partial vapor pressures, 183.98 Pa, for 93.33 Pa, and 1.60 Pa. In the Hydrothermal Stability Test Protocol, there was a test for 0.0097 Pa, but that inlet pressure never broke through in COMSOLTM. The breakthrough times, assuming a void fraction of 0.40, are in Table 3:

Table 3: Times to reach breakthrough point assuming void fraction of 0.40

Sorbent 183.98 Pa 93.33 Pa 1.60 Pa ASRT 2005 1542 min 2600 min 43100 min RK 38 1542 min 2600 min 42000 min

The breakthrough curves, assuming a void fraction of 0.4 for ASRT 2005 are in Figures 8-10:

Figure 8: ASRT 2005 with Figure 9: ASRT 2005 with inlet pressure at 183.98 Pa with inlet pressure of 93.33 Pa with void fraction of 0.40 void fraction of 0.40

94

Figure 10: Breakthrough curve for ASRT 2005 with an inlet temperature of 1.60 Pa with void fraction of 0.40

The breakthrough curves for RK 38, assuming void fraction of 0.40 are in Figures 11-13:

Figure 11: Breakthrough curve for Figure 12: Breakthrough curve for RK 38 with an inlet partial pressure of RK 38 with an inlet partial pressure of 183.98 Pa with void fraction of 0.40 93.33 Pa with void fraction of 0.40

Figure 13: Breakthrough curve for RK 38 with an inlet partial pressure of 1.60 Pa with void fraction of 0.40

95

Table 4: Times to reach breakthrough point assuming void fraction of 0.435 for ASRT 2005 and 0.44 for RK 38 The breakthrough curves for 1.60 Pa were not found

Sorbent 183.98 Pa 93.33 Pa 1.60 Pa ASRT 2005 1100 min 1800 min -- RK 38 1100 min 1685 min --

Figures 14-15 are the breakthrough curves for ASRT 2005 assuming a void fraction of 0.44:

Figure 14: Break through curve for Figure 15: Break through curve for ASRT 2005 with an inlet pressure of ASRT 2005 with an initial intlet pressure of 183.98 Pa with void fraction of 0.435 93.33 Pa with void fraction of 0.435

Figures 16-17 are breakthrough curves for RK 38: with a void fraction of 0.56:

Figure 16: Breakthrough curve for Figure 17: Breakthrough curve

RK 38 with an initial inlet partial for RK 38 with an initial inlet vapor pressure of 183.98 Pa with partial vapor pressure of 93.33Pa void fraction of 0.44 with void fraction of 0.44 96

V. CONCLUSION

The breakthrough curves were predicted for three different inlet partial vapor pressures. In all scenarios, sorbents break through faster at higher partial vapor pressures than at lower ones. The solutions from the model vary when using different void fractions. The breakthrough curves with a void fraction of 0.40 for both sorbents are almost identical. However, if the void fractions were 0.44 for RK 38 and 0.435 for ASRT 2005, the results are drastically different. Both models indicate that running a hydrothermal stability test for more than 2600 minutes is unnecessary, as well as running a test with vapor pressures of 1.60 Pa or lower since they will never breakthrough.

ACKNOWLEDGMENTS

D. H. Harris thanks Robert F. Coker for his consultation with COMSOL MultiphysicsTM, and David Watson for supervising the projects when others were unavailable.

REFERENCES

ECLLS staff, Human Needs and Effluents Mass Balance (per person) -Marshall Space Flight Center Poster

Lobo, Jairo Antonio Cubillos –“Heterogeneous asymmetric epoxidation of cis-ethyl cinnamte over Jacobsen's catalyst immobilized in inorganic porous materials” [Thesis P. 28}, 2005

Murphy, Donald W. and INterrante, Leonard V.- “Zeolite Molecular Sieves”, Inorganic Synthesi, 2007s

Knox, Jim –“Requirements for Hydrothermal Stability Test,” NASA Planning Document, 2014

Image sources: 1. “The microporous molecular structure of a zeolite, ZSM-5”- December 28, 2013 http://commons.wikimedia.org/wiki/File:Zeolite-ZSM-5-3D-vdW.pn 2. “Sphere Packing”-2007-http://www.keplersdiscovery.com/Images/SpherePacking.jpg 3. “Finite Element method 1D illustration1”- March 8, 2006 http://en.wikipedia.org/wiki/Finite_element_method#mediaviewer/File:Finite_element_m ethod_1D_illustration1.png

97

LOW DENSITY SUPERSONIC DECELERATOR

Kolby Javinar Department of Electrical Engineering University of Hawai‘i at Mānoa Honolulu, HI 96822

ABSTRACT

During the summer of 2014, NASA planned on testing two new decelerators for their Low Density Supersonic Decelerator (LDSD) Project. The “Low Density” refers to the thin atmosphere of Mars. The National Aeronautics and Space Administration (NASA) want to advance the technology of delivering larger payloads to Mars. The two new decelerators will help slow down the speed of the entry vehicle to subsonic speeds for safe water landings. The device that was tested during the summer was the Supersonic Inflatable Aerodynamic Decelerator Robotic Class (SIAD-R). The SIAD is equipped to a Test Vehicle that will be launched from a balloon. The balloon will carry the TV to an altitude of 120,000 feet. The TV will use a rocket-fueled motor to bring it to the top of the stratosphere and drop down to the Earth at Mach 4 speed. The SIAD will inflate to slow down the TV. A supersonic parachute is also attached to the TV to slow it down for a safe landing. The test is the first steps on a path to potentially landing humans on Mars.

INTRODUCTION

The Low Density Supersonic Decelerator (LDSD) Project is a technology development effort in the Technology Demonstration Missions portion of NASA’s Space Technology Mission Directorate (STMD) [1]. The project’s objective is to develop full-scale supersonic decelerators for application in low-density atmospheres, such as on Mars, by demonstrating their capability in relevant to the environments on Earth [1].

The LDSD Project is currently being tested on Kaua‘i at the Pacific Missile Range Facility (PMRF). Shown in Figure 1, the test vehicle (TV) is a launch mechanism that's in the shape of a capsule with one side shaved off to store electrical devices to implement autonomous actions along with a motor for propulsion.

Figure 1: The Test Vehicle equipped with the Star-48 motor and SIAD-R [2]

98

The TV will used to implement two decelerators that are known as Supersonic Inflatable Aerodynamic Decelerators (SIADs). These are very large, durable, balloon-like pressure devices. The technologies being tested are the SIAD-R (20ft in diameter) and the, still in development, SIAD-E (26ft in diameter). Another device that's being tested is the Supersonic Disk Sail (SSDS) parachute. The supersonic parachute will be the largest of its kind ever flown.

The task that I was given was to create a Launch Day Operations book that would include information on the different roles of the mission. The LDSD is a huge project made up of three different NASA groups. There are the Jet Propulsion Lab (JPL), the Wallops Flight Facility (WFF), and also the Columbia Scientific Balloon Facility (CSBF). The Launch Book will have a list of different procedures and checklists that will be crucial information to the launch process. I was also given the task of creating a visitor information booklet that will contain a water-down version of the project and its goals.

The first flight is an experimental flight test, to see if it can accurately achieve the speeds and altitudes required for the demonstration of the technologies in a Mars-like environment. Two more tests are scheduled next year to collect the required data on the decelerator technologies. The two decelerators (the SIAD-R and the SSDS Parachute) are on-board this first shakeout flight, and will be deployed if the proper conditions are met.

The launch of the flight test will be implemented with the use of a balloon instead of rocket. The reason for this is due to an earlier program conducted by NASA. The Viking program was an earlier attempt at developing and testing decelerators for the same purpose [1]. A balloon was used to launch decelerators for testing rather than a rocket because of cost [1]. It was more efficient to use a balloon than it was to launch an expensive jet-fueled rocket to the attended height. However, the TV itself is equipped with a Star-48 booster rocket. The balloon is incapable of lifting the TV up to the attended height, so a rocket is used to fill the gap. Four spin- up and four de-spin motors are used to provide stability during powered flight [3]. The SIAD attached to the outer rim of the TV will inflate when the test vehicle is flying at Mach 3.8 and decelerate the vehicle to Mach 2.5, where it becomes safe to deploy a supersonic parachute [3]. The SIAD-R is intended for extended performance of Mars Lander class entry systems and to enable more precise landings for heavier payloads [1].

Key Points

Early forms of decelerators date back to NASA's Viking program. This program put two Landers on Mars in 1976 and its parachute design is still being used in recent missions such as the Mars Curiosity rover in 2012 to land in Mars [3]. To safely land heavier payloads on Mars, larger parachutes and other decelerators that at supersonic speeds are needed. The LDSD project's aim is to solve the complicated problem of slowing down Martian entry vehicles enough to safely deliver large payloads to the surface of Mars [3]. These are the first steps on the path to potentially landing humans and there return rockets safely on Mars [3]. The new designs borrow from the same technique used by the Hawaiian puffer fish – the `o`opu hue – to increase its size without adding mass: rapid inflation [4]. The first flight is a shakeout test of the TV at speeds up to Mach 4. There will be two more flights next summer to test the technologies.

99

Dimensions

Test Vehicle (TV): - Weigh w/ fuel: 6,878 lbs. (3,120 kg.) - Diameter (pre-inflation): 15ft, 5in. (4.7 meters) - Post inflation: 20ft (6 meters) - Maximum Speed Traveled: Mach 4 - Duration of flight test from balloon launch to vehicle splashdown: approximately 3 hours Parachute: - 100ft (30.5 meters) in diameter; longest supersonic parachute ever deployed. Launch Balloon: - Width: 460ft (140 meters) - Height: 396ft (120 meters) - About the size of a football stadium

METHODS

The day of the launch starts off with the transfer of the TV to the launch pad. Launch pad activities will be conducted in preparation for pre-lift operations. There are electrical checkouts and arming activities that need to be established such as positioning of the TV on the launch tower and connecting to the tower and TV. This leads into lifting the TV to the height of the tower and conducting post-lift electrical checkouts. These checkouts include checking the balloon's and TV's telecom systems and verifying console displays from B560 (TOC/BOC) and B105 (ROCC). The balloon is then laid out and inflated with helium. The internal power of the TV is turned on at this time. All of these checks and balances that need to be done to the TV and balloon take's about 6-7 hours to complete prior to launch.

The balloon and vehicle are released from the tower at L-minus 0. The balloon carries the vehicle to an altitude of 120,000 feet. Then, the balloon will release the TV and the Star-48 booster rocket will kick in and take it to an altitude of around 180,000 feet (top of the stratosphere). There are also small rocket motors attached to the side of the SIAD that spin the vehicle ahead of the main motor ignition. Spinning the test vehicle while in flight keeps it stable. Having the vehicle stable is important for deploying the decelerators; if the vehicle wobbles too much, it could get the parachute tangled during it’s deploy. The TV will be dropped at a maximum altitude of 180,000 feet and travel at approximately Mach 4 speeds. The TV will then deploy the SIAD to decelerate the vehicle to approximately Mach 2.5 speeds. The SSDS parachute will deploy to slow it down to subsonic speeds and carry it safely to the surface for a controlled water impact. Figure 2 shows a brief explanation on how this launch plays out.

There are two recovery boats called the Kahana and the Honua that are going to recover a few items from the water. The balloon envelope and TV will be recovered after impact. The duration of the recovery phase will be driven by how quickly boats are able to locate the items to be recovered, address any safety hazards and assess the article, pick up the article on-board the boat, and head back to port. It is required, due to the environmental sensitivity of the region, for all floating objects be recovered from the ocean following launch. Recovery off of the waters of Kaua‘i consists of the Balloon, test vehicle, SSDS Parachute, Flight Imagery Recorder (FIR),

100

and other floating debris. The balloon, the TV, and FIR is expected to remain buoyant after impact. The FIR, located on the TV, is an on-board memory module for the storage of all high speed and high resolution data. It has been designed to separate itself from the TC on ocean impact if in saltwater for a given period.

Figure 2: Flight overview of the Supersonic Flight Dynamics Test (SFDT) [1]

RESULTS

The launch for the LDSD project was scheduled for the first two weeks of June. During those two weeks, the weather was not favorable for the launch. The biggest problem for the launch was the wind. There are many restricted areas that were in place by either the Federal Flight Administration (FAA) or PMRF. The team conducted many balloon trajectory simulations prior to and up until the launch day to predict where the balloon will go. Simulations weren’t very reliable due to imprecise data up until the day of launch; so, they also released smaller balloons the morning of the launch days to see where the wind would take the balloon when launched. During the first two weeks of June, the predictions always had the balloon hovering in restricted areas. These counted as going over populated areas or entering flight restricted areas. To get the balloon to fly over water, the winds must be at near perfect conditions. In Hawai‘i, the trade winds needed to be present with minimum clouds and showers. It wasn’t till the ending of June that these conditions were present.

The launch for the LDSD project happened on June 28, 2014. The day started off just as planned. The TV was setup on the launch tower and adjusted according to the wind direction. The Launch Operations Book was finished and ready for use by the Range Operations Team. The camera displays in both Building 560 and 102 were working along with the streaming devices for video feed of the operations. NASA’s website had a live feed of the entire operation. The launch of the balloon was delayed a bit due to extra checks that needed to be done to ensure

101

that nothing terrible happens during launch. The balloon was launched around 8:40am HST. Figure 3 shows the balloon launch in action.

Figure 3: The balloon is filled with helium and released into the air to lift the TV off of a launch tower

The launch went off without a hitch. The balloon and the TV’s Star 48 rocket-fueled motor worked perfectly. Figure 4 shows the view of one of the GoPro Cameras attached to the TV. The balloon hovered over the water north of Ni`ihau at an altitude of about 120,000 feet when it detached from the TV. The TV propelled itself to an altitude of about 180,000 feet; then, the TV activated the spin motors to stabilize itself for the fall to Earth. The TV fell at Mach 4 speeds and successfully deployed the SIAD. The SIAD was able to slow the TV down to about Mach 2.1 speeds. From there, the TV deployed the SSDS parachute to slow down the TV for landing. When the parachute was deployed, it immediately ripped apart (Shown in Figure 5). The parachute did slow down the TV a bit, but it never did reach subsonic speeds. The impact that the TV made with the water was like hitting concrete. The TV was significantly damaged when it landed in the water.

Figure 4: The Star-48 rocket motor propels the TV up to the top of the stratosphere

102

Figure 5: The SSDS Parachute is torn apart in a matter of seconds when it deployed from the TV

The TV was recovered from the ocean just north of Ni`ihau. The TV suffered major damage to the top of the structure, creating a huge hole in the TV. However, the electronic devices of the TV didn’t suffer too much damage. The TV was in the water for a few hours before the recovery boat fished it out of the ocean (Shown in Figure 6). Within that amount of time, the electronics did sustain a lot of corrosion from the salt water. Most of what’s left from the TV’s electronic system is salvageable.

Figure 6: The TV being hooked on to the Kahana recovery boat [5]

103

CONCLUSION

Despite the parachute being torn apart when it deployed, the other aspects of the test went according to plan. The LDSD team would call this a huge success on their part. The parachute problem is a minor fix; although, they’re not quite sure why the parachute ripped apart so vigorously. The LDSD team is very excited for their return to Kaua‘i for the next phase of operation. They will probably test the SIAD-R a couple more times before moving on to the SIAD-E. Next summer (around the same time as this launch) will be their next scheduled launch opportunity at PMRF.

ACKNOWLEDGEMENTS

I would like to express our biggest thank you to the Hawai‘i Space Grant Consortium for fronting the stipends for this internship along with Mr. Burley who helped me apply for this opportunity. I would also like to thank everyone involved with the LDSD project and the people that I worked with at PMRF after the conclusion with the LDSD project. I am very grateful for the opportunity to be part of this project and being a part of this history in the making.

REFERENCES

[1] U.S. Navy Pacific Missile Range Facility April 2014, Low Density Supersonic Decelerators Operations Directive Final, NASA JPL

[2] NAS/JPL-Caltech May 16, 2014 NASA’s Saucer-Shaped Craft Preps for Flight Test, http://www.nasa.gov/jpl/ldsd/flight-test-20140516/#.U-8pz_ldWSo

[3] NASA JPL Press Kit June 2, 2014 Low Density Supersonic Decelerator (LDSD), http://www.jpl.nasa.gov/news/press_kits/ldsd.pdf

[4] NASA June 2013, Low Density Supersonic Decelerators, NASA Facts www.nasa.gov JPL 400-1530

[5] NASA/JPL-Caltech June 29, 2014 First LDSD Flight a Success, http://www.nasa.gov/jpl/ldsd/test-flight-successful-20140629/#.U-8i6_ldWSo

104

LOW DENSITY SUPERSONIC DECELERATOR

Jacob J. Matutino Department of Computer Science University of Hawai‘i at Mānoa Honolulu, HI 96822

ABSTRACT

The Low Density Supersonic Decelerator (LDSD) project’s purpose is to test three designs in Mars-like environments: Two Supersonic Inflatable Aerodynamic Decelerators (SIAD), the SIAD-R and the SIAD-E; and the Supersonic Disk Sail (SSDS) parachute. The SIAD-R is meant for Robotic Class Missions, allowing larger robots or rovers to be sent to Mars with more precise landings. The SIAD-E is meant for Exploration Class Missions, the end goal for this SIAD is to make it possible for humans to settle on Mars. The differences between the two SIADs are the size and shape of the SIADs. The purpose of the SIADs is to increase surface area without increasing weight or mass; this creates more drag and therefore deceleration. Refer to figure 1 to see the physical differences between the SIADs. The SSDS is the biggest parachute of its kind and has a unique design. Both the SSDS and the SIAD-R designs have been in full-scale testing prior to the launch at the Pacific Missile Range Facility (PMRF) Barking Sands Kaua‘i Hawai‘i. The LDSD project will occur in Earth’s upper atmosphere, where the air is less dense and better simulates Mars’ atmosphere.

30.5 m SSDS Parachute

Figure 1: LDSD designs to be tested

105

INTRODUCTION

LDSD plans on expanding and improving on the Viking Balloon Launched Decelerator Test (BLDT). The Viking BLDT occurred in the summer of 1972 at White Sands Missile Range in New Mexico. The Viking BLDT tested the Viking Disk-Gap-Band (DGB) parachute in Mars- like conditions. The LDSD test is modeled after the Viking BLDT in many ways: using a weather balloon to achieve the desired altitudes, the SSDS is a larger version of the DGB, and the Viking BLDT used onboard rocket motors to achieve the supersonic speeds needed to conduct the test.

METHOD

NASA went through a selection process to find the range facility that would best host the LDSD project. Three ranges were considered: US Navy San Nicolas Island off California, Royal Australian Air Force Woomera Test Range, and the US Navy Pacific Missile Range Facility (PMRF). These ranges were rated on several qualities including range services and instrumentation, conditions for balloon launch, conditions for balloon trajectory and ascent, and range availability. Other factors were taken into account but those were more or less the main qualities desired. PMRF was chosen because it rated best in most categories followed by Woomera. Referring to figure 2, the three ranges were rated from one to three, three being the best out of the three choices. The ratings were summed up and the range with the highest total was probably the best choice.

Figure 2: Range Selection Results

The LDSD project requires collaboration between many facilities such as NASA Wallops Flight Facility (WFF) in Virginia, NASA Jet Propulsion Laboratory (JPL) in California, NASA Columbia Scientific Balloon Facility (CSBF) in Texas, and PMRF at Barking Sands Kaua‘i, Hawai‘i. JPL made the Test Vehicle (TV); they are in charge of the maintenance and the operation of the TV. CSBF is responsible for the balloon and all of its components; they are also responsible for keeping track of the weather and providing weather data for the balloon trajectory predictions. WFF is in charge of the overall operations on the LDSD project side; Shad Combs, who is from WFF, was in charge of Range Coordination with PMRF. PMRF is in charge of hosting the project and providing the required instrumentation and telemetry for the project. PMRF is also in charge of maintaining safety guidelines for the island of Kaua‘i; PMRF Range Safety had to approve the project before the project could occur and Range Safety has the authority to stop the project at any time during the project if they deem the project unsafe or if the project endangers any civilians or personnel in the area.

106

The interns’ job in the LDSD project was to compile all of the essential checklists and documents for launch day into a Launch Book. Some documents included in the book were the LDSD Master Countdown, Test Conductor’s Checklist, and Launch Constraint Documents. These were some of the documents that the LDSD personnel needed on-hand during the mission for quick reference. These launch books were made primarily for the LDSD personnel in the PMRF Range Operations Control Center (ROCC) also known as building 105. The ROCC is a high security area that does not allow phones or laptops, having electronic versions of the essential documents were not possible. Having physical hard-copies was the only way for those in the ROCC to have the documents necessary for launch.

The balloon trajectory predictions were acquired and calculated depending on the weather as Figure 3: TV on Launch Tower the scheduled launch window was closing in. Trajectory predictions were acquired using a method called the Monte Carlo method. The Monte Carlo method in the LDSD test was implemented by simulating the balloon trajectory many times using the weather data for that day and with slight variations through each simulation. The simulations are done through a MATLAB program that takes weather data, and other variables, as parameters. The simulation is done about 8,000 times and trends are revealed as certain paths are projected more frequently. A path that best fits the trending path predictions is derived from the collected data and that is presented as the predicted balloon trajectory for that particular day.

Once the predicted trajectory for the planned window was discussed and approved by the LDSD team and range safety, balloon preparations were made and the TV was suspended on the launch tower’s arm, as shown in figure 3. The balloon takes some time to fill up for it requires about 250,000 cubic feet of helium. The balloon used for the LDSD test is known as the 34 Heavy (34H) because it has a volume of about 34 million cubic feet. Once the balloon is filled, the balloon was released and it takes the suspended TV off of the launch tower’s arm. Once the balloon with the TV was up in the air away from the launch tower, it was at the mercy of the winds. There was about a two hour ascent time before the balloon and TV arrived at the desired 120,000 feet altitude. The balloon ascent is the most unpredictable part of the test because the winds are unpredictable. If the balloon were to drift into a restricted or unsafe zone where people were put in danger, range safety had the power to ‘pull the plug’ on the tests and abort procedures would initiate. In the unfortunate event that something like that would happen mid- ascent, the balloon can self-terminate.

Once the balloon and TV hit an altitude of about 120,000 feet, the TV detached from the balloon and a couple of events occurred instantaneously. The TV has spin motors that was used to induce spin on the TV; this provided stability to the TV as it used its main motor, the Star48, to get up to just under Mach 4 speeds and reach a higher altitude of about 180,000 feet. The

107

balloon self-terminated a couple of seconds after the TV drop and fell into the Pacific where it was recovered. Once the desired speed was reached, spin motors pointing the opposite way of rotation were used to stop the rotation.

Figure 4: LDSD Timeline Diagram

The LDSD test began once the TV is moving at about Mach 4 at an altitude of about 180,000 feet. First, the SIAD was deployed; in this case, the SIAD-R was used for the initial deceleration. The SSDS was then deployed when the TV slowed down to a velocity of about Mach 2. The SIAD worked wonderfully, but the SSDS ripped upon deployment. The SSDS parachute ripped open as soon as it was deployed so it failed. Figure 4 is a timeline of the whole flight from balloon launch to the TV recovery.

RESULTS

The SSDS parachute failed but the test was a great success. Everything worked wonderfully from the balloon launch to the recovery of the balloon and TV. Everything was recovered by the Kahana, a local boat from Port Allen Kaua‘i, and taken back to Port Allen, where it was then transported back to PMRF for processing and analysis. The TV was intact and the balloon carcass was recovered. The parachute was tied in many knots and was heavily tangled as shown in figure 5. Everything was shipped back to JPL for further analysis. The on-board GoPros, high- definition, and high-speed cameras were also recovered. The GoPros provided great footage of the flight and provided some hints as to what exactly Figure 5: SSDS Post-Recovery happened to the SSDS. The high-definition and high- speed camera footage will be used to do a more in-depth analysis of the SSDS failure.

108

The LDSD project faced some problems at PMRF. LDSD is the first project in history to launch a rocket off of a weather balloon in this way so there were many unknowns going into the project. Because the balloon was at the mercy of the weather, it was impossible to predict the exact trajectory of the balloon. This randomness via the weather created many uncertainties, which PMRF flight safety was somewhat comfortable with. The predicted projections were broad and the restrictions were strict; if the weather wasn’t perfect, there was always the possibility that the balloon would float into restricted areas and air space. This almost caused the cancelation of the LDSD project because the weather did not cooperate for the scheduled launch window. LDSD luckily got a second chance by returning at the end of June 2014 for a small extension. Eventually, the wind directions moved in such a way that the balloon trajectory was almost guaranteed to not move into off-limits areas. The LDSD team jumped on the opportunity and proceeded to mission success on June 26.

CONCLUSION

NASA LDSD plans to come back to PMRF next year in the summer of 2015. NASA JPL plans to make two test vehicles with improved SIAD and SSDS designs. It is still undecided yet if the SIAD-E will make it to the tests next year due to lack of testing and uncertainty. The SSDS design will be analyzed and improved according to what problems caused the SSDS to rip and fail.

The LDSD team hopes to use these designs on real Mars landers or rovers in the future. Landing Humans on Mars will hopefully be obtainable using the SIAD-E. Bigger rovers and payloads can be sent to mars using the SIAD-R.

ACKNOWLEDGMENTS

I would like to initially thank Stewart Burley from Hawai‘i Space Grant Consortium for giving me this opportunity. The NASA LDSD team: Shad Combs, Grace Tan-Wang, George Chen, and everyone else at NASA jet propulsion Laboratories, Wallops Flight Facility, and Columbia Scientific Balloon Facility. And the people at PMRF who we worked with: Johnathan Williams, Kris Blackstad, Saige Stocker, Patrick Reyes, Alan Chun, and Kent Mizuguchi.

Figure 6: NASA Interns with Test Vehicle.

109