Proceedings of the 22nd Annual Wisconsin Space Conference

From Earth to Galaxy

August 16-17, 2012
University of Wisconsin-Whitewater
Whitewater, Wisconsin

Sponsor: Wisconsin Space Grant Consortium
University of Wisconsin-Green Bay
2420 Nicolet Drive, Green Bay, WI 54311-7001
Phone: 920.465.2108 ■ Fax: 920.465.2376
E-mail: [email protected] ■ Web site: www.uwgb.edu/wsgc

For information about the programs of the Wisconsin Space Grant Consortium, contact the Program Office or any of the following individuals:

Lead Institution
Wisconsin Space Grant Consortium
University of Wisconsin-Green Bay
2420 Nicolet Drive, Green Bay, WI 54311-7001
Tel: (920)465-2108; Fax: (920)465-2376
www.uwgb.edu/wsgc

Director
R. Aileen Yingst, University of Wisconsin-Green Bay
(920)465-2327; [email protected]

Program Manager
Tori Nelson, University of Wisconsin-Green Bay
(920)465-5078; [email protected]

Chair, WSGC Advisory Council and Institutional Representative
Michael LeDocq, Western Technical College
(608)785-4745; [email protected]

WSGC Associate Director for Scholarships/Fellowships
Lindsay McHenry, University of Wisconsin-Milwaukee
(414)229-3951; [email protected]

WSGC Associate Director for Student Satellite Programs
William Farrow, Milwaukee School of Engineering
(414)277-2241; [email protected]

WSGC Associate Director for Research Infrastructure
Gubbi R. Sudhakaran, University of Wisconsin-La Crosse
(608)785-8431; [email protected]

WSGC Associate Director for Higher Education
John Borg, Marquette University
(414)288-7519; [email protected]

WSGC Associate Director for Aerospace Outreach
Shelley A. Lee, Wisconsin Department of Public Instruction
(608)266-3319; [email protected]

WSGC Associate Director for Special Initiatives
Nicole Wiessinger, Wisconsin Department of Transportation
(608)266-8177; [email protected]

WSGC Associate Director for Industry Program
Eric Rice, Orbital Technologies Corporation
(608)827-5000; [email protected]

WSGC Members and Institutional Representatives

Affiliates
Aerogel Technologies, LLC: Stephen Steiner
AIAA - Wisconsin Section: Todd Treichel
Alverno College: Paul Smith
Astronautics Corporation of America: Steven Russek
BioPharmaceutical Technology Center Institute: Karin Borgh
Carroll University: Damon Resnick
Carthage College: Kevin Crosby
College of Menominee Nation: Kathy Denor
Concordia University Wisconsin: Matthew Kelley
Crossroads at Big Creek: Coggin Heeringa
Experimental Aircraft Association (EAA): Bret Steffen
Great Lakes Spaceport Education Fdn.: Carol Lutz
Lawrence University: Megan Pickett
Marquette University: Christopher Stockdale
Medical College of Wisconsin: Danny A. Riley
Milwaukee School of Engineering: William Farrow
Orbital Technologies Corporation: Eric E. Rice
Ripon College: Sarah Desotell
Saint Norbert College: Terry Jo Leiterman
Space Education Initiatives: Jason Marcks
Space Explorers, Inc.: George French
Spaceflight Fundamentals, LLC: Bradley Staats
Spaceport Sheboygan: Daniel Bateman
University of Wisconsin-Fox Valley: Andrew Shears
University of Wisconsin-Green Bay: Scott Ashmann
University of Wisconsin-La Crosse: Eric Barnes
University of Wisconsin-Madison: Gerald Kulcinski
University of Wisconsin-Milwaukee: Ronald Perez
University of Wisconsin-Oshkosh: Nadejda Kaltcheva
University of Wisconsin-Parkside: David Bruning
University of Wisconsin-Platteville: William Hudson
University of Wisconsin-River Falls: Glenn Spiczak
University of Wisconsin-Sheboygan: Harald Schenk
University of Wisconsin-Stevens Point: Sebastian Zamfir
University of Wisconsin-Stout: Todd Zimmerman
University of Wisconsin-Superior: Richard Stewart
University of Wisconsin-Whitewater: Rex Hanger
Western Technical College: Michael LeDocq
Wisconsin Aerospace Authority: Tom Crabb
Wisconsin Department of Public Instruction: Shelley A. Lee
Wisconsin Department of Transportation: Nicole Wiessinger
Wisconsin Lutheran College: Kerry Kuehn

See www.uwgb.edu/wsgc for up-to-date contact information

From Earth to Galaxy

22nd Annual Wisconsin Space Conference August 16-17, 2012 Host: University of Wisconsin-Whitewater Whitewater, Wisconsin

Edited by: R. Aileen Yingst, Director, Wisconsin Space Grant Consortium Tori Nelson, Program Manager, Wisconsin Space Grant Consortium Sarah Desotell, Assistant Professor, Ripon College Glenn Spiczak, Professor, University of Wisconsin-River Falls Karin Borgh, Executive Director, BioPharmaceutical Technology Center Institute

Cover by: Ashley Skalecki, Student Assistant, Wisconsin Space Grant Consortium

Layout by: Brittany Luedtke, Office Coordinator, Wisconsin Space Grant Consortium

Published by: Wisconsin Space Grant Consortium University of Wisconsin-Green Bay

Copyright © 2013 Wisconsin Space Grant Consortium

Wisconsin Space Grant Consortium
University of Wisconsin-Green Bay
Green Bay, WI 54311

May 2013

Preface and Acknowledgements

So I must confess that I missed the conference this year, but I have a good excuse. On August 5, 2012, I, along with hundreds of my science colleagues (the VIPs were in another room), crowded into a small auditorium at the Jet Propulsion Laboratory, and anxiously watched the huge screens lining the wall. We were living the “Seven Minutes of Terror,” the approximately seven minutes that it would take the Mars Science Laboratory spacecraft to complete entry and descent through the martian atmosphere, going from 13,000 mph to zero in that time, firing up a descent system never before tried on another planet, landing a rover wheels first. Honestly, most of us were terrified. Many of my engineering friends gave the rover no better than even odds of surviving. But she did. And I, along with the Curiosity rover, survived the seven minutes of terror. And now we have a half-ton avatar on Mars, big and beautiful, strong and capable, returning the most glorious science data nearly every day.

We all know the old adage: with great risk comes great opportunity. But this mission, with all its new technology could easily have gone the other way, and if it had, what then?

Well, I was there when it went the other way. On December 3, 1999, Mars Polar Lander was lost during entry, descent and landing operations. And let me tell you, I like it a great deal better when we land safely. But I can say from experience that if it had gone the other way — if we had lost Curiosity, then it still would have been worth it. Why? Because either way, the entire conversation — in aerospace engineering, in technological advancement, in scientific growth — is different than it would have been had we never tried. And the conversation we had after the crash of Mars Polar Lander was one of the most important factors in the safe landing of Curiosity. We talked, we listened, we learned, we improved, we tried again. That’s what science and engineering are all about.

Innovation isn’t safe. When you push the envelope, sometimes it doesn’t open. But innovation, imagination, creativity — all of these things are the wellspring of progress. No matter how many bumps there might be along the way, the way forward goes through risk. I am proud to be involved with these proceedings because each paper, to a greater or lesser extent, represents an individual or a group taking a risk in order to stretch our knowledge. Thank you for your courage, and I encourage everyone in the Wisconsin aerospace community to continue taking those risks because they continue to be the very best way to advance the human condition.

Conferences don’t occur in a vacuum, and the Wisconsin Space Grant Consortium office especially thanks our host for this conference, the University of Wisconsin—Whitewater, starting with Conference lead Dr. Rex Hanger and his staff of helpful volunteers. We are grateful to everyone at UWW who made our conference run so smoothly. Thanks must also go to our session moderators and to our poster judges for their conscientious work and their strong support for our students. Our keynote speakers are also to be thanked for adding so much to our conference: Dr. Robert Benjamin who presented, A Visitor’s Guide to the Milky Way, and Dr. John Delano who presented, Astrobiology: NASA’s Multi-Disciplinary Search for Life Beyond the Earth. And once again, I especially appreciate all the scientists, engineers, students, educators and others, who contributed papers to this volume. Those papers represent the hard

work and the risks that each contributor has taken to advance their field, and to each of them, I say, thank you, and, as always….

Forward!

R. Aileen Yingst, Ph.D.
Director

Wisconsin Space Grant Consortium Programs for 2012

Student Programs
• Undergraduate Scholarship
• Undergraduate Research
• Graduate Fellowship
• Dr. Laurel Salton Clark Memorial Graduate Fellowship
• University Sounding Rocket Team Competition
• Student High-Altitude Balloon Launch
• Student High-Altitude Balloon Payload
• Student High-Altitude Balloon Instrument Development
• Industry Member Internships
• NASA ESMD Internships
• NASA Academy Leadership Internships
• NASA Centers/JPL Internships
• NASA Reduced-Gravity Team Launches
• Relevant Student Travel
(see detailed descriptions on next page)

Aerospace Outreach Program
The Aerospace Outreach Program provides grant monies to promote outreach programs and projects that disseminate aerospace and space-related information to the general public, and support the development and implementation of aerospace and space-related curricula in Wisconsin classrooms. In addition, this program supports NASA-trained educators in teacher training programs.

Special Initiatives
The Special Initiatives Program is designed to provide planning grants and program supplement grants for ongoing or new programs which have space or aerospace content and are intended to encourage, attract, and retain under-represented groups, especially women, minorities and the developmentally challenged, in careers in space- or aerospace-related fields.

Research
The Research Infrastructure Program provides Research Seed Grant Awards to faculty and staff from WSGC Member and Affiliate Member colleges and universities to support individuals interested in starting or enhancing space- or aerospace-related research program(s).

Wisconsin Space Conference
The Wisconsin Space Conference is an annual conference featuring presentations of students, faculty, K-12 educators and others who have received grants from WSGC over the past year. The Conference allows all to share their work with others interested in space. It also includes keynote addresses and the announcement of award recipients for the next year.

Higher Education
The Higher Education Incentives Program is a seed-grant program inviting proposals for innovative, value-added, higher education teaching/training projects related to space science, space engineering, and other space- or aerospace-related disciplines. The Student Satellite Program, including the Balloon and Rocket programs, is also administered under this program.

Regional Consortia
The WSGC is a founding member of the Great Midwest Regional Space Grant Consortia. The Consortia consists of eight members, all Space Grants from Midwest and Great Lakes states.

Industry Program
The WSGC Industry Program is designed to meet the needs of Wisconsin Industry member institutions in multiple ways, including:
1) the Industry Member Internships (listed under Student Programs above),
2) the Industry/Academic Research Seed Program, designed to provide funding and open an avenue for member academia and industry researchers to work together on a space-related project, and
3) the Industrial Education and Training Program, designed to provide funding for industry staff members to keep up-to-date in NASA-relevant fields.

Communications
The WSGC web site, www.uwgb.edu/wsgc, provides information about WSGC, its members and programs, and links to NASA and other sites.

Contact Us
Wisconsin Space Grant Consortium
University of Wisconsin-Green Bay
2420 Nicolet Drive, ES 301
Green Bay, Wisconsin 54311-7001
Phone: (920) 465-2108
Fax: (920) 465-2376
E-mail: [email protected]
Website: www.uwgb.edu/wsgc

Wisconsin Space Grant Consortium Student Programs for 2012

Undergraduate Scholarship Program
Supports outstanding undergraduate students pursuing aerospace, space science, or other space-related studies or research.

Undergraduate Research Awards
Supports qualified students to create and implement a small research study of their own design during the summer or academic year that is directly related to their interests and career objectives in space science, aerospace, or space-related studies.

Graduate Fellowships
Support outstanding graduate students pursuing aerospace, space science, or other interdisciplinary space-related graduate research.

Dr. Laurel Salton Clark Memorial Graduate Fellowship
In honor of Dr. Clark, Columbia Space Shuttle astronaut and resident of Wisconsin, this award supports a graduate student pursuing studies in the fields of environmental or life sciences, whose research has an aerospace component.

University Sounding Rocket Team Competition
Provides an opportunity and funding for student teams to design and fly a rocket that excels at a specific goal that is changed annually.

High School Sounding Rocket Team Competition
For high school students. This program is in its initial stages; it mimics the university competition.

Student High-Altitude Balloon Instrument Development
Students participate in this instrument development program through engineering or science teams. Working models created by the students will be flown on high-altitude balloons.

Student High-Altitude Balloon Payload/Launch Program
The Elijah Project is a high-altitude balloon program in which science and engineering students work in integrated science and engineering teams to design, construct, launch, recover, and analyze data from a high-altitude balloon payload. These balloons travel up to 100,000 ft., considered "the edge of space." Selected students join either a launch team or a payload design team.

Industry Member Internships
Supports student internships in space science or engineering for the summer or academic year at WSGC Industry members, co-sponsored by WSGC and Industry partners.

NASA ESMD Internships
Supports student internships at NASA centers or WSGC industry members that tie into NASA's Exploration Systems Mission Directorate.

NASA Academy Leadership Internships
This summer internship program at NASA Centers promotes leadership internships for college juniors, seniors, and first-year graduate students and is co-sponsored by participating state Space Grant Consortia.

NASA Centers/JPL Internships
Supports WSGC students for research internships at NASA Centers or JPL.

NASA Reduced Gravity Program
Operated by the NASA Johnson Space Center, this program provides the unique "weightless" environment of space flight for test and training purposes. WSGC student teams submit reduced-gravity experiments to NASA and, if selected, perform their experiments during a weightless-environment flight with the support of WSGC.

Relevant Student Travel
Supports student travel to present their WSGC-funded research.

22nd Annual Conference

TABLE OF CONTENTS

Preface

Part 1: Student Satellite Program: High Altitude Balloon

Balloon Payload Team: Brock Boldus, Milwaukee School of Engineering Patrick Comiskey, Milwaukee School of Engineering Latisha Jones, Milwaukee School of Engineering Kaitlyn Mauk, Milwaukee School of Engineering Benjamin Peterson, Milwaukee School of Engineering Matthew Weichart, Milwaukee School of Engineering

Balloon Launch Team: Tyler Capek, University of Wisconsin-River Falls Richard Oliphant, Milwaukee School of Engineering Devin Turner, Marquette University Danielle Weiland, Carthage College

Part 2: Student Satellite Program: Rocket Design Competition

1st Place – Non Engineering UWL Physics Rocket Team, University of Wisconsin-La Crosse Richard Allenby Joseph Krueger John Nehls Andrew Prudhom

1st Place – Engineering Team Woosh Generator, Milwaukee School of Engineering Brandon Jackson Devin Dolby James Ihrcke Kirsti Pajunen Eric Johnson

2nd Place - Engineering Team Jarts, Milwaukee School of Engineering Cameron Schulz Alex Folz Eric Logisz Brett Foster

3rd Place - Engineering Team ChlAM, University of Wisconsin-Madison

Chloe Tinius Maxwell Strassman Andrew Udelhoven

Part 3: NASA Reduced Gravity Program

UW-Madison SEED Zero-Gravity Experiment, Aaron Olson, Julie Mason, Collin Bezrouk, Undergraduate Students, Riccardo Bonazza, University of Wisconsin-Madison

Modal Evaluation of Fluid Volume in Spacecraft Propellant Tanks, Steven Mathe, KelliAnn Anderson, Amber Bakkum, Kevin Lubick, John Robinson, Danielle Wieland, Rudy Werlink, Undergraduate Students, Kevin M. Crosby, Carthage College

Part 4: Other NASA Student Opportunities

The Badger eXploration Loft at Desert RATS 2011, Jordan Wachs, Aaron Olson, Peter Sweeney, Will Yu, A. Arnson, Julie Mason, N. Roth, S. Wisser, Marcus Fritz, Samuel Marron, Michael Lucas, Nathan Wong, Undergraduate Students, Fred Elder, University of Wisconsin-Madison

MDRS Crew 110A, Aaron Olson, Julie Mason, Lyndsey Bankers, Samuel Marron, Will Yu, Mark Ruff, Undergraduate Students, Fred Elder, University of Wisconsin-Madison

Part 5: Biology and Medical Sciences

Prototype Framework for Dynamic Probabilistic Risk Assessment of Space- Flight Medical Events, Kirsti Pajunen, Undergraduate Student, Milwaukee School of Engineering

Part 6: Engineering

Operating Temperature Dependence of QDOGFET Single-Photon Detectors, Eric Gansen, Assistant Professor, Physics Department, University of Wisconsin- La Crosse

Infrasonic Detection, Paul Thomas, Undergraduate Student, University of Wisconsin-Platteville

Towards Billion-Body Dynamics Simulation of Granular Material, Rebecca Shotwell, Undergraduate Student, Dan Negrut, Professor, University of Wisconsin-Madison

Test and Analysis of the Mass Properties for the PRANDTL-D Aircraft, Kimberly Callan, Undergraduate Student, University of Wisconsin-Madison

Development of a Passive Check Valve for Cryogenic Applications, Bradley Moore, Graduate Student, University of Wisconsin-Madison

Part 7: Physics and Astronomy

A Novel Technique for Fabricating Metalized Objects with Difficult Geometries, Mitchell Powers, Undergraduate Student, University of Wisconsin- Madison

A C-Band Study of the Historical Supernovae in M83 with the Karl G. Jansky Very Large Array, Christopher Stockdale, Associate Professor, Physics Department, Marquette University

Testing General Relativity with Pulsar Timing Arrays, Sydney Chamberlin, Graduate Student, University of Wisconsin-Milwaukee

The Origin of the Elements, Shelly Lesher, Assistant Professor, Department of Physics, University of Wisconsin-La Crosse

Improving Cloud and Moisture Representation in Weather Prediction Model Analyses with Geostationary Satellite Information, Jordan Gerth, Graduate Student, University of Wisconsin-Madison

Population Analysis of Seyfert Galaxies in the Coma-Abell 1367 Supercluster, Megan Jones, Undergraduate Student, University of Wisconsin-Madison

X-Ray and Radio Emissions of AWK and MKW Clusters, Michael Ramuta, Undergraduate Student, University of Wisconsin-Madison

Developing a Focal Plane Array at the GBT for 21 cm Astronomy, Christopher Anderson, Graduate Student, University of Wisconsin-Madison

Teaching Special Relativity: Developing a Software Aid for Spacetime Diagrams, Randy Wolfmeyer, Instructor, Department of Natural Science, John Wood Community College

Observing Convection in Microgravity, Matt Heer, East Troy High School

A Simplified Model for Flagellar Motion, Kelsey Meinerz, Undergraduate Student, Marquette University

Part 8: Geology

Fumarole Alteration of Hawaiian Basalts: A Potential Mars Analog, Teri Gerard, Graduate Student, University of Wisconsin-Milwaukee

Part 9: Education and Public Outreach

A Hubble Instrument Comes Home: The High Speed Photometer, James Lattis, UW Space Place, University of Wisconsin-Madison

Spaceflight Academy for CESA #7, Bradley Staats, Spaceflight Fundamentals, LLC.

Students Teaching Astronomy-Related Science (STARS), Reynee Kachur, Director of Science Outreach, University of Wisconsin-Oshkosh

Launching STEM Interest: Using Rockets to Propel to Excel in STEM: Results of the Lift-Off for Teachers and Youths (LOFTY) Program, Reynee Kachur, Director of Science Outreach, University of Wisconsin-Oshkosh

A Celebration of Life XVII: Geology on Earth and Mars! Summer Science for Grades 3-5 and 6-8, Barbara Bielec, BioPharmaceutical Technology Center Institute

NASA and Biotechnology-Professional Development for Secondary Teachers, Barbara Bielec, BioPharmaceutical Technology Center Institute

Using Science to Bridge Achievement Gaps, James Kramer, Simpson Street Free Press

EAA Women Soar – Expanding Horizons, Jeff Skiles, Elissa Lines, Experimental Aircraft Association

EAA FlightLink 2G, Jeff Skiles, Elissa Lines, Experimental Aircraft Association

EAA Space Week – Lab for Exploring Teachers, Jeff Skiles, Elissa Lines, Experimental Aircraft Association

Appendix A: 22nd Annual Conference 2012 Program

22nd Annual Conference

Part One

Student Satellite Program: High Altitude Balloon

WSGC Elijah High Altitude Payload 2012 Summer Team Final Report

August 17th, 2012

Brock Boldus1, Patrick Comiskey1, Latisha Jones1, Kaitlyn Mauk1, Ben Peterson1, & Matthew Weichart1

1Milwaukee School of Engineering

Advisor: Dr. William Farrow2

2Assistant Professor, Department of Mechanical Engineering, Milwaukee School of Engineering

Abstract

The 2012 Wisconsin Space Grant Consortium Elijah High Altitude Payload Team explored four different experiments this summer. We investigated the motion of the payload during flight, designed several systems for two cameras, utilized a magnetometer, and inspected the use of thermistors. Each experiment had numerous subcategories that resulted in more work than we had time for, and our team had to rush to meet the launch date. In this haste, sections were overlooked; as a result, we lost the cameras during flight and were not able to gather data from the magnetometer and thermistor experiments. However, the experience demonstrated much about the engineering process and about working as a team; the internship taught us how to become better engineers through our mistakes and was extremely worthwhile.

Introduction

The Wisconsin Space Grant Consortium (WSGC) is a group dedicated to engaging students in science and engineering within the broad field of aerospace. We had the opportunity to work with the WSGC on the Elijah high altitude balloon project, which sends a weather balloon high into the atmosphere in order to conduct science and engineering experiments. The balloon carries the payload close to the edge of the earth's atmosphere; the balloon then bursts, and the payload descends on a parachute.

The only constraints for the project are those imposed by the Federal Aviation Administration (FAA), the governing body over US airspace. In accordance with FAA regulations for balloons,

1 Financial support from Wisconsin Space Grant Consortium, special thanks to all those mentioned in “Acknowledgements” section

the payload must weigh six pounds or less, and the pressure on any square inch of the outer surface of the payload must not exceed 3/16 psi (FAA Regulations for Kites/Balloons, 1999).
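As a sanity check, the two limits reduce to a simple predicate. This is a sketch only: the 3 oz-per-square-inch equivalence follows from 1 lb = 16 oz, and computing the loading over the payload's smallest surface is an assumption for illustration, not a quote from the regulation.

```python
# Sketch of the two FAA limits cited above: total payload weight of at
# most six pounds, and at most 3/16 psi (equivalently 3 oz per square
# inch, since 1 lb = 16 oz) on the outer surface. Computing the loading
# over the smallest surface is an illustrative assumption.
def faa_exempt(weight_lb, smallest_surface_in2):
    loading_psi = weight_lb / smallest_surface_in2
    return weight_lb <= 6.0 and loading_psi <= 3.0 / 16.0

# A full six-pound payload needs at least 32 in^2 on its smallest surface:
print(faa_exempt(6.0, 32.0))  # True
print(faa_exempt(6.0, 30.0))  # False
```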

This year, the team elected to conduct experiments on stabilizing the payload, measuring the strength of the earth's magnetic field, estimating the radiation given off by the sun by measuring temperature, and taking and comparing pictures with and without infrared filters. The stabilization of the payload includes two sub-projects: controlling the tendency to tumble using a tail extension, and controlling the spinning using CO2 canisters.

Box-Tail/Drag-Kite

Early on, one of the team's interests was to stabilize the payload during flight. One of the projects born from this desire was the Box-Tail/Drag-Kite. The idea behind the Box-Tail/Drag-Kite is to extend the moment arm of the whole payload, thereby requiring a larger and stronger force to disturb it. It also adds a restoring force: as one side of the tail is pushed by wind, the opposite side is exposed to the same force, causing the payload to go briefly into periodic motion until it reaches a more stable orientation.

The first version of the Box-Tail/Drag-Kite was based on the previous year's casing and eight-foot-long flexible rods that extended behind it. Attached to the end of the rods was a large parachute that would be used to retard the forces acting on the payload and, as a result, make the payload more inherently stable. The force of air acting on the drag kite would resist other forces acting on the payload, such as winds aloft, and the length of the tail behind the payload would increase the moment of inertia, making the whole assembly harder to perturb.
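The moment-of-inertia argument can be sketched numerically. The masses, radius, and rod length below are illustrative guesses, not the team's measured values, and the simple sphere-plus-rod model is an assumption.

```python
# Illustrative numbers only: the payload is modeled as a solid sphere
# about its center and the tail as a thin rod pivoting about the
# payload end (parallel-axis "rod about one end" formula).
def total_inertia(m_payload, r_payload, m_tail, l_tail):
    i_sphere = 0.4 * m_payload * r_payload**2   # (2/5) m r^2
    i_rod = (1.0 / 3.0) * m_tail * l_tail**2    # (1/3) m L^2, rod about one end
    return i_sphere + i_rod

no_tail = total_inertia(2.7, 0.15, 0.0, 0.0)     # ~6 lb payload, 15 cm radius
with_tail = total_inertia(2.7, 0.15, 0.2, 1.22)  # 200 g tail, 4 ft (1.22 m) rods
print(round(with_tail / no_tail, 1))
```

Even a light tail dominates the total, because the rod term grows with the square of its length; this is why shortening the rods (below) trades away some stability for handling.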

However, there were major drawbacks to this plan. The eight-foot rods would make the assembly unwieldy and awkward to move, in addition to being excessively large, so the rods were shortened from eight feet to a more manageable four feet. The parachute-like end also presented a significant problem: it could cause the whole payload to invert once it started to descend back toward Earth.

To deal with the risk of flipping over, the parachute design was replaced with a more traditional box-shaped tail. The new design would not resist the payload's upward movement as the previous design did; however, for stabilization against side-to-side movement, it did just as well or better, and it was significantly less likely to invert the whole payload.

The rods for the Box-Tail/Drag-Kite were quickly chosen to be made from fishing rods, which have proven able to withstand a lot of stress without breaking while remaining lightweight. Fishing rods are also readily available in various lengths and actions (the action describes where the rod bends the most), so there were plenty of options to choose from. The team researched fishing-rod manufacturers based in Wisconsin and came across St. Croix Rods, based in northern Wisconsin. The team got in contact with their director of engineering and received an in-kind donation of several blanks (rejected carbon-fiber rods that were never completely built up), which still offered everything the team was looking for in the Box-Tail/Drag-Kite.

The carbon fiber rods were cut to the desired size using a Dremel cutting tool and then fitted with nylon bolts so they could be attached to the rest of the payload more easily. The casing and the connection nexus mounted on the bottom hemisphere of the payload were set up so that the rods would be ninety degrees apart from each other and twenty degrees from the central axis.
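The rod layout can be sketched as direction vectors: four rods spaced ninety degrees apart in azimuth, each tilted twenty degrees off the central axis. The coordinate convention and the tip-offset example are illustrative, not taken from the team's drawings.

```python
import math

# Sketch of the nexus geometry: rod k (k = 0..3) sits at azimuth 90*k
# degrees and is tilted 20 degrees from the central (z) axis.
def rod_direction(k, tilt_deg=20.0):
    """Unit direction vector of rod k, with z along the central axis."""
    az = math.radians(90.0 * k)
    tilt = math.radians(tilt_deg)
    return (math.sin(tilt) * math.cos(az),
            math.sin(tilt) * math.sin(az),
            math.cos(tilt))

# With four-foot rods, each tip sits this far (in feet) from the axis:
tip_offset = 4.0 * math.sin(math.radians(20.0))
print(round(tip_offset, 2))  # 1.37
```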

The sides of the Box-Tail/Drag-Kite were made of rip-stop nylon, the most popular parachute material since the Second World War and among the most recommended after Kevlar and other similar materials currently used to make parachutes for the military. The sides were four inches in width and 23.21 inches in length, sized to accommodate the angle of the rods. The rip-stop nylon was donated by North Sails, a manufacturer of sails for sailboats operating out of Milwaukee and Chicago. The nylon was cut from a template, and the pieces were then sewn together to make the sides of the Box-Tail/Drag-Kite, with loops that would go over the rods. The nylon sides were then attached to the rods using Tear-Aid, a tear-repair patch material that bonded very well to both the nylon and the carbon fiber rods.

Shape Determination and Connection Junction

In the pursuit of payload stabilization during flight, the shape of the payload is an integral consideration. The payload would experience many different hazards that needed to be addressed, and many parameters had to be factored into the design: crosswinds, drag force, cost, weight, center of mass, overall size, space for equipment, and ease of building.

The basic candidate shapes were the cube, sphere, cone, and cylinder. Because the payload experiences different perils at different altitudes, we needed information at various altitudes. Using data from The Engineering Toolbox (The Engineering Toolbox, 2012) and the United States Centennial of Flight Commission (U.S. Centennial of Flight Commission, 1999), we obtained crosswind speed as a function of altitude, along with density, temperature, and an ascent rate. Compiling and processing the information by hand seemed quite a large task to undertake; therefore, a MATLAB code was written to do the calculations.
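The MATLAB code itself is not reproduced in this report, but the comparison it performed reduces to evaluating the standard drag equation, F = ½ρv²C_dA, per shape at each altitude. A minimal sketch of that calculation, here in Python with generic textbook drag coefficients and two illustrative density values (not the team's actual inputs):

```python
# Generic textbook drag coefficients for the four candidate shapes;
# the team's actual MATLAB inputs are not given in the report.
DRAG_COEFF = {"sphere": 0.47, "cube": 1.05, "cone": 0.50, "cylinder": 0.82}

def drag_force(shape, air_density, wind_speed, frontal_area):
    """Drag force F = 0.5 * rho * v^2 * Cd * A (SI units, newtons)."""
    return 0.5 * air_density * wind_speed**2 * DRAG_COEFF[shape] * frontal_area

# Compare the shapes for a 0.3 m^2 frontal area in a 20 m/s crosswind
# at two illustrative densities (sea level and roughly 20 km altitude):
for rho, label in [(1.225, "sea level"), (0.088, "~20 km")]:
    forces = {s: drag_force(s, rho, 20.0, 0.3) for s in DRAG_COEFF}
    best = min(forces, key=forces.get)
    print(f"{label}: lowest drag is the {best} ({forces[best]:.1f} N)")
```

With these coefficients the sphere wins at every density, since only C_d differs between shapes for a fixed frontal area, which is consistent with the result shown in Figure 1.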

After some time working on the code, we had values for each parameter, for each of the four shapes, at every altitude data point found online. Figure 1 below shows the Drag Force vs. Altitude plot that the code produced:

Figure 1: Drag Force vs. Altitude

Out of the four shapes, the sphere performed the best in terms of drag force, and each of the other parameters produced a similar result. After careful consideration, the sphere turned out to be the best shape for our application. We finalized the choice and moved forward with building the spherical payload.

There are several ways to connect the box tail to the payload, including directly onto the outer sphere, to a connection point inside the sphere, or by various other methods. Regarding joining to the outside, physically creating the necessary geometry for a correct box kite would be very difficult: lining everything up would be nearly impossible, and separately connecting each carbon fiber rod adds unnecessary hassle. Therefore, we decided it would be best to make a connection junction inside the payload.

Because of the complex geometry needed for the box kite and weight considerations, rapid prototyping our nexus seemed to be the best option. As the project progressed in other respects, we decided that this connection junction could also harbor the YEI 3-Space Sensor, along with the camera-mount carbon fiber rods and the box-kite carbon fiber rods. With that in mind, we set out to make a structurally rigid, rapid-prototyped part. After six different re-designs, the final variation that was built is seen in Figures 2 and 3:

Figure 2: Top view of the nexus with space for the YEI sensor and four spots to connect to the mounting board

Figure 3: Bottom view with the four larger holes at a 20 degree angle for the box kite and the two smaller ones for the camera mount

After retrieving the payload from the launch, we saw that the connection junction performed flawlessly and did everything it was intended to do.

Magnetometer

At the beginning of this project, it was determined that the earth's magnetic field in the X, Y, and Z directions would be experimentally measured in relation to altitude; these values would be measured in gauss. Not much information could be found relating the earth's magnetic field to altitude except above 119,000 meters; most magnetic surveys occur under 300 meters due to difficulties in acquiring data (Bayot, 2005). A goal of this side of the project was to investigate the relationship for altitudes from approximately zero to ninety kilometers. There was also the possibility of incorporating a Geiger tube into the experiment to measure atmospheric radiation in relation to altitude as well, but that experiment was dropped because of the time required to process and ship that electronic component; it might not have arrived in time for the flight.

In order to measure the earth's magnetic field, a magnetometer was needed. Magnetometers come in 1-, 2-, or 3-axis varieties, and different types suit different situations. The main types are fluxgate, proton precession, and Overhauser magnetometers (Bayot, 2005). The latter two were either meant for stable ground installations or were too large for this experiment, while certain fluxgate magnetometers, particularly some digital and analog models, are small and do not need a ground base. The companies whose fluxgate products were examined most closely were Applied Physics, Stefan Mayer, and Bartington. During this research, a sheet combining all of the projects' potential weight and budget was created; from those calculations, cost and weight were major factors in choosing a magnetometer.

The final choice was the Model 113D from Applied Physics, a digital fluxgate magnetometer weighing approximately 25 grams. Its minimum power input of +4.9 V was compatible with the Arduino, it came with six-inch flying leads, and it could output both ASCII and binary over RS232 and TTL serial (Applied Physics Systems, 2012). The earth's magnetic field averages about 50,000 nT, or 0.5 Gauss, ranging from about 25,000 nT at the equator to 70,000 nT near the poles (British Geological Survey, 2012). Since the Model 113D has a range of ±60,000 nT with a 2 nT resolution, it would be very sensitive to magnetic differences with altitude (Applied Physics Systems, 2012).

When the magnetometer arrived in the mail, a few wires had fallen off of the PC board. After they were resoldered, they continued to fall off randomly as the magnetometer was handled, until the system was properly mounted with styrofoam. Metal was avoided in the mounting so as not to disturb the earth's magnetic field readings. The wiring came prepared for RS232 serial communication, and the Arduino UNO was initially programmed with RS232 in mind. After many struggles trying to confirm that the magnetometer and Arduino were communicating, it was eventually found that the wrong serial interface was being used, and the wires were re-soldered for TTL communication, the serial level the Arduino actually uses. It was fortunate that the board and magnetometer were not damaged during this testing, given the different voltages associated with these interfaces. After some changes to the code, the correct number of binary values appeared on the COM port. Figure 4 shows how the code was written to store only the magnetic field in the X, Y, and Z directions in binary, and not the rest of the nine bytes of information the magnetometer was sending.


Figure 4. Storage of only needed data on Arduino’s EEPROM.
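The keep-only-what-you-need step in Figure 4 can be sketched as follows. The byte layout here is illustrative, not the documented 113D frame format: it assumes each axis arrives as a signed 16-bit value at the front of the nine-byte frame, with trailing bytes that are discarded.

```python
import struct

# Hypothetical frame layout (NOT the documented 113D format): assume the
# nine-byte serial frame carries X, Y, Z as big-endian signed 16-bit
# values in bytes 0-5, followed by three bytes we do not need to store.
AXIS_BYTES = 6  # 3 axes x 2 bytes each -- the only bytes worth keeping

def keep_xyz(frame: bytes) -> bytes:
    """Return just the six axis bytes, dropping the rest of the frame."""
    if len(frame) != 9:
        raise ValueError("expected a 9-byte frame")
    return frame[:AXIS_BYTES]

def decode_xyz(axis_bytes: bytes):
    """Unpack the stored bytes back into three signed axis readings."""
    return struct.unpack(">hhh", axis_bytes)

frame = struct.pack(">hhh", 1200, -350, 4800) + b"\x00\x00\x00"
stored = keep_xyz(frame)        # six bytes would go to EEPROM
x, y, z = decode_xyz(stored)    # recovered on readout
```

Storing only the axis bytes is what stretches the small EEPROM across the whole flight.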

It was determined early on that the YEI sensor would need its own Arduino memory if it was going to coordinate the carbon dioxide canisters and the gyros, so it was given its own Arduino, and the thermistors and magnetometer split the memory on the other one. Given the space available, binary storage was chosen: recording the data in ASCII would have significantly reduced the number of readings per three-hour flight. As it was, only about 72 readings could be taken during the flight without the memory overwriting itself, even when gathering just the X, Y, and Z values in Gauss. With more time, extra EEPROM memory would have been purchased.
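The ~72-reading ceiling follows from simple arithmetic. The sketch below is a back-of-envelope budget; the per-cycle record layout is assumed, not taken from the report, though the UNO's ATmega328 really does have 1,024 bytes of on-chip EEPROM.

```python
EEPROM_BYTES = 1024          # ATmega328 (Arduino UNO) on-chip EEPROM

# Assumed record layout per logging cycle (illustrative only):
MAG_BYTES = 3 * 2            # X, Y, Z as 2-byte binary values
THERM_BYTES = 3 * 2          # three thermistor readings per mag reading
OVERHEAD = 2                 # e.g. a marker/counter pair (assumed)

record = MAG_BYTES + THERM_BYTES + OVERHEAD
max_cycles = EEPROM_BYTES // record   # ~73, close to the ~72 reported

# In ASCII, a "+0.512"-style field would need ~7 bytes per axis instead
# of 2, cutting the number of cycles by more than half.
ascii_record = 3 * 7 + THERM_BYTES + OVERHEAD
ascii_cycles = EEPROM_BYTES // ascii_record
```

Under these assumptions the binary format roughly doubles the flight's data capacity, which matches the team's reasoning for avoiding ASCII.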

Some of the major programming victories included getting accurate data, setting a switch, and merging the magnetometer and thermistor code. A switch was needed so the Arduino would report its data to the computer without starting over and erasing the previous data. At first there was a button on the breadboard, programmed so that when it was held down at startup, the Arduino would record data only; when it was not held down at startup, the Arduino would immediately report its last data. An LED was added to signify when the system was ready to record. The wiring became quite congested, as seen in Figure 5, so the program was rewritten to replace the button with a wire that was inserted or pulled out to tell the Arduino to gather or report data. Because the battery supplies were estimated to last only three to four hours, and the flight was approximately three hours long, another switch was made for the battery packs: a circuit was created so that when a nail was screwed into it from outside the payload, it completed the circuit, turning on the battery packs and LED lights. Merging the thermistor and magnetometer code was a difficult task, but the merge put both sensors under the same delay. Because the EEPROM of one Arduino was being shared, there were three temperature readings for every magnetometer reading.

Figure 5. Magnetometer and Thermistors connected on one Arduino board.

One of the biggest disappointments came after the testing, when all of the programming had been working well. After an hour-long test with the payload hanging from a tree, the data came through and made sense. Then, on the day of the launch, the green LED never lit up: one of the wires connected to the battery pack had fallen off, leaving the circuit incomplete.

In hindsight, another magnetometer such as the HMC5883L triple-axis magnetometer would have been considered more strongly, because Arduino UNO tutorials exist online specifically for that product, including formulas for tilt-compensating compass directions (Love Electronics, 2012). The Magnetometer 113D had no such tutorials, and its online manual was of very minimal help, so working with it enforced a steep but productive learning experience: having known nothing about programming before this project, and after many hours spent on it over the summer, many of the essentials were learned quickly without formal instruction. To improve the project, more external EEPROM memory would be bought to eliminate delays, the payload would be stabilized with gyros and an accelerometer to eliminate the spinning that distorts magnetic field averages, and, most definitely, better protection for soldered wires would be finalized.

Thermistors

In order to determine the amount of radiation the payload would receive from the sun as it rose, 10K 1% waterproof thermistors were used. Thermistors were chosen because they are easy to use for simple temperature readings and are very adaptable. The thermistors were attached to one-inch aluminum blocks with thermally conductive adhesive and then glued into the payload, as illustrated in Figure 6 below.

Figure 6: Thermistor glued to an aluminum block and then covered with aluminized mylar for insulation

Research indicated that thermistors would be the best sensor for recording temperature, both because they are more manageable to program and because of the sensitivity of the sensor itself. The Arduino programming for the thermistors (Adafruit Learning System, 2012) was fairly simple yet tedious; the program converted the thermistor resistance into degrees Celsius, as shown in Figure 7.


Figure 7: The thermistor program converting voltage to degrees Celsius.
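The conversion in Figure 7 follows the standard series-divider plus B-parameter approach from the Adafruit tutorial. A Python restatement is below; the 10 kΩ series resistor and B = 3950 are assumed values, though they reproduce the numbers in Table 1.

```python
import math

SERIES_R = 10_000.0   # series resistor in the divider (assumed 10 kOhm)
NOMINAL_R = 10_000.0  # thermistor resistance at 25 C (10K part)
NOMINAL_T = 298.15    # 25 C in kelvin
B_COEFF = 3950.0      # B coefficient (assumed; typical for 10K NTCs)

def adc_to_resistance(reading: float) -> float:
    """10-bit ADC reading -> thermistor resistance (ohms)."""
    return SERIES_R / (1023.0 / reading - 1.0)

def resistance_to_celsius(r: float) -> float:
    """Simplified B-parameter (Steinhart-Hart) equation."""
    inv_t = 1.0 / NOMINAL_T + math.log(r / NOMINAL_R) / B_COEFF
    return 1.0 / inv_t - 273.15

r = adc_to_resistance(560.2)     # ~12,105 ohms, matching Table 1's first row
t = resistance_to_celsius(r)     # ~20.8 C, matching Table 1's first row
```

Running the first row of Table 1 (analog reading 560.2) through this conversion yields 12,104.6 Ω and 20.76 °C, which is why these parameter values appear to be what the flight code used.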

The initial idea for the thermistors was to take the temperature readings and convert them using the equation for convective heat transfer (Convective Heat Transfer, 2012):

q = hc · A · dT

where q is the amount of heat transferred, A is the area of the surface where the heat transfer takes place, hc is the heat transfer coefficient, and dT is the temperature difference. To make sure the thermistors could stand up to the cold temperatures the payload would encounter, an experiment was performed using dry ice to drastically cool the sensor. The experiment proved that the thermistors would function accurately at colder temperatures; the data from the dry ice experiment are shown in Table 1 and Figure 8 below.
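A worked instance of the equation, with placeholder values (the coefficient, area, and temperature difference below are illustrative, not measured payload numbers):

```python
# q = hc * A * dT -- convective heat transfer rate.
# All numbers below are illustrative placeholders, not flight values.
hc = 12.0     # heat transfer coefficient, W/(m^2*K) (assumed)
area = 0.01   # exposed aluminum block area, m^2 (assumed)
dT = 15.0     # temperature difference between air and block, K (assumed)

q = hc * area * dT   # heat transfer rate in watts
```

With these numbers the block would exchange 1.8 W; in practice hc would have to be estimated for the thin air at altitude.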

Table 1: Dry Ice Experiment Data

Average Analog Reading | Thermistor Resistance (Ohms) | Temperature (Degrees Celsius) | Time (Seconds)
560.2 | 12104.58 | 20.76 |  1.00
560.8 | 12133.28 | 20.71 |  3.04
562.0 | 12190.89 | 20.61 |  5.08
561.6 | 12171.65 | 20.64 |  7.12
562.4 | 12121.16 | 20.57 |  9.16
562.6 | 12219.81 | 20.56 | 11.20
563.4 | 12258.49 | 20.49 | 13.24
564.0 | 12287.58 | 20.43 | 15.28
565.0 | 12336.24 | 20.35 | 17.32
564.6 | 12316.75 | 20.38 | 19.36
565.0 | 12336.24 | 20.35 | 21.40
565.8 | 12375.33 | 20.28 | 23.44
566.0 | 12385.12 | 20.26 | 25.48
566.8 | 12424.37 | 20.19 | 27.52
567.0 | 12434.21 | 20.18 | 29.56
567.2 | 12444.05 | 20.16 | 31.60
568.6 | 12513.20 | 20.04 | 33.64
568.2 | 12493.41 | 20.07 | 35.68
569.0 | 12533.04 | 20.00 | 37.72


Figure 8: Graph displaying the decrease in temperature with time, with a few anomalies based on the position of the thermistor relative to the block of dry ice.

After the dry ice experiment proved that the thermistors could still perform at colder temperatures, the next step was to glue them into the payload and prepare them for flight. As the payload was being prepped for launch, the team realized there might be a problem with the wiring of the battery pack powering the Arduino board, or with the wiring of the green LED used to indicate whether the different applications, including the thermistors, were connected and working. Unfortunately, the balloon was already inflated and ready to be released, so it was understood that no data might be recorded. Upon retrieval it was in fact determined that one of the wires connected to the battery pack had come apart, and no temperature data were recorded from the flight.

The design for securing the thermistors proved flawless in the dry ice test but was inconclusive during the flight. One improvement for this part of the experiment would be to find a way to securely fasten all wires inside the payload, as well as to create easier access for fixing any wiring problem discovered during pre-launch setup. Although the wiring error prevented the necessary data from being collected, the experimentation suggests that the thermistors would have fulfilled their original purpose.

Cameras and Camera Mount

Before beginning the camera research, the team had to decide what kinds of pictures should be taken. Eventually it was decided that in addition to video, normal and infrared photos would be taken for comparison. The GoPro HD Hero was determined to be the best camera for video and the Canon PowerShot A480 the best for still pictures. These cameras were chosen because the previous payload team had used them and they were easy to obtain; the GoPro was used for video because of its better video quality.

Because the cameras would be in the air, the team needed some way to take pictures without pressing the shutter button. The answer was a time lapse function, in which the camera takes a picture every time a set number of seconds or minutes has passed. Since the A480 does not come with a time lapse function, the Canon Hack Development Kit (CHDK) was used. CHDK is a program that can be downloaded onto a memory card and installed into a Canon camera; it adds capabilities such as letting the user change shutter speeds, download scripts, and change how far the camera zooms in on the target. Using the CHDK wiki (CHDK.wikia.com, 2012), the team found the CHDK build for the Canon PowerShot A480 and downloaded it onto a memory card. The site British Ideas (British Ideas, 2012) linked to a time lapse script for the A480 that could be downloaded onto the same memory card that held CHDK, along with instructions on how to activate the time lapse function.

The team practiced taking apart and reassembling the A480 left over from the previous balloon team's work. Once the inner workings of the camera were sufficiently understood, the camera's normal filter was replaced with an infrared one made from a film negative. Pictures of common items like pencils were taken with both the normal and infrared filters for comparison. The film negative produced good infrared pictures, but the team wanted better quality, so plastic and glass infrared filters were tried in hopes of comparing the two and finding the better one. The plastic filter was ordered from Edmund Optics and the glass filter from Thorlabs. Since both filters were larger than the camera, a piece the size of the original filter had to be cut from each.

Attempts at replacing the original filter with an infrared one were made not only on the original Canon A480 but also on used A480s purchased online. Though the modifications initially worked, after a while an unknown problem caused all of the modified cameras to stop working, whether an infrared or the original filter was installed. Sometimes the lens would jam and make a grinding noise; other times the screen would fill with static and the pictures would turn out dark. Because the attempts at making infrared cameras were not succeeding and the payload was in danger of becoming overweight, it was decided that the payload would carry only two cameras: the GoPro and a non-infrared A480 for time lapse photos.

Tests were performed to see how the cameras would do in flight. Both the GoPro and an A480 camera with a CHDK time lapse program installed were given fully charged batteries and turned on to see how long it would take before their batteries were depleted or their memory cards filled up. (The A480 was set to take a photo every thirty seconds.) Both cameras lasted approximately three hours, the expected duration of the payload’s flight. Because there was some doubt as to whether the GoPro would fare well under cold temperatures, it was placed in a freezer where it recorded for approximately three hours. The cold did not affect its operation.

Another large part of the project was the camera mounts, which held the cameras to the payload. Integral to the camera mount were servos. The servos were meant to hold the cameras stationary even when the payload was spinning and moving. They were also meant to move the cameras to record different areas of the ground and sky.

The GS-1 servos were chosen because they have a built-in gyro and hold a specific orientation in space via an applied voltage (Dunehaven Systems, 2012). When designing the camera mounts, one element common to all the designs was how they would attach to the balloon: every candidate design connected to two vertical carbon fiber rods attached to the connection junction.

When designing the mount, the team had to figure out how the cameras would attach to the servos. One servo had to attach to a "mount head" that would hold the camera, and the mount head needed holes through it so the cameras could be attached with screws. The A480 cameras have a tripod socket on the bottom, so a screw of the right size and threading was found for it. For the GoPro, a bike mount that attached to the camera was purchased and could also be screwed into the mount head.

Figure 9: A picture of the GoPro bike mount. Picture from http://www.tourcycling.com


After multiple brainstorming sessions, the team came up with the mount head pictured below. The two holes on the arm of the "T" correspond to the holes the screws of the GoPro mount went through; the hole on the bottom is for the nylon screw that held the Canon camera in place, as seen in Figure 10.

Figure 10: A picture of the mount head. Picture by Patrick Comiskey.

Figure 11 below is the final design for the camera mount. The swing arm allows the mount head to be turned on its side, and lets the cameras be pointed at downwards angles.

Figure 11: The final camera mount. Picture by Ben Peterson.


When the payload was launched, fishing line rated for 50 pounds was tied from the cameras to the eye bolt that ran through the center of the payload as a failsafe. The team had originally intended to use braided metal wire, but a thick enough braided wire could not be found.

The day of the launch, the team could see that the servos worked and successfully kept the cameras in one position before periodically switching them to a new position. Unfortunately, the wheel of the top servo in the camera mount snapped sometime during flight. The cameras and the bottom portion of the mount were not recovered.

There are several improvements the team could have made to the camera mount system. First, contact information could have been placed on the cameras so that if they fell off the payload, a finder could return them. The mount could also have been redesigned so that so much weight did not rest on one servo, or the cameras could have been attached to the payload itself. In addition, the failsafe could have been made of a stronger material, such as braided steel wire, instead of fishing line. Finally, a test simulating the payload falling end over end would have shown how that extra stress affects the payload and camera mount.

Pressurized CO2 Jets

The goal of stabilizing the payload inspired the idea of using pressurized jets. To stabilize the payload from its exterior, the jets must be mounted as a couple: in pairs on opposite sides of the sphere, pointing in opposite directions, so that they provide a torque without a net linear force on the center of mass. One pair of jets provides resistive torque in one direction in one plane; stabilizing both clockwise and counterclockwise rotation in a single plane takes two pairs, and stabilizing both directions in all three planes would take a minimum of six pairs. For simplicity and to save weight, the objective was limited to stabilization in the horizontal plane only; in other words, the jets' purpose was to damp any spinning about the vertical axis.
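The couple arrangement can be quantified directly: two equal forces on opposite sides of the axis, pointing opposite ways, cancel as a net push but add as torque. A sketch with placeholder numbers (neither value is from the report):

```python
# A jet couple: equal-magnitude forces on opposite sides of the sphere,
# pointing in opposite directions. Net force cancels; torques add.
# Values are placeholders, not measured payload numbers.
F = 0.5    # thrust per jet, N (assumed)
r = 0.15   # jet distance from the spin axis, m (assumed)

net_force = F - F      # opposite directions -> no net push on the CG
torque = 2 * F * r     # both jets torque the same way about the axis
```

This is why the jets must be fired as a pair: a single jet would both spin and translate the payload.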

Theoretical Thermodynamic Analysis

To begin analyzing the force achievable from the jets, the mass flow rate out of the jets must first be calculated, using the Bernoulli equation together with the expression for mass flow through an orifice to obtain the equation shown below. This requires the density and pressure of the CO2, the dimensions of the nozzle, and the outside pressure.

mdot = A2 · √[ 2ρ(P1 − P2) / (1 − (A2/A1)²) ]
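The Bernoulli/orifice result is ṁ = A2·√[2ρ(P1 − P2) / (1 − (A2/A1)²)]. A direct translation follows; as in the report's analysis, compressibility is neglected, and all numbers are placeholders rather than actual canister or nozzle values.

```python
import math

def orifice_mass_flow(rho, p1, p2, a1, a2):
    """Bernoulli + orifice: mdot = A2*sqrt(2*rho*(P1-P2)/(1-(A2/A1)**2)).

    rho: gas density (kg/m^3); p1, p2: inner/outer pressure (Pa);
    a1, a2: canister and nozzle throat cross-sections (m^2).
    """
    return a2 * math.sqrt(2.0 * rho * (p1 - p2) / (1.0 - (a2 / a1) ** 2))

# Placeholder numbers (not from the report): a 1 mm nozzle on a 20 mm
# canister, high-pressure CO2 inside, near-vacuum outside.
a1 = math.pi * 0.010 ** 2      # canister cross-section, m^2
a2 = math.pi * 0.0005 ** 2     # nozzle throat, m^2
mdot = orifice_mass_flow(rho=100.0, p1=5.7e6, p2=1.0e3, a1=a1, a2=a2)
```

Because A2 ≪ A1 here, the (A2/A1)² correction is nearly negligible and the nozzle area dominates the result.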

Now with the mass flow rate, a method from NASA is used to calculate the force (Benson, 2008). Using that force and the distance from the center of mass, the moment created by both jets is found. This is then inserted into a modified expression for the rotational version of Newton’s 2nd Law of Motion.

veq = v2 + (P1 − P2)·A2 / mdot

F = mdot · veq

F·d = I · (dω/dt)

With the above methodology, the time the jets must stay open to counteract a specific rotational velocity can be found, as long as the inner and outer pressures, the density of the CO2, and the mass moment of inertia are known. This method, however, will successfully operate the CO2 jets only once.
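Combining the pieces: thrust F = ṁ·veq, the jet pair supplies a moment M = 2F·d about the spin axis, and the rotational form of Newton's second law, M = I·dω/dt, integrates (at constant thrust) to a required open time t = I·ω / M. A sketch with placeholder numbers:

```python
# Time the valves must stay open to cancel a given spin rate.
# Torque from the jet pair: M = 2*F*d; rotational Newton: M = I*domega/dt.
# Assuming constant thrust, t = I*omega / M. All values below are assumed.

def open_time(inertia, omega, thrust, d):
    """Seconds of jet firing needed to null angular velocity omega."""
    moment = 2.0 * thrust * d       # couple from the pair of jets
    return inertia * omega / moment

t = open_time(inertia=0.05,   # payload moment of inertia, kg*m^2 (assumed)
              omega=1.0,      # spin to cancel, rad/s (assumed)
              thrust=0.5,     # per-jet thrust, N (assumed)
              d=0.15)         # jet lever arm, m (assumed)
```

With these numbers the jets would fire for a third of a second, which illustrates why the force calculation must be accurate: an error there translates directly into over- or under-correcting the spin.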

This method cannot be used to continually operate the CO2 jets because the pressure inside of the canister will change depending on a few factors. To solve this issue, we must solve for the final state of the CO2 after a certain amount is released. In efforts to simplify the problem, the canister is assumed to be adiabatic (no release or absorption of heat to or from the environment). Using an unsteady flow analysis for an adiabatic system with no input or output work, the internal energy for state two can be calculated. Also, with the known loss of mass (using mass flow rate and total release time) and a fixed volume, the specific volume of state two can be calculated.

u2 = [ m1·u1 − (mdot·Δt)·h1 ] / m2

v2 = V / m2
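The two bookkeeping relations above say that after releasing a mass ṁ·Δt from an adiabatic, rigid canister, u2 = (m1·u1 − ṁΔt·h1)/m2 and v2 = V/m2. A sketch of the update step; the numeric values are placeholders, not real CO2 property data:

```python
# State-2 properties after an adiabatic release from a rigid canister.
# Energy: m1*u1 - (mdot*dt)*h1 = m2*u2  ->  u2 = (m1*u1 - mdot*dt*h1)/m2
# Volume: v2 = V / m2 (canister volume V is fixed).
# All numeric inputs below are illustrative placeholders.

def state_two(m1, u1, h1, mdot, dt, volume):
    """Return (m2, u2, v2) after releasing mdot*dt of mass."""
    m_out = mdot * dt
    m2 = m1 - m_out
    u2 = (m1 * u1 - m_out * h1) / m2   # escaping gas carries enthalpy h1
    v2 = volume / m2                   # rigid canister: V fixed
    return m2, u2, v2

m2, u2, v2 = state_two(m1=0.020, u1=180e3, h1=210e3,
                       mdot=0.001, dt=0.5, volume=3.0e-5)
```

Since h1 > u1, the specific internal energy drops with each release, which is the mechanism behind the falling canister pressure the text describes.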

Because the CO2 is a two-phase system, three intensive properties are needed to define state two. For the third and final property, an expression was taken from Thermodynamics: An Engineering Approach, 7th Edition, by Çengel and Boles. The modified expression is shown below.

T2 = T1 + (1/cv) · [ (u2 − u1) − (hfg/vfg − P)·(v2 − v1) ]

Now with state two fully defined, the pressure can be found on a continual basis. That being achieved, the jets can also be operated on a continual basis.

The above methodology is completely theoretical and was not tested: by the time the theory was completed, there was not enough time to bring the experiment to fruition before the launch date, so it was terminated. Given more time, the theory should first be simulated in a computer code, and the simulation should then be experimentally verified before use on the payload, because the accuracy of the force calculations directly influences the effectiveness of the experiment as a whole.


Acknowledgements

Dr. William Farrow (Advising)
Dr. Matthew Traum (Thermodynamic insight)
Rich Hajny (Wiring assistance)
Jim Yauch (Camera Mount Prototype assistance)
Arthur Weborg (Software Engineer at Milwaukee School of Engineering)
Jason Brunner (Director of Engineering, St. Croix Rods)
Tom Pease (North Sails)

References

Adafruit Learning System. (2012). Thermistor. Retrieved 2012, from Adafruit Learning System: http://learn.adafruit.com/thermistor

Applied Physics Systems. (2012, March 19). MODEL 113D. Retrieved June 2012, from appliedphysics.com: http://www.appliedphysics.com/sites/default/files/documents/Model_113D.pdf

Bayot, W. (2005, January 17). Practical Guidelines for building a Magnetometer by Hobbyists. Retrieved June 2012, from http://perso.infonie.be/j.g.delannoy/BAT/IntroductiontoMagnetometerTechnology.pdf

Benson, T. (2008, July 11). nasa.gov. Retrieved August 2012, from Specific Impulse: http://www.grc.nasa.gov/WWW/K-12/airplane/specimp.html

British Geological Survey. (2012). The Earth's Magnetic Field: An Overview. Retrieved June 2012, from British Geological Survey: http://www.geomag.bgs.ac.uk/education/earthmag.html

British Ideas. (2012). CHDK and Canon A480 Quick Start Guide. Retrieved August 2012, from British Ideas: http://www.britishideas.com/2010/06/03/chdk-and-canon-a480-quick-start-guide/

CHDK.wikia.com. (2012). What is CHDK? Retrieved June 2012, from CHDK.wikia.com: http://chdk.wikia.com/wiki/CHDK

Convective Heat Transfer. (2012). Retrieved 2012, from The Engineering Toolbox: http://www.engineeringtoolbox.com/convective-heat-transfer-d_430.html

Dunehaven Systems. (2012). GS-1 Gyro Servo. Retrieved June 2012, from Dunehaven Systems: http://www.dunehaven.com/gs1.htm

FAA Regulations for Kites/Balloons. (1999, November 30). Retrieved August 2012, from chem.hawaii.edu: http://www.chem.hawaii.edu/uham/part101.html

Love Electronics. (2012). Tilt Compensating a Compass with an Accelerometer. Retrieved June 2012, from Love Electronics: https://www.loveelectronics.co.uk/Tutorials/13/tilt-compensated-compass-arduino-tutorial


The Engineering Toolbox. (2012). U.S Standard Atmosphere Air Properties in Imperial (BG) Units. Retrieved July 2012, from The Engineering Toolbox: http://www.engineeringtoolbox.com/standard-atmosphere-d_604.html

U.S. Centennial of Flight Commission. (n.d.). Retrieved July 2012, from centennialofflight.gov: http://www.centennialofflight.gov/essay/Theories_of_Flight/atmosphere/TH1G3.htm

Elijah High Altitude Balloon Launch Team 2012-2013

Tyler Capek, University of Wisconsin River Falls; Richard Oliphant, Milwaukee School of Engineering; Devin Turner, Marquette University; Danielle Weiland, Carthage College

Wisconsin Space Grant Consortium

Abstract: The 2012-2013 launch team of four undergraduate students brought itself up to speed on the equipment and software necessary for high altitude balloon flights, prepared and tested it all, and completed two successful flights. GPS flight data (latitude, longitude, time, and altitude) were collected and analyzed. A mobile tracking tool was used more than ever before, thanks to the growing popularity of smart phones, and new methods were used for data analysis and presentation using technologies now available on the internet.

Introduction

The Student Satellite Initiative is an innovative program that provides students with the opportunity to fly their science experiments in a near-space environment. The Elijah high altitude balloon launch team is funded by the Wisconsin Space Grant Consortium (WSGC) to work together to safely launch and return a scientific payload for data analysis. The launch team is tasked with assisting in the launch of the balloons as well as tracking the balloon while it is in the air, and is also responsible for maintaining and updating the tracking payload. Data retrieved from the tracking payload are used to show the path of the balloon. Given the nature of the task and equipment, specialized procedures and coordination within the team are required, as well as equipment testing and careful selection of a suitable launch location. The Elijah launch team has performed two successful launches and recoveries of the high-altitude balloon and payload and has more launches planned in upcoming months.

Equipment and Testing

The Federal Aviation Administration (FAA) allows two payloads to fly with a high altitude balloon: a six-pound payload and a two-pound payload. The six-pound payload is left to the Elijah Payload Team, and the two-pound payload is made up of the launch team's positioning system.

Before launch day the necessary equipment must be gathered and tested. The following is a list of the equipment needed with a brief explanation of each item:

- Helium: a primary tank and a secondary tank
- Balloon: these natural latex balloons range from 800 to 3,000 grams
- Tarp: used as a ground cover to provide a clean surface for inflating the balloon
- Gloves: the latex balloon will rupture earlier if contaminated by skin oils
- O-ring: clamps around the balloon collar for efficient filling
- Air hose: hosing from the helium tank to the balloon collar
- Parachute: attached below the balloon but above the payloads; slows the descent of the system after the balloon bursts
- Tracking system: two GPS modules for tracking

- Science payload: maximum of six pounds; this module is mounted last
- Scale: used to confirm weight regulations are met before launch
- Counterweight: adjustable weight to test balloon lift before launch
- Quick release nozzle: connection between the helium hose and the balloon collar

Several systems mandate preceding testing to ensure accuracy and functionality for launch operations. In the week leading up to the launch date, the GPS with the StratoSat tracking software was tested on the ground in Milwaukee. A three hour test was run and all systems proved operational. In the 24 hours before launch, each module’s electrical pack was charged as well.

Pre-launch Operations

In addition to the equipment check and testing, the launch team is responsible for finding a suitable launch date and location. Near Space Ventures makes available a website that takes certain inputs and generates a flight path prediction for high altitude balloons (Campbell, 2012). The inputs include the launch date and time; the launch site location (longitude and latitude) and elevation; an appropriate weather station; the anticipated balloon ascent and descent rates; and the anticipated burst altitude (Figure 1).

Launch date selection. High altitude balloons are rather fragile and necessitate calm weather. When discussing potential launch dates, the weather forecast is evaluated first. Once several sources confirm a calm forecast, the jet stream is evaluated: balloon flights should not pass through a turbulent or violent jet stream, because the balloon, payload, and tracking equipment are somewhat fragile and need to stay in predictable environments. A quick check of the California Regional Weather Service's website gives a map of the jet stream across North America, with grayed-out zones where the jet stream is moving rapidly and violently; this can be forecast five days in advance.

Weather station selection. It is best to use a weather station in the middle of the predicted flight path. The Near Space Ventures application uses weather and wind data from the National Weather Service's forecast soundings at the specified station to predict the behavior of the balloon; the three-letter airport code is all that is needed to specify a station.

Launch site selection. When selecting launch sites, a large public field away from tall trees, overhead wires and cables, and air traffic (airports) is necessary. It is preferable to have a wi-fi hotspot nearby for running one or two last predictions; this is usually found at a café or restaurant, which also lets the launch team eat, since a fatigued team makes mistakes. Such locations are plentiful, however, so more primary factors determine the launch site first. A candidate launch site is chosen and the necessary information gathered for it (latitude, longitude, elevation, and weather station). This is input to the Near Space Ventures application, which outputs a predicted path on Google Maps with a landing site inside 5- and 10-mile radius circles (Figure 2). The predicted landing site is then analyzed: Google Maps satellite view is used to look for areas of water, cities or towns, and forested or hilly terrain, all of which are avoided. From the resulting predicted path, the launch team sees where the winds tend to take the balloon, and the process becomes iterative: another location is chosen with the first prediction as a reference, the output is analyzed, and the process repeats until a suitable launch and landing combination is obtained. This is done a week prior to the launch date, a few days before, the night before, and the morning of launch; the weather predictions become more accurate as the launch date approaches, and sometimes the launch site needs to be changed.

Figure 1: Data required for flight prediction. Figure 2: Google map result of flight prediction.

Once a specific area has been determined to be a good launch location, Google Earth is used to select a mark, which is sent to the launch team, the payload team, and the team advisor for use as a reference on launch day.

Launch Day Operations

On the day of the launch, a flight path prediction is run to confirm all is well for the chosen launch site, path, and landing zone. The equipment is gathered up from the previous day's test and electrical pack charging and loaded into the team's vehicles.

Once at the launch site with the payload team, the tarp is laid down in an open, level area. From this spot the balloon is prepared: team members wearing gloves carefully unroll it onto the tarp, while others hook up the helium tank to the hose and then to the neck and collar of the balloon for filling. The payload system is also set up. The parachute is unraveled and then folded so that it will open properly on descent; the first GPS module is attached below it, then the secondary one, and lastly the science payload. The GPS modules are turned on and the StratoSat software is activated to verify that a signal is being transmitted. The science payload is powered on as the balloon is filled with helium. The balloon is not yet connected to the payload system, but instead to a counterweight set six pounds heavier than the total weight of the payload system. As the balloon fills, it works to lift the counterweight. Once the balloon can lift the counterweight off of the tarp, the helium is shut off, the balloon collar's connection to the payload system is confirmed, and the quick-release nozzle is released. The balloon is let go, and as it rises, each successive payload module is released (Figure 3). Holding the modules by their connecting cord and releasing them in succession keeps the balloon's lift from yanking each payload violently off of the ground.
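The six-pound counterweight procedure amounts to filling until the balloon's free lift exceeds the payload weight by six pounds. A rough sketch of the helium volume this requires, using standard sea-level densities; the payload weight and balloon mass below are illustrative assumptions, not figures from the flight records:

```python
# Estimate the helium volume needed so the balloon's net buoyant lift
# carries the balloon itself, the payload, and the 6 lb free-lift
# margin set by the counterweight. Payload and balloon masses are
# assumed values for illustration only.

LB_TO_KG = 0.45359237
RHO_AIR = 1.225   # kg/m^3, sea-level standard air
RHO_HE = 0.1786   # kg/m^3, helium at the same conditions

def helium_volume_m3(payload_lb, free_lift_lb, balloon_kg):
    """Volume of helium whose net lift supports balloon, payload,
    and the required free-lift margin."""
    lift_needed_kg = (payload_lb + free_lift_lb) * LB_TO_KG + balloon_kg
    net_lift_per_m3 = RHO_AIR - RHO_HE  # ~1.05 kg of lift per m^3
    return lift_needed_kg / net_lift_per_m3

vol = helium_volume_m3(payload_lb=12.0, free_lift_lb=6.0, balloon_kg=1.5)
print(f"approx. {vol:.1f} m^3 of helium")
```

With these assumed masses the fill comes to roughly nine cubic meters; the actual quantity depends on the real payload weight and balloon size.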

Figure 3: The balloon and payloads immediately after release.

Tracking
Once in flight, the balloon and payload system soar out of sight and can only be tracked by their GPS signals. The tracking pod transmits a GPS coordinate, along with its speed and direction of motion, roughly every 30 seconds (Czech, Fossen, Johnson, & Westphal, 2010). The signal is received by antennae secured to the chase vehicles and plotted on a map. Using StratoSat and MapPoint software, the balloon is tracked in real time. This is in addition to a web browser application that plots the balloon system's path and can be accessed by any device with internet access. The launch and payload teams advertised this tracking application to a wide community audience, and a record number of smart phones tracked the balloon's progress as well (Figure 4).

Figure 4: Screen shot of the mobile web app used for tracking.

The chase vehicles communicate via short range radios and cell phones while driving to the predicted landing zone. As the balloon flies it often drifts from the predicted path, and the chase vehicles must make quick decisions to change their route in order to get to the actual landing site.

Recovery
The tracking software has a variance of plus or minus 50 feet (Garver & Krueger, 2009). This allows the launch team to get to the general area of the payload system's landing, but often this isn't enough to find it. If the chase vehicles aren't able to predict the actual landing site accurately or get there in time to see the system descend, or if the system falls into tall vegetation, then it can be difficult to recover.

First launch. On the first launch the balloon landed in an easily accessible location – only 60 feet off a country highway, in two-foot-tall corn.

Second launch. On the second launch the location was less accessible and the payload sustained major damage. Upon recovering the payload, it was determined that the damage had occurred in flight and that parts of the payload were unrecoverable. In-flight damage to balloon payloads has historically been a challenge and is likely a result of severe turbulence experienced by the balloon as it travels through the jet stream. Payload damage can also occur during landing, but the missing payload items were not found in the area of the landing site, and the parachute, balloon throat, GPS modules, and harnesses were not damaged in landing. The audio tracking component (beeper) that was added to aid in recovery significantly improved our ability to locate the payload after landing and will be used in future flights as well.

Flight analysis
The data received from the tracking payload was loaded into an Excel spreadsheet. The geographic coordinate data was output in a degree-minute format (DMF, 43° 48.7235). In order to work with most programs, it had to be converted into a degree-decimal format (DDF, 43.81205833°). This was accomplished using the following Excel formula:

Figure 5: Excel algorithm for converting degree-minute data to degree-decimal data
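The same conversion can be sketched outside of Excel; this Python function mirrors the arithmetic (decimal degrees = degrees + minutes / 60):

```python
# Convert GPS output from degree-minute format (e.g. 43° 48.7235')
# to decimal degrees, mirroring the spreadsheet formula:
# decimal degrees = whole degrees + decimal minutes / 60.

def dm_to_decimal(degrees, minutes, west_or_south=False):
    """43° 48.7235' -> 43.81205833...; negate for W/S hemispheres."""
    dd = degrees + minutes / 60.0
    return -dd if west_or_south else dd

print(dm_to_decimal(43, 48.7235))
```

Running it on the example coordinate from the text reproduces the 43.81205833° value quoted above.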

Altitude data for each geographic coordinate was also isolated; it was output in meters, so no conversion was needed. The geographic coordinate and altitude data was exported into an extensible markup language (XML) document. XML is a markup language that defines a set of rules for encoding documents in a format readable by both humans and machines. The XML document was needed to convert the data into a format that Google Earth could read. Google Earth is a virtual globe, map, and geographical information program. The XML document was designed to create a path along the geographic coordinates, essentially tracing the entire path of the balloon's flight. The code was copied and pasted into Google Earth, where the flight of the balloon could be visually depicted as shown in Figures 6-8.
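Google Earth reads paths from KML, an XML dialect, so the document described above can be sketched as a minimal KML LineString; the coordinates in the usage example are placeholders, not actual flight fixes:

```python
# Wrap (longitude, latitude, altitude) fixes from a tracking log
# into a minimal KML LineString that Google Earth can display.
# The sample coordinates below are placeholders for illustration.

def flight_path_kml(fixes):
    """fixes: iterable of (lon_deg, lat_deg, alt_m) tuples."""
    coords = "\n".join(f"{lon},{lat},{alt}" for lon, lat, alt in fixes)
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Balloon flight path</name>
    <LineString>
      <altitudeMode>absolute</altitudeMode>
      <coordinates>
{coords}
      </coordinates>
    </LineString>
  </Placemark>
</kml>"""

sample = [(-88.0, 43.812058, 300.0), (-87.9, 43.85, 15000.0)]
print(flight_path_kml(sample))
```

Setting altitudeMode to "absolute" makes Google Earth draw the path at the logged GPS altitudes rather than clamped to the ground.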

Figure 6: Graph of the altitude with respect to the total distance traveled by the balloon.

Figure 7: Three-dimensional portrayal of the balloon's flight path in Google Earth.
Figure 8: Superior view of the balloon's flight path in Google Earth.

Conclusion
The team completed two successful launches and recoveries, reaching altitudes in excess of 100,000 feet and sustaining flight times of over two hours. In addition to learning how to plan future launches smoothly, the team efficiently handled testing, coordination, and transportation of the balloon and payload. However, upon recovery it was discovered that several parts of the payload had separated from the harness during flight. Areas of improvement for future launches include a higher-fidelity jet stream model, starting the chase phase more quickly, launching earlier in the day, better securing of the payload to the harness, and updating the GPS tracking systems for easier recovery.

References
1. Campbell, T. (2012, August 15). On-line Near Space Flight Track Prediction Utility. Retrieved from Near Space Ventures, Inc.
2. Czech, M., Fossen, T. V., Johnson, P., & Westphal, K. (2010). Balloon Tracking Methods. Wisconsin Space Grant Consortium Conference Proceedings, 21, pp. 11-17.
3. Garver, M., & Krueger, J. (2009). StratoSAT Instruction Manual. StratoStar Systems LLC.

Acknowledgements
Dr. William Farrow, Milwaukee School of Engineering
Brittany Luedtke and Sharon Brandt, WSGC Office at UW-Green Bay
Rich Phillips, Milwaukee School of Engineering
WSGC Satellite Initiative High Altitude Payload Members

22nd Annual Conference Part Two

Student Satellite Program Rocket Design Competition

UWL Physics Rocket Team: Final Report

Joseph Krueger, Andrew Prudhom, Richard Allenby, John Nehls

UWL Physics Rocket Team, University of Wisconsin-La Crosse

Executive Summary
The goal of this year's collegiate rocket competition was to design and successfully launch a one-stage, high-powered rocket that, during its ascent, would transmit live video from a downward-looking camera to a ground-based receiver. To be considered successful, the launch had to attain an altitude near 3000 feet, electronically deploy a recovery parachute attached to all parts of the rocket, transmit live video throughout the ascent, and land the rocket safely in a flyable condition.

To achieve these requirements, the UWL Physics Rocket Team used OpenRocket to sketch a design that best fit the specifications of the competition. Since programs such as OpenRocket are capable of doing the brunt of the theoretical work, we decided to hand-build the majority of the essential components of our rocket to increase the sense of personal accomplishment. The design utilizes a dual-deployment recovery system, with the bottom section housing a custom-made motor mount, the middle section housing the electronics for recording flight data, and the top-most section housing the equipment for recording and transmitting live video.

Design Features
Rocket design. The design of the rocket began with meeting the requirement of lifting a payload to 3000 ft (915 m) simply and efficiently. After researching the basic elements of rocket design and construction, a single minimum-diameter, dual-deployment type was selected. Upon receiving the list of motors available and reviewing initial flight simulations, we found that the 38 mm J357 motor was best suited to reaching the target altitude. Based on the size of the video system components, a 98 mm (4 in) diameter Blue Tube airframe was chosen. The rocket design consists of four sections: the nosecone and payload bay, the main recovery bay, the flight electronics bay, and the booster with drogue chute.

The nosecone is an ogive shape and is fastened to the airframe with removable plastic rivets. The video system is housed just below the nosecone on a removable sled constructed from 3/16 inch plywood and 1/2 inch plywood bulkheads. The payload bay uses as little metal as possible to keep interference with the transmission antenna at a minimum. The rear bulkhead of the sled holds an eye-bolt serving as the forward attachment point of the main parachute recovery harness. The payload sled is held inside the airframe between the nosecone and a glued-in Blue Tube coupler.

Continuing downward, the main recovery bay holds the main parachute, a Top Flight Recovery 60 in. Crossfire nylon parachute and a 9 meter 1 in. tubular nylon recovery harness. The aft attachment point for the harness is the electronics bay. The e-bay is built from a blue tube coupler and two 1/2 inch plywood bulkheads connected with 1/4 inch threaded rods. A sled for the altimeter, battery, arming switches and flight data recorder is attached to the rods. The e-bay serves as the central structure of the rocket and all components are tethered to one of the two eye-bolts located on either end of it. The final part of the rocket is the booster section and fin can. The drogue parachute deployed at apogee is stored in the top of the booster section. The motor tube is a 40 cm long, 38 mm diameter paper tube centered with three rings. The fins are attached through the airframe wall to the motor tube with epoxy. The forward centering ring has attachments for the drogue recovery harness and the aft centering ring has threaded inserts for attaching a motor retaining ring.

Figure 1: Diagram of the Rocket

Video System Design. The video system is a commercially available system for first-person-view flying of radio-control model aircraft. The video camera is lightweight and has a 768x480 pixel resolution. The transmitter is an 800 mW unit operating on 1280 MHz through an inverted-vee antenna. This frequency requires an Amateur Radio license; our control operator for the flight was team member Joseph Krueger, KC9VUD. Operating on 1280 MHz reduces interference to our system compared to more commonly used bands, and the 800 mW TX power output helped ensure video quality throughout the flight. The transmitter and camera are powered by a 460 mAh lithium polymer battery. On the ground, the receiver is connected to a circularly polarized “bi-quad” antenna designed and built by Mike Cook, AF9Y. The circular polarization of the receiving antenna allows wide-angle reception of the video signal, letting the antenna remain fixed while receiving the rocket's signal during ascent.
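A quick free-space path-loss check shows why 800 mW is comfortable at these ranges; this sketch assumes 0 dBi antenna gains and clear line of sight, which is a simplification of the real link:

```python
# Rough link-budget check for an 800 mW, 1280 MHz video downlink.
# Free-space path loss: FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55,
# with d in metres and f in Hz. Antenna gains are assumed 0 dBi here,
# a simplification; the real bi-quad receive antenna has gain.
import math

def fspl_db(distance_m, freq_hz):
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

tx_dbm = 10 * math.log10(800)            # 800 mW is about 29 dBm
loss = fspl_db(5600 * 0.3048, 1280e6)    # at the rocket's ~5600 ft apogee slant range
print(f"FSPL: {loss:.1f} dB, received ~{tx_dbm - loss:.1f} dBm")
```

Even with zero-gain antennas the received level stays far above the sensitivity of typical analog video receivers, leaving margin for polarization mismatch and off-axis angles.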

Construction of the Rocket
Construction of the airframe began by squaring the ends and cutting all Blue Tube body tubes to length using a compound miter saw. The spiral ridges in the body tubes were filled using epoxy clay. After sanding, primer was applied using rattle cans. The final red and yellow paint will be applied last in order to protect the finish.

Bulkheads and centering rings were cut from 1/2 inch plywood using a plunge base router and circle guide. The holes for the motor tube were drilled using a 1 5/8 inch Forstner bit in a drill press. This proved to be a perfect fit for the 38 mm tube. The fins and electronics sleds were cut from 3/16 inch plywood using a jigsaw and assembled with epoxy and wood glue.

One of the more technically challenging aspects of the build was the through-the-wall fin slots. The slots were cut using the plunge base router and a purpose-built jig to hold the body tube. To assemble the fins and motor mount, the fore and mid centering rings were epoxied to the motor tube and then epoxied into the body tube. The fins were aligned through the slots and held by hardboard cutouts while epoxy was dripped down the internal joints. After the epoxy cured, the aft ring was installed onto the motor mount, completing the assembly of the booster section. The external fin joints were filled with epoxy clay to blend the airframe and the fins together for painting. The plywood bulkheads, rings, and fins that we built are of similar materials to those from rocketry suppliers.

Analysis of Performance
Before flight, simulations were made using OpenRocket to determine the anticipated flight performance of the rocket. The center of gravity with no motor loaded was predicted to be located at the middle of the rocket, 95 cm from the nose; the CG with the motor installed is 106 cm from the nose. The calculated center of pressure is 154 cm from the nose. Flight simulations using these values and measured data of the rocket predict an apogee of 835 m (2740 ft), a maximum acceleration of 114 m/s², and a maximum velocity of 142 m/s (Mach 0.42). The time to apogee was predicted to be 13 seconds, with landing following at 100 seconds. The rear drogue parachute will be deployed at apogee by the altimeter and the main parachute will be deployed at 250 m.

These calculations did not account for factors such as wind speed and high humidity, or the fact that the final mass of our rocket was higher than the designed mass. Most of the unaccounted-for mass came from a change in the material of our main recovery harness, which added just under a kilogram to our final design. Due to these factors, our apogee was recorded at just over 2060 feet, as opposed to the 2740 feet we predicted before the launch. Our time to apogee was 13.5 seconds, with landing following at 113 seconds. Both our main and rear drogue parachutes were deployed at apogee. The rocket was recovered in flyable condition; the only damage sustained was to our drogue parachute, which had been damaged by the backup charge. Our video system performed admirably, with only minor cutoffs of the video feed during ascent. These cutoffs occurred because the low cloud ceiling on launch day obscured our view of the rocket: when the wind carried it outside the predicted flight path, we were unaware of the shift in location and so could not reposition the receiving antenna to maximize the signal.

Conclusion
Due to our relatively simple rocket design, coupled with a reliable video system, we were able to outcompete the other Wisconsin teams. This was our first foray into rocketry, and having completed the competition we feel we have learned many valuable lessons about the subject. This year we crafted our rocket out of Blue Tube body tubing without knowing that this material has a history of swelling in humid environments and thus hindering flight performance on days such as our own launch day. We learned that it is better to design your rocket to attain a final apogee above the desired limit, because there will always be factors you cannot account for. This, combined with newly attained knowledge of more efficient design features, will hopefully culminate in a more efficient rocket design in future competitions.

Team Woosh Generator 2012 WSGC Collegiate Rocket Competition

Milwaukee School of Engineering Devin Dolby James Ihrcke Brandon Jackson Eric Johnson Kirsti Pajunen

Abstract
The objective of the 2012 Wisconsin Space Grant Consortium Collegiate Rocket competition was to design, build, and launch a single-stage, high-powered rocket capable of transmitting live video from a downward-looking camera during its ascent. The rocket must reach a target altitude of 3000 ft and electronically deploy a parachute(s) for a successful recovery. Upon recovery, the rocket must be determined to be in a flyable condition for the launch to be considered successful. Teams receive a launch score based on the combination of reaching the desired altitude and the quality of the video received.1

After running preliminary simulations, Team Woosh Generator selected a Cesaroni J357 motor with a 3.0-in airframe diameter. A BoosterVision video recorder, transmitter, and receiver system was selected to complete the live video feed requirement. The camera will be located on the exterior of the rocket and protected from drag forces during flight by a shroud. Upon reaching apogee, a drogue chute will deploy, under which the rocket will descend until it reaches an altitude of 500 feet. A second chute will then deploy so that a slow descent speed is obtained for landing. Redundant flight altimeters will be utilized to ensure proper chute deployment.

Included in this report are design details considered, anticipated performance, photos of the construction process, and flight results.

1 WSGC 2011 Collegiate Rocket Competition Handbook

1.0 Rocket Design and Construction
The following subsections will detail the motor selection process, airframe design, fin design, pressure relief considerations, electronics bays, and recovery method.

1.1 Motor Selection
The competition parameters limited the motor selection to eight Cesaroni motors ranging from I to K class. Teams were then challenged to select a motor that would sufficiently meet the requirements of the competition based on their design.

It was decided to further develop a MATLAB program, written by a team member for a previous year's competition, to analyze the performance of each motor across a range of potential weights given an anticipated rocket geometry and drag coefficient. The algorithms used in this program have proved comparable to both commercially available codes and flight results from previous launches.

The code took into account the following factors when approximating rocket performance:
- Aerodynamic drag
- Mass change of the rocket due to propellant flux
- Gravitational forces

The following assumptions were made regarding the geometry of the rocket and launch conditions:
- Body tube diameter of 3.0 in
- CD = 0.55
- Standard temperature and pressure
- No wind

It should be noted that factors such as wind, stability, rotation, and deviation from vertical flight could not be accounted for in this simulation. As a result, the estimates obtained are likely overestimates of probable flight performance. To account for this uncertainty, a buffer of 300 feet was added to the desired altitude for motor selection. This buffer was chosen based on experience gained in previous years' competitions. Results from this simulation are presented in Figure 1:

[Figure 1 plot: peak altitude (ft) versus rocket mass (lbm) for the Cesaroni I284, I470, I540, and J357 motors, with the target altitude and the target altitude with buffer marked.]

Figure 1: Analysis of rocket motor performance in predicted rocket mass range.

From this analysis it was decided to proceed in the design with the Cesaroni J357 motor. With a projected mass between 6 and 8 lbm, this motor is most capable of achieving the desired altitude. In the event that the constructed mass is less than this range, small ballast weights would allow the rocket to achieve the predicted mass.

1.2 Airframe Design
Standard airframe diameters include, but are not limited to, 3.0 in, 4.0 in, and 5.5 in. Using the same simulation code discussed in Section 1.1, the performance of each airframe diameter was compared. These results are presented in Figure 2:

[Figure 2 plot: peak altitude (ft) versus rocket mass (lbm) for 3.0-in, 4.0-in, and 5.5-in airframe diameters, with the target altitude and the target altitude with buffer marked.]

Figure 2: Analysis of airframe diameter on peak altitude in predicted rocket mass range (Cesaroni J357).

From this analysis, it became apparent that a 3.0-in diameter body tube would be the only viable choice to meet the desired altitude.

1.3 Center of Gravity (CG) and Center of Pressure (CP)
The relationship between the center of pressure and the center of gravity is one of the most important relationships in high-powered rocketry. The center of pressure is the point at which the aerodynamic forces on the rocket are centered. The center of gravity is the location at which the whole weight of the rocket can be considered to act as a single force. The distance between the two, relative to the rocket diameter, can be used to predict the stability of the rocket during flight. Generally, the center of gravity must be at least one body-tube diameter in front of the center of pressure.

The center of pressure was determined analytically for this design through the use of Barrowman's theory. The results were then compared against those obtained through OpenRocket and agreed acceptably.

The following assumptions were made during the derivation of Barrowman's theory for predicting the center of pressure:2
1) The flow over the rocket is potential flow.
2) The point of the nose is sharp.
3) Fins are thin flat plates.
4) The angle of attack is near zero.
5) The flow is steady and subsonic.
6) The rocket is a rigid body.
7) The rocket is axially symmetric.

The rocket design presented in this paper did violate some of these assumptions, particularly assumptions 2, 6, and 7. However, the theory was still applied with the understanding that minor uncertainties will be present as a result.

The two centers of gravity, before and after burnout, were estimated experimentally. Sand was used to simulate the mass of the propellant while the rocket was balanced on a wooden dowel to determine the location where it remained static.

The overall locations of the center of pressure and center of gravity for the rocket are presented in Table I:

Table I: Locations of CP and CG (in inches)*

                 Center of Gravity (CG)   Center of Pressure (CP)   Stability (caliber)
Ignition         49                       54.5                      1.77
Motor Burnout    47.25                    54.5                      2.33

* Referenced from the nose cone tip.

2 Barrowman, James. "The Theoretical Predictions of the Center of Pressure." (1966). Apogee Rockets. Web. 13 Apr. 2011.

From this analysis it is projected that the rocket will be stable for the duration of the ascent portion of the flight. The rocket is currently overstable; however, prior to launch, when it is in its flight configuration, this process will be performed again and ballast will be shifted to achieve the desired ratio.
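The stability margins in Table I follow directly from the CP/CG separation divided by the airframe diameter. A quick check, taking the airframe outside diameter as roughly 3.1 in (an assumed value for the nominal 3.0-in tube):

```python
# Static stability margin in calibers: (CP - CG) / body diameter.
# Distances are in inches from the nose tip; 3.1 in is an assumed
# outside diameter for the nominal 3.0-in airframe.

def stability_calibers(cg_in, cp_in, diameter_in=3.1):
    return (cp_in - cg_in) / diameter_in

print(round(stability_calibers(49.0, 54.5), 2))    # at ignition
print(round(stability_calibers(47.25, 54.5), 2))   # at motor burnout
```

With that assumed diameter the computed margins land within a few hundredths of the 1.77 and 2.33 calibers reported in Table I.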

1.6 Electronics Bay
The electronics bays, which serve as couplers for the bottom, middle, and top sections of the rocket, house the parachute altimeters in addition to the WSGC RDAS altimeter, camera, transmitter, and batteries. To insulate the electronics from ejection-charge gases, a standard design was employed that utilizes bulkheads above and below a tube coupler. Threaded steel rods pass through the bay, and the electronics board is attached to them. Finally, holes were drilled in the tube coupler to allow wires to be passed through, so the altimeters can be armed while the rocket is on the launch rail.

An image showing the front side of one of the electronics bays is shown in Figure 3. It should be noted that the components shown are not secured in their final flight configuration.

Figure 3: Lower electronics bay board.

1.7 Recovery
A dual-deployment recovery method was selected for this design. An 18-in drogue chute will deploy at apogee and allow the rocket to descend to 500 ft, where the SkyAngle Classic 44-in main chute will be deployed. For this rocket, a descent rate of 20 ft/s is estimated once the main chute is deployed. Redundant RRC-2 mini altimeters are incorporated into the lower electronics bay. The locations of the main and drogue chutes are shown below in Figure 4:


Figure 4: Full assembly with locations of parachutes
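The quoted 20 ft/s descent under the 44-in main chute is consistent with the standard equilibrium-descent relation v = sqrt(2W / (ρ C_d A)); the rocket mass and parachute drag coefficient below are assumed values for illustration:

```python
# Equilibrium descent rate under a parachute: drag balances weight,
# so v = sqrt(2*W / (rho * Cd * A)). The 8 lb mass and Cd = 1.6 are
# assumptions, not published figures for this rocket or chute.
import math

def descent_rate_fts(mass_lb, chute_diam_in, cd=1.6, rho=1.225):
    m_kg = mass_lb * 0.45359237
    area_m2 = math.pi * (chute_diam_in * 0.0254) ** 2 / 4.0
    v_ms = math.sqrt(2 * m_kg * 9.81 / (rho * cd * area_m2))
    return v_ms / 0.3048  # convert m/s to ft/s

print(f"{descent_rate_fts(8.0, 44):.1f} ft/s")
```

With those assumptions the formula reproduces a descent rate of about 20 ft/s, matching the estimate in the text.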

Nylon shear pins will be utilized to obtain controlled separation during descent and to keep the nose cone attached.

2.0 Video Methodology
The commercially available BoosterVision GearCam was selected to achieve the live video requirement of the competition parameters. A 1.5-in nosecone was split symmetrically and installed vertically along the side of the upper airframe section to function as a shroud for the camera, shown in Figure 5. The camera is mounted within one of these nosecone sections and oriented to achieve a downward-facing image.

Figure 5: Camera shroud schematic

The following subsections will discuss the camera, transmitter, and receiver in further detail.

2.1 Camera, Transmitter and Receiver
The camera and transmitter are combined into a single functioning unit, shown in Figure 6. The transmitter specifications state that video up to 5600 ft can be achieved with the standard antenna/receiver. These numbers could not be verified experimentally on the ground, since signal absorption from the ground severely decreases them.

The corresponding receiver is shown in Figure 7 and allows RCA video output into the TV tuner for recording. The black antenna shown in Figure 7 will be removed for the competition flight and replaced with a directional antenna allowing for improved reception. One team member will then be responsible for tracking the rocket with the antenna during its ascent.

Figure 6: BoosterVision Camera and Transmitter Figure 7: BoosterVision Receiver

3.0 Anticipated Performance Two simulation programs were utilized to design and estimate the performance of the rocket. The programs used to simulate performance were OpenRocket and a MATLAB simulation program which was written by a team member. The results of both simulations were compared to determine the overall predicted performance of the rocket. These simulation programs will be detailed further in subsequent sections.

3.1 MATLAB Simulation

3.1.2 Limitations and Assumptions
The primary assumptions were that the rocket would be launched vertically and follow a vertical flight path. Additionally, standard temperature and pressure were assumed to determine air density, which was also assumed to be constant throughout the range of the flight.

3.1.3 Numerical Methods
The MATLAB simulation was designed to be a basic simulation program used in addition to OpenRocket. This program was originally developed and used in the 2010 and 2011 WSGC competitions and compared closely with flight data. The program was designed to perform the following functions:
- Load thrust data obtained from ThrustCurve.org
- Interpolate the thrust curve for more discrete steps
- Calculate the change in mass resulting from burnt propellant
- Calculate velocity from the combined impulse from drag, gravity, and thrust
- Calculate altitude and acceleration from velocity
- Determine maximum altitude, velocity while leaving the launch rail, and landing velocity
- Export all data to Excel for graphical analysis

The velocity of the rocket was determined from the previous momentum plus the impulse. This relationship is shown in Eq. 1:

m_i v_i + F_i Δt = m_(i+1) v_(i+1)    (1)

where F_i is the net force acting on the rocket and Δt is the time step between calculations. The net force acting on the rocket during ascent is expressed in Eq. 2:

F_net = F_grav + F_drag + F_thrust = −m_i g − (1/2) ρ C_d A v_i² + T_i    (2)

where:
- ρ is the density of air
- C_d is the coefficient of drag
- A is the frontal cross-sectional area of the rocket
- T_i is the force from the motor

Substituting Eq. 2 into Eq. 1 and solving for v_(i+1) yields:

v_(i+1) = [m_i v_i + (T_i − m_i g − k v_i²) Δt] / m_(i+1)    (3)

where k = (1/2) ρ C_d A.

Acceleration was calculated using Newton's second law, expressed in Eq. 4:

a_i = F_i / m_i    (4)

The trapezoidal method for approximating the area under a curve was used to calculate the altitude of the rocket during the flight.

From the acceleration, velocity, and position data the maximum altitude, peak acceleration, and velocity while leaving the launch rail were determined. These results will be discussed in the subsequent section.

It should be noted that this simulation was not able to account for variables such as wind speed and direction, launch altitude, the effects of stability on flight, and flights other than perfectly vertical.
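The impulse-momentum update described above can be sketched directly; note this is an illustrative stand-in, not the team's MATLAB code, and the constant thrust, burn time, and masses below are placeholders rather than the J357's actual thrust curve:

```python
# Minimal impulse-momentum flight simulation following the update
# v_(i+1) = (m_i*v_i + (T_i - m_i*g - k*v_i^2)*dt) / m_(i+1),
# with altitude integrated by the trapezoidal rule. The constant
# thrust, burn time, and masses are illustrative placeholders.
import math

G = 9.81  # m/s^2

def simulate(m_dry=3.2, m_prop=0.4, thrust=350.0, burn=2.5,
             cd=0.55, diam=0.0762, rho=1.225, dt=0.01):
    k = 0.5 * rho * cd * math.pi * diam ** 2 / 4.0  # drag constant
    t, v, h, m = 0.0, 0.0, 0.0, m_dry + m_prop
    apogee = 0.0
    while v >= 0.0 or t <= burn:
        T = thrust if t < burn else 0.0
        # linear propellant burn until burnout, then constant dry mass
        m_next = max(m_dry, m - (m_prop / burn) * dt if t < burn else m)
        drag = k * v * abs(v)                      # opposes motion
        v_next = (m * v + (T - m * G - drag) * dt) / m_next
        h += 0.5 * (v + v_next) * dt               # trapezoidal altitude
        v, m, t = v_next, m_next, t + dt
        apogee = max(apogee, h)
    return apogee

print(f"predicted apogee: {simulate():.0f} m")
```

Swapping the constant thrust for an interpolated thrust curve (as the MATLAB code does with ThrustCurve.org data) is the main refinement needed to make this quantitative.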

3.2 OpenRocket
OpenRocket is a free, open-source program similar to RockSim. It is capable of calculating acceleration, velocity, and position data while accounting for variables including elevation and wind speed, and for the effects of individual components on performance, such as surface roughness and leading-edge fin radii on drag and stability.

Also included in the program is the ability to construct full to-scale schematics of the rocket design. From this schematic the CP and CG can also be approximated.

3.3 Flight Predictions
The peak altitude, acceleration, and velocity for both simulation methods are shown in Table II:

Table II: Simulation performance comparison

                        OpenRocket   MATLAB
Altitude (ft)           3302         3149
Velocity (ft/s)         538          557
Acceleration (ft/s²)    434          461

4.0 Results
Simulations were run to design and estimate the flight performance of the rocket. The two programs used were RASAero and the MATLAB code written by the team. Actual flight data was recorded using an R-DAS flight data recorder provided by WSGC. The flight of the rocket matched well with the estimates of both simulations. A comparison between predicted and measured results is shown in Table III.

Table III: Flight Performance Results

              Max Altitude (ft)   Max Acceleration (ft/s²)
MATLAB        3149                434
OpenRocket    3302                461
Actual        3314                425

Percent error from actual:
MATLAB        5%                  -2%
OpenRocket    0%                  -8%

Predicted and actual altitude and acceleration data are shown in Figure 8 and Figure 9, respectively.

[Figure 8 plot: altitude (ft) versus time (s) for the MATLAB, RASAero, and actual flight data.]

Figure 8: Comparison between Predicted and Actual Altitude

[Figure 9 plot: acceleration (ft/s²) versus time (s) for the MATLAB, RASAero, and actual flight data.]

Figure 9: Comparison between Predicted and Actual Acceleration

5.0 Conclusion
The rocket was successfully recovered in a flyable condition in compliance with the competition rules. The performance-evaluation software utilized for this design predicted the altitude and acceleration of the rocket within an exceptional margin, given the uncertainties present in the launch and design. Lessons learned through this design will be incorporated into future competitions by returning team members. Thank you to the Wisconsin Space Grant Consortium for providing funding for this project and for publishing this paper in the 2012 Wisconsin Space Conference Proceedings.

Team Jarts Rocket Design

Alex Folz, Brett Foster, Eric Logisz, Cameron Schulz

Milwaukee School of Engineering

Abstract
The objective of the 2012 Wisconsin Space Grant Consortium collegiate rocket competition was to construct a rocket capable of transmitting a live video feed through the thrust phase while flying to an exact altitude of 3,000 feet.

The airframe of the rocket was designed utilizing a simple, low-weight design to provide a properly balanced rocket. The main chute was designed for two separate deployment scenarios: the first involves a remote-controlled manual deployment, and the other is activated by an automatic elevation trigger, which serves as a failsafe to ensure the rocket makes a safe landing. A numerical simulation, developed in MATLAB, was used to predict the performance of the rocket. The maximum predicted acceleration is 506 ft/s², and the maximum predicted altitude is 3480.6 feet. Further analysis of the rocket design and discrepancies is presented below.

Design Features of the Rocket
General Design. The airframe for the rocket was constructed of fiberglass-wrapped phenolic tubing with a diameter of 3.097 inches. We chose fiberglass-wrapped tubing because it provides high strength while weighing less than other options. Another advantage of fiberglass tubing is its resilience in form: its capability to deform under extreme force while still maintaining the ability to return to its original shape. This has proven beneficial with regard to the camera mounted on the exterior. Fiberglass-wrapped phenolic tubing is also resistant to zippering, a factor considered in the design phase because we will be deploying a drogue parachute while the rocket is still ascending. The risk of zippering was reduced not only by the selection of fiberglass-wrapped phenolic tubing, but also by the use of 30 feet of half-inch nylon shock cord.

This length of cord reduces impulse forces, a large contributor to zippering. Because of this combination of characteristics, we chose the fiberglass-wrapped airframe over less expensive phenolic or quantum tubing. The rocket uses a 3.097-inch diameter in order to properly house the chosen electronics. The airframe uses a total of 49 inches of tubing, which provides enough space for the internals while placing the CG and CP in proper relation to one another. With the nose cone added, the total rocket length is 62 inches. The launch weight of the rocket (including the airframe, recovery system, motor mount, and motor) is 8.09 pounds.

The rocket incorporates three clipped-delta fiberglass fins evenly spaced around the bottom of the airframe. The shape of the fins shifts the center of pressure enough to eliminate the need for a fourth fin, reducing unnecessary drag while maintaining a stable flight with the CP below the CG.

Recovery System. The recovery system combines a main parachute with a selectable diameter of 24, 30, or 48 inches, a drogue parachute with a diameter of 24 inches, 0.5625-inch tubular nylon, two Nomex cloths, and two Nomex shock-cord sleeves. When thrust from the motor ends, the rocket continues to gain altitude until approximately 2,850 feet, at which point the first ejection charge is automatically triggered in order to separate the upper portion of the rocket from the center electronics bay. This also deploys the drogue parachute, bringing the rocket into a rapid, controlled descent. The half-inch tubular nylon is rated to 2,000 lbs; 30 feet will be used in order to minimize the risk of zippering.

The drogue parachute serves as a brake parachute to bring the rocket to a stop at 3,000 feet. On the day of the competition, we will assess conditions in order to program the electronics to deploy the drogue parachute at roughly 2,850 feet, giving the rocket ample time to stop at 3,000 feet. The rocket will then descend under the drogue parachute until it is within 500 feet of the ground; at this point an altimeter will trigger a second ejection charge to deploy the main parachute. To reduce the risk of a parachute failing to deploy, a second set of ejection charges is connected to a separate altimeter, programmed to deploy the drogue parachute at apogee and the main parachute at 400 feet above the ground. This redundancy ensures the rocket can achieve a safe recovery. The d-links and half-inch tubular nylon shock cord are shown in Figure 1.
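The dual-altimeter deployment scheme described above can be summarized as a small decision function. This is an illustration only, not flight code; the actual logic runs on the COTS altimeters, and the thresholds are taken from the text.

```python
def deployment_events(altitude_ft, ascending, primary_ok=True):
    """Illustrative dual-deployment logic for the recovery scheme
    described in the text (thresholds from the text; not flight code)."""
    events = []
    if primary_ok:
        # Primary altimeter: drogue at ~2,850 ft on ascent,
        # main at 500 ft on descent.
        if ascending and altitude_ft >= 2850:
            events.append("drogue")
        if not ascending and altitude_ft <= 500:
            events.append("main")
    else:
        # Backup altimeter: drogue at apogee (not modeled here),
        # main at 400 ft on descent.
        if not ascending and altitude_ft <= 400:
            events.append("main")
    return events
```

For example, at 2,900 ft on ascent the primary fires the drogue charge, and at 450 ft on descent it fires the main; the backup waits until 400 ft.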

Figure 1: Shock cord and d-links used to connect the sections of the rocket

Electronics & Storage. The center section of the rocket houses the electronics that deploy the parachutes; this section was constructed from a 9-inch length of coupling tubing. A 2-inch section of fiberglass phenolic airframe, centered and epoxied around the coupling tubing, is displayed in Figure 2.

Figure 2: Electronics bay with the U-bolt connections and threaded rod mounting rails

The approximately 3.5 inches of coupling tube exposed at each end of the center section fits into the top and bottom sections of the rocket's airframe, connecting all three pieces. The electronics bay is capped on both ends by bulk plates, which are attached to the bay by a pair of threaded rods running the full length of the bay. Each bulk plate also has a U-bolt through it so the d-link on the shock cord can attach to the electronics bay between the upper and lower sections of the rocket.

The primary altimeter used in this rocket is the MARSA54, a programmable parachute deployment system. This system was selected for its four ejection channels; it is also field-programmable and provides a wealth of data from each flight. The combination of these characteristics makes it ideal for the controlled-descent system being implemented. Its ability to eject the drogue parachute while the rocket is still coasting upward, its many sensors, and the accurate data it provides for post-flight assessment make it crucial to the rocket's success. All of this is accomplished by the small device shown in Figure 3.

Figure 3: MARSA54 Parachute Deployment System Source: http://www.rocketryplanet.com/content/view/3541/29/#axzz1KVksSdHZ (4/13/2012)

The second altimeter housed in this electronics bay is a PerfectFlite StratoLogger Altimeter. This altimeter was chosen as a backup altimeter due to its ability to deploy a drogue parachute at apogee and a main parachute ranging from 100 feet to 9,999 feet in 1 foot increments. This altimeter will also record altitude and velocity plots that can be used in the post flight assessments. This altimeter is not as accurate as the primary altimeter but serves the purpose well as a redundant backup system in case there is failure in the primary system.

Both of these electronic components are mounted to a plywood sled using standoffs and screws. The sled has two metal tubes mounted to it that slide onto the threaded rods. This arrangement mounts the electronics securely while providing easy access, allowing them to be wired and programmed on launch day.

Figure 4: PerfectFlite StratoLogger Source: http://www.perfectflite.com/sl100.html (4/13/2012)

These systems are joined by the competition-supplied R-DAS altimeter used to record flight data. It is housed in the nose cone, an Intellicone from Public Missiles, and mounted on the same rail system used in the main electronics bay. The use of an Intellicone saves space in the rocket and allows a shorter overall rocket length, yielding a beneficial relationship between the centers of gravity and pressure.

Video System. The video system is housed in a half nose cone epoxied to the fiberglass-wrapped phenolic tubing. This capsule was placed at the center of gravity of the rocket to reduce the moment arm of the drag force. The capsule projects off the rocket by one inch. Dr. Matthew Anderson helped determine that the boundary layer at the center of gravity is 0.6 inches thick; the outer 0.4 inches of the capsule therefore sees drag, though this is minimal owing to the small exposed surface area. The small amount of extra drag causes a slight instability, but this is accounted for in the placement of the center of pressure and center of gravity. The capsule is displayed in Figure 5, accompanied by the bottom view in Figure 6.

Figure 5: Rocket booster depicting the side nose cone

Figure 6: Rocket showing the bottom of the side nose cone

The video system chosen was the BoosterVision GearCam Mile High combo. This system uses a 1 inch x 1 inch camera mounted with a 9V battery inside the capsule. The signal is received using a 14 dB antenna with a vertical range of 5,000 feet. The original system had a range of 3,000 feet, but to ensure video throughout the flight the upgraded 14 dB antenna was purchased.

Figure 7: Booster Vision GearCam Source: http://www.boostervision.com/cart/scripts/prodView.asp?idproduct=77 (4/13/2012)

Center of Pressure / Center of Gravity. The locations of the center of gravity and the center of pressure were determined by constructing a model in OpenRocket. The construction analysis in OpenRocket allowed most components of the design to be modeled while enabling the designer to adjust dimensions toward ideal values. The only component not modeled in OpenRocket was the half nose cone used to house the video system. The result of the analysis can be seen in Figure 8, which displays the layout of the rocket design and the calculated placements of the center of gravity and the center of pressure. The analysis also shows how these two points relate to each other in producing a stable rocket: the CP must be located more than one airframe diameter below the CG for the rocket to be stable. Our rocket's stability margin is 3.52. This margin is considered over-stable, but because the side nose cone housing the video system adds instability, the rocket was deliberately designed over-stable to compensate. An over-stable rocket does not behave optimally in windy conditions and reaches a lower maximum altitude. This was accounted for when choosing a motor, since our predicted maximum altitude of 3,353 feet leaves extra altitude to spare if conditions are windy.

Figure 8: OpenRocket Construction Analysis

CP_Rocket = 49.2" from top of rocket
CG_Rocket = 38.5" from top of rocket with motor
CG_Rocket = 35.2" from top of rocket after burnout
Stability Ratio = 3.52

Analysis of the Anticipated Performance

Assumptions
1. Weight was assumed to be constant throughout the flight: w_rocket = 6.765 lbs

2. The density of air was assumed to be constant throughout the entire flight: ρ = 0.00238 slug/ft³

3. Gravity was assumed to be constant: g = 32.2 ft/s²

4. The drag force due to air resistance was assumed to be proportional to the square of the velocity. The drag force was calculated using Equation 1:

F_D = (1/2) ρ A C_D v²   (1)

Where:
ρ = density of air
A = cross-sectional area of the rocket
C_D = coefficient of drag
v = instantaneous velocity of the rocket

5. The coefficient of drag was assumed to be constant: C_D = 0.6

Linear interpolation was used to extract thrust data points for the motor in order to account for its variation over time (http://www.thrustcurve.org/simfilesearch.jsp?id=1685).
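As an illustration of this interpolation step, here is a minimal pure-Python sketch. The sample thrust-curve points below are hypothetical placeholders, not the actual motor data from thrustcurve.org.

```python
def thrust(t, pts):
    """Linearly interpolate a tabulated thrust curve at time t.
    pts: list of (time_s, thrust_lbf) pairs, sorted by time.
    Outside the tabulated range, the endpoint values are held."""
    if t <= pts[0][0]:
        return pts[0][1]
    if t >= pts[-1][0]:
        return pts[-1][1]
    for (t0, f0), (t1, f1) in zip(pts, pts[1:]):
        if t0 <= t <= t1:
            return f0 + (f1 - f0) * (t - t0) / (t1 - t0)

# Hypothetical sample points for illustration (the team used the
# published curve for their motor from thrustcurve.org):
CURVE = [(0.0, 0.0), (0.1, 60.0), (0.5, 50.0), (1.0, 40.0), (1.8, 0.0)]
```

For example, `thrust(0.05, CURVE)` returns the midpoint value between the first two tabulated points.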

Predicted Velocity History. The flight has two distinct phases. The first is the thrust phase, during which the rocket accelerates upward from the ground until the thrust ends. The second phase begins when the motor finishes providing thrust; the rocket then decelerates until apogee, flying as a projectile with no thrust.

Phase 1: Motor Accelerating Rocket (0 < time < 1.8 seconds)

The velocity can be predicted using Newton's second law and a numerical algorithm. Newton's second law states that the sum of the forces equals the product of mass and acceleration:

ΣF = m a   (2)

When applied to the rocket, Equation 2 becomes:

m (dv/dt) = T - m g - (1/2) ρ A C_D v²   (3)

Applying a numerical method (Euler's method), this becomes:

m (v_{i+1} - v_i) / Δt = T_i - m g - (1/2) ρ A C_D v_i²   (4)

Equation 4 can be rearranged as follows:

v_{i+1} = v_i + (Δt / m) (T_i - m g - (1/2) ρ A C_D v_i²)   (4)

Equation 4 was scripted in MATLAB (Appendix 1) from the time of launch (t = 0 s) until the thrust ends (t = 1.8 s). A plot of this equation can be seen in Figure 9.
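The Euler update of Equation 4 can be sketched as follows. This is a minimal Python illustration, not the team's MATLAB code from Appendix 1, and the constant 40 lbf average thrust is a hypothetical stand-in for the interpolated thrust curve.

```python
import math

# Constants from the assumptions above.
W = 6.765                              # rocket weight, lbf
g = 32.2                               # gravitational acceleration, ft/s^2
m = W / g                              # mass, slugs
rho = 0.00238                          # air density, slug/ft^3
Cd = 0.6                               # drag coefficient
A = math.pi * (3.097 / 12) ** 2 / 4    # cross-sectional area, ft^2

def velocity_history(avg_thrust_lbf=40.0, t_burn=1.8, dt=0.001):
    """Euler integration of Equation 4 over the thrust phase.
    A constant average thrust is assumed here for illustration only;
    the team interpolated the actual motor thrust curve."""
    v, vs = 0.0, [0.0]
    for _ in range(int(t_burn / dt)):
        drag = 0.5 * rho * A * Cd * v * v
        v = v + (dt / m) * (avg_thrust_lbf - W - drag)  # Equation 4
        vs.append(v)
    return vs
```

Each step adds the net force (thrust minus weight minus drag) times Δt/m to the previous velocity, exactly as Equation 4 prescribes.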

Figure 9: Velocity History Plot of the Rocket Produced by MATLAB

Predicted Acceleration History. The acceleration of the rocket was predicted by applying a numerical differentiation model to the predicted velocity data. The acceleration was calculated as follows:

a_i = (v_{i+1} - v_i) / Δt   (5)

Equation 5 was scripted in MATLAB (Appendix 1) and applied from the time of launch until apogee was achieved. A plot showing the acceleration data can be seen in Figure 10.
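The forward difference of Equation 5 amounts to a few lines; this Python sketch is illustrative only, not the Appendix 1 code.

```python
def accel_history(vs, dt):
    """Forward-difference acceleration (Equation 5) from a list of
    velocity samples spaced dt seconds apart."""
    return [(v1 - v0) / dt for v0, v1 in zip(vs, vs[1:])]
```

Given velocity samples, it returns one fewer acceleration sample, each the slope between consecutive velocities.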

Figure 10: Acceleration History Plot of the Rocket Produced by MATLAB

The acceleration is initially positive since thrust is applied, and it decreases during this phase because the drag force is increasing. After thrust ends, the acceleration is negative because the rocket is slowing down. It should be noted that the maximum acceleration achieved in the flight is 506 ft/s².

Predicted Altitude History. The altitude history of the flight was also predicted using a numerical model, applying the trapezoid rule to find the area under the velocity curve. The equation used to calculate the height above Earth's surface (altitude) is as follows:

h_{i+1} = h_i + Δt (v_i + v_{i+1}) / 2   (6)

Equation 6 was scripted in MATLAB (Appendix 1) and applied from the time of launch until apogee was achieved. A plot showing the predicted altitude history can be seen in Figure 11. The maximum altitude achieved was 3480.6 feet.
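The cumulative trapezoid rule of Equation 6 can likewise be sketched (illustrative Python, not the Appendix 1 MATLAB code):

```python
def altitude_history(vs, dt):
    """Cumulative trapezoid rule (Equation 6): the running area under
    the velocity curve gives altitude above the launch point."""
    hs = [0.0]
    for v0, v1 in zip(vs, vs[1:]):
        hs.append(hs[-1] + dt * (v0 + v1) / 2.0)
    return hs
```

Each step adds the trapezoidal area of one time slice, so the altitude list has the same length as the velocity list and starts at zero.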

Figure 11: Anticipated Altitude Projection Plot of the Rocket Produced by MATLAB

Summary of Flight Performance
Maximum Acceleration: 506 ft/s²
Maximum Altitude: 3480.6 feet
Time at Apogee: 14.17 seconds

Table 1: Pre-Flight Analysis Predictions of Results

Although the maximum altitude is predicted to be 3480.6 ft, the rocket will be stopped at 3,000 ft by the drogue chute; the time to apogee will therefore also be less.

Post Flight Analysis Overall Performance. The table below has three distinct columns of values that were obtained from different sources. The “MATLAB Prediction” column contains the values that were determined from the MATLAB model that Team Jarts constructed of the rocket. The “OpenRocket Prediction” column contains the values that were determined from the OpenRocket model with the exact wind conditions from the day of the launch. The “Official” column contains the values that were measured using the equipment provided by the competition officials.

                           MATLAB        OpenRocket    Official
                           Prediction    Prediction    Flight
Maximum Altitude [ft]      3480.6        3135          2680
Peak Acceleration [ft/s²]  506           413           363.86
Length of Video [s]        --            --            6

Table 2: Summary of both predicted and measured results for analytical comparison

The official data was then used in the competition scoring formula (Equation 1.1) to compute the total score for the flight. The formula combines the fraction of the ascent time covered by live video with the deviation of the apogee from 3,000 feet. Using the official data (6 s of video over an 11.770 s ascent, and an apogee of 2,680 ft against the 3,000 ft target), the total score for the flight was found to be 15.34.

The primary reason the rocket did not achieve results closer to the OpenRocket predictions was its high stability margin, which caused it to weathercock into the wind. To achieve better results, the center of gravity would have had to be moved back by adding weight to the aft end; this is a tradeoff between weathercocking and weight, both of which decrease altitude. If conditions on launch day had been less windy, the rocket would have performed closer to prediction.

Altitude. Comparing the predicted data to the actual data allows the analysis method to be evaluated. There was a relatively small disparity between the predicted and recorded altitude for the OpenRocket prediction, with a percent difference of 14.5%. This prediction was known during design to run high because no numerical value was available for the coefficient of drag of the side pod that housed the camera. The MATLAB prediction's percent difference was higher, at 23%, because this model did not account for launch angle, wind speed, or the added drag of the side pod. Overall, the nontraditional shape of the rocket with a side pod produced the larger percent differences, and the gap between the two percent-difference values stems from the MATLAB model not accounting for launch-rod angle and wind.

Acceleration. The comparison of the acceleration values yields a larger variation, attributable to the added drag produced by the side pod, which was not accounted for in the predictions. The percent difference from the OpenRocket prediction was 11.9%; from the MATLAB prediction, 28.1%. Again, the high MATLAB percent difference comes from the fact that wind and launch angle were not accounted for in that prediction.
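The percent-difference figures quoted above are consistent with using the predicted value as the reference; a quick sketch of that convention (inferred from the quoted numbers, not stated explicitly in the text):

```python
def percent_difference(predicted, actual):
    """Percent difference relative to the predicted value, which
    reproduces the figures quoted in the text."""
    return abs(predicted - actual) / predicted * 100.0
```

For example, the MATLAB altitude prediction of 3480.6 ft against the official 2680 ft gives about 23%, and the OpenRocket acceleration prediction of 413 ft/s² against 363.86 ft/s² gives about 11.9%.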

Video. The video feed received and recorded by the competition produced about 6 seconds of video, starting shortly after ignition and ending a little over halfway through the ascent phase. The loss of video during ignition could be due to the large acceleration the rocket was undergoing. To improve the fraction of video feed received, another receiver could have been added on the ground so that two people could track the rocket.

Conclusion Team Jarts' confidence in their rocket to meet the competition performance criteria, and in its ability to remain field-repairable, was validated at competition. Although the numerical simulations produced a high percent difference, the post-flight analysis identified potential sources of these errors. The rocket was designed to overshoot the goal altitude of 3,000 feet both because the team knew the side pod was not accounted for in the calculations and because the rocket was over-stable and would not behave ideally in windy conditions. The successful design, analysis, and execution of this project allowed Team Jarts to view this endeavor as an engineering achievement.

Acknowledgement
Team Jarts thanks the Wisconsin Space Grant Consortium (WSGC) for the contributions that made this project possible.

WSGC Collegiate Rocket Competition Design Analysis – Team ChlAM

Max Strassman, Chloe Tinius, Andrew Udelhoven

University of Wisconsin – Madison Department of Engineering Physics

ABSTRACT The WSGC Collegiate Rocket Competition's 2012 launch challenged teams to fly a single-stage high-power rocket as close to 3,000 ft as possible while transmitting live video for the entire ascent. Our team designed and successfully launched a roughly 5-foot-tall rocket for the competition on a $1,000 budget. With the aid of simulation software and our own calculations, our rocket had a safe and stable launch to 2,479 ft, lower than our expected 3,000 ft. The error was likely caused by the strong wind and rainy conditions during the launch, which added drag. In addition, we made assumptions in our calculations about the rocket's drag coefficient that may not have been accurate. We were satisfied with the success of our first high-power rocket and learned a great deal that can be carried over to next year's competition.

COMPETITION PARAMETERS The Collegiate Rocket Competition sponsored by WSGC is an annual competition open to undergraduate and graduate students from Wisconsin universities. Both engineering and non-engineering students are permitted to take part in the competition, but they compete in separate categories. Each year, WSGC applies different parameters and challenges, making the focus of a rocket's design different from year to year. For the 2011-12 competition, the challenge was to launch a single-stage high-power rocket as close to 3,000 feet as possible using one of several motor options. In addition, the rocket was required to transmit a live video signal to the ground during the ascent. The rocket was to have a successful parachute deployment using an electronic deployment system and be recovered in flyable condition.

DESIGN FEATURES Our rocket design was fairly straightforward for a high-power rocket. Figure 1 details the rocket design and layout. The rocket was 61 inches long and used a 3.9-inch diameter body tube. Starting at the nose, we used a standard high-power rocket nosecone attached to a BlueTube body tube; BlueTube is a very strong and durable paper fiber also used in tank shells.

Within the upper section of the main body tube is the electronics bay, a protective casing that stores the altimeter used to record flight data and deploy the parachute. The altimeter in our rocket was the PerfectFlite StratoLogger, which recorded the altitude, velocity, and acceleration of the rocket during the flight using an accelerometer and the outside air pressure. Upon recovery, we retrieved the information from the altimeter and analyzed the rocket's performance on a computer. The altimeter also deployed the parachute at the top of the flight path using a gunpowder charge fixed to the inside of the rocket. The rocket splits in two at the electronics bay, with the upper and lower sections tied together by a Kevlar shock cord.

The lower section of the rocket contains the parachute and motor. Our rocket used a 44" diameter parachute, protected on either side by two small Nomex blankets so that it would not be burned when the deployment charge detonated. At the bottom of the rocket is the motor, a Cesaroni I-285, which was secured by an engine cap and slid into a smaller-diameter BlueTube secured to the body tube by two plywood rings. Attached to the motor tube is an aluminum fin can with exterior channels to facilitate correct fin orientation; three G-10 fiberglass fins were mounted at the base of the rocket. On the exterior of the rocket, a single 1.14" diameter tube and nose cone were glued to the main body to house the onboard camera.
The camera used was made by BoosterVision and had a range extender to increase reception beyond 5,000 feet. The camera itself is approximately a one-inch cube and attaches to a 9V battery. It looked down the body of the rocket toward the ground.

Figure 1. Rocket layout and design

ROCKET STABILITY A major concern for any rocket is stability. If a rocket is unstable, it may follow a chaotic flight path and fly far horizontally, making it potentially dangerous and likely arduous to recover. The rule of thumb is that a stable rocket will generally have the center of pressure (labeled CP in Figure 1) between one and two body diameters, or calibers, behind the center of gravity (labeled CG in Figure 1). A difference of less than one body diameter is generally marginally stable, and a difference greater than two is generally over-stable. An over-stable rocket will turn into the wind, reducing its maximum altitude. If the center of pressure is ahead of the center of gravity, the rocket is likely to be unstable. During flight, the center of pressure does not change, but because the rocket loses propellant mass, the center of gravity shifts forward. If the propellant mass is a significant percentage of the rocket's total mass, stability can change significantly. Before and after burnout, the distance between our center of gravity and center of pressure increased from approximately one body diameter (marginally stable) to about 1.6 diameters (stable). For our calculations, we used RockSim, a rocket simulation software, to determine our rocket's stability.
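The rule of thumb above reduces to a one-line calculation. In the example, the CP and CG station values are hypothetical, chosen only to illustrate a one-caliber margin on a 3.9-inch airframe.

```python
def stability_margin_calibers(cp_in, cg_in, diameter_in):
    """Static stability margin in calibers: the CG-to-CP distance
    divided by the body diameter (both stations measured from the
    nose tip, so a larger CP station means CP is farther aft)."""
    return (cp_in - cg_in) / diameter_in
```

For instance, with a hypothetical CP at 45.0 in and CG at 41.1 in on a 3.9 in diameter body, the margin is one caliber (marginally stable by the rule of thumb above); shifting the CG forward to 37.2 in would give two calibers.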

ANTICIPATED PERFORMANCE Predicted Altitude Rocket performance was modeled using RockSim and was validated using Microsoft Excel and Engineering Equation Solver (EES). Our main points of focus were maximum altitude and maximum acceleration. The altitude expected under fair conditions with negligible wind was 3,307 feet from RockSim, while our Excel simulation predicted 2,935 feet. This discrepancy between the two altitudes was certainly non-negligible, and testing under ideal conditions would have helped confirm which method was more accurate. Below are graphs from both simulations.

Figure 2. Predicted altitude from EES/Excel simulation

Figure 3. Predicted altitude from RockSim simulation

Predicted Acceleration Using the EES/Excel simulation, the maximum acceleration is expected at the moment the engine begins to fire, because the velocity of the rocket is zero (so there is no drag force) and the motor thrust was assumed constant for the sake of simplicity. Applying Newton's second law (force equals mass times acceleration) under these assumptions, the maximum acceleration is 316.8 ft/s² (using an average thrust of 285 N). The RockSim simulation gave a maximum acceleration of 399 ft/s². The difference arises because RockSim does not take the engine thrust to be constant; since RockSim allows for variable thrust, the accepted value for the maximum acceleration is 399 ft/s².
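The constant-thrust estimate above can be reproduced in a short sketch. Note that the rocket's launch weight is not stated in the text, so the 5.9 lbf value in the usage note is a hypothetical illustration, not the team's figure.

```python
LBF_PER_N = 0.22481  # pounds-force per newton
G = 32.2             # gravitational acceleration, ft/s^2

def max_accel_ft_s2(avg_thrust_N, weight_lbf):
    """a_max = T/m - g at ignition, where v = 0 (so drag vanishes)
    and the average thrust is taken as constant, per the text."""
    thrust_lbf = avg_thrust_N * LBF_PER_N
    mass_slug = weight_lbf / G
    return thrust_lbf / mass_slug - G
```

With the stated 285 N average thrust and a hypothetical 5.9 lbf launch weight, this yields a value in the neighborhood of the team's 316.8 ft/s² result.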

Figure 4. Predicted acceleration from RockSim

ACTUAL PERFORMANCE Conditions on launch day (April 28, 2012) were less than ideal, with rain and strong wind applying significant drag to the rocket, causing us to significantly undershoot both of our predicted maximum altitudes.

Regarding operation, the rocket worked as expected, being stable throughout the flight with video transmission for nearly the entire ascent and a successful parachute deployment at apogee. Because of a strong crosswind, the rocket drifted over a mile away from the launch site and was found 60 feet up in a tree. The rocket remained structurally intact and could be flown again.

Figure 5. Actual launch footage from the onboard camera

Maximum Altitude The maximum altitude achieved by the rocket (2,479 ft) differed greatly (21% error) from the expected maximum altitude (3,000 ft). A few probable sources of error account for the discrepancy between our calculations and the actual results. First, since our simulations and calculations assumed launch in ideal or near-ideal conditions, the increased drag caused by weather lessened our altitude. Second, several simplifying assumptions in our calculations may have increased the error significantly: we assumed a drag coefficient of 0.33, which we could not confirm without testing. It was observed in the authors' footage of the launch that the rocket did not travel vertically off the pad. There are two likely causes: the drag forces on the rocket were not balanced, or the rocket was unstable. The drag forces were unbalanced because an external pod was attached to the exterior of the rocket to house the camera/transmitter. The stability of the rocket was also questionable: the margin was one caliber after burnout, which is acceptable, but before burnout the distance between the center of pressure and center of gravity was too small (approximately 0.75 calibers). Because the margin was small, perturbations in flight could grow faster and invalidate the assumption of vertical flight. The rocket was never wind-tunnel tested or flight tested before launch day, leaving us no accurate way to determine the coefficient of drag (and therefore the maximum altitude and acceleration). The team had to trust the value expected from the simulation software RockSim, which was not ideal.

Maximum Acceleration The maximum acceleration (515 ft/s²) also differed greatly (29%) from the predicted value (399 ft/s²), for reasons similar to the altitude discrepancy. The model was actually of a slightly different rocket: the initial design had two side pods, one for the camera and the other for symmetry, whereas only one was flown. Because the flown rocket had one fewer pod than the model, its drag was lower and it could achieve a higher rate of acceleration. The coefficient of drag used in the model was also inaccurate. For the maximum acceleration, RockSim was used; RockSim calculates a coefficient of drag that varies with velocity, but experience has shown that its calculated values can be wildly inaccurate.

Table 1. Predicted and actual performance values

                       Predicted Value   Actual Value
Maximum Altitude       3000 ft           2479 ft
Maximum Acceleration   399 ft/s²         515 ft/s²
Time to Apogee         13.2 s            12.8 s
Good video time        13.2 s            8 s

CONCLUSIONS Overall, our team is satisfied with the successful launch and recovery of our first-ever high-power rocket. Several aspects of design and simulation can carry over to future rockets. First, we learned not to trust RockSim for extremely accurate results and will instead rely on our own calculations. In addition, to reduce the distance the rocket travels under parachute, we will use a dual-deployment system with a small parachute deploying at apogee and the main parachute deploying later in the descent. Lastly, we will put a GPS tracker on board so that we can find the rocket more easily.

Figure 6. Disassembled picture of the rocket

The authors would like to acknowledge Wisconsin Space Grant Consortium for their sponsorship of the Collegiate Rocket Competition as well as Dr. Suzannah Sandrik at the University of Wisconsin - Madison for her advising during the design process.

22nd Annual Conference Part Three

NASA Reduced Gravity Program Spacesuit Dust Removal Techniques in Microgravity

Aaron Olson, Julie Mason, Collin Bezrouk
UW-Madison 2012 Zero Team, University of Wisconsin-Madison

Abstract Preventing and removing the build-up of regolith on an astronaut's space suit is a major issue facing future manned exploration missions. The dust poses a health threat to astronauts, and it also adversely affects their equipment. The Field Integrated Regolith Cleaning Experiment (FIRCE) was designed to study different techniques for removing regolith and to review which commercial off-the-shelf products were best suited for cleaning space suit orthofabric and polycarbonate (used on the astronaut's visor). All of the commercial products worked well in both the 1-g and the 0-g environment; all were ranked above 7.0/10 for ease of use and above 7.6/10 for cleaning effectiveness. A custom-made magnetic brush, designed to attract the statically charged regolith, was not effective at removing dust because humidity in the air prevented static charge from building up on the regolith.

Introduction There are many design challenges in manned space exploration; one that is very important, though perhaps not the first to come to mind, is regolith (dust) contamination. The Field Integrated Regolith Cleaning Experiment (FIRCE), on a level-one basis, addresses the question of which commercial off-the-shelf products could be used to mitigate the dust challenge.

Background Dust mitigation has been identified as a major issue for future space exploration missions. Based on experience from the Apollo missions, it is evident that dust contamination can cause mechanical and wear issues with space suits as well as with critical components inside the vehicle. Particles less than three microns in size can lodge in the lungs if inhaled, and the nanophase iron in lunar dust can enter the bloodstream and potentially reach toxic levels [2]. The space suit is the single greatest dust transport mechanism during Extra-Vehicular Activities (EVAs). Since a single suit system is utilized, a suit worn for an EVA must also be worn inside the vehicle for activities such as takeoff and landing. Not only will the suit need to be cleaned before re-entering the vehicle, it will also have to be cleaned before an EVA; organic materials from Earth will need to be contained in order to maintain the cleanliness of the exploration area [1]. The main difficulty with space dust stems from its unusual properties. The dust is formed by micrometeorite impacts pulverizing local rocks into fine particles; energy from the collisions melts and vaporizes the material, which forms a glassy shell when cooled. These micron-sized angular particles are extremely abrasive and can wear through space suits as well as disable the suits' joint mobility. The irregular shapes of the dust particles and their electrostatic cling cause the dust to adhere to exposed surfaces. The electrostatic cling is caused by electron bombardment from the solar wind charging the specks of metallic iron (Fe0) embedded within the glassy shell of each particle [2].

Financial support provided by the Wisconsin Space Grant Consortium.

The inherent tendency of the dust particles to be electrostatically charged can potentially be exploited to mitigate dust contamination. By introducing a magnetic field, the charged dust particles can be removed from a contaminated material surface.

Description of experiment The Field Integrated Regolith Cleaning Experiment (FIRCE) was designed as a level one test to determine effective techniques and products for removal of lunar regolith. This experiment investigated methods for dust removal on space suit materials in microgravity. The two tested materials were orthofabric and polycarbonate, which are used for the suit and visor of an EVA suit. Each flight performed testing on five 4 x 4 inch orthofabric samples and six 1.5 x 1.5 inch polycarbonate samples. The space suit materials were exposed to JSC-1A (< 1 mm) lunar dust simulant and mounted in a doubly contained glove box. During the flight, various dust removal techniques were performed on each of the samples, and the before-and-after effects of cleaning the samples were recorded via a video camera mounted to the outside of the box. Figure 1 shows the experiment setup.

Figure 1: Isometric view of experiment setup inside glovebox

Methods We began our research by following two paths to cleaning regolith from space suits: commercial products and magnetic dust removal. We intended to test both elements in order to better focus later research. On the magnetic side, we reviewed the magnetic properties of regolith, finding that most dust particles contain a core of ferrous iron. Additionally, the insulating glass coating on the dust is ideal for storing static charge. Commercial product research looked into different dust attraction mechanisms used in cleaning products as well as how each one prevents damage to the surface it is cleaning. A list of products suitable for safely cleaning electronic and camera equipment was put together for testing.

Prior to the experiment we anticipated that the commercial cleaning products would be close to 50% effective at removing dust. More specifically, they would be able to remove dust sitting on top of the sample, but due to the jagged glass edges on the dust particles, we hypothesized that it would be difficult to remove embedded particles. Since the commercial products rely on contact pressure for cleaning, we expected that many particles that were not originally embedded would become embedded during cleaning. The magnetic brush was also expected to remove about 50% of the lunar simulant from the samples. While it was a non-contact cleaning method, we did not believe the strength of the magnetic field was sufficient to lift out embedded particles.

To get a better idea of how the cleaning products would respond in microgravity, we tested two orthofabric samples in 1-g in a horizontal configuration. One sample had lunar simulant dusted on its surface while the second sample had the dust rubbed into it. We were surprised to find that the Swiffer duster was able to remove almost all traces of the dust with very modest contact pressure. The magnetic brush was also tested under these conditions and was unable to lift any traceable amount of dust from the samples. These results differed from our original hypothesis: the commercial cleaning products were substantially more effective than expected, while the magnetic brush was not, which we attributed to high humidity in the air reducing the static charge of the dust.

The experiment contained all dust, samples, and cleaning products in a doubly contained glove box with two portholes and gloves for the experimenter to run the test. The orthofabric and polycarbonate samples were mounted vertically on a wall inside the glove box. They had been pre-coated with JSC-1A lunar simulant, which had been dusted on and then rubbed into each sample in 1-g.

Each cleaning product was mounted to the floor of the glove box or one of the vertical walls using Velcro. Only one item was tested at a time, and all other cleaning items remained stowed during this process. We also mounted a small bottle filled with lunar simulant; after all samples had been tested at least once, regolith from this bottle was applied to each sample and rubbed in between cleanings before cleaning resumed. To test each sample, the cleaning tool was used as directed by the manufacturer. Cleaning involved a repeated scrubbing motion over the surface of one sample for approximately 10 seconds. The sample was then inspected for cleanliness and the scrubbing repeated with more pressure applied to the sample. When the experimenter felt that the sample had been thoroughly cleaned, he or she proceeded to the next sample and/or test article. When a test item was finished, it was stowed using Velcro on the bottom of the containment box.

The magnetic brush was tested in five trials: two on orthofabric samples, two on polycarbonate samples, and one on free-floating regolith. The first orthofabric and polycarbonate samples were tested by turning on the magnetic brush and passing it about 0.5 inches above the surface of the sample, taking care not to touch it. After approximately 10 seconds of waving, the brush head was inspected for collected dust; any dust present was wiped away with the gloves and the test was repeated two more times. The second orthofabric and polycarbonate samples were tested by pressing and scrubbing the magnetic brush over the sample; again, after about 10 seconds of rubbing, the brush head was inspected for dust, cleaned, and the test repeated. The results of these tests were poor, so we tested the brush on free-floating regolith in the containment box. Regolith was released into the air and the brush was waved about 1 inch away. The dust was attracted to the brush head in greater quantities than when trying to pull dust from the mounted samples.

Results In the zero gravity testing conditions, most of the cleaning elements performed as expected and similarly to the 1-g results; the only exception was the magnetic brush. Since the experiment was conducted within the atmosphere, it was subject to the effects of atmospheric pressure and humidity, and the results were muted compared to those expected in the space environment. The humidity reduced the electrostatic charge of the dust to a level that rendered the magnetic brush useless.

Data Collection Team members were asked to rate each item post-flight with regard to effectiveness and ease of use for cleaning both the polycarbonate and orthofabric materials. Each flyer was also asked to include comments about using each item to clean a large object such as a full space suit or a large solar panel. Further testing is required to clarify the results gathered, as well as to test the items in a vacuum or space simulation environment.

Microgravity Results Swiffer Duster (SD) The SD received a rating of 8.6/10 for ease of use due to the ergonomic design of the handle and the extension of flexible bristles on the end. Effectiveness averaged 8.6/10 with the polycarbonate and 7.6/10 with the orthofabric. The SD proved generally more effective with the polycarbonate; owing to the flexibility of the bristles, it is difficult to apply pressure with this brush, particularly for particles engrained in the orthofabric. Overall, it seemed effective at cleaning by visual inspection, though the ability to scrub might engrain particles deeper within the fabric. Furthermore, the prediction that the bristles would not capture all dust but rather allow some to be re-emitted into the atmosphere proved to be incorrect. This was probably due to the large number of bristles and the small sample sizes. The SD should be further tested with larger samples.

Figure 2: Swiffer Duster

Microfiber Cloth The microfiber cloth was rated 7.3/10 for ease of use. This rating was due to the fact that the cloth was difficult to maneuver while wearing the gloves of the experiment's glove box. A similar situation would be encountered when attempting to manipulate the cloth while wearing EVA gloves (known for their lack of dexterity). The cloth was rated 7.7/10 for its effectiveness on polycarbonate and 7.6/10 for its effectiveness on orthofabric. Some flyers noted that the cloth did not collect as much dust as the wet multi-surface dust wipes, as it released some of the dust into the atmosphere of the experiment. This may or may not be relevant in an actual zero-g scenario. It should also be mentioned that the cloth was no larger than 6" x 6", so cleaning a large surface would be time consuming; it would probably not be the cleaning tool of choice during time-critical operations.

Figure 3: Microfiber Cloth

LCD Screen Cleaning Wipes The LCD cleaning wipes were rated 7.6/10 for ease of use. These wipes were dry and specialized for computer and TV screens. They were somewhat difficult to use for the same reasons as the microfiber cloth. The wipes were rated 8.3/10 for effectiveness on polycarbonate and 8.1/10 for effectiveness on orthofabric. The wipes were smaller than the microfiber cloth and thus could also require a significant amount of time to clean a large surface.

Figure 4: LCD (Dry) Screen Cleaning Wipes

Pledge Multi-Surface Dust Wipes (Wet) The wet Pledge Multi-Surface wipes were rated 7.0/10 for ease of use on orthofabric and 7.2/10 for ease of use on polycarbonate. Again, the use of wipes or a cloth introduces difficulties with respect to maneuverability of the cleaning tool. The wipes were rated 8.7/10 for effectiveness on both polycarbonate and orthofabric, the highest of all the cleaning products. The moisture held in the wipes allowed regolith to be collected more easily compared to the other products. Unfortunately, if a wet dust wipe were actually used in a vacuum environment (like that of the Moon or an asteroid), the moisture would quickly dissipate and negate its advantage over the dry wipes.

Figure 5: Pledge Multi-Surface Dust Wipes (Wet)

Magnetic Brush As previously mentioned, the brush was not effective at all due to the humidity in the glove box. However, its ease of use was rated essentially on par with the rest of the cleaning tools, if not slightly above average. In a properly simulated space environment, a similar magnetic brush would be expected to perform much better.

Microfiber Sponge

The microfiber sponge was rated 8.1/10 for ease of use on both the orthofabric and the polycarbonate. The effectiveness of the sponge was 8.0/10 on the orthofabric and 7.6/10 on the polycarbonate. The sponge's effectiveness stems from the fact that it was much thicker than the wipes and microfiber cloth, which allowed dust to penetrate its surface so that more dust could be collected. Additionally, the sponge was easier to scrub with than the cloth and wipes. If a larger version of this sponge with the ergonomic advantages of the Swiffer Duster were tested, it might prove to be the best cleaner of all the products tested.

Figure 6: Microfiber Sponge

Hypergravity Results Data was not collected during hypergravity portions of the flight.

Discussion A great deal was learned about methodology during this experiment, which should benefit the second level of testing. We faced several challenges that will lead to a better design in the future. First, the portholes in the containment box were too small and the gloves were too short to reach all areas of the box. Second, during microgravity, one hand inside the box was always being used to hold the experimenter in place; the most comfortable position inevitably blocked the camera's view of the experiment, so data could only be gathered after a cleaning was done and the sample was revealed to the camera. Third, the camera's field of view was too small to see the two side panels, so samples on these panels did not return usable footage.

Aside from these logistical issues, the experiment was largely a success. Each commercial item tested performed better than anticipated, scoring above 7.0/10 in ease of use and above 7.6/10 in cleaning effectiveness. These results matched our 1-g results. Additionally, we now know how to modify the experiment in order to increase the effectiveness of the magnetic brush: either a vacuum can be pulled on the containment box, or nitrogen can be introduced into the box to dry the air.

Outreach Outreach became an important factor for the team. As space advocates, we look back at what influenced us to become engineers and scientists interested in space exploration, and each of us can point to a teacher, an astronaut, a space launch, or a role model who made an impact on our lives for the better. The Reduced Gravity Student Flight Opportunity Program (RGSFOP) reaches out to university students in the same way, and the UW-Madison team wanted to share this excitement with a highly motivated and intelligent group of students about to begin school at universities across the country.

The objective of our outreach was to mentor a group of 12 high school students in the design and construction of their own microgravity experiment. The ultimate goal was to bolt their experiment onto the side of the UW-Madison experiment and collect data for the students to analyze and learn from. A team of advanced physics students from East Troy High School in Wisconsin was selected to work with us. The students researched ideas, elected team leaders, and divided into groups to accomplish the work for each subsystem. The UW-Madison team mentored the development of the project, interacted with the students, and guided them toward creating an experiment for microgravity. We learned that this effort was by no means easy, but it was a mentorship from which both parties could learn. The final result was rewarding for both the high school students and the UW-Madison team.

As a diverse group of students, we were able to work with minorities in science and engineering and stimulate interest not only in STEM fields but also in space exploration. Students from the high school team are pursuing microgravity experiments at the universities they will attend next year. The high school teacher with whom we had the honor to work is looking into starting a high school team that will apply to the HUNCH program next year (he would not have considered this otherwise). The knowledge UW-Madison students have gained as mentors, as well as our impact on the high school students, is irreplaceable. One high school at a time, the UW-Madison team feels that we can truly have an impact on the engineers, scientists, and explorers of the future.

Conclusion The information gathered from FIRCE should provide a sound base for a level two test of commercial off-the-shelf products. In such a test, the glove box environment should be more representative of an actual zero-g, vacuum environment. The possibility of altering the products so that each can be tested at an equal ease of use should be explored; this could consist of mounting all of them on a common handle like that of the Swiffer Duster. An automated system utilizing the products tested in FIRCE should also be developed; an attempt to create a prototype for this level one test was unsuccessful due to hardware failure prior to testing, and that system is described in Appendix B. Overall, the results of FIRCE were positive and suggest that further research and testing should be invested to determine whether there are commercial off-the-shelf products that could be adapted to help mitigate dust for future manned exploration missions.

Acknowledgements We would like to acknowledge and extend our gratitude to the following organizations and persons who made the completion of this experiment possible: the Wisconsin Space Grant Consortium, for their generous support in the construction of our experiment; the Space Science and Engineering Center of UW-Madison, for their generous support and commitment to our team throughout the entire process; Professor Bonazza, for his encouragement and guidance; Dr. Elder, for his understanding and assistance; NASA's Reduced Gravity Education Flight Program and all of its employees, for giving us this amazing opportunity and experience; and Robert Trevino, our mentor, without whom none of this would have been possible.

References

[1] Cadogan, D., & Ferl, J. (2007). Dust Mitigation Solutions for Lunar and Mars Surface Systems.

[2] Soil Science Society of America (2008, September 24). NASA's Dirty Secret: Moon Dust. ScienceDaily. Retrieved February 18, 2012, from http://www.sciencedaily.com/releases/2008/09/080924191552.html

Modal Evaluation of Fluid Volume in Spacecraft Propellant Tanks

Steven Mathe1, KelliAnn Anderson1, Amber Bakkum1, Kevin Lubick1, John Robinson1, Danielle Weiland1, Rudy Werlink2, Kevin M. Crosby1

1Carthage College, Kenosha, WI, USA

2NASA Kennedy Space Center, Florida, USA ([email protected])

Abstract

Propellant mass-gauging in unsettled (sloshing) fluids is an important and unsolved problem in spacecraft operations and mission design. In the present work, we demonstrate the efficacy of the experimental modal analysis technique in determining the volume of fluid present in model spacecraft propellant tanks undergoing significant sloshing. Using data acquired over approximately 37 minutes of zero-gravity conditions provided by two years of parabolic flights, we estimate the resolution of the technique at low tank fill-fractions where other mass-gauging techniques are known to fail.

Introduction

Accurate spacecraft propellant volume measurement in a microgravity environment has been identified by the NASA Exploration Systems Architecture Study (ESAS) final report as an area requiring further development [NASA, 2005]. The microgravity environment renders direct volume measurement using traditional buoyancy- and level-based techniques ineffective. Instead, indirect methods are currently used to establish propellant volume. Commonly used indirect gauging methods include equation-of-state estimations (for pressurized systems), measurement of spacecraft dynamics, and burn-time integration. Each of these gauging methods introduces uncertainty into the propellant volume measurement, and the measurement error increases as the tank empties. It is important to minimize this uncertainty to reduce the required unusable propellant reserves, decreasing the total mass of the spacecraft. These methods also add the mass of their associated hardware. As launch costs approach $10,000 per pound, any reduction of spacecraft mass through decreased propellant reserves or reduced hardware mass can represent a significant cost savings [Peters, 2004].

Fluid sloshing in microgravity presents another challenge to propellant volume gauging. Propellant states are characterized as being either settled, in which the fluid is quiescent and in mechanical equilibrium with its container, or unsettled in which the propellant sloshes within the tank. Currently, no propellant volume gauging method functions accurately while the fluid is sloshing. This limits the utility of current methods. The time required for a sloshing fluid to settle into a quiescent state is called the settling time and depends on the geometry and material properties of the tank as well as the fluid properties of the propellant. Depending on the size of the tank, this settling time can be on the order of hours.

Previous work has shown the viability of experimental modal analysis (EMA) as a propellant gauging method, and identified the resolution of the technique as better than 10% of the total volume of the model tank with the fluid in an unsettled, sloshing state [Finnvik et al., 2011]. In the study reported here, we estimate the resolution of EMA at low tank fill-fractions where accuracy is the most important and other indirect techniques fail. We also demonstrate that the resolution of the technique over a range of fill-fractions can be as little as 1.5% in volume between 30% and 70%, and 7.4% in volume between 0% and 20%.

Modal Analysis

Modal analysis is a commonly used technique in the analysis of structures. Acoustic forces are applied to the structure through discrete impacts, continuous white-noise functions, or chirp functions, and the vibrational response of the structure is recorded through sensors affixed to the surface of the structure. Natural vibrational modes of the structure will be excited resulting in an increased amplitude in sensor response at the excited mode frequencies. The resonant modes are calculated by means of a Frequency Response Function (FRF). The FRF is the ratio of the Fourier Transform of the response to the Fourier Transform of the input. Graphing the FRF results in peaks at the natural vibrational modes of the structure. In practice, a Fast Fourier Transform (FFT) algorithm is used to efficiently calculate the Fourier Transform, with the input measured by a monitor sensor placed immediately next to the actuator to measure the signal actually being output by the actuator. This allows for real-time monitoring of the vibrational characteristics of the structure.

Both the FFT and the FRF are complex valued functions, but only the real portion of the function which contains amplitude information is of interest, as the EMA technique looks for the increased amplitude at the frequencies corresponding to the natural vibrational modes. The use of more sensors provides a more complete picture of the vibrational characteristics of the structure, and with enough sensors, the three dimensional shape of the structure can be reconstructed.
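As a concrete illustration of the FRF computation described above, the following is a minimal NumPy sketch (not the flight software, which was written in LabVIEW). The drive and response signals are synthetic: white noise as the input, plus an assumed resonance placed at the 834 Hz mode reported later in the paper.

```python
import numpy as np

def frf(input_signal, response_signal, fs):
    """Frequency Response Function: ratio of the FFT of the measured
    response to the FFT of the measured input (monitor sensor)."""
    n = len(input_signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    X = np.fft.rfft(input_signal)      # input spectrum
    Y = np.fft.rfft(response_signal)   # response spectrum
    # Guard against division by near-zero input bins
    H = Y / np.where(np.abs(X) > 1e-12, X, 1e-12)
    return freqs, H

# Synthetic demonstration: white-noise drive, response with a strong
# component at an assumed 834 Hz resonance (illustrative, not real data)
fs = 10000                         # sample rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)    # 1 s record -> 1 Hz frequency resolution
rng = np.random.default_rng(0)
drive = rng.standard_normal(len(t))
resp = drive + 5.0 * np.sin(2 * np.pi * 834 * t)

freqs, H = frf(drive, resp, fs)
peak = freqs[np.argmax(np.abs(H))]   # frequency of the dominant FRF peak
```

Graphing `np.abs(H)` against `freqs` would show the peak at the excited mode, mirroring the spectra in Figs. 4-6.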

The EMA technique has been used to characterize the behavior of fluid-filled structures during earthquakes [Malhotra et al., 2000]. A previous application to propellant gauging found that the dominant effect of fluid loading was an increase in the effective mass of the fluid/tank system, lowering the frequencies of the natural resonant modes [Finnvik et al., 2011].

Research Objectives

The central objective of the current study is to determine the resolution of the real-time, non-invasive EMA technique in determining the propellant volume of a model spacecraft propellant tank in a microgravity environment under unsettled conditions. Prior work suggests that fluid loading is correlated with the contact area between the fluid and the internal surface of the model tank [Finnvik et al., 2011]. Under sloshing conditions, this contact area is continuously changing as the fluid rolls around inside the tank. The effect of variable contact area on the resolution of the EMA technique will be examined, as will the influence of the geometry of the model tank.

Experiment Design

NASA’s microgravity research aircraft simulated a microgravity environment by flying parabolic maneuvers. The flights on which this experiment was performed were conducted as a part of NASA’s Systems Engineering Educational Discovery (SEED) student flight program. These flights were conducted in April 2012.

A schematic diagram of the experiment is shown in Fig. 1. Two identical model tanks were used in this study. The schematic diagram of the tank is shown in Fig. 2. Each tank is a steel cylinder of diameter 15.1 cm and length 39.4 cm, with two approximately hemispherical end caps welded to the cylinder for a total length of 49.2 cm and a total volume of 2.0 gallons. The tank also has two feet welded to the body for mounting, as well as six ¼" NPT ports. Attached to the first tank are pressure and temperature gauges, fill and drain valves, a transfer line, and a pressure-release valve. The same equipment is attached to the second tank, minus the pressure and temperature gauges. In this study, the tanks were oriented vertically.

In flight configuration, the first tank was initially filled to 46%, while the second tank was left empty. After every five parabolas, fluid was transferred from the first tank to the second by means of solenoid valves and a garden pump. The volume transferred was measured both by a flow meter and a PVT method. The fluid used in this study was tap water.
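The PVT cross-check mentioned above amounts to an ideal-gas estimate of the ullage volume. A minimal sketch follows; the tank volume, gas quantity, pressure, and temperature below are illustrative assumptions, not the experiment's measured values.

```python
R = 8.314  # universal gas constant, J/(mol*K)

def liquid_volume_pvt(tank_volume_m3, n_gas_mol, pressure_pa, temp_k):
    """Ideal-gas PVT gauging: the ullage (gas) volume follows from
    PV = nRT; the liquid volume is the remainder of the tank."""
    ullage = n_gas_mol * R * temp_k / pressure_pa
    return tank_volume_m3 - ullage

# Hypothetical numbers: a 7.57 L (2.0 gal) tank holding 0.10 mol of
# pressurant gas at atmospheric pressure and room temperature
v_liq = liquid_volume_pvt(7.57e-3, 0.10, 101325.0, 293.0)  # m^3 of liquid
```

Because the estimate inherits the uncertainty of the pressure and temperature sensors, PVT gauging degrades as the tank empties, which is the regime the EMA technique targets.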

The research team used a custom designed software interface programmed in the National Instruments LabVIEW environment to control all data acquisition and fluid transfer operations for the experiment. The touch-screen user interface was designed to be as simple as possible, as the microgravity environment greatly complicates trivial tasks such as pushing buttons. A screen-shot of the interface is shown in Fig. 3.

Results

A typical FRF spectrum for an empty tank in 1-g is shown in Fig. 4, with a frequency resolution of 1.0 Hz. Many different vibrational modes appear in this spectrum, but the mode of interest lies at about 834 Hz. The placement of the sensors on the tank has a large effect on the amplitude of each mode, as placing the sensor at a location that is a node for a given mode would remove that mode from the FRF spectrum.

The experiment was extensively tested on the ground with the tanks filled to different levels. Typical FRF spectra for a tank containing varying volumes of settled fluid in a 1-g environment are shown in Fig. 5, again with a frequency resolution of 1.0 Hz. The downward shift in resonant frequency with increasing fill fraction is clearly evident.

In microgravity, a total of 12 different fill fractions were tested on each tank. Selected FRF spectra of varying fill fraction from the same tank as that in Figs. 4 and 5 are shown in Fig. 6, with a frequency resolution of 1.0 Hz. The fluid was sloshing inside the tank for the entire time the data was recorded. This data shows the same relationship between resonant frequency and fill fraction as the ground data in Fig. 5.

Fig. 7 shows a summary of all of the ground data and flight data collected as a part of this study, as well as previously reported data from flights conducted in 2011, plotting the mode frequency versus the fill fraction [Finnvik et al., 2011]. Error bars representing the standard error are included for the flight data only, but would be smaller than the data symbols for the ground data.
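The gauging step implied by a mode-frequency-versus-fill-fraction curve is an inversion: measure the resonant frequency, then read the fill fraction off the calibration. A sketch of that inversion follows; the calibration pairs are illustrative placeholders mimicking the reported downward trend from 834 Hz at empty, not the actual flight data.

```python
import numpy as np

# Hypothetical calibration: mode frequency (Hz) vs. fill fraction (%),
# decreasing with fill as described in the text (834 Hz when empty)
fill = np.array([0, 10, 20, 30, 40, 50, 60, 70], dtype=float)
freq = np.array([834, 820, 800, 775, 748, 720, 690, 658], dtype=float)

def fill_from_frequency(f_measured):
    """Invert the calibration curve. np.interp requires ascending x,
    so interpolate over the reversed (increasing-frequency) arrays."""
    return float(np.interp(f_measured, freq[::-1], fill[::-1]))

estimate = fill_from_frequency(734.0)  # fill fraction in percent
```

A real implementation would fit the calibration points rather than interpolate linearly, and would need separate treatment of the end-cap region where, as discussed below the summary figure, the resolution degrades.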

Discussion

The central objective of this study was to determine the resolution of EMA in determining propellant volume in a model spacecraft propellant tank. Based on the three highest fill fractions of tank 1, the resolution between 30% and 70% is 1.5% of the total volume. Looking at the plots of the 1-g data in Fig. 5 and the 0-g data in Fig. 6, there is a marked decrease in the clarity of the peaks in the 0-g data as a result of fluid sloshing in 0-g. Due to this, it is possible that the resolution of the technique would approach the 1-g resolution after the fluid settles in the tank, removing the problem of variable fluid contact area.

Based on the three lowest fill fractions of tank 1, the resolution below 20% is 7.4% of the total tank volume. This drastic reduction in resolution can be attributed to the geometry and construction of the model tank used in this study. The point where the resolution changes is closely correlated with the point where the end caps are welded to the cylindrical tank body. The end caps themselves, being roughly hemispherical, have drastically different vibrational properties than the tank body. Secondly, the weld between the end caps and the tank body represents a discontinuity in the vibrational structure. As the sensors are mounted to the tank body, the technique is capable of measuring the fluid volume only after the lower end cap is filled, and before the cylindrical body is filled. This problem is easily corrected by choosing a tank of a different geometry, such as a simple closed cylinder.

This study has shown that the EMA technique is viable as a non-invasive, real-time propellant volume measurement technique, with a resolution of about 1.5% of the total tank volume between 30% and 70% fill fraction, when used on unsettled, sloshing fluids. The EMA technique warrants further study to determine the resolution of the technique when applied to settled fluids in microgravity.

Acknowledgments

The authors acknowledge the financial support from the Wisconsin Space Grant Consortium, and the Reduced Gravity Office at NASA Johnson Space Center for support of the SEED program. The Carthage Microgravity Team student members who worked on this project include KelliAnn Anderson, Amber Bakkum, Stephanie Finnvik, Erin Gross, Cecilia Grove, Kevin Lubick, Steven Mathe, John Robinson, Kimberly Schultz, and Danielle Weiland.

References

Finnvik, S., Metallo, S., Robinson, J., Crosby, K. M. & Werlink, R. (2011). Modal evaluation of fluid volume in spacecraft propellant tanks. Proceedings of the Wisconsin Space Grant.

Malhotra, P., Wenk, T. & Wieland, M. (2000). Simple Procedure for Seismic Analysis of Liquid-Storage Tanks. Structural Engineering International 3, 197-201.

NASA. (2005). NASA’s Exploration Systems Architecture Study Final Report.

National Instruments LabVIEW. National Instruments, Inc., Austin, TX.

Peters, J. (2004). Spacecraft Systems Design and Operations. Kendall Hunt Publishing.

22nd Annual Conference Part Four

Other NASA Student Opportunities

The Badger eXploration Loft at Desert RATS 2011

Jordan S. Wachs 1,2 UW-Madison X-Hab Team 3

Faculty Advisor: Fred Elder University of Wisconsin-Madison

Nomenclature

HDU = Habitat Demonstration Unit
DSH = Deep Space Habitat
HDU-DSH = Habitat Demonstration Unit Deep Space Habitat configuration
D-RATS = Desert Research and Technology Studies
X-Hab = eXploration Habitat
BXL = Badger eXploration Loft
PI = Principal Investigator
GeoLab = a compact onboard geology laboratory
GMWS = General Maintenance Work Station
MOWS = Medical Operations Work Station
DMM = Dust Mitigation Module

Abstract In recent years, the aims of both NASA and the nascent private space industry have become increasingly audacious. By any measure, it is becoming apparent that long duration manned missions to deep space are inevitable, as are the inherent complexities and risks associated with such large scale undertakings. With this in mind, it is critical that engineers and scientists work toward understanding the difficulties that lie ahead. Analog studies such as the Desert Research and Technology Studies (D-RATS) are one critical resource available for investigating the effectiveness of various mission architectures. This paper provides a brief outline of D-RATS and focuses on the role, abilities, and benefits of the integration of the Badger X-Loft with the Habitat Demonstration Unit mission during D-RATS 2011.

1 UW-Madison Department of Engineering Physics 2 UW-Madison Department of Physics 3 A. Olson Department of Mechanical Engineering, J. Wachs1, 2, P. Sweeney1 Will Yu1, A. Arnson, Dept of Design Studies; J. Mason1, N.Roth, Granger School of Business; S.Wisser1, M. Fritz1, S. Marron1, M. Lucas1, N. Wong1

The UW X-Hab team would like to thank the following contributors: Wisconsin Space Grant Consortium, UW Space Science and Engineering Center, UW-Madison Dept. of Engineering Physics, UW-Madison Dept. of Mechanical Engineering, Ghallager Tent and Awning Company, Air Distribution Concepts

Introduction

Following the current presidential administration's changes in the prescribed vision for future human exploration of our solar system (Sellers, 2012), the high level objective of Desert RATS 2011 was to test mission architectures for deep space habitation on a manned mission to a near earth asteroid (Gruener, 2012). The annual recurrence of Desert RATS allows teams to rapidly deploy and test new concepts in a safe working environment. With a short turn-around time of approximately one year, the "tiger team" approach of the D-RATS management allows for fast-paced progression of both technologies and mission goals, promoting flexibility in adaptation to changing high level requirements. Both the simulation's destination and team makeup were changed for 2011, as NASA's mission goals shifted to asteroid exploration. Furthermore, the inaugural X-Hab Academic Challenge provided seed funding for student teams, which brought students directly into the critical path and allowed them to become full-fledged Principal Investigators (PIs) for their subsystem under the direction of a faculty advisor (Howe, 2012). Overall, the HDU team, along with the X-Hab team from the University of Wisconsin-Madison, was able to successfully fulfill mission requirements during Desert RATS 2011.

Figure 1: D-RATS team photo with the HDU-DSH, both MMSEVs, and Robonaut.

Habitat Demonstration Unit-Deep Space Habitat overview

The Habitat Demonstration Unit-Deep Space Habitat (HDU-DSH) consists of a large number of complex systems spread across four functional modules. Each module was relatively independent in its operation, with most of the subsystems in each module confined to that module. A notable exception is the avionics package, which allows monitoring of and interfacing with major systems in all four modules from the same iPad interface.

The HDU itself serves as the primary module, housing most of the mission-critical subsystems incorporated within the DSH system. Four large subsystems were given permanent workstations in the HDU; their locations are shown in Figure 2 for reference.

Figure 2: HDU-DSH first level layout

The GeoLab, located in the upper right, serves as a small but capable laboratory in which scientists can investigate geological samples. Samples are loaded from the exterior of the DSH by either an astronaut or the robotic companion, Robonaut. This configuration allows for investigation of specimens without cross-contamination between the environment in the HDU and its exterior. The Tele-operations station allows crew aboard the DSH to communicate with all key players in any given mission scenario: EVA crew, crew aboard the Multi-Mission Space Exploration Vehicles (MMSEVs), as well as mission control (MCC) in Houston and the on-site mobile MCC. The General Maintenance Work Station (GMWS) provided the ability for crew to perform repair work on mission-critical parts, and the Medical Operations Work Station (MOWS) allows crew to conduct medical procedures that may become necessary during the course of the mission. The dashed circle in the center of Figure 2 represents the single-person elevator used to get from the lower deck of the DSH to the loft above.

The Dust Mitigation Module (DMM), shown at the bottom of Figure 2, served as a stand-in for what would be an airlock in a live mission. The primary function of the DMM in D-RATS, keeping dust, sand, and dirt from contaminating the interior of the DSH, is analogous to keeping fine lunar or asteroid regolith from entering a habitat (Mason, 1992).

A new addition for 2011, the Hygiene Module supplied crew with basic personal hygiene needs. This module incorporated a toilet facility, hand wash station, and full body wash station for extended use.

Situated above the three other modules was the expandable loft, known as the BXL, designed and constructed by the University of Wisconsin-Madison team. The loft served as quarters for the crew while conducting activities not directly associated with mission goals. The lower level of the BXL, the 2nd level of the DSH, was comprised of a communal working area and served many functions. Some of the primary mission systems in the loft include the galley, exercise equipment, general workspace, a secondary Tele-operations station for use during times of high activity on the lower deck, and a crew entertainment station. Above the communal area were situated private quarters for each crew member. This third and highest level of the DSH was a cantilevered partial floor, spanning four feet inward from the outer edge of the BXL at a height sufficient to allow uninhibited crew function beneath. Each crew member was allotted ¼ of the 3rd floor for personal use. Air circulation and power were provided in each of the personal crew quarters.

The Badger eXploration Loft

The requirements supplied to the BXL team by the HDU management at NASA were very high level. Requirements such as maximum weight, minimum internal volume, and the high-level mission goals were defined, but middle- and low-level requirements definition was left to the individual teams in order to stimulate creativity in the design process. The benefits seen by NASA management were twofold. First, it was a noted strength of this process that students are often able to think far “outside the box” and come up with inventive ideas, because their lack of real-world experience means they are unaware of what is commonly held to be impossible. Second, by reaching into the educational system, the HDU team wanted to motivate students to supplement their curricula with real-world engineering that would improve the learning experience at each university involved (Howe, 2012).

One of the greatest assets that the BXL team had was diversity. With students representing various engineering departments, the physics department, the School of Business and the School of Textiles and Interior Design, the team was uniquely positioned to handle the wide range of challenges that came with this large scale design and construction.

Overall, the design itself went through many stages, as the schedule adhered to the NASA review process, including System Definition Review (SDR), Preliminary Design Review (PDR), and Critical Design Review (CDR), in preparation for the final construction and system testing of the three major subsystems incorporated into the BXL and described below.

Shell. Externally, the most apparent subsystem of the BXL is its shell. This complex soft goods construction was stitched into a single piece before integration with the other subsystems. Incorporating airbeams for structure (shown in red in Figure 3), four layers of fabric were suspended with three-inch air gaps between each layer to insulate against the extreme desert temperatures, and a waterproof exterior fabric provided further protection against the elements. This construction proved effective as a lightweight, highly stowable, and reliable solution to the problem of expandable insulation and protection.

Figure 3: Cutaway rendering showing the air gaps and external shell layer

Internal Structure. To meet the NASA-defined requirement of the ability to withstand 50 mph wind loading, and the internally (within the BXL team) defined requirement of a 3rd floor for personal quarters, an internal structure was deemed necessary. This skeletal structure, comprised of aluminum columns and carbon fiber cantilevered beams and spanners supporting the third floor of the BXL, provided ample room for crew to work and live. The internal structure and the shell worked together to fulfill all of the structural and insulation requirements even under conditions far harsher than the design criteria specified, including one storm with winds approaching 70 mph.

Figure 4: Interior of BXL with crew member for scale

Electrical, Lighting, and Inflation Control (ELIC). The third major subsystem of the BXL, the Electrical, Lighting and Inflation Control subsystem, was well integrated into the overall habitat, providing dimmable LED lighting to key areas, power outlets where necessary, and a working inflation control system which allowed for inflation during testing and integration. Once the BXL had been fully integrated into the DSH, a NASA-controlled pressurization system was installed and fully networked into the avionics for airbeam pressure monitoring.

Conclusion

D-RATS 2011 was a success on many levels. Aside from gathering valuable data pertaining to the future of human-rated space exploration, D-RATS validated the approach that the HDU team took to producing innovative hardware while immersing students in the critical path for NASA missions in human spaceflight. The continuation of the eXploration Habitat Academic Innovation Challenge is a clear sign that the Challenge’s approach has produced quality work and experience for those involved.

Acknowledgments

On behalf of the X-Hab team from the University of Wisconsin-Madison, the author would like to thank the WSGC, Fred Best and the University of Wisconsin Space Science and Engineering Center, Randy Illif from Bjorksten|Bit 7, and the members of the Habitat Demonstration Unit team for their contributions to the X-Hab project. Special thanks go to our faculty advisor, Dr. Fred Elder and graduate student advisors Max Salick and Tim Feyereisen for their exceptional guidance through the design and construction processes.

References

Gruener, J. E. (2012). NASA Desert RATS 2011 Education Pilot Project and Classroom Activities. Proceedings of the 43rd Lunar and Planetary Science Conference (p. 1583). The Woodlands, TX.

Howe, A. S. (2012). X-Hab Challenge: Students in the Critical Path. American Institute of Aeronautics and Astronautics.

Sellers, P. J. (2012). A Vision for the Exploration of Mars: Robotic Precursors Followed by Humans to Mars Orbit in 2033. Proceedings of Concepts and Approaches for Mars Exploration, Lunar and Planetary Institute (p. 4140). Houston, TX.

Mason, L. W. (1992). Engineering, Construction, and Operations in Space III: Space '92. Proceedings of the 3rd International Conference (pp. 1127-1138). CO.

MDRS Crew 110A – Summary Report

Aaron Olson, Julie Mason

University of Wisconsin-Madison Engineering Physics and Mechanical Engineering Departments

Crew Members

Julie Mason, Crew Commander
Aaron Olson, Crew Engineer, Mission Specialist
Sam Marron, Chief Astronomer, Crew Health and Safety Officer
Lyndsey Bankers, Crew Engineer, Mission Specialist
Mark Ruff, Crew Engineer, Journalist
Will Yu, Chief Engineer

Crew Rotation: December 31, 2011 – January 7, 2012

The crew would like to thank the Wisconsin Space Grant Consortium (WSGC), the UW-Madison Dean of Engineering, their faculty advisor Dr. Frederick Elder, and the Mars Society Mission Support for their knowledge, guidance, and support of the mission. We could not have accomplished our goals without you.

Crew 110A consisted of a team of engineers, both undergraduate and graduate students, from the University of Wisconsin-Madison. During our stay, we celebrated the New Year with an inaugural midnight EVA and red Martian Kool-Aid, we anxiously listened to the UW-Madison Badger football team play in the Rose Bowl, and we completed 15 EVAs within a seven-day period.

The crew set out with a mission to conduct habitability, atmospheric, geologic imaging, and space suit ergonomic studies and a vision to advance the space industry’s exploration capabilities.

The team worked with a lead human factors engineer from NASA to design a habitat architecture study, with a member of ESA on 3D imaging, and with a professional astronomer. The crew also looked at habitable space for crew exercising and made outreach videos for upcoming middle school and high school visits.

Contributions by Lyndsey Bankers, Sam Marron, Will Yu, and Mark Ruff. Financial support provided by the Wisconsin Space Grant Consortium.

As commander, I was honored to have such a great crew. The crew worked long hours during the daytime EVAs, collecting images and videos, and stayed awake through long hours of the night to collect data at the Musk Mars Desert Observatory and complete 3D models.

Summary of Mission’s Accomplishments

• 15 EVAs
• Over 20,000 telescope images
• Over 500 photos for 3D model
• 4 outreach videos

3D Imaging Summary

The purpose of the 3D modeling project was to explore the ability to create a 3D model of the area surrounding the MDRS habitat with Autodesk’s 123D Catch, a free online program that utilizes “the cloud” to compute 3D models from arrays of images. Such a model could be used by future researchers at MDRS to plan robotic traverses and could also be shared online through YouTube and Google Earth when completed. 123D Catch was also used to create models of crew members, small objects inside and outside of the habitat, and remote geographical features. In the seven days of the study, images were taken during EVAs and then uploaded and processed when the internet communication limitations weren’t in effect. The results of the 3D models varied with image exposure, the program’s ability to match features in different images, and the amount of manual image “stitching” performed. The main 3D model covered a 200-foot radius about the habitat. As suspected, 123D Catch was capable of creating the model with some success, but in order to make the model to scale and without gaps in the mesh, quite a bit of manual image stitching (locating matching points in multiple images) and model re-processing was needed. Over 30 hours of work was done during the stay at MDRS to get acquainted with the 123D Catch program and to improve the model. More work will be done before the model is sent out to interested people and posted online. It is clear that the program is much better suited to modeling areas smaller than the 400-foot-diameter circular area used for this project; I would recommend that others interested in using this software for a similar purpose make models of smaller areas and then compile them into larger models using other 3D modeling programs (e.g., Autodesk 3ds Max).

Atmospheric Monitoring Summary

The astronomy study conducted by MDRS Crew 110A aims to estimate the amount of extinction-causing material in the atmosphere. This is achieved by tracking and imaging celestial objects such as stars and planets throughout the night. As a celestial object appears to pass through the sky as the night progresses, the amount of atmosphere through which its light passes changes. Imaging these objects with the imaging astronomer kit in the MDRS observatory allows us to track the intensity of the light as a function of the angle off the horizon. We recorded images near the start of each hour through the nights starting 1/4/2012 and 1/5/2012. We recorded at least 6 images each of Jupiter, Sirius, Rigel, and Polaris.

The data recorded for Jupiter and Sirius best fit the expected trends. We learned that images of Rigel were easily oversaturated, which resulted in a loss of light intensity data; larger, dimmer celestial bodies provided better light intensity data. Polaris does not change its angle relative to the horizon and was imaged as a control in order to track unexpected changes in atmospheric extinction. We recorded weather data for each hour of each recording night, including temperature, humidity, dew point, and cloud cover percentage. Post-mission analysis will allow us to gauge our imaging accuracy and comment on which factors contribute to and detract from accuracy, including the class of object being imaged, temperature data, and the frequency and location of imaging.
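The intensity-versus-elevation trend described above is commonly modeled as magnitude growing linearly with airmass, m(X) = m0 + kX, where X ≈ sec(zenith angle) in the plane-parallel approximation. As a hedged sketch of this kind of post-mission analysis in Python (the measurements below are invented for illustration, not Crew 110A data), the extinction coefficient k can be fit by least squares:

```python
import math

def airmass(altitude_deg):
    """Plane-parallel approximation: airmass ~ sec(zenith angle) = 1/sin(altitude)."""
    return 1.0 / math.sin(math.radians(altitude_deg))

def fit_extinction(altitudes_deg, magnitudes):
    """Least-squares fit of m = m0 + k*X; returns (m0, k)."""
    xs = [airmass(a) for a in altitudes_deg]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_m = sum(magnitudes) / n
    k = sum((x - mean_x) * (m - mean_m) for x, m in zip(xs, magnitudes)) / \
        sum((x - mean_x) ** 2 for x in xs)
    m0 = mean_m - k * mean_x
    return m0, k

# Illustrative hourly measurements (invented, not crew data):
alts = [70, 55, 40, 30, 22, 15]          # degrees above the horizon
mags = [1.02, 1.06, 1.14, 1.22, 1.33, 1.50]  # apparent magnitude at each altitude
m0, k = fit_extinction(alts, mags)
```

A positive k quantifies the extinction material along the line of sight; comparing fits across nights, together with the hourly weather logs, would show which conditions degrade the measurement.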

Exercise Study Summary

The goal of the exercise study is to learn about the effects of exercise on habitability and productivity of the crew. The exercises chosen for this study all utilize body weight and therefore do not require any extra equipment or space in the habitat. The exercises focus on strength and are not meant to tire crew members out, but to ensure that each major muscle group stays strong throughout missions.

Throughout the week various crew members exercised in both the upper and lower levels of the habitat with no problems with crowding of the space. Workouts were modified as the week went on based on how each crew member felt and what they liked and disliked about previous workouts.

Habitat Architecture Summary

The team designed a habitat architecture study to gather human-in-the-loop performance data on the habitat at Mars Desert Research Station.

Data will be compared to habitat architecture studies conducted at NASA Desert Research and Technology Studies (Desert RATS). At Desert RATS 2011, group members Samuel Marron, Aaron Olson, and Julie Mason participated in an engineering evaluation of NASA’s Habitat Demonstration Unit—Deep Space Habitat in Flagstaff, Arizona.

The habitat architecture study examined habitat architecture, requirements, and operations to advance understanding in these critical areas. The results will enhance the human performance data collected during several habitat evaluations conducted at Desert RATS. Habitat questionnaires will be completed at the end of the field test at MDRS by each study participant.

Journalist Summary

On any future mission to Mars, there will need to be an element of public outreach. Public support is critical to the space program as a whole and even more critical to the major projects in manned spaceflight, such as a mission to Mars. If we are to avoid the “flags and footprints” missions of the Apollo era as we venture to Mars, it will be critical that we ensure that the public is engaged.

This is why journalism is a critical part of any Mars mission. A trained journalist can report information to the public in a manner far more palatable than a crew member without such experience, and thus, ensure the public’s ongoing engagement with the Mars program.

This would appear to present a problem, as crew sizes for most missions currently under proposal range between three and six. There simply isn’t room on such a small crew for a dedicated journalist. However, as was learned here at MDRS, this is also not a full-time role. As crew journalist, I come from an engineering background, but I also have experience with communications. While working at the research station, I worked on a number of studies and assisted the chief engineer, all in addition to my role as crew journalist, altogether representing a perfectly reasonable workload.

While serving as crew journalist, I chronicled crew events and activities, many of which were mentioned in the daily reports. A point was also made of getting quality pictures of the landing site, the crew, and any other image that would help engage the public.

It was determined on this mission that journalism is critical to the continued success of a Mars program. However, this does not require that other elements of the mission be hindered by placing a dedicated journalist on the crew. A technician or scientist already suited to the crew could likely be trained in the necessary communication skills to fill this role.

22nd Annual Conference Part Five

Biology and Medical Sciences

Prototype Framework for Dynamic Probabilistic Risk Assessment of Space-Flight Medical Events

Kirsti Pajunen*

Department of Mechanical Engineering, Milwaukee School of Engineering, Milwaukee, WI

Spencer Lane, Brian Moore

Abstract. Medical problems in space, while not necessarily common, can be life- and mission-threatening. Because of this, a model called the Integrated Medical Model (IMM) has been created to assess the risk of certain medical events under space mission conditions. The model uses a type of risk analysis called Probabilistic Risk Assessment (PRA). An effort is underway to convert these static models to Dynamic PRA (DynaPRA) models. These models would integrate changing conditions in space missions, such as human interaction, changing system components, and environmental factors, and apply a risk analysis to find more complete and useful probabilities of certain medical events occurring to astronauts in space. To this end, a prototype DynaPRA framework was created during the first three months of the project using the existing IMM hip, wrist, and lumbar spine bone fracture models as a case study. After the initial implementation was complete, ideas were generated for applying this framework to the rest of the IMM.

Introduction

Probabilistic risk assessment. Probabilistic Risk Assessment (PRA) is a systematic process of estimating the probability of reaching an end-state, defined by the combination of individual event probabilities and their consequences. It is typically used for engineering applications, such as determining the probability of reaching system failure after a valve breaks in a piping network. The PRA simulation is a static, quantitative process that estimates the probability of reaching certain intermediate or end-states resulting from initiating or critical events using a Monte Carlo simulation. The Monte Carlo process estimates the expected value or individual probabilities of the system end-states by conducting many independent trials of the simulation. It works by end-result estimation, a method that is much more efficient than direct calculation of the probabilities.
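The IMM itself is written in SAS and MATLAB; purely as an illustrative sketch (with invented failure probabilities and a made-up four-valve network, not any real IMM module), the Monte Carlo estimation described above looks like this in Python:

```python
import random

def simulate_piping_network(p_valve_fails=0.01, p_backup_fails=0.10, n_valves=4):
    """One static PRA trial: the system fails if any valve breaks AND its backup fails."""
    for _ in range(n_valves):
        if random.random() < p_valve_fails and random.random() < p_backup_fails:
            return True   # reached the failure end-state
    return False

def monte_carlo_probability(trials=100_000, seed=0):
    """Estimate the end-state probability from many independent trials."""
    random.seed(seed)
    failures = sum(simulate_piping_network() for _ in range(trials))
    return failures / trials

p_fail = monte_carlo_probability()
# Analytic check for this toy network: 1 - (1 - 0.01 * 0.10)**4, roughly 0.004
```

Direct calculation is easy for this tiny example, but for systems with many coupled events the sampling approach scales far better, which is the point of the Monte Carlo process above.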

Dynamic probabilistic risk assessment. PRA may be a useful tool in many situations; however, some engineering systems are inherently dynamic. As the name implies, dynamic systems change over time, and the response of a dynamic system may change over time due to multiple external factors. Therefore, a static simulation like PRA does not always accurately represent a dynamic system. Dynamic probabilistic risk assessment, or DynaPRA, is an extension of the PRA concept that characterizes complex dynamic systems, unlike PRA. It accounts for the complex interactions between a system and its internal and external factors. All possible event chains, of which there are infinitely many, can be interrogated using DynaPRA. DynaPRA incorporates dynamic changes to a subsystem of the overall system that may affect other subsystems, like how a broken valve in a piping system can affect the chance that another valve will break. Because of the greater complexity and higher sensitivity of DynaPRA, it allows for a much more complex risk assessment than PRA.
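To make the static/dynamic contrast concrete, here is a toy Python sketch (all probabilities invented for illustration) in which a DynaPRA-style dependency lets one subsystem event raise the hazard of another, something a static PRA with independent draws cannot capture:

```python
import random

def static_trial(rng, p_wrist=0.05, p_hip=0.02):
    """Static PRA: the two failure events are sampled independently."""
    return rng.random() < p_wrist, rng.random() < p_hip

def dynamic_trial(rng, p_wrist=0.05, p_hip=0.02, hip_multiplier=3.0):
    """DynaPRA-style coupling: an earlier wrist failure raises the later hip hazard."""
    wrist = rng.random() < p_wrist
    hip = rng.random() < (p_hip * hip_multiplier if wrist else p_hip)
    return wrist, hip

def estimate_hip_risk(trial, n=200_000, seed=0):
    """Monte Carlo estimate of the second event's probability under a given model."""
    rng = random.Random(seed)
    return sum(trial(rng)[1] for _ in range(n)) / n

static_risk = estimate_hip_risk(static_trial)
dynamic_risk = estimate_hip_risk(dynamic_trial)
```

Because both estimates reuse the same seeded random stream and the dynamic model only widens the acceptance threshold, the dynamic risk is at least the static risk by construction; the gap is exactly the interaction effect that DynaPRA exposes.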

*Funding provided by the Wisconsin Space Grant Consortium.

Integrated Medical Model. The Integrated Medical Model (IMM) is a PRA simulation created through the Human Research Program (HRP), headed at Johnson Space Center (JSC) and at Wyle Labs, JSC’s subcontractor for the project. Glenn Research Center (GRC) has created several new modules for the IMM where data is sparse, such as the bone fracture and renal modules. The IMM is an ongoing project that attempts to quantify medical risks associated with space travel, including those that have not been observed to date, such as bone fractures, kidney stones, and sleeping problems. For example, the risk associated with bone fractures increases as a mission proceeds due to the decrease in bone mineral density (BMD) in low-gravity environments. One main purpose of the IMM is to guide space mission planners in selecting appropriate medical treatments for each mission, while improving the safety of the astronauts in flight (Griffin, 2012). The IMM uses PRA with Monte Carlo simulations to determine the likelihood of 87 different medical events occurring to astronauts while they are on space missions. The IMM is written in SAS, but the modules that GRC constructs are primarily written in MATLAB. The current static nature of the IMM simulation proves useful in some cases, such as finding the probability of a bone fracture occurring after a loading event with a fixed probability. In other cases, however, a more dynamic simulation is needed.

Bone model. There are three existing bone fracture models from the IMM that were used as a case study for converting the static PRA nature of the IMM to a DynaPRA nature: the hip, wrist, and lumbar spine models. The hip model represents a posterolateral fall onto the hip from the side; a fracture occurring in this model is representative of a fracture in the proximal femur, including the femoral neck. The wrist model represents a generic load applied to the wrist, such as that from a fall forward and to the side. The lumbar spine model represents a fall from a ladder onto two feet (Nelson, 2009).

IMM shortcomings and motivation. The IMM is not intended to produce a single precise prediction; rather, in keeping with the PRA approach, it is designed to capture the entire field of what could happen (the uncertainty in the outcome) because the inputs are uncertain. Thus, DynaPRA would provide the IMM a means to improve the estimate of uncertainty propagation and illustrate its impact as time moves forward. A space mission poses countless possibilities for medical events to occur, and a DynaPRA analysis of the mission would create a more realistic representation of risk variation over time, as well as elucidate critical event series that may exacerbate the occurrence and outcomes of time-dependent medical events. It would allow for a higher fidelity analysis and an analysis of the interaction effects between various medical events. Unlike in the current IMM, with DynaPRA a wrist fracture could affect a hip fracture: after a wrist fracture, the astronaut has a higher probability of breaking his hip because he may no longer have a usable wrist to catch himself with as he falls. DynaPRA would be able to analyze how time and other external factors could affect medical risks, such as the space environment, the training and expertise of the astronaut, and changes in the mission (such as changing times for extravehicular activities, or EVAs). The progression of events would not be purely controlled by the logical process as defined by the developer, but would constantly take into account interactions that could change the course of events in unique ways. A dynamic IMM could be applied to any mission environment, such as the International Space Station (ISS), Mars, or the Moon. With this greater fidelity over the current IMM, a DynaPRA IMM could better help mission planners decide what treatments to bring on several different kinds of missions, and provide additional safety precautions that astronauts may need.
It could also help mission planners optimize the crew assignment for specific missions. For all of these reasons, a DynaPRA IMM simulation is being created at GRC. The objectives for this project are described in the next section.

Objectives

Project goals:
1. Construct the DynaPRA framework, including credibility testing.
2. Integrate the IMM, then demonstrate the IMM integration.
3. Finalize the credibility assessment, and demonstrate and examine the need for additional applications.

Prototype objectives:
1. Create a conceptual model for DynaPRA implementation.
2. Develop a prototype DynaPRA framework using the existing IMM bone fracture models as a case study.
3. Devise ideas for integrating DynaPRA into the entire IMM.

DynaPRA IMM Prototype

Design reference mission. Based on the design missions created by the Constellation project, Mars was chosen as the case study reference mission for the prototype framework. The primary mission phases include the initial transfer to Mars, the stay on Mars, and the transit back to Earth. Both transits take 6 months. During this time the crew would be exercising, living on the ship, and sleeping. EVAs would be rare and unlikely. The crew would also have a heightened rate of bone loss compared to on Mars or Earth. The total stay length on Mars would be 18 months. The crew would spend their time exercising, living on the ship, and sleeping, and would likely spend a significant amount of time on EVAs (McElyea, 2007).

Event classification. Events are classified and separated into one of three classes. Class 1 events are those that are not re-schedulable, such as launches and orbital insertion burns. With the critical mission events defined above, the Class 1 events are launch and entry, descent, and landing (EDL). Class 2 events are required but re-schedulable; these are mostly mission events with a few medical events, including sleeping, EVAs, intravehicular activities (IVAs), and exercising, in addition to certain medical conditions such as space adaptation sickness. Class 3 events are scheduled randomly and do not have to occur. This class contains the bulk of the medical events and a few mission events. The current Class 3 events are each of the bone fracture modules.

State variables. There are several variables that affect all parts of the simulation. These variables are called state variables because they account for the state, or status, of different parts of the system, such as the environment and the astronaut. The state of the environment (for example, whether the astronaut is on the Moon or on Mars) or the state of the astronaut (such as whether his hip is broken or not) affects the probabilities of several events occurring, along with the probabilities of the outcomes of those events.

The state variables are calculated at the beginning of the simulation, before scheduling any Class 2 events. The astronaut state is first generated with a function called GenerateAstronaut. The state of the astronaut is stored in two structure arrays, or structs. The first is astronautState and the second is astronautAnthropometrics. The astronaut state was split into two structs due to the two distinct types of astronaut variables present. The struct astronautState contains astronaut variables that can change; i.e., they are not static. Such variables include the BMD, current fractures, and organ states of the astronaut. The fractures and organ states are sub-structs of the main struct and contain the states of specific bones and organs. The organ state sub-struct does not currently exist, but would need to be implemented as the prototype is formed into the final simulation. After an event is executed, certain states of the astronaut can change, and these are updated within the astronautState struct. BMD is constantly decreasing with time until the minimum BMD is reached, and it is always updated. The current fractures are stored as Boolean variables and are updated as fractures occur. For example, if an astronaut hits his knee and shatters his kneecap, the fracture state for ‘knee’ is updated to broken. This could have serious repercussions for future events. The organ states could be stored as integers that represent the seriousness of the organ injury. For example, 0 could mean that the organ is functioning perfectly, whereas 10 would mean that the organ is completely dysfunctional. The organ states would be updated as medical events execute and organs change condition.
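The two structs described above map naturally onto typed records. Here is a minimal Python analog (the GRC modules themselves are MATLAB; the field names follow the text, but the default values and the representation of sub-structs as plain dictionaries are assumptions made for this sketch):

```python
from dataclasses import dataclass, field

@dataclass
class AstronautState:
    """Time-varying astronaut variables (analog of the astronautState struct)."""
    bmd: dict = field(default_factory=lambda: {"hip": 1.00, "wrist": 0.55, "spine": 1.05})
    fractures: dict = field(default_factory=lambda: {"hip": False, "wrist": False, "spine": False})
    organ_states: dict = field(default_factory=dict)  # 0 = healthy ... 10 = nonfunctional

@dataclass(frozen=True)
class AstronautAnthropometrics:
    """Static astronaut variables (analog of astronautAnthropometrics)."""
    height_m: float = 1.78
    weight_kg: float = 75.0
    damping_coeffs: dict = field(default_factory=dict)
    spring_constants: dict = field(default_factory=dict)

def generate_astronaut():
    """Analog of GenerateAstronaut: build both state records."""
    return AstronautState(), AstronautAnthropometrics()

state, anthro = generate_astronaut()
state.fractures["wrist"] = True  # a wrist-fracture event flips the Boolean state
```

Freezing the anthropometrics record mirrors the text's distinction between static and time-varying astronaut variables: an accidental write to the static struct raises an error instead of silently corrupting the simulation.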

The second struct for the state of the astronaut is astronautAnthropometrics. The anthropometrics are the static variables of the astronaut that either do not change or vary very little throughout the mission. Such variables include the height and weight of the astronaut, damping coefficients of certain parts of the body, and spring constants for certain fracture areas. These variables certainly affect the probabilities of events; however, they change relatively little as the mission progresses, so they are not constantly updated. They are, however, constantly called during events to calculate probabilities and to schedule appropriate events.

The BMD of the astronaut is an important state variable. It is one of the variables where time is a very important factor. The current BMD of the astronaut is constantly decreasing with time due to the less-than-1-g environments. The rate of the bone loss is slowed by the large amount of exercise that astronauts get in space. For the prototype, it was decided to make the rate of bone loss linear in time, which can be changed for the final model. There are several factors that affect the rate of the bone loss, including where the astronaut is located for the mission (Mars, spaceship, etc.) and whether the astronaut is exercising regularly. If the exercise machine broke, for example, the rate of bone loss would increase due to the lack of exercise. It is important for the BMD to be updated before and after each event, because BMD is an important factor for models such as the bone models, where the chance of breaking a bone increases as the BMD gets smaller.

The function BoneDensity constantly updates the BMD. It takes into account the state of the environment, such as where the astronaut is or the state of the exercise machine, in order to calculate the rate of bone loss. First, it checks whether the minimum BMD has been reached; each part of the body has a minimum BMD below which it cannot fall. If the minimum BMD for that part of the body has not been reached, the BMD loss rate is found using the environment state variables. This rate is then used to calculate the current BMD. The current BMD for each specific area of the body is then output into the astronautState struct, and the BMD has been updated for use in the next event.
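The BoneDensity update described above can be sketched as follows in Python. The loss rates, exercise factor, and per-site minimums are invented placeholders, not IMM values; only the structure (environment-dependent rate, linear-in-time decrease, clamping at a per-site minimum) follows the text:

```python
MIN_BMD = {"hip": 0.70, "wrist": 0.40, "spine": 0.75}  # illustrative floors, g/cm^2

def bone_loss_rate(location, exercising):
    """Loss rate per 30-day month; both numbers are assumptions, not IMM data."""
    base = 0.010 if location == "spaceship" else 0.004  # faster loss during transit
    return base * (0.5 if exercising else 1.0)          # regular exercise slows the loss

def update_bmd(bmd, site, location, exercising, dt_months):
    """Analog of BoneDensity: linear-in-time decrease, clamped at the site minimum."""
    if bmd[site] <= MIN_BMD[site]:
        return bmd  # minimum already reached; no further loss
    rate = bone_loss_rate(location, exercising)
    bmd[site] = max(MIN_BMD[site], bmd[site] - rate * dt_months)
    return bmd

bmd = {"hip": 1.00, "wrist": 0.55, "spine": 1.05}
update_bmd(bmd, "hip", "spaceship", exercising=True, dt_months=6)  # one 6-month transit
```

A broken exercise machine would be modeled here by calling the update with exercising=False, which doubles the loss rate exactly as the text describes.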

The final state variable currently developed is the environment state. Of course, more state variables would need to be added as the prototype is converted to the final simulation. The environment state contains important information, such as where the astronaut is (Mars, spaceship, etc.), the current gravitational acceleration, and the state of the exercise machine (whether it is being used or not). The environment greatly affects the probabilities of events throughout the entire simulation. It affects things like the rate of bone loss, the speed at which impact occurs for loading events, and whether or not certain events take place. The environment state is stored in a struct called environmentState and is not currently generated by a function at the beginning of the simulation. A generation function is desired, however, and it would probably be wise to create one for the final model. This way, it would be easy to create all of the needed environment variables right away and have them at hand as soon as they are needed. Whenever events occur, environmentState could be updated, such as when the astronaut moves from Mars to the spaceship or the exercise machine goes from broken to fixed.

Framework. The frameworks for each event classification are separated into different functions. The Class 1 framework is the main program that is used to run the model. During initialization, the initial astronaut state and anthropometrics as well as the initial environment state are generated. The Class 1 events are then scheduled and the initial event is executed. The Class 2 framework is then run between the current time and the next Class 1 event time. After the Class 2 framework executes, the next Class 1 event is executed. This is repeated until there are no more Class 1 events to execute.
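The Class 1 loop described above can be sketched as follows; `run_class2` stands in for the Class 2 framework, and all names are ours rather than the model's:

```python
import heapq

# Skeletal sketch of the Class 1 framework loop: execute the initial event,
# then alternate Class 2 processing with the remaining Class 1 events.
def run_mission(class1_events, run_class2):
    """class1_events: list of (time, execute_fn); run_class2(t0, t1) fills gaps."""
    heapq.heapify(class1_events)
    t, execute = heapq.heappop(class1_events)
    execute()                          # execute the initial Class 1 event
    while class1_events:
        t_next, execute_next = heapq.heappop(class1_events)
        run_class2(t, t_next)          # Class 2 framework between Class 1 events
        execute_next()                 # execute the next Class 1 event
        t = t_next

log = []
run_mission([(5, lambda: log.append("landing")), (0, lambda: log.append("launch"))],
            lambda t0, t1: log.append(("class2", t0, t1)))
```

A heap keeps the Class 1 events ordered by time regardless of the order they were scheduled in.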

Class 2 events are scheduled between the start and end time of the function. The Class 3 framework is then executed between the current time and the next Class 2 event time. This determines if a random event occurred between the two time periods. After running the Class 3 framework, the program investigates the system to see if there were any significant consequences during the time period. A significant consequence for these models could be an actual fracture occurring. This changes what types of Class 2 events will be executed. For instance, if an astronaut breaks their hip, all EVAs most likely will be postponed until the hip heals. EVAs would then have to be re-scheduled, canceled, or given to another astronaut. If no significant consequences occurred during the time period, the next scheduled event is executed. If a significant consequence does occur, different Class 2 events are scheduled during the remaining time period based on the consequence. This process repeats until there are no more events to execute in the Class 2 event queue.

The Class 3 framework functions in a similar manner to the Class 2 framework and is run over a time period. It starts by scheduling Class 3 events between the start and end times. It then executes these events until either there are no more events to execute or a significant consequence occurs. Figure 1 shows the overall sequence of events for the entire framework.

Scheduler. The scheduler is the primary component of a functional DynaPRA model. One of the key attributes of DynaPRA is its ability to pursue all possible avenues of investigation by scheduling its own event trees dynamically. This requires a recording and sampling of the current and past state, combined with a robust algorithm that most accurately reflects reality when planning future events. The structure of the scheduler is divided into three tiers, each of which plans a class of events in a hierarchy based on requisite information. The first layer of the scheduler plans the Class 1 events. These are assigned at the beginning of the mission based on a mission plan.

Figure 1. Framework Sequence Diagram - UML Sequence Diagram Showing the Flow of the Overall Framework.

The discrete event occurrences do not actually effect a change in the health of the astronaut. Rather, they change the environment state in a way that is fixed until the next Class 1 event. The first layer therefore schedules ranges, or zones, of fixed environmental state parameters within which Class 2 and Class 3 events can be scheduled with confidence, because between Class 1 events the parameters affected by Class 1 events are not altered. The Class 2 scheduler works in much the same way as the Class 1 scheduler, in that its events denote a change in state, between which Class 3 events can be scheduled with confidence.

Class 3 events are presently discrete medical events that denote a change in the astronaut’s state. The structure of the hierarchy was built from the core outward, based on what was needed at the fundamental level to properly schedule medical events and what was required from the outer layers. The Class 3 events needed to be scheduled randomly, both in their occurrence and in the time at which they occur. Multiple methods were considered, and are discussed further below. For the initial structure, the Class 3 events are built from modules - in this case the three bone fracture models. These models are converted into two separate modules. One module is a stripped-down version of the former bone fracture model, which calculates the probability of a bone fracture from the given astronaut and environmental parameters, such as body mass, BMD, fall height, and gravity, and generates a probability of fracture given the impact event. This percent chance is then evaluated, and the module returns whether or not a bone fracture occurred.

The second module is an “event chance” module, which divides an arbitrary span of time into discrete slices of arbitrary resolution relevant to the medical model in question. The module determines the probability of an impact capable of producing a fracture during each of those time slices based on the environment state and astronaut state. A vector of random values, of length equal to the number of time slices, is created and compared with the vector of probabilities. At every point at which the random number is less than the probability, an event is determined to occur and the medical event is scheduled. This allows time-dependent event probabilities, such as fatigue, to be referenced as functions and overlaid on otherwise uniform probability distributions. The choice of this system is discussed further below in comparison with a Poisson distribution. This process is paralleled by every medical event model, creating a timeline of all medical events that occur during an arbitrary time. All medical events determined to occur are placed into a queue system and executed by their respective medical modules in chronological order.
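The slice-and-compare logic of the event chance module can be sketched as follows; the probability function and all names are illustrative, not taken from the model:

```python
import random

# Sketch of the "event chance" module: slice a time span, then compare a
# vector of per-slice event probabilities against a vector of random draws.
def schedule_events(t_start, t_end, n_slices, prob_fn, rng=random.random):
    """prob_fn(t) gives the per-slice event probability at time t
    (it may encode time-dependent effects such as fatigue)."""
    dt = (t_end - t_start) / n_slices
    times = [t_start + i * dt for i in range(n_slices)]
    probs = [prob_fn(t) for t in times]
    draws = [rng() for _ in times]
    # An event occurs in every slice where the draw falls below the probability.
    return [t for t, p, u in zip(times, probs, draws) if u < p]
```

Passing a deterministic `rng` makes the scheduler reproducible, which is useful when replaying a mission trial.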

If a medical event is determined to occur, or if the astronaut or parameter state is changed in some way, then the remaining schedule of queued events may no longer reflect the reality of the situation. At this point, after the state is changed by the consequence of the medical event, the event probabilities for the remaining time are recalculated and a new timeline of events is queued up to be executed.

While the Class 3 scheduler can work over any arbitrarily small span of time, it is more efficient to schedule as far ahead as possible at once. This efficiency is countered by the growing likelihood that a state change will force a recalculation of the probabilities, rendering the scheduled events beyond that point in time potentially invalid and therefore unusable. With only these two factors, the span of time scheduled would be determined by the sum of the probabilities of the medical events multiplied by the chance of each medical event having a consequence. However, because the chance of a medical event is very small with our current number of models - and, given the low medical event rate in space, will remain small no matter how many models are integrated - a third factor limits the scheduling span long before this equilibrium of efficiency can be reached: Class 2 events, which by their nature are guaranteed to change the environment state and therefore require recalculation for every event that follows, occur several times per day.

Class 2 events act as a bridge between the completely rigid Class 1 events and the completely fluid Class 3 events. While Class 3 events can be executed, and require rescheduling of Class 2 and Class 3 events only on relatively rare occasions, Class 2 events require constant analysis of scheduling. The complexity introduced is that Class 2 events are beholden to a third set of parameters, or ‘mission state’, made up of other relevant factors influencing the scheduling of EVAs, sleep, exercise, and other day-to-day activities. Because far more abstract factors go into this scheduling, several assumptions have been made to facilitate a plausible Class 2 scheduler, with room for improvement or overhaul, while allowing the rest of the simulation to be tested in the interim.

The parameters and arbitrary factors behind scheduling Class 2 events represented a large degree of complexity that needed to be simplified. Class 2 events are more difficult than Class 3 events for two primary reasons. The first is that a realistic schedule of Class 2 events relies on some form of intelligent decision making, rather than probabilities that only account for the chance of an event. The second is the interdependence between Class 2 events. Class 3 events did not influence each other while being scheduled en masse, because they only alter each other through a rarely realized potential consequence once executed. Class 2 events, by contrast, must always reflect the occurrence of all other Class 2 events, because every Class 2 event has consequences simply by occurring. Whether or not an astronaut has an EVA on a given day will always affect his sleep schedule or exercise routine. This creates the difficulty of scheduling Class 2 events en masse, because of the nondeterministic nature of the schedule. Modeling, or at least simulating, a realistic scheduling process is a difficulty that has yet to be overcome.

The Class 2 Scheduler currently works as an aggressive schedule of predetermined goals based on a continuous model of time. Class 2 events currently include both in-transit and Mars variants of Sleeping, Exercising, and EVAs, as well as IVAs, which serve as a default for non-specific activity. At the beginning of the mission, a data queue specific to each Class 2 activity is filled with occurrences of the desired event, including a specific alphanumeric designation for the event, the duration of the event, the first point in mission time at which the event can be scheduled, and the last point at which it can be scheduled.

The queue takes the list of each type of event and organizes it by priority. Priority is given first to the events whose window of opportunity for scheduling closes first, which minimizes the number of missed events. Secondary priority is given to the longest-duration events, thereby front-loading work. Events whose windows of opportunity have closed are deleted. When the queue is prompted with a specific command, it outputs the top-priority event for the Scheduler to schedule and deletes it from memory.
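The priority ordering described above can be sketched in a few lines; the tuple layout (name, duration, window open, window close) is our own illustration:

```python
# Sketch of the Class 2 priority ordering: earliest-closing window first,
# then longest duration; closed windows are dropped.
def prioritize(events, now):
    """events: list of (name, duration, open_t, close_t) tuples."""
    live = [e for e in events if e[3] > now]       # delete closed windows
    # Sort by closing time ascending, then duration descending.
    return sorted(live, key=lambda e: (e[3], -e[1]))

queue = [("EVA-1", 6, 0, 48), ("Sleep", 8, 0, 9999), ("EVA-2", 4, 0, 48),
         ("Missed", 2, 0, 10)]
ordered = prioritize(queue, now=12)   # "Missed" window has already closed
```

Popping from the front of `ordered` then yields the top-priority event, matching the queue behavior described.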

This structure is well suited for events like EVAs, which are pre-planned, finite, and have a window of opportunity in which they can occur. The structure can also easily be adapted for events like Sleep, which are generally homogeneous and have no unique windows of opportunity. The Sleep queue is preemptively filled with an excessive number of sleep events of equal duration, each with a window of opportunity stretching from mission start to mission end. Without creating extra framework, this allows sleep to be scheduled by no other factors than the convenience of and desire to sleep, where the number of events achieved is unimportant. Exercise is handled similarly. The queue system also leaves room for additional Class 2 events to be scheduled during the mission in future iterations of the model.

The queue structure permits the Scheduler to now determine if “Astronaut would like to sleep now” or “Now is a good time for an EVA on Mars”. Once determined, the queue can be called, and an event is loaded into the events for the day.

The second part of the Class 2 Scheduler is deciding what the priority activity is at a particular moment in time. Each type of Class 2 event is assigned variables for a sigmoid function, 1/(1+exp(-x)). This function expresses the priority of an event as a decimal from 0 to 1 based on the time x after the last event of its type began. In the case of sleep, immediately following a sleep there is virtually no priority to engage in another 8 hours of sleep. After 18 hours or so, this becomes a small but growing priority. Over the next several hours the priority increases steeply, so that after 24 hours sleep is a high priority. By 28 or 32 hours, the priority maxes out, asymptotically approaching 1.000, reflecting the essentially absolute need to sleep. A sigmoid function models this very well and can be adjusted by several variables. Each Class 2 event type has a set of parameters that define when and how quickly the priority for the event changes.
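A minimal sketch of the sleep-priority sigmoid; the midpoint and steepness values below are illustrative choices that roughly reproduce the 18-to-28-hour behavior described, not the model's actual parameters:

```python
import math

# Logistic priority curve: near 0 just after an event, near 1 long after.
def priority(hours_since_last, midpoint=21.0, steepness=0.7):
    """Priority in (0, 1) via the logistic function 1/(1+exp(-x)),
    with x scaled and shifted by the event type's parameters."""
    x = steepness * (hours_since_last - midpoint)
    return 1.0 / (1.0 + math.exp(-x))
```

Shifting `midpoint` or changing `steepness` is how each Class 2 event type would get its own urgency profile.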

After the values are obtained from the sigmoid function for each event type, the Scheduler determines what events occur for the next week. The top-priority event is scheduled first; time is moved forward by the duration of that event, the new highest-priority event is chosen, and the process repeats. Every time an event is scheduled, it is pulled from its queue and its data is placed into a Class 2 schedule queue. The Scheduler plans out an arbitrary amount of time - currently enough events to fill a week - and then executes them in chronological order, updating the time and environment state as they are triggered. Between each execution, the Class 3 scheduler is called and the random events are analyzed before returning to the Class 2 Scheduler for the next Class 2 event. If a Class 3 event changes the state, the remaining Class 2 events in the queue are returned to their respective queues, and the Class 2 scheduler re-plans the events from that point forward.

Emulating this type of decision making efficiently is something not yet achieved by our model. This framework may have potential for the final Class 2 Scheduler, but currently it acts as a place-holding function that provides the necessary behavior of the Class 2 scheduler so that the rest of the model can be run for testing purposes. The Class 2 schedule produced by the scheduler cannot be verified as reflective of reality, or even as having a structure capable of reflecting reality. Many questions remain about exactly how the Class 2 Scheduler should function. The appropriate resolution of detail - whether an EVA occurs at a specific hour, in the morning or the afternoon, or simply on a given day - is of questionable relevance to a useful model. As such, we predict the Class 2 scheduler will be the most volatile scheduling function as this project is continued; it will likely be adapted based on what time-specific medical events are introduced and how complex the mission scheduling algorithms are desired to be.

Discussion and Future Work

General improvements and changes. The efficiency of the model should be taken into account both during the design process and during an optimization process following the completion of a working DynaPRA model. A large magnitude of variability due to the near-infinite potential event sequences becomes problematic for analysis. Currently the model is designed to create results that can be analyzed as a Monte Carlo simulation. It is likely that the model itself will be run hundreds of thousands of times to obtain a useful distribution of results. As the DynaPRA model is expanded with further IMM models, the complexity will grow to the point where several more orders of magnitude of mission trials are necessary, potentially in the realm of millions or billions of trials. Within each mission is a repetitive structure of scheduling events, followed by executing events, followed by rescheduling events. Each function within the program will be called hundreds or thousands of times, and potentially more. This program was never meant to deliver quick results, and at its full potential will likely spend days or weeks delivering usable distributions of medical events and mission risks. With functions potentially being called on the order of thousands of times, every small change in execution time for each function will yield a large change in runtime for the program, and a massive change in runtime for the Monte Carlo results.

Framework. While the framework that was created this summer is a functional prototype, there are still improvements that could be made. First, most of the required functionality should be left up to the developer of the module. A common framework should be developed and inheritance utilized to allow for common class definitions and function calls. Each module could then be stored in a data structure for easy access and processing. For instance, each module could have a schedule method that either schedules that event occurring or returns the time that it occurred. The function in the framework that calls the schedule method could then iterate through the data structure calling the schedule method for each item. The framework would not need any knowledge of the inner workings of each module and would therefore be generic. Adding new modules would be simplified to updating the state variables and adding the new module object to the data structure.

Challenges with data requirements. Due to the complex nature of DynaPRA techniques, accurate DynaPRA models require a large amount of data on event occurrence and interactions to be collected and analyzed. In physiological systems, this is even more true as every internal and external factor has an effect on the state of the body. In order to have a valid system, data would need to be collected on all of the interactions between the environment and medical events that are being modeled. Additionally, because of the low number of medical events that occur in space and the limited amount of time spent in space, this data cannot be collected on the actual system. Some of the data can be collected from medical studies of the population at large but that wouldn’t take into account the effects of microgravity. Eventually, estimations, assumptions, and abstractions will need to be made in order to compensate for missing data.

Integration with the IMM. The next step for this project is to analyze additional IMM models and plan for their integration. The current framework is structured with versatility in mind, but the only tested models are the bone fracture models. Continuous aspects, such as a continuous medical model like the Renal Stone model, have yet to be extensively considered. It is likely an expansion of the framework will be necessary to accommodate continuous processes. Integrating them with the Scheduler and the more discrete medical events will likely be handled by discrete appearances of symptoms, which will be tracked in the astronaut state and considered when planning Class 2 events.

References

Griffin, DeVon. ISS and Human Health Office: Exploration Medical Capability, IMM. n.d. http://microgravity.grc.nasa.gov/SOPO/ICHO/HRP/ExMC/IMM/ (accessed June 2012).

McElyea, Tim. Project Constellation: Moon, Mars and Beyond. Burlington: Apogee Books, 2007.

Nelson, Emily S., Beth Lewandowski, Angelo Licata, and Jerry G. Myers. Development and Validation of a Predictive Bone Fracture Risk Model. National Aeronautics and Space Administration, 2009.

22nd Annual Conference Part Six

Engineering Operating Temperature Dependence of QDOGFET Single-Photon Detectors

Eric J. Gansen†, Sean D. Harrington, John M. Nehls
Physics Department, University of Wisconsin-La Crosse, La Crosse, WI

Abstract: We report on the temperature dependence of the photosensitivity of a quantum dot, optically gated, field-effect transistor (QDOGFET) that uses self-assembled semiconductor quantum dots embedded in a high-electron-mobility transistor to detect individual photons of light. Paramount to the operation of the device is differentiating weak, photo-induced signals from random fluctuations associated with electrical noise. To date, QDOGFETs have only been shown to be single-photon sensitive when cooled to 4 K. Here, we study noise spectra of a QDOGFET for sample temperatures ranging from 7-60 K and discuss how the noise affects the sensitivity of the device when operated at elevated temperatures. We show that the QDOGFET maintains single-photon sensitivity for temperatures up to 35-40 K, where increases in operating temperature can be traded for decreases in signal-to-noise ratio.

Introduction

Single-photon detector (SPD) development is crucial to the advancement of quantum information technologies and measurement science. More effective SPDs are needed to improve the security of quantum communication systems based on quantum-key distribution (Hiskett et al., 2006) and to extend the link lengths and data rates of deep-space communications (Mendenhall et al., 2007; Hemmati et al., 2007; Boroson et al., 2004). SPDs are fundamental tools for quantum optics experiments and also impact the areas of observational astronomy, medical diagnosis and imaging, and light detection and ranging (LIDAR) (Priedhorsky et al., 1996). In general, desirable characteristics for SPDs include high detection rates, low dark counts, and high detection efficiency. Some applications require detectors that are not only sensitive to single photons, but that can also count the number of incident photons that arrive simultaneously. Photon-number resolution is critical for the realization of linear optics quantum computing (Knill et al., 2001), impacts the security of quantum communications (Brassard et al., 2000), and is useful for studying the quantum nature of light (Giuseppe et al., 2003; Waks et al., 2004; Achilles et al., 2006; Waks et al., 2006). In addition, for many commercial applications, SPDs must be compact and exhibit modest power and cooling requirements for operation.

In this work, we investigate how the photosensitivity of QDOGFETs (quantum dot optically gated field-effect transistors) depends on operating temperature. In these novel SPDs, quantum dots (QDs) are embedded in a specially designed high-electron-mobility transistor (HEMT) and used as optically addressable floating gates. The QDOGFET structure and principles of operation are illustrated in Fig. 1(a), and are described in further detail by Rowe et al. (2006), Gansen et al. (2007), and Rowe et al. (2008). A photon is detected when it is absorbed in the structure and electrically charges a QD with a photo-generated hole carrier. The charged QD makes itself known by altering the electrical current that flows through the surrounding transistor. The photoconductive gain associated with the persistent photoconductivity makes QDOGFETs sensitive enough to detect individual photons of light. Previous reports demonstrate that when cooled to 4 K, QDOGFETs exhibit single-photon sensitivity with high internal quantum efficiency (Rowe et al., 2006) and, moreover, can accurately discriminate between the detection of 0, 1, 2, and 3 photons 83% of the time (Gansen et al., 2007; Rowe et al., 2008). While persistent photoconductivity lasting for hours has been demonstrated for temperatures as high as 145 K (Finley et al., 1998), previous demonstrations of the single-photon sensitivity of QDOGFETs have been limited to operating temperatures of 4 K, where thermally activated noise sources are minimized.

† The authors would like to acknowledge M. A. Rowe, S. M. Etzel, S. W. Nam, and R. P. Mirin of the Optoelectronics Division of the National Institute of Standards and Technology (NIST) in Boulder, CO for their contributions to this work, as well as the Wisconsin Space Grant Consortium for its financial support.

Figure 1. (a) Schematic diagram of the composition and band structure of the QDOGFET single-photon detector. CB and VB denote the conduction band and valence band, respectively, and 2DEG denotes the two-dimensional electron gas. (b) Detection circuitry used to characterize the electrical noise and photoresponse of the QDOGFET.

Here we present the results of a systematic study in which we measured the noise spectra of a QDOGFET for different sample temperatures and used a recently developed mathematical framework to determine how the sensitivity of the detector varies with temperature. We show that for temperatures between 7 and 60 K, the QDOGFET exhibits a high degree of 1/f noise [i.e., the power spectral density (PSD) of the noise is inversely proportional to frequency] over a frequency range of at least 100 kHz, and that the noise increases as a function of temperature. Following the mathematical formalism developed in previous work (Rowe et al., 2010), we use the noise data to map the signal-to-noise ratio (SNR) of the detector’s single-photon response as a function of temperature and measurement frequency. Our analysis indicates that QDOGFETs can operate over a broad range of temperatures, where increases in the operating temperature can be traded for decreased sensitivity. We show that the QDOGFET can detect photons with a SNR greater than 3:1 at a measurement frequency of 50 kHz for temperatures up to 35-40 K.

This work highlights a potential advantage QDOGFETs have over a number of the top-performing SPDs that are currently being developed. Many of today’s detectors that have set the standard for detection rate, photon-number resolution, and detection efficiency operate at temperatures well below 10 K. For instance, superconducting transition-edge sensors (TESs) (Miller et al., 2003; Lita et al., 2008; Calkins et al., 2011) provide excellent photon-number resolution and detection efficiency at visible and infrared wavelengths; however, they operate at ~100 mK. In addition, superconducting nanowire single-photon detectors (SNSPDs) (Lolli et al., 2012; Gol’tsman et al., 2001; Baek et al., 2009; Shibata et al., 2010; Baek et al., 2011; Dorenbos et al., 2011; Il’in et al., 2012; Hadfield et al., 2005) and photon-number-resolving arrays (Divochiy et al., 2008) are known for their picosecond response times (Hadfield et al., 2005), but are typically cooled to 3-4 K. Alternatively, visible-light photon counters (VLPCs) (Waks et al., 2003; Waks et al., 2006) that utilize avalanche multiplication operate at 7 K. QDOGFETs are novel alternatives to these detector technologies and may be employed for applications where cooling to below 10 K is not feasible or where tolerance to temperature fluctuations is required.

QDOGFET Detection System

Fig. 1(b) shows a schematic of the detection circuitry used to operate the QDOGFET. The QDOGFET and load resistor (RL) were mounted on the same temperature-tunable cold stage of a liquid helium cryostat and biased with a DC voltage supply, VB. A thin-film resistor that exhibited a resistance of approximately 100 kΩ at room temperature was used as the load in the circuit. The GaAs/Al0.2Ga0.8As QDOGFET exhibited an active region that was approximately 2 µm x 2 µm and contained InGaAs quantum dots. The QDOGFET was reverse biased with a DC gate voltage, VG, that produced maximum transconductance, as desired for photodetection. When illuminated, photo-induced changes in the transistor current, Ids, were read out as voltage changes at the circuit output.

We measured the electrical noise in the output voltage by amplifying the signal and sending it to a spectrum analyzer. Measured in this way, the total noise can be separated into three major contributions: the QDOGFET noise, the thermal noise associated with the load resistance, and the noise produced by the amplifier. The PSD (in units of V2/Hz) of the measured noise can be expressed in terms of the circuit parameters as

NV = G^2 (RQ || RL)^2 (NQ + NL) + G^2 NVA ,   [1]

where G is the voltage gain of the amplifier; RQ is the total resistance of the QDOGFET (i.e., the combined resistance of the transistor channel and contacts); NQ is the PSD (A^2/Hz) of the QDOGFET noise; NVA is the PSD (V^2/Hz) of the amplifier’s input noise; and NL is the thermal noise of the load resistor. When VB = 0, the QDOGFET essentially behaves as a resistor and contributes thermal noise of magnitude NQ = 4kBT/RQ; however, when a bias voltage is applied, the QDOGFET’s noise contributions are modified.

Because we are interested in how the noise in QDOGFETs ultimately limits their sensitivity, we performed a systematic set of measurements that allowed us to remove the noise contributions of the amplifier and load resistor and determine the noise, NQ, fundamental to the QDOGFET. At each fixed sample temperature, we measured NV with VB = 2 V and the noise spectra for VB = 0, where only thermal noise contributes to NQ. Because G, RL, and RQ all vary with temperature, we measured these parameters as well. To determine the noise contributions from the load and amplifier (second and third terms in Eq. [1]), we calculated the thermal noise associated with the QDOGFET using the experimentally determined resistance, RQ, and then subtracted it from the data collected with VB = 0. Once NL and NVA were known, we used them in conjunction with Eq. [1] to determine NQ with the bias applied. This procedure was repeated for selected temperatures between 7 K and 60 K.
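As a numeric sketch of this subtraction procedure, the circuit relation of Eq. [1] can be inverted for NQ once the other terms are known; every parameter value below is invented for illustration only:

```python
# Sketch of the noise-subtraction step built around Eq. [1];
# all numeric values are illustrative, not the paper's measurements.
kB = 1.380649e-23  # Boltzmann constant (J/K)

def parallel(r1, r2):
    """Parallel resistor combination R1 || R2."""
    return r1 * r2 / (r1 + r2)

def nq_biased(NV, G, RQ, RL, NL, NVA):
    """Invert Eq. [1] to recover the QDOGFET noise N_Q (A^2/Hz) under bias."""
    return (NV - G**2 * NVA) / (G**2 * parallel(RQ, RL)**2) - NL

# Example parameters (made up): 10 K stage, 50/180 kOhm resistances, gain 100.
T, RQ, RL, G = 10.0, 50e3, 180e3, 100.0
NL = 4 * kB * T / RL   # thermal noise of the load (A^2/Hz)
```

A quick round trip (building NV from an assumed NQ and inverting it) confirms the algebra is self-consistent.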


Figure 2. (a) PSD of the QDOGFET noise for select operating temperatures: 11 K (solid black), 18 K (dash), 37 K (dash-dot), 60 K (dot). (b) NQ multiplied by frequency. The straight lines represent pure 1/f dependence and are shown for comparison. (Inset) The amplitude coefficient, B, of the 1/f noise as a function of temperature.

Results of Noise Measurements and Analysis

The PSD of the QDOGFET noise is plotted in Fig. 2(a) at four different operating temperatures. The noise spectra exhibit a high degree of 1/f character and grow in magnitude as the temperature is increased. In Fig. 2(b), we plot NQ x f to accentuate the features of the spectra that deviate from pure 1/f noise. When displayed in this way, 1/f noise appears as a constant background, and any gradients in the data indicate additional contributions. To evaluate more quantitatively how the underlying 1/f noise increases with temperature, we fit the PSD measured at each temperature with a function that adds a single Lorentzian peak (with characteristic frequency fL) to the 1/f contribution

NQ(f) = L^2 / [1 + (f/fL)^2] + B^2 / f .   [2]
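For reference, the fit function of Eq. [2] can be evaluated numerically; L, B, and fL are the fit parameters, with made-up values used in the checks below:

```python
# Sketch of the fit function in Eq. [2]: a Lorentzian peak plus 1/f noise.
def nq_model(f, L, B, fL):
    """PSD model (A^2/Hz): Lorentzian of amplitude L^2 plus B^2/f."""
    return L**2 / (1.0 + (f / fL)**2) + B**2 / f
```

With L = 0 the model reduces to pure 1/f noise, so f x NQ(f) is flat, matching the constant-background behavior described for Fig. 2(b).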

The temperature dependence of the amplitude coefficient, B, is plotted in the inset to Fig. 2(a). It increases from approximately 0.1 nA at lower temperatures to 0.3 nA at 60 K.

To determine how the increased noise observed at higher temperatures impacts the photosensitivity of the QDOGFET, we must also investigate how the photoresponse of the device changes with temperature. As illustrated in Fig. 1(a), the detector responds to light when a hole carrier excited by a photon in the absorption layer of the device is trapped by a QD. The charged QD screens the gate field, producing a change in the channel current, Ids, that persists for as long as the hole carrier is trapped in the dot. The detection circuitry shown in Fig. 1(b) converts this current change into a persistent change in the output voltage.

Noise in the output voltage can obscure weak photo-induced steps; however, an effective way to reduce the impact of electrical noise when the arrival time of the photons is known is to apply an average difference filter (ADF) to the signal. An ADF integrates the signal over equal time intervals before and after the arrival of the light pulse and then takes the difference of the two integrated values. The un-normalized transfer function of an ADF filter is given by

W(f) = (2/(iπf)) sin^2(πfτ/2), [3]

where τ is the total averaging time.
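To make the filtering step concrete, here is a minimal numerical sketch of an ADF (our illustration, not the authors' implementation). The step height D, averaging time tau, arrival time t0, and sample spacing are assumed values, and each averaging window is taken to be τ/2 so that the total averaging time is τ.

```python
import numpy as np

def adf(signal, t, t0, tau):
    """Average difference filter: integrate the signal over equal
    windows of length tau/2 just after and just before t0, then
    take the difference of the two integrals."""
    dt = t[1] - t[0]
    before = (t >= t0 - tau / 2) & (t < t0)
    after = (t >= t0) & (t < t0 + tau / 2)
    return signal[after].sum() * dt - signal[before].sum() * dt

# A noiseless photo-induced step of height D arriving at t0
# (all values illustrative).
D, tau, t0 = 1.0e-3, 20e-6, 100e-6
t = np.arange(0.0, 200e-6, 1e-8)
step = np.where(t >= t0, D, 0.0)

S = adf(step, t, t0, tau)  # approaches D * tau / 2 for an ideal step
```

For an ideal step the filter output approaches D·τ/2, the filtered signal of Eq. [4]; on real data the same operation suppresses noise components far from the measurement frequency 1/τ.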

Recently, a mathematical framework was developed to predict the SNR of SPDs based on photoconductive gain when an ADF is employed (Rowe et al., 2010). Within this framework, the filtered signal produced by a pulse of light is given by

S = D/2, [4]

where D  I dsRL is the amplitude of the photo-induced change in the output voltage. The change in the channel current caused by a photon depends on the parameters of the QDOGFET and surrounding circuitry and is given by I ds  gm (eW) /(' A) . In this expression e is the elementary charge, W is the epitaxial layer thickness, ’ is the electric permittivity of the material, A is the transistor active area, and gm  Ids Vg is the transconductance of the QDOGFET in series with the load resistor. D can subsequently be expressed as

g eWR D  m L , [5] ' A

where the system transconductance is related to the fundamental transconductance of the QDOGFET in the absence of a load resistance, gm°, by gm = gm°[RQ/(RL + RQ)].

Plotted in Fig. 3(a) is D predicted by Eq. [5] using experimentally determined gm and RL values (plotted in the inset) and parameters appropriate for the geometry and composition of the QDOGFET. The material permittivity is taken to obey the weak temperature dependence reported by Strzalkowski et al. (1976). A notable characteristic of the photoresponse predicted by the model is that D is relatively constant with temperature even though RL and gm have strong temperature dependences. The resistance RL is approximately 100 kΩ at 60 K, but almost doubles as the temperature approaches 7 K. Since D is proportional to RL, this effect, in and of itself, should cause D to increase as the operating temperature is decreased. However, D is also proportional to gm, which degrades as RL increases. The competing dependences of gm and RL on temperature roughly cancel, resulting in little predicted temperature dependence for the total signal.
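As a rough numerical illustration of Eq. [5] (not the device values used in this work), the sketch below evaluates D for assumed, order-of-magnitude parameters; every number in it is a placeholder.

```python
# All parameter values below are illustrative assumptions, not the
# QDOGFET values used in the paper.
e = 1.602e-19            # elementary charge (C)
W = 1.0e-6               # epitaxial layer thickness (m), assumed
eps = 12.9 * 8.854e-12   # GaAs-like permittivity (F/m), assumed
A = 4.0e-12              # transistor active area (m^2), assumed
g_m = 3.0e-6             # system transconductance (A/V), assumed
R_L = 100e3              # load resistance (ohm)

# Eq. [5]: photoresponse amplitude (V)
D = g_m * e * W * R_L / (eps * A)
```

Note that the temperature-dependent quantities enter only through the product gm·RL, which is why their competing temperature dependences leave D roughly flat.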


Figure 3. (a) The single-photon response, D, measured optically (open circles) and predicted using Eq. [5] and parameters characteristic of the QDOGFET and circuitry (solid circles). (b) The SNR determined by optical measurements (open circles) and predicted by Eq. [7] using the experimentally measured noise spectra when an ADF with τ = 20 μs is used (solid circles). (Inset) The load resistance (solid circles) and transconductance (open circles) of the detection system as a function of temperature. The solid curves are included as guides to the eye.

Once the photoresponse and the noise spectrum of the detection system are known, we can use the mathematical framework outlined by Rowe et al. (2010) to predict the SNR of the system's response to single photons. The ADF (Eq. [3]) filters the signal such that only noise frequencies close to the measurement frequency, fm = 1/τ, substantially influence the sensitivity of the measurement. Noise will produce fluctuations in the filtered signals, S, produced by single photons with standard deviation

σV = [∫0^∞ W^2(f) NQ(f) df]^(1/2). [6]

Consequently, for a particular measurement frequency, the detection system will respond to photons with SNR given by the ratio of Eq. [4] to Eq. [6],

S/σV = [gm eW RL/(2ε′A fm)] [∫0^∞ W^2(f) NQ(f) df]^(−1/2). [7]

The SNR predicted by Eq. [7], given the measured noise spectra, is plotted as a function of operating temperature in Fig. 3(b) for a measurement frequency of fm = 50 kHz. At the lowest temperatures studied, an SNR of approximately 5:1 is predicted. The ratio decreases to 2:1 at 60 K due to increased noise. In previous work (Rowe et al., 2010), an SNR of 3:1 was chosen as the benchmark for single-photon sensitivity. Using this benchmark, the data indicate that the detection system will maintain single-photon sensitivity up to operating temperatures of 35-40 K.
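The chain from noise spectrum to SNR (Eqs. [2]-[4], [6], [7]) can be sketched numerically. All parameter values below (B, L, fL, D, τ) are invented for illustration, in arbitrary but consistent output-referred units; they are not the fitted values from the measurements.

```python
import numpy as np

# Illustrative, made-up parameters -- not the fitted values from the paper.
B, L_amp, f_L = 1e-5, 1e-6, 1e3   # noise-model parameters of Eq. [2]
D, tau = 1e-4, 20e-6              # photoresponse and total averaging time (s)

f = np.linspace(1.0, 1e7, 1_000_000)   # frequency grid (Hz)
df = f[1] - f[0]

# PSD, Eq. [2]: 1/f background plus one Lorentzian
N = B**2 / f + L_amp**2 / (1 + (f / f_L)**2)
# |W(f)|^2 from the ADF transfer function, Eq. [3]
W2 = (2 * np.sin(np.pi * f * tau / 2)**2 / (np.pi * f))**2

sigma = np.sqrt(np.sum(W2 * N) * df)   # filtered noise, Eq. [6]
S = D * tau / 2                        # filtered signal, Eq. [4]
snr = S / sigma                        # Eq. [7]
```

For pure 1/f noise the integral evaluates to (Bτ)²·ln 2, so the SNR is independent of τ; the Lorentzian term is what introduces the frequency dependence discussed later.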

Experimental Verification

To check that the mathematical model properly predicts the behavior of the detection system shown in Fig. 1(b), we performed optical measurements with the system at a number of different operating temperatures. To test our predictions, we illuminated the device with a train of 4000 pulses of light from a diode laser, tuned to be absorbed in the GaAs absorption layer of the QDOGFET. The light pulses were 15 ns in duration and were attenuated such that on average approximately one photon was detected per pulse. The detection system's bias conditions were the same as those used in the noise measurements. With a constant reverse bias applied to the QDOGFET, the system operates in continuous mode, where the QDs discharge randomly. By operating the device in this way, we avoided the additional noise contributions associated with electrically discharging the dots after each pulse of light (Rowe et al., 2010).

In Fig. 3(a), we compare D determined from the optical measurements to the signal calculated using Eq. [5] and the experimentally measured gm and RL. We determined D from our optical measurements by using the ADF to determine the change in the output voltage produced by each pulse of light and then dividing the average change by the average number of photons detected per pulse. The statistical method used to determine this average is described by Rowe et al. (2010). Overall, there is excellent agreement between the optical results and those predicted by our model.

To show that our model effectively predicts the sensitivity of the detection system, in Fig. 3(b) we compare S/σV determined from our optical measurements to the values predicted using the noise spectra. For the optical measurements, the standard deviation, σV, of the system's response to a photon was determined by applying the ADF 4000 distinct times to the output signal acquired in the absence of light [as described by Rowe et al. (2010)] and by removing contributions associated with the external amplifier. Again, there is very good agreement between the optically determined values and those predicted by the model.

Conclusions

We have investigated the effects of operating temperature on the photosensitivity of QDOGFETs. We measured the noise spectra of a QDOGFET as a function of temperature and detailed how the noise impacts the device's photosensitivity using a mathematical framework that was recently developed for detectors that employ photoconductive gain. We subsequently checked the validity of the model by performing optical measurements. Our study shows that QDOGFETs function as SPDs at temperatures above 4 K, where increased operating temperature can be traded for decreased SNR. The QDOGFET system maintained single-photon sensitivity (based on a 3:1 SNR benchmark) for temperatures up to 35-40 K for a measurement frequency of 50 kHz.

Figure 4. The SNR predicted by Eq. [7] as a function of measurement frequency, fm = 1/τ, for a fixed load resistance of RL = 100 kΩ, at operating temperatures of 11 K, 16 K, 30 K, 37 K, and 60 K.

The mathematical model used in this work provides a convenient way of determining the photosensitivity of a QDOGFET detection system once the transconductance and noise spectrum are known, without the need to set up an optical measurement. Consequently, devices can be characterized quickly by performing a few simple electrical measurements. The model can be used to test yield (i.e., predict what fraction of the QDOGFETs produced are capable of detecting single photons) and to determine how changing the experimental parameters affects the sensitivity of the detection system. For instance, in Fig. 4 we show how the SNR varies with measurement frequency and operating temperature for a fixed load resistance, RL = 100 kΩ. If the noise were strictly 1/f in nature, σV given in Eq. [6] would be proportional to τ and, as a result, S/σV would be independent of fm (Rowe et al., 2010). In this case, the detector could operate at arbitrarily fast detection rates without losing sensitivity. The variations in the S/σV curves with fm are caused by the non-1/f contributions to the noise (Fig. 2). Specifically, it is the Lorentzian contributions observed at low frequencies in the noise spectra that result in reduced SNR at those frequencies.

The SNR calculations shown in Fig. 4 were performed with RL = 100 kΩ because this load resistance has provided good results in the past (Gansen et al., 2007); however, improved performance may be possible by modifying the load resistance. System optimization will be the subject of future work.

References

Achilles, D., Silberhorn, C., and Walmsley, I. A., Phys. Rev. Lett. 97, 43602 (2006).
Baek, B., Lita, A. E., Verma, V., and Nam, S. W., Appl. Phys. Lett. 98, 251105 (2011).
Baek, B., Stern, J. A., and Nam, S. W., Appl. Phys. Lett. 95, 191110 (2009).
Boroson, D. M., Bondurant, R. S., and Scozzafava, J. J., "Overview of high rate deep space laser communications options," in Free-Space Laser Communication Technologies XVI, Mecherle, G. S., Young, C. Y., and Stryjewsjki, J. S., eds., Proc. SPIE 5338, 37-49 (2004).
Brassard, G., Lütkenhaus, N., Mor, T., and Sanders, B. C., Phys. Rev. Lett. 85, 1330-1333 (2000).
Calkins, B., Lita, A. E., Fox, A. E., and Nam, S. W., Appl. Phys. Lett. 99, 241114 (2011).
Di Giuseppe, G., Atatüre, M., Shaw, M. D., Sergienko, A. V., Saleh, B. E. A., Teich, M. C., Miller, A. J., Nam, S. W., and Martinis, J., Phys. Rev. A 68, 63817 (2003).
Divochiy, A., Marsili, F., Bitauld, D., Gaggero, A., Leoni, R., Mattioli, F., Korneev, A., Seleznev, V., Kaurova, N., Minaeva, O., Gol'tsman, G., Lagoudakis, K. G., Benkhaoul, M., Levy, F., and Fiore, A., Nature Photonics 2, 302-306 (2008).
Dorenbos, S. N., Forn-Diáz, P., Fuse, T., Verbruggen, A. H., Zijlstra, T., Klapwijk, T. M., and Zwiller, V., Appl. Phys. Lett. 98, 251102 (2011).
Finley, J. J., Skalitz, M., Arzberger, M., Zrenner, A., Böhm, G., and Abstreiter, G., Appl. Phys. Lett. 73, 2618 (1998).
Gansen, E. J., Rowe, M. A., Greene, M. B., Rosenberg, D., Harvey, T. E., Su, M. Y., Hadfield, R. H., Nam, S. W., and Mirin, R. P., Nature Photonics 1, 585-588 (2007).
Gol'tsman, G. N., Okunev, O., Chulkova, G., Lipatov, A., Semenov, A., Smirnov, K., Voronov, B., Dzardanov, A., Williams, C., and Sobolewski, R., Appl. Phys. Lett. 79, 705-707 (2001).
Hadfield, R. H., Stevens, M. J., Gruber, S. S., Schwall, R. E., Mirin, R. P., and Nam, S. W., Optics Express 13, 10846-10853 (2005).
Hemmati, H., Biswas, A., and Boroson, D., Proc. IEEE 95, 2082-2092 (2007).
Hiskett, P. A., Rosenberg, D., Peterson, C. G., Hughes, R. J., Nam, S. W., Lita, A. E., Miller, A. J., and Nordholt, J. E., New J. of Phys. 8, 193 (2006).
Il'in, K., Hofherr, M., Rall, D., Siegel, M., Semenov, A., Engel, A., Inderbitzin, K., Aeschbacher, A., and Schilling, A., J. Low Temp. Phys. 167, 809-814 (2012).
Knill, E., Laflamme, R., and Milburn, G. J., Nature 409, 46-52 (2001).
Lita, A. E., Miller, A. J., and Nam, S. W., Optics Express 16, 3032-3040 (2008).
Lolli, L., Taralli, E., and Rajteri, M., J. Low Temp. Phys. 167, 803-808 (2012).
Mendenhall, J. A., Candell, L. M., Hopman, P. J., Zogbi, G., Boroson, D. M., Caplan, D. O., Digenis, C. J., Hearn, D. R., and Shoup, R. C., Proc. IEEE 95, 2059-2069 (2007).
Miller, A. J., Nam, S. W., Martinis, J. M., and Sergienko, A., Appl. Phys. Lett. 83, 791-793 (2003).
Priedhorsky, W. C., Smith, R. C., and Ho, C., Appl. Opt. 35, 441-452 (1996).
Rowe, M. A., Gansen, E. J., Greene, M. B., Hadfield, R. H., Harvey, T. E., Su, M. Y., Nam, S. W., and Mirin, R. P., Appl. Phys. Lett. 89, 253505 (2006).
Rowe, M. A., Gansen, E. J., Greene, M. B., Rosenberg, D., Harvey, T. E., Su, M. Y., Hadfield, R. H., Nam, S. W., and Mirin, R. P., J. Vac. Sci. Technol. B 26, 1174-1177 (2008).
Rowe, M. A., Salley, G. M., Gansen, E. J., Etzel, S. M., Nam, S. W., and Mirin, R. P., J. Appl. Phys. 107, 63110 (2010).
Shibata, H., Takesue, H., Honjo, T., Akazaki, T., and Tokura, Y., Appl. Phys. Lett. 97, 212504 (2010).
Strzalkowski, I., Joshi, S., and Crowell, C. R., Appl. Phys. Lett. 28, 350-352 (1976).
Waks, E., Diamanti, E., Sanders, B. C., Bartlett, S. D., and Yamamoto, Y., Phys. Rev. Lett. 92, 113602 (2004).
Waks, E., Diamanti, E., and Yamamoto, Y., New J. Phys. 8, 4-8 (2006).
Waks, E., Inoue, K., Oliver, W. D., Diamanti, E., and Yamamoto, Y., IEEE J. Sel. Top. Quantum Electron. 9, 1502-1511 (2003).

Infrasonic Detection

Paul Thomas

University of Wisconsin - Platteville

Abstract. Infrasonic signals have traditionally been hard to detect because of wind and background noise interference. A new type of hardened foam windscreen, which was designed at NASA, was tested to determine how well it works at reducing noise. Different densities and shapes were tested at different elevations. The windscreens were quite effective at filtering out superfluous background noise. The medium density, sphere-shaped windscreen was the most effective at reducing background noise. These windscreens should be effective tools in future infrasonic research. More rigorous tests should be performed in order to better catalog the exact properties of the windscreens, such as how they perform at different temperatures, elevations, and humidities.

Introduction

There are many natural sources of infrasound, such as tornadoes, volcanoes, clear air turbulence, and earthquakes. There are also many man-made sources of infrasound, such as rocket and shuttle launches, satellite re-entry, and other large machinery. Infrasonic detection could be used to find precursor signs to natural disasters, or as a way to monitor heavy machinery and equipment for failure. But what is infrasound?

Infrasound is sound with a frequency of 0 Hz to 20 Hz. It lies below the human hearing range of 20 Hz to 20,000 Hz, and is therefore not usually of interest to us. There has not been much research in infrasonics because of the difficulty creating a microphone that can readily detect infrasound, and because of the difficulty distinguishing infrasonic signals of interest from background noise. The system designed by NASA was created to overcome these problems and begin research on actual infrasonic signals.

The project my team was working on was a method to detect wake vortexes from planes taking off and landing. A wake vortex is like a tornado that comes off the tips of a plane's wings. They are dangerous because another plane can fly into them and get pushed around violently. Wake vortexes are especially dangerous on runways during take-off and landing because, at such proximity to the ground, it is much more likely that the plane will be forced into the ground and crash. Currently, airports wait a predetermined amount of time after each plane takes off before they let another plane take off, giving time for the wake vortexes to dissipate. By detecting these wake vortexes, it is possible to make air travel safer and more efficient by potentially reducing the time between airplane take-offs.

My part of the project was to test the windscreens that were designed to find out how well they reduce background and wind noise. I tested different shapes and densities (referred to as “weights”) at different elevations to determine how effective the windscreens are.

I would like to thank the Wisconsin Space Grant Consortium for their financial support of my NASA internship.

Equipment

There were three primary pieces of equipment used for my experiments with the windscreens: the infrasonic microphone, the windscreen, and the data acquisition hardware. I will briefly explain each piece of equipment and how it was used during the experiment.

The infrasonic microphone was developed at NASA. It was built specifically to detect sound under 50 Hz. It is one of the most sensitive infrasonic microphones ever built. During the windscreen tests the microphones were placed on second and third story roofs, as well as the ground, in order to collect the background noise.

Traditional infrasonic microphones require a large area to set up in. They need that space in order to set up a hose system around the microphone. These hoses have small holes punctured in them at regular intervals and act to filter out the noise from the wind blowing over the microphone. The microphone developed by NASA however, does not require a hose system to operate, but instead uses a compact windscreen to filter out wind and other background noise. The windscreens are made of a hardened, foam-like material. They were made in different shapes, such as spheres and cylinders, and different densities, also referred to as “weights.” The windscreens were placed over the microphones during the experiments in order to test how well they reduced background noise.

The data acquisition hardware was a PULSE brand piece of hardware and a laptop. The PULSE card converted the raw signal from the microphone into something that could be processed by the PULSE software on the laptop and then displayed on screen.

Experiment

The goal of my experiments was to determine approximately how effective the windscreens were. To do this I needed an area that was exposed to a fair amount of wind and background noise. The most convenient location was the roof of the building I worked in. There were no trees or buildings blocking the wind or other background noise on the roof, so it was a good place to conduct the tests.

Three locations were set up to collect data from: the third floor roof, the second floor roof, and the ground. Each location had a place to mount the microphone and windscreen. The second and third floor roof locations had a concrete pillar to mount the equipment on in order to raise the microphone above the meter-high safety wall running around the roof. From each location, a coaxial cable was run back to the control room on the second floor. The same microphone was used at each location in order to remove the possibility of differences between microphones skewing the results.

The cables were all run on the outside of the building and brought in through the window. Running the cables on the outside of the building created the possibility that the cables would act as antennas and pick up background radiation that would interfere with the microphone signal. The cables were measured with an oscilloscope to ensure that the background noise was at a low enough level that it would not interfere with the microphone signal.

The data collection was a fairly simple, two-part process. First, I recorded the signal from the microphone without the windscreen for two minutes. Then I collected another two minutes of data from the microphone with a windscreen on. A Fourier transform was applied to each of these sets of data, and the spectra were compared to determine the reduction in noise at frequencies below 20 Hz. This same process was repeated for each of the locations (third floor roof, second floor roof, and ground), each windscreen density (light, medium, and heavy), and each windscreen shape (sphere and cylinder).
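The comparison step described above can be sketched as follows (our illustration; the actual processing was done in the PULSE software). The synthetic signals stand in for the recorded data, and the 10x attenuation is an assumed value.

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Mean power of x in the band [f_lo, f_hi] Hz via the FFT."""
    spec = np.abs(np.fft.rfft(x))**2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[band].mean()

def noise_reduction_db(bare, screened, fs, f_lo=0.1, f_hi=20.0):
    """dB reduction in infrasonic band power with the windscreen on."""
    return 10 * np.log10(band_power(bare, fs, f_lo, f_hi) /
                         band_power(screened, fs, f_lo, f_hi))

# Synthetic example: the "windscreen" attenuates a 10 Hz tone by
# a factor of 10 in amplitude over a two-minute recording.
fs = 1000.0
t = np.arange(0.0, 120.0, 1.0 / fs)
bare = np.sin(2 * np.pi * 10 * t)
screened = 0.1 * bare

r = noise_reduction_db(bare, screened, fs)  # 100x power ratio -> 20 dB
```

A tenfold amplitude reduction corresponds to a hundredfold power reduction, i.e., 20 dB, which matches the scale of the reductions reported below.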

The experiments were run multiple times, on different days, and at different times of the day. Multiple tests were performed so a more accurate noise reduction value could be determined. The unpredictable nature of the wind made it very important that many tests were run. It was entirely possible that the wind would be blowing during one test and not another. By running the tests many times the results are more likely to be accurate.

Results

For the 15-pound (medium density) windscreens, the spherical shape performed the best overall, reducing the noise by 33.5 dB at 20 Hz and 31.2 dB at 10 Hz. The cylindrical windscreen had noise reduction levels of 28.6 dB and 26.2 dB at the same frequencies, respectively. Unexpectedly, the cylindrical windscreen performed slightly better on the ground than the spherical windscreen. It is unknown at this time if that is a typical result. Another promising result is that similar levels of background noise were picked up at all three heights. Because the background noise is similar at all three elevations, the noise reduction results between the three elevations can be more easily compared.

The experiment was repeated with 12-pound windscreens instead of 15-pound windscreens. The microphone was the same one used in the previous experiment. The results were similar, though the noise reduction was not as great as with the 15-pound windscreens. The noise reduction of the spherical windscreen was 23.6 dB at 20 Hz and 21.8 dB at 10 Hz. These results are not surprising: the less dense windscreens should allow more noise to pass through them. The background noise on the 2nd and 3rd floors was also higher for this experiment than for the 15-pound experiment. Interestingly, the cylindrical windscreen performed better than the spherical windscreen at all three elevations.

The next test was performed with only the spherical windscreens on the second and third floors. The purpose of the test was to compare the noise reduction of the differently weighted spherical windscreens. Table 1 shows the results of this test. From these preliminary test results, the 15-pound spherical windscreens seem to perform the best over the broader infrasonic range.

Comparison of Noise Reduction (dB) for Spherical Windscreens

                      3rd Floor                      2nd Floor
Frequency (Hz)   30 Pound  15 Pound  12 Pound   30 Pound  15 Pound  12 Pound
0.125              -4.5      10.5       1.5        1.1      -1.1       1.1
1                   1.8      15.1       7.0        5.6       8.3       1.6
5                  18.5      19.8      23.4       18.1      24.7       8.8
10                 17.5      20.1      25.7       22.0      31.2      21.8
20                 16.0      26.9      21.7       18.9      33.5      23.6

Table 1 is a summary of the amount of noise reduction from the various tests performed on the 2nd and 3rd floor roofs. Negative values indicate the noise level was higher with the windscreen on, likely due to the wind speed increasing.
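Using the values from Table 1, a short script can rank the windscreen weights by their average reduction over the 5-20 Hz portion of the band, with both floors combined (one plausible way to summarize the table, not the author's analysis):

```python
# Noise reduction (dB) transcribed from Table 1:
# {weight_lb: {floor: {frequency_Hz: dB}}}
reduction = {
    30: {"3rd": {0.125: -4.5, 1: 1.8, 5: 18.5, 10: 17.5, 20: 16.0},
         "2nd": {0.125: 1.1, 1: 5.6, 5: 18.1, 10: 22.0, 20: 18.9}},
    15: {"3rd": {0.125: 10.5, 1: 15.1, 5: 19.8, 10: 20.1, 20: 26.9},
         "2nd": {0.125: -1.1, 1: 8.3, 5: 24.7, 10: 31.2, 20: 33.5}},
    12: {"3rd": {0.125: 1.5, 1: 7.0, 5: 23.4, 10: 25.7, 20: 21.7},
         "2nd": {0.125: 1.1, 1: 1.6, 5: 8.8, 10: 21.8, 20: 23.6}},
}

def mean_reduction(weight, freqs=(5, 10, 20)):
    """Average dB reduction at the given frequencies, both floors."""
    vals = [reduction[weight][floor][f]
            for floor in ("3rd", "2nd") for f in freqs]
    return sum(vals) / len(vals)

best = max(reduction, key=mean_reduction)  # 15-pound windscreen
```

Averaged this way, the 15-pound windscreen comes out on top (about 26 dB versus roughly 21 dB and 19 dB for the 12- and 30-pound windscreens), consistent with the conclusion drawn from the table.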

The final test performed was to collect an hour of background noise from the second floor with the 15-pound spherical windscreen. The data were recorded as Pascals versus time. Once the data were collected, a spectrogram was generated, shown below in Figure 1. A clear, repeated signal can be seen at 7 Hz and 4.5 Hz. There also appears to be another signal at 3 Hz, but it is hard to distinguish from the rest of the low frequency noise. Because of their very stable, repetitious behavior, these signals are likely caused by the large, ground-mounted air conditioner right next to the building.

Figure 1 is a spectrogram of one hour of background noise from the microphone on the third floor roof.

Conclusions

The infrastructure for an infrasonic detection array was set up on the roof of the test location. Preliminary testing has begun to characterize background noise and test the effectiveness of the windscreens. Because testing is still in an early stage, only a few conclusions can be drawn. One, the windscreens are especially effective at reducing background noise, typically achieving 20 to 30 dB reductions at frequencies between 10 and 20 Hz. Two, the 15-pound, spherical windscreen seems to perform the best at reducing background noise on the roof of the test site. Three, at this time the windscreens look promising for the wake vortex detection project.

Future Work

Because of the scope of this research, there are still many tests that need to be done. The effects of wind speed, humidity, and temperature on signal reception and noise reduction must be determined through rigorous scientific testing. The windscreen weight and geometry should be optimized for the airport application.

Acknowledgments

I would like to thank the following organizations for helping to make this paper possible: the Engineering Directorate (D2), the Strategic Relationships Office (H1), the Wisconsin Space Grant Consortium, and the University of Wisconsin – Platteville. I would also like to thank the following people for helping me during my time at NASA Langley: Dr. Qamar Shams, Dr. Allan Zuckerwar, George Weistroffer, Cecil Burkett, Debbie Murray, and Sarah Pauls.

Towards Billion-Body Dynamics Simulation of Granular Material

Rebecca Shotwell Dan Negrut

Department of Mechanical Engineering University of Wisconsin – Madison

Abstract

Advances are being made in the simulation of granular material, such that billion-body simulations are, perhaps, feasible using a combination of existing hardware and new modeling software technology. This capability is predicted to lead to a better understanding of granular materials, contributing to many areas of research. However, these capabilities are only useful if they are shown to agree well with reality. Beginning at a simple level, a study was conducted to compare the simulation of primitive joints in the Chrono::Engine simulation package with the MSC/ADAMS commercial software package. This simple effort provides a measure for the accuracy with which the software models the interaction of a joint and one or two bodies. This allows more complex analyses to be carried out with knowledge of the software's accuracy at this level.

Overview

The capability to simulate billion-body models will, perhaps, soon be a reality. As computing power and simulation methods improve, the size of simulations can continue to grow [4]. Simulating many-body systems is important to the study of granular materials, since, in this field, it can be very difficult, if not impossible, to accurately measure the parameters of the particles [5]. Furthermore, simulation can be used to test new designs in low- or zero-gravity or other conditions that are infeasible to access or create in the real world [1].

While simulating granular materials clearly can offer many benefits over conducting physical experiments, this capability cannot be fully utilized until it is known to what extent simulation mirrors reality. Validation is essential to knowing whether simulations are modeling reality well, but to conduct a validation, measurements of reality are necessary. Since it is difficult to measure the parameters of granular materials, this study starts by comparing a simpler set of models involved in the multibody dynamic simulation of granular material: primitive joints.

This effort, while not directly addressing the issue of validating granular material simulations, provides assurance that the software behaves correctly for two bodies; it follows that the software should behave correctly for 20 bodies, 2,000 bodies, or 2,000,000 bodies. By comparing a more basic piece of the simulation software, one can check for inaccuracies while also providing a stepping stone for further validations.

The software in question in this effort, Chrono::Engine [7], [8], has been used to create and simulate large, many-body granular material models, such as the one pictured in Figure 1, which consists of 50,000 ellipsoidal particles [2]. MSC/ADAMS, a well-known and well-validated software package [3], provided proven values with which the Chrono::Engine results were compared. Primitive joint models were created in both software packages, and direct measurements (position, velocity, acceleration, joint reaction force, and joint reaction torque) from the simulation of these models were examined.

The authors would like to thank the Wisconsin Space Grant Consortium Research Infrastructure Program for financial support.

Figure 1: Granular simulation of 50,000 ellipsoidal particles, simulated using Chrono::Engine*

Comparison

Introduction. This study compared the simulation of primitive joints in Chrono::Engine and MSC/ADAMS 2010 (ADAMS). By considering the measured position, velocity, acceleration, joint reaction force, and joint reaction torque results from simple models in these two software packages, the project analyzed one basic piece of the Chrono::Engine software.

Methods. To carry out this comparison, a simple model was created for each of a list of primitive joints, shown in Table 1. Identical versions of each model were made in Chrono::Engine and ADAMS. Then the results generated by the simulation of this model in the two software packages were compared. The models each contain one primitive joint which connects a moving body to a fixed body (the ground). There are no applied forces in the models; the bodies are acted on only by gravity. Keeping the models simple allows the results to be more easily analyzed.

Models. The joints that were analyzed in this study are listed in Table 1, with data on the number of degrees of freedom allowed and removed by each joint. The “other” columns in the table refer to the angled directions of translation and axes of rotation. For example, for the angled translational joint, the “other” direction of translation refers to a path that is negative 45 degrees from the positive x axis. The orientation is explained further in the description of the translational joint models.

For the first three joints listed in Table 1 (the “pendulum joints”: the revolute, Hooke, and spherical joints), the models created for the comparison, one of which is shown in Figure 2, each consist of a simple pendulum: one link that is fixed at one end to the ground, allowing the other end to move freely.

* A video rendering of this simulation can be viewed at http://sbel.wisc.edu/Animations/.

Three models were created for each of the remaining three joints from Table 1 (the “slider joints”: the translational, cylindrical, and planar joints) to consider three different joint orientations; examples of these models can be seen in Figure 7. In each model, a body is fixed to the ground through its center of mass by the joint in question. In the first model for each joint, the joint is oriented vertically, offering no resistance to the body’s downward motion due to gravity. In the second model, the joint is oriented horizontally; thus, the joint holds the body in place and prevents it from falling. In the third model, the joint is placed at an angle of negative 45 degrees from the positive x axis (horizontal). Thus, the joint allows the body to slide down the incline in the positive x, negative y direction.

Table 1: List of joints considered in this validation and the degrees of freedom (DOF) allowed and removed by each joint. The "other" columns refer to translational paths and rotational axes that are not oriented along the x, y, or z axes.

                        Translations               Rotations
Joint                 x  y  z  Other  DOF     x  y  z  Other  DOF     DOF Removed
Revolute Joint        -  -  -    -     0      -  -  x    -     1          5
Hooke Joint           -  -  -    -     0      x  -  x    -     2          4
Spherical Joint       -  -  -    -     0      x  x  x    -     3          3
Translational Joint
  Vertical            -  x  -    -     1      -  -  -    -     0          5
  Horizontal          x  -  -    -     1      -  -  -    -     0          5
  Angled              -  -  -    x     1      -  -  -    -     0          5
Cylindrical Joint
  Vertical            -  x  -    -     1      -  x  -    -     1          4
  Horizontal          x  -  -    -     1      x  -  -    -     1          4
  Angled              -  -  -    x     1      -  -  -    x     1          4
Planar Joint
  Vertical            x  x  -    -     2      -  -  x    -     1          3
  Horizontal          x  -  x    -     2      -  x  -    -     1          3
  Angled              -  -  x    x     2      -  -  -    x     1          3

Results. The full results from the comparison can be found in [6]. The results of the first joint from each group are presented in the following two sections.

Revolute Joint. In the Chrono::Engine and ADAMS simulations, a simple pendulum featuring a revolute joint was created. The pendulum, which consists of a link with a mass of 1 kg, is pinned, on one end of the link, at the origin. There are no forces acting on the pendulum besides gravity, which acts in the negative y (downward) direction. The initial position of the link is horizontal, along the positive x axis. The model in ADAMS is shown in its initial position in Figure 2. The pendulum begins at rest. The model is oriented in the x-y plane; therefore, the results in the z direction are all zero.
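For orientation, the motion of such a pendulum can be sketched with a point-mass model integrated by RK4 (a simplification for illustration only: the link in the study has distributed mass, and neither Chrono::Engine nor ADAMS is reproduced here). The link length is an assumed value.

```python
import math

g, l = 9.81, 1.0   # gravity (m/s^2) and an assumed link length (m)

def deriv(theta, omega):
    """Point-mass pendulum, theta measured from the +x axis,
    gravity in -y: theta'' = -(g/l) * cos(theta)."""
    return omega, -(g / l) * math.cos(theta)

def rk4_step(theta, omega, dt):
    """One classical Runge-Kutta step for (theta, omega)."""
    k1 = deriv(theta, omega)
    k2 = deriv(theta + dt/2 * k1[0], omega + dt/2 * k1[1])
    k3 = deriv(theta + dt/2 * k2[0], omega + dt/2 * k2[1])
    k4 = deriv(theta + dt * k3[0], omega + dt * k3[1])
    theta += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    omega += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return theta, omega

# Start horizontal along +x, at rest, as in the revolute-joint model.
theta, omega, dt = 0.0, 0.0, 1e-4
# Specific energy (per unit mass): kinetic + gravitational (y = l*sin(theta))
energy0 = 0.5 * l**2 * omega**2 + g * l * math.sin(theta)
for _ in range(20000):   # 2 s of motion
    theta, omega = rk4_step(theta, omega, dt)
energy = 0.5 * l**2 * omega**2 + g * l * math.sin(theta)
```

Because the model is conservative, the total energy should stay constant; checking that it does is a quick sanity test of the integrator, analogous in spirit to comparing two simulation packages against each other.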


Figure 2: Screenshot of revolute joint pendulum model in ADAMS

The position and velocity results from Chrono::Engine (Chrono) and ADAMS (Adams) (see Figure 3 and Figure 4) show good agreement with only small, though apparently increasing, errors in the x and y directions. The errors in the average position magnitude and average velocity magnitude have values of 0.001065 and 0.035105, which are 0.053% and 1.012% of the ADAMS results, respectively.
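The quoted percentages are consistent with a simple relative-error metric of the form sketched below (one plausible reading of the error figures, not necessarily the exact metric used); the arrays are placeholders for the exported time series.

```python
import numpy as np

def percent_error(test, reference):
    """Mean error magnitude as a percentage of the mean reference magnitude."""
    test, reference = np.asarray(test), np.asarray(reference)
    err = np.mean(np.abs(test - reference))
    return 100.0 * err / np.mean(np.abs(reference))

# Placeholder series standing in for Chrono::Engine and ADAMS outputs.
reference = np.linspace(1.0, 2.0, 5)   # "ADAMS" magnitudes
test = reference + 0.015               # "Chrono" magnitudes, offset by 0.015
```

Here a constant offset of 0.015 against a mean reference magnitude of 1.5 yields a 1.0% error, the same scale as the velocity discrepancy reported above.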

Figure 3: Position results in the x and y directions and error magnitude for the revolute joint models

Figure 4: Velocity results in the x and y directions and error magnitude for the revolute joint models

The acceleration results, shown in Figure 5, exhibit larger discrepancies, the most significant of which is a spike that occurred in the Chrono::Engine simulation shortly after the 1-second mark; the spike is obvious in the x, y, and error acceleration plots, and can also be seen as a small inconsistency in the x, y, and error velocity plots in Figure 4. The error in the average acceleration magnitude has a value of 0.485406, which is 4.437% of the ADAMS results.

Figure 5: Acceleration results in the x and y directions and error magnitude for the revolute joint models

The joint reaction forces are presented in Figure 6. These plots show poor agreement between the two software packages for the forces in the x and y directions; the shapes of the force plots differ, as do the values of the forces. However, the magnitudes of the forces are very similar, indicating that the difference in the x and y components is due primarily to a difference in the definitions of the reference frames. The error in the average reaction force magnitude has a value of 0.6278, which is 4.928% of the ADAMS results.
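The frame-of-reference explanation can be sanity-checked numerically: expressing the same force vector in a rotated frame changes its x and y components but leaves its magnitude untouched. A minimal sketch (the force samples and the 30° frame offset are made up purely for illustration):

```python
import math

def rotate(fx, fy, angle):
    # Express the same force vector in a frame rotated by `angle` radians
    c, s = math.cos(angle), math.sin(angle)
    return (c*fx + s*fy, -s*fx + c*fy)

# Hypothetical reaction-force samples (N) in one solver's frame
samples = [(3.0, -4.0), (1.2, 0.5), (-2.0, 2.0)]
angle = math.radians(30.0)   # assumed frame offset between the two packages

for fx, fy in samples:
    gx, gy = rotate(fx, fy, angle)
    # Components disagree, but the magnitudes match to machine precision
    print(fx, fy, "->", round(gx, 3), round(gy, 3),
          "|F| =", round(math.hypot(fx, fy), 6), round(math.hypot(gx, gy), 6))
```

This is why comparing force magnitudes, rather than components, is the appropriate test when the two packages define their joint frames differently.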

Figure 6: Joint reaction force results in the x and y directions, magnitude, and error magnitude for the revolute joint models

Translational Joint. For the translational joint, three models were created in each of Chrono::Engine and ADAMS. Each model consists of a body with a mass of 1 kg. The body is fixed, through its center of mass, to the ground at the origin by a translational joint. In the first model the joint is oriented vertically; in the second it is horizontal; and in the third it is oriented at a 45 degree angle down (clockwise) from the positive x axis (horizontal). The ADAMS models, in their initial positions, are shown in Figure 7. There are no applied forces; the body in each model is acted on only by the joint and gravity. The initial position of the body in each model is at the origin, and the body starts from rest. The models are located in the x-y plane; therefore, the results in the z direction are omitted.


Figure 7: Screenshots of translational joint models in ADAMS; the joints are oriented, from left to right, vertically, horizontally, and at an angle

The position, velocity, and acceleration results show good agreement between Chrono::Engine and ADAMS; a selection of this data is shown in Figure 8 for the vertically-oriented joint and in Figure 9 for the angled joint. A summary of the error data for the three models is also shown in Table 2.

Figure 8: Position and velocity results in the y direction and error magnitudes for the vertically-oriented translational joint models


Figure 9: Position results in the x and y directions and error magnitude for the angled translational joint models

Table 2: Average error data for translational joint models (horizontal data is omitted since the differences between the magnitudes were either zero or very small)

                            Vertical:                   Angled:
Error in:                   Error         % of ADAMS    Error         % of ADAMS
Avg Position Mag            0.129174761   1.9676%       0.079135785   1.7047%
Avg Velocity Mag            0.038781457   0.3955%       0.027974969   0.4034%
Avg Acceleration Mag        1.2562E-05    0.0001%       5.82645E-06   0.0001%
Avg Reaction Force Mag      5.84522E-07   -             0.000463661   0.0067%

The joint reaction force results (see Table 2 and Figures 10 through 12) show some discrepancies between the two software packages. For the vertically-oriented joint models, the difference in the force results between the two software packages is quite small (at most, approximately 6E-7 N), as Figure 10 shows. For the horizontally-oriented joint, the reaction force in the y direction is negative in Chrono::Engine and positive in ADAMS, as can be seen in Figure 11. This is only a difference in definition, as the calculated forces have the same magnitude. A difference in definition can also be seen in the results from the angled translational joint models (see Figure 12). The forces in the x and y directions agree poorly for this model; however, the mean magnitudes of the forces are the same, providing evidence that the Chrono::Engine force calculations are indeed accurate. The Chrono::Engine results do show some discrepancies from the ADAMS results for the angled model: the force magnitude deviates from the mean value, with the deviations appearing to increase with time.


Figure 10: Joint reaction force data in the y direction and magnitude for the vertically-oriented translational joint models

Figure 11: Joint reaction force data in the y direction and magnitude for the horizontally-oriented translational joint models

Figure 12: Joint reaction force data in x and y directions, magnitude, and error magnitude for angled translational joint models


Future Plans and Conclusion

Overall, this effort was a straightforward means of (i) providing assurance that Chrono::Engine simulations are accurate, and (ii) working towards validating the software’s many-body granular material simulations.

This effort provides evidence that Chrono::Engine generates the same or similar position, velocity, acceleration, and joint reaction force results for the primitive joints considered. The study also highlighted some areas for further investigation, specifically, the spikes in the acceleration and joint reaction forces for the revolute joint models and the discrepancies in the joint reaction forces for the angled translational joint models. These topics will be addressed more fully in future work.

To continue the effort of providing a measure of accuracy for Chrono::Engine, future work will also include validations of the software’s granular material simulations against experimental results. Comparing against ADAMS simulations, as in the current effort, is infeasible for directly validating granular material simulations because ADAMS handles even very small granular systems inefficiently. Therefore, future direct validations of granular material simulations will involve designing and creating physical models from which experimental data (such as mass flow rate) will be gathered.

The direct validation process, outlined in the previous paragraph, is obviously not as straightforward as creating models in ADAMS. This highlights the benefit of the current work in that it provided a simple method for lending greater confidence in Chrono::Engine results, bringing us one step closer to accurate, billion-body simulations.

References
[1] Mazhar, H., Quadrelli, M., and Negrut, D., Using a Granular Dynamics Code to Investigate the Performance of a Helical Anchoring System Design, Technical Report TR-2011-03, Simulation-Based Engineering Lab, University of Wisconsin-Madison, 2011.
[2] Mazhar, H., Granular Simulation of 50,000 Ellipsoids, http://sbel.wisc.edu/Animations/.
[3] MSC.Software, ADAMS Standard Documentation and Help, MSC Software Corporation, ADAMS 2010, 2010.
[4] Negrut, D., Tasora, A., and Anitescu, M., Large-scale parallel multibody dynamics with frictional contact on the GPU, in ASME 2008 Dynamic Systems and Control Conference, E. Misawa, Editor, ASME: Ann Arbor, MI, 2008.
[5] Pöschel, T. and Schwager, T., Computational Granular Dynamics: Models and Algorithms, Springer, 2005.
[6] Shotwell, R., A Comparison of Chrono::Engine’s Primitive Joints against ADAMS Results, Technical Report TR-2012-01, Simulation-Based Engineering Lab, University of Wisconsin-Madison, 2012.
[7] Tasora, A., Chrono::Engine, 2012, available from: http://www.deltaknowledge.com/chronoengine/.
[8] Tasora, A., Silvestri, M., and Righettini, P., Architecture of the Chrono::Engine physics simulation middleware, in ECCOMAS Multibody Dynamics Conference, Milano, 2007, pp. 1-8.

Test and Analysis of the Mass Properties for the PRANDTL-D Aircraft

Kimberly Callan

National Aeronautics and Space Administration Dryden Flight Research Center1

Abstract

The main objective of the "PRANDTL-D" project is to obtain the stability and control derivative yawing moment due to aileron deflection, Cnδa. The sign of Cnδa determines whether the "PRANDTL-D" aircraft experienced proverse or adverse yaw during flight. Proverse yaw, the desirable outcome, occurs when the aircraft yaws in the same direction as the turn due to a novel wing twist and bell-shaped lift distribution. Accurate moments of inertia are essential in order to create a dynamic model of the aircraft and compute the stability and control derivatives, including Cnδa. The robustness of the aircraft simulation also depends on the uncertainty of the mass properties. To obtain the moments of inertia, the aircraft will be hung from two filars and rotated in a bifilar pendulum test. A wireless inertial measurement unit will be used to capture the rotation rate data during testing. These data will then be analyzed with a bifilar pendulum simulation to obtain the experimental values. For the tests, our team used Pro Engineer to predict the moments of inertia, designed the test structure and specific testing procedures, and created a Simulink model of the bifilar pendulum test.

Background

The purpose of flight research for the "PRANDTL-D" aircraft (Primary Research for an Aerodynamic Design to Lower Drag) was to obtain the stability and control derivative yawing moment due to aileron deflection, Cnδa, with a positive sign. This derivative represents the adverse yaw experienced by an aircraft. Adverse yaw is the tendency for the aircraft to yaw, or turn in the horizontal plane, in the opposite direction from the intended turn. For past and current aircraft this derivative is found to be negative, showing that these aircraft experience the adverse yaw effect just described. A positive Cnδa would show that the aircraft experienced proverse yaw, a concept that the Dryden Flight Research Center engineers are attempting to prove.

Proverse yaw is the opposite of adverse yaw: an aircraft yaws in the same direction as the intended turn. Proving proverse yaw can be accomplished in many ways, and the first task assigned to my team of interns was to decide how to perform the necessary flight research and prove this concept. The team was presented with a subscale model of a larger flying wing concept. This model had a 12.3 foot wingspan, with a foam interior and carbon fiber overlay. This flying wing utilized a novel wing twist, computed by my mentor, Albion Bowers, to produce a bell-shaped lift distribution instead of the elliptical distribution that current wings produce. The production of adverse yaw lies in the lift distribution produced by the wing. An elliptical lift distribution means that the wing is still producing a large amount of lift at the wingtips. Because more drag accompanies the production of more lift, when a wing with an elliptical distribution turns, the outside wing slows itself down due to the drag it is producing at the wingtip. Rudders were created to combat these forces, the effects of adverse yaw. A bell-shaped lift distribution, however, produces minimal lift and drag at the wingtips. A wing twist that produces a bell-shaped lift distribution would allow the outside wing to maintain speed in a turn, creating proverse yaw.

1 I would like to thank the Wisconsin Space Grant Consortium for funding my internship experience at NASA.

The engineers at the National Aeronautics and Space Administration Dryden Flight Research Center have christened this subscale model "PRANDTL-D," and their analysis calculates that the design would produce a total carbon footprint reduction of approximately 60%. Additional reductions include airframe weight savings of 20-30% and drag reductions of 8-11%. While many engineers do not believe that the concept of proverse yaw can be proven, the Dryden engineers believe that these calculated reductions are themselves proof that this idea deserves to be explored. Flying wings similar to "PRANDTL-D" have been flown in the past; however, no flight data has ever been recorded.

This is where my intern team had to decide the method in which we would produce flight data. We had to decide what instrumentation would allow us to capture the most accurate data while remaining a logical choice for the aircraft, how to extract the data from these instruments, and how we would analyze this data to obtain the sign of Cnδa. Additionally, we had to select the maneuvers to be performed during flight, bearing in mind the accuracy of our instruments and the ability of our aircraft.

The team decided to break into four subsystems. One subsystem would create a dynamic simulation of the aircraft using Matlab and Simulink; another subsystem, the subsystem I worked with, was in charge of finding the mass properties of the aircraft, including the center of gravity and the moments of inertia; the third subsystem handled the physical integration of the instrumentation on the aircraft; and the last subsystem dealt with day-of-flight procedures, including deciding upon the maneuvers to be performed by the aircraft, the flight cards and briefings, and communications between the team and the RC pilot. The aircraft simulation allowed the team to check that the aircraft would behave in a manner similar to our predictions before flight research was performed on the aircraft. It would also allow us to analyze our data, which we obtained during flight using an unmanned avionics system autopilot called Piccolo II and the appropriate data streaming software (Piccolo Command Center). In order to provide the simulation with a suitable description, or identification, of the "PRANDTL-D" aircraft, the mass properties of the aircraft had to be determined. It is on the mass properties that I focused my research this summer, and it is the mass properties that will be discussed further.

Methods

As a member of the subsystem in charge of determining the mass properties of the "PRANDTL-D" aircraft, it became my objective to experimentally determine the center of gravity and moments of inertia of the aircraft as well as analytically check the experimental moments of inertia. The internship was a ten week experience, and my team first received the aircraft during the seventh week. Due to this timing, my subsystem had a lot of time to prepare our procedures but a relatively short amount of time to perform our experiments.

To obtain the center of gravity, my subsystem drew upon previous classroom and project experience. First, we adopted the standard aircraft coordinate system by placing the origin of the coordinate system at the nose of the aircraft. The x axis then extended positive forward of the nose, the y axis extended positive over the right wing, and the z axis extended positive below the nose. Next, we decided that we could prepare a relatively simple experiment and obtain accurate results if we could raise the aircraft on a few pinpoints, find the weights of the aircraft at those points, and utilize the concept of a summation of moments. We agreed upon three points, and accomplished this task by raising the nose and wingtips of the aircraft using blocks of wood with nails drilled through them. While the aircraft rested on the points of the nails, the blocks were set on zeroed scales. Then the weights that the scales read and their corresponding distances from the nose were put into Equation 1, below.

Eqn 1 WaircraftdCG = w1d1 + w2d2 + w3d3.
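As a concrete illustration, the moment-summation step of Eqn 1 can be sketched in a few lines of code. The scale readings and support distances below are hypothetical placeholders, not the measured PRANDTL-D values:

```python
# Locate the center of gravity along one axis from three point supports,
# using the summation of moments in Eqn 1:
#   W_aircraft * d_CG = w1*d1 + w2*d2 + w3*d3

def center_of_gravity(weights, distances):
    """weights: scale readings at each support; distances: support
    locations along the axis, measured from the nose (the origin)."""
    total = sum(weights)                                # W_aircraft
    moment = sum(w * d for w, d in zip(weights, distances))
    return moment / total                               # d_CG

# Hypothetical readings (lb) at the nose and two wingtip supports, with
# support distances (in) aft of the nose -- for illustration only.
weights = [4.2, 5.1, 5.1]
distances = [0.0, 20.0, 20.0]
print(center_of_gravity(weights, distances))  # CG distance aft of the nose
```

The same function serves for both the x and y axes; only the distances change.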

This method was used to determine the center of gravity along the x and y axes. For the z axis, a slightly different method was employed. For this axis, the original angle that the aircraft made with the ground was measured, along with the distance from the x-y center of gravity to a line that was drawn from wingtip to wingtip, or x0. The angle the aircraft was tilted was then increased and the new angle and distance measured, x1. The difference in the angles was labeled θ, and Equation 2 was then used to find the center of gravity along the z axis.

Eqn 2   dCG = 2x0·cosθ − 2x1

Once we had found the center of gravity, our mentors selected a specific location where they then wanted us to place the center of gravity. We used small iron coupons that we inserted into the aircraft's wings to ballast the aircraft. Our mentors asked for the center of gravity to be located 13.24" aft of the nose along the x axis, as close to the centerline of the aircraft as possible along the y axis, and they were not concerned with the center of gravity along the z axis.

Once the center of gravity of the aircraft was in the preferred location, the moments of inertia had to be found at that location. It was suggested early on that my subsystem utilize the bifilar pendulum method, a method in which the aircraft is suspended from two cables, or filars, and oscillated around its center of gravity. Before beginning, the subsystem decided upon all the tasks that had to be accomplished before testing. We had to: come up with predictions for the moments of inertia to compare against our experimental results, determine which testing factors would affect our experiment enough to affect our results, design and fabricate the filars and rig that would be used to perform our experiments, and create a Matlab simulation of a bifilar pendulum with which to analyze our data. By spending the appropriate amount of time accomplishing these tasks, testing ran smoothly later.

Obtaining predictions for the moments of inertia that we would experimentally measure was critical to remaining confident in our results from testing. My subsystem used a drafting program, Pro Engineer, with the help of an engineer from the Materials Laboratory, to create a drawing of the entire aircraft, including all of the instrumentation that would be on the aircraft during testing. Using Pro Engineer, we were able to assign the correct material densities to each component of the aircraft. This feature allowed the program to compute all of the moments of inertia and products of inertia of our computer model. We expected some deviation from these predicted results, as the as-built "PRANDTL-D" aircraft would be slightly different from our ideal computer model. Once we obtained these predictions we were able to ask the aircraft simulation subsystem which moments and products of inertia affected the end result, Cnδa, enough to require being found experimentally. We were informed that we had to find the moments of inertia Ixx and Izz and the product of inertia Ixz experimentally. The simulation, and therefore the assumed instrumentation of the "PRANDTL-D" aircraft, was robust to the moment of inertia Iyy and the rest of the products of inertia. This meant that we did not have to find these inertias experimentally.

Once the moment of inertia and product of inertia predictions were obtained, we had to determine which factors of our experiment might cause the experimental inertia values to vary and if this variance was great enough to cause concern. With the help of one of our mentors, we determined that we had to develop a linearized dynamics model of the bifilar pendulum and calculate the variance of inertia with respect to each parameter. The model of the linearized dynamics model of the bifilar pendulum was created using Equation 3, below.

Eqn 3   Ib = ((mcomb + mrigging)·g·d²)/(4h·ω²comb) − mcomb·r²comb − mrigging·r²rigging − Irigging   (1)

The variables, in the order in which they appear in the equation, are: the experimental inertia, the mass of the combined system (aircraft and rigging), the mass of the rigging, gravity, the distance between the filars, the length of the filars, the natural frequency of the combined system, the distance from the center of gravity of the combined system to the axis of rotation, the distance from the center of gravity of the rigging to the axis of rotation, and the inertia of the rigging. Using this equation, the derivative was taken with respect to each parameter to calculate the variance of inertia with respect to that parameter. These variances were then used in Equation 4, below, to calculate the total variance in inertia and to determine which parameter affects this variance most.

Eqn 4   σ²I = C²mcomb·σ²mcomb + C²mrigging·σ²mrigging + C²d·σ²d + C²h·σ²h + C²ωcomb·σ²ωcomb + C²ωrigging·σ²ωrigging + C²rcomb·σ²rcomb + C²rrigging·σ²rrigging   (1)

Once the total variance in inertia was found, we applied this procedure to the moment of inertia Ixx. We graphed the total variance in inertia against each parameter and determined that only one graph had a local minimum, and therefore had a specific value to obtain the most accurate data during testing. This parameter was the length of the filars. Graph 1, below, shows the optimal filar length for determining the moment of inertia Ixx was 1.1 feet.
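The sensitivity coefficients C in Eqn 4 are the partial derivatives of Eqn 3 with respect to each parameter, and they can be evaluated numerically with central finite differences. The sketch below uses placeholder nominal values and uncertainties (the actual PRANDTL-D numbers are not given in this report, and Eqn 4's ωrigging term is folded into the rigging-inertia placeholder here), so it illustrates the procedure rather than reproducing the 1.1-foot result:

```python
import math

G = 32.174  # ft/s^2

def bifilar_inertia(p):
    # Eqn 3: inertia of the body, extracted from the combined-system swing
    return ((p["m_comb"] + p["m_rig"]) * G * p["d"]**2
            / (4.0 * p["h"] * p["w_comb"]**2)
            - p["m_comb"] * p["r_comb"]**2
            - p["m_rig"] * p["r_rig"]**2
            - p["I_rig"])

def sensitivity(params, name, rel=1e-6):
    # Central finite-difference estimate of the coefficient C = dI/d(name)
    lo, hi = dict(params), dict(params)
    step = abs(params[name]) * rel
    lo[name] -= step
    hi[name] += step
    return (bifilar_inertia(hi) - bifilar_inertia(lo)) / (2.0 * step)

# Placeholder nominal values (slug, ft, rad/s) and 1-sigma uncertainties --
# purely illustrative, not the values used in the actual test.
nominal = {"m_comb": 0.9, "m_rig": 0.1, "d": 1.0, "h": 1.1,
           "w_comb": 4.0, "r_comb": 0.05, "r_rig": 0.02, "I_rig": 0.01}
sigmas = {"m_comb": 0.005, "m_rig": 0.002, "d": 0.01, "h": 0.01,
          "w_comb": 0.05, "r_comb": 0.005, "r_rig": 0.005}

# Eqn 4: total variance is the sum of squared sensitivity-weighted sigmas
var_I = sum((sensitivity(nominal, k) * s)**2 for k, s in sigmas.items())
print("sigma_I =", math.sqrt(var_I))
```

Sweeping the filar length h through this calculation (with ωcomb re-measured or re-modeled at each h) reproduces the kind of variance-versus-length curve shown in Graph 1.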

Graph 1: total variance in inertia (σ²Ixx) plotted against filar length (feet)

Because the predicted values for Ixx and Izz were very similar, the optimal filar length for obtaining Izz was also 1.1 feet. Soon after we obtained these results, the filars were fabricated. We later realized, however, that due to the testing location we had to use longer filars when experimentally testing for Ixx. We continued to use the 1.1 foot filars for the Izz experiments, but made additional 30 inch filars for the Ixx experiments. This will be discussed in greater detail when I describe the outcome of the experiments.

The next step in preparing for our moments of inertia testing involved assigning personal roles, creating detailed procedures to follow while testing, and determining how we wanted to attach the aircraft to our filars and the filars to the ceiling. After some deliberation, we decided to perform our experiments in Dryden's shuttle hangar, the hangar that was built to accommodate the shuttle in case it was at Dryden while the desert experienced rainfall. Making this decision before typing up our procedures allowed us to keep space and safety considerations in mind.

Since my subsystem consisted of three team members, the testing roles we decided upon included a data acquisition engineer, a systems engineer, and an engineering technician. The data acquisition engineer was responsible for inertial measurement unit pre-test setup, collecting all rotation rate data on the laptop, and understanding sensor output. The systems engineer was in charge of directing the team through the testing procedures, making sure all systems were working properly, and checking that all safety guidelines were followed. Finally, the engineering technician set up the test equipment, initiated the tests, and ensured the testing structure was working properly. I was assigned the role of engineering technician.

Once these roles were decided, the procedures were drafted. All of the experiments began with attaching the rigging to the test structure in the shuttle hangar and to the aircraft. The data acquisition engineer then began data acquisition on the laptop and informed the engineering technician when to initiate each test. The data was collected via a wireless inertial measurement unit that was placed at the center of gravity of the aircraft and attachment plate combination. The attachment plate was fabricated to allow the filars to hold the aircraft without fastening directly to the aircraft itself.
Data was collected for thirty oscillations, after which the oscillation of the aircraft was stopped. This procedure was repeated until three successful data sets were acquired. The aircraft was then removed from the rig, leaving all possible connection hardware. The tests were then repeated without the aircraft mounted to the structure to obtain the inertia of the rigging and attachments. These same procedures were followed for the Izz experiments apart from the orientation of the aircraft during testing. These procedures were also followed to obtain data for Ixz, although the aircraft was also hung in the Izz orientation and the entire procedure was repeated while the aircraft was tilted at angles of 4°, 6°, 8°, and 10°. Two photographs of testing are shown in Figures 1 and 2 below.

Figure 1 Figure 2

Figure 1 shows the orientation of the aircraft during the Ixx experiments. Figure 2 shows the orientation of the aircraft during the Izz experiments; I am initiating the oscillation of the aircraft.

The last piece of preparation that we performed prior to testing included creating a Matlab simulation of a bifilar pendulum with which to analyze our data. The equation of motion of a bifilar pendulum that we input in Matlab, Equation 5, is shown below.

Eqn 5   I·θ̈ + (C1·|θ̇| + C2)·θ̇ + ((m·g·d²)/(4h))·θ = 0   (2)

The simulation was provided with values for θ̇ (the rotation rates from the inertial measurement unit) and differentiated and integrated these to obtain θ̈ and θ. We also input the mass of the system, gravity, the distance between the filars, and the length of the filars. This left unknown the value of inertia and two nuisance parameters, C1 and C2. The simulation knew what the ideal sine curve would look like under the conditions we set, and then attempted to match the experimental curve generated from our rotation rate data to the ideal curve. Once the simulation had performed many iterations matching the curves as closely as possible, it generated the appropriate inertia and nuisance parameter values. The values of the nuisance parameters were, as their name suggests, not important, but we took note of the inertia values to compare to the predicted values that we generated through Pro Engineer prior to testing.
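The structure of that fitting procedure can be sketched as follows: integrate Eqn 5 forward for a candidate inertia, compare the predicted rotation rate against the measured rate, and iterate. The sketch below generates synthetic "measured" data instead of real IMU output, and uses a crude one-dimensional grid search rather than the actual Matlab optimizer (which also fits C1 and C2), so it only illustrates the idea; all parameter values are assumed:

```python
# Bifilar pendulum parameters (assumed for illustration): mass (slug),
# filar separation d (ft), filar length h (ft), gravity (ft/s^2)
M, D, H, G = 1.0, 1.0, 1.1, 32.174
K = M * G * D**2 / (4.0 * H)   # restoring stiffness from Eqn 5

def simulate(I, c1, c2, t_end=10.0, dt=1e-3, theta0=0.2):
    """Integrate Eqn 5, I*th'' + (c1*|th'| + c2)*th' + K*th = 0, with RK4;
    return the rotation-rate history (what the IMU would record)."""
    def acc(th, w):
        return -((c1 * abs(w) + c2) * w + K * th) / I
    th, w, rates = theta0, 0.0, []
    for _ in range(int(t_end / dt)):
        k1t, k1w = w, acc(th, w)
        k2t, k2w = w + 0.5*dt*k1w, acc(th + 0.5*dt*k1t, w + 0.5*dt*k1w)
        k3t, k3w = w + 0.5*dt*k2w, acc(th + 0.5*dt*k2t, w + 0.5*dt*k2w)
        k4t, k4w = w + dt*k3w, acc(th + dt*k3t, w + dt*k3w)
        th += dt*(k1t + 2*k2t + 2*k3t + k4t)/6.0
        w  += dt*(k1w + 2*k2w + 2*k3w + k4w)/6.0
        rates.append(w)
    return rates

# Synthetic "measured" rotation rates from a known inertia
true_I = 1.6
measured = simulate(true_I, 0.002, 0.001)

def cost(I):
    # Sum-of-squares mismatch between the candidate model and the measurement
    return sum((a - b)**2 for a, b in zip(simulate(I, 0.002, 0.001), measured))

# Crude 1-D search over I only (the real analysis also fits C1 and C2)
best = min((cost(I), I) for I in [1.4 + 0.05*i for i in range(9)])[1]
print("recovered inertia:", best)
```

With a perfect synthetic measurement the search lands back on the true inertia; with real IMU data the residual mismatch reflects sensor noise and unmodeled damping.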

Results

There were a few unpredicted complications that occurred during testing. One of these complications concerned the optimal length of the filars that was mentioned earlier. Although the optimal length was 1.1 feet, the orientation of the Ixx experiments caused a nose attachment on the aircraft to come in contact with the ceiling of the rigging. Since this would greatly interfere with the accuracy of our results, we analyzed the variance graphs that were mentioned previously. We concluded that as the length of the filars increased, the total variance in inertia did not increase enough to cause concern. Since shortening the filars would cause the variance of inertia to greatly increase, we increased the length of the filars to 32 inches for the Ixx experiment. Another cause for concern was the air conditioning inside the shuttle hangar itself. We did not predict that the air conditioning would create wind currents strong enough to actually oscillate the aircraft. We contacted the engineer in charge of the shuttle hangar and requested that the air conditioning be turned off for the remainder of the time that we were testing.

As soon as we were presented with a complication that would affect our results we dealt with it so that we would not have to deal with it again during testing. After testing, we also decided to check our results once more against the linear equation for inertia. This equation, Equation 6, is shown below.

Eqn 6   I = (m·g·d²)/(4h·ω²)   (2)
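As a quick illustration of the Eqn 6 check, with hypothetical measurements (not the actual test values), the inertia follows directly from the oscillation period:

```python
import math

# Eqn 6 inertia check: mass m (slug), filar separation d (ft), filar
# length h (ft), and natural frequency w (rad/s) from the measured period.
def linear_inertia(m, d, h, period):
    g = 32.174                    # ft/s^2
    w = 2.0 * math.pi / period    # natural frequency from the period
    return (m * g * d**2) / (4.0 * h * w**2)

# Hypothetical numbers, purely for illustration: a 1 slug system on
# 32-inch filars spaced 1 ft apart, oscillating with a 2-second period.
print(linear_inertia(m=1.0, d=1.0, h=32.0/12.0, period=2.0))  # slug-ft^2
```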

Our results are shown in Table 1, below.

Table 1

Method                         Ixx (slug-ft²)   Izz (slug-ft²)   Ixz (slug-ft²)
Predicted                      2.051            2.229            -0.0245
Experimental (linear eqn)      1.695            1.599            -0.048
Experimental (w/ simulation)   1.5489           1.5068           -0.0036

Conclusions

At the end of the internship my subsystem evaluated the effectiveness of the center of gravity and moments of inertia testing preparations. In conclusion, we believe that the center of gravity test and procedure were successful. We were able to repeatedly and accurately find the center of gravity each time additional attachments or instrumentation were added to the aircraft. Additionally, we were confident in the bifilar pendulum's ability to yield accurate data for the moments of inertia and product of inertia Ixx, Izz, and Ixz. Upon comparing our predicted and experimental values, we noticed that the experimentally obtained values were lower than the predicted values. This may have occurred as a consequence of Pro Engineer estimating the densities of the materials used in the aircraft. We were confident in our experimental values, however, as they are of the same order of magnitude as the predicted values. Also, we expected Izz to be larger than Ixx, as our Pro Engineer values predicted. Our values obtained from testing show that this was not the case: Ixx was larger than Izz. When we checked our results a third time with the linear equation for inertia, these results agreed with our tests, and Ixx was larger than Izz.

By the end of the internship my team was not able to complete the project as a whole. Flight data still does not exist for this type of aircraft. My mentors at the Dryden Flight Research Center, however, hope to continue this research.

References

(1) Regan, C., Independent Analysis of X-48B Inertia Swing Data, Rev. H, 2011.

(2) Jardin, M. and Mueller, E., Optimized Measurements of UAV Mass Moment of Inertia with a Bifilar Pendulum, 2007.

In addition, I would like to recognize and reference Albion Bowers, Oscar Murillo, and Brian Taylor, my three mentors from NASA Dryden Flight Research Center, for their previous work and insight on the project. I would also like to reference my subsystem members, Stephanie Reynolds and Joseph Wagster; the rest of the PRANDTL-D team for their hard work and contributions to the project; and Alex Stuber for his help creating the Pro Engineer model of the PRANDTL-D aircraft.

Development of a Passive Check Valve for Cryogenic Applications

Bradley Moore1 University of Wisconsin - Madison

Abstract

Future astrophysics missions will rely on a new generation of cooling technologies to improve the resolution of infrared and x-ray sensors. A novel continuous cold cycle dilution refrigerator (CCDR) has been proposed by Prof. F.K. Miller to provide cooling for these sensors at temperatures below 100 mK. A passive check valve for liquid 4He-3He mixtures is a key technological innovation required to implement the CCDR, as will be further explained in the paper. The design of a reed style passive check valve and initial results from tests with helium gas at room temperature and 80 K will be detailed.

Introduction

Cooling for space science instruments to temperatures below 1 Kelvin is critical for new infrared and x-ray astrophysics missions. The cutting edge in detector technology for infrared missions lies in cryogenic detectors, either transition edge sensor (TES) bolometers or microwave kinetic inductance detectors (MKIDs). Future x-ray missions that include spectrometers will include microcalorimeters that also need to be cooled to temperatures below 1 K. Each of these detector types requires operation at sub-Kelvin temperatures for the highest sensitivity applications. The sub-Kelvin continuous cooling needed for these missions is achievable with the dilution refrigerator that is detailed in this report.

Current space flight technologies used to obtain sub-Kelvin temperatures are the Adiabatic Demagnetization Refrigerator (ADR), the single shot, space-pumped dilution refrigerator and the 3He evaporation refrigerator. Prof. F. K. Miller has proposed a novel cold cycle dilution refrigerator (CCDR) using a thermal magnetic pump capable of cooling to temperatures below 100 mK. This innovative technology will provide cooling at temperatures below 100 mK for detectors on future infrared and x-ray astrophysics missions and will, in turn, enable NASA to better fulfill strategic sub-goal 3D: to discover the origin, structure, evolution and destiny of the universe, and search for Earth-like planets.

The initial scope of the proposed research for the Wisconsin Space Grant Consortium Fellowship was to fabricate and test a Cold Cycle Dilution Refrigeration system. However, upon becoming more familiar with the project, it quickly became clear that the scope of such an endeavor was far beyond the year indicated in the fellowship proposal, or even the 2 years in an M.S. program. Given the level of complexity and expectations of the primary funding source (NASA SBIR Phase 1 grant), the work accomplished in the tenure of the fellowship was focused on a crucial component and major technological hurdle to the CCDR development: the cold check valves.

Cold Cycle Dilution Refrigerator

Special acknowledgement to Prof. F. K. Miller and Prof. J. M. Pfotenhauer of UW-Madison and Dr. J. Maddocks of Atlas Scientific for their intellectual support, and to the Wisconsin Space Grant Consortium for financial support.

System. The cold cycle dilution refrigerator will consist of a pump, two recuperative heat exchangers, a phase separation chamber, a mixing chamber and a throttle. The cycle was modeled by Bryant Mueller, an M.S. student working for Prof. Miller.

[Figure 1 shows the thermal magnetic pump (two porous packed paramagnetic beds with superconducting magnets, joined by a Vycor superleak), the check valves, a recuperative heat exchanger, the phase separation chamber (0.6 K - 0.7 K), a second recuperative heat exchanger and Vycor superleak, the mixing chamber (0.01 K - 0.5 K), and the throttle; the numbered state points (1, 2', 2", 3, 4, 5) correspond to the labeled positions referenced in Fig. 2.]
Figure 1. Schematic of the cold cycle dilution refrigerator.

A mixture of 3He and 4He with a concentration of approximately 6% 3He exits the pump at 1.7 Kelvin and flows into a recuperative heat exchanger. The pump itself is a thermodynamically reversible fountain effect pump invented by Professor Miller and currently being developed at the University of Wisconsin (Miller 2009). In this heat exchanger the fluid is cooled as it exchanges heat with the low concentration stream. As the high concentration stream is cooled it follows a line of constant 4He chemical potential, as shown in Fig. 2. Therefore, the concentration of the high concentration stream increases as the temperature decreases through the recuperator. (Note: the labels on Fig. 2 correspond to the labeled positions on Fig. 1.) The working fluid exits the cold end of the recuperator as a two-phase mixture and enters a separation chamber where the 3He rich phase is separated from the 4He rich phase, creating a 3He concentrated phase and a 3He dilute phase. For ground-based applications this separation can be achieved by gravity, due to the density difference between the phases. For operation in microgravity, surface tension forces will be used to accomplish the separation. Dilution refrigerators that use surface tension for phase separation have been developed at the NASA Ames Research Center (Roach 1999). Further developmental work is required to verify this approach.

Figure 2. Plot of the cold cycle dilution refrigerator cycle on a T- x3 diagram.

Next, the 3He rich phase exits the separation chamber, enters another recuperative heat exchanger, and is cooled by the low concentration stream. The fluid then passes through a capillary where the pressure drops. The high 3He concentration mixture enters a mixing chamber and is diluted with 4He. The process of mixing 3He into 4He is endothermic, so heat is absorbed from the load, providing the primary cooling at sub-Kelvin temperatures. Low 3He concentration mixture exits the mixing chamber and returns to the pump via the two recuperative heat exchangers.

At Goddard, Miller developed a thermodynamically reversible fountain effect pump, shown in Fig. 3 (Miller 2009). The pump consists of two canisters packed with Gadolinium Gallium Garnet (GGG) spheres connected by a piece of Vycor glass called a superleak. A superconducting magnetic coil surrounds each of the canisters. The operation of the pump can be simplified into four primary steps, shown in Table 1. The current in the coil around cylinder A is increased while the current in cylinder B is decreased. This causes the magnetic field in A to increase while the magnetic field in B decreases. As the field increases in the paramagnetic GGG spheres, the magnetic entropy decreases, causing the thermal entropy in the spheres and the surrounding fluid to increase (the temperature rises), consistent with process step I. At the same time, the field in the other bed is decreasing, so the temperature of the spheres and fluid in that bed decreases. The temperature gradient across the superleak results in a pressure gradient that causes the configuration to act like the well-known fountain effect pump. When the pump contains a dilute mixture of 3He-4He, the low temperature side is gradually depleted of 4He: the 4He flows out through the superleak to the high temperature side while the 3He, blocked by the superleak, stays behind at the cold side during the second process step.

Process Step | Description                           | Pump Concentration | Pump Temperature   | Magnetic Field
I.           | Raise temp at constant concentration  | High               | Rapidly increasing | Rapidly increasing to intermediate
II.          | Discharge 3He at constant temp        | Decreasing         | High               | Ramp increase to maximum
III.         | Lower temp at constant concentration  | Low                | Rapidly decreasing | Rapidly decreasing to intermediate
IV.          | Replenish 3He at constant temp        | Increasing         | Low                | Ramp decrease to zero

Table 1. Pumping process description

Eventually the cold side 4He content drops to the point that it becomes necessary to reverse the direction of flow, which is accomplished by simply reversing the direction of the current in the superconducting magnetic coils, entering process step III for cylinder A and process step I for cylinder B. This causes the magnetic field, and hence the temperature, of the warmer bed (A) to begin decreasing while the field and temperature of the colder bed (B) begin increasing, thus reversing the direction of the pumping action and leading into step IV, when the 3He is replenished in cylinder A and depleted from cylinder B. This configuration allows thermodynamic cycles to be driven cyclically without using pistons or moving parts.
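The two-cylinder phasing described above can be sketched as a simple state table. The step numbering follows Table 1; the half-cycle (two-step) offset between the cylinders is this sketch's reading of the text and Figs. 4-7, not an explicit specification.

```python
# Sketch of the four-step pumping cycle from Table 1. Cylinder B runs the
# same sequence as cylinder A but offset by two steps (half a cycle), so
# while A discharges 3He (step II) B replenishes it (step IV), and the
# check valves rectify the resulting alternating flow.

STEPS = {
    1: "Raise temp at constant concentration (field ramping up)",
    2: "Discharge 3He at constant temp (field at maximum)",
    3: "Lower temp at constant concentration (field ramping down)",
    4: "Replenish 3He at constant temp (field at zero)",
}

def cylinder_b_step(a_step: int) -> int:
    """Return cylinder B's process step (1-4) given cylinder A's."""
    return (a_step - 1 + 2) % 4 + 1

for a in range(1, 5):
    b = cylinder_b_step(a)
    print(f"A: step {a} ({STEPS[a]})  |  B: step {b} ({STEPS[b]})")
```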

Cylinder A Cylinder B

Figure 3. Schematic of the reversible pump.

Because of the requirement that the flow be reversed periodically, the pump is inherently an alternating flow device. However, it can be adapted to work with continuous flow thermodynamic cycles such as the superfluid Joule-Thomson cycle and the proposed cold cycle dilution refrigerator by adding low temperature check valves, as shown in Fig. 1, to rectify the alternating flow.

Check Valve Requirements. The operation of the check valves in the system is best illustrated by a series of diagrams, Fig. 4 to 7. Gray valves are sealed; black valves are open, with arrows to indicate flow direction.

Figure 4. Process I, Cylinder A: Lowering temp at constant concentration, Plow

Figure 5. Process II, Cylinder A: Replenish 3He at constant temp, PA = Plow; Cylinder B: Discharge 3He at constant temp, PB = Phigh

Figure 6. Process III, Cylinder A: Raising temp at constant concentration, Plow

Figure 7. Process IV, Cylinder A: Discharge 3He at constant temp, PA = Phigh; Cylinder B: Replenish 3He at constant temp, PB = Plow

It is clear from these diagrams that the check valves must seal to flow in one direction, but open to a very low pressure differential in the opposite direction.

After much of the testing had been done on the check valve, the numerical model of the system was completed. This gave a much more accurate picture of the high and low pressure requirements of the valve. When the initial development was done, it was assumed that the pump would achieve a pressure differential of 10 psi. However, the final model results indicated that the pump would only achieve approximately 1.4-3.2 psi. Since much of the initial work was done at 10 psi, this was kept as the sealing pressure criterion for further understanding the valve. Once a functional valve at 10 psi is accomplished and understood, it will be possible to optimize for a lower pressure, as will be discussed later. The model also indicated forward flow rates ranging from 60-140 μmole/s of 3He (1.8E-07 to 4.2E-07 kg/s). Since 3He is extremely costly and currently tightly rationed by the DOE, the tests were done with 4He.

Check valve design. The initial design of the valve was based on Miller's work at MIT (Miller 1999). Polymers such as Teflon (PTFE) creep at higher temperatures, but "freeze" into shape at lower temperatures. Therefore a preload can be applied at room temperature to make the Teflon creep and match the sealing surface of the valve. When the valve is cooled to cryogenic temperatures, the preload is released and the Teflon holds the sealing surface shape, giving repeatable sealing.
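The mass-flow figures quoted for the model follow from a one-line unit conversion; the 3He molar mass of 3.016 g/mol is a standard handbook value, not stated in this report.

```python
# Convert the modeled 3He molar flow rates to mass flow rates.
M_HE3 = 3.016e-3  # kg/mol, molar mass of 3He (handbook value)

def molar_to_mass(n_dot_umol_s: float) -> float:
    """Molar flow in umol/s -> mass flow in kg/s."""
    return n_dot_umol_s * 1e-6 * M_HE3

print(molar_to_mass(60))   # ~1.8e-07 kg/s, lower end of modeled range
print(molar_to_mass(140))  # ~4.2e-07 kg/s, upper end of modeled range
```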

The initial valve was a poppet design with a spring preload; however, the required preload was higher than the spring could supply, so the valve could not seal adequately. There were also other issues associated with the geometry and possible gravity effects that drove a move to a different valve geometry. Based on Dr. Maddocks' experience with metal-seated cryogenic reed valves and Miller's experience with polymer seats, a new valve was designed: a reed valve with a Teflon seat. It eliminates the gravity issues and provides a baseline with a well-known geometry. Fig. 8 shows an exploded section view of the fabricated valve, and Fig. 9 shows a photograph of the valve.

The Teflon seat is epoxied to the base with Stycast 1266, chosen for its performance at low temperatures. Also, given Teflon's resistance to bonding, the inner diameter of the bonding surface must be etched with a chemical etchant. The spring steel reed is placed on top of the seat. The copper washer and wave spring hold the reed in place over the valve. The flow direction is indicated by the arrow.


Figure 8. Exploded section view of reed style check valve

Figure 9. Picture of actual reed check valve

Experiment Test Setup. For testing the valve, a liquid helium test would be ideal, but the deadline for the NASA SBIR was approaching, so a simpler alternative was needed. A liquid nitrogen test was adopted, since the relevant material properties (thermal contraction, Teflon workability) change little below approximately 100 K. This allowed for a simple procedure:

- Lower the experiment into an open, empty LN2 dewar
- Apply pressure to the valve if preloading
- Fill the dewar from a large storage tank until the valve and coil are completely covered with LN2
- The valve has reached LN2 temperature when the LN2 is no longer boiling around the valve
- If preloaded with pressure, release the pressure and commence testing

To measure the flow, an Omega flow meter was employed, with a range of 0-2 slpm (0 to 5.5E-6 kg/s He) and a resolution of ~4E-9 kg/s He. For lower flow rates, a bubble flow meter was employed, giving a resolution of ~8.915E-12 kg/s. For pressure sensing, Endevco pressure transducers with a range of 0-300 psi and a resolution of ~0.01 psi were utilized.
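The flow meter's full-scale range in kg/s follows from converting standard liters per minute of 4He gas to a mass flow; the standard conditions assumed below (25 °C, 1 atm) are this sketch's assumption, chosen because they reproduce the quoted figure.

```python
# Convert a flow in standard liters per minute (slpm) of 4He gas
# to a mass flow in kg/s, using the ideal gas law.
R = 8.314         # J/(mol K), molar gas constant
T_STD = 298.15    # K, assumed standard temperature (25 C)
P_STD = 101325.0  # Pa, assumed standard pressure (1 atm)
M_HE4 = 4.003e-3  # kg/mol, molar mass of 4He

def slpm_to_kg_s(slpm: float) -> float:
    vol_m3_s = slpm * 1e-3 / 60.0           # L/min -> m^3/s
    mol_s = P_STD * vol_m3_s / (R * T_STD)  # ideal gas molar flow
    return mol_s * M_HE4

print(slpm_to_kg_s(2.0))  # close to the quoted ~5.5E-6 kg/s full scale
```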

Results. The first phase of testing was to achieve sealing against backflow (flow opposite the forward flow direction discussed earlier) at room temperature. However, a 10 psi pressure difference at room temperature resulted in flow above the range of the flow meter (~5.5E-6 kg/s). The pressure was increased until the valve sealed at room temperature, at a pressure difference of 25 psi. The experiment was then cooled down with that pressure as the preload. The pressure was released after cooldown and the flow rate at different pressures was recorded. Since better sealing was seen at higher pressures at room temperature, the valve was then warmed up and cooled down with preloads of 45 psi and 65 psi, and the low temperature flow rates were recorded in each case. The results are compared to the theoretical forward flow rate of the valve in Fig. 10.

Figure 10. Cold back flow mass flow rates compared to theoretical forward flow

After the preload pressure is released, the minimum back leakage still occurs near that pressure. Sealing improves at higher pressures, but after cooldown the sealing is poor at low pressures. Higher pressures freeze a small radius of curvature into the Teflon seat; when the high pressure is released at low temperature and lower pressures are tested, the radius of curvature no longer matches. The data suggest that a balance is needed: a preload pressure high enough to smooth out asperities, but one that leaves a curvature near that of the reed at low pressures. It is well known that the radius of curvature of a simply supported thin circular plate under a uniformly applied load is proportional to the cube of its thickness and inversely proportional to the applied pressure. Based on this, the radius of curvature of a 4 mil thick reed at a pressure difference of 10 psi is expected to be the same as that of an 8 mil thick reed at a pressure difference of 80 psi. Therefore, a well-polished 8 mil reed can be pressurized to 80 psi at room temperature to permanently smooth asperities while still maintaining the radius of curvature that a 4 mil reed would possess at 10 psi. Fig. 11 shows that this was very successful, with flow rates below the forward flow even when no preload was applied. Even better results were accomplished with a preload.
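The thickness-pressure trade above can be checked numerically: for a simply supported circular plate, curvature scales as p/t^3, so the radius of curvature scales as t^3/p (the common geometric prefactor cancels when comparing two reeds).

```python
# Check the reed preforming argument: radius of curvature R ~ t^3 / p,
# in arbitrary units (geometric prefactor omitted, cancels in the ratio).
def relative_radius(t_mil: float, p_psi: float) -> float:
    """Radius of curvature, proportional to thickness^3 / pressure."""
    return t_mil**3 / p_psi

r_thin = relative_radius(4, 10)   # 4 mil reed at 10 psi
r_thick = relative_radius(8, 80)  # 8 mil reed at 80 psi
print(r_thin, r_thick)  # equal: the two reeds take the same curvature
```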

Figure 11. Preformed Cold back flow mass flow rates compared to theoretical forward flow

Conclusion

The back flow rates are in an acceptable range for application in a CCDR, but further investigation is required before the valve can be considered a complete success. The actual CCDR will have a lower pressure difference than was tested, and the fluid that must be sealed against will not be 4He gas at liquid nitrogen temperatures but the 3He component in a 3He-4He mixture at liquid helium temperatures. A liquid helium temperature test using a Stirling cryocooler, which will improve confidence in this design, is currently being pursued. Nevertheless, the data strongly indicate that a reed-style passive check valve can be optimized for use in a cold cycle dilution refrigerator.

References

[1] F. K. Miller and J. G. Brisson, A Superfluid Pulse Tube Driven by a Thermodynamically Reversible Magnetic Pump, Cryocoolers 15, Proceedings of the 15th International Cryocooler Conference (2009).

[2] P. R. Roach and B. P. M. Helvensteijn, Progress on a Microgravity Dilution Refrigerator, Cryogenics 39 (1999) pp. 1015-1019.

[3] F. K. Miller and J. G. Brisson, Development of a low-dissipation valve for use in a cold-cycle dilution refrigerator, Cryogenics 39 (1999) pp. 859-863.

22nd Annual Conference Part Seven

Physics and Astronomy

A Novel Technique for Fabricating Metalized Objects with Difficult Geometries

Mitch Danger Powers

UW-Madison Observational Cosmology

Abstract

In the course of fabricating corrugated horn antennae, a technique was developed to avoid certain geometric difficulties. Stereolithography, a form of 3D printing, had been employed to create a lightweight, cheap, plastic horn. However, it proved difficult to plate metal into the corrugations of the plastic horn. To work around this, a technique was developed wherein the corrugated horn would be produced by assembling a number of easy-to-plate, interlocking rings. The rings would be plated separately, assembled, and electroplated with an exoskeleton. The general technique could readily be applied to a large number of similarly difficult scenarios.

Background

The genesis of this project is in the QUBIC bolometric interferometer. In order to save weight and manufacturing costs, the profiled corrugated microwave horn antennae were made of plastic via a process known as Stereolithography (SL). They would then be electrolessly plated with a thin layer of nickel. Plating proved to be quite challenging, due to the aspect ratio and small scale of the corrugations. As the object of study would be the cosmic microwave background radiation, they were designed to receive radiation in the W-band (75-110 GHz, 3-4 mm). This requires slots about a millimeter deep and half as wide to line the interior of the horn (Timbie, 2011). The nickel coat that was electrolessly deposited became “spotty” and of visibly low quality in the grooves. As SL had already been used to get around horn fabrication issues, it was looked to as being a potential solution to this problem.

Stereolithography. In a word, SL is amazing. It is a relatively young and rapidly expanding area of 3D printing, and it works fairly simply: a vat of plastic resin is selectively solidified, layer by layer, by a computer controlled laser. The precision available is stunning: a state of the art machine is capable of printing 16 µm layers, accurate to about 0.025 mm, with a minimum feature size down around 0.25 mm. This makes SL an excellent candidate for millimeter regime component fabrication. In addition to these already impressive abilities, SL is comparatively cheap (prices scale with volume, not complexity; a 3" long horn costs about two or three hundred dollars), lightning fast (parts can ship the same day they're ordered), and produces lightweight products (densities vary, but tend to about 1.3 g/cm3, and components can have their interiors "honey-combed" to further cut weight), and designs that would otherwise be nearly, or in some cases fully, impossible to build become trivial (Quickparts). Further, the SL process culminates with a somewhat surreal moment when the machine operator can reach into the vat of resin and pull out a fully formed widget. The only drawback I've encountered with SL is that a conductive resin has yet to be invented, and therefore any components that need to be conductive require electroless plating.

The author would like to thank WSGC for material support, as well as Peter Timbie for his guidance and expertise along the way.

Plating Plastic. While modern plastics are engineered to have a wide array of properties, to my knowledge we are yet to invent a conductive plastic. Adding a thin layer of conductive metal fixes this problem but creates one of its own: getting the metal to adhere to the plastic. Electroless plating works by mechanically anchoring a seed layer to the substrate and plating, often auto-catalytically, onto that. To make this possible, the plastic substrates require a series of acid based treatments to roughen the surface. This is followed by baths of stannous chloride and palladium chloride, which serve to "activate" the substrate and enable and catalyze the electroless deposition (Zhou, 2007). From there an off-the-shelf electroless nickel plating solution provides a uniform, fine grained, conductive nickel coating. This layer, which for the purposes of this project is made approximately 1 µm thick, features a resistivity that tends to be roughly thrice that of bulk nickel (the thin film resistivity was measured to be approximately 4×10⁻⁷ Ωm). The nickel layer extends into holes and around corners exceptionally well, except in cases of difficult geometries where the solutions have trouble reaching certain areas due to surface tension, attack angles, and a host of other minor issues that become considerably more dominant in such areas.

[Figure: A section of electrolessly copper plated horn. This was produced prior to the switch to the Zhou electroless nickel process, which is considered to be an improvement. Timbie, 2011.]

Design and Experimentation

Rather than fight troubled areas, I have opted to remove them. The monolithic SL horn was deconstructed into a number of small interlocking rings, which, on their own, are remarkably easy to plate and are without difficult trenches.

Ring Design. Several ring designs were considered, each with their own merits. The desired properties were that they be fairly simple, connect together with little or no seam, be stackable, and fit within SL capabilities. Shown at left are three of the considered ring cross sections. Ring A was ultimately chosen as it avoided any overly complex features, is readily stackable, and hides the ring-ring seam. Rings B and C are both very simple, but lack stackability and feature prominent seams. More complicated designs involving multiple types of rings stacked in order were scrapped because, while they hid all their seams and were extremely secure when stacked, they were overly complicated and tested some of the extremes of the SL machines.

[Figure: Several prototype ring designs (a, b, c). See text.]

Ring A is ideal for corrugated waveguides, where there is constant slot depth and width over a large stretch of waveguide. While the rings fit together nicely, and are easily adjusted to fit the taper of a conical or even a profiled horn, incremental shifts in slot depth create an assembly issue. To avoid this, horn rings will be produced as an "exploded" horn, attached to a SL rod that will keep them ordered until stacking (see cartoon at right), at which point they will be summarily broken off and stacked in their respective order.

Procedure. There are four distinct steps to the complete assembly process: the rings must be roughened, nickel plated, stacked, and then copper plated. For a detailed explanation of both the roughening and the nickel plating process, the Zhou paper ought to be consulted; what is important here is the effect the roughening has on the later steps. The nickel coat creates a uniform enough layer that there are no overt signs of deformity once the rings are plated. An overly rough ring would potentially create issues with ring to ring connections. A loose ring, or one with an acid puckered concave edge, would introduce unwanted gaps into the stacked assembly, while a bloated, convex edge would damage the copper exoskeleton.

[Figure: Horn rings will be kept ordered via a SL guide which will be removed after nickel plating. This is expected to reduce assembly time considerably.]

Stacking the nickel coated rings is precisely as simple as one would expect. So long as the rings are designed to mate with each other and are nickel plated uniformly, they connect together without issue. To ensure this, an entire assembly of rings ought to be moved from bath to bath together. At times there is some resistance to stacking, especially if the rings are moist; to overcome this, rings should be left to dry in a clean, dry environment between processes. Further stacking issues can be overcome by applying a small amount of force along the z-axis of the horn. This can be delivered in any number of ways to fit the situation; however, I am quite fond of using a pen spring attached to a small plastic "X" at each end of an assembly to hold it in place (if no such object is readily available, one might consider making a SL item to do the trick).

The copper exoskeleton that binds the entire assembly is perhaps the most important step. It is put in place by a simple copper sulfate based electroplating system. For conductivity purposes, the exoskeleton would be required to be several skin depths thick, where the skin depth is δs = √(ρ / (π f µ0 µr)), with ρ = resistivity and µr ≈ 1, which works out to approximately 3 µm in the microwave regime (Edwards, 1981). For structural purposes the exoskeleton would likely be several times thicker.
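The ~3 µm figure can be reproduced from the skin-depth formula above. The choices below of 90 GHz (mid W-band), the measured thin-film nickel resistivity from the plating section, and "several" meaning three skin depths are this sketch's assumptions.

```python
import math

# Skin depth delta = sqrt(rho / (pi * f * mu0 * mur)).
MU0 = 4 * math.pi * 1e-7  # H/m, vacuum permeability
RHO_NI_FILM = 4e-7        # Ohm*m, measured thin-film Ni resistivity (text)
MUR = 1.0                 # relative permeability, ~1 as assumed in the text

def skin_depth(rho: float, f_hz: float, mur: float = MUR) -> float:
    return math.sqrt(rho / (math.pi * f_hz * MU0 * mur))

d = skin_depth(RHO_NI_FILM, 90e9)  # assumed mid W-band frequency, 90 GHz
print(f"one skin depth: {d*1e6:.2f} um; three: {3*d*1e6:.1f} um")
```

Three skin depths comes out near 3 µm, consistent with the quoted "approximately 3 µm in the microwave regime."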

Due to unforeseen events, experiments on this stage of the process have been limited. However, it has been found that several items of nickel can be bound by a copper exoskeleton. Several nickel coated SL items, held lightly together (in this case by a piece of masking tape outside of the electroplating solution) and plated in a bath composed of copper sulfate and commercially available additives (Caswell Copper Electroplating Kit) with a 1 amp, 9 volt DC supply, formed a layer of copper that transcends substrate boundaries and forms an exoskeleton. It was observed that even as a thin layer, the copper made obvious, qualitative improvements in the binding between the nickel items. The copper layer held against the forces of gravity, but was inevitably torn asunder by a modest application of force.

Sadly, no quantitative data has been gathered on the properties of the exoskeleton. It has been shown to have obvious and beneficial effects even as a notably thin and low quality shell. Further experiments will create copper layers in excess of 10 µm thick, perhaps thicker if mechanical issues demand it. This should be plenty thick for strong conductivity without interfering terribly with the overall dimensions of the final product.

Applications

Given even the modest performance of the copper plating, there is the possibility for a broad range of applications. In the specific case of a corrugated horn, not only can the entire bulk be assembled, but a traditionally manufactured portion of a horn can be fitted to an assembly of SL rings. Specifically, the throat section of the horn, which is perhaps the most crucial and most complicated section, can be electroformed as a solid, high quality piece; an assembly can be mated with it by way of an overlapping exterior sheath, and the two can be electrobonded together. An existing fixture, perhaps a proprietary item that cannot readily be reproduced, can be fitted to a device with minimal effort. In more general situations, a wide array of widgets with specific thermal, electrical, or mechanical properties can be readily fabricated in house. This would be especially useful in a situation where new supplies are difficult to come by. A hypothetical Mars colonist could be self-sufficient with little more than a SL machine, a small chemistry set, and a creative imagination.

Future Work

There are several stages of work that have been uncomfortably delayed to this point due to temporary circumstances. The copper coating requires several quantitative measurements to determine the resistivity of the coating, as well as measurements of the mechanical strength of the shell as affected by the copper thickness. Beyond that, a length of straight, corrugated waveguide will be constructed and tests of signal loss will be done in an anechoic chamber. Beyond that, it is hoped that the technique will be able to be applied to building a profiled corrugated horn antenna, with the tests that accompany that. Several gadgets may also be built in parallel to this work.

The time required to construct these devices is expected to be quite short. As work has been delayed several times, design and theory have been developed considerably. Barring any further unexpected delays, it is expected that a waveguide will be constructed within a week of work resuming.

References

Clarricoats, Peter John Bell., and A. David. Olver. Corrugated Horns for Microwave Antennas. London: P. Peregrinus, 1984. Print.

Edwards, T. C. Foundations for Microstrip Circuit Design. Chichester: Wiley, 1981. Print.

"Stereolithography (SLA) | Rapid Prototyping | Quickparts.com." Quickparts.com. N.p., n.d. Web. 9 Aug. 2012.

Zhou, Z., D. Li, J. Zeng, and Z. Zhang. "Rapid Fabrication of Metal-coated Composite Stereolithography Parts." Proc. IMechE 221B: J. Eng. Manuf. 221.9 (2007): 1431-1440. Print.

A C-Band Study of the Historical Supernovae in M83 with the Karl G. Jansky Very Large Array

Christopher J. Stockdale1 Marquette University

Knox S. Long 2 Space Telescope Science Institute

Roberto Soria Curtin University

John J. Cowan University of Oklahoma

Larry A. Maddox Northrop Grumman

P. Frank Winkler Middlebury College

Kip D. Kuntz Johns Hopkins University

William P. Blair Johns Hopkins University

James Miller-Jones Curtin University

Abstract. We present new low frequency observations of the grand design spiral galaxy, M83, using the C and L bands of the Karl G. Jansky Very Large Array (VLA). Utilizing the newly expanded bandwidth of the VLA, we are exploring the radio spectral properties of the more than 150 radio point sources in M83. We present the initial analyses, focusing on the radio evolution of the six historical supernovae discovered in the last century. With four epochs of VLA observations and optical and X-ray observations, we will probe the transition of supernovae into supernova remnants.

1 CJS thanks the Wisconsin Space Grant Consortium, the Distinguished Visitor Programs of the Australian Astronomical Observatory and NRAO, the University of Oklahoma, and Marquette University for their financial support of this research program.

2 KSL acknowledges Chandra Award Nos. GO1-12115A, B, and C issued by the CXC, which is administered by SAO, and HST grant GO 12513, provided by NASA through the STScI, which is operated by AURA, Inc., under contract NAS5-26555.

Introduction and Motivation

As part of a long-term study to detect radio emission from known (i.e., historical) supernovae (SNe), the Karl G. Jansky Very Large Array (VLA)3 has been used to observe a number of galaxies (Cowan & Branch 1985; Cowan, Goss & Sramek 1991; Cowan, Roberts & Branch 1994; Eck et al. 1998; Eck, Cowan & Branch 2000; Stockdale et al. 2001; Stockdale et al. 2006; Maddox et al. 2006). The primary goal of these observations has been to learn more about the evolution of SNe into supernova remnants (SNRs), how radio emission is produced in these events, and the environments in which they occur. In the course of this effort, we have made very deep, arcsecond resolution observations of the nearby spiral galaxy M83, and we have obtained deep, similar resolution measurements at X-ray and optical wavelengths which have allowed us to examine the point source populations of this galaxy, discovering a number of previously unreported HII regions, SNRs, X-ray binaries, and Ultra-Luminous X-ray sources. In the last 100 years, six optical SNe have been detected in M83. Radio emission has been detected from four of these recent SNe, all but SN 1945B (unknown optical type, possibly a type Ia SN) and SN 1968L (buried in the bright but complex nuclear region). In recent Hubble Space Telescope (HST) observations, SN 1968L was optically recovered (Dopita et al. 2010). Given this recent SN activity, we expect to find ∼120 SNRs that were formed in the last 2,000 yrs.
This makes M83 one of the closest SN factories and the ideal galaxy in which to study young SNRs. For this project, we made new, deep, VLA observations of M83 at 5.0 GHz and 1.4 GHz utilizing the broad spectral coverage of the VLA. We

- searched for "hidden" SNRs in the extensive HII continuum emission of the spiral arms;
- monitored the time evolution of historical SNe in M83 as they transition into SNRs;
- explored the spectral profile of the nuclear region of M83; and
- studied the spectral properties of the diffuse emission in the spiral arms in conjunction with new HST and Chandra studies.
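The ∼120 expected remnants quoted above follow from simple rate arithmetic: six SNe observed over roughly the last century, extrapolated over 2,000 years of remnant ages. The assumption that the recent rate is representative of the longer-term average is implicit in the text.

```python
# Back-of-envelope check of the expected young SNR count in M83.
sne_observed = 6    # historical SNe detected in the last century
window_yr = 100     # observational baseline, years
snr_age_yr = 2000   # remnants formed within the last 2,000 yr

rate_per_yr = sne_observed / window_yr   # ~0.06 SN per year
expected_snrs = rate_per_yr * snr_age_yr
print(expected_snrs)  # ~120, matching the estimate in the text
```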

These new measurements were 10× deeper at every sub-band than any previous VLA observations of M83. Figure 1 illustrates an initial analysis of the new Australia Telescope Compact Array (ATCA) radio data in the C band together with 160 ks of recent Chandra X-ray observations of M83, and hints at what we can begin to expect from deeper imaging of the galaxy with a total of 750 ks of new Chandra observations.

3 The Karl G. Jansky Very Large Array telescope is managed by the National Radio Astronomy Observatory, which is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.

Figure 1: The C-band ATCA radio contour image of M83 overlaid on the diffuse X-ray emission, from our initial Chandra observations of 160 ks (0.3-7 keV), provides a tantalizing glimpse of what we can expect from the complete 750 ks dataset. The two boxes identify the regions more fully explored in Figure 3.

Previous studies of M83 by Maddox et al. (2006) identified a number of radio point sources with C and L band pseudo-continuum measurements whose radio spectral indices (α, Sν ∼ ν^α) were neither clearly thermal nor non-thermal. This was likely caused by spatially coincident HII regions and SNRs with blended emission, with the flux from thermal HII regions dominating the higher frequency emission and the SNR flux dominating at lower frequencies. This made it difficult to definitively identify a possibly significant portion of the SNRs in the galaxy. In 2011, we proposed a scaled-array experiment to measure the spectral index evolution of a sample roughly double the size of the ∼50 sources detected by Maddox et al. (2006), as they vary across the EVLA L and C bands. This project gives comparable resolution at both C and L band, and allows us to image each sub-band separately, to contrast the full spectral profiles of each source detected in the two EVLA bands. With these images, we are able to spectroscopically distinguish the non-thermal emission from the SNRs from the thermal emission associated with the large, diffuse HII regions. The non-thermal emission from these hidden SNRs should become a more significant part of the radio emission, especially at the lower frequencies of the L band.
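The spectral-index diagnostic used here reduces to a two-point power-law fit between the band centers. The fluxes in the example below are illustrative values, not measurements from this survey.

```python
import math

def spectral_index(s1: float, nu1_ghz: float,
                   s2: float, nu2_ghz: float) -> float:
    """Two-point spectral index alpha, defined by S_nu ~ nu**alpha."""
    return math.log(s2 / s1) / math.log(nu2_ghz / nu1_ghz)

# Illustrative fluxes (same units at both bands) at L band (1.4 GHz)
# and C band (5.0 GHz):
alpha_flat = spectral_index(1.00, 1.4, 1.00, 5.0)   # flat -> thermal HII
alpha_steep = spectral_index(2.40, 1.4, 1.00, 5.0)  # steep -> SNR-like
print(alpha_flat, round(alpha_steep, 2))
```

A flat spectrum (α ∼ 0.0) flags a thermal HII region, while a steep negative index (near -0.7 in this example) is characteristic of non-thermal SNR emission.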

This radio project is part of a large multiwavelength study of M83, the nearest face-on, grand-design spiral galaxy with a starburst, and the site of 6 historical SNe. The study includes deep X-ray imaging with Chandra (750 ks; PI Long), optical imaging with Magellan (PI Winkler), and HST observations (PI Blair; Dopita et al. 2010). Radio observations are crucial for resolving the nature of the source populations, especially SNRs seen in the X-ray and optical surveys, with 71 optically identified SNR candidates and 20-30 X-ray counterparts (Blair & Long 2004). Consequently, we have made new ATCA observations at S and C bands to attain higher sensitivity at these wavelengths than was possible with our earlier VLA observations (Figure 1). Well determined radio spectral indices and X-ray colors make it possible to clearly identify the precise nature of sources in a given galaxy (Maddox et al. 2006; Kilgard et al. 2005; Prestwich et al. 2003).

Figure 2.— Radio light curves for historical Type II SNe in M83 at L band, compared to several radio SNe II and SNRs: SNe 1923A (filled circle, with an open inverted triangle for the upper limit), 1950B (open circles), and 1957D (filled circles), each as labeled in the figure (Stockdale et al. 2006). The red rectangle represents the region where we expect these supernovae to transition into supernova remnants in the 100-300 years following explosion.

Past Observations of M83 Our collaboration has studied the long term radio emission from compact sources in the galaxy M83 since the commissioning of the VLA in the early 1980s. The deep continuum VLA observations span a period of approximately 15 years, from 1982 to 1998. In Maddox et al. (2006), we presented the data from the fifteen years of VLA monitoring with a consistent data analysis and imaging scheme to ensure the best possible comparative study of more than 50 sources and extensive diffuse emission regions across the galaxy. Some of our key findings include:

• SN 1957D has continually faded in the radio from 1982-1998, consistent with a shock expanding through circumstellar material that is decreasing in density (Figure 2). SN 1923A has faded to near the limits of detection of these observations. We continue to find no detection of SN 1983N after its initial radio detection, consistent with it having been a Type Ib supernova. SN 1950B has apparently faded to the level of the thermal HII regions that are near the position of the explosion. Figure 2 illustrates the L band radio evolution of a variety of decades-old SNe.

• About half of the radio sources are thermal HII regions, α ∼ 0.0. The HII regions tend to be very large, exhibiting high excitation parameters. The largest regions were not detected due to the high resolution of the previous VLA observations.

• Ten sources were found to be coincident with X-ray sources. The continuum spectral indices of these sources indicated that most were X-ray supernova remnants. We confirmed that one of the coincident sources (source 28) is the nucleus of a background radio galaxy with two radio lobes (sources 27 and 29). Three of the X-ray sources are coincident with known optical supernova remnants.

Why We Needed New Observations of M83 Studies of the compact sources are providing new information about late-term stellar evolution (e.g., mass-loss rates), emission mechanisms, the transition of SNe into SNRs, and the nature of XRBs. Radio, X-ray, and optical observations provide three independent and complementary means to identify SNRs: a non-thermal radio spectrum, soft X-ray spectra (with line emission if there are sufficient counts), and emission line ratios showing shocked gas. Only through their combination can we carry out a complete census of SNRs in M83 and begin to understand the environmental conditions that lead to their identification in different wavelength ranges.

With these new observations, we are exploring the spectral properties of the diffuse emitting regions along the spiral arms and in the nuclear region of M83 to constrain the extent to which various emission mechanisms contribute. The soft X-ray diffuse emission in the spiral arms of M83 is the brightest of any known spiral galaxy (see Figure 1).

We note that the nuclear region of M83 (Figure 3, right) has shown a slight increase in the radio peak in the 1998 VLA L band data, not seen in the C band data. It is possible that there is an increase in accretion onto a supermassive black hole, which would be consistent with X-ray results. The reported optical/IR nuclear peak is not coincident with the brightest radio nucleus, though there is evidence for a radio emission region at the position of the optical nucleus. The nuclear radio peak is near the position of a second "dark" nuclear mass concentration that corresponds to the dynamical nucleus of the galaxy, possibly a double massive black hole system. Our spectral index observations will allow us to search for possible jet emission to help confirm this hypothesis.

With these new EVLA observations (with similar bandwidth coverage and configurations as the previous three epochs of VLA studies), we are able to follow the long-term evolution of all of these sources in M83 in comparison with more than 25 years of previous VLA and Chandra studies. We would like to note some initial discoveries made in the initial Chandra observations (December 2010). These new X-ray images reveal significantly more sources than the earlier (shorter) observation, including a previously undiscovered ULX source and a significant change in the X-ray emission of SN 1957D (Figure 3, left). The X-ray emission associated with SN 1957D has brightened significantly, never having been detected in earlier epochs. This could indicate the formation of a pulsar wind nebula, a significant change in the density of the circumstellar wind established by the stellar wind of the progenitor star, or that the SN blast wave is now encountering significant amounts of ISM and completing its transition into a shell SNR. With so many older historical SNe, M83 is the best galaxy in which to study the SN to SNR transition.

Recently, Chomiuk & Wilcots (2009) have argued that radio SNRs have a universal luminosity function, with a normalization proportional to the star formation rate in a galaxy but independent of ISM density. M83 is the ideal galaxy in which to fully explore the star formation rate (SFR) history of a nearby spiral; to test their model, we need large, well-studied samples in nearby galaxies. From the initial VLA maps, Maddox et al. (2006) found 17 candidate radio SNRs, about half of which are also detected at optical and/or X-ray wavelengths. With this project, we will use previously detected VLA sources like SN 1950B and other marginally radio-identified SNRs to explore their spectral indices across both bands, and apply what we learn to newly discovered VLA sources to more clearly determine whether they are SNRs.

Assuming the Chomiuk & Wilcots (2009) model is correct, we predicted that we would find ∼100 candidate radio SNRs with our ATCA/EVLA observations, including confirming the identification of a number of marginally detected SNRs in the Maddox et al. (2006) survey. Finding these SNRs is important to verify or refute the Chomiuk & Wilcots (2009) model, as well as for helping to determine the nature of the approximately 600 point sources expected in the new Chandra survey of M83. The full analysis of these VLA measurements will provide crucial evidence to fully explore M83's star formation history.

Figure 3.— ATCA C band radio contours of M83 overlaid on the diffuse X-ray emission. Left: The X-ray/radio detection of SN 1957D with two SNR candidates along the northern spiral arm. Right: The radio contours indicate 4 central radio sources, with the brightest significantly offset from the optical peak (as labeled in the left figure).

Low Frequency Radio Spectra of Historical Supernovae We obtained 14 hours of C band observations of M83 in the CnB configuration, sampling a 4 GHz bandwidth. We also obtained 16.0 hours of L band observations of M83 in the BnA configuration, sampling a 1 GHz bandwidth. With these observations, we achieved an average 3σ detection threshold of 30 µJy/bm at L band and 21 µJy/bm at C band. We obtained a somewhat elongated beam, 4 × 3 arcsec² or 80 × 60 pc², for the central sub-band; the beam varied across both bands, and we were able to adjust the beam size to match the resolution of the prior VLA observations of M83. Each arcsec corresponds to 20 pc at 4.5 Mpc. The data were analyzed using the CASA software provided by the National Radio Astronomy Observatory (NRAO). We followed scripts for low frequency data analysis prepared by Miriam Krauss, NRAO Jansky Postdoctoral Fellow, and posted on the www.nrao.edu website. Due to significant radio frequency interference (RFI) in the L band data, we were forced to discard, or flag, roughly one third of the data, which could not be calibrated. Here we present the preliminary analysis of our observations of the recent VLA data regarding the historical supernovae located in M83 (see Figure 4).
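The angular-to-physical conversion quoted above (each arcsec corresponding to roughly 20 pc at the assumed 4.5 Mpc distance) follows from the small-angle approximation; a quick sketch:

```python
import math

ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)

def angular_scale_pc(theta_arcsec, distance_pc):
    """Physical size (pc) subtended by a small angle at a given distance."""
    return theta_arcsec * ARCSEC_TO_RAD * distance_pc

per_arcsec = angular_scale_pc(1.0, 4.5e6)   # ~21.8 pc; the text rounds to 20
beam_pc = (angular_scale_pc(4.0, 4.5e6),
           angular_scale_pc(3.0, 4.5e6))    # the 4 x 3 arcsec beam in pc
```

The exact value is ~21.8 pc per arcsec; the text's 80 × 60 pc² beam uses the round 20 pc figure.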

Figure 4. L band radio image (1 GHz bandwidth) centered on 1.5 GHz, with the optically reported positions of the six historical supernovae indicated. RMS noise for the image is 30 µJy. The diffuse emission associated with the radio nucleus has two distinct radio peaks, which are coincident with the optical and X-ray central sources detected with HST and Chandra.

In these recent VLA observations, we detect radio emission from the vicinity of 3 of the 6 historical SNe in M83: SN 1957D, SN 1950B, and SN 1923A. The C band radio emission appears to be dominated by thermal Bremsstrahlung associated with nearby or associated HII regions for all three detected sources. The L band radio emission for SNe 1957D and 1923A appears to have a significant non-thermal component, attributed to synchrotron emission from either the SN shock interacting with the circumstellar and/or interstellar medium or other nearby, older SN remnants (see Figure 4).

Figure 5. The log of flux densities for the three historical supernovae are plotted above as a function of log frequency. The C band measurements at right indicate thermal emission. The L band measurements at left appear to indicate non-thermal emission as the flux increases at lower frequencies for SNe 1957D and 1923A.

In Figure 5, we plot the new VLA radio data versus frequency, and there appears to be a clear steepening of the radio emission from SNe 1957D and 1923A, helping to confirm our detection of emission from the supernova shocks interacting with the material ejected in the winds of their progenitor stars prior to explosion.

We also detect approximately 130 individual point sources. We intend to perform a spectral analysis of these sources to search for older, "hidden" supernova remnants whose radio emission is often dominated at higher frequencies by thermal emission from nearby HII regions. We also intend to expand this analysis to other nearby galaxies and further improve our understanding of how stars form in nearby spiral galaxies.

References

Blair & Long, 2004, Astrophysical Journal Supplement, vol. 155, p. 101
Chomiuk & Wilcots, 2009, Astrophysical Journal, vol. 703, p. 370
Cowan & Branch, 1985, Astrophysical Journal, vol. 293, p. 400
Cowan, Goss, & Sramek, 1991, Astrophysical Journal, vol. 379, p. L49
Cowan, Roberts, & Branch, 1994, Astrophysical Journal, vol. 434, p. 128
Dopita, Blair, Long, Mutchler, Whitmore, Kuntz, Balick, Bond, Calzetti, Carollo, Disney, Frogel, O'Connell, Hall, Holtzman, Kimble, MacKenty, McCarthy, Paresce, Saha, Silk, Sirianni, Trauger, Walker, Windhorst, & Young, 2010, Astrophysical Journal, vol. 710, p. 964
Eck, Cowan, & Branch, 2002, Astrophysical Journal, vol. 573, p. 306
Eck, Roberts, Cowan, & Branch, 1998, Astrophysical Journal, vol. 508, p. 664
Kilgard, Cowan, Garcia, Kaaret, Krauss, McDowell, Prestwich, Primini, Stockdale, Trinchieri, Ward, & Zezas, 2005, Astrophysical Journal Supplement, vol. 159, p. 214
Maddox, Cowan, Kilgard, Lacey, Prestwich, Stockdale, & Wolfing, 2006, Astronomical Journal, vol. 132, p. 310
Prestwich, Irwin, Kilgard, Krauss, Zezas, Primini, Kaaret, & Boroson, 2003, Astrophysical Journal, vol. 595, p. 719
Stockdale, Goss, Cowan, & Sramek, 2001, Astrophysical Journal, vol. 559, p. L139
Stockdale, Maddox, Cowan, Prestwich, Kilgard, & Immler, 2006, Astronomical Journal, vol. 131, p. 889

Testing General Relativity with Pulsar Timing Arrays

Sydney J. Chamberlin and Xavier Siemens

Department of Physics, University of Wisconsin-Milwaukee, Milwaukee, WI

Abstract Pulsar timing arrays are a promising tool for probing the universe through gravitational radiation. A variety of astrophysical and cosmological sources are expected to contribute to a stochastic background of gravitational waves (GWs) in the pulsar timing array (PTA) frequency band. Direct detection of GWs will provide a new mechanism to test General Relativity and requires the development of robust statistical detection strategies. Here, we investigate the overlap reduction function, a term present in the optimal detection statistic, for GWs in various metric theories of gravity. We show that PTA sensitivity increases for non-transverse gravitational waves when pulsar pairs have small angular separations in the sky.

Introduction

Astrophysical and cosmological objects such as coalescing supermassive black hole binaries, asymmetrical rotating neutron stars and supernovae, and other exotic sources such as cosmic strings are expected to produce a stochastic background of gravitational radiation in the universe. Mathematically, such gravitational waves (GWs) can manifest in up to six different polarization modes, shown in Figure 1. Einstein’s theory of General Relativity (GR), however, restricts the six possible modes to just two. In the past half-century, a number of viable alternative theories of gravity have emerged. Many of these theories satisfy weak-field, slow motion tests of gravity such as the bending of light around massive objects and the precession of the perihelion of Mercury, but differ from GR in the predictions they make regarding GWs. A direct detection of GWs thus presents a unique opportunity to test GR.

Figure 1: In a general metric theory of gravity, as many as six GW polarization modes are possible. Figs. 1(a)-(c) represent transverse wave propagation and Figs. 1(d)-(f) non-transverse wave propagation; Figs. 1(a) and 1(b) correspond to the plus and cross modes of GR.

Acknowledgments: We thank the Wisconsin Space Grant Consortium and the National Science Foundation for their support in this project. We would also like to thank our collaborators in the North American Nanohertz Observatory for Gravitational Waves (NANOGrav).

A number of large-scale GW detection efforts are currently underway world-wide, with current focus primarily given to ground-based laser interferometric detectors, sensitive in the 10-10³ Hz range (Abramovici et al. 1992), and pulsar timing arrays, sensitive in the 10⁻⁹-10⁻⁷ Hz range (Hobbs et al. 2010). The work presented in this project involves detection efforts using pulsar timing arrays (PTAs).

Detecting Gravitational Waves with Pulsar Timing Arrays

The radio signals emitted by many pulsars are stable enough to serve as clocks. A GW, if present between the Earth and the pulsar, will induce a redshift in the pulsar's signal. This redshift will cause the pulse to arrive either early or late at the Earth. For a single pulsar, this information is not very useful, since a number of other physical factors could account for the pulse's delay. With an entire array of pulsars, however, the GW will induce redshifts in the pulsars' signals in a correlated way. PTAs are used in the search for GWs by seeking out the correct cross-correlations between data from different pulsar pairs.

The primary focus of this work is the development of statistical data analysis tools that are necessary to extract the tiny GW signal from noise in data. The so-called optimal detection statistic is found by maximizing the expected signal-to-noise ratio (Anholm et al. 2009), and involves calculating the overlap reduction function Γ(f), a geometrical quantity that appears in the expression for the cross-correlation. The overlap reduction function is related to the loss in sensitivity due to pulsars not being co-located or co-aligned, and characterizes the sensitivity of the detector to a GW given its polarization mode.

Overlap Reduction Functions

Overlap reduction functions were calculated for all six possible GW polarization modes by integrating the proper geometric expressions. To obtain meaningful analytic results, each pulsar pair was considered equidistant (L1 = L2 = L, where L is the distance to the pulsar from the Earth) and several different pulsar separation angles ξ were chosen. The results are functions of fL, where f is the GW frequency in Hz and L is the geometrized distance to the pulsars; these are shown in Figures 2a-d. Here we point out that we have excluded plots for the cross and vector-x modes, as they can be described by the plus and vector-y modes without loss of generality.
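For reference, in the GR (transverse tensor) case with the pulsar term dropped, the overlap reduction function for an isotropic background reduces to the well-known Hellings-Downs curve as a function of pulsar separation angle alone. A minimal sketch (normalization conventions vary in the literature; here the zero-separation value is 1/2):

```python
import math

def hellings_downs(xi):
    """Expected cross-correlation of timing residuals for two pulsars
    separated by angle xi (radians), GR polarizations, pulsar term dropped."""
    x = (1.0 - math.cos(xi)) / 2.0
    if x == 0.0:
        return 0.5  # coincident (but distinct) pulsars
    return 1.5 * x * math.log(x) - 0.25 * x + 0.5

# The curve dips slightly negative near ~82 deg and rises to 0.25 at 180 deg.
```

This frequency independence is exactly the simplification that, as discussed above, fails for the non-transverse modes.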

Also plotted in Figure 2 are the curves obtained by excluding the pulsar term (this is the term containing all the frequency dependence of the GW and distance to the pulsar) in the integration of the overlap reduction function. This is a technique that is often used in the literature to simplify calculations for GR, and has been justified by examining the agreement between the dashed and solid lines in Figures 2a and 2b. We found that for non-transverse GWs, the overlap reduction functions retain frequency dependence well into the range of fL that is relevant to pulsar timing experiments, meaning the pulsar term cannot be excluded for these polarization modes.

One can also see by studying Figure 2 that for non-transverse GWs, the values of Γ(fL) are much larger than those of the transverse GWs, especially for pulsar pairs with smaller angular separations. This increase in Γ(fL) may be interpreted physically as an increase in sensitivity, and means that pulsar timing arrays should be much more sensitive to vector and scalar longitudinally polarized GWs than to the ordinary GWs of GR.

Figure 2: Γ(fL) with (solid curves) and without (horizontal dashed lines) the pulsar term for the various polarization modes: plus (a), breathing (b), vector-y (c), and longitudinal (d). In the latter two modes, smaller pulsar separation angles ξ are characterized by retained frequency dependence in Γ(fL) in the range of frequencies relevant to pulsar timing experiments. Nearly all the non-transverse curves eventually converge, but at rather high values of Γ(fL) relative to the transverse modes, indicating increased sensitivity to GWs with these polarizations.

The physical origin of this effect and calculations of the overlap reduction functions for actual current pulsar timing array data are described more fully in Chamberlin and Siemens (2012).

Conclusion

It is possible that a direct detection of GWs will be made in the next decade using a pulsar timing array. Such detections will provide new testing grounds for GR. To assist in the development of robust statistical detection strategies, we have analyzed overlap reduction functions for six possible GW polarization modes in a metric theory of gravity. We have found that PTAs should be much more sensitive to GWs with vector and scalar longitudinal polarizations that are present in some alternative theories of gravity than they are to the polarization modes of GR for pulsar pairs that have small angular separation.

Works Cited

A. Abramovici, W. E. Althouse, R. W. Drever, Y. Gursel, S. Kawamura, F. J. Raab, D. Shoemaker, L. Sievers, R. E. Spero, K. S. Thorne, R. E. Vogt, R. Weiss, S. E. Whitcomb, and M. E. Zucker, Science 256, 325 (1992).
M. Anholm, S. Ballmer, J. D. E. Creighton, L. R. Price, and X. Siemens, Physical Review D 79, 084030 (2009).
S. J. Chamberlin and X. Siemens, Physical Review D 85, 082001 (2012).
G. Hobbs et al., Classical Quantum Gravity 27, 084013 (2010).

The Origin of the Elements

S. R. Lesher and A. Arend

Department of Physics, University of Wisconsin – La Crosse

Abstract: There are about 35 nuclei found in nature which are not susceptible to neutron capture and are instead explained by the p-process. The modeling of this process requires thousands of nuclear reactions involving both stable and unstable nuclei, including (α,α), (α,p) and (α,γ) reactions. In a recent experiment, the cross section of the reaction 120Te(α,p)123I was measured in the energy range of astrophysical interest for the p-process. The α beam from the Notre Dame FN Tandem Van de Graaff accelerator bombarded highly enriched self-supporting 120Te targets, and the γ-rays from the activated 123I were counted with a pair of Ge clover detectors in close geometry.

Introduction

We understand the basic processes of stellar burning and how the lightest elements are produced, but moving beyond this simple model has proven difficult. Since the pioneering work of Burbidge, Burbidge, Fowler, and Hoyle in 1957 (B2FH) [1], which set up the framework for nucleosynthesis, there has been steady progress in the quest to explain the elemental abundances in terms of the nuclear processes which take place in stars. In recent years, progress has accelerated with advances in observational capabilities over a wide range of the electromagnetic spectrum, the development of radioactive beam facilities, advanced detection techniques, and computing power, which allow the study of nuclear properties previously inaccessible. The next generation of radioactive beam facilities is planned to access even more regions of interest but is years from completion. At this time, there is still work in accessible regions that can be studied.

Decades of dedicated research have produced a detailed picture of atomic structure. Each atom consists of a small nucleus, about 10⁻¹² to 10⁻¹³ cm in diameter, surrounded by a cloud of negatively charged electrons. This nucleus is only about 1/20,000 the diameter of the atom! Even though the nucleus is small, it can contain hundreds of positively charged protons and uncharged neutrons, referred to collectively as nucleons. To express a specific nucleus, we explicitly denote its composition. The atomic number, Z, equals the number of protons, N denotes the number of neutrons, and the mass number A is equal to the number of nucleons (N + Z). Isotopes, nuclei with the same Z but different N and A values, are symbolically expressed as ^A_Z X, where X represents the chemical symbol for the element. For example, carbon is a common element. The most naturally abundant carbon isotope makes up 98.9% of all carbon atoms and is denoted by 12C; however, the isotope 13C has an extra neutron and comprises only 1.1% of all carbon atoms. There are still 13 additional isotopes of carbon, which decay in some period of time (i.e., are not stable) with a characteristic half-life (t½), which ranges from nanoseconds to years. For historical reasons, some isotopes are denoted by special names; 1H, 2H and 3H are called the proton (p), deuteron (d), and triton (t), respectively, and the nucleus of the 4He isotope is called an α-particle (α).
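The isotope bookkeeping above can be illustrated by recovering carbon's standard atomic weight from its two stable isotopes; the ¹³C mass used below is a standard tabulated value, not taken from the text:

```python
# Fractional natural abundances from the text; isotope masses in u.
abundance = {"12C": 0.989, "13C": 0.011}
mass_u = {"12C": 12.000000,   # 12C defines the unified mass unit exactly
          "13C": 13.003355}   # standard tabulated value (assumption)

# Abundance-weighted average over the stable isotopes
atomic_weight = sum(abundance[iso] * mass_u[iso] for iso in abundance)
# ~12.011 u, the familiar atomic weight of carbon
```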

Particles release energy in uniting to form an atomic nucleus. The energy given up in this formation is called the binding energy, which is the difference in mass energy between a nucleus and its constituent Z protons and N neutrons:

B = [Z mp + N mn − m(^A_Z X)] c²

where mp and mn are the masses of the proton and neutron, respectively, and c² is the speed of light squared, given as a conversion factor 931.50 MeV/u. This is the beginning of the fuel cycle which takes place in the interior of stars and the first step in the origin of the elements.
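As a worked example of the binding-energy formula and the 931.50 MeV/u conversion factor, consider the α-particle (⁴He, Z = N = 2). The sketch below uses atomic rather than bare nuclear masses so that the electron masses approximately cancel; the mass values are standard tabulated numbers, not taken from the text:

```python
# Atomic masses in u (standard tabulated values, an assumption here)
M_H1 = 1.007825    # 1H atom (proxy for proton + electron)
M_N = 1.008665     # free neutron
M_HE4 = 4.002602   # 4He atom
U_TO_MEV = 931.50  # conversion factor quoted in the text

# B = [Z*mp + N*mn - m(A_Z X)] * c^2, with Z = N = 2 for 4He
B = (2 * M_H1 + 2 * M_N - M_HE4) * U_TO_MEV   # ~28.3 MeV
```

That is roughly 7.1 MeV per nucleon, which is why the α-particle is such a tightly bound building block in stellar burning.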

The work of Burbidge et al. proposed a variety of nuclear processes that take place in the interiors of stars to produce the observed elemental and nuclidic abundances. The abundance of an isotope measures how relatively common it is, or how much of that isotope is in the universe by comparison to all other isotopes. Since 1957, the cosmic abundance curves have been modified with new information, but the nuclear processes proposed are still the basis of what we know about nucleosynthesis. To explain all the features of the original abundance curves, Burbidge et al. viewed the elements as evolving through at least eight processes, where only hydrogen is created in the formation of the universe. It has since been grouped into the following processes [1][2][3]:

• hydrogen burning
• helium burning
• heavier element burning
• the s-, r- and p-processes
• the l-process

The first three processes are in the interior of stars and explain the synthesis of isotopes from helium to iron. Hydrogen burning converts hydrogen to helium; helium burning converts helium to carbon, oxygen and so on. The chain starts as follows:

1H1H2 H  e  2H1H3 He  3He3He4 He21H

where e⁺ is a positron and ν is a neutrino.

Heavier element burning includes carbon, oxygen and neon burning, which produce elements with 16 ≤ A ≤ 28, and silicon burning, which is responsible for the elements with 25 ≤ A ≤ 60. These cycles cannot produce significant amounts of heavier elements, partly because of the high Coulomb barriers for the reactions; therefore, elements heavier than Fe are mainly produced by neutron capture, in the s- and r-processes. Neutron capture is a nuclear reaction in which a neutron collides with a nucleus and merges with it to form a heavier nucleus. When the neutron flux is small, the nucleus captures a neutron and then decays before capturing another neutron. Nucleosynthesis takes place in the interiors of stars where there is a high neutron flux density. In these cases, the nucleus is bombarded with neutrons and may or may not have time to decay before it captures another neutron. In both the s- and r-processes, successive neutron captures raise the mass number while the atomic number stays the same, until beta-decay (β-decay) raises the atomic number.

The process in which neutron capture is slow compared to beta decay is the slow or s-process. The lighter processes which produce nuclides up to iron liberate neutrons, which are then captured by pre-existing "seed" nuclides to produce the s-process nuclides. This secondary process can explain elements up to bismuth [4]. The r-process (rapid neutron capture) is a violent process, occurring in supernovae or disruptions of neutron stars. This process is responsible for the synthesis of uranium and other heavy elements from seed nuclei. There are still about 35 nuclei found in nature which are not susceptible to neutron capture and therefore cannot be explained by the r- or s-process. The p-process makes their existence possible [4].

The p-process nuclei are rare, more proton rich, and less abundant than nuclei produced by other means. It is this rare set of reactions which has been harder for the community to grasp and about which more information is needed. One possible location of this process is far from the core collapse center, in the layers of a Type II supernova explosion. Under these high temperature, high density conditions, seed nuclei from the r- and s-nuclei are photodisintegrated via (γ,n) reactions, and the nuclei are shifted to the proton rich side of the valley of stability. Every time a neutron is lost, the neutron binding energy is increased, and the process becomes less likely. If the temperatures are high enough, the process can continue with (γ,p)- and (γ,α)-reactions to lower masses. The production rates of the p-process nuclei are still relatively unknown.

The astrophysical rates for the p-process reactions are evaluated from the Hauser-Feshbach (HF) statistical model for the neutron and α-particle captures, and then the reverse photodisintegration rates are obtained. This is not a simple process. Many input parameters must be known, including proper optical potentials, ground state properties (masses, deformation, matter densities) of both the target and residual nuclei, and excited states; when these are not known experimentally, nuclear mass models are used [5]. The HF model treats the α-particle capture and requires the structure of the nuclei to be known. Recent simulations demonstrate the importance of these (γ,α) reactions for the overall reaction flow and therefore the abundances of the p nuclei [6]. These simulations indicate 120Te is underproduced in comparison to observations [6]. Previous experiments, 120Te(p,γ)121I and 120Te(p,n)120I, have already been completed [7] to begin to address these issues. In this set of experiments, we set out to explore the 120Te(α,p)123I reaction.

Experiment and Results

The experiments took place at the University of Notre Dame's Institute for Structure & Nuclear Astrophysics (ISNAP). The ND FN Tandem Van de Graaff accelerator (Fig. 1) is capable of terminal voltages over 10.6 MV with beam energies from a few MeV up to more than 100 MeV.

Figure 1: The University of Notre Dame FN Tandem accelerator.

In the first set of experiments, a series of targets were analyzed to determine their composition using Rutherford Backscattering. Rutherford Backscattering is a high kinetic energy elastic collision between an incident ion beam and a target particle. ¹²C³⁺ ions were accelerated to 15 MeV using the tandem FN accelerator (Fig. 1) and impinged on each of the targets in question. Since the targets are made of thin foil, there is a chance of interaction and scattering by Coulomb collision. The chance of scattering depends upon the atomic number and isotope (among other things); therefore, we can use the cross section of the reaction to characterize the isotopic composition of each target of interest. The outgoing α particles were measured with a silicon detector at 150° with respect to the beam axis.
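The isotope sensitivity of backscattering comes from the two-body elastic kinematics: the detected energy is the incident energy times a kinematic factor K that depends on the target mass. A sketch using the standard elastic-scattering formula, with integer mass numbers as a rough stand-in for the true masses:

```python
import math

def kinematic_factor(m_beam, m_target, theta):
    """Elastic-scattering kinematic factor K = E_out / E_in for a beam of
    mass m_beam scattering off m_target at lab angle theta (radians)."""
    s = m_beam * math.sin(theta)
    num = m_beam * math.cos(theta) + math.sqrt(m_target ** 2 - s ** 2)
    return (num / (m_beam + m_target)) ** 2

theta = math.radians(150.0)                  # detector angle from the text
k_te = kinematic_factor(12.0, 120.0, theta)  # 12C beam on 120Te, K ~ 0.69
e_out = 15.0 * k_te                          # backscattered energy, ~10.3 MeV
```

Because K shifts with target mass, the peak energies in the backscattered spectrum identify which isotopes are present in the foil.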

The Rutherford Backscattering (RBS) experiments allowed the identification of the targets and the measurement of their thickness, which is necessary for the subsequent reaction experiment. Fig. 2 shows the data from a sample of tellurium which was determined to be a 555 µg/cm² thick ¹²⁰TeO₂ target. In this figure, two alpha peaks are clearly shown, which correspond to the known energies for ¹²⁰Te.

Figure 2: ¹²⁰TeO₂ identified by the two alpha peaks shown. Alpha energy is shown on the x-axis in keV and number of counts on the y-axis.

After identifying the targets of interest, the FN provided a high quality α beam for the 120Te(α,p)123I experiment. After irradiation of the target using a range of alpha-beam energies, the activated target was taken to a separate low-background counting area and the induced γ-ray activity was measured by two Clover detectors placed face-to-face in close geometry (as shown in Fig. 3). The Clover detectors consist of four high purity germanium crystals that gather information about the γ-rays which have been emitted from the decaying nucleus. This procedure has been used before at ISNAP with great success, and details can be found in Ref. [7].

Figure 3: The detector on the left is an approximately 110% efficient HPGe detector; the one on the right is ~55% efficient. This equipment was used to detect the de-exciting γ-rays after the irradiation of the target by α-particles from the ND FN Tandem Van de Graaff accelerator.

The 120Te foils were removed after irradiation and transferred to one of the HPGe detectors to be counted. The cross section (Fig. 4) was determined from the γ-ray yields and compared to a code using Hauser-Feshbach statistical theory called NON-SMOKER.
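Schematically, an activation cross section is recovered from the γ-ray yield by correcting the observed counts for detector efficiency and branching ratio, then dividing by the target areal density and the integrated beam. The function below is a deliberately simplified sketch of that arithmetic (a real analysis, as in Ref. [7], also corrects for decay during irradiation, transfer, and counting):

```python
def activation_cross_section(n_gamma, efficiency, branching,
                             targets_per_cm2, n_beam_particles):
    """Simplified activation cross section in cm^2.

    n_gamma          -- background-subtracted counts in the gamma-ray peak
    efficiency       -- absolute detector efficiency at that gamma energy
    branching        -- gamma-ray branching ratio of the decay
    targets_per_cm2  -- areal density of target nuclei in the foil
    n_beam_particles -- total number of alpha particles on target
    """
    nuclei_produced = n_gamma / (efficiency * branching)
    return nuclei_produced / (targets_per_cm2 * n_beam_particles)
```

All of the numerical inputs here are placeholders; only the structure of the calculation is meant to carry over.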

Figure 4: Comparison of the measured cross section of 120Te(α,p)123I to the calculated cross section using the standard NON-SMOKER code, which uses Hauser-Feshbach (HF) statistical model theory. [Preliminary]

Conclusion

The 120Te(α,p)123I experiment was performed over two weeks in the summer of 2011, and analysis is ongoing. There is still much work to be done: the remaining data need to be analyzed, the statistical models need to be run, and an interpretation of the data discussed. This work has a far-reaching impact on the community, as it can help make the HF model for the origin of the elements more robust and, therefore, able to predict isotopes we cannot reach in the laboratory.

Acknowledgements

The authors would like to thank their collaborators, A. Aprahamian, A. Kontos, W. P. Tan, R. T. Güray, and N. Özkan, and generous funding by the National Science Foundation Grant No. PHY-08-22-648. SRL would also like to thank the Wisconsin Space Grant Consortium for their generous funding.

Bibliography

[1] Burbidge, E. Margaret, Burbidge, G. R., Fowler, William A., and Hoyle, F. Rev. Mod. Phys., 29 (1957), 547.

[2] Rolfs, Claus E. and Rodney, William S. Cauldrons in the Cosmos. The University of Chicago Press, Chicago, 1988.

[3] Ehmann, William D. and Vance, Diane E. Radiochemistry & Nuclear Methods of Analysis. John Wiley & Sons, New York, 1991.

[4] Meyer, Bradley S. Annu. Rev. Astron. Astrophys., 32 (1994), 153.

[5] Arnould, M. and Goriely, S. Phys. Reports, 384 (2003), 1.

[6] Rapp, W., Görres, J., Wiescher, M., Schatz, H., and Käppeler, F. Astrophys. J., 653 (2006), 474.

[7] Güray, R. T., Özkan, N., Yalçın, C. et al. Phys. Rev. C, 80 (2009), 035804.

Improving Cloud and Moisture Representation in Weather Prediction Model Analyses with Geostationary Satellite Information

Jordan J. Gerth1
Department of Atmospheric and Oceanic Sciences
Cooperative Institute for Meteorological Satellite Studies
University of Wisconsin, Madison, Wisconsin

Abstract: This paper develops a methodology for an experiment with several parallel regional Weather Research and Forecasting (WRF) model simulations initialized with satellite-based retrievals. The intent is to clarify the impact of observations, in the form of retrievals, from the Geostationary Operational Environmental Satellite (GOES) Sounder on 12, 24, and 36-hour WRF model forecasts of precipitable water. Two experimental analyses are built from a CIMSS Regional Assimilation System (CRAS) pre-forecast spin-up. The CRAS assimilates precipitable water and cloud products derived from the GOES Sounder. An experimentation period between late September and early October 2011 found that the majority of impact in the experimental simulations compared to the control is recognized in the total precipitable water field over the first 12 hours.

Introduction

Water vapor is an important molecule in our atmosphere which has a profound impact on the dynamics and physics of the fluid earth system. Accurately assessing magnitudes and gradients of moisture in the troposphere, especially in the boundary layer, is an ongoing challenge. While in-situ observational data from surface stations and radiosondes paint a partial picture of the moisture distribution in the atmosphere, information collected from weather satellites is the only way to determine short-term changes in water vapor on spatial scales under a few hundred kilometers.

Many of the earth’s most significant weather phenomena are a consequence of temperature and moisture gradients. The tropics and middle latitudes contain a substantial amount of water vapor, which condenses to produce clouds and precipitation. In order to better forecast the broad spectrum of diabatic weather processes, it is necessary to improve understanding of such processes on multiple spatial and temporal scales, from mesoscale convective systems (MCSs) to synoptic-scale mid-latitude weather systems, through their simulated evolution in numerical weather prediction (NWP) guidance. Essential to accurately resolving and parameterizing these phenomena as part of a forecast is incorporating satellite observations of cloud and water vapor into numerical models. The consistent use of these observations in real-time model simulations has the potential to improve predictions of storms and precipitation, a claim which is investigated here.

Since 1992, the development of the dynamics and physics within the Cooperative Institute for Meteorological Satellite Studies (CIMSS) Regional Assimilation System (CRAS) weather prediction model (http://cimss.ssec.wisc.edu/cras/) has been guided by the addition of satellite products into the assimilation pre-forecast (Aune 1994). During the pre-forecast, cloud and precipitable water (PW) products from the twelve hours ahead of the initialization time are substantiated in the modeled atmosphere. These products are predominantly from the Geostationary Operational Environmental Satellites (GOES) Imager and Sounder due to their relative temporal frequency. Polar-orbiting satellites, such as those equipped with a MODerate resolution Imaging Spectroradiometer (MODIS), can also be used where temporal frequency can be sacrificed in place of increased spatial and spectral resolution. The goal of the CRAS has been to show forecast improvement when additional satellite data sets are added to the traditional in-situ observations (R. Aune 2011, personal communication). In recent years, however, other modeling systems have grown in popularity as the development of the CRAS slowed. Despite this, the use of CRAS output gradually expanded into dozens of National Weather Service (NWS) forecast offices between 2006 and 2011. Forecaster comments reveal that the CRAS output continues to have a positive impact in certain forecast situations.

1 Funding for this research supplied in part by the Wisconsin Space Grant Consortium.

In contrast to the CRAS, the Weather Research and Forecasting (WRF) model (http://wrf-model.org/) is a NWP model built from an increasingly popular collection of code for simulating atmospheric conditions at high spatial scales (Skamarock et al. 2008). The WRF model is a state-of-the-art mesoscale NWP tool which was developed to satisfy the needs of both operational forecasters in the field and atmospheric scientists in a research setting. This functionality allows the WRF model to be used both in scientific studies and for real-time prediction. NWS field offices across the United States are increasingly reliant on output from the WRF Environmental Modeling System (EMS), an end-to-end distributable for running the WRF locally and producing output (http://strc.comet.ucar.edu/wrfems/).

Since the WRF has been widely adopted for forecast applications, with improvements to its dynamical cores and physical packages continuing through the present time, it is an ideal platform for observation impact studies because of the applicability to numerous real-time users. Obtaining a better solution via a more accurate set of initial conditions is a long-standing tenet of NWP mathematics. The CRAS pre-forecast methodology remains a viable source of initial conditions (ICs) which have been influenced with satellite data. This investigation quantifies the degree of improvement that the CRAS-produced ICs have in WRF simulations out to 36 hours during portions of the Northern Hemisphere fall months of September and October 2011, when there was a combination of moist and dry regimes over the north central United States.

This paper will provide a summary of the current state of GOES Sounder radiance and retrieval assimilation in numerical models as a motivating factor for this research; the design of the experiment in seeking to quantify the impact of these retrievals on a regional-scale domain; and some results and comparisons between the WRF simulations, CRAS, and validating analyses and point observations for certain moisture fields. The objective is to develop a methodology for an effective, applicable study easily replicated in the field that confronts substantial forecast problems resulting from tropospheric moisture gradients which are inadequately resolved in NWP guidance at the current time.

Background of Problem

The basic premise of NWP is that it is an initial value problem. In striving to attain the perfect forecast, there are several other factors which constrain the accuracy of the solution, including parameterizations and approximations within the model; schemes which use time-stepping to solve partial differential equations over a finite interval; atmospheric features occurring on spatial and temporal scales smaller than resolved by the model; limited observations to populate the initial analysis, particularly above the surface and away from land; the quality and accuracy of those observations and the representation of any observation errors during the assimilation process; and the boundary conditions on the perimeter of the domain which can force the solution for long-duration simulations. The United States’ National Centers for Environmental Prediction (NCEP) operational models use numerous data sets consisting of in-situ and remotely sensed observations in building their analysis. However, some forecasters have indicated that moisture representation in the NCEP models is sometimes inadequate for forecasting mesoscale precipitation events (G. Mann 2011, personal communication).

To resolve this issue, additional moisture information was sought from unexploited earth-observing satellite instruments for incorporation into model simulations. Retrievals from the GOES-13 Sounder were chosen due to the limited amount of use during the current assimilation process in the NCEP operational models (Keyser et al. 2011). As of this writing, the North American Mesoscale (NAM; http://www.emc.ncep.noaa.gov/?branch=NAM) model and Global Forecast System (GFS; http://www.emc.ncep.noaa.gov/GFS/) model do use brightness temperatures from the GOES Sounders (GOES-11 and GOES-13) over ocean as part of their radiance assimilation system. However, they do not use retrievals, nor do they use the GOES Sounder observations over land. The Rapid Update Cycle (RUC) model, which is transitioning to the Rapid Refresh (RR; http://rapidrefresh.noaa.gov/) model, does use PW retrievals, but only those over ocean from the GOES-11 Sounder.

Conservation of enthalpy. Theory provides a connection between temperature and moisture during convective processes. As convective towers ascend, the parcel cools and condenses, resulting in a release of latent heat. In order to conserve moist enthalpy E,

E = CpT + Lvq,

where Cp is the heat capacity at constant pressure, T is temperature, Lv is the latent heat of vaporization, and q is specific humidity, not only does the convection require a removal of water vapor from the parcel, but that amount must be directly proportional to the change in temperature. This relationship must also hold for the depth of the convective cloud, from the base pressure at the LCL, Cbase, to its top pressure at the equilibrium level, Ctop, as shown in Baldwin et al. (2002), such that:

∫[Ctop → Cbase] (Cp ΔT + Lv Δq) dp = 0.
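As a quick numerical check of the moist enthalpy relation, condensing a given amount of water vapor implies a fixed warming of the parcel. The sketch below uses standard textbook values for Cp and Lv; the 1 g/kg figure is an illustrative choice, not one taken from the paper.

```python
# Back-of-envelope check of moist enthalpy conservation, E = Cp*T + Lv*q:
# condensation (dq < 0) must warm the parcel (dT > 0) so that Cp*dT + Lv*dq = 0.
CP = 1004.0  # heat capacity of dry air at constant pressure, J kg^-1 K^-1
LV = 2.5e6   # latent heat of vaporization, J kg^-1

def warming_from_condensation(dq):
    """Temperature change (K) conserving E when mixing ratio changes by dq (kg/kg)."""
    return -LV * dq / CP

# Condensing 1 g/kg of water vapor warms the parcel by roughly 2.5 K.
dT = warming_from_condensation(-0.001)
print(round(dT, 2))  # ~2.49
```

This fixed proportionality between Δq and ΔT is what ties the moisture analysis directly to the model's thermodynamic response.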

Therefore, the availability of middle and upper tropospheric moisture for deep convective processes is a factor in their strength, effectiveness, and longevity, because as environmental temperature increases aloft, the stability increases and parcel buoyancy decreases. In fact, it has been demonstrated that tropical convection responds to mid-level moisture, as found in Shepherd et al. (2001) and Tompkins (2001). This is also an observation which has been incorporated into the Kain-Fritsch (KF) convective scheme (Kain 2004), and further confirmed by Knupp and Cotton (1985), who found that environmental humidity is an important factor in assessing downdraft strength. The KF scheme is the convective parameterization of choice in this study to allow the model solution to indicate increased sensitivity as a consequence of differential moisture resulting in vertical mass fluxes.

Convective scheme. The WRF simulations in this experiment all utilize the KF convective parameterization, which is a mass flux scheme, and thus requires an adjusted response based on the grid scaling. The closure for the KF scheme is convective available potential energy (CAPE). This is an important source for latent heat release, and thus, accumulated convective precipitation. It has been shown in Kain and Fritsch (1990) that the normalized vertical mass flux varies significantly—by a factor of two—in the upper troposphere for changes of relative humidity between 50% and 90%. This sensitivity is critical because, for cold temperatures, the amount of water vapor mixing ratio required to adjust the relative humidity is not particularly substantial.
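To make the last point concrete, the water vapor needed to shift relative humidity from 50% to 90% can be estimated from the Bolton (1980) saturation vapor pressure approximation. The sketch below is illustrative only; the temperature and pressure levels are assumed values, not ones taken from the experiment.

```python
import math

def sat_mixing_ratio(t_c, p_hpa):
    """Saturation mixing ratio (g/kg) from the Bolton (1980) approximation
    for saturation vapor pressure; t_c in deg C, p_hpa in hPa."""
    es = 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))  # hPa
    return 1000.0 * 0.622 * es / (p_hpa - es)

# Water vapor needed to raise RH from 50% to 90%, approximating RH ~ q/qs:
dq_upper = 0.4 * sat_mixing_ratio(-40.0, 300.0)  # cold upper troposphere
dq_lower = 0.4 * sat_mixing_ratio(10.0, 850.0)   # warm lower troposphere
print(f"{dq_upper:.2f} g/kg aloft vs {dq_lower:.2f} g/kg near the surface")
```

At the cold level, a few tenths of a gram per kilogram is enough to span the 50-90% range that doubles the KF normalized vertical mass flux, whereas near the surface more than an order of magnitude more vapor is required.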

Data and Methodology

The configuration of the models in this WRF transition experiment was intended to be easily duplicated in the field as part of the EMS. The dynamics core and physics packages chosen for the ARW runs closely match those from the local WRF model used at the NWS office in Milwaukee, Wisconsin. Table 1 outlines the core configuration. The WRF code used for simulations throughout the study was version 3.1.1.

Dynamics                 Non-Hydrostatic
Cumulus Scheme           Kain-Fritsch
Microphysics Scheme      WSM Single-Moment 5-Class
PBL Scheme               Yonsei University
Land Surface Scheme      NOAH
Surface Layer Physics    Monin-Obukhov with heat and moisture surface fluxes
Long Wave Radiation      RRTM
Short Wave Radiation     Dudhia Scheme
Time-Integration Scheme  Runge-Kutta 3rd Order
Damping                  Rayleigh

TABLE 1. The core configuration for the Weather Research and Forecasting (WRF) model used in the experiment. The dynamical package was the Advanced Research WRF (ARW). Each simulation had an adaptive time step. References for the schemes can be found in the Skamarock et al. (2008) technical note.
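The Table 1 configuration maps onto the physics and dynamics options of the WRF namelist.input file. The fragment below is an illustrative sketch using the standard ARW option codes for these schemes, not the authors' actual namelist; exact values should be checked against the WRF 3.1.1 documentation.

```
&physics
 mp_physics          = 4,       ! WSM single-moment 5-class microphysics
 ra_lw_physics       = 1,       ! RRTM longwave radiation
 ra_sw_physics       = 1,       ! Dudhia shortwave radiation
 sf_sfclay_physics   = 1,       ! Monin-Obukhov surface layer
 sf_surface_physics  = 2,       ! Noah land surface model
 bl_pbl_physics      = 1,       ! Yonsei University PBL
 cu_physics          = 1,       ! Kain-Fritsch cumulus scheme
/

&dynamics
 non_hydrostatic     = .true.,  ! non-hydrostatic dynamics
 damp_opt            = 3,       ! Rayleigh damping
/

&domains
 use_adaptive_time_step = .true.  ! adaptive time step, per the Table 1 caption
/
```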

The domain selected for the simulations was over the north central United States, including the Northern Plains and Upper Mississippi Valley. The Lambert Conformal grid contained 100 grid points in each horizontal direction with equal-area spacing of 20 km to minimize the time to complete each simulation. The domain was thus square with 2000 km on each boundary and 45 vertical levels up to a model top of 50 hPa. The WRF model runs were initialized twice daily during the experiment period at the standard times of 00 and 12 UTC and executed out to 36 hours, outputting every hour.

Configuration of control and experimental WRF runs. There was one control and two experimental WRF runs, all of which utilized a MODIS sea surface temperature composite for water grid points (Haines et al. 2007) and soil moisture from the operational 0.5-degree GFS distributed by NCEP. Soil temperatures and the source of atmospheric properties were different based on the run. Temperature, wind, relative humidity (water vapor mixing ratio), cloud water mixing ratio, geopotential height, and surface pressure were pre-processed by the WRF prior to each unique model simulation. Additional moisture information was available for the first experimental run (WRFY). As part of the WRF preparation process, input model fields were interpolated both vertically and horizontally to the WRF grid, which resulted in some minor smoothing to the analysis.

The control run, herein referred to as WRFX, contained ICs and boundary conditions (BCs) from the GFS executed six hours prior to the experiment initialization time. Thus, the six-hour forecast from the GFS was used to initialize the WRFX run. Lateral boundaries were forced every three hours from the same GFS run. Moisture components of the GFS initial and boundary conditions were relative humidity and cloud water mixing ratio.

The first experimental run, herein referred to as WRFY, contained ICs and BCs from a CRAS simulation run on an expanded 45 km grid with an identical projection, allowing lateral boundaries to be forced hourly. Moisture initialization in the WRFY came from four mixing ratios produced by the CRAS pre-forecast procedure: water vapor, cloud water, ice water, and rain water. The CRAS utilized only one cloud mixing ratio and one precipitation mixing ratio, however. The form of the water carried in the CRAS mixing ratio arrays was a function of the temperature. They were classified prior to being served to the WRF preparation process as input. The background for the CRAS simulation was the same GFS run as used in the WRFX run.

The second experimental run, herein referred to as WRFZ, used the ICs from a “cold start” CRAS initial-hour assimilation but BCs from the previous GFS run, as in the WRFX. Like the WRFX, the WRFZ used a six-hour forecast from the GFS as the IC background. The WRFY took advantage of the CRAS pre-forecast assimilation of GOES Sounder retrievals into the ICs, which is commonly known as a “hot start”. The WRFZ took advantage of the Sounder retrievals which improve the moisture analysis only at the initialization time. Moisture initialization in the WRFZ came in the form of water vapor mixing ratio only.

The purpose and configuration of the WRFZ run was strictly to assess whether the updated moisture analysis would have an impact on short-term forecasts of moisture-related variables: relative humidity, total precipitable water (TPW), and accumulated precipitation. In contrast, the intent of the WRFY experiment was to see whether the CRAS could build an analysis which could outperform a simple assimilation technique, or no assimilation at all, since the CRAS was developed to assess the impact of space-based observations on the accuracy of NWP solutions. Producing forecasts using parameterizations and techniques which exploit and give merit to information from satellites, particularly the GOES Sounder, was thus a necessary component of the project.

Precipitable Water Results

In assessing the value of the GOES-13 Sounder retrievals to the WRF forecasts, it was first necessary to examine the improvement to the PW analyses initializing the model runs. The subsequent evaluation focused on the impact of the retrievals on forecasts, in 12-hour increments, through 36 hours, for each of the simulations, the control and both experiments. TPW was compared against the NAM analysis and point Global Positioning System (GPS) integrated precipitable water (IPW) observations. For comparisons involving NAM mass fields (TPW), statistics were computed for grid points within the interior of the domain; for those involving the NDFD, statistics were computed for grid points within the continental United States in the interior of the domain; for those involving GPS-IPW, statistics were computed for TPW at grid points near the observation sites. Interior grid points were used for the grid verification of the model domain instead of all eligible grid points to discount any BC and WRF blend zone influences from the results. The Model Evaluation Tools (MET) package, version 3 (http://www.dtcenter.org/met/users/), was used to compute the statistics. The primary statistics used to assess performance were mean absolute error (MAE) and root-mean-square error (RMSE).
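The MET package computed these statistics operationally; conceptually, the interior-point MAE and RMSE reduce to the short NumPy sketch below. The trim width and the synthetic TPW fields are assumptions for illustration, not values from the study.

```python
import numpy as np

def interior_errors(forecast, analysis, trim=10):
    """MAE and RMSE of a forecast grid against a verifying analysis, using only
    interior points to discount boundary-zone influence.  `trim` is the number
    of perimeter rows/columns excluded (an illustrative choice)."""
    f = forecast[trim:-trim, trim:-trim]
    a = analysis[trim:-trim, trim:-trim]
    err = f - a
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    return mae, rmse

# Toy example on a 100x100 TPW grid (mm), mirroring the experiment domain size.
rng = np.random.default_rng(0)
analysis = 20.0 + rng.normal(0.0, 2.0, (100, 100))
forecast = analysis + rng.normal(0.5, 1.0, (100, 100))  # biased, noisy forecast
mae, rmse = interior_errors(forecast, analysis)
```

Because squaring weights large misses more heavily, RMSE is always at least as large as MAE, which is why the two statistics together hint at whether errors are broadly distributed or driven by a few outliers.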

The evaluation period consisted of 21 times between 00 UTC on 28 September 2011 and 00 UTC on 8 October 2011. This period was chosen for both dry and moist regimes. In addition, most precipitation was the result of well-forced, kinematic processes rather than thermodynamically driven. The objective was to establish whether a remotely sensed, integrated quantity could be adequately analyzed to a three-dimensional grid and produce favorable results in short-term forecasts, under 36 hours.

Error of experiment and control WRF analyses compared to GPS-IPW. An initial investigation calculated the mean TPW MAE for the experimental runs compared against GPS-IPW during the evaluation period. There was not a discernible leader. The difference in mean MAE over this period between the best performer, the WRFX (control), and the WRFY was 0.03 mm. The WRFY had the lowest MAE for 38% of the 21 periods; the WRFZ had the lowest MAE for 33% of the periods; and the WRFX had the lowest MAE for 28% of the periods, which also indicates a similar trend in the run-to-run RMSE during the experiment period. The inconclusive results were likely due to the sparse, uneven spatial distribution of GPS-IPW sites across the domain compared to the magnitude of the correction. Because of this, it is possible that some horizontal moisture distributions favor one analysis over the others.

GPS-IPW     WRFX-00   WRFY-00   WRFZ-00
MAE (mm)    1.58      1.61      1.59
RMSE (mm)   2.07      2.11      2.10

Error of experimental and control WRF 12-hour forecasts compared to GPS-IPW. The aforementioned case was representative of the mean quantitative results over the experiment period. Using GPS-IPW point observations, the 12-hour forecast comparison indicated that the WRFZ narrowly outperformed the WRFX, with a lower mean MAE for TPW by 0.05 mm. For both the 24-hour and 36-hour forecasts, the WRFZ and WRFX performed statistically about the same, with a mean MAE difference no greater than 0.01 mm. During this period, the TPW MAE was lowest for the WRFZ 12-hour forecast nine of 21 times, or 43%. This was in comparison to the WRFX, which had the lowest MAE only five of 21 times, or 24%. The WRFY had the lowest MAE seven of 21 times, or 33%. This indicates that the WRFY analysis was competitive, but occasionally exhibited a higher MAE for individual misses than the WRFX and WRFZ, which increased its mean TPW MAE. Of particular interest is that the WRFY analyses exhibited a lower TPW MAE for four of the last five periods evaluated. The 12-hour forecast TPW MAE for the WRFY was likewise the lowest for four of the last five periods. This lends credence to the long-standing tenet that NWP is an initial-value problem and that more accurate ICs result in a more accurate forecast.

GPS-IPW     WRFX-12   WRFY-12   WRFZ-12
MAE (mm)    1.77      1.81      1.72
RMSE (mm)   2.27      2.37      2.24

Error of experiment and control WRF 12-hour forecasts compared to NAM analysis. In order to confirm the result of the point comparison, a grid analysis comparison was arranged. The grid comparison used the NAM analysis of TPW, which contains the GPS-IPW observations as input. All grid points within the verification area were compared. The verification grid and forecast grid were collocated after re-projecting the verification analysis. This was completed through remapping and upscaling the NAM analysis from its native projection and resolution to that of the experiment domain. A bilinear interpolation was used as part of the subsampling. WRF model output was not re-projected. The mean TPW MAE calculated with this approach over the experiment period reached the same result as the GPS-IPW point-wise comparison. Using the NAM analysis and comparing all verification grid points, the WRFZ mean MAE was 0.04 mm less than the WRFX, which produced the next lowest mean MAE. The trend throughout the evaluation period can be found in Figure 1. The WRFY exhibited the worst mean MAE, but that was only 0.16 mm greater than the WRFZ.
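As a sketch of the remapping step, bilinear interpolation estimates a field value at a fractional grid position from its four surrounding points. The minimal function below ignores map projections and upscaling, which the actual NAM re-projection also had to handle, and is illustrative rather than the study's implementation.

```python
import numpy as np

def bilinear(grid, x, y):
    """Bilinear interpolation of a 2-D field at fractional indices (x, y).
    A minimal stand-in for the remapping that placed the NAM analysis on the
    experiment grid; real regridding must also transform map projections."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    return (grid[y0, x0] * (1 - fx) * (1 - fy)
            + grid[y0, x1] * fx * (1 - fy)
            + grid[y1, x0] * (1 - fx) * fy
            + grid[y1, x1] * fx * fy)

field = np.array([[0.0, 10.0],
                  [20.0, 30.0]])
print(bilinear(field, 0.5, 0.5))  # midpoint of the four corners -> 15.0
```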

NAM         WRFX-12   WRFY-12   WRFZ-12
MAE (mm)    1.97      2.09      1.93
RMSE (mm)   2.59      2.78      2.56

The performances of the WRFX and WRFZ at the 24- and 36-hour forecasts were not discernibly different. Again, the difference in the mean MAE was 0.01 mm or less between the two for both forecast hours. The WRFY had a higher mean MAE at those forecast hours, but still indicated some strength relative to the WRFX and WRFZ during the last five periods.

Conclusions

This study was an initial investigation into the benefits of using GOES-13 Sounder retrievals as part of regional NWP to improve forecasts of TPW. In the evaluations conducted here, the retrievals were found to be inconsequential in many cases and did not produce a consistent positive reflection in the statistics. This indicates that instruments onboard our earth-observing geostationary satellites need spectral improvement to supply a meaningful correction to analyses used in regional NWP. It also requires a reassessment of how the operational NWP community uses indirect moisture information from remote sources beyond the techniques explored here. While the results presented are perhaps a testament to the adequacy of current analyses, they stand as a challenge to improve numerical techniques for assimilating additional data because the number of data sets assimilated into operational models continues to grow. Yet, NWP solutions are far from perfect and, as demonstrated, there is information from the GOES Sounder which can improve moisture representation in models and alter forecasts for the better.

FIG. 1. Mean absolute error (top) and root-mean-square error (bottom) for total precipitable water (mm) over the period from 00 UTC 28 September 2011 to 00 UTC 8 October 2011. Error is calculated based on the NAM analysis at the valid time compared to the 12-hour forecasts of the WRFX (red), WRFY (green), and WRFZ (blue) for the same time. Note the change to the scale on the left ordinate axis between the figures.

Benefits were expected in part due to a casting of the ARW governing equations in flux form in conjunction with the KF convective parameterization, which is sensitive to variations in middle and upper tropospheric relative humidity as part of its formulation. The method and results were tempered based on the limited vertical resolution of the GOES Sounder. During the summer months, the GOES Sounder water vapor channels are most capable of detecting temperature and moisture gradients in the upper troposphere, and sense relatively little, if any, boundary layer moisture. A one-dimensional variational assimilation scheme was used to add or subtract water vapor from a model sounding, without regard to vertical gradients, based on three sigma-bounded layers produced by the Li et al. (2008) enhancements to the retrieval process.

Comparing WRFX and WRFZ, two sources of PW verification confirmed that forecasts were slightly better 12 hours after initialization if GOES-13 Sounder input was included. Results were calculated based on the GPS-IPW network and confirmed using a NAM analysis. There was no substantial impact of the added observations at 24 or 36 hours in the flow regime of the period studied from late September into early October. While the WRFY had fleeting success, the indications were that the CRAS dynamics and physics were controlling and negatively influencing the solution, even at short intervals from the initialization time.

Thus, in order to produce an accurate forecast of cloud, water vapor, and precipitation distributions, it is necessary for our NWP models to contain a detailed and accurate analysis of moisture. The current NCEP operational models are good, but there remains a small margin for improvement from assimilating additional observations if done with skill and knowledge of the dynamics and parameterizations within the model that would respond to such changes in producing a forecast. At the current time, this is only possible through the use of satellite products. Using the CRAS pre-forecast and assimilation techniques in conjunction with the WRF has allowed GOES Sounder observations, in the form of retrievals, to impact the solution. The WRF transition experiment conducted during the early fall of 2011 has been able to better quantify the degree of this effect, and will continue in real-time for upcoming seasons. While the results are tempered by some inherent shortcomings in the capabilities of the instrument, assimilation scheme, and numerical model, the strategy and path forward are clear. A careful investigation of moisture integration techniques within assimilation constraints and model parameterizations for different seasons and flow regimes can slowly extract gainful information from the current, and future, geostationary platforms.

References

Aune, R. M., 1994: Improved precipitation predictions using total precipitable water from VAS. Preprints, 10th Conference on Numerical Weather Prediction, American Meteorological Society, Portland, OR, 192–194.

Haines, S. L., G. J. Jedlovec, and S. M. Lazarus, 2007: A MODIS sea surface temperature composite for regional applications. IEEE Trans. Geosci. Remote Sens., 45, 2919–2927.

Kain, J. S. and J. M. Fritsch, 1990: A one-dimensional entraining/detraining plume model and its application in convective parameterization. J. Atmos. Sci., 47, 2784–2802.

Kain, J. S., 2004: The Kain–Fritsch convective parameterization: an update. J. Appl. Meteor., 43, 170–181.

Keyser, D., Environmental Modeling Center, National Weather Service, cited 2011: PREPBUFR Processing at NCEP. [Available online at http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/document.htm.]

Knupp, K. R. and W. R. Cotton, 1985: Convective cloud downdraft structure: an interpretive survey. Rev. Geophys., 23, 183–215.

Li, Z., J. Li, W. P. Menzel, T. J. Schmit, J. P. Nelson, III, J. Daniels, and S. A. Ackerman, 2008: GOES sounding improvement and applications to severe storm nowcasting. Geophys. Res. Lett., 35, L03806.

Shepherd, J. M., B. S. Ferrier, and P. S. Ray, 2001: Rainfall morphology in Florida convergence zones: a numerical study. Mon. Wea. Rev., 129, 177–197.

Skamarock, W. C., J. B. Klemp, J. Dudhia, D. O. Gill, D. M. Barker, W. Wang, and J. G. Powers, 2008: A description of the Advanced Research WRF version 3. NCAR Technical Note, TN-468+STR, 113 pp.

Tompkins, A. M., 2001: Organization of tropical convection in low vertical wind shears: the role of water vapor. J. Atmos. Sci., 58, 529–545.

Population Analysis of Seyfert Galaxies in the Coma-Abell 1367 Supercluster

Megan Jones, Eric Wilcots

University of Wisconsin-Madison1

Abstract We are studying the population of active galaxies residing both in and out of groups along the Coma-Abell 1367 supercluster to look at the occurrence of Seyfert galaxies. We are also measuring the level of activity, as defined by active galactic nuclei (AGN) and star formation rates in the galaxy groups. Our goal is then to relate this information to determine any environmental correlation of these two features. We report on the distribution of Seyfert galaxies as a function of environment across the supercluster and probe the characteristics of the population of groups that currently host at least one Seyfert.

Introduction

AGN feedback. One of the major unresolved issues in our understanding of the growth of galaxies, their central black holes, and their surrounding structures is the role of feedback. Feedback is the term for energy deposited back into the surroundings from the formation of stars or central black holes. Feedback is also a potential source for additional heating of intergalactic gas. It is now well established that the intergalactic gas in galaxy groups is significantly hotter than can be explained by the pure gravitational infall of baryons into the potential well (Borgani et al. 2002, Jeltema et al. 2006). The source of this excess heat is believed to be either starburst-driven galactic outflows or outflows from active galactic nuclei (Lloyd-Davies, Ponman, & Cannon 2002; Nath & Roychowdhury 2002). This unexplained heating of the intergalactic gas between groups is a major puzzle for astronomers. In the hope of finding a possible source of intergalactic heating, I am conducting a population analysis of the Seyfert galaxies throughout a section of the Coma-Abell 1367 supercluster. A Seyfert galaxy is a type of active galactic nucleus (AGN), the name given to a galaxy that hosts an actively accreting supermassive black hole in its nucleus. Seyfert galaxies tend to be spirals and are more widely distributed than elliptical galaxies. Our own Milky Way was probably once a Seyfert. In an effort to analyze intergalactic heating, I am investigating the properties of these AGN phenomena.

What is a Seyfert? Seyfert galaxies, named for astronomer Carl K. Seyfert, are a type of AGN characterized by strong, broad emission lines (Seyfert 1943), with an actively accreting black hole within the nucleus (Weedman 1977). These broad emission lines, in the case of Seyferts, are defined by Weedman as having widths of order 10^3 km sec^-1 (Weedman 1977). Seyfert galaxies are identified as objects with high values (greater than 2-3) of the ratio of OIII to Hβ (Kauffmann et al. 2003).
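As a sketch of how these two criteria translate into a selection rule, the hypothetical function below screens candidates on the OIII/Hβ ratio and line width. The threshold of 3 is one choice within the quoted 2-3 range, and real classification relies on full spectral diagnostics rather than a single cut.

```python
def looks_like_seyfert(oiii_flux, hbeta_flux, line_width_kms):
    """Crude screen for Seyfert candidates using the two criteria in the text:
    an OIII/Hbeta flux ratio above ~3 (Kauffmann et al. 2003) and broad
    emission lines of order 10^3 km/s (Weedman 1977).  Illustrative only."""
    ratio = oiii_flux / hbeta_flux
    return ratio > 3.0 and line_width_kms >= 1000.0

print(looks_like_seyfert(6.0, 1.5, 1200.0))  # ratio 4.0, broad lines -> True
print(looks_like_seyfert(2.0, 1.5, 300.0))   # low ratio, narrow lines -> False
```

A starburst galaxy can mimic the strong OIII line, which is why the Hβ strength shown in Figures 1 and 2 matters for separating the two populations.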

1 The Wisconsin Space Grant Consortium funded this research, and we are very appreciative of their contribution.

Figure 1. Galactic spectra obtained from SDSS. The spectrum on the left belongs to a non-AGN in 2MASS group 698; the one on the right is from a Seyfert galaxy in 2MASS group 683. The Seyfert spectrum clearly differs in appearance, with broad emission lines and a 2:1 ratio of the OIII line to the Hβ line. The spectrum in Figure 2 belongs to a starburst galaxy in SDSS group 17426. It is very similar in appearance to the Seyfert spectrum, the main distinguishing indicator being the stronger Hβ line.

Figure 2. Spectrum of a starburst galaxy in SDSS group 17426.

Welcome to the Supercluster! Meet Coma-Abell 1367. The supercluster spans a right ascension range from 11h to 14h and a range in declination from 24d to 30d, connecting the large clusters Coma and Abell 1367. The supercluster contains regions of denser groups as well as a less dense field near the center. We have compiled a catalog of the galaxy groups and group members in the supercluster, with the data sampling regions from the densest parts of Coma to the “field” in the middle of the supercluster. Our group list comes from the 2MASS galaxy catalog as assembled by Crook et al. 2007. With the groups identified, we used the Sloan Digital Sky Survey (SDSS) to identify individual members of each group, as well as galaxies identified from the SDSS redshift survey (Berlind et al. 2006), producing a catalog with over 1000 spectroscopically identified galaxies. The SDSS galaxies come from the Mr18 sample, which is accessible online (http://lss.phy.vanderbilt.edu/groups/dr7/).

Figure 3. The positions of the 2MASS galaxy groups on the sky.

AGN Activity within galaxy groups.

Figure 4. The galaxy groups in the Coma-Abell 1367 supercluster. The crosses represent groups with no activity, the x's represent groups hosting at least one Seyfert.

Having acquired the list of galaxies from 2MASS and the SDSS database, I marked the position of each group and noted which groups had no Seyferts, one Seyfert, or multiple Seyferts. Within the identified groups, the catalog comes to about 40 confirmed Seyferts. I have plotted the individual groups and identified the locations of the Seyferts within the groups to look for any trend.

Figure 5. The positions of Seyferts in SDSS groups 19296 (left) and 19286 (right). The x's indicate Seyfert galaxies, crosses non-Seyferts. Group 17426, the heart of the Coma cluster, is shown below.

Figure 6. SDSS group 17426, the Coma cluster.

The volume. With a somewhat clearer picture of the groups in this section of the supercluster, I started looking at the larger environment in which the groups find themselves. I want to know what environmental factors contribute to AGN activity in galaxies, and why some groups are more likely to host multiple Seyferts while others have none. I used the SDSS database to compile a list of Seyfert galaxies that are not positioned in groups in order to get a more unbiased sample of Seyferts. With this wider range of environmental conditions, I have as unbiased a sample as I can hope to obtain. Using the defining features of a Seyfert mentioned above, I used the line strengths of OIII and Hβ to identify AGN in the volume that may not be situated in groups. The frequency of occurrence of these Seyferts is well matched to that of the group environment, with Seyferts making up about 9% of the population in both cases.

Using the SDSS tool CasJobs, I retrieved the strengths of the NII, OIII, Hα, and Hβ emission lines for all of the galaxies in the volume. Following the procedure of Moric et al. 2010 (Kauffmann et al. 2003; Kewley et al. 2001, 2006), I created optical spectroscopic diagnostic diagrams that separate emission-line galaxies into star-forming galaxies and AGNs. The Kauffmann et al. 2003 demarcation classifies a galaxy as an AGN when

log(OIII/Hβ) > 0.61 / (log(NII/Hα) − 0.05) + 1.3.
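As a sketch, the Kauffmann et al. (2003) demarcation can be applied in a few lines of Python; the function name and the example line ratios below are illustrative, not values taken from this paper:

```python
def kauffmann_agn(log_nii_ha, log_oiii_hb):
    """Classify an emission-line galaxy with the Kauffmann et al. (2003)
    demarcation: AGN if log(OIII/Hb) > 0.61/(log(NII/Ha) - 0.05) + 1.3.
    Inputs are base-10 logarithms of the line-flux ratios."""
    # The curve diverges at log(NII/Ha) = 0.05; everything at or to the
    # right of that asymptote falls on the AGN side.
    if log_nii_ha >= 0.05:
        return True
    return log_oiii_hb > 0.61 / (log_nii_ha - 0.05) + 1.3

# Illustrative ratios: a galaxy on the star-forming locus, and a Seyfert-like one.
print(kauffmann_agn(-0.5, 0.0))  # False (star-forming side)
print(kauffmann_agn(0.0, 1.0))   # True (AGN side)
```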

Figure 7. Optical spectroscopic diagnostic diagrams (Moric et al. 2010, Kauffmann et al. 2003, Kewley et al. 2001). Following the demarcation by Kauffmann, the dots correspond to starburst galaxies, the crosses to Seyfert galaxies. The non-Seyfert galaxies with higher ratios of NII to Hα toward the right side of the plot are most likely LINERs.

Conclusion Of the 66 groups sampled, 11 groups contain one Seyfert and 6 groups contain multiple Seyferts, leaving 49 groups without any activity. The 2MASS group 723 contains the highest fraction of Seyferts: 37% of its galaxy members are classified as Seyfert galaxies. We have yet to determine whether the occurrence of Seyferts depends on environmental factors. So far, we have not identified a correlation between environment and AGN activity, which occurs roughly evenly both within and outside of galaxy groups. However, there does seem to be a tentative correlation between the size of a group and its fraction of Seyferts; SDSS group 17426, with over 250 members, hosts a much smaller fraction of Seyferts (9 Seyferts) than the groups with 10 or fewer members. We will investigate this correlation in continuing research.

References

Abazajian, K. N., et al. 2009, ApJS, 182, 543

Berlind, A. A., et al. 2006, ApJS, 167, 1

Borgani, S., Governato, F., Wadsley, J., Menci, N., Tozzi, P., Quinn, T., Stadel, J., & Lake, G. 2002, MNRAS, 336, 409

Crook, A.C., Huchra, J.P., Martimbeau, N., Masters, K.L., Jarrett, T., & Macri, L.M. 2007, ApJ, 655, 790

Freeland, E., Stilp, A., & Wilcots, E. 2009, AJ, 138, 295

Jeltema, T.E., Mulchaey, J.S., Lubin, L.M., Rosati, P., & Bohringer, H. 2006, ApJ, 649, 649

Kauffmann, G., et al. 2003, MNRAS, 346, 1055

Lloyd-Davies, E. J., Ponman, T. J., & Cannon, D. B. 2000, MNRAS, 315, 689

Moric, I., et al. 2010, ApJ, 724, 779

Nath, B.B., & Roychowdhury, S. 2002, MNRAS, 333, 145

Osterbrock, D. E. 1989, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei (Mill Valley, CA: University Science Books)

Schmitt, H.R., Ulvestad, J.S., Antonucci, R.J., & Kinney, A.L. 2001, ApJ, 132, 199

Weedman, D. 1977, ARA&A, 15, 69

X-Ray and Radio Emissions of AWM and MKW Clusters

Michael Ramuta1

Astronomy Department, University of Wisconsin-Madison

Abstract A grasp of the life cycles of large-scale structures is critical to understanding the Universe. This can be accomplished through the study of poor clusters, that is, younger clusters that are likely evolving to another state. The selected clusters are significant in that they are poor but also possess a type-cD galaxy. This bright central galaxy suggests that these clusters may be dynamically evolved and are potential candidates for fossil groups. In order to more fully understand the structure and behavior of poor galaxy clusters, 12 clusters were selected and analyzed. Using data from the Sloan Digital Sky Survey, the Chandra X-Ray Archive, and the VLA FIRST Survey, we present X-ray profiles and radio observations of these 12 galaxy clusters.

Procedure As discussed by Hanisch and White (1981), the type-cD galaxy within an AWM or MKW cluster will have individual X-ray and radio sources (Bagchi & Kapahi, 1994). Further, Bagchi & Kapahi argue that the position of the type-cD galaxy and the evolutionary state of the cluster are factors in the radio brightness of the type-cD galaxy. The clusters examined were AWM1, AWM2, AWM3, AWM4, MKW1s, MKW4, MKW4s, MKW5, MKW7, MKW8, MKW10, and MKW12. To reduce contamination by foreground and background sources, each X-ray and radio emission had to be matched with the corresponding galaxy. Using the SDSS, the redshift of the galaxy could be used to determine whether the emission was indeed from a member of the cluster or from a foreground or background source. Following the removal of the contaminant emissions, an accurate survey of X-ray and radio emissions was obtained.
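The redshift-matching step described above can be sketched as follows; the galaxy names, redshifts, and membership tolerance are illustrative values, not ones taken from this paper:

```python
def cluster_members(sources, z_cluster, dz_max=0.005):
    """Filter matched (name, redshift) pairs, keeping only those whose
    redshift is consistent with cluster membership; `dz_max` is an
    assumed tolerance."""
    return [name for name, z in sources if abs(z - z_cluster) <= dz_max]

# Hypothetical emission sources matched to galaxies near a cluster at z = 0.02:
detections = [("gal_a", 0.0199), ("gal_b", 0.0213), ("background_qso", 0.31)]
print(cluster_members(detections, 0.02))  # ['gal_a', 'gal_b']
```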

X-Ray Emission X-ray data from the Chandra X-Ray Data Archive was found for 5 of the AWM & MKW clusters: AWM4, MKW1s, MKW4, MKW4s, and MKW8. MKW4 and MKW4s feature an extended X-ray source [figure 1] centered on the type-cD galaxy. AWM4 has an extended X-ray source in the center of the cluster, but not around the type-cD galaxy [figure 2]. While MKW1s and MKW8 do not have extended X-ray sources, both clusters have pointlike X-ray emission from the type-cD galaxy. X-ray radial profiles were then produced by eliminating point sources and calculating the surface brightness in 38 concentric annuli about the center of the source. The brightest of these sources is MKW4s [figure 3], and since a significant source of diffuse X-ray emission is cluster collisions (Yusef-Zadeh et al, 2003), this further indicates that MKW4s is a collision of two clusters. When examining the radial profiles of the extended emission sources, MKW4s is the brightest at the source, but rapidly decreases in

1 This author would like to acknowledge the support of the National Space Grant College Fellowship Program and the Wisconsin Space Grant Consortium.

brightness. AWM4 and MKW4 have significantly flatter brightness profiles, maintaining brightness to larger radii [figure 4].
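The annular binning behind these radial profiles can be sketched as follows; this toy version uses a small synthetic image and 4 annuli rather than the 38 used on the Chandra data, and all names are illustrative:

```python
import math

def radial_profile(image, cx, cy, n_annuli, r_max):
    """Mean surface brightness in concentric annuli about (cx, cy).
    `image` is a 2-D list of pixel values, with removed point sources
    masked as None; returns one mean brightness per annulus."""
    width = r_max / n_annuli
    sums = [0.0] * n_annuli
    counts = [0] * n_annuli
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            if val is None:              # masked point source
                continue
            k = int(math.hypot(x - cx, y - cy) / width)
            if k < n_annuli:
                sums[k] += val
                counts[k] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

# Toy image whose brightness falls off with radius from the center pixel.
img = [[100.0 / (1.0 + math.hypot(x - 8, y - 8)) for x in range(17)]
       for y in range(17)]
profile = radial_profile(img, 8, 8, 4, 8.0)
print(profile)  # mean brightness decreases outward, annulus by annulus
```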

Radio Emission In Poor Clusters Radio data from the VLA FIRST Survey was used in conjunction with optical data to find radio sources in AWM and MKW clusters. Radio emission is usually found from individual galaxies rather than from the cluster as a whole, and the strongest sources are typically type-cD galaxies located in the center of the cluster (McHardy, 1979). Radio sources were detected within AWM1, AWM4, MKW4, MKW4s, MKW5, MKW7, MKW8, MKW10, and MKW12 [figure 5]. Further, AWM1, MKW4s, and MKW8 had radio emission from the type-cD galaxy, located in the center of the cluster.

AGN In Poor Clusters Large-scale radio features indicating an active galactic nucleus were found in AWM4, MKW4, MKW4s, MKW7, and MKW8 [figure 6]. No type-cD galaxies were determined to be active, but the extended x-ray emission of AWM4 was also centered on an AGN.

Results From previous Wisconsin Space Grant Consortium research, AWM2, AWM3, MKW1s, MKW4, MKW5, and MKW10 were determined to be in dynamical equilibrium, while AWM1, AWM4, MKW7, and MKW8 are still undergoing galactic accretion. MKW4s and MKW12 were determined to be merging clusters. From the radio data, we can see that the only clusters that lack radio emission (AWM2, AWM3, MKW1s) are all in dynamical equilibrium. Further, the clusters with type-cD emission (AWM1, MKW4s, and MKW8) are not in dynamical equilibrium. With the exception of AWM4, AGN were found exclusively in clusters not in dynamical equilibrium. From the surface brightness of the X-ray sources, a temperature estimate can be made by fitting an appropriate model. The strong extended X-ray emission in MKW4s further indicates that it is indeed a merger of two clusters.

FIGURE 1: X-ray HSV Plot of MKW4

FIGURE 2: X-ray HSV Plot of AWM4

FIGURE 3: Radial Profile of MKW4s

FIGURE 4: Radial Profile of AWM4

FIGURE 5: Radio Plot of MKW10

FIGURE 6: Radio Plot of AWM4

References

Albert, C., White, R. & Morgan, W. 1977, ApJ, 211, 309

Bagchi, J., & Kapahi, V. K., 1994, J. ApA, 15, 275-308

Dariush, A., Khosroshahi, H., Ponman, T., et al. 2007, MNRAS, 382, 433

Fuller, F., West, M., & Bridges, T. 1999, ApJ, 22-26

Hanisch, R. J. & White, R. A. 1981, AJ, 806-810

Heisler, J., Tremaine, S., & Bahcall, J. N. 1985, ApJ, 298, 8

Jones, C., & Forman, W. 1999, ApJ, 511, 65 (JF)

Koranyi, D., & Geller, M. 2002, AJ, 123, 100

Ledlow, M. J., Loken, C., Burns, J. O., Hill, J. M., & White, R. A. 1996, AJ, 112, 388

McHardy, I. M., 1979, MNRAS, 188, 495

Morgan, W. W., Kayser, S., & White, R. A. 1975, ApJ, 199, 545

Yusef-Zadeh, F., Nord, M., Wardle, M., Law, C., Lang, C., & Lazio, T. J. W. 2003, ApJ, L103-L106

Developing a Focal Plane Array at the GBT for 21 cm Astronomy

Christopher Anderson

Department of Physics, University of Wisconsin-Madison

Abstract. Progress has been made on the design of a 9-receiver array in the 700 to 945 MHz range to replace the single receiver in that frequency range currently in use at the Green Bank Telescope (GBT) in West Virginia. The new array will increase the rate of data collection by a factor of ~9 in ongoing 21 cm intensity mapping, and it should also be a valuable resource for other users of the GBT. The following paper describes the science of 21 cm intensity mapping and the work that has been done in designing the receiver for the focal plane array.

Cosmology with the 21 cm line of Neutral Hydrogen (HI) It has been observed that, on the largest scales, the universe is quite symmetric. If one considers scales much larger than galaxy clusters, the universe looks the same in every direction (isotropic) and also the same at every point (homogeneous). This allows cosmologists to make the simplifying approximation that, again only on very large scales, the matter and radiation densities of the universe are uniform. Combining this simplification with the rules of general relativity (GR), one can derive the Friedmann Equation, a differential equation which determines how the universe expands as a function of four things: the matter density, the radiation density, the curvature of space, and a mysterious component called dark energy (which is, in the currently favored ΛCDM model, Einstein's cosmological constant). As the universe expands, the wavelength of light traveling through the universe expands with it.

It has also long been known that the rotation curves of galaxies are inconsistent with gravitation from the visible luminous matter: objects are traveling too fast for the amount of matter that is seen. Therefore, rather than completely abandon the highly successful GR theory of gravity, cosmologists now postulate that most matter is 'dark matter,' which does not interact with or produce light. Studying dark matter and dark energy is a central goal of modern cosmology.
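For reference, the Friedmann equation summarized above can be written in its standard form (not reproduced in the text; here a is the scale factor, H the Hubble parameter, ρ_m and ρ_r the matter and radiation densities, k the spatial curvature, and Λ the cosmological constant):

```latex
H^2 \equiv \left(\frac{\dot{a}}{a}\right)^2
  = \frac{8\pi G}{3}\left(\rho_m + \rho_r\right)
  - \frac{k c^2}{a^2}
  + \frac{\Lambda c^2}{3}
```

The four terms on the right correspond to the four ingredients listed above: matter, radiation, curvature, and dark energy.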

Mapping the distribution of neutral hydrogen, using the 21 cm line, is a convenient way to study the distribution of dark matter and the equation of state of dark energy (to be discussed more fully later). Hydrogen's abundance makes it a good unbiased tracer of the underlying dark matter distribution (because of gravitational attraction, it should map the dark matter). The 21 cm line, being a (low energy) atomic transition, is always produced at a fixed wavelength (21 cm), and therefore any increase in wavelength over 21 cm is due to the expansion of the universe (cosmological redshift). The Friedmann equation allows one to use this redshift to calculate the radial position of the HI and the time in the past when the radiation was emitted. Aside from mapping the dark matter, the HI power spectrum should show the remnants of Baryon Acoustic Oscillations (BAO). BAOs were pressure waves in the primordial plasma of the early universe, created by slight overdensities and underdensities in the matter of the universe (caused by random quantum fluctuations). Eventually the plasma cooled enough for photons to decouple from matter, the universe became transparent, and the pressure driving the propagation of the

Supported by the Wisconsin Space Grant Consortium and the Van Vleck Award at UW Madison

sound waves disappeared. The over/under density pattern of these waves, now vastly increased in size, became frozen in place. At that point, their size became tied to the expansion of the universe. Thus, if we can map the size of the BAOs over time, we can produce a history of the expansion of the universe. This precise history will provide higher-precision tests of the nature of dark energy. For example, in co-moving coordinates (in which the expansion of the universe is factored out), the radial size of the BAO feature should be equal to the tangential size. Since the radial coordinate depends on the dark energy density, constraining these two dimensions to be equal can put limits on how the dark energy density varies with time (its equation of state). The current ΛCDM model has a constant dark energy density, but there may be another, more complicated equation of state.

Completed Intensity Mapping with the GBT Mapping 21 cm HI emission from individual galaxies at high redshift is impossible with current single-dish telescopes, due to the diffraction limit. However, it is possible to use large radio telescopes like the GBT to perform three-dimensional intensity mapping by detecting the combined emission from the many galaxies that occupy large (~1000 Mpc^3) voxels [F. B. Abdalla and S. Rawlings (2005), S. Wyithe, A. Loeb, and P. Geil (2007), Y. Mao et al. (2008), H.-J. Seo et al. (2010)]. The use of such large voxels allows telescopes such as the Green Bank Telescope to reach up to z~2, conducting a rapid survey of large volumes. The BAO features are still visible on this scale.
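The redshift probed at a given observing frequency follows directly from the fixed rest frequency of the 21 cm line (1420.406 MHz); a small sketch of the conversion, applied to the 700-945 MHz band of the proposed array:

```python
F21 = 1420.406  # rest frequency of the 21 cm hydrogen line, in MHz

def redshift(freq_mhz):
    """Redshift at which 21 cm emission is observed at `freq_mhz`."""
    return F21 / freq_mhz - 1.0

# The 700-945 MHz band discussed in this paper spans roughly z = 0.5 to 1:
print(round(redshift(945.0), 2))  # 0.5
print(round(redshift(700.0), 2))  # 1.03
```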

The biggest challenges are RFI and foreground removal. The strongest foregrounds are synchrotron emitters, which produce approximately 1000 times more flux than the HI signal. Synchrotron radiation, however, is spectrally smooth and pointlike, whereas the HI signal should be uncorrelated in frequency along the line of sight; this makes foreground subtraction through Singular Value Decomposition (SVD) possible but difficult. Consequently, our collaboration has thus far only published results in cross-correlation with galaxy surveys; auto-correlation results are not yet ready. First results, a cross-correlation with the DEEP2 galaxy survey at an average redshift of z~0.8, were published in 2010 [Chang et al. (2010)]. A second paper, cross-correlating with WiggleZ fields, has been submitted.

Design of new Focal Array for GBT Efforts are underway to replace the 700-945 MHz GBT receiver with a 3x3 array in the focal plane of the GBT. The increase in the mapping speed is given by the radiometer equation, which implies that the mapping speed (the inverse of the time required to reach a desired signal-to-noise ratio) is proportional to the number of receiver elements divided by the square of the system temperature of those elements. The system temperature is a measure of the inherent noise of an antenna, due to the electronic amplifier and unwanted radiation picked up from the ground and sky. We plan to limit the electronic noise by using a cryo-cooled low noise amplifier. The amount of radiation picked up from the ground is a function of the shape of the antenna's beam pattern and is quantified by the antenna spill temperature.
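A minimal sketch of this radiometer-equation scaling (the function name and the example system temperature are illustrative): the survey time to fixed sensitivity goes as T_sys^2 / N, so nine receivers at the same system temperature survey nine times faster.

```python
def survey_time_ratio(n_old, t_sys_old, n_new, t_sys_new):
    """Ratio of survey times (new/old) to reach the same sensitivity,
    using the scaling t ∝ T_sys^2 / N from the radiometer equation."""
    return (t_sys_new ** 2 / n_new) / (t_sys_old ** 2 / n_old)

# 1 receiver -> 9 receivers at the same (illustrative) 25 K system temperature:
print(survey_time_ratio(1, 25.0, 9, 25.0))  # ~0.111: one ninth of the time
```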

I have worked almost exclusively on the antenna design, and the rest of this report describes that work. All antenna designs were modeled with CST's Microwave Studio.

The size constraints at the GBT focus limit us to a 3 meter by 3 meter array. This makes the current receiver, which has excellent system temperature, too large for even a 2x2 array. Several alternative designs were considered, but the best seemed to be the short backfire antenna (SBA), which is extremely compact and has a relatively narrow beam pattern. The SBA also has the advantage of using dipoles as exciting/receiving elements, whereas horn antennas require a fairly large and heavy OMT (Orthomode Transducer) to transition radiation from the antenna to modes that can properly excite dipoles. For a 9 element array of horn antennas, 9 OMTs would be required, whose size and bulk could quickly put us over our space and weight allowances. The original SBA design that was modeled (Fig. 1) was a scaled up version of a receiver already used at the GBT for higher frequencies (scaled up to match our lower frequency). Figure 2 shows the spill temperature for this antenna design, calculated for several frequencies.

Figure 1: Original SBA design. The black cylinder is dielectric. The metallic cylinder beneath the dipoles is the cryogenic housing for the electronics. The diameter is 76 cm.

Figure 2: Spill temperature in Kelvin as a function of frequency in MHz for the original SBA design. The two circles at each frequency represent the two dipoles of the SBA. The temperature is decent at 758 MHz, but it is very high at 700 MHz and 882 MHz. The value at 945 MHz is high, but RFI in that portion of the band makes it practically unusable.

The design has gone through many iterations, but the current best design (Fig. 3) has a quarter-wavelength outer corrugation, which has the effect of narrowing the beam pattern [Kooi, Leong, Yeo (1979)]. To minimize the spill temperature over the bandwidth, the diameter was increased to 1.04 meters and then squared off at the edges to fit in a 1 meter by 1 meter box. As shown in [Lee, Yang, et al. (2006)], squaring off the edges had only a small effect on the beam pattern. Fig. 4 shows the calculated spill temperature of the new SBA design, and Fig. 5 shows the proposed 3x3 array of receivers.

Figure 3: Current SBA design, with squared off corrugated rim. The length and width is 1 meter.

Figure 4: Spill temperatures for the current design of the SBA.

Figure 5: Visualization of proposed 3x3 array to be installed at the GBT focal plane.

My adviser, Professor Timbie, and I believe the spill temperature is near optimal values. The only remaining difficulty is tweaking the position of the smaller sub-reflectors to minimize power loss in our desired frequency band (optimizing S-parameters). To minimize thermal noise in the wiring, the tip of the cryogenically cooled chamber for the electronic amplifier extends almost to the dipole antennas, and this has led to some difficulty in optimizing the S-parameters. We are confident that this will be overcome, perhaps with a slight sacrifice in spill temperature.

Conclusions and Future Work This work is being conducted by a small international collaboration. Once the S-parameters are optimized, construction of the antenna and cryogenic system will begin in Taiwan. The completed antenna will then be shipped to the United States, where the amplifier will be integrated. We will then install a single prototype receiver on the GBT to test its noise temperature and beam pattern. The new array will be just part of our ongoing efforts in 21 cm intensity mapping. Our group is also pursuing intensity mapping in a higher redshift range with another GBT receiver and lower redshift intensity mapping using the Parkes telescope in Australia. We hope that our work will inspire enthusiasm and funding in the burgeoning field of 21 cm cosmology.

Looking farther into the future, extending the redshift range far into the cosmological dark ages (before the epoch of reionization) will eventually require space-based antennas. This is because of both the RFI difficulties on Earth and the fact that the ionosphere reflects highly redshifted 21 cm radiation above z~60. A neat solution to both these problems is a proposed 21 cm observing array on the far side of the moon [http://lunar.colorado.edu/lowfreq/index.php].

References

F. B. Abdalla and S. Rawlings, MNRAS 360, 27 (2005), arXiv:astro-ph/0411342.

Chang, Pen, Bandura, Peterson, Hydrogen Intensity Mapping at redshift 0.8, (Nature, July 22, 2010).

Kooi, Leong, Yeo, Dipole-excited Short Backfire Antenna with Corrugated Rim, (Electronics Letters, July 5, 1979).

Lee, Yang, Schmeider, El-Ghazaly, Fathy, Suleiman, Rodeer, Zilhman, Design and Development of an Integrated Twin Feed Horn for a DBS Reflector Antenna, (IEEE Transactions on Antennas and Propagation, Vol. 54, No. 8, August 2006)

Y. Mao, M. Tegmark, M. McQuinn, M. Zaldarriaga, and O. Zahn, Phys. Rev. D 78, 023529 (2008), 0802.1710.

H.-J. Seo, S. Dodelson, J. Marriner, D. Mcginnis, A. Stebbins, C. Stoughton, and A. Vallinotto, Astrophys. J. 721, 164 (2010), 0910.5007.

S. Wyithe, A. Loeb, and P. Geil, ArXiv e-prints (2007), 0709.2955.

http://lunar.colorado.edu/lowfreq/index.php

Teaching Special Relativity: Developing a Software Aid for Spacetime Diagrams

Randy Wolfmeyer

Department of Chemistry and Engineering Physics, University of Wisconsin-Platteville

John Wood Community College1

Abstract Special relativity is an important subject for the space sciences, but it is often difficult for students to understand. Spacetime diagrams provide a graphical tool to aid the conceptual understanding of relativity. The SpaceTime applet is designed to aid students in drawing spacetime diagrams and in setting up diagrams for specific problems. A lab activity has also been developed for use with the applet in studying spacetime diagrams.

Introduction Special relativity is considered a difficult subject by many students – it is often not well understood even by advanced graduate students4. It involves concepts that we simply do not experience in our day-to-day lives, and it contradicts our common-sense notions of time and space. It was not until the development of Maxwell's equations and the negative result of the Michelson-Morley experiment in 1887 that there was even a hint of this more complex nature of the universe, and it was another 18 years until Einstein was able to make sense of the results. It is quite understandable that students still struggle with this subject.

Fortunately, Einstein worked out the theory of special relativity from a simple set of postulates, with the seemingly fantastic results of time dilation and length contraction following as inevitable logical consequences. Traditional textbooks, however, often present special relativity as a set of confusing equations, with little emphasis on giving students a conceptual understanding of the reasoning behind them – instead, students rely on plug-and-chug techniques to get to an answer that the textbook says is correct. Students have difficulty understanding the results, or simply do not believe them.

Special relativity is a prerequisite topic of study for many space-related sciences, especially cosmology and many fields of astronomy, and it even finds its way into precision calculations for spacecraft, not to mention exploration of far-future forms of propulsion – all components of NASA's goals for science missions, especially those related to astrophysics. Many students have an interest in the topic, but the traditional way of teaching it causes many to become disinterested. Teaching special relativity so that students gain an understanding of the concepts and can follow the logical implications can increase student interest in further study of the space sciences.
Introductory courses in physics that have successfully introduced special relativity have found that students are genuinely excited to learn and understand such a well- known but seemingly complex subject2.

1This work is supported by the Wisconsin Space Grant Consortium through the Higher Education Incentives grant.

Basic Principles of Special Relativity. Einstein's Theory of Special Relativity has its roots in the development of Maxwell's Equations governing electricity and magnetism. The four equations lead to a wave solution that allows electromagnetic energy to propagate as a wave, with a predicted wave velocity of c = 3 x 10^8 m/s, the speed of light. But a conundrum presented itself: the electromagnetic waves predicted by Maxwell's Equations made no reference to a physical medium with respect to which one could measure the speed of light. The ether was proposed as the medium for the electromagnetic waves, with Maxwell's Equations being complete in a reference frame at rest with respect to the ether, but requiring additional correcting terms in other frames moving with respect to the ether. However, the Michelson-Morley experiment of 1887 failed to detect any change in the speed of light due to the Earth's motion through the ether.

Einstein proposed a solution with his Theory of Special Relativity in 19051. His theory is contained in two postulates: 1. The laws of physics are the same in all inertial (non-accelerating) reference frames, and 2. The speed of light is measured to be the same by all observers in inertial reference frames. The first postulate assumes that Maxwell's equations must be valid in all inertial reference frames without frame-specific correction terms. The second postulate is essentially a logical conclusion of the first: no matter the relative velocity, all observers will observe light moving at the same speed. This seems absurd. According to Newtonian physics, two observers moving relative to one another will obviously measure different speeds for a third object. For example, what happens if you are traveling at 90% of the speed of light and you turn on your headlights?
Einstein's answer is that you will record the light traveling at the same 3 x 10^8 m/s that a stationary observer would record. The resolution of this paradox is Einstein's recognition that time is not universal. Two observers moving relative to one another will not measure the same time intervals between two events. This also leads to the concept of non-simultaneity: two observers may not agree on whether or not two events occur at the same time.

The Lorentz transformation equations,

t' = γ(t − vx/c^2)    x' = γ(x − vt)    where γ = 1/√(1 − v^2/c^2),

give us the means to mathematically translate the time and spatial coordinates of an event from one frame to another. In these equations, x and t are the position and time of an event in one frame, while x' and t' correspond to the position and time of the same event as observed from another frame moving at speed v with respect to the first. The term γ is known as the Lorentz factor and appears in many special relativity equations. These equations provide a quantitative way to calculate the observations made by different observers. From the Lorentz transformation equations, we can also derive the equations for time dilation (time slows down for objects observed to move at relativistic speeds) and length contraction (objects moving at relativistic speeds are shortened along their direction of motion):

Time Dilation: ΔT = γΔT0    Length Contraction: L = L0/γ

where ΔT0 is the proper time between two events, ΔT is the dilated time, L0 is the rest length of an object, and L is the contracted length. This is the point at which many introductory textbooks end their discussion of special relativity. The equations are not difficult to manipulate mathematically, but students often find them confusing. Given a specific problem they have difficulty assigning values to the variables in the equations, or confuse the signs of the relative velocities.
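The headlight question has a precise answer through the relativistic velocity-addition formula u' = (u + v)/(1 + uv/c^2), a standard consequence of the Lorentz transformation that is not derived in the text. With speeds in units of c:

```python
def add_velocities(u, v):
    """Relativistic velocity addition, with speeds in units of c."""
    return (u + v) / (1.0 + u * v)

# A ship at 0.9c turns on its headlights (light travels at 1.0c):
print(add_velocities(0.9, 1.0))  # 1.0: still exactly the speed of light
# Two large sub-light speeds never combine to exceed c:
print(add_velocities(0.9, 0.9))  # ~0.9945, not 1.8
```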
Once they have an answer, they find it difficult to verify that it is correct, because the scenarios in special relativity are outside the realm of our normal experiences and often non-intuitive, so that it is difficult to judge whether an answer “makes sense”. The students have difficulty gaining a conceptual understanding of what the equations physically represent.

Spacetime diagrams. A spacetime diagram is a useful tool that can help students gain a conceptual understanding of special relativity. Used properly, spacetime diagrams provide a graphical illustration of the non-Euclidean geometry involved and a way to solve complex problems without relying solely on quantitative equations. By analogy, a well drawn spacetime diagram serves a similar purpose to the free-body diagram in a complex mechanics problem; it allows one to more easily visualize the geometry and arrive at an intuitive understanding of the solution before crunching numbers.

Figure 1: Motion Diagram

Spacetime diagrams provide a simplified model of relativistic motion in one space dimension, usually oriented along the x-axis, with a set of events that occur at single points in space and single instants of time. A spacetime diagram is essentially a motion diagram, like a filmstrip of a moving ball with each frame an instant of time and its position in each frame marked by an event. Connecting the events together, we construct the “world-line” of the moving ball as shown in Figure 1. Since problems in special relativity involve velocities close to the speed of light, we tend to choose units that simplify the diagrams. A common convention is to measure position in light-seconds (ls), the distance that light travels in 1 second (1 ls = 3 x 10^8 m). This has the advantage that we can illustrate the speed of light on the diagram as a line with a slope of 1 ls/s, as shown in Figure 2.
So far we have considered the motion from one reference frame, but special relativity depends on the observations from different inertial reference frames. To construct a two-observer spacetime diagram, we can place the origin of a coordinate system for an observer on the moving object. Since this observer assumes that their reference frame is stationary (there are no special reference frames and all inertial motion is relative), we place the spatial origin of their coordinate system on the world-line of the moving object and label the world-line the t' axis. By a convention established by Tom Moore2, we call the reference frame moving in the +x direction the Other Frame, and the frame moving in the -x direction (with respect to the Other Frame) the Home Frame. The faster the Other Frame moves with respect to the Home Frame, the more the slope of the t' axis rotates toward the slope of the speed of light. Due to the Lorentz transformation equations, the scaling of time along the t' axis is not the same as on the t axis. One can use the time dilation equation to determine the appropriate scaling along the t' axis. Assume an event occurs at t' = 1 s at the origin of the Other Frame (along the t' axis). Use the time dilation equation to determine the corresponding time at which the event occurs in the Home Frame, then place a tick-mark for t' = 1 s along the t' axis at the calculated Home Frame time. Figure 3 illustrates the procedure for the Other Frame moving at 3/5 the speed of light. To keep the speed of light constant in both the Home Frame and the Other Frame (the 2nd postulate of special relativity), the x' axis also rotates toward the speed of light line, as shown in Figure 4. The x' axis can be calibrated in the same way as the t' axis, using the length contraction equation.
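The tick-mark placement just described reduces to two lines of arithmetic. A minimal sketch, working in seconds and light-seconds so that c = 1, with β = v/c:

```python
import math

def gamma(beta):
    """Lorentz factor in terms of beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

def t_prime_tick(n, beta):
    """Home Frame coordinates (t, x) of the tick t' = n seconds,
    which lies on the t' axis (the world-line x = beta * t)."""
    t = gamma(beta) * n    # time dilation: the tick lands later in t
    x = beta * t           # the tick sits on the t' axis itself
    return t, x

t, x = t_prime_tick(1, 3/5)   # t = 1.25 s, x = 0.75 ls
```

For β = 3/5 the Lorentz factor is 1.25, so the t' = 1 s tick is drawn at Home Frame coordinates (t = 1.25 s, x = 0.75 ls).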
The rotation of the coordinate axes is similar to the transformation for the rotation of a coordinate system about the origin in Euclidean geometry; however, the rotation in special relativity is non-Euclidean. In normal Euclidean geometry, the distance between two points is defined by d = √(Δx² + Δy²), and this distance is preserved under transformations between coordinate systems. In space-time, a quantity called the spacetime interval is defined as the time between two events measured by an observer in an inertial reference frame present at both events. The spacetime interval is calculated from the spacetime coordinates of two events by an equation similar to that for distance: Δs = √(Δt² − Δx²). The spacetime interval is preserved under transformations from the Home Frame to the Other Frame and is thus an invariant quantity. All observers in inertial reference frames will measure the same value, and this invariance defines many of the geometric properties of spacetime. By creating a single diagram with two coordinate systems, one can graphically see how the space-time coordinates of an event can be measured in two different reference frames, as shown in Figure 5. Lines are drawn parallel to the axes to connect the event to the coordinate values on the axes. This is easy to see in the Home Frame, as the axes are perpendicular. The process is a bit more complicated in the Other Frame, as the axes are not perpendicular, leading to a coordinate grid that is skewed at an angle. By comparing the coordinate values in each reference frame, an accurately drawn space-time diagram should give the same results as using the Lorentz transformation equations. The space-time diagram is a very useful tool for graphically setting up problems in special relativity, and can allow students to see conceptual relationships that would otherwise evade them.
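The invariance of the spacetime interval under the Lorentz transformation is easy to verify numerically. In the sketch below (units with c = 1; the two events and the frame speed are arbitrary illustrative choices), the sign of the squared interval also tells us whether a light-or-slower signal could connect the events:

```python
import math

def lorentz(t, x, beta):
    """Boost event (t, x) into a frame moving at speed beta (units of c)."""
    g = 1.0 / math.sqrt(1.0 - beta**2)
    return g * (t - beta * x), g * (x - beta * t)

def interval_sq(dt, dx):
    """Squared spacetime interval, (Δs)² = (Δt)² − (Δx)², with c = 1."""
    return dt**2 - dx**2

# Events A and B in the Home Frame (t in seconds, x in light-seconds)
tA, xA = 0.0, 0.0
tB, xB = 2.0, 1.0
s2_home = interval_sq(tB - tA, xB - xA)        # 2^2 - 1^2 = 3.0

beta = 0.6
tA2, xA2 = lorentz(tA, xA, beta)
tB2, xB2 = lorentz(tB, xB, beta)
s2_other = interval_sq(tB2 - tA2, xB2 - xA2)   # also 3.0: invariant

# A squared interval >= 0 means a signal traveling at or below c
# can connect the events, so they can be causally linked.
causally_linkable = s2_home >= 0               # True here
```

However skewed the Other Frame's coordinates become, the computed Δs² agrees between the frames, which is exactly the property the diagram's hyperbolic geometry encodes.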
It can also provide a means for students to evaluate the plausibility of answers obtained from the Lorentz transformation equations. Experience teaching special relativity with spacetime diagrams has shown that students can interpret a well-drawn space-time diagram, but they have difficulty setting up their own diagrams for a given problem. The students need assistance with the complicated details of setting up the coordinate axes, calibrating the axes, and reading the space-time coordinates of an event from a space-time diagram. The goal of this project was to study student understanding of special relativity, and to develop an interactive software tool that helps students create spacetime diagrams and illustrates how spacetime diagrams can be used to set up and solve problems in special relativity. Previous research. While considerable research has been done on how students understand topics in introductory physics courses (forces, motion, electricity, magnetism), substantially less research has been done on the more advanced topics in physics, such as special relativity. There are only a handful of research papers on the subject. Roberto Salgado3 has done research on visualizing proper time in special relativity, and has developed a number of graphical techniques for teaching it. Rachel Scherr4 wrote her dissertation and a series of papers on student understanding of time in special relativity and non-simultaneity. She used a set of questions related to the simultaneity of events in written exams and in-class problems, and conducted one-hour interviews to probe student thinking in greater depth.
Some of her results were disheartening: "After instruction, 2/3 of physics undergraduates, and 1/3 of graduate students in physics are unable to apply the construct of a reference frame in determining whether or not two events are simultaneous." "It is not surprising that students, even at advanced levels, do not fully understand the implications of the invariance of the speed of light. What is surprising is that most students apparently fail to recognize even the basic issues that are being addressed." Procedure/Methods Student understanding and conceptual difficulties in special relativity. The first step in this project was to study student understanding of special relativity, specifically the use of space-time diagrams, so that tools could be developed to best address student needs. This process involved a variety of techniques, including review of exam results, interviews with students, and lab activity/problem-solving sessions with students. Exam results were analyzed from two semesters of introductory physics courses, one each at Marquette University and John Wood Community College, and two semesters of a modern physics course at the University of Wisconsin-Platteville. Each course contained a 2-3 week unit on special relativity, including space-time diagrams, with a test after the unit and questions on the cumulative final exam. Each exam had at least one question requiring the students to set up or interpret a space-time diagram. Interviews were conducted with a number of student volunteers following one semester of Modern Physics at UW-Platteville. The students were presented with problems from special relativity and were asked questions about the problem-solving techniques they used, along with follow-up questions to determine where there might be conceptual misunderstandings.
During two courses, one section of Modern Physics at UW-Platteville and one at John Wood Community College, the students used a lab activity, developed from the earlier interview responses and exam analysis, to gain practice with drawing space-time diagrams. These results were also studied to add to the data on student understanding of special relativity. SpaceTime applet. To assist students in creating spacetime diagrams and gaining a graphical understanding of the Lorentz transformation equations, the SpaceTime Applet was developed in Java. This applet was then used by students as part of the lab activity at John Wood Community College. Results Student understanding of special relativity. From the interviews, exam analysis, and problem-solving sessions, a number of problem areas were identified within the study of special relativity. The following items were consistently marked wrong on exams, or were areas in which students expressed a need for additional help. Assignment of variables in the Lorentz transformation, time dilation, and length contraction equations. Given a problem with a numerical solution, students had difficulty assigning the values from the problem statement to the appropriate reference frame. Especially confusing were the determination of proper time in time dilation problems and of rest length in length contraction problems. The confusion also adds to the conceptual difficulty in understanding that there are no special reference frames and that all inertial reference frames obey the same laws of physics: the terms proper time and rest length seem to imply that the values measured within one frame are more "right" than those in another. Absolute time and non-simultaneity. Students frequently assume that the results of the Lorentz transformations are due to light lag – that events appear to be at different times to different observers due to the time for a signal to travel from an event to an observer.
They often try to calculate this lag in problems, which can in some situations give the correct numerical answer for the wrong reasons. Equivalence of inertial reference frames. Historically this was the source of many proposed paradoxes that attempted to disprove special relativity, and it still causes conceptual difficulty for students. Either the students do not apply the principle of relativity and assume special frames, or they misapply the principle to assume all inertial reference frames will see the same results, or they assume the effects negate one another. They fail to make the connection that the apparent paradox is resolved by the non-simultaneity of events. Causal links. Two events are causally linked if one event can cause the other. If a signal between the two events would have to travel faster than the speed of light, they cannot be causally linked and thus can have no influence on one another. Students have difficulty identifying events that can be causally linked because the relationship is determined by both the time and the distance between the two events. Each of the classes studied also used space-time diagrams as a graphical tool for studying special relativity, so a number of difficulties with space-time diagrams were also addressed: Axis orientation. Students do not have difficulty with the t' axis, but getting the correct angle for the x' axis causes more confusion. Many students fail to understand that the reasoning behind the rotation of the axes is to preserve the invariance of the speed of light. Axis calibration. To use a space-time diagram to compare numerical results with the Lorentz transformation equations, the axes must be calibrated. The students could often comprehend the concept of time dilation causing the tic marks on the axis to change, but had difficulty applying the principle to draw their own spacetime diagrams. Interpreting the space-time diagram.
Students often fail to understand that the coordinate grid of the Other Frame is transformed in the same way as the x' and t' axes, creating a skewed or diamond-shaped grid pattern. This mistake usually manifests when students must interpret a space-time diagram; consequently, many students have difficulty determining the sequence of events in the Other Frame. Space-time interval. The spacetime interval is analogous to distance in Euclidean space, but is hyperbolic, being a difference of squares rather than a sum. This makes determining the space-time interval from a space-time diagram a non-intuitive exercise; two events that are farther apart on a diagram can have a smaller space-time interval than two events that are close to each other. SpaceTime applet. As a result of the study of student difficulties and misconceptions in special relativity and space-time diagrams, the SpaceTime applet was developed to assist students in drawing space-time diagrams and gaining a graphical understanding of coordinate transformations and the non-Euclidean geometry of space-time. The SpaceTime applet allows the user to draw a basic spacetime diagram with up to 5 spacetime events, to determine the coordinates in both the Home Frame and the Other Frame, and to calculate the spacetime interval between two spacetime events. Figure 6 shows a screenshot of the applet, with the key areas of the user interface indicated. Figure 6: SpaceTime Applet - User Interface A) Spacetime Diagram: This is where the two-observer spacetime diagram is drawn by the applet. Elements within the diagram can be selected and moved around by the user. B) Beta Slider: Slider control to set the value of beta (β = v/c), the relative speed of the Other Frame with respect to the Home Frame. It also calculates and displays the value of gamma, the Lorentz factor.
C) Spacetime Event List: Displays the coordinates of the spacetime events that can be displayed within the diagram, allows the user to change the coordinates, and sets gridlines for the display. D) Zoom Slider: Adjusts the scale of the diagram. Moving the slider upward zooms into the diagram while moving it downward zooms out, allowing the user to adjust the scale to fit a particular problem. E) Spacetime Interval Calculator: The button activates the SpaceTime Interval measurement tool. Uses for the SpaceTime Applet. The SpaceTime Applet addresses a number of issues for students in understanding space-time diagrams. Correct orientation of the t' and x' axes: The user can change the relative speed of the Other Frame by adjusting the value of the Beta slider (B) and see how the slopes of the t' and x' axes both tilt toward the diagonal line indicating the speed of light. Calibration of the scales on the t' and x' axes: By adjusting the Beta slider (B) and changing the relative speed of the Other Frame, the user can see how the scaling on both the t' and x' axes changes. This also provides a basic understanding of time dilation, as the user can see the spacing between the t' tic-marks increase as the relative speed approaches the speed of light (β = 1). Space-time coordinates of an event: The user can add up to 5 events to the space-time diagram (labeled A – F) by clicking on the checkbox next to each event in the SpaceTime Event List (C). The events default to the origin at (t=0, x=0), but they can be altered either by clicking and dragging the event within the space-time diagram drawing area (A), or by entering new coordinates in the Event List (C) and pressing Enter on the keyboard. When an event is selected in the drawing area (A), a pair of dashed lines connects the event to the coordinate axes to illustrate how the coordinates are determined.
Space-time coordinates in the Other Frame: The space-time coordinates of an event in both the Home Frame and the Other Frame are listed in the SpaceTime Event List (C). By selecting the Other Frame radio button at the top of the list, the user can change the Other Frame coordinates. When the Other Frame option is chosen and an event selected, the dashed lines are drawn parallel to the t' and x' axes to show how the coordinates are determined in the Other Frame. Space-time interval: Pressing the "Measure Δs" button (E) activates the space-time interval measuring tool, which can be used within the space-time diagram area (A) to calculate the space-time interval, Δs, between two points in space-time. The tool is represented as a dimension line between two circular end points which can be dragged with the mouse to select different points. The end points will "snap" to an event in the diagram. Coordinate lines are drawn to the axes for the currently selected frame in the Event List (C) to indicate the time and space intervals between the two selected points in that reference frame. Choosing a different frame causes the coordinate lines to switch to the coordinate axes for that frame. In the region next to the "Measure Δs" button (E), the value of the time interval is indicated for each reference frame (Δt and Δt'), as well as the value of the space interval (Δx and Δx'). The calculation of the space-time interval is shown for each reference frame (Δs and Δs'), and it should be clear that the two values are always the same, no matter which two points in space-time are chosen or what the relative speed between the two reference frames is. A lab activity utilizing the SpaceTime Applet was developed to introduce students to space-time diagrams and is available online.
This activity was designed either to reinforce lecture material on space-time diagrams or to serve as a standalone activity illustrating another useful tool for understanding special relativity. All relevant terms are defined in the lab activity, and the students are allowed to discover some of the geometrical properties of space-time through their own experimentation. The activity also demonstrates how to use the applet to set up a complex problem in special relativity involving non-simultaneity of events. Conclusions The SpaceTime Applet has been a successful tool for aiding students in understanding space-time diagrams and the geometric relationships of special relativity. Students have responded favorably to both the application and the lab activity that guides them in using the tool to set up space-time diagrams to solve special relativity problems. The SpaceTime Applet is a work in progress. Initial testing with both students and other instructors has indicated a number of bugs that should be fixed, several items that could use improvement, and features that should be added to make the application more useful as a student and teaching aid. Near future work. Following is a list of items for the next few months. User Interface. The current user interface is sometimes non-intuitive for users. Additional testing and development with students and instructors must be completed to determine the most effective user interface design. Currently the user can only interact with events and the space-time interval measurement within the space-time diagram itself. Future development would allow the user to adjust the slope of the t' or x' axes by clicking and dragging the axes. The user should also be able to rescale the diagram and move the origin of the diagram with the mouse. Event List. Currently the user can only enter 5 events, but it is easy to conceive of problems or demonstrations that would require more than 5 events.
Ideally the applet would be able to handle any number of events. This requires a different method for adding events and listing their coordinates. An idea under development is to add events with a button and to show the space-time coordinates for the event currently selected within the space-time diagram. A complete list of events and space-time coordinates would be provided in a pop-up window after clicking a different button. This should also allow events to be given user-selected labels to match the descriptions in different problems or scenarios. Gridlines. There is currently a glitch in the drawing of gridlines for the Other Frame: only gridlines that intersect the tic marks on the t' and x' axes are drawn. When the Beta slider approaches β = 1, the coordinate grids by design become very skewed, but the result appears odd because so few gridlines appear on the diagram. Long term development. The following is a list of items to develop over the next year. Units. The applet currently sets no specific units, assuming that the time and space coordinates use compatible units, for example seconds for time and light-seconds for space. Students may use textbooks that adopt a specific system of units, or may wish to keep SI units. The applet should have an easy option to adjust the units accordingly and have all calculations done in the chosen units. World-lines. In addition to events that exist at a single instant of time, many problems in special relativity involve objects that persist over time and are better represented by world-lines. Users would have an interface button to add a world-line with a specific speed, and therefore slope, with respect to the Home Frame or the Other Frame. This world-line would be adjustable within the space-time diagram itself, and would also indicate intersections with other world-lines or the coordinate axes. Space-time movies.
An advanced feature would allow users to set up events within the space-time diagram and then view a small "movie" that illustrates the sequence of events as viewed from each reference frame, essentially taking a slice of the diagram for each time step. Events and objects would be represented in the movie by built-in icons for planets, the Earth, rockets, trains, cars, people, light pulses, etc. These movies would be especially useful for illustrating the concepts of time dilation, length contraction, and non-simultaneity of events by isolating the viewpoint of each reference frame. Teaching mode and demonstration mode. The applet in its current form is very useful for students as a demonstration and as an aid for creating space-time diagrams for problems in special relativity. It has been proposed to develop a teaching mode, or a version for labs and activities, that would allow students to develop their skills in reading information from the diagram and learn some of the mathematical techniques without having the results calculated and displayed by the applet. This mode could be toggled by an instructor to allow the students to use all of the features after completing a set of exercises to demonstrate proficiency. If you would like to try the SpaceTime Applet, it is available at: http://www.uwplatt.edu/~evensenh/SpacetimeLab/SpaceTime/SpaceTime.html The spacetime diagrams lab activity is available at: http://www.uwplatt.edu/~evensenh/SpacetimeLab/SpacetimeLab.pdf

References

[1] A. Einstein, On the Electrodynamics of Moving Bodies, Annalen der Physik, 17:891, 1905

[2] T. Moore, Six Ideas That Shaped Physics, Unit R: Laws of Physics are Frame-Independent, 2nd Edition, McGraw-Hill

[3] R. Salgado, Physics Teacher (Indian Physical Society), v46, pp. 132-143 (October-December 2004)

[4] R.E. Scherr, P.S. Shaffer, S. Vokos, Physics Education Research, American Journal of Physics Supplement, 69, S24-35 (2001)

Observing Convection in Microgravity

Matt Heer

East Troy High School Physics

Abstract. The purpose of the experiment is to observe the movement, or lack thereof, of heat in an enclosed space under multiple gravitational accelerations. An electric heat source will be placed in the center of an insulated box, and temperature probes, three above and three below the heating element at equal spacing, will record temperature variations every hundredth of a second.

It will be placed on a KC-135 airplane that flies a series of parabolas providing an acceleration of 0 G (microgravity), similar to what astronauts experience on the space station, and 2 G on the return climb.

On Earth, heat is dispersed unevenly because cold air is denser and is therefore pulled downward by gravity. In a microgravity environment there is nothing to cause this shift. During the microgravity phase of our experiment, we expect to initially see symmetry in the temperatures (i.e., temperature probes C and D read the same, B and E the same, and A and F the same; see Illustration 1). As the plane goes into its 2 G phase, we expect to see heat rise (i.e., probe A hottest, B next hottest, and continuing with that pattern, with probe F coldest).

Background This experiment tests the dispersal of heat under different levels of gravity. In order to understand this, the theory of convection needs to be understood. Convection is the transfer of heat by the bulk motion of a fluid: denser cold air sinks and displaces the less dense hot air upward, so the hot air ends up above the cold air. Convection relies on gravity because gravity is what separates the air based on density.
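The density difference that drives convection can be estimated from the ideal gas law, ρ = PM/(RT). The sketch below uses standard pressure and two illustrative temperatures; the 70 °C value near the heater is an assumption, not a measurement from the experiment:

```python
# Density of dry air from the ideal gas law: rho = P * M / (R * T)
R = 8.314       # J/(mol K), gas constant
M = 0.02897     # kg/mol, molar mass of dry air
P = 101325.0    # Pa, standard atmospheric pressure

def air_density(T_kelvin):
    return P * M / (R * T_kelvin)

rho_cold = air_density(293.15)   # air at 20 C, about 1.20 kg/m^3
rho_hot  = air_density(343.15)   # air at an assumed 70 C near the heater

# Under gravity the denser cold air sinks beneath the hot air;
# at ~0 G this density difference produces no buoyant force at all.
```

The roughly 15% density difference is what gravity acts on; remove gravity and the stratification mechanism disappears, which is exactly what the microgravity phase is meant to show.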

Another method of heat transfer is radiation. With radiation, heat energy is directly transported through space by electromagnetic waves. This type of heating, unlike convection, is not affected by gravity; that's why heat from the sun can be felt on Earth. It would be possible, in a vacuum, to test the effects that heat transferred via radiation would have on our readings from the temperature probes. However, we don't have the means at this time. It's our hypothesis, though, that the effect will be negligible.

Equipment Design Overview. Our heat dispersal experiment is designed with simplicity, effectiveness, and space efficiency in mind. There will be a metal frame surrounding our main insulating box that will strap to the bottom of the plane. The main insulating box will be attached to the frame and self-contained. All internal components of the box will be secured and no hazardous materials will be used. There will be a variac to control the amount of current flowing to the nichrome heating element. There will be a switch to turn on/off the flow of current from the power source to the variac which goes directly to the heating element. It will be fused at 10 amps as a backup fail-safe. A laptop will be strapped to the top to take in data from our data acquisition unit (DAQ). The DAQ will have 6 thermocouples attached to it leading into the insulated box that will take temperature data. It will be logged in the computer for later analysis.

Heating Element Set-Up. The heat source will be set up in the very center of the box. It will be a piece of nichrome wire (commonly found in toasters and electric stoves) wound in a tight spiral that is controlled by a variac and fused at a maximum of 10 amps. See illustration 2 below for a view of how this is set up inside the box.

Illustration 2
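The 10-amp fuse on the heater circuit can be sanity-checked with Ohm's law. The resistance and variac voltage below are hypothetical; the report does not give the element's actual resistance or the dial setting:

```python
# Hypothetical values: the report does not state the element's
# resistance or the variac output voltage.
R_ELEMENT = 10.0   # ohms, assumed resistance of the nichrome coil
V_VARIAC  = 60.0   # volts, assumed variac output
FUSE_AMPS = 10.0   # the fuse rating stated in the design

current = V_VARIAC / R_ELEMENT      # Ohm's law: 6 A, under the fuse
power   = V_VARIAC**2 / R_ELEMENT   # 360 W dissipated as heat

# The variac setting that would blow the fuse for this resistance:
v_blow = FUSE_AMPS * R_ELEMENT      # 100 V
```

The same arithmetic shows why a shortened nichrome coil is dangerous: halving the resistance doubles the current at a fixed dial setting, which is one of the failure modes considered in the Results section.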

Main Insulating Box. The main box for our experiment will be made of a rigid insulating foam material. The outer dimensions of it will be approximately 20”x20”x20”. The inner dimensions will be 6”x6”x12” with our electronic equipment on its surface. The material we are using is residential construction insulating foam commonly used in between the studs of a house.

Stabilizing Frame. The stabilizing frame will be made out of aluminum L-brackets. It will be secured together with 3/8” diameter grade 5 nuts and bolts, and will be secured via ratcheting straps to the floor of the plane. Any exposed edges of the frame will be covered with pipe-insulating foam to protect occupants of the plane from cutting themselves during the weightless phases. See Illustration 3 on the next page.

Illustration 3

Temperature Probes. Temperature probes will be distributed evenly, every 2 inches from the heating source at the center of the box. They will be attached to a data logger on the outside of the box and run through the material to the box’s interior to collect data. To keep them stabilized, stainless steel TIG welding wire will be taped to them so they will not move during the 2 G phase of the flight or during any unexpected bumps or turbulence.

Electronics. The data logger will record the change in temperature over time at each probe’s distance from the heater. It will run off batteries for the duration of the flight and be attached to a laptop strapped to the top of the frame. The computer will run off its own internal battery.

Procedure The experiment was run by a laptop which was velcroed and strapped to the top of the frame. A data acquisition device (DAQ) was connected via USB cable to the laptop. Six thermocouples were run into the enclosed insulated box and were programmed to take 100 measurements per second. The heating element was switched on when the plane was in its ‘Zero G’ phase, accelerating downward at approximately 9.80 m/s^2. The experiment was left to run for the entirety of the flight's approximately 25 parabolas of alternating 0 G and 2 G phases.
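The data-collection program itself is not described in the paper; the sketch below shows one minimal way such a logging loop could be structured. The probe names, file name, and the read_thermocouples placeholder are all assumptions, not details of the actual program:

```python
import csv

PROBES = ["A", "B", "C", "D", "E", "F"]   # assumed probe labels
SAMPLE_HZ = 100                           # 100 measurements per second

def read_thermocouples():
    """Placeholder for the DAQ driver call: one reading per probe.
    The real program's hardware API is not described in the report."""
    return [20.0] * len(PROBES)

def log_temperatures(path, n_samples):
    """Write a timestamped row of all six probe readings per sample."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t_sec"] + PROBES)
        for i in range(n_samples):
            writer.writerow([i / SAMPLE_HZ] + read_thermocouples())

log_temperatures("flight_log.csv", n_samples=500)   # 5 s of data
```

Logging to a plain CSV keeps the flight data readable afterward regardless of what happens to the acquisition software, which is relevant given the software glitch described in the Results section.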

Results The experiment was ground tested (see Graph 1) under 1 G, resting normally on a lab table for a period of 15 minutes. Convection is visible in the fact that the higher-elevation probes (A, B, and C, with A the highest) gained temperature faster than the probes underneath the heating element. There is some ‘noise’ in the data that we believe was caused by static electricity on the Styrofoam interfering with our thermocouples.

Graph 1

Our zero G results were disappointing due to two unforeseen events. On the first flight, a computer glitch froze the data collection. We (East Troy High School) were working with a particular team member from UW Madison who wrote and designed the program for collecting our data. He was not on that flight, and the team members who were aboard to operate our seemingly simple experiment did not know how to fix the issue. Upon landing the glitch was fixed; the experiment was again ground tested, showed good data, and was loaded with our direct contact from UW Madison for execution in midair. Upon turning on the experiment, the fuse protecting the variac blew, and another was not available on the plane. The cause of the blown fuse has not been confirmed. We believe either that the variac was turned too high (there was a sticker on the dial showing where it should be set), that the nichrome wire somehow touched itself, making it effectively shorter and thus lowering its resistance and drawing more current, or that there was a short somewhere that we are missing. These two unfortunate events were very disappointing to both our team at ETHS and UW Madison; we feel like we let everyone down, and we believe we could have better prepared.

Conclusion Although we were not able to collect data in a microgravity environment, we learned a valuable lesson about being prepared for anything that could go wrong when conducting experiments in space or microgravity, we gained a wealth of knowledge about how science works in the real world, and we turned some impressionable youth on to the aerospace industry. Two students are planning on starting their own Zero G teams at their respective universities, Duke and UW-La Crosse, in the fall. One has decided that aerospace engineering will now be her course of study when she gets to college in the fall. Two are planning on majoring in computer science and would like to intern for NASA during their college careers.

We still have options and plans for this experiment in the future: A) Next year’s incoming group of seniors is looking into the HUNCH program (High School Students Uniting with NASA to Create Hardware). If we end up going that route, we could ask to bring this fixed experiment along as a peripheral and gather data. B) We could remedy the experiment, and if UW Madison applies and is selected next year, we could send it along with them again to try to collect good data.

If we end up sending this experiment on the plane again, we will make sure that the UW Madison team member that worked with us to develop our experiment is on the plane to assure we get good data, and we will send up an extra fuse(s) in case one burns up unexpectedly.

A Simplified Model for Flagellar Motion

Kelsey Meinerz, John Karkheck

Department of Physics, Marquette University, Milwaukee1

Abstract. Many bacteria use long, thin appendages called flagella to propel themselves forward. Examples of bacteria which use these are Escherichia coli and Salmonella typhimurium. A rotary motor within the cell causes the flagellum to spin, which propels the bacterium through its fluid environment. This is a complex process, especially for a single-celled organism. In order to better understand how this process works, we have created and tested a model for flagellar motion using springs in different fluids. Quantifying the propulsive force is an important goal, not only for relating it to bacterial motion, but also for potential applications in low-gravity environments.

Background. A majority of the published research relating to flagella focuses on how flagella are self-assembled, how the motor within the cell causes the flagellum to rotate, and how bacteria utilize their flagella to swim toward food or otherwise change direction through the process of “tumbling.” There is very little research that reduces the process to a simpler model which can then be used to test different conditions and better understand how bacteria move. In his book Biological Physics: Energy, Information, Life (copyright 2004), Philip Nelson sought to explain how flagella work by comparing a flagellum to a helix twirling through the fluid in which the bacteria live. He then used this idea to explain how the net force from the flagellum on the fluid is reciprocated by the fluid, which drives the bacterium forward. The fluid needs to be viscous enough, relative to the scale and speed of the bacterium, that the Reynolds number is low and the motion is governed by viscous rather than inertial forces. This topic was covered in the class titled The Physical Basis of Biological Structure and Function in the fall semester of 2011. Nelson’s development focuses on a small segment of the helix to explain the existence of a propulsive force.
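The low-Reynolds-number condition can be checked with order-of-magnitude textbook values for E. coli swimming in water; the numbers below are typical published estimates, not measurements from this project:

```python
def reynolds(rho, v, L, mu):
    """Reynolds number: the ratio of inertial to viscous forces."""
    return rho * v * L / mu

# Order-of-magnitude values for E. coli in water:
# swimming speed ~20 micrometers/s, cell size ~2 micrometers,
# water density 1000 kg/m^3, water viscosity 1e-3 Pa*s.
Re_bacterium = reynolds(rho=1000.0, v=20e-6, L=2e-6, mu=1.0e-3)
# ~4e-5: far below 1, so viscous forces dominate completely
```

A Reynolds number of order 10^-5 means the bacterium stops essentially instantly when the motor stops; any macroscopic spring model must use a fluid viscous enough to keep its own Reynolds number comparably small.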

Figures 1 (left) & 2 (right), courtesy of Philip Nelson's Biological Physics: Energy, Information, Life, Updated First Edition, pages 175 and 177. Figure 1 is an illustration of a rotating helix with a small segment highlighted; Figure 2 shows that segment of the helix and illustrates the components of the force and velocity for that segment.

1 Special thanks to the Wisconsin Space Grant Consortium for financial support. Special thanks as well to Joseph Holbus and Thomas Silman for their help in engineering this project.

This project sought to create a simple, macroscopic model for a bacterial flagellum. This was done by testing the characteristics of a rigid, helix-shaped body rotated about its axis in a fluid environment and comparing them to the known characteristics of a bacterial flagellum, to see whether the rigid body accurately models a flagellum. Not only would this be a way to better understand bacterial flagella, but it could also potentially be applied to create a gentle mixing system for low-gravity environments.

Method and Results. The first apparatus, shown in Figure 3, uses a two-pulley system to drive the spring. One of the pulleys is a PASCO rotary motion sensor, which measures the velocity and acceleration of the hanging bucket. The falling mass applies a torque to the second pulley, which is mounted on a rotating shaft. The spring is mounted on the other end of the shaft, so when the pulley begins to rotate, the spring does as well. The springs were immersed in fluids of different viscosities. This setup allowed us to determine the tangential component of the force upon the spring through an analysis of the torques in the system. Data were collected using the program DataStudio, the software intended for PASCO instruments. The rotary motion sensor recorded the linear velocity of the system, from which it was possible to determine the linear acceleration of the system.

Figure 3: The first apparatus using the 1.50 in pulley and the 8 in spring

Changing viscosities. The first test performed with this apparatus involved immersing the springs in different fluids to see how the viscosity affected the motion of the spring. The first fluid used was air, then distilled water, and finally motor oil. Graphs of the hanging mass versus the acceleration were plotted for each of these fluids. The graphs show that as the viscosity of the fluid increased, the data became more systematic: in a more viscous fluid, the spring did not whip as much and its rotation was more stable.

Figures 4 (left), 5 (center), & 6 (right); Plots of the hanging mass versus the acceleration; Figure 4 is the graph for the spring rotating in air; Figure 5 is the graph for the spring rotating in distilled water; Figure 6 is the graph for the spring rotating in motor oil
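The torque analysis described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the pulley radius, spring position, and shaft inertia below are hypothetical values, and the shaft's moment of inertia is assumed known (or negligible).

```python
# Sketch of the torque analysis for the two-pulley apparatus.
# Hypothetical values throughout; only the physics (Newton's laws) is from the text.

g = 9.81  # gravitational acceleration, m/s^2

def tangential_force(m_hang, a, R, X, I=0.0):
    """Tangential drag force on the spring from a torque balance.

    m_hang : hanging mass (kg)
    a      : measured linear acceleration of the hanging mass (m/s^2)
    R      : radius of the driven pulley (m)
    X      : distance from the rotation axis to the center of the spring coil (m)
    I      : moment of inertia of the rotating hardware (kg m^2), optional
    """
    tension = m_hang * (g - a)             # Newton's 2nd law for the falling mass
    drag_torque = tension * R - I * a / R  # torque left over after spinning up the shaft
    return drag_torque / X                 # force = torque / moment arm

# Example with made-up numbers: 50 g hanging mass accelerating at 0.8 m/s^2,
# a 1.50 in pulley (R ~ 0.019 m), spring coil centered at X = 0.010 m.
F = tangential_force(0.050, 0.8, 0.019, 0.010)
```

At zero acceleration the tension reduces to the hanging weight, so the same function also gives the balanced (non-accelerating) case used later in the analysis.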

Changing pulley diameters. In an attempt to better understand the system, the diameter of the second pulley was changed and the effects of that change were studied. The diameters used were 0.75 in, 1.50 in, and 2.00 in. Since the pulley with the smallest diameter has the smallest moment arm, it allowed the system to accelerate quickly; typically this caused the spring to whip, and the data were less systematic. The pulleys with larger diameters had larger moment arms, which resulted in slower accelerations, more stable rotation of the springs, and cleaner data.

Accelerating system. Graphs were made by plotting the acceleration achieved with a particular hanging mass, for different wet lengths of the spring; wet length refers to the length of spring immersed in the fluid. Using these graphs, it was possible to extrapolate back to zero acceleration and determine what mass would, theoretically, cause the system to move without accelerating. A graph was then made illustrating the relationship between the wet length and the zero-acceleration mass. Examples are shown in Figures 7 and 8. Using the equation of the line from the zero-acceleration mass versus wet length graph, it was possible to gain information about the force on the spring.

Figures 7 (left) & 8 (right); Figure 7 is a plot of the hanging mass versus the acceleration, each line represents a different wet length; Figure 8 is a plot of the wet length versus the mass necessary to achieve zero acceleration
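The zero-acceleration extrapolation described above is an ordinary linear fit. The sketch below uses illustrative data, not the measured values; it fits hanging mass against acceleration and reads off the intercept as the mass that would just balance the drag.

```python
# Sketch of the zero-acceleration extrapolation (illustrative data only).

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    slope = num / den
    return slope, my - slope * mx

# Hypothetical hanging masses (kg) and measured accelerations (m/s^2)
# for a single wet length:
mass = [0.020, 0.030, 0.040, 0.050]
accel = [0.10, 0.45, 0.80, 1.15]

# Fit mass as a function of acceleration, then extrapolate to a = 0:
m_per_a, m0 = linfit(accel, mass)
# m0 is the extrapolated zero-acceleration mass: the hanging mass whose
# torque just balances the drag on the spring at this wet length.
```

Repeating this for each wet length and fitting m0 against wet length gives the slope (mass per unit wet length) used in the force equation.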

At zero acceleration, the torque from the hanging mass balances the torque from the drag on the spring, giving the force on the spring per unit wet length:

F/l = (m/l)(gR/X)

where R is the radius of the pulley, g is the acceleration due to gravity, X is the distance from the axis of rotation to the center of the spring, and m/l is the slope from the graph of zero-acceleration mass versus wet length (Figure 8).

System in free fall. When running tests of the springs in motor oil, many of the trials resulted in the hanging mass reaching a terminal velocity, meaning the system was no longer accelerating. Since the system is not accelerating, the drag force upon the spring is directly proportional to the weight of the hanging mass: the torques created by each are equal and opposite. Figure 9 shows a trial in DataStudio in which the velocity-versus-time graph plateaus, showing that the system did achieve terminal velocity. Figure 10 is a graph of the terminal speed versus the amount of hanging mass for different wet lengths. It shows that the relationship between the hanging mass and the terminal speed is linear, which translates to a linear relationship between the drag force and the terminal speed.

Figures 9 (left) & 10 (right); Figure 9 is an example of the velocity versus time graph when the system reached an effectively constant speed; Figure 10 is a plot of the hanging mass versus the terminal speed for different wet lengths
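The terminal-velocity torque balance can be turned into a drag coefficient. The sketch below is illustrative: the masses, speeds, and geometry are hypothetical, and a linear drag law F = b·v is assumed, as the data in Figure 10 suggest.

```python
# Sketch relating terminal-speed data to a linear drag law (hypothetical numbers).
# At terminal speed the torque balance gives F_drag = m * g * R / X.

g = 9.81  # m/s^2

def drag_force(m_hang, R, X):
    """Drag force on the spring when the system coasts at constant speed."""
    return m_hang * g * R / X

# Hypothetical hanging masses (kg) and observed terminal speeds (m/s)
# for one wet length:
masses = [0.010, 0.020, 0.030]
v_term = [0.02, 0.04, 0.06]

R, X = 0.019, 0.010  # assumed pulley radius and spring moment arm (m)
forces = [drag_force(m, R, X) for m in masses]

# If F = b * v, the ratio F / v is constant; that constant is the drag
# coefficient b for this wet length and fluid.
b_values = [F / v for F, v in zip(forces, v_term)]
```

A constant F/v across trials is exactly the linear mass-versus-terminal-speed relationship the paper reports.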

We also graphed the relationship between the terminal speed and the wet length for different amounts of hanging mass. LM stands for large mass: steel balls of 32 milligrams each, placed into a bucket weighing 487 milligrams. The graph shows that as the wet length increases, the terminal speed decreases.

Figure 11: a plot of the wet length versus the terminal speed for different hanging masses

Conclusions. Based on the relationship between the viscosity and the stability of the rotation, it is likely that flagellar motion is stabilized in a viscous environment.

A typical result for the force per unit wet length of spring is 5.4 mN/m in a water environment using the smaller vessel. Preliminary results using the larger vessel yield forces larger by a factor of ten, suggesting that boundary conditions affect the spring in the smaller vessel.

Based on the linear relationship between the hanging mass and the terminal speed in the non-accelerating system, it was determined that the system operates in the regime of Stokes flow.
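The Stokes-flow claim can be sanity-checked with a rough Reynolds number. Every number below is an illustrative assumption (typical motor-oil and water properties, a ~1 mm wire, a ~5 cm/s speed), not a measured value from the experiment.

```python
# Rough Reynolds-number check for the Stokes-flow regime (assumed values only).

def reynolds(rho, v, L, mu):
    """Re = rho * v * L / mu for a body of characteristic size L in a fluid
    of density rho (kg/m^3) and dynamic viscosity mu (Pa*s)."""
    return rho * v * L / mu

# Spring wire (~1 mm) moving at ~0.05 m/s in motor oil
# (rho ~ 870 kg/m^3, mu ~ 0.2 Pa*s): viscous forces dominate.
re_oil = reynolds(870, 0.05, 1e-3, 0.2)

# The same geometry in water (mu ~ 1e-3 Pa*s) gives a much larger Re,
# consistent with the less stable rotation observed in water.
re_water = reynolds(1000, 0.05, 1e-3, 1e-3)
```

With these assumptions Re is well below 1 in oil and well above 1 in water, which matches the observation that the oil data were the most systematic.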

Based on the relationship between the wet length and the terminal speed in the non-accelerating system, it is likely that there is a constraint on flagella length.

Future Work. Figure 12 shows the second apparatus for this project. This setup uses a DC motor with a speed-control device, which applies a constant torque to the shaft the spring is mounted on and causes the spring to turn at a constant rate. This should create a vortex pattern in which the surface of the liquid acts like a parabolic mirror. Using laser pointers, the curvature of the mirror can be determined, which will allow us to ascertain the longitudinal force on the spring. This portion of the experiment is still a work in progress.
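The parabolic-mirror idea rests on a standard result for rotating fluids: a liquid spun at angular speed ω takes the surface shape z = ω²r²/(2g), a paraboloid with focal length f = g/(2ω²). The sketch below shows that relation and its inverse; the example rotation rate is hypothetical.

```python
# Sketch of the rotating-liquid parabolic-mirror relation behind the planned
# laser measurement. The surface z = w^2 r^2 / (2 g) is a paraboloid with
# focal length f = g / (2 w^2). Example numbers are hypothetical.

import math

g = 9.81  # m/s^2

def focal_length(omega):
    """Focal length (m) of the free surface of a liquid rotating at omega (rad/s)."""
    return g / (2.0 * omega ** 2)

def omega_from_focal_length(f):
    """Invert the relation: infer the rotation rate from a measured focal length."""
    return math.sqrt(g / (2.0 * f))

# e.g. a shaft turning at 2 rev/s (omega = 4*pi rad/s) gives f of about 3 cm.
f = focal_length(4 * math.pi)
```

Measuring f with the laser pointers thus pins down ω, tying the optical measurement back to the mechanics of the rotating spring.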

Data collected with the first apparatus using different spring geometries (pitch, coil diameter, and wire diameter) is under analysis.

Figure 12: The second apparatus using a spring with a thicker coil diameter

References. Nelson, Philip. Biological Physics: Energy, Information, Life. New York, NY: W. H. Freeman and Company, 2004. Print.

22nd Annual Conference Part Eight

Geology

Fumarole Alteration of Hawaiian Basalts: A Potential Mars Analog

Teri L. Gerard and Lindsay J. McHenry

Department of Geosciences, University of Wisconsin-Milwaukee

Abstract Over the last decade of Mars exploration, planetary scientists have discovered widespread sulfate-rich deposits indicating the acidic weathering of basalts (Squyres and Knoll, 2005; Bibring et al., 2006). By understanding the processes occurring during alteration of basalt in wet volcanic environments such as solfataras, we can learn more about the past aqueous processes on Mars. Chemically altered mineral assemblages of potentially hydrothermal origin have been detected at several Mars sites including Gusev Crater, Mawrth Vallis, and Nili Fossae (Ehlmann et al., 2010; Ehlmann et al., 2008; Chojnacki and Hynek, 2008; Schmidt et al., 2008; Yen et al., 2008; Bibring et al., 2006). The abundant magnesium and iron sulfates, hematite, and silica suggest alteration of basalts in water-limited, saline-acidic conditions, consistent with acidic evaporites or potentially sulfur-rich fumaroles (Squyres et al., 2007; Morris et al., 2000). In order to fully understand the formation of the minerals observed on Mars, it is necessary to fully understand the geochemical and mineralogical pathways that the basalt undergoes as it weathers under different conditions. Kilauea volcano, Hawaii, provides an excellent Mars analog for hydrothermal alteration. Samples collected at various fumaroles will be analyzed in order to determine any mineralogical and/or geochemical signatures that can help determine the origin of the mineral assemblages seen on Mars.

Introduction Hawaii is a great place to study the alteration of basalts at solfataras since Kilauea volcano currently has active solfataras in contact with basalts, and since previous studies have identified mineral assemblages similar to those found on Mars (e.g. Morris et al., 2000). By collecting and geochemically and mineralogically analyzing rock samples from young fresh and altered basalts from the same units, it will be possible to determine the different mineral assemblages and element mobility patterns associated with this kind of alteration. These results will be applicable to the results of the Mars Exploration Rover (MER) missions and will serve as a terrestrial analog to help determine the origin of the sulfur-rich deposits seen on Mars. Determining the differences between volcanic and low-temperature aqueous alteration processes on Mars relates to NASA’s strategic goal 3C.3 and MEPAG (2008) goal 1A of assessing the past and present habitability of Mars.

Hawaii has been identified as a useful Mars analog site since its ocean island type basalts are more similar in composition to Martian basalts than most other Earth basalts and since the basalts are weathering in a variety of environments, resulting in alteration minerals that are seen on Mars (Schiffman et al., 2006; Seelos et al., 2010). Fumaroles are currently found in and near the Kilauea and Pu‘u ‘Ō‘ō calderas, and steam vents are additionally located at Kilauea Iki, Mauna Ulu, and other more recent vents and lava flows. Recent (and current) fumarole deposits are located on the wetter, northeastern side of the volcano (i.e., Sulfur Banks) and the drier, southwestern side (i.e., Ka’u Desert). Rock samples from young fresh and altered basalts from different fumarolic environments were collected in order to determine the mineral assemblages and element mobility patterns associated with this kind of alteration. The results will serve as a terrestrial analog to help determine the origin of the sulfur-rich deposits seen on Mars and will also be compared to previous studies on fumarole alteration in other contexts, including the rhyolites, dacites, and andesites of the Valley of Ten Thousand Smokes, Alaska, and previous studies at Kilauea (Papike et al., 1991a, b; Papike, 1992; Morris et al., 2000).

Funded by Wisconsin Space Grant Consortium.

Background Hydrothermal systems on Mars. Mars was volcanically active for much of its history and is believed to have had widespread past aqueous activity (Hynek and Phillips, 2003; Bibring et al., 2006), which likely included hydrothermal environments. Hydrothermal environments could have provided a suitable environment for microbial life (Walter and Des Marais, 1993). Current models for Mars suggest that there was extensive volcanism early in Mars' history, with warmer, wetter conditions, as shown by the clays seen by OMEGA (Bibring et al., 2005, 2006). As volcanism slowed and Mars became colder, more arid, and acidic, sulfates formed (Bibring et al., 2005, 2006). Hydrothermal environments likely remained active during this time.

Hydrothermal environments can be identified by the alteration products and geochemical signatures formed by the interaction of fluids with basalt. Many mineralogical features seen by the MERs could have resulted from either low-temperature aqueous alteration or hydrothermal alteration (McCollom and Hynek, 2005). Further constraining the mineralogical and geochemical pathways of hydrothermal alteration will allow us to more accurately reconstruct the aqueous history of Mars.

Previous Hawaiian studies. Previous studies at Kilauea volcano have explored both high and low temperature weathering of basalt, alteration of basalt in an “acid fog” environment, and recent solfatara activity (Morris et al., 2000; Schiffman et al., 2000; Schiffman et al., 2006; Minitti et al., 2007; Chemtob et al., 2010; Seelos et al., 2010). In particular, Chemtob et al. (2010) and Seelos et al. (2010) examined the mineralogy of basalts in the vicinity of recent solfataras on the 1974 and 1976 lava flows in the Ka’u Desert. They did not look at the preservation of alteration materials at older solfataras, or the spatial changes in mineralogy. Studies such as Morris et al. (2000) examined only the alteration materials at the fumarole (at Sulfur Banks) and did not sample fresh, unaltered basalt. Most of the previous studies employed VNIR spectroscopy rather than X-ray Diffraction (XRD), X-ray Fluorescence (XRF), or Electron Probe Microanalysis (EPMA).

Methods We collected samples from Kilauea Volcano, Hawaii, in December 2011. Rock and mineral samples were collected at and near the rim of Pu‘u ‘Ō‘ō at the Thanksgiving Eve Breakout flow, the drill hole at Sulfur Banks, Kilauea Iki, Mauna Ulu, and the Kula Kai Caverns. Fresh and altered lavas and the associated mineral coatings were collected near active fumaroles. The steam and gases emitted from the fumaroles ranged from just above ambient temperature to 130 °C. Some of the samples were analyzed in the field with a TERRA portable XRD in order to identify any ephemeral minerals that might not survive transport. In the laboratory, samples were prepared for XRD by crushing them in a micronizing mill to a fine powder and drying overnight; 1 g of the powder was then mounted and analyzed with a Bruker D8 Focus XRD. This provided the mineral assemblages of the samples. XRF was also used, providing major and trace element data. Samples were crushed in a micronizing mill to a fine powder, and loss on ignition was determined using ~1 g of each sample. A second 1.000 g of sample was combined with ~1 g of ammonium nitrate and 10.000 g of Claisse 50:50 LiT:LiM flux with an integrated LiBr non-wetting agent, mixed thoroughly, and fused into a homogeneous glass bead in a Claisse M4 fluxer for XRF analysis with a Bruker S4 Pioneer Sequential Wavelength Dispersive XRF Spectrometer. For a more detailed description of the methods, see McHenry (2009).

Dr. Brian Hynek collected several gas and gas-condensate samples from the vents for a preliminary microbiology study to help assess the habitability of this extreme environment. Crust and sediment samples corresponding to the gas and mineral samples were collected aseptically and stored frozen until analysis. Hynek is currently analyzing the collected gas and gas-condensate samples.

Results Sulfates and phosphates were the most commonly detected alteration minerals. Halides, clays, and zeolites were also present. Several minerals were only seen in the freshest samples from Pu‘u ‘Ō‘ō, indicating a temporal element to the preservation of these minerals. There is also a spatial element, with more secondary alteration minerals detected closest to the vent.

At Sulfur Banks, elemental sulfur and gypsum are by far the most common minerals seen. Sulfur and phosphates are most commonly located closest to the vent, while gypsum and oxides are located furthest from the vent. The most heavily altered sample of basalt contained the smectite clay montmorillonite and the sulfate natrojarosite, both minerals seen on Mars (Figure 1). The minerals identified at Pu‘u ‘Ō‘ō are mostly sulfates such as gypsum, anhydrite, and mirabilite. Several of the sulfates were found only at Pu‘u ‘Ō‘ō, likely because they quickly alter over time. The samples from Mauna Ulu were heavily altered, with the most common alteration minerals being halides such as ralstonite. Calcite, phosphates, and sulfates were found in the lava tube caves at the Kula Kai Caverns. Precipitated gypsum is the main secondary mineral at Kilauea Iki and appears closest to the vent. For example, KI 11-6 (Figure 2), collected at the vent, shows gypsum along with the basaltic mineral augite, while KI 11-10 (Figure 3), collected ~10.5 m from the vent, shows a mostly unaltered basalt. Preliminary results from the first field season indicate the presence of minerals seen on Mars, confirming the suitability of this area as a Mars analog. Further analysis will enable us to identify which minerals are best suited to be a hydrothermal signature.

Figure 1: XRD pattern of sample SB 11-5 from Sulfur Banks. Montmorillonite, a smectite clay, and natrojarosite, a sulfate, are identified. These minerals have been seen on Mars.

Figure 2: Sample KI 11-6 was collected at the vent. The XRD spectrum shows gypsum along with the basaltic mineral augite.

Figure 3: Sample KI 11-10, collected ~10.5 m from the vent. The XRD spectrum shows minerals commonly found in basalt, such as forsterite and clinopyroxene.

Future Work We will continue to analyze samples from the 2011 field season using a variety of techniques. Scanning Electron Microscopy (SEM) will provide the mineral assemblages and imaging of the fine-grained alteration minerals and textures. The thin alteration coatings will be analyzed by EPMA to determine their elemental composition (methods of Minitti et al., 2007). Selected samples will be sent to U. Arkansas for Visible-Near Infrared (VNIR) spectral analysis in order to identify early alteration phases that are not detectable by XRD, such as palagonite, amorphous silica, or nanophase Fe oxides.

This pilot study concentrated on determining the mineral assemblages and geochemical changes during alteration of young basalt at active fumaroles and steam vents. We are planning a second trip to Kilauea for late 2012, in which we hope to collect rock and mineral samples from several “fossil” fumaroles such as those in the Ka’u Desert. This will provide a temporal component to the study: by comparing currently active fumaroles to older fumaroles, we will be able to determine what mineralogical and geochemical features of fumarole-driven basalt alteration are preserved in the rock record, and thus what signatures are most useful in identifying more ancient deposits of volcanic/hydrothermal activity. We will collect a transect of rock and mineral samples to determine the spatial pattern of the geochemical alteration. It will be important to collect samples of both the freshest, most unaltered basalt and the corresponding fumarolically altered basalts in order to model the geochemical pathways involved in alteration processes such as leaching. The samples will be analyzed by XRD, XRF, SEM, and EPMA. By comparing the mineral assemblages of fumaroles from a wet environment (sites from 2011) and a dry environment (sites from 2012), it will be possible to determine the differences in weathering pathways and products.

Conclusions Mineral samples were collected at several active fumaroles at Kilauea volcano. Preliminary analysis indicates the presence of some of the same minerals, such as sulfates, that have been seen on Mars. Further analysis will enable the determination of the mineralogical and geochemical pathways involved in solfatara-related alteration of basalt.
A second field expedition will enable us to collect samples from older, extinct fumaroles in order to determine how long the mineralogical and geochemical changes are likely to be preserved. This will lead to a better understanding of the water-limited, acid-sulfate weathering history of Mars, in particular the sulfate-bearing Gusev Crater soils and bedrock where sulfur-rich hydrothermal processes are believed to have been involved (Squyres et al., 2007) and Meridiani Planum deposits, where acid-sulfate evaporitic conditions are generally preferred (Squyres et al., 2009).

References Bibring, J.-P. et al., 2005. Mars surface diversity as revealed by the OMEGA/Mars Express observations. Science 307, 1576-1581.

Bibring, J.-P. et al., 2006. Global mineralogical and aqueous Mars history derived from OMEGA/Mars Express data. Science 312, 400-404.

Bibring, J.-P. et al., 2007. Coupled ferric oxides and sulfates on the Martian surface. Science 317, 1206-1210.

Chemtob, S.M., Jolliff, B.L., Rossman, G.R., Eiler, J.M., Arvidson, R.E., 2010. Silica coatings in the Ka’u Desert, Hawaii, a Mars analog terrain: a micromorphological, spectral, chemical, and isotopic study. JGR 115, E04001.

Chojnacki, M., Hynek, B., 2008. Geological context of water-altered minerals in Valles Marineris, Mars. JGR 113, E12005.

Ehlmann, B.L., Mustard, J.F., Fassett, C.I., Schon, S.C., Head, J.W. III, Des Marais, D.J., Grant, J.A., Murchie, S.L., 2008. Clay minerals in delta deposits and organic preservation potential on Mars. Nature Geoscience, vol. 1, pp. 355-358.

Ehlmann, B.L., Mustard J.F., Bish, D.L., 2010. Weathering and hydrothermal alteration of basalts in Iceland: Mineralogy from VNIR, TIR, XRD, and implications for linking Mars orbital and surface datasets. Abstract. Lunar and Planetary Science Conference, 2010.

Hynek, B.M., Phillips, R.J., 2003. New data reveal mature, integrated drainage systems on Mars indicative of past precipitation. Geology 31, 757-760.

McCollom, T.M., Hynek, B.M., 2005. A volcanic environment for bedrock diagenesis at Meridiani Planum on Mars. Nature 438, 1129-1131.

McHenry, L.J., 2009. Element mobility during zeolitic and argillic alteration of volcanic ash in a closed-basin lacustrine environment: Case study Olduvai Gorge, Tanzania. Chemical Geology 265, 540-552.

Minitti, M.E., Weitz, C.M., Lane, M.D., Bishop, J.L., 2007. Morphology, chemistry, and spectral properties of Hawaiian rock coatings and implications for Mars. JGR 112, E05015.

Morris, R.V. et al., 2000. Mineralogy, composition, and alteration of Mars Pathfinder rocks and soils: Evidence from multispectral, elemental, and magnetic data on terrestrial analogue, SNC meteorite, and Pathfinder samples. JGR 105, 1757-1817.

Papike, J.J., Keith, T.E.C., Spilde, M.N., Shearer, C.K., Galbreath, K.C., and Laul, J.C., 1991a. Major and trace element mass flux in fumarolic deposits, Valley of Ten Thousand Smokes, Alaska: Rhyolite-rich protolith. Geophysical Research Letters vol. 18 no. 8., pp. 1545-1548.

Papike, J.J., Keith, T.E.C., Spilde, M.N., Galbreath, K.C., Shearer, C.K., and Laul, J.C., 1991b. Geochemistry and mineralogy of fumarolic deposits, Valley of Ten Thousand Smokes, Alaska: Bulk chemical and mineralogical evolution of dacite-rich protolith. American Mineralogist, vol. 76, pp. 1662-1673.

Papike, J.J., 1992. The Valley of Ten Thousand Smokes, Katmai, Alaska: A unique geochemistry laboratory. Geochimica et Cosmochimica Acta, vol. 56, pp. 1429-1449.

Schiffman, P., Spero, H.J., Southard, R.J., Swanson, D.A., 2000. Controls on palagonitization versus pedogenic weathering of basaltic tephra: Evidence from the consolidation and geochemistry of the Keanakako’i Ash Member, Kilauea Volcano. G3 1, 1040.

Schiffman, P., Zierenberg, R., Marks, N., Bishop, J.L., Dyar, M.D., 2006. Acid-fog deposition at Kilauea volcano: A possible mechanism for the formation of siliceous-sulfate rock coatings on Mars. Geology 34, 921-924.

Schmidt, M.E. et al., 2008. Hydrothermal origin of halogens at Home Plate, Gusev Crater. JGR 113, E06S12.

Seelos, K.D. et al., 2010. Silica in a Mars analog environment: Ka’u Desert, Kilauea Volcano, Hawaii. JGR 115, E00D15.

Squyres, S. et al., 2007. Pyroclastic activity at Home Plate in Gusev Crater, Mars. Science 316, 738-742.

Squyres, S. et al., 2009. Exploration of Victoria Crater by the Mars rover Opportunity. Science 324, 1058-1061.

Squyres, S., Knoll, A., 2005. Sedimentary rocks at Meridiani Planum: Origin, diagenesis, and implications for life on Mars. EPSL 240, 1-10.

Walter, M.R., Des Marais, D.J., 1993. Preservation of biological information in thermal spring deposits: developing a strategy for the search for fossil life on Mars. Icarus 101, 129-143.

Yen, A.S. et al., 2008. Hydrothermal processes at Gusev Crater: An evaluation of Paso Robles class soils. JGR 113, E06S10.

22nd Annual Conference Part Nine

Education and Public Outreach

A Hubble Instrument Comes Home: UW-Madison's High Speed Photometer

James M. Lattis1

Director, UW Space Place Department of Astronomy, University of Wisconsin-Madison

Abstract. WSGC provided generous support to return the High Speed Photometer (HSP) to Wisconsin to be used for public education and outreach at UW Space Place. HSP was one of the original five science instruments built for and launched with the Hubble Space Telescope (HST). Declared to be federal surplus equipment in autumn 2011, HSP was acquired by UW Space Place and brought to Madison to be put on exhibit and used for public education. As the only research photometer built specifically for HST, HSP is an important part of the history of astronomical instrumentation and one of the most important astronomical instruments to come from Wisconsin.

Introduction UW Space Place requested and was granted support from the Wisconsin Space Grant Consortium for the relocation of a scientific instrument that is very significant in the history of astronomical research and a primary example of Wisconsin's contributions to space science. The High Speed Photometer project was a direct descendant of the continuous development of astronomical photoelectric photometry dating to the early work of Prof. Joel Stebbins, Director of Washburn Observatory from 1922 to 1948. Stebbins and his younger colleague Prof. Albert Whitford advanced photoelectric photometry to such an extent that their practices and instruments were widely adopted as standards in the astronomical community.

With the coming of the Space Age, the capacity to put astronomical instruments in orbit led to NASA's Orbiting Astronomical Observatory-2, which was the first general purpose space observatory and the first to carry out extended observations in the ultraviolet. The principal and most productive instruments aboard OAO-2 were the photometers of the Wisconsin Experiment Package (WEP), which was designed and operated by the UW Space Astronomy Laboratory (SAL).

Origins of HSP The technical and scientific success of WEP aboard OAO-2 ensured that Wisconsin's astronomers were deeply involved in the planning for the Space Telescope (later Hubble Space Telescope) project in its earliest stages. The Space Telescope was conceived as a general purpose, primarily optical, observatory in Earth orbit that could be serviced by astronauts but operated by astronomers from the ground. With photoelectric photometry by then a proven technique for space observations, Prof. Robert Bless, who had been one of the leading OAO-2 team members, proposed a minimalist photometric instrument as part of the new space observatory. The instrument, which he characterized as “two thermos bottles and a shoebox” (meaning two photometer tubes and their associated electronics), would be relatively small, simple, and inexpensive compared to the other much more complex “science instruments,” or SIs, to be included in the Space Telescope.

1 UW Space Place wishes to acknowledge the support of the Wisconsin Space Grant Consortium in the acquisition of HSP. We are also grateful for support from: Judy & Jim Sloan Foundation, Prof. Robert C. Bless, UW-Madison Space Science & Engineering Center, UW-Madison Department of Astronomy.

The motivation for a space-based general purpose photometer was three-fold: First, it could extend optical ground-based photometry into the ultraviolet, as OAO-2 had done, but with the vastly greater power offered by a large telescope. Second, it could perform polarimetry, a very powerful technique for investigating the characteristics of interstellar matter and complex star systems, for example. Third, freed of the scintillation of Earth's atmosphere, a space-based photometer can make very high time-resolution studies, i.e. analyzing changes in the light from an object millisecond by millisecond or better. This time-resolution is useful, for example, with rapidly fluctuating objects, like pulsars, and explosive events, such as recurrent novas. This ability to take up to 100,000 measurements per second gave the Wisconsin instrument its appellation as the “high speed” photometer.

NASA considered the scientific case for a photometer aboard the space observatory to be so compelling as to solicit from UW a proposal for a full, major Science Instrument (of which there can be five at any given time) devoted to high-speed photometry. For the full proposal, Bless and his team at SAL collaborated with UW's Space Science and Engineering Center. The result was a contract to deliver a High Speed Photometer in 1983 that would be one of the five original science instruments (SIs) aboard the Hubble Space Telescope when it was launched into orbit.

Design, fabrication, and testing of HSP represented a large step in complexity beyond anything undertaken by UW astronomers up to that point. Aside from some NASA-supplied standard hardware and off-the-shelf aerospace equipment (such as power supplies) available from commercial sources, the entire instrument, including structural and thermal design, detectors and their support electronics, on-board control systems, scientific and control software, and ground-based testing procedures, was developed and fabricated entirely at UW-Madison. HSP was the only HST SI to originate entirely on a university campus, and the SAL/SSEC team also made extensive use of student employees, both graduate and undergraduate. The instrument was delivered on schedule and under budget and passed all NASA verification and acceptance tests without significant problems.

HSP was also unique among HST SIs in the simplicity and reliability of its design, which allowed hundreds of filter-aperture-detector configurations with no moving parts, “except electrons,” as Prof. Bless says. This elegant simplicity was made possible by the use of Image Dissector Tubes (IDTs) as the primary detectors. IDTs were the direct descendants of the photoelectric photometers that had been developed by UW astronomers since the 1920s and which were the most advanced form reached by photoelectric detectors before the advent of solid-state detectors, by which IDTs have now been replaced. For that reason, HSP (along with its sibling instrument, the Wisconsin Ultraviolet Photo-Polarimeter Experiment) represents the historical capstone and most advanced form of the astronomical photometry that Wisconsin astronomers had pioneered for most of the Twentieth Century.

Mission

HSP was delivered to NASA as contracted in 1983, but numerous problems in NASA's HST and shuttle programs resulted in repeated launch delays. In the end, HSP and the other original SIs were launched aboard HST in 1990. As is well known, HST began operations with several serious technical problems. Foremost, and best known, Perkin-Elmer Corp. had made a mirror with significant spherical aberration, and that fundamental optical error was not caught in any NASA acceptance testing. The result was that the prime-focus images were blurry: images of stars that should have been diffraction-limited points were instead much larger, fuzzy spots. Another major problem was the telescope's pointing stability, which was far worse than the specifications and meant that those fuzzy images could not be kept steady in the field of view.

These two problems were devastating for the scientific potential of all of the SIs. In the case of HSP, the pin-point images produced by HST's advertised optical perfection were supposed to direct starlight through tiny apertures in front of the IDTs. Instead, the fuzzy images were larger than most of the apertures, so much of the incoming starlight never reached the detectors. The pointing instability meant that even those fuzzy images could not be depended upon to remain centered on a given aperture.

Despite these problems, Bless's team of astronomers was able to carry out a number of new observations and produce meaningful results. (A bibliography of the refereed publications based on HSP science can be found at http://www.sal.wisc.edu/HSP/hsp.papers.html.) Examples of HSP's scientific results include:

- The highest time-resolution optical light curve, and the first ever ultraviolet light curve, of the Crab Nebula pulsar.
- Detailed structural investigations of Saturn's rings by means of stellar occultation.
- The first high time-resolution ultraviolet photometry over the complete eruption cycle of a recurrent nova.
- Ultraviolet polarimetry of galaxy images to test theories of gravitational lensing.

HST's pointing instability was found to be largely a result of thermal instability and mechanical flexure in the solar panels. This was solved during the first HST servicing mission, in December 1993, when the original solar panels were replaced with a new design. Correction of the spherical aberration problem was accomplished by installation of a device designed to take the place of one of the primary science instruments in order to insert corrective optics in front of the remaining SIs. (This was often described as giving HST “eyeglasses,” although more accurately it amounted to giving each SI its own set of “eyeglasses.”) In order to install this corrective optics device, called COSTAR (for Corrective Optics Space Telescope Axial Replacement), as part of the servicing mission, one of the original SIs had to be removed, and NASA judged that HSP would be the odd instrument out. So HSP was removed from HST by astronauts, who then installed COSTAR in its place. As is well known, COSTAR, along with the solar panel replacement, succeeded in bringing HST up to specifications and restored the remaining instruments to their full potential. Thus the sacrifice of HSP saved the mission for the other instruments, but at the cost of all the potential science that HSP's team could have accomplished. Throughout its operational life, HSP operated flawlessly.

Denouement

HSP was returned from orbit at the end of the servicing mission. Except for a brief return to Madison for post-flight calibration tests, the instrument remained in a NASA storage facility until autumn of 2011. At that time, following the end of the shuttle program (and hence the end of any possibility of further HST servicing missions), HSP was declared federal surplus equipment. UW Space Place thereupon requested and was granted ownership of HSP for purposes of public exhibition and education. The not-insignificant shipping costs of such a transfer must be borne by the receiver, so we requested and were granted funding from the Wisconsin Space Grant Consortium as partial support for shipping HSP to Madison.

Since its arrival in Madison, and with support from UW-Madison's SSEC and Astronomy Dept., HSP has been mounted on a custom dolly to make it practical and safe for public exhibition. Some of the aluminum skins have been replaced with plexiglass panels so that visitors can see the internal structure, components, and instrumentation. HSP went on public display at UW Space Place on 20 July 2012 and will be maintained there as a permanent exhibit (Fig. 1).

Fig. 1. HSP on public display at UW Space Place

Spaceflight Academy for CESA District #7

Bradley J. Staats, President Spaceflight Fundamentals, LLC Greenville, Wisconsin

Synopsis: Spaceflight Academy for Cooperative Educational Service Agency (CESA) District #7 was a teacher workshop1 that focused on the history, math, science, and technology of spaceflight. This workshop offered a unique approach to teaching by incorporating “real world” applications into the classroom.

Goals and Project Value: Spaceflight Academy for CESA District #7 was a one-day workshop that focused on STEM (Science, Technology, Engineering, and Math) topics related to spaceflight. The course offered a unique approach to teaching, answering the age-old question "Why do I have to learn this?" by elegantly incorporating "real world" applications into the classroom. Instructors experienced a fresh approach to teaching math, science, and technology standards by tackling real-world issues in inspiring classroom experiences. Spaceflight Academy for CESA District #7 utilized award-winning approaches to advance an educator's knowledge base by employing a fun, hands-on approach to learning.

For this year’s workshop, the target audience was instructors in the Cooperative Educational Service Agency #7 district. The workshop was set up to provide 20 instructors with a full day of instruction and over $100 per person in materials and resources to take back to their respective classrooms. Lunch was also provided to all participants. After extensive advertising for this grades 3-12 workshop, a total of 25 instructors pre-registered. Forty-eight percent were middle-level instructors, while the remaining fifty-two percent were high school instructors.

Because of the variety of teaching levels present, the workshop was broken into segments by level: elementary/middle-level and high school topics. The workshop was co-facilitated; the AM sessions were devoted to elementary/middle-level discussions, while the PM session was devoted to high school topics. During these sessions the following topics and activities were covered: perception and paradigm shifts; center of gravity; Bernoulli implications; Newton’s universal law of gravitation; various orbital shapes (conic sections); circular orbits; geosynchronous orbits; elliptical orbits; spaceflight mathematics; history of human space exploration; and technologies of space exploration.
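Several of these topics lend themselves to short classroom computations. As one hedged illustration (a sketch, not part of the actual workshop materials), the altitude of a geosynchronous orbit follows from Newton's universal law of gravitation by equating gravitational and centripetal force and solving for the orbital radius:

```python
import math

# For a circular orbit, G*M*m/r^2 = m*(2*pi/T)^2 * r,
# which gives r = (G*M*T^2 / (4*pi^2))**(1/3).
GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1      # Earth's rotation period, s
EARTH_RADIUS_KM = 6378.1    # equatorial radius, km

def geosynchronous_altitude_km():
    """Altitude above Earth's surface of a geosynchronous orbit, in km."""
    r_m = (GM_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
    return r_m / 1000 - EARTH_RADIUS_KM

print(round(geosynchronous_altitude_km()))  # ~35786 km
```

The same formula, with a different period T, recovers any of the circular-orbit cases discussed in the sessions.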

1 The main financial support for this workshop was provided by the Wisconsin Space Grant Consortium. Additional support was provided by the following sponsors: Spaceflight Fundamentals, LLC; Science Kit & Boreal Labs; and the Green Bay Area School District.

Evaluation Results: At the conclusion of this workshop, the following questions were asked on an evaluation form. The results of this evaluation are based on a 100-point scoring system, with 100% = strongly agreeing with the provided statement, 80% = agreeing with the statement, and so on.
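The mapping behind this scoring system can be sketched as follows; the values below the top two anchors are assumptions for illustration, since the text spells out only 100% and 80%:

```python
# Hypothetical 5-point Likert-to-percent mapping implied by the text;
# only the top two values (100 and 80) are stated in the report.
SCORE = {
    "strongly agree": 100,
    "agree": 80,
    "neutral": 60,
    "disagree": 40,
    "strongly disagree": 20,
}

def item_score(responses):
    """Average percentage score for one evaluation item."""
    return sum(SCORE[r] for r in responses) / len(responses)

# Four "strongly agree" plus one "agree" average to 96%.
print(item_score(["strongly agree"] * 4 + ["agree"]))  # 96.0
```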

1. My exposure to this project has increased my knowledge/understanding in space, aerospace, space-related science, design, and technology. [Score = 91%.]

2. Student exposure to this project could increase an interest in space, aerospace, space- related science, design, and technology. [Score = 94%.]

3. The project has self-sustaining/replicable qualities due to the fact that the participants are trained and supplied with the basic materials to go out and duplicate in their classrooms the work that was incorporated in this workshop. [Score = 97%.]

4. The project meets the goal of Teacher Training which is defined as successfully educating, training, and exciting teachers about the math, science, technology, and history pertaining to spaceflight. [Score = 93%.]

5. The instructors were knowledgeable about the subject matter that was being taught. [Score = 100%.]

6. The workshop was well organized. [Score = 97%.]

7. The instructors’ presentation style was well suited for the audience in attendance. [Score = 96%.]

8. I am pleased with the information and materials that I received as part of this workshop. [Score = 99%.]

9. What is the total number of students that you teach per day? [*Based on the number of instructors present and their teaching assignments, a total of 1735 students will be positively impacted by this workshop.]

10. How do you plan to implement this material into your classroom curriculum? [*The following are highlights of responses to this question]

“I want to work it into current labs that I currently use. It also gives me some grabbers to get the student’s thinking.”
“Use the activities/projects as they relate to my curriculum and the core standards.”
“I hope to use these activities in my room with kids to get them excited about space.”
“I like the ‘make and take’ projects. I will be implementing many of these at the beginning of the year when teaching ‘observation and problem solving’ skills.”
“I enjoyed the challenge activities. I would like to use some of the activities at the beginning of the year as team building/ice breaking activities.”
I will incorporate this as a “new unit in physics”.
“I will use the Bernoulli materials in my physics and microgravity.”
I will implement the materials into “all classes”.
“I plan on using the software to integrate technology into the science curriculum. I will use many of the hands-on activities…all mainly in earth science courses.”
“I will use the software program in my astronomy classes and the rocket launcher in physics.”
“I will add it to my astronomy unit.”

11. Please express any additional comments regarding the workshop and/or instructors. [*The following are highlights of responses to this question]

“Extremely well done. Practical and easy to duplicate. Great presentation methodology.”
“It is so nice to leave with materials I can use immediately in my classroom! Also learning new perspectives to share.”
“I am so glad I attended, as this is one of the most useful workshops I’ve attended.”
“This was a great workshop. I enjoyed the pace.”
“Wonderful – Very useful information.”
Presentations were “excellent for all levels”.
“Software is very cool. Students will like to do this.”
“This was a fantastic variety of projects covering very relevant topics in science.”
“Nicely paced and kept busy with hands-on activities.”

Evaluation Analysis: Based upon the positive evaluations and comments of these grades 3-12 teachers, there should be a definite increase in interest in space, aerospace, space-related science, design, and technology, and in awareness of their potential benefits, among students in CESA District #7. Based on our evaluations, this project should give secondary (pre-college) students in CESA District #7 the opportunity to increase their interest, recruitment, experience, and training in the pursuit of space- or aerospace-related science, design, or technology.

The project has self-sustaining, replicable qualities because the instructors who were trained were supplied with the basic materials to duplicate in their own classrooms the work incorporated in our workshop. The goal is for teachers to go back to their classrooms and replicate this work for their students, engaging the “multiplier effect.” Through this effect, teachers are able to provide their students with exposure to this exciting curricular approach. For the 2012-2013 school year, 1735 students will have the opportunity to be exposed to this worthwhile curriculum. Based upon the amount of grant money received from the Wisconsin Space Grant Consortium (WSGC) and the number of students each registered instructor teaches, it cost WSGC an average of only $1.72 per student to run this workshop, which is an amazing investment!
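As a back-of-the-envelope check, the per-student figure is simply the grant total divided by the number of students reached. The grant total is not stated in the report, so the amount below is a hypothetical figure inferred from $1.72/student times 1735 students:

```python
# Hypothetical grant total inferred from the reported per-student cost;
# the actual WSGC award amount is not stated in the report.
GRANT_TOTAL_USD = 2984.20
STUDENTS_REACHED = 1735  # total reported by the registered instructors

cost_per_student = GRANT_TOTAL_USD / STUDENTS_REACHED
print(f"${cost_per_student:.2f} per student")  # $1.72 per student
```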

The goal for this program is to have it ultimately offered throughout the state. Based upon the workshop’s evaluations, the project certainly met this year’s specific goal of Teacher Training. The whole purpose of the project is to educate, train, and excite teachers about the math, science, technology, and history pertaining to spaceflight, and this workshop definitely and successfully accomplished that feat.

Alignment with the Science Mission Directorate (SMD): Earth Science, Heliophysics, Planetary Science, and Astrophysics. The scientific investigation of the Earth, Moon, Mars and beyond with emphasis on understanding the history of the solar system, searching for evidence of habitats for life on Mars, and preparing for future human exploration.

This workshop prepared teachers to introduce our next generation of scientists to aerospace-related fields. As we prepare for future space exploration, we will need many new scientists and engineers to accomplish this endeavor. Our workshop’s goal was to train teachers who in turn will train those future scientists and engineers.

Human Exploration and Operations (HEO): The HEO Mission Directorate provides the Agency with leadership and management of NASA space operations related to human exploration in and beyond low-Earth orbit. The workshop focused on past and future exploration system development needs, human spaceflight capabilities, and advanced exploration systems.

Educational Standards: The National Research Council’s (NRC) Science Education Standards were addressed throughout the workshop. Special emphasis was put on the following Standards areas: The Teaching Standards: Guiding and Facilitating Learning & Building Learning Communities; The Professional Development Standards: Learning Science Content, Learning To Teach Science, & Learning To Learn; and The Content Standards: Scientific Inquiry, Technological Design, & Science and Technology.

Participants: The workshop was limited to 20 science and/or math instructors and was made available on a first-come, first-served basis. Spaceflight Fundamentals, LLC fully complies with the Americans with Disabilities Act of 1990 (ADA), Section 504 of the Rehabilitation Act of 1973, and their amendments, all of which prohibit discrimination on the basis of disability in admission to, access to, or participation in programs or activities.

Location of Project: The workshop was advertised to science/math teachers (grades 3-12) in the CESA #7 district (Green Bay area), with advertisement coordinated through school districts in the Green Bay / CESA #7 area. The workshop was held at Lombardi Middle School, 1520 South Point Road, Green Bay, WI. Depending on future funding, follow-up (Part 2) workshops and additional initial (Part 1) workshops could be set up around the state.

Work Plan: Our work plan involved an eight-hour workshop. In those eight hours, we focused on the concepts of spaceflight through a variety of hands-on activities (labs, simulations, computations, etc.) and discussions. High emphasis was placed on cooperative work and constructivist approaches fueled by facilitator-led Socratic dialogue. The goal was to give the instructors the chance to infuse their new knowledge into their respective curricula, with the hope that a follow-up workshop can be funded to further this focus.

General Information: Spaceflight Fundamentals, LLC is a small company dedicated to the advancement of aerospace education in the classroom. For the past eleven years, our company has been authoring and publishing educational materials on aerospace education in the state of Wisconsin. In those eleven years, we have also had the opportunity to organize and instruct several teacher graduate-course workshops. The workshops have always been well received and have made definite positive impacts in our state’s classrooms. Our hope is that workshops like these will provide further opportunities to educate and motivate the current and next generations of instructors and students in aerospace education in Wisconsin. We look forward to creating future proposals and activities for teacher aerospace workshops and to broadening our ability to work with other state organizations that share the same goals. Based on feedback from our workshop, we know that teachers and students have benefited from our activities. It is our hope to continue being a positive force in the WSGC’s community outreach efforts while helping to nurture and grow the aerospace industry in the State of Wisconsin.

Students Teaching Astronomy Related Science (STARS)

Reynee Kachur1, Sara Seidling1, and Lindsey DeVries2

1Science Outreach, University of Wisconsin, Oshkosh, WI 2Lighted School House, Oshkosh Area School District, Oshkosh, WI

Abstract The Students Teaching Astronomy Related Science (STARS) program was designed to promote interest in space-related science among the Oshkosh Area School District Lighted School House (LSH) K-5th grade students. Through a partnership between UW Oshkosh Science Outreach and the LSH, the STARS program provided 60 K-5th graders in the LSH program with 12 weeks of hands-on learning experiences dealing with space and aerospace content outside of the regular school day and the normal science curriculum. The astronomy-related science curriculum developed for this program utilized activities from UW Oshkosh Science Outreach and NASA, and included two visits to the Buckstaff Planetarium for age-appropriate planetarium shows. Results of this project indicate that the program succeeded in increasing knowledge of space-related science, bringing science to “life” for elementary students, and making space and aerospace education exciting and engaging.

Introduction The University of Wisconsin Oshkosh (UW Oshkosh) is a state-assisted, non-profit institution of higher education. Its mission is to provide undergraduate, select graduate, and continuing/professional education opportunities. UW Oshkosh also engages in research and serves as a regional educational, cultural, and economic development resource. The Science Outreach program, housed at UW Oshkosh in the College of Letters and Science, is devoted to making science education accessible, exciting, and up-to-date for students, teachers, and the community. The Science Outreach department’s goal is to dispel science misconceptions and support the University’s mission through interactive educational experiences at the elementary, middle, and high school levels.

The Oshkosh Area School District’s (OASD) Lighted School House (LSH) program at Washington Elementary School in Oshkosh offers academic and enrichment opportunities to students before and after regular school hours. Washington Elementary is kept lit during an extended day to symbolize the great range of formal and informal learning activities available to students, even after the regular school day is finished. This program has been highly successful and provides services to over 60 students and many community members each semester.

The OASD LSH program is federally funded by a 21st Century Community Learning Center (CLC) grant. The CLC grants, created under the No Child Left Behind Act, are awarded through a competitive process that provides funding to schools that fail to meet academic benchmarks and/or have high poverty rates. Washington Elementary School in Oshkosh was awarded this grant to provide academic and enrichment opportunities to its economically disadvantaged and minority students, so that they are exposed to the same variety of programs that enhance educational and social experiences as their middle and upper socioeconomic class peers.

Thank you to the Wisconsin Space Grant Consortium for funding this project.

The LSH after-school program offers a nutritious snack at 2:45 p.m. followed by one hour of homework help. This is then followed by one hour of enrichment activities that have been guided by research to benefit children’s social, physical, character, and academic development. These activities include physical fitness, nutrition, conflict resolution, social skills, leadership skills, career exploration, service-learning projects, team building, injury prevention, arts and crafts, foreign languages, and science and nature. All activities occur in a safe, healthy, and positive environment.

The Students Teaching Astronomy Related Science (STARS) program is a fusion of the UW Oshkosh Science Outreach program with the LSH program. This blend of after school enrichment activities focused on space-related science targeted at K-5th graders was designed to increase the knowledge of space-related science content, bring science to “life” for students, and to encourage students to pursue additional STEM curriculum or careers. By targeting the LSH K-5th grade students with real-world applications of science, students can build confidence in science and math and develop positive attitudes towards science in general.

Program Details A total of six UW Oshkosh college students employed by Science Outreach participated in this 12 week program of providing hands-on aerospace-related science instruction to approximately 60 LSH K-5th graders (on average 10 students per grade level per week). In addition, three UW Oshkosh National Science Teacher Association members and one Environmental Health major volunteered their time at Washington Elementary School for this program.

A schedule of curriculum activities was developed by UW Oshkosh Science Outreach and the LSH program coordinators (Table 1). Based on the UW Oshkosh and LSH calendars, a total of 12 weeks were identified for UW Oshkosh Science Outreach to visit Washington Elementary for the STARS program. The program began on September 29, 2011 and ended on January 19, 2012. To improve the student-to-teacher ratio and the number of adults familiar with each activity, the K-5th grade students were divided into two groups: K-2nd graders and 3rd-5th graders. Two Science Outreach teaching assistants (TAs) were assigned to each group. Each week a different set of activities was planned for each age group to address the myriad aerospace science themes. Table 1 outlines the activity plan and schedule for the STARS program. Each week, Science Outreach TAs would prepare, gather supplies, learn the content, and travel from the university to Washington Elementary to teach the science concepts through hands-on activities.

The curriculum was selected to cover a wide variety of astronomy-related topics: the solar system, comets, individual planets, the exploration of space, and human survival in space. In addition, two planetarium shows (Monsters in the Sky and Earth and the Solar System) were revised and updated to provide additional content for the STARS participants.

Results This program enabled both the college students employed by Science Outreach and the K-5th grade students at Washington Elementary to learn new aerospace curriculum. All results for this grant are qualitative: informal verbal surveys of the college students by the grant directors, and informal verbal surveys of the K-5 students by the LSH coordinators and the grant director.


For the UW Oshkosh Science Outreach college teaching assistants, this experience provided a weekly opportunity to “be” in a classroom in front of a group of students. Every teaching assistant who participated in this program developed classroom management skills and a thorough understanding of the science content, and broadened their understanding of science and teaching. By participating in this program, each teaching assistant gained confidence in their teaching skills and experience in teaching science to K-5 students.

For the K-5th graders in this program, the exposure to new curriculum and activities they would not normally get during their regular school year allowed these students to think outside of the box and provided an introduction to science in an informal environment. In addition:

This program got the K-5th grade students excited about science and astronomy. Every week, the students would look forward to the aerospace activity and then ask what they were doing next week. – LSH Coordinators

This enthusiasm was contagious – spreading from the students, to the Science Outreach teaching assistants, to both program coordinators. Pairing college students with elementary students also fostered a role model mentality, with the K-5th grade students looking forward to when the “scientists” would be back.

Three of the Science Outreach teaching assistants in this program and all of the UW Oshkosh NSTA chapter volunteers were women scientists. Having positive female scientist role models helped encourage more of the elementary LSH girls to become excited about doing science. Although not one of our original goals, encouraging young girls to participate in and try more science became an offshoot of this grant and will be included in future grant objectives and goals.

The hands-on activities covered a variety of student interests and learning styles, and allowed students to build more confidence in doing scientific activities. The STARS program engaged students in the LSH program in a space related curriculum in order to nurture the interest in STEM careers.

Conclusion Through the STARS program, UW Oshkosh Science Outreach used the NASA curriculum for many of the STARS hands-on activities. Since the completion of the STARS program, Science Outreach has used many of these activities at other programs in the area, enhancing the sustainability of this program. In addition, the relationship between Science Outreach and the LSH program has been strengthened and will continue to provide science activities to encourage elementary students to have success in science in the future.


Table 1: Schedule for the 2011-2012 STARS program.

Date | Topic | White Dwarfs (K-2) | Red Giants (3-5)
Sept. 29th | The Moon | Moon Cookies; Moon Cratering | How Can the Little Moon Hide the Giant Sun?
Oct. 6th | Space Exploration | Egg Drop | Egg Drop
Oct. 13th | Space Exploration | Film Canister Rockets; Wind Tunnel | Paper Rockets (Build and Test Launch); Rocket Science with Balloons
Oct. 20th | To the Sun | How Big is the Sun? | Paper Rockets (Modify and Launch)
Oct. 27th | No Lighted School House | |
Nov. 3rd | Solar System | Planetarium Show: Monsters in the Sky! | Planetarium Show: Monsters in the Sky!
Nov. 10th | Life on Mars | Edible Mars Rovers; Can Things Live Here? | Do the Mystery Samples Contain Life? An exploration of finding life in space
Nov. 17th | No Lighted School House | |
Nov. 24th | THANKSGIVING – No Lighted School House | |
Dec. 1st | Constellations | Stories in the Stars; Star Finders | 3D Constellations; Star Finders
Dec. 8th | Solar System | Planetarium Show: Earth and the Solar System | Planetarium Show: Earth and the Solar System
Dec. 15th | No Lighted School House | |
Dec. 22nd | No Lighted School House | |
Jan. 3rd | Solar System | Make Play Dough; A 3D Model of the Earth and Moon | Make Play Dough; A 3D Model of the Earth and Moon
Jan. 5th | Surviving in Space | Food Preparation for Space | Food Preparation for Space
Jan. 17th | Galaxies | Galactic Mobile | Galactic Mobile
Jan. 19th | Comets | Edible Comet; Comet on a Stick | Make a Comet; Edible Comet

Launching STEM Interest: Using Rockets to Propel to Excel in STEM
Results of the Lift-Off for Teachers and Youths (LOFTY) Program

Reynee Kachur, Michelle Fleming, and Sara Seidling

Science Outreach, University of Wisconsin, Oshkosh, WI

Abstract The Lift-Off For Teachers and Youths (LOFTY) program brought together the UW Oshkosh Science Outreach Program within the College of Letters and Science and faculty from the College of Education and Human Services to provide a space-related science learning opportunity for in-service teachers that, in turn, excites and engages the students they teach in aerospace-related science, design, and technology. This project dovetailed nicely with many of the other hands-on science programs already conducted by Science Outreach, while at the same time filling a void by increasing the content knowledge of elementary teachers and increasing the interest in, and hands-on experience with, space-related science for elementary students in Wisconsin. The LOFTY project also emphasized current NASA education goals, including helping educators and students develop critical skills and a knowledge base in space-related science. By bringing in elements of an in-service hands-on teacher training, cross-disciplinary discussions to incorporate a rocket unit into each subject, planetarium shows at the Buckstaff Planetarium, and the framework of the Science Olympiad rules and values, the LOFTY project increased interest in and excitement for science, technology, engineering, and mathematics (STEM).

Introduction UW Oshkosh is a state-assisted, non-profit institution of higher education. Its mission is to provide undergraduate, select graduate, and continuing/professional education opportunities; engage in research and other scholarly activity; and serve as a regional educational, cultural, and economic development resource. The University of Wisconsin Oshkosh and the project leaders have several strengths relevant to this project.

First, the Science Outreach Program, housed at the University of Wisconsin Oshkosh (UW Oshkosh), is devoted to making science education accessible, exciting, and up-to-date for students, teachers, and the community, by dispelling misconceptions through hands-on experiences and by supporting the University's mission. Throughout the school year, Science Outreach hosts a variety of preK-12th grade groups on the UW Oshkosh campus for learning programs in college science labs, and also visits local elementary, middle, and high schools to provide educational programs on location. In addition, Science Outreach has a long tradition of providing high-quality, content-rich summer science workshops for in-service teachers, including the Operation Chemistry, SUMmer Science Workshop (SUMS), Children’s Literature and Science Project (CLASP), and Science Teaching through Universal Design and Inquiry (STUDI) programs.

Science Outreach also runs the Buckstaff Planetarium located on campus and provides astronomy and planetarium shows for UW Oshkosh students and the community. Science Outreach strives to develop and deliver educational and informational planetarium shows in the Buckstaff Planetarium, while maintaining a smaller, informal atmosphere that encourages interaction between our show presenters and the audience. We believe this focus provides a greater astronomy learning experience for all ages in the audience.

In addition, Science Outreach hosts the Wisconsin Division B Middle School Science Olympiad competition. Every year, nearly 40 middle school teams from around the state compete in 23 events during the state Science Olympiad competition held at UW Oshkosh. Science Olympiad combines events from all disciplines to encourage a wide cross-section of students to get involved, and the events reflect the ever-changing nature of science. Each year, at least one event is devoted to astronomy and space science. In 2011, one of these events was Bottle Rocket, in which competitors design and build a bottle (water) rocket with the goal of keeping it in the air the longest. This event requires a combination of knowledge about space science, technology, and mathematics.

Second, UW Oshkosh has one of the top programs in the state for educating teachers. Recent innovations include alternative licensure programs such as Alternative Careers in Teaching (ACT) to recruit and train more teachers in the areas of science, technology, engineering and mathematics. Under the leadership of Dr. Michael Beeth, this program has attracted major support from the National Science Foundation. Science Education was also prioritized by UW Oshkosh in its recent Growth Agenda, making it possible to hire more faculty. These new hires include Dr. Michelle Fleming, who has built up new capacities for the university, including organization and recruitment of a large student chapter of the National Science Teachers Association (NSTA).

Through this project, the successful elements of programs offered by Science Outreach and the elementary education leadership of Michelle Fleming were combined to provide a space-related science learning opportunity for in-service teachers, which in turn excites and engages the students they teach in aerospace-related science, design and technology. By bringing together elements of hands-on in-service teacher training, cross-disciplinary discussions on incorporating a rocket unit into each subject, planetarium shows at the Buckstaff Planetarium, and the framework of the Science Olympiad rules and values, the overarching goal of the LOFTY program was to increase interest in and love for science, technology, engineering and mathematics (STEM).

Program Details

In a two-day workshop (two 7-hour days) held on Saturdays, September 24 and October 8, 2011, seven elementary teachers came to the UW Oshkosh campus to learn the science concepts behind the design and construction of bottle (water) rockets through problem-solving, hands-on, minds-on constructivist learning practices. The schedule and activities selected for this workshop were chosen to help these elementary teachers learn rocketry fundamentals, incorporate the lessons into their current classroom curriculum, help their students learn the principles of rocket engineering, employ the scientific method to design and construct a bottle rocket, and test student-built rockets at their school.

During both sessions with the in-service teachers, an emphasis was placed on discipline crossover to allow teachers to spend more time discussing science. Examples of cross-discipline ideas shared during the LOFTY sessions included fiction and non-fiction books for use during reading, review of a science notebook for writing, and accounting and measurements for math. Each participant was charged with coming up with additional ways to blend science with other subjects taught throughout the day to increase student thinking about science outside of a true science class. By interweaving science throughout the day and in several subjects, these 4th and 5th grade teachers can make science less intimidating for students and build student confidence in science disciplines.

A major focus on both workshop days was the concept of an interactive science notebook: a student learning tool that records both the content learned and the reflective knowledge gained (Marcarelli, 2010). A science notebook allows students to “become” scientists and record all their experimental data, notes, results, and interpretations of the results in one place, just as a true scientist would. In addition to capturing teacher-led discussions (content), the science notebook asks students to reflect, assess, and make connections; these entries provide important insights into student understanding and misconceptions, and can serve as a formative assessment tool (Hargrove & Nesbit, 2003; Gilbert & Kotelman, 2005). Some of the “key ideas” and reasons to try a science notebook in a classroom were shared with the participants and included:

Key Ideas:
• Interactive journaling will make a difference!
• Students are actively engaged in thinking and communicating.
• Students feel “ownership” because they are creating meaningful knowledge for themselves.
• There’s no “right” or “wrong” way.
• Modify to find ways that work best for you and your students.

How Notebooks Promote Learning:
• Improve organization skills
• A concrete record of reflection, assessment, and connections
• Improve critical thinking skills
• Express understanding creatively
• Connect student thinking and experiences with science concepts
• Engage students
• Provide opportunities for all students

During the two days of LOFTY, participants experienced a wide variety of aerospace activities and/or lesson plan ideas that they could take back to their school (each teacher was given copies of the curriculum to incorporate building rockets into their current 4th or 5th grade curriculum). An outline of the participant topic/schedule is provided in Table 1. The culminating event was a Bottle Rocket competition based on the 2011 Division B Science Olympiad Bottle Rocket event rules. Participants constructed and launched two rockets in a mock competition. This activity better prepared these teachers to help their own students understand and construct bottle rockets during the school year. Asking the participants to go through the building process during the training prepared them for the materials students would need, and the ideas and misconceptions students might experience. Throughout each activity or lesson, participants were asked to use a science notebook to reinforce the science notebooking concepts and to model the practice. Participants could then take the parts of the science notebook that worked best for them and incorporate those ideas into their classroom as well.

For their participation in the LOFTY project, the participants were provided with a bottle (water) rocket launcher. By providing the launchers, teachers in the school/district now have easy access to the equipment in order to continue to teach the rocket unit beyond the life of this grant. In addition, upon completion of the requirements of the LOFTY project (participating in the workshop, teaching a rocket unit in their classrooms during the 2011-2012 school year, and writing a reflection paper), each participant received one graduate credit from the UW Oshkosh College of Letters and Science.

Table 1: Schedule of the 2011 LOFTY Summer Workshop for 4th and 5th grade teachers.

Time     | Saturday, September 24, 2011                               | Saturday, October 8, 2011
8am      | Welcome, Introductions, Course Overview, Pre-course Survey | A multi-disciplinary approach to launching a rocket
9am      | Science Notebooks and Rocket History                       | Rocket Stability
10-11am  | Rocket Principles: Propulsion and Aerodynamics             | Water Rocket Bottle Building and Tests
12pm     | Planetarium: Mars Show                                     | Altitude Tracking
1pm      | Rocket Principles: Aerodynamics                            | Water Rocket Competition and Putting it all Together
2pm      | Designing a Rocket                                         | (continued)
3pm      |                                                            | Post-course Survey

Results

Participant pre- and post-test questions on science content knowledge, understanding of the scientific process, and ability to incorporate the rocket-science activities, lessons, and ideas into existing curriculum (multiple choice, short answer, and Likert-scale questions) were administered as part of this program as one indicator of participant learning. Overall, participants showed a positive change in science knowledge from the pre- to the post-test on rocket and flight concepts. For specific rocket content, such as the parts of a rocket, there was an increase in the ability to correctly identify the various parts (nose cone, fins, nozzle). For example, only 40% of participants correctly identified the nose cone of a rocket on the pre-assessment, whereas 60% gave the correct response post-workshop; 50% correctly identified the nozzle portion of the rocket on the pre-test, but all participants did so post-workshop. In addition, more participants were able to correctly identify how to stabilize a rocket on the post-assessment (3 participants pre-workshop, increasing to 7 post-workshop). With an improved rocket-science knowledge base, teachers had an increased comfort level teaching rocketry principles in their science classes, enhanced confidence in teaching science concepts (pre: M=2.78, SD=0.44; post: M=3.33, SD=0.71; p=0.013) and in particular in teaching rocketry principles in science classes (pre: M=2.11, SD=1.05; post: M=3.39, SD=0.60; p=0.002), and gained confidence in their own skills and abilities to teach science (pre: M=2.50, SD=0.71; post: M=2.00, SD=0.71; p=0.005).
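The comparisons above report paired pre/post means on Likert items, the kind of comparison typically made with a paired-samples t-test. A minimal standard-library sketch is below; the scores are made up for illustration, not the actual LOFTY responses, and `paired_t` is a hypothetical helper name.

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic for matched pre/post scores (df = n - 1)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    # t = mean difference divided by the standard error of the differences
    return mean(diffs) / (stdev(diffs) / math.sqrt(n)), n - 1

# Hypothetical 4-point Likert responses for seven teachers (illustrative only)
pre = [2, 3, 3, 2, 3, 3, 3]
post = [3, 3, 4, 3, 3, 4, 4]

t, df = paired_t(pre, post)
print(f"pre:  M={mean(pre):.2f}, SD={stdev(pre):.2f}")
print(f"post: M={mean(post):.2f}, SD={stdev(post):.2f}")
print(f"t = {t:.2f} on {df} df")
```

The p-values reported in the text would then come from the t distribution with n - 1 degrees of freedom.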

As another measure of the success of the LOFTY program, participants were asked to evaluate changes in student content knowledge and student understanding of the scientific process quantitatively through a method outlined in the aeronautics unit (i.e., pre- and post-tests), and to evaluate student changes in attitudes and beliefs toward science and STEM disciplines during the unit. All student data were provided without student identifiers and only as raw scores for the pre- versus post-test assessment. Across all participants, nearly all students had an increase in content knowledge (based on pre- and post-test scores on the participant-created/adapted rocketry unit). In particular, one participant saw a minimum 20% increase in each student’s score from the pre- to the post-assessment. In addition, many students experienced a change in attitude toward science. All of the participants qualitatively reported an increase in student enjoyment of science after the rocket unit, and that students enjoyed the hands-on activity of creating a rocket:

“The main thing students liked was creating, making something, having a hands-on experience in science. Many students seemed almost surprised that they had enjoyed the science unit. They said in the past science had been worksheets and packets and they didn’t like science very much. They told me they hoped we would continue to do more hands-on activities for the rest of the year.”

In addition, students reported using technology (computers or graphic calculators) more after the unit (pre: M=2.17, SD=1.30; post: M=3.39, SD=0.99; p=0.001), and saw connections between science and other classes (pre: M=2.65, SD=0.93; post: M=3.39, SD=0.66; p=0.004).

Conclusion

By integrating a space-related science unit into 4th and 5th grade classrooms, students become actively engaged in science at a young age. Creating a bottle (water) rocket allows these students to practice the scientific method, learn science in a hands-on way, use engineering and mathematics skills, and express their creativity. A cross-discipline approach to science, using science notebooks and incorporating a unified theme through all the subjects (history, art, math, social studies), gives students greater exposure to science, makes connections between science and other disciplines, and increases student comfort with science. By giving teachers lessons and activities and having them work through each lesson or activity, teachers learn the content, build their own confidence with the content, and are better able to help students understand the material and teach students to solve problems through real-world applications of the scientific method. All this, while having fun building a bottle rocket.

The underlying theme behind the student attitude survey results was that past experiences (good and bad) in science have shaped student ideas about science. Changing those ideas, and showing students how science relates to the world around them, takes a hands-on, cross-disciplinary unit that students can become actively and creatively involved in. Showing students that science learning can overlap into other subjects through cross-disciplinary teaching creates a greater appreciation and love for science.

Overall, the best part of the LOFTY project was the teachers’ enthusiasm, which carried over into the classroom. Nearly all participants qualitatively indicated that their students were engaged, excited, and interested in the rocketry unit. In many cases, students in older grade levels who did not get to participate in creating a rocket were upset that their younger counterparts got the opportunity to do so, while students in the younger grade levels were asking their teachers when they would get to make the rockets. Beyond the impact on the students, even teachers within the same building/district are asking the LOFTY participants to share their rocketry units.

The inquiry-based, hands-on science learning opportunities provided by the LOFTY program not only enriched current in-service teachers’ curriculum, but also enhanced students’ scientific learning. Through this experience, participants improved their ability to integrate building bottle rockets into their curriculum and to confidently answer questions about rockets. This hands-on, real-world application approach to learning space-related science enabled students to experience how science works, generated excitement about science and rockets, and built confidence in science and mathematics skills. Thus, this project helped students enjoy science classes and develop positive skills in science that will last a lifetime.

References

Gilbert, J., & Kotelman, M. (2005, December). Five good reasons to use science notebooks. Science and Children, 43(3), 28-32.

Hargrove, T., & Nesbit, C. (2003). Science notebooks: Tools for increasing achievement across the curriculum. (ERIC Document Reproduction Service No. ED482720). Retrieved January 10, 2012, from http://www.ericdigests.org/2004-4/notebooks.htm

Marcarelli, K. (2010). Teaching Science With Interactive Notebooks. Thousand Oaks, CA: Corwin.

A Celebration of Life XVII: Geology on Earth and Mars! Summer Science for Grades 3-5 and 6-8

Barbara Bielec

BioPharmaceutical Technology Center Institute

Abstract

The primary goal of “A Celebration of Life” is to support the continued development of African American and other students’ interest in science, and to assist in providing them with the tools for success in school. A long-term goal is to increase the number of minority students who successfully complete high school science courses and who choose to pursue STEM careers. In partnership with the African American Ethnic Academy, Inc. (AAEA), a Madison non-profit organization, the BioPharmaceutical Technology Center Institute (BTC Institute) offered "A Celebration of Life XVII: Geology on Earth and Mars!" during summer 2012. Two two-week sessions, one for elementary and one for middle school students, were held weekday mornings at the BioPharmaceutical Technology Center in Madison, Wisconsin. These programs represent a 17-year collaboration between AAEA and the BTC Institute that prioritizes offering a rich range of hands-on science activities for students.

Introduction

The primary goal of “A Celebration of Life” is to support the continued development of African American and other students’ interest in science, and to provide them with tools for success in school. A long-term goal continues to be increasing the number of minority students who successfully complete high school science courses, and who may eventually choose to pursue science, technology, engineering and math (STEM) careers. Extensive efforts are made to ensure participation of students from economically challenged families through the provision of scholarships and transportation.

Program Details

The program theme for 2012 was Geology on Earth and Mars! The elementary program was held weekday mornings June 18-29; the middle school program was held July 2-13. For both sessions, hands-on activities in outdoor, classroom and laboratory settings were designed to engage students’ interest in science and STEM careers. This was accomplished through a series of activities about the exploration of geology on Earth and Mars that were related to the Mars Rovers, including Curiosity. Program activities reflect the Wisconsin Model Academic Standards for Science, which follow the form and content of the National Science Education Standards. Many of the educational activities were from the NASA Summer of Innovation Project (http://www.nasa.gov/offices/education/programs/national/summer/home/index.html, 2012).

The BTC Institute is pleased to acknowledge the Wisconsin Space Grant Consortium Special Initiatives Program and the NASA Summer of Innovation Project for their financial support.

Over two-thirds of the student participants were African American, and 79% of all participants belong to an underrepresented minority group. Many participants received scholarships and transportation to facilitate their participation in the program. A total of 29 students (13 girls, 45%; 16 boys, 55%) participated in developmentally appropriate learning.

Table 1: Gender of Participants in A Celebration of Life XVII: Geology on Earth and Mars!

Program                                 | Total Participants | Girls | Boys
Geology on Earth and Mars Elementary    | 18                 | 9     | 9
Geology on Earth and Mars Middle School | 11                 | 4     | 7
Total                                   | 29                 | 13    | 16

Table 2: Ethnicity of Participants in A Celebration of Life XVII: Geology on Earth and Mars!

Program                                 | Total Participants | African-American | Hispanic | Other
Geology on Earth and Mars Elementary    | 18                 | 10               | 3        | 5
Geology on Earth and Mars Middle School | 11                 | 10               | 0        | 1
Total                                   | 29                 | 20               | 3        | 6

All specific topics for both sessions of the summer 2012 Geology on Earth and Mars! program were related to NASA’s exploration of Mars, and many of the educational activities used were designed by NASA to include the following content:

• rocks on Earth and Mars
• volcanoes
• geological evidence of water
• current NASA projects related to geology, including the Mars Rovers
• historic and contemporary African American science, technology, engineering and math (STEM) professionals, including those affiliated with NASA

Each session also included a field trip, and each concluded with student presentations of selected activities to their peers, family members and other adult guests on the last day. Students also shared their posters of African American STEM Professionals.

Results

Pretests and post-tests are administered as part of each AAEA/BTC Institute summer science program as one indicator of students’ learning. Overall, both elementary and middle school students showed an increased knowledge about geology, Mars exploration and African American STEM professionals.

For example, on the pretest only 21% (3/14) of the elementary students tested could name a single African American Scientist, Engineer or Mathematician, and two of the answers were “George Washington Carver” – a good, but certainly not current, example. On the post-test 94% (15/16) of the students could list 3 or 4 African American STEM Professionals we had featured.

Elementary students were also asked: “Earth’s atmosphere helps protect us from ______; Mars does not have much atmosphere, so there is little protection from this.” The multiple choices for the blank were: a. rain, b. ultraviolet light, c. gravity, d. space monsters. On the pretest 43% correctly answered “ultraviolet light”; on the post-test 88% did so.

Scientific content knowledge can be measured by the pre- and post-tests, providing information regarding one aspect of program assessment. Another key indicator of success is the number of students who had participated in previous AAEA/BTC Institute programs, or who had family members who were previous participants. Of the 29 elementary and middle school students in the 2012 summer program, 52% had previously participated or had family members who had participated, and 34% have participated for 3+ years. All but one of the middle school participants had previously participated in the program.
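The return-rate percentages above follow directly from the enrollment counts in Table 3; a quick sketch of the arithmetic is below. The 3+ year count of 10 is inferred here from the reported 34% of 29 and is an assumption, not a figure stated in the table.

```python
# Return-rate arithmetic for the 2012 summer program.
# Counts taken from Table 3; the 3+ year count (10) is inferred from "34% of 29".
total = 29             # all 2012 participants
returning = 15         # previous participants (or family of previous participants)
three_plus_years = 10  # assumed count behind the reported 34%

print(f"returning: {returning / total:.0%}")         # 15/29 rounds to 52%
print(f"3+ years:  {three_plus_years / total:.0%}")  # 10/29 rounds to 34%
```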

In addition, a former student, now in high school, who had attended the program for 5 years, volunteered as an assistant with both the elementary and middle school sessions. Also, three of the “graduating” 8th grade students expressed interest in volunteering for future summer programs.

The return rate of students, along with excellent attendance throughout both sessions, is strong evidence that this program is valued by the participants and their families. When students return for the third, fourth, fifth or sixth year, it is a strong indication of their interest in science programming.

Table 3: Participants in A Celebration of Life XVII: Geology on Earth and Mars!

Program                                 | Total Participants | Participants in Previous Programs | First Year of Eligibility (Grade 3)
Geology on Earth and Mars Elementary    | 18                 | 5                                 | 3
Geology on Earth and Mars Middle School | 11                 | 10                                | NA
Total                                   | 29                 | 15                                | 3

The following local news article helps describe the impact that the AAEA/BTC Institute program Geology on Earth and Mars! had on its participants and the community. From The Madison Times, July 18, 2012, “A Celebration of Life XVII Geology on Earth and Mars”, by David Dahmer:

“On July 13, a group of middle school students celebrated the completion of “A Celebration of Life XVII” program, a two-week course offered through the collaborative efforts of the African American Ethnic Academy (AAEA) and BioPharmaceutical Technology Center (BTC) Institute. This program affords children an opportunity to expand their scientific knowledge by conducting hands-on experiments.

“Great minds got together and a wonderful program was born 17 years ago,” says Barbara Bielec, BTC Institute K-12 program director. “I’ve had the fortune to work with this program for 10 years now.”

This year’s theme was “Geology on Earth and Mars!” It was especially exciting for the kids because on Aug. 5, NASA’s 1-ton Curiosity rover, the centerpiece of the Mars Science Laboratory (MSL) mission, is slated to land on the Martian surface to investigate whether the planet is, or ever was, capable of harboring past or present microbial life.

“‘Curiosity’ is landing and this program is all about stimulating curiosity,” Bielec said. “We’re really trying to encourage the development of STEM [science, technology, engineering and math] professionals. We’re doing programs every summer to encourage kids to keep taking math and science courses so that we have a diverse workforce in the future.”

The students spend their summer mornings studying, learning, and going on field trips or listening to guest speakers who are often African-American STEM professionals who are related to the topic of the summer – this summer it was geology. A long-term objective is to increase the number of minority students who enroll in – and successfully complete – high school science courses, and who eventually choose to pursue scientific careers.

At the celebration, the kids took turns showing off things they learned during the summer course whether it be giant billboards of famous African American scientists and geologists or scientific experiments. Program founder Dr. Virginia Henderson handed out certificates to all of the youngsters before they had a pizza party.” (Dahmer, 2012)

Comments from the students who participated also speak to the impact of the Geology on Earth and Mars! program. When asked on the post test, “Would you like to become a STEM Professional when you grow up? Why or why not?” the responses (spelling corrected) included:

 “Yes. Because I am really interested in Science, Math and Technology. I also want to get smarter in any way I can. The smarter I get, the more successful I will be throughout the rest of my life.”

 “I do because I like all those things…and today I want to be a scientist when I grow up because it is fun doing and learning new things.”

 “Yes. I want to create Rovers and roads.”

 “I might be an Engineer because I really like building stuff.”

 “I’d like to be an astronomer like Eric Wilcots. Dr. Wilcots inspired me to be one with his vast knowledge of stars.” (Note: Dr. Eric Wilcots was one of our guest speakers and featured African American STEM Professionals. Dr. Wilcots is an Associate Dean at the University of Wisconsin-Madison and a Professor of Astronomy.)

Conclusion

The National Science Foundation (NSF) report entitled Women, Minorities, and Persons with Disabilities in Science and Engineering: 2011 noted that: “Underrepresented minorities [blacks, Hispanics, and American Indians] share of science and engineering bachelor’s and master’s degrees have been rising over the two decades since 1989, with shares of doctorates in these fields flattening after 2000. The greatest rise in science and engineering bachelor’s degrees earned by underrepresented minorities has been in the social, computer, and medical sciences fields of study.” However, this increase in STEM degrees still does not show equivalency with the population percentage of underrepresented minority groups in the U.S. population. From data presented in the NSF report, African Americans comprised only 3% of the “Scientists and engineers in science and engineering occupations: 2006”. The report concluded that: “The science and engineering workforce is largely white and male. Minority women comprise fewer than 1 in 10 employed scientists and engineers.” (NSF, Women, Minorities, and Persons with Disabilities in Science and Engineering: 2011) Supporting African American educational opportunities in science is essential to helping increase the number of African American students who will ultimately go into baccalaureate and graduate programs in science.

A National Science Teachers Association feature article about science education programs that successfully engage underrepresented students describes the importance of culturally relevant content: “Teachers should discuss African-American scientists throughout the course, not just during Black History Month. Although textbooks may still lack adequate connections between science content and minority scientists, the internet is a great source to find individuals linked to specific stages of scientific development. Minority students will identify with these role models, and thus begin to personalize the science concepts and consider careers in science.” (Bardwell and Kincaid, 2005)

This is in agreement with a study that specifically examined the influence of race and gender role models on young adolescents, where the author found that “the availability of race- and gender-matched role models showed a strong relationship to the developing identities of young adolescents. The availability of a race- and gender-matched role model was significantly and consistently predictive of a greater investment in achievement concerns on the part of these young adolescents.” (Zirkel, 2002). The AAEA/BTC Institute programs will continue to focus on African American STEM professional role models to help inspire students in these areas.

This approach is in alignment with the goal and objectives of the National Space Grant Program, 2010-2014. “The goal of the Space Grant Program is to contribute to the nation's science enterprise by funding education, research, and informal education projects through a national network of university-based Space Grant consortia.” One of the objectives of the Space Grant Program is to: “Promote a strong science, technology, engineering, and mathematics education base from elementary through secondary levels while preparing teachers in these grade levels to become more effective at improving student academic outcomes.” A second objective is to: “Recruit and train U.S. citizens, especially women, underrepresented minorities, and persons with disabilities, for careers in aerospace science and technology.” (National Space Grant College and Fellowship Program [Space Grant] 2010-2014)

The support provided by the Wisconsin Space Grant Consortium helps the A Celebration of Life! summer science program meet both of these objectives, and the overall goal of the Space Grant Program. A Celebration of Life! has helped to provide a “strong science, technology, engineering and mathematics education base” for both upper elementary and middle school students. The exploration opportunities provided by the summer program enrich and enhance students’ scientific knowledge and associated skills. It is also essential for students to see themselves in those roles. Learning about historic and contemporary African American STEM professionals as part of an exciting hands-on science program will help to recruit a diverse work force of problem solvers, scientists, inventors and engineers for “careers in aerospace science and technology”.

The National Space Grant Program 2010-2014 further defines one of its priorities: “NASA Education Priorities, Current Areas of Emphasis - Authentic, hands-on student experiences in science and engineering disciplines – the incorporation of active participation by students in hands-on learning or practice with experiences rooted in NASA-related, STEM-focused questions and issues; the incorporation of real-life problem-solving and needs as the context for activities.” (National Space Grant College and Fellowship Program [Space Grant] 2010-2014). The many NASA Education Mars exploration activities incorporated in the Geology on Earth and Mars! summer 2012 sessions were certainly successful examples of “active participation by students in hands-on science” learning related to current “real-life” NASA projects. Another name for the program? Curiosity?

References

Bardwell, Genevieve, and Kincaid, Eric. (2005, February 28). A Rationale for Cultural Awareness in the Science Classroom. The Science Teacher. http://www.nsta.org/publications/news/story.aspx?id=50285&print=true

Dahmer, David. (2012, July 18). “A Celebration of Life XVII Geology on Earth and Mars”. The Madison Times.

NASA Summer of Innovation Project, 2012. Available at http://www.nasa.gov/offices/education/programs/national/summer/education_resources/index.html

National Science Foundation, Division of Science Resources Statistics. 2011. Women, Minorities, and Persons with Disabilities in Science and Engineering: 2011. Special Report NSF 11-309. Arlington, VA. Available at http://www.nsf.gov/statistics/wmpd/.

National Space Grant College and Fellowship Program (Space Grant) 2010-2014. National Aeronautics and Space Administration Office of Education FY 2010 NASA Training Grant Announcement. Release Date: 30 November 2009. http://www.nasa.gov/pdf/418826main_Space%20Grant%202010%20Solicitation%20Rev%20B[1].pdf

Zirkel, Sabrina. (2002). Is There A Place for Me? Role Models and Academic Identity among White Students and Students of Color. Teachers College Record, 104(2). Teachers College, Columbia University.

NASA and Biotechnology – Professional Development for Secondary Teachers

Barbara Bielec

BioPharmaceutical Technology Center Institute

Abstract

An integral part of enhancing science education is training teachers in current content and techniques, and biotechnology is one of the technologies that will be needed to maintain living systems in space. The BioPharmaceutical Technology Center Institute (BTC Institute) offered two graduate education courses in biotechnology for teachers during summer 2012. Each weeklong course was held at the BioPharmaceutical Technology Center. Biotechnology: The Basics and Biotechnology: Beyond the Basics provided teachers with training, background and curriculum materials, including information about NASA and biotechnology. Teachers of a wide variety of subjects with varied levels of teaching experience were active participants in this lab-based learning. They are now prepared to provide similar opportunities for their students.

Introduction

Biotechnology: The Basics and Biotechnology: Beyond the Basics are weeklong summer courses offered by the BioPharmaceutical Technology Center Institute (BTC Institute) July 16-20 and July 23-27, 2012, respectively. The primary goal of Biotechnology: The Basics and Biotechnology: Beyond the Basics is to provide middle school and high school teachers with the training essential to implementing a laboratory-based biotechnology curriculum. This goal served as the guide in designing and implementing each activity, as well as in structuring each course. Both courses were offered for graduate education credits through Viterbo University and Edgewood College. All three course instructors are experienced teachers of biotechnology at the secondary level.

Three objectives of the National Space Grant Program are to:

• “Promote a strong science, technology, engineering, and mathematics [STEM] education base from elementary through secondary levels while preparing teachers in these grade levels to become more effective at improving student academic outcomes.” Biotechnology: The Basics and Biotechnology: Beyond the Basics help prepare teachers to provide “a strong STEM education base” utilizing current content and techniques. Classroom implementation of this content and these techniques will enrich student learning, which can “improve student academic outcomes”.

The BTC Institute is pleased to acknowledge the Wisconsin Space Grant Consortium (Aerospace Outreach Program) for their financial support of 9 teacher scholarships for these courses in 2012. In addition, the 2012 courses received support for 2 teacher scholarships from FOTODYNE, Inc.

• “Encourage interdisciplinary training, research and public service programs related to aerospace.” The need for quality STEM education training extends throughout many scientific disciplines, and as plans are made for humans to travel and someday live in space, biotechnology joins other technologies to support the “public service programs related to aerospace.” Often students and teachers in the life sciences do not fully realize how biotechnology relates to NASA. One of the objectives for both courses is to highlight how biotechnology is and will be used in space exploration.

• “Recruit and train U.S. citizens, especially women, underrepresented minorities, and persons with disabilities, for careers in aerospace science and technology.” Enthusiastic, well-trained STEM teachers are key to recruiting diverse future STEM professionals in “aerospace science and technology”. Making connections between biotechnology and NASA increases the pool of teachers and students who will help meet this objective, since it also brings in life science teachers and the students that they teach. (National Space Grant College and Fellowship Program [Space Grant] 2010-2014)

NASA Education Priorities: Current Areas of Emphasis for Space Grant projects include: “Authentic, hands-on student experiences in science and engineering disciplines – the incorporation of active participation by students in hands-on learning or practice with experiences rooted in NASA-related, STEM-focused questions and issues; the incorporation of real-life problem-solving and needs as the context for activities.” Both courses included “real-life” examples of how biotechnology is utilized by NASA, and “hands-on learning” of fundamental biotechnology techniques is the core of the curriculum. In addition, one of the requirements for teachers receiving a WSGC scholarship for Biotechnology: The Basics or Biotechnology: Beyond the Basics was to “submit a 2-3 page summary report to the BTC Institute, at the end of the 2012 or 2013 semester in which they implement discussion of NASA utilization of Biotechnology.” Teachers who took the courses in 2011 were required to do this, and several examples of how those teachers implemented “a discussion of NASA utilization of Biotechnology” were featured to help 2012 participants include this in their upcoming classes.

Program Details Both Biotechnology: The Basics and Biotechnology: Beyond the Basics were one-week courses offered in summer 2012. Representing rural, urban, and suburban school districts, the attendees were teachers of a variety of subjects, including middle school science, biology, biotechnology, agriculture, and chemistry. The state of Wisconsin currently strongly encourages agricultural educators to receive more science training; they teach many of the biotechnology courses throughout the state, and over one-half of our attendees were agriculture teachers.

Most participants are high school teachers in Wisconsin, but two are high school teachers in Illinois, one is a junior high school teacher, and many of the agriculture teachers also teach middle school courses in addition to their high school courses. Biotechnology: The Basics 2012 had 8 attendees (5 women & 3 men) and Biotechnology: Beyond the Basics 2012 had 7 attendees (4 women & 3 men); 3 of the attendees took both courses. Class participants included teachers who had no previous training in biotechnology, as well as very experienced secondary teachers looking to update their knowledge of scientific content and techniques. Some of the teachers currently teach an independent biotechnology course; others incorporate biotechnology curricula within other life science, chemistry or agriculture classes. Several teachers were looking for information to help them design and implement a biotechnology course for the first time.

Table 1: Participants in Biotechnology: The Basics and Biotechnology: Beyond the Basics Summer 2012

Course | Total Participants | High School Science Teachers | Agriculture Teachers (often teach both high school and middle school) | Junior High Science Teachers
Biotechnology: The Basics 2012 | 8 | 2 | 5 | 1
Biotechnology: Beyond the Basics 2012 | 7 | 4 | 3 | 0

The number of attendees for the 2012 courses was largely due to scholarship funding: a Wisconsin Space Grant Consortium-Aerospace Outreach Program grant covered scholarships for 9 teachers, and Fotodyne, Inc. covered 2 more. Professional development funding is increasingly difficult for teachers to obtain, and the BTC Institute and the teachers who took the courses are very grateful for the scholarships.

Barbara Bielec (K-12 Program Director, BTC Institute), Peter Kritsch (Teacher, Oregon High School), and Kathryn Eilert (Teacher, Middleton-Cross Plains High School) worked together to plan and implement the courses. All three are experienced teachers of biotechnology at the secondary level. The BTC Institute course fee was $500 in 2012. Both courses were offered for graduate education credits through Viterbo University (3 graduate credits for $330) and Edgewood College (1-3 graduate credits for $150/credit).

Topics and laboratory activities for Biotechnology: The Basics included:
• Use of Micropipettes
• Agarose Gel Electrophoresis
• DNA Extraction
• Restriction Enzyme Digestion
• Polymerase Chain Reaction
• Bacterial Transformation
• Bioethics – Use of Case Studies
• Genetic Counseling
• Biotechnology and NASA
• Stem Cells
• Biofuels and the Great Lakes Bioenergy Research Center
• Careers and Training in Biotechnology

Topics and laboratory activities for Biotechnology: Beyond the Basics included:
• Polymerase Chain Reaction and Transformation of the PTC Gene
• Genetic Identity Testing Using Short Tandem Repeats (STRs)
• Microarrays
• Science and Social Media
• Protein Purification and Detection
• Immunology – Antibody Isolation and Detection
• Bioinformatics and Phylogeny
• Biotechnology and NASA
• Bioprospecting and the Great Lakes Bioenergy Research Center
• Careers and Training in Biotechnology

Implementation was consistently emphasized. How would teachers apply what they learned in their own classrooms? Resources included:
• A comprehensive course binder for each teacher
• Laboratory protocols, classroom activities and PowerPoint presentations on a flash drive for each teacher
• Daily discussion and review of course topics and resources
• Discussion of funding and equipment sources and tips for successful grant writing

Each day teachers wrote a reflection detailing how they would integrate material into their curriculum and the challenges that they might face, including the resources they would need. These reflections were discussed the next day with the entire group. Additionally, as a final project, each teacher had to design and present a detailed and personalized curricular unit (lesson plan) for teaching the content learned.

Results Course evaluations were extremely positive. For Biotechnology: The Basics, teachers wrote:
• “Keep up this wonderful opportunity for students and teachers”.
• “Info. was delivered wonderfully. Decent amount of time – great resources etc.”
• “All [workshops] had wonderful applications to use- especially to give students examples of where the field of biotech. is used today”.
• “Great Program – Thank you! I really enjoyed the week.”
• “Really appreciate all the sharing of practical materials!”
• “I can honestly think of nothing to improve it [the course]. It exceeded my expectations”.

For Biotechnology: Beyond the Basics, teachers wrote:
• “The course was presented in a very teacher friendly way, it was a very useful week….Great workshop!”
• “Thanks for all the resources.”
• “..loved it that I could do 3 credits in a week.”
• “This was a great course! I can’t wait for the next one”.
• “This course was what I wanted and needed”.

Course evaluations also offered suggestions to improve the courses:
• “share a time line of [a] course would be beneficial (sample course outline)”
• “lab write-up techniques – k-12 lab notebooks or just HS lab notebook”
• “a little more of answering the ‘why’ from an ag perspective”
• “organize each lab with a: what I need, where can I get, kits for each…but seriously this is not a big deal”
• “Lab details could be more emphasized”
• “addressing more alternative ways some of the labs could be ran if tight supplies”
• “a little more review on what we did the day before …but I also see this as something I have to spend some time with”
• “I would love more of an in depth explanation of the labs and how they connect to the curriculum for implementation”

Course evaluations and daily reflections are used to improve courses year to year, as well as to address questions and concerns throughout the course. Next year several of the suggestions will be incorporated, including providing sample biotechnology course outlines.

In 2012 one of the requirements to receive a Wisconsin Space Grant Consortium scholarship was to: “Submit a 2-3 page summary report to the BTC Institute, at the end of the 2012 or 2013 semester in which they implement discussion of NASA utilization of Biotechnology. This report should include: description of how a discussion of NASA utilization of Biotechnology was implemented, description and demographics of the course(s) in which the discussion took place, and student feedback on the discussion”. We are looking forward to receiving these reports and learning how teachers use NASA research as a way to demonstrate the relevance of biotechnology content and techniques. We will incorporate the information we receive from teachers about their inclusion of NASA & Biotechnology content in future courses as well.

For both courses, teacher participants were recruited through direct contact at the BTC Institute’s Biotechnology Field Trip Program, at the Wisconsin Society of Science Teachers (WSST) conference and the National Science Teachers Association (NSTA) conference; an electronic mailing to the BTC Institute’s teacher list, the Wisconsin Dept. of Public Instruction (DPI) Science and Agriculture teacher lists, the Wisconsin Association of Agriculture Educators (WAAE) electronic network, and the Illinois Science Teachers Association (ISTA); posting in the WSST newsletter (print and online) and the Science Matters (NSTA Wisconsin) digital newsletter; electronic posting on the Wisconsin Educators Association Council (WEAC) website, the Education Communication Board (ECB) website, and the Wisconsin Association of Environmental Education (WAEE) website; emails sent to Cooperative Educational Service Agencies (CESAs) throughout Wisconsin and others; direct recommendation from UW-River Falls Agriculture Education Professor Timothy Buttles; and course listings in the Viterbo University and Edgewood College summer catalogs.

According to data collected on the course evaluations, attendees found out about the courses in a variety of ways that are summarized in the following table. The results speak to the strength of the formal and informal networks of Wisconsin agriculture teachers, which are reflected in 47% of the responses (7/15), as well as to direct recommendation from another teacher or previous experience with other BTC Institute programs – 73% (11/15) of the responses.

Table 2: How Participants in Biotechnology: The Basics and Biotechnology: Beyond the Basics Summer 2012 Learned About BTC Institute Biotechnology Courses.

How Participants Learned About BTC Institute Biotechnology Courses | Number of Responses
Other agriculture teachers and the Wisconsin Association of Agricultural Educators (WAAE) network | 3
From another teacher | 3
WAAE List Serv or Ag. Education DPI List Serv | 2
Dr. Tim Buttles, UW-River Falls Ag. Ed. Professor | 2
Took previous BTC Institute course | 2
From people who have worked with BTCI in the past | 1
Wisconsin Society of Science Teachers (WSST) | 1
Picked up a brochure at NSTA conference (Indianapolis) | 1
Total Responses | 15

Conclusion The enthusiasm demonstrated by our attendees is always inspiring. It consistently and clearly demonstrates the need for high quality professional development opportunities that have immediate relevance to the classroom. As stated by the National Science Board/National Science Foundation (NSB/NSF) in A National Action Plan for Addressing the Critical Needs of the U.S. Science, Technology, Engineering, and Mathematics Education System: “The United States possessed the most innovative, technologically capable economy in the world, and yet its science, technology, engineering, and mathematics (STEM) education system is failing to ensure that all American students receive the skills and knowledge required for success in the 21st century workforce. The Nation faces two central challenges to constructing a strong, coordinated education system: Ensuring coherence in STEM learning, and Ensuring an adequate supply of well-prepared and highly effective STEM teachers.”

We are committed to offering quality professional development in STEM for teachers so that their students receive the STEM skills and knowledge needed for future success, and we plan to offer both biotechnology courses in summer 2013. We will continue to seek grant opportunities and new partnerships that will enable us to fund teacher scholarships and provide teachers with much-needed resources. As always, we will utilize previous course evaluations to improve our courses.

The support provided by the Wisconsin Space Grant Consortium to design and implement these courses is greatly appreciated. The donations of instructor time and materials from Fotodyne, Promega, the National Evolutionary Synthesis Center (NESCent), Meriter Hospital, Madison Area Technical College and the Great Lakes Bioenergy Research Center are also key to our success. These partnerships, along with the options to receive graduate education credits through Viterbo University and Edgewood College, ensure the continuation of these essential opportunities for professional development.

References 1. National Space Grant College and Fellowship Program (Space Grant) 2010-2014. National Aeronautics and Space Administration Office of Education FY 2010 NASA Training Grant Announcement. Release Date: 30 November 2009. http://www.nasa.gov/pdf/418826main_Space%20Grant%202010%20Solicitation%20Rev%20B[1].pdf

2. National Science Board / National Science Foundation. A National Action Plan for Addressing the Critical Needs of the U.S. Science, Technology, Engineering, and Mathematics Education System, October 30, 2007. http://www.nsf.gov/nsb/publications/2007/nsb1007.pdf

Using Science to Bridge Achievement Gaps

Simpson Street Free Press

Proceedings Paper 2012

Project Summary

Across the country, communities search for innovative and effective ways to promote academic achievement and engage young people in civic life. We use writing and core subject curriculum to accomplish these goals. Coverage of space science in Simpson Street Free Press (SSFP) is an important and popular element in what we do, and central to our mission. The strategy works. SSFP students enjoy producing and publishing this content. Our young audience enjoys reading it. Comments from young readers, parents, and from classroom teachers often reference our Space Science section.

SSFP Science lesson plans are designed to draw connections between and among important concepts. We encourage students to research and write about topics they encounter in school (http://www.simpsonstreetfreepress.org/AAA-Briggs-Rauscher-Reaction). Science content is our trademark. In 2011-12, with help from the Wisconsin Space Grant Consortium, we launched a significant expansion in this content area. Science and space coverage are major components in publications produced online and in hard copy. For instance, we currently run a feature series that encourages young women, girls, and students of color to explore science-related career choices (http://www.simpsonstreetfreepress.org/editorial/women-in-science). This new section perfectly complements our popular, and now expanding, Space Science section. This content helps fuel our growing circulation. New publications and additional column inches allow us to include more student writers in our programs and reach more readers.

SSFP continues to expand its emphasis on science. Recent circulation and distribution data demonstrate success. Our in-school distribution numbers continue to increase. Letters and emails from school-age readers (in particular middle school readers) often refer to our “cool” space science section. Through compelling space science content, thousands of young readers are drawn to our pages. During the past two years overall circulation has expanded by about 17%.

SSFP science content is perfect fodder for the classroom. Teachers use our publications and related curriculum guides in classrooms. Well-researched articles on topics ranging from “Saturn’s mysterious rings” (http://www.simpsonstreetfreepress.org/space-science/saturns-rings) to the discovery of possible life-sustaining planets (http://www.simpsonstreetfreepress.org/space-science/first-goldilocks-planet); from “Runaway Stars” (http://www.simpsonstreetfreepress.org/space-science/runaway-stars) to climate change (http://www.simpsonstreetfreepress.org/science/ice-research) add spice to our pages. Space science works for our publications. It fascinates our student reporters and encourages them to think critically. As they conduct research, SSFP student writers gain academic confidence. This intricate writing-across-the-curriculum process complements classroom goals common in districts. SSFP student reporters are required to cite their sources when their stories are published. The stories our writers produce and publish draw young readers to the range of academic topics available in SSFP publications. In this way our popular Space Science section acts as a portal. SSFP student reporters explore, write, and polish important skills that easily transfer to any school setting. In turn, they influence their peers. Our writers are effective role models because they are real and because they are local.

Methods and Approach

During the past 20 years Simpson Street Free Press, Inc. has honed an approach to community-based academics that really works. SSFP curriculum is rigorous. But lesson plans are designed to make learning fun, cool, and doable. Kids buy in. They buy in because it’s a job, because it’s a newsroom, and because they see quickly that our methods work for them at school. SSFP programs help students acquire practical, real-world skills. Our teaching methods and across-the-curriculum approach help students build academic confidence. Seventy-five writers, ages 8-18, produce our publications. They work under the tutelage of college-age editors. SSFP editors are program graduates who now attend UW-Madison. Our methods include solid role modeling, a sound academic approach, and collaborative effort across age groups. These methods produce successful, college-bound students, no matter their ethnicity or economic background.

Simpson Street Free Press, Inc. is committed to providing outstanding academic support programs delivered in cost-efficient ways. Our innovative science lesson plans are based on proven strategies. SSFP science sections continue to expand. New online versions allow us to publish more articles and columns than ever before (http://www.simpsonstreetfreepress.org/). Readership is expanding exponentially. Science (http://www.simpsonstreetfreepress.org/science) and space coverage (http://www.simpsonstreetfreepress.org/space-science) are major components in these new publications. Using online technology enhances our award-winning approach to out-of-school time academics. No longer constrained by print deadlines and publishing schedules, our students have more time to research science content. We now conveniently post new stories as they are completed. The voices of Wisconsin’s most influential role models are thus amplified. Young readers freely access science topics that interest them by browsing SSFP archives. Students enrolled in our programs gain experience in conducting research and in website development. Our organization has the credentials, the kids, the audience and track record to sustain and expand this successful project.

Project Participants

All students enrolled in SSFP programs produce written work for our science sections. SSFP student writers reflect the diversity of our South Madison location. About 75% of program participants are of color. Many come from low-income neighborhoods. About 30% are second language learners. Dozens of academic success stories begin at SSFP, many among our most at-risk students. Of course, thousands of Wisconsin kids read the positive messages delivered through our publications. We continue to expand the SSFP menu of programs. And we continue to dramatically expand our emphasis on science. Publishing online allows us to include more students and reach more readers. Kids, readers and writers love science. We engage thousands of young people, and in innovative and cost effective ways.

Program Evaluation: Outcomes and Measurement Tools

We use the following outcomes and measures to evaluate the success of this project:

• Improve academic, vocational, and leadership skills for members of our teen writing staff: We evaluate success in achieving this goal using student self-evaluations and performance reviews conducted by adult staff members, parents, and teachers. Evaluations focus on attendance, research skills, and articles completed. We also grade organizational and work skills. We require all our students to submit school report cards. More than 90% of program participants improve overall core subject GPA within six months.

• Expand print circulation and launch an expanded Space Science section. Reach more young people with messages of academic success. Promote interest in science learning: We track circulation numbers, distribution points, and pages printed per issue. During the past 12 months overall column inches devoted to science content increased by approximately 15%, and space science content increased almost 25%. Our writers are extremely effective local role models. They seem “just like us” to kids who read our various publications. We track reader response by documenting web hits, letters and emails received, and through our growing network of middle and high school teachers. About one-third of SSFP distribution is to schools in southern Wisconsin. Overall and in-school circulation reached 23,400 with our latest issue.

• Increase the number of students who are directly involved in producing the various science sections of Simpson Street Free Press. Increase the number of student writers who work in the Free Press newsroom: Increasing content and column inches has allowed us to include more students. And more students than ever are contributing to our Space Science sections. The Simpson Street Free Press has a history of producing college-bound program graduates. During the past ten years, all (100%) of our high school seniors have gone on to college, many with academic scholarships. Admission counselors from several local colleges now make regular visits to our newsroom. Our success rate is high because our core curriculum approach teaches kids how to develop academic self-confidence. We teach the practical skills that really work. Nothing builds academic self-confidence faster than learning to write well, and then seeing your work published. Twenty-three students completed and published space science articles during the past 11 months. All (100%) Free Press writers completed and published at least two science-related stories during the past year.

STEM and Literacy

At SSFP, instruction and training is preparatory. We prepare students for the more complex subject matter they will encounter later. We help them master practical academic strategies. Our students conduct research, check facts, and carefully cite their sources. They quickly learn to apply these strategies at school. Confidence builds as students are immersed in a fun and challenging learning atmosphere. SSFP curriculum is based on the latest research and established best practices. Our core strategy is writing across the curriculum. Our approach connects literacy and STEM. A MetLife Education Foundation and After School Alliance study says programs that connect STEM content and writing/literacy can be important in bridging achievement gaps. Simpson Street Free Press science writing lesson plans are excellent examples of out-of-school time activities that support in-school achievement.

Content matters. Methods and approach matter. Across the country communities are turning to after-school programs in search of methods that work. New research tells us that, while extended school days are good, students benefit most when they participate in activities that support in-school learning -- but do not replicate the classroom. This is true whether the activity takes place in the school, or in a community-based setting. This is also a time when school districts across the country are searching for partnerships that work. Simpson Street Free Press works with local school districts concerning achievement gaps and best use of out-of-school time.

Conclusions

Proven, evidence-based, core curriculum teaching methods make Simpson Street Free Press programs effective. Our multi-mission service-delivery model makes Simpson Street Free Press efficient. Our efforts to expand Space Science coverage and science learning lesson plans are excellent examples of proven and successful non-profit strategies at work.

A recent Harvard Family Research Project study demonstrates that “learning supports outside of school hours should work towards consistent development outcomes for children. In particular, programs that help students acquire practical and transferable academic strategies are considered important.” This is exactly what we do. Simpson Street Free Press programs and lesson plans are carefully designed to complement local school curriculum. Support through Wisconsin Space Grant Consortium is allowing Simpson Street Free Press to expand our innovative curriculum and award-winning approach to after-school learning. We now reach more kids, more often, than ever before. Science learning is cool and fun. And with WSGC’s help, the dynamic peer-to-peer messages of Simpson Street Free Press are reaching even more young people.

EAA Women Soar – Expanding Horizons

Space Grant 2011/2012 Special Initiatives

Jeffrey Skiles

Youth Education, EAA, Oshkosh, Aviation1

Abstract

Women Soar brings together young women with female mentors who help to expose the participants to math, science, and technology through aviation. Participants have a chance to learn more about programs that can support career goals that have traditionally been male dominated and are able to network with one another to provide support for themselves.

Project Background

Historically, the dreams and needs of young girls have been downplayed or ignored in the educational environment. The doors to careers in the fields of math, science, and technology have been much narrower for girls and minority youth due to lack of awareness, encouragement and opportunity. For example, the Federal Aviation Administration’s (FAA’s) Aeronautical Center reported in December of 2007 that only 6.06% of all pilots in the United States are women. According to a July 2009 publication, Programs and Practices that Work: Preparing Students for Non-Traditional Careers Project, prepared jointly by, among others, the Association for Career & Technical Education and the National Women’s Law Center, needed techniques to address the lack of women in aviation and non-traditional career fields include: “introducing students to role models, including professionals who have non-traditional careers and peers who participated in non-traditional CTE programs”; and “provide hands-on opportunities for students to learn and apply skills”. The report later quantifies the magnitude of the deficit in attracting young girls and women into the fields of math and science by stating, “In fact, the most recent available data show that the level of under-representation of women in CTE fields that are non-traditional for their gender has remained virtually unchanged since 1979. High School girls also continue to be under-represented in critical math and science fields as well. In 2008, girls made up only 31% of students taking AP physics exams and only 17% of students taking AP computer science exams.”

The need for continuing and expanding programs like Women Soar is highlighted by a recent study showing that girls in the United States are not significantly more interested in STEM (science, technology, engineering, and math) careers than they were 10 or 20 years ago. In fact, those girls who do take an interest in such subjects at the middle school and high school level tend to drift to other interests once in college. The study, Women in Science, Technology, Engineering, and Math, conducted by Florida Gulf Coast University and the University of Colorado at Boulder, found that two-thirds of young children (boys and girls alike) said they like science, but then the gender differences begin to assert themselves. The diversion away from STEM interest among girls begins to appear in middle school and becomes even more obvious in high school where, according to the report, “many girls who take advanced science courses in middle school do not continue to study science in high school”.

1 The EAA Women Soar – Expanding Horizons program was funded by a grant from the NASA Wisconsin Space Grant Consortium.

Lance Rougeux, Director of the Discovery Educator Network, in Silver Springs, MD, singles out a lack of STEM role models for girls who begin to lose interest in such subjects after finishing middle school. “If I could pick one factor that would make a big difference, it would be the need for formal role models.” He points to the Sally Ride Educator institute as a good example of how female-led STEM groups can attract more girls to technical careers.

Across the nation there are a number of organizations and initiatives, similar to Women Soar, that place an emphasis on building partnerships between schools and the non-profit community. These informal education systems provide programs and experiences that challenge, motivate, and inspire youth, and girls in particular, to seek a higher level of understanding of the concepts of math and science and to consider careers in science and technology. Many school districts do not have the resources they need to develop new curricula for teaching the sciences, mathematics, and other fields of vital importance to helping their students build successful lives. There is also a need to incorporate into the learning environment the development of skills in critical thinking, communication, teamwork, decision making, and problem solving, areas where students often lack these fundamental skills. Students’ likelihood of leading productive and fulfilling lives is much reduced when science, math, technology, and life skills are poorly or inadequately developed. These deficiencies become even more prohibitive, to the point of being crippling, when considering opportunities for women and minorities.

The learning process begins with creating interest, demonstrating opportunity, and providing the resources and networking that can support a changing attitude. Several successful programs similar to Women Soar include the following (examples from the previously cited Programs and Practices That Work):

• The GirlTech program at Francis Tuttle Technology Center in Oklahoma City has significantly increased girls’ exposure to, enrollment in, and pursuit of college degree programs in technology and engineering. GirlTech provides peer and institutional support and guidance for girls preparing to enter traditionally male-dominated fields by pairing girls with professional female role models and building a strong community of girls at the Pre-Engineering Academy and other programs.

• The Seattle Public Schools IGNITE program has, since 2000, connected over 10,000 Seattle high school girls with women currently working in technology careers. The program inspired girls throughout Seattle to overcome barriers to their participation in technology courses by developing personal connections between the girls and their women mentors. Prior to 2000, girls made up only a handful of students enrolled in Seattle high school technology courses, but after seven years of the IGNITE program, girls made up a substantial number of the students in technology courses, and in some cases they filled half the seats in technology classrooms.

• Minneapolis Public Schools launched the High-Tech Girls Society (HTGS) in 2003 to increase the representation of girls in traditionally male-dominated, high-tech courses such as aviation, engineering, and information technology. The HTGS connected the girls with women employed in high-tech fields, provided access to professional organizations that support women in high-tech careers, and presented opportunities to meet and network with other young women with similar interests in Minneapolis high schools.

These programs prove the worth of early and consistent reinforcement of STEM opportunities for girls and show that an early exposure to technology education and careers can lead to a lifelong commitment to STEM in non-traditional audiences.

Program Goals and Objectives

EAA strives to introduce youth of all backgrounds to the fields of aviation and technology as well as to role models already actively engaged in aviation fields. Aviation provides young people with a great incentive for advanced learning. The opportunity to meet and interact with leaders in the air and space industry can direct future decision-making. Most youth select their careers based on the lifestyles and pursuits of those closest to them, or based on the paths chosen by adult mentors. It is unlikely that a young person would pursue engineering or a similar technological field without having connected with someone already in that profession. This is particularly true of girls.

Women Soar celebrates the achievements of women in the fields of aviation, technology, engineering, science, and other non-traditional careers, and serves to inspire girls to consider and pursue the vast opportunities in these areas. The purpose of Women Soar is to stimulate interest and engage girls in EAA education programs, to introduce them to exciting career opportunities, and to highlight the educational resources available to them. Women Soar also seeks to promote the belief that the sky is the limit, in that it is possible to achieve success in aviation, space and aerospace, science, engineering, technology, and other fields viewed as non-traditional careers for women. The event also brings young girls together with outstanding women presenters who have achieved success and recognition in their fields. These accomplished women can provide guidance, inspiration, and career direction.

The specific goals for the 2012 Women Soar initiative were to increase knowledge of aerospace and related fields and to promote interest, recruitment, experience, and training of the next generation of aviation professionals in science, design, and technology, where women have been traditionally under-represented. These goals include the following:

• To honor and celebrate the achievements of women in the fields of aviation, science, engineering, technology, and other careers in which women are traditionally under-represented.

• To acknowledge women’s leadership in breaking barriers and opening new pathways.

• To inspire and empower young women and girls in grades 9-12 to seek their dreams and achieve their fullest potential.

• To build a peer network of girls that can provide mutual support during and following the event. Four years ago, Women Soar served as the opening component of an AirVenture focus on women in aviation. That initiative, called WomenVenture, involved more than 1,600 women pilots who signed up during the week to commit their support to helping women and girls find success in the field of aviation.

• To raise scholarship funds for girls to pursue their dreams through EAA’s educational programs. In 2010, $300,000 in scholarships was awarded to promote the exploration of technology-based education and experiences such as attendance at the EAA Air Academy in the summer of 2011, enrollment in the EAA Next Step on-line aviation ground school program, or post-secondary scholarships.

The above goals are also consistent with NASA objectives in that the investment in human capital will help to ensure that a pool of workers educated in the relevant aerospace technologies and inspired by the field of aviation will be available in the future to advance each of the Directorates’ missions. Women Soar will focus its mission, attention and resources on the recruitment, training, and inspiration of the next generation of women innovators, entrepreneurs, scientists, engineers, pilots, and astronauts.

Anticipated Program Outcomes

The anticipated outcomes for the Women Soar program, as designed, were as follows:

• The formalization of key relationships – The identification of successful women from all fields of study will become part of a national resource network with which EAA can engage in field and web-delivered opportunities, providing ongoing assistance and direction for girls interested in pursuing education and careers in the fields of aviation and aerospace technology. Young women will also connect with their peers of similar interests, thereby establishing a peer network.

• Increased awareness – Young women will be provided with the knowledge, support and direction to pursue opportunities in the fields of technology by utilizing tools such as the Young Eagles website and scholarship funding. Pre- and post-event materials developed by EAA education staff and other education experts will maximize the impact of the event’s educational activities and challenges.

• Financial investment – EAA has demonstrated a commitment to support programs and activities that use aviation as a catalyst to enhance achievement and competency, not only in the core subjects of math and science, but in all subject areas. Targeted approaches that encourage the engagement of girls in aviation and related fields will be successfully demonstrated by their ongoing participation.

• Educational resource support – EAA will provide participating girls with a resource package consisting of follow-up educational materials, resources to help advance education and career pursuits, and contact information for ongoing guidance and support.

Results and Findings

Women Soar brought 76 teenage girls between the ages of 13 and 19 together with 20 women mentors in the fields of aviation, engineering, mathematics, and other science and technology-driven careers. The four-day, three-night event (Thursday, July 26, through Sunday, July 29, 2012) provided activities and challenges that helped form a foundation upon which these young women can base advanced learning and focused educational and career pursuits. Activities included team-building using physical challenges, mentoring with dialogue among mentors and peers, and career exploration, including a review of the historical advancement of women into non-traditional aviation and aerospace careers. Through a partnership with the University of Wisconsin – Oshkosh, participants interacted with professional career counselors and science, math, and astronomy instructors. They also engaged with airline pilots, engineers, and aircraft mechanics, as well as women currently employed in aviation, including those who opened doors to those fields for women more than 40 years ago. Finally, participants took part in EAA AirVenture workshops and forums, and had the opportunity to experience the Advanced Flight Simulator at EAA’s AirVenture Museum and a flight in EAA’s Ford Trimotor. Time for socializing and networking was provided as well. An awards ceremony at the close of the event reinforced the girls’ participation, learning, and achievements.

Conclusions

EAA conducts a yearly evaluation of the Women Soar event. Past participants have been very complimentary about the benefits of Women Soar, citing the following outcomes:

• Attendees connected with peers of similar interests, thereby establishing a peer network.

• Attendees became aware of the existence of the current and historical network of successful women from all fields of study.

• Young women gained awareness of and access to the knowledge, support, and direction available to support their pursuit of opportunities in the fields of technology and aviation.

EAA has successfully sustained Women Soar since 2005, maintaining the small-group, personalized atmosphere of learning exposure and attention that is designed to attract and retain young women as future devotees and career seekers in the field of flight. The program has had a broad base of sponsors, including: University of Wisconsin – Oshkosh; Women in Aviation International; MIT Alumni Association; Wisconsin Women’s Council; Ragged Edge Aviation; and the Antique Airplane Assn. of Colorado. Special thanks and acknowledgement go to the NASA Wisconsin Space Grant Consortium for their generous support of $3,000 that helped make this annual event possible.

EAA FlightLink 2G Space Grant 2011/2012 Special Initiatives

Jeffrey Skiles

Youth Education, EAA Aviation Center, Oshkosh, Wisconsin1

Project Background

Founded over half a century ago in Milwaukee, WI, by a pro-active group of aviators interested in building their own aircraft, the Experimental Aircraft Association (EAA) has subsequently expanded its mission to include the history, preservation, display, and ongoing research on antiques, classics, warbirds, aerobatic aircraft, ultralights, helicopters, and contemporary manufactured aircraft. Hundreds of thousands of aircraft and aerospace enthusiasts gather each summer for EAA’s AirVenture Oshkosh, the world’s largest aviation event. Each year, AirVenture re-creates and reinforces its founding atmosphere of challenge, excitement and achievement through educational workshops that promote personal initiative and accomplishment, the gathering of hundreds of exhibitors who share state-of-the-art technology related to the science of flight, and the daily airshow that celebrates the attraction, beauty, and skill of flight past, present and future. Ongoing EAA programs and initiatives available to amateurs and professionals alike include the EAA AirVenture Museum, the Young Eagles, SportAir Workshops, Timeless Voices, the Air Academy, and, since 2005, Women Soar. The Experimental Aircraft Association’s programs serve all demographics from school-age children through adults, with an organizational goal of reaching out to those populations under-represented in aviation, including women and minorities.

The FlightLink 2G project is new for EAA and has not been previously funded through the Wisconsin Space Grant Consortium. The project offers low-income and ethnic-minority students enhanced access to modified versions of existing FlightLink programming and activities. While FlightLink is an overnight experience, FlightLink 2G will be a comprehensive full-day experience.

The original FlightLink programming and activities were developed with the help of grant funds procured through the Wisconsin Department of Transportation (DOT). Over the past year, the FlightLink program established at EAA has been used extensively with 15 Milwaukee Elementary Public Schools and twelve Wisconsin Cooperative Educational Service Agencies (CESAs), as well as two Florida Middle Schools, and a Middle School in New York. The program has been well received by students and faculty. FlightLink 2G will increase the number of students who can participate in this Aerospace learning experience.

EAA has developed the FlightLink 2G project, with WSGC support, in selected upper elementary classes (Grades 4 – 6). For implementation effectiveness, and to facilitate ongoing collaborative follow-up programming among participants, we offer FlightLink 2G to entire classrooms and their respective teachers and chaperones. This experience (9 a.m. to 3 p.m.) will include lunch in the EAA Air Academy Lodge and an aerospace career overview. We envisioned offering ten classes of approximately twenty-six students each. Participating classes would be selected following advertising and promotion of the program, and an application process, with preference given to classes with higher percentages of ethnic minority and low-income students.

Program Goals and Objectives

The FlightLink program was originally developed by EAA staff to meet the rising demand from educators for programming that incorporates experiential and motivational teaching techniques. Utilizing aviation themes and the Museum’s unique collection and education facilities, EAA’s Education and Museum staffs jointly designed a series of standards-aligned programs to engage students’ (4th – 6th grade) interest using immersive learning experiences customized to meet teacher objectives and grade level-appropriate activities. A 2002 research study, conducted by the University of Wisconsin – Madison, documented the measurable beneficial impact of aviation education programs developed by EAA in collaboration with local educators and a Teacher Advisory Committee. The study showed increased student achievement and performance levels in math and science, as well as in other core content areas, following participation in EAA aviation education programs. EAA’s programs help introduce youth of all backgrounds and ages to exciting aviation technology. For many youths, FlightLink provides the first, and perhaps the only, introduction to the tools and principles of aviation technology, engineering, and other technically driven areas.

In 2008, research showed an expanding ethnicity gap for Americans pursuing science, technology, engineering, and mathematics (STEM) careers. The report from the National Action Council for Minorities in Engineering (NACME) revealed that the number of minority students pursuing STEM degrees and careers had flattened out or even declined in recent years. For example, the study found that while three key underrepresented minority (URM) groups – African Americans, Latinos, and Native Americans – constituted some 30 percent of the overall undergraduate student population in the United States, they received only about 12 percent of the degrees awarded in engineering.

More recently (March 2010), a study by the Bayer Corporation cited survey results demonstrating that women and minorities need to be encouraged to pursue STEM fields from an early age. “Almost eight in ten of our survey respondents say women and underrepresented minorities are missing because they were not encouraged to study STEM fields early on,” said Bayer Corporation President and Chief Executive Officer Greg Babe. Mae C. Jemison, a chemical engineer and America’s first black female astronaut, noted that she was lucky when she was growing up to have access to scientists and science programs that allowed her to explore her interest in the field.

Regardless of gender, race or ethnicity, interest in science begins in early childhood. “All children have an innate interest in science and the world around them. But for many children, that interest hits roadblocks along an academic system that is still not blind to gender or color. These roadblocks have nothing to do with intellect, innate ability, or talent,” Jemison said. “On the contrary, they are the kinds of larger, external socio-cultural and economic forces that students have no control over. As students, they cannot change the fact that they do not have access to high-quality science and math education in their schools. But adults can. And we must.”

The survey results identified the three top perceived causes or contributors that lead to underrepresentation in STEM fields:

• Lack of high-quality science and math education programs in poorer school districts.

• Persistent stereotypes that STEM isn’t for girls or minorities.

• Financial issues related to the cost of education.

The survey also found that the K-12 education system fell short in encouraging minorities and girls to study STEM subjects. Bayer CEO Greg Babe cited the importance of women and underrepresented minorities identifying mentors early in their careers to help guide them through their career paths. Survey respondents said science teachers, at the elementary as well as the high school level, play a larger role than parents in stimulating and sustaining interest in science.

The original FlightLink programming, underwritten in part by the Wisconsin Dept. of Transportation, comprised a two-day overnight experience on the EAA grounds. The proposed FlightLink 2G project submitted to WSGC constitutes a convenient one-day option, developed in response to feedback from parents and school districts, that provides a similarly attractive and enriching introduction to aviation and aerospace. FlightLink was designed to reach a diverse section of any school district through its customized programs, but school participation has been limited by the availability of dates and resources.

The FlightLink 2G programming will consist of the innovative program components of the original FlightLink program:

• Formation Flight: Multiple brief but intensive activity modules that include leadership training, flight simulation training, balsa wood aircraft construction, and Museum tours;

• Classroom/Ground Schools: Blends regular museum-based programs with an in-depth look at the world of aviation. This all-in-one program is tailored for each 4th- to 6th-grade group, ensuring that their experience is a strong reinforcement of classroom goals. Special topics include teamwork in aerospace, navigation, aviation history, basic concepts of flight, air-compressed rocketry, and wing rib building;

• Outside the Classroom: Non-directed activities will cater to individual interests and may include behind-the-scenes tours, in-cockpit orientation, small-group and one-on-one discussions with EAA and aviation/aerospace amateurs and professionals, outdoor activities, and more.

Since this project focuses on pre-college education, it has been closely tied to the relevant Wisconsin Model Academic Standards it addresses in Science, Mathematics, Information & Technology Literacy, English Language Arts, and Social Studies. The FlightLink lesson plans and program descriptions include the specific model academic standards addressed by each program, lesson, and activity. In June of 2010, the WI Dept. of Public Instruction (DPI) adopted the Common Core State Standards as the new WI Standards for English Language Arts and Mathematics. School districts and other education-focused organizations, such as the EAA, are now in the process of aligning local curriculum, instruction, and assessment with these new standards.


The overarching goal of FlightLink programs is to motivate and encourage student interest in core subjects such as science, math, and technology using aviation concepts. This corresponds to the Special Initiatives Program goal of “increasing interest, recruitment, experience, and training of the next generation of experts in the pursuit of space or aerospace-related science, design, or technology.” The objectives in support of this goal are:

• To provide rural and urban youth with the opportunity to experience state-of-the-art interactive aviation technology;

• To provide a vehicle for early career exploration; and,

• To increase student achievement.

The development of this proposed FlightLink 2G project is an indicator of the ability of the EAA to sustain and expand promising initiatives to meet the needs of the potential beneficiaries of such programming. The reach and positive reception of the original FlightLink project, sponsored in part by the Wisconsin Dept. of Transportation (DOT), attests to the program’s attraction and feasibility in widely scattered venues. The EAA has a history of successfully promoting, growing, and adapting its initiatives to meet the needs and capacities of the targeted users:

• The EAA Young Eagles program was launched in 1992 to give interested young people, ages 8 - 17, an opportunity to fly in a general aviation airplane. These flights are offered free of charge. Since 1992, more than 1.7 million Young Eagles have enjoyed a flight through the program.

• EAA’s “Women Soar” program, first established in 2005, continues each year to unite 35 women, from engineers to fighter pilots, with more than 100 girls, as they work together during a multi-day series of “learning & doing” sessions in a variety of aviation and aerospace fields.

As the Project Description will detail, the FlightLink 2G project illustrates the EAA’s flexibility and responsiveness in modifying the original DOT initiative to serve a wider audience without compromising the elements of academic rigor, inquiry-based interest, cooperative effort, and leadership development integrated into the FlightLink programming and activities. This introductory project will help set the stage for sustained programming, replicate FlightLink’s achievements, and expand its reach and accessibility.

Anticipated Program Outcomes

Based on the described objectives and activities, the desired FlightLink outcomes are:

• An enhanced understanding and appreciation of the mathematical and scientific principles of flight;

• An initial exposure to aviation technology and a motivation by the experience to learn more about both technology and aviation;

• An increased interest in possible aviation and/or aerospace related careers; and,

• An increased achievement in core subjects of science, math and technology, as well as in other core subject areas.

Results and Findings

The effectiveness of the FlightLink 2G programming was to be measured by:

• Student and/or team performance on the hands-on outcome activities incorporated throughout the FlightLink day – FlightLink activities have been correlated with the WI State Model Academic Standards in the various core content areas. Acquired and demonstrated proficiency on those Model Academic Standards has been embedded in both the “process” and the “outcome” activities of FlightLink programming.

• Post-activity evaluations filled out by participating students and teachers – This will constitute a second means of measuring the effectiveness of FlightLink programming and activities in changing and guiding participant attitudes toward, and interest in, flight and aerospace.

• Post-activity surveys of student interest in and knowledge of aviation education/career opportunities and resources, and probable continued interest and activity in the areas of flight and aerospace – Survey results will provide feedback on how to improve FlightLink programming, as well as an opportunity to offer targeted follow-up resources and activities to interested students, teachers, and classrooms.

FlightLink 2G is a pilot program that was approved for funding by the WSGC and implemented by EAA very late in the school year. Therefore, in the 2011-2012 school year, EAA was able to attract only 3 classrooms to the experience and will fulfill the requirements of the grant in the 2012-2013 school calendar. The three classrooms, from Weyauwega, Appleton, and Oshkosh, encompassed 125 children.

The FlightLink 2G experience encompasses the interactive team-building exercise Teamwork in Aerospace – Houston, We May Have an Omelet; a guided EAA AirVenture Museum tour; and time in the AirVenture Museum’s interactive children’s area, KidVenture, where children have the opportunity to work with flight simulators and other interactive exhibits.

Teacher and student evaluations were consistently high, as for all EAA educational programming, with ratings of 4 or 5 on a scale of 1 to 5.

Conclusions

The activities and the targeted outcomes of the FlightLink 2G program contribute to the accomplishment of Special Initiatives Program goals. FlightLink 2G increases knowledge of space, aerospace, and space-related science, design, or technology and their potential benefits, and encourages individuals in space-related pursuits. FlightLink programming incorporates NASA materials and refers participating teachers and students to NASA resources, thus aligning with NASA’s Science Mission Directorate. As the program clearly demonstrates, FlightLink supports the three major education goals articulated in President George W. Bush’s 2004 “A Renewed Spirit of Discovery: The President’s Vision for U.S. Space Exploration”:

• Strengthen NASA and the nation’s future workforce;

• Attract and retain students in STEM disciplines; and,

• Engage Americans in NASA’s mission.

FlightLink 2G’s interactive exercises support and enhance science and engineering education, with program outreach particularly directed toward schools with significant minority populations. Broad participation will provide a network for educators and students where they can share in the excitement of scientific discovery, and will help to ensure a future workforce of ethnically diverse professionals in the fields of science, math, engineering, and technology. Special thanks and acknowledgement go to the NASA Wisconsin Space Grant Consortium for their generous support of $4,500 that helped make these FlightLink 2G experiences possible.

EAA Space Week – Lab for Exploring Teachers Space Grant 2011/2012 Aerospace Outreach

Jeffrey Skiles

Youth Education, EAA Aviation Center, Oshkosh, Wisconsin1

Project Background

Founded in 1953 in Milwaukee, WI, by a small group of enthusiastic aviators interested in building their own aircraft, the Experimental Aircraft Association (EAA) now brings together hundreds of thousands of people each year at EAA AirVenture, the world’s largest aviation event. This international celebration recreates, on a grand scale, the atmosphere of challenge, anticipation, and accomplishment that inspired its founders. Over the decades, the organization’s mission has expanded to include other aircraft types, including antiques, classics, warbirds, aerobatic aircraft, ultralights, helicopters, contemporary manufactured aircraft, and the continuing, expanding frontier of aerospace. AirVenture features educational workshops that promote personal achievement, hundreds of exhibitors who share state-of-the-art technology related to the science of flight, and a daily air show that celebrates the beauty and thrill of flight past, present, and future. Other EAA programs and resources include the EAA AirVenture Museum, the Young Eagles, SportAir Workshops, Timeless Voices, the Air Academy, Women Soar, and Space Week. These EAA programs are designed to serve all ages – from school-age children to adults – with an organizational effort to reach out to those demographic groups that are under-represented in aviation, including women and minorities.

EAA Space Week is one of EAA’s many educational offerings. Space Week events provide educators with the opportunity to introduce students to the wonders of space exploration at the EAA AirVenture Museum in Oshkosh. Daily educational activities throughout the week are designed to introduce students in Grades 3-8 to the science and wonder of space exploration. Each year, EAA and its educational partners strive to introduce enhancements to the Space Week programming; last year, for example, Interactive Applets, an exceptional inquiry-based tool, was incorporated into the program.
EAA’s integrated educational programs offer a continuum of learning for all ages, providing students with academic reinforcement that combines standards-based curricula with web-based resources and challenges that amplify learning in the classroom, at home, in the library, or wherever Internet access is available. Space Week is a joint effort between the EAA Education Staff, EAA AirVenture Museum docents, and the Space Explorers Staff. The experience for students and teachers occurs in the EAA AirVenture Museum, Oshkosh, Wisconsin. This accredited museum facility encourages students’ interest in science and space. In addition, the museum’s many exhibits demonstrate the scientific principles of flight, as well as the history of aviation. These exhibits include the SpaceShipOne exhibit; SpaceShipOne won the X Prize as the first private flight into outer space. This is a fully active exhibit demonstrating the principles of flight into near-earth orbit and the subsequent return to earth.


Program Goals and Objectives

This year, EAA partnered with Space Explorers, Inc. to provide the EAA Space Week experience. Space Explorers, Inc. has established itself within the educational community as an innovative, leading-edge company with the vision to connect students with space exploration. Space Explorers is committed to bringing the excitement and rewards of space exploration into classrooms throughout the community. Discovery, inquiry, and analysis are integrated into standards-based curricula, experiments, and online mission simulations that incorporate actual NASA data. Through this programming enhancement collaboration with Space Explorers, EAA increases its capacity to meet the following student-centered program goals:

• To enable students to gain a better understanding of the universe through hands-on interactive activities, including space exploration exercises and research into such areas as planetary data, the history and future of space exploration, astro-chemistry, and more.

• To assist students in developing and enhancing research skills and critical-thinking skills.

• To enhance the learning experience, in the classroom, at home, or elsewhere, by providing students with as-needed access to space knowledge resources.

• To provide educators with the resources and support needed to integrate the EAA experience with classroom learning objectives.

• To combine the technology resources of NASA, EAA, and Space Explorers, Inc., in ways that kindle and renew the level of interest among youth in space travel and exploration.

• To provide a multi-faceted experience in which youth participate in events and learning experiences with their classmates at an off-campus school field trip, and where both students and educators experiment and become comfortable during Space Week with an increasing variety of resources that can continue to be used at home, in the classroom, and in other learning venues.

Anticipated Program Outcomes

EAA collaborator Space Explorers, Inc. will be present to provide a hands-on introduction to its enhanced programming for both educators and students attending the events. Space Week will run for a full week to ensure full availability to students and educators.

Children participating in programming throughout the week will benefit from exposure to numerous hands-on challenges and activities that help them understand and appreciate the scope of our national space efforts, and that encourage them to consider their future with this perspective in mind. All visiting youths and educators will be invited to revisit the EAA AirVenture Museum with their families to talk about and share what they learned with their parents and siblings. Encouraging a “return” trip with family members greatly alters the learning environment, creating a teaching situation in which the participating children can, in fact, be the “learning guide” for their parents, brothers, and sisters. Planned activities include, but are not limited to: workshops; museum tours; movies; hands-on workstations; rocket launches; simulations; and take-home challenges.


Results and Findings

Over 500 children, 3rd through 8th grade, participated in this year’s Space Week activities. Attendance was down from previous years due to the lack of transportation funds in many school districts. The majority of students arrived from the Fond du Lac and Green Bay areas of Wisconsin. The younger children, grades 3 through 5, participated in the EAA activity “Houston, We May Have an Omelet” and the Space Explorers-sponsored activity “Space Odyssey.” The older children, grades 6 through 8, participated in the EAA activity “Stomp Rockets” and the Space Explorers-sponsored activity “Strange New Planet.” All students received a guided tour of the EAA AirVenture Museum facility and also had the opportunity to spend time in EAA KidVenture’s aircraft simulators and interactive exhibits.

The programming provided by Space Explorers included opportunities for students and teachers to pursue online activities that extend space learning for a full year beyond the museum visit. These activities included online NASA Missions, Mars Explorer Simulations, STEM-focused activities, the Orbital Laboratory, and a K-3 Space Program. The ability to continue this learning throughout the school year, with the support of Space Explorers, is an attractive and appreciated component of EAA Space Week.

Conclusions

Space Week has generated consistently positive feedback from participating educators, particularly among teachers from schools with large minority and/or low-income populations. EAA Space Week provides an across-the-board opportunity to introduce youth unfamiliar with advanced technology to the computer, simulation, and other educational challenges found at the EAA AirVenture Museum. Working with the constantly changing equipment and other resources available through EAA and its programming, and meeting professionals who have attained careers that many feel are out of reach, gives students participating in Space Week a sense that they can achieve their dreams. EAA’s programming and outcomes reflect research showing that hands-on, aviation-related activities advance achievement and performance toward state standards in the key subject areas of math, science, and technology. Interactive learning and guided research activities introduced during Space Week, and extended into the classroom, at home, and in the community, will further enhance those achievement gains.

The continued success of the Space Week events is evidence of the program’s sustainability, and of the responsiveness of EAA and its educational partners to changes in the aviation and aerospace industries as well as in the educational arena. As a case in point, the expanded Space Explorers programming, including team-based simulations that can be conducted from schools, will be incorporated into 2012 Space Week activities.

This project directly aligns with the goals of the NASA Directorates, as well as with the mission of the National Space Grant College and Fellowship program. Both EAA and Space Explorers strive to inspire a new generation of explorers to pursue careers in science, technology, engineering, and mathematics (STEM). Research opportunities and interactive exercises will support and enhance science and engineering education, with program outreach particularly

directed toward schools with significant minority populations. Broad participation will provide a network in which educators and students can share in the excitement of scientific discovery, and will help to ensure a future workforce of ethnically diverse professionals in the fields of science, math, engineering, and technology.

Special thanks and acknowledgement go to the NASA Wisconsin Space Grant Consortium for its generous support of $3,833, which helped make this annual event possible.


22nd Annual Conference Appendix A

2012 Program

Wisconsin Space Grant Consortium

&

University of Wisconsin-Whitewater

Present:

the Twenty-Second Annual WISCONSIN SPACE CONFERENCE

“From Earth to Galaxy”

University of Wisconsin-Whitewater Whitewater, Wisconsin

Thursday, August 16 – Friday, August 17, 2012

2012 CONFERENCE PROGRAM

Thursday, August 16, 2012

8:00-9:00 am Registration Hyland Hall Atrium

Continental Breakfast

Poster Setup (formal poster session at 2:30 p.m.)

*** Plenary Session ***

9:00-9:30 am Welcome and Introduction 2101 Hyland Hall

Rex Hanger, Department of Geography and Geology, University of Wisconsin-Whitewater

Aileen Yingst, Director, Wisconsin Space Grant Consortium and Co-Investigator on the MAHLI Microimaging Camera for Mars Science Laboratory

Greg Cook, Associate Vice Chancellor, University of Wisconsin-Whitewater

9:30-10:30 am Session 1: Keynote Address

Introduction of Keynote: Rex Hanger, Department of Geography and Geology, University of Wisconsin-Whitewater

Dr. John Delano, Professor at the State University of New York at Albany and Associate Director of the New York Center for Astrobiology (NASA), Astrobiology: NASA’s Multi-Disciplinary Search for Life Beyond the Earth

10:30-11:00 am Morning Break Hyland Hall Atrium

*** Plenary Session ***

11:00-11:45 am Session 2: Engineering 2101 Hyland Hall

Moderator: Gerald Kulcinski, Associate Dean, College of Engineering, University of Wisconsin-Madison

Eric Gansen, Can Quantum-Dot-Based Single-Photon Detectors Operate at Temperatures Above 4 Kelvin, or Are They Too Noisy?, Associate Professor, Department of Physics, University of Wisconsin-La Crosse

Kirsti Pajunen, Dynamic Risk Assessment of Space-Flight Medical Events, Undergraduate Student, Department of Mechanical Engineering, Milwaukee School of Engineering

Paul Thomas, Infrasonic Detection, Undergraduate Student, Department of Electrical Engineering, University of Wisconsin-Platteville

11:45-12:45 pm Lunch University Center 275

*** Concurrent Sessions -- Research Stream ***

12:45-2:15 pm Session 3R: Physics, Astronomy, Meteorology 2100 Hyland Hall

Moderator: Eric Barnes, Professor, Physics Department, University of Wisconsin-Stout

Mitch Powers, A Novel Technique for Fabricating Devices with Complex Geometries, Undergraduate Student, Department of Physics, University of Wisconsin-Madison

Christopher Stockdale, A C-Band Study of the Historical Supernovae in M83 with the Karl G. Jansky Very Large Array, Associate Professor, Department of Physics, Marquette University

Sydney Chamberlin, Testing General Relativity with Pulsar Timing Arrays, Graduate Student, Department of Physics, University of Wisconsin-Milwaukee

Shelly Lesher, The Origin of the Elements, Assistant Professor, Department of Physics, University of Wisconsin-La Crosse

Jordan Gerth, Improving Cloud and Moisture Representation in Weather Prediction Model Analyses with Geostationary Satellite Information, Graduate Research Assistant, Cooperative Institute for Meteorological Satellite Studies, University of Wisconsin-Madison

*** Concurrent Sessions -- Education Stream ***

12:45-2:30 pm Session 3E: K-12 Education & General Public Outreach 2102 Hyland Hall

Moderator: John Heasley, Space Science Educator, Richland Center High School

James Lattis, A Hubble Instrument Comes Home: The High Speed Photometer, Director, UW Space Place, Department of Astronomy, University of Wisconsin-Madison

Brad Staats, Spaceflight Academy for CESA #7, CEO, Spaceflight Fundamentals, LLC.

Reynee Kachur, Students Teaching Astronomy-Related Science (STARS) and the Impact to K-5th Grade Students in Oshkosh, Science Outreach Director, Science Outreach Department, University of Wisconsin-Oshkosh

Reynee Kachur, Launching STEM Interest: Using Rockets to Propel to Excel in STEM, Science Outreach Director, Science Outreach Department, University of Wisconsin-Oshkosh

Barbara Bielec, Geology on Earth and Mars! Summer Science for Grades 3-5 and 6-8, K-12 Program Director, BioPharmaceutical Technology Center Institute

Barbara Bielec, NASA and Biotechnology- Professional Development for Secondary Teachers, K-12 Program Director, BioPharmaceutical Technology Center Institute

James Kramer, Science Learning and the Achievement Gap, Executive Director, Simpson Street Free Press; non-presenting contributors: Deidre Green, Managing Editor; Ashley Crawford, Teacher and Assistant Editor; Taylor Kilgore, Senior Teen Editor; and Alex Lee, Staff Writer

2:30-3:00 pm Afternoon Break Hyland Hall Atrium

*** Concurrent Session -- Poster Session ***

2:30-4:00 pm Session 4P: Posters Hyland Hall Atrium

Facilitator: Marty Gustafson, Chair, WSGC Advisory Council, and Program Director for Sustainable Systems Engineering, Department of Engineering Professional Development, University of Wisconsin-Madison

Matt Heer, An Experiment Observing Convection Movement in Microgravity, Science Teacher, East Troy High School; non-presenting members: Kaity Jaeck, Tom Bellar, Sebastian Smith, Curran Schell, Jack Weber, Eric Saltzman, Haley Housch, Nick Frank, Danielle Otto, Alex Otto, Sam Wisniewski, Shauna Wisniewski

Megan Jones, Population Analysis of the Coma-Abell1367 Supercluster, Undergraduate Student, Department of Astronomy & Department of Physics, University of Wisconsin-Madison

Michael Ramuta, X-Ray and Radio Emissions of AWM and MKW Clusters, Undergraduate Student, Department of Astronomy, University of Wisconsin-Madison

Kelsey Meinerz, A Simplified Model for Flagellar Motion, Undergraduate Student, Department of Physics, Marquette University

Kimberly Callan, Prandtl – The Flying Wing, Undergraduate Student, Department of Engineering Mechanics and Astronautics, University of Wisconsin-Madison

Bradley Moore, Development of a Passive Check Valve for Cryogenic Applications, Graduate Research Assistant, Department of Mechanical Engineering, University of Wisconsin-Madison

Darren Pilcher, The Carbon Cycle of Lake Michigan: Application of a Coupled Physical and Biological 3-D Model to Determine Productivity and Nutrient Cycling, Graduate Student, Department of Atmospheric and Oceanic Sciences, University of Wisconsin-Madison

Teri Gerard, Fumarole Alteration of Hawaiian Basalts: A Potential Mars Analog, Graduate Student, Department of Geosciences, University of Wisconsin-Milwaukee

Third Place – Engineering Division – Team ChlAM, Maxwell Strassman, Chloe Tinius, Andrew Udelhoven, Undergraduate Students, Department of Engineering Mechanics and Astronautics, University of Wisconsin-Madison

Balloon Launch Team: Tyler Capek, University of Wisconsin-River Falls; Richard Oliphant, Milwaukee School of Engineering; Devin Turner, Marquette University; Danielle Weiland, Carthage College

*** Concurrent Session – Research Session***

3:30-5:00 pm Session 4R: Other Programs 2101 Hyland Hall

Moderator: Bill Farrow, WSGC Associate Director for Student Satellite Initiatives, Assistant Professor, Milwaukee School of Engineering

Rocket Team: First Place – Engineering Division – Team Woosh Generator, Brandon Jackson, James Ihrcke, Kirsti Pajunen, Eric Johnson, Undergraduate Students, Department of Mechanical Engineering, Milwaukee School of Engineering, Non-Presenting Team Member: Devin Dolby

Balloon Payload Team: Brock Boldus, Milwaukee School of Engineering; Patrick Comiskey, Milwaukee School of Engineering; Latisha Jones, Milwaukee School of Engineering; Kate Mauk, Milwaukee School of Engineering; Ben Peterson, Milwaukee School of Engineering; Matthew Weichart, Milwaukee School of Engineering

NASA Reduced Gravity Program: Aaron Olson, UW-Madison SEED Zero-Gravity Experiment, Undergraduate Students, Department of Engineering Physics and Department of Mechanical Engineering, University of Wisconsin-Madison, Non-Presenting Team Members: Joe Jaeckels, Austin Lemens, Nathan Rogers, Noah Rotter, Peter Sweeney, Lyndsey Bankers, Collin Bezrouk, Sam Moffatt, Grayson Butler, Aaron Riedel, Austin Gilbertson

NASA Reduced Gravity Program: Kevin Crosby, Microgravity Research at Carthage College, Professor, Department of Physics and Astronomy, Carthage College and Steven Mathe, Microgravity Fuel Gauging Using Modal Analysis, Undergraduate Student, Physics Department, Carthage College, Non-Presenting Team Members: Amber Bakkum, Kelli Ann Anderson, Kevin Lubick, John Robinson, Danielle Weiland

*** Adjourn for Day ***

Friday, August 17, 2012

8:00-9:00 am Registration Hyland Hall Atrium

Continental Breakfast

8:00-8:45 am Undergraduate Workshop 2100 Hyland Hall

*** Plenary Session ***

8:45-9:45 am Welcome and Introductions 2101 Hyland Hall

Rex Hanger, Department of Geography and Geology, University of Wisconsin-Whitewater

Session 5: Keynote Address

Introduction of Keynote: Rex Hanger, Department of Geography and Geology, University of Wisconsin-Whitewater

Dr. Robert Benjamin, Department of Physics, University of Wisconsin-Whitewater, A Visitor’s Guide to the Milky Way

*** Plenary Session***

9:45-10:30 am Session 6: Miscellaneous Research 2101 Hyland Hall

Moderator: Kerry Kuehn, Assistant Professor, Physics Department, Wisconsin Lutheran College

Randy Wolfmeyer, Teaching Special Relativity: A Software Aid for Spacetime Diagrams, Lecturer, Department of Chemistry and Department of Engineering Physics, University of Wisconsin-Platteville

Christopher Anderson, Intensity Mapping with 21cm Line of HI, Graduate Student, Department of Physics, University of Wisconsin-Madison

Rebecca Shotwell, Towards Billion-Body Dynamics Simulation of Granular Material, Undergraduate Student, Department of Mechanical Engineering, University of Wisconsin-Madison

10:30-11:00 am Morning Break Hyland Hall Atrium

*** Plenary Session***

11:00-12:10 pm Session 7: Team Projects 2101 Hyland Hall

Moderator: Kevin Crosby, Chair, Division of Natural Science and Associate Professor of Physics and Computer Science, Carthage College

Rocket Team: First Place – Non-Engineering Division – UWL Rocket Team, Joe Krueger, Andrew Prudhom, Undergraduate Students, Department of Physics, University of Wisconsin-La Crosse, Non-Presenting Team Members: Richard Allenby, John Nehls

Rocket Team: Second Place – Engineering Division – Team Jarts, Cameron Schulz, Brett Foster, Eric Logisz, Alex Folz, Undergraduate Students, Department of Mechanical Engineering, Milwaukee School of Engineering

NASA Desert RATS: Aaron Olson, Jordan Wachs, UW-Madison Badger Exploration Loft Team at NASA Desert Research and Technology Studies 2011, Undergraduate Students, Department of Engineering Physics and Department of Mechanical Engineering, University of Wisconsin-Madison

Mars Desert Research Station: Aaron Olson, 110th Crew at the Mars Desert Research Station, Undergraduate Students, Department of Engineering Physics and Department of Mechanical Engineering, University of Wisconsin-Madison, Non-Presenting Team Members: William Yu, Mark Ruff, Lyndsey Bankers

12:10-12:30 pm Group Photograph As Directed

12:30-1:20 pm Awards Luncheon University Center 259

1:20-2:15 pm Awards Ceremony

Sharon D. Brandt, Program Manager, Wisconsin Space Grant Consortium WSGC Program Associate Directors

2:15-2:20 pm 2013 Conference

2:20 pm Conference Adjourned

*** Adjournment ***