NASA / OHIO SPACE GRANT CONSORTIUM

2007-2008 ANNUAL STUDENT RESEARCH SYMPOSIUM PROCEEDINGS XVI

A Vision of Yesterday, Today, and Tomorrow

April 18, 2008

Held at the Ohio Aerospace Institute, Cleveland, Ohio

TABLE OF CONTENTS

Page(s)
Table of Contents ...... 2-6
Foreword ...... 7
Member Institutions ...... 8
Acknowledgments ...... 9
Agenda ...... 10-11
Group Photograph (Scholars and Fellows) ...... 12
Symposium Photographs ...... 13-19

SCHOLAR AND FELLOW RESEARCH SUMMARY REPORTS

Student Name College/University Page(s)

Anderson, Makeba A...... Central State University...... 20-21 A Radio Based Tracking System for High Altitude Ballooning

Austin, Christian D...... The Ohio State University...... 22-28 Sparse Synthetic Aperture Radar Image Formation and Parameter Estimation

Barnes, Caleb J...... Wright State University...... 29-30 Loitering High Altitude Balloon

Benze, Christopher J...... Cleveland State University ...... 31-32 New and Improved Lunar Rovers

Betten, Joseph F...... Central State University...... 33-34 The Metallographic Investigation of a Deposited Ti 6Al-4V After Friction Stirring

Bozak, Richard F., Jr...... Cleveland State University ...... 35-36 Strain Sensing with a Piezoelectric Biopolymer

Brady, Kyle B...... Case Western Reserve University...... 37-39 Ignition Propensity of Hydrogen in the Presence of Metal Surfaces

Burba, Micheal Eric ...... Central State University...... 40-41 Authenticating Standards of Grain Size Measurement

Carrington, Kenya P...... Wilberforce University ...... 42-43 Design Analysis of the Active Bandpass Filter

Casto, Michael W...... Marietta College ...... 44-48 The Rose Run Exploration Trend of Eastern Ohio

Childerson, Laura E...... Ohio Northern University ...... 49-52 Lift and Drag Coefficients of Wing Designs for an RC Aircraft

Cosgrove, Mathew A...... Miami University...... 53-55 Indoor Navigation Using the Particle Filter

Coulter, Jeffrey V...... Central State University...... 56 The Development of a Flexible Manufacturing System

Dadhania, Mitul R...... University of Cincinnati ...... 57-60 Impedance Spectroscopy for Biosensors and Hydraulic Hose Failure Detection

DePuy, Rhonda S...... Owens Community College...... 61-62 Application of Space Travel Muscle Wasting Research to Aging and Health Compromised Populations

Donnadio, Tammy M...... Youngstown State University ...... 63-65 Autologous Mesenchymal Stem Cell Transplantation to Improve Fascial Repair

Elbuluk, Osama ...... The University of Akron...... 66-67 Integration of Bi-Camera Imaging System on Smart Balloon

Friedlein, Amy N...... Ohio Northern University ...... 68-69 Everlasting Energy

Fuhr, Jeremy N...... Wright State University...... 70-82 Micro Air Vehicle Project: Development of Quad-Winged Flapping Mechanisms

Gabet, Kathryn N...... Case Western Reserve University...... 83-86 Review of Water Management in PEM Fuel Cell Models

Galbraith, Marshall C...... University of Cincinnati ...... 87-95 Implicit Large Eddy Simulation of Low Reynolds Number Flow Past the SD7003 Airfoil

Garling, Joshua D...... Cedarville University...... 96-99 3-D Modeling Tool for New Classical Model of Elementary Particles

Genuske, Megan E...... Youngstown State University ...... 100-101 Autologous Mesenchymal Stem Cell Transplantation to Improve Fascial Repair

Greenfield, Amy J...... Cedarville University...... 102-103 An Investigation of Reaction Rates Using “The Antacid Tablet Race”

Guernsey, Jonathan J...... The University of Toledo...... 104-111 A Virtual Dynamic Modularized Network Architecture

Gulley, Kevin T...... Cedarville University...... 112-116 A Polycation Receptor in Paramecium and Tetrahymena

Harris, Ericka M...... The University of Akron...... 117-118 Shuttle 'Copter

Hegarty, Beth L...... Cedarville University...... 119-120 Proportions in Our Galaxy

Hoersten, Douglas J...... Ohio Northern University ...... 121-122 Analysis of Drag on a Model Rocket

Hollis, Dana L...... Cleveland State University ...... 123-124 Solar System Unit Plan

Howard, D’Nita M...... Wilberforce University ...... 125-127 Design of High-pass Filter to Improve the Quality of Sound and Desired Frequency

Hunt, Royshawnn Q...... Wilberforce University ...... 128-129 The Infrastructure and Security Advantages of Biometric Technology

Hussein, Frederick K...... Cleveland State University ...... 130-133 Freeze-Thaw Durability and Nondestructive Testing (NDT) of Pervious Concrete (PC)

Jones, Nicole E...... Cleveland State University ...... 134-137 Controlling an Induction Machine Using dSPACE

Juhl, Jonathan F...... Cedarville University...... 138 Atmospheric Electricity: Paving the Way for Meeting Tomorrow’s Energy Needs

Keller, Lauren J...... Cedarville University...... 139-141 Mathematics of the Moon

Kerr, Crystal M...... Cleveland State University ...... 142-147 Space Shuttle Glider Takes Mathematics on a Ride

Knapke, Robert D...... University of Cincinnati ...... 148-149 Counter-Rotating Aspirated Compressor Time Accurate Simulations

Lewis, Melissa B...... Cedarville University...... 150-152 Finding the Height of an Aurora

List, Michael G...... University of Cincinnati ...... 153-159 High Fidelity Simulation of an Embedded Transonic Fan Stage with Characterization of Flow Physics

Lukens, Jennifer M...... University of Dayton...... 160-167 Wing Mechanization Design and Analysis of a Perching Micro Air Vehicle

Marquette, Ryan M...... Lorain County Community College ...... 168-169 Will Lorain County Community College Benefit from a Photovoltaic System?

Mayer, Michele S...... Cedarville University...... 170-171 Ecosystems on Mars

Meador, Stephen P...... Miami University...... 172-174 Transient Performance of a Pneumatic Engine

Miller, Eric J...... University of Cincinnati ...... 175-178 Infrared Led Tracking System

Mitchell, Robert W...... University of Dayton...... 179-183 UCAV Wind Tunnel Video Inertial Force Measurement

Moe, Wai ...... The University of Akron...... 184-188 Feasibility Study Report: Analytical Prediction and Mechanical Design of a High-Altitude Intelligent Balloon

Murry, Maisha M...... University of Cincinnati ...... 189-191 Determination of the In Vitro Dissolution Rates of Respirable Airborne Particles of JSC-1A Lunar Mare Regolith Simulant in Simulated Lung Fluid

Namestnik, Marcy E...... Cleveland State University ...... 192-193 Weather - Making a Cloud

Neff, David N...... Wright State University...... 194-199 Integrated Bipolar Plate-Gas Diffusion Electrodes for PEM Fuel Cells

Niedermayer, Sarah A...... Cedarville University...... 200-201 Measuring Models and Materials of Planets

Noble, Garrett J...... Cedarville University...... 202-204 Monotonic Shear Testing of Hydrocyphenyl Hydrogels

O’Brien, Katie M...... Owens Community College...... 205-206 Preventing Bone Resorption in Space Flight

Oravec, Heather A...... Case Western Reserve University...... 207-213 The Development of a Soil for Lunar Surface Mobility Testing in Ambient Conditions

Orra, Mike ...... The University of Toledo...... 214-219 Study on the Optimal Compensation for an On-Board Processing Satellite Payload Experiencing Critical Channel Impairment

Palmer, Hallee M...... Cedarville University...... 220-222 Accelerometers

Parvez, Azm ...... The University of Akron...... 223-224 Electrospinning of Polymer Nano Fibers

Porché, Monica A...... Central State University...... 225-226 Analysis of Extrema Values of a Scalable Parallel Algorithm for Simulating the Control of Steady State Heat Flow through a Metal Sheet

Prater, Ryan D...... The University of Akron...... 227 Reengineering of Drill Collar Transportation and Storage

Reid, Richard E., III ...... Wright State University...... 228-229 Exploring Software-Defined Radio

Reinbolt, Michael J., Jr...... The University of Toledo...... 230-231 Computational Fluid Dynamics Simulation Study

Reynolds, Charles R...... Marietta College ...... 232-234 Computerized Modeling of a Heterogeneous Reservoir

Richardson, Vincent A...... Wilberforce University ...... 235-238 HF Radio Data Communications and Resonator Tuning

Rippl, Matthew D...... Wright State University...... 239-241 Demonstration of an Independent Streamerlike Atmospheric Plasma Jet

Rivera, Alexander L...... Case Western Reserve University...... 242-243 Endothelialization and Function in Bifurcating Microfluidic Channels

Robbins, Thomas R...... University of Dayton...... 244-247 Quasi-One-Dimensional Materials for Thermoelectric Energy Generation Applications

Sadeghipour, Ehsan ...... The Ohio State University...... 248-249 Design and Analysis of a Variable Compliance Robotic Transmission Using a Magneto-Rheological Fluid Damper

Scavuzzo, Joseph J...... The University of Akron...... 250-251 Natural Indicators Lesson Plan

Scavuzzo, Rachel A...... The University of Akron...... 252-253 Space Food and Nutrition

Steinberger, Miranda L...... The University of Toledo...... 254-256 Application of Cavitation for Controlled Cleaning

Studmire, Brittany M...... Cleveland State University ...... 257-258 NASA’s Role in Preserving and Protecting Our Environment

Tatarko, John L...... Cleveland State University ...... 259-260 The Complete Thermodynamics of Benzene Via Molecular Simulation

Tomko, Brian J...... Ohio Northern University ...... 261-269 Design of an Unmanned Aerial Vehicle Autopilot Using a Model Airplane Flight Simulator

Tomlinson, Ryan E...... Case Western Reserve University...... 270-273 Three Dimensional Dynamic Visualization of Bone Remodeling

Verhoff, Ashley M...... University of Cincinnati ...... 274-275 Optical Tracking and Verification for Autonomous Satellite Research

Vo, Thomas V...... The University of Akron...... 276-279 Design and Implementation of an Intelligent Balloon for Real-Time Environment Monitoring

Wilson-Simmons, Trudy ...... Cleveland State University ...... 280-282 Teaching Science and Using the Sheltered Instruction Observation Protocol (SIOP) Teaching Strategies

Zimcosky, Michael J...... The University of Toledo...... 283-285 Biped Robot Actuated By Shape Memory Alloys

FOREWORD

The Ohio Space Grant Consortium (OSGC) is a member of the National Space Grant College and Fellowship Program funded by Congress and administered by NASA Headquarters. The OSGC supports graduate fellowships and undergraduate scholarships for students studying toward degrees in Science, Technology, Engineering and Mathematics (STEM) disciplines at OSGC member colleges or universities. The awards are made to United States citizens, and since 1989, more than $4.1 million in financial support has been awarded to over 500 undergraduate scholars and 150 graduate fellows working toward degrees. The students are competitively selected from hundreds of applicants.

Funds for the fellowships and scholarships are provided by the National Space Grant Program. Matching funds are provided by the member universities, the Ohio Aerospace Institute (OAI), and private industry. This year, approximately $400,000 will be directed to scholarships and fellowships, representing contributions from NASA, the Ohio Aerospace Institute, member universities, and industry.

On Friday, April 18, 2008, all OSGC Scholars and Fellows reported on their projects at the Sixteenth Annual Student Research Project Symposium held at the Ohio Aerospace Institute in Cleveland, Ohio. In eight different sessions, Fellows and Senior Scholars offered 15-minute oral presentations on their research projects, fielded questions from an audience of their peers and faculty, and received written critiques from a panel of evaluators. Junior, Community College, Education, and Bridge Scholars presented posters of their research and entertained questions from all attendees during the afternoon poster session. All students were awarded Certificates of Recognition for participating in the annual event.

Research reports of students from the following schools are contained in this publication:

Affiliate Members
• The University of Akron
• Case Western Reserve University
• Cedarville University
• Central State University
• Cleveland State University
• The Ohio State University
• University of Cincinnati
• University of Dayton
• The University of Toledo
• Wilberforce University
• Wright State University

Participating Universities
• Marietta College
• Miami University
• Ohio Northern University
• Youngstown State University

Community Colleges
• Lorain County Community College
• Owens Community College

MEMBER INSTITUTIONS

Lead Institution / Representative
• Ohio Aerospace Institute ...... Ms. Ann O. Heyward

Affiliate Members / Campus Representative
• Air Force Institute of Technology ...... Dr. Jonathan T. Black
• Case Western Reserve University ...... Dr. J. Iwan D. Alexander
• Cedarville University ...... Professor Charles Allport
• Central State University ...... Dr. Gerald T. Noel, Sr.
• Cleveland State University ...... Ms. Pamela C. Charity
• Ohio University ...... Dr. Roger D. Radcliff
• The Ohio State University ...... Dr. Füsun Özgüner
• The University of Akron ...... Dr. Paul C. Lam
• University of Cincinnati ...... Dr. Gary L. Slater
• University of Dayton ...... Dr. Donald L. Moon
• The University of Toledo ...... Dr. D. Raymond Hixon
• Wilberforce University ...... Dr. Edward Asikele
• Wright State University ...... Dr. Mitch Wolff

Participating Institutions / Campus Representative
• Marietta College ...... Dr. Benjamin H. Thomas
• Miami University ...... Dr. Osama M. Ettouney
• Ohio Northern University ...... Dr. Jed E. Marquart
• Youngstown State University ...... Dr. Hazel Marie

Community Colleges / Campus Representative
• Columbus State Community College ...... Mr. Jeffery M. Woodson
• Cuyahoga Community College ...... Ms. Mikki Matzelle
• Lakeland Community College ...... Dr. Frederick W. Law
• Lorain County Community College ...... Dr. George Pillainayagam
• Owens Community College ...... Dr. Bruce Busby
• Terra Community College ...... Dr. James Bighouse

Government Liaisons / Representative
• NASA Glenn Research Center ...... Dr. M. David Kankam
  ...... Mr. Robert F. LaSalvia
• Air Force Research Laboratory ...... Ms. Kathleen Schweinfurth
  ...... Ms. Kathleen Levine

ACKNOWLEDGMENTS

The Ohio Space Grant Consortium (OSGC), including Dr. Paul C. Lam, Director; Dr. Gerald T. Noel, Associate Director; and Ms. Laura A. Stacko, Program Manager, wishes to thank the following evaluators for their time, their expertise, their support, and, most importantly, for the inspiration and encouragement they offered to the Ohio Space Grant Scholars and Fellows during the student presentations on April 18, 2008.

• Edward Asikele, Wilberforce University
• Jonathan T. Black, Air Force Institute of Technology
• Malcolm W. Daniels, University of Dayton
• Hazel Marie, Youngstown State University
• Mrityunjay Singh, Ohio Aerospace Institute
• Benjamin H. Thomas, Marietta College
• James H. Gilland, Ohio Aerospace Institute
• Daniel E. Paxson, NASA Glenn Research Center
• Kumar Yelemarthi, Wright State University
• Jiang Zhe, The University of Akron

Funding for Ohio Space Grant Scholarships and Fellowships is provided by the National NASA Space Grant College and Fellowship Program, Ohio Aerospace Institute, and the participating Ohio colleges, universities, and community colleges. Special thanks go out to the following individuals:

• Michael L. Heil and the Ohio Aerospace Institute, for hosting the event, welcoming the attendees to OAI, and offering leadership comments.

• Ann O. Heyward, Ohio Aerospace Institute, for all of her contributions to the Ohio Space Grant Consortium.

• Lynda D. Glover, NASA Glenn Research Center, for her motivating post-luncheon speech.

• Ohio Aerospace Institute staff whose assistance made the event a huge success!

2008 OSGC Student Research Symposium

Hosted By: Ohio Aerospace Institute (OAI)
22800 Cedar Point Road • Cleveland, OH 44142 • (440) 962-3000

Friday, April 18, 2008

AGENDA

Congratulations NASA on celebrating 50 years!

8:00 AM – 8:30 AM   Sign-In / Breakfast / Refreshments / Student Portraits ...... Lobby
8:30 AM   Welcome – Paul C. Lam, Director, Ohio Space Grant Consortium ...... Forum (Lobby Level)
8:30 AM – 8:45 AM   Leadership Comments – Michael L. Heil, President and Chief Executive Officer, Ohio Aerospace Institute ...... Forum (Lobby Level)
9:00 AM – 10:15 AM   Oral Presentations – Senior Scholars and Fellows, Session 1 (Groups 1, 2, 3, and 4)

• Group 1 ...... Forum (Lobby Level)
• Group 2 ...... President’s Room (Lower Level)
• Group 3 ...... Industry Room A (2nd Floor)
• Group 4 ...... Industry Room B (2nd Floor)

10:15 AM – 10:30 AM   Break ...... Lobby
10:30 AM – 11:45 AM   Oral Presentations (Continued) – Senior Scholars and Fellows

Session 2 (Groups 5, 6, 7, and 8)

• Group 5 ...... Forum (Lobby Level)
• Group 6 ...... President’s Room (Lower Level)
• Group 7 ...... Industry Room A (2nd Floor)
• Group 8 ...... Industry Room B (2nd Floor)

11:45 AM – 12:45 PM   Luncheon Buffet ...... Atrium / Sunroom
12:45 PM – 1:15 PM   Lynda D. Glover, Coordinator, Cooperative Education Program, NASA Glenn Research Center ...... Forum

1:15 PM – 1:30 PM Group Photo ...... Lobby / Atrium

1:30 PM – 2:30 PM   Poster Presentations – Junior, Community College, Education, and Bridge Scholars ...... Lobby
2:30 PM   Symposium Adjourns

Congratulations NASA on celebrating 50 years!

STUDENT ORAL PRESENTATIONS

SESSION 1 – 9:00 AM to 10:15 AM

Group 1 – Aerospace Eng./Electrical Engineering
FORUM (AUDITORIUM – LOBBY LEVEL)
Evaluators: Edward Asikele and Kumar Yelemarthi

9:00   Marshall Galbraith, MS 2, Cincinnati
9:15   Michael List, PhD 1, Cincinnati
9:30   Mike Orra, PhD 1, Toledo
9:45   Christian Austin, PhD 2, Ohio State

Group 2 – Mechanical Engineering
PRESIDENT’S ROOM (LOWER LEVEL)
Evaluators: Hazel Marie, Benjamin Thomas, and Jiang Zhe

9:00   Laura Childerson, Senior, Ohio Northern
9:15   Jeremy Fuhr, Senior, Wright State
9:30   Stephen Meador, Senior, Miami University
9:45   Ryan Prater, Senior, Akron
10:00  Thomas Robbins, Senior, Dayton

Group 3 – Electrical Engineering
INDUSTRY ROOM A (2ND FLOOR)
Evaluators: Malcolm Daniels and James Gilland

9:00   Nicole Jones, Senior, Cleveland State
9:15   Richard Reid, Senior, Wright State
9:30   Thomas Vo, Senior, Akron
9:45   Jonathan Guernsey, PhD 2, Toledo

Group 4 – Aerospace Engineering
INDUSTRY ROOM B (2ND FLOOR)
Evaluators: Jonathan Black and Daniel Paxson

9:00   Kyle Brady, Senior, Case Western Reserve
9:15   Kathryn Gabet, Senior, Case Western Reserve
9:30   Eric Miller, Senior, Cincinnati
9:45   Ryan Tomlinson, Senior, Case Western Reserve
10:00  Jennifer Lukens, MS 1, Dayton

BREAK – 10:15 AM to 10:30 AM

SESSION 2 – 10:30 AM to 11:45 AM

Group 5 – Mechanical Engineering
FORUM (AUDITORIUM – LOBBY LEVEL)
Evaluators: James Gilland, Hazel Marie, and Jiang Zhe

10:30  Mitul Dadhania, Senior, Cincinnati
10:45  Tammy Donnadio, Senior, Youngstown State
11:00  Robert Mitchell, Senior, Dayton
11:15  Michael Zimcosky, Senior, Toledo

Group 6 – Industrial & Systems Eng./Mfg. Eng./Civil Eng./Pet. Eng.
PRESIDENT’S ROOM (LOWER LEVEL)
Evaluators: Daniel Paxson and Jay Reynolds

10:30  Jeffrey Coulter, Senior, Central State
10:45  Frederick Hussein, Senior, Cleveland State
11:00  Charles Reynolds, Senior, Marietta College
11:15  Heather Oravec, PhD 3, Case Western Reserve

Group 7 – Computer Science/Computer Eng.
INDUSTRY ROOM A (2ND FLOOR)
Evaluators: Edward Asikele and Kumar Yelemarthi

10:30  Joshua Garling, Senior, Cedarville
10:45  Vincent Richardson, Senior, Wilberforce
11:00  Brian Tomko, Senior, Ohio Northern

Group 8 – Math/Biology/Mat. Eng./Rad. Eng.
INDUSTRY ROOM B (2ND FLOOR)
Evaluators: Mrityunjay Singh and Benjamin Thomas

10:30  Mathew Cosgrove, Senior, Miami University
10:45  Kevin Gulley, Senior, Cedarville
11:00  David Neff, MS 1, Wright State
11:15  Maisha Murry, PhD 1, Cincinnati

2008 Group Photo

(Scholars, Fellows, Campus Representatives, and Advisors)

The OSGC credits Sharon Mitchell, Kelly Garcia, and Matt Grove

for taking pictures throughout the Symposium.

Ohio Space Grant Consortium (OSGC) Annual Student Research Symposium Photographs – 2008

Stephanie Reynolds, OSGC, and Gerald Noel, Sr., Central State

Eric Baumgartner admires posters of ONU students Douglas Hoersten and Amy Friedlein.

D’Nita Howard, Wilberforce University

Matthew Rippl, Wright State University

Monica Porché, Central State

Michael Reinbolt, Jr., Toledo

Welcoming Session

Michael L. Heil, President and CEO, Ohio Aerospace Institute; Ann O. Heyward, Vice President, Ohio Aerospace Institute; Paul C. Lam, Director, Ohio Space Grant Consortium

Following are photographs of Graduate Fellows and Senior Scholars who presented individual 15-minute oral presentations on their research in two morning sessions:

Charles Reynolds, Marietta College, explains his research project entitled, “Computerized Modeling of a Heterogeneous Reservoir.”

Thomas Vo, The University of Akron, discusses his project, “Design and Implementation of an Intelligent Balloon for Real-Time Environment Monitoring.”

Eric Miller, University of Cincinnati, presents his research entitled, “Infrared Led Tracking System.”

Tammy Donnadio, Youngstown State University, shares her research entitled, “Autologous Mesenchymal Stem Cell Transplantation to Improve Fascial Repair.”

Maisha Murry, University of Cincinnati, describes her project entitled, “Determination of the In Vitro Dissolution Rates of Respirable Airborne Particles of JSC-1A Lunar Mare Regolith Simulant in Simulated Lung Fluid.”

Kevin Gulley, Cedarville University, presents his research, “A Polycation Receptor in Paramecium and Tetrahymena.”

Heather Oravec, Case Western Reserve University, explains her research entitled, “The Development of a Soil for Lunar Surface Mobility Testing in Ambient Conditions.”

Jeremy Fuhr, Wright State University, discusses his project, “Micro Air Vehicle Project: Development of Quad-Winged Flapping Mechanisms.”

Edward Asikele (left) and Malcolm Daniels (right)

Cleveland State Scholars Nicole Jones, John Tatarko, and Frederick Hussein

Ann Heyward (right) introduces featured luncheon speaker, Lynda Glover (left), NASA Glenn Research Center, who presented “Working for NASA.”

The following are photographs of Junior, Community College, Education and Bridge Scholars who presented their research projects during the Poster Session:

Amy Greenfield (left), Cedarville, explains her research to Edward Asikele (right).

Brittany Studmire (right), Cleveland State, presents her research to Pamela Charity (left).

Azm Parvez, The University of Akron

Lee Rivera, Case Western Reserve University

Osama Elbuluk (left), Akron, shares his research with Michael List (right), Cincinnati.

Ashley Verhoff, Cincinnati, discusses her research with Christian Austin, OSU.

Megan Genuske, Youngstown State, describes her research to Gary Slater, University of Cincinnati.

Wilberforce Scholars (from left to right): D’Nita Howard, Vincent Richardson, and Zoly Amegboh

Garrett Noble, Cedarville University

Wai Moe, The University of Akron

Royshawnn Hunt, Wilberforce University

Matt Rippl explains his research to his Dad.

The University of Akron Scholars (from left to right): Thomas Vo, Wai Moe, and Ryan Prater enjoying the day. Paul Lam is also featured (second from right).

Trudy Wilson-Simmons, Cleveland State

Lauren Keller, Cedarville University

Ryan Marquette, LCCC

Richard Bozak, Jr., Cleveland State

Eric Burba, Central State, explains his research to Augustus Morris, Jr.

Cedarville Scholars Hallee Palmer (left) and Beth Hegarty (right)

Kenya Carrington (right), Wilberforce, discusses her research with Maisha Murry (left), Cincinnati

University of Dayton students: Jennifer Lukens, Robert Mitchell, and Thomas Robbins

Owens Scholars Rhonda DePuy (left) and Katie O’Brien (right)

Lee Rivera (right), Case Western, explains his research to Malcolm Daniels, Dayton.

Fred Hussein (left), Cleveland State, Garrett Noble, Kevin Gulley, and Joshua Garling, Cedarville

From left to right: Benjamin Thomas, Osama Elbuluk (Akron), Paul Paslay, and Charles Reynolds (Marietta College)

Paul Lam (left) and Hazel Marie (right)

Mike Heil (left), OAI, and Gary Slater (right), University of Cincinnati

Marcy Namestnik, Cleveland State, explains her education project to Jane Zaharias

Robert Chasnov (left), Cedarville, and Kumar Yelemarthi (right), Wright State

A Radio Based Tracking System for High Altitude Ballooning

Student Researcher: Makeba A. Anderson

Advisor: Dr. Augustus Morris, Jr.

Central State University Manufacturing Engineering

Introduction High altitude ballooning is a low-cost method of accessing space for engineering challenges. It is a hands-on way to introduce engineering students interested in spacecraft design to fundamental engineering techniques. It can also be used to study atmospheric conditions and a wide range of other engineering experiments.

Abstract The purpose of this project was to build a tracking device to successfully track a high altitude balloon as it travels in flight to its landing position. A scientific payload will be attached to the balloon and its data analyzed after recovery. Accurate position and altitude must be acquired during flight and relayed to the ground quickly for a successful recovery. This project included hardware built to obtain specific positioning and altitude data from a Global Positioning System (GPS) tracking device and software that relayed this information through a handheld HAM radio. Lastly, the tracking system was ground tested and a path was simulated through a computer running map-based tracking software.

Objectives The mission was to build an effective data-receiving system to successfully retrieve information collected during experiments conducted at high altitudes. It was also necessary to track the balloon's path for successful recovery of additional data after landing. A tracking system was built consisting of two major components: a transmitting system aboard the balloon’s payload, interfaced to a GPS, and the receiver station. Commonly referred to as the base station, the receiver station includes a HAM radio and a GPS receiver, both integrated with a computer to plot the tracked path and to display converted scientific data.

Methodology The transmitting system aboard the payload included a 4.5 V power supply, a handheld transceiver, and a GPS tracking device. A Kenwood TH-D7A handheld transceiver was used to convert the GPS data into Automatic Position Reporting System (APRS) format, which is then transmitted over HAM radio. An APRS infrastructure is made up of a variety of Terminal Node Controller (TNC) equipment created by individual Amateur Radio operators. The TNC functions as a modem between the radio and a computer. The transceiver used includes a built-in AX.25 Terminal Node Controller and APRS software. Through APRS, the data can be distributed globally for instant access. The most visible aspect of APRS is its map display: any station, radio, or object that has an attached GPS is automatically tracked. The GPS chosen uses WAAS technology to send accurate position data to the controller. At 4800 baud, the data is sent to the controller, which is interfaced with the GPS device, the HAM radio, and a computer.
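To make the data path concrete, the following is a minimal Python sketch, written for this report rather than taken from the project, that parses a NMEA GGA sentence of the kind the Garmin GPS emits and formats an uncompressed APRS position report of the kind the TNC transmits. The balloon symbol 'O' and the /A= altitude comment follow public APRS conventions, and the function names are illustrative.

```python
# Hypothetical sketch: NMEA GGA sentence -> APRS position report.
def parse_gga(sentence):
    """Extract latitude, longitude, and altitude from a $GPGGA sentence."""
    f = sentence.split(",")
    lat, ns = f[2], f[3]        # ddmm.mmmm and N/S hemisphere
    lon, ew = f[4], f[5]        # dddmm.mmmm and E/W hemisphere
    alt_m = float(f[9])         # altitude above mean sea level, meters
    return lat, ns, lon, ew, alt_m

def aprs_position(lat, ns, lon, ew, alt_m):
    """Format an uncompressed APRS position report ('O' = balloon symbol)."""
    alt_ft = int(alt_m * 3.28084)               # APRS altitude is given in feet
    return "!%s%s/%s%sO/A=%06d" % (lat[:7], ns, lon[:8], ew, alt_ft)

gga = "$GPGGA,123519,4807.038,N,08301.000,W,1,08,0.9,545.4,M,46.9,M,,*47"
print(aprs_position(*parse_gga(gga)))   # -> !4807.03N/08301.00WO/A=001789
```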

Another transceiver is located at the ground station and used to receive the data. Once the radio receives the data, it is sent to a computer located at the receiver station. APRSPoint software is used to track the actual launch, flight, descent, and landing location of the balloon. The receiver station also includes a second HAM radio capable of receiving APRS data, connected to the computer. A block diagram of the system follows.

Figures and Charts

The Tracking System “Transmitter” Flow chart

[Transmitter flow chart: GPS (NMEA-formatted output) → TNC (APRS/AX.25 formatting) → radio transmitter, with separate GPS and transmitter antennas.]

Transmitter components:
1. Kenwood TH-D7A Transceiver
2. Garmin 25 LVC GPS Device
3. Garmin GPS Antenna
4. 4.5 VDC Power Supply

The Ground System Flow Chart

[Ground system flow chart: radio receiver with antenna (APRS/AX.25 data) → TNC → computer serial port; GPS (NMEA-formatted output) → computer; map display generated with APRSPoint and MapSource software.]

Ground station components:
1. Personal Computer with APRSPoint Software
2. Kenwood TH-D7A Transceiver
3. Garmin V GPS Navigator

Sparse Synthetic Aperture Radar Image Formation and Parameter Estimation

Student Researcher: Christian D. Austin

Advisor: Dr. Randolph L. Moses

The Ohio State University Department of Electrical and Computer Engineering

Abstract Traditional Synthetic Aperture Radar (SAR) 3D imaging uses samples of electromagnetic backscatter of a scene collected over multiple linear flight paths equi-spaced in elevation angle. To increase image resolution, backscatter is collected over more aspect angles, and to avoid imaging artifacts, spacing between backscatter samples must be less than an upper bound. Hence, accurate high-resolution traditional imaging requires long data collection time and high data storage requirements, which may be impractical. If the number of scattering centers in a scene is sparse, it has been shown that it may be possible to reconstruct the image accurately using only a small subset of the data required for traditional reconstruction [1],[2],[3]. In this work, we first examine reconstruction of sparse 3D SAR imagery using only a fraction of the data needed for traditional reconstruction; we then examine the possibility of using sparse image reconstruction methods to perform parametric estimation. Reconstructed images of a construction backhoe using a sparse data flight path and simulations demonstrating parametric estimation of scattering center parameters using sparse reconstruction are presented.

Project Objectives Synthetic aperture radar (SAR) is an all-weather, persistent, and large standoff distance imaging technique. 3D SAR images are pixels representing the complex valued electromagnetic reflectivity of the scene in 3-dimensional Cartesian space. A collected electromagnetic return can be interpreted as a portion of a radial line from the radar to scene center in k-space (frequency space) of the image [4]; thus, the radar flight path determines which portion of the scene’s k-space is collected. Accurate 3D SAR images that are well-resolved typically require samples from many closely spaced linear flight passes over a scene [5]. Interferometric synthetic aperture radar (IFSAR) processing is a sparse 3D image reconstruction method requiring only two 2D SAR images formed from radar returns along two linear flight paths closely spaced in elevation angle with the same azimuth extent [4],[6]. IFSAR image formation is capable of producing accurate reconstructions as long as scattering centers are well-separated with respect to radar resolution and scattering center magnitude is large compared to noise variance.

Typically, scenes have a sparse number of prominent scattering centers, meaning that there are only a small number of strong scattering centers in the scene. Under this sparsity condition, it is possible, under certain conditions, to reconstruct a 3D SAR image accurately without the linear flight paths needed in traditional 3D imaging or the well-separated point scatterers needed in IFSAR imaging. The solution to the optimization problem

    \min_x \| y - Ax \|_2^2 + \lambda \| x \|_p    (1)

has been shown to produce accurate, well-resolved reconstructions of the true image using sparse collections of 2D SAR data [7],[8]. The p-norm is denoted as \| \cdot \|_p; λ is the sparsity penalty parameter; y is a measured data vector; x is the reconstructed image vector, and A is a measurement matrix that maps vectors from the image domain to the measured data domain. In general, solving the optimization problem (1) with p=1 and an appropriate choice of λ is convex, and has been shown to be equivalent to solving the Basis Pursuit (BP) optimization problem [9]:

    \min_x \| x \|_1 \quad \text{subject to} \quad \| y - Ax \|_2 \le \varepsilon,    (2)

where ε is a user-specified noise tolerance parameter. Figure 1 demonstrates why (2), and equivalently (1), result in sparse solutions for the special case where x is two-dimensional and y is one-dimensional. The straight line is y=Ax surrounded by an ε ribbon, and the diamond is an l1-ball. The optimal solution occurs for the smallest diamond that touches the ribbon; this solution will result in one of the dimensions of x being zero.
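For readers who wish to experiment, below is a minimal sketch, not the authors' implementation, of solving problem (1) with p = 1 by iterative soft thresholding (ISTA), a standard first-order method for this objective; A, y, and lam stand for the measurement matrix, the measured data, and the penalty λ defined above.

```python
import numpy as np

def soft(u, t):
    """Complex soft-threshold: shrink magnitudes by t, preserving phase."""
    mag = np.abs(u)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * u, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimize ||y - Ax||_2^2 + lam * ||x||_1 over complex x."""
    x = np.zeros(A.shape[1], dtype=complex)
    L = np.linalg.norm(A, 2) ** 2            # largest singular value squared
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ x - y)      # half the gradient of the data term
        x = soft(x - grad / L, lam / (2 * L))
    return x
```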

Compressive sensing literature provides sufficient conditions for low-error image reconstructions using (2). In particular, if the image x is sparse with respect to some basis, and the pairwise correlation between distinct columns of the measurement matrix A is sufficiently small, it has been shown that the error between the true solution and the estimated solution can be bounded [1],[2]. This condition is commonly referred to as the Uniform Uncertainty Principle (UUP) in the literature. The error bound is a linear function of ε and ||x_{0,S} − x_0||_1, where x_{0,S} is a vector of the true image pixels with the largest S entries retained and the remaining entries replaced by zeros, and S is chosen so that the UUP is satisfied. So, reconstruction error is a linear function of the noise tolerance and of how sparse the true image is.

In SAR, x is a vectorized N-dimensional (ND) image, and y is vectorized ND k-space frequency data (collected electromagnetic returns), where N may be 1, 2, or 3. The matrix A should approximate the Fourier operator, mapping spatial data into k-space data; A is implemented as an ND Discrete Fourier Transform (DFT) matrix with respect to the spatial and frequency vectors. It is advantageous from a computational standpoint to use the DFT, since it can be implemented efficiently using the Fast Fourier Transform (FFT). The DFT matrix is a special case of the matrix

    A = [ e^{-j (X_i^T x_k)} ].    (3)

Here, the index i indexes rows; k indexes columns, and superscript T denotes vector transpose; X_i = [X_i, Y_i, Z_i]^T and x_k = [x_k, y_k, z_k]^T are the frequency and spatial coordinate vectors, respectively, where the components of the X_i and x_k vectors are the locations of frequency and spatial samples in 3-space with respect to the first, second, and third dimensions, (x, y, z) in Cartesian coordinates; the vectors X_i are determined by the locations of acquired k-space samples. If the number of samples in the p-th dimension is N_p, the indices i, k are contained in {1,…,N_1·N_2·N_3}, indexing all unique permutations of the X_i or x_k coordinate vectors. Denote T_p as a uniform sampling interval in the p-th dimension; if the set of spatial samples consists of all permutations of samples at locations (n_1 T_1, n_2 T_2, n_3 T_3) and the set of frequency samples consists of all samples at locations (2π m_1/(N_1 T_1), 2π m_2/(N_2 T_2), 2π m_3/(N_3 T_3)), where n_p, m_p ∈ {1,…,N_p}, then A is a DFT matrix. For sparse image reconstruction, if frequency samples are not on the uniform DFT grid, they are interpolated to the grid for processing. Flight path geometry and the data sampling strategy along this path determine what k-space data is collected, and hence the structure of the matrix A.
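As an illustration, here is a sketch (variable names are ours, not the paper's) of building the measurement matrix of equation (3) from arbitrary 3D k-space sample locations and a grid of image voxel locations; with the uniform sampling described above it reduces to a DFT matrix.

```python
import numpy as np

def measurement_matrix(k_samples, voxels):
    """Equation (3): entry (i, k) is exp(-j * X_i . x_k).

    k_samples: (M, 3) array of k-space sample locations X_i (rad/m).
    voxels:    (P, 3) array of image voxel locations x_k (m).
    Returns the (M, P) complex matrix A.
    """
    return np.exp(-1j * (k_samples @ voxels.T))
```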

In this work, we first explore the use of sparse reconstruction to accurately resolve 3D SAR images from data on sparse non-linear data collection paths. We examine the validity of using such paths from the perspective of compressive sensing. In addition to image reconstruction, we are also interested in estimating point scatterer parameters, which include the number of scatterers in a scene, the magnitude, phase and location of each scatterer. In general this estimation problem is non-convex and relies on a model order selection step to determine the number of point scatterers before scattering center parameters can be estimated [10]. We pose this non-convex estimation and model order selection problem as a convex sparse reconstruction problem, and present some simulation results demonstrating the potential of this estimation technique.

Methodology SAR imaging and scattering center parameter estimation can both be posed as sparse reconstruction problems. In this section, we present the methodology for solving these problems within a sparse reconstruction framework. First, we present an approach to image formation using a sparse non-linear flight path and then we examine a method of estimating continuous scattering center magnitude, phase and location parameters using discrete parameter sampling and sparse reconstruction.

As discussed in the preceding section, in sparse 3D SAR image reconstruction, the matrix A is a DFT matrix including only the rows corresponding to frequencies sampled along the flight path. It has been shown that if the DFT matrix’s rows are omitted uniformly at random to form A, then A satisfies the UUP with high probability, in which case the error of the reconstructed image x will be bounded by a linear function of the image’s sparsity error, ||x_{0,S} − x_0||_1, and the noise tolerance ε [2]. Omitting rows at random corresponds to collecting k-space data at uniformly random locations in k-space. However, it is not possible to fly a flight path that collects data uniformly at random over all of k-space.

We propose a practical flight path that approximates the matrix A and can be used to reconstruct an image that approximates the true sparse image. First, we assume that the k-space data is contained only in a 3D rectangle in k-space; outside of this rectangle, the data is zero. From Fourier theory, the magnitude of the image that results from inverting just this rectangle of data, as opposed to all of k-space, is a lower-resolution (smoothed) version of the true sparse image; we wish to reconstruct this lower-resolution approximation to the true image, which we refer to as x_0. Reconstruction requires measuring random samples from the 3D rectangle to construct A. If there is not a large amount of smoothing, i.e., a large rectangle is used, the resulting smoothed image, x_0, will not be as sparse as the original image, but can still be reconstructed; the reconstruction error bound will be larger than for the true image because of non-zero sparsity error. It still may be problematic to collect truly random k-space samples in a 3D rectangle due to practical flight path restrictions. The second approximation is to collect data on a continuous pseudo-random flight path within the 3D rectangle, and to consider this data as random k-space samples. The optimization problem (1) is then solved using this data and measurement matrix, resulting in an estimate of x_0.

In some applications, it is desirable to estimate the number, location, magnitude, and phase of each scattering center in a scene. The measured electromagnetic radar returns, f, from a scene with N isotropic point scattering centers can be modeled as

    f(X_i) = \sum_{k=1}^{N} a_k S(X_i, x_k), \quad \text{where } S(X_i, x_k) = e^{-j (X_i^T x_k)}.    (4)

The parameters a_k = m_k e^{jα_k} are the complex amplitude parameters, and m_k and α_k are the magnitude and phase of the amplitude parameter; x_k = [x_k, y_k, z_k]^T are the spatial parameters and X_i = [X_i, Y_i, Z_i]^T are the measured frequency parameters, both with respect to a Cartesian coordinate system. In general, estimation of the number of scattering centers, N, the spatial parameters, and the amplitude parameters using the measured frequency parameters is a joint model order selection and non-convex estimation problem.

Consider an M×P dictionary matrix A in (3) formed by sampling the scattering center function S(X_i, x_k) in (4) at a sequence of P spatial sample locations (x_i)_{i=1…P} and at the sequence of M measured frequency sample locations (X_i)_{i=1…M}. If the N true isotropic scattering center locations, (x_i^0)_{i=1…N}, coincide with samples from the sample set used in generating the measurement matrix A, then the true scene model is equivalent to the representation f = Aa, where f is a vector of measured frequency samples and a is the complex amplitude vector [a_k]. If a true scattering center location does not coincide with a sample, it is assumed that a small number of adjacent scattering centers in the sample set model the true scattering center well, and we consider this linear combination as representing the true scattering center. In any case, if P >> N, then the number of scattering centers is sparse with respect to the matrix A. We explore the use of sparse reconstruction using the measurement matrix A for model order and parameter estimation.

In the sparse reconstruction optimization problems (1) or (2), A is the measurement matrix; the amplitude vector a is the vector x, and y is the measured k-space data. Denote the complex-valued solution vector of the sparse reconstruction problem as â, and its i-th entry as â_i. The number of non-zero entries (or entries above some small threshold) in this vector is an estimate of the number of scattering centers in the scene, N; denote this estimate as N̂. If â_i is non-zero (or above some small threshold), then there is a scattering center present with estimated location x̂_i = x_i (the scattering center corresponding to the i-th column of A), and â_i is an estimate of the complex amplitude of this scattering center; otherwise the scattering center is not present.
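A small sketch of this read-out step; the relative threshold is an assumption chosen for illustration, not a value from the paper.

```python
import numpy as np

def estimate_scatterers(a_hat, grid, rel_thresh=0.05):
    """Model order and parameter estimates from the sparse solution a_hat.

    a_hat: (P,) complex solution vector; grid: (P,) or (P, 3) sample locations.
    Returns (N_hat, estimated locations, estimated complex amplitudes).
    """
    keep = np.flatnonzero(np.abs(a_hat) > rel_thresh * np.abs(a_hat).max())
    return len(keep), grid[keep], a_hat[keep]
```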

The sparse reconstruction approach to parameter estimation is convex, automatically chooses model order, and provides unique parameter estimates under (2), or under (1) with p=1, which is the case we are interested in here. Accuracy of the location estimates is implicitly a function of the Euclidean distance between spatial parameter samples (x_i)_{i=1…P}. By decreasing the distance between samples and increasing the number of columns in A, it may appear that accuracy can be arbitrarily improved. The problem is that as the samples x_i are spaced closer together in Euclidean distance, the correlation between the columns increases, and eventually the matrix may no longer satisfy the UUP; hence, we cannot assure that ||â − a||_2 is well controlled. In other words, the closer the x_i are spaced, the more adjacent columns of A “look alike” and can model the data equally well.

It is desirable to design a flight path, as in the image reconstruction problem, so that the Euclidean distance between parameter samples can be decreased while still satisfying the UUP, so that there is a bound on the estimation error ||â − a||_2. In general, it is computationally expensive or prohibitive to determine whether the UUP is satisfied for deterministic measurement matrices, and the matrices in the literature that have been shown to satisfy the UUP with high probability are currently all random [1],[2]. Here, we present an empirical investigation of the proposed parameter estimation algorithm under different parameter spacings. We generate synthetic data using the model (4) and show results of parameter estimation when P > M. We qualitatively examine how small the parameter spacing (how large P) can be before estimation breaks down.

Results Obtained The Air Force Research Lab (AFRL) has released SAR data for a construction backhoe along a pseudo-random “squiggle” path shown in Figure 2; this data was generated by the electromagnetic simulation program Xpatchf. We use this data set to investigate image reconstruction from a sparse flight path. Data from the azimuth and elevation angles contained in the dotted box in Figure 2 are used, and (1) is solved with p=1 to reconstruct a sparse image x. The magnitude of the actual data in 3D k-space is shown in Figure 3; the axes units are radians. This data is contained in a 3D bounding rectangle, which determines the resolution of the reconstructed image, and we assume that the data is approximately randomly sampled within this rectangle; truly random sampling clearly does not occur here, but accurate reconstruction still results, as we will see. A facet model of the backhoe used to generate the SAR data is shown in Figure 4. An image formed by traditional inverse Fourier transform processing of the k-space data in Figure 3 is presented in Figure 5. The sparse reconstruction of the image x is shown in Figure 6. In each of the reconstructed images, only the top 20 dB magnitude pixels are shown to emphasize prominent scattering centers, and color and size encode magnitude; smaller light pixels have lower magnitude than larger darker pixels.

For simplicity of exposition, we consider parameter estimation using sparse reconstruction in the 1D case; that is, the spatial and frequency parameters are scalars, x_i and X_i, respectively. We also assume that the spatial and frequency parameters are uniformly sampled, with a spatial sampling period of 1 meter and a frequency bandwidth of 1 Hz, i.e., a frequency bandwidth of 2π radians. For standard Fourier estimation of spatial locations, there are P samples in the spatial domain and M = P samples in the frequency domain contained in the bandwidth 2π. We examine sparse reconstruction estimation performance for the case where P = KM > M, where K is an integer greater than 1, referred to as the oversampling factor, and M is held constant. Spatial samples x_i are oversampled in this case with period 1/K. For brevity, we restrict our attention to estimation of scattering center spatial locations separated by less than the traditional 1 meter of resolution, since we wish to examine how the accuracy of the estimation algorithm can be improved by oversampling in the spatial parameter space before the sparse estimation algorithm breaks down due to high inter-column correlation in the A matrix.
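The following sketch constructs this 1D oversampled dictionary under the stated sampling conventions; the exact placement of the samples on the grid is our assumption for illustration.

```python
import numpy as np

def dictionary_1d(M, K):
    """M frequency samples over a 2*pi band; spatial grid oversampled by K."""
    freqs = 2 * np.pi * np.arange(M) / M       # frequency samples X_i
    xs = np.arange(K * M) / K                  # spatial samples x_i, spacing 1/K m
    A = np.exp(-1j * np.outer(freqs, xs))      # M x (K*M) dictionary matrix
    return A, xs
```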

In simulations, 512 frequency samples were generated using the model (4) for N=3 scattering centers located at x_1^0 = 9, x_2^0 = 9.25, and x_3^0 = 9.75 meters, each with amplitude a_i = 1. Zero-mean independent Gaussian noise with variance 10^{-1} was added to the frequency samples. The optimization problem (2) was solved with ε = 1 to find the sparse reconstruction. In the following figures there are two panes; the top pane shows the normalized magnitude and the bottom pane shows the phase in radians of the estimated complex amplitudes, â, of scattering centers in the scene. True scattering values and locations are marked with ‘x’s. Figures 7 and 8 show the entries of the sparse reconstruction vector â for oversampling factors of K=4 and K=16, respectively. We note that both figures are zoomed in to show detail in the area of interest. Amplitudes â_i outside of this region are small and would be discarded by an estimation algorithm that calculates model order by counting only estimates â_i above a small threshold.

Significance and Interpretation of Results Comparing the standard SAR image in Figure 5 with the sparse reconstruction in Figure 6, the sparse reconstruction is better resolved than the standard image. Comparing the reconstructions with the facet model of the backhoe in Figure 4, it appears that a significant number of pixels that do not lie on the backhoe are eliminated in the sparse reconstruction. So, the sparse reconstruction algorithm removes artifacts present from using a non-linear flight path. The sparse reconstruction also accurately shows prominent features of the backhoe. For example, the slanted line of the front hood and the corner of the front scoop are well resolved in the sparse reconstruction. These results show that it is possible to form accurate well-resolved 3D SAR images using a sparse flight path. ATR and feature extraction algorithms will benefit from this type of sparse imaging, given the ability to form better resolved images using less data.

Parameter estimation results presented in Figures 7 and 8 demonstrate that it is possible to estimate spatial coordinates spaced closer than the standard period of 1 meter. Assuming that we discard any scattering centers with â_i below some small threshold, Figure 7 shows that with an oversampling level of K=4 it is possible to successfully estimate scattering center location parameters, since the magnitudes at the true scattering centers are much larger than at other locations. At K=16 in Figure 8, large-magnitude â_i are located in the region of the true scattering center locations, but the true scattering center locations have smaller magnitudes than several of the neighboring â_i. So, the number of scattering centers will be overestimated, and true scattering center locations may be discarded. Thus, it is possible to estimate scattering center spatial coordinates spaced closer than 1 meter, but if the spatial coordinates are sampled too finely (K becomes large), estimation accuracy deteriorates. When sampling becomes fine, columns of the measurement matrix A become highly correlated, and combinations of the closely spaced columns model the data just as well as the true column. It should also be noted that the magnitude and phase estimates may not always be accurate. This is not a serious problem if the model order and spatial coordinates are accurately estimated: given the number of scattering centers and their locations, the complex amplitudes (magnitude and phase) can be estimated by solving a linear least squares problem. These results show that it is possible to perform scattering center parameter estimation by solving a convex sparse optimization problem, as opposed to solving a joint model order selection and non-convex optimization problem. Parameter estimation using sparse reconstruction is an area of ongoing research, where we are exploring spatial and frequency parameter sampling (flight path) schemes that are not necessarily uniform and may be used to improve parameter estimation accuracy.

Figures

Figure 1. BP optimization solution.

Figure 2. Squiggle path. Part in box is used for image reconstructions.

Figure 3. Squiggle path frequency data magnitude.

Figure 4. Backhoe facet model used to generate frequency data.

Figure 5. Traditional Fourier image of backhoe using squiggle path data.

Figure 6. Sparse reconstruction image of backhoe using squiggle path data.

Figure 7. Parameter estimation using sparse reconstruction for an oversampling factor of K=4.

Figure 8. Parameter estimation using sparse reconstruction for an oversampling factor of K=16.

Acknowledgments The author would like to thank Dr. Randolph L. Moses for his guidance and the Ohio Space Grant Consortium for their support.

References
1. E. Candès and T. Tao, “Near-optimal signal recovery from random projections: universal encoding strategies”, IEEE Trans. on Information Theory, Vol. 52, No. 12, pp. 5406-5425, Dec. 2006.
2. E. Candès, J. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements”, Comm. Pure Appl. Math., Vol. 59, No. 8, pp. 1207-1223, 2006.
3. D. Donoho, “Compressed sensing”, IEEE Trans. on Information Theory, Vol. 52, No. 4, pp. 1289-1306, April 2006.
4. C. V. Jakowatz, Jr., D. E. Wahl, P. H. Eichel, D. C. Ghiglia, and P. A. Thompson, Spotlight-Mode Synthetic Aperture Radar: A Signal Processing Approach. Boston: Kluwer Academic Publishers, 1996.
5. S. DeGraaf, “3-D fully polarimetric wide-angle superresolution-based SAR imaging,” in Thirteenth Annual Adaptive Sensor Array Processing Workshop (ASAP 2005), MIT Lincoln Laboratory, Lexington, MA, June 7-8, 2005.
6. C. D. Austin and R. L. Moses, “Interferometric Synthetic Aperture Radar Detection and Estimation Based 3D Image Reconstruction,” in Algorithms for Synthetic Aperture Radar Imagery XIII. Orlando, FL: SPIE Defense and Security Symposium, April 17-21, 2006.
7. M. Çetin and W. C. Karl, “Feature-enhanced synthetic aperture radar image formation based on nonquadratic regularization”, IEEE Trans. on Image Processing, Vol. 10, No. 4, pp. 623-631, April 2001.
8. T. J. Kragh and A. A. Kharbouch, “Monotonic iterative algorithms for SAR image restoration”, 2006 IEEE International Conference on Image Processing, pp. 645-648, Oct. 8-11, 2006.
9. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems”, IEEE Journal of Selected Topics in Signal Processing, Vol. 1, No. 4, pp. 586-597, Dec. 2007.
10. P. Stoica and R. L. Moses, Spectral Analysis of Signals. New Jersey: Pearson Prentice Hall, 2005.

Loitering High Altitude Balloon

Student Researcher: Caleb J. Barnes

Advisor: Dr. Mitch Wolff

Wright State University Department of Mechanical Engineering

Abstract High altitude balloons (HAB) have traditionally been used for communications, surveillance, data collection, and experimentation under different atmospheric conditions. However, the large surface area of a HAB makes it strongly affected by high wind velocities, so HABs are generally unable to remain stationary for extended periods of time, limiting the balloon's capabilities. To this end, it is desirable to investigate the feasibility of a loitering HAB capable of maintaining a constant altitude and geographic location for an extended period of time. A small-scale loitering HAB was modeled using MATLAB/Simulink. Tests were run under different altitudes, drag coefficients, and propeller diameters in order to observe the engine power required in each case. It was found that at an altitude of 80,000 feet over Wilmington, Ohio, atmospheric conditions permit the balloon to remain stationary with very small power requirements (less than 20 W).

Project Objectives The objective of this study was to investigate the feasibility of designing a loitering HAB in order to accommodate longer flight times. The line of sight at a 20 km altitude is approximately a 310 mile radius. This very large range can cover a vast expanse of land covering more than the entire state of Ohio and can be very useful for various communications, surveillance, and experimental applications. Developing a HAB capable of stationary flight would greatly increase the amount of projects that may be completed with Wright State University’s current HAB program.

Methodology Used Upper air data was obtained from the University of Wyoming website for altitudes between 10,000 feet and 100,000 feet over Wilmington, Ohio. Pressure, temperature, and wind velocity data were recorded. The wind velocity data gathered from this source was averaged for the entire month of March at each altitude. Air density was calculated using the ideal gas law, and viscosity was calculated using Sutherland's formula.
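A short sketch of these two property calculations; the Sutherland constants for air are standard textbook values, assumed here rather than quoted from the paper.

```python
R_AIR = 287.05   # specific gas constant of air, J/(kg*K)

def air_density(p_pa, t_k):
    """Ideal gas law: rho = P / (R T), in kg/m^3."""
    return p_pa / (R_AIR * t_k)

def air_viscosity(t_k):
    """Sutherland's formula for the dynamic viscosity of air, in Pa*s."""
    mu0, t0, s = 1.716e-5, 273.15, 110.4   # reference viscosity/temperature, Sutherland constant
    return mu0 * (t_k / t0) ** 1.5 * (t0 + s) / (t_k + s)
```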

In order to simulate the flight dynamics of the HAB, a model was constructed in MATLAB/Simulink using the tabulated atmospheric data described previously as inputs. The model utilizes two blocks provided by a former graduate student from Wright State University. One of the provided Simulink blocks calculates drag on the HAB, and the other calculates the power required by the engine based upon propeller size and the drag force on the balloon. In addition to these two blocks, more were developed to calculate the volume, mass, and diameter of the balloon at the loitering altitude, as well as the Reynolds number of the air flowing over the surface of the balloon. Volume, mass, and balloon diameter were calculated using the following equations:

[Equations for balloon volume (m3), lift-gas mass (kg), and balloon diameter (m); rendered as images in the original.]

where L is the load weight, T is temperature, P is pressure, ρ is density, and g is 9.81 m/s². The subscript (2) denotes properties of the lift gas at the loitering altitude and the subscript (1) denotes properties of the lift gas at ground level. L is assumed to be 12 lb (53.38 N), which is the maximum load for current HAB projects at Wright State University. The mass of the balloon material is assumed to be negligible, and the lift gas is assumed to be helium and to behave as an ideal gas.
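Since the equations themselves did not survive in this copy, the sketch below shows the standard buoyancy relations implied by the definitions above (neutral buoyancy at altitude, ideal-gas helium, spherical balloon); it is our reconstruction, not the authors' Simulink blocks.

```python
import math

G = 9.81        # gravitational acceleration, m/s^2
R_AIR = 287.05  # specific gas constant of air, J/(kg*K)
R_HE = 2077.1   # specific gas constant of helium, J/(kg*K)

def balloon_size(load_n, p_pa, t_k):
    """Volume (m^3), lift-gas mass (kg), and diameter (m) at the loiter altitude."""
    rho_air = p_pa / (R_AIR * t_k)             # ambient air density
    rho_he = p_pa / (R_HE * t_k)               # helium density at the same P, T
    vol = load_n / (G * (rho_air - rho_he))    # buoyant lift balances the load
    mass = rho_he * vol                        # mass of helium required
    dia = (6.0 * vol / math.pi) ** (1.0 / 3.0) # equivalent spherical diameter
    return vol, mass, dia

# Example: 12 lb (53.38 N) load at roughly 80,000 ft conditions.
print(balloon_size(53.38, 2800.0, 221.0))
```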

Combined in a Simulink model, these blocks allow atmospheric data, drag coefficient, and propeller diameter to be input from a spreadsheet; values for balloon volume, mass of lift gas required, resulting drag, and the power required to keep the HAB stationary are output into a separate spreadsheet for analysis. Twenty combinations of drag coefficients and propeller diameters were tested for each altitude, and the results were plotted as power required vs. altitude, as can be seen in the provided figures. In each plot, the performance of different values of drag coefficient or propeller diameter is compared.

Initially, data for several altitudes between 10,000 ft and 60,000 ft were input into the model and tested for different combinations of propeller diameters and coefficients of drag, and the power required was observed. Due to the higher air densities and wind velocities at these low altitudes, the power required to keep the HAB stationary was very high, but it decreased approaching 60,000 ft. Altitudes up to 100,000 ft were then tested with the same parameters, and the power required was observed again. This time the values were much more reasonable, because lower wind velocities and air densities cause less drag on the balloon surface.

Results Obtained The figures below show the power required to keep the balloon stationary plotted vs. altitude. Figure 1 shows a very large peak in power required at 40,000 ft due to high wind velocities and a drastic decrease at higher altitudes, which became the area of interest in this study. Figure 2 shows the results for altitudes between 60,000 ft and 100,000 ft, where a very small power requirement of around 10 W was found at 80,000 ft. As can be seen in both figures, propeller diameter only has a significant effect on power required when the power required is high in general.

The effects of changing the coefficient of drag while keeping the propeller diameter constant were also observed, but are not shown in the included figures. Drag coefficients between 0.04 and 0.2 were tested in order to see how changing the balloon’s drag coefficient, by geometry or by other means, affects power required. At 80,000 ft the effect is fairly insignificant, changing the power required by only a few watts. At other altitudes, where wind velocity and drag effects are higher, the effect of the drag coefficient value increases significantly.

Figure 1. Effects of propeller diameter on power requirements. Figure 2. Power requirements at higher altitudes.

References 1. Marks, Christopher R., “System Modeling of a High Altitude Airship Using Matlab and Simulink”, M.S. Thesis, Department of Mechanical and Materials Engineering, Wright State University, 2006. 2. “Atmospheric Soundings”, University of Wyoming, Department of Atmospheric Science, http://weather.uwyo.edu/upperair/sounding.html, accessed March 2008.

Acknowledgments Thanks to Dr. Mitch Wolff for his guidance on this project and to Christopher Marks for providing Simulink blocks from his work.

New and Improved Lunar Rovers

Student Researcher: Christopher J. Benze

Advisor: Dr. Jane A. Zaharias

Cleveland State University Middle Childhood Education, Science and Language Arts

Abstract Through the implementation of this lesson, I cover the State of Ohio Science Content Standards of Scientific Inquiry and Science and Technology. Throughout my experiences working as an intern in both urban and suburban school districts, I have observed that Scientific Inquiry and Science and Technology are commonly skipped over in favor of teaching more straightforward standards, such as Physical Science. With the help of a NASA educator’s guide and video clips from NASA’s websites, I was able to develop a lesson plan that effectively addresses these often-forgotten standards. In this lesson, my students viewed actual footage of the lunar rovers that were used during the Apollo 15 mission to the moon. After watching the video clip, we addressed some of the problems the astronauts experienced with the lunar rover. My students then used scientific inquiry and science and technology to build models of futuristic, improved lunar rovers out of common art supplies. Each group of students then presented its model and ideas for improvement to me and the rest of the class.

Lesson To begin the lesson, I explained the concept of a lunar rover to the class. Immediately after describing the purpose of a lunar rover, I used NASA’s website, along with an LCD projector, to show the students actual footage of a lunar rover in action. Since the footage was taken during the Apollo 15 mission, the image was slightly blurry, but it was extremely effective in “hooking” the students’ attention and increasing their desire to learn more.

After watching the video clip, I led a discussion about the problems with the lunar rover they had just observed. This was an easy task, because during the clip the astronauts can be heard in the background discussing the problems with the lunar rover. Together, the students and I identified three problems: the wheels, constructed of mesh, made the lunar rover bounce too much; there was very little room to store samples; and the lunar rover only had the capacity to carry twice its weight.

Once the three problems were identified, the students were grouped into teams of four. In these teams, the students had to build improved model lunar rovers out of Styrofoam cups, bowls, and other common art supplies that addressed the three problems. In order to complete this task, the students relied solely on Scientific Inquiry and Science and Technology. The students not only built models that solved the three problems with the original lunar rover, but they also had to explain to the rest of the class and me how their lunar rover would perform better in the three problem areas.

Objectives • Students will use critical thinking skills to observe and determine three problems with the lunar rovers used in the Apollo missions. • Students will work cooperatively in groups to create improved lunar rover models. • Students will demonstrate their scientific inquiry and science and technology skills by presenting the models to the class, explaining how their lunar rover would work better than the rovers used during the Apollo missions.

Alignment Grade Eight Science and Technology Standard Benchmarks 1-4 • Examine how science and technology have advanced through the contributions of many different people, cultures, and times in history • Examine how choices regarding the use of technology are influenced by constraints caused by various unavoidable factors. • Design and build a product or create a solution to a problem given more than two constraints. • Evaluate the overall effectiveness of a product design or solution.

Grade Eight Scientific Inquiry Standard Benchmarks 1-2 • Choose the appropriate tools or instruments and use relevant safety procedures to complete scientific investigations. • Describe the concepts of sample size and control and explain how these affect scientific investigations.

Student Engagement This activity uses a very hands-on approach to cover these often-overlooked standards. An effective inductive set for this lesson was viewing and discussing footage of the original lunar rovers used during the Apollo 15 mission. The students were also highly engaged while they discussed and built their new and improved lunar rover models. Finally, the students enjoyed demonstrating their inquiry and technology skills when they presented their models to the rest of the class.

Resources The resources I used for this lesson were primarily found on the NASA website for educators. Within 10 minutes of searching I found a very helpful NASA Educator’s Guide that included the lesson that I tweaked for this activity. The website also included the video clips of the lunar rover that was used during the Apollo 15 mission.

The materials the students used to build their models were inexpensive and easy to obtain. The day prior to teaching this lesson I bought Styrofoam cups and bowls at the Dollar Store; these served as the body for most of the model lunar rovers. To build wheels, storage compartments, antennas, etc., the students used construction paper, pipe cleaners, scissors, glue, etc.

Results This activity went over extremely well. My goal was to get students to enjoy using scientific inquiry and science and technology. It was obvious from the very beginning of the lesson that the majority of the students were engaged and interested in what they were learning. I was also extremely impressed with both the quality of the lunar rover models that were built in only twenty minutes and the ideas the students came up with to address the three problems.

Assessment The assessment for this lesson was authentic. I did not require the students to turn in any work other than the actual model that their group built. I did, however, conference with each group as they designed their lunar rover to observe the amount of scientific inquiry and technology that was being used. I was also able to authentically assess each group during the presentation of their model. No group was allowed to end their presentation until they explained how their lunar rover was better suited to answer the three problems. This made it very easy for me to authentically assess how well the objectives were met.

Conclusion This activity allowed students to use scientific inquiry and science and technology in a hands-on, engaging way. I will definitely use this lesson in my future classrooms, and I would highly recommend this activity to any teacher struggling to teach the scientific inquiry or science and technology standards. After authentically assessing each group of students, it was clear to me that my objectives were met and that actual learning took place.

The Metallographic Investigation of a Deposited Ti 6Al-4V After Friction Stirring

Student Researcher: Joseph F. Betten

Advisor: Mary Kinsella

Central State University Manufacturing Engineering Department

Many industries have a use for a process similar to welding that deposits material on a pre-existing substrate, and a more refined microstructure would help increase the durability of that material. The Laser Additive Manufacturing Process (LAMP) deposits such material, and a solid-state friction stirring process was applied to help refine its microstructure. After producing a sample of Ti 6Al-4V on a substrate using LAMP, the specimen was agitated with a tool similar to that of a friction stir welding process. The result is hypothesized to have a more compact, higher-quality microstructure in the stirred area. To determine the improvement due to LAMP combined with friction stirring, an analysis of micrographs of the affected areas was performed using acid etching, Scanning Electron Microscopy, and Orientation Imaging Microscopy.

Because the investigation is in its initial stages, any information collected will help guide further analysis in a more accurate direction. Specifically, the deposited titanium alloy needs to develop a finer microstructure with fewer voids after the stirring has been completed. This would result in a higher-strength material while still retaining the benefits of the Laser Additive Manufacturing Process. LAMP is a process by which metal may be added to a pre-existing bed or substrate; this specific LAM process is under development at the University of Missouri Rolla. A powdered form of the needed material is injected into a laser beam and deposited over the sample layer by layer. Each layer is heated past its melting point and fuses to the still-molten layer below it.

The sample was created out of Ti 6Al-4V deposited using the Laser Additive Manufacturing Process at the University of Missouri Rolla. The sample was deposited in four test strips, each approximately 0.5” x 0.25” x 1.5” in dimension. They were then stirred with a tool similar to that used in a friction stir process. The process was performed at a depth of 0.15”; this enables the head of the bit to pass through the material while creating friction between the shoulders of the shaft and the sample (Figure 1). Different rotational speeds and traverse speeds were applied between samples in order to determine the best settings. Friction stirring is a solid-state process, meaning that the material does not melt. This is important in that it keeps the phase chemistry of the material the same. Although the material is still being cold worked, the friction created raises its temperature, allowing it to be stirred more easily.

Using the material developed in the previous steps, smaller specimens were cut, mounted in Polyfast, and polished with 1 µm diamond solution. After this preparation they were etched with Kroll’s etchant, and images were acquired on an optical microscope. The samples were then analyzed with backscatter imaging on a Scanning Electron Microscope. After the samples were electro-polished, an Electron Backscatter Diffraction (EBSD) analysis was performed in order to determine the exact crystal orientations within the sample. With the above results, a full post-processing evaluation was developed.

Although many micrographs and image analyses were obtained, the most obvious evidence of a change in grain structure came from the optical microscope and etching; the SEM backscatter imaging had trouble obtaining a detailed image of the entire stirring area. The optical images showed a distinct change in microstructure around 1.25 mm into the sample (Figure 2). This was found in all four specimens, with slight variations in depth that can be attributed to experimental error. Below the line in the sample created by the grain size change, there were many locations of material voids and inclusions. Above that line there were visibly fewer, though still obvious, voids and inclusions. The EBSD scan gave more concrete evidence that the microstructure had been reduced in size (Figure 3). In the figure shown, it can be seen that from 0 mm to 0.75 mm into the sample the grains were too small for the scan to read, while from 0.75 mm down the large grains were easily seen.

The exact ideal grain size reduction was not determined, but it is known that the stirring technique results in a far smaller grain size and does appear to reduce the size of voids in the sample. The next step in the investigation would be to increase the size of the bit used for the stirring. The bit currently in use has a nose depth of around 0.15” and gave a grain refinement depth of around 0.03”. A suggested adjustment is to make a nose with a depth between 0.25” and 0.5” on a sample width of three inches. In order to acquire a sample that large, the processing cost would be much higher.

Figure 1.  Figure 3 (EBSD crystal-direction key: 0001, 1010, 2110).

Figure 2

This project could not have been completed without the help of: Dr. Mary E. Kinsella, AFRL/RXLMP; Dr. Jeff Calcaterra, AFRL/RXLMP; Professor John Sassen, Central State University; and Dr. Joe Newkirk, University of Missouri Rolla.

Strain Sensing with a Piezoelectric Biopolymer

Student Researcher: Richard F. Bozak, Jr.

Advisor: Dr. Hanz Richter

Cleveland State University Mechanical Engineering

Abstract In order to produce a piezoelectric biopolymer that can be used in biomedical applications, more research must be done to investigate the applicability of this technology. Piezoelectric material is currently used in many sensors because it has the ability to convert a mechanical stress into an electrical charge. This charge can be measured to obtain the amount of strain in the material. Biomedical polymers exhibit the same piezoelectric characteristics and are also compatible with tissue. There are many processes involved in preparing a sample of material to be tested. Since the material is brittle, a different concentration of “whiskers” was added for strength to each of four samples. Electrodes are attached to both sides of the film sample so that a change in charge can be measured. A thin silver coating was applied to the samples for this purpose, and the process was evaluated for its effectiveness. A variety of tests were done on the samples to ensure that the coating created an adequate electrode.

Project Objectives In order to understand the usefulness and effectiveness of the coating process, many properties of the samples were measured. The electrode layer on both sides of the film causes the sample to become a capacitor. The samples would not be useful if the electrodes were shorted together, because a charge measurement could not be obtained. Therefore, each sample must be made and tested so that there is no connection between the electrodes. Also, measuring the capacitance of the samples is a useful way to ensure that the coating process was effective.

Methodology Used A variety of methods can be used to determine the capacitance of the samples; two were used here, each based on the response of the capacitor to a different input voltage. When the input voltage is a sine wave, the capacitor causes a phase shift between the voltage and current. The angle of this phase shift, along with the peak values of voltage and current, can be used to calculate the amount of reactive power supplied by the capacitor, and the reactive power can then be used to calculate the capacitance. Alternatively, the response of a capacitor to a step input voltage is an exponential waveform that shows the charging of the capacitor. An exponential fit was applied to this response, and the time constant of the fit was used to calculate the capacitance. The equation fitted to the data is shown below. The exponential method is more accurate because fewer variables are used in determining the capacitance. The measurements were obtained using a simple series RC circuit with a resistance of 1 kΩ.

V_C(t) = V(1 − e^(−t/RC))
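As an illustration of the fitting step, the Python sketch below recovers C from a sampled step response using SciPy, with C = τ/R once the time constant τ is fitted. The data arrays here are synthetic placeholders, since the measured waveforms are only shown in Figure 1.

    import numpy as np
    from scipy.optimize import curve_fit

    R = 1000.0                             # series resistance, ohms (1 kOhm)

    def step_response(t, v_step, tau):
        """Charging curve V_C(t) = V*(1 - exp(-t/tau)), with tau = R*C."""
        return v_step * (1.0 - np.exp(-t / tau))

    # Placeholder data standing in for a measured waveform (~2.4 nF sample).
    t = np.linspace(0.0, 2.0e-5, 200)
    v_c = step_response(t, 5.0, R * 2.4e-9)

    (v_fit, tau_fit), _ = curve_fit(step_response, t, v_c, p0=[5.0, 1.0e-6])
    capacitance = tau_fit / R              # C = tau / R
    print(f"C = {capacitance * 1e9:.2f} nF")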

Results Obtained The step response of the capacitor was used to obtain a capacitance value. The input and response of each of the four samples are shown below (Figure 1). The capacitance values calculated from these waveforms ranged from 2.24 nF to 2.63 nF, which is consistent with similar materials of the same size. While testing the samples, it was found that the material has a significant amount of dielectric absorption. Dielectric absorption, also known as soakage, is evident when a voltage returns to the capacitor after it has been discharged.

Figures

Figure 1. Step Response for the Four Piezoelectric Biopolymer Samples.

Ignition Propensity of Hydrogen in the Presence of Metal Surfaces

Student Researcher: Kyle B. Brady

Advisor: Dr. Chih-Jen Sung

Case Western Reserve University Department of Mechanical and Aerospace Engineering

Abstract The technological and economic barriers that have prevented large-scale proliferation of hydrogen-fueled vehicles in the past are quickly being surmounted. While the benefits of such technology to consumers and society as a whole are easy to appreciate, considering the environmental and health concerns surrounding the usage of fossil fuels, handling standards and safety regulations for hydrogen must be developed to avoid the potential for unwanted fires and explosions. Insofar as hydrogen systems are concerned, one apparent risk relates to accidental fires resulting from leakage from on-board hydrogen storage. Because of hydrogen’s greater propensity for leakage as compared to other common fuels, as well as its relatively wide (~4% to 75% by volume) gas-phase flammability range, accidental ignition of leaked hydrogen fuel is a valid and important safety concern. The proposed research seeks to identify ignition hazards associated with the accidental release of hydrogen in the presence of common metals and metal oxides likely to be present in hydrogen-powered vehicles as well as their immediate environments. Identifying these hazards at an early stage could lead to the elimination or more restricted use of the incompatible substance in the design, avoiding dangerous conditions.

It is anticipated that this research will, in the longer term, determine a range of effectiveness for each catalyst tested, between the extremes of platinum ignition and controlled (ceramic binder) ignition. This will allow for the determination of substances that should be taken into special consideration in the design of future hydrogen vehicles. In addition, we will be able to determine the conditions under which each potential catalyst can cause ignition, and under what circumstances such a catalytic ignition might propagate into the gas-phase regime. To this end, an appropriate experimental setup has been completed, with initial substance testing to identify flaws in the experimental procedure as well as general ignition behavior against a known catalytic surface.

Project Objectives Extensive study has been conducted surrounding both catalytic hydrogen ignition and the fire safety hazards associated with hydrogen’s usage as a fuel. However, there has been a large gap in the literature between catalytic studies completed for the purpose of developing reaction mechanisms for hydrogen surface chemistry and fire safety studies concerning hydrogen fuel. Ignition experiments have included those completed by Zheng et al., which determined ignition conditions for premixed hydrogen in counterflow against heated nitrogen; that experiment put ignition temperatures in the region of 1000-1100 K, depending on equivalence ratio (Zheng, et al. 2002). Catalytic studies such as those conducted by Deutschmann have primarily focused on surface chemistry, using a heated platinum wire as the ignition method (Rinnemo, et al. 1997). Additional fire safety studies have been conducted focusing on ignition hazards; accumulation of hydrogen in various types of enclosures has been a major research push, including the development of an Event Tree Analysis (Rigas and Sklavounos 2005) and leakage modeling into enclosures (Swain, et al. 2003). Little study has been conducted into the practical problem of hydrogen ignition against catalytic and potentially catalytic surfaces, whose ignition characteristics may differ significantly from wire or ribbon cases. As such, the current study aims to develop experimental data exploring the catalytic ignition of hydrogen against known catalytic surfaces. Additionally, it will extend the data set to common materials found in envisioned hydrogen-powered vehicles that may act as low-grade catalysts.

Experimental Methodology In order to determine ignition characteristics of hydrogen, a premixed stagnation flow experiment has been constructed. The stagnation surface (Figure 1) consists of a heated metal surface enclosed in a Macor® sheet. For the initial testing done to date, the metal heating plug (copper and aluminum tested thus far) was machined with a central 1/16-inch hole extending to 1-2 mm above the bottom surface. This allows the insertion of a ceramic-insulated K-type thermocouple from Omega® to facilitate temperature measurements. The upper portion of the heating plug (Figure 2) is enclosed by an Omega® 300 W nozzle heater, which provides for heating up to a temperature of approximately 1000 K. The design of the overall heating system allows for a constant temperature profile across the radius of the heating plug’s bottom surface, with a dramatic drop in temperature at the ceramic/metal interface to discourage surface reactions beyond the metal surface. This temperature trend is shown in Figure 5. The burner assembly (Figure 3) was a preexisting unit which was adapted for use with a hydrogen fuel flow. It provides a laminar, plug-type velocity profile at the burner exit, which allows for the formation of a flat flame profile (Figure 4). A flow system to provide premixed hydrogen and air (non-synthetic) was also developed.

Results Thus far, qualitative studies have been conducted to determine the range of operation of the experimental apparatus, as well as to make initial estimates of relative reactivity. For temperatures up to 1000 K, copper was not observed to result in a gas-phase flame. However, a platinum foil mounted on the copper block was observed to ignite at temperatures greater than ~700 K. This temperature is much higher than the validated temperatures for a platinum wire, suggesting that configuration may greatly affect the ignition conditions. As a result, a computational approach is being taken using the FLUENT package to determine the temperature field close to the surface for varying configurations. This will allow for a greater understanding of the temperature conditions under which an ignition would need to occur, as well as give insight into necessary changes to the experiment.

Acknowledgments The author would like to acknowledge the financial support of the Building and Fire Research Laboratory. The author would also like to acknowledge the help and support of Dr. Chih-Jen Sung and Dr. Kamal Kumar.

References 1. Rigas, F., and S. Sklavounos. "Evaluation of hazards associated with hydrogen storage facilities." International Journal of Hydrogen Energy, 2005: 1501-1510. 2. Rinnemo, M., O. Deutschmann, F. Behrendt, and B. Kasemo. "Experimental and numerical investigation of the catalytic ignition of mixtures of hydrogen and oxygen on platinum." Combustion and Flame, 1997: 312-326. 3. Swain, M., P. Filoso, E. Grilliot, and M. Swain. "Hydrogen leakage into simple geometric enclosures." International Journal of Hydrogen Energy, 2003: 229-248. 4. Zheng, X.L., J.D. Blouch, D.L. Zhu, T.G. Kreutz, and C.K. Law. "Ignition of premixed hydrogen/air by heated counterflow." Proceedings of the Combustion Institute. 2002. 1637-1643. 5. Astbury, G.R., and S.J. Hawksworth. "Spontaneous ignition of hydrogen leaks: A review of postulated mechanisms." International Journal of Hydrogen Energy, 2007: 2178-2185.

Appendix

Figure 1. Underside view of stagnation assembly. Figure 2. Stagnation assembly with heating element shown. Figure 3. Burner assembly, shown with stagnation plate and heating element removed.

Figure 4. Photo of a stagnation methane flame. This flame shows that the exit flow velocity approximates plug-type flow.

Figure 5. Temperature profile as measured by K-type surface probe.

Authenticating Standards of Grain Size Measurement

Student Researcher: Micheal Eric Burba

Advisors: Professor John Sassen and Dr. Mary E. Kinsella

Central State University Manufacturing Engineering Department

ASTM standards are necessary to ensure that materials and products have a basic level of quality and meet minimum technical specifications for use in commercial applications. The material must have a specific microstructure so that the material properties satisfy the design requirements. Material properties are known to vary based on testing procedures. Therefore, ASTM standards are useful in establishing common test procedures to ensure uniform results across laboratories.

A typical Ti-6Al-4V β-phase microstructure was investigated to determine whether any differences in sampling size and measuring technique exist between industry practices and ASTM E112 standards for grain size measurements. According to ASTM E112, the grain size is measured using the line-intercept procedure and the planimetric procedure over the entire sample. Two other approaches were also used: the line-intercept procedure applied to 20 random micrographs at 50x magnification, and counting the number of grains larger than a tenth of an inch and two tenths of an inch. The data were analyzed statistically using the methods found in ASTM E112, and the results were documented.

This investigation of the measurement procedures began with imaging the specimen using an optical camera oriented at various angles, using different types of illumination as a means for distinguishing the different orientations of the beta grains. The digital micrographs were then analyzed using Adobe Photoshop and the FOVEA Pro plug-ins made by Reindeer Graphics. Using the FOVEA Pro plug-ins, lines with many different random orientations were automatically generated and overlaid on the micrographs. The length of these lines or any segments thereof could be measured based on the known calibration.

The first step was the test of the ASTM E112 line-intercept procedure for the entire sample. Five randomly placed lines were overlaid on each micrograph. Then, using Photoshop, the grain boundaries were marked manually where they intercepted the random lines, and the number of intersections was counted. Grains are the basic building blocks of a metal, just as there are grains found in wood; a grain is a solid region of matter that has the same crystal structure in a specific orientation, and the different grain orientations allow the differentiation of grains and their boundaries. This process led to the testing of the other procedure outlined in ASTM E112, the planimetric method. The planimetric method consists of calculating the area of the entire sample and dividing by the total number of grains in the sample. The grains were counted by placing a layer over the original micrograph, marking every individual grain manually, and keeping a tally of the total grain count. Once the total number of grains is known, the grains are assumed to be perfectly spherical, the diameter is calculated, and the results can then be added to the data collected. These two tests completed the data record needed according to ASTM E112.
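The report does not list its analysis scripts, but the E112 relations behind both procedures reduce to a few lines. The sketch below computes the ASTM grain size number G from (a) line-intercept counts and (b) a planimetric grain count, using the standard E112 expressions with lengths in millimeters; it is an illustration, not the project's actual workflow.

    import math

    def g_from_intercepts(total_line_length_mm, intercept_count):
        """Heyn line-intercept: mean intercept length -> ASTM G.
        total_line_length_mm must be at specimen scale, i.e., already
        corrected for magnification."""
        mean_intercept = total_line_length_mm / intercept_count   # mm
        return -6.643856 * math.log10(mean_intercept) - 3.288

    def g_from_planimetric(grain_count, sample_area_mm2):
        """Planimetric: grains per square millimeter -> ASTM G."""
        n_a = grain_count / sample_area_mm2
        return 3.321928 * math.log10(n_a) - 2.954

Agreement between the two G values is one quick internal consistency check on a measurement session.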

Following the collection of the ASTM E112 grain size measurement data, the industrial practices were then tested. The first test of the industrial methods was the analysis of 20 random micrographs at 50x magnification. The images were of the exact same specimen; the only change was the size of the field of view. After these smaller images were taken, five random lines were placed on each image using FOVEA Pro, and the intersections were counted. This process was completed for 20 images to provide a respectable range of data, given the randomness of line placement and the number of fields of view chosen. The next step of the process was to perform the exact same test for 20 fields of view at various known magnifications ranging from 62.5x to 5x, to find how the measured grain size would change, if at all, with the size of the image.

The final phase of the investigation was to count the grains with diameters larger than a tenth and two tenths of an inch. This process consisted of using circular templates with diameters of one tenth and two tenths of an inch and placing them over the grains, using layers in Photoshop, to see whether each beta grain was larger than the template. This data collection was performed over the entire surface of the Ti-6Al-4V β-phase microstructure, and the results were recorded.

Through these thorough data collection processes and the numerical analysis procedures found in ASTM E112, the results of all the testing methods were compared and contrasted. From these investigations, the data illustrate that the dimensions of the field of view determine the ability to find large grains, of at least 0.2 in, in the sample. It was also concluded that the ideal magnification for finding 0.2” grains is 5x through the eyepiece, which was found to be equivalent to a 9x digital image. The most significant finding was that any field of view observed with less than a 20x objective lens will result in measurement errors of less than 10%. A relative accuracy of less than 10% is acceptable according to ASTM E112.

The results show that there is a significant difference between using the ASTM E112 standards and the standards employed by industry. In the worst case, this discrepancy could result in an inferior product being implemented in a critical structural component, which could endanger the safety of …. These findings were summarized and presented to the corporation in question. Since then, no other testing has been completed on the project, but the results provide sufficient evidence that further investigation is necessary to appropriately qualify the industrial results so that they comply with ASTM E112. There is practical interest in using the current industrial grain size procedures, since they can be completed much more quickly than the ASTM E112 standard. However, it is important to maintain the integrity of the material and to ensure that its quality is sufficient to meet the intended use.

Acknowledgments Professor John Sassen, CSU; Dr. Mary E. Kinsella, AFRL/RXLMP; Dr. Jeff Calcaterra, AFRL/RXLMP; Jon Miller, AFRL/RXLMP; Don Weaver, AFRL/RXLMP.

Reference 1. ASTM Standard E 112, 1996 (2004), “Standard Test Methods for Determining Average Grain Size,” ASTM International, West Conshohocken, PA, www.astm.org.

Design Analysis of the Active Bandpass Filter

Student Researcher: Kenya P. Carrington

Advisor: Dr. Edward Asikele

Wilberforce University Engineering Department

Abstract Active band-pass filters are used quite often in voice and data communications, primarily in wireless transmitters and receivers. The active band-pass filter, like all band-pass filters, has the ability to limit the bandwidth of the output signal to the minimum necessary to transmit data at the designed speed and form. On the receiver side, such filters allow signals within a range of frequencies to be heard or decoded. In order to function, active band-pass filters depend on a power supply and on active devices such as transistors or integrated circuits. This is one of the underlying factors contributing to the disadvantages of active band-pass filters. Other disadvantages include: the limited bandwidth of active devices, which limits the highest attainable pole frequency and therefore applications above 100 kHz (passive RLC filters can be used up to 500 MHz); limited achievable quality factor (Q); and increased sensitivity to variations in circuit parameters caused by environmental changes, compared to passive filters. If one could eliminate any of these factors, or decrease the degree to which these problems occur, then one could probably increase the filter’s performance substantially. For instance, having the optimum bandwidth for the speed and mode of communication in use increases the number of signals that can be transferred within a system while decreasing the interference among signals. While active band-pass filters have a few disadvantages, their advantages far outweigh them. Advantages of active RC filters include: reduced size and weight, and therefore reduced parasitics; increased reliability and improved performance; simpler design than passive filters, with the ability to realize a wider range of functions as well as provide large voltage gain; and lower cost, since an IC costs less than its passive counterpart. Active band-pass filters are designed to overcome the disadvantages and constraints faced in using inductors. This one characteristic gives them an advantage over passive filters, whose inductors are likely to become prohibitive at certain sizes; it also enables them to be used widely at audio frequencies. Techniques to improve the quality factor, operating bandwidth, and sensitivity to variations in circuit parameters should be considered on a wider scope in order to produce a more advantageous active band-pass filter.

My project objectives were the following: to investigate the design of the active band-pass filter and to expose the disadvantages in the filter’s design, in order to suggest ways to create a more beneficial filter that can be used in various applications.

To accomplish these objectives, several methods were used. These methods have been used by designers in the past to come up with an accurate active band-pass filter design; they are, respectively: Design Approximation, Design Realization, and Design Implementation. Before I could go through any of these methods, I first had to define what a filter is and the purpose an active band-pass filter is meant to achieve. To see the results of my filter design, I input values into a computer program, which would then generate or mimic the filter I was trying to design.

As a result of this information I was able to generate an appropriate prototype for my actual filter design. Actual pictures could not be taken of the prototype and the actual filter, so I have taken the liberty of providing an image retrieved from one of my internet sources that I thought could clearly explain how the active band-pass filter is designed and how it works. Please see the next page for the charts and figures used in my project.

The significance of these findings is that they will allow engineers to come up with a more appropriate, beneficial design for an active band-pass filter that can be used in data communications, cellular phones, transmitters, voice or signal sampling, television and radio signals, etc. I pray this information will add much value to society in the design of other technology.

Figures/Charts

Band-pass Filter Response Curve

• Passes electrical signals in a particular frequency band between two points.

Figure 1. Bandpass Filter Behavioral Response Curve.

Design Approximation

• Generate a transfer function H(s) that satisfies the desired specifications of the active band-pass filter, where wo is the center frequency, B is the bandwidth of the filter (the length of the frequency band), and Ho is the maximum amplitude. The standard second-order band-pass form with these parameters is H(s) = Ho·B·s / (s^2 + B·s + wo^2).

Design Realization

• The circuit will attenuate low frequencies (w << 1/R2C2) and high frequencies (w >> 1/R1C1), but will pass intermediate frequencies with a gain of -R1/R2.
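A short sketch of how the approximation step can be checked numerically: the second-order form assumed above is evaluated with SciPy to confirm that gain falls off below and above the pass band. The specification values here (wo, B, Ho) are arbitrary examples, not the project's design values.

    import numpy as np
    from scipy import signal

    Ho = 2.0                        # maximum (mid-band) amplitude, example value
    wo = 2 * np.pi * 1.0e3          # center frequency, rad/s (1 kHz example)
    B = 2 * np.pi * 100.0           # bandwidth, rad/s (100 Hz example)

    # H(s) = Ho*B*s / (s^2 + B*s + wo^2)
    bpf = signal.TransferFunction([Ho * B, 0.0], [1.0, B, wo ** 2])
    w, mag_db, phase = signal.bode(bpf, np.logspace(2, 6, 500))
    print(f"peak gain = {10 ** (mag_db.max() / 20):.2f} (expected Ho = {Ho})")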

Design Implementation

• Design Tests I and II

• At low frequency (w → 0), the capacitor acts as an open circuit, and thus Vout = 0.

• At high frequency (w → ∞), the capacitor acts as a wire (short circuit), and thus Vout = 0.

Figures 2-4 show steps in the design process.

I would now like to take this time out to thank the Ohio Space Grant Consortium, Wilberforce University, Ms. Laura Stacko, my professor and mentor, Mr. Khalil Habash, my advisor, Dr. Edward Asikele, and everyone else for all of your contributions, valuable input in regards to my project, and dedication which helped make my project successful. Thank you.

References Used: 1. http://www.ieeexplore.com; 2. http://www.allaboutcircuits.com/vol_2/chpt_8/2.html; 3. http://www.electronics-tutorials.com/filters/active-bandpass-filters.htm; 4. Williams, Arthur. Electronic Filter Design Handbook. McGraw-Hill, Inc., 1981; 5. Chen, Wai Kai. The Circuits and Filters Handbook. CRC Press in cooperation with IEEE Press, 1995.

The Rose Run Exploration Trend of Eastern Ohio

Student Researcher: Michael W. Casto

Advisor: Dr. Benjamin Thomas

Marietta College Department of Petroleum Engineering

Introduction (Geologic Setting) The Rose Run exploration trend is also referred to as the Knox group or simply the Rose Run. During the late Cambrian and early Ordovician, the sandstone and carbonate rocks of the Rose Run were deposited on the southwestern coast of Laurentia, between 0° and 30° south latitude. The climate in this region would have been extremely dry, with less than 10 inches of annual rainfall; analogous present-day climates include those of Baja California and Western Australia (Riley, 1993). Prior to deposition of the Rose Run, the area was given a regional dip to the present-day east-southeast by the Taconic orogeny (Roth, 1992). The Rose Run group, consisting of mixed carbonate-siliciclastic sequences, was then deposited on a broad continental shelf of low relief (Riley, 1993). The cyclical deposition of these layers was the result of periodic changes in sea level. Following a regression of the seas which left the southwest of Laurentia exposed, the dipping beds of the Rose Run were truncated during an extensive period of erosion. The Knox unconformity defines this paleosurface, on which hills and valleys developed. Figure 1 shows the location of the Rose Run outcrop beneath the Knox unconformity in Ohio. Deposition of the Glenwood shale occurred after another transgression of the seas submerged the southwest edge of Laurentia. The Glenwood shale overlies the high-relief, dipping beds of the Rose Run and acts as a seal for hydrocarbon traps. The ensuing Appalachian orogeny caused the development of a regional northeasterly strike and a gentle dip that increases toward the Appalachian basin. Tectonic forces throughout the Paleozoic era caused regional and local faulting, further aiding the development of hydrocarbon traps (Roth, 1992).

Figure 1. Roth, 1992.

Stratigraphy Producing units in the Rose Run are subdivided into the following formations, in descending stratigraphic order: the Beekmantown dolomite, the Rose Run sandstone, the Gatesburg formation, the Theresa formation, and the Trempealeau dolomite (Roen, 1996). However, in eastern Ohio, interest is typically limited to the Beekmantown dolomite, the Rose Run sandstone, and the Trempealeau dolomite. Overlying the Beekmantown dolomite is the Glenwood shale.

Figure 2. Roth, 1992.

The sandstones and sandy dolostones underlying the Knox unconformity form a belt that extends approximately 400 miles from Northeastern Kentucky to New York (Riley, 1993). Thickness of the Rose Run ranges from 0 feet in extreme North-Central Ohio to more than 7,000 feet in Central Maryland. Basement faulting is believed to account for the thinning of the Rose Run on the Waverly Arch, a broad area of relatively thin Rose Run in Central Ohio and Eastern Kentucky. Depth of the Rose Run sandstone ranges from 2,500 feet in South-Central Ohio to 7,000 feet in North-Central Ohio (Roen, 1996).

The heterogeneity of the Rose Run can be characterized by the number of facies changes that occur over its hundreds of miles of linear extent. For example, the Rose Run sandstone changes from quartz-dominated to carbonate-dominated on the west flank of the Waverly Arch. The same change to a carbonate-dominated facies occurs on the east flank of the Waverly Arch and in Western Pennsylvania. From the Waverly Arch in Ohio, quartz-dominated facies of the Rose Run extend southwestward into Kentucky and eastward into Pennsylvania and West Virginia (Riley, 1993).

Past Research A significant body of knowledge exists concerning the Rose Run exploration trend and its geologic and exploration histories. Among the most comprehensive guides are Brian Roth’s book, Seismic Modeling of Ordovician Rose Run Sandstone and Beekmantown Dolomite Oil and Gas Traps in East-Central Ohio, as well as Ronald Riley’s Measuring and Predicting Reservoir Heterogeneity in Complex Deposystems: The Late Cambrian Rose Run Sandstone of Eastern Ohio and Western Pennsylvania. The Atlas of Major Appalachian Gas Plays, edited by John Roen, is also a valuable resource for information regarding the Rose Run.

Petroleum Geology (Trap) Erosional remnants are the primary type of hydrocarbon trap targeted in the Rose Run. They are found in all three of the Rose Run’s primary formations, but are most productive in the Rose Run sandstone and Beekmantown dolomite. A diagram of a typical erosional remnant is shown in Figure 3. These traps are essentially buried hills – structural highs on former outcrops of the Rose Run’s dipping beds that were subsequently buried by the Glenwood shale. The long period of erosion prior to shale deposition created hills and valleys on the paleosurface that were preserved as erosional remnants.

Figure 3. Roth, 1992.

Because the geometry of erosional remnants is due to changes in lithology, they are considered stratigraphic traps. The changes in lithology that define stratigraphic traps can come about during deposition or result from post-depositional processes. In order to distinguish them from depositional stratigraphic traps, like pinchouts, erosional remnants are further classified as paleogeomorphic traps, since they were formed by both depositional and erosional processes (Roth, 1992).

While erosional remnants are not considered structural traps, the presence of faults is believed to be responsible for the pattern of erosion which led to their creation. Because fault zones are weak spots in the rock mass, they are more prone to weathering and eventually becoming spots of low relief. Since there is a correlation between faulting and the existence of hills and valleys on the paleosurface, structural geology can give insight regarding the distribution of erosional remnants (Roth, 1996).

Reservoir The Rose Run sandstone and Beekmantown dolomite are the primary producing formations of the Rose Run play. These sandstone and carbonate rocks were deposited in a stable, broad marine shelf environment during the late Cambrian and early Ordovician (Roth, 1992). The Rose Run exploration trend occupies a large belt of acreage, known as the Rose Run fairway, which runs from Ross County to Ashtabula County in Ohio. However, individual reservoirs are typically only 40 to 60 acres in areal extent (Roth, 1996). Along the Rose Run fairway, the depth of the Rose Run sandstone ranges from 2,500 feet in South-Central Ohio to 7,000 feet in North-Central Ohio. The dipping beds increase in depth toward the southeast and have an average depth of 6,400 feet in Ohio (Roen, 1996). These reservoirs are paleogeomorphic erosional remnants with typical thicknesses of 50 to 100 feet (Roth, 1996). Of the total thickness of an erosional remnant-type reservoir, the net pay ranges from 20 to 40 feet, with an average thickness of 40 feet (Roen, 1996).

The Rose Run sandstone consists mainly of quartz arenites, subarkoses, arkoses, and dolostones, cemented primarily by dolomite and clay minerals. These rocks contain sub-rounded to rounded grains, very fine to medium in size, which exhibit a moderate to poor degree of sorting (Riley, 1993). Porosity in the Rose Run sandstone is primarily intergranular and is enhanced by dissolution and fracture. The average porosity is 9 percent, with a range from 3 to 20 percent. Permeability ranges from 0.01 to 198 md and averages 5 md (Roen, 1996).

The Knox dolomite is the carbonate formation of primary concern in the Rose Run. These rocks exhibit cryptalgal laminae and hemispheroids, ooids, and other features. Vuggy porosity in the Knox dolomite ranges from 2 to 25 percent and averages 10 percent (Roen, 1996). Although the dissolution of dolomite aids porosity throughout the Rose Run, permeability varies across these reservoirs, since thin dolomitic or shaly beds can act as barriers between more porous zones.

Seal The Glenwood shale acts as the upper seal for erosional remnants in the Rose Run. As hydrocarbons migrate upwards through the dipping beds of the Rose Run, they are prevented from escaping by the impermeable shale that overlies the angular Knox unconformity. The Glenwood shale was deposited in horizontal layers atop the high-relief outcrops of the Rose Run. This kind of marine deposition caused thick sections of Glenwood shale to form above structural lows in the Rose Run and thin sections of Glenwood shale to lie above structural highs. This correlation makes it possible to use a thin section of Glenwood shale as a possible indication of an underlying erosional remnant (Roen, 1996).

Potential flow-paths for hydrocarbons in the Rose Run also come in the form of small offset faults and open joints and fractures. The sealing mechanism which acts against these smaller-scale features is dolomite, created by diagenesis (Roen, 1996).

Source In Ohio, the source rock for hydrocarbons in the Rose Run is the Ordovician Utica shale. Most of the hydrocarbons were generated during the Pennsylvanian and Permian time, when these source rocks were covered by thick accumulations of sedimentary rock. From the source rock buried deep in the Appalachian basin, the oil and gas migrated westward and downward through the stratigraphic section along fracture zones, unconformities, and fold belts until it became trapped in the erosional remnants of the Rose Run (Roen, 1996).

Petroleum Engineering (Drilling Practices) Because erosional remnants are typically less than 80 acres in areal extent, drilling multiple wells into one reservoir is difficult (Riley, 1993). For this reason, development and infill drilling of erosional remnant reservoirs is rare. The small area of these reservoirs also makes pattern drilling ineffective. Instead, companies rely on sound geophysics to efficiently find and drill these isolated reservoirs. Erosional remnants can be confidently identified using strategically placed 3-D seismic surveys. Although a small seismic survey can cost on the order of $80,000, the expense can be justified considering that the cost of a dry hole is approximately $150,000 (Hart, 1996).

Production History The primary hydrocarbon produced from the Rose Run is natural gas, although oil is sometimes discovered (Roth, 1992). Gas was first produced from the Rose Run in 1965, but over 84% of Rose Run wells have been drilled since 1985. As of 1996, cumulative gas production from the Rose Run was estimated to be 143 bcf. This accounts for 80 percent of the estimated 179 bcf of total proven gas reserves in the play. Initial open flows for Rose Run wells average 500 Mcfg/d and range from 10 Mcfg/d to 3 MMcfg/d. Final open flows average 300 Mcfg/d and range from 10 Mcfg/d to 2.1 MMcfg/d. Rock pressures average 2200 psi and range from 1500 to 2400 psi (Roen, 1996).

Conclusion The Rose Run exploration trend will provide excellent drilling opportunities for decades to come, mainly due to increases in the quality and availability of exploration technology like 3-D seismic. Sound geophysics is the key to finding erosional remnants in the vast Rose Run fairway, since their spatial distribution is difficult to predict. Although few oil and gas producers in Ohio can afford the up-front cost of seismic surveys, the investment can easily pay off if several hydrocarbon-bearing erosional remnants can be identified on the lease. The key to reducing the risk associated with ordering a seismic survey is to first use other geophysical techniques, like magnetic surveying, to determine which areas have a high probability of containing erosional remnants. Because 3-D seismic data are currently available for only a small amount of land in the fairway area, companies must invest in shooting more seismic lines if they wish to exploit the Rose Run to its full potential.

References 1. Hart, B., Copley, D., 1996, Chasing the Rose Run play with 3-D seismic in New York: Oil and Gas Journal, pp. 88-90. 2. Riley, R., 1993, Measuring and predicting reservoir heterogeneity in complex deposystems; the Late Cambrian Rose Run Sandstone of eastern Ohio and western Pennsylvania, 257 pages. 3. Roen, J. B., 1996, The Atlas of Major Appalachian Gas Plays, pp. 181-187. 4. Roth, B. L., 1992, Seismic modeling of Ordovician Rose Run sandstone / Beekmantown dolomite oil and gas traps in east-central Ohio, Master’s Thesis: Wright State University. 5. Roth, B., Berg, T., Blomberg, J., 1996, Reflection configuration and paleotopography of Rose Run Sandstone remnants in Ohio: Ohio Geological Society 4th annual technical symposium, v. 4, pp. 143-155.

Lift and Drag Coefficients of Wing Designs for an RC Aircraft

Student Researcher: Laura E. Childerson

Advisor: Dr. Jed E. Marquart

Ohio Northern University Mechanical Engineering

Abstract A computational fluid dynamics analysis of two wing geometries was performed. These wing designs were analyzed to find their respective coefficients of lift and drag. The effects of angle of attack and airfoil shape were the items of interest in this analysis. The angles of attack studied ranged from -10 to +20 degrees. The airfoil shapes modeled were a NACA 5319 and a Clark-Y. Results of this analysis were compared to experimental results of the same wing designs gathered using wind tunnel testing. The CFD analysis and wind tunnel testing were conducted at various Reynolds number values. The intent of the study was to determine the best airfoil design for application to a model airplane. Results were used in the design of an RC aircraft for a senior capstone project and AIAA competition.

Project Objectives The results found in this paper were used to decide upon an airfoil which would best perform for the application of flying a radio controlled aircraft carrying a payload. For this application, a higher coefficient of lift over the range of angles of attack (-10 to 20 degrees) was more important than the coefficient of drag. In other words, a larger drag coefficient value would be acceptable as long as the lift coefficient value was also significantly larger.

Both wing designs had a wingspan of 59 inches and a 10-degree taper, with the chord measuring 20 inches at the wing tips. Both also had a two-degree dihedral; therefore, the only difference between the two wing designs was the airfoil shape. Figures 1a and 1b show the NACA 5319 and Clark-Y airfoils, respectively. As can be seen in the figures, the NACA 5319 is much thicker and was hypothesized to create more lift, as well as more drag, per unit area than the Clark-Y. Supporting this hypothesis, data collected from Aerofoil, an airfoil design and analysis program, showed a higher section coefficient of lift for the NACA 5319 in comparison to the Clark-Y.

Figure 1a. NACA 5319 Airfoil Shape. Figure 1b. Clark-Y Airfoil Shape.

Air at standard temperature and pressure was used as the fluid flowing over the wings and was modeled as viscous.

To verify that the CFD modeling was accurate, wind tunnel testing was also performed and data was generated with which computational results could be compared for the lift and drag coefficients of the two wing designs.

Methodology Used

Wind Tunnel Testing The two wings were constructed out of birch and polystyrene at one-quarter scale and covered with MonoKote. In order to keep the data consistent, the Reynolds number was held constant between the CFD model and the wind tunnel testing; to do this, the air velocity in the wind tunnel was set to 100 mph to correspond to the actual velocity of 25 mph experienced by the full-scale wing. The normal and axial forces on the wings were recorded along with the pitching moment, air temperature, air velocity, and planform area. Using these values, the lift and drag coefficients were calculated.
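The report does not spell out the data-reduction step, so the following Python sketch uses the standard transformation from body-axis balance readings (normal and axial force) to wind-axis lift and drag, normalized by dynamic pressure; it is an assumed reduction rather than the author's procedure, and every input value is a placeholder.

    import math

    def wing_coefficients(f_normal, f_axial, alpha_deg, rho, v, s_ref):
        """Convert balance forces (N) to lift and drag coefficients."""
        a = math.radians(alpha_deg)
        lift = f_normal * math.cos(a) - f_axial * math.sin(a)
        drag = f_normal * math.sin(a) + f_axial * math.cos(a)
        q = 0.5 * rho * v ** 2          # dynamic pressure, Pa
        return lift / (q * s_ref), drag / (q * s_ref)

The 100 mph tunnel speed follows directly from the Reynolds matching described above: with the model at one-quarter scale, Re = ρVL/μ is preserved by quadrupling the velocity.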

Computational Fluid Dynamics Three-dimensional models of the wings were created in Pro/E and imported into Pointwise, where unstructured grids were created on database entities on the surfaces of the wings. Rectangular outer boundaries of the flow domain were modeled. An unstructured block was then created in order to capture the boundary layer by extruding from the domains on the surface of the wing. The flow domain was then created by forming a block between the rectangular outer boundary and the domains created by the previous extrusion. Figures 2a and 2b show the grids for the flow domain and the surface of the NACA 5319 wing; the grids for the flow domain and the surface of the Clark-Y wing were modeled in the same manner. A velocity inlet boundary condition was applied to the inlet of the flow and a pressure outlet boundary condition was applied to the outlet. A symmetry boundary condition was applied to the boundary along the center of the wing, and velocity inlet boundary conditions were also applied to the top, bottom, and the boundary opposing the symmetry boundary. A wall boundary condition was applied to the surface of the wing. The case was then imported into Fluent.

Figure 2a. Unstructured Grid on Flow Domain of NACA 5319 Wing. Figure 2b. Unstructured Grid on Surface of NACA 5319 Wing.

Using Fluent as the solver, the grid was read in and scaled to the appropriate units of inches. The boundary conditions were then set and the model was defined. The fluid was set to air. The velocity inlet boundary condition was set to a velocity magnitude of 25 mph in the x-direction to correspond to a zero-degree angle of attack; to analyze angles of attack from -10 to 25 degrees in increments of five degrees, the x- and z-components of the flow were changed. The pressure outlet boundary condition was set to zero gauge pressure. The wall boundary condition was set to no-slip and as a stationary wall. The flow was set as viscous using the k-epsilon turbulence model. The Green-Gauss node-based gradient scheme was applied to the Navier-Stokes governing equations in order to increase the accuracy of the coefficient of drag, due to its compatibility with unstructured grids1. The continuity residual was set to converge to 1e-5. Force and surface monitors were used to collect data on the coefficients of lift, drag, and pitching moment and on the total pressure, respectively; as the angle of attack of the wing was varied, the x- and z-direction vectors of the force monitors were changed to match. The planform area, air temperature, density, velocity, and chord length were set in the reference values used to calculate the lift, drag, and pitching moment coefficients. The planform area used was 666.7 in2, and the chord length used was the average chord length of the wing, 22.6 inches. The solver was then initialized and run for approximately 300 iterations until convergence was reached.
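Both the inlet velocity components and the force-monitor direction vectors rotate with angle of attack. A small sketch of that bookkeeping, assuming SI velocity units and the 25 mph freestream named above (illustrative only, not the case setup files):

    import math

    V = 25.0 * 0.44704                        # 25 mph freestream in m/s

    for alpha in range(-10, 30, 5):           # -10 to 25 degrees, steps of 5
        a = math.radians(alpha)
        vx, vz = V * math.cos(a), V * math.sin(a)    # inlet velocity components
        lift_dir = (-math.sin(a), math.cos(a))       # lift monitor direction (x, z)
        drag_dir = (math.cos(a), math.sin(a))        # drag monitor direction (x, z)
        print(f"alpha={alpha:+3d} deg: vx={vx:.2f} m/s, vz={vz:.2f} m/s")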

Results Obtained The CFD solution converged within approximately 250 iterations for all angles of attack on both wing models. Plots of the lift and drag coefficients vs. angle of attack and of the pitching moment coefficient vs. angle of attack were created to compare the two wing designs. These plots are shown in Figures 3 and 4, respectively.

Figure 3. Lift and Drag Coefficients vs. Angle of Attack (NACA 5319 and Clark-Y CL and CD over angles of attack from -10 to 35 degrees). Figure 4. Pitching Moment Coefficient vs. Angle of Attack (NACA 5319 and Clark-Y CM).

It can be seen that the coefficient of lift was on average 10% higher for the NACA 5319 airfoil design, while the coefficient of drag for the NACA 5319 was on average 6% higher than that of the Clark-Y. These figures also show that, as expected, the coefficient of lift increased with angle of attack until the wing reached its stalling point; for both the NACA 5319 and Clark-Y models, this stall was observed at an angle of attack of approximately 25 degrees. The pitching moment coefficient was higher at all angles of attack for the NACA 5319. FieldView was used for post-processing of the data. As expected, the pressure was highest along the leading edge of each wing. The streamlines showed the effect of the tapered ends of the wing on the flow: for both the NACA 5319 and the Clark-Y, swirling was created at the edge of the wing. However, the NACA 5319 wing design turned the flow more than the Clark-Y wing, an indicator of higher lift capability.

The wind tunnel test results were analyzed and coefficients of lift and drag were calculated for both wing designs. A plot of these results is shown in Figure 5.

Figure 5. Plot of CD and CL Results from Wind Tunnel Testing (NACA 5319 and Clark-Y, angles of attack from -15 to 25 degrees).

These results were compared to those of the CFD analysis. The coefficient of lift differed between the wind tunnel data and the CFD computations by an average of 42% for the NACA 5319 wing design and 69% for the Clark-Y; the coefficient of drag differed by an average of 49% for the NACA 5319 and 56% for the Clark-Y.

Significance and Interpretation of Results The CFD results validate the choice of the NACA 5319 airfoil for the wing, owing to its higher coefficient of lift over all angles of attack from -10 to 20 degrees. Although the coefficient of drag is also higher, the percent increase in drag was 4% lower than the percent increase in lift, so the NACA 5319 design is clearly the better choice for this application. Also validating the NACA 5319 design is the increased pitching moment coefficient. This increase is desirable, as a higher pitching moment corresponds to easier takeoff; the pitching moment data, however, is intended mainly for stability calculations of the wing. The streamlines visible flowing over the wing and swirling at the wingtips will help in understanding the effect of the wingtip on the design's performance. There appear to be more disturbances in the flow over the NACA 5319, but these do not differ enough from those over the Clark-Y to warrant a change in design. This swirling can, however, lead to poor stability of the plane and decreased performance. In future analysis, the angle of taper can be adjusted and its effect on the swirling motion of the flow examined in order to find an optimum design.

The large differences in the calculated lift and drag coefficients between the wind tunnel experiment and the CFD analysis can be attributed to several factors. First, the wind tunnel experiment has several potential sources of error. The wind tunnel models were made by hand, so their curvature across the top of the wing may be imperfect. Also, the mount that attaches the wing to the wind tunnel apparatus used to change the angle of attack causes some disturbance in the flow over the bottom of the wing; this disturbance affects the boundary layer there and introduces error into the readings. In addition, the readings from the wind tunnel panels fluctuate considerably instead of settling to a steady value, which leads to error in reading the exact forces and moments. To avoid these errors in the future, the wing models could be machined on a CNC mill for greater accuracy, and a wind tunnel that takes more accurate readings could be used to gather the data.

Overall, it was concluded that the CFD results were the more accurate of the two, and that the wing design utilizing the NACA 5319 airfoil should be implemented.

Acknowledgments I would like to acknowledge Dr. Marquart who assisted with the CFD analysis of the two wing designs as well as the wind tunnel testing. Also, Andrew Weaver acted as my partner while performing both the CFD analysis and wind tunnel testing. Finally, I would like to thank the OSGC and Laura Stacko for giving me the opportunity to perform and present this research.

References
1. “Fluent 6 User's Manual” and “Cobalt 2.0 User's Manual”, Cobalt Solutions, LLC, Dayton, OH, 2003.
2. Shang, J. S., and Scherr, S. J., “Navier-Stokes Solution for a Complete Re-Entry Configuration”, AIAA Journal of Aircraft, Vol. 23, No. 12, December 1986, pp. 881-888.
3. Anderson, J. D., Jr., “Fundamentals of Aerodynamics”, McGraw-Hill Book Company, New York, NY, 1984, pp. 35-42, 400-421.

Indoor Navigation Using the Particle Filter

Student Researcher: Mathew A. Cosgrove

Advisor: Dr. Jade Morton

Miami University Electrical and Computer Engineering Department

Abstract This paper describes a navigation solution for a mobile robot equipped with odometry and range sensors, moving in a structured indoor environment with prior knowledge of its map. The objective is to follow a set of via points and successfully exit the map. A particle filter algorithm was implemented on a simulated Pioneer 3DX with a SICK LMS-200 scanning laser to obtain a navigation solution.

Project Objectives Accurate localization is a precondition for a robot to interact effectively with its environment. Autonomous vehicles that perform repetitive tasks are becoming more predominant, and such a vehicle needs to obtain the information necessary for temporal and spatial task management. The most commonly available sensors for acquiring localization information are proprioceptive sensors, such as wheel encoders, gyroscopes, and accelerometers, which provide information about the robot's motion [1]. In dead reckoning (DR) [2], a robot's position can be tracked from a starting point by integrating proprioceptive measurements over time. The limitation of DR, however, is that since no external reference signals are employed for correction, estimation errors accumulate over time and the position estimates drift from their real values. To improve localization accuracy, most algorithms fuse the proprioceptive measurements with data from exteroceptive sensors, such as cameras and laser range finders [3].

This project examines the use of the particle filter algorithm for fusing odometry data with laser range measurements to solve the localization problem. The approach is implemented on a mobile robot developed to accomplish a given mission in a well-known environment. The navigation solution has to deal with uncertainty in the measurements of both the odometry and range sensors that help the robot navigate.

Methodology Used All of the following simulations were performed using MATLAB®. The first step in this solution is to define the environment in which the mobile robot needs to navigate and the points on the map that the mobile robot needs to visit. The map that was used, along with the accompanying via points, is shown in Figure 1. The state vector for the robot's location and orientation is defined as,
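\[ \mathbf{x}_k = \begin{bmatrix} x_k & y_k & \theta_k \end{bmatrix}^T, \]

the standard planar pose vector for location and orientation (shown here for reference, since the original equation was not legible), where (x_k, y_k) is the robot's position and θ_k its heading.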

The next step is to accurately simulate the odometry data received from the mobile robot. The robot parameters used are shown in Figure 2. The odometry simulation was performed using the following kinematic equations,
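For reference, a standard differential-drive odometry model, consistent with the Pioneer 3DX and with the parameters V, VL, l, and θ indicated in Figure 2 (this particular form is an assumption, since the original equations were not legible), is

\[ x_{k+1} = x_k + V\,\Delta t \cos\theta_k, \qquad y_{k+1} = y_k + V\,\Delta t \sin\theta_k, \qquad \theta_{k+1} = \theta_k + \frac{V_R - V_L}{l}\,\Delta t, \]

where V = (V_R + V_L)/2 is the translational speed, V_L and V_R are the left and right wheel speeds, and l is the wheel separation.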

To predict the probability distribution of the moving robot's pose after a motion, a model of the effect of noise on the resulting pose is needed. Most approaches employ an additive Gaussian noise model for the motion, and a similar approach was used here. The next step is to accurately simulate the range measurements. This was done by having a well-defined environment, calculating the ranges to the features on the map, and adding Gaussian noise to the range portion of the returned measurement. Based on the simulated data and measurements we can implement the particle filter. A particle filter is a nonparametric implementation of the Bayes filter and is frequently used to estimate the state of a dynamic system [4]. The key idea is to represent a posterior by a set of hypotheses, each representing one potential state the system might be in. The state hypotheses are represented by a set of weighted random samples. Such a set of samples can be used to approximate arbitrary distributions; each weight is a non-zero value, and the sum over all weights is 1.

The particle filter algorithm is used when a state estimate of a dynamic system is needed. The idea of this technique is to represent the distribution at each point in time by a set of samples, also called particles. The particle filter algorithm allows a recursive estimation of the particle set based on the estimate of the previous time step. The goal is to approximate the unknown, target distribution by samples. The samples are drawn from a proposal distribution and weighted according to their correlation with the features of the map. After determining the importance weights which account for the fact that the target distribution is different from the proposal distribution, the resampling step replaces particles with a low weight by particles with a high importance weight.
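A minimal sketch of the predict-weight-resample loop described above is given below. This is illustrative only: the helper expected_ranges (the ranges the laser would report from a given pose on the known map) is hypothetical, the noise model is simplified, and the paper's own implementation was in MATLAB rather than Python.

import numpy as np

def particle_filter_step(particles, u, z, dt, sigma_motion, sigma_range,
                         expected_ranges, rng):
    """One predict-weight-resample update for planar robot localization.

    particles: (N, 3) array of [x, y, theta] pose hypotheses
    u: (v, omega) odometry control; z: measured laser ranges
    expected_ranges(pose): hypothetical helper returning the ranges the
    laser would report from `pose` given the known map
    """
    n = len(particles)
    v, omega = u
    # Predict: propagate each particle through the motion model with
    # additive Gaussian noise (the noise model described above).
    noise = rng.normal(0.0, sigma_motion, size=(n, 3))
    particles[:, 0] += v * dt * np.cos(particles[:, 2]) + noise[:, 0]
    particles[:, 1] += v * dt * np.sin(particles[:, 2]) + noise[:, 1]
    particles[:, 2] += omega * dt + noise[:, 2]
    # Weight: the importance weight of each hypothesis is the likelihood of
    # the observed ranges; weights are non-negative and normalized to sum to 1.
    w = np.empty(n)
    for i in range(n):
        err = z - expected_ranges(particles[i])
        w[i] = np.exp(-0.5 * np.dot(err, err) / sigma_range**2)
    w += 1e-300          # guard against total underflow
    w /= w.sum()
    # Resample: particles with low weight are replaced by copies of
    # particles with high importance weight.
    idx = rng.choice(n, size=n, p=w)
    return particles[idx]

A caller would supply, for example, rng = np.random.default_rng() and repeat this step at each time increment as the robot moves between via points.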

Results Obtained Figure 3 shows the mobile robot's estimated path through the maze, and Figure 4 shows the evolution of the particle cloud. Notice how the particle cloud splits in the upper left corner of the “M”; this is caused by the larger open space around the corner and by extreme features in the environment, such as the corner itself. Figure 5 shows a histogram of the particle cloud; this is the target distribution that was approximated. In conclusion, the particle filter method for autonomous navigation performed very well.

Figures/Charts

Figure 1. Map with Via Points (10 x 10 environment). Figure 2. Parameters of Robot (V, VL, l, θ).

Figure 3. Navigation Solution through the Maze. Figure 4. Particle Evolution through the Maze.

Figure 5. Histogram of Target Distribution.

Acknowledgments The author would like to thank the Miami University Electrical and Computer Engineering Department for supplying the materials and equipment to conduct this research, and Dr. Jade Morton for her cooperation on the project.

References
1. Anastasios I. Mourikis, “SC-KF Mobile Robot Localization”, IEEE Transactions on Robotics, Vol. 23, No. 4, Aug. 2007.
2. A. Kelly, “General solution for linearized systematic error propagation in vehicle odometry,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Maui, HI, Oct. 29-Nov. 3, 2001, pp. 1938-1945.
3. F. Lu and E. Milios, “Robot pose estimation in unknown environments by matching 2D range scans,” J. Intell. Robot. Syst.: Theory Appl., Vol. 18, No. 3, pp. 249-275, Mar. 1997.
4. Ioannis M. Rekleitis, “A Particle Filter Tutorial for Mobile Robot Localization”, Centre for Intelligent Machines, McGill University.

The Development of a Flexible Manufacturing System

Student Researcher: Jeffrey V. Coulter

Advisor: Dr. Mahmoud A. Abdallah

Central State University Manufacturing Engineering Department

Abstract The objective of this project is to utilize the robotic workstation developed in last year's senior design project to produce a product through the implementation of a flexible manufacturing cell. The project calls for the integration of conveyor belts, grippers, and part feeders with the development of appropriate control software. The product is a simple assembly that can be used to demonstrate how various automation components are put together to accomplish an industrial operation.

Objectives In this project we develop communication techniques between the robot and peripheral devices, and develop a flexible manufacturing system optimized for a specific product. The goal is to integrate multiple workstations to perform an assembly operation.

Technical Approach The idea is to have the robot pick up a block, test whether it is a good part or a bad one, and then place it on its respective transporter for further additions. With the possible application of sensors, the robot will detect whether a block is good or bad depending on which switch is triggered.

This idea was chosen for several reasons: first, given the time constraint, this goal is very reachable; second, the quality-check idea is a simple one that still covers the list of objectives that need to be met in this project.

The conceptual design of the project is to develop communication between the computer and the workstation, using PLC programming to control I/O and CNC programming to machine the parts to be assembled.

Results Due to time constraints, the project is not yet complete; it is expected to be finished by the end of the last full week of April. As the project stands now, the communication between the robotic arm and the peripherals has been implemented, and nearly every component contributing to the project has been assembled onto the workstation.

The results so far have impressed me with how much I can manipulate the robot and peripherals to my liking. This setup can be used in any process, depending on how the process is defined and what peripherals are needed. The workstation is very flexible, meaning I can move the robotic arm to any location within the workstation along with the peripherals.

Acknowledgements I want to thank the Ohio Space Grant Consortium and my advisor, Dr. Abdallah. In addition, I want to acknowledge Harold Pearson, robotics specialist; Melvin Shirk, laboratory technician; Motoman Robotics Inc.; the manufacturing graduates of 2007; and the Central State University faculty and staff.

Impedance Spectroscopy for Biosensors and Hydraulic Hose Failure Detection

Student Researcher: Mitul R. Dadhania

Advisors: Dr. Mark Schulz, Dr. Vesselin Shanov, Dr. Yeo-Heung Yun

University of Cincinnati Department of Mechanical Engineering

Abstract The project involves development of an electronic biosensor for detecting cancer cells in solution, as well as other molecules and cells pertinent to human health monitoring. Electrochemical Impedance Spectroscopy (EIS) is used to detect the cancer cells on a large electrode surface with a moveable microelectrode. In addition, microelectrodes patterned on a silicon wafer and functionalized with anti-EpCAM to attach cancer cells have been used with EIS to detect cell presence in solution. Carbon Nanosphere Chains (CNSC) and Carbon Nanotube (CNT) thread have been used to develop a failure detection method for hydraulic hoses. CNSCs at a maximum concentration of 5.0% by weight proved unable to make hydraulic oil conductive enough for the sensor to work. CNT thread showed an increase in impedance upon addition of hydraulic fluid; this thread could be integrated into the hydraulic hose for leakage detection.

Project Objectives Two sensor methods can be coupled to provide a sensitive biosensor for cancer cells. One objective is to verify that a switch-type method for cancer cell detection is possible; this method can be used to tell whether a cancer cell is at a specific location using a movable microelectrode. The other method uses microelectrodes patterned on a silicon wafer: if a cancer cell binds to a microelectrode, the impedance should increase, allowing detection of the cancer cell's presence in the solution. Having a moveable electrode that can probe an array of microelectrodes for the presence of bound cells can allow detection of multiple types of cells on one array. In addition, EIS can be used to learn more about cancer cell physiology, which is a future objective of this research.

Another objective is to develop an inexpensive, practical sensor that can predict failure of a hydraulic hose. Hydraulic hoses are prone to failure and detection of failure before it occurs can protect people from injury. The failure mode is not well understood. If the failure mode involves degradation of the inner surfaces before sudden eruption, a sensor can be designed using impedance measurement. The objective is to explore possibilities using impedance measurement and electrically conductive nanoparticles.

Methodology Used For the movable active sensor experiment, a tungsten microelectrode with a 35 µm tip was used as the working electrode (see Figure 1). PC3 cancer cells are roughly 25 ± 6 µm in diameter [1], so the electrode tip is slightly larger than the cell. The microelectrode is covered with plastic tubing stripped from copper wiring, leaving the tip exposed, to provide rigidity and reduce vibration of the thin, Parylene-insulated microelectrode. A wire is soldered to the electrode for connection to the impedance analyzer, and the electrode is clamped into a micromanipulator which can move in three axes. The counter electrode is prepared by sputtering a thin layer of gold onto the surface of a silicon wafer. Using conductive epoxy, wires are attached to the gold electrode surface for connection to the analyzer. A rubber pad is added to the top of the gold electrode to provide a well in which the cell solution is held (see Figure 2 for a photo of the setup).

The impedance analyzer uses a two-electrode setup without a potentiostat or reference electrode. A three-electrode setup with a potentiostat is typically used for electrochemical measurements of this kind; however, two-electrode EIS has been, and can be, used successfully for analyzing cells [2].

The PC3 cancer cells are grown by Dr. Zhongyun Dong in the Department of Hematology and Oncology at the University of Cincinnati College of Medicine and suspended in buffer solution. PC3 cancer cells are associated with prostate cancer and express a cell adhesion molecule (CAM) called EpCAM. An antibody to EpCAM, called anti-EpCAM, can be used to functionalize an electrode so that it specifically binds the PC3 cancer cell. The functionalization process is quite involved and depends on the electrode size. Future work will involve functionalization of a patterned microelectrode so that the bound cell can be probed with the movable microelectrode. Figure 3 shows a patterned gold electrode before being functionalized with anti-EpCAM and placed in a fluid channel with PC3 cancer cells [3].

Two drops of the PC3 cells in buffer solution were added to the top of the counter electrode. The analyzer used a 10 mV AC amplitude potential to run the EIS. EIS works by applying an AC potential, measuring the resulting current, and then calculating the impedance of the solution. The real and imaginary components can be plotted to form a Nyquist plot, or the impedance magnitude and phase angle versus frequency can be plotted to form a Bode plot. Both convey valuable information about the electrochemical measurement, such as solution resistance and double-layer capacitance.
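A minimal sketch of this calculation for a simple equivalent circuit (solution resistance in series with a double-layer capacitance; all component values are illustrative assumptions, and in practice the analyzer performs the measurement itself):

import numpy as np

# Simple equivalent cell: solution resistance Rs in series with a
# double-layer capacitance Cdl (values are illustrative assumptions).
Rs, Cdl = 1e4, 1e-8                 # ohms, farads
f = np.logspace(1, 6, 100)          # frequency sweep, Hz
w = 2 * np.pi * f
Z = Rs + 1.0 / (1j * w * Cdl)       # complex impedance Z = V / I

# Bode plot quantities: impedance magnitude and phase vs. frequency.
mag = np.abs(Z)
phase_deg = np.degrees(np.angle(Z))

# Nyquist plot quantities: real part vs. negative imaginary part.
re, neg_im = Z.real, -Z.imag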

Electrode location was visualized using a high-powered stereomicroscope; Figure 4 shows the cells and the microelectrode tip as seen through the microscope lens. Measurements were taken with the electrode in solution, with the electrode in solution touching the cell surface, and with the electrode in solution touching the counter electrode surface. Because the arm of the microelectrode acts as a large, easily bendable cantilever, little force is applied to the cell when the microelectrode is positioned and pushed down upon it.

For testing of the hose sensor, the initial design is shown in Figure 7. For current to be detected between the working electrode and counter electrode upon rubber degradation or cracking, the hydraulic fluid must be electrically conductive, which it is not. Using electrically conductive CNSCs that had also been magnetized by microwave treatment, an attempt was made to render the fluid conductive by mixing a 5.0%-by-weight CNSC suspension in the fluid. The CNSCs are easily dispersed in the fluid, and shear mixing and sonication for 30 minutes ensured full homogeneity of the suspension before impedance testing. The CNSC fluid suspension was also placed between two strong magnets, used as electrodes, to see whether the magnetic field would make the CNSC molecules bridge the gap between the electrodes and drop the impedance.

CNT thread, which is woven by slowly pulling and rotating CNTs from a dense CNT array, was prepared for use as a sensor for hydraulic fluid. The thread was connected on either end to the impedance analyzer. Four drops were added to the center of the thread, with measurements taken at 100 mV AC amplitude after each drop. The fifth measurement was taken after the thread was allowed to sit for 5 minutes in the fluid without any additional fluid being added.

Results Obtained Figure 5 shows the results of the moveable microelectrode testing with the PC3 cancer cells. When the electrode is in solution, the impedance magnitude is high. The double-layer capacitance between the buffer and the electrode surface causes the impedance magnitude to drop at high frequency and remain high at low frequency. When the electrode is touching the cell, the impedance magnitude is similar to that of the buffer solution alone; a smaller capacitance is seen in the Nyquist plot results, leading to a slightly lower impedance magnitude at high frequency compared to the solution alone in Figure 5. When the microelectrode is touching the large gold counter electrode, the impedance drops several orders of magnitude.
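This frequency dependence follows from the standard impedance of a capacitor (a textbook relation, not taken from the report):

\[ |Z_C| = \frac{1}{2\pi f\, C_{dl}}, \]

so the capacitive contribution of the double layer dominates (high impedance) at low frequency and vanishes at high frequency, consistent with the behavior described above.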

Using CNSCs in hydraulic fluid, the impedance magnitude of the fluid could not be reduced. Concentrations of CNSC greater than 5.0% may yield a more conductive fluid, but would be increasingly difficult to mix and would make the fluid thicker. Between the strong magnets, the CNSCs were unable to bridge the gap between the electrodes, and the impedance remained as high as that of unmodified hydraulic fluid.

Figure 6 shows the result of adding different numbers of drops to a CNT thread. As more drops are added, the impedance magnitude increases across the entire range of frequencies tested, indicating that the overall resistance of the thread has increased. After the five minutes had passed, more fluid had dispersed through the thread without additional drops being added, resulting in an even greater impedance magnitude.

Significance and Interpretation of Results The results of Figure 5 show that the presence of a cell between two electrodes can easily be detected using impedance measurement and a cantilevered electrode which does not damage the cell through excessive force. This concept can be applied to specific adhesion of a PC3 cancer cell to a microelectrode surface through the use of anti-EpCAM functionalized electrodes. If a solution needs to be tested for presence of the PC3 cancer cell for diagnostic purposes, a large array of microelectrodes, each about the size of the cancer cell itself, can each be functionalized with anti-EpCAM. If binding can be detected using EIS, the microelectrode probe can move to the specific microelectrode on the array where binding was detected to verify the presence of the PC3 cancer cell on the surface by closing the gap between electrodes and checking the impedance. If the cell is present, the impedance will be high as shown by the results of the experiment.

For the hydraulic hose, a sensor can be designed which incorporates CNT thread into the layers of rubber closest to the inner surface of the hose. These threads have been shown to be mechanically weak and, from the results, show an increase in resistance upon exposure to hydraulic fluid. Because the CNTs are hydrophobic, as demonstrated by their ease of dispersion among the hydrophobic oil molecules of the hydraulic fluid, they may readily absorb the highly resistive hydraulic fluid molecules upon contact. The hydrophobic molecules can begin to cover and isolate individual CNTs that were once part of the conductive thread chain, causing an increase in the resistance of the thread. If the resistance of the CNT thread is monitored from one end of the hose to the other, and the thread is coiled within the hose, breakage of the thread due to internal cracks or damage, as well as exposure of the thread to the hydraulic fluid, could change the resistance enough to determine whether the hose is approaching failure. Using this as a sensor method is contingent upon understanding the precise failure mode of the hydraulic hose, which will involve further research.

Addition of CNT or CNSC to hydraulic fluid and other fluids, such as motor oil, may prove to be beneficial. CNT addition to oil is being researched for its possible improvements to the lubrication and heat transfer properties of oil.

Figures/Charts

Figure 1. Tungsten microelectrode tip. Figure 2. Experimental setup. Figure 3. Patterned gold electrode [3]. Figure 4. PC3 cancer cells and microelectrode.

Figure 5. Microelectrode and cancer cell results. Figure 6. Increase of thread impedance with fluid.

Figure 7. Design for hose impedance sensor requiring conductive hydraulic fluid (working electrode exposed to the fluid; rubber-insulated counter electrode; analyzer across the two; small crack formation gives fluid contact with a small area of the counter electrode, while a large degradation area gives fluid contact with a large area).

Acknowledgments and References
1. Kolios, M. C. Towards understanding the nature of high frequency backscatter from cells and tissues: an investigation of backscatter power spectra from different concentrations of cells of different sizes. Ultrasonics Symposium. 1 (2004) pp. 606-609.
2. Rahman, Abdur Rub Abdur. A micro-electrode array biosensor for impedance spectroscopy of human umbilical vein endothelial cells. Sensors and Actuators. 118 (2006) pp. 115-120.
3. Yun, Dr. Yeo-Heung. University of Cincinnati Department of Mechanical Engineering, Smart Structures and Bio-Nanotechnology Laboratory.

Application of Space Travel Muscle Wasting Research to Aging and Health Compromised Populations

Student Researcher: Rhonda S. DePuy

Advisor: Tekla Madaras

Owens Community College Dietetic Technician Program

Abstract There are many similarities between muscle loss in astronauts during space travel and muscle loss during the aging process, in individuals with chronic illness, and in bedridden individuals. I will examine what current research has revealed about muscle wasting during space travel, essential amino acid and other supplementation, and exercise, and how these findings can be applied to the general population, especially with respect to the aging process and muscle wasting diseases. Considering the possibility of extended space travel in the future, these findings could not only make a three-year mission to Mars safe from an anatomy and physiology perspective, but could also address an extensive and growing need in the medical community.

Project Objective My objective is to research the causes of muscle loss during space travel, the various methods being researched to counteract muscle wasting and the medical conditions on Earth which will benefit from successful intervention.

Methodology Used Through researching different scientific studies, I found various scientific points of view about why muscle loss is common during space travel and about the interventions which have been tested or which researchers believe merit further testing. Additionally, I examined who would benefit medically from resolving the dilemma of muscle wasting in space.

Results Obtained Extended space travel is currently not achievable, partly due to food- and nutrition-related problems during space travel. Most people experience physiological responses to microgravity which alter the body's requirements and thus result in loss of lean muscle mass, bone mass, and nutrients. Some research suggests the foods available are part of the problem: environmental and time constraints limit the quality and variety of foods available to astronauts. The average astronaut will consume 500-1000 kcal/day less than required, which hastens muscle atrophy. Astronauts often lose their appetites, much as the elderly and sick do; consequently, when they exercise, their muscle becomes fuel, exacerbating atrophy. Crew members have shown aversion to prescribed diet regimens because of gastrointestinal side effects and feeling overloaded with food. Although they are not the total solution, healthful, well-balanced meals are as essential during space travel as they are on Earth.

There has been some success using supplements containing essential amino acids and carbohydrates to reduce atrophy in bedridden patients, whose wasting is similar to that of astronauts. One consideration with supplements is timing; one hypothesis is that they should be consumed just before exercise to increase absorption.

Another theory for the negative energy balance involves less effective thermoregulation in microgravity, which may be remedied by improving the efficiency of heat and/or CO2 removal. Radiation was a factor on NASA/MIR 1996-98, when depressed protein levels resulted from exposure to ionizing radiation and the free-radical damage it caused. It is theorized that antioxidant supplements could be beneficial in stopping free-radical propagation and irreversible damage.

Ultimately, research has shown that muscle wasting occurs either because “cellular proteins are broken down at a faster rate than normal or because new proteins are made at a slower rate”. Herman Vandenburgh and his research team showed that the cause of atrophy in space is most likely a “significantly slower rate of protein synthesis”. The same research team believes treating muscles with an insulin-like growth factor (IGF-1) will stimulate protein synthesis. Injected proteins, however, have limited effectiveness because the body breaks them down quickly, and they cannot be taken by mouth because stomach acids immediately break them down; a more effective route of administration will need to be found. The team has had some success with an “implantable protein factory” and the muscle growth stimulator hGH. Another study indicates that space conditions cause the stress hormone cortisol to increase, which also increases the rate at which proteins are broken down.

Significance and Interpretation of Results As an upcoming professional in the field of dietetics, I was hopeful that my research would reveal that oral nutrition and supplements play the most important role in preventing muscle wasting both in space and on Earth. While some success has occurred in this area, it appears to be a multi-faceted problem. Nutritionally speaking, negative protein-energy balance causes atrophy; however, the most efficient route for administering the essential amino acids is still up for debate, and other influences, such as exercise, also affect muscle tissue. Providing protein in excess of what is required on Earth will not improve muscle wasting, but it is also known that pharmacological remedies and exercise are not effective without adequate protein intake. I am hopeful that continued studies, both with space travel and with health compromised or bedridden individuals, will benefit both groups. Resolving this significant issue will enable longer space exploration missions and will benefit the medical community by alleviating muscle wasting in the elderly, the bedridden, and the chronically ill, to name a few.

References
1. Gene therapy: Putting muscle into the research (n.d.). Retrieved January 7, 2008, from NASA, Fundamental Space Biology Outreach Program Website: http://webolife.nasa.gov/currentResearch/currentResearchFlight/geneTherapy.htm
2. National Space Biomedical Research Institute (2007). Nutrition, physical fitness and rehabilitation team strategic plan. Retrieved March 31, 2008 from http://www.nsbri.org/Research/strategicplans/Nutrition.pdf
3. National Space Biomedical Research Institute (2002, August 28). Nutritional Supplements May Combat Muscle Loss. ScienceDaily. Retrieved Feb 10, 2008, from http://www.sciencdaily.com/releases/2002/08/020828062314.htm
4. Paddon-Jones, D., Sheffield-Moore, M., Urban, R., Sanford, A., Aarsland, A., Wolfe, R., et al. (2004). Essential amino acid and carbohydrate supplementation ameliorates muscle protein loss in humans during 28 days bedrest. The Journal of Clinical Endocrinology and Metabolism 89, 4351-4358.
5. Ragovin, H. (2004). Bedridden for Mars. Tufts Journal. Retrieved March 31, 2008, from http://tuftsjournal.tufts.edu/archive/2004/march/features/index.shtml
6. Senter, C. (2001). Weightlessness and weight loss: Malnutrition in space [Electronic version]. Nutrition Noteworthy, 4(1), Article 6. Retrieved Feb 10, 2008 from http://repositories.cdlib.org/uclabiolchem/nutritionnoteworghy/vol4/iss1/art6

Autologous Mesenchymal Stem Cell Transplantation to Improve Fascial Repair

Student Researcher: Tammy M. Donnadio

Advisor: Dr. Hazel Marie

Youngstown State University Mechanical Engineering

Abstract A patient being treated for a hernia has a 10% chance of post-surgery complications, a common one being recurrence of the hernia. In the medical world, improvements in old techniques and advances in technology are always being sought. Mesenchymal stem cells (MSCs) are cells in the body that have the capability to differentiate into many different types of cells. This research attempts to use these cells to repair the abdominal fascial tissue involved in a hernia. Twenty-one test subjects were to be used in the experiment, split into three groups of seven and tested for the following: tensile strength, collagen deposition, and collagen remodeling.

Project Objectives The main purpose of this experimental procedure is to improve the method for repairing hernias in order to reduce the number of recurrences. A tensile strength test was performed on the specimens to determine the biomechanical properties of the fascia. The mesenchymal stem cells, plasma-rich fibrin, and a collagen with growth factor embedded in it are expected to improve the healing. The test would determine whether or not the new method of repair decreases the number of recurrences.

Methodology Used The procedure for this stage of the research developed as the specimens were received from the previous phases. When the specimens were received, they were observed and a conclusion was made on the ideal way to perform the tensile test.

After the observation of the samples, they were clearly marked to indicate what group the tissue belonged to and whether it was the control specimen or the test specimen. Next, the samples were cut into a dumbbell shape to ensure accurate test results in the Instron testing machine. Once the sample was cut, the length, width, and thickness were measured and recorded. Also, before the sample could be placed in the tensile testing machine, the scar had to be marked with a permanent marker. The specimen was now ready for testing as shown in Figure 1.

Figure 1. Rabbit tissue in Instron Tensionmeter.

The machine used for the tensile testing was the Instron Tensionmeter, Model 5500R. The original grips (Figure 2) that came equipped with the machine were not sufficient for tissue testing, so a set of grips specially designed for this type of testing was fabricated. Using the new grips (Figure 3), the specimen was placed into the machine and secured. The machine was jogged up to create a slight amount of tension on the specimen, which also ensured the sample was securely in place. The machine was then run at a constant rate of 10 mm per minute, and the computer program Merlin was used to produce a force-extension curve.

Figure 2. Standard grips. Figure 3. New modified grips.

Throughout the testing procedure a video recording of each sample was completed. Also, a fine grid was placed behind each sample during the testing process in order to obtain local deformation during the video recording. All the data was recorded by the computer and finally graphed.

Results The measured stress was plotted against the recorded strain for the test sample, as shown in Figure 4. By observing and analyzing the stress-strain curve, biomechanical properties such as the yield strength, yield energy, and Young's modulus could be determined. The yield strength is the maximum stress a material can withstand without plastic deformation; for analysis purposes it was obtained using the 0.2% offset method. The yield energy is a measure of the material's toughness and is found by calculating the area under the stress-strain curve. Young's modulus, the ratio of the stress of the material to the strain, is a measure of how stiff the material is.
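A brief sketch of how these three properties can be computed from the recorded force-extension data is given below (an assumed implementation for illustration; the report does not give its exact numerical procedure):

import numpy as np

def biomech_properties(force, extension, area0, length0):
    """Estimate Young's modulus, 0.2%-offset yield strength, and yield
    energy density from force-extension data (units: N, m, m^2, m)."""
    stress = force / area0          # engineering stress, Pa
    strain = extension / length0    # engineering strain, dimensionless
    # Young's modulus: slope of the initial (here, first fifth) region,
    # assumed roughly linear.
    n0 = max(2, len(strain) // 5)
    E = np.polyfit(strain[:n0], stress[:n0], 1)[0]
    # 0.2% offset yield: first point where the curve falls onto or below
    # the offset line stress = E * (strain - 0.002).
    below = stress <= E * (strain - 0.002)
    i_y = int(np.argmax(below)) if below.any() else len(strain) - 1
    yield_strength = stress[i_y]
    # Yield energy density: area under the curve up to the yield point
    # (J/m^3); multiply by specimen volume to get joules (mJ in Table 1).
    yield_energy_density = np.trapz(stress[:i_y + 1], strain[:i_y + 1])
    return E, yield_strength, yield_energy_density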

Figure 4. Engineering Strain (∆L/L) vs. Engineering Stress (Pa).

The biomechanical properties were determined and calculated for both the control sample (15-C) and the test sample (15-T). These values were then compared, and the differences can be seen in Table 1.

Table 1. Biomechanical properties for control and test samples.

            Yield Strength (kPa)   Yield Energy (mJ)   Young's Modulus (MPa)
Control     400                    76.144              1.173
Test        325                    95.033              1.133

Conclusions Throughout this experiment different obstacles were encountered and required attention before obtaining the most accurate results possible. The grips used at the beginning of the research were the standard grips that came with the machine. As stated before, these grips were not sufficient for the fragile tissue being tested. Also, the initial two sets of specimens were much smaller than expected, which made it even more difficult to obtain a firm grip on the tissue.

The new grips simplified the testing procedure and resulted in more accurate data. From Table 1, the difference in values could be compared. The yield strength that was calculated for both samples came out fairly similar but the control specimen had the higher value by approximately 75 kPa. Comparing the Young’s Modulus, the values were very similar, which could have been expected.

The yield energy was calculated by superimposing a trend line on the stress-strain plot and finding the area under the curve up to the yield point. The yield energy can be regarded as a measure of a material's resistance to failure. Because the test sample produced a larger yield energy value than the control, it had more resistance to failure. The higher yield energy is a good indication that the new surgical method could be more effective and could lead to a reduced number of recurring hernias. These results, however, are based on only one sample; more testing will be necessary before a final conclusion can be drawn on whether or not the new method will decrease the number of recurrences.

References
1. “Mesenchymal Stem Cells.” R&D Systems. 2005, 30 March 2008. http://www.rndsystems.com/molecule_group.aspx?g=805&r=7
2. “Materials Testing Solutions.” Instron. http://www.instron.us/wa/resourcecenter/glossary.aspx

Acknowledgments I would like to thank Youngstown State University's Mechanical Engineering Department and Dr. Hazel Marie for providing the opportunity to perform this research. I would also like to thank Anthony Viviano, who assisted with the testing and the new grip design. Finally, I want to thank Megan Genuske (Junior Scholar) for assisting with the project and Mathew Citarella for videotaping the experiment.

Integration of Bi-Camera Imaging System on Smart Balloon

Student Researcher: Osama Elbuluk

Advisors: Dr. Jiang Zhe, Dr. Julie Zhao

The University of Akron Department of Mechanical Engineering

Abstract The purpose of the research project was to launch and retrieve a balloon which would reach near-space altitudes while still transmitting wireless data back to a ground command system. In addition, throughout the flight it had to periodically take aerial photos from two angles and at all times send wireless GPS data for tracking and retrieval. The data categories tracked were altitude, internal temperature, external temperature, and humidity. The cameras were strategically placed 90 degrees apart, with one camera facing downward out the bottom of the box and the other facing out the side of the payload box. Lastly, the cameras were attached to a microcontroller and relay which sent a pulse every three minutes, triggering the shutter so that pictures were taken at regular intervals.

Project Objectives As previously mentioned, the main goal of this project was to send and receive wireless data at near-space altitudes. On the balloon there was a series of sensors used to gather this data. These sensors measured the altitude of the balloon, covering both the ascent and the descent, the internal (inside the payload box) and external temperatures, and lastly the pressure. We wanted this specific data to show how certain materials and electronics behave at these altitudes and temperatures, as well as to capture the general trends of pressure and temperature with respect to altitude. The last piece of data we wanted to retrieve was a series of pictures. Pictures can explain a lot about the flight of the balloon and are very easy to comprehend, unlike plots and graphs, which may be harder for some to understand.

Methodology Used Although this project involved both electrical and mechanical engineers, the electrical aspect of the project was more demanding, so much of my time was spent working on a key electrical element of the project: the timing circuit. A timing circuit uses a series of microchips wired together to send either a low or a high pulse, depending upon the desired output, to (in our case) a camera. Inside a camera are two leads, and when the shutter button is pushed the two leads touch and a picture is taken. For our purposes, the camera was taken apart so the timing circuit could be attached to the camera via these two leads.

The timing circuit begins with a part known as a 555 timer. The 555 timer sends out a square-wave pulse whose frequency depends on the two resistors and one capacitor attached to it; here these were 200 kΩ, 40 kΩ, and 100 µF, respectively. This is very important because the frequency determines the period, which in turn determines how often the camera takes pictures. This combination of resistors and capacitor set the maximum period at 5.188 minutes, or 5 minutes and 11.28 seconds, which was ideal because we did not want the interval between pictures to exceed 5 minutes. One of these resistors was a potentiometer, a resistor with a small dial that allows one to alter the resistance and, in turn, the frequency.

The pulse is then sent to a 4-bit binary counter, whose purpose is simply to count the pulses from the 555 timer in binary. The count is sent over four outputs to a NAND gate. For the NAND gate to send a low pulse to the camera, it must receive a high pulse across all four inputs, which happens only once every sixteen counts; thus, the frequency at which the camera takes pictures is 1/16 of the frequency of the 555 timer. This worked well except that the length of the low pulse was approximately fifteen seconds, so instead of taking one picture the camera took however many pictures could be snapped over a fifteen-second period. A NOR gate was therefore added, which would only send a low pulse when both the NAND gate and the 555 timer were low, decreasing the duration of the low pulse to approximately 2-3 seconds. Lastly, this was attached to an R72 relay, a double-pole double-throw relay, so we could use one input and send it to two outputs, in our case the two cameras. These cameras were mounted perpendicular to one another inside the main payload box, one on the bottom and one on the side.
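As a check on the period quoted above, assuming the standard astable 555 configuration (the assignment of the 200 kΩ and 40 kΩ resistors to R1 and R2 is an assumption):

\[ T_{555} = \ln(2)\,(R_1 + 2R_2)\,C \approx 0.693 \times (200\,\mathrm{k\Omega} + 2 \times 40\,\mathrm{k\Omega}) \times 100\,\mu\mathrm{F} \approx 19.4\ \mathrm{s}, \]

and after the divide-by-sixteen counter the camera interval is 16 × 19.4 s ≈ 310 s ≈ 5.2 minutes, consistent with the 5.188-minute maximum period stated above.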

Results Obtained

Aerial Sample Pictures (Side Camera) Aerial Sample Pictures (Down Camera)

Significance and Interpretation of Results After testing, the circuit showed inconsistencies, causing us to rethink the way a pulse was sent to trigger the cameras. The cameras were therefore hooked up to the microcontroller instead; using the same relay, we could produce the same pulse for the low-activated cameras, but from the microcontroller. The microcontroller had one large advantage over the timing circuit: through Java, we could specify the exact period we wanted between pulses, so there was much less error. We succeeded, and some of the pictures the two cameras took are shown above.

Figures/Charts Truth Tables for the Binary Counter, the NAND Gate, and the NOR Gate (from L to R).

Acknowledgments and References I would like to thank the Ohio Space Grant Consortium for providing me with this opportunity, as well as Dr. Paul Lam, Dr. Julie Zhao, and Dr. Jiang Zhe for their guidance throughout the project, and my team members for their time and persistent, hard work.

Everlasting Energy

Student Researcher: Amy N. Friedlein

Advisor: Dr. Jed E. Marquart

Ohio Northern University Mechanical Engineering

Abstract A wind turbine is a rotating machine that converts the kinetic energy of wind into mechanical energy. Wind turbines could harness this alternative energy source to power the campus of Ohio Northern University and possibly even some of the Village of Ada. Since wind power is much more environmentally friendly than other forms of power currently being used, this type of energy is immensely advantageous and should be harnessed.

Project Objectives The overall scope of this project starts with the installation of data collectors on the radio tower at Ohio Northern University in order to prove to the university, to the village of Ada, and to potential investors that Ada has high enough wind speeds to take advantage of wind energy through the installation of wind turbines. The goal is to convince these three parties that wind turbines are a long-term way to provide “free energy” to the university. The scope of this research is to analyze the performance of wind turbines and to gather preliminary wind speed data.

Methodology Used To analyze the performance of wind turbines, the power curves of six different-sized wind turbines, three manufactured by General Electric Company (GE) and three by Vestas, were plotted. These power curves illustrate how many megawatts of electricity each wind turbine produces over a range of wind speeds. To gather preliminary wind speed data, the wind speed was recorded daily from weather.com for almost seven weeks; to reduce error, the wind speeds were recorded at approximately the same time each day. The average wind speed over this period was then calculated so that further calculations could be performed. For two wind turbines manufactured by General Electric Company, the 3.6 MW turbine and the 1.5 MW turbine, the payback periods were calculated based on the cost of the wind turbine and the savings from the energy produced.

Results Obtained To calculate the power produced by the wind turbine in Watts, the following equation from the Iowa Energy Center [2] is used:

P = ½ α ρ A v³

where α is the efficiency of the wind turbine, ρ is the density of air (1.2 kg/m³), A is the area swept by the wind turbine in square meters, and v is the wind speed in meters per second.

As can be seen from Figure 1, only four of the six power curves are plotted for the wind turbines. The plots for the 1.5 MW GE wind turbine and the 1.8 MW Vestas wind turbine are not included because their power curves are identical to the power curve of the 3.0 MW Vestas wind turbine. The difference between these three curves is the amount of energy at which each curve reaches its maximum value (the 1.5 MW, 1.8 MW, and 3.0 MW wind turbines reach maxima of 1.5 MW, 1.8 MW, and 3.0 MW, respectively). It is concluded that these three wind turbines have the same performance but different capacities.

An average wind speed of approximately 14.1 miles per hour is calculated based on the recorded wind speeds. Using this wind speed, the payback period of the 1.5 MW GE wind turbine is calculated to be approximately 14 years while the payback period of the 3.6 MW GE wind turbine is calculated to be approximately 18.5 years. These calculations were based on electricity prices in Ada of 4.7 cents per kilowatt-hour.
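A rough sketch of this payback estimate is given below; the rotor diameter, efficiency, and installed cost are illustrative assumptions chosen only to show the mechanics of the calculation, not figures from the report.

import numpy as np

# Hypothetical illustration of the payback estimate described above.
alpha = 0.35                 # assumed overall turbine efficiency
rho = 1.2                    # air density, kg/m^3
rotor_diameter = 77.0        # m (assumed, not from the report)
A = np.pi * (rotor_diameter / 2) ** 2
v = 14.1 * 0.44704           # average wind speed, mph -> m/s

P = 0.5 * alpha * rho * A * v**3      # average power, W
annual_kwh = P / 1000.0 * 8760        # energy per year, kWh
price = 0.047                         # $/kWh in Ada
annual_savings = annual_kwh * price

turbine_cost = 1.5e6                  # $ (assumed installed cost)
payback_years = turbine_cost / annual_savings
print(f"Estimated payback: {payback_years:.1f} years")

With these assumed inputs the sketch returns a payback on the order of 15 years, in the same range as the 14-year figure reported for the 1.5 MW GE turbine.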

Significance and Interpretation of Results The power curves developed through this preliminary research can be used in future research to determine the wind turbine that best suits the wind conditions at Ohio Northern University. Since the average lifespan of a 1.5 MW GE wind turbine is approximately 25 years, the installation of six of these wind turbines on campus would provide “free energy” for approximately 11 years.

It is determined through this research that the installation of wind turbines on the campus of Ohio Northern University would be extremely advantageous, and that it is economically feasible for the university to invest in wind energy by installing turbines on its campus. Wind energy has other advantages as well: it is much more environmentally friendly than energy extracted from coal or oil, and since wind is a renewable resource, there is an endless supply of it. Finally, the existence of wind turbines at Ohio Northern University would bring the university much positive publicity. However, the disadvantages need to be considered as well. For example, purchasing wind turbines is a long-term investment, not a short-term one, and wind turbines can cast shadows on the nearby landscape, so they need to be installed in the proper locations.

With supportive data, a willing investor, the proper location, and cooperative neighbors, the perseverance of a few students could bring about the installation of wind turbines on the campus of Ohio Northern University. With the help of this research, the overall goal of this project can be reached by turning this dream into a success.

Figures/Charts

Figure 1. Power Curves for Wind Turbines.

Acknowledgments and References
1. General Electric Company
2. Iowa Energy Center <http://www.energy.iastate.edu/Renewable/wind/wem/windpower.htm>
3. Vestas
4. The Weather Channel

Micro Air Vehicle Project: Development of Quad-Winged Flapping Mechanisms

Student Researcher: Jeremy N. Fuhr

Advisors: Dr. Haibo Dong, Dr. Joseph Slater

Wright State University Mechanical Engineering Department

Abstract A significant effort has been dedicated in recent years to the design and assembly of small unmanned aircraft. This initiative has progressed to the point that these aircraft are shrinking to scales of centimeters or inches rather than meters or feet. This area of development has earned the title of Micro Air Vehicles (MAVs) due to the emphasis on minimal size. Wright State University has now taken up this initiative and is currently progressing toward the development of a flapping wing MAV.

This project will continue the work initiated in 2006 on a flapping wing MAV. The primary focus of the original project was to develop a mechanism that could be utilized to simulate the characteristics displayed by dragonflies during flight. To adequately model the four wing configuration of the dragonfly, this mechanism was to drive two separate pairs of wings which would flap out of phase with one another by a specified degree. For this next project, the group will be focusing on three primary areas: component fabrication, mechanism lubrication, and design optimization.

The component fabrication portion of the project will focus on finding or developing viable manufacturing methods for small components. A rapid prototyping machine was employed for the original project, but yields for the individual components were low due to various contributing factors, primarily the small component size and the low resolution of the machine. Alternative methods are necessary in order to increase product yield while maintaining the ease and speed of the prototyping machine.

Lubrication was not a key area of investigation in last year's project, and therefore minimal effort was expended on lubricating the mechanism; this resulted in poor operation of the mechanism as a whole. The team will take action to improve this aspect of the design. As the overall MAV project is still in its infancy, design modifications are still a distinct possibility. While effort will be expended on determining manufacturing methods for the current design components, the overall mechanism will concurrently be reviewed and analyzed for potential improvements.

The overall primary project goal has been established by DARPA (Defense Advanced Research Projects Agency) which seeks a design for an MAV with maximum length, width, and height dimensions of six inches. Additional constraints established by DARPA include a maximum weight of 50 grams, payload weight of 20 grams, and flight time in excess of 60 minutes. This is a highly ambitious goal, and is estimated to take far more time and resources than can be allotted to a single senior design project.

Given the time and resource constraints facing the design team, a more practical goal has been established for this project: the refinement of the current design and the production of a prototype that achieves lift equal to a measurable percentage of the total prototype weight.

Introduction Mankind's earliest concepts for mechanical flight were inspired by nature. It was the flapping wing flight of birds and bats that initiated human interest in flight, and this original inspiration was the basis for the assumption that adequate propulsion and lift could only be realized with a configuration employing flapping wings. This was the common consensus until 1799, when Sir George Cayley introduced the concept of a fixed wing airplane with a dedicated propulsion system. His concept of a system which separated lift and propulsion into two distinct components proved to be the key to successful human-carrying flight (Mueller). Moreover, once the Wright brothers achieved powered flight, the idea of mechanical flapping wing flight became marginalized within the aeronautical community. Due to the success of unmanned aerial vehicles (UAVs) within the past decade, there has been a growing interest in smaller, mission-based aircraft. Research efforts are in progress dedicated to the creation of unmanned aerial vehicles on a minimal scale, earning these craft the title of micro air vehicles (MAVs). A simple reduction in scale of existing aircraft will not suffice for this initiative: MAVs must fly at extremely low Reynolds numbers, where viscous forces approach inertial forces and complex three-dimensional fluid dynamic effects arise. These unique circumstances present a new set of concerns which conventional aerodynamic principles do not address, creating the need for a reinvention of flight.

The interest in the development of MAVs is not restricted to the academic realm; there are currently many applications in which MAVs might one day be employed. One prime opportunity would be the use of micro air vehicles for real-time soldier battlefield awareness. Current ISR (Intelligence, Surveillance, and Reconnaissance) aircraft are not capable of reaching all areas of the battlefield; micro air vehicles would give soldiers a platform that could be used to scan inaccessible areas, giving them a monumental advantage over enemy forces. Other possible military applications of a fully functional MAV are remote detonation of explosives, detection of chemical warfare agents, or even a swarm attack in which enemies would be overwhelmed by a multitude of MAVs equipped with explosives.

Another potentially significant opportunity for MAVs could be the ability to aid in search and rescue efforts. The dire need for systems capable of finding survivors has become apparent after recent tragedies such as the attacks on the World Trade Center. Providing rescuers with a system capable of flying through rubble and debris may present the opportunity to deliver food and medications without having to first remove the rubble. Additionally, such a system could allow rescuers to identify the locations of survivors in an expedited manner. These platforms could also be used in applications beyond military and rescue efforts, such as road accident documentation or pipeline inspection.

In order for a craft to be suited for these applications, certain performance criteria apply. At a minimum, the craft should be able to hover, fly forward, and be capable of toggling between these two modes of flight. Additionally, the craft must be able to carry a payload, which may consist of any number of items such as sensory equipment or explosive devices. Many fixed-, rotary-, and flapping-wing MAVs have already been designed, but none have the capability of both modes of flight. The most developed form of MAV is the fixed-wing model. Many fixed-wing prototypes exhibit admirable forward-flight capabilities but are incapable of producing quality images using on-board equipment (Galinski). This deficiency is attributed to the fixed-wing design’s necessity to fly in the turbulent boundary layer of the earth. Rotary-wing designs, while able to hover, have proven to be either unstable or extremely heavy for their size. It has been noted by entomologists that a dragonfly may shift flight modes by varying the phase lag between its fore and hind wings (Maybury & Lehmann 2004, Wakeling & Ellington 1996). Thus, the hypothesis is that a quad-winged flapping MAV will be the simplest method for achieving both modes of flight. Testing the validity of this hypothesis will be one of the results of the effort to design and build a fully functional MAV. The required components for this platform are mechanisms for flight, wing optimization, and component manufacturability.

Design Criteria

The design of a mechanical dragonfly requires that the wing structure, the driving force to the wings, and the body be modeled after the insect’s natural behavior in order to maximize lift. The Defense Advanced Research Projects Agency (DARPA) has also established basic criteria an MAV must meet in order to be eligible for funding.

One of the primary areas of interest at this level of development is the wing design. Several aspects of the dragonfly wing have been observed for incorporation into the design. Wing stiffness and its impact on the frequency of flapping are to be considered. Following the model, the leading edge must be the most rigid section of each of the four wings, while the trailing edge must be capable of deformation. This should aid in creating the proper geometry for lift. Frequency is another significant consideration. It is theorized that the wings should perform at a resonant, or natural, frequency. Research indicates that this frequency should be in the range of 15 to 40 Hz.

A further concern is the low Reynolds number associated with the dragonfly wing, which has been shown to operate in a region of turbulent flow. This unique condition necessitates an angle of attack much larger than that of a conventional fixed-wing aircraft.

Finally, the mass of the wings, driving mechanism and body must be considered. The mass of the entire system must be minimized in order to maximize lift. That being stated, the materials chosen will be a limiting factor of the performance.

Testing the behavior of motion and the mechanical properties of the wings and body will be very important both before and after construction in order to optimize the performance of the MAV. Initially, hand calculations (possibly aided by programming and software) will be employed to determine critical characteristics of the structures and the conditions to which the final product will be exposed (i.e., forces, moments, material deflection, etc.). Additional testing, including use of a stroboscope for deflection, frequency detection, test-failure analysis, and lift analysis, will produce the data necessary to carry the MAV through the different stages of its development.

The DARPA regulations are as follows: the MAV must have a mass under 50 grams, carry a 20-gram payload, fly for 60 minutes with a camera, and have a wing span of no more than 6 inches. Because this project is still in its infancy, a moderate amount of generated lift would be a success. A target of 6 percent of vehicle weight has been established, but the goal is to achieve the maximum lift from the design while meeting the DARPA regulations for the MAV.

Project Scope and Approach

This project is still in its infancy due to the lack of information in this field, which forces the approach to be heuristic. The overall goal of the project was to optimize or completely redesign the original model, as designed by Brian Nicholson (Photo 1), with the intention of realizing a mechanism that will achieve a measurable percentage of lift.

Photo 1. Brian Nicholson’s MAV.

One of the primary concerns for this project is the manufacturability of the small components necessary to create a micro air vehicle that meets DARPA’s requirements. This issue must be resolved before any other obstacle can be overcome. The weight and structural integrity must be improved with the new manufacturing method.

Another challenge faced by the design team was the driving system. The current MAV design from last year’s project employs a four-bar linkage system. This system, while functional, is heavy, inefficient, and presents significant difficulties associated with assembly and lubrication. A new driving mechanism design was to be considered. The final obstacle to overcome on this project was the wing deflection: a wing needed to be designed to achieve deflection similar to that of a dragonfly wing.

Two types of driving methods were considered. Smart materials, which show great promise for the future of micro air vehicles, were one design system to be considered. Standard mechanical driving mechanisms, which are the most common systems employed in micro air vehicles today, were the second. Factors such as weight, reliability, functionality, and cost were all compared in deciding on the final design. Dragonfly wings are made of a thin membrane reinforced by a vein structure, which provides rigidity. To achieve proper deformation patterns, the wing design was to be modeled after the dragonfly’s wings. Multiple vein structures and membranes were considered for obtaining optimal deflection. This deflection was to be gauged by using a stroboscope (if available) as well as visual observation.

Overall Project Review (Power and Energy Requirements)

Power and energy calculations are necessary in order to determine the minimum amount of energy required to power the MAV for a specific period of time. This is a critical aspect, given that the energy requirements can serve as a significant limiting factor in the development of MAVs.

The power necessary can be calculated by employing the torque created by flapping the wing and the angular velocity of the wing, as seen in Equation 1 below. By integrating the power over time, the energy requirements can be obtained.

P = Tω

Equation 1.

The angular velocity can be determined simply by inspection of the flight of a dragonfly. From these inspections it was concluded that the flapping angle of a dragonfly wing during flight is approximately 60 degrees. Studies have also shown that the flapping frequency of a dragonfly wing is about 15 Hz. Based upon these data, the angular velocity can then be calculated using Equation 2.

ω = (2π/3 rad/cycle)(15 cycles/sec) = 10π rad/sec = 31.416 rad/sec

Equation 2.
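As a numerical cross-check of Equations 1 and 2, the short sketch below reproduces the angular velocity calculation and, for an assumed torque value (a placeholder, not the experimentally determined torque), the corresponding power and energy:

    import math

    # Equation 2: a 60-degree flapping angle is swept twice per cycle (down and up),
    # giving 2*pi/3 rad per cycle at a flapping frequency of 15 Hz.
    omega = (2.0 * math.pi / 3.0) * 15.0   # rad/sec
    print(f"omega = {omega:.3f} rad/sec")   # 10*pi = 31.416 rad/sec

    # Equation 1: P = T*omega. The torque here is a hypothetical placeholder.
    T = 1.0e-3                              # N*m (assumed for illustration)
    P = T * omega                           # watts
    E = P * 60.0 * 60.0                     # joules over a 60-minute flight
    print(f"P = {P:.4f} W, E = {E:.1f} J over 60 minutes")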

Determination of the torque provided a more significant challenge. In order to determine the torque created by the wing, it is necessary to know the lift force on the wing. Given that this lift force is currently an unknown variable, another method was utilized to determine the torque: a group of experiments involving the use and mathematical modeling of a small DC motor. Utilizing the mathematical model and its associated transfer functions, shown in Figure 1 below, a series of three experiments was completed to develop the relationships of the transfer functions and allow solving for the unknown constants. When considering the zero-frequency response of the system, the constants to be determined are: Kt, the torque constant; Kb, the back-emf constant; and c, the damping constant. It should also be noted that the inductance of the motor was taken to be negligible, and the internal resistance of the motor was measured to be 60 ohms.

Figure 1. Mathematical Modeling of a DC Motor and its Associated Transfer Functions.

For the first experiment, the voltage to the motor was varied and the corresponding current was measured for each voltage value. This was performed in order to determine the relationship between current and voltage to satisfy transfer function (i). A plot of this relationship for a large range of values is displayed in Figure 2. Data taken at low voltages proved to be erroneous due to the effects of coulomb friction.

Figure 2.

The second experiment involved the use of startup torque to develop a relationship between current and torque to satisfy transfer function (ii). During this experiment, a hanging mass was placed some distance from the center of the motor (see Figure 3). The torque generated by this mass is given by T=mgd. A voltage can then be applied to the motor until the torque created by the motor is equivalent to the torque generated by the hanging mass. Current can then be measured giving a relationship between current and torque.

Figure 3.

A third experiment was performed to determine a relationship between angular velocity and voltage to satisfy transfer function (iii). This was accomplished by varying the voltage to the motor and measuring the corresponding angular velocity using a stroboscope. Figure 4 displays a plot of this relationship.

Figure 4.
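Taken together, the three experiments determine the unknown constants. A minimal sketch of how the measured relationships could be combined, assuming the standard steady-state DC motor relations V = IR + Kb·ω and T = Kt·I = c·ω (inductance neglected, R = 60 Ω as measured); all data arrays below are hypothetical placeholders, not the team's measurements:

    import numpy as np

    R = 60.0  # ohms, measured internal resistance; inductance neglected

    # Startup-torque experiment: T = m*g*d is balanced by the motor torque at
    # current I, so the slope of T versus I gives Kt (hypothetical data).
    I_stall = np.array([0.010, 0.020, 0.030, 0.040])            # A
    T_stall = np.array([1e-3, 2e-3, 3e-3, 4e-3]) * 9.81 * 0.05  # N*m = m*g*d
    Kt = np.polyfit(I_stall, T_stall, 1)[0]                     # N*m/A

    # Steady-running experiments: I and omega recorded against V.
    # At steady state Kt*I = c*omega, so the slope of I vs omega gives c/Kt,
    # and V = I*R + Kb*omega then yields Kb (hypothetical data).
    V = np.array([2.0, 4.0, 6.0, 8.0])                          # volts
    I_run = np.array([0.008, 0.017, 0.026, 0.035])              # A
    omega = np.array([40.0, 85.0, 130.0, 175.0])                # rad/sec
    c = Kt * np.polyfit(omega, I_run, 1)[0]                     # N*m*sec/rad
    Kb = np.polyfit(omega, V - I_run * R, 1)[0]                 # V*sec/rad

    print(f"Kt = {Kt:.3e} N*m/A, c = {c:.3e} N*m*s, Kb = {Kb:.3e} V*s/rad")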

The final step of this process was to calculate the current required to flap the wing at 15 Hz. To accomplish this, SigLab was utilized as a function generator to send a sinusoidal signal through the motor, causing the wing to flap at 15 Hz with the appropriate amplitude. Using EES (Engineering Equation Solver), this relationship was combined with the three equations in three unknowns formed in the previous experiments to solve for the unknowns and, subsequently, the torque necessary to flap a wing. A summary of the torque found and the corresponding power and energy requirements can be seen below.

Equation 3.

To ensure the accuracy of the torque calculation, CFD (Computational Fluid Dynamics) methods were utilized to estimate the coefficient of lift. This coefficient was then used to determine the torque and compare it to the value obtained through experimentation. Figure 5 shows the estimations obtained using CFD analysis.

CFD Analysis and Calculations

Figure 5.

Wing Design

The configuration of the wings of a dragonfly is significantly more complex than the wings of a traditional aircraft. This complexity is necessitated by the fact that the dragonfly does not fly in a laminar flow condition. Turbulent flow increases the complexity of the interaction between the wings and the surrounding air. Turbulent flow is a low-drag situation; this low drag is intuitively beneficial, but stability and agility become an issue amid the turbulence.

Dragonflies do not fly with any significant velocity; therefore, the flow will not necessarily be turbulent. It is possible that this flow could be turbulent over the back wings only. To correct this, dragonflies have thousands of microscopic “barbs” on the surface of their wings which act to trip the flow around the wing, thus inducing turbulent flow. This places the dragonfly in turbulent flow, but does nothing to solve the problems of turbulent flow. In nature, the dragonfly overcomes this stability problem with the ability to adjust its wings in flight. Dragonflies can move their wings forward and backward and adjust the angle of their wings at any time. This freedom of movement is what allows the dragonfly to hover, change direction instantly, and remain stable amidst turbulent air flow. That being stated, recreating the total complexity of the dragonfly wing was not a legitimate goal of this project, as that would take years (and facilities not currently available). The wing portion of this project focused on designing and fabricating a configuration whose flexibility would increase along the length of the wing and from the leading edge to the trailing edge.

The final wing designs were created with the aid of SolidWorks. These designs were based upon magnified images of actual dragonfly front and rear wings. These images were placed near the monitor, and a grid was activated in SolidWorks in order to maintain optimum precision. Given that the material chosen for wing construction differs from the material of actual dragonfly wings, an interpretation of the geometry was necessary.

The first half of the leading edge (nearest to the body) was necessarily more rigid than the second half for optimal deflection and nodal behavior. With that being the case, the first half of the leading edge was reinforced with a parallel member of equal thickness to the leading edge. The skin supports, or posts, run perpendicular to the leading edge and are located in the middle and innermost sections. The leading edge also acts as a skin support, as it curls around at the outer extremities. These posts are what differentiate the front wings from the rear wings. The rear wings are similar to the front wings, with the exception that the rear wings have a slightly larger surface area. One means of achieving this is by employing a longer inner post. In some ways the difference between the front and rear wings mimics other creatures in nature. Representations of the SolidWorks models are displayed in Figures 6 and 7.

Figure 6. Front wing model. Figure 7. Rear wing model.

A Finite Element Analysis (FEA) was performed with COSMOS within the SolidWorks program. The data entered for analysis are as follows:

A static study was run with restraints defined at the wing arm and a 0.3 Newton force applied to the skinned area (the 0.3 N force is an approximation). The material properties for the titanium are an 11,500,000 psi elastic modulus and 0.33 for Poisson’s ratio. The mesh used is the finest possible in COSMOS. Upon evaluation of the stress results, a fillet with a 0.05-inch radius was implemented at the intersection between the arm and the innermost post for both front and rear wing designs. This dramatically reduced the stress concentration at this location (reference Figures 8 and 9). The deflection realized at the wing tips is 0.35 inches, resulting in a deflection angle of 8 degrees. This 8-degree angle is thought to be ideal: an increased deflection could result in undesirable nodal behavior, while a reduction is thought to result in a configuration that is too stiff. With the proper amount of deflection, it is theorized that the wing could assist the driving mechanism by achieving extra ‘whip’ on the down and up strokes.

Figure 8. Figure 9.
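The 8-degree figure reported above follows directly from the 0.35-inch tip deflection over the 2.5-inch wing length; a one-line check:

    import math

    # Deflection angle implied by a 0.35 in tip deflection on a 2.5 in wing
    angle = math.degrees(math.atan(0.35 / 2.5))
    print(f"deflection angle = {angle:.1f} degrees")  # approximately 8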

An additional FEA was performed with 0.2 N and 0.4 N forces in order to gain a better perspective of the behavior of the wings. These forces gave deflections of 6 degrees and 10 degrees, respectively. Two designs were created for the front wings, while one was created for the rear wings. The variations in the front wing designs were minor but were thought to be potentially important. One design had a 90-degree angle between the wing arm and the inner post, while the other had a 5-degree offset, which would shift the wing tip forward 5 degrees. The rationale for this slight variation was a theory that the flapping motion would induce wing twist, mimicking the insect’s natural flight, by shifting the center of gravity forward.

These wing designs were conceived with manufacturing processes in mind. The opportunity to work with a local laser cutting supplier (Mound Laser and Photonics Center) made it possible to quickly and precisely manufacture these newly conceived designs. MLPC was able to produce a total of four wings: two rear and two front with the 5-degree offset. The wing structures had a layer of oxidation on the side that faced the laser during fabrication. This gave the material a porous surface, which was optimal for the application of adhesives but created a concern related to oxidation and embrittlement of the material. Fortunately, when tested, the oxidation did not appear to pose any problems.

Other materials employed in the construction of the wings were an adhesive and Mylar. The adhesive was a medium-viscosity super glue; an accelerant was utilized in order to expedite cure times. Due to the precise nature of the process, the adhesive was applied with a syringe. The Mylar acts as the skin of the wing. This material was found to be durable but also very lightweight.

Two types of Mylar were tested: a clear and a bi-ply. The clear Mylar was rigid enough not to require the middle post to be glued. The bi-ply did not display the same rigidity, and therefore did require additional adhesive application. The ideal condition was not to glue this middle post, which would allow the skin to fold back slightly during the upstroke, thereby allowing more downward force and producing lift. That being the case, the clear Mylar was chosen for the final testing.

The masses of the front clear, front bi-ply, rear clear, and rear bi-ply wings are 0.2605 g, 0.2380 g, 0.2914 g, and 0.2569 g, respectively. All of the wing lengths were 2.5 inches, which allows for a one-inch body width. This produces a 6-inch total wing span, which is the limit set by DARPA.

Drive Mechanism Design

The drive mechanism provided significant challenges in terms of determining a configuration that remains simple enough to allow for ease of fabrication and assembly while achieving the necessary motion. The primary considerations for the design were minimizing the number of components needed, maintaining flat-pattern configurations where possible, and designing around necessarily complex components that could be purchased “off the shelf”. Two primary concepts were considered and are shown in Figures 10 and 11.

Figure 10. Figure 11.

Concerns existed for each concept. The five-bar linkage concept shown in Figure 10 initially generated concerns relating to its size and the subsequent overall vehicle weight. Those concerns aside, a simulated run of this configuration using SolidWorks software displayed favorable results.

The four-bar linkage concept shown in Figure 11 was attractive in that the reduced area required for the mechanism would lead to a potentially significant weight savings. One primary drawback realized with this configuration, however, was the inability to have both wings flapping in phase.

Based on these concerns, the five-bar linkage was decided upon as the most viable option. Additional effort was directed at minimizing component mass while still maintaining structural integrity and functionality. The result of these efforts was the development of four primary components that would necessitate special processing. These four primary components, all modeled using SolidWorks software, are displayed in Figure 12.

Figure 12.

Additional simple components were also modeled using SolidWorks, along with a Plantraco GB05 motor and gearbox, and assembled into the final prototype configuration shown in Figure 13. As with the original concept, a SolidWorks simulation displayed favorable results in terms of mechanism functionality. The engineering drawings for the four primary components and final prototype assembly, as well as a photo of the Plantraco GB05 are included in the appendix of the report.

Figure 13.

The next difficulty encountered was selection of a method and source for fabrication of the primary components. Due to the size of the components, tight tolerancing was an obvious concern. Simple hand fabrication was not an option, and there was concern as to the ability of available conventional machining equipment to maintain the desired dimensional accuracy. Given those concerns, it was determined that laser cutting was the most viable option for component fabrication, although no such equipment was readily available.

Investigation into the costs of purchasing equipment capable of meeting the project requirements proved to be cost prohibitive, given that there was no approved budget for the project. Fortunately, during the course of the project, a relationship was being forged between WSU and MLPC (Mound Laser and Photonics Center) in Miamisburg, Ohio. The team’s fabrication dilemma was presented to Dr. Larry Dosser at MLPC by Dr. Slater, and an agreement was reached to employ MLPC’s laser cutting facilities to produce the necessary components using the SolidWorks models that had been created. A sample of the components is shown in Figure 14. Each of these components was cut from 0.020”-thick Ti Beta 21S sheet stock, donated to the project by Aeronca Inc., of Middletown, Ohio.

Figure 14.

One additional problem encountered during final fabrication was maintaining the necessary tolerances while forming the MAV body. Due to the lack of a more suitable material, it was necessary to cut the body from the Ti Beta 21S, which presented problems due to insufficient tooling and springback issues. These issues were handled by the best means available at the time, and tolerances were held to within the bounds of those limitations.

All remaining components were either fabricated from common materials (wood, epoxy, etc.) or were purchased (nails, washers, etc.). Figure 15 shows an in process photo during the assembly process. Additional in process photos are included in the appendix of the report.

Figure 15.

Drive Mechanism Analysis

A major portion of the time spent on this project was on the analysis of the driving systems. Three different methods of determining the forces were used. Motor tests were performed to determine the force output of the motor and the forces being applied to the wing during flight.

The information gained through the tests was then used in Working Model, a program used to solve the kinematics of devices. Hand calculations were then used to verify the results of the tests and computer programs. Working Model was used to solve the kinematics of several different test models and to determine the optimal design. The multiple mechanical driving systems that were tested are shown in Figures 16 through 19.

Figure 16. First Trial. Figure 17. Second Trial.

Figure 18. Third Trial. Figure 19. Fourth Trial.

Multiple problems were encountered while testing the mechanical systems. The primary problem was phase lag: with the wings flapping out of phase with one another, there was a concern that lift would be partially or completely lost. This problem typically arose when attempting to reduce the number of components and the total vehicle weight. In Figure 16, two components were removed from the mechanism, but this resulted in significant phase lag. This design also required a very wide body, raising a concern that the DARPA requirement constraining wing span to six inches would be violated.

The second design, shown in Figure 17, used sliders in an attempt to improve on the first design. This modification did greatly decrease the body’s required width, but failed to eliminate the phase lag.

The third design, Figure 18, rotated the gear in the other direction. Using this design eliminated the phase lag but a new problem was encountered. There was no simple means of mounting the gear to the body of the MAV without creating interference.

The final design, shown in Figure 19, eliminated the phase lag, decreased the required body width, and presented no interference problems.

The next step in the analysis was to use hand calculations to verify results and to optimize the design. Due to the short period of time available, the drawings that were actually used were designed for easy assembly. The factor of safety of this mechanism was maintained at an elevated level to account for any gusts of wind that could add an additional force on the wings. The factor of safety was calculated for two cases. The first case was for yielding, which resulted in a factor of safety of 206. The second case was for fatigue, which resulted in a factor of safety of 21. These high factors of safety should allow the MAV to withstand limited gusts of wind.
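Both cases reduce to simple strength-to-stress ratios. The sketch below uses placeholder stress and strength values chosen only to reproduce the reported factors; the team's actual numbers came from the hand calculations and the Ti Beta 21S material data.

    # Placeholder values for illustration only (chosen to reproduce the
    # reported factors of safety of 206 and 21).
    yield_strength = 120000.0     # psi (placeholder)
    endurance_limit = 60000.0     # psi (placeholder)
    peak_stress = 583.0           # psi, static stress in the linkage (placeholder)
    alternating_stress = 2860.0   # psi, alternating stress for fatigue (placeholder)

    n_yield = yield_strength / peak_stress            # first case: yielding
    n_fatigue = endurance_limit / alternating_stress  # second case: fatigue
    print(f"FoS (yield) = {n_yield:.0f}, FoS (fatigue) = {n_fatigue:.0f}")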

Results

The end result of the project was a mechanism that partially met the goals initially set out. The completed assembly does operate as intended, with the exception of minor binding issues that are attributable to the inability to maintain the necessary precision of the components. Of particular concern was the inability to hold the correct dimensions during the forming operations on the body of the MAV. The completed assembly is shown in Figure 20.

Figure 20.

The binding issues were partially addressed by the use of an increased power source (9 V from the original 3.7 V) and liberal application of silicone spray lubricant. Aside from the obvious increase in weight from employing the larger battery, the potential for damage to the motors was increased. In the event that binding did occur, if the voltage was not immediately removed, the motors began to quickly overheat and bind internally.

Due to the delays associated with component fabrication, minimal testing was able to be performed on the completed assembly. Testing was limited to lift analysis. One in-process measurement of lift was conducted with only the rear wings attached. The results of this test showed a maximum lift of 0.6 g for a total vehicle weight of 25.052 g (2.4%), although there were concerns regarding the reliability of the scale employed. Based on the uncertainty in the equipment, these results were discounted.

An additional test was performed after completion of the assembly. For this testing, a more reliable scale was employed, although its precision was greatly reduced; measurements could only be obtained to the nearest gram. For a total vehicle weight (less batteries) of 38.35 g, no measurable lift could be obtained.
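For reference, the lift fractions quoted above follow from simple ratios against the 6 percent target:

    # In-process test: rear wings only (scale reliability was suspect)
    print(f"{100 * 0.6 / 25.052:.1f}% of vehicle weight")  # ~2.4%

    # Completed assembly (less batteries): lift needed to hit the 6% target
    print(f"target lift = {0.06 * 38.35:.1f} g")           # ~2.3 g; none was measured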

A summary of the overall project goals is tabulated below.

Goal                                             Achieved?
Fabricate working mechanisms                     Yes
Maintain a vehicle weight below 50 grams         Yes
Maintain all vehicle dimensions below 6 inches   Yes
Generate measurable lift                         Inconclusive

Costs

Due to the lack of funding for this project, it was necessary to keep expenditures to a minimum. Total project costs are tabulated below.

Item/Material                                Cost
Ti Beta 21S sheet stock                      Donated
GB05 motors (3) + international shipping     $107.87
Lithium batteries (4)                        $59.96
Misc. drive mechanism materials              $17.30
Misc. wing materials
Laser cutting (MLPC)                         Donated
Total

Conclusions/Recommendations

Although the majority of the goals of this project were met, it was not possible to conclusively state that the overall project goal of lift generation was achieved. One of the most significant factors impacting the team’s ability to achieve this goal was the lack of available resources. Figure 21 shows the revised schedule of activities for the project.

Figure 21.

The inability of the team to fabricate the necessary high-precision components caused significant delays in some activities and the cancellation of others. Of primary concern was the lost opportunity to optimize the original design concepts. Given that so many of the goals were achieved, had the resources been available earlier in the process, the overall goal of achieving lift would likely have been met. That being stated, there does appear to be the potential for a good working relationship with MLPC as a resource.

The relationship with Mound Laser and Photonics Center (MLPC) was definitely beneficial to Wright State University (WSU), and as such this relationship should be kept active. MLPC has capabilities beyond what WSU could reasonably establish in-house in any reasonable time frame. In addition, all work performed was donated, helping keep costs to a minimum. It is worth noting, however, that MLPC did not do this solely out of the goodness of their hearts; there is an aspect of this project that entails a government grant (DARPA).

Obtaining this government grant is best achieved through proof of concept. The concepts described herein have the potential, with more development, to produce a very efficient and light MAV. It is this notion that needs to be presented to DARPA in an effort to secure funding.

An additional recommendation stemming from this project is to build a micromachining laboratory on campus at WSU. This would provide students with the ability to manufacture simple parts ‘in house’ and drastically reduce lead times. WSU would then expand its manufacturing capabilities and thus open doors for more complex research projects in the future. One way of justifying a very high capital expenditure on this facility would be to open the lab to the different colleges at the university. The above are suggestions for WSU, the total scope of which has not been considered. There are obvious safety concerns with creating a new manufacturing laboratory, as well as a large bureaucracy to navigate in receiving government funding. Despite these possible roadblocks, the suggestions above will further advance the capabilities and image of Wright State University.

References

1. Defense Advanced Research Projects Agency. Home Page. 6 June 2007. Accessed 14 October 2007.
2. Galinski, Cezary. “Influence of MAV Characteristics on Their Application.” Diss. Warsaw University of Technology, 2005.
3. Maybury, Will J., and Lehmann, Fritz-Olaf, 2004, “The Fluid Dynamics of Flight Control by Kinematic Phase Lag Variation Between Two Robotic Insect Wings.” Journal of Experimental Biology, 207, 4707-4726.
4. Mueller, Thomas. Fixed and Flapping Wing Aerodynamics for Micro Air Vehicle Applications (Progress in Astronautics and Aeronautics). USA: AIAA, 2001.
5. Nicholson, Brian. WSU Flapping Quad-Winged Micro-Air Vehicle (MAV) [undergraduate research project]. Dayton, OH: Wright State University; 2007 March 10.

Appendix

Engineering Drawings

Review of Water Management in PEM Fuel Cell Models

Student Researcher: Kathryn N. Gabet

Advisor: J. Iwan D. Alexander

Case Western Reserve University Department of Mechanical and Aerospace Engineering

Abstract

The goal of this project* is to assess current computational fluid dynamics (CFD) models of water management in polymer electrolyte membrane (PEM) fuel cells. It focuses on the various treatments of two-phase water transport within the gas diffusion layers (GDLs). The models differ mainly in the extent to which the two phases are modeled and the extent to which pore-scale phenomena are taken into account. It is typically advantageous to avoid simulating the details of flow and transport processes at the pore scale, as these place a high demand on CPU time and memory. However, given that PEM fuel cell performance is closely linked to the ability to manage water transport, the ability to effectively incorporate pore-scale processes appears desirable. Though the extent to which these pore-scale phenomena affect water flow and flooding predictions is not fully known, they are generally considered an important factor in fuel cell performance.

*The original goal of this project was to analyze the performance of a specific multilayer single-phase model created in FLUENT, consisting of gas diffusion layers, catalyst layers, and a membrane that separates the anode from the cathode. This goal was abandoned due to difficulties obtaining a license to run add-on modules for the simulation. The new direction is to formulate a computational mesh upon which a new CFD model can be based. This part of the project is not yet complete.

Background

Due to their low operating temperatures, low emissions, and high efficiency in comparison to other fuel cells, polymer electrolyte membrane fuel cells are a potential alternative to internal combustion engines [1]. Today’s environmental concerns require researchers to discover new and innovative ways to convert energy. Many view PEFCs as the future of the automobile industry because they emit only water and heat as byproducts, as opposed to today’s personal transportation, which currently produces approximately 900 million tons of carbon dioxide each year, equating to approximately fifteen percent of the world’s CO2 emissions [2]. The number of vehicles is projected to increase from 600 million to one billion by 2025, making the industry’s need for clean technologies imperative.

PEM fuel cells work by oxidizing hydrogen gas into protons and electrons at the anode. The protons then flow directly through a polymer electrolyte membrane while the electrons travel around an external circuit. The two then recombine at the cathode with oxygen from air to create water. When the membranes are sufficiently hydrated, this reaction is far more efficient at producing power because of increased proton conductivity. This efficiency, however, can be greatly reduced by an excess of liquid water in the gas diffusion layer (GDL) that blocks gas from flowing through the pores. This phenomenon, referred to as flooding, occurs mainly at the cathode, is the focus of many water management models of PEM fuel cells, and greatly reduces the cell’s efficiency in two ways: flooding limits the amount of oxygen that reaches the cathode catalyst layer by partially blocking its path through the porous media, and it reduces the active catalyst area [1,4,5]. Current fuel cells rely upon control of the inlet gas saturations to control water transport within the fuel cell, often leading to flooding and membrane dehydration [3]. When CFD models can correctly describe fluid behavior, scientists will be able to increase cell efficiency by optimizing the cell’s physical parameters to improve flow within the cell and reduce flooding.

Though the equations governing CFD PEFC models vary depending on the model and its complexity, they all solve transport equations governing the conservation of mass, momentum, species, energy, and charge in some form [1]. This study focuses on the water transport phenomena shown in Figure 1. These include water production in the cathode catalyst layer due to the oxygen reduction reaction, electro-osmotic drag of water across the membrane, water back-diffusion across the membrane, liquid water flow out of the GDLs into the gas channels, and gas diffusion.

Project Objectives
• Review current fuel cell models
• Give insight into future goals for CFD modeling of PEFCs

Methodology Used

Reviewed current literature. Compared and contrasted characteristics of various models, including assumptions and boundary conditions, especially those associated with two-phase flow. Determined future areas of interest for CFD modeling.

Results / Progress

Initially, proton exchange membrane fuel cell models described water in PEM fuel cells as a supersaturated gas rather than a two-phase gas-liquid mixture. These models, though initially helpful in recognizing the influence water management has over cell efficiency, ignore the presence of liquid water in the cell [1]. This assumption is invalid because it disallows liquid water, even at normal operating temperatures below 100°C. More recent models, including those studied in this project, allow for both liquid water and vapor. While both one-phase and two-phase models make similar predictions at low current densities, Pasaogullari and Wang show that as current density increases, the single-phase model overpredicts performance because it fails to consider flooding in the cathode GDL as water generation at the catalyst increases [6].

Two-phase models can be separated into three main groups: those using unsaturated flow theory (UTF), those using the multi-phase mixture model (M2), and those that solve separate equations for all phases and species. UTF models are characterized by the assumption that the gas-phase pressure is uniform throughout the model; because of this assumption, only liquid water moves within the fuel cell. A good example of this is the Nam and Kaviany model [7]. The M2 model instead solves equations for one “mixture phase” and then breaks down the mixture to arrive at separate velocities for the gas and liquid phases. Because of this mixture phase, the M2 model does not require tracking the boundary between one- and two-phase flow as is required in UTF, making computations more efficient [8]. Due to limitations resulting from the assumptions made in both models, as will be discussed later, attempts have been made to fully model two-phase flow [5,9].

Though the M2 model is the more accurate of the ‘pseudo two-phase’ models, both sets of models make similar assumptions about the nature of the fuel cells [1,8,10]. The first simplification is always that the cells are isothermal. Though this assumption is a decent approximation for one cell, it breaks down when the cells are combined in industrial stacks [8]. The boundary condition used for both models is that the liquid saturation at the GDL-gas channel interface is small and can be ignored. By assuming an arbitrary liquid saturation at the interface, Ju et al. showed this to be incorrect. Their results showed the condition to be a possible cause of discrepancies between experimental results and model observations [10]. Researchers are working on models to relax this assumption and allow for water droplets on the GDL-channel boundary, including one by Gurau currently in review for publication [9].

Fluid flow through the gas diffusion layers is generally accepted to be based on capillary action rather than viscous effects, due to low Capillary numbers on the order of 10^-7 [5,11]. Attempts to model the capillary pressure within a GDL often express the pressure in terms of a Leverett J-function [11,9,4,5], which characterizes the flow based on soil sample data rather than data taken from typical GDL media (Figures 2a,b). Gostick et al. published experiments showing a reasonable correlation of experimental data to a J-function [11]. More studies need to occur before this correlation can be universally applied. Possible errors in capillary pressure predictions can also arise from the effects of changes in wettability of the porous media. In PEMFCs, wettability is characterized as a function of the contact angle of water with the surface. Hydrophilic media have low contact angles, while hydrophobic media have high contact angles (greater than 90°). As shown in Figure 3, water propagation through porous media is largely affected by wettability. The wettability of a medium is not always constant. Contact angles can differ greatly depending on whether the water is advancing or receding [11,12]. For distilled water on a flat Teflon surface, the contact angle varies by 14° [11]. In addition to these hysteresis effects, once a path has been wetted, it becomes a preferred path for flow [14].
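A sketch of the Leverett J-function treatment of capillary pressure discussed above, using the polynomial form common in the PEM literature (e.g., the form employed in Refs. 4 and 6); the surface tension, contact angle, porosity, and permeability values are illustrative placeholders, not properties of any particular GDL:

    import math

    def leverett_J(s, hydrophobic=True):
        # Standard Leverett polynomial in liquid saturation s; for hydrophilic
        # media the argument is (1 - s) instead of s.
        x = s if hydrophobic else 1.0 - s
        return 1.417 * x - 2.120 * x**2 + 1.263 * x**3

    def capillary_pressure(s, sigma=0.0625, theta_deg=110.0,
                           porosity=0.5, permeability=1.0e-12):
        # p_c = sigma * |cos(theta_c)| * sqrt(porosity / permeability) * J(s)
        # All material properties here are placeholder values.
        scale = sigma * abs(math.cos(math.radians(theta_deg)))
        scale *= math.sqrt(porosity / permeability)
        return scale * leverett_J(s, hydrophobic=theta_deg > 90.0)

    for s in (0.1, 0.3, 0.5):
        print(f"s = {s:.1f}: p_c = {capillary_pressure(s):.0f} Pa")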

Significance and Interpretation of Results

The issues discussed above are important because they are examples of an underlying question in CFD modeling of PEMFCs: are macro-scale models able to characterize flow in the quantitative manner necessary to improve fuel cell design? Pore-scale processes affect macroscopic transport properties and need to be fully understood before their effects on fuel cell flooding, and consequently efficiency, can be determined [5].

Figures/Charts

Figure 1. Diagram of a PEM fuel cell, showing the anode GDL, membrane, MPL, cathode GDL, and catalyst layers, along with water production, electro-osmotic drag, back-diffusion, and the gas and liquid flows.

Figure 2. Typical GDL materials, from [4].

Figure 3. Water (grey) propagation pattern as a function of contact angle (wettability), from [11].

Acknowledgments and References

1. Wang, Chao-Yang, “Fundamental Models for Fuel Cell Engineering,” Chemical Reviews, vol. 104 (2004) 4727-4766.
2. Jain, V. P., “Ecologically Sustainable Transport: Issues and Perspectives,” Sustainable Development: An Interdisciplinary Perspective (2005).
3. Chang, Qingming, “Summary of Fuel Cell Performance Modeling,” unpublished.
4. Ugur Pasaogullari, Chao-Yang Wang, and Ken S. Chen, “Liquid water transport in polymer electrolyte fuel cells with multi-layer diffusion media,” J. Electrochem. Soc., vol. 151 (2) (2004) A399-A406.
5. Djilali, N., “Computational modeling of polymer electrolyte membrane (PEM) fuel cells: Challenges and opportunities,” Energy, vol. 32 (2007) 269-280.
6. Ugur Pasaogullari and Chao-Yang Wang, “Two-phase modeling and flooding prediction of polymer electrolyte fuel cells,” J. Electrochem. Soc., 152 (2) (2005) A380-A390.
7. Jin Hyun Nam and Massoud Kaviany, “Effective diffusivity and water-saturation distribution in single- and two-layer PEMFC diffusion medium,” Int. J. of Heat and Mass Transfer, vol. 46 (2003) 4595-4611.
8. Ugur Pasaogullari and Chao-Yang Wang, “Two-phase transport and the role of micro-porous layer in polymer electrolyte fuel cells,” Electrochimica Acta, 49 (2004) 4359-4369.
9. Gurau, Zawodzinski, and Mann, “Two-Phase Transport in PEM Fuel Cell Cathodes,” in review.
10. Hyunchul Ju, Gang Luo, and Chao-Yang Wang, “Probing liquid water saturation in diffusion media of polymer electrolyte fuel cells,” J. Electrochem. Soc., 154 (2) (2007) B218-B228.
11. O. Chapuis, et al., “Two-phase flow and evaporation in model fibrous media: Application to the gas diffusion layer of PEM fuel cells,” J. Power Sources, 178 (2008) 258-268.
12. Gostick et al., “Capillary pressure and hydrophilic porosity in gas diffusion layers for polymer electrolyte fuel cells,” J. Power Sources, 156 (2006) 375-387.
13. Lev A. Slobozhanin and J. Iwan D. Alexander, “Capillary pressure of a liquid in a layer of close-packed uniform spheres,” Physics of Fluids, 18 (2006) 082104.
14. S. Agarawal, Numerical simulation of unsaturated flow in porous media under normal and reduced gravity conditions, M.S. Thesis, Case Western Reserve University (2004).

Implicit Large Eddy Simulation of Low Reynolds Number Flow Past the SD7003 Airfoil

Student Researcher: Marshall C. Galbraith

Advisor: Dr. Paul Orkwis

University of Cincinnati Department of Aerospace Engineering and Engineering Mechanics

Abstract

The formation and bursting of a laminar separation bubble has long been known to be detrimental to the performance of airfoils operating at low Reynolds numbers (Re < 10^5). With increasing interest in Micro Air Vehicles (MAVs), further understanding of the formation and subsequent turbulent breakdown of laminar separation bubbles is required for improved handling, stability, and endurance of MAVs. An investigation of flow past an SD7003 airfoil over the Reynolds number range 10^4 < Re < 9x10^4 is presented. This airfoil was selected due to its robust, thin laminar separation bubble and the availability of high-resolution experimental data. A high-order implicit large-eddy simulation (ILES) approach is shown to be capable of capturing the laminar separation, transition, and subsequent three-dimensional breakdown. The ILES methodology also predicts, without change in parameters, the passage into full airfoil stall at high incidence. Results of the computations and comparisons with experimental data are analyzed and discussed. Good agreement is generally found between experiment and computations for separation, reattachment, and transition locations, as well as aerodynamic loads.

Project Objectives

Low Reynolds number flow has been of interest to model airplane designers for decades. As a result, a large database of experimental and numerical data for airfoils in fixed-wing configurations has been compiled. Several investigators, such as F. Schmitz1 and more recently R. Eppler2 and S. Selig3, to name a few, have contributed a vast number of experimental measurements and advanced aerodynamic design methodologies. In recent years, interest in small Unmanned Air Vehicles, including Micro Air Vehicles (MAVs), capable of performing a wide range of missions has grown from the onset of newly developed micro system technologies. Due to their size and low air speed, these vehicles typically operate at Re on the order of 10^4 to 10^5. At these low Reynolds numbers, the flow may remain laminar over a significant portion of the airfoil and is unable to sustain even mild adverse pressure gradients. For moderate incidence, separation leads to the formation of a laminar separation bubble (LSB) which breaks down into turbulence prior to reattachment. The LSB moves towards the leading edge with increasing angle of attack and becomes shorter in streamwise extent. Eventually, as the stall angle is exceeded, reattachment is no longer possible and so-called bubble bursting ensues. The onset and successive breakdown of the LSB at low Reynolds number is known to be detrimental to the performance, endurance, and stability of MAVs.

As transition is affected by a wide range of parameters such as wall roughness, freestream turbulence, pressure gradient, acoustic noise, etc., a comprehensive transition model which accounts for all factors has not been developed. Instead, transition models typically focus on only one or two parameters in order to predict the transition location. Such models range from simple empirical methods based on linear stability theories, to linear or nonlinear parabolized stability equations, to more comprehensive Navier-Stokes models. A design-oriented approach adopted by many researchers is the eN method, which is based on linear stability analysis and boundary layer theory. Here, local growth rates of unstable waves based on velocity profiles are evaluated by solving the Orr-Sommerfeld equation. Transition is said to occur when the amplification of the most unstable Tollmien-Schlichting waves reaches a specified critical threshold. This method is used, for instance, to predict the transition location in the popular airfoil-design code XFOIL4.
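A minimal sketch of the eN bookkeeping just described: local amplification rates of the most unstable Tollmien-Schlichting wave (here a placeholder array; in practice they come from the Orr-Sommerfeld solution) are integrated downstream until the N-factor reaches a critical value such as N = 9.

    import numpy as np

    # Streamwise stations past the neutral point and placeholder spatial growth
    # rates -alpha_i(x) of the most-amplified TS wave, in units of 1/chord.
    x = np.linspace(0.2, 0.7, 200)
    growth = 80.0 * (x - 0.2)              # hypothetical growth-rate envelope

    # N(x) = integral of the growth rate from the neutral point (trapezoid rule)
    N = np.concatenate(([0.0],
                        np.cumsum(0.5 * (growth[1:] + growth[:-1]) * np.diff(x))))

    N_crit = 9.0                           # critical amplification factor
    above = N >= N_crit
    if above.any():
        print(f"transition predicted at x/C = {x[np.argmax(above)]:.2f}")
    else:
        print("no transition predicted before x/C = 0.7")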

The present work investigates the feasibility of an implicit large-eddy simulation (ILES) approach to predict laminar separation bubble formation and transition for low Reynolds-number airfoil applications. The ILES approach, previously introduced in Refs. 5 and 6, is based on higher-order compact schemes for the spatial derivatives and a Pade-type low-pass filter to provide stability. The high-order scheme allows for accurate capturing of the separation and transition process, whereas the highly-discriminating low-pass filter is used in lieu of a standard sub-grid-scale (SGS) model to enforce regularization in turbulent regions. This approach is very attractive since it provides a seamless methodology for mixed laminar, transitional, and turbulent three-dimensional flows.

Results are presented for flow over an SD7003 airfoil section. The SD7003 airfoil was chosen due to the existence of experimental data available for comparison, as well as the relatively large LSB observed on the suction side of the airfoil. High-resolution velocity and Reynolds stress measurements have been provided by Radespiel7. Experiments were conducted in a water channel, as well as in a low-noise wind tunnel, both at the Technical University of Braunschweig (TU-BS). Freestream turbulence intensities were 0.08% and 0.8% for the wind tunnel and water channel, respectively. Measurements are available for Reynolds number 6x10^4 at 4° angle of attack in the wind tunnel, and at 8° and 11° in the water channel. PIV measurements for the SD7003 airfoil were also obtained by Ol et al.8 at the Wright Patterson Air Force Base (WPAFB) water channel, with a freestream turbulence intensity of less than 0.1%. Aerodynamic load measurements are also available from Ol et al.8 and Selig et al.9, 10.

Methodology

Present computations utilize the flow solver FDL3DI, a higher-order accurate, parallel, Chimera, Large Eddy Simulation solver from Wright Patterson Air Force Base. FDL3DI has been proven reliable for many steady and unsteady fluid flow problems.11, 12, 13, 14, 15 The FDL3DI code solves the unsteady, three-dimensional, compressible Navier-Stokes equations. It should be noted that the governing equations correspond to the original unfiltered Navier-Stokes equations, and are used without change in laminar, transitional, or fully turbulent regions of the flow. Unlike the standard LES approach, no additional sub-grid stress (SGS) and heat flux terms are appended. Instead, the high-order low-pass filter operator is applied to the conserved dependent variables during the solution of the standard Navier-Stokes equations. This highly-discriminating filter selectively damps only the evolving poorly resolved high-frequency content of the solution5, 6.
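To illustrate the regularization mechanism, the sketch below applies a low-order (second-order) member of the one-parameter Pade-type filter family to a 1-D periodic signal; the actual FDL3DI computations use the 8th- and 10th-order versions, whose coefficients are not reproduced here. The assumed second-order coefficients a0 = a1 = 0.5 + αf follow the standard one-parameter form, under which αf = 0.5 gives no filtering and the Nyquist (grid-to-grid) mode is always annihilated.

    import numpy as np

    def compact_filter(phi, alpha_f=0.4):
        # Second-order Pade-type low-pass filter on a periodic signal:
        #   alpha_f*p[i-1] + p[i] + alpha_f*p[i+1]
        #       = a0*phi[i] + (a1/2)*(phi[i+1] + phi[i-1]),  a0 = a1 = 0.5 + alpha_f
        n = len(phi)
        a = 0.5 + alpha_f
        rhs = a * phi + 0.5 * a * (np.roll(phi, 1) + np.roll(phi, -1))
        A = np.eye(n) + alpha_f * (np.eye(n, k=1) + np.eye(n, k=-1))
        A[0, -1] = A[-1, 0] = alpha_f      # periodic wrap-around entries
        return np.linalg.solve(A, rhs)     # dense solve is fine for a sketch

    # The filter removes odd-even (Nyquist) oscillations while leaving a
    # smooth wave essentially untouched.
    n = 64
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    signal = np.sin(x) + 0.2 * (-1.0) ** np.arange(n)
    filtered = compact_filter(signal)
    nyquist = lambda s: abs(np.fft.fft(s)[n // 2]) / n
    print(f"Nyquist amplitude: {nyquist(signal):.3f} -> {nyquist(filtered):.6f}")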

Time accurate solutions are obtained with the implicit approximate-factorization algorithm of Beam and Warming16 employing Newton-like subiterations. The efficiency of the implicit algorithm was increased by solving the factorized equations in diagonal form. In order to maintain temporal accuracy, which can be degraded by the diagonal form, three subiterations were utilized within each time step.17

For the present simulation, a sixth-order compact differencing scheme was implemented in all spatial directions, with 8th-order filtering in the streamwise and surface-normal directions and 10th-order filtering in the spanwise direction. Parallel execution is achieved by partitioning the mesh with a five-point overlap between grids. Such an overlap is required to maintain the formal order of accuracy of the solution across grid boundaries. In addition, higher-order accurate interpolation is employed to maintain spatial accuracy in overset meshes.18 Decomposition was performed with automated tools developed for FDL3DI.19

Initially, a single baseline O-grid was generated about the SD7003 airfoil. Grid coordinates are oriented such that ξ traverses clockwise around the airfoil, η is normal to the surface, and ζ denotes the spanwise direction. The baseline mesh contained 315x151x101 points in the ξ, η, ζ directions, respectively, which corresponds to approximately 4.8 million grid points. The farfield boundary was located 30 chords away from the airfoil in order to reduce its influence on the solution near the airfoil. Unless otherwise noted, a spanwise extent z/C = 0.2 was prescribed.

A more refined overset mesh system was constructed in order to exploit the Chimera (overset) capabilities of the FDL3DI solver. The baseline mesh was used as the basis for mesh refinement and a portion of it was retained as the near O-grid shown in Figure 1b. A background circular O-grid (Figure 1a) was also generated away from the airfoil to again move the farfield boundary conditions 30 chords away from the airfoil. However, as this mesh is away from the separated flow of interest, its spanwise resolution was reduced to 41 points. Similarly, the near O-grid and pressure-side grid were reduced to 65 spanwise points, while the grid on the suction side of the airfoil maintained 101 spanwise points. The majority of the refinement took place on the suction surface of the airfoil. Here, the mesh was doubled in both the surface normal and streamwise directions compared to the baseline mesh. After these modifications, the total number of mesh points is approximately 5.7 million grid points, with 69% of the points on the suction side of the airfoil. The mesh was then decomposed into 60 blocks for parallel execution.
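The quoted totals can be checked from the individual grid dimensions; the mapping of dimensions to grids below is inferred from the text and Figure 1, so it should be read as an assumption.

    # Grid dimensions (xi, eta, zeta); the name-to-dimension mapping is inferred.
    grids = {
        "background O-grid": (163, 50, 41),
        "near O-grid": (315, 47, 65),
        "suction-side grid": (428, 90, 101),
        "pressure-side grid": (101, 74, 65),
    }
    counts = {name: nx * ny * nz for name, (nx, ny, nz) in grids.items()}
    total = sum(counts.values())
    print(f"total points = {total:,}")                                  # ~5.7 million
    print(f"suction side = {100 * counts['suction-side grid'] / total:.0f}%")  # ~69%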

Boundary conditions were specified in the following manner. Freestream conditions were prescribed with fixed dependent variables on the majority of the far-field boundary. A zero velocity gradient was imposed on the wake region of the outer boundary, as depicted in Figure 1a. A no-slip, 4th-order explicit zero pressure gradient, adiabatic temperature condition was used on the surface of the airfoil.

Figure 1. Overset computational mesh; grid dimensions (ξ, η, ζ) are depicted for each grid (163x50x41, 315x47x65, 428x90x101, and 101x74x65), and a zero streamwise velocity gradient (∂u_i/∂x = 0) is imposed on the wake region of the outer boundary. a) Background O-grid. b) Near O-grid. c) Body fitted grids.

Results Obtained

The effects of both angle of attack and Reynolds number are considered. A range of positive angles of attack, i.e., 2°, 4°, 6°, 8°, 11°, and 14°, was computed for Reynolds number 6x10^4. In order to limit numerical errors, several numerical parameters were investigated for the 4° angle of attack case at a Reynolds number of 6x10^4. In particular, grid resolution, spanwise width, and freestream Mach number were considered. Time- and spanwise-mean surface Cp and Reynolds shear stresses were used to evaluate the influence of each parameter.

All mean values were calculated by averaging the spanwise-averaged time accurate solution over eight characteristic times with a non-dimensional time step of ∆t = 0.00015. Before calculating mean values, the time accurate solution was allowed to develop from an initial solution over five characteristic times. In order to reduce the computational cost associated with initial transients from a freestream initial solution, two-dimensional solutions were used as the initial condition for the three-dimensional mesh. The two-dimensional solution was transferred onto the three-dimensional mesh through simple extrusion.
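The averaging procedure amounts to discarding the first five characteristic times and averaging the next eight; a sketch with a placeholder signal (array sizes reduced for illustration):

    import numpy as np

    dt = 0.00015              # non-dimensional time step
    n_skip = int(5.0 / dt)    # discard five characteristic times of transient
    n_avg = int(8.0 / dt)     # average over the next eight characteristic times

    # Placeholder spanwise-averaged surface Cp history: (time steps, surface points)
    rng = np.random.default_rng(0)
    cp_history = rng.normal(-0.5, 0.05, size=(n_skip + n_avg, 32))

    cp_mean = cp_history[n_skip:n_skip + n_avg].mean(axis=0)
    print(cp_mean.shape)      # mean Cp at each surface point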

Both 6th- and 2nd-order computations were performed on the baseline mesh at 4° angle of attack. The 2nd-order solution was obtained with a 2nd-order version of the FDL3DI solver which also incorporates fourth-order spectral damping terms. Mean surface Cp and Reynolds shear stress values from these calculations, along with values from the overset mesh calculations, are presented in Figure 2. In particular, note the difference between the 2nd-order and 6th-order computations on the baseline mesh. The 6th-order scheme has a sharper pressure gradient in the transition region compared to the 2nd-order computation. More importantly, the absolute magnitude of the Reynolds stress is lower in the 2nd-order computation than in the 6th-order case. In fact, the values for the baseline 6th-order computation resemble those found with the finer overset grid. This illustrates how the higher-order spatial differencing captures finer flow structures than a 2nd-order scheme on the same mesh.

A comparison of separation, transition, and reattachment locations, as well as maximum laminar separation bubble height, is given in Table 1. Turbulent transition locations are determined according to Ref. 5: transition is said to occur when the Reynolds shear stress reaches a value of 0.1% and exhibits a clear visible rise. For all the simulations, separation occurs at approximately the same location. However, transition occurs further upstream for both baseline mesh calculations. Differences are also observed in the reattachment location and maximum bubble height. The pressure gradient in the transition region is also much sharper on the overset mesh than in the baseline mesh computations, as shown in Figure 2. These results indicate that not all fine details of transition are captured on the coarser baseline mesh, even with the 6th-order scheme. However, as will be shown later, the solution on the overset mesh agrees well with available experimental data, and is therefore deemed adequate to resolve all relevant flow features.
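Expressed as a computation, the 0.1% criterion reduces to finding the first station where the normalized Reynolds shear stress magnitude crosses the threshold (the stress profile below is a placeholder):

    import numpy as np

    x_over_c = np.linspace(0.0, 1.0, 200)
    # Placeholder: peak |u'v'|/U_inf^2 in the boundary layer at each station
    stress = 1.0e-5 + 2.0e-2 * np.maximum(0.0, x_over_c - 0.4) ** 2

    above = stress >= 1.0e-3    # the 0.1% criterion of Ref. 5
    if above.any():
        print(f"transition at x_t/C = {x_over_c[np.argmax(above)]:.2f}")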

Table 1. Locations of interest on baseline and overset mesh computations (α = 4°, Re = 6x10^4).

Computation          Separation xs/C   Transition xt/C   Reattachment xr/C   Max Bubble Height, hb/C
Baseline 2nd-order   0.25              0.46              0.66                0.034
Baseline 6th-order   0.24              0.45              0.61                0.028
Overset 6th-order    0.23              0.55              0.65                0.030

Figure 2. Effect of grid resolution and numerical scheme (α = 4°, Re = 6x10^4): a) mean surface Cp; b) Reynolds shear stress.

Computed Reynolds shear stress is compared with TU-BS and HFWT experimental measurements in Figures 3 and 4a for the available angles of attack, i.e., 4°, 8°, and 11°. Good agreement between the computations and both experimental data sets in terms of shape, magnitude, and extent of the fluctuating region is observed for α = 4° and 11°. However, for α = 8°, the computed Reynolds shear stress is lower in magnitude, and transition occurs further downstream, than observed in the experimental TU-BS measurements. Similar trends were observed in computations by Radespiel et al.7 and Yuan et al.20. These differences are attributed to the higher freestream turbulence intensity, approximately 0.8%, and lower PIV imagery resolution of the earlier campaign during which these measurements were taken. For the HFWT data, Ol et al. speculate that the lack of a recirculation region in the experimental data could come from increased freestream turbulence intensity, from a shortage of PIV image pairs for adequate convergence of flow statistics, or both.

Despite the higher freestream turbulence intensity in the TU-BS water tunnel, the ILES computed Reynolds shear stress agrees well with measured values at 11° angle of attack. Reasonable agreement is also observed with the HFWT data set. While the computed LSB is slightly thicker, separation and reattachment locations agree well. The airfoil is near stall at this angle of attack and requires a greater pressure recovery in the LSB in order for the flow to reattach. This strong pressure gradient amplifies disturbances, which leads to a rapid transition to turbulent flow and forms a short LSB. It would appear that at 11° angle of attack, as opposed to 8°, the transition process is less influenced by freestream turbulence intensity.

Table 2. Measured and computed SD7003 LSB properties (α = 4°, Re = 6x10^4).

Data Set   Freestream Turbulence [%]   Separation xs/C   Transition xt/C   Reattachment xr/C   Max Bubble Height hb/C
TU-BS      0.08                        0.30              0.53              0.62                0.028
HFWT       ~0.1                        0.18              0.47              0.58                0.029
XFOIL      0.070 (N=9)                 0.21              0.57              0.59                -
ILES       0                           0.23              0.55              0.65                0.030

Figure 3. Reynolds shear stress and experimental data (TU-BS, HFWT, ILES) for α = 4° and α = 8° at Re = 6x10^4.

Figure 4. a) Reynolds shear stress and experimental data (TU-BS, HFWT, ILES) for α = 11° at Re = 6x10^4; b) lift and drag coefficients.

Figure 5. Full angle of attack sweep (Re = 6x10^4): a) mean surface Cp; b) Reynolds shear stresses.

Figure 6. Suction surface skin friction coefficient distribution at Re = 6x10^4.

Figure 7. Instantaneous iso-surfaces of Q-criterion at Re = 6x10^4 (panels at α = 2°, 4°, 6°, 8°, 11°, and 14°).

Table 2 compares measurements of separation, transition, reattachment, and maximum LSB height from the two facilities with the XFOIL4 computations and the present ILES computation at 4° angle of attack. The computed separation locations from XFOIL and ILES fall in between the two experimental measurements. The ILES separation location of 23% chord also falls within the range of values computed with LES and eN methods by Yuan et al.20. The ILES computed transition location of 55% chord agrees well with the measured TU-BS transition location of 53% chord. Transition at the HFWT was measured at 47% chord, which is consistent with the slightly higher freestream turbulence intensity of ~0.1% in that facility compared to 0.08% in the TU-BS low-noise wind tunnel. Reattachment locations are also consistent between ILES and TU-BS, at 65% chord and 62% chord respectively. Again, reattachment measured at HFWT occurs slightly further upstream, at 58% chord. The maximum height of the LSB, however, differs little between the experimental measurements and the ILES computation.

Lift and drag coefficients were computed from the ILES simulations by integrating skin friction and pressure around the airfoil. In order to accurately perform the integration in overlapping regions of the mesh, the solution was first transferred to a single mesh, i.e., the baseline mesh. Because points did not necessarily coincide in the refined region of the suction surface, linear interpolation was used to transfer the solution.21 Integrated lift and drag coefficients are compared with experimental measurements of Selig et al.9, 10 and Ol et al.8, as well as values computed with XFOIL, in Figure 4b. ILES lift coefficients agree well with the measurements of Selig et al. and Ol et al. Measurements by Ol et al. indicate stall at 11° angle of attack, which is captured by the ILES simulations. In fact, even the post-stall lift coefficient at 14° agrees well with the measured lift coefficient. Computations with XFOIL generally agree well with experimental measurements up to stall. Drag coefficients are slightly overestimated by the ILES calculations compared to the measurements of Selig et al.
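A hedged sketch of this integration step follows; the panel bookkeeping, array names, and sign conventions are illustrative assumptions, not the authors' implementation:

```java
/** Sketch of integrating surface Cp and Cf into lift and drag coefficients.
 *  Illustrative only: panel geometry and conventions are assumptions. */
public class ForceIntegration {
    /** x, y: surface points ordered counter-clockwise around the airfoil,
     *  normalized by chord; cp, cf: coefficients at those points, with cf
     *  assumed signed along the local tangent direction. */
    public static double[] liftDrag(double[] x, double[] y,
                                    double[] cp, double[] cf, double alphaDeg) {
        double fx = 0.0, fy = 0.0;
        int n = x.length;
        for (int i = 0; i < n; i++) {
            int j = (i + 1) % n;                  // closed loop around the surface
            double dx = x[j] - x[i], dy = y[j] - y[i];
            double ds = Math.hypot(dx, dy);
            double nx = dy / ds, ny = -dx / ds;   // outward normal for CCW ordering
            double tx = dx / ds, ty = dy / ds;    // surface tangent
            double cpm = 0.5 * (cp[i] + cp[j]);   // panel-averaged values
            double cfm = 0.5 * (cf[i] + cf[j]);
            fx += (-cpm * nx + cfm * tx) * ds;    // pressure acts along -n, friction along t
            fy += (-cpm * ny + cfm * ty) * ds;
        }
        double a = Math.toRadians(alphaDeg);
        double cl = fy * Math.cos(a) - fx * Math.sin(a); // normal to the freestream
        double cd = fx * Math.cos(a) + fy * Math.sin(a); // along the freestream
        return new double[] { cl, cd };
    }
}
```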

Mean surface Cp, Reynolds shear stresses, and skin friction coefficients for all computed angles of attack are presented in Figures 5 and 6. Even at the lowest angle of attack of 2°, an LSB forms, transitions to turbulent flow near the trailing edge, and reattaches at 93% chord. Turbulent transition is indicated by the drop in skin friction coefficient, which coincides with the sharp pressure gradient observed after the flat pressure plateau typical of LSBs. As the angle of attack increases, the adverse pressure gradient grows and the LSB shortens. This causes both the separation and turbulent transition locations to move upstream. In addition, the absolute magnitude of the Reynolds shear stresses increases as the LSB moves toward the leading edge, as shown in Figure 5b. When the airfoil is fully stalled at 14° angle of attack, the mean surface Cp becomes flat across the entire suction side of the airfoil. Instantaneous iso-surfaces of the Q-criterion for all angles of attack are shown in Figure 7. These iso-surfaces represent vortex structures. As the LSB breaks down, a coherent spanwise vortex forms over the extent of the airfoil and subsequently breaks down into turbulent structures. Note that the ILES method has seamlessly captured the evolution from a closed LSB to bubble bursting and stall without modification of any parameters.

Significance and Interpretation of Results An implicit large eddy simulation (ILES) technique has been used to predict the formation of a laminar separation bubble (LSB) and its subsequent bursting and transition to turbulent structures on the SD7003 airfoil. Flow solutions were obtained with a validated Navier-Stokes solver based on high-order compact schemes with a Padé filter, which removes poorly resolved high wavenumbers on the mesh in lieu of an explicit SGS model. Unlike transition models, which rely on a limited number of parameters to determine transition locations, the ILES method solves the unfiltered Navier-Stokes equations without change in the laminar, transitional, and turbulent regions of the flow. In addition, the ILES method captured the shift from a closed LSB to bubble bursting and stall without modification of any parameters.

Computations compared qualitatively well with measured Reynolds shear stresses for the available angles of attack, 4°, 8°, and 11°, at a Reynolds number of 6x10^4. Computed separation, transition, and reattachment locations were also in agreement with measured values. The transitional nature of the flow was indicated by turbulent kinetic-energy wavenumber spectra, and a fuller turbulent velocity profile was observed downstream of the reattachment location. The computed lift and drag polar (2°-14°) captures the stall angle as well as the measured lift coefficient post stall. While drag is overpredicted, it also compares well with measured values. As expected, with increasing angle of attack, ILES predicts that the LSB decreases in size and moves toward the leading edge until, post stall, the bubble has burst and the flow is fully separated. Reynolds shear stresses also increased consistently with the stronger adverse pressure gradients at higher angles of attack.

Acknowledgments Much appreciation is given to M. Ol, R. Radespiel and J. Windte for providing experimental data. Gratitude is given towards D. Rizzetta and S. Sherer for their assistance with FDL3DI and pre-processing tools. The authors are also grateful to M. List, D. Galbraith, and J. Nimersheim of the University of Cincinnati for their work with visualization. Finally, gratitude is given to the Ohio Space Grant Consortium.

References
1. Schmitz, F. W., Aerodynamik des Fluges, Verlag Carl Lange, Duisburg, 1960.
2. Eppler, R., Airfoil Design and Data, Springer Verlag, ISBN 3-540-52505-X, 1990.
3. Selig, M. S., Donovan, J. F., Fraser, D. B., Airfoils at Low Speeds, Soartech Publications, H. A. Stokely, Virginia Beach, VA, USA, 1989.
4. Drela, M., XFOIL Users Guide, Version 6.94, MIT Aero. and Astro. Department, 2002.
5. Visbal, M. R. and Rizzetta, D. P., "Large-Eddy Simulation on Curvilinear Grids Using Compact Differencing and Filtering Schemes," Journal of Fluids Engineering, 124:836-847, 2002.
6. Visbal, M. R., Morgan, P. E., and Rizzetta, D. P., "An Implicit LES Approach Based on High-Order Compact Differencing and Filtering Schemes," AIAA Paper 2003-4098, 2003.
7. Radespiel, R., Windte, J., and Scholz, U., "Numerical and Experimental Flow Analysis of Moving Airfoils with Laminar Separation Bubbles," AIAA Paper 2006-501, Jan. 2006.
8. Ol, M. V., McAuliffe, B. R., Hanff, E. S., Scholz, U., and Kähler, C., "Comparison of Laminar Separation Bubble Measurements on a Low Reynolds Number Airfoil in Three Facilities," AIAA Paper 2005-5149, Jun. 2005.
9. Selig, M. S., Guglielmo, J. J., Broeren, A. P., and Giguère, P., "Summary of Low-Speed Airfoil Data," SoarTech Aero Publications, H. A. Stokely, Virginia Beach, VA, USA, 1995.
10. Selig, M. S., Donovan, J. F., Fraser, D. B., "Airfoils at Low Speeds," Soartech 8, Soartech Publications, H. A. Stokely, Virginia Beach, VA, USA, 1989.
11. Rizzetta, D. P., and Visbal, M. R., "Numerical Investigation of Transitional Flow Through a Low-Pressure Turbine Cascade," AIAA Paper 2003-3587, Jun. 2003.
12. Gordnier, R. E. and Visbal, M. R., "Numerical Simulation of Delta-Wing Roll," AIAA Paper 93-0554, Jan. 1993.
13. Visbal, M. R., "Computational Study of Vortex Breakdown on a Pitching Delta Wing," AIAA Paper 93-2974, Jul. 1993.
14. Visbal, M. R., Gaitonde, D., and Gogineni, S., "Direct Numerical Simulation of a Forced Transitional Plane Wall Jet," AIAA Paper 98-2643, Jun. 1998.
15. Rizzetta, D. P., Visbal, M. R., and Blaisdell, G. A., "A Time-Implicit High-Order Compact Differencing and Filtering Scheme for Large-Eddy Simulation," International Journal for Numerical Methods in Fluids, Vol. 42, No. 6, Jun. 2003, pp. 665-693.
16. Beam, R. and Warming, R., "An Implicit Factored Scheme for the Compressible Navier-Stokes Equations," AIAA Journal, Vol. 16, No. 4, Apr. 1978, pp. 393-402.
17. Pulliam, T. H. and Chaussee, D. S., "A Diagonal Form of an Implicit Approximate-Factorization Algorithm," Journal of Computational Physics, Vol. 39, No. 2, Feb. 1981, pp. 347-363.
18. Sherer, S. E., "Investigation of High-Order and Optimized Interpolation Methods with Implementation in a High-Order Overset Fluid Dynamics Solver," Ph.D. Dissertation, Aeronautical and Astronautical Engineering Dept., Ohio State Univ., Columbus, OH, 2002.
19. Sherer, S. E., Visbal, M. R., and Galbraith, M. C., "Automated Preprocessing Tools for Use with a High-Order Overset-Grid Algorithm," AIAA Paper 06-1147, Jan. 2006.
20. Yuan, W., Khalid, M., Windte, J., Scholz, U., and Radespiel, R., "An Investigation of Low-Reynolds-number Flows past Airfoils," AIAA Paper 05-4607, Jun. 2005.
21. Galbraith, M. C., and Miller, J., "Development and Application of a General Interpolation Algorithm," AIAA Paper 06-3854, Jun. 2006.

3-D Modeling Tool for New Classical Model of Elementary Particles

Student Researcher: Joshua D. Garling

Advisors: Dr. Gerald Brown and Dr. Keith Shomper

Cedarville University Computer Science

Abstract A new electrodynamic model of elementary particles, the ring-based model, seeks to resolve certain inconsistencies with the standard model of elementary particles described by quantum mechanics. This new model has had some success in predicting new phenomena by attempting to describe the foundational geometry, spatial properties, and the balance of forces within known subatomic particles.

The ring model proposes that both protons and electrons are composed of hollow toroidal tubes of helically-circulating electric charge fibers flowing circumferentially at the speed of light. Based on this geometry, it suggests a basis for the fundamental quantum numbers and quantization in general. One research area the investigators of the ring model have suggested pursuing, in order to understand the model more fully, is a computer graphics visualization of the model.

Project Objectives My research involves creating an interactive, 3-D graphical modeling tool to visualize atomic particles described by this ring-based electrodynamic model. This tool will provide functionality to create and view computer-generated images of these particles based upon a given input description. By allowing the results of the ring model to be viewed dynamically in three dimensions, we can better understand and gain insight into the structure of elementary particles as described by this new classical model of electrodynamics.

With this tool, we can view an elementary particle with all of its sub-particles in a 3-D environment in an interactive manner which allows a user to rotate the graphical model and navigate around it. Our program also allows the user to visualize the electron shells of a given atomic structure. The shells are displayed as partially transparent spheres. A user can manipulate the radii of the shells to better observe where electrons fit into the model. Additionally, users can view the properties of particles and dynamically change them in the program in order to study the implications of their locations and orientations.

Historically, there have been many models made of atoms and how electrons orbit the nuclei. Some new models have not gained wide acceptance, while others have replaced the generally accepted model and become the new standard. Models themselves are designed to describe physical situations in ways that we can perceive them. The Heisenberg Uncertainty Principle states that it is not possible to know both position and momentum of an electron at the same time with a high level of certainty. Because of this, we use models to describe our knowledge of electron behavior. The ring-based model is just one method of describing electrons and is interesting in part because it gives scientists new viewpoints for classical electrodynamic equations. Originally, in Maxwell’s Equations, two approximations were needed to accurately calculate certain electrodynamic properties. This model removes the necessity for these approximations because of the assumed finite size of the electrons. The new model also supports a single universal force that is able to account for all four currently accepted fundamental forces1.

The issue involved with this research is not whether the ring model is right or wrong, but rather whether this visualization is helpful in understanding the ring model. Our goal then is not to prove or disprove the ring model or to determine its validity, but rather to create a tool to display atomic structures using the model in order to bring a better understanding of how the particles exist in relation to each other and to organize complex data in a more meaningful manner. Since the emergence of quantum theory, mathematics and modeling have become very important in chemistry and physics. Scientists desire to create models that can explain complex atomic systems without involving large amounts of computation2. These models are then used for research in quantum chemistry, in addition to modeling atomic structures.

Because our project visualizes a new model of elementary particles, existing software is not suitable for its needs. Although many 3-D modeling programs with the ability to draw simple shapes already exist, a significant number of them are aimed at commercial production work. These software packages are often very costly to purchase, and thus are not a viable option for most scientists. The objective of this project is therefore to create a specialized tool which can visualize atomic structures of elementary particles as described by this new model.

Methodology Used This project was organized in multiple stages. The first stage, completed last year, involved manually creating static visualizations of these models using an existing computer 3-D modeling tool. The modeling environment used for this stage was Autodesk's Maya software. Maya is a very powerful graphics tool, but licenses cost several thousand dollars, making it impractical for widespread use by scientists and researchers. The first stage of implementation served as a proof-of-concept design to verify the ability to generate three-dimensional models of elementary particles. Several static models, including models of neon and radium atoms, were created within Maya. Using these models, we were able to generate high-quality static images from a user-chosen point of view.

The second stage was designed to utilize an existing visualization environment such as OpenDX. We investigated the functionality of this environment to determine whether it had the appropriate capabilities to meet our requirements; this implementation would then have required extending the environment to provide more advanced functionality. After a brief look at OpenDX, we determined that it did not provide a suitable environment for our application, and instead focused our attention on finding an acceptable graphics library for a custom-built application.

The final stage involved creating a stand-alone, custom application. As part of this stage, we considered incorporating a current graphics library such as OpenGL or DirectX, which would have allowed us to render graphical data in our own environment and execute the application on any computer platform. However, the OpenGL library describes graphics at a very low level through basic functions; therefore, we decided to use a higher-level application programming interface (API) that abstracts out individual calls to OpenGL methods. This removed a significant amount of complexity during development. We eventually chose the Java 3D API, originally developed by Sun Microsystems and now maintained as an open source project.

Java 3D operates as a scene-graph based 3-D API that runs on top of either OpenGL or DirectX. It acts both as a wrapper interface around these libraries and as an object-oriented encapsulation for graphical programming. Our project was implemented as a stand-alone application written in Java, using the Java 3D interface to provide three-dimensional graphical support.

Data input to the application is handled using an XML structure. A user creates an XML document containing a list of all elementary particles to be displayed. Each particle is then described by three-dimensional coordinates relative to an origin, three-axis rotation variables, an electric charge value, a direction of particle rotation, particle radius, and a zoom factor. The data file is then loaded into the application where its document object model (DOM) tree is parsed and its information is stored. This information is then used to generate a 3-D model of the structure described by the particles.
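A minimal sketch of this input path is shown below. The element and attribute names are hypothetical, since the actual XML schema is not published in this summary; only the kinds of fields listed above are assumed:

```java
// Hypothetical input format: element and attribute names are illustrative
// assumptions, not the tool's published schema.
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ParticleLoader {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                                             .newDocumentBuilder()
                                             .parse(new File("model.xml"));
        NodeList particles = doc.getElementsByTagName("particle");
        for (int i = 0; i < particles.getLength(); i++) {
            Element p = (Element) particles.item(i);
            // three-dimensional coordinates relative to the origin
            double x = Double.parseDouble(p.getAttribute("x"));
            double y = Double.parseDouble(p.getAttribute("y"));
            double z = Double.parseDouble(p.getAttribute("z"));
            double charge = Double.parseDouble(p.getAttribute("charge"));
            double radius = Double.parseDouble(p.getAttribute("radius"));
            // rotation angles, spin direction, and zoom factor would be read
            // the same way, then handed to the Java 3D scene-graph builder
        }
    }
}
```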

Although the exact shape of an electron described by the ring model is a set of three intertwined charge fibers, our application displays each particle in the three-dimensional environment as a torus. This shape was selected because it approximates the electron in a way that is easily displayed and understood. The torus shape we used was found in a user-created code repository of Java 3D utility classes4. Each particle is associated with a line indicating the direction of magnetic flux, which is determined by the direction of rotation and the electric charge of the particle. Due to the extremely small radius of electron rings, a zoom factor was added to enlarge the particles to a viewable level. Interaction with this model is provided through mouse and keyboard events as well as control dialog boxes. These interactions include toggling the display of electron shells, modifications of the shells’ radii, and manipulation of particle data.
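The flux-line direction follows the right-hand rule for a current loop, so it flips with either the charge sign or the circulation sense. A tiny illustrative helper (not taken from the actual tool; names are assumptions) could read:

```java
/** Illustrative helper, not from the actual tool: by the right-hand rule for a
 *  current loop, the flux direction along the ring axis flips with either the
 *  sign of the charge or the sense of circulation. */
public class FluxIndicator {
    /** ringAxis: unit vector along the torus axis; circulationSense: +1 or -1. */
    public static double[] fluxDirection(double[] ringAxis, double charge,
                                         int circulationSense) {
        double sign = Math.signum(charge) * circulationSense;
        return new double[] { sign * ringAxis[0],
                              sign * ringAxis[1],
                              sign * ringAxis[2] };
    }
}
```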

Results Obtained In the first stage we successfully created models of complex elementary particles using Maya. These models showed us that creating models of such particles could be accomplished. Each model required a significant amount of time to create, demonstrating the need for an automated graphical environment that could render 3-D models in real time. The images in Figure 1 display some of the results from this stage of implementation.

After the research of the second stage, and the decision to use Java 3D for our modeling tool, implementation of the final stage began. Initial development included researching the capabilities of Java 3D with respect to our desired functionality. Several test programs were developed to verify incremental capabilities before the final product entered development.

Figure 2 provides a screenshot of many features of the final product. The program is structured in a manner that provides the user with meaningful dialogs as necessary. The complete program provides functionality to load data from a file and display that data in a three-dimensional representation. User interaction allows dynamic modification of data including shell radii and the properties of individual particles. As a whole, this program demonstrates some capabilities of the Java 3D API, as well as the ability to create specialized 3-D modeling software to visualize the ring-based electrodynamic model of elementary particles.

Figures/Charts

Figure 1. Example Renderings of Elementary Particles.

The first image represents a neon atom. With this display, we can see the ten electrons in their stationary positions around the nucleus. The second image displays a hydrogen molecule with its two protons and two electrons. The final two images represent the electron structure of an atom of radium. The first is a simple representation, while the second contains added electron shell information.

Figure 2. 3-D Model Application Windows.

This image shows several of the application windows a user interacts with while using the tool. The frame in the upper left represents the main program frame. From this frame a user may open a file selection dialog and locate an XML file containing the data set. The user then selects Generate Model to bring up the main 3-D model frame, shown in the background. The main model window displays all particle rings defined in the data file, along with the x, y, and z axes and applicable electron shells. The Reset button closes the 3-D model frame and prepares the program to load a new data set. The Toggle Shells button in the main program frame turns the spherical representation of electron shells on and off. The Modify Shells button brings up the Shell Editor window, shown in the lower left. From this window, a user may modify the radii of each electron shell. In the main model window, a user is able to click on any of the electron rings to display the Particle Properties window, which displays the information about that particle. A user is then able to make changes to any of the fields and apply them to the model. The new parameters are entered into the program and the model window is updated to reflect the changes.

Acknowledgments The author would like to thank his project advisor, Dr. Gerald Brown, for his cooperation on the project and supplying the original research topic. The author also thanks his technical and academic advisor, Dr. Keith Shomper, for his support during this project and knowledge of computer graphics. Finally, the author would like to thank Dr. Stanley Baczek and Mr. Charles Allport for organizing this project.

References
1. Charge Fiber Model of Elementary Particles.
2. Head-Gordon, Martin and Emilio Artacho. "Chemistry on the Computer." Physics Today, 2008.
3. Bouvier, Dennis J. "Getting Started with the Java 3D™ API." 2000.
4. The j3d.org Code Repository.

Autologous Mesenchymal Stem Cell Transplantation to Improve Fascial Repair

Student Researcher: Megan E. Genuske

Advisor: Dr. Hazel Marie

Youngstown State University Mechanical Engineering

Abstract Hernias are all too common in the surgical world: any operation involving an incision into the abdominal cavity carries up to a one-in-ten chance of a post-operative hernia. In addition, these hernias often recur. Surgical technology is improving while the rate of incisional hernias holds steady. Using animal models allows accurate recording of how such hernias heal. The experiment was divided into three arms: tensile strength, collagen deposition, and collagen remodeling. For each set of rabbits there was an experimental group and a control group against which to compare results. Mesenchymal stem cells (MSCs) have the ability to differentiate into several different types of cells; this experiment was therefore conducted in the hope of using them to improve fascial repair.

Project Objectives This project is an attempt to reduce the number of recurring hernias and improve wound repair. It is projected that a combination of mesenchymal stem cells, plasma-rich fibrin, and a collagen with growth factor embedded in it will improve healing after surgery. The specimens were subjected to tensiometric analysis to determine whether the stem cells were improving the fascial repair and to find the biomechanical properties of the fascia.

Methodology Used The methodology had to be determined as the experiment went on. First, special grips were made for the Instron tensiometer, Model 5500R. These grips enabled better contact without the samples slipping and allowed for cleaner testing. Since tensile testing was the last stage of the experiment, samples were obtained only after the other stages were completed. Each sample was labeled to keep track of which method of hernia repair was used. Once the best part of the sample was reserved, the specimen was patted dry to help with cutting and, later, with staying in the grips. The specimens were cut into a dumbbell shape, as in a previous experiment performed by DuBay et al.

Figure 1. Specimens were cut into this approximate shape. (Dubay et al. 2004)

This shape helped to ensure that the material failed at the scar rather than at the grips. Once the specimen was cut, measurements of thickness, width, and initial length were taken, and the scar was marked with permanent ink. Depending on the specimen, the scar was placed either transversely or longitudinally in the grips. The specimen was then secured into the designed grips and put into the tensiometer. The machine was jogged slightly upward to ensure a good, tight grip on the specimen. The machine was then run at a constant 10 mm per minute while the Merlin computer program recorded the force and extension data. The whole process was also video recorded with a fine grid behind the specimen, to capture local deformation rather than just overall deformation. Once the data were recorded by the computer, the results were tabulated and graphed. These graphs yielded several values.

Results Obtained The results obtained from this experiment were very limited. The first two sets of specimens were much too small. In addition, the grips were still being developed during the first set, and the methodology was refined only after the sample specimens were tested. Two of the specimens produced very good results. Shown below are the results obtained from tissue sample 15 for both the control and experimental groups.

Figure 2. Results from treated and control specimen 15; both longitudinal tests.

Several very important biomechanical properties may be found by analyzing a stress-strain curve. These include the yield strength, the yield energy, and the Young's modulus. The yield strength is the stress at which yielding of the biomaterial begins, and may be determined from the 0.2% offset method. This remained consistent between the treated and control samples: the control sample had a value of 325 kPa and the treated sample a value of 400 kPa.

The next property, yield energy, is the energy which the biomaterial can absorb in the elastic range, i.e., while the material is still able to return to its original state without any permanent deformation. It may be found by taking the area under the curve from the initial load to the yield strength. This is where the two samples varied by a noticeable amount: the yield energy for the control was 76.144 mJ, compared to 95.033 mJ for the treated sample. More importantly, the plastic region absorbed significantly more energy in the treated sample. This is promising because the plastic region largely determines the toughness, which spans the region between the yield stress and the point of failure, and it suggests that the new stem cell technique substantially improved the hernia repair.

One of the most important properties is Young's modulus, the ratio of stress over strain and a measure of the stiffness of the material. It is obtained by analyzing the slope of the linear region of the curve; here, best-fit lines were taken in the linear region for both the treated and control samples. The modulus for the treated sample was 1.133 MPa while the modulus of the control sample was 1.173 MPa; these values are comparable to a rubber-like material. The two are very close, with the control being just slightly stiffer; this may simply reflect the variance between two different pieces of tissue.

The tests so far have been promising, but additional testing is required: more samples with consistent results are needed to reach a definite conclusion. Further testing will show how much the MSCs improved the fascial repair and enable precise data to be obtained.
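For reference, the curve-derived quantities reported above can be recovered mechanically from the recorded data. The sketch below is illustrative (assumed array names and index choices), not the analysis script used in this work:

```java
/** Illustrative sketch, not the authors' analysis script: modulus from a
 *  least-squares fit over an assumed linear strain window, and absorbed
 *  energy from the trapezoidal rule up to an assumed yield index. */
public class CurveAnalysis {
    /** Slope of the best-fit line over samples i0..i1 of the linear region;
     *  with stress in Pa and dimensionless strain this is Young's modulus. */
    public static double youngsModulus(double[] strain, double[] stress,
                                       int i0, int i1) {
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        int n = i1 - i0 + 1;
        for (int i = i0; i <= i1; i++) {
            sx += strain[i];
            sy += stress[i];
            sxx += strain[i] * strain[i];
            sxy += strain[i] * stress[i];
        }
        return (n * sxy - sx * sy) / (n * sxx - sx * sx);
    }

    /** Area under the force-extension record up to the yield point; joules
     *  when force is in newtons and extension in metres. */
    public static double energyToYield(double[] ext, double[] force, int yieldIdx) {
        double e = 0;
        for (int i = 1; i <= yieldIdx; i++) {
            e += 0.5 * (force[i] + force[i - 1]) * (ext[i] - ext[i - 1]);
        }
        return e;
    }
}
```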

References 1. DuBay, D.A., et al. Progressive fascial wound failure impairs subsequent abdominal wall repairs: A new animal model of incisional hernia formation. Journal of Surgery 2005;127:463-71.

Acknowledgments I would like to thank Dr. Hazel Marie for advising this project, Anthony Viviano for developing the grips and helping with the project, Matt Citarella for filming the tests, and Tammy Donnadio for being the senior scholar on this project.

An Investigation of Reaction Rates Using "The Antacid Tablet Race"

Student Researcher: Amy J. Greenfield

Advisor: Mr. William Jones

Cedarville University Department of Science and Mathematics

Abstract The project that I am completing will investigate methods of altering reaction rates. In particular, it will explore the methods of temperature and surface area. It will be used in a high school Chemistry classroom. This project will incorporate the NASA Rocket Educator’s Guide into my lessons discussing chemical kinetics, and in particular, reaction rates. I will begin with lessons covering the concepts of reaction rates. I will also teach the vocabulary and equations dealing with reaction rates. I will conclude these lessons with a laboratory activity exploring the effect of surface area and temperature on the reaction rate of antacid tablets. Working in groups of two or three, students will observe and record the effects of these variables. The activity will conclude with a discussion which applies the observations from the experiment to the effects of the variables upon the power of rocket fuels.

Activity Plan This activity is based on the "Antacid Tablet Race" lesson found in NASA's Rocket Educator's Guide. The experiment has two components but can be completed during one laboratory session.

The first experiment will determine the effect of surface area on reaction rates. Using stopwatches, students will compare the time it takes a full tablet to dissolve in water with the time it takes a crushed tablet to dissolve. Students will be asked to make predictions before they complete this step of the experiment. Students should find that an increase in surface area leads to an increased reaction rate. A discussion will follow relating the experiment to rocket propellant. In particular, students should come to understand that expanding the burning surface area inside a rocket will increase the burning rate.

The second experiment will examine the effect of temperature on reaction rates. Students will compare the time it takes for a full tablet to dissolve in warm water with the time it takes a full tablet to dissolve in cold water. Students will be asked to make predictions before they complete this step of the experiment. Students should find that an increase in water temperature leads to an increase in reaction rate. Once again, a discussion will follow relating the experiment to rocket propellant. Students should come to understand that fuel is preheated in the rocket’s engine to increase the reaction rate, and in turn increase the rocket’s thrust.
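For teachers who want to quantify the temperature effect, the Arrhenius relation for the rate constant provides the standard explanation; the activation energy used in the worked number below is an assumed, illustrative value, not a measured property of antacid tablets:

$$k = A\,e^{-E_a/(RT)}, \qquad \frac{k_\text{warm}}{k_\text{cold}} = \exp\!\left[\frac{E_a}{R}\left(\frac{1}{T_\text{cold}} - \frac{1}{T_\text{warm}}\right)\right]$$

For example, taking an assumed $E_a = 50$ kJ/mol and warming the water from 10 °C (283 K) to 40 °C (313 K) gives $k_\text{warm}/k_\text{cold} = \exp[(50000/8.314)(1/283 - 1/313)] \approx 7.7$, so the tablet in warm water should dissolve several times faster.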

Objectives • Students will gain factual understanding of surface area and temperature in relation to reaction rates. • Students will be able to predict the effects of variable changes on reaction rates. • Students will be able to connect factual knowledge to real life examples during lab activities.

Learning Theory This lesson is based on the constructivist approach to learning. This approach takes into account the student’s way of thinking and learning. It involves actively engaging students in the learning process, most commonly by the use of cooperative learning and hands-on experiences. Since the constructivist approach holds that the best teacher is student experience, this lesson will allow the students to discover the effects of temperature and surface area on reaction rates through a lab activity. This activity will be preceded with a presentation of basic facts and concepts concerning reaction rates. Through this activity, students will be able to connect previous knowledge with the discoveries they make during experimentation.

Materials Required (per group) • 4 Antacid Tablets • 2 Beakers • Tweezers • Stopwatch • Thermometer • Water • Safety Goggles • Scrap Paper

Assessment The assessment of this activity will be completed in the form of a written lab report. Students will be graded on participation as well as on completing the associated worksheet. The lab report requires students to make a prediction prior to each trial, record the time it takes for the tablet to dissolve during experimentation, and answer several follow-up questions. An additional assessment could require students to write an essay connecting this experiment with rocket fuel.

A Virtual Dynamic Modularized Network Architecture

Student Researcher: Jonathan J. Guernsey

Advisor: Dr. Lawrence Miller

The University of Toledo Department of Electrical Engineering and Computer Science

Abstract The Internet has become the largest and most highly used global communications network in the world. Unfortunately, due to its underlying core design, it is unable to provide the levels of quality of service (QoS), security, robustness, availability, reliability, management, extensibility, and adaptability that current network applications request. The requirements of network applications are vast and change drastically from one application to the next. The desired support for both current and future network applications cannot be provided by the present static monolithic protocol architecture and its underlying layering paradigm; a dynamic protocol framework is needed. It is envisioned that future networks will be dominated by high-speed wireless networks similar to WiMAX, multi-gigabit-speed Ethernet, and single- and multi-wavelength fiber networks. Leveraging the power of these extremely high-speed media and the power of virtual circuit based technologies, VIMNet, a dynamic protocol stack framework that uses protocol modules and module chaining, is being designed.

Introduction Based upon results from a 2001 workshop, the National Research Council (NRC) claims that the Internet is ossifying to the point of being unable to change or integrate new technology and new services from which it would significantly benefit. They cite several examples that have been difficult to deploy, including multicast services and IPv6 [1]. Significant research has been done in attempts to provide support for true network QoS, real-time communications, integrated traffic engineering, high network reliability, and high availability. Alternative network technologies have even been developed, such as Multi Protocol Label Switching (MPLS) [2], Asynchronous Transfer Mode (ATM) [3, 4], and IBM's Systems Network Architecture (SNA) [5]. Despite all of their advantages, all have been difficult to integrate. Even integrating security into the TCP/IP protocol stack has proven to be a daunting challenge, given that TCP/IP was developed for DARPA as an open, highly available, highly robust, "always on" network [6]. It would appear, based upon this, that the ossification of the Internet has reached the point where nearly everything in the TCP/IP architecture is "set in stone".

Another problem is the Internet's growth rate. The Internet has shown explosive growth since its first days as ARPANET, moving from ARPANET's initial size of 4 systems in 1969 to well over 150 million systems in 2000 in the United States alone [4, 7]. Since then, another growth spurt has occurred as cell phones have become internet ready, adding a huge number of systems to the wireless infrastructure. In 2000, almost 100 million cell phones were in use in the United States, and growth since then has been tremendous, as has the use of cellular-based internet access [7]. Currently, video game systems which support online gaming, such as Microsoft's Xbox 360, Sony's PlayStation 3 and PlayStation Portable (PSP), and Nintendo's Wii and DS, along with a large increase in cell phone and VoIP phone users, have added a large number of networked systems to the already massive Internet. As of the end of 2007, the most recent estimate reports roughly 1.3 billion Internet users, an increase of nearly 200 million from 2006 [8]. This tremendous growth rate underscores the importance of designing scalability into any new Internet architecture.

This paper first provides a bit of background information on the requirements of the future Internet. Then, a brief overview of our new Internet architecture is provided. This architecture is being designed to be dynamic enough to provide an environment that can support the needs of all applications and their necessary protocols. Following the architecture overview, a description of the new FPGA based network interface card (NIC) development is provided.

Future Internet The NRC states that "... a vision for the future Internet should be to provide users the quality of experience they seek, and to accommodate a diversity of interests [1]." Currently, this vision is not being attained, as the desired support for the diversity of network applications is not being provided. Significant research has been conducted in many areas of networking, including network fault tolerance [9], dynamic channel rerouting [10], redundant paths for fault tolerance [11], Diffserv [12], RTP [13], CR-LDP [14], RSVP [15], fast reroute [15], and traffic directing [16, 17]. This research was conducted in an attempt to provide the desired QoS, real-time communications support, and increased network availability by improving the techniques in use. Despite this research, current techniques are still insufficient, especially for applications with stringent resource requirements and transmission deadlines. It is foreseen that the next generation of network applications will require true real-time communications, true quality of service (QoS), and significantly improved security [1, 18].

In order to achieve this vision, the future Internet must be designed with the intention of handling the shortcomings of the current Internet. It will need to be highly adaptable to handle rapid changes in technologies and policies. It will have to possess the ability to immediately integrate the new algorithms and protocols required to meet stringent resource allocation requirements, increased security requirements, and unique processing requirements via application customized protocols. Each application connection may contain a different arrangement of protocol modules to provide the required features. The future Internet will also need to be designed with the ability to integrate multiple kinds of networks based upon radically different protocols. It will need to enable different authorities, countries, and regions of the world to form their own radically distinct virtual networks and still be able to interconnect these networks without the use of hard-coded gateways, which are inherently resistant to change.

These diverse requirements call for a new dynamic network protocol framework that can support multiple virtual network architectures based upon application customized protocols. The protocol framework will provide this support through a set of core network services. These core services, which the network infrastructure must provide, include the ability to access and update forwarding tables, the ability to bi-directionally request translation of network naming conventions, support for redundant paths, packet route tracing through insertion of low-level addresses into the network packet trailers, built-in network measurement functions, and the ability to dynamically upload network protocol modules into the routers on a connection-by-connection basis. This functionality will form the majority of the primary network services, with other services being provided by the application customized protocols. Any additional services that customized protocols provide can be used to enhance the core services based upon the needs of the virtual network architecture associated with the individual application connection. However, the core services must provide the fundamental capabilities regardless of any additional services that the application protocols or virtual network architecture provide.

Network Architecture Any design for the future Internet will be affected by three unalterable factors. First, the Internet will always be a network of networks. Second, the bandwidth of wired connections will be bounded by the theoretical limitations of fiber optic cables. Finally, wireless communication will be bounded by the restrictions of Maxwell's equations. Keeping these in mind, our design for the infrastructure of the future Internet attempts to leverage the benefits of each of these areas while providing facilities to deal with their weaknesses.

At the physical hardware level, our design for the future Internet maintains the hierarchical design of the current Internet. We still envision a spanning tree design, as shown in Figure 1. The current Internet foundation is provided by a group of very powerful high-speed packet switch routers. Every packet that comes into a packet switch router must be routed, using a routing protocol, to determine which direction to send it so that it reaches its destination; in many core routers this process is performed millions to billions of times per second. Virtual circuit based routers function differently. A virtual circuit router requires a connection establishment request to build a virtual circuit path. At the time of the connection request, routing is performed to find a path from the requesting system to the destination. At each hop in the path, a virtual circuit forwarding table entry is made to allow packets to move through the virtual circuit path using a simple forwarding process based on the virtual circuit identifier for the path. At the end of transmission, when the path is no longer needed, the virtual circuit path is released. Virtual circuit paths provide a quick means of transmitting data after the connection is established and require far less routing work, since routing is performed only once; the only extra requirement for virtual circuit based systems is more memory and processing resources to manage the virtual circuits.

In our design, the root (core) routers of the Internet will still provide the fundamental backbone of the Internet. However, we envision that the number of core routers needs to be increased by a factor of 5 to 10. This increase is intended to provide the necessary foundation to move from an unreliable packet switching network to a virtual circuit based network infrastructure. This move will be further aided by the availability of cheap high-speed memories, flash memories, FPGAs, soft-core processors, and System-on-Chip (SoC) technologies, which will allow for the development of routers that can provide, manage, and guarantee the resource provisioning of many virtual circuits with varying resource requirements. It is our intention to move away from connectionless packet switching systems, which inherently provide an unreliable service that must be augmented by several heavyweight protocols to transparently emulate a reliable connection. At times, this causes severe loss of network transmission bandwidth, as more packets are required for retransmissions due to transmission errors, network failures, and congestion in the random paths that a packet may traverse.

Layered on top of the root routers will be a series of company networks, ISP networks, wireless and satellite networks, and finally the end users. Moving down the hierarchy, the number of systems in each network increases, with the bulk of the networked systems being the end users. It is envisioned that the end-user networks will be comprised mostly of high-speed wireless, fiber, and multi-gigabit Ethernet networks, all of which can be adapted to work with our design.
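To make the forwarding-table idea concrete, here is a minimal software sketch; the class and method names are illustrative only and are not part of any VIMNet specification:

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal sketch of the virtual-circuit idea described above: routing runs
 *  once at connection setup, then every packet is a single table lookup. */
public class VcForwarder {
    // (inbound port, VC id) -> { outbound port, outbound VC id }
    private final Map<Long, int[]> table = new HashMap<>();

    private static long key(int port, int vc) {
        return ((long) port << 32) | (vc & 0xFFFFFFFFL);
    }

    /** Installed once, during connection establishment. */
    public void install(int inPort, int inVc, int outPort, int outVc) {
        table.put(key(inPort, inVc), new int[] { outPort, outVc });
    }

    /** Per-packet work: one lookup instead of a routing computation. */
    public int[] forward(int inPort, int inVc) {
        return table.get(key(inPort, inVc)); // null: no circuit, drop or signal error
    }

    /** Called when the transmission ends and the path is released. */
    public void release(int inPort, int inVc) {
        table.remove(key(inPort, inVc));
    }
}
```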

Starting with the root network, all of the routers and other networked systems will be interconnected in a mesh-like fashion by high-speed single-wavelength and multi-wavelength fiber trunk connections, as shown in Figure 2. Each trunk connection between two systems will be comprised of a series of parallel bi-directional links that constitute a single logical communication link connecting the two systems' network interface cards (NICs). Every logical link will contain at least one bi-directional connection for private data and at least two bi-directional connections for public data. The private data connection will be utilized for a variety of purposes, including security, router management, connection setup, and connection management. The multiple public links will be utilized to transmit the user data in a number of different configurations, including parallel transmission for high-speed real-time services and redundant transmission for high reliability. Implementing this 'trunk' based communication requires the development of a new NIC design; our architecture includes an FPGA based NIC, which is covered in the next section of this paper. This communication link and NIC design is scalable, and it is our intent that it be utilized for all physical connections up to and including the end-user connections. At the core network level, between each pair of routers, there should ideally be multiple pairs of connected NICs, each of which in turn contains a private connection coupled with multiple public connections. This may mean that between core network systems there are dozens to hundreds of physical connections composing many logical links. From the core network down to the end-user non-wireless networks, it is expected that the number of public connections would decrease, down to as few as two public connections coupled with a single private connection at the end-host systems. Legacy networks as well as wireless networks will be adapted to our network model by the use of protocol framework based soft-gateways. These gateway systems will be inexpensive and dynamic equivalents of the hardware based gateway systems used to connect different network systems today.

On top of the physical hardware sits the dynamic protocol system framework, shown in Figure 3 together with a visual representation of the NIC's physical links. As can be seen from the figure, the dynamic protocol system as a whole is comprised of a series of core service modules, user application protocol module chains, and the module management system (not shown). The core services sit on the software side of the system, directly above the network device physical interface (NDPI) boundary. The NDPI boundary is where the software framework interfaces with the NIC hardware, and this boundary is controlled by the NIC device driver. The core services, coupled with the service module management system, provide the dynamic protocol framework environment which allows the user's applications to utilize custom protocol chains. The core services are responsible for all security, network, and virtual circuit management functions required to provide network communication.

One major assumption is that, regardless of the technology upon which the future Internet is based, it will still be infeasible to make the hosts/devices attached to the edge of the network completely secure. This assumption, that such a critical component of the Internet will never be secured, has significant implications for how the network is designed, so that threats can be quickly identified and solutions for dealing with those threats can be rapidly deployed into both the core and the edges of the Internet. Thus, we are emphasizing timely reaction to security threats instead of their complete prevention.

VIMNet's use of private network infrastructure will ensure that out-of-link signaling (taking out-of-band signaling one step further) is utilized for network control packets. The use of separate public and private network infrastructure will ensure that security, signaling, and control data are never commingled with general user/application data. Thus, VIMNet will provide private channels for administration and management packets that can shut down attacks and threats to the network, and allow quick responses to other network problems such as congestion.

VIMNet’s private network links are reserved for “network use” only, which means that the data carried by them is not generated by, nor visible to, the end applications using the network. In addition, VIMNet will not be an always on network (it requires a connection be established before allowing transmission). This will allow the network to trace packet routes, without user interference, in order to provide enhanced security.

FPGA Based NIC The network interface hardware for this new network architecture must fulfill the requirements of the standard OSI physical layer, providing the electrical, mechanical, and procedural definitions for transmitting the raw bit data over the transmission medium. In our design, the network interface card is also responsible for data link layer checksumming and for controlled simultaneous parallel transmission over a set of links in a multiple-link trunk. Where multiple-link trunks may not be feasible, such as in the "last mile" or in wireless networks, emulation of multiple-link trunks over a single link will be achieved through the creation of multiple low-level data link virtual connections.

The virtual circuit based design of our new network architecture segregates packet formats into two groups: management packets and user data packets. User data packets are defined to be up to 4 megabits (512 kBytes) in size. Each user data packet is comprised of a small header, a trailer, and the data portion of the packet. The management packets include connection establishment, connection management, security, router management, router update, and other management packets. All of the management packets are sent across the private connection of the logical link, with no part of them being sent over the public links. The user data packets are split into pieces during transmission: the header, trailer, CRC, and configuration information are transmitted on the private connection while the data segment is sent on the public links. Data transmitted over the public links can be sent in either parallel mode or redundant mode. In parallel mode, the links carry different pieces of the data in order until all of the data is transmitted, allowing a faster transmission. In redundant mode, the data is transmitted one piece at a time with all links carrying the same information; on the receive end, all of the public lines are compared to verify that the transmitted data is equivalent before proceeding with the next piece. Redundant mode allows for a more reliable transmission.
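A software sketch of the two public-link modes follows. It is illustrative only; the real logic lives in the FPGA Tx/Rx blocks described below, and the class and method names are assumptions:

```java
import java.util.Arrays;

/** Illustrative sketch of the two public-link transmission modes;
 *  not actual NIC firmware. */
public class PublicLinks {
    /** Parallel mode: each link carries a different slice of the data,
     *  for higher throughput. */
    public static byte[][] stripe(byte[] data, int links) {
        byte[][] out = new byte[links][];
        int chunk = (data.length + links - 1) / links; // ceiling division
        for (int i = 0; i < links; i++) {
            int from = Math.min(i * chunk, data.length);
            int to = Math.min(from + chunk, data.length);
            out[i] = Arrays.copyOfRange(data, from, to);
        }
        return out;
    }

    /** Redundant mode, receive side: every link carried the same bytes, and a
     *  unit is accepted only if all copies agree (as the Public Rx Manager does). */
    public static byte[] acceptIfIdentical(byte[][] copies) {
        for (int i = 1; i < copies.length; i++) {
            if (!Arrays.equals(copies[0], copies[i])) {
                throw new IllegalStateException("link disagreement: request retransmission");
            }
        }
        return copies[0];
    }
}
```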

The overall design for the FPGA based NIC is shown in Figure 4. The design is broken up into a series of blocks, with each block responsible for a specific portion of the network transmissions. The NIC Control Manager is the driver for the entire design, controlling all of the signaling and timing. The Tx and Rx Managers provide the parallel and redundant functionality for the public lines as well as normal transmission of data on the private lines. The two MegaCore Functions are protocol blocks that provide transmission of data across the PCI Express bus to the software portion of the network architecture and to and from the DDR2 SDRAM.

NIC Control Manager The NIC Control Manager provides the master control for all of the sub-blocks within the FPGA on the NIC. This manager provides the configuration signaling and initiates the transfers between the memory, the PCIe bus, and the public and private Tx/Rx managers. These transfers take place in a round-robin fashion to provide a fair distribution of access time to the SDRAM memory and PCIe busses. This manager is also responsible for breaking up the packet and forwarding the pieces to the appropriate transmission manager. During the receive process, the Control Manager receives the configuration information for a user data packet and buffers this control information on the control signal lines for the Public Rx Manager.

Public Tx Manager This functional block of the FPGA interacts with the transceivers for the public connections as well as the NIC Control Manager. The NIC Control Manager provides the signaling to configure this block into either parallel or redundant mode. Along with the configuration information, the NIC Control Manager also provides the data for the block to transmit. The block then either sends the data directly to the transceivers in parallel mode, or duplicates the data and sends the duplicates to each transceiver for transmission.

Public Rx Manager This functional block of the FPGA interacts with the transceivers for the public connections on the receive side as well as the NIC Control Manager. The NIC Control Manager provides the signaling to configure this block for either parallel or redundant mode when a configuration sequence is received and processed by the Private Rx Manager. In parallel receive mode, the block receives the data and buffers it until a full byte has been received; it then writes the byte to the SDRAM MegaCore Function via the NIC Control Manager. In redundant mode, the block receives a group of redundant data sets and first compares them to verify that they are equivalent. If they are equivalent, a single copy is buffered in the block until a full byte is ready to be written to the SDRAM; the byte is then written to SDRAM using the MegaCore Function.

Private Tx Manager This functional block of the FPGA interacts with the transceiver for the private connection as well as the NIC Control Manager. During a transmission, the NIC Control Manager provides access to the data for private transmission, passing it to this block a single byte at a time. The block then passes the byte directly to the transceiver for the private link to transmit.

Private Rx Manager This functional block of the FPGA interacts with the transceiver for the private connection on the receive side as well as the NIC Control Manager. As a transceiver receives data, the Private Rx block writes a byte of received data at a time to the SDRAM memory via the NIC Control Manager.

PCIe Bus MegaCore Function This MegaCore Function interacts with the NIC device driver and provides the interface between the FPGA and the PCI Express bus. It provides the ability to read and write data to and from the PCIe bus connection. Data packets that have been received, or ones that need to be transmitted, are moved across the bus using this function.

DDR2 SDRAM MegaCore Function This MegaCore Function provides the interface to read and write data from the DDR2 SDRAM located on the NIC. This RAM is used to buffer the received packets as well as packets to be transmitted. The SDRAM MegaCore function interacts with both the NIC Control Manager and PCIe Bus MegaCore Function to move data into and out of the SDRAM memory.

Future NIC Design In the next generation of the design, it is intended that the CRC for the data portion be generated within the FPGA on the NIC. In addition, an encryption block will be added to provide security of data at the hardware level.

Acknowledgments The author would like to thank his advisor, Dr. Lawrence Miller, and the numerous EECS graduate students who are working to aid in the development of this architecture. The author would also like to thank George Janikowski and the members of the University Program at Altera Corporation for their help and contributions toward the FPGA portion of this research. Finally, the author would like to thank a fellow PhD student, Mike Orra, for the peer review of this document.

Figure 1. Internet Network (Tree) Hierarchy.

Figure 2. Root (Mesh) Network.

Figure 3. Dynamic Framework Architecture.

Figure 4. FPGA Design Overview.

A Polycation Receptor in Paramecium and Tetrahymena

Student Researcher: Kevin T. Gulley

Advisor: Dr. Heather Kuruvilla

Cedarville University Department of Science and Mathematics

Abstract In previous studies, polycation receptors have been studied in Tetrahymena thermophila. A similar receptor has been identified in Paramecium tetraurelia. As expected, this receptor also binds positively charged chemorepellents and initiates an avoidance response brought about by ciliary reversal. Chemorepellents are substances that elicit a negative chemotaxis response when they are encountered by an organism. Interestingly, Paramecium's receptor has several properties that differ from Tetrahymena's receptor, including molecular weight, binding affinity, and ligand specificity. Our lab has been attempting to identify further properties of this receptor.

Using cross-adaptation assays, we have been able to see that it is a general polycation receptor. Most notably, it appears that Paramecium's polycation receptor is a receptor tyrosine kinase (Tetrahymena's is a G-protein coupled receptor). Upon incubation in genistein, a tyrosine kinase inhibitor, the cells exhibit no avoidance of lysozyme, the receptor's most potent chemorepellent. We have attempted to visualize this tyrosine kinase activity using immunofluorescence. In our immunofluorescence assay, we allow specific antibodies to bind to the phosphorylated tyrosine kinases. Next, the fluorescent secondary antibody is illuminated to visibly locate the position of these activated proteins. In the future, we will also employ Western blots to further specify the exact size of the proposed polycation receptor-tyrosine kinase.

Project Objectives
1. Identify and compare polycation receptors of each species.
2. Discern each receptor's second messenger pathway.
3. Visualize the location of specific cell machinery involved in chemotaxis.
4. Isolate receptors and related cellular proteins in our species and compare them to known proteins.

Methodology Used Behavioral assays: Briefly, cells were washed three times in buffer. Cells were then individually transferred by micropipette to a microtiter well, which contained 300 µl of buffer. After cells had adapted to buffer, individual cells were transferred to a third well, which contained 300 µl of the polycation being tested. Each cell was briefly observed (1-5 sec) for signs of avoidance.

Immunofluorescence: Paramecium are washed and treated with Nonidet P-40, a detergent, then incubated with an anti-phosphotyrosine primary antibody. The secondary antibody is linked to BODIPY, a fluorophore that allows visible detection of bound antibodies. During antibody incubation, cells are bathed in a bovine albumin solution to discourage non-specific antibody binding.

Results Obtained Behavioral assays of P. tetraurelia showed that the polycations lysozyme, PACAP (1–38, 1–27, and 6–27), and VIP were all effective chemorepellents in the micromolar range, with EC100 values of 10 µM for all of the PACAP isoforms and 50 µM for VIP (Fig. 1A). In contrast, lysozyme had an EC100 value of 0.5 µM (Fig. 1A; Kim et al. 1997), making it by far the most effective chemorepellent in our current study. These data are quite a contrast to the chemoresponse profile previously seen in T. thermophila, where lysozyme and VIP have EC100 values of 100 µM and the various PACAP isoforms have an EC100 value of 0.1 µM (100 nM) (Fig. 1B; Keedy et al. 2003). Cross-adaptation studies in Paramecium indicated that these polycations may share a receptor or signaling pathway. Cells adapted to an EC100 concentration of either lysozyme (0.5 µM) or VIP (50 µM) showed baseline avoidance to EC100 concentrations (10 µM) of PACAP 1–27 or 1–38. The opposite sequence also resulted in adaptation. In order to determine whether these compounds signal through a G-protein in Paramecium, we exposed cells to 1 µM GDP-β-S for at least 2 h before exposing them to various chemorepellents. A concentration of 1 µM GDP-β-S had no marked effect on avoidance to EC100 concentrations of lysozyme (Table 2). GDP-β-S also failed to inhibit avoidance to VIP and PACAP 1–38. Because of the limited cell permeability of GDP-β-S, we used another G-protein inhibitor, G-protein antagonist 2, to determine whether it would block avoidance to lysozyme. This drug had no marked effect on lysozyme avoidance when used at a concentration of 100 µM, consistent with the GDP-β-S data (Table 2).

Avoidance to VIP and PACAP 1–38 was also unaffected by this inhibitor. For the sake of comparison, we attempted to inhibit lysozyme, PACAP, and VIP avoidance in Tetrahymena using this drug. Unfortunately, this drug proved to be lethal to Tetrahymena (Table 2). In Tetrahymena, the pharmacological evidence suggests that the polycation signal is mediated by adenylyl cyclase (Hassenzahl et al. 2001), and avoidance to lysozyme and PACAP is reduced to baseline by the addition of Rp-cAMPs, a membrane-permeable, competitive inhibitor of cAMP (Table 2; Hassenzahl et al. 2001). We attempted to inhibit avoidance to lysozyme using 100 µM Rp-cAMPs in P. tetraurelia, but there was no marked effect on lysozyme avoidance in this organism (Table 2). Avoidance to VIP and PACAP 1–38 was also unaffected by Rp-cAMPs (HK., unpubl. data).

We used the tyrosine kinase inhibitor, genistein, to block lysozyme avoidance in Paramecium. Although this compound has no marked effect on lysozyme avoidance in Tetrahymena (Table 2), we found that a genistein concentration of 0.1 mg/ml was sufficient to bring lysozyme avoidance down to baseline levels in Paramecium (Table 2). In contrast, the control phytoestrogen, daidzein, had no effect on lysozyme avoidance at the concentrations tested (Table 2). Interestingly, genistein concentrations as high as 100 mg/ml had no marked effect on avoidance to VIP or PACAP 1–38 in Paramecium (HK., unpubl. data).

We found that neomycin sulfate effectively blocked lysozyme avoidance in P. tetraurelia as well (Table 2). A concentration of 15 µM neomycin was sufficient to reduce lysozyme avoidance to baseline levels. Avoidance to PACAP 1–38 and VIP was also antagonized by neomycin. However, the concentrations required to reach baseline avoidance were higher, near 100 µM for both of these compounds.

We are also currently conducting immunofluorescence studies with Paramecium. Washed cells are exposed to lysozyme at EC100 concentrations. Immediately afterward, these live cells are fixed using a 3.7% formalin solution.

The fixed cells are then treated with Nonidet P-40, a detergent that allows the antibodies easier access to the interior of cells. After detergent incubation, cells are exposed to the primary and secondary antibodies. Fluorescence photography revealed antibody binding focused in the basal bodies of extracellular cilia, as was expected. However, a consistently measurable difference in fluorescence intensity was not seen between control and lysozyme-treated cells (Fig 2). Genistein, the tyrosine kinase inhibitor, was used with the same protocol to reduce background tyrosine kinase activity. Using this procedure, a noticeable difference was seen between the control and lysozyme groups. However, more data are needed to draw a credible conclusion.

Significance and Interpretation of Results Our data show that P. tetraurelia and T. thermophila exhibit a number of similarities in their polycation signaling pathways, as well as some surprising differences. Our data confirm the hypothesis that lysozyme, VIP, and PACAP are all chemorepellents in Paramecium. Based on our cross-adaptation data, we believe that these three polycations share a receptor in Paramecium, as previously hypothesized in Tetrahymena. Polycation avoidance in both organisms is also inhibited by neomycin sulfate (Kuruvilla et al. 1997; current study). However, lysozyme signaling in Paramecium appears to be mediated through a tyrosine kinase pathway, as suggested by our genistein data. In contrast, the Tetrahymena response appears to be mediated through a G-protein coupled pathway, which is inhibited by pertussis toxin and GDP-β-S (Hassenzahl et al. 2001).

Adaptation in our behavioral assays refers to a loss of behavioral avoidance when an organism is exposed to EC100 concentrations of a chemorepellent for a time period of about 15 min (Kim et al. 1997; Kuruvilla et al. 1997). This behavioral response is believed to occur due to loss of "functional" receptors as determined by Scatchard analysis (Kim et al. 1997; Kuruvilla et al. 1997), possibly due to phosphorylation or other modifications, but not due to endocytosis (Cantor et al. 1999). Cross-adaptation, then, results when cells that have been depleted of functional receptors by one type of repellent no longer respond to EC100 concentrations of another repellent. This implies that the two repellents use a shared receptor. In many cases, because the receptor is shared, the second messenger pathway will also be common to both repellents.

Intriguingly, the concentrations of genistein that we used in our study to inhibit lysozyme avoidance failed to inhibit avoidance to either VIP or PACAP 1–38 (HK., unpubl. data). Despite this fact, our cross-adaptation data and the fact that neomycin sulfate effectively eliminated avoidance to all three ligands support the existence of a single membrane receptor that binds all three polycations. Neomycin sulfate has been shown in previous studies to be a competitive inhibitor of lysozyme binding to its receptor in Tetrahymena (Kuruvilla et al. 1997), as well as an antagonist of lysozyme-induced depolarization in Paramecium (Hennessey et al. 1995), where it is also purported to act via competitive inhibition. If, in fact, neomycin is blocking depolarization by competing for the proposed lysozyme receptor in Paramecium, then our neomycin data support the evidence for a shared polycation receptor in this organism. Indeed, the fact that higher concentrations of neomycin are required to block VIP and PACAP compared with lysozyme (about 100 µM compared with 15 µM), when lysozyme has a much lower EC100 than VIP or PACAP in Paramecium, suggests competitive inhibition as a mechanism of action. Because a 15-min exposure to neomycin was used in our assay, we cannot rule out the possibility that it is having effects on cell metabolism, for example, inhibiting the turnover of PIP2.

If lysozyme, VIP, and PACAP are all binding to the same membrane receptor, as our data suggest, why doesn't genistein inhibit avoidance to all three polycations? Several possibilities exist. One possible explanation is that genistein would block VIP and PACAP avoidance if used at sufficiently high concentrations. However, the fact that avoidance to lysozyme was blocked at genistein concentrations of 0.1 mg/ml while exposure to 100 mg/ml genistein failed to block avoidance to PACAP and VIP (HK., unpubl. data) casts doubt on this explanation. An alternative explanation is that although all three ligands bind to the same receptor, the ligands with higher EC100 values use an alternative mechanism in addition to the tyrosine kinase pathway. An example of this might be receptor aggregation that would activate a voltage-gated calcium-based depolarization independently of the tyrosine kinase pathway. Voltage-gated calcium channels have previously been linked to ciliary reversal. In this scenario, binding of PACAP or VIP to membrane receptors would activate a tyrosine kinase; however, blockage of the kinase via genistein would not be sufficient to inhibit avoidance. A competitive inhibitor of receptor binding, such as neomycin, would still inhibit avoidance to all of these polycations.
This scenario is consistent with our results. Additional experiments are currently underway to test this hypothesis and to gain more insight into the tyrosine kinase signaling mechanism in Paramecium. Immunofluorescence data so far have pinpointed phosphotyrosine presence, but have not yet yielded conclusive results (Fig 2). We intend to continue testing these hypotheses (voltage-gated channels, etc.) using both additional immunofluorescence assays and Western blots. Western blots will allow us to see whether tyrosine kinase is still activated by PACAP or VIP at high genistein concentrations, and will also allow us to compare the size of Paramecium's tyrosine kinase to other known kinases. These further experiments will help us to determine whether a tyrosine kinase is involved in signaling through all three polycations, and whether other second messenger pathways are implicated. Additional information about chemorepellent signaling in protozoans may help us better understand how chemorepellents function in the organisms' native habitat, as well as allowing us to make comparisons between the mechanisms of chemorepellent signaling in lower eukaryotes and multicellular organisms.

Figures and Tables

Figure 1. Polycation avoidance in Paramecium and Tetrahymena. Behavioral assays (see Methodology Used) indicate that both Paramecium (A) and Tetrahymena (B) avoid vasoactive intestinal peptide (VIP; circles), pituitary adenylate cyclase activating polypeptide (PACAP) 1-27 (squares), PACAP 1-38 (diamonds), PACAP 6-27 (hexagons), and lysozyme (stars). Each point represents the mean ± SD of six trials. Each trial consisted of 10 cells which were scored as either (+) or (-) for avoidance. A. The EC100 of each ligand in Paramecium is 50 µM for VIP, 10 µM for PACAP 1-27 and 1-38, and 0.5 µM for lysozyme. The EC100 of PACAP 6-27 could not be reached because of its toxicity. The EC50 of these polycations in Paramecium was approximately 3 µM for VIP, 0.75 µM for all PACAP isoforms, and 0.1 µM for lysozyme. B. The EC100 of each ligand in Tetrahymena is 100 µM for VIP, 0.1 µM for all PACAP isoforms, and 100 µM for lysozyme. The EC50 of these polycations in Tetrahymena was approximately 7.5 µM for VIP, 0.05 µM for all PACAP isoforms, and 50 µM for lysozyme (see also Keedy et al. 2003; used with publisher's permission).

Table 2. Effect of various pharmacological inhibitors on avoidance to 100 µM lysozyme in Tetrahymena thermophila and 0.5 µM lysozyme in Paramecium tetraurelia.

Figure 2. Immunofluorescence of lysozyme-exposed and non-exposed cells.

References
1. Bartholomew, J., Reichart, J., Mundy, R., Recktenwald, J., Keyser, S., Riddle, R. & Kuruvilla, H. 2007. GTP avoidance in Tetrahymena thermophila requires tyrosine kinase activity, intracellular calcium, NOS, and guanylyl cyclase. Purinergic Signalling, in press.
2. Cantor, J. M., Mace, S. R., Kooy, C. M., Caldwell, B. D. & Kuruvilla, H. G. 1999. Adaptation to lysozyme does not occur via receptor-mediated endocytosis in Tetrahymena thermophila. WWW J. Biol., 4(6). Available online: http://www.epress.com/w3jbio/vol4/cantor/paper.htm
3. Hassenzahl, D. L., Yorgey, N. K., Keedy, M. D., Price, A. R., Hall, J. A., Myzcka, C. C. & Kuruvilla, H. G. 2001. Chemorepellent signaling through the PACAP/lysozyme receptor is mediated through cAMP and PKC in Tetrahymena thermophila. J. Comp. Physiol. A, 187: 171-176.
4. Hennessey, T. M., Kim, M. Y. & Satir, B. H. 1995. Lysozyme acts as a chemorepellent and secretagogue in Paramecium by activating a novel receptor-operated Ca++ conductance. J. Memb. Biol., 148: 13-25.
5. Keedy, M., Yorgey, N., Hilty, J., Price, A., Hassenzahl, D. & Kuruvilla, H. 2003. Pharmacological evidence suggests that the lysozyme/PACAP receptor of Tetrahymena thermophila is a polycation receptor. Acta Protozool., 42: 11-17.
6. Kim, M. Y., Kuruvilla, H. G. & Hennessey, T. M. 1997. Chemosensory adaptation in Paramecium involves changes in both receptor binding and the consequent receptor potentials. Comp. Biochem. Physiol., 118A(3): 589-597.
7. Kim, M. Y., Kuruvilla, H. G., Raghu, S. & Hennessey, T. M. 1999. ATP reception and chemosensory adaptation in Tetrahymena thermophila. J. Exp. Biol., 202: 407-416.
8. Kohidai, L., Kovacs, K. & Csaba, G. 2002. Direct chemotactic effect of bradykinin and related peptides - significance of amino- and carboxyterminal character of oligopeptides in chemotaxis of Tetrahymena pyriformis. Cell Biol. Internat., 26(1): 55-62.
9. Kuruvilla, H. G., Kim, M. Y. & Hennessey, T. M. 1997. Chemosensory adaptation to lysozyme and GTP involves independently regulated receptors in Tetrahymena thermophila. J. Euk. Microbiol., 44(3): 263-268.
10. Kuruvilla, H. G. & Hennessey, T. M. 1998. Purification and characterization of a novel chemorepellent receptor from Tetrahymena thermophila. J. Memb. Biol., 162: 51-57.
11. Kuruvilla, H. G. & Hennessey, T. M. 1999. Chemosensory responses of Tetrahymena thermophila to CB2, a 24-amino-acid fragment of lysozyme. J. Comp. Physiol. A, 184: 529-534.
12. Szabö, R., Mezö, G., Hudecz, F. & Köhidai, L. 2002. Effect of the polylysine-based polymeric polypeptides on the growth and chemotaxis of Tetrahymena pyriformis. J. Bioactive Compatible Polymers, 17(6): 399-415.

Shuttle 'Copter

Student Researcher: Ericka M. Harris

Advisor: Dr. Paul C. Lam

The University of Akron Department of Education

Abstract The goal of the activity Shuttle 'Copter is for students to explore the concepts of rate and resistance. Students will be given small paper helicopters that fall in a way that is similar to leaves and other objects in nature. After reviewing and discussing basic information about how space shuttles land, students will conduct their own rate-of-descent experiments. They will work in pairs and graph the rate of their 'copter as it falls a set distance to the floor. After their initial trials, they will continue to test by changing variables, including adding paper clips to their models for extra weight. Once the trials have been completed, students will work with a formula to discover the rate of descent. The lesson ends with a discussion of how different variables affect the speed of falling objects. The lesson source is NASAexplores.

Lesson The fourth grade class that participated in this lesson was first questioned as to what they knew about the landing of planes and space shuttles. The students that knew the differences in the landing procedures shared the information with the rest of the class. Next, the students listened to “Coming in for a Landing,” an article that explains the precautions needed for safely landing a space shuttle. Once students had the background knowledge needed for the experimental portion of the lesson, the instructions were explained. Learners were shown how to create their own “’copter” to demonstrate rate of descent and variables that may cause different speeds. Students were placed in groups and carried out several trials of dropping their ‘copters and recording the time taken to land on the floor. Learners in the class changed variables while experimenting, including making the ‘copter heavier by adding paper clips and folding the wings into different positions. The groups came up with average times for landing for each variable tested. They were then asked to share their results with their classmates. All of the data was written on the board and the class discussed possibilities for the differences in results.
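The rate-of-descent formula the students worked with reduces to distance divided by time. A small sketch of the calculation (the drop height and trial times below are made-up examples, not the class's actual data):

    # Average rate of descent = drop distance / fall time.
    # Values below are illustrative examples, not the class's measurements.
    drop_height_m = 2.0                 # fixed drop distance, meters
    trial_times_s = [1.8, 1.9, 1.7]     # timed falls for one 'copter, seconds

    avg_time = sum(trial_times_s) / len(trial_times_s)
    rate = drop_height_m / avg_time
    print(f"average descent rate: {rate:.2f} m/s")

Adding paper clips shortens the fall times, which raises the computed rate, matching the class's conclusion that heavier 'copters land more quickly.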

Objectives Students will review any previous knowledge about the landing of space shuttles. Students will perform experiments with hand-made ‘copters and record their data. Students will analyze the data and discuss any discrepancies between the average results.

Standards Alignment

Math
Patterns, Functions and Algebra Standard Benchmark for Grades 3-4
G. Describe how a change in one variable affects the value of a related variable.

Data Analysis and Probability Standard Benchmark for Grades 3-4
A. Gather and organize data from surveys and classroom experiments, including data collected over a period of time.
E. Describe data using mode, median and range.

Science
Physical Sciences Standard Benchmark for Grades 3-5
C. Describe the forces that directly affect objects and their motion.
Scientific Inquiry Benchmark for Grades 3-5
A. Use appropriate instruments safely to observe, measure and collect data when conducting a scientific investigation.
B. Organize and evaluate observations, measurements and other data to formulate inferences and conclusions.
C. Develop, design and safely conduct scientific investigations and communicate the results.

Underlying Theory This lesson was completed with an overall constructivist approach, with direct instruction at the end for the review of data. In order for students to gain the most from this type of experiment, it is critical that they are actively engaged in all aspects: measuring, performing, recording, and analyzing. Since students were immersed in the lesson and asked to draw their own conclusions, they were able to benefit from their own investigation and experience. By approaching students directly during the review, the teacher is able to clarify any misunderstood concepts and ensure that students have reached accurate conclusions.

Student Engagement Active student involvement was necessary and used along with the constructivist approach of the lesson. Learners were engaged during all parts of the lesson, as they were able to create, manipulate, and record data about their own ‘copter. Participation was very high and this was likely due to the hands-on nature of the task.

Resources Students were placed in groups of four or five and each group was given a set of materials. The supplies included: ‘copter template, yardstick, scissors, paper, pencil, and paper clips. Members of the groups decided on who would have which jobs and used the resources appropriately. They worked together so that each classmate was either recording data, timing the object, dropping the object, or planning the next variable to be tested.

Results At the close of the lesson, the students reported their average times to the class and teacher. There was a wide range of results, and the class discussed possible reasons for the differences. They concluded that each team may have made unique errors that caused discrepancies. Overall, the fourth grade students were able to conclude that the heavier objects landed more quickly and that variations on the "wings" of the 'copter also significantly affected the results of each trial.

Assessment Students were informally assessed throughout the lesson. Their participation, following of instruction, and conclusions were all observed. Students were required to write the data from their experiments as a record of their work. This record allowed me to be able to see how well they understood what they were doing and it also showed me how wisely they used their time, as some students were unable to complete all of the required trials.

Conclusion Fourth grade students are able to benefit from the hands-on activities offered through the NASA website. Although students are typically encouraged to read to learn at this stage, the benefits of actively participating in an experiment can pique student interest in ways that reading about a certain topic cannot. Students were able to discuss their previous knowledge about the landing of space shuttles, gain a visual understanding of how variables can affect the landing of an object, calculate average times, and make personal conclusions based upon their own collected data. The students remarked that they enjoyed the physical and fun nature of the math and science activity.

Proportions in Our Galaxy

Student Researcher: Beth L. Hegarty

Advisor: Miss Sarah Gilchrist

Cedarville University Department of Science and Mathematics

Abstract Our universe is a vast system of stars, planets, and galaxies, all moving in intricate patterns and paths that scientists have only just begun to explore. Sometimes the planet we live on can seem so big that we forget the size of the universe our planet resides in. We are going to explore proportions that relate math to the size of our galaxy.

Project Objectives This lesson is designed to accomplish three different tasks:

1. To let students participate in a lesson that is interactive and applicable to life. This project will give students a way to apply simple math concepts that are learned within the classroom to a project that is interesting and exciting to them, and will capture and maintain their interest.

2. To help students understand how math relates to other subjects. Math and science are inter-related, and this project can help students see how math relates to astronomy, as well as all branches of science.

3. To aid students in understanding the vastness of the Milky Way by simple math. Oftentimes, we become consumed by the (perceived) enormity of the planet earth. This activity will help students realize that the world they live in is barely a fraction of the entire universe that contains our world.

Methodology Used 1. Allow students to work through the chart on scale factors (See Worksheet 1 below). Students will need to use calculators to complete this worksheet. (Note: Values shown in italics are to be filled in by students).

2. Have students compare results with each other, to make certain they are correct, and allow students to fix any errors.

3. Ask students to search for a different scale factor, one that would be easier to create a model from. (Note: I have suggested one scale factor and filled in the data for it. Students are free to fill in any value they choose).

4. Hand out the volume of planets worksheet (See Worksheet 2 below), and have students complete it in small groups of 2-3, with a new scale factor that is the average of their earlier scale factors. (Note: There are several different ways for students to fill out this worksheet; allow them to do so in whatever way they choose).

5. Discuss how proportions work in scaling down our galaxy, and how they can help us understand the size of different things.

Results Obtained/Significance and Interpretation of Results As I have not done actual research or had an opportunity to present this lesson in a classroom, I have no results to describe or interpret.

Figures/Charts

Worksheet 1

Planet     Scale Factor   Mini Version (cm)   Mini Version (km)   Actual Diameter (km)   New Scale Factor   New Mini-Version
Sun        62,500,000     2240 cm             0.0224              1,400,000              100,000,000        1400 cm
Mercury    62,500,000     8 cm                7.81 x 10^-5        4,880                  100,000,000        4.88 cm
Venus      62,500,000     19 cm               1.94 x 10^-4        12,100                 100,000,000        12.1 cm
Earth      62,500,000     20 cm               2.04 x 10^-4        12,740                 100,000,000        12.7 cm
Moon       62,500,000     6 cm                5.56 x 10^-5        3,476                  100,000,000        3.48 cm
Mars       62,500,000     11 cm               1.09 x 10^-4        6,794                  100,000,000        6.79 cm
Jupiter    62,500,000     229 cm              2.29 x 10^-3        143,200                100,000,000        143.2 cm
Saturn     62,500,000     192 cm              1.92 x 10^-3        120,000                100,000,000        120 cm
Uranus     62,500,000     83 cm               8.29 x 10^-4        51,800                 100,000,000        51.8 cm
Neptune    62,500,000     73 cm               7.29 x 10^-4        49,500                 100,000,000        49.5 cm
Pluto      62,500,000     4 cm                4.00 x 10^-5        2,500                  100,000,000        2.5 cm
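The diameter scaling in Worksheet 1 is simply the actual diameter divided by the scale factor. A quick Python check of a few rows (not part of the lesson itself; values come from the worksheet):

    # Mini version (cm) = actual diameter (cm) / scale factor.
    # Diameters below are the worksheet's values, in km.
    diameters_km = {"Sun": 1_400_000, "Earth": 12_740, "Jupiter": 143_200}

    def mini_cm(actual_km, scale_factor):
        return actual_km * 100_000 / scale_factor  # 1 km = 100,000 cm

    for planet, d in diameters_km.items():
        print(planet,
              round(mini_cm(d, 62_500_000), 1), "cm at 1:62,500,000;",
              round(mini_cm(d, 100_000_000), 1), "cm at 1:100,000,000")

Running this reproduces the worksheet entries, e.g. the Sun at 2240 cm and 1400 cm for the two scale factors.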

Worksheet 2

Planet     Your Average Scale Factor   Mini-Volume (km^3)   Actual Volume (km^3)
Sun        81,250,000                  5.62 x 10^9          4.57 x 10^17
Mercury    81,250,000                  237.5                1.93 x 10^10
Venus      81,250,000                  3630.77              2.95 x 10^11
Earth      81,250,000                  4346.25              3.45 x 10^11
Moon       81,250,000                  86.03                6.99 x 10^9
Mars       81,250,000                  643.69               5.23 x 10^10
Jupiter    81,250,000                  6.02 x 10^6          4.89 x 10^14
Saturn     81,250,000                  3.54 x 10^6          2.88 x 10^14
Uranus     81,250,000                  2.86 x 10^5          2.32 x 10^13
Neptune    81,250,000                  2.49 x 10^5          2.02 x 10^13
Pluto      81,250,000                  32                   2.60 x 10^9

Analysis of Drag on a Model Rocket

Student Researcher: Douglas J. Hoersten

Advisor: Dr. Jed E. Marquart

Ohio Northern University Mechanical Engineering Department

Abstract An experimental study was performed to compare the coefficient of drag on a model rocket at different cross sections using varying speeds as inputs. First, the dimensions of the rocket were measured, and a symmetrical half of its outline was drawn in the computer program Gambit [1]. This meshed outline was written to a case for use in the computer program Fluent [2]. A range of air speeds was set as inlets, and the corresponding coefficients of drag were calculated through Fluent. These results yielded a wide variety of drag coefficients, which are compared using graphs and tables.

Project Objectives The objective of this project is to calculate the coefficient of drag on a model rocket. The coefficients of drag will be compared based on the corresponding velocity inlets.

Methodology Used First, the radius is measured every quarter of an inch along the body of the rocket using a vernier caliper. These radius values are recorded and input into a grid in Gambit. The points in Gambit are connected with a line representing the body of the rocket. A rectangular "box," representing a wind tunnel, is drawn around the outline of the rocket. Enough area was left between the rocket and the walls of the wind tunnel to ensure that the walls would not affect the flow of the air. The left edge of this tunnel is set as a velocity inlet to represent the speed at which the rocket is traveling. A small edge on the engine protrusion of the rocket is also set as a velocity inlet to represent the exhaust velocity that is propelling the rocket. The right side of the tunnel is set as a pressure outlet to allow the pressure caused by the increasing velocity to leave the wind tunnel. The file is then meshed to create points where the data will be taken. This completed Gambit file is then exported, to be used as a case in Fluent. In Fluent, the boundary edges created in Gambit are given the velocity inputs as boundary conditions. These velocity inputs, for both the rocket's freestream and exhaust velocity, are varied to calculate the coefficient of drag for different velocities along the path of the rocket. The coefficient of drag was calculated by Fluent, using iterations, and recorded in a table for later analysis.

Results Obtained I obtained results for the coefficient of drag that ranged from approximately 11 to slightly under 27, based on the velocities input for the air speed and the rocket exhaust. These coefficients of drag are shown in a table as well as in a graph in order to compare the changes in coefficients when the velocities change.

Significance and Interpretation of Results As seen in the graph below, the coefficient of drag increases for lower freestream velocities while the exhaust velocity remains constant. There is a slight decrease at a freestream speed of approximately 160 ft/s for rocket exhausts of 100 ft/s and 0 ft/s; however, the overall trend is an increase. Similarly, the coefficients increase with higher constant exhaust velocities, and at a higher rate. This is seen in the slope of each trend for a constant exhaust velocity. Therefore, these results show that higher air speeds produce lower drag coefficients. Similarly, since the equation for the coefficient of drag squares the velocity value, the coefficient of drag will increase at a higher rate when the constant exhaust speed is increased.

Interpreting these results, I conclude that using the highest speed possible will produce the lowest coefficient of drag; however, the speed must be limited to prevent failure in the rocket and to ensure that the rocket follows the desired path. Also, it is ideal to use the lowest engine exhaust velocity possible, since a higher exhaust velocity will cause a rapidly increasing coefficient. Results from this report are significant because reducing the drag on objects moving through a fluid reduces the amount of force required to move the object, which saves energy in the long run.

As the graph below also shows, at a high freestream velocity the velocity of the exhaust has less of an effect on the coefficient of drag. This means that, eventually, the freestream velocity will be the main factor in determining the coefficient of drag, no matter what exhaust speed is input. At lower air speeds, though, the coefficients of drag do depend on exhaust velocity. Also, as seen in the Velocity Vector and Contour images on my poster board, the maximum amount of drag occurs at the tip of the rocket.
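The report does not state the exact formula Fluent applies, but the standard definition of the drag coefficient (drag force normalized by dynamic pressure and reference area) shows why the squared velocity term dominates. A sketch with made-up numbers, not values from the Fluent runs:

    import math

    def drag_coefficient(drag_force, density, velocity, ref_area):
        """Standard definition: Cd = F / (0.5 * rho * v^2 * A).
        The v**2 term is why higher freestream speeds drive Cd down
        for a given drag force."""
        return drag_force / (0.5 * density * velocity**2 * ref_area)

    # Illustrative example values only (not from the study):
    rho = 1.225                    # air density, kg/m^3
    area = math.pi * 0.02**2       # frontal area of a 4 cm diameter rocket, m^2
    force = 0.5                    # assumed constant drag force, N
    for v in (20.0, 40.0, 80.0):
        print(v, "m/s ->", round(drag_coefficient(force, rho, v, area), 2))

Doubling the velocity while holding the force fixed cuts the computed coefficient by a factor of four, which mirrors the trend described above.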

Figures/Charts

Figure 1. Graphical Comparison of Results.

Acknowledgments and References
1. Gambit 2.4.6, Grid-Based Application.
2. Fluent 6.3.26, CFD Code-Based Application.

Solar System Unit Plan

Student Researcher: Dana L. Hollis

Advisor: Dean Jane A. Zaharias

Cleveland State University Education Department

Abstract In order to meet the eighth grade Ohio Science Content Standards, I created a four-week unit plan about the solar system using resources and ideas that I obtained from the NASA Glenn Research Center. In order to educate my students effectively, I created a pre-assessment for my students, used the results from this assessment to tailor the unit plan, and then re-assessed my students to see what they had gained from the unit plan.

Ohio Science Content Standards: Earth and Space Sciences: The Universe • Describe how objects in the solar system are in regular and predictable motions that explain such phenomena as days, years, seasons, eclipses, tides and moon cycles. • Describe the life cycle of a star

Student Objectives: • The student will be able to explain how the solar system was formed. • The student will be able to name and describe the planets. • The student will be able to describe the life cycle of a star. • The student will be able to describe the different phases of the moon

Day 1: Pre-Assessment
Day 2: Introduction: How was the solar system formed?
• Read "How the solar system was formed?" as a class.
• Classwork/Homework: Vocabulary Worksheet, Multiple Choice Exit Quiz
Day 3: Hands-on Activity: A walk through our solar system, adapted from COSI experience
Day 4: Solar System Video
Days 5-10: Project Research Introduction
Days 11-13: Group Presentations
Day 14: Celestial Body Review
Days 15-16: The Life Cycle of a Star
• Engage Activity: Star Cycle: Watch NASA video clip of the life cycle of a star
• Read "The life cycle of a star"
• Classwork/Homework: Vocabulary Worksheet, Multiple Choice Exit Quiz
Days 17-18: Moon Phases
• Engage Activity: Moon Phases, adapted from NASA – Eclipse 99 Activity
Days 19-20: Post-Assessment

The above unit plan is a shortened version of the lesson that I taught within my classroom. In order to utilize my resources from NASA, I used numerous activities in my plans that I obtained through NASA. These hands-on activities engaged my students and motivated them to want to learn more about the solar system. NASA provided me with lithographs of the solar system and the moon phases, and many educational posters for my classroom. Many of my students are visual learners, and these helped them greatly to better understand the content of the unit plan.

It was imperative to me to include technology in my unit plan in a variety of ways. The students were able to view the video clip from the NASA website on the SmartBoard. The students were required to use the NASA website when preparing their research report, poster, and/or PowerPoint. This allowed them not only to search the internet but also to become familiar with the NASA website. As previously mentioned, the students were given a pre-assessment before the unit plan; after the unit plan they were given the same assessment (see results below). Using this pre-assessment, I was able to conclude that the students were somewhat familiar with the planets but not at all familiar with how the solar system was formed, the moon phases, or the life cycle of a star.

The research project allowed the students to review and refresh the knowledge they already had about the planets. The project allowed them to utilize the resources within their classroom: the internet, the SmartBoard, Microsoft Word, and PowerPoint. They were also able to use the lithographs and posters provided to me through NASA. It was imperative for me to present the formation of the solar system, the moon phases, and the life cycle of a star in a fun and motivating manner. The video clip and the hands-on activities that I found on the NASA website were essential in this process. They engaged the students and promoted further learning and interest in the topics.

All the students showed evidence of learning, with an average improvement of 30%, which is significant overall. These results show that the use of my research and resources from NASA was effective within my classroom. The role this research and these resources played within my unit plans and in my classroom was an important part of the learning process for my students. From this unit plan, my students became familiar with NASA and what it has to offer in the areas of science and technology. They continue to utilize the NASA website, as will I in my future unit plans.

Assessment Results

Student Name   Pre-Assessment Grade   Post-Assessment Grade   Improvement %
Alex           73%                    93%                     20%
Antanise       51%                    84%                     33%
Anthony        51%                    78%                     27%
Brittnay       40%                    76%                     36%
Clay           67%                    89%                     22%
Davonte        13%                    67%                     54%
Donovan        27%                    80%                     53%
Kaylee         51%                    78%                     27%
Kaysha         71%                    98%                     27%
Kylan          29%                    53%                     24%
Mason          69%                    93%                     24%
Michael        47%                    62%                     15%
Onix           78%                    96%                     18%
Roger          67%                    80%                     13%
Shyanne        53%                    87%                     33%
Teionna        56%                    78%                     22%
Wilson         7%                     62%                     56%
MEAN           50%                    80%                     30%
MODE           51%                    93%                     33%
MEDIAN         51%                    80%                     27%

Design of High-pass Filter to Improve the Quality of Sound and Desired Frequency

Student Researcher: D’Nita M. Howard

Advisor: Dr. Edward Asikele

Wilberforce University Computer Engineering Department

Introduction Generally, machines, equipment, and vehicles are nothing but well put together parts. For example, an audio speaker consists of wiring, wire coils, voice coils, surrounds, etc. One important part of a speaker is its filter. Within electronics, a filter is basically a device that separates unwanted signals, such as certain frequencies, from signals that are wanted. An electronic filter is an electronic circuit that processes signals, specifically intended to enhance wanted signal components. Such electronic filters can be passive or active, analog or digital, discrete-time or continuous-time, linear or non-linear, and have infinite or finite impulse responses.

One of the oldest forms of electronic filters is the passive analog linear filter, which is constructed using only resistors and capacitors or inductors. There are several different pass-band filters which transmit frequencies, used for example in speakers, television sets, or car stereos. These filters are low-pass filters, high-pass filters, band-pass filters, band-stop filters, and all-pass filters. High-pass filters are among the most common because they give a better quality of sound and can be found anywhere.

Even though a filter within itself passes only what is needed, all filters do not do the same thing. A low-pass filter passes low frequency signals, but reduces the amplitude of signals with frequencies higher than the cutoff frequency. There are many different names for this type of filter, such as high-cut filter or treble-cut filter. Low-pass filters appear in different forms, from electronic circuits used in subwoofers, to acoustic barriers, to digital algorithms used to smooth data sets.

A band-pass filter is a device used to pass frequencies within a desired range and reject frequencies outside it. This type of filter can simply be created by combining a low-pass filter and a high-pass filter. A band-stop filter, otherwise known as a band-rejection filter, passes most frequencies unaltered while attenuating those in a specific range to very low levels.

Description One very popular filter is the high-pass filter. A high-pass filter passes high frequencies well, but attenuates frequencies lower than the cutoff frequency. The cutoff simply refers to a boundary point where energy is reflected instead of transmitted. Basically, the filter creates an easy passage for high frequency signals and a difficult passage for low frequency signals. For example, a high-pass filter with a very low cutoff frequency blocks unwanted DC (direct current) from a signal. This type of filter is called a DC blocking filter.

The simplest electronic high-pass filter consists of a capacitor, a device that stores energy in an electric field between two plates, in series with the signal path, and a resistor, a component that opposes an electric current by producing a voltage drop, in parallel with the signal path. The capacitor blocks direct current between the input and the output. The capacitor's impedance increases as frequency decreases, which helps block signals of lower frequency. Mathematically speaking, the resistance multiplied by the capacitance gives the time constant, which is inversely proportional to the cutoff frequency, the frequency at which the output power becomes half the input power:

f = 1 / (2πτ) = 1 / (2πRC)

Experiment or Diagram
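Before looking at the graphs, the cutoff relation above can be checked numerically. The component values in this sketch are arbitrary examples, not values from a particular circuit:

    import math

    def cutoff_frequency(resistance_ohms, capacitance_farads):
        """f = 1 / (2 * pi * R * C) for a first-order RC high-pass filter."""
        return 1.0 / (2 * math.pi * resistance_ohms * capacitance_farads)

    # Example: R = 10 kOhm, C = 100 nF (arbitrary values)
    print(round(cutoff_frequency(10e3, 100e-9), 1), "Hz")  # ~159.2 Hz

Doubling either R or C doubles the time constant and halves the cutoff frequency, which is the inverse relationship described above.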

This first graph depicts output power versus frequency. Below the cutoff point, the output power is strongly attenuated; beyond it, output power increases with frequency. Even though the signal quality is greater as frequency increases, the lower frequencies are still passed at a reduced level.

This image represents the four main filters and their differences. The first is a low-pass filter, the second is a high-pass filter, the third is a band-pass filter, and the last is a band-stop filter. Each filter is shown both as a graph and as the circuit would look when built.

The characteristics of a filter can be shown on a graph called a Frequency Response Curve.

Uses High-pass filters have a very common construction and are extremely important in electronic design. They are usually found in everyday home appliances and a variety of other electronics. For example, high-pass filters are found in televisions, digital image processors, AM/FM tuners, tweeter speakers, etc. When used in digital image processing, they perform transformations in the frequency domain. Image processing basically takes a photograph or a video as input and produces an output, such as an enlargement, reduction, or rotation, related to the image or video.

Conclusion High-pass filters provide a better quality of sound and frequency in a person's everyday use of appliances and electronics.

The Infrastructure and Security Advantages of Biometric Technology

Student Researcher: Royshawnn Q. Hunt

Advisor: Dr. Edward Asikele

Wilberforce University Computer Information Systems

Abstract Since September 11, 2001, there has been a great deal of interest in using biometrics for verification of identity. Unlike typical identification methods, which require a person to have some type of identification card, personal identification number (PIN), or password, biometric information is part of a person. Biometric identifiers are thought to be more reliable and not easily forgotten, lost, stolen, falsified, or guessed. This is because a biometric identifier relies on unique biological information about a person.

Biometrics is the science of measuring, recording, and applying physical characteristics for the purpose of identification. Although the biometrics market is immature, biometric technology is mature and usable. Biometrics takes many forms, including face recognition, finger and palm-print recognition, finger/hand/face geometry, iris and retina recognition, voice recognition, and written signature recognition.

Project Objectives Objectives for this research project were to: extensively research biometric technology to distinguish and characterize the types of biometric technologies, determine which biometric technology is the most secure and fault tolerant, gain further knowledge of biometric technology, and identify present and future applications in which biometric technology is being used.

Methodology Used The method used for gathering information pertaining to biometric technology consisted of researching through the local public library reference catalog and computer database search engines such as Google, Yahoo, OhioLink, and SciNet. The reference material found in books and magazines was first examined for explanations of biometric technology as well as for evaluations and applications. Reference materials from books and magazines were then examined again to compare with online references for possible updated information on biometric technology.

Results Obtained Biometric technologies are positioning themselves as the foundation for many highly secure identification and personal verification solutions. Currently, biometric solutions provide a means to achieve fast, user-friendly authentication with a high level of accuracy. Biometric-based solutions are able to provide confidential financial transactions and personal data privacy. The need for biometrics can be found in federal, state, and local governments, in the military, and in commercial applications. Enterprise network security infrastructures, government IDs, electronic banking, investing and other financial transactions, retail sales, law enforcement, and health and social services are currently benefiting from biometric technologies.

Biometric technology is designed to provide a greater degree of security than traditional authentication techniques, since biometric credentials are difficult to steal, lose, or forget. Biometrics may be employed as a complementary form of authentication to increase security for a critical resource. Also, biometric systems are designed to improve the reliability of IT audit trails and user accountability, because the technology provides a higher level of assurance in the identity verification process.

Biometrics is an authentication mechanism that relies on the automated identification or verification of an individual based on unique physiological or behavioral characteristics. Common physical biometrics include fingerprints, hand and face geometry, and retina and iris patterns. Common behavioral characteristics include signature dynamics, voice recognition, and keystroke dynamics. Several other technologies are being studied and will be implemented, including vein patterns, facial thermographs, fingernail bed, body odor, ear shape, gait, footprint recognition, and foot dynamics. Biometrics is used for identification and verification. Identification is determining who a person is. It involves trying to find a match for a person's biometric data in a database containing records of people and their characteristics. Verification is determining whether a person is who they say they are. It involves comparing a user's biometric data to the previously recorded data for that person to ensure that this is the same person.
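The identification/verification distinction amounts to a 1:N database search versus a 1:1 comparison. A minimal sketch, in which the similarity function, threshold, and template values are all hypothetical placeholders for a real biometric matcher:

    # Sketch of 1:N identification vs. 1:1 verification.
    # similarity() and THRESHOLD are placeholders for a real matcher.
    def similarity(sample_a, sample_b):
        return 1.0 if sample_a == sample_b else 0.0  # toy stand-in

    THRESHOLD = 0.9

    def identify(sample, database):
        """1:N search: who is this? Returns the best-matching enrolled ID."""
        best = max(database, key=lambda uid: similarity(sample, database[uid]))
        return best if similarity(sample, database[best]) >= THRESHOLD else None

    def verify(sample, claimed_id, database):
        """1:1 check: is this person who they claim to be?"""
        return similarity(sample, database.get(claimed_id)) >= THRESHOLD

    db = {"alice": "template_A", "bob": "template_B"}
    print(identify("template_B", db))          # -> bob
    print(verify("template_A", "alice", db))   # -> True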

Biometric technologies can be combined to provide enhanced security. A multimodal biometric system combines the use of two or more biometric technologies in one application. A multimodal system allows for an even greater level of assurance of a proper match in verification and identification systems.

Biometric Technology Comparison Table

Characteristic        Fingerprint       Face              Retina      Hand        Iris        Voice        Signature

Reason for Failure    Dryness, dirt,    Age, lighting,    Glasses     Swelling,   Contacts,   Sick,        Quickness,
                      age, cut          angle, movement               weight      glasses     noise        movement

Accuracy              High              High              Very High   High        Very High   High         High

User Acceptance       Medium            Medium            Medium      Medium      Medium      Medium       Medium

Long-term Stability   High              Medium            High        Medium      High        Medium       Medium

Freeze-Thaw Durability and Nondestructive Testing (NDT) of Pervious Concrete (PC)

Student Researcher: Frederick K. Hussein

Advisor: Norbert Delatte, P.E., Ph.D.

Cleveland State University Department of Civil and Environmental Engineering

Abstract One of the main benefits of Pervious Concrete (PC) is the reduction in storm water runoff produced in comparison to other, non-pervious pavements. The Federal Clean Water Act has placed restrictions on storm water volumes and the water pollution associated with storm water runoff. PC greatly reduces runoff by allowing water to infiltrate through it. There have been many installations of PC in areas where freeze-thaw cycles are minimal or nonexistent. In order to broaden the use of pervious concrete in areas where freeze-thaw cycles are an issue, satisfactory freeze-thaw durability must be documented. Also, ways to evaluate the performance of PC must be developed.

Project Objectives The objectives of this research project are to investigate the freeze-thaw durability of PC and also the use of NDT for the evaluation of PC. The freeze-thaw durability of a particular mixture of PC will be determined in terms of the number of freeze-thaw cycles that samples made of the mixture can undergo without losing a significant amount of mass or strength. The strength of the samples will be measured using NDT so that further freeze-thaw cycles can be achieved. The void ratio of the samples will also be determined. Compressive cylinders of the mixtures will be tested, using both NDT and destructive testing. The results of the two types of testing will be compared.

Methodology Used Three different mixtures were used to prepare the samples. The specifics of these mixtures are outlined in Figure 1.

The freeze-thaw durability was evaluated using an automated machine that rapidly froze and thawed the samples. The testing was done in accordance with ASTM standard C 666 – 97, procedure A. The procedure called for the samples to be fully saturated while freezing and thawing. The samples were determined to have failed either because they were physically falling apart, or because the resonant frequency test yielded results indicating internal fracture.

The strength of the samples was found in terms of the dynamic Young’s modulus of elasticity. The dynamic Young’s modulus was calculated using the longitudinal resonant frequency, which was found in accordance with ASTM standard C 215 – 02. The equation given for the dynamic Young’s modulus, in Pascals, is (ASTM, 2003):

Dynamic E = DM(n')^2

Where:
D = 4(L/bt) for the freeze-thaw beams, and 5.093(L/d^2) for the compressive cylinders
L = length of specimen in meters
t, b = freeze-thaw beam cross-section dimensions, in meters, with t being in the direction of the driver
d = diameter of compressive cylinder in meters
M = mass of specimen in kilograms
n' = fundamental longitudinal frequency in Hertz
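Applying the relation above directly in Python (the specimen dimensions, mass, and frequency below are illustrative values, not measurements from the study):

    def dynamic_E_beam(L, b, t, M, n):
        """Dynamic Young's modulus (Pa) for a freeze-thaw beam:
        E = D * M * n'^2 with D = 4 * (L / (b * t))."""
        D = 4 * (L / (b * t))
        return D * M * n**2

    def dynamic_E_cylinder(L, d, M, n):
        """Same relation for a compressive cylinder: D = 5.093 * (L / d^2)."""
        D = 5.093 * (L / d**2)
        return D * M * n**2

    # Illustrative values only: a 0.40 m x 0.10 m x 0.10 m beam of mass
    # 7.5 kg with a 4,000 Hz fundamental longitudinal frequency.
    E = dynamic_E_beam(L=0.40, b=0.10, t=0.10, M=7.5, n=4000)
    print(f"{E / 1e9:.1f} GPa")  # ~19.2 GPa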

The void ratio of each sample was also found. The equation used to determine the void ratio as a percentage is (Park and Tia, 2004):

Void Ratio = [1 – (W2 – W1) / (ρw × Vol)] × 100

Where:
W1 = weight under water
W2 = dry weight
ρw = density of water
Vol = volume of sample
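The same void ratio formula in code form; the input weights and volume below are illustrative, not measured values from the study:

    def void_ratio_percent(W1, W2, vol_m3, rho_w=1000.0):
        """Void Ratio (%) = [1 - (W2 - W1) / (rho_w * Vol)] * 100,
        with W1 the weight under water and W2 the dry weight (kg here,
        so rho_w is in kg/m^3 and Vol in m^3)."""
        return (1 - (W2 - W1) / (rho_w * vol_m3)) * 100

    # Illustrative numbers only:
    print(round(void_ratio_percent(W1=3.0, W2=6.0, vol_m3=0.004), 1), "%")  # 25.0 %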

The NDT method used to determine the resonant frequency was impact echo (IE). To perform the IE test, a piezo-electric accelerometer was attached to one end of the specimen. The opposite side of the specimen was impacted using a small hammer. The impact induced a stress wave, which reflected within the specimen at its fundamental frequency. The acceleration of the waves was picked up by the accelerometer, and a Fourier transform was performed to obtain the frequency information. The fundamental frequency appeared as a noticeable spike in the frequency spectrum. The IE equipment used was an Olson Instruments, Inc. RT-1 Resonance Tester.
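A sketch of the frequency extraction step, with a synthetic decaying sinusoid standing in for a real impact-echo accelerometer trace (the sampling rate and resonance values are assumptions for illustration):

    import numpy as np

    # Synthetic stand-in for an impact-echo accelerometer trace:
    # a decaying 4,000 Hz resonance sampled at 100 kHz for 20 ms.
    fs = 100_000
    t = np.arange(0, 0.02, 1 / fs)
    trace = np.exp(-200 * t) * np.sin(2 * np.pi * 4000 * t)

    # Fourier transform; the fundamental appears as the dominant spike.
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), 1 / fs)
    fundamental = freqs[np.argmax(spectrum)]
    print(f"fundamental frequency: {fundamental:.0f} Hz")  # ~4000 Hz

The frequency found this way would then feed the dynamic Young's modulus equation given earlier.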

The strengths of cylinders of the three mixtures were also found in terms of the compression strength. The testing was done in accordance with ASTM standard C 39/C 39 M – 03. The samples were tested to failure by using a compression test machine, which consisted of an inverted hydraulic jack and an inline pressure dial.

Results Found At the time of reporting, the samples had undergone 127 freeze-thaw cycles. A total of 33% of the samples of mixtures 1 and 2 and 100% of the samples of mixture 3 failed sometime between 19 and 127 freeze-thaw cycles. Distinguishing the samples of mixtures 1 and 2 was impossible after 127 cycles, because the paint used as a marker had fallen off; therefore, the results were averaged for the purpose of reporting. The dynamic Young's modulus was found to be inversely proportional to the number of freeze-thaw cycles, as shown in Figure 2. The void ratio was found to be proportional to the number of freeze-thaw cycles for mixture 1, and inversely proportional for mixtures 2 and 3 between 0 and 19 cycles, as shown in Figure 3. The dynamic Young's modulus and the compression strength were found to be proportional, as shown in Figure 4. The void ratio and compressive strength were found to be inversely proportional, as shown in Figure 5.

Significance and Interpretation of Results The failure of all of the samples of mixture 3, the only mixture containing fine aggregate, seems odd; conventional thinking holds that the addition of fine aggregate would increase the strength. Because no measurements were taken on any of the samples between 19 and 127 freeze-thaw cycles, the exact number of cycles at which the failures occurred is unknown. Also unexplained is the finding that the void ratio decreased with the number of freeze-thaw cycles for mixtures 2 and 3 between 0 and 19 cycles. These unexpected void ratio results may have been caused by errors in testing or data recording. The results of compression strength versus dynamic Young's modulus and of void ratio versus compression strength were as expected, and further testing may yield an accurate proportioning factor for these parameters.

Figures

Mix #   Coarse Aggregate (%)   Fine Aggregate (%)   Cement (%)   Water (%)
1       79                     0                    17           4
2       76                     0                    17           7
3       71                     7                    15           7

Figure 1. Mixture proportions.

Figure 2. Dynamic Young's Modulus (GPa) vs. Freeze-thaw Cycles, plotted for Batch 1, Batch 2, Batch 3, and the Batch 1 & 2 average.

Figure 3. Void Ratio (%) vs. Freeze-thaw Cycles, plotted for Batch 1, Batch 2, Batch 3, and the Batch 1 & 2 average.

Figure 4. Compression Strength (kPa) vs. Dynamic Young's Modulus (GPa) for Batches 1, 2, and 3.

Figure 5. Compressive Strength (kPa) vs. Void Ratio (%) for Batches 1, 2, and 3.

References
1. ASTM C 39/C 39M – 03. 2003. Standard Test Method for Compressive Strength of Cylindrical Concrete Specimens. Annual Book of ASTM Standards 4.02. ASTM International.
2. ASTM C 215 – 02. 2003. Standard Test Method for Fundamental Transverse, Longitudinal, and Torsional Resonant Frequencies of Concrete Specimens. Annual Book of ASTM Standards 4.02. ASTM International.
3. ASTM C 666 – 97. 2003. Standard Test Method for Resistance of Concrete to Rapid Freezing and Thawing. Annual Book of ASTM Standards 4.02. ASTM International.
4. Park, S., and M. Tia. 2004. An Experimental Study on the Water-Purification Properties of Porous Concrete. Cement and Concrete Research 34: 177–184.

Controlling an Induction Machine Using dSPACE

Student Researcher: Nicole E. Jones

Advisor: Dr. Ana Stankovic

Cleveland State University Electrical and Computer Engineering Department

Abstract This paper presents control of an induction machine using both dSPACE and Lab-Volt methods. The speed of the induction machine was changed by varying the operating frequency applied to the machine. The Lab-Volt test bench provided a hands-on learning experience using modules that house the physical components of the circuit, while the dSPACE system is more conceptual, using mathematical models of the circuit and a DSP board. The benefits of using the dSPACE system are that it offers a straightforward approach to modeling the control system and that its results agree more closely with the theoretical operation of the induction machine.

Project Objectives This project focused on learning to build and simulate a real-time model using a dSPACE board and comparing its results with those of the experiment built and tested on a Lab-Volt test bench. Once a model is built in Simulink, it is compiled and downloaded into the memory of the digital signal processor (DSP) board, where it runs in real time. The user can then interact digitally with the system using ControlDesk, the graphical user interface to dSPACE, which allows the user to set up a virtual instrument-oriented experiment environment that manages, controls, and automates a system, in this case an induction motor.

Methodology Used In this project, it was observed that the speed of an induction motor increases as the operating frequency of the inverter increases, and vice versa. The experiment was conducted in two ways: first in a hands-on manner using a Lab-Volt test bench, and then more abstractly, using just an induction motor; instead of hard-wiring a physical circuit, the system was modeled mathematically and simulated through a graphical user interface.

The Lab-Volt test bench, which emphasizes hands-on experience with electric machines, was used first to run the experiment. The Lab-Volt system consists of various modules that contain the physical components, housed in boxes with front-panel connections for the students to wire the circuit. The circuit used for the Lab-Volt experiment is shown in Figure 1. After wiring the circuit, the operating frequency was varied by turning the DC SOURCE knob on the Chopper/Inverter unit. The phase voltage was measured using the AC Voltmeter module, and the speed of the motor was measured using a handheld tachometer.

Figure 1. Lab-Volt circuit for controlling an induction machine.

Next the system was modeled in Simulink using the dSPACE defined blocks and the common blocks for computations of the system (Figure 2). The dSPACE blocks include the DS1104SL_DSP_PWM3 for the converter poles, and the DS1104ENC_POS_C1 and DS1104ENC_SETUP for the position measurement.

[Figure 2 block diagram: a V/f duty-cycle generation block (inputs f_ref and Vm, with saturation) produces duty cycles a, b, and c for the DS1104SL_DSP_PWM3 block, while the DS1104ENC_SETUP encoder master setup feeds a speed measurement block that outputs W_mech in RPM.]

Figure 2. Simulink model for controlling an induction machine.
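The V/f block in Figure 2 scales the commanded voltage amplitude with the reference frequency so the machine flux stays roughly constant, then saturates the result. A minimal Python sketch of that logic follows; the volts-per-hertz ratio is the one implied by the dSPACE column of Table 1 below, while the bus voltage and sinusoidal PWM form are assumptions for illustration rather than details taken from the actual model:

```python
import math

V_DC = 120.0       # assumed DC bus voltage for this sketch; not taken from the model
V_PER_HZ = 0.265   # volts-per-hertz ratio implied by the dSPACE column of Table 1

def duty_cycles(f_ref, t):
    """Three sinusoidal duty cycles (0..1) for a constant-V/f inverter command."""
    amp = min(V_PER_HZ * f_ref / (V_DC / 2.0), 1.0)  # saturate, as in the model
    return [0.5 + 0.5 * amp * math.sin(2.0 * math.pi * (f_ref * t - k / 3.0))
            for k in range(3)]                        # phases displaced by 120 degrees

print(duty_cycles(50.0, t=0.002))  # one sample of the a, b, c commands
```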

Since the experiment measures speed, the output from the encoder is fed through a gain that converts the change in position to speed. Once an accurate model of the system is obtained, the C code can be compiled using the build function in Simulink. Next, using the variable file created by the build procedure, a virtual experiment environment is created in ControlDesk. The digital controller for the system is established by dragging and dropping the measurement tools and parameter sets from the virtual instrument panel. With this virtual experiment environment set up, the system can be simulated and the results observed. The operating frequency can be modified in the virtual experiment environment, and the speed of the motor is also measured on this screen.

Figure 3 shows the ControlDesk environment set up for operating and measuring the speed of the Induction motor under no load conditions. The operating frequency is adjusted using the slider tool or by entering it into the numerical input tool; the speed of the motor is measured in the digital display. This layout can be modified by simply adding more measurements or displays for the desired variables.

Figure 3. Control Desk interface.

Results Obtained The following table (Table 1) shows the results for the speed (RPM), the voltage (V) and the operating frequency (Hz) for each of the experiments. The theoretical values are also included for reference.

Table 1. Experimental Measurements.

Lab-Volt:
                   Experimental Values          Theoretical Values
Frequency (Hz)     VPHASE (V)   Speed (RPM)     VPHASE (V)   Speed (RPM)
12.5               22           330             25           375
16.6               29.2         438             33.2         498
25                 41.6         624             50           750
40                 65           975             80           1200
50                 75.5         1133            100          1500
63                 95           1425            126          1890
83                 120          1800            166          2490
100                161          2415            200          3000

dSPACE board:
                   Experimental Values          Theoretical Values
Frequency (Hz)     VPHASE (V)   Speed (RPM)     VPHASE (V)   Speed (RPM)
12.5               2.8          317             3.3          375
16.6               3.9          492             4.4          498
25                 5.97         738             6.6          750
40                 9.72         1185            10.6         1200
50                 12.19        1484            13.3         1500
63                 15.3         1857            16.7         1890
83                 19.45        2469            22.0         2490
100                20.04        2976            26.5         3000
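The theoretical columns follow from the synchronous-speed relation n = 120·f/p and a constant volts-per-hertz ratio. A short sketch, assuming a four-pole machine (implied by the 1500 RPM theoretical speed at 50 Hz) and the Lab-Volt ratio of 2 V/Hz:

```python
def theoretical_point(f_hz, poles=4, v_per_hz=2.0):
    """Synchronous speed (RPM) and V/f phase voltage for one operating point."""
    speed_rpm = 120.0 * f_hz / poles   # n = 120 f / p
    return v_per_hz * f_hz, speed_rpm

for f in (12.5, 25.0, 50.0, 100.0):
    v, n = theoretical_point(f)
    print(f"{f:5.1f} Hz -> {v:5.1f} V, {n:4.0f} RPM")  # matches the Lab-Volt theoretical column
```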

[Two plots of voltage (V) versus speed (RPM), each comparing experimental and theoretical values: one for the Lab-Volt experiment and one for the dSPACE experiment.]

Figure 4. Lab-Volt & dSPACE Voltage vs. Speed Graph.

Significance and Interpretation of Results The intent of the project was confirmed: the experiment could be implemented both as a hands-on procedure and as an abstract one. dSPACE combined with Simulink makes implementing the induction motor system very straightforward, allowing students to use a graphical user interface without developing the C code from scratch. This gives students the opportunity to relate the physical components to the mathematical models and to observe the results.

Acknowledgments and References The author of this report would like to thank Dr. Ana Stankovic for the use of her state-of-the-art Power Electronics and Electric Machines Laboratory (http://academic.csuohio.edu/stankovica/Lab.html) and for her guidance and support throughout the project. In addition, the author would also like to thank Ke Chen and Anthony Lombardi for their assistance with the project.

1. Mohan, Ned. DSP Based Electric Drives Laboratory User Manual.
2. Lab-Volt Manual.

Atmospheric Electricity: Paving the Way for Meeting Tomorrow's Energy Needs

Student Researcher: Jonathan F. Juhl

Advisor: Professor Charles W. Allport

Cedarville University Physics Department

My plans for researching atmospheric electricity are closely tied to my involvement in Cedarville University's High Altitude Balloon team. Our team has recently developed the capability to loft small payloads to altitudes of 90,000-100,000 feet via a high-altitude helium-filled balloon. This gives us the ability to perform scientific research anywhere from ground level to near-space heights. Specifically, the project idea is to develop a payload that can measure the amount of electric charge in the atmosphere. There are many mysteries involved in understanding the nature of electric charges in our atmosphere. By performing this research, I hope to add to the knowledge that might help answer questions such as: What mechanism creates the clear-weather charge in the atmosphere? What forces cause thunderclouds to separate into regions of positive and negative charge? How does the presence of an electric field vary with location, atmospheric conditions, and time of year? Could the electricity in the atmosphere someday be harnessed for useful purposes? Before we can answer big questions like these, there must be a more basic understanding of the presence and behavior of electric charge in the atmosphere. My goal is to research this concept in my project.

As a future physicist, the issue of the world's rising energy needs draws my particular attention, as it is primarily the laws of physics that govern our ability to make and distribute power in usable forms. It is exciting to live in a day and age when the science community has a solid understanding of many of the guiding principles of physical law, and therefore we have the capability to explore new and revolutionary ways of tackling today's issues, such as renewable energy.

Another reason this area of atmospheric electric charge holds particular interest for me is that I am also quite interested in space science and exploration. Study in these fields has shown that there are instances of electrical phenomena on other planets, such as Venus, Jupiter, and Saturn. Once we have a clearer understanding of electricity in our own atmosphere and of ways to potentially harness it for useful means, the same or similar techniques could possibly be used to help power human and robotic exploration of these and other planetary bodies.

One particularly interesting and potentially dangerous occurrence on Mars is the possibility that there is lightning generated in powerful dust devils which sweep frequently across the deserts of the Red Planet. Knowing more about these and similar phenomena before we ever send humans to another planet like Mars is vital for human and electronic safety. The most important part of studying this issue is getting the chance to practice research first hand in an area that interests me and can someday potentially help others.

It is an honor to be able to participate in this research through partnership with the Ohio Space Grant Consortium.

Mathematics of the Moon

Student Researcher: Lauren J. Keller

Advisor: Otis Wright III

Cedarville University Department of Education

Abstract This project will incorporate several different NASA educational materials in an attempt to help ninth- or tenth-grade students gain a better understanding of the principles of measurement, as well as some basic concepts of estimation, while using concepts learned in Algebra I and Geometry. After preparing students for the unit by having them find the diameter of the moon using a NASA handout, cardboard discs, and their algebra and geometry skills, I will have the students work through the basic facts involving the diameter, density, volume, mass, and other aspects of both the Earth and the moon found in the NASA activity for finding the distance to the moon. The students will then work in cooperative groups to complete the student worksheet and calculate the distance between scale models of the Earth and moon. This is done by comparing the diameters of several sports balls to the diameters of the Earth and moon and working through calculations that involve changing units of measurement and determining a scale for the models. The students will also have an opportunity to challenge themselves by answering several of the "brain buster" questions given on the Moon ABCs Fact Sheet. The lesson concludes as each cooperative group tries to fit its scale model of the Earth and moon in the classroom, demonstrating the great distance between the two as well as the difference in diameter even when scaled down to a much smaller size.

Geometry Lesson: Mathematics of the Moon

Grade Level: 9th or 10th grade Topic: Diameter and Distance of the Moon

DOMAIN A: PREPARATION
I. Objective(s):
1. Each student will be able to calculate the diameter of a sphere in at least 8 out of 10 given problems. Ohio Academic Content Standard (8-10): Geometry and Spatial Sense G. Prove or disprove conjectures and solve problems involving two- and three-dimensional objects represented within a coordinate system.
2. All students will be able to solve problems using proportions at least 80% of the time. Ohio Academic Content Standard (8-10): Number, Number Sense, and Operations G. Estimate, compute, and solve problems involving real numbers, including ratios, proportion, and percent, and explain solutions.
3. Every student will be able to calculate distances between objects using given information by converting measurement units in at least 80% of given problems. Ohio Academic Content Standard (8-10): Measurement A. Solve increasingly complex non-routine measurement problems and check for reasonableness of results. E. Estimate and compute various attributes, including length, angle measure, area, surface area, and volume, to a specified level of precision.
4. All students will be able to represent geometric shapes and objects using a variety of tools and methods in at least 8 out of 10 given problems. Ohio Academic Content Standard (8-10): Geometry and Spatial Sense E. Draw and construct representations of two- and three-dimensional geometric objects using a variety of tools, such as straightedge, compass, and technology.

DOMAIN B: ENVIRONMENT
II. Materials:
1. 2 cm wide cardboard disks
2. Wooden stakes
3. Meter sticks
4. Calculators
5. String
6. NASA "Moon ABCs Fact Sheet" for students
7. Various sports balls
8. Meter tapes
9. NASA handouts for both the Diameter of the Moon activity and the Distance to the Moon activity

DOMAIN C: TEACHING
III. Motivation: 5 minutes
1. We will begin class by going outside (on a day when the moon is visible) and spending some time as a class brainstorming what we know about the moon, how we think its size compares to the Earth's, and how far away it appears to be.
2. I will also have each student make an educated guess about both the distance of the moon from the Earth and the diameter of the moon before they begin the activities. This way, we can see how the guesses compare to the actual results at the end of the activity.

IV. Procedure(s): 45 minutes
1. The class will spend the first part of the period outside the school working in cooperative groups. Each group will have a cardboard disk, a wooden stake, a meter stick, a calculator, and a piece of string for the first part of the activity.
2. The students will work in groups through the NASA worksheet on "Diameter of the Moon." I will give them an estimated distance of the Earth from the moon that I obtained from the local planetarium.
3. Each group of students will place their 2 cm cardboard disk on top of the wooden stake so that the disk covers the moon exactly from one person's point of view. The other student will then measure the distance from the observer's eye to the cardboard disk and use the given information in a set of proportions to solve for the diameter of the moon (a numerical sketch of this proportion follows the procedure below).
4. During the second part of the class period, the students will compare their calculated diameter of the moon to the actual diameter given on the "Moon ABCs Fact Sheet," after which they will begin working through the "Distance to the Moon" NASA worksheet.
5. The students will spend time using several different sports balls that I have provided to create a model of the Earth and the moon as well as the distance between the two. They will have to convert measurements and create a scale for the model during this time, reinforcing the measurement skills that they should be developing throughout their geometry class.
6. When the students have created their scale model of the Earth and the moon, we will attempt to fit each of their models inside the classroom, moving to the hallway if we need more room.
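A numerical sketch of the proportion in step 3: the disk and the moon subtend the same angle from the observer's eye, so their diameters scale with their distances. All values here are hypothetical stand-ins for what a group would measure:

```python
def moon_diameter_km(disk_diameter_cm, eye_to_disk_cm, earth_moon_km):
    """Similar triangles: disk diameter / eye-to-disk distance
    equals moon diameter / Earth-moon distance."""
    return disk_diameter_cm / eye_to_disk_cm * earth_moon_km

# A 2 cm disk just covering the moon from about 220 cm away, with the
# planetarium's Earth-moon distance taken as 384,000 km:
print(f"Moon diameter = {moon_diameter_km(2.0, 220.0, 384_000):.0f} km")  # ~3490 km
```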

V. Guided Practice: 25 minutes
1. NASA worksheets on the diameter of the moon and the distance to the moon.

VI. Closure: 15 minutes
1. As a wrap-up for the first part of the lesson, to have the students reinforce the information we have learned about circles and spheres in the last few days of geometry class, I will have them use more given information as well as their calculated results to find the volume of the moon and its density (a numerical sketch follows below).
2. To conclude the second part of the lesson on the distance between the Earth and moon, we will go over the students' predictions about the diameter and the distance between the Earth and moon to see how accurate the predictions were. They will also examine the distance between the scale models of the Earth and moon to see if that gives them a better conception of the distance between the two.

VII. Independent Practice: 20 minutes
1. Complete the "Brain Busters" on the "Moon ABCs Fact Sheet" next to diameter, surface area, volume, and density.
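For the closure calculation, the volume and density follow from the sphere formulas; a sketch using commonly quoted lunar values (the fact sheet supplies the figures the students would actually use):

```python
import math

MOON_DIAMETER_KM = 3476.0   # commonly quoted value
MOON_MASS_KG = 7.35e22      # commonly quoted value

radius_m = MOON_DIAMETER_KM / 2.0 * 1000.0
volume_m3 = 4.0 / 3.0 * math.pi * radius_m ** 3   # V = (4/3) * pi * r^3
density = MOON_MASS_KG / volume_m3                # rho = m / V
print(f"Volume = {volume_m3:.2e} m^3, density = {density:.0f} kg/m^3")  # ~3340 kg/m^3
```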

VIII. Assessment:
1. The first assessment of this lesson will occur during the lesson while I circulate to ask questions of each student.
2. The second assessment of the lesson will occur two days later when I check the "Brain Busters" homework assigned after the project.
3. The third assessment will be on the next quiz that covers the topics of radius, diameter, surface area, and volume of a sphere.

IX. Extension: (if applicable) 1. Questions found in the NASA teacher guide for “Distance to the Moon.”

X. Diversity of Needs: (Providing for diversity in instruction based on academic, social, and/or emotional differences) 1. Students who need extra help will be able to ask questions during the entire project. I will be circulating throughout the class to make sure that all students are understanding the material and getting the correct answers to their problems.

Conclusion The learning theory involved in this lesson is mainly based upon Piaget's constructivism and Gardner's theory of multiple intelligences. Constructivism holds that learners construct new knowledge from their experiences by assimilating and accommodating new information into their previously existing frameworks. In this lesson, students learn more information about the moon and its placement in regard to the Earth by using the information they already know from previous geometry classes. We have already learned about spheres, how to use diameter and other given information to calculate surface areas, volumes, and densities, as well as basic principles of measurement. By using this information to calculate the diameter, volume, and surface area of the moon as well as the distance between the Earth and the moon, the new information is easily assimilated into the students' frameworks of knowledge. Also incorporated into this lesson is Gardner's theory of multiple intelligences. Although geometry class mostly involves the logical/mathematical intelligence, this activity gives students the opportunity to learn through a hands-on, bodily-kinesthetic method. This project involves many different concrete activities that will interest those students who have a harder time learning through auditory, logical/mathematical, or visual methods. Since this lesson is weighted so heavily toward the bodily-kinesthetic area of intelligence, student involvement in this lesson is very strong. The students work in cooperative groups throughout the entire lesson, putting together models and tools to measure distances and diameters. The teacher spends very little time talking during this lesson, while the students spend the entire time working on their own and learning independently.

Throughout this real-life application, the students will be assessed by the accuracy of their calculations on the NASA worksheets provided by nasa.gov as well as the accuracy of their calculations on the “Brain Buster” activities found in the “Distance to the Moon” NASA worksheet. The mathematical skills reinforced by this activity will also be assessed on a unit quiz to come later in the week.

Overall, this project seems to be a very effective method of reinforcing concepts about spheres and overall measurement as well as being a good review of algebra concepts such as proportions. The students must use the theories and formulas they have learned in the past and put them all together to complete this activity, making it an effective learning experience simply by assimilating so many ideas into one lesson. The NASA material creates an extremely lifelike situation in which the students can practice their math skills while still acquiring more knowledge about science and their surrounding world.

Space Shuttle Glider Takes Mathematics on a Ride

Student Researcher: Crystal M. Kerr

Advisor: Dr. Jane A. Zaharias

Cleveland State University Department of Teacher Education

Abstract Space Shuttle Glider Takes Mathematics on a Ride is a project that I adapted using NASA's Educational Brief Space Shuttle Glider Activity. This activity will help students go beyond asking "why do I need to do this?" and "I'm not going to need to know this in the future" to seeing how subjects like math and science correlate with one another. This cross-curricular activity will enable students to make connections to science and math in their daily lives by seeing how a space shuttle glider works and the mathematical logistics behind it. This project includes fun activities for the students and a phenomenal chance to learn all about NASA's program and its resources geared towards students and teachers. Upon completing the activity, students will be given a worksheet that displays some of the exciting things NASA has to offer for the classroom and for students.

Project Objectives For students to understand that different subjects tie into one another and for students to have a positive view of the educational programs that NASA has to offer. I would like the students to be able to apply math to everyday concepts and be excited about doing so. The students will be able to measure the length of the space shuttle glider, carry out the necessary division, and formulate answers that relate to real-world situations, including determining the glide ratio of the glider. This lesson will help motivate students and promote the resources that are available to them through NASA.
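The glide-ratio computation named in these objectives reduces to horizontal distance traveled per unit of height lost; a minimal sketch with hypothetical flight measurements (the worksheet's exact prompts are on the attached NASA pages):

```python
def glide_ratio(horizontal_distance_cm, height_lost_cm):
    """Glide ratio: horizontal distance traveled per unit of altitude lost."""
    return horizontal_distance_cm / height_lost_cm

# Hypothetical flight: released from 150 cm and landing 450 cm down-range
print(f"Glide ratio = {glide_ratio(450.0, 150.0):.1f} : 1")  # 3.0 : 1
```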

Methodology Used Relevant Ohio standards/Benchmarks/Indicators for Mathematics: 2.1 - Select appropriate units for measuring derived measurements 2.3 - Estimate a measurement to a greater degree of precision than the tool provides and, 2.5 - Analyze problem situations involving measurement concepts, select appropriate strategies, and use an organized approach to solve narrative and increasingly complex problems.

Materials: Space shuttle glider (one per row), Rulers, Calculator, and Tape Measure

Procedures The teacher will start off the day with the students of the Algebra class by introducing herself. She will tell the students a little bit about herself and proceed to ask each student if they did anything fun for spring break. The teacher will explain to the students that they will be doing a fun activity today which involves materials taken from NASA’s website. The teacher will pass out the Space Shuttle Glider activity, the roles sheet, and will then place a space shuttle glider on the desk of each person in the front row (5 rows which equals 5 space shuttle gliders passed out).

Each row will be considered a group. The students will be given the opportunity to take on specific roles within their group. The roles needed are space shuttle glider/holder (2 people), measurer (1 person), mathematician (1 person) and reporter (1 person). This totals five people in each group. Once the students have selected who will be doing which roles, the teacher will walk them through the instructions, assignments for each role and will then go into the first activity for the space shuttle glider.

The instructions for this activity are very simple. Each student must participate and be active in their role. The space shuttle glider holders are responsible for keeping and launching the space shuttle gliders. The measurer is responsible for all measurements needed in this activity. The mathematician is responsible for all calculations and is permitted to use the calculator. The reporter is responsible for reporting all answers to the whole class for the questions posed. (The actual glider instructions and questions can be found following this document on pages 3-6.)

Once answers have been submitted, students will move on to the next activity until they have completed both activity numbers 1 and 2. Once the activity has been completed, the teacher will pass out the NASA facts sheet and explain to students how NASA’s website is a resourceful tool for the students to use. The link for this website is as follows:

http://www.nasa.gov/audience/forstudents/index.html

Results The students truly enjoyed this lesson and were inspired by finding out about the resources that NASA has available for them. The students enjoyed working in groups and even turned the activity into a competition to see who could launch their gliders the furthest. Since both groups had a chance to launch their gliders twice, students were eager to see if they could outdo themselves in launching the space shuttle glider. Students in this Algebra class had a chance to use their critical thinking and problem-solving skills to answer the questions posed by this NASA-based experiment and were thrilled to do more and learn more with NASA.

Acknowledgments and References
1. NASA, Space Shuttle Glider. Retrieved January 3, 2008, from NASA Web site: http://www.nasa.gov/pdf/58283main_Space.Shuttle.Glider.508.pdf
2. Ohio Academic Content Standards, Mathematics Academic Content Standards. Retrieved April 16, 2008, from Ohio Department of Education Web site: http://www.ode.state.oh.us/GD/Templates/Pages/ODE/ODEDetail.aspx?page=3&TopicRelationID=333&ContentID=801&Content=47606

The following is from the NASA website link: http://www.nasa.gov/pdf/58283main_Space.Shuttle.Glider.508.pdf

Counter-Rotating Aspirated Compressor Time Accurate Simulations

Student Researcher: Robert D. Knapke

Advisor: Mark G. Turner

University of Cincinnati Department of Aerospace Engineering

Abstract The research consisted of 3D time-accurate simulations of a counter-rotating aspirated compressor design. This particular compressor stage was designed and tested at MIT. A flow solution was achieved through the use of the unsteady solver TURBO with a phase-lag boundary condition. Comparisons were made between the experimental results from MIT and those obtained from the numerical solutions. Both aspirated and non-aspirated conditions were simulated with the numerical solver. The aspirated simulation showed good agreement with the experimental data, especially near the tip. On a 1D basis, the experimental efficiency is 87.9%, while the aspirated simulation has a 1D efficiency of 89.4%. Comparison between the aspirated and non-aspirated solutions showed a 2.2% higher efficiency for the simulation with aspiration.

Project Objectives One of the objectives of participating in this research was to gain a basic understanding of the methods for numerical computation of flow solutions. This included understanding parallel computation and working with a cluster of computers to produce a solution. For this particular research, becoming familiar with the numerical flow solver TURBO was of interest. Another objective was to use the solution obtained through the simulation to help build an understanding of internal fluid dynamics. Determining various types of flow features and blade interactions was also of importance. The final objective was to prepare for graduate school through the experience gained as a student doing active research.

Methodology The design and testing of the counter-rotating geometry was conducted at the MIT Gas Turbine Laboratory. A blow-down testing method was used.

The working fluid (a mixture of Argon and CO2 in this case) was held at high pressure in tank A. A quick-release valve was opened and the fluid passed through the test section (C through E) and into the dump tank (G). The test section geometry consists of an inlet-guide-vane (IGV) blade row and two counter-rotating blade rows, the second of which is aspirated (suction).

To simulate the flow, the numerical solver TURBO was used, which is a 3-D, unsteady Reynolds-Averaged Navier-Stokes solver. For this research, TURBO was used to simulate the two counter-rotating rotors. The conditions at the inlet of the IGV were obtained from the MIT test results, and a separate TURBO simulation was used to determine the effects of the IGV on the inlet conditions. A phase-lag boundary condition was used at the periodic and rotor interface boundaries. Both pressure and mass flow exit boundary conditions were applied, and comparisons between the test and simulation exit conditions were made.

The computational grids of the rotors were produced using the Turbomachinery Gridding System (TGS). Single-block H-grids were used for this research: one H-grid block for each rotor passage, with a third H-grid block merged onto the exit of the second rotor to capture downstream features and to position grid cells at the location of the MIT test measurement plane. The aspiration on the second rotor was simulated through the use of mass flow sinks at the cells along the aspiration slot location.

The simulation was conducted at the Gas Turbine Simulation Laboratory (GTSL) at the University of Cincinnati, Center Hill Research Center. For each simulation, 52 Pentium Core2Duo 2.4 GHz CPUs were used. Both the aspirated and non-aspirated simulations were conducted simultaneously due to the availability of processors. Initially, a mass flow exit boundary condition was imposed, followed by a pressure exit boundary condition. Shortly after applying the pressure exit boundary condition, the aspiration effects were added to the aspirated simulation and both simulations were run to convergence. Time-averaged data was gathered and post-processed using APNASAcat, and comparisons with experiment were made.

Results Both simulations were run for 100,000 iterations at a rate of approximately 3.3 seconds per iteration, which equates to about 4 days of wall clock time. The time averaged data was then gathered over 4640 iterations. The figure below shows a comparison of spanwise efficiency at the axial position of the MIT test exit measurement plane:

[Figure: efficiency versus % span at the MIT exit measurement plane, comparing the current aspirated and non-aspirated simulations with experimental data at t = 250, 300, and 350 ms and with the MIT CFD (APNASA) result.]

Experimental data is shown for three different times after the fluid was released. The current simulation shows good agreement with the experimental data, especially at the tip. By investigating the current time-accurate aspirated simulation, it was found that the aspiration slot pulls high-entropy flow from the tip down into the slot. The non-aspirated simulation did not show improved efficiency at the tip because it did not remove this flow. This clearly shows the benefit of the aspiration. The aspiration also caused the shocks in both rotor passages to move further downstream into more choked positions: for rotor 1, because of the higher mass flow due to the aspiration, and for rotor 2, because the aspiration slot sets the shock foot on the suction surface. The simulations performed show the benefit of the aspiration, and the efficiency of the aspirated simulation agrees well with the experimental data. Further simulations will be used to determine the off-design performance of the compressor. Moreover, simulations of all three blade rows will be used to explore distortion and to better explain the good stall characteristics observed during the MIT testing.
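The 1D efficiencies quoted for this work follow from the standard adiabatic (isentropic) compressor efficiency definition applied to stage total pressure and temperature ratios. A sketch of that relation; the ratios below are hypothetical values chosen only to illustrate the arithmetic, and gamma is assumed to be blended to 1.4 (blowdown mixtures are typically chosen to match air's ratio of specific heats, though the exact Ar/CO2 value may differ):

```python
GAMMA = 1.4  # assumed mixture ratio of specific heats; see the lead-in note

def adiabatic_efficiency(pr, tr, gamma=GAMMA):
    """Standard 1D adiabatic efficiency from stage total pressure ratio pr
    and total temperature ratio tr."""
    return (pr ** ((gamma - 1.0) / gamma) - 1.0) / (tr - 1.0)

# Hypothetical mass-averaged stage ratios:
print(f"eta = {adiabatic_efficiency(pr=2.9, tr=1.40):.3f}")  # ~0.889
```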

Acknowledgments The author would like to acknowledge and thank AVETEC, Inc., for the support and funding they have provided for this research. In addition, the computational resources were obtained through OCAPP, and paid for through the State of Ohio Third Frontier program. The author would also like to thank the Ohio Space Grant Consortium for their support. Furthermore, the author would like to thank Professor Mark Turner for his guidance throughout this research.

Finding the Height of an Aurora

Student Researcher: Melissa B. Lewis

Advisor: Dr. Darrin Frey

Cedarville University Mathematics Education

Abstract This project is one that could be used in a Geometry classroom. Using NASA educational materials, students will discover how the mathematics concepts that they are learning in class can be applied to real and interesting life situations. More specifically, the students will use what they have learned about the properties of triangles and trigonometry to simulate techniques used by scientists in the 1800s to find the height of an aurora. The first part of this project involves a simple introduction to auroras, including what they are and how they come about, as well as a brief history of the study of auroras. Following that history, students will be divided into six groups. These groups will work as teams to make and then use a clinometer to measure the height of an object hung in the classroom. The groups will each measure at three different stations, which are a predetermined length away from the object. Once all the groups have completed their measurements and determined the height of the object, the class will engage in a discussion, led by the teacher, about the activity. In this discussion, the teacher will construct a model of the object for the students to visualize, and the students will also create scale drawings of the activity. Finally, the teacher will connect the ideas discussed at the beginning of class with the activity by explaining how a method similar to that used in the activity could be used by scientists to determine the height of auroras. To conclude and wrap up students' thinking, the students will pretend to be scientists as they are given various scenarios for which they will need to determine the height of the aurora.

Project Objectives The main objective of this activity is for students to visualize and practice with the properties of right triangles, as well as trigonometric concepts. Through providing a hands-on and authentic learning experience, students will be able to better understand the skills that they are learning in class and how these skills can be applied in real life situations. Another objective of this activity is for students to understand what a clinometer is and how it works. A third objective is that, through the group work, students will learn how to effectively participate and communicate with a team of other individuals. Finally, in displaying and discussing their results, students will learn how to properly speak about mathematical ideas and concepts.

This lesson also meets objectives and benchmarks in the 9-12 mathematics content standards. First, it gives students an opportunity to find answers to problems by substituting numerical values in simple algebraic formulas. Second, it allows for students to make and interpret scale drawings.

Methodology Before class begins, the teacher hangs an object high in the center of the classroom and marks the spot directly beneath it on the floor with an X made of tape. The teacher then creates 2 sets of 3 stations at different distances from the object. For example, Station 1a would be located 5 feet from the object, with Station 1b located 5 feet from the object on the opposite side; Station 2a would be 4 feet from the object, with Station 2b located directly across from it, also 4 feet away; finally, Station 3a would be 3.5 feet from the object and, on the opposite side of the room, Station 3b would also be 3.5 feet from the object. The teacher will also predetermine the 6 groups that the class will be divided into. Once the students arrive, the teacher directs them to their groups. Before explaining the activity, the teacher walks students through a few examples of how to determine the height of an object given the distance from the object and the angle of elevation. The students then work through a few examples in their groups with the teacher checking for accuracy. Following the examples, the teacher explains the basic idea of the activity to the students: they will be making and using clinometers at 3 stations in order to determine the height of the object in the center of the room. After the explanation, the teacher tells each group what station they will start at and hands them two worksheets, one with the instructions on how to make a clinometer and the other with the following procedure and table written on it.

Procedure 1. Determine a task for each group member: Sighter, Recorder, Measurer, and Calculator

2. Assemble the Clinometer as directed on the worksheet labeled “Clinometer Instructions”.

3. Locate the station number that you were assigned.

4. Record the distance (in feet) that your Station is from the object in the table below.

5. Measure the eye level of the Sighter from the floor to eye level. Record this measurement (in feet) in the table below.

6. Use the clinometers to locate the top of the object. The Measurer determines the angle reading on the clinometers. Record this measurement in the table below.

7. Repeat the process above at the second assigned station recording the results in the table below.

8. Use the tangent ratio and the previous examples that were completed with teacher assistance to determine the height of the object.

Students   Station Number   Length to the X   Clinometer Reading   Sighting Eye Height

For each station fill in the following measurements for your calculations.

TAN(angle measure) = (the missing height)/(the length to the X)

TAN(______) = (the missing height)/ (______)

TAN(______)(______) = (the missing height)

Next add in the height of the Sighter.

(missing height)+(height of the Sighter) = (total height of the object)

The estimated total height of the object is ______feet.

9. Average the results from all of the stations to obtain a closer estimate of the height of the hanging object. (A numerical sketch of steps 4-9 follows this procedure.)
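A numerical sketch of steps 4 through 9, using readings from two hypothetical stations; each station's estimate is the tangent ratio from the worksheet plus the Sighter's eye height:

```python
import math

def station_height_ft(angle_deg, length_to_x_ft, eye_height_ft):
    """One station's estimate: TAN(angle) * (length to the X) + Sighter eye height."""
    return math.tan(math.radians(angle_deg)) * length_to_x_ft + eye_height_ft

# Hypothetical readings (clinometer angle, length to the X, eye height):
readings = [(52.0, 5.0, 4.5), (58.0, 4.0, 4.6)]
estimates = [station_height_ft(*r) for r in readings]
print(f"Average height = {sum(estimates) / len(estimates):.1f} ft")  # step 9
```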

Once all of the groups have finished collecting their data and calculating the height of the object, the teacher selects a station to create a visual model of the triangle that students are using to make their calculations. To do this, the teacher selects two students as helpers. These students each tape the ends of two separate pieces of string to the floor while the teacher tapes a third piece of string to the floor directly below the object, on the X. The teacher then takes the other ends of the strings and tapes them to the object. The strings create triangles that allow the students to see exactly how the properties of right triangles and trigonometric concepts apply to this situation. After viewing this model, the students work in their groups, using a piece of graph paper, to make a scale drawing of the two stations and the object.

After the groups have completed their drawings, the students share their work. Finally, the teacher leads the class in a discussion about how this activity and way of measuring height could be used by scientists to find the height of an aurora. To conclude the activity, or for homework, the students are given a few different scenarios in which they must act like scientists and solve for the height of an aurora given various lengths and/or angles.

Conclusions This activity allows students to have a hands-on experience that will keep them interested and engaged in the class while giving them the opportunity to practice the mathematical skills and concepts that they have learned. Students will enjoy working in groups as well as being able to see how their learning can apply in real-life situations. Finally, since the activity is memorable and allows for kinesthetic learning to occur, the students will be able to better recall and apply the mathematical concepts to other problems and situations they will encounter in class.

References
1. "The Northern Lights." http://image.gsfc.nasa.gov/poetry/activity/Nlbook_col.pdf
2. "The Height of an Aurora," from the NASA packet "Extra-Credit Problems in Space Science."

High Fidelity Simulation of an Embedded Transonic Fan Stage with Characterization of Flow Physics

Student Researcher: Michael G. List

Advisor: Dr. Mark G. Turner

University of Cincinnati Department of Aerospace Engineering and Engineering Mechanics

Abstract The Blade Row Interaction (BRI) rig at the Air Force Research Laboratory's Compressor Aero Research Lab was designed to observe the aerodynamic interactions between a downstream transonic rotor and an upstream, highly loaded stator. The rig simulates an embedded transonic fan stage. The resulting aerodynamic phenomena have been shown to directly affect component efficiency and engine performance. Understanding these unsteady interactions and accounting for them in the design process would benefit performance, specifically fuel economy, component efficiency, and compressor pressure ratio. Previous numerical work employed extremely dense computational meshes and fine time steps to resolve these interactions. This work proposes additional Computational Fluid Dynamics (CFD) investigations of the BRI geometries. As such, it is imperative to understand the differences in results between simulation strategies (phase-lag approximation, periodic simulations with modified blade counts, and full-annulus simulations) and their applicability to affecting the design process in real time. With recent advances in supercomputing, computation of full-annulus unsteady flow fields has become possible for several stages. Full-annulus simulations allow computation of a wider range of conditions affecting today's high-performance gas turbine engines, including near-stall and stall, and open the possibility of modeling distorted inlet conditions. This is still an expensive process, however, requiring large amounts of data storage and post-processing times that typically cannot affect the design phase. Advancing the available toolsets for co-processing and post-processing of this data is an additional area of interest that has the potential to greatly impact the effectiveness of CFD.

Project Objectives The primary objectives of this research are to bring high-fidelity CFD for turbomachinery into the design phase of new multi-stage machines and to investigate physical interactions between the blade rows in transonic fan and compressor stages. Through the utilization of high-fidelity modeling, the effect of unsteady interactions not normally captured can be observed. Understanding of unsteady flow physics can positively impact reduced-order models and the design of blade shapes and flow path configurations as shown by Turner (2005). The ultimate goal in terms of the simulation capability is to replicate experimental efforts such as those conducted by Gorrell (2001, 2003), Estevadeordal (2007), and Langford (2007).

Methodology The Stage Matching Investigation (SMI), a precursor investigation to BRI, was designed to experimentally reproduce a high-speed, highly-loaded compressor stage. A three blade-row configuration was used including an upstream wake generator row, transonic rotor, and downstream stator row. By varying the axial gap between the wake generator and the rotor, spacing representative of currently operational fans and compressors could be evaluated. The rotor used in the BRI rig was designed for axial inlet flow thus requiring a swirler-deswirler combination to create a wake through diffusion while keeping axial inlet flow to the rotor. The rig was designed to permit the stator-to-rotor axial spacing to be set to three values -- close, mid, and far -- as shown in Figure 1. The combination of the swirler and deswirler created a wake through diffusion rather than base drag from the thick trailing-edge designs of previous wake generators (Gorrell 2003, 2006). The variable stagger of the swirler vanes could change the loading of the deswirler blade row. The clocking between the swirler and deswirler blade rows allowed the path of the swirler wake through the deswirler vanes to be controlled and allowed for optimization of the amount of total pressure loss produced exiting the deswirler and entering the rotor. It was desired to match the total pressure loss produced by the SMI wake generators. The BRI investigation focused on the effect of changing the stagger angle, the stator/rotor axial spacing, and the operating condition.

Figure 1: BRI experimental rig.

Once the flow path and blade geometries were developed for the desired simulation configurations, elliptically-smoothed H meshes were created for the individual blade passages. These were developed parametrically using the Turbomachinery Gridding System (TGS) as described by Kamp (2007). This methodology allowed any of the swirler-deswirler configurations to be gridded with the same set of scripts, thus ensuring the ability to run production-style simulations. Additionally, the use of TGS allowed consistent grids to be created for each of the three spacings under investigation. This greatly simplified the grid-generation process and increased confidence that flow features resolved in one simulation would be consistently resolved in other simulations. Table 1 shows the dimensions of the passage grids and the number of blade passages simulated. In addition to these point counts, pure H mesh blocks were appended and pre-pended appropriately to achieve the desired spacing configuration. Close spacing included more than 140 million points and required 748 processors; mid spacing totaled 154 million points and used 848 processors; far spacing used 166 million grid points and required 912 processors.

Table 1. Grid dimensions for the passage grids.

Blade Row   Axial   Radial   Tangential   Number of Passages
Swirler     223     101      151          8
Deswirler   256     101      201          8
Rotor       391     101      226          7

The solver used during this work, TURBO, is a 3-D, viscous, unsteady RANS solver and has been previously discussed by Chen and Whitfield (1993), Chen, et al. (1994, 1997, 1998), and Chen and Briley (2001). It employs a finite volume Roe scheme to obtain up to third order spatial accuracy and an implicit time integration to obtain second order temporal accuracy. The 2-equation turbulence model, the NASA/CMOTT k-epsilon model, has been specially developed for turbomachinery calculations by Zhu, et al. (2000). TURBO integrates to the wall in the case of y+ <10.5 and otherwise uses wall functions. The turbulence model includes a near-wall damping term which allows the use of the k-epsilon model at this resolution. Hub and tip clearances were modeled using the approach described by Kirtley (1990) with the recommendations of Van Zante (2000).

TURBO has been shown to run on 8996 processors using MPI on an SGI Altix 4700 system. The simulations ranged from 169 million cells to 826 million cells for demonstration purposes and included quarter annulus and full annulus simulations with three and four blade rows. Figure 2 shows the scaled speedup of each simulation as the number of processors was increased from the minimum required number of processors. The results are scaled to allow comparison of the various sized simulations. The figure shows excellent performance benefit for a particular distribution of cells per processor (100,000 is the optimum for the machine hardware).

Figure 2. Normalized scaled speedup versus relative number of CPUs.

During the original BRI simulations, the flow field was initialized from a uniform flow condition, and a thru-flow condition adjusted the velocity vectors to follow the flow path. This methodology required nearly 10 full wheels to bring the unsteady mass flow rates to an acceptably periodic state. To lessen the computer resources consumed by large-scale CFD, the initialization process has been broken into a pipeline of several components, each using slightly more resources than the last.

The first stage in initialization is an energy-based initial flow, similar to the thru-flow routine used in the solver. This is executed serially on each blade passage to obtain geometry conforming velocity vectors and density and energy based upon total temperature and total pressure distributions specified by the user.

A set of script utilities then automatically generates isolated blade row simulations which can be executed in parallel (for the grid sizes in question) on 28 to 48 processors. These simulations are run with steady state parameters, though a steady state is not necessarily obtained. The purpose of these simulations is to quickly generate boundary layers around the blade and to help initialize the pressure field of the solution. As such, physical times on the order of one-eighth of a wheel are used along with a high CFL and local time-stepping to accelerate the process.

Once the isolated blade-row simulations are complete, another set of script utilities provides a mixing plane simulation which begins from the isolated blade row solutions. Incorporation of this step allows multi-stage simulations to attain a more appropriate pressure distribution, thus giving a much better initialization for TURBO than the original methodology. The mixing plane simulation requires the sum of the number of processors from each of the isolated blade row simulations – still a minor simulation by comparison to sector or full-annulus simulations.

The process that has been developed clearly reduces the amount of resources consumed during the initialization of the flow field. With the new methodology it is feasible to develop full-annulus simulations that can be run in a time frame that affects the design process.

As an exercise in computer science, the data transfer between blade rows was analyzed for efficiency. The original methodology in TURBO required that all members of the upstream side of the interface plane transfer their unsteady data to the members of the downstream side of the interface. This methodology is highly inefficient for large blade counts and can result in a substantial penalty.

In order to reduce the required MPI communication, two additional arrays (each of dimension 2*number of radial points*number of blade passages) were allocated. These arrays were used to store the current location of the grid in radians and the location of the passages on the opposite side of the interface. As the simulation progressed, each passage computed which neighbors required information and did only the appropriate sends.

For the BRI full annulus simulations, this “fast-interface” methodology reduces the communication between the swirler and deswirler rows by a factor of 16. With communication times on the order of 2 seconds, this means that 1.8 seconds per iteration can be saved. The end result is nearly 9000 saved compute hours for each rotor revolution.
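The idea behind the fast interface can be sketched independently of TURBO's internals: from a passage's current angular extent, compute which passages on the other side of the interface it overlaps, and send only to those. The routine below is an illustrative reconstruction in Python, not the solver's actual code:

```python
import math

def overlapping_passages(theta_lo, theta_hi, n_passages):
    """Indices of opposite-side passages whose angular sector overlaps
    [theta_lo, theta_hi] on the interface (angles in radians).

    Only these neighbors would receive MPI sends; all others are skipped."""
    pitch = 2.0 * math.pi / n_passages
    first = math.floor((theta_lo % (2.0 * math.pi)) / pitch)
    last = math.floor((theta_hi % (2.0 * math.pi)) / pitch)
    count = (last - first) % n_passages + 1   # walk first..last, wrapping the annulus
    return [(first + i) % n_passages for i in range(count)]

# A rotating passage spanning 40-55 degrees against a 24-passage opposite row:
print(overlapping_passages(math.radians(40), math.radians(55), 24))  # [2, 3]
```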

Results Obtained

During the original BRI simulations the flow field was initialized from uniform flow and substantial processing time was required to produce a periodic solution. This can be seen in the top of Figure 3. The bottom of Figure 3 shows the initialization using the isolated blade row and mixing plane approach. Although the mass flow rates are higher than the target flow rate, the steady-state initialization was able to set up the flow in roughly 2 revolutions at a coarser time step (new simulations) rather than 10 revolutions at a finer time step (original simulations). This will have tremendous benefit for future simulations and the convergence of high-fidelity, full-annulus simulations.

[Plot: mass flow rate (kg/s) versus iteration, traced at the inlet, swirler exit, deswirler inlet and exit, rotor inlet and exit, stator inlet, and exit.]

Figure 3. Sample mass flow rate history for the initialization of BRI simulations.

Figure 4 shows a comparison of the 3 blade-row, time-averaged static pressure at the hub and casing to the experimental data collected for BRI. The simulation matches very well with the experimental pressures. Additionally, flow features seen in PIV data were also seen in the BRI simulations, though these results have not yet been compared quantitatively.

Figure 4. Comparison of CFD results to static pressure data for close spacing.

Figure 5 shows the results of the close, mid, and far spacing simulations, respectively. Pressure gradient magnitude contours are shown for midspan. The shock propagation upstream through the deswirler row and into the swirler row is shown. As the rotor bow shock impinges upon the trailing edge of the deswirler vane, the shock is turned normal to the flow on both the pressure and suction sides. This sets up a traveling shock along the blade surface that is associated with an entropy rise (the same feature was seen in the SMI experiments). In contrast to the previous SMI endeavor, the suction side shock and reflected shocks interact with the boundary layer to periodically form and collapse a separation bubble, as seen in Figure 6. Visualization and post-processing of the results required parallel computing. The toolset developed was described by List (2007).

Figure 5. Pressure gradient magnitude contours at midspan for close, mid, and far spacing.

Figure 6. Entropy contours at midspan for far spacing showing separation bubble.

Significance and Interpretation of Results The BRI simulations completed to date have shown good agreement, qualitatively with PIV images and quantitatively with experimental pressure data. The simulation results have been instrumental in interpreting the experimental results and will continue to be investigated. Work is being carried out to produce reduced-order methods that can predict the performance detriment due to blade row spacing. While the majority of the physics phenomena observed in the BRI experiments were captured with the original 3 blade-row investigation, it is important to continue to move towards full-annulus simulations that include all blade rows and full detail of the flow path. This will allow better prediction of efficiencies and will enable stall simulations and distortion modeling.

Acknowledgments The author is most grateful for support from the Ohio Space Grant Consortium (OSGC). The author would like to thank AFRL / RZTF for supporting the research, and would like to thank Tim Beach for his efforts and aid in grid generation. The author would also like to thank the Air Force Research Laboratory Major Shared Resource Center (AFRL MSRC) and the High Performance Computing and Modernization Program (HPCMP) for use of the computing systems and support of the challenge project under which this research was executed. Without the support of the AFRL MSRC team this work would not have been possible.

References 1. Gorrell, S. E., Copenhaver, W. W., and Chriss, R. M., 2001. “Upstream Wake Influences on the Measured Performance of a Transonic Compressor Stage”. AIAA Journal of Propulsion and Power, 17, pp. 43–48. 2. Gorrell, S. E., Okiishi, T. H., and Copenhaver, W. W., 2003. “Stator-Rotor Interactions in a Transonic Compressor, Part 1: Effect of Blade-Row Spacing on Performance”. ASME Journal of Turbomachinery, 125, pp. 328–335. 3. Gorrell, S. E., Okiishi, T. H., and Copenhaver, W. W., 2003. “Stator-Rotor Interactions in a Transonic Compressor, Part 2: Description of a Loss Producing Mechanism”. ASME Journal of Turbomachinery, 125, pp. 336–345. 4. Gorrell, S. E., Car, D., Puterbaugh, S. L., Estevadeordal, J., and Okiishi, T. H., 2006. “An Investigation of Wake-Shock Interactions with Digital Particle Image Velocimetry and Time-Accurate Computational Fluid Dynamics”. ASME Journal of Turbomachinery, 128, pp. 616–626. 5. Turner, M. G., Gorrell, S. E., and Car, D., 2005. “Radial Migration of Shed Vortices in a Transonic Rotor Following a Wake Generator: A Comparison Between Time Accurate and Average Passage Approach”. In ASME Turbo Expo 2005. ASME Paper GT2005-68776. 6. Estevadeordal, J., Gorrell, S., and Copenhaver, W., 2007. “PIV Study of Wake-Rotor Phenomena in a Transonic Compressor Under Various Operating Conditions”. AIAA Journal of Propulsion and Power, 23(1), January-February, pp. 235–242. 7. Langford, M. D., Breeze-Stringfellow, A., Guillot, S. A., Solomon, W., Ng, W. F., and Estevadeordal, J., 2007. “Experimental Investigation of the Effects of a Moving Shock Wave on Compressor Stator Flow”. ASME Journal of Turbomachinery, 129, pp. 127–135. 8. van de Wall, A., Breeze-Stringfellow, A., and Dailey, L., 2006. “Computational Investigation of Unsteady Flow Mechanisms in Compressors with Embedded Supersonic Rotors”. In ASME Turbo Expo 2006. ASME Paper GT-2006-90633. 9. List, M. G., Gorrell, S. E., Turner, M. G., and Nimersheim, J. A., 2007. “High Fidelity Modeling of Blade Row Interaction in a Transonic Compressor”. In 43rd AIAA/SAE/ASME Joint Propulsion Conference. AIAA Paper no. 2007-5045. 10. List, M. G., 2007. “Quarter Annulus Simulations of Blade Row Interaction at Several Gaps and Discussion of Flow Physics”. Master’s thesis, University of Cincinnati, Cincinnati, OH, August. Aerospace Engineering. 11. Chen, J. P., and Whitfield, D. L., 1993. “Navier-Stokes Calculations for the Unsteady Flowfield of Turbomachinery”. AIAA Paper no. 1993-0676. 12. Chen, J., Celestina, M. L., and Adamczyk, J. J., 1994. “A New Procedure for Simulating Unsteady Flows Through Turbomachinery Blade Passages”. In ASME Turbo Expo 1994. ASME Paper no. 94-GT-151. 13. Chen, J. P., Ghosh, A. R., Sreenivas, K., and Whitfield, D. L., 1997. “Comparison of Computations Using Navier-Stokes Equations in Rotating and Fixed Coordinates for Flow Through Turbomachinery”. In 35th Aerospace Sciences Meeting and Exhibit. AIAA Paper no. 97-0878. 14. Chen, J. P., and Barter, J., 1998. “Comparison of Time-Accurate Calculations for the Unsteady Interaction in Turbomachinery Stage”. AIAA Paper no. 98-3292. 15. Chen, J. P., and Briley, W. R., 2001. “A Parallel Flow Solver for Unsteady Multiple Blade Row Turbomachinery Simulations”. In ASME Turbo Expo 2001. ASME Paper no. 2001-GT-0348. 16. Zhu, J., and Shih, T. H., 2000. CMOTT Turbulence Module for NPARC. Contract Report NASA CR 204143, NASA Glenn Research Center, Lewis Field, OH. 17. Van Zante, D. E., Chen, J., Hathaway, M. D., and Chriss, R., 2008. “The Influence of Compressor Blade Row Interaction Modeling on Performance Estimates from Time-Accurate, Multi-Stage, Navier-Stokes Simulations”. ASME Journal of Turbomachinery, 130, January, p. 011009. 18. Kamp, M. A., Nimersheim, J., Beach, T., and Turner, M. G., 2007. “A Turbomachinery Gridding System”. In 45th AIAA Aerospace Sciences Meeting and Exhibit. AIAA Paper 2007-18. 19. Kirtley, K. R., Beach, T. A., and Adamczyk, J. J., 1990. “Numerical Analysis of Secondary Flows in a Two-Stage Turbine”. In 26th Joint Propulsion Conference. AIAA Paper no. 90-2356. 20. Van Zante, D. E., Strazisar, A. J., Wood, J. R., Hathaway, M. D., and Okiishi, T. H., 2000. “Recommendations for Achieving Accurate Numerical Simulation of Tip Clearance Flows in Transonic Compressor Rotors”. Journal of Turbomachinery, 122, October, pp. 733–742. 21. Estevadeordal, J., Gorrell, S., Gebbie, D., and Puterbaugh, S., 2007. “PIV Study of Blade-Row Interactions in a Transonic Compressor”. In 43rd AIAA/SAE/ASME Joint Propulsion Conference. AIAA Paper no. 2007-5017. 22. List, M. G., Turner, M. G., Galbraith, D. S., Nimersheim, J. A., and Galbraith, M. C., 2007. “High-Resolution, Parallel Visualization of Turbomachinery Flowfields”. In 43rd AIAA/SAE/ASME Joint Propulsion Conference. AIAA Paper no. 2007-5043. Wing Mechanization Design and Analysis of a Perching Micro Air Vehicle

Student Researcher: Jennifer M. Lukens

Advisor: Dr. Brian Sanders

University of Dayton Mechanical and Aerospace Engineering Department

Abstract This paper describes the development of a mechanized wing concept for a perching micro air vehicle. The wings are capable of rotating in pitch at two spanwise joints to simulate the motion of a bird’s wings during a perching maneuver. This project focuses on the wing mechanization design and analysis as well as the structure/mechanism integration. The advantage of a perching type of landing is that it allows the vehicle to land with approximately zero vertical and horizontal velocity on a tree branch, power line, or ledge. The requirements to perform this maneuver were investigated, the structural design was developed, and the mechanization integration to achieve this motion was determined. A model was designed and manufactured to demonstrate the kinematic mechanism making this wing motion possible. Wind tunnel testing and analytical simulation were also completed to further develop the model.

Project Objective Interest in the design and development of bird-like micro air vehicles (MAVs) has emerged in recent years. Micro air vehicles are characterized by low flight speed, small size, and low Reynolds number. One mission scenario for a MAV is to travel covertly into dangerous territory to collect and transmit data. Creating a MAV that acts and looks similar to a bird allows the vehicle to be “hidden in plain sight,” meaning that the vehicle would blend in with, or not stand out from, its surrounding environment. Research in the area of bird-like MAV development has focused strongly on the aerodynamics of flapping and on the kinematic mechanism that makes the beating motion possible. What is missing is an investigation of how a bird-like MAV could be mechanized to land, with a trajectory similar to a bird perching. The objective of this project is to determine the wing motion necessary to complete this landing maneuver and to develop a mechanized model to perform these maneuvers while still meeting size, weight, and power requirements.

Perching can be defined as landing with approximately zero vertical and horizontal velocity on a specific point. This is achieved by reducing speed while maintaining lift during the landing trajectory. There are several specific maneuvers that birds utilize to complete this type of landing. Often, birds flap their wings at a high frequency for a short period of time toward the end of the landing trajectory in order to create additional downward thrust. Birds also rotate their bodies and wings to very high angles of attack, up to 90 degrees, in order to increase the wing surface area in the direction of the flight path, increasing drag to reduce horizontal flight speed. Lastly, birds often assume an ascending path toward the end of their trajectory. A bird will fly below its landing target and pull upward at the end of the flight path exchanging kinetic for potential energy, permitting it to land with minimal horizontal and vertical velocity. These maneuvers, along with others, allow birds the desired capability of precision landing. This project investigates the possibility of a MAV that can perch without utilizing flapping.

The first step in developing a perching MAV is to investigate the degrees of freedom required to mechanize the wings for landing and the power required to perform this motion. This paper analyzes the effects of the wing rotational motion through high angles of attack. The model developed is a first iteration at integrating actuators into the wings for perching mechanization, and considers not only the geometric and weight requirements, but also the power necessary to perform this type of maneuver.

A mechanized model was developed to demonstrate the benefits of wing rotational motion for a perching maneuver as well as to illustrate that size and weight constraints for this type of vehicle can be met utilizing current technology. Wind tunnel testing was completed to analyze the aerodynamic loads on the rotating wings as well as to understand the power requirements of the drive mechanism. Finally, the entire vehicle was modeled analytically for vehicle flight simulation. The analytical results were compared to the wind tunnel results to confirm the accuracy of the analytical model.

The remainder of this paper will describe the design of the mechanized model, the analytical modeling of the vehicle, and the experimental procedure for the wind tunnel testing with a few results.

Methodology Used In order to develop design requirements and determine a limited set of wing degrees of freedom for the perching model, pigeon landing maneuvers and the wing/body kinematics used during landing were researched. From this understanding, two degrees of freedom were chosen for each wing: wing twist at the shoulder and a secondary wing twist at the wrist. Once this mechanization concept was determined, pigeon geometry and weight were used for load approximation and structural sizing1,2,3.

The model consists of a fuselage and attached wings as depicted in Figure 1a. Notice that the entire span is broken into four sections, symmetrical about the fuselage. The wing actuators, completely contained within the fuselage, allow each section of the wings to rotate up to 90 degrees of wing incidence. The vehicle goal mass was determined to be 306 g; the final mass of the model was 158 g. The model structure is constructed of balsa with carbon fiber spars and a Mylar skin. The wings are NACA 0009 airfoils. Although the wing panels are slightly thicker than typical bird wings, the added thickness is necessary for incorporating mechanization.

The spar design consists of coaxial cylinders that allow the distal wing sections to be mechanized independently of the proximal wing sections. The two proximal wing panels are attached to a single spar driven by one actuator so that they maintain the same wing incidence. The proximal spar is a cylindrical tube extending from the wrist joint of the right wing to the wrist joint of the left wing. The distal spar is a solid carbon fiber rod that fits inside of the proximal spar allowing it to rotate independently of the proximal spar. Two distal spars are used, one for each distal wing section. These spars are connected to separate actuators inside the fuselage and extend to the tips of the wings. The three actuators located within the fuselage mechanize the three spars, and therefore the wings, independently.

Wind Tunnel Testing To more completely understand and analyze the model, a wind tunnel was utilized for aerodynamic testing. Results obtained from this test include values for lift and drag at various wing angles and wind speeds. The test also measured the forces during wing rotation at several different wind speeds and angle changes. The goal of the wind tunnel tests was to determine whether the structure and actuation were robust enough to handle the dynamic forces that would occur during wing rotation. This wind tunnel test was also completed to better understand the dynamic and post stall effects on the wings, and the power required to drive the actuation motors. Future tests will be completed to obtain more accurate measurements. This first test provided a way to understand the order of magnitude of the forces that could be expected during a typical landing trajectory and to discover faults in the experimental procedure.

Wind tunnel tests were completed in the University of Dayton's low-speed wind tunnel. The tunnel is a closed-test-section, open-air tunnel with a square cross-section of 76.2 by 76.2 cm. The flow speed can be adjusted by controlling the fan motor speed. A honeycomb grid at the front of the tunnel splits and damps vortices to reduce turbulence intensity. Speeds from 1 to 10 m/s were used during the testing. At speeds lower than 5 m/s, there was a significant amount of turbulence in the flow due to the atmospheric conditions and wind gusts outside the tunnel. It was difficult to precisely measure the wind speed during testing using the pitot tube device that was available, especially at speeds under 5 m/s. There was also some error in other atmospheric condition measurements and calibration techniques that led to significant uncertainty in the results.

Figure 1b illustrates the experimental setup for the wind tunnel testing. The model was mounted on a single strut from the bottom of the tunnel designed specifically to connect with the model and the force transducer; the model was positioned in the center of the test section vertically and horizontally. The force transducer was located 12.86cm directly below the aerodynamic center of the vehicle. A protractor was positioned in line with the model to measure the wings at different angles. Wires for the motor control and voltage measurement came out of the back of the fuselage and ran along the backside of the sting through a hole in the bottom of the tunnel.

After testing was completed, an aerodynamic tare approximation, correction factors, and data filtering were required in order to more accurately analyze the results. An 8th-order lowpass digital Butterworth filter with a cutoff frequency of 25 Hz was used to filter the wind tunnel data. Blockage and wall interference corrections were also considered once the data was filtered.
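For readers who wish to reproduce this style of post-processing, a minimal sketch of such a filter in Python with SciPy is shown below. The sampling rate is an assumption (the paper does not state it), and zero-phase filtering via filtfilt is one reasonable choice for offline data; the paper's exact implementation may differ.

```python
import numpy as np
from scipy import signal

fs = 1000.0        # assumed sampling rate in Hz; not stated in the paper
cutoff_hz = 25.0   # 25 Hz cutoff, as described above
order = 8          # 8th-order Butterworth

# signal.butter expects the cutoff normalized by the Nyquist frequency (fs/2).
b, a = signal.butter(order, cutoff_hz / (fs / 2.0), btype="low")

# Synthetic stand-in for a noisy lift-force trace; real data would be loaded here.
t = np.arange(0.0, 5.0, 1.0 / fs)
raw_lift = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(t.size)

# filtfilt runs the filter forward and backward, so the output has no phase lag.
filtered_lift = signal.filtfilt(b, a, raw_lift)
```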

Analytical Model An integrated aeroelastic multi-body morphing simulation tool called IAMMS4 was used to analytically model the MAV and to simulate the vehicle in flight. The results from the model simulations were compared to wind tunnel test results, and were used to obtain analytically based values for the aerodynamic and structural forces acting on the vehicle. The analytical model serves as an estimate of the forces that can be expected to act on the vehicle during a landing trajectory. For future research, the analytical model can be used to further develop the vehicle, to optimize the landing trajectory and vehicle design, and to investigate control schemes for perching.

IAMMS utilizes the multi-body dynamics code MSC-ADAMS, and therefore models are built inside the ADAMS environment. The ADAMS solver is used to perform time integration for the multi-body model of the vehicle. Loads are computed for each time step using an AFRL-developed code that utilizes vortex lattice computations and splining techniques to interpolate the aerodynamic forces to the structure. Matlab/SIMULINK is used for the flight control system, although no flight control system was utilized for the present study; the simulations consisted only of commanded motions corresponding to the wind tunnel tests.

In ADAMS the mechanized vehicle was structurally replicated to include the three spars and the wing ribs as shown in Figure 2. The fuselage was modeled to replicate the effects of the physical model. The actual fuselage is a symmetrical airfoil at zero degrees angle of attack and therefore generates zero lift and negligible drag attributed only to skin friction. The three spars in the model were built using bar elements and were linked together with revolute joints at the inner ends of the distal spars to allow only the torsional degree of freedom. The three spars were modeled to be completely independent of each other aside from the connection at the distal spar inner ends. All three spars are capable of independent rotation. Twelve ribs were included in the model, with three ribs for each of the four wing sections in the physical model. The ribs serve as structural members and are clamped to the spars. Each rib has three applied force locations that were interpolated from the aerodynamic vortex lattice code. The three force locations are at the spar and at the two ends.

The aerodynamic model, which does not appear in Figure 2, consists of 4 separate rectangular wing surface panels, with a separate panel defined for the fuselage. Each wing surface is modeled at the incidence angle of that wing section at the particular instant in time. Local velocities are also determined for the wing panels, and these are included as part of the boundary condition for the vortex lattice solution. The four wing panels, defined by the spars and ribs, can be programmed to rotate to any angle specified, up to 90 degrees. The proximal wing, right distal wing, and left distal wing angles are controlled using wing angle splines attached to three state variables, one for each independent wing section. Separate splines were written to replicate each of the tests completed in the wind tunnel. This modeling of the wind tunnel tests follows the method previously used in a similar study for analysis of a large-scale morphing vehicle5.

The original aerodynamic code used to determine the loads acting on the model did not take into consideration post stall events. Because in many cases portions of the wings in the model will be at angles of attack beyond the stalling point, additional calculations were needed for the post stall aerodynamics. To do this, post stall airfoil data was developed into two splines, one each for the lift and drag coefficients, and was incorporated into the original tool. This data for the lift and drag coefficients is shown in Figure 3 and was obtained from wind turbine high angle of attack data6. The lift coefficient data specified a stall point at 10 degrees corresponding to a lift coefficient of 0.8. After stall, at around 15 degrees, the lift coefficient starts to increase again; the maximum lift coefficient for this airfoil occurs at 45 degrees angle of attack, beyond which the lift coefficient decreases. The drag coefficient remains very low until after stall, at which point it increases roughly as the sine of the angle of attack, reaching a maximum at 90 degrees. Several different wing incidence variations and wind speeds were simulated in ADAMS to determine the aerodynamic forces and moments acting on the model during an assumed flight trajectory.
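As an illustration of how such post-stall splines might be built, the sketch below fits cubic splines through a handful of anchor points chosen to match the behavior described above (stall near 10 degrees at a lift coefficient of 0.8, maximum lift near 45 degrees, sine-like drag growth peaking at 90 degrees). The anchor values are illustrative assumptions, not the Sheldahl and Klimas data actually used.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative anchor points only; the study used wind-turbine airfoil data
# (reference 6) rather than these assumed values.
alpha_deg = np.array([0.0, 10.0, 15.0, 30.0, 45.0, 60.0, 90.0])
cl_pts = np.array([0.0, 0.80, 0.65, 0.95, 1.10, 0.90, 0.0])  # stall at 10 deg, peak at 45 deg
cd_pts = 0.01 + 1.2 * np.sin(np.radians(alpha_deg))          # roughly sine-like, max at 90 deg

cl_spline = CubicSpline(alpha_deg, cl_pts)
cd_spline = CubicSpline(alpha_deg, cd_pts)

# Query any post-stall incidence, e.g. the 28-degree stall observed in test 15.
alpha = 28.0
print(float(cl_spline(alpha)), float(cd_spline(alpha)))
```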

Results Obtained and Significance and Interpretation Wind Tunnel Results Once corrections were accounted for and filtering was completed, the results from the wind tunnel tests were analyzed to determine trends. Two different tests are focused on in this paper; other wind tunnel results are discussed in a companion paper7. These tests were chosen to represent a variety of wing motions and the associated lift and drag trends. The lift and drag results for these tests are shown in Figure 4. For each test two plots are shown: the top graph shows the lift and drag forces as a function of time, and the bottom graph shows the corresponding wing incidence angle for the proximal and distal panels. By analyzing these tests, an understanding of the effects of the wing motion and the forces associated with the different motions can be developed.

According to lift coefficient data for the NACA 0009 airfoil, the wings should stall at a wing incidence of approximately 10 degrees6. Trends also show that after stall the lift coefficient should initially decrease, but begin to increase again at 15 degrees due to the resultant forces acting on the wing. The maximum lift is reached as the wing incidence angle reaches 45 degrees; beyond this point, the lift coefficient again decreases. The lift curve for several of the tests does not exactly mimic this data; instead, dynamic effects due to the wing rotation cause unexpected results.

The results from test 15 (wind speed of 5 m/s), shown in Figure 4a, do not show an initial stall at 10 degrees as expected. The lift curve shows that a lift decline does not occur until the distal wings are at 6 degrees and the proximal wings have reached approximately 28 degrees; a second lift peak, expected from a quasi-static response, is not evident. During this test, the wing incidence pitch rate was relatively high, 80 degrees in 0.8 sec, resulting in the observed stall point at about 28 degrees wing incidence. The higher than expected stall angle could be a demonstration of dynamic stall. The maximum lift value for this test is noticeably lower than expected. This error could be due to inaccurate calibration, wind speed measurements, or wing incidence measurement techniques.

The lift and drag results for test 29 (wind speed of 5 m/s) are plotted in Figure 4b. The data show that the lift increased as the wings rotated until the distal panels reached approximately 30 degrees and the proximal panels reached 40 degrees. At this point, the lift peaked and started to decrease. A stall point is not evident at 10 degrees of wing incidence. The results show the classic shape of the lift curve; however, stall is not observed, likely due to hysteresis, and prominent stall on re-attachment when reducing wing incidence is rarely evident.

In all of the tests, either the starting or ending values for lift and drag were found to be slightly negative. It is possible that the actual incidence of the wings was slightly negative, which would result in a negative lift at the beginning of the motion. The slightly negative drag at the beginning of the motion could be due to error in calibration or in the aerodynamic tare approximation. Again, the error in the force values could be due to wall or solid blockage interference, or to inaccuracy in the angle and flow speed measurements. From the results of the wind tunnel tests, it appears that some dynamic effects take place during the wing rotation that lead to varied stall angles. This phenomenon must be investigated further in order to understand the flow physics of the post stall rotating wings.

Power Measurement Another important aspect of the vehicle design is the power required to mechanize the vehicle. During the wind tunnel test, an external power source was connected to the vehicle; however, future iterations of the model will need an onboard power supply for the vehicle applications to be feasible. Further tests were completed to determine the power required to rotate the wings. Specifications for the distal servo motors were used to approximate the amount of power needed to rotate the wings. The distal motors had a voltage recommendation of 4.8 to 6 V; however, the power supply available for testing provided 4.5 V, which was deemed sufficient to properly drive the motors. The specifications also stated that the distal motors had an average current rating of 100 mA, with a maximum of 300 mA and a static rating of 1 mA. Using this data, the average power was calculated to be 0.45 W, with a maximum of 1.44 W and a static power of 0.0045 W. Similar specifications for the proximal servo motor could not be obtained. Since the proximal motor is slightly larger than the distal motors and is used to rotate both proximal panels, it was assumed to require more power and was the focus of the power testing.
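The quoted power figures follow directly from P = VI; the short check below reproduces them. Note that the 1.44 W maximum is consistent with the 4.8 V rated voltage rather than the 4.5 V test supply, which is an inference from the arithmetic, not a statement from the motor datasheet.

```python
# Distal servo power check from the quoted specifications.
v_test, v_rated = 4.5, 4.8                      # volts
i_avg, i_max, i_static = 0.100, 0.300, 0.001    # amps

print(v_test * i_avg)     # 0.45 W average, as reported
print(v_rated * i_max)    # 1.44 W maximum, as reported
print(v_test * i_static)  # 0.0045 W static, as reported
```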

The effect that the added torque from the aerodynamic pitching moment on the wings has on the motor power during flight was important to understand. Since the torque rating of the motor, 0.3 kg·cm at 4.8 V, is much higher than the pitching moment expected to act on the wings, the added voltage from the torque was assumed small, and benchtop power tests were found to be sufficient as the first method used to determine the necessary power. Figure 5 shows the instantaneous power as a function of time for a proximal wing rotation from 0 degrees wing incidence to 85 degrees. In this test, the peak instantaneous power was found to be approximately 0.53 W with a static power of 0.1 W. This result is typical of the power measurements for most tests. A very small amount of power is required to maintain the wing in a static position. As the motor moves the wing into position the power increases to a higher level and remains at this level during the entire motion. After the motion is completed, the power is reduced to the static level.

Analytical Model Results Figure 6 shows the analytical results for lift, drag, and wing rotation as a function of time for several different simulations. One wind tunnel test simulated was test 22, shown in Figure 6a, with a flight speed of 7.5 m/s. In this test only the distal wing panels rotated; the proximal panels remained at a wing incidence of 40 degrees throughout the simulation. At the beginning of the simulation, the distal wing incidence was 4 degrees. The distal panels rotated throughout the test and ended at 30 degrees. The lift increased throughout the test, and since the wing incidence never reached 45 degrees, the maximum lift was not obtained.

The last test examined is shown in Figure 6b. In this simulation, corresponding to test 29, the flight speed was set to 5 m/s. During the simulation, the proximal sections started with a wing incidence of 80 degrees and the distal panels started with a wing incidence of 70 degrees. All the panels rotated until they reached 0 degrees. In the test a maximum lift of approximately 0.9 N is achieved when the proximal wing panels are at 45 degrees. After this point, the lift decreases.

At a flight speed of 10 m/s with all wings positioned at 10 degrees incidence, there is more than enough lift to sustain flight for the vehicle; however, the lift observed in the runs presented here is lower than the lift required to maintain steady level flight for the 306 g vehicle. It should be noted that this model assumes a symmetric airfoil and does not include dynamic effects. As observed in the wind tunnel tests, dynamic effects may create added lift when the wing panels are quickly rotated. Using a high-lift cambered airfoil would also increase the vehicle lift and its ability to perform a perched landing. Accounting for dynamic effects and changing the wing airfoil would improve the model. These results do not represent an optimized condition for perching; analyzing multiple landing trajectories and wing rotation speeds would assist in determining an optimized maneuver. Several other simulations were completed to further develop the model; those results are described in a companion paper8.

Comparison The results from the analytical model were compared to the wind tunnel results, and it was found that the wind tunnel results were significantly different. As previously mentioned, there were several uncertainties in the wind tunnel testing, including inaccurate calibration techniques; therefore, the analytical results cannot be absolutely validated based on this set of results. The compared results do show similar trends between the wind tunnel forces and the analytical forces. However, the analytical model did not account for dynamic effects, and it used assumed values for the lift and drag coefficients at post stall angles of attack. For example, Figure 7 shows a comparison of the analytical model and the wind tunnel results for test 29. In this test the proximal wings started at 80 degrees and rotated to 0 degrees, and the distal wings started at 70 degrees and rotated to 0 degrees. Notice that the lift curves are similar in shape; however, the analytical results show a higher maximum lift value and an earlier maximum lift than the wind tunnel tests. The analytical results show a maximum lift at a proximal wing incidence of 45 degrees, while the wind tunnel results show a maximum lift at a proximal wing incidence of 41 degrees.

Conclusions The goal of this project was to determine if a vehicle could be built that was lightweight and structurally robust and could develop the forces required to perform a perching maneuver using only wing deflections, inspired by bird landing techniques. The model constructed is a first iteration at developing such a vehicle and was successful in withstanding the assumed landing trajectory forces. The model met all size and weight requirements. Although much additional design work needs to be completed, such as adding a tail and developing control strategies, this first iteration vehicle achieves its desired result.

The wind tunnel test was completed to determine two main things. First, it was important to determine whether the model was structurally robust enough and the actuation powerful enough to withstand the forces encountered during an assumed flight trajectory, including forces related to flutter and dynamic effects. During the wind tunnel test, some wing vibrations were noticed in the distal wings, but the actuation and structure met their requirements during testing. Second, the testing was meant to develop an understanding of the order of magnitude of the forces that could be expected during wing rotations and the approximate power required to rotate the wing panels during flight. It also served as a dry run for further tunnel testing.

This model is a first iteration at developing a mechanized, bird-like vehicle capable of a perching maneuver. Future models will need added degrees of freedom in the wings, as well as a mechanized tail and built-in energy harvesting techniques. Eventually, the model should mimic bird-like behavior and should contain onboard cameras and communication devices. This model was successful in establishing the kinematic degrees of freedom and power necessary for a perching maneuver.

Figure and Charts

Figure 1. a.) Model Configuration - Head on View; b.) Wind Tunnel Test Setup.

Figure 2. Analytical Model.

Figure 3. Lift and Drag Coefficient Splines. [Plot of lift coefficient and drag coefficient versus angle of attack from -20 to 120 degrees.]

Figure 4. Lift and Drag (top subplots) and Wing Incidence Angles (bottom subplots) for a.) Test 15 b.) Test 20.

Figure 5. Power Results for Proximal Wing Rotation from 0 to 85 Degrees.

Figure 6. Results from Analytical Simulation a.) Test 22 b.) Test 29.

Figure 7. a.) Analytical model results, Test 29; b.) Wind tunnel results, Test 29.

Acknowledgments The author would like to thank Dr. Brian Sanders and Dr. Greg Reich at the Air Force Research Laboratories for their support. The author would also like to thank Dr. Aaron Altman of the University of Dayton for the opportunity to utilize the low-speed wind tunnel under his direction, and for his help and insightful suggestions during the testing. The author would also like to thank Tom Ward and Becki Quam of Ward Engineering for their enthusiasm and assistance in the design and fabrication of the model. Finally, the author would like to thank the OSGC for its support.

References 1. Tennekes, Henk. The Simple Science of Flight. Cambridge: MIT P, 1998. 2. Withers, Philip. "An Aerodynamic Analysis of Bird Wings as Fixed Aerofoils." Journal of Experimental Biology 90 (1981). 3. Liu, Tianshu. "Comparative Scaling of Flapping- and Fixed-Wing Flyers." AIAA Journal 44 (2006). 4. Reich, G. W., Bowman, J. S., Sanders, B., and Frank, G. J., “Development of an Integrated Aeroelastic Multi-Body Morphing Simulation Tool,” AIAA-2006-1892, AIAA Structures, Structural Dynamics, and Materials Conference, Newport, RI, May 2006. 5. Scarlett, J., Canfield, R., and Sanders, B., “Multibody Dynamic Aeroelastic Simulation of a Folding Wing Aircraft,” AIAA-2006-2135, AIAA Structures, Structural Dynamics, and Materials Conference, Newport, RI, May 2006. 6. Sheldahl, Robert E., and Paul C. Klimas. Aerodynamic Characteristics of Seven Symmetrical Airfoil Sections Through 180-Degree Angle of Attack for Use in Aerodynamic Analysis of Vertical Axis Wind Turbines. Sandia National Laboratories. Springfield: National Technical Information Service - U. S. Department of Commerce, 1981. 7. Lukens, J. M., Reich, G. W., and Sanders, B., “Wing Mechanization and Wind Tunnel Testing for a Perching Micro Air Vehicle,” to appear in CIMTEC Smart Materials, Structures, and Systems Conference, Acireale, Sicily, June 2008. 8. Lukens, J. M., Reich, G. W., and Sanders, B., “Wing Mechanization and Analysis for a Perching Micro Air Vehicle,” AIAA Structures, Structural Dynamics, and Materials Conference, 2008. Will Lorain County Community College Benefit from a Photovoltaic System?

Student Researcher: Ryan M. Marquette

Advisor: Dr. Susanne Clement

Lorain County Community College Engineering Technologies

Abstract The goal of my project was to determine whether the installation of a photovoltaic (PV) system would benefit Lorain County Community College economically and environmentally. A PV installation can be considered "green" if the net energy production saved over the lifespan of the system (assumed to be 25 years per the manufacturer warranty) exceeds the costs invested in manufacturing and installation. The environmental benefit is the reduction of carbon dioxide (CO2) emissions. Results of this research suggest that installation of a PV system would reduce CO2 emissions by 8,924 tons over the system life. Utility savings over the system life will surpass the net cost (in the year of installation) after the 25th year, resulting in a net savings. Although the cumulative cash flow will not show a financial payback until the 26th year, it is up to the college to decide whether a 25-year wait is reasonable. Factors such as grant money and deregulation would significantly shorten the financial payback time for Lorain County Community College, making the installation of a photovoltaic system a realistic decision.

Project Objective My objective was to investigate the potential financial and environmental impact of a photovoltaic system for Lorain County Community College, with particular attention to whether the system would pay the college back financially. From this, I was able to estimate the reduction of carbon dioxide released into the environment. This includes an assessment of the long-term financial payback of such systems. The objective was to determine whether a PV installation on the PC Campana Building at Lorain County Community College would prove economical in the future.

Methodology Used A literature review was done to gather information about the viability of solar cells on a single building. It was determined that the A. J. Lewis Center's photovoltaic system at Oberlin College could serve as a model for comparison with Lorain County Community College's PC Campana Engineering Technologies Building, since both colleges are in Lorain County. I also gathered information by interviewing solar energy experts, who provided specifications for a hypothetical system that could be used for Lorain County Community College. The PC Campana Building is the college's southernmost building, with no trees or buildings to block the solar arrays. The electrical usage of the PC Campana Building is roughly constant; Robert Flyer, the Physical Plant Director of LCCC, reports that this is because the building utilizes steam from the central boiler plant and chilled water from the central chiller plant. Differences in the A. J. Lewis Center's system size were also examined during the research to compare CO2 emissions and financial payback.

Results Obtained A modeled PV system using 2,124 Sanyo Electric photovoltaic panels with an array size of 424.8 kW was used for the study. As specified, the PC Campana Building solar array would generate 521,123 kWh annually. This is slightly greater than the total annual usage of the PC Campana Building, which is 521,004 kWh; the difference yields a Utility Energy to Purchase of negative 119 kWh. The net cost of this system would average $2,402,299, and the utility savings over the system life reach $2,359,434 in the 25th year. During the 26th year, the system would begin to pay for itself. The solar system would reduce greenhouse gas emissions by 8,924 tons of CO2 over 25 years. Dividing the utility savings over the system life by the net cost gives a 97% total life-cycle payback. With the system installed, 100% of the building's electricity would be supplied by solar during the prime months of March through September. The solar radiation data observed for Lorain County Community College and Oberlin College show an annual average of 4-5 kWh/m²/day, or roughly 1,460-1,825 kWh/m²/year. The Adam Lewis Center was designed to be an environmentally friendly academic research facility, according to a report published by Michael E. Murray and John E. Petersen at the A. J. Lewis Center. Their research suggests that although the A. J. Lewis Center's system pays back its CO2 rapidly, offsetting 409,800 pounds of CO2 emissions after just 3.7 years (PC Campana's system would reduce emissions by 8,924 tons), the system does not pay itself back financially, given its initial cost of $385,778 relative to the revenues over the lifespan of the system.
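A minimal cumulative cash flow sketch using the figures above reproduces the 26th-year payback. The uniform annual savings is an assumption (in practice savings would escalate with utility rates), so this should be read as a sanity check rather than the study's actual financial model.

```python
# Simple payback check using the figures quoted above.
net_cost = 2_402_299.0           # dollars, year of installation
savings_25yr = 2_359_434.0       # utility savings over the 25-year system life
annual_savings = savings_25yr / 25.0  # assumed uniform; real savings would escalate

cumulative = -net_cost
year = 0
while cumulative < 0.0:
    year += 1
    cumulative += annual_savings
print(year)  # 26 -> the system begins to pay for itself during the 26th year
```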

Significance and Interpretation of Results: Review with Robert Flyer The system described demonstrates that nearly any building, given the budget, room for the system, and climate conditions, can benefit from a photovoltaic system. I concluded that this system would benefit the college economically (paying the college back after 25 years) and environmentally (the system being 100% emissions-free). With additional technology added to the building, such as motion-sensitive lighting and photo sensors, the PC Campana Building could become even more energy efficient. The photovoltaic array system I studied may become much more viable than it already is in the not too distant future, because Ohio's electric rates are predicted to increase significantly beginning in 2009 due to deregulation or partial deregulation (currently being debated in the Ohio House and Senate). If electric rates increase as expected, Lorain County Community College could easily be paying substantially more per kilowatt-hour within the next year or two, with rates significantly higher over the next 5 to 10 years. Mr. Flyer and Mr. Green suggest anywhere between a 10% - 40% increase due to deregulation. This would greatly reduce the wait for a positive cumulative cash flow, which could result in a financial payback as early as the 10th year. In addition to deregulation, new technology in photovoltaics, such as longer life spans, could make the goal of this project a reality. A study on the Adam Lewis Center system suggests that the payback can differ significantly for the different "currencies" of energy, carbon dioxide, and money. For example, because of the dominance of coal as a fuel source in the region of installation, the Oberlin system pays back its CO2 relatively rapidly, just as my research suggests. On the other hand, because of the size of Oberlin's system, it will never pay back the financial cost of its installation. Robert Flyer suggests that with the significant amount of grant money that would be available to the college for such a project, and the substantial rate increases from deregulation of electricity, the college would be paid back its net cost much more quickly, making the implementation of such a system at Lorain County Community College a realistic decision.

References 1. Flyer, Robert. Personal interview. 3 Mar. 2008. 2. Green, Nathan. Personal interview. 15 Mar. 2008. 3. Lewis, Adam, comp. Adam Joseph Lewis Center for Environmental Studies. 2000. Oberlin College. 4 Feb. 2008. 4. Murray, Michael E., and John E. Petersen. Payback and Currencies of Energy, Carbon Dioxide and Money for a 60 kW Photovoltaic Array. Diss. Oberlin College, 2004. Oberlin. Ecosystems on Mars

Student Researcher: Michele S. Mayer

Advisor: Professor Charles W. Allport

Cedarville University Departments of Science and Math, and Education

Abstract Rather than simply identifying the different components of an ecosystem, this activity allows students to apply their knowledge by creating a hypothetical biome on Mars. Using a combination of biotic and abiotic factors that best suits the climate and seasons of the Red Planet, students may also account for these special features by inventing new technology that would aid in the sustainability of their ecosystem.

This activity serves as a great capstone to an entire ecology unit because students must apply their knowledge from multiple lessons (about energy flow, biogeochemical cycling, niches, population regulation, even plant physiology).

Lesson Before this activity is implemented, students should have a thorough understanding of the components of an ecosystem, how energy and necessary gases are recycled in an ecosystem, what causes population fluctuation, and the roles of different types of organisms (detritivores, producers, etc.). Also, it is important that students are familiarized with the special features of Mars that set it apart from Earth’s biomes.

Because creativity is such an asset and often increases with the number of people, this activity can be more effective when done in groups; I recommend small groups of two to three students. After explaining expectations to students and giving them sufficient background information about Mars, allow for one to two days of research. Another day or two should be allowed to compile the information and create the final written and drawn product. If there is enough time, have people/groups swap final products with another group in order to do peer evaluations of each other’s work. One person may see a missing component or a missing link in a cycle that another student failed to observe.

Objectives • Students will formulate a list, description, and drawing of the hypothetical biotic and abiotic components and the required technologies of a self-sustaining ecosystem on Mars. • Students will explain and defend their choices for the different components they included in their ecosystem.

Ohio Academic Content Standards Grade Seven - Diversity and Interdependence of Life, 3

Grade Nine - Diversity and Interdependence of Life, 9, 15, 16

Student Engagement As students create a new type of ecosystem, they have the opportunity to apply their basic knowledge about the interactions of an ecological community and see why the different components are necessary for its health and functioning. The research and the choice of components are all student led.

Resources • Information sheet of Mars facts • Resources for researching the most fit organisms for students’ ecosystems • http://sse.jpl.nasa.gov/planets/profile.cfm?Object=Mars • http://www.spacetoday.org/SolSys/Mars/MarsThePlanet/MarsSeasons.html • http://www.exploringmars.com/science/climate.html

Assessment In order to examine whether a hypothetical ecosystem would be sustainable, one should follow the energy and carbon cycles, examine the roles of the chosen organisms and the balance of species diversity, and examine what steps have been taken to account for the drastic climate and seasonal differences.

The sustainability of the students’ ecosystems should only be one component of the assessment. A number of discussion questions should be asked either orally as a class or written individually to ensure that students have sound reasoning for including the different factors they did.

• What makes the specific organisms chosen most suited for your Mars ecosystem over other organisms? • Would such an ecosystem be sustainable on Earth? Why or why not? • How did you account for the seasonal differences between the Earth and Mars?

Conclusions This activity gives students the opportunity to use their individual creativity to apply their knowledge of ecosystems. It incorporates multiple concepts that are often taught as separate lessons and shows the interactions between these components.

Teachers have to encourage creativity in the combination of organisms chosen. Otherwise, some students may simply “create” an ecosystem that is identical to one of Earth’s biomes, except that it is under a piece of technology that simulates Earth’s seasons and climate. Transient Performance of a Pneumatic Engine

Student Researcher: Stephen P. Meador

Advisor: Robert J. Setlock, Jr.

Miami University Department of Mechanical and Manufacturing Engineering

Abstract Continuing use of fossil fuels to power automobiles is leading to increasing levels of pollution and greenhouse-effect gases in the atmosphere, and efforts need to be made to lessen this effect. An obvious remedy to the problem is to either reduce or eliminate the need for combustible fuels. One solution that has been devised is a hybrid engine that uses both compressed air and combustible fuel to produce mechanical work. This engine type can use both the energy sources combined or independently. Such an engine is similar to a conventional engine, but an additional valve, which connects to a compressed air storage tank, has been added to the cylinder head. This paper evaluates the effects of timing scenarios on the throttle response of such an engine when it is operating under a mode relying solely on compressed air for propulsion. The effects of various valve timing scenarios on throttle response are presented.

Project Objectives Higelin [1] describes a concept to decrease the level of combustible fuel consumption by using compressed air to provide power to an engine. The compressed air can be used as the sole energy source or as a supplement to a traditional combustible fuel. This hybrid pneumatic-combustion engine concept operates similarly to a conventional internal combustion engine; however, it has a third valve, a charging valve, which is connected to a high-pressure air tank. This conceptual engine can operate in a variety of different engine modes including pneumatic motor mode, pneumatic pump mode, as a supercharged engine, as an undercharged engine, or as a conventional internal combustion engine.

Higelin mentions possible timing alterations of a pneumatic-combustion engine and briefly discusses the potential benefits of these scenarios; however, an in-depth analysis of the timing scenarios is not provided. Chen [2] has evaluated the effect of valve timing on the steady state solution for the pneumatic motor mode. While that research shows the benefits of different timing scenarios, the pneumatic motor operating mode described by Higelin is intended for accelerating a vehicle from a standstill, which requires an analysis of the transient characteristics of the engine. This research aims to characterize the transient performance of a pneumatic engine with varying valve timing scenarios.

Methodology Used The pneumatic engine was modeled using a thermodynamic first law analysis, as is described by Chen. The model is a differential equation that accounts for conservation of energy in the engine cylinder, as shown in Equation 1 below, where T is the cylinder temperature, m is the instantaneous air mass in the cylinder, Cv is the constant volume specific heat for air, Q is the heat transfer from the cylinder to the atmosphere, W is the work being performed on the cylinder, h is the specific enthalpy at the cylinder temperature, and u is the specific internal energy at the cylinder temperature.

[Equation 1]
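The equation itself is not reproduced here; based on the variable definitions above, a standard single-zone, open-system first-law form (an assumption about the exact expression and sign convention, consistent with the stated variables) is:

$$m\,C_v\,\frac{dT}{dt} \;=\; \dot{Q} \;-\; \dot{W} \;+\; \frac{dm}{dt}\,\left(h - u\right)$$

Solving for dT/dt yields the ordinary differential equation for cylinder temperature that is integrated in time alongside the mechanical model.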

Equation 1 was then coupled with a MATLAB SimMechanics model of a reciprocating engine, and Simulink was used to measure various quantities of the overall system as a function of time.

Results and Conclusions To quantify the effect of valve timing on the throttle response of a pneumatic motor, the engine rotation speed was measured. It was found that the timing of the opening of the charging valve and the closing of the exhaust valve does not have a significant effect on the throttle response of the engine, as shown in Figure 1 and Figure 4, respectively. However, the timing of the closing of the charging valve and the opening of the exhaust valve, shown in Figure 2 and Figure 3, respectively, does have a significant effect on the throttle response of the engine.

Figure 2 shows that the engine’s rotation speed increases much more quickly when the charging valve closes later. This is likely due to the compressed air having an additional orifice through which to exit, which reduces the resistance to motion of the piston. In contrast, Figure 3 shows that the engine slows down when the exhaust valve opens late, because the piston is forced to further compress the air in the cylinder.

Figures

[Figures 1 through 4 each plot engine rotation speed (rpm), 0 to 3000 rpm, versus time into simulation (s), 0 to 20 s, for the base, early, and late timing cases.]

Figure 1. Effect of Charging Valve Opening Timing on Engine Rotation Speed.

Figure 2. Effect of Charging Valve Closing Timing on Engine Rotation Speed.

Figure 3. Effect of Exhaust Valve Opening Timing on Engine Rotation Speed.

Figure 4. Effect of Exhaust Valve Closing Timing on Engine Rotation Speed.

References 1. Higelin, Pascal, Alain Charlet, and Yann Chamaillard. "Thermodynamic Simulation of a Hybrid Pneumatic-Combustion Engine Concept." Int. J. Applied Thermodynamics 5 (2002): 1-11. 2. Chen, Ying, Hao Liu, and Guoliang Tao. "Simulation on the port timing of an air-powered engine." Int. J. Vehicle Design 38 (2005): 259-273. Infrared LED Tracking System

Student Researcher: Eric J. Miller

Advisor: Albert Bosse, Ph.D.

University of Cincinnati Aerospace Engineering

Abstract The determination of the true position and orientation of a mobile robot is required for autonomous satellite docking research. Accurate position measurements must be known for scrutiny of the algorithms being tested on the robot. A truth system will be built using technology currently found in head-pose tracking systems, which are common in virtual reality environments. The planned system consists of an array of infrared light-emitting diodes (LEDs) mounted on the mobile robot. Two cameras mounted on the wall of the laboratory capture the LEDs in 2-D images. Using simple geometry calculations, the 2-D images of the LEDs are transformed back into 3-D coordinates giving the position and orientation of the mobile robot.

Project Objectives The design requirements for this project are to attain the position and orientation of the mobile robot within ±5 mm anywhere in the lab. The output of all 6 DOF of the robot shall be recorded at a rate of 1 Hz.

Methodology The following equipment was utilized to build this metrology system. Point Grey Research FLEA2 cameras were used; the camera resolution is 1024x768, and images are taken at a rate of 15 FPS. Filters that block visible light are used so that only infrared light is detected. A pan-tilt device is used to view the entire room: the PTU-D46-17 pan-tilt platform has a rotation speed of 300°/second and a resolution of 3.086 arc minutes (0.0514°). The infrared LEDs are wide-angle (120°) beam LEDs; they use T1¾ (5 mm) mounts and have a peak emission of 830 nm. The target has an array of LEDs, but only 3 LEDs are required for calculating the position and rotation angles. Given the known positions of all three LEDs, the target orientation can be calculated.

Four different frames of reference are used. The robot frame is attached to the robot and is determined by the 3 LEDs. The camera frame is used for calculating the positions of the LEDs; the right camera focal point is the origin of the camera frame, with the z axis pointing out of the camera. The LED positions in the camera frame are then translated into the lab frame. The lab frame can be located anywhere in the room, such as on the target satellite. The pan-tilt frame is used for calculating the rotation matrix for different pan and tilt angles. Figure 1 shows how all four frames are positioned with respect to each other. The vector R_LR is read as the vector from the lab origin to the robot origin; another way of saying this is the position of the robot frame in the lab frame.

It is important that the exact position of the LED light is found in the image for accurate position measurements. The program first finds the blobs that meet certain criteria. Once these initial blobs are found, the centroiding function examines each blob: a 40x40 box surrounds the blob, and the x and y coordinates are calculated using the moment method. The x and y centroid coordinates were found using Equation 1, where pixvalue is the value of the pixel, ranging from 0 to 255.

Equation 1:

$$X_{\text{Point}} = \frac{\sum_{i,j} \left(X_i \cdot \text{pixvalue}_{i,j}\right)}{\sum_{i,j} \text{pixvalue}_{i,j}}, \qquad Y_{\text{Point}} = \frac{\sum_{i,j} \left(Y_i \cdot \text{pixvalue}_{i,j}\right)}{\sum_{i,j} \text{pixvalue}_{i,j}}$$

Calculating the position of the LED in relation to the camera frame is a matter of converting the 2-D image coordinates taken from both cameras and transferring them into a 3-D position. The pixel shifts from both cameras are used along with the focal and baseline distances to calculate the 3-D position. The verge angle of the two cameras would also be required, but for this project the cameras are considered to be pointed parallel to each other.

Equations 2 through 4 calculate the X, Y, and Z coordinates of the LED in the camera frame. Baseline is the distance between the cameras and is 150 mm. Remember, the X, Y, and Z coordinates are referenced to the right camera. The focal length is 2800 units, which includes a scale factor and comes from the calibration data; it is not the same as the manufacturer's published focal length. PPX and PPY are the principal points of the camera in the x and y directions. These equations were derived using the pinhole camera model.

$$\text{Baseline} = 150\ \text{mm}, \qquad \text{Focal} = 2800\ \text{pixels}$$

Equation 2: $X_C = \dfrac{\text{Baseline}}{X_{L\text{Point}} - X_{R\text{Point}}}\,\left(X_{R\text{Point}} - PP_X\right)$

Equation 3: $Y_C = \dfrac{\text{Baseline}}{X_{L\text{Point}} - X_{R\text{Point}}}\,\left(Y_{R\text{Point}} - PP_Y\right)$

Equation 4: $Z_C = \dfrac{\text{Baseline} \cdot \text{Focal}}{X_{L\text{Point}} - X_{R\text{Point}}}$
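A minimal Python sketch of Equation 1 and Equations 2 through 4 is shown below. The principal point values are placeholders (the real values come from calibration), and the image-handling details are simplified relative to the actual program.

```python
import numpy as np

BASELINE = 150.0   # mm, camera separation
FOCAL = 2800.0     # pixels, from calibration (includes a scale factor)
PPX, PPY = 512.0, 384.0  # placeholder principal point; calibration supplies the real values

def centroid(patch, x0, y0):
    """Intensity-weighted centroid of a blob (Equation 1).

    patch is the 40x40 array of pixel values (0-255) around the blob;
    (x0, y0) is the patch origin in image coordinates.
    """
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = patch.sum()
    return x0 + (xs * patch).sum() / total, y0 + (ys * patch).sum() / total

def led_position_camera(x_right, y_right, x_left):
    """LED position in the camera frame from matched centroids (Equations 2-4).

    Assumes parallel cameras with the right camera focal point as the origin.
    """
    disparity = x_left - x_right
    x = BASELINE * (x_right - PPX) / disparity
    y = BASELINE * (y_right - PPY) / disparity
    z = BASELINE * FOCAL / disparity
    return np.array([x, y, z])
```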

The coordinates of the LEDs in the camera frame must be translated and rotated into the lab frame. Figure 1 visually shows the frames used. The transformation goes as follows: first, the camera coordinates are translated to the pan-tilt frame. Second, the coordinates are rotated by angles phi and theta for the tilt and pan axis rotations, respectively. Third, the rotated coordinates are translated by the R_CPT and R_CL distances, where R_CPT is the position of the pan-tilt in the camera frame and R_CL is the position of the lab coordinates in the camera frame. Equation 6 shows the entire transformation.

Equation 6:

$$\begin{bmatrix} X_L \\ Y_L \\ Z_L \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & -X_{CL} \\ 0 & 1 & 0 & -Y_{CL} \\ 0 & 0 & 1 & -Z_{CL} \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & X_{CPT} \\ 0 & 1 & 0 & Y_{CPT} \\ 0 & 0 & 1 & Z_{CPT} \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\varphi & -\sin\varphi & 0 \\ 0 & \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & -X_{CPT} \\ 0 & 1 & 0 & -Y_{CPT} \\ 0 & 0 & 1 & -Z_{CPT} \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix}$$
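The chain of homogeneous transforms in Equation 6 maps directly onto matrix multiplication; below is a minimal NumPy sketch of that chain. Variable names are illustrative, not taken from the original program.

```python
import numpy as np

def translate(v):
    T = np.eye(4)
    T[:3, 3] = v
    return T

def rot_pan(theta):   # rotation about the y axis by the pan angle
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

def rot_tilt(phi):    # rotation about the x axis by the tilt angle
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

def camera_to_lab(p_cam, theta, phi, r_cpt, r_cl):
    """Apply Equation 6: translate to the pan-tilt frame, rotate by tilt
    and pan, translate back by R_CPT, and then shift into the lab frame."""
    M = (translate(-np.asarray(r_cl)) @ translate(np.asarray(r_cpt))
         @ rot_pan(theta) @ rot_tilt(phi) @ translate(-np.asarray(r_cpt)))
    return (M @ np.append(p_cam, 1.0))[:3]
```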

The pan-tilt needs to be reoriented periodically so that the LED array stays within the viewing angles of the cameras. The delta pan and delta tilt angles are calculated from Equations 13 and 14. These calculated angles are then added to the original pan and tilt angles to get the new required pan and tilt angles for the calculation of the LED positions in the lab frame.

Equation 13: $\theta = \tan^{-1}\!\left(\dfrac{X_C}{Z_C}\right)$

Equation 14: $\varphi = \tan^{-1}\!\left(\dfrac{Y_C}{\sqrt{X_C^2 + Z_C^2}}\right)$

The program produces a large amount of data output. The LED coordinates in the right and left images are output for later scrutiny of the algorithms. The 3-D coordinates of the LEDs in the camera frame are also output to a file. The calculated distances between the lights are shown to double-check that the output data is valid. The positions of the LEDs in lab coordinates are output as required; the lab positions are later used to calculate the orientation of the robot frame with respect to the lab frame. The update frequency is also displayed in Hz; the program usually runs at around 3 Hz. The rotation angles for the pan and tilt are output to verify the orientation of the pan-tilt device, in both degrees and in pan-tilt units.
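For completeness, Equations 13 and 14 above translate into two lines of code; using arctan2 and hypot is an implementation choice (not necessarily the original program's) that keeps quadrants and magnitudes well behaved.

```python
import numpy as np

def repoint_angles(p_cam):
    """Delta pan and delta tilt (Equations 13 and 14) from an LED position
    in the camera frame; add these to the current pan and tilt angles."""
    x, y, z = p_cam
    d_theta = np.arctan2(x, z)              # Equation 13: delta pan
    d_phi = np.arctan2(y, np.hypot(x, z))   # Equation 14: delta tilt
    return d_theta, d_phi
```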

The program interface also allows control of the pan-tilt movement. Four different options exist for pan-tilt movement. The pan-tilt can track the lights around the room, and if it loses the lights it can scan the room to relocate them. Also, specific positions can be input to move the pan-tilt to a desired position.

The program currently uses a sorting function to sort the lights so that the LED positions are calculated in the same order every time. The lights must be sorted so that the averaging works correctly. Averaging is required to eliminate the noise in the data. The sorting function makes use of the distances between the lights. Sorting the distances from smallest to largest, the corresponding LEDs can be obtained. The calculation of the Euler angles also requires that the LEDs be calculated in the same order.
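One way such a distance-based sort could be implemented is sketched below (an illustration, not the author's actual function): each LED is labeled by the sorted list of its distances to the other LEDs, and the LEDs are then ordered by those signatures.

import numpy as np

def sort_leds(points):
    # Order LED positions consistently using pairwise distances.
    # points: (n, 3) array of LED coordinates in the camera frame.
    pts = np.asarray(points)
    n = len(pts)
    sig = [tuple(sorted(np.linalg.norm(pts[i] - pts[j])
                        for j in range(n) if j != i))
           for i in range(n)]
    order = sorted(range(n), key=lambda i: sig[i])
    return pts[order]

Because the distance signatures are invariant to rotation and translation, the same physical LED receives the same index in every frame, which is what the averaging and the Euler angle calculation require.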

It is important to note that the cameras must be calibrated to obtain the focal length and the distortion model. A calibration must be conducted every time the cameras are refocused. The calibration programs currently used are quite robust and also output a range of other values. Using a known checkerboard pattern, the characteristics of the camera can be calculated. All calibration parameters are coupled, making calibration a complicated process. Reference 1 presents more details on the calibration process.

Results Obtained
One validation test was conducted and is shown in Figures 2 through 4. The data shown below are from a verification test in which the target was moved in a straight line along the z-axis of the lab and rotated left, then right. Data were taken at 3 Hz. Positions and rotation angles are with respect to the lab frame. The results are quite good. Looking at the positions, the Y position is constant, as it should be, since the cart is moving along the floor. The Z position is a stair step, with each step representing the time that the cart is at rest. The X position is constant until the cart is rotated left, then right. The Euler angle results are also reasonable: Yaw and Roll are constant throughout the test, while Pitch shows the rotation of the cart.

Conclusions
The system has been assembled and is operational. The program captures the left and right images, finds the centroids of the lights, and calculates the positions of the LEDs in the camera frame. The coordinates of the LEDs in the camera frame are then transformed into the lab frame, giving the lab coordinates and the orientation of the robot. The position data are output to a file. The system can also track the LED target around the room and scan for the target if it loses sight of it. More testing is required to determine whether this system meets the accuracy requirements; however, the system does meet the required data output rate of 1 Hz.

Acknowledgments
I would like to thank Adam Gerlach, Ryan Miller, and Ashley Verhoff for their help on this project. Without their assistance it would not have been possible to accomplish what I did.

References
1. T. Luhmann, et al. Close Range Photogrammetry: Principles, Techniques and Applications. Dunbeath: Whittles; Hoboken, NJ: distributed in North America by J. Wiley & Sons, 2006.

Figures and Tables

Figure 1. Frames of reference.

Figure 2. Position of the robot in the lab.

Figure 3. Position of the robot in the lab.

Figure 4. Orientation of the robot in the lab.

UCAV Wind Tunnel Test Video Inertial Force Measurement

Student Researcher: Robert W. Mitchell

Advisor: Dr. Aaron Altman

University of Dayton Department of Mechanical and Aerospace Engineering

Abstract
A high speed camera was used to extract velocities and accelerations in order to determine inertial forces on a small UAV wind tunnel model. The testing that was performed used a high-speed video method to calculate the velocities and accelerations of the model, which was perturbed with pitch and roll doublets. The model that was used during testing was the Bihrle ICE UAV, which was positioned behind a C-130 afterbody in the Vertical Wind Tunnel (VWT) at Wright-Patterson Air Force Base. The velocities and accelerations from the high-speed video were used to find the inertial force of the model given the mass and moment of inertia distributions. The high-speed video data were compared to data from the on-board Bihrle accelerometer. The comparison gave a better understanding of the uncertainty and error of using the low-cost, high-speed video method as an alternative to accelerometers.

Nomenclature
x          X coordinate
y          Y coordinate
x_o        initial starting point
x_n        current point
x_{n-1}    position at previous time step
x_position position of model
m          slope
v          velocity
a          acceleration
ω          angular velocity
α          angular acceleration
Δt         time step

Project Objective
Currently, the use of accelerometers in wind tunnel models is the most accurate and widely used way to obtain data on velocities and accelerations during testing. Accelerometers have some inherent drawbacks: they are expensive, add weight and complexity to the model, and require calibration. Also, there are no other widely used, accurate methods of collecting data on the inertial forces of a test model. The method of using low-cost, high-speed video to track movements about a known fixed reference over a specified time interval is a potential alternative to expensive accelerometers.

High speed video methods have been used to characterize the motion of flapping wing insects and birds. They have been and are currently used to calculate the dynamics of flapping flight. The work by Fry, Sayaman, and Dickinson1 uses a high speed video method to perform a three-dimensional capture of the wing and body kinematics of free-flying fruit flies in rapid flight maneuvers. The video of the wing kinematics was used to dynamically scale a robotic model to measure the aerodynamic forces produced by the wings.

The method of using high speed video to calculate the dynamic characteristics on a wind tunnel model has been used when there is a weak signal and a significant amount of interference. One application where this method has been used is with high speed rotating micro air vehicles. The work by Shao-rong, Yan, Lan-xing, and Jun2 used a high speed video method to calculate the angle of blade pitch to that of the connecting rod angle and found that the precision of the angles was very high, which proved that this method was effective for calculating angles in environments with significant interference.

Methodology Used
The testing was performed in the Vertical Wind Tunnel (VWT) at Wright-Patterson Air Force Base. The model used during testing was the Bihrle ICE UAV, which was positioned behind a C-130 afterbody. A PCO 1200 high-speed camera was used to image the model during testing, with a Nikon 28 mm lens set to an f-stop of 2.5. The PCO 1200 has a resolution of 1280 x 1024 pixels, with a pixel size of 12.0 x 12.0 µm. The read-out noise of the camera was 41 e- rms at 66 MHz. The exposure time was 5 ms with zero delay at a frame rate of 200 frames/sec. The camera's 3 GB of data storage allowed approximately 15 seconds of recording.

The test setup in the wind tunnel is shown in Figure 1. The model was bisected by a bar that ran the width of the tunnel test section. The model was attached to the bar by a low-friction pivot slide placed at the center of the model. The pivot slide allowed the model to move freely up and down the bar and to move freely in the yaw axis.

The camera was positioned on the opposite side of the tunnel controls in front of the window, so the camera was about 10 ft from the model. The lens was then focused on the slide bar for the model. The aperture and the f-stop were set to provide the best image quality. The camera was connected to a laptop computer and controlled by the CamWare software.

Figure 1. Wind Tunnel Test Setup.

Results Obtained
The best possible set of images providing a good base for analyzing the motion of the model was chosen from the eleven tests. The pitch-perturbation images were used for data analysis. Based upon testing data received from Bihrle Applied Research Inc., the set of images chosen was from a test with a level slide bar. The set of images analyzed was 1.2 seconds in length and showed the model starting at rest, accelerating, and then decelerating back to a velocity of zero. The time increment between images was 0.02 seconds; this increment was selected because it matched the sample rate of the Bihrle data.

To calculate the movement of the model, a set of x and y axes was drawn manually on each image using Plot Digitizer. The axes remained in a fixed location: the x axis ran through the center of the slide bar and the y axis was positioned on the very left edge of the image. The axes were numbered in increments from 0 to 1000 to provide good resolution of the position. Using Plot Digitizer, points along the right edge of the slide bar were selected with the digitize command. The points were manually selected for the slide position, which served as the model position. For the pitch of the model, the top of the fuselage from the start of the vertical tail to the slide bar was used, since this reference would not change as a function of the roll angle of the model. The selected measuring locations are shown in Figure 2.

Figure 2. Position Layout.

Using the measuring locations, ten points were chosen for the slide position. Approximately 20 to 27 points were selected along the top of the fuselage for the pitch angle. The points were saved into Microsoft Excel format and then reduced.

The slide position points were averaged to provide a location of the model at a given time. The positions were used to calculate the position, velocity, and acceleration of the model based on the known time step. The initial time step used was 0.02 seconds. Using the initial position at time 0.00 second as the zero distance the change in position was calculated using Equation 1.

$x_{position} = x_n - x_o$   (1)

To assign the distance in feet, the known length from the slide pivot to the underside of the fuselage was used. Three sets of data were compared to find the average length in inches on the unitless x axis. Once the reference length was found, the position data were converted into feet. To calculate the velocity of the model, Equation 2 was used:

$v = \dfrac{x_n - x_{n-1}}{\Delta t}$   (2)

To calculate the acceleration of the model, the change in velocity over time was used, shown in Equation 3:

$a = \dfrac{v_n - v_{n-1}}{\Delta t}$   (3)

Using the XY scatter plot function in Excel, the points from the pitch angle were graphed. Using the trend function, a line of best fit was applied to the pitch data; the line equation provided the slope of the pitch data. The slope of the line was used to calculate the y position from Equation 4. The starting (x, y) coordinates of the line were (0, 0), which caused the line to run through the origin, making the offset zero, so the b in the equation was neglected. To find the corresponding y value based on the slope, an x value of 5 was used to find another point on the line:

$y = mx + b$   (4)

The second set of coordinates was then input into Equation 5 to find the pitch angle:

$\theta = \pm 90^{\circ} + \tan^{-1}\left(\dfrac{y_2}{x_2}\right)$   (5)

The (+) was used if the slope was negative, and the (-) was used if the slope was positive. This was done because the relative axis of the model was shifted by -90°.
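A minimal sketch of Equations 1 through 5 applied to the digitized trace (names are illustrative; positions are assumed to be already converted to feet):

import math

def kinematics(x, dt=0.02):
    # Finite-difference velocity and acceleration (Equations 1-3).
    # x: list of positions in feet sampled every dt seconds.
    x_rel = [xi - x[0] for xi in x]                                     # Eq. 1
    v = [(x_rel[n] - x_rel[n - 1]) / dt for n in range(1, len(x_rel))]  # Eq. 2
    a = [(v[n] - v[n - 1]) / dt for n in range(1, len(v))]              # Eq. 3
    return x_rel, v, a

def pitch_angle(m):
    # Pitch angle from the fitted slope m (Equations 4-5).
    # The model axis is shifted by -90 degrees, hence the sign convention.
    sign = 90.0 if m < 0 else -90.0
    return sign + math.degrees(math.atan(m))

Note that atan(y2/x2) with x2 = 5 and y2 = 5m reduces to atan(m).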

Due to the randomness of the calculated velocity data, another approach to modeling the acceleration was used. Using the velocity-versus-time data, a smoothed fifth-order polynomial curve fit was applied in Excel. Taking the derivative of the polynomial equation with respect to time gave the acceleration polynomial, from which the acceleration as a function of time was found.
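The same smoothing step can be reproduced outside Excel; a sketch with NumPy (variable names are ours):

import numpy as np

def smoothed_acceleration(t, v):
    # Fit v(t) with a fifth-order polynomial and differentiate it,
    # mirroring the Excel curve-fit procedure described above.
    coeffs = np.polyfit(t, v, 5)      # least-squares polynomial coefficients
    dcoeffs = np.polyder(coeffs)      # derivative: acceleration polynomial
    return np.polyval(dcoeffs, t)     # acceleration at the sample times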

The velocities and accelerations from the high-speed video were used to find the inertial force of the model given the mass and moment of inertia distributions. The mass was distributed so that the model balanced on the pivot slide. The pivot slide was also assumed to be the center of pressure, meaning that all lift forces would act at the pivot slide, eliminating moments created by lift. With these distributions and assumptions, the inertial forces would have a profile similar to the acceleration. The angular velocity and angular acceleration can be obtained by analyzing the pitch data. The angular velocity can be calculated using Equation 6:

$\omega = \dfrac{\Delta\theta}{\Delta t}$   (6)

The angular acceleration can be calculated using Equation 7:

$\alpha = \dfrac{\Delta\omega}{\Delta t}$   (7)

Significance and Interpretation of Results
The uncertainty in the position data was amplified every time the data were divided by the time step. For example, this occurred when the velocity data, based on the change in position, were divided by the time step. The uncertainty was amplified again when acceleration was calculated from the change in velocity over the time step.

The mean, standard deviation, and percent error for the zero-velocity and maximum-velocity data are shown in Table 1. The standard deviations for the two cases are very close, with a maximum position error of less than 0.15%. The data for the faster velocities have less error and a lower standard deviation.

Table 1. Error Analysis.

                                Zero Velocity          Maximum Velocity
                                Units      Feet        Units      Feet
Mean                            406.03     2.01        502.70     2.49
Standard Deviation from Mean    0.589      0.003       0.536      0.003
% Error (Position)              0.145      0.145       0.107      0.107

A high-speed video method was used to extract velocities and accelerations on the Bihrle ICE UAV positioned behind a C-130 afterbody, which was perturbed in pitch and roll. The low-cost, high-speed video method could be used to calculate velocities and accelerations based on the position of the model as a function of a known time step. The repeatability of the high-speed video method should be very high, since the method is based on a fixed reference. If the camera is stationary during the testing and there is a clear fixed reference such as a bar or window, the position can be found, enabling the calculation of the velocity and acceleration of the object if the time step is known.

Accelerometers are still more accurate at finding the acceleration curve and values than the current high-speed video method. However, the video method establishes that the position and velocity of a wind tunnel model can be measured with precision comparable to an accelerometer. The acceleration data of the model were still too scattered to accurately determine the values and trend of the model's acceleration. Because the acceleration data had considerable uncertainties, the inertial forces had similar uncertainties. To help minimize the uncertainties in the acceleration data, the derivative of the velocity polynomial curve fit was calculated, resulting in a much smoother data curve.

Figures/Charts

Figure: Position Data Comparison. Average X Position and Bihrle Position Data, plotted as Position (ft) versus Time (sec).

Figure: Acceleration Data Comparison. Polynomial Derivative Acceleration and Bihrle Acceleration Data, plotted as Acceleration (ft/sec^2) versus Time (sec).

References

1. Fry, Sayaman, and Dickinson. "The aerodynamics of free-flight maneuvers in Drosophila." Science, Vol. 300, No. 5618, 18 April 2003, pp. 495-498.
2. Shao-rong, Yan, Lan-xing, and Jun. "Application of high-speed camera to characteristic measurement of MAV." Optics and Precision Engineering, Vol. 15, No. 3, March 2007, pp. 378-383.

Feasibility Study Report: Analytical Prediction and Mechanical Design of a High-Altitude Intelligent Balloon

Student Researcher: Wai Moe

Advisor: Dr. Jiang Zhe

The University of Akron Department of Mechanical Engineering

Summary
Development of an intelligent balloon flying at near-space altitude for atmospheric studies, remote sensing, and image capturing is one of the most attractive experiments in balloon technology. A helium-filled balloon has been designed, developed, assembled, and launched by an undergraduate team at the University of Akron. The mechanical engineering and electrical engineering teams worked together to equip the balloon with a temperature sensor, a pressure sensor, a data collection/storage device, a wireless data transmitter, and an aerial image-capturing unit. The ultimate goal of the project is to develop a wirelessly controllable intelligent balloon that can rise to a near-space altitude of 95,000 feet and transmit the atmospheric data back to the primary station in real time. This research project provides a cost-efficient approach for scientific observations and atmospheric research at near-space altitude.

Introduction
Published research reports show that large-scale balloons have been launched for atmospheric studies. Large-scale balloons have a high launch cost, limited launching sites, and difficult recovery issues. An ultra-lightweight balloon system with compact sensor components is a low-cost alternative with almost limitless launching sites, and it is a relatively new approach in atmospheric research. With the development of the Global Positioning System, wireless data transmission and trajectory tracking of the balloon are possible.

Design/Approach
The FAA (Federal Aviation Administration) limits the weight of each sensor box to no more than 6 lb; sensor components should be placed in two or more separate boxes if the payload is more than 6 lb. Once the total weight of the balloon and the payload is determined, an initial buoyant force can be calculated. The initial buoyant force is the force required to make the craft neutrally buoyant. Since the helium balloon needs to climb, its buoyant force must be greater than the payload weight. Due to the turbulent nature of the wind and the atmosphere, a ratio of 1.5 buoyant force to 1 payload weight is generally used to overcome the wind and maintain a reasonable ascent speed.

For low altitudes and relatively small changes in altitude, the densities of both air and helium can be considered constant.
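As a rough check of the 1.5-to-1 sizing rule at launch, assuming standard sea-level densities (the function and values below are illustrative, not taken from the report):

def required_volume(payload_kg, rho_air=1.225, rho_he=0.166, factor=1.5):
    # Balloon volume (m^3) whose net lift is `factor` times the payload weight.
    # Net lift per cubic meter of helium is (rho_air - rho_he) * g.
    g = 9.81
    return factor * payload_kg * g / ((rho_air - rho_he) * g)

# Example: a 6 lb (2.72 kg) payload needs roughly 3.9 m^3 of helium at sea level.
print(required_volume(2.72))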

Figure 1. Lift force of Intelligent Balloon System.

Figure 2. Lift force of the Intelligent Balloon System.

However, when the balloon is at high altitude, the densities of air and helium change, influencing the volume of the helium gas inside the balloon and the buoyant force. The densities of air and helium depend on the pressure and temperature of the atmosphere, which change with the altitude of the intelligent balloon. Data on the changes in atmospheric temperature and pressure were obtained from NASA and NOAA.

The densities of air and helium are calculated from the ideal gas law.
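In the form used here (a standard relation, stated for completeness rather than quoted from the report), density follows from pressure and temperature as

$\rho = \dfrac{P}{R_{specific} T}$, with $R_{air} \approx 287\ \mathrm{J/(kg \cdot K)}$ and $R_{He} \approx 2077\ \mathrm{J/(kg \cdot K)}$,

where P is the local atmospheric pressure and T the local absolute temperature.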

The change in density of air and helium with altitude can be seen in the following chart.

Figure 3. Change in density of air and helium according to altitude.

Gravity also changes as the balloon gains altitude; the following diagram shows that the change in gravitational acceleration is minimal and has no major influence on the densities of helium and air.

Figure 4. Change in acceleration of gravity with altitude.

The volume of the helium inside the balloon expands as the altitude increases. If the balloon keeps ascending, it will eventually burst at a certain altitude, depending on the rating and the material of the balloon, due to the expansion of the helium gas. The expansion of the helium gas and the volume of the balloon can be predicted using the following equations.
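The equations are not reproduced legibly here; the standard ideal-gas relation that predicts this expansion, assuming the pressure inside the balloon tracks the ambient pressure, is

$\dfrac{P_1 V_1}{T_1} = \dfrac{P_2 V_2}{T_2} \quad\Rightarrow\quad V_2 = V_1 \dfrac{P_1}{P_2} \dfrac{T_2}{T_1}$

with state 1 at launch and state 2 at altitude.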

The volume expansion of the helium balloon should be similar to the following chart, depending on the initial size of the balloon at ground level.

Figure 5. Volume expansion of balloon with altitude.

According to previous experiments, a balloon with a 6-foot initial diameter explodes at approximately 95,000 feet. The explosion of the balloon tends to be very violent and can cause a shock that loosens the connections of sensors and electronic components inside the sensor box. Therefore, a thermal cutter system can be developed to prevent the sensor box from being hit by the shock wave. The thermal cutter is connected to the microprocessor inside the payload box. The microprocessor can be programmed to send a signal at a designated altitude. When the signal is sent, a current flows through the Nichrome filament inside the thermal cutter and separates the system from the balloon before the balloon explodes.

The parachute for the intelligent balloon system cannot deploy immediately when the system is cut off from the balloon because of the very low air density at that altitude. The system will be almost in free fall until the air density is high enough for the parachute to open.

The drag force equation can be used to describe the lift force of the parachute and the descent velocity of the intelligent balloon system:

$F_D = \dfrac{1}{2} \rho_{air} C_d A V^2$

When the parachute is fully open, the drag force balances the weight W of the system. Therefore,

$A = \dfrac{2 W}{\rho_{air} C_d V^2}$

Cd is the drag coefficient of the parachute. For a dome-shaped parachute, the drag coefficient is generally 1.5, but a more accurate number can be obtained from the parachute vendor. In order to limit damage to the sensor box and sensor components, the descent velocity V should be less than 3 m/s when the system hits the ground. The density of the air, ρ_air, increases as the intelligent balloon system descends and provides more drag force. The area of the parachute can be calculated if all of the above variables are known.
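A sketch of that sizing calculation (function and default values are illustrative), solving the terminal-descent balance for the canopy area:

def parachute_area(mass_kg, v_max=3.0, cd=1.5, rho_air=1.225):
    # Canopy area (m^2) so the system descends no faster than v_max
    # in the densest (sea-level) air it will encounter.
    g = 9.81
    return 2.0 * mass_kg * g / (rho_air * cd * v_max ** 2)

# Example: a 2.72 kg (6 lb) payload needs roughly 3.2 m^2 of canopy.
print(parachute_area(2.72))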

The microprocessor-controlled GPS system will transmit the latitude, longitude, and altitude of the intelligent balloon system to the base station in real time. The intelligent balloon system can then be recovered using the GPS data once it has landed.

Future Study
The intelligent balloon will be launched to collect atmospheric data and aerial pictures, and the collected data will be compared with the theoretical predictions. A two-way communication system will be developed between the intelligent balloon and the base station. With it, the thermal cutter system will be able to cut the line between the balloon and the payload boxes at any time, on a signal sent from the base station, without having to preprogram the microprocessor. Using the two-way communication system, the ground station will have more real-time control over the intelligent balloon system, and the balloon will be able to transmit the collected atmospheric data and aerial pictures to the base station in real time.

Appendix
Buoyancy force of different helium balloon sizes with altitude

Change in temperature with altitude

Change in pressure with altitude

Determination of the In Vitro Dissolution Rates of Respirable Airborne Particles of JSC-1A Lunar Mare Regolith Simulant in Simulated Lung Fluid

Student Researcher: Maisha M. Murry

Advisor: Dr. Henry Spitz

The University of Cincinnati Department of Mechanical, Industrial, and Nuclear Engineering

Abstract
Astronauts are susceptible to exposure to lunar dust when removing their space suits following a lunar excursion. It is highly probable that an astronaut exposed to lunar dust would inhale oxygen containing re-suspended lunar dust. Unfortunately, respiratory dissolution rates for lunar dust are not currently known. Using in vitro dissolution, the respirable fraction of JSC-1A Lunar Mare Regolith Simulant would be evaluated through radiochemical separation techniques and analyzed by alpha spectrometry to determine its dissolution fraction. The in vitro dissolution technique would be evaluated using simulated lung fluid and pure water at room temperature (25°C) and body temperature (37°C). The results obtained from this research would show how readily the respirable fraction of JSC-1A lunar simulant dissolves in an in vitro environment of neutral simulated lung fluid. The results would also allow comparison of the dissolution rates of the respirable fraction of JSC-1A lunar simulant at room temperature and at body temperature, in simulated lung fluid and pure water, over 1-hr, 8-hr, and 24-hr dissolution periods.

Project Objectives
The expectation of increased extravehicular activity within the lunar environment "would result in increased exposure duration and airborne concentrations" of lunar dust.1 This increased activity would raise the chances of an astronaut being exposed to lunar dust when removing their space suit, at which time they may inhale oxygen containing re-suspended lunar dust. Since the exchange of oxygen in the lung is essential for sustaining life, the dissolution rates of airborne particulate matter are very important parameters in determining in vitro clearance rates of respirable airborne particles in the lung. The primary objective would be to determine the dissolution rates for the respirable fraction of JSC-1A lunar simulant in neutral simulated lung fluid at room temperature and body temperature.

The second objective would be to determine the dissolution rates for the respirable fraction of JSC-1A lunar simulant in pure water at room temperature and body temperature. The dissolution rates obtained using the pure water would allow for comparison of the results between the dissolution rates obtained using simulated lung fluid and comparison to the soluble compounds as defined by the CRC Handbook of Chemistry and Physics.

Methodology Used
JSC-1A Lunar Mare Regolith Simulant would provide the respirable airborne particles used to determine dissolution rates in simulated lung fluid maintained at constant room temperature (25°C), simulated lung fluid heated and maintained at body temperature (37°C), pure water at constant room temperature, and pure water heated and maintained at body temperature. JSC-1A Lunar Mare Regolith Simulant was chosen because it is representative of the major elemental composition of Lunar Soil 14163, which is listed in Table 1. The major elemental composition of JSC-1A Lunar Mare Regolith Simulant, listed in Table 2, comprises the preliminary compounds that would be analyzed to determine the dissolution rates within the specified fluids. The chemical composition of the simulated lung fluid that would be used is listed in Table 3.

JSC-1A lunar mare regolith simulant with a particle size of 1 mm or less was obtained commercially from Planet LLC. To recover the respirable fraction, the JSC-1A lunar simulant would be sieved to recover particles 10 micrometers or less in aerodynamic diameter. A known amount of the respirable fraction of the JSC-1A lunar simulant would be placed on a filter and activated by Neutron Activation Analysis. The filter would be analyzed by gamma spectrometry to determine the initial amount of activated species contained on the filter. The activated filter would then be submerged in simulated lung fluid for a period of 1 hr. Figure 1 illustrates the dissolution system setup that would be used. Following the dissolution period, the filter would be removed from the simulated lung fluid and analyzed by gamma spectrometry to determine the residual activated species remaining on the filter. The simulated lung fluid would then be filtered by suction filtration to recover any undissolved particles, which would also be analyzed by gamma spectrometry. An aliquot of the filtered simulated lung fluid would be subjected to radiochemical separation techniques and analyzed by alpha spectrometry to determine the dissolution fraction. Additional dissolution fractions would be obtained at 8-hr and 24-hr dissolution periods. The entire procedure would be repeated using simulated lung fluid and pure water at room temperature (25°C) and body temperature (37°C). The temperature of the submerging fluid would be held constant using a water bath. Carbon dioxide would be added as a cover gas over the simulated lung fluid to maintain a relatively neutral pH. The dissolution rates of the major elemental composition of JSC-1A lunar simulant would be determined for each submerging fluid.

Results Obtained
No results are available at this time.

Significance and Interpretation of Results
The results that would be obtained from this research would give an indication of how well JSC-1A Lunar Mare Regolith Simulant dissolves within an in vitro environment of neutral simulated lung fluid. The results would also allow comparison of the dissolution rates of the respirable fraction of JSC-1A lunar simulant at room temperature and at body temperature, in simulated lung fluid and pure water, over 1-hr, 8-hr, and 24-hr dissolution periods.

Figures/Tables

Table 1. Major Elemental Composition of Lunar Soil 14163.2

Major Elemental Composition       % by Weight
Silicon Dioxide (SiO2)            47.3
Titanium Dioxide (TiO2)           1.6
Aluminum Oxide (Al2O3)            17.8
Ferric Oxide (Fe2O3)              0.0
Iron Oxide (FeO)                  10.5
Magnesium Oxide (MgO)             9.6
Calcium Oxide (CaO)               11.4
Sodium Oxide (Na2O)               0.7
Potassium Oxide (K2O)             0.6
Manganese Oxide (MnO)             0.1
Chromium III Oxide (Cr2O3)        0.2
Diphosphorus Pentoxide (P2O5)     -

Table 2. Major Elemental Composition of Lunar Soil Simulant JSC-1A.3

Major Elemental Composition       % by Weight
Silicon Dioxide (SiO2)            46-49
Titanium Dioxide (TiO2)           1-2
Aluminum Oxide (Al2O3)            14.5-15.5
Ferric Oxide (Fe2O3)              3-4
Iron Oxide (FeO)                  7-7.5
Magnesium Oxide (MgO)             8.5-9.5
Calcium Oxide (CaO)               10-11
Sodium Oxide (Na2O)               2.5-3
Potassium Oxide (K2O)             0.75-0.85
Manganese Oxide (MnO)             0.15-0.20
Chromium III Oxide (Cr2O3)        0.02-0.06
Diphosphorus Pentoxide (P2O5)     0.6-0.7

Table 3. Composition of simulated lung fluid.9

Chemical Composition                           Concentration (g/L)
Magnesium Chloride (MgCl2·6H2O)                0.203
Sodium Chloride (NaCl)                         6.019
Potassium Chloride (KCl)                       0.298
Sodium Phosphate, dibasic (Na2HPO4·7H2O)       0.268
Sodium Sulfate (Na2SO4)                        0.071
Calcium Chloride (CaCl2·2H2O)                  0.368
Sodium Acetate (NaH3C2O2·3H2O)                 0.952
Sodium Bicarbonate (NaHCO3)                    2.604
Sodium Citrate (Na3H5C6O7·2H2O)                0.097

Figure 1. Schematic drawing representing the dissolution system that will be used.4 (Labeled components: CO2 supply, vent, pH probe connected to a pH meter, water bath level at 37°C, simulated lung fluid level, and filter holder.)

Acknowledgments
Funding for this research, provided by the Ohio Space Grant Consortium, is greatly appreciated. I would also like to thank my advisor Dr. Henry Spitz, Dr. Samuel Glover, and Dr. Jude Iroh for their guidance, use of their equipment, and support.

References
1. Khan-Mayberry, N. The Lunar Environment: Determining the health effects of exposure to moon dust. NASA Johnson Space Center.
2. McKay, D. S.; Carter, J. L.; Boles, W. W.; Allen, C. C.; Allton, J. H. JSC-1: A new lunar soil simulant. Engineering, Construction, and Operations in Space IV, American Society of Civil Engineers, 857-866 (1994).
3. Orbital Technologies Corporation. Material Safety Data Sheet for JSC-1A.
4. Heffernan, T. E.; Lodwick, J. C.; Spitz, H.; Neton, J.; Soldano, M. Solubility of airborne uranium compounds at the Fernald Environmental Management Project. Health Physics, Vol. 80, No. 3, 255-262 (2001).
5. LaMont, S. P.; Maddison, A. P.; Filby, R. H.; Glover, S. E. Determination of the in vitro dissolution rates of 238U, 230Th, and 231Pa in contaminated soils from the St. Louis FUSRAP sites. Journal of Radioanalytical & Nuclear Chemistry, Vol. 248, No. 3, 509-515 (2001).
6. Ansoborlo, E.; Hengé-Napoli, M. H.; Chazel, V.; Gibert, R.; Guilmette, R. A. Review and critical analysis of available in vitro dissolution tests. Health Physics, Vol. 77, No. 6, 638-645 (1999).
7. Willman, B. M.; Boles, W. W.; Members, ASCE; McKay, D. S.; Allen, C. C. Properties of lunar soil simulant JSC-1. Journal of Aerospace Engineering, Vol. 8, 77-87 (1995).
8. Ashley, K. (NIOSH); Fairfax, R. (OSHA). NIOSH Manual of Analytical Methods: Sampling & Analysis of Soluble Metal Compounds. 167-178.
9. Moss, O. R. Simulants of Lung Interstitial Fluid. Health Physics, Vol. 36, 447-448 (1979).

Weather Making a Cloud

Student Researcher: Marcy E. Namestnik

Advisor: Dr. Jane A. Zaharias

Cleveland State University Department of Education, Middle Childhood

Abstract
The lesson that I will be teaching utilizes NASA lessons as well as NASA's educational website to teach weather to my seventh grade students. The Educator's Guide provided by NASA is aligned with the seventh grade state standards, provides a wealth of information, and has activities incorporated with each new concept that is introduced, such as the water cycle and cloud formations. The students will be encouraged to use NASA's educational website to complete the activities as well as gather more information about weather in the classroom throughout the unit. By the end of the lesson the students will be able to explain cloud formation. It is important for students to learn this because weather affects their daily lives.

Lesson
The basis for this activity came from "Investigating the Climate System: Clouds," contained in NASA's education guide. The lesson started with an anticipatory set of brainstorming questions to get the students thinking about the importance of clouds, how they form, where they form, and why they are in the atmosphere. The students were asked to respond to five brainstorming questions:

1. What are clouds?
2. Where do they come from?
3. Why don't they look the same?
4. Why are there sometimes clouds in the sky that don't rain or snow?
5. Why is it even important to study clouds?

By gathering the responses and creating a graphic organizer, the students had some basic knowledge of clouds to begin the activity. The first step was to fill the bottle one-third full with warm water. After each bottle was filled, the students were instructed to place the cap back onto the bottle and observe what was happening inside. Students then squeezed and released the bottle, observing what happened: the squeeze represented the warming that occurs in the atmosphere, and the release represented the cooling. Matches were carefully used by the teacher, following proper safety guidelines, to fill the bottle with smoke. The trapped smoke inside the bottle enhanced the process of water condensation. Once again, the students were asked to slowly squeeze the bottle hard and release it. The students started noticing that something was happening: a cloud appeared when they released the bottle and disappeared when they squeezed it, reflecting the drop in air pressure inside the bottle. After the activity was finished, the students came together as a whole group and discussed what they observed.

Objectives
- 7th grade science students will be able to construct a cloud in a bottle.
- 7th grade science students will be able to correctly explain cloud formation using their observations during the activity.
- 7th grade science students will be able to discuss the water cycle's role in creating clouds.

Alignment with the Ohio Academic Content Standards: Seventh Grade

Earth and Space Science Standard D
- Describe the connection between the water cycle and weather-related phenomena.

Earth and Space Science Standard D
- Make simple weather predictions based on the changing cloud types associated with frontal systems.

Student Engagement
This lesson uses an anticipatory set of questions to draw the interest of students, a hands-on activity to promote discovery learning, and a discussion to cover any other questions that were raised throughout the activity. The students were highly engaged in every step of the lesson. The most exciting part for the students was when the cloud formed inside the bottle. This was the part of the lesson where I was able to see the learning occur, because I heard students saying, "Wow! A cloud is forming in the bottle, that's so cool, is it because of the smoke?"

Resources
The teacher resources that were used were the Educator's Guide from NASA and the NASA website, used to gather supplemental material. The students were given articles and information from the website about weather, cloud formation, climate, and several other topics that were included in the weather unit. Although the resources in the Cleveland public schools are limited, and there are only four computers per room, the students rotated on the computers in shifts to experience the wonderful resources the NASA website has to offer. The only materials needed for this activity were 2-liter bottles, matches, and warm water.

Results
My goal for this lesson was to have the students use a hands-on activity to explore the formation of clouds. The students were engaged during the activity and displayed an eagerness to learn why a cloud was forming inside the bottle. The discussion after the activity went very well: the students asked questions, and we also went over the brainstorming questions from the beginning of the lesson to clarify any misconceptions. Most of all, students made the personal connection as to why it is important to study clouds; the weather affects them on a daily basis.

Assessment
As a learning activity in itself, a written assessment is not really needed. I used qualitative assessment as I walked throughout the classroom, observing student procedures, comments, and questions. I was also able to see how well they worked together in groups during the brainstorming and discussion parts of the lesson. Using collaborative grouping is a subjective form of assessment. An alternative option for assessment would be to have students draw a picture of how the cloud formed in the bottle or provide a number of extension activities suggested in the Educator's Guide.

Conclusion
The project allowed the students to have a concrete example of how clouds form, and through discussion the students were able to realize why and how they form. I was very pleased with the overall success of the activity and the lesson. The students enjoyed having the hands-on activity as well as having some computer time on NASA's website.

Integrated Bipolar Plate-Gas Diffusion Electrodes for PEM Fuel Cells

Student Researcher: David N. Neff

Advisor: Dr. Bor Jang, Dean

Wright State University Department of Mechanical and Materials Engineering

Abstract
Bipolar plates make up a large portion of the weight and manufacturing costs in fuel cells. Bipolar plates are metal plates that separate each membrane-electrode assembly (MEA) in a fuel cell stack and contain flow channels through which the fuel flows. Bipolar plates provide structural strength and conduct electricity in the fuel cell. Electricity also flows through the gas diffusion layer (GDL), which lies next to the bipolar plate. Because the GDL is separate from the bipolar plate, there is a contact resistance as the electricity flows from one to the other. The scope of this project is to research and develop a cheaper way of manufacturing a more efficient and lighter weight bipolar plate-GDL combination. This will be done by using materials such as exfoliated graphite, a binding resin, and a sacrificial polymer additive. Processing temperatures and pressures as well as component ratios are just some of the variables to be investigated in order to easily and quickly make bipolar plate-GDL combinations.

Project Objectives
Fuel cells are electrochemical devices that hold great promise for using cleaner and more renewable energy. There are various types of fuel cells, and one of the most common, especially for automotive applications, is the proton exchange membrane fuel cell (PEMFC). In a fuel cell, most of the reactions that produce the electricity occur in the membrane electrode assembly (MEA). The electricity is converted directly from chemical energy, as opposed to traditional combustion engines in which the chemical energy is converted to heat first, then to mechanical energy. As a result, fuel cells have a potentially higher theoretical efficiency, which is what makes them so attractive.

On each side of the MEA are plates that seal in the MEA and prevent unwanted species from entering or exiting. When multiple MEAs are combined to create a fuel cell stack, they are separated by bipolar plates. [1] Bipolar plates serve four main purposes in a fuel cell. The first is to contain channels that supply fuel to the cell and remove exhaust. The second task it performs is to collect the electrical current generated by the fuel cell. The third task is to cool the area where reactions occur. The final task is to provide structural strength to the fuel cell stack. Because of the material requirements placed on bipolar plates, they contribute to a significant portion of the cost, size, and weight of fuel cells. The plates can actually compose as much as 80% of the weight of a fuel cell. [2]

In order to provide a sufficiently rigid structure for the fuel cell stack, the DOE has set a target minimum flexural strength of 25 MPa. [3] The material used for a bipolar plate must be able to support a potentially complex system of grooves for supplying the fuel. Preferably, the grooves can be made in the plate material with minimal effort during the manufacturing process; this, however, is not always possible. To collect and transmit current efficiently, the material used in the bipolar plates needs a high electrical conductivity. [1] The DOE target for electrical conductivity is a minimum of 100 S/cm, or an areal conductivity of 100 S/cm2. [3] Good thermal conductivity is also desirable, as fuel cells produce heat that must be removed. Bipolar plates provide the easiest method of removing this heat by conducting it away from the MEA. [1]

Electrochemical stability is important for a bipolar plate material. Depending on the type of fuel cell, a variety of chemicals may be present such as air, water, hydrogen, carbon monoxide, carbon dioxide, strong acids, and various peroxides. Many of these chemicals create a corrosive environment that can degrade materials used in a bipolar plate. The more electrochemically stable a material is, the less it is likely to corrode. Thermal stability is also important. A material used for a bipolar plate needs to be able to maintain its strength through a range of temperatures from room temperature to 200°C. Through these temperatures, the material should preferably have a low coefficient of thermal expansion. If a bipolar plate would undergo significant changes in size as the temperature changes, the components attached to it could be severely damaged. [1]

To prevent fuel leakage, a material should have a low permeability to the chemicals it delivers to the fuel cell. Of major concern is hydrogen, due to its small molecular size. The DOE has set a limit of 2 x 10-6 cm3/(s·cm2) for hydrogen permeability at 80°C and 0.3 µPa. Also to prevent leakage, the surface roughness of a bipolar plate near contacts and seals should be low, and the overall thickness of the plate should be uniform within 0.02 mm. [1] The cost of a present working fuel cell is around $200/kW [4], with the bipolar plates accounting for as much as 45% of that total [2]. In order to reduce the cost, the DOE has set a target of $5/kW by 2010 for bipolar plate production and a target weight of less than 0.4 kg/kW. [3]

A novel material that has been developed in the past few years is thermally expanded graphite (EG), also referred to as foamed graphite or exfoliated graphite. This material is produced by heat treating intercalated graphite compounds. In the typical procedure, acceptor-type graphite flakes generally between 0.3 and 5 mm in size are bathed in a sulfuric acid and oxidant system to form the intercalated graphite compound. In the second step, hydrolysis is performed on the compound. Finally, by exposing the intercalated graphite flakes to a sufficiently high temperature, the layers of graphite are pushed apart forming a foamed compound. There are several advantages to such a foamed compound. It has a low bulk density and a large specific surface area. It can be molded without the use of binders such as polymers. It is resistant to aggressive chemicals, and because it is graphite, it has a high electrical conductivity. [1]

The gas diffusion layer (GDL) is an electrode support layer that reinforces and protects the catalyst layer in a PEMFC. The GDL is porous to allow gas access to the catalyst. As reactions occur around the catalyst particles, the GDL conducts electricity away from the reaction sites. Choosing the structure and thickness of the GDL can affect its strength, electrical conductivity, and ability to let gas diffuse through it. The GDL is a separate layer located between the bipolar plate and the electrolyte. Since it is separate from the bipolar plate, there can be a contact resistance as electricity is conducted from the GDL to the bipolar plate. [5]

The concept for further tests beyond the results presented in this paper is to use a sacrificial additive such as polyvinyl alcohol or polypropylene carbonate to produce porous structures and channels within the EG phenolic resin mixture samples. The use of an additive may allow for a combination bipolar plate and GDL to be produced.

Methodology Used
Expandable graphite flakes, obtained from Asbury Graphite Mills, Inc., were expanded at approximately 900°C in a Lindberg tube furnace. During this process, the flakes expand vertically into long foam "worms" that are as much as 300 times the initial volume of the flakes. Figure 1 shows the flakes before and after expansion. Bakelite phenolic resin granules from IASCO ranged in size from a fine powder to over a millimeter in diameter. For use in some samples, a coffee grinder was used to grind the mixture of granules entirely to a fine powder.

The phenolic resin was mixed with the EG worms in varying ratios using a magnetic stir bar for around ten minutes. The EG and phenolic resin mixtures were pressed and cured in a Buehler Simplimet 3 Mounting Press. So far, the only conditions used have been 4200 psi at 150°C for 2 minutes. This process was used to produce 5g disks.

Bulk, or in-plane, conductivity is the conductivity of electricity across the surface of a sample. Bulk conductivity was measured with a Lucas Labs 302 Resistivity Stand, as shown in Figure 2. The stand contains a four-point probe connected to a Keithley 2400 SourceMeter, which provides the source current, and a Keithley 2182A Nanovoltmeter, which measures the output voltage. The four-point probe contains four in-line probe tips, as shown in Figure 3, that are touched to the surface of the sample. At least five measurements were made on each sample by touching the four-point probe to both the top and bottom sides of the disks in varying orientations. Bulk conductivity was calculated from

$\sigma = \dfrac{1}{4.5324 \cdot cf \cdot \dfrac{V}{I} \cdot L}$   (Eq. 1)

where V is the measured voltage, I is the provided current, L is the thickness of the sample, and cf is a correction factor that may be used. The correction factor is based on the dimensions of the sample and, for the present measurements, was assumed to be one.

Through-plane conductivity is the conductivity of electricity through a sample from one side to the other. Through-plane conductivity was measured by placing the sample between two carbon fiber sheets and then between two copper plates. The current source and voltage probes were attached to the copper plates. This assembly was then placed in a Carver pneumatic press, and the sample was put under a pressure of 1000 psi for the measurements. The carbon fiber paper is used to improve contact between the sample and the copper plates. Five measurements were made on each sample by releasing the pressure and pressing the assembly again. The current source was provided by the Keithley 2400 SourceMeter, and the voltage was measured by the Keithley 2182A Nanovoltmeter. Through-plane conductivity was calculated from

$\sigma = \dfrac{L \cdot I}{A \cdot V}$   (Eq. 2)

where V is the measured voltage, I is the provided current, L is the thickness of the sample, and A is the contact area between the sample and the copper plate.
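A minimal sketch of both conversions from raw voltage/current readings (function names are ours; 4.5324 is the four-point-probe geometric factor from Eq. 1):

def bulk_conductivity(v, i, thickness_cm, cf=1.0):
    # In-plane conductivity (S/cm) from a four-point-probe reading (Eq. 1).
    return 1.0 / (4.5324 * cf * (v / i) * thickness_cm)

def through_plane_conductivity(v, i, thickness_cm, area_cm2):
    # Through-plane conductivity (S/cm) from the pressed-sandwich setup (Eq. 2).
    return (thickness_cm * i) / (area_cm2 * v)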

The SourceMeter and Nanovoltmeter were coordinated and run in a manner so as to compensate for the system resistance of the instruments. Several hundred data points were averaged to produce each conductivity result because each measurement averaged at least 100 data points.

Results Obtained
Mixtures of EG and phenolic resin ranging from 50 weight percent (wt%) to 100 wt% EG were prepared and tested. All disk mixtures held together after compression molding. The averaged through-plane conductivity results are presented in Table 1 and plotted in Figure 4. The bulk conductivity results for the top and bottom sides of each sample are presented in Table 2 and plotted in Figure 5. The distinctions of top and bottom correspond to the orientation of the sample when it was compressed in the mounting press.

Significance and Interpretation of Results
The bulk conductivity for each sample varied from the top to the bottom surface. The reason for this variation is inadequate mixing of the EG and phenolic resin. Size and density differences between the EG and phenolic resin cause much of the phenolic resin to settle to the bottom during mixing. When the mixture is poured into the mounting press, there is a higher concentration of phenolic resin at the top of the sample. As expected, the bulk conductivity on the top and bottom of the sample without phenolic resin is nearly the same.

The through-plane conductivity results follow a more expected trend of increasing from the low EG concentrations to the pure EG. Since these measurements look at the entire sample at once, the placement of the phenolic resin in the sample does not affect the through-plane conductivity in the same way it does the bulk conductivity.

There are several tasks that remain to be investigated. One of the immediate problems is how to better mix the samples, which includes the issues of mixing time and the best size of the phenolic resin granules. Also, the flexural strength of the mixture samples needs to be tested. Preferably, thinner samples can be made while still maintaining sufficient strength to provide support within a fuel cell. Finally, sacrificial materials such as polyvinyl alcohol or polypropylene carbonate need to be tested to determine their feasibility in creating porous structures and channels within the EG and phenolic resin mixtures.

Figures and Charts

Figure 1. Expandable graphite flakes before expanding (left) and graphite “worms” after expanding (right).

Figure 2. Lucas Labs 302 Resistivity Stand. [6]

Figure 3. Four point probe used for bulk conductivity measurements. [6]

Table 1. Averaged through-plane conductivity results.

wt% EG     σ (S/cm)
50         20.7
60         20.7
70         25.7
80         32.4
90         37.6
100        44.0

Figure 4. Plot of through-plane conductivity results as a function of EG concentration. (Axes: Conductivity (S/cm) versus EG Composition (wt%).)

Table 2. Averaged bulk conductivity results for the top and bottom surfaces of the samples.

wt% EG     σ_top (S/cm)     σ_bottom (S/cm)
50         16.4             52.6
60         6.5              49.5
70         43.7             66.9
80         10.2             54.6
90         38.5             51.8
100        42.2             39.0

Figure 5. Plot of bulk conductivity results for sample tops and bottoms as a function of EG concentration. (Axes: Conductivity (S/cm) versus EG Composition (wt%), with separate series for top and bottom surfaces.)

Acknowledgments and References
A special thanks is given to James Guo for his training and assistance in making the conductivity measurements.

1. Dobrovol'skii, Yu. A.; Ukshe, A. E.; Levchenko, A. V.; Arkhangel'skii, I. V.; Ionov, S. G.; Avdeev, V. V.; Aldoshin, S. M. "Materials for Bipolar Plates for Proton-conducting Membrane Fuel Cells." Russian Journal of General Chemistry, Vol. 77, No. 4, 2007, pp. 752-765.
2. Hermann, A.; Chaudhuri, T.; Spagnol, P. "Bipolar Plates for PEM Fuel Cells: A Review." International Journal of Hydrogen Energy, 30, June 2005, pp. 1297-1302.
3. Hydrogen, Fuel Cells & Infrastructure Technologies Program: Multi-Year Research, Development and Demonstration Plan. US Department of Energy, Oct. 2007, p. 3.4-26.
4. Cunningham, B. D.; Huang, J.; Baird, D. G. "Review of Materials and Processing Methods used in the Production of Bipolar Plates for Fuel Cells." International Materials Reviews, Vol. 52, No. 1, 2007, pp. 1-13.
5. O'Hayre, R.; Cha, S. W.; Colella, W.; Prinz, F. B. Fuel Cell Fundamentals. New York: Wiley & Sons, 2006.
6. "S 302 Product Information Page." Lucas Labs.

Measuring Models and Materials of Planets

Student Researcher: Sarah A. Niedermayer

Advisor: Sarah Gilchrist

Cedarville University Department of Mathematics Education

Abstract
This project would be appropriate for students in 8th through 10th grade. I have incorporated NASA curriculum with data concerning the radii and masses of planets as well as the densities of other materials. At the completion of the project, students will have experience with scientific notation, conversions in the metric system, logic, and problem solving. This project will be interesting to those who enjoy learning about space, those interested in building models, and students who enjoy real-life applications. Students will also have opportunities to present their models to the class. This will help students communicate mathematically and also understand how scientists present ideas to their colleagues.

Project Objectives
This project meets many objectives. It is designed to give early high school students real-life situations that involve the metric system, making conversions and computations, and creating models. First, students will have practice working with the metric system in terms of real-life situations. They will also have practice converting numbers to different units within and outside of the metric system. Additionally, students will have to apply percentages in real-life situations. They will also see how cost factors into many scientific situations. Students will be required to create an accurate, yet cost-efficient, model. Students will also sketch their models in order to visualize what the final project will look like. Through working in groups, students will develop interpersonal skills needed for future tasks. As a final objective, students will present their models in order to rehearse speaking mathematically and sharing logical thinking.

Methodology
This lesson may take more than one class period to complete in order for students to work together. As a class, the teacher will discuss the planets and their massive size. The usefulness of scientific notation should also be emphasized. Then, students will discuss why scientists cannot directly determine what materials are inside each layer of the planets. Finally, the teacher and students will conclude that building a model is a necessary and efficient way to study planets.

Individually, students will complete a few computations. Since the exact measures of the inner radii are not known, I will instruct students to suppose that each inner radius is 45% of the entire planet's radius. This will give students a review of using percentages. Students will also need to write the inner and outer core radii in scientific notation; they may need a review of this topic. Yet it is important for students to note that many real-life situations involve large numbers. Next, students will be given this scenario:

Suppose our class is a team of astronomers. We have been assigned to create models of the 8 planets in our solar system, the dwarf planet Pluto, and the Moon. Since we cannot dig to the center of each planet, we must determine which materials we should use to create the best model to represent the mass and radius of our planet. Once the materials are determined, we will be able to study the inner cores of our planets.

We will split our classroom into 10 groups, with 2-3 students in each group. Then, each group will be assigned one planet. Use the given tables to find your planet’s radius and mass. Then, answer the following questions.

1. What is the volume of the inner core (make sure it is in cubic centimeters)?
2. What is the volume of the outer shell?
3. Imagine that the inner core is made of pure iron and the outer crust is made of basalt rock. What is the mass of your model planet? (A sample computation is sketched after this list.)
4. Is this a good model representing the mass of the planet? Why or why not?
5. Now, try to find the best combination of materials that will create a more accurate model mass. Please show all work.
6. Suppose that we are on a strict budget for the creation of our models. How much will your model cost?
7. Is there a way to make your model more cost efficient? Explain in complete sentences.
8. Draw a picture of your planet labeling the layers of the planet and the materials you will use to build it. Label the materials used and the cost of each portion.
9. How could this model be improved? Give at least 2 ways in complete sentences.
10. How would our models change if the inner core were 75% of our model? List at least 2 different ways.
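For the teacher's answer key, a sketch of the computation behind questions 1 through 3 (the 45% inner-radius assumption and the densities come from this lesson; the function name is ours):

import math

def model_mass_grams(radius_cm, rho_core=10.0, rho_shell=5.0, core_frac=0.45):
    # Mass of a two-layer sphere: pure-iron core (45% of the radius)
    # surrounded by a basalt-rock shell. Densities in g/cc, radius in cm.
    r_core = core_frac * radius_cm
    v_core = (4.0 / 3.0) * math.pi * r_core ** 3       # question 1
    v_total = (4.0 / 3.0) * math.pi * radius_cm ** 3
    v_shell = v_total - v_core                         # question 2
    return rho_core * v_core + rho_shell * v_shell     # question 3

For Earth (radius 6.378 x 10^8 cm), this gives about 5.9 x 10^27 g, i.e., 5.9 x 10^24 kg, close to the tabulated mass of 5.97 x 10^24 kg.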

Conclusion
After students have finished the questions regarding their planet, the class will come back together for a teacher-led discussion. First, students will create two lists, ordering the planets by mass and by radius length.

Next, each group will take turns presenting their model to the class. Students should share the size of the radius, the cost of their model, and the materials used. It is especially important that students are able to explain why they chose the materials they did to build their models. While each group is presenting, the teacher should fill in a chart for the class comparing the cost of the models to their masses and radii. Once the chart is completed, students should attempt to draw conclusions. If help is needed, consider these questions: Does the length of the radius affect the mass of the planet or model? What effect did the mass have on the model? What other materials could we use to build these models?

Figures/Charts

Planet     Mass (kg)       Radius (m)    Radius - Sci. Notation    Inner Core Radius (m)    Inner Radius - Sci. Notation
Mercury    3.30 x 10^23    2,440,000
Venus      4.87 x 10^24    6,051,000
Earth      5.97 x 10^24    6,378,000
Mars       6.42 x 10^23    3,397,000
Jupiter    1.90 x 10^27    71,492,000
Saturn     5.69 x 10^26    60,268,000
Uranus     8.66 x 10^25    22,559,000
Neptune    1.03 x 10^26    24,764,000
Pluto      1.31 x 10^22    1,160,000
Moon       7.35 x 10^22    1,737,000

Material         Density       Cost
Pure Iron        10.0 gm/cc    500 gm = $58.90
Basalt Rock      5.0 gm/cc     60 gm = $363.75
Water            5.0 gm/cc     free
Silicate Rock    3.0 gm/cc     3 kg = $90.06
Ice              1.0 gm/cc     free

gm/cc = grams per cubic centimeter = grams/cm^3

References
http://library.thinkquest.org/TQ0312074/evtable.htm
http://image.gsfc.nasa.gov/poetry/weekly/Week14.pdf
http://geophysics.ou.edu/solid_earth/notes/planets.html
www.fishersci.com

Monotonic Shear Testing of Hydroxyphenyl Hydrogels

Student Researcher: Garrett J. Noble

Advisor: Dr. Timothy Norman

Cedarville University Mechanical Engineering

Abstract Unlike many other tissues in the body, damaged articular cartilage does not repair itself [1]. This has encouraged many researchers to engineer tissue that mimics the properties of articular cartilage as well as integrates with the surrounding tissue in the body. Hydrogels are becoming a popular option for this as these materials can create a scaffold with many of the same properties of cartilage itself.

Researchers at the Cleveland Clinic's Lerner Research Institute, Anthony Calabro, Richard Gross, and Aniq Darr, have discovered a novel hydroxyphenyl hydrogel made up of tyramine-substituted hyaluronan molecules that can be enzymatically cross-linked. Other hyaluronan-based gels have been developed, but the chemicals used to cross-link them are toxic to cells, so they must be seeded with chondrocytes only after the matrix has been constructed, producing poor results [2]. The enzymatic cross-linking of this novel biomaterial is completely biocompatible, eliminating this obstacle. Another advantage of this hydrogel is that it can be injected as a liquid and then cross-linked in situ, allowing the matrix to completely integrate with the surrounding tissue. This hydrogel can also be synthesized in different concentrations, some of which show mechanical properties similar to those of articular cartilage.

When engineering a new tissue, it is desirable to create a material that can support and sustain everyday loading conditions in the same way as the original tissue. Loading in the knee joint is complex, consisting of compression and shear [3]. A biomaterial intended for the knee must therefore have sufficient compression and shear properties. The hydrogel developed by researchers at the Cleveland Clinic is being evaluated for use in the knee; however, little information is available concerning the shear properties of this material. Therefore, the goal of this research is to evaluate the monotonic and dynamic shear properties of different solutions of this novel hydrogel under static and cyclic loading. Progress to date has included a literature review, protocol development, the writing of a proposal to Cleveland Clinic researchers, and test apparatus design.

Objective The objective of this study is to evaluate the shear strength of the hydroxyphenyl hydrogel invented by Dr. Anthony Calabro, et al. Specifically, using different compositions of hydrogel, we plan to conduct a monotonic shear test to measure shear stress and strain for determining shear strength, shear modulus, and toughness.

Methods
Sample Preparation
1. Gel will be pipetted into a square Plexiglas mould with sides of 8.5 mm and a thickness of 2 mm, with glass plates on the top and bottom to allow for easy removal of the cross-linked gel from the mould. (Note: the size of the test specimen will be modified if necessary depending on the validity of the test results. See the assessment of test validity below.)
2. The gel will then be cross-linked with HRP and put in the refrigerator overnight to set.
3. The gel will be removed from the mould and adhered with cyanoacrylate glue to two parallel aluminum plates, as shown in Figure 1.
4. Samples will be clamped to assure adhesion to the plates by screwing down the clamp until the adhesive flows.
5. These fixtures will then be stored in a refrigerator.

Shear Testing [4, 5, 6]
1. The samples will be put in a saline bath and then loaded onto an MTS Systems tensile testing machine. During testing, the sample will be subjected to shear forces and will deform.
2. These samples will then be subjected to shear stress at a constant displacement rate of 0.025 mm/s until failure.
3. During loading, the load and displacement will be measured. The load will be measured by a load cell mounted on the crosshead and the displacement by an LVDT mounted between the upper fixed end and a tab attached to the side of the lower portion of the specimen.
4. The shear strength, shear modulus, and toughness will then be calculated from the resulting stress-strain curves.

Test Plan Due to the versatile nature of this hydrogel, we plan on testing 8 samples each of three compositions: tyramine-gelatin, tyramine-HA, and a combination of tyramine-gelatin and tyramine-HA.

Analysis The shear stress on an object can be defined as the force that acts tangential to the surface of a material. This can be defined mathematically as:

τ = V / A (1)

where V is the force applied and A is the area over which it is applied. We will use values obtained from our experiments for V and A to solve for this shear stress.

This shear stress produces a shear strain on the material, which causes the material to change shape but does not change the lengths of its sides. The angle (γ) shown in Figure 2 is a measure of the change of shape, or distortion, of the element; this angle is defined as the shear strain. It will be determined from the displacement of the sample during the experiment, using the sample thickness (t) and the displacement (δ) as illustrated in Figure 2 (for small angles, γ ≈ δ/t).

The shear stress can then be plotted as a function of shear strain. The slope of the linear portion of the curve is called the shear modulus of elasticity. This relationship is shown mathematically in Equation 2.

G = τ / γ (2)

The shear strength will be taken as the maximum shear stress attained prior to failure.

Finally, the modulus of toughness, µt, which is defined as a material’s ability to absorb energy, will be calculated. This material property is equal to the area underneath the stress-strain curve when the material is stressed to failure. We will determine this value using a numerical method of integration on the stress-strain curves produced by our experiments.
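By way of illustration only (this is not the study's analysis code), the sketch below shows how the three quantities could be extracted from a digitized stress-strain record; the arrays and the tanh-shaped curve are placeholders standing in for measured data.

    import numpy as np

    # Placeholder stress-strain record; real data would come from the MTS tests.
    gamma = np.linspace(0, 0.5, 200)          # shear strain (dimensionless)
    tau = 80 * np.tanh(6 * gamma)             # shear stress (kPa), illustrative shape

    shear_strength = tau.max()                # maximum shear stress before failure
    n = len(gamma) // 10                      # initial, approximately linear portion
    G = np.polyfit(gamma[:n], tau[:n], 1)[0]  # shear modulus: slope of linear region
    u_t = np.trapz(tau, gamma)                # toughness: area under the curve

    print(f"strength = {shear_strength:.1f} kPa, G = {G:.0f} kPa, toughness = {u_t:.1f} kPa")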

Statistical Analysis: Results for each formulation will be statistically analyzed using JMP (Cary, NC) to determine mean and standard deviation of shear modulus, shear strength, and shear toughness.

Assessment of Test Validity: Specimens will be examined post-failure to determine if failure occurred in the gel or the gel-glue interface. Specimens will be sliced longitudinally and mounted on a slide for macroscopic examination. The validity of test results will be assessed based on results of this evaluation. Failure outside the gel or migration of glue into the test region will not constitute a successful test and may result in modification of test methodology.

Status: Awaiting approval to begin creating and testing specimens.

Figures

Acknowledgments Dr. Timothy L. Norman, Dr. Anthony Calabro, and Dr. Aniq Darr.

References
1. Sharma, B., Williams, C. G., Kim, T. K., Sun, D., Malik, A., Kahn, M., Leong, M., and Elisseeff, J. H. Designing Zonal Organization into Tissue-Engineered Cartilage. Tissue Eng. 2007 Feb; 13(2):405-14.
2. Calabro, A., Gross, R. A., Darr, A. B. Hydroxyphenyl cross-linked macromolecular network and applications thereof. US Patent 6,982,298, 2006.
3. Neptune, R. R., Kautz, S. A., 2000. Knee joint loading in forward versus backward pedaling: implications for rehabilitation strategies. Clinical Biomechanics 15 (7), 528-535.
4. Lee, C. S. D., Gleghorn, J. P., Choi, N. W., Cabodi, M., Stroock, A. D., Bonassar, L. J. Integration of layered chondrocyte-seeded alginate hydrogel scaffolds. Biomaterials. 2007 Jul;28(19):2987-93.
5. Hu, B., Messersmith, P. Enzymatically cross-linked hydrogels and their adhesive strength to biosurfaces. Orthodont Craniofac Res. 2005;8:145-9.
6. Matsumura, K., Hyon, S. H., Nakajima, N., Peng, C., Iwata, H., Tsutsumi, S. (2002) Adhesion between poly(ethylene-co-vinyl alcohol) (EVA) and titanium. J. Biomed. Mater. Res. 60: 309-315.

Preventing Bone Resorption in Space Flight

Student Researcher: Katie M. O’Brien

Advisor: Tekla Madaras

Owens Community College Dietetic Technician Program

Abstract Astronauts experience a number of medical problems in space flight, which include decreased body mass index, muscle atrophy, and increased bone resorption. These three health issues are closely related. Decreased body mass correlates with decreased bone mass. Muscles move bone so muscle atrophy also contributes to decreased bone density because the muscles are pulling with less force. Factors that cause bone resorption in space flight are zero gravity, mineral loss, lack of UV light, lack of resistance exercise, changes in fluid balance, and diet. Other factors affecting bone resorption are genetics, sex, age, and fitness. It is important for astronauts to consume adequate amounts of caloric energy to avoid negative energy balance, which contributes to decreased body mass and bone mass. Their diet must also provide vitamins and minerals affecting bone health, which include calcium, phosphorus, vitamin D, vitamin K, and sodium. Diet alone does not prevent bone resorption; mechanical strain is necessary to maintain bone mass and density. These health problems need to be resolved to allow longer space flight trips in the future. Research is currently being done to investigate dietary and physical countermeasures to bone resorption in zero gravity. A countermeasure for bone resorption would also help bedridden patients.

Project Objective My objective is to research the causes of bone resorption and related issues of decreased body and muscle mass in space flight. The focus of my research is investigation of space induced bone loss. My goal is to discover nutrition interventions to prevent bone loss in zero gravity.

Methodology Used
I used information from published studies about the effect of space flight on bone mass, the causes of decreased bone mass in space, and potential nutritional interventions. The studies I used in my research focused on experiments testing specific nutrition interventions for bone loss. The potential nutritional interventions I investigated were calcium, vitamin D, and vitamin K supplementation, and adequate energy intake.

Results Obtained
During space flight astronauts experience medical problems including decreased body, muscle, and bone mass. During a space mission astronauts experience a weight loss of 5 - 10% of their preflight weight, and their bone mineral density is reduced by 3 - 4%. Bone resorption increases about 50% in microgravity. This loss of lean body mass and bone density is caused by negative energy balance, depletion of mineral stores, decreased mineral absorption, decreased intake of vitamins and minerals, and decreased exercise.

Energy needs in space are the same as on Earth. Astronauts’ energy intake was found to be voluntarily 20% lower than calculated energy needs, causing negative energy balance and subsequent weight loss. Lack of resistance exercise also caused a depletion of body mass. Muscles move bone so muscle atrophy also contributes to decreased bone density because the muscles are pulling with less force. Weight bearing exercise is necessary to maintain bone mass and density. In addition, adequate calcium and vitamin D intake is necessary for development of maximum bone density. Calcium intake was monitored during the Spacelab D2, Euromir 94, and Euromir 95 space flights. Intake was found to be 47% of the recommended intake for the 65 years and older age group. Negative calcium balance was also observed in the Skylab and Mir missions. Intestinal calcium absorption decreased after three weeks exposure to microgravity. Calcium is lost at a rate of 250 mg/day during space flight. Decreased concentrations of 1, 25-dihydroxyvitamin D and parathyroid hormone during flight were also observed in crew members on Skylab and the Russian Space Station Mir.

Based on knowledge of bone ossification, various nutrition interventions were attempted to reduce microgravity induced bone resorption. During the 21-day Mir 97 mission, astronauts consumed 100 mg/d calcium with vitamin D supplementation. These supplements did not prevent bone loss. A 6-day bed rest study of 8 adult males receiving calcium and vitamin D supplementation yielded similar results.

The effect of vitamin K supplements on the reduction of bone loss was studied during space flight and in an experiment with simulated microgravity using rats. During the middle of the 179-day Euromir 95 mission, one astronaut was provided with 10 mg of vitamin K supplementation. Bone formation markers were decreased in this astronaut before supplementation but were increased during supplementation. This study shows promise for the use of vitamin K supplements to increase bone formation but further investigation is necessary. In the study with long-term tail suspended rats, the treatment group receiving vitamin K showed an increase in bone mineral density and bone metabolic markers stayed near a normal level. The control group showed a decrease in bone mineral density and bone metabolic marker levels.

Significance and Interpretation of Results From a nutritional perspective, adequate energy, vitamin and mineral intake is necessary for maintaining health on Earth and in space. To date, there is limited evidence to support the use of supplementation of calcium or vitamin D to prevent space induced osteoporosis. However, the studies of vitamin K supplementation have shown promise. While nutrition intervention does not prevent bone loss in microgravity, adequate energy, vitamin, and mineral intake during space flight is necessary to prevent aggravation of bone and body mass loss. Solving the problem of space osteoporosis will allow longer, more frequent space trips in the future. It will also decrease health risks to astronauts while in space. A continuation of experiments will increase knowledge in this field and may present a solution in the near future.

References
1. Cena, H., Sculati, M., & Roggi, C. (2003). Nutritional concerns and possible countermeasures to nutritional issues related to space flight. European Journal of Nutrition, 42(2), 99-110.
2. Heer, M. (2002). Nutritional interventions related to bone turnover in European space missions and simulation models. Nutrition, 18(10), 853-856.
3. Iwasaki, Y., Yamato, H., Murayama, H., Sato, M., Takahashi, T., Ezawa, I. et al. (2002). Maintenance of trabecular structure and bone volume by vitamin K2 in mature rats with long-term tail suspension. Journal of Bone and Mineral Metabolism, 20(4), 216-222.
4. Lane, H., Kloeris, V., Perchonok, M., Zwart, S., & Smith, S. M. (2007). Food and nutrition for the moon base: what we have learned in 45 years of space flight. Nutrition Today, 42(3), 102-110.
5. Smith, S. M., & Heer, M. (2002). Calcium and bone metabolism during space flight. Nutrition, 18(10), 849-852.

The Development of a Soil for Lunar Surface Mobility Testing in Ambient Conditions

Student Researcher: Heather A. Oravec, M.S., E.I.T.

Advisor: X. Zeng, Ph.D., P.E.

Case Western Reserve University Department of Civil Engineering

Abstract The mechanical properties of the lunar soil are critical parameters in predicting vehicle performance on the Moon. In preparation for Man’s return to the Moon, surface vehicles must be tested on terrain that represents the mechanical strength of the lunar ground. Terrain that simulates the lunar trafficability conditions must have similar compaction and shear response underneath the wheel. This paper discusses the development of a soil (called GRC-1) and soil-preparation method to emulate the measured compaction and shear characteristics of the Moon’s surface. A semi-empirical design approach was used incorporating particle sieve and hydrometer analyses as well as triaxial strength testing. Soil preparations were developed to match stress-strain curves resulting from in-situ lunar experiments. Additionally, results of laboratory strength tests with returned lunar soil samples and lunar soil simulants were compared to provide insight into the material’s relative strength properties.

Project Objectives
In order to develop effective lunar vehicles and validate mobility models, it is necessary to test prototypes under simulated terrain conditions. This is a challenge since relatively small quantities of previously developed lunar soil simulants are available, and most simulants are made from uncommon materials that cannot be obtained in large quantities. What is required is a mixture of readily available terrestrial soils that emulates the known mechanical properties of the lunar soil. Other soil properties, such as mineral composition and chemistry, are not important for mobility assessment.

This study is directed towards the development of a replica lunar soil based on known physical and mechanical properties of the actual lunar soil, with an emphasis on strength characteristics. The research takes into account the results of past lunar missions including, but not limited to, data obtained from astronauts' observations, in-situ lunar soil tests, returned lunar soil samples, and laboratory lunar soil tests. In addition, the composition and mechanical properties of earlier lunar soil simulants, including MLS-1, JSC-1, and JSC-1a, are taken into consideration in the development of a new lunar soil replica. A new lunar soil replica could be prepared to emulate various lunar terrain conditions, in order to validate lunar vehicle trafficability models and evaluate lunar vehicle prototypes.

Lunar Soil Properties
There are several different sources of lunar soil data: Earth-based observations including infrared technology, astronauts' observations from past lunar missions, in-situ lunar terrain tests performed by robots, laboratory test results from returned lunar soil samples, and lunar meteorites found on the Antarctic ice caps. The Lunar Sourcebook (Heiken et al., 1991) provides tables of lunar soil parameters that summarize much of what is known about the lunar terrain to date. This information was obtained through different missions, such as NASA's successful Surveyor missions, which were the first to provide data on the composition and consistency of the lunar soil. The Soviet space program also offered information on the lunar soil via the Luna missions. More specifically, the 1970 Luna missions returned the first robot-collected lunar soil samples to Earth and launched the first robotic lunar rover, Lunokhod 1. A lunar soil mechanics milestone was achieved when this vehicle successfully deployed a ground penetrometer to evaluate the terrain strength along its route. Together the Surveyor and Luna operations enabled the Apollo missions to be a success, and it was from the Apollo missions that the majority of the current data on the lunar soil was obtained through lunar soil sampling.

Particle Size and Shape
One of the most important properties in the development of a lunar soil simulant is the replication of the grain size distribution. The grain size distribution of lunar soil as determined by Carter et al. (2004) is shown in Figure 1. As stated in the Lunar Sourcebook (Heiken et al., 1991), the mean grain size of the lunar soil ranges from 40 to 800 micrometers (µm), with most grains falling between 45 and 100 µm. Lunar soil is generally well graded, containing a vast array of particle sizes. The soil particles range in profile from round to very angular, and can be somewhat elongated or oblong in shape. With these physical properties the particles are fairly sharp and have a tendency to interlock. The lunar soil is best compared to the terrestrial soils of cobble-bearing silty sand, fine-grained slag, or terrestrial volcanic ash.

Specific Gravity The specific gravity of lunar soil ranges from 2.3 to greater than 3.2. The Lunar Sourcebook (Heiken et al., 1991) suggests an effective value of 3.1 be used for all engineering type analyses. This is due to the fact that the lunar soil exhibits subgranular porosity which exists as voids enclosed within the interior of the lunar soil particles. During specific gravity testing, water cannot fill these types of voids. This essentially causes the specific gravity of lunar soil to be underestimated. Other authors such as Carrier et al. (1991) suggest that the specific gravity of lunar soils ranges from 2.9 to 3.5 which may be a better range of specific gravity due to the fact that it takes subgranular porosity into account.

Density
The bulk density of the lunar soil is a very important factor in predicting vehicle mobility, as it influences the soil's bearing capacity and slope stability. The best approximations of the lunar soil bulk density with respect to depth are provided in the Lunar Sourcebook (Heiken et al., 1991), as shown in Table 1. These estimates come directly from a paper by Mitchell et al. (1974) resulting from the Apollo soil mechanics experiment S-200. It is important to keep in mind that these values represent the best estimates for the bulk density of intercrater areas. They were determined by taking into account the different testing methods and soil sampling techniques, including all soil disturbances associated with the sampling techniques (Heiken et al., 1991). The relative density of the lunar soil is dependent on the sizes and shapes of the soil grains, and is determined from the following relationship:

DR = [ρmax x (ρ – ρmin)] / [ρ x (ρmax – ρmin)] x 100% (1)

Where, ρmax is the maximum bulk density, ρmin is the minimum bulk density, and ρ is the bulk density of the sample. The relative density of lunar soil generally refers to the degree of particle packing. This property is vital to vehicle mobility as it controls the shear strength of the soil. Corresponding to Mitchell et al.’s (1974) best estimates for the bulk density of the lunar soil with respect to depth, the relative density of the lunar soil is shown in Table 2. It is important to note that this table is only valid for the lunar soil in intercrater areas, which range in relative density from medium dense to very dense.
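As a small illustration of equation (1), the sketch below evaluates the relative density; the 1.66 g/cc bulk density is the 0-60 cm estimate from Table 1, while the index densities are assumed values for illustration only, not reported lunar measurements.

    def relative_density(rho, rho_min, rho_max):
        """Relative density DR in percent, per equation (1)."""
        return (rho_max * (rho - rho_min)) / (rho * (rho_max - rho_min)) * 100.0

    # 1.66 g/cc comes from Table 1; rho_min and rho_max are placeholders.
    print(f"DR = {relative_density(1.66, rho_min=1.30, rho_max=1.80):.0f}%")

With these assumed index densities the result lands near the dense range reported for the 0-60 cm depth in Table 2.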

Strength Properties The most critical soil properties for the mobility of a wheeled vehicle on the lunar surface are the friction angle and cohesion. Together they can be combined in the classic Mohr-Coulomb equation to represent the ultimate shear strength of the soil, which directly affects the bearing capacity, slope stability, and potential to thrust against the lunar terrain. The equation is as follows:

τ = c + σ * tan(φ) (2)

Where, τ is the ultimate shear strength in kPa, c is the cohesion of the soil in kPa, σ is the normal stress applied to the soil in kPa, and φ is the friction angle of the soil in degrees. Some of the best pre-Apollo estimates tend to be under-conservative with respect to the shear strength parameters of the lunar soil. For example, Scott and Roberson (1969) determined values of 0.35 to 0.70 kPa for cohesion and values of 35º to 37º for friction angle based on the data obtained from the Surveyor soil mechanics surface sampler. Later, Mitchell et al. (1972d, 1974) used data from the Apollo missions to determine more representative values of the shear strength parameters of the lunar soil. These values range from 0.1 to 1 kPa for cohesion and 30º to 50º for friction angle. These values of cohesion and friction angle generally increase with increasing density of the soil.

Lunar Soil Simulant Properties
Two of the most well known lunar soil simulants are MLS-1 and JSC-1. These simulants are no longer in production; however, the data collected from them is very helpful in the creation of a new lunar soil simulant. MLS-1 was a high-titanium basalt hornfels material developed by the University of Minnesota. It best approximated the chemical composition of the Apollo 11 soil (Heiken et al., 1991). However, MLS-1 lacked one major aspect of the lunar soil: the glassy agglutinate fraction of soil particles. In addition, MLS-1 tended to overestimate the lunar soil strength properties. Therefore, in this research the focus is placed on the lunar soil simulant JSC-1. JSC-1 was developed at the NASA Johnson Space Center as a volcanic ash of basaltic composition, mined from a volcanic field near Flagstaff, Arizona (Heiken et al., 1991). This simulant best portrays the characteristics of the lunar mare soil.

Particle Size Distribution Figure 2 shows the grain size distribution of the lunar soil simulant JSC-1 as compared to the lunar soil. According to McKay et al. (1994) the median particle size of JSC-1 is 98 µm as determined by a study performed at the University of Texas, Dallas, and 117 µm as determined by a study performed at NASA Johnson Space Center.

Specific Gravity
According to McKay et al. (1994), the average specific gravity of JSC-1 as determined by the Lambe and Whitman (1969) method is 2.9. This value slightly undervalues the effective specific gravity of the lunar soil as stated in the Lunar Sourcebook (Heiken et al., 1991), and it is the lower bound of the range of lunar soil specific gravity suggested by Carrier et al. (1991). Generally speaking, the specific gravity of JSC-1 underestimates the specific gravity of the lunar soil.

Density
As determined by Klosky et al. (2000), the maximum and minimum bulk densities of JSC-1 were found to be 1.83 and 1.43 g/cc. These numbers were determined per ASTM D 2049 (ASTM, 1991), using a smaller-than-recommended sample size and a shake table providing both vertical and horizontal motion.

Strength Properties The cohesion of JSC-1 is estimated as 1.0 kPa. The angle of internal friction of the material is approximated as 45º. These values were determined using the Mohr-Coulomb failure criterion as described above. It is believed that the failure envelope for this material (as for the lunar soil) may be slightly non-linear. Compared to the lunar soil the cohesion of the JSC-1 material falls at the upper bound of the Apollo best estimates. In addition, the friction angle of JSC-1 falls near the upper bound of the lunar soil friction angle values.

Methodology Used Design and Characterization of a New Simulant: GRC-1 One of the most important factors in developing a new lunar soil simulant is to ensure that it can be produced in large quantities at a relatively low cost. A new design is proposed, which is composed of commercially available sand from the Best Sand Corporation of Chardon, Ohio (Fairmount Minerals and Subsidiaries Company). This mixture, called GRC-1, is available in large quantities for mobility testing.

Preparation
The GRC-1 sand mixture was designed to closely match the average particle size distribution of lunar soil, excluding the finer fraction of soil particles, as shown in Figure 3. The decision to omit the fine particles was made as a safety precaution to prevent dust generation during testing. Generally speaking, fine particles allow a granular material to be prepared to a higher density and raise the shear strength of the soil. As such, GRC-1 is expected to have less strength than a mixture that includes the fines.

The new mixture (for GRC-1) was created using four different sand products from the Best Sand (BS) Corporation. These sands are denoted by BS 110, BS 530, BS 565, and BS 1635. Their corresponding grain size characteristics are listed in Figure 4. A statistical analysis was run on the four different types of Best Sand in order to properly proportion the mixture. It was determined that by mixing 8% of BS 530, 36% of BS 110, 24% of BS 565, and 32% of BS 1635 the grain size distribution of the coarse grained portion of the lunar soil could be closely approximated. Sieve tests following ASTM D 422 test procedures were used to verify the distribution. Two different samples were prepared and tested in order to confirm the homogeneity of the mixture. The results are shown in Figure 5. It is clear that GRC-1 closely matches the grain size distribution curve of the coarse grained lunar medium. Also, the 4.5 kg and 45.4 kg samples of GRC-1 show good homogeneity of the soil mixture. The grain size distribution curves of these two samples are nearly identical. In addition, for the coarser grained fraction, the GRC-1 grain size distribution curves fall within a 5% error bound of the lunar soil medium.

Results Obtained Index Properties Standard ASTM laboratory tests were run on the initial mixture of GRC-1 to determine the maximum density, minimum density, and specific gravity of the soil simulant. ASTM D4253 and ASTM D4254 standard testing procedures were followed to determine the maximum and minimum index densities, respectively. A maximum index density of 1.89 g/cc with a standard deviation of ± 0.0265 was determined while a minimum index density of 1.60 g/cc with a standard deviation of ± 0.0058 was determined for the initial GRC-1 mixture. In its most dense state, GRC-1 is denser than values obtained for both JSC-1 and the actual lunar soil (1.83 g/cc and 1.79 g/cc, respectively). In its least dense state, GRC-1 is again denser than both JSC-1 and the actual lunar soil (1.43 g/cc and 1.45 g/cc, respectively).

ASTM D854 standard testing procedures were followed to determine the specific gravity of the GRC-1 mixture. After running three tests, it was determined that the specific gravity of GRC-1 was 2.58. This value falls near the lower bound of the specific gravity of the actual lunar soil (2.3 to 3.2) as stated in the Lunar Sourcebook (Heiken et al., 1991). In addition, it is below the average specific gravity value of 2.9 determined by McKay et al. (1994) for JSC-1.

Strength Properties
Values of cohesion and angle of internal friction were determined by unconsolidated undrained (UU) triaxial compression testing. The triaxial test equipment was purchased from ELE International of Loveland, Colorado and is designed to follow ASTM D2850-95 standards for the testing of unconsolidated undrained compressive strength of cohesive soils in triaxial compression. GRC-1 soil samples were prepared to bulk densities ranging from 1.58 to 1.78 g/cc and were tested at cell pressures of 50, 100, and 200 kPa. In the triaxial testing of GRC-1, water was not used as the source of cell pressure; a supply of shop air was used instead in order to better simulate the environment on the Moon, which includes no free water source. A minimum of three trials were run per soil sample. The standard Mohr-Coulomb equation was used to determine the corresponding values of cohesion and friction angle.

It is clear that the angle of internal friction of GRC-1 generally increases with increasing bulk density, ranging from 30.40º to 44.38º (see Table 3). These values are well within the range of friction angle values determined for the lunar soil (30º to 50º, as stated in the 1991 Lunar Sourcebook). The cohesion, however, ranges from 0 to 9.92 kPa in no specific order, which does not agree with the 0.1 to 1 kPa cohesion values for the lunar soil determined by Mitchell et al. (1972d, 1974). This indicates that the cohesion of the material is very sensitive with respect to the Mohr-Coulomb relationship. It may indicate that the Mohr-Coulomb relationship is indeed non-linear, as suggested by McKay et al. (1994), or that better test controls need to be implemented when running UU triaxial tests on GRC-1. Adding the finer fraction of soils to the GRC-1 mixture may result in more accurate cohesion values as compared to the lunar soil. In addition, as shown in Figure 6, the stress at failure generally increases with increasing cell pressure, which is crucial in order to ensure an accurate test has been run. This graph also exhibits the general trend that increasing the bulk density or decreasing the void ratio increases the stress at failure.
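For illustration, a minimal sketch of fitting equation (2) to triaxial results is given below; the stress pairs are placeholder values standing in for points read off the Mohr circles, not the measured GRC-1 data.

    import numpy as np

    # Placeholder (normal stress, shear stress at failure) pairs in kPa, one per
    # cell pressure, as would be extracted from the Mohr circles.
    sigma = np.array([50.0, 100.0, 200.0])
    tau = np.array([38.0, 72.0, 140.0])

    slope, c = np.polyfit(sigma, tau, 1)   # least-squares fit of tau = c + sigma * tan(phi)
    phi = np.degrees(np.arctan(slope))
    print(f"cohesion c = {c:.2f} kPa, friction angle phi = {phi:.1f} deg")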

Significance and Interpretation of Results
The initial mixture of GRC-1 is a starting foundation for the development of a new lunar soil simulant. It is created from manufactured soil from the Best Sand Company in Chardon, Ohio and can be easily reproduced at a relatively low cost. Compared to the actual lunar soil and to the past lunar soil simulant JSC-1, GRC-1 tends to have a higher bulk density, lower specific gravity, and high values of cohesion. However, the grain size distribution and friction angle of GRC-1 agree with the lunar soil very well. It is important to keep in mind that this initial mixture of GRC-1 represents only the coarse-grained fraction of the lunar soil. In future research the authors plan to develop a second formulation including the finer fraction, and to run similar tests on that material. It is anticipated that the addition of this finer-grained material will yield a simulant more representative of the actual lunar soil. In addition, further testing will be performed to develop scaling relationships to account for the difference in gravity between the Earth and the Moon. Upon development of a successful lunar soil simulant, it is the intention of NASA Glenn to use this material in vehicle mobility studies for the development of lunar roving vehicles for future missions to the Moon.

Figures and Charts

[Figure 1. Particle size distribution of lunar soil: percent finer vs. particle size (mm), showing upper bound, lower bound, and average curves.]

Table 1. Average bulk density of lunar soil.

Average Bulk Density (g/cc)    Depth (cm)
1.50 ± 0.05                    0 – 15
1.58 ± 0.05                    0 – 30
1.74 ± 0.05                    30 – 60
1.66 ± 0.05                    0 – 60

Table 2. Lunar soil relative density (Mitchell et al., 1974; Houston et al., 1974).

Relative Density (%)    Depth (cm)    Soil Description (Lambe and Whitman, 1969)
65 ± 3                  0 – 15        Medium to Dense
74 ± 3                  0 – 30        Dense
92 ± 3                  30 – 60       Very Dense
83 ± 3                  0 – 60        Dense

[Figure 2. Particle size distribution for JSC-1 (Carter et al., 2004): accumulative percent vs. grain size (micrometers).]

[Figure 3. Average lunar grain size distribution, coarse-grained fraction: percent finer vs. particle size (mm) for the lunar soil medium with upper and lower bounds.]

[Figure 4. Typical silica sand grades (Best Sand Corporation).]

[Figure 5. Grain size distribution for GRC-1: lunar medium, 4.5 kg hand mix, 45.4 kg mechanical mix, and ±5% error bounds.]

Table 3. UU triaxial test results.

Test No.    Average Bulk Density (g/cc)    Friction Angle (º)    Cohesion (kPa)
1           1.58                           30.40                 9.92
2           1.60                           33.28                 0.00
3           1.67                           33.83                 7.17
4           1.74                           42.06                 4.30
5           1.75                           42.43                 9.04
6           1.78                           44.38                 1.64

[Figure 6. Relationship between stress at failure (kPa) and confining pressure (kPa) for Tests 1 through 6.]

Acknowledgments
The authors acknowledge the NASA Glenn Research Center for its financial support through grant NNC06AA25A. The opinions expressed in this paper are those of the authors and do not represent the official policy of the funding agency. The first author would like to recognize the support of the Ohio Space Grant Consortium through a doctoral fellowship.

References
1. Annual Book of ASTM Standards. (1991). Sect. 4, Vol. 04.08, ASTM, West Conshohocken, Pa.
2. Carrier, W. D. III, Olhoeft, G. R., and Mendell, W. (1991). Physical properties of the lunar surface. In Lunar Sourcebook (G. H. Heiken, D. T. Vaniman, and B. M. French, Eds.), Cambridge University Press, Cambridge, 736 pp.
3. Carter, James L., et al. Lunar Simulant JSC-1 is Gone: The Need for New Standardized Root Simulants. Proceedings of Space Resources Roundtable VI, Johnson Space Center, Houston, Texas, 2004.
4. Heiken, G. H., et al. Lunar Sourcebook: A User's Guide to the Moon. Cambridge University Press, 1991.
5. Houston, W. N., Mitchell, J. K., and Carrier, W. D. III (1974). Lunar soil density and porosity. Proc. Lunar Sci. Conf. 5th, pp. 2361-2364.
6. Klosky, J. K., et al. (2000). Geotechnical Behavior of JSC-1 Lunar Soil Simulant. Journal of Aerospace Engineering, Vol. 13, No. 4, pp. 133-138.
7. Lambe, T. W. and Whitman, R. V. (1969). Soil Mechanics. John Wiley and Sons, Inc., New York.
8. McKay, David S., et al. JSC-1: A New Lunar Soil Simulant. Engineering, Construction, and Operations in Space IV, American Society of Civil Engineers, pp. 857-866, 1994.
9. Mitchell, J. K., Houston, W. N., Carrier, W. D. III, and Costes, N. C. (1974). Apollo Soil Mechanics Experiment S-200. Final report, NASA Contract NAS 9-11266, Space Sciences Laboratory Series 15, Issue 7, Univ. of California, Berkeley.
10. Mitchell, J. K., Houston, W. N., Scott, R. F., Costes, N. C., Carrier, W. D. III, and Bromwell, L. G. (1972d). Mechanical properties of lunar soil: Density, porosity, cohesion, and angle of friction. Proc. Lunar Sci. Conf. 3rd, pp. 3235-3253.
11. Scott, R. F. and Roberson, F. I. (1969). Soil mechanics surface sampler. In Surveyor Program Results, pp. 171-179. NASA SP-184.

Study on the Optimal Compensation for an On-Board Processing Satellite Payload Experiencing Critical Channel Impairment

Student Researcher: Mike Orra

Advisor: Junghwan Kim, Ph.D.

College of Engineering / The University of Toledo Electrical Engineering and Computer Science Department

Abstract
The complexity of communication satellites is ever-increasing as a result of growing demand for greater data throughput and increased flexibility. Towards this end, many systems are now being designed to incorporate on-board processing (OBP), wherein the originally transmitted data sequence is recovered on-board the satellite payload and reformatted (as needed) for downlink transmission. Typical examples of broadband services provided by OBP satellites are direct-to-home television, high-speed internet, video conferencing/telephony, distance education, telemedicine, and access to large databases.

Channel impairment represents a significant problem for any satellite communications system. Compromised channel integrity results from many factors, and typically, for a given waveform and satellite payload architecture, a select number of impairment factors can be identified as critical. Researchers have extensively studied and modeled the effects of signal-degrading factors in isolation for specific satellite systems, as well as their corresponding compensation techniques; however, there appears to be little, if any, published literature regarding the effects and potential compensation schemes for systems experiencing degradation caused by the combined effects of multiple critical factors. Modeling the effects of all critical factors through traditional closed-form mathematical analysis is extremely difficult. Models developed in previous studies cannot readily be adopted when considering the cumulative effects of all factors, as each factor influences the others. Therefore, an empirical and heuristic approach is required.

Project Objectives This research proposes the use of machine learning models such as neural networks and Bayesian belief networks in assessing the optimal compensation scheme for on-board processing satellite payloads experiencing critical channel impairment. Such models can be developed by learning from collected data, domain expertise, or both. Successful establishment of a machine learning model that can be initially developed based on domain expertise and continually refined through empirical data presents a powerful tool for realizing optimal operation of the satellite payload.

Introduction
The first step towards achieving the outlined project objective lies in the development of a fully functional modulator-demodulator, or modem, and an OBP whose functional block diagram is shown in Figure 1. Modems are responsible for three major functions, each of which is briefly described below.

Forward Error Correction
Satellite communication is largely unidirectional. Consequently, if a corrupted message is received, the receiver does not have the luxury of requesting a re-transmission from the sender; rather, the capability to detect and correct bit errors must be contained entirely within the message being transmitted. Forward error correction (FEC) encoding is the process in which an encoded message containing n bits is constructed from an input sequence containing k bits (where n > k) and a given generator scheme. By increasing the length of the bit sequence beyond what is required to represent the data, error detection and correction functionality can be introduced. The idea is to minimize the effects of noise during transmission by distributing the noise energy over a greater number of bits, which translates into a mechanism for minimizing the likelihood of receiving corrupted bits. The measured difference between the signal-to-noise ratios of the coded and uncoded sequences is referred to as coding gain. There exist many different types of codes, and deciding which to use depends largely on the nature of the end application. Moreover, coding schemes can be concatenated in order to produce messages comprised of "inner" and "outer" code layers. The two classes of codes considered in this research are convolutional and turbo codes. An important parameter of FEC coding schemes is the code rate, the ratio of the number of information bits, k, to the total number of information and redundancy bits, n. Code rates of 1/2, 1/3, 1/4, and 2/3 are developed for convolutional codes, while rates of 2/3, 3/4, 4/5, 5/6, and 6/7 are implemented for turbo codes. References [1-6] provide further details on these schemes.
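To make the rate notion concrete, here is a minimal sketch of a rate-1/2 convolutional encoder using constraint length 3 and the common generators 7 and 5 (octal); the report does not state which generator polynomials its simulations use, so these are illustrative assumptions.

    # Rate-1/2 convolutional encoder: each input bit produces two coded bits.
    def conv_encode(bits, g1=0b111, g2=0b101):
        state = 0
        out = []
        for b in bits:
            state = ((state << 1) | b) & 0b111           # 3-bit shift register
            out.append(bin(state & g1).count("1") % 2)   # parity over g1 taps
            out.append(bin(state & g2).count("1") % 2)   # parity over g2 taps
        return out

    print(conv_encode([1, 0, 1, 1]))  # 4 input bits -> 8 coded bits (k/n = 1/2)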

Modulation and Demodulation
The second major block that must be developed is the actual modulator, whose role is to represent the data message as a varying sinusoidal wave. The carrier frequencies are typically on the order of GHz. The basic format for any modulation scheme is given as:

s(t) = Ac cos(ωc t + φ) (1)

Information can be embedded in the sinusoid by altering the signal amplitude (Ac), frequency (ωc), or phase (φ). Such variations are known as amplitude shift keying (ASK), frequency shift keying (FSK), and phase shift keying (PSK), respectively. In this research work our modulation schemes of interest are limited to M-ary FSK (MFSK) and M-ary PSK (MPSK), where M refers to the size of the alphabet (typically a power of 2); thus, each symbol represents log2 M bits. Also, Gaussian minimum shift keying (GMSK), a continuous phase modulation (CPM) scheme (a sub-class of PSK), is included in the investigation. In CPM schemes, the phase is constrained to be continuous. CPM signals are more resistant to adjacent channel interference (ACI) due to their compact power spectrum; that is, the majority of the energy is contained within the channel bandwidth. In this study, the modulation schemes of interest are BFSK, QFSK, 8FSK, symmetric differential PSK (SDPSK), SDQPSK, and GMSK.

Demodulation techniques are categorized as either coherent or non-coherent. The difference between the two lies in the use of the received signal phase. Coherent demodulation assumes valid phase information is maintained in the received signal, which is challenging to achieve since signal phase is easily altered by filters, amplifiers, frequency converters, etc. Non-coherent demodulation techniques do not rely on phase information; rather, they can be accomplished through the use of correlators and square-law detectors that determine which symbols were transmitted based on the corresponding calculated energies. This study is limited to non-coherent demodulation techniques. Further details regarding the modulation/demodulation techniques developed in this study can be found in [7-16].
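As a sketch of the square-law idea, the snippet below demodulates one BFSK symbol without using phase information; the tone frequencies, sample rate, and symbol length are illustrative assumptions, not the study's parameters.

    import numpy as np

    fs, sym_len = 8000, 80       # assumed sample rate (Hz) and samples per symbol
    f0, f1 = 1000.0, 2000.0      # assumed BFSK tone frequencies (Hz)

    def demod_symbol(rx):
        """Non-coherent BFSK decision: pick the tone with the larger energy."""
        t = np.arange(len(rx)) / fs
        energies = []
        for f in (f0, f1):
            i = np.dot(rx, np.cos(2 * np.pi * f * t))   # in-phase correlator
            q = np.dot(rx, np.sin(2 * np.pi * f * t))   # quadrature correlator
            energies.append(i ** 2 + q ** 2)            # square-law: phase drops out
        return int(energies[1] > energies[0])

    t = np.arange(sym_len) / fs
    tx = np.cos(2 * np.pi * f1 * t + 1.3)               # a "1" with arbitrary phase
    print(demod_symbol(tx + 0.3 * np.random.randn(sym_len)))  # expect 1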

Finally, the modem of interest here is responsible for performing fast frequency hopping (FFH) wherein the transmission carrier frequency is varied using a pseudo-random assignment scheme in order to minimize narrowband interference and the probabilities of detection and interception [2, 16]. FFH is characterized by multiple frequency hops in a given symbol duration. Frequency hopping (FH) is typically employed only for Government related communication systems, not in commercial applications. FH is being considered in the modem for completeness.

The development of a simulated full processing satellite payload has been performed in tandem with the modem. Fully processed architectures are characterized by the fact that they recover the original baseband data transmitted on-board the payload and re-process it for downlink transmission.

A detailed block diagram of the OBP architecture is shown in Figure 2. The OBP system is responsible for frequency dehopping, RF downconversion, analog anti-aliasing filtering, analog-to-digital conversion, digital anti-aliasing filtering, group demultiplexing (channelization), re-encoding, re-modulation, signal multiplexing, RF upconversion, and frequency hopping. It is important to note that the downlink encoding and modulation schemes may or may not be the same as those used for the uplink signal. It should also be noted that the baseband packet switch is not included in the simulation, as it is outside the scope of our research. Finally, both uplink and downlink signals are multiplexed using frequency division multiplexing (FDM) only. Of the functions performed by the OBP, perhaps none, with the exception of baseband packet switching, is more critical than group demultiplexing. While a number of different techniques exist for performing this task, this study is limited to examining tree-structure filter banks and polyphase discrete Fourier transform (PDFT) filter banks. Both schemes exhibit similar computational complexity; however, their operations are notably different. Fundamentally, the PDFT is a more computationally efficient implementation of a uniform DFT filter bank: it operates by first representing the N-channel multiplexed input signal and a prototype low-pass filter (LPF) in their respective polyphase-decomposed substructures, then passing the signals into an N-point generalized DFT (GDFT), which outputs N filtered and downconverted channels. Alternatively, tree-structure filter banks recursively divide the multiplexed signal into high-pass and low-pass components using half-band quadrature mirror filters in successive stages until each channel has been demultiplexed. Further information on OBP can be found in [16-22].
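To illustrate what the channelizer computes (not how the polyphase structure computes it), here is a minimal direct-form sketch of uniform demultiplexing: each channel is downconverted, low-pass filtered, and decimated, which is the same set of outputs a PDFT bank produces with one shared prototype filter and an N-point DFT. The channel plan and prototype filter below are assumptions for illustration.

    import numpy as np

    def channelize_direct(x, N, h):
        """Separate an N-channel FDM signal: downconvert, LPF, decimate by N."""
        n = np.arange(len(x))
        out = []
        for k in range(N):
            bb = x * np.exp(-2j * np.pi * k * n / N)        # shift channel k to baseband
            out.append(np.convolve(bb, h)[: len(x)][::N])   # filter, then decimate
        return np.array(out)

    N = 8
    h = np.sinc(np.arange(-32, 32) / N) / N            # crude prototype low-pass filter
    x = np.exp(2j * np.pi * 3 * np.arange(1024) / N)   # a tone centered in channel 3
    y = channelize_direct(x, N, h)
    print(np.argmax(np.mean(np.abs(y) ** 2, axis=1)))  # energy peaks in channel 3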

Impairment Factors
The impairment factors typically relevant for any wireless satellite communications system are as follows:

• Narrowband additive white Gaussian noise (AWGN) – White noise is characterized as a Gaussian process with zero mean and a constant noise spectral density. White noise is generated by natural sources such as thermal vibrations of atoms in antennas, cosmic radiation, etc. and is a good model for corruption experienced by satellite communications links [23].

• Adjacent channel interference (ACI) – Spreading of the signal spectrum can be introduced in the uplink signal by earth station high power amplifiers (HPAs) and traveling wave tube amplifiers (TWTAs) for the downlink signal. It can also be introduced by poor filtering in frequency modulated systems. The spillover of spectral energy from adjacent channel(s) into the desired channel results in signal degradation.

• Frequency offset – System transmitters and receivers use local oscillators to perform frequency translation. Inherently, differences in the oscillator frequencies may be present which can result in carrier frequency offsets that can degrade modem performance.

• Intersymbol interference (ISI) – Transmission signals in rectangular waveforms (i.e., non-return-to-zero format) are problematic. When a rectangular pulse train passes through a filter, the pulses are spread in the time domain and interfere with adjacent symbol pulses. This condition can be mitigated by using pulse-shaping filters to soften the transition edges; however, it is important to note that many different pulse shapers can be used, and that some will be more appropriate than others for a given context.

• Quantization – The process of digitizing the received, downconverted signal introduces analog to digital conversion noise, and quantization noise.

• Fading – This phenomenon refers to the distortion that communication signals experience when propagating through certain media. In wireless communications systems such as satellite communications, fading can be attributed to multipath propagation; a circumstance in which the transmitted signal arrives at the receiver’s antenna via two or more paths. Two commonly used statistical models for describing fading effects are Rayleigh and Ricean distributions.

It is important to note that phase noise has been omitted from this listing as all demodulation is being performed non-coherently.

Machine Learners
Analyzing the cumulative effect of critical signal degradation factors is a complex and extensive task; little has been published in this area, as the complex nature of this application does not readily lend itself to developing a closed-form mathematical model, particularly when considering differing modulation, coding, and architectural aspects. Machine learners have gained, and continue to gain, attention as mechanisms for solving such problems. In this study, artificial neural networks (ANNs) and Bayesian belief networks (BBNs) will be considered as tools for compensation of the aforementioned impairment factors.

Neural networks have been successfully used to "learn" highly non-linear functions and to perform time-series forecasting. They can also, through training or (to an extent) self-discovery, learn to recognize and classify patterns with excellent accuracy; however, training a neural network when many variables are involved can require large training and testing data sets that may not be readily available or are simply unattainable. Furthermore, while neural networks have the ability to model highly non-linear functions and sometimes correctly classify unseen data through generalization, they are black-box models that offer no insight into their decision-making process [24-26].

BBNs are machine learning models that completely describe variables and their relationships based on a set of conditional dependencies and probabilities. Models can be developed through training based on collected data, leveraging established domain expertise, or both. Because Bayesian inferences are made based on conditional probabilities, the rationale behind their decision making process is easily extracted and understood. Such properties make Bayesian models ideal candidates for use in diagnostic applications [24, 26].

Current Results and Future Work
An extensive literature survey was conducted in order to ensure that the simulated model reflects the current trends and needs of the communications industry. Based on the findings of the survey, the aforementioned coding, modulation, and OBP architectures were selected for implementation. All simulations are being developed on the Matlab computing platform. Presently, convolutional coding for rates of 1/2, 1/3, 1/4, and 2/3 and turbo coding for rates of 2/3, 3/4, 4/5, and 5/6 have been implemented. For modulation, BFSK, QFSK, 8FSK, SDPSK, and GMSK have also been implemented. Simulations examining the convolutionally coded performance of all listed modulation schemes have been conducted for single-channel cases. Frequency hopping has also been implemented for single-channel cases without coding.

With respect to OBP architectures, the digital front-end and tree-structured group demultiplexers have been successfully implemented for processing a multi-channel frequency division multiplexed input signal with convolutional coding. The same simulations have been conducted using a uniform DFT filter bank for group demultiplexing. Uniform DFT filter banks perform the same operation as PDFT filter banks; however, because they are not in polyphase format, they are not computationally optimized. Models for traveling wave tube amplifiers (TWTAs) are currently being investigated and will soon be implemented as well.

The next phase in this study is to complete the integration of the coding, modulation/demodulation, and OBP modules. Thereafter, further study and development of impairment factors can be undertaken and included in the model. Finally, the simulated communications model must be placed in an appropriate context wherein the machine learners of interest can be developed and applied.

Acknowledgments I would like to thank the Ohio Space Grant Consortium for their support, my advisor, Junghwan Kim, Ph.D., for his guidance and expertise, and my colleagues, Chong Wang and Pooja Raorane.

Figures/Charts

[Figure 1 (block diagram) labels: OBP satellite payload with Rx, OBP, and Tx sub-units; uplink and downlink AWGN; frequency dehopping/hopping driven by PN code generators; FH modulator/demodulator; modulation/demodulation; FEC encoder/decoder; input and output data at the ground terminals.]

Figure 1. Block diagram of modem with OBP satellite payload.

Figure 2. Block diagram of the OBP architecture.

References
1. S. Lin and D. J. Costello, Jr., Error Control Coding: Fundamentals and Applications, Prentice-Hall, Inc.: Englewood Cliffs, 1983.
2. J. G. Proakis and M. Salehi, Contemporary Communications Systems Using MATLAB, PWS Publishing Company: Boston, 1998.
3. C. Berrou, A. Glavieux, P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes," 1993 IEEE International Conference on Communications, Vol. 2, p. 1064-1070, May 1993.
4. W. Ryan, "A Turbo Code Tutorial," unpublished paper available at http://telsat.nmsu.edu/~wryan/
5. O. F. Acikel, W. E. Ryan, "High Rate Turbo Codes for BPSK/QPSK Channels," 1998 IEEE International Conference on Communications, Vol. 1, Issue 7-11, p. 422-427, June 1998.
6. F. , S. C. Kwatra, J. Kim, "Analysis of Puncturing Pattern for High Rate Turbo Codes," IEEE 1999 Military Communications Conference Proceedings, Vol. 1, p. 547-550, 1999.
7. F. Xiong, Digital Modulation Techniques, Artech House: Boston, 2000.
8. S. Haykin, Digital Communications, Wiley: New York, 1988.
9. J. G. Proakis, Digital Communications, 2nd Edition, McGraw-Hill: New York, 1989.
10. K. Murota and K. Hirade, "GMSK Modulation for Digital Mobile Radio Telephony," IEEE Transactions on Communications, Vol. 29, Issue 7, p. 1044-1050, July 1981.
11. N. Al-Dhahir and G. Saulnier, "A High-Performance Reduced-Complexity GMSK Demodulator," IEEE Transactions on Communications, Vol. 46, No. 11, p. 1409-1412, November 1998.
12. M. Zhao and M. Yuan, "A Noncoherent GMSK Receiver for Software Radio," IEEE Vehicular Technology Conference, 2002, Vol. 4, p. 1675-1679, May 2002.
13. A. Abrardo, G. Benelli, and G. R. Gau, "Multiple-Symbol Differential Detection of GMSK for Mobile Communication," IEEE Transactions on Vehicular Technology, Vol. 44, Issue 3, p. 379-389, 1995.
14. K. Tsai, T. Oyand, "Gaussian Minimum Shift Keying Modulator," IEEE Transactions on Signals, Systems and Computers, Vol. 1, p. 235-240, 2000.
15. Y.-C. Wu and T.-S. Ng, "New Implementation of a GMSK Demodulator in Linear Software Radio Receiver," IEEE Transactions on Communications, Vol. 2, p. 1049-1053, 2000.
16. R. L. Peterson, R. E. Ziemer, D. E. Borth, Introduction to Spread Spectrum Communications, Prentice-Hall, Inc.: Englewood Cliffs, 1995.
17. T. Nguyen, J. Hant, D. Taggart, C. Tsang, D. M. Johnson, J. Chuang, "Design Concept and Methodology for the Future Advanced Wideband Satellite System," The Aerospace Corporation, Communications Systems Engineering Department, El Segundo, California, USA.
18. R. E. Crochiere and L. R. Rabiner, Multirate Digital Signal Processing, Prentice Hall: Englewood Cliffs, 1983.
19. H. Gockler and H. Eyssele, "Study of On-Board Digital FDM-Demultiplexing for Mobile SCPC Satellite Communications – Parts I and II," European Trans. Telecomm. Systems, Vol. 3, pp. 7-30, 1992.
20. S. K. Mitra and J. F. Kaiser, Handbook for Digital Signal Processing, John Wiley & Sons: New York, 1993.
21. F. Taylor, The Athena Group, Inc., and J. Mellot, Hands-On Digital Signal Processing, McGraw-Hill: New York, 1998.
22. M. Bellanger, Digital Processing of Signals: Theory and Practice, 2nd ed., John Wiley & Sons: Chichester, 1989.
23. T. T. Ha, Digital Satellite Communications, 2nd ed., McGraw-Hill: New York, 1990.
24. T. M. Mitchell, Machine Learning, McGraw-Hill: Boston, 1997.
25. J. M. Zurada, Introduction to Artificial Neural Systems, West Publishing Company: St. Paul, 1992.
26. D. P. Bertsekas and J. N. Tsitsiklis, Neurodynamic Programming, Athena Scientific, 1996.

Accelerometers

Student Researcher: Hallee M. Palmer

Advisor: Miss Sarah Gilchrist

Cedarville University Department of Science, Mathematics and Education

Abstract My project incorporates NASA's Microgravity materials and the formula for acceleration due to gravity. The students will learn about proportions, estimating acceleration, and substituting numbers into equations. This lesson is designed for a seventh- or eighth-grade math class. First, the students will review what gravity is and learn the formula for calculating the velocity of objects falling toward Earth. Then, the students will build an accelerometer, which they will use to measure the acceleration produced by various motions. Using the greatest acceleration they experienced, they will calculate the time it would take an object to fall a specified height. The last step will be to use a chart of the ratios between the surface gravity of the planets. The students will solve the proportions and calculate the time it takes an object to fall a specified height on each planet. As a concluding assignment, they will be challenged to think about ways to modify the accelerometer to make its measurements more accurate.

Objectives • To learn about acceleration due to gravity by building an accelerometer • To measure and estimate the acceleration experienced during various jumps and then graph the points • To fit a curve to data • To discuss ratios of weights on different planets • To practice working in groups and developing the skills needed for cooperation

Alignment with NCTM Standards • Understand and use ratios and proportions to represent quantitative relationships • Select, create, and use appropriate graphical representations of data • Build new mathematical knowledge through problem solving

Lesson The basis for this lesson is the “Accelerometer” lesson contained in NASA's Microgravity Educator's Guide. First, one needs to explain what gravity is and how it is measured. Then, introduce the formula for the attractive force between two objects. Once the students have estimated the force of gravity using the formula, it is time to build the accelerometer.

The students can work in small groups to follow the pattern and make the accelerometer. Then, they will need to calibrate it using three sinkers. The first sinker is glued to the rubber band and marks 1g. A second sinker is added and a mark is made for 2g. Lastly, a third sinker is added and a mark is made for 3g. This process is repeated for negative gravity with the accelerometer held upside down.

The students make several jumps to measure the acceleration they experience at various parts of the jump. They should do several trials from each of two or three heights (ground, foot stool, and chair). They should record the maximum and minimum gravity they experienced, the gravity at the highest part of their jump, and the gravity right before landing. Then, they can graph their data points using a line graph with their starting height on the x-axis and the gravity on the y-axis. Each group can then work on fitting a curve to their data, either by hand or by using a graphing calculator if one is available.

As a final section on gravity, the students can use proportions to find their weight on each of the planets. For more of a challenge, the teacher can give them the mass or weight of objects on one planet and let them use proportions to find the weight on another planet. (A worked example follows the table below.)

Proportions of Gravity

Planet   Mercury  Venus  Mars  Earth  Jupiter  Saturn  Uranus  Neptune  Pluto
Gravity  0.38     0.91   0.38  1      2.54     0.93    0.8     1.2      0.7
(relative to Earth = 1)
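As a worked illustration of the lesson's final calculation, the short sketch below applies the table's ratios to the fall-time formula t = sqrt(2h/g). The 2-meter drop height and the 9.8 m/s² value for Earth's gravity are illustrative choices, not values taken from the lesson materials.

```python
# Minimal sketch of the planetary fall-time calculation; the drop
# height h is an assumed example value, not part of the NASA lesson.
import math

EARTH_G = 9.8  # m/s^2
ratios = {"Mercury": 0.38, "Venus": 0.91, "Mars": 0.38, "Earth": 1.0,
          "Jupiter": 2.54, "Saturn": 0.93, "Uranus": 0.8,
          "Neptune": 1.2, "Pluto": 0.7}  # proportions from the table above

h = 2.0  # drop height in meters (assumed)
for planet, ratio in ratios.items():
    g = ratio * EARTH_G            # surface gravity via the proportion
    t = math.sqrt(2.0 * h / g)     # time to fall height h from rest
    print(f"{planet:8s} g = {g:5.2f} m/s^2, t = {t:.2f} s")
```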

Materials and Resources

For each Accelerometer: • Lightweight poster board • 1 “drilled egg” lead fishing sinker, 1 oz. size • Masking tape • Rubber band, #19 size • 4 small paper clips • Scissors • Ruler • Ballpoint pen • Pattern

For the class: • 2 more fishing sinkers to calibrate the accelerometers • A foot stool and a chair

Assessment After the students do their jumps from several heights, they fill out the sheet (below), which is from the NASA package. Then, they plot the data they collected to practice their graphing skills. This can be checked to make sure they are graphing the points properly. If time allows, they can fit a curve to the data as well.

Conclusion This project allows the students to have a hands-on experiment where they learn about gravity. They also are able to practice using formulas, solving proportions, and graphing data. This hands-on approach makes the math and formulas more interesting. It also shows the students a practical use for math.

Sources 1. NASA Microgravity Educator's Guide: http://www.nasa.gov/audience/foreducators/topnav/materials/listbytype/Microgravity_Teachers_Guide.html 2. Principles and Standards for School Mathematics: http://standards.nctm.org/document/appendix/numb.htm

Electrospinning of Polymer Nano Fibers

Student Researcher: Azm Parvez

Advisor: Shing-Chung Wong

The University of Akron Department of Mechanical Engineering

Abstract Electrospinning, or electrostatic spinning, introduced by Formhals and developed by Reneker, uses high voltage to charge a polymer solution and deposit polymer fibers ranging from a few hundred micrometers down to a few nanometers in diameter on a conductive grounded collector. The high-voltage power source ejects the electrically charged polymer solution from the pipette by overcoming the surface tension of the solution. The discharged polymer solution undergoes stretching and elongation driven by electrostatic repulsion, and can be directed or accelerated by electrical forces and collected in sheets, in spools, or in other useful geometrical forms. The solvent evaporates and the jet solidifies before reaching the grounded collector; when the jet dries or solidifies, an electrically charged fiber remains. This project aims to design an electrospinning station to enable collection of aligned nanofibers. The collected fibers will be evaluated for their mechanical properties using microscopic and spectroscopic techniques.

Project Objective Design an electrospinning station and a rotational collector to process aligned polymer fibers.

Methodology Used An electrospinning setup consists of a syringe pump, a high voltage source, and a collector (Figure 1). During the electrospinning process, a polymer solution is held at a needle tip by surface tension. The application of an electric field using the high-voltage source causes charge to be induced within the polymer, resulting in charge repulsion in the solution. This electrostatic force opposes the surface tension; eventually, the charge repulsion overcomes the surface tension, causing the initiation of a jet. As this jet travels, the solvent evaporates and the jet solidifies and is collected on a conductive ground collector.

Significance and Interpretation of Results An electrospinning station was designed and built in the lab, with successful collection of electrospun nanofibers. Fibers that were processed included polycaprolactone (PCL) and polyethylene oxide (PEO). The parameters evaluated for electrospinning were solution concentration, surface tension, polymer molecular weight, flow rate, field strength/voltage, distance between tip and collector, and collector composition and geometry. The results will lead us to a better understanding of the deformation mechanics of polymers when the fiber diameter approaches the size of the polymer molecules; the latter forms the important fundamental basis of our team's efforts. Figure 1 shows a scanning electron microscopy image of electrospun polycaprolactone (PCL).

Electrospinning Solution Parameters The electrospinning process can be manipulated by a number of variables. Solution properties include viscosity, conductivity, surface tension, polymer molecular weight, and dielectric constant. The effects of the solution properties can be difficult to isolate, since varying one parameter generally affects others (e.g., changing the conductivity can also change the viscosity). The effects of some of the parameters investigated on electrospun fiber morphologies and sizes are described in this section.

Viscosity/concentration: Solution viscosity (as controlled by changing the polymer concentration) has been found to be one of the most important parameters for obtaining optimum fiber size and morphology when spinning polymeric fibers.

Conductivity/solution charge density: It has been found that increasing the solution conductivity or charge density can be used to produce more uniform fibers with fewer beads present.

Surface tension: The impact of surface tension on the morphology and size of electrospun fibers has also been investigated; it was found that beading was affected by the surface tension.

Flow rate: In general, it was found that lower flow rates yielded fibers with smaller diameters. Flow rates that were too high resulted in beading since fibers did not have a chance to dry prior to reaching the collector.

Distance between tip and collector: Varying the distance between the tip and the collector has been examined as another approach to controlling the fiber diameters and morphology. It has been found that a minimum distance is required to allow the fibers sufficient time to dry before reaching the collector.

Collector composition and geometry: A number of materials and geometries have been studied for the collection of electrospun polymeric fibers.

Field strength/voltage: The electric field must be sufficient to overcome the surface tension of the solution. Increased voltage produces jets with larger diameters and can ultimately lead to the formation of several jets.

Ambient parameters: Increased temperature causes a decrease in solution viscosity, resulting in smaller fibers. Increased humidity results in the appearance of circular pores on the fibers.

Figures/Charts

Figure 1. Electrospinning setup and SEM micrograph of electrospun polycaprolactone (PCL).

Rotating Drum Collector

References 1. Wong, S.-C., Lee, H., Qu, S., Mall, S. & Chen, L. 2006. A study of global vs. local properties for maleic anhydride modified polypropylene nanocomposites. Polymer, 47, 7477-7484. 2. Wong, S.-C., Qu, S., Lee, H. & Mall, S. 2006. “Instrumented indentation on intercalated clay reinforced polypropylene nanocomposites,” in Proceedings of the 2006 ASME International Mechanical Engineering Congress, November 5-10, Chicago, IL. 3. Pham, Q. P., Sharma, U. & Mikos, A. G. 2006. Electrospinning of polymeric nanofibers for tissue engineering applications: a review. Tissue Engineering, 12(5).

Analysis of Extrema Values of a Scalable Parallel Algorithm for Simulating the Control of Steady State Heat Flow through a Metal Sheet

Student Researcher: Monica A. Porché

Advisor: Robert L. Marcus

Central State University Mathematics and Computer Science

Objective The objective of this research project is to determine particular extrema values for heat topologies of a parallel algorithm which simulates the control of steady-state heat flow through a metal sheet. The heat topologies of the sheet were displayed using MATLAB surface plots.

Procedure The simulation used initial heat conditions of 100ºC and 0ºC applied to the edges of the metal sheet. The algorithm performed a parallel row-wise decomposition of a simulated 400x400 grid placed on the metal sheet. The program was run using various numbers of compute nodes to determine the speed-up of the algorithm. Parallel efficiency techniques were used to ensure that the algorithm remained scalable.
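The parallel MPI program itself is not reproduced in this summary; the following minimal Python/NumPy sketch shows only the serial relaxation that such an algorithm parallelizes by rows. Which edges are hot or cold, the convergence tolerance, and the iteration cap are assumptions for illustration.

```python
# A serial sketch of the steady-state heat relaxation; the parallel
# version would split the rows of `grid` across compute nodes.
import numpy as np

def steady_state_heat(n=400, hot=100.0, cold=0.0, tol=1e-3, max_iters=50000):
    """Jacobi relaxation on an n x n sheet with fixed-temperature edges."""
    grid = np.zeros((n, n))
    grid[0, :] = hot    # hot top edge (assumed placement)
    grid[-1, :] = cold  # cold bottom edge; side edges stay at 0
    for _ in range(max_iters):
        new = grid.copy()
        # each interior point moves toward the average of its four neighbors
        new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1]
                                  + grid[1:-1, :-2] + grid[1:-1, 2:])
        if np.max(np.abs(new - grid)) < tol:
            break  # converged to the steady-state heat topology
        grid = new
    return grid

sheet = steady_state_heat()
```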

Results The research successfully tracked the locations of maxima and minima under changing heat settings of the edges, determined the specific values of the extrema, and determined how many occurred at each heat setting. MATLAB graphics were used to display the results.

The steady-state heat spectrum across the metal sheet showed clusters of residual heat after the temperature on the edges was lowered, and clusters of cool areas after the temperature of the cold edges was raised. These cluster areas were identified as local maxima and local minima. The experimental data showed the existence of local extrema that were not distinguishable by visual inspection of the MATLAB surface plots. The results also showed that when the temperature applied to the edges was changed more gradually, more local maxima occurred. The number of local minima was not affected by the rate of change of temperature on the cold edges. Speed-up analysis showed that, with a 400x400 sheet, the algorithm displayed almost linear scalability up through 16 nodes.
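The project's own MATLAB extrema-tracking code is not shown here, but one plausible way to flag extrema that are hard to see by eye is to compare each point with its neighborhood using standard image filters; the 3x3 neighborhood size below is an assumption.

```python
# Hedged sketch of neighborhood-based extrema detection (not the
# author's MATLAB implementation).
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def local_extrema(sheet, size=3):
    """Boolean masks of points equal to their neighborhood max/min."""
    # a point equal to the maximum of its size x size neighborhood is
    # a local maximum (plateaus are flagged too); likewise for minima
    maxima = sheet == maximum_filter(sheet, size=size)
    minima = sheet == minimum_filter(sheet, size=size)
    return maxima, minima

# tiny demonstration grid; a real run would pass the 400x400 solution
demo = np.array([[0., 1., 0.],
                 [1., 5., 1.],
                 [0., 1., 0.]])
maxima, minima = local_extrema(demo)
print(np.argwhere(maxima))  # the centre point [1, 1] is flagged
```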

Conclusions This research is useful in helping to determine where local maxima and minima occur when, for example, the cooling or heating of a computer chip is needed. Instead of applying the cooling or heating to the entire chip, it can be applied to the hot or cold spots embedded in the chip, drastically decreasing the cost of cooling or heating.

Figures

Reengineering of Drill Collar Transportation and Storage

Student Researcher: Ryan D. Prater

Advisor: Dr. Richard Gross

The University of Akron Department of Mechanical Engineering

Abstract An engineering firm in Twinsburg, Ohio, that designs and manufactures drill collars has encountered major issues in the storage and transportation of its product. Drill collars are large cylinders that are typically fifteen feet long and can weigh over two thousand pounds. They are mainly used to survey underground formations for oil drilling and to help stabilize the drill during all drilling endeavors. Because the collars evaluate the surrounding formations as drilling takes place, many sensitive sensors surround each collar. These sensors are extremely important and very expensive; even the smallest amount of damage would require the collar to be replaced. Preventing such damage makes the storage and transportation of these collars a problem requiring careful research in order to reengineer the machines and containers that carry out these tasks.

Project Objectives The purpose of this project was to design a new way to transport and store drill collars without the potential for damage or personal injury. The engineering firm informed our design team of transporting incidents that have happened in the past; in the most unfortunate, a worker broke several bones in his foot when a drill collar fell on it during transport. Another objective was to design a way to maneuver drill collars both vertically and horizontally, so that the collars could be carried through wide and narrow passageways such as alleys and doorways. Weather-related issues also needed to be addressed: friction needed to be increased to reduce slip when a collar was grasped and raised into the air. All objectives needed to be met within the engineering firm's budget and manufacturing lead times.

Methodology Used The methods used for this project were stress- and load-related, drawing on the fundamentals of engineering taught in the early stages of schooling, such as calculating moments and counteracting forces. This was crucial in the redesign of a forklift with specific attachments to ensure the safety of the drill collars. The weights of the drill collars ranged from 800 to 2,000 pounds; given the incidents reported earlier, the weight capacities could not be neglected. Safe points at which the collars could be carried were determined, providing equilibrium during transportation without damaging the sensors or transmitting antennas. (A toy statics check of this kind of calculation appears below.)
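Since the firm's actual specifications are withheld, the sketch below is only a toy version of the moment calculations mentioned above: it checks whether a forklift's restoring moment exceeds the overturning moment of a collar at the heavy end of the reported weight range. Every dimension and the counterweight value are invented placeholders.

```python
# Toy moment-balance check about a forklift's front axle; all values
# except the 2,000 lb collar weight (from the reported range) are
# assumed placeholders, not the firm's data.
load = 2000.0           # lb, heaviest drill collar in the reported range
load_arm = 30.0         # in, load center ahead of the front axle (assumed)
counterweight = 3500.0  # lb, truck weight acting behind the axle (assumed)
cw_arm = 40.0           # in, counterweight center behind the axle (assumed)

tipping = load * load_arm           # overturning moment, lb-in
restoring = counterweight * cw_arm  # restoring moment, lb-in
safety_factor = restoring / tipping
print(f"Safety factor against tipping: {safety_factor:.2f}")  # want well above 1
```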

Designing a new shipping container required the use of ANSYS, a finite element analysis package. The redesigned shipping container was drawn in SolidWorks and the model was then imported into ANSYS. This program allows a load to be placed at any point on the model while calculating the maximum and minimum stress points. This is also very important for understanding where critical flaws may exist in the design that could cause material failure, personal injury, or product damage.

Results Obtained All of the load specifications were met in the redesigning of the forklift apparatus and shipping container. The weight constraints and maneuverability issues were met as well. The specifics of the drill collars and forklift design are being withheld by the engineering firm due to patent and legal issues.

Acknowledgments • The Ohio Space Grant Consortium • The University of Akron College of Engineering • The University of Akron Senior Design Team • Dr. Paul C. Lam • Dr. Richard Gross

Exploring Software-Defined Radio

Student Researcher: Richard E. Reid, III

Advisor: Dr. Zhiqiang Wu

Wright State University Department of Electrical Engineering

Abstract There has been a revolution in radio known as software-defined radio (SDR). As we move into a wireless world, SDR fits right in. In an SDR, software defines the channel modulation waveforms, and signal processing is performed on a reconfigurable piece of electronics. The goal of SDR is to allow the transmission and reception of new radio protocols simply by editing the software. GNU Radio is the platform used to study SDR. GNU Radio was used to receive an FM signal, and graphical user interfaces for GNU Radio applications were built in Python. Performance-critical signal processing blocks of code were constructed in C++. GNU Radio provides the glue to tie the blocks of code together. SDR offers a promising future for radio communications due to its versatility.

Project Objectives My primary objective was to install GNU Radio and run some of the software that comes with the installation of GNU Radio. My secondary goal was to transmit a text message from one station to another using GNU Radio. It would obviously be challenging to know whether or not a message was transmitted without making the receiver work as well.

Methodology Used The Python programming language was used to design blocks of code that would be tied together in GNU Radio. The program designed should allow the user to choose the modulation scheme (frequency, amplitude, or phase modulation), the pulse shape of the signal, and the frequency at which the text is transmitted. SDR requires less hardware than traditional radios: a radio frequency (RF) front end with an antenna, which captures the signal from the air, and a Universal Software Radio Peripheral (USRP) motherboard, which contains the field-programmable gate arrays (FPGAs) and analog-to-digital converters (ADCs). (A sketch of a minimal flow graph follows.)
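To make the block-connection idea concrete, here is a minimal GNU Radio flow graph in Python. It is a hedged sketch, not the project's code: it uses current GNU Radio 3.x block names, which differ from the 2008-era API the project used, and a sine-wave source stands in for the USRP front end.

```python
# Minimal GNU Radio flow graph: source -> throttle -> sink. The sine
# source is an assumed stand-in for the USRP/RF front end.
import time
from gnuradio import gr, analog, blocks

class ToneGraph(gr.top_block):
    """Three blocks tied together by GNU Radio's connect() 'glue'."""
    def __init__(self, samp_rate=32000, freq=1000):
        gr.top_block.__init__(self, "tone")
        src = analog.sig_source_f(samp_rate, analog.GR_SIN_WAVE, freq, 0.5)
        # throttle paces the graph when no hardware clock is present
        pace = blocks.throttle(gr.sizeof_float, samp_rate)
        sink = blocks.null_sink(gr.sizeof_float)
        self.connect(src, pace, sink)

if __name__ == "__main__":
    tb = ToneGraph()
    tb.start()     # run the graph for one second, then shut down
    time.sleep(1)
    tb.stop()
    tb.wait()
```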

Results Obtained The code to receive an FM radio signal is one of the many example programs provided with a successful installation of GNU Radio. Most of the challenges in this project were software issues; designing and implementing the code to transmit and receive a text message posed a greater barrier. To tackle any software task, it is best to complete the pseudocode first. The pseudocode and the design of a graphical user interface were completed. The figure below shows the graphical user interface designed using Python.

Figures

Acknowledgments This research would not have been possible without the guidance of my advisor, Dr. Zhiqiang Wu, and the assistance of my student mentors, Haitham Taleb and Lee Patton.

References 1. http://www.falla7.com/wsu/GNURADIO/ 2. http://www.pattoncentral.org/?p=67

Computational Fluid Dynamics Simulation Study

Student Researcher: Michael J. Reinbolt, Jr.

Advisor: Dr. Calvin Li

The University of Toledo Mechanical, Industrial, and Manufacturing Engineering

Abstract Computational Fluid Dynamics (CFD) simulation is the use of computationally based design and analysis software to solve fluid flow problems. Using CFD, a computational model is created that represents the device or system to be studied. Computers perform millions of calculations that simulate the interaction of liquids and gases around the desired object. After solving, the software outputs predictions of the fluid dynamics; the results can then be studied and necessary changes made to designs. CFD plays an increasingly important role in industrial applications and scientific research. I will be using Gambit software for pre-processing, which entails building a geometric model and applying a boundary-layer mesh. I will be using Fluent software for the majority of the simulation, which includes solving and post-processing of the data. The software code is based on the finite volume method on a collocated grid. The project will provide me with a fundamental understanding of CFD simulation by reading finite element analysis codes, running simulations, and analyzing and processing output data. My research will focus on the simulation of gas and liquid flow, heat transfer, and possibly moving bodies.

Project Objectives I began my research during the middle of the Fall 2007 semester. The goal was to effectively learn how to use the Gambit software first, then gain a fundamental understanding of the Fluent software, and finally begin a full research project. In order to learn how to use both Gambit and Fluent proficiently, I was assigned tutorial problems to work through each week. I was also required to write progress reports so my advisor could track my progress and identify where I was having problems. Through the tutorials, I have learned the basic functions and developed a good understanding of how to set up a problem (pre-processing), calculate a solution, and analyze the results (post-processing). The main objective in this research has been learning how to lead a problem from start to finish as if in a real scenario. A thorough understanding of the software is required before any independent research project can be completed. I am set to start a project this coming summer semester using what I have learned; I will be modeling a problem from my advisor's research.

Methodology Used In order to output accurate solutions in CFD software, a fundamental understanding of the science and mathematical equations governing the software is necessary. Fluent solvers are based on a finite volume method in which general conservation equations for mass, momentum, and energy are used to solve the problem. Each problem requires attention to all settings in order to get a good solution. When starting a problem, the first step is to construct the geometric model and prepare the mesh. This crucial first step is called pre-processing. The overall quality of the mesh will determine the accuracy of the final solution. During my research, I investigated the differences in mesh quality when changing element shapes, volume and face meshes, and using size functions. Using different element shapes and sizes leads to trade-offs between mesh size, quality, computational requirements, and application time and effort for the user. Size functions can be used to control the growth of mesh and can provide a smooth transition from fine to coarse mesh.

The next step involves setting up the numerical model that will be used to calculate the solution. To achieve this, the material properties, boundary conditions, and initial conditions of the problem must first be entered. Then the solver settings that best fit the problem must be specified. In this research, I also explored the differences in solution accuracy when changing the solver parameters, including the solver selection, discretization schemes, and interpolation methods (gradients). The solver can be either pressure-based or density-based. The pressure-based solver can be used for a wide range of applications, from low-speed incompressible flow to high-speed compressible flow, but it cannot be used for multiphase and periodic mass-flow cases. The density-based solver is used when there is strong coupling among energy, density, and momentum, such as compressible flow with combustion. Several gradient interpolation options are available for different situations: Green-Gauss Node-Based is more accurate than Green-Gauss Cell-Based and is recommended for triangular and tetrahedral meshes, while the Least-Squares Cell-Based interpolation method yields about the same accuracy as Green-Gauss Node-Based but is recommended for polyhedral meshes.
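The practical difference between first- and second-order discretization schemes, which shows up in the results below, can be illustrated outside Fluent with a one-dimensional advection toy problem. The grid size, Courant number, and step count in this sketch are arbitrary choices, and the schemes shown are generic textbook ones, not Fluent's exact formulations.

```python
# Toy 1-D advection of a sharp front: first-order upwind smears it
# (numerical diffusion), a second-order scheme keeps it sharper.
import numpy as np

n, c, steps = 200, 0.4, 100  # cells, Courant number, time steps (assumed)
u0 = np.where(np.arange(n) < n // 4, 1.0, 0.0)  # sharp step profile

first, second = u0.copy(), u0.copy()
for _ in range(steps):
    # first-order upwind: stable but diffusive
    first[1:] = first[1:] - c * (first[1:] - first[:-1])
    # Lax-Wendroff (second-order): sharper front, with small
    # oscillations near the discontinuity as the trade-off
    second[1:-1] = (second[1:-1]
                    - 0.5 * c * (second[2:] - second[:-2])
                    + 0.5 * c**2 * (second[2:] - 2 * second[1:-1] + second[:-2]))

# width of the smeared transition region for each scheme
print("front width, upwind:      ", np.sum((first > 0.05) & (first < 0.95)))
print("front width, Lax-Wendroff:", np.sum((second > 0.05) & (second < 0.95)))
```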

Results Obtained Shown below is a problem demonstrating some of the work I have been doing on improving mesh quality and using different solver settings to increase accuracy. The problem is a mixing elbow in which a cold fluid at 20 °C mixes with a warmer fluid at 40 °C. The cold fluid enters through the large inlet at the front of the pipe, and the warmer fluid enters through a smaller inlet at the elbow. The mesh was adapted based on the temperature gradient to improve the prediction, and a second-order discretization scheme was used to improve the accuracy of the solution. The first-order discretization scheme can lead to a diffusive solution in which mixing is over-predicted. The figure to the left displays the temperature contours before the changes, and the figure to the right shows the solution after the changes.

Figure 1. How changes in mesh and discretization schemes affect solution accuracy.

References 1. “Introduction to FLUENT: Fluid Flow and Heat Transfer in a Mixing Elbow,” Fluent Inc., 2006.

Computerized Modeling of a Heterogeneous Reservoir

Student Researcher: Charles R. Reynolds

Advisor: Dr. Benjamin H. Thomas

Marietta College Petroleum Engineering Department

Abstract A typical hydrocarbon reservoir is composed of a relatively consistent pore volume, permeability and grain distribution. In a heterogeneous reservoir the composition and characteristics of the rock change across the extent of the reservoir (1). Such reservoirs are complex to recreate in computer models due to the inability to generalize the reservoir across the entire area. Initial computer generated models are often not successful in matching flow characteristics and behavior to the reservoir history (1). The computer generated model must be modified so that any data calculated using the model will be indicative of what would happen in the actual reservoir.

Project Objectives The scope of this project is the creation of a representative computerized model of a waterflood in a heterogeneous oil reservoir. In waterflooding, water is injected into the reservoir from wells surrounding the production well in order to push the oil out of the formation greatly increasing the overall recovery of oil from a reservoir. Using complex computer programs, the Gordon Sandstone can be modeled and used to evaluate the effectiveness of a current waterflood. The model can then be used to predict the performance of the waterflood in other portions of the Jacksonburg-Stringtown field.

Methodology Used The preliminary step was the gathering of data regarding the composition, flow characteristics and history of the chosen reservoir. It was necessary to find specific data for each of the observed sub-sections within the formation in order to assure that the model accurately represents the subsurface structure of the reservoir. Previous hydrocarbon production and water injection data from a centralized location within the field was obtained from the current operator. This data will serve in the creation and adjustment of the preliminary computerized model.

The second stage in the project was to create a model, initially using generic data and later using actual data from the chosen field, in order to model natural fluid flow conditions within the reservoir. This model was adjusted to match calculated flow values to the actual production history prior to water flooding. Known water injection data was then applied to the model and any observed discrepancies were corrected. The reservoir model was then ready to use in the prediction of future recoverable hydrocarbons from the waterflood area.

Results Obtained The Gordon sand comprises five lithofacies, defined from cores on the basis of grain size and variability, presence of laminations, evidence of bioturbation, lithologic texture, and presence of crossbedding. The five lithofacies are featureless sandstone, laminated sandstone, conglomeratic sandstone, shale, and heterolithic bioturbated (3). Each of the lithofacies is relatively distinctive and has a recognizable pattern in geophysical logs. The only lithofacies that displays characteristics of pay (high permeability) is the featureless sandstone. In core samples and logs, the laminated sandstones appear denser than the featureless ones, with densities of around 2.5 g/cm3 and 2.3 g/cm3, respectively (2).

The reservoir is compartmentalized into thin, laterally continuous flow units separated vertically by the low-permeability shale (1). The top unit is composed of all the previously described sandstone lithofacies, is 17-20 feet thick, and displays a coarsening-upward pattern on wireline logs (2). The lower boundary of the unit is the top of a field-wide shale bed. The middle unit is sharply divided between featureless sandstones in the east and shale in the west and has relatively thick reservoir rock (2). Its upper boundary matches the shale-bed boundary of the top unit, while its lower boundary is placed at the base of the lower fine-grained sandstone where it overlies a shale bed. The lowest unit is mainly shale and thin, discontinuous beds of laminated sandstone (2); it is characterized by an absence of relatively thick reservoir sandstone compared to the other sequences. Its upper boundary is equivalent to the lower boundary of the middle unit, and its lower boundary is placed at the base of a thin sandstone that can be found throughout the field and is simple to pick on logs.

Upon constructing the initial computerized reservoir model, a first simulation was run to verify that all necessary parameters were specified. The output data from the simulator indicated that the layers in the model would need to be adjusted to be more representative of the observed lithofacies. Once the permeability and porosity of the model were adjusted, an additional simulation was performed. When compared to the historical production data over an analogous period of time, the calculated data was found to be comparable in many respects. The rapid increase in water cut, which represents water breakthrough at the producing well, correlated nearly perfectly with historical data. Additionally, the sustained level of high water cut was observed in the model, with producing well number 4 having a continuous water cut higher than that of well 2, correlating well with historical data. Calculated daily oil production rates showed overall trends similar to the historical data. Despite this, the relatively low rates seen in the historical data after 6-7 years could not be matched in magnitude by the simulator.

Table 1: Lithofacies Characteristics.

Lithofacies                Thickness (ft)   Grain Size             Average Permeability (mD)
Shale                      1-20             clay/silt              2.81
Heterolithic bioturbated   1-5              clay to sand           0.81
Laminated sandstone        5-15             fine sand              3.48
Conglomeratic sandstone    5-8              fine sand to granule   3.84
Featureless sandstone      1-10             fine sand              41.29

Table 2. Flow Unit Characteristics.

Unit     Lithofacies                                           Gross Thickness (ft)   Net Pay (ft)
Upper    Laminated, Conglomeratic and Featureless Sandstones   17-20                  5-7
Middle   Featureless Sandstone and Shale                       5-10                   1-3
Lower    Laminated sandstone and shale                         0-5                    0-1

Chart 1. Water Cut Comparison.

Chart 2. Oil Rate Comparison.

References 1. Matchen, D. L., Avary, K. L., Hohn, M. E., and McDowell, R. R., 2000, Understanding stratigraphic heterogeneity within the Jacksonburg-Stringtown Oilfield, West Virginia, USA; crucial for waterflood success: AAPG Bulletin, v. 84, issue 9, p. 1389. 2. Ameri, S., Aminian, K., Avary, K. L., Bilgesu, H. I., Hohn, M. E., McDowell, R. R., and Matchen, D. L., 2001, Reservoir Characterization of Upper Devonian Gordon Sandstone, Jacksonburg-Stringtown Oil Field, Northwestern West Virginia: Appalachian Oil and Natural Gas Research Consortium, p. 97. 3. Hohn, M., 2004, Sample of Geophysical Logs from Two Upper Devonian Oil Fields in West Virginia: West Virginia Geological and Economic Survey, Geostatistical Case Studies, p. 8.

HF Radio Data Communications and Resonator Tuning

Student Researcher: Vincent A. Richardson

Advisor: Dr. Edward Asikele

Wilberforce University Computer Science and Engineering

Abstract In discussing varactor and resonator tuning, we are researching a varactor tuning circuit for a dielectric resonator stabilized oscillator. Tuning varactor circuitry is disclosed for a dielectric resonator stabilized oscillator: a varactor diode is electrically connected in a loop with an RF bypass capacitor, and the voltage across the diode is varied. In a first embodiment, first and second dielectric substrates face each other along the plane of the loop, with the loop between them.

The invention relates to dielectric resonator stabilized oscillators, and more particularly to a varactor circuit for electronically tuning the resonator.

Dielectric resonators have been used to stabilize the operating frequency of microwave oscillators. Because of the high Q of these resonators, the frequency of oscillation is essentially the same as the resonant frequency of the resonator. A common configuration of such an oscillator employs an active circuit with a negative real part of impedance connected to an output transmission line. With no dielectric resonator, the impedance presented by the output transmission line is such that no oscillation occurs. When an appropriate resonator is coupled to the output line, it can present a proper impedance to the active circuit to cause oscillation. This occurs at the resonant frequency of the resonator. If the resonator Q is high, any tuning of the active circuit has little effect on the frequency of oscillation. About 0.1% or 0.2% tuning can be achieved by using a varactor junction diode in the active circuit. However, many applications require greater bandwidth.

It is known in the prior art to vary the resonant frequency of a dielectric resonator by coupling another resonant circuit containing a varactor diode to the resonator. A tuning bandwidth of over 1% has been achieved using this technique.

Objective To observe and calculate the resonant frequency of LRC circuits in order to better understand resonator tuning procedures; a worked numerical example follows.
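As a worked instance of the objective, the sketch below evaluates the series-RLC resonant frequency f0 = 1/(2*pi*sqrt(L*C)). The component values are assumed for illustration and are not the lab's actual components.

```python
# Resonant frequency of a series RLC circuit; L and C are assumed
# example values, not the apparatus used in the lab.
import math

L = 10e-3   # inductance in henries (assumed)
C = 100e-9  # capacitance in farads (assumed)
f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(f"Resonant frequency: {f0:.0f} Hz")  # about 5033 Hz for these values
```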

Introduction

Apparatus


Procedure

Analysis

Tables and Graphs

Table 1. Series circuit, frequency and current.

Figure 1. Series circuit current versus frequency (R=100).

Table 2. Series circuit, frequency and current.

Figure 2. Series circuit, current versus frequency.

Table 3. Parallel circuit, frequency and current.

Figure 3. Parallel circuit, current versus frequency.

Table 4. Summary data.

Acknowledgments
• Wilberforce University Cooperative Education Program
  o Dr. Edward Asikele (Engineering/Academic Advisor)
  o Mr. Khalil Habash (Academic Advisor)
  o Dr. Deok H. Nam (Computer Science and Engineering Instructor)
• Northrop Grumman Xetron
  o Art Bush (NG Recruiter)
  o James Carter (Northrop Grumman Xetron Lab Technician)
• Ohio Space Grant Consortium

Demonstration of an Independent Streamerlike Atmospheric Plasma Jet

Student Researcher: Matthew D. Rippl

Advisor: Dr. Robert Leiweke

Wright State University Mechanical Engineering

Abstract For many years scientists have been studying the phenomenon and properties of plasmas. Although plasmas were discovered more than a century ago, many novel engineering applications of plasmas are still being developed. Wright State University, The United States Air Force Research Laboratory (AFRL) Propulsion and Power Division (RZPE) at Wright-Patterson AFB (Dayton, Ohio), and UES Inc. (Dayton, OH), are exploring the operating characteristics of a novel, weakly ionized plasma processing tool based upon an electrically driven streamerlike Atmospheric Pressure Plasma Jet (APPJ).

This APPJ source differs from other arrangements (such as a Capillary Dielectric Barrier Discharge (CDBD)) in that the jet is a self-sustained streamer-like discharge generated outside the tube, rather than yielding an effluence of the processing species generated by a discharge within the capillary itself. Thus, the placement of the processing species, especially those having relatively short lifetimes at ambient air conditions, can be controlled while the jet can be sustained at lower feed-gas flow rates. The results of this experiment indicate that the APPJ is electrically driven and is operationally independent of the CDBD[1].

Project Objectives The CDBD utilized in this experiment operates by applying unipolar pulsed voltage excitation to generate a stable, non-equilibrium, cold plasma between the anode and cathode, which can provide for the efficient generation of reactive species that are useful for “cold” material processing of aerospace materials (for example) at atmospheric pressure [5]. In addition to this CDBD, a streamerlike Atmospheric Pressure Plasma Jet (APPJ) is created, which has been observed to propagate away from both electrodes at a velocity four orders of magnitude greater than that of the feed gas [2]. Using an applied voltage of 10-13 kV and a pulse repetition rate of up to several kilohertz with an Ar/He feed gas, the visible jet extends into open air up to ∼3.5 cm. This APPJ source differs from other arrangements (such as traditional DBDs) in that this jet is a self-sustained streamerlike discharge generated outside the tube [1], rather than a simple effluence of the processing species generated by the discharge within the CDBD itself. Thus, the placement of the material processing species (radicals), especially those having relatively short lifetimes at ambient air conditions, can be spatially controlled while the jet is sustained at lower feed-gas flow rates [3].

The goal of this project is to confirm the electrical independence of the APPJ discharge from the CDBD by retracting the cathode ring. Understanding the behavior of the APPJ is a first step towards understanding how various combinations of processing gases can be used to treat surfaces and potential use in the biomedical field [4].

Methodology Used A Pyrex capillary (O.D. = 3 mm, I.D. = 2 mm) serves as the dielectric, with a cylindrical cathode and anode surrounding the glass tube at an initial gap distance of 5 mm. A gas mixture is passed through the capillary at a controlled total flow rate of 4.5 SLM (~2400 cm/s) with a mixture ratio of 95% He and 5% Ar. By applying a 12 kV unipolar pulsed voltage (∼20 ns rise time) to the anode at a 1 kHz repetition rate, the CDBD and plasma jet emission (argon at 750 nm) were observed and data were acquired [1]. Two photomultiplier tubes (PMTs) were positioned above the jet emission and the CDBD gap; connected to an oscilloscope, the PMTs returned intensity values in mV. To visualize and record a second set of data, a gated intensified charge-coupled device (ICCD), gated at 200 ns, photographed the relative light emission of the jet and CDBD for each data run. The variation in gap distance was provided by a micrometer-controlled stage connected to the anode, advanced one turn between data sets. Data sets were recorded for gap distances of 5 mm to 35 mm, covering 48 turns of the micrometer at 0.635 mm per full turn. The data collected were then analyzed using Origin 7.5 and produced evidence of plasma jet independence.

Results Obtained

Figure 1. Propagating streamer from glass capillary tube.


Figure 2. Graph of gap distance vs. intensity, illustrating the decline of the DBD while the jet remains constant. Figure 3. Graph of gap distance vs. time delay from where the voltage spike occurs, noting that the jet remains relatively constant while the DBD occurs later and later.

Significance and Interpretation of Results Figure 2 demonstrates that while the CDBD intensity decreased significantly, the jet stayed relatively constant. This supports the idea that the jet is independent of the effects of the CDBD. The time delay in Figure 3 similarly supports independence: if the jet were a result of the CDBD, their respective time delays should be proportional. A jump can be observed in the time-delay graph around an 18 mm gap distance; its cause is the propagation of the jet in the opposite direction after the CDBD is unable to break down. The increasing trend of delay time then correlates with the outside jet, further indicating independence of the plasma jet.

Acknowledgments The author would like to thank Mr. Brian Sands (UES, Inc.) for the use of the experimental apparatus at the Center for Advanced Power and Energy Conversion Lab (CAPEC) at Wright State University and providing invaluable discussions of gas breakdown phenomenon. Also acknowledging Dr. Biswa Ganguly (AFRL/RZPE) and Dr. Kunihide Tachibana at Department of Electronic Science and Engineering Kyoto University, Japan, for provision of the experimental apparatus.

References 1. B. L. Sands, B. N. Ganguly, K. Tachibana, Appl. Phys. Lett., 92, 151503 (2008). 2. K. Tachibana, IEEJ Transactions on Electrical and Electronic Engineering, 1, 145-155 (2006). 3. J. L. Walsh, J. J. Shi, and M. G. Kong, Appl. Phys. Lett., 88, 171501 (2006). 4. M. Laroussi and X. Lu, Appl. Phys. Lett., 87, 113902 (2005). 5. K. H. Becker, K. H. Schoenbach, and J. G. Eden, J. Phys. D: Appl. Phys., 39, R55-R70 (2006). 6. U. Kogelschatz, Plasma Chemistry and Plasma Processing, 23, No. 1 (2003).

Endothelialization and Function in Bifurcating Microfluidic Channels

Student Researcher: Alexander L. Rivera

Advisor: Dr. Harihara Baskaran

Case Western Reserve University Department of Biomedical Engineering

Abstract Organ failure is one of the key problems in medicine today. The number of patients awaiting organ transplants far exceeds the number of available organs; therefore, tissue engineering has been viewed as a favorable therapy with the potential of providing artificial organs. The field of tissue engineering has generated promising regenerative tissue applications over the past few years, though with numerous limitations. In general, after a tissue engineered product is implanted, a key problem that affects its function is the delivery of vital nutrients and growth factors to the product due to its lack of blood vessels. In vivo, the implanted product can take several weeks to generate the necessary blood vessels; therefore, there is a clear need for a built-in microvasculature system. Such a system must have a monolayer of endothelial cells in the vessels to prevent thrombogenicity and to allow for better integration. This research project aims to characterize the effect of flow on the viability and function of bovine aortic endothelial cells (BAECs) in microvascular analogs. The analog consisted of a bifurcating network of microchannels of various dimensions and was made of poly(dimethyl siloxane) (PDMS) using standard microfabrication techniques [1]. BAECs were seeded into these devices and exposed to various flow conditions. For a given inlet flow rate to the device, the bifurcating network allowed for multiple flow conditions to be tested. Cells were assessed for viability at various time points. BAEC nitric oxide production was measured at various flow rates; however, the data obtained were inconclusive. Therefore, further experiments will have to be conducted to assess the functionality of the cells in these networks.

Project Objectives This project aims to aid in the creation of a built-in microvasculature system for tissue engineered products. To reach this objective, two parameters must be assessed. First, the viability and growth of BAECs must be tested under flow conditions in these networks. Second, the functionality of the BAECs must be evaluated by measuring their nitric oxide production under various flow conditions. These parameters are important because a built-in microvasculature system must have this monolayer of functioning endothelial cells to prevent thrombogenicity in vivo.

Methodology Used The PDMS channels were created using soft-lithography techniques in which PDMS was polymerized on silicon wafers containing the desired channel designs. The channels were then sealed by plasma bonding a blank PDMS slab to the PDMS channel, and tubing was added at the inlet and outlet points to complete the device. An image of a completed device can be found in Figure 2. Two different designs were tested: a bifurcating network containing five generations and one with six generations [1]. After sterilization, the channels were washed with ethanol followed by phosphate buffered saline (PBS). Human plasma fibronectin (20 µg/ml) was then added to the device, which was incubated for 1 hour at 37°C and 5% CO2. The device was then washed once more with VascuLife™ EnGS-MV cell culture medium (Lifeline Cell Technologies), followed by the addition of 1 ml of BAEC-containing medium (3 million cells per ml). To allow for cell attachment, the device was incubated at 37°C and 5% CO2 for 2 hours. Following this period, the device was attached to a medium-containing syringe and a syringe pump (Harvard Apparatus) at 37°C and 5% CO2. The pump continuously supplied medium throughout the device at a flow rate of 5 µl/minute until the cells formed a confluent monolayer. During this period, phase contrast images were taken to assess cell growth at various time points. After the cells reached confluence (approximately 72 hours), the serum-containing medium was replaced with serum-free medium. After washing the channels with 1 ml of this medium, different flow rates were tested (25, 50, 100, 250, and 500 µl/minute), and medium samples were taken for each flow rate from the exit tubing. These samples were then assessed for nitric oxide content using a Griess Reagent System (Promega); the unknown samples were compared to known concentrations on a calibration curve to determine nitric oxide concentrations. Upon completing this experiment, the device was assessed for viability: calcein AM with DMSO (1 µl/ml) and ethidium homodimer-1 (1 µl/ml) in medium was added to the device, which was incubated for 20 minutes. The device was then washed with PBS, and fluorescent microscopy was used to image the live and dead cells.
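The calibration-curve step described above can be sketched as a simple linear fit and inversion. The standard concentrations and absorbance readings in this sketch are invented placeholders, not the experiment's data.

```python
# Hypothetical calibration-curve workup: fit a line to absorbance
# readings of nitrite standards, then invert it for unknown samples.
# All numbers below are made-up placeholders.
import numpy as np

std_conc = np.array([0.0, 1.56, 3.13, 6.25, 12.5, 25.0])   # µM standards (assumed)
std_abs  = np.array([0.00, 0.02, 0.05, 0.10, 0.21, 0.43])  # absorbance (assumed)

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # linear calibration fit

def concentration(absorbance):
    """Invert the calibration line to estimate sample concentration (µM)."""
    return (absorbance - intercept) / slope

print(concentration(0.08))  # unknown sample absorbance -> roughly 4.6 µM
```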

Results Obtained At various time points, phase contrast images of the devices were taken to assess the growth of BAECs in these channels. Multiple samples were tested for both the generation 5 and generation 6 devices. In most cases, cell growth was excellent in these devices with the cells forming a confluent monolayer on the surface by the 72 hour period. A comparison of phase contrast images at different time points for a single device can be seen in Figure 1. In addition, for a single generation 6 device, the viability of cells was assessed at the 72 hour period using fluorescent imaging. From these images, nearly all of the cells were alive with only a few dead cells present. One of these images can be found below as Figure 2. The fluorescent image on the right indicates dead cells as the bright spots, while the phase contrast image on the left indicates the total number of cells in that region. The nitric oxide data obtained from the medium samples taken at various flow rates showed that no nitric oxide was produced by the cells; however, these results are most likely due to limitations in the sensitivity of the Griess Reagent System used.

Figure 1. The phase contrast images above are from a region of a generation 6 device at 2 hours, 24 hours, 48 hours, and 72 hours (after cell seeding) from left to right respectively. The white measurement bars in the top right corners of the images represent 100 µm.

Figure 2. Phase contrast image (left) showing all cells present and fluorescent image (right) showing dead cells at 72 hours after cell seeding in a region of a generation 6 device are shown above. The white measurement bars in the bottom left corner of each image represents 100 µm. To the right of these two images is a macro-scale image of the device used in these experiments (generation 6).

Significance and Interpretation of Results The phase contrast images taken showed excellent cell growth in these channels. The images shown in Figure 1 are representative of the other samples tested. Most samples required 72 hours from the initial cell seeding to reach a confluent state. In addition, the fluorescent images showed excellent viability in the device tested. With the cell growth and viability results, these devices have been shown to be excellent environments for BAEC growth, thereby addressing the first parameter of this project. However, the nitric oxide tests were inconclusive. The data indicated that no nitric oxide had been produced by the cells; it is suspected, however, that the Griess Reagent System used was not sensitive enough to detect the low levels of nitric oxide produced. Due to these issues, the functionality of the cells cannot be assessed at this time. To determine the functionality of these cells, further experiments will have to be conducted using approaches other than nitric oxide production.

Acknowledgments and References 1. V. Janakiraman, K. Mathur, and H. Baskaran (2007). Optimal planar flow network designs for tissue engineered constructs with built-in vasculature. Annals of Biomedical Engineering, 35, 337-347. 2. Dr. Harihara Baskaran, Case Western Reserve University, Project Advisor. 3. Dr. Nicholas P. Ziats, Case Western Reserve University, Supplied BAECs.

Quasi-One-Dimensional Materials for Thermoelectric Energy Generation Applications

Student Researcher: Thomas R. Robbins

Advisors: Dr. Douglas Dudis and Dr. Kevin Hallinan

University of Dayton Mechanical Engineering Department

Abstract Thermoelectric devices are solid state heat engines that may be used either for electrical generation or for the movement of heat. Thermoelectric devices have found application in power supplies for satellites due to their long operating life, reliability, and because they do not generate vibrations.

Quasi-one-dimensional materials are discotic organic molecules which, because of their planar geometry, pack very closely along a single axis and have significant electronic interaction among their π-orbitals. This leads to anisotropic properties within these materials, due to interaction along the stacking axis that does not occur parallel to the planes of the molecules. The properties of specific interest for this research are the low thermal conductivity of these materials along the stacking direction and, when properly doped, their relatively high electrical conductivity in the stacking direction. This combination may make some of these materials ideal for thermoelectric devices, such as those used to provide power in deep-space satellites, temperature sensors, and energy recovery from waste heat.

Research has focused on doping molecules containing thiazolo-thiazole structures, which have been determined through X-ray diffraction to have quasi-one-dimensional crystal structures. Determination of the material properties of these materials has also been conducted, including measurement of the electrical conductivity and Seebeck coefficient for each material. Due to disappointing results for the thiazolo-thiazole materials and difficulty synthesizing new materials of this type, the research progressed to another group of discotic organic molecules, phthalocyanines.

Project Objectives Theoretical analysis of quasi-one-dimensional materials has shown them to have the potential for an order-of-magnitude improvement over materials currently used in thermoelectrics [1]. Quasi-one-dimensional materials are materials in which quantum effects normally only observed in low-dimensional materials are observed in bulk, due to the crystal structure and the unique interaction along a single direction within the material.

Within this research the objective has been to investigate groups of quasi-one-dimensional materials and determine whether they are appropriate as thermoelectric materials. Starting with the pure material, the materials being studied were to be doped into a conducting state, using either organic or traditional inorganic dopants. To determine whether these materials are appropriate for thermoelectric applications, their electrical conductivities and thermopowers were measured. In addition, dispersion of these materials into a polymer, and alignment of the conducting axes in powders, were conducted to determine the processability of quasi-one-dimensional materials. At the beginning of the research, thiazolo-thiazole derivatives were being considered, and the goal was to measure their properties and evaluate them for thermoelectrics; when these were shown not to be effective, benzotrithiophene and then phthalocyanines were also considered.

Methodology Research consisted of a number of sequential steps for each new material, each dependent upon the preceding steps. Beginning with pure materials, the materials being studied were recrystallized either through sublimation or through solvent evaporation. The materials were recrystallized to form single crystals so that they could be analyzed with X-ray diffraction to verify that the material being studied does have a quasi-one-dimensional crystal structure.

Materials were then doped into a conducting state. Doping was first attempted with organic charge-transfer compounds that would form into the quasi-one-dimensional crystal structure: the electron donor tetrathiafulvalene and the electron acceptor tetracyanoquinodimethane. Doping was attempted by co-solution of the material being studied with the dopant at a number of temperatures, and by co-sublimation of the pure material with the dopant. If the organic dopants were unsuccessful, the material being studied was exposed to iodine or bromine vapor to conduct doping [2].

The electrical conductivity of the doped materials was then measured using a standard two-probe technique. A four-probe technique was available but, due to the low conductivity of the samples, was unnecessary. In addition to the electrical measurements, the thermopower was measured for those materials that showed good electrical conductivity. Thermopower measurements were conducted using a system developed in the course of the research, which imposed a small temperature differential across the sample and measured the voltage that developed within the sample using a high-impedance measurement device. The accuracy of this system was verified using bismuth telluride, a known thermoelectric material, and results were within 10% of literature values.
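The data reduction for such a thermopower measurement amounts to taking the slope of voltage versus imposed temperature difference. The readings in this sketch are placeholders chosen to land near the literature value for bismuth telluride, not measurements from the project.

```python
# Hedged sketch of reducing thermopower data: the Seebeck coefficient
# is the (negated) slope of measured voltage vs. temperature
# difference. All readings below are assumed placeholders.
import numpy as np

delta_T = np.array([1.0, 2.0, 3.0, 4.0, 5.0])            # K (assumed)
delta_V = np.array([-0.20, -0.41, -0.59, -0.82, -1.01])  # mV (assumed)

S = -np.polyfit(delta_T, delta_V, 1)[0]  # slope via least squares, in mV/K
print(f"Seebeck coefficient = {S * 1000:.0f} µV/K")  # ~200 µV/K, Bi2Te3-like
```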

After a conducting material had been produced, the material was dispersed into a polymer matrix and the properties of the dispersion were measured to determine the performance of the new material in a dispersion, which would add mechanical durability in applications. It was determined that the most effective way of dispersing the quasi-one-dimensional materials was to mix the material with epoxy resin and then add the hardening agent, so that the polymer formed with the material within the matrix. The effect on the electrical conductivity of applying a field to align the individual crystals within a powder was also measured.

Results The first material studied was dibromothiazolo-thiazole. X-ray diffraction showed that this material has two distinct close-stacking crystal structures (Figures 1 and 2): the first was observed when the material was crystallized by sublimation, the second when it was crystallized from solution. Despite the promising crystal shape, it proved impossible to dope this material using either the organic materials or the inorganic dopants.

Mononitrothiazolo-thiazole was also studied; however, due to the extremely small amount available for study (25 mg), the decision was made not to attempt X-ray diffraction for fear of losing too much of the sample. Mononitrothiazolo-thiazole also proved impossible to dope.

Due to the failure of these two materials and the failure of doping attempts in other thiazolo-thiazole materials researched previously by Dr. Dudis, thiazolo-thiazole derivatives were ruled out as possible quasi-one-dimensional thermoelectric materials.

The next material researched was benzo-trithiophene. X-ray diffraction was performed separately from this research by the chemist who supplied the benzo-trithiophene. The benzo-trithiophene was successfully doped with tetracyanoquinodimethane, observed visually through the shift of this normally light orange material to an iridescent blue, indicating the shift of crystal structure observed in a quasi-one-dimensional material when it has been doped. The conductivity of the doped material remained very low (<10^-6 S/cm), however, so doping was attempted using bromine and iodine vapor in hopes of achieving a higher conductivity. Doping with iodine vapor failed, and doping with bromine resulted in a chemical reaction that destroyed the benzo-trithiophene.

Progressing forward with the research, phthalocyanine materials were studied next. Phthalocyanines had previously been studied for use in organic electrodes and had already been shown to be conducting. Cobalt phthalocyanine was the first material studied. Cobalt phthalocyanine, as obtained from Aldrich, was successfully doped with iodine vapor and showed good conductivity. However, molecular analysis of the doped material, as well as of the original cobalt phthalocyanine, showed >6% contamination with chlorine, and the tests are currently being conducted again with pure materials. The impure iodine-doped cobalt phthalocyanine powder was pressed into a compact and its electrical conductivity was measured. Aligning the randomly oriented crystals within the powder compact, using a magnetic field with an applied current through the material, resulted in a threefold increase in conductivity while the field was applied, with a lasting improvement after the field was removed, as seen in Figure 3.

Conclusions
This work has shown that thiazolo-thiazole materials are not appropriate as thermoelectric materials due to their inability to accept charge carriers and become conducting. Benzo-trithiophene is itself not a good material for thermoelectrics, but related materials show great promise, especially closer-packing materials in which there will be greater intra-stack charge mobility. Research is currently being conducted at Wright State University to develop materials related to benzo-trithiophene. Phthalocyanines have shown the most promise in this research. Existing measurements [3] and the experimental results indicate that phthalocyanines may be able to outperform existing thermoelectric materials, allowing not only improved application in space but also opening up many terrestrial applications as well.

Figures

Figure 1. Sublimation unit cell of dibromothiazolo-thiazole.
Figure 2. Solution unit cell of dibromothiazolo-thiazole.

Cobalt Phthalocyanine Iodine Conductivity vs. Voltage
[Chart: conductivity (0 to 7e-5) versus voltage (0 to 120), with three data series: starting measurement, with field, and after removal of the magnetic field.]

Figure 3. The conductivity of the cobalt phthalocyanine iodine pressed powder versus voltage, for three cases: the lower set of points, before any field is applied; the middle set, after a magnetic field has been applied and removed; and the upper set, while the field is being applied.

Acknowledgments
The author would like to thank Dr. Douglas Dudis for his guidance on this project; Tiffany Hall and Dr. Eric Fossum of Wright State University and Dr. Vladimir Benin for the materials they have supplied; and Dr. Albert Fratini for his assistance performing X-ray crystallography.

References
1. Casian, Dusciac, and Coropceanu. Huge carrier mobilities expected in quasi-one-dimensional organic crystals. Physical Review B, 66, 165404, 2002.
2. Harikumar and Sivasankara Pillai. Doping-enhanced electrical conductivity and electrocatalytic activity in cobalt phthalocyanine. Journal of Materials Science Letters, 8, 969, 1989.
3. Inabe and Tajima. Phthalocyanines - Versatile Components of Molecular Conductors. Chemical Reviews, 104, 5503, 2004.

Design and Analysis of a Variable Compliance Robotic Transmission Using a Magneto-Rheological Fluid Damper

Student Researcher: Ehsan Sadeghipour

Advisor: Dr. James Schmiedeler

The Ohio State University Mechanical Engineering Department

Abstract
Machines that locomote as bipeds are one solution for mechanical locomotion on other planets. Two problems in developing autonomous versions of these machines are energy consumption and the risks associated with the possible impact of robotic components with the astronauts around them. Research has shown that using variable compliance, or elasticity, in robotic joints can reduce both. This project has focused on developing variable compliance robotic transmissions to increase biped walking efficiency and to decrease the impact forces associated with a possible collision. The results of this study are important in developing autonomous robots that can safely interact with astronauts for an extended period of time.

Introduction
Safe and autonomous biped robots may be used as assistants on other planets; yet the low energy efficiency of such machines remains a real challenge (e.g., ASIMO has a battery life of 25 minutes [1]). Alexander [2] suggests increasing energy efficiency by using springs to store and then release energy in the cyclic up-and-down and back-and-forth leg movements inherent to walking and running. Yang et al. [3] have shown that compliance in parallel with the transmission at the knee joint reduces the energy use of the biped robot ERNIE. Softer springs lead to greater efficiency of slow gaits, while stiffer springs lead to even greater efficiency of fast gaits. Thus, in order for a bipedal robot to change the speed of its gait while maintaining high energetic efficiency, it must employ variable compliance in its transmission.

Bicchi et al. [4] also consider the importance of joint compliance in robotic transmissions to ensure safety by reducing the impact forces of collision. However, they note that transmission compliance causes robotic joints to accelerate more slowly and decreases accuracy by increasing vibrations while decelerating. Instead, they propose using variable compliance transmissions to allow for low transmission compliance while accelerating or decelerating and high transmission compliance at high speeds, when the greatest chance of a dangerous collision exists. Westervelt et al. [5] show that by placing a Magneto-Rheological (MR) fluid damper in parallel with a compliant element in an actuation system, the effective compliance can be controlled to vary from highly compliant to essentially rigid. When the damper is turned on, the transmission is rigid, whereas when the device is turned off, the transmission is highly compliant with low damping effects. The compliance range between the two extremes is realized through switching. This behavior is achieved by exploiting the properties of the MR fluid in the damper.

Micron-sized ferrous particles in MR fluid align to increase its apparent viscosity when placed in a magnetic field. Larger magnetic flux densities lead to larger effective viscosities. Additionally, unlike a Newtonian fluid, a minimum stress must be applied to a magnetized MR fluid before it begins to strain. Therefore, such a damper is rigid as long as the applied load is below a threshold, which is a function of the applied magnetic flux density. MR fluid’s rapid response time (only milliseconds) allows for rapid adjustment of the transmission compliance. Kyle Sabatka, a former student at OSU’s Locomotion and Biomechanics Lab, has designed and analyzed a rotary MR fluid damper to investigate creating variable compliance using MR fluid dampers [6]. This damper is composed of two concentric steel cylinders with MR fluid contained in the gap between the two. An electromagnet placed in the inner cylinder provides an adjustable magnetic field. This student’s damper design has served as the starting point of this project.
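The threshold behavior described here is often idealized as a Bingham plastic: the damper holds rigidly until the applied torque exceeds a field-dependent yield torque, then flows with an added viscous term. A minimal sketch of that idealization follows; the coefficients are illustrative placeholders, not parameters of the damper studied in this project.

#include <math.h>

/* Idealized Bingham-plastic model of a rotary MR damper: below the
 * field-dependent yield torque the damper holds (no slip); above it,
 * reaction torque = yield torque opposing motion + viscous damping.
 * Both constants are hypothetical, for illustration only. */
#define K_YIELD 2.0   /* N*m of yield torque per tesla (assumed) */
#define C_OFF   0.01  /* off-state viscous coefficient, N*m*s/rad */

double damper_torque(double applied_torque, double omega, double B)
{
    double tau_y = K_YIELD * B;        /* yield torque at flux density B */
    if (omega == 0.0 && fabs(applied_torque) <= tau_y)
        return applied_torque;         /* rigid: damper holds the load */
    /* slipping: yield torque opposes motion, plus viscous term */
    double s = (omega >= 0.0) ? 1.0 : -1.0;
    return s * tau_y + C_OFF * omega;
}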

Objectives
The objective is to design an improved MR fluid damper that may be used as part of a variable compliance transmission to increase the walking efficiency of a biped robot. The challenge is to maximize the load required before the fluid begins to strain in the presence of a magnetic field, while minimizing the damping in the absence of a magnetic field. Under such conditions the damper will act as a rigid body when accuracy is needed, with minimum damping effects when high compliance is required. Due to these constraints, the two most important variables in this study are the size of the gap between the two cylinders and the magnetic flux density within the MR fluid. A small gap results in a greater magnetic flux density within the fluid and a higher on-state holding torque; however, such a design choice also increases the off-state damping coefficient, which is undesirable. The opposite of these conditions is true for a large gap. Thus, there is a trade-off between off-state damping and on-state holding torque.

The non-linear nature of magnetic flux may be used to overcome this inverse relationship between gap size and off-state damping. The magnetic flux lines between two parallel plates are almost straight lines; however, a protrusion on one of the plates causes the flux lines in that area to congregate around the protrusion and fringe flux lines will emanate from the sides of the protrusion. Therefore, for the same average gap between two sets of parallel plates, the set with the protrusions will produce a greater amount of magnetic flux density in the gap. In this project non-linear surfaces are employed in designing a better MR fluid damper. After the effectiveness of the new design has been demonstrated experimentally, a similar design may be used for the development of an actual robotic actuation transmission system. Furthermore, those conclusions may be used to design a variable transmission for the Locomotion and Biomechanics Lab’s biped robot ERNIE to test the effectiveness of such a system on the efficiency of an actual biped robot.

Procedure and Results
Two new designs were created for the inner surface of the outer cylinder. While the original design used a flat surface, the first new design added rectangular cutouts along the circumference of the surface, and the second new design used semicircular cutouts along that surface. The average gap in all cases was the same. At 3.01% of the outside cylinder radius, the cutout depths were very small. The electromagnetic features of the three designs were modeled using the software package OctaveFEMM, which enabled numerical computation of the magnetic flux density in the gap. For the same average gap, the design with the rectangular cutouts showed a 15.3% increase in the gap's average magnetic flux density with respect to the flat profile, and the design with the semicircular cutouts showed an 11.3% increase in the gap's average magnetic flux density with respect to the flat profile.

Based on these results, the case with rectangular cutouts was further investigated. An optimization routine was developed using MATLAB’s Optimization Toolbox and OctaveFEMM to find the gap geometry with the best trade-off between maximizing both the average gap and the average magnetic flux density in the gap for the rectangular cutouts case. Also, the software package ANSYS has been used to numerically investigate any changes in the shear stress applied to the cylinder walls by the MR fluid based on the optimized gap geometry. The study has so far shown that changing the flat cylinder profile to a nonlinear one can greatly increase the magnetic flux density in the gap, even with the same average gap. Based on these results, and with the completion of the ANSYS-related portion of this study, a prototype of this geometry will be built by the end of this spring so that it may be experimentally compared with the earlier flat surface prototype.

References
1. ASIMO Technical Information, January 2003. American Honda Motor Co., Inc., Corporate Affairs & Communications.
2. R. McN. Alexander. Three uses for springs in legged locomotion. International Journal of Robotics Research, 9(2):53–61, 1990.
3. T. Yang, E. R. Westervelt, and J. P. Schmiedeler. Using parallel joint compliance to reduce the cost of walking in a planar bipedal robot. 2007 ASME IMECE, Seattle, WA, 2007.
4. A. Bicchi and G. Tonietti. Fast and "soft-arm" tactics. IEEE Robotics and Automation Magazine, 11(2):22–33, 2004.
5. E. R. Westervelt, J. P. Schmiedeler, and G. Washington. Variable transmission compliance with an MR damper. 2004 ASME IMECE, Anaheim, CA, 2004.
6. K. Sabatka. Design and Testing of an MR Drum Rotary Device to Achieve a Variable Effective Compliance Transmission. Honors Thesis, The Ohio State University, 2006.

Natural Indicators Lesson Plan

Student Researcher: Joseph J. Scavuzzo

Advisor: Dr. Paul C. Lam

The University of Akron Secondary Education Chemistry/Physics

Abstract
The OSGC Student Research Project Symposium has given me the opportunity to put some of the techniques I have learned in school into practice. The lesson I have created will give kids a chance to use scientific concepts in a hands-on way. I hope to spark the interest of my students so that they will become excited about science. The lesson I have chosen to teach will demonstrate the usefulness of titrations. Students will be testing the pH of some household products and will perform an acid/base titration.

Project Objectives
I hope to accomplish three main goals in this lesson: first, students will further their lab skills and technique; second, students will gain an understanding of acid/base chemistry; and third, students will gain some basic understanding of titrations. To accomplish these goals, students will perform a two-part lab. First, students will prepare a natural indicator and use that indicator to determine the pH of some common household products. Second, students will perform a strong acid/base titration and determine the concentration of an analyte.

Methodology Used
This lesson is designed for students who already have some background in acid/base chemistry. It comprises two learning tasks. The first is an intermediate lecture on acid/base chemistry. The second is a two-part lab period in which students will determine the pH of various solutions and perform a quantitative strong acid/base titration.

I) Lecture on acid/base chemistry
   a. The chemistry behind neutralizations
      i. Students will be involved in a discussion dealing with how acids and bases neutralize each other.
      ii. Students will discuss how pH is affected by neutralization.
   b. The role of an indicator
      i. Students will participate in a discussion of the physical changes involved in indicators.
II) Lab activity
   a. pH determination of household products
      i. Students will create a natural indicator from cabbage.
      ii. Students will then use the indicator to determine the pH of various household solutions.
   b. Quantitative titration
      i. Students will titrate a solution of HCl of unknown concentration with NaOH.
      ii. Students will then determine the concentration of the HCl.

Name: Joseph J. Scavuzzo
Grade Level: 11 or 12
Subject Area: Chemistry
Lesson Topic: Acid/Base and Natural Indicators
Time Allocation: 2-3 class periods

Instructional Goals: Using knowledge of indicators and acid/base interactions, students will be able to determine the pH of various solutions and to use acid/base chemistry analytically.

Learning Objectives:
1) Students will gain experience working in the lab.
2) Students will gain a better understanding of acid/base chemistry.
3) Students will be able to determine pH using a natural indicator.
4) Students will learn how to use titrations to determine the concentration of an analyte.

Standards:
ODE 11-12, Scientific Inquiry A: Make appropriate choices when designing and participating in scientific investigations by using cognitive and manipulative skills when collecting data and formulating conclusions from the data.
ODE 12, Physical Sciences, Nature of Matter 1: Explain how atoms join with one another in various combinations in distinct molecules or in repeating crystal patterns.

Grouping of Students: Students will be placed in groups of four determined by the instructor.

Materials:
1) Lab materials such as beakers and burettes
2) One head of cabbage
3) Household acids and bases
4) Dilute NaOH and HCl

Prior Knowledge Needed:
1) Students should have an understanding of the differences between and some properties of acids and bases.
2) Students should have basic laboratory skills such as mass and volume measurement skills.
3) Students should know how pH relates to the concentration of H3O+.

Procedures: Instructional Strategies:
1. The teacher will give a lecture on acid/base chemistry, pH, natural indicators, and titrations.
2. Students will participate in a two-part group project.

Addressing Diversity: Learning Modalities:
Auditory Learners - Students will participate in a discussion of the concepts involved in the lesson. Students will also be encouraged to discuss the lab with their lab partners.
Visual Learners - Students will have diagrams of the setup used in the lab, and students will be using a hands-on approach in the lab.
Kinesthetic/Tactile Learners - Students will be taking a hands-on approach to solving the problems provided.

Assessment(s):
Before instruction: During the lecture and discussion, the teacher will ask questions and gauge the students' understanding by their answers and mastery of the concepts.
During instruction: Informal assessment; the teacher will float between lab groups and assess the progress and understanding of each group.
After instruction: Students will turn in a worksheet, which will be assessed based on understanding of concepts and the accuracy of calculations.

Space Food and Nutrition

Student Researcher: Rachel A. Scavuzzo

Advisor: Dr. Paul C. Lam

The University of Akron Early Childhood Education

Lesson Plan: “What are the best foods to take into space?” Grades 1-3

Purpose
Children will determine the best foods to take into space. They will do this by tasting a variety of foods, as well as using many skills for developmental growth.

Standards
Math
1. Number, Number Sense and Operations: Correctly numbering food and placing in order.
2. Data Analysis and Probability: Charting data, analyzing data, and making decisions based on that data.

Science
1. Earth and Space Sciences: Analyzing data to determine whether or not a food will be able to survive in the harsh conditions of the atmosphere.
2. Scientific Inquiry: Being able to decipher between the foods and make appropriate estimates.
3. Life Sciences: Applying these scientific methods to everyday life.

Materials
Tray
Plates
Food samples: orange, cereal (Cheerios), instant pudding, slice of bread
Drink samples: milk, water, Gatorade
Chart and writing utensil

Note: Please modify menu according to allergies and/or religious restrictions.

Previous: Children at this age will need some background information. In a previous lesson, children should be taught the basic ideas of gravity, atmosphere, astronauts, NASA spacecraft, and any other information the teacher sees fit.

Procedure
1. Put children into small groups of about 3-4.
2. Place food and drink samples on a tray in the middle of each group.
3. Have children place a sample onto a plate and try each food.
4. Children will fill out the chart individually, and then share results with their immediate group.
5. Finally, children will participate in a class discussion to compare their thoughts on food, nutrition, and space.

Discussion Questions
1. Because food is stored at room temperature in space, which foods will spoil?
2. Why is it important to try the foods here before taking them into space?
3. Does it matter if all of the astronauts like the same types of food? Why or why not?

Extension of Lesson
After this lesson, children may be interested in finding out more information about space foods and nutrition. Lessons such as food preparation for space, food selections, planning and serving food, classifying space food, mold growth, and waste are all on the NASA website.
http://www.nasa.gov/audience/foreducators/topnav/materials/listbytype/Space_Food_and_Nutrition_Educator_Guide.html

What are the best foods to take into space?

Food/Drinks: Tastes Good: Will NOT Spoil: Comments:

Orange

Cheerios

Pudding

Bread

Milk

Water

Gatorade

Abstract
I wanted children to start thinking critically at this young age. The lesson plan included helps children to use many skills, such as sensory skills, mathematics, science, critical thinking, and data analysis. The goal of this project is for children to taste different foods and decide whether or not they would do well in space. Students will be very involved in this hands-on project. They will taste the foods themselves, chart, and discuss with minimal teacher intervention.

Critique/Conclusion
Although this lesson needs some background information, I feel as though it is a complete project with many different aspects. Not only does it get young children to think critically, it also will spark interest to look further and learn more on their own.

Resources
1. NASA: http://www.nasa.gov/audience/foreducators/topnav/materials/listbytype/Space_Food_and_Nutrition_Educator_Guide.html
2. Ohio Academic Content Standards

Application of Cavitation for Controlled Cleaning

Student Researcher: Miranda L. Steinberger

Advisor: Dr. Sorin Cioc

The University of Toledo Mechanical, Industrial, and Manufacturing Engineering

Abstract
Cavitation is a phenomenon in which a small, low-pressure bubble forms inside a liquid, typically due to some form of vibratory energy or due to the characteristics of the flow. When a low-pressure cavitation bubble is surrounded by a higher-pressure medium, the cavitation bubble begins to collapse and pressure builds up inside the bubble. When the cavitation bubble finally implodes, the built-up pressure inside is released into the surrounding liquid. Cavitation bubble implosions have been found to have several effects. In pumps and propellers, cavitation causes adverse effects including noise, vibration of device components, metal erosion, and decreased device efficiencies. In the precision cleaning industry, however, cavitation is a welcome phenomenon: the fast and frequent collapse of cavitation bubbles is used to clean surfaces immersed in a cleaning fluid. This precision cleaning effect of cavitation has the potential to be applied to cleaning the bacterial biofilms that can develop and grow on prostheses used for medical implants.

Project Objectives
There are three main goals for this project. The first portion of the project was devoted to finding a material that has properties similar to bacterial biofilms and that can successfully be cleaned by cavitation. The second goal was to determine whether there is a vibratory amplitude, cycle, and application distance that creates an optimal environment for the most effective cleaning. The final goal is to measure the pressures generated when cavitation bubbles collapse.

Methodology Used
Biofilms are populations of bacteria that grow into a coating attached to a surface. Biofilms are present in plaque and infections, among other places. Based upon the basic characteristics of biofilms, six different materials were selected for testing. Fabric paint, nail polish, wood glue, wall paint, and candle wax were each applied to 1"x1" mirror surfaces for testing. The sixth material was black scum found on small plastic plates. All six of these materials were placed in a water bath and irradiated with ultrasound-induced cavitation. The cavitation was produced by a 24 kHz ultrasonic processor and a sonotrode. During the testing, the vibratory amplitude, cycle, distance, and time were varied to determine the best parameters for optimal cleaning results. The ultrasonic processor was set to vibratory amplitudes ranging from 40 to 100% and to operate at either half or full cycles. The sonotrode was placed at distances from the sample ranging from 0.2 to 1.5 inches. The exposure time to the vibratory energy ranged from 15 seconds up to 10 minutes per test. Several of the samples were exposed to ultrasonic energy for multiple tests, adding up to a cumulative exposure time of up to 25 minutes for some samples.

Results
Fabric paint, nail polish, wall paint, and candle wax were unaffected by cavitation within the parameter domains in which they were tested.

The effects of cavitation on wood glue were inconclusive. In initial testing, a sample with four layers of wood glue that had been allowed to dry for two days became visibly more porous after cavitation exposure. The glue was colored with food coloring for a better estimation of the cleaning effects. Additional testing was completed using just a single layer of wood glue allowed to dry for a week. These samples were unaffected by cavitation.

Figures

Figure 1. Single layer wood glue before cavitation.
Figure 2. Single layer wood glue after cavitation.

All samples of the black scum on plastic plates showed noticeable cleaning after cavitation was applied.

Sample 1: Cumulative cavitation exposure time = 10 minutes
Test 1: Sonotrode was placed 1" away from the sample, the ultrasonic processor was set at half cycle and 50% amplitude, and the test was run for 5 minutes.
Test 2: Sonotrode was placed 1" away from the sample, the ultrasonic processor was set at full cycle and 100% amplitude, and the test was run for 5 minutes.

Figure 3. Sample 1 before cavitation.
Figure 4. Sample 1 after Test 1.
Figure 5. Sample 1 after Test 2.

Sample 2: Cumulative cavitation exposure time = 25 minutes
Test 1: Sonotrode was placed 1" away from the sample, the ultrasonic processor was set at full cycle and 50% amplitude, and the test was run for 5 minutes.
Test 2: Sonotrode was placed 1" away from the sample, the ultrasonic processor was set at full cycle and 50% amplitude, and the test was run for 10 minutes.
Test 3: Sonotrode was placed 0.2" away from the sample, the ultrasonic processor was set at full cycle and 50% amplitude, and the test was run for 10 minutes.

Figure 6. Sample 2 before cavitation.
Figure 7. Sample 2 after Test 1.

Figure 8. Sample 2 after Test 2.
Figure 9. Sample 2 after Test 3.

Sample 3: Cumulative cavitation exposure time = 10 minutes
Test 1: Sonotrode was placed 0.2" away from the sample, the ultrasonic processor was set at full cycle and 50% amplitude, and the test was run for 3 minutes.
Test 2: Sonotrode was placed 0.2" away from the sample, the ultrasonic processor was set at full cycle and 50% amplitude, and the test was run for 7 minutes.

Figure 10. Sample 3 before cavitation.
Figure 11. Sample 3 after Test 1.
Figure 12. Sample 3 after Test 2.

It was observed that cavitation cleaning was most effective at the closest distance of 0.2". The longer the samples were exposed to cavitation, the more cleaning was observed. Effectiveness was significantly lower at a distance of 1", regardless of the duration or amplitude. The pressures generated when cavitation bubbles collapse are still being investigated.

References
1. Coatings: An Introduction to the Cleaning Procedures, William R. Birch, June 2000, http://www.solgel.com/articles/June00/Birch/cleaning.htm
2. Biofilms, Cesar Caro, April 2000, http://www.princeton.edu/~ccaro/papers/biofilms.html

NASA's Role in Preserving and Protecting Our Environment

Student Researcher: Brittany M. Studmire

Advisors: Ransook Evanina, Christie Myers

Cleveland State University Chemical Engineering Department

Abstract
It is the goal of the Environmental Management Branch (EMB) to ensure NASA Glenn Research Center's cooperation in preserving and protecting our environment through pollution prevention, the continual improvement of operations, and compliance with regulations. In this sense, it was my job to assist the Safety, Health, and Environmental Division in pursuing this goal by working closely with the Environmental Management Branch to ensure safe soil, safe air, safe water, and a safe work environment for all employees.

With regards to this, there are numerous things that we in EMB do to fulfill our goal. These include helping to minimize NASA's negative impact on society; ensuring a safe and healthy workplace and environment for all employees; ensuring that all environmental compliance regulations put forth by the Environmental Protection Agency, executive orders from the President, and other governing rules are followed; identifying risks from past and current programs, operations, and activities and developing and implementing processes to address those risks; and providing an awareness of NASA's responsibilities for the environment to senior management.

Objectives and Results
Often when a new site is under consideration for construction, the soil is tested for hazardous material. A drill is bored into the earth to take samples of the soil. Each borehole sample is given an identification number and is then placed in the database with the date of the assessment and the name of the project. My responsibilities included identifying and gathering missing soil sample numbers and project dates and entering them into a database. I reorganized the soil sample numbers by project, allowing easier access for other employees. Then, I scanned in the analytical data for the soil projects, eliminating the need for hard paper copies and allowing for easier electronic access.

In addition to this, I completed a Storm Water Pollution Prevention Site Inspection to ensure all appropriate codes were being met for the safety of the environment and the individual. Water pollution from contractor sediment was the main concern. The inspection included evaluating the construction entrance for proper geotextile fabric and gravel, proper storm inlet protection, silt fencing, and soil stabilization. These active measures will help ensure that NASA spends less on Glenn street cleaning, storm water sewer maintenance, water treatment costs, and sediment removal from reservoirs.

Another problem EMB faced was storm water runoff that was harming the local creek and river. The solution was to label all drains that led to the creek or river with stickers (Figure 1) announcing the runoff's final destination, as well as placing awareness posters around various parts of the lab. This helped us reduce local creek and river pollution, reduce local wildlife deaths due to the pollution, and enable the environment to thrive.

I also performed a water sample analysis (Figure 2.) on a drinking water fountain and compared it with the water from my kitchen sink to evaluate its cleanliness. Such things as chlorine content, pH level, and bacteria count were analyzed to ensure they met the safety limit.

Furthermore, I developed a questionnaire (Figure 3.) that evaluated the emissions health risks and contamination of the air as a result of paint booth vapors. This questionnaire allowed us to locate all paint booths situated on the NASA GRC lab, in addition to giving us the opportunity to monitor each booth for potential hazardous fumes. From this information, we will be able to distribute permits as needed, be adequately prepared for audits, and make sure we are in compliance with the Ohio Environmental Protection Agency emissions regulations; thus helping NASA become as pollution-free as possible.

And finally, I helped implement the Emergency Response Plan for NASA Glenn by compiling a list of all important contacts needed in case of an emergency. This will help ensure mission success and collaborative cooperation from all individuals.

Overall, these may seem like small things, but they are important projects nonetheless that bring us closer to our goal of creating a safe, healthy environment that is sustainable for future generations.

Figures/Charts

Figure 1. Storm Water Drainage Labels.

Figure 2. Water Sampling Data and Analysis.

Figure 3. Paint Booth/Fume Hood Questionnaire.

Acknowledgments
I would like to thank the following people: Ransook Evanina and Christie Myers, my mentors; Priscilla Mobley, my branch chief; Danielle Griffin, Aaron Walker, Dan Papcke, Eli Abumeri, and Don Easterling.

The Complete Thermodynamics of Benzene Via Molecular Simulation

Student Researcher: John L. Tatarko

Advisor: Rolf A. Lustig, Ph.D.

Cleveland State University Department of Chemical and Biomedical Engineering

Approximately 1000 different chemicals are manufactured on a production basis. The complete thermophysical properties have been characterized for fewer than 15 of these. Traditionally, property measurements were based on costly and time-consuming laboratory experiments. Typically, these experiments were restricted to PVT data, heat capacities, speed of sound, and vapor-liquid equilibrium data. The cost to determine these properties at one temperature is well over $1000. Molecular simulation provides virtual laboratory experiments over the entire fluid range, making it possible to characterize the properties of important industrial chemicals for use in chemical process design.

There are multiple research objectives. In addition to the study of thermodynamics, classical mechanics, and statistical thermodynamics there is additional material in programming and computer architecture to master. A system of benzene molecules in the fluid phase is subjected to computer simulation. Fifteen thermodynamic properties along with the mixed temperature and density derivatives of the residual Helmholtz energy function are measured over the entire fluid range. Finally, an equation of state is fit to the data.

The computer simulations are to be run on a 20-node PC cluster in the Department of Chemical and Biomedical Engineering at Cleveland State University. This is a scratch-built system running a Linux operating system and using proprietary software developed by my research advisor. There are two very different approaches to molecular simulation.

The molecular dynamics method generates true physical trajectories of all particles within a closed system for a real time period on the order of 10^-12 seconds. Time averages are obtained by solving Newton's equations. The Verlet algorithm is a popular finite difference method for this approach, which is based strictly on physics.

Monte Carlo simulation requires only the coordinates of the particles in the system for determination of thermodynamic properties. Momenta are not required. This approach is based on ensemble averages of an NVT system. The Metropolis algorithm provides a stochastic process that generates a random walk for particles in the system. My research is based on Monte Carlo simulation.
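For orientation, the Metropolis acceptance rule at the heart of such a simulation is compact; the fragment below is schematic only, since the actual code used in this work is the advisor's proprietary package.

#include <stdlib.h>
#include <math.h>

/* Metropolis acceptance test for an NVT Monte Carlo trial move:
 * accept if the configurational energy drops, otherwise accept with
 * probability exp(-dU/kT). Trial-move generation and the energy
 * evaluation are omitted here. */
int metropolis_accept(double dU, double kT)
{
    if (dU <= 0.0)
        return 1;
    double r = (double)rand() / ((double)RAND_MAX + 1.0);
    return r < exp(-dU / kT);
}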

Crucial to successful simulation is proper representation of molecular geometry, energetics, and intermolecular interactions. Ensemble averages require that the potential energy of the closed system be determined. Benzene is modeled as a planar 6-center Lennard-Jones (6CLJ) molecule. Each site α on molecule i interacts with site β on molecule j, so for a pair of benzene molecules there are 36 site-site interactions. The Lennard-Jones potential model is one of several that incorporate expressions for molecular attraction and repulsion. The software incorporates all of these parameters within the program. At present the software is flexible enough to simulate the thermodynamics of many symmetrical molecules.
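A sketch of the resulting pair-energy sum follows, assuming site coordinates already rotated into the lab frame; the sigma and epsilon values are placeholders, not the benzene parameters used in this work.

#include <math.h>

/* Pair energy of two rigid 6-site Lennard-Jones (6CLJ) molecules:
 * a sum over the 36 site-site interactions. Site coordinates are
 * assumed already transformed to the lab frame; SIGMA and EPSILON
 * are illustrative placeholders. */
#define NSITES 6
#define SIGMA   3.5   /* angstrom, assumed for illustration */
#define EPSILON 0.4   /* kJ/mol,  assumed for illustration  */

double pair_energy_6clj(const double ri[NSITES][3],
                        const double rj[NSITES][3])
{
    double u = 0.0;
    for (int a = 0; a < NSITES; a++)
        for (int b = 0; b < NSITES; b++) {
            double dx = ri[a][0] - rj[b][0];
            double dy = ri[a][1] - rj[b][1];
            double dz = ri[a][2] - rj[b][2];
            double r2 = dx*dx + dy*dy + dz*dz;
            double s6 = pow(SIGMA * SIGMA / r2, 3.0);  /* (sigma/r)^6 */
            u += 4.0 * EPSILON * (s6 * s6 - s6);       /* 12-6 form   */
        }
    return u;
}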

Presently the initial modeling and calculations have been completed. The phase boundaries have been calculated via thermodynamic perturbation theory and compared to that calculated by the theory of corresponding states. Four hundred points in the benzene fluid phase region have been submitted for simulation and measurement. It is estimated that the first run will take about 2 weeks.

The software calculates the thermodynamic values and associated errors for any point in the fluid region. The phase boundaries are recalculated from these results and compared to the theory. A new phase envelope is calculated and compared to experimental results. From the results, a highly accurate equation of state can be developed that represents the complete thermodynamics of benzene in the fluid region. The mixed partial derivatives, up to order 4, of the Helmholtz energy function are also calculated. From these, a three-dimensional surface can be developed.

These results are not just an academic pursuit. The accuracy of commercial chemical process simulators is highly dependent on reliable property tabulation. The availability of reliable thermodynamic data throughout the fluid region may lead to innovative and more robust process design. Results from the Monte Carlo simulation can be used to predict transport properties via molecular dynamics simulation. My research this summer will entail the construction of a database that includes 25 symmetric molecules similar in structure to benzene. This will form the basis for future computer simulation.

I wish to thank my research advisor, Rolf A. Lustig, Ph.D., for his gift of time and advice and use of his computer system and software. I also thank the faculty and staff in the Department of Chemical and Biomedical Engineering at Cleveland State University. Finally, I wish to acknowledge the generous support of The Ohio Space Grant Consortium.

Design of an Unmanned Aerial Vehicle Autopilot Using a Model Airplane Flight Simulator

Student Researcher: Brian J. Tomko

Advisor: Dr. Jed E. Marquart

Ohio Northern University Electrical and Computer Engineering and Computer Science

Abstract
There have been several implementations of unmanned aerial vehicles (UAVs). One particular implementation involves using GPS to navigate a model airplane around a small area. Several open-source and proprietary projects have been dedicated to taking the average hobby airplane or helicopter and creating a UAV using sensors. The design of such a project requires much planning as to what control systems will be involved, how the sensors will be interfaced, how the plane will navigate, what size of airspace will be involved, how precise the navigation must be, and much more. As a preliminary step of the project, it is very helpful to put the implementation into a simulator to test control systems and various autopilot algorithms.

For my project, I selected a model airplane simulator called CRRCsim, which is open source and has a very realistic flight model based on geometry and weight. I modified the source code so that the simulator would output sensor values such as pitch, bank, altitude, latitude, and longitude and store them in shared memory. Then, I wrote an autopilot, separate from the modified simulator, which would read these values and apply a tunable PID control system algorithm developed by the creators of the FlightGear Flight Simulator open source project. The autopilot would in turn fly the plane, via a joystick emulator called PPJoy, to preselected waypoints. I also developed a landing algorithm for the autopilot for precise runway landing.

Project Objectives
This paper is written for the Ohio Space Grant Consortium (OSGC) Scholarship. This is a significant research contribution to control systems engineering in the UAV field for hobbyists. The project was intended to increase further development in the UAV field and to provide a test bed for UAV design. It also implements an algorithm for autonomous landing.

Methodology
Assumptions
The following assumptions were made prior to the design of the project:
• The GPS latitude and longitude shall be set to update at 4 Hz, and the GPS calculations shall be done at that rate.
• The GPS latitude and longitude shall be rounded to the nearest 4 decimal places in degrees (as done in a GPS NMEA string).
• The servos shall move at a speed of 0.22 seconds per 60 degrees (as do most hobby servos for model airplanes).
• The servos shall have a range of motion of 90 degrees.
• The servo update values shall be rounded to the nearest 2 degrees (for realism).
• The sensors for pitch, bank, altitude, and heading (but not the GPS) shall update every 50 ms.
• The control system shall update every 50 ms, after the sensors update.
• The altitude of the plane is the same value as the simulator value.
• Sensor values of altitude, heading, pitch, and bank are exact, with no noise.

Initial Setup
The approach for how the UAV autopilot simulation would function is shown in Figure 1.

[Diagram: closed-loop system in which the airplane simulator writes sensor values to shared memory; the autopilot reads those values and sends servo control signals through a virtual joystick, which moves the aircraft control surfaces in the simulator.]
Figure 1. Overall Program Interaction.

Modifying the Simulator
The CRRCSim program was modified such that it would give the necessary values from the game and export them to shared memory. The first step was to modify the code so that the game would allocate shared memory when it was launched. Next, the main loop of the program, where each frame is updated, was modified to send the following values from the game to shared memory: altitude above the ground in feet, bank angle in degrees (0 degrees is level, negative is left, positive is right), pitch angle in degrees (0 degrees is level, negative is nose down, positive is nose up), heading in degrees (same as a magnetic compass), airspeed in feet per second, latitude in radians (with the launch point set at 0 radians), longitude in radians (with the launch point set at 0 radians), and a Boolean flag to tell if the plane is in a stalled state.

Setup a Separate Autopilot Program
A new program was created for the autopilot. The purpose of the autopilot program is to read the current parameters from the simulator and then execute the control system to fly the plane. Code was written to find the shared memory from the game. Then, an infinite loop was created with code to read each of the values from the shared memory.

Setup a Virtual Joystick for the Autopilot Program
There is a need for the autopilot to interface with the simulator directly instead of through a human interface device such as a joystick. A free virtual joystick program called PPJoy was downloaded and installed. Then, a separate thread was implemented in the autopilot program which would read four servo position variables from the autopilot for aileron, elevator, throttle, and rudder control and call the PPJoy program via header files to fly the plane. The separate thread makes it easier to implement the servo delay timing. In addition, a program was written to calibrate the virtual joystick in the actual simulator by iterating through all the possible discretized joystick values.
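The report lists the quantities exchanged but not the actual data structure; one plausible C layout for the shared block, with hypothetical field names, is:

/* One possible layout for the block the modified simulator writes
 * each frame and the autopilot polls. Field names are hypothetical;
 * the report lists the quantities but not the real structure. */
typedef struct {
    double altitude_ft;    /* height above ground, feet               */
    double bank_deg;       /* 0 level, negative left, positive right  */
    double pitch_deg;      /* 0 level, negative nose down             */
    double heading_deg;    /* magnetic-compass convention             */
    double airspeed_fps;   /* feet per second                         */
    double latitude_rad;   /* launch point = 0                        */
    double longitude_rad;  /* launch point = 0                        */
    int    stalled;        /* Boolean stall flag                      */
} SensorBlock;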

Flight Control System Setup
There is a need for flight control systems to allow the plane to fly stably on its own. The control systems needed to control angle of bank, angle of pitch, and heading. Utilizing these control systems, the altitude can be controlled and performance turns can be executed. Very small heading changes (turns) utilize rudder control, but for performance reasons, large turns require a combination of bank (ailerons) and pitch (elevator). Also, from the maintain pitch control system, a way to climb, descend, or maintain altitude could be derived. The implementation of how the plane would be controlled, from the lowest level of control to the highest level of control, is shown in Figure 2.

[Diagram: abstraction layers of the flight control methodology. From top to bottom: navigation and landing (list of waypoints, GPS calculations); derived control systems (maintain altitude, performance turns, sideslip); flight control systems (bank, pitch, small heading); and the airplane itself (sensors, control surfaces, servos).]
Figure 2. Diagram of the Layers of Implementation.

Implement the Control Algorithm
A PID (proportional-integral-derivative) closed loop control system algorithm was obtained from the source code of the FlightGear Flight Simulator open source project (www.flightgear.org). The code was implemented inside the autopilot program with slight modification. The algorithm first computes the change in output, Δu_n, for the current time step by:

Δu_n = K_P · [ (e_P,n − e_P,n−1) + (T_s / T_i) · e_n + (T_d / T_s) · (e_Df,n − 2·e_Df,n−1 + e_Df,n−2) ]

Then, from Δu_n, the absolute output, u_n, can be calculated by:

u_n = u_n−1 + Δu_n

In these equations, Δu_n is the incremental output and K_P is the proportional gain. e_P is the proportional error with reference weighting, such that e_P,n = β·r_n − y_n, where β is the weighting factor, r_n is the reference or set point, and y_n is the measured process value. e_n is the error, such that e_n = r_n − y_n. T_s is the sampling interval, Δt. T_i is the integrator time and T_d is the derivator time. e_Df is the derivative error with reference weighting and filtering, such that

e_Df,n = e_Df,n−1 / (T_s/T_f + 1) + e_D,n · (T_s/T_f) / (T_s/T_f + 1)

where T_f is the filter time, T_f = α·T_d (with α usually set to 0.1), and e_D is the unfiltered derivative error with reference weighting, such that e_D,n = γ·r_n − y_n (γ is the weighting factor). u_n is the absolute output.

The algorithm inputs are y_n, the current process value; r_n, the reference point; β, the proportional-error weighting factor (usually set to 1); γ, the unfiltered derivative error weighting factor (usually set to 0); α, the filter time weighting factor (usually set to 0.1); K_P, the proportional gain; T_s, the sampling interval Δt; T_i, the integrator time; T_d, the derivator time; u_min, the minimum output value for u_n; and u_max, the maximum output value for u_n. The algorithm output is u_n, the absolute output value. The change in output is calculated from the change in error, rather than computing only the absolute term, to allow gentler control movement when large control inputs are encountered and to prevent integrator windup.

The following variables were selected for a C code structure that is utilized for the bank, pitch, and small heading controls, based on the above variables. The input values are rn, the reference (set point) value, and yn, the measured process value. The configuration values (which are constant) are Kp, the proportional gain; alpha, the low-pass filter weighting factor (set to 0.1 for all control systems); beta, the process value weighting factor for calculating proportional error (set to 1.0 for all control systems); gamma, the process value weighting factor for calculating derivative error (set to 0.0 for all control systems); Ti, the integrator time (in seconds); Td, the derivator time (in seconds); umin, the minimum output clamp (set to -1.0 for all control systems); and umax, the maximum output clamp (set to 1.0 for all control systems). The previous state tracking values are epn-1, the proportional error; edfn-1 and edfn-2, the derivative errors; un-1, the output; desiredTs, the desired sampling interval (in seconds); elapsedTime, the elapsed time (in seconds); and un, the absolute output.
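Restated in C, the update above amounts to the following step function. This is a condensed sketch of the FlightGear-derived algorithm as described, with the state names shortened from the structure listed above; it is not the project's verbatim source.

/* Incremental PID step, restating the equations above. State carries
 * the previous proportional error, the two previous filtered
 * derivative errors, and the previous output. */
typedef struct {
    double Kp, Ti, Td;          /* tuned gains                    */
    double alpha, beta, gamma;  /* 0.1, 1.0, 0.0 in this project  */
    double umin, umax;          /* output clamps (-1, 1)          */
    double ep1, edf1, edf2, u1; /* previous-state tracking        */
} Pid;

double pid_step(Pid *p, double r, double y, double Ts)
{
    double ep = p->beta * r - y;      /* proportional error (weighted) */
    double e  = r - y;                /* integral error                */
    double ed = p->gamma * r - y;     /* derivative error (weighted)   */
    double edf = 0.0;
    if (p->Td > 0.0) {                /* low-pass filter the derivative */
        double Tf = p->alpha * p->Td;
        edf = p->edf1 / (Ts / Tf + 1.0)
            + ed * (Ts / Tf) / (Ts / Tf + 1.0);
    }
    double du = p->Kp * ((ep - p->ep1)
              + (Ts / p->Ti) * e
              + (p->Td / Ts) * (edf - 2.0 * p->edf1 + p->edf2));
    double u = p->u1 + du;            /* accumulate the increment      */
    if (u > p->umax) u = p->umax;     /* clamp the absolute output     */
    if (u < p->umin) u = p->umin;
    p->ep1 = ep;                      /* roll the state forward        */
    p->edf2 = p->edf1;
    p->edf1 = edf;
    p->u1 = u;
    return u;
}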

Ts is not included in the structure because it is the same for all the control systems and determines the delay of the main loop of the autopilot program. Every 50 milliseconds, or Ts, the autopilot program reads the sensor values from shared memory, executes the control algorithms, and then updates the virtual joystick position.

Tuning the Control Systems
The process of tuning the values in the C code structure for each control system was accomplished using a tutorial from the FlightGear project. The first step involves finding the ultimate gain, KPu. This is done using a trial and error method. First, the derivative time Td is eliminated by setting it to its minimum value of zero. Then, the integral time Ti is eliminated by setting it to a very high value. Next, the proportional gain, KP, is set to a very low value. For the value to be tuned, its reference value was set to 0, and its process value was set to the sensor reading every 50 ms. The plane was then launched in the simulator and the control system was enabled. KP was then adjusted until continuous cycling (a sustained oscillation with constant amplitude) of the system occurred. Utilizing this ultimate oscillation period, Tu, and the ultimate gain, KPu, the control system was then ready to be tuned using the Ziegler-Nichols method for PID closed loop control systems, as shown by the following equations:

K_P = 0.6 · K_Pu    where K_P is the proportional gain and K_Pu is the ultimate gain.
T_i = T_u / 2       where T_i is the integral time and T_u is the ultimate period.
T_d = T_u / 8       where T_d is the derivative time and T_u is the ultimate period.

Bank Control System
Using the tuning process described above, KPu was found to be 0.02, resulting in continuous cycling with an ultimate period, Tu, of 2.5 seconds. Then, using the Ziegler-Nichols method, KP = 0.012, Ti = 1.25, and Td = 0.3125.

Pitch Control System
Using the same tuning process, the plane was found to oscillate up and down with a KPu of 0.015 and an ultimate period, Tu, of 3.0 seconds. Then, using the Ziegler-Nichols method, KP = 0.009, Ti = 1.5, and Td = 0.375.

Heading Control System
The maintain-heading-for-small-differences control system was tuned using the same process. The only major difference was that the value tuned was not what the compass sensor of the plane indicated, but rather the relative heading (the number of degrees the plane needed to turn to reach its desired heading). The plane was found to oscillate side to side with a KPu of 0.075 and an ultimate period, Tu, of 1.0 seconds. Then, using the Ziegler-Nichols method, KP = 0.045, Ti = 0.5, and Td = 0.125.
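The Ziegler-Nichols step is mechanical once KPu and Tu are known; a small helper makes the arithmetic explicit (the comments check it against the bank gains above):

/* Ziegler-Nichols gains from the ultimate gain and period found by
 * the trial-and-error cycling procedure described above. */
void ziegler_nichols(double Kpu, double Tu,
                     double *Kp, double *Ti, double *Td)
{
    *Kp = 0.6 * Kpu;   /* bank example: 0.6 * 0.02 = 0.012  */
    *Ti = Tu / 2.0;    /* bank example: 2.5 / 2    = 1.25   */
    *Td = Tu / 8.0;    /* bank example: 2.5 / 8    = 0.3125 */
}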

Implementing the Derived Control Systems
Performance Turns
From the maintain bank control system, a way to turn the plane more than just a few degrees was derived. If the relative heading was more than 40.0 degrees (a large turn), the plane would enter a fixed bank angle of 30 degrees and apply slight right rudder for a coordinated right turn. Likewise, if the relative heading was less than -40.0 degrees, the plane would enter a fixed bank angle of -30 degrees and apply slight left rudder for a coordinated left turn. If the relative heading was between -40 and 40 degrees, the bank would linearly level off as the relative heading approached zero, and the maintain-heading control system would be engaged to use rudder to help push the relative heading to zero. This derived turning system also requires the maintain altitude control system to be running, because lift is lost during a bank, causing the nose to pitch down. Figure 3 shows a graph of bank angle versus relative heading.

[Chart: bank angle command (degrees) versus relative heading (degrees), linear from -30 degrees at a relative heading of -40 to +30 degrees at +40, saturated beyond that band.]
Figure 3. Bank Angle (degrees) vs. Relative Heading (degrees).
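One way to encode the turn schedule of Figure 3 is shown below; the report describes the shape but not the exact code, so this is a reading of it rather than the project's implementation.

/* Bank-angle command versus relative heading, matching Figure 3:
 * saturate at +/-30 degrees beyond +/-40 degrees of relative heading,
 * and ramp linearly to zero inside that band. */
double bank_command(double rel_heading_deg)
{
    if (rel_heading_deg >  40.0) return  30.0;
    if (rel_heading_deg < -40.0) return -30.0;
    return rel_heading_deg * (30.0 / 40.0);  /* linear level-off */
}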

Altitude Control
Similarly, from the maintain pitch control system, a way to climb, descend, or maintain altitude was derived. If the delta altitude (desired altitude minus current altitude) was greater than 30 feet, the plane is very low and would therefore enter a fixed pitch of 20 degrees to climb to its desired altitude. Likewise, if the delta altitude was less than -30 feet, the plane is very high and would therefore enter a fixed pitch of -20 degrees to descend to its desired altitude. If the delta altitude was between -30 and 30 feet, the pitch would level off via a square root function. If the delta altitude was between -2 and 2 feet, the pitch would be zeroed. The issue with this model is that, even though it provides a workable solution, the pitch angle needed to maintain altitude varies with airspeed and with the plane itself. Also, this model assumes the plane is moving fast enough and has enough throttle to prevent a stall condition when climbing. Figure 4 shows a graph of pitch angle versus delta altitude.
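A sketch of this pitch schedule follows; the square-root taper is scaled here so the curve is continuous at the +/-30 ft band edges, which is an assumption since the report gives only the shape. Figure 4 plots the same relationship.

#include <math.h>

/* Pitch-angle command versus delta altitude (desired minus current):
 * fixed +/-20 degrees beyond +/-30 ft, a square-root taper inside
 * that band, and a +/-2 ft dead band. Taper scaling is assumed. */
double pitch_command(double delta_alt_ft)
{
    if (delta_alt_ft >  30.0) return  20.0;
    if (delta_alt_ft < -30.0) return -20.0;
    if (delta_alt_ft > -2.0 && delta_alt_ft < 2.0) return 0.0;
    double s = (delta_alt_ft >= 0.0) ? 1.0 : -1.0;
    return s * 20.0 * sqrt(fabs(delta_alt_ft) / 30.0);
}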

[Chart: pitch angle command (degrees) versus delta altitude (feet), following a square-root taper between -30 and +30 feet and saturating at -20 and +20 degrees.]
Figure 4. Pitch Angle (degrees) vs. Delta Altitude (feet).

Navigation Setup
The next step in the project was to implement airplane navigation via GPS. This entailed the following steps.

Implement a System of Waypoints
A way to use GPS for airplane navigation is to implement a system using waypoints. Each waypoint was implemented as a structure in the code consisting of a latitude (in degrees), a longitude (in degrees), and an altitude (in feet). Several different waypoints were created as an array of waypoints. The plane would then fly to each waypoint sequentially. A waypoint was considered cleared when the plane was within a 60 foot radius of the waypoint. A very basic flight for navigation and landing using waypoints is shown in Figure 5.

[Diagram: a basic flight path from the launch point and autopilot-engage point through the last navigation waypoint, then to the begin-glideslope waypoint and the touchdown point waypoint on the runway.]
Figure 5. Navigation Using Waypoints.

GPS Calculations
For the plane to be able to navigate successfully as well as land, various calculations needed to be done. The following formulae were obtained from the internet (http://williams.best.vwh.net/avform.htm). These formulae use a great circle approximation instead of a flat earth approximation for additional accuracy and because computing power is cheap enough to perform the calculations. Also, all calculations were done in double precision floating point for accuracy because of the close proximities between points in the model airplane realm.

Distance between Two Points
The distance between two points was necessary to determine how close the plane was to its current waypoint.

The great circle distance d in radians between two points with coordinates {lat1, lon1} and {lat2, lon2} is given by:

d = 2 · arcsin( sqrt( sin^2((lat1 − lat2)/2) + cos(lat1) · cos(lat2) · sin^2((lon1 − lon2)/2) ) )

where the plane's position is {lat1, lon1} and the current waypoint is {lat2, lon2}. A simpler, mathematically identical formula is available, but it uses an arccos function, which can be more subject to rounding error at close distances. The distance can then be converted from radians to feet by:

d_feet = d · 180 · 60 · NAUTICAL_MILES_TO_FEET / π

Course between Two Points
The true course between two points (not adjusted for magnetic variation) was necessary to determine what compass course in degrees to fly to the waypoint. The true course great circle bearing tc in radians between two points with coordinates {lat1, lon1} and {lat2, lon2} is given by:

tc = mod( arctan2( sin(lon1 − lon2) · cos(lat2), cos(lat1) · sin(lat2) − sin(lat1) · cos(lat2) · cos(lon1 − lon2) ), 2π )

where the plane's position is {lat1, lon1} and the current waypoint is {lat2, lon2}. The course can then easily be converted from radians to degrees. The simulator does not implement magnetic variation in the game, so the magnetic heading is the same as the true course heading.
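These two formulae translate directly into C; the sketch below follows the Aviation Formulary forms quoted above (function names are this write-up's, not the project's):

#include <math.h>

#define NM_TO_FEET 6076.12  /* feet per nautical mile */

/* Great-circle distance in feet between {lat1,lon1} and {lat2,lon2}
 * in radians; one radian of arc = (180*60/pi) nautical miles. */
double gc_distance_feet(double lat1, double lon1, double lat2, double lon2)
{
    double sdlat = sin((lat1 - lat2) / 2.0);
    double sdlon = sin((lon1 - lon2) / 2.0);
    double d = 2.0 * asin(sqrt(sdlat * sdlat
                             + cos(lat1) * cos(lat2) * sdlon * sdlon));
    return d * 180.0 * 60.0 * NM_TO_FEET / M_PI;
}

/* True course in radians, 0..2*pi, from point 1 toward point 2,
 * using the formulary's west-positive longitude convention. */
double gc_true_course(double lat1, double lon1, double lat2, double lon2)
{
    double tc = atan2(sin(lon1 - lon2) * cos(lat2),
                      cos(lat1) * sin(lat2)
                    - sin(lat1) * cos(lat2) * cos(lon1 - lon2));
    return fmod(tc + 2.0 * M_PI, 2.0 * M_PI);
}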

Landing Algorithm with Crosswind Capability
An additional need of the autopilot program is the capability to autonomously land the plane via GPS. Landing occurs after the plane has completed all of its waypoints. In this algorithm, the plane must:
• Line up with the runway at a predefined distance
• Maintain the heading of the runway
• Follow an imaginary glideslope down to the touchdown point, executing sideslips to maintain horizontal alignment with the glideslope, especially in the presence of crosswinds

Preflight
A runway landing heading and GPS touchdown point must be selected before the simulated plane takes flight. The touchdown point is the heading, latitude, and longitude values of the desired landing position on the runway. Altitude at the landing point is always 0 feet since the simulator always references altitude as above ground. The wind speed values of the simulator must be configured during preflight. The desired landing glideslope angle and the desired distance from the touchdown point to start the landing approach must also be chosen in the autopilot program during preflight.

GPS Calculations: Landing Waypoints
The autopilot computes two GPS landing waypoints based on the touchdown point. The first waypoint is the heading, altitude, latitude, and longitude where the plane will begin to follow the glideslope. The second point is a desired distance away from the glideslope waypoint so that the plane has enough distance to reach the glideslope point at the runway orientation heading. A point {lat, lon} a distance d out on the true course radial tc from point {lat1, lon1} is found by:

lat = arcsin( sin(lat1)·cos(d) + cos(lat1)·sin(d)·cos(tc) )
dlon = arctan2( sin(tc)·sin(d)·cos(lat1), cos(d) − sin(lat1)·sin(lat) )
lon = mod(lon1 − dlon + π, 2π) − π

where the touchdown point is {lat1, lon1} in radians, d is the distance in radians from the touchdown point where final approach begins, and tc is the true course radial in radians given by:

tc = mod(runway_heading + π, 2π)

A side view of the landing glideslope and the required waypoints for landing is shown in Figure 6.

Figure 6. Side view of the Glideslope (begin-glideslope waypoint, touchdown point waypoint, and last waypoint).

Cross Track Distance
The cross track distance is the perpendicular distance (to the right or left) that the plane is from the landing glideslope. The distance is determined by using the plane's current location and then calculating the perpendicular bisector to the glideslope. The cross track error XTD (distance off course) in radians is given by:

XTD = arcsin( sin(distance_A_to_D) · sin(course_A_to_D − course_A_to_B) )

where A is a waypoint marking the beginning of the final approach a given number of feet away from the runway touchdown point and lined up with the runway, B is a waypoint marking the touchdown point of the runway, and D is the position of the plane (usually off course). Positive XTD means right of course, and negative XTD means left of course. This assumes point A is not the North or South Pole. An illustration of cross track distance is shown in Figure 7.
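A minimal Python sketch of the waypoint projection and cross-track formulae above (function names are illustrative, not from the original autopilot):

    import math

    def project_waypoint(lat1, lon1, d, tc):
        """Point {lat, lon} a distance d (radians of arc) out on the true
        course radial tc from {lat1, lon1}; all angles in radians."""
        lat = math.asin(math.sin(lat1) * math.cos(d)
                        + math.cos(lat1) * math.sin(d) * math.cos(tc))
        dlon = math.atan2(math.sin(tc) * math.sin(d) * math.cos(lat1),
                          math.cos(d) - math.sin(lat1) * math.sin(lat))
        lon = (lon1 - dlon + math.pi) % (2.0 * math.pi) - math.pi
        return lat, lon

    def cross_track_error(dist_AD, course_AD, course_AB):
        """Signed cross-track distance (radians of arc) of plane D from
        the approach track A->B; positive means right of course."""
        return math.asin(math.sin(dist_AD) * math.sin(course_AD - course_AB))

For example, the begin-glideslope waypoint would be project_waypoint(touchdown_lat, touchdown_lon, approach_distance, mod(runway_heading + pi, 2*pi)).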

Figure 7. Cross Track Distance and Along Track Distance (showing the cross track distance, the along track distance, the distance between waypoints minus the along track distance, the horizontal glideslope, and the begin-glideslope and touchdown point waypoints).

Along Track Distance
The along track distance can be used to determine how far the point on the glideslope perpendicular to the plane is from the touchdown point. However, the distance from the plane to the touchdown point is a good enough estimate that along track distance need not be used. If it were to be implemented later, the along track distance ATD (distance from A to the point abeam the plane along the glideslope) in radians is given by:

ATD = arcsin( sqrt( sin²(dist_A_to_D) − sin²(XTD) ) / cos(XTD) )

where A is a waypoint marking the beginning of the final approach a given number of feet away from the runway touchdown point and lined up with the runway, B is a waypoint marking the touchdown point of the runway, and D is the position of the plane (usually off course). Then, the distance from the point abeam the plane along the glideslope to the touchdown point is given by:

distance_A_to_B − ATD

An illustration of along track distance is shown in Figure 7.

Altitude Along Glideslope
Instead of using the altitude for a given waypoint, the autopilot must constantly compute the desired altitude based on the glideslope angle and the position of the plane along the glideslope. The delta altitude used as the reference value for the maintain-altitude control system is given by:

∆Altitude = 5 + distance_from_touchdown_point · tan(GLIDESLOPE_RADIANS) − Plane_Altitude

The constant adds 5 feet to the glideslope to allow time for the plane to level out. After the plane attains the touchdown waypoint, the plane is assumed to be close enough to the ground for more accurate sensors, such as sonar, to give the exact height above the ground; the engine is cut, and the plane descends at a rate of 2 feet per second until it is 0.5 feet above the ground. The plane will try to hold 0.5 feet above the ground until it settles to the ground.

Landing Control Systems
The plane uses the sideslip method to remain horizontally lined up with the glideslope. In the sideslip method, the rudder is used solely to maintain the fixed heading of the runway, while ailerons/bank angle slide the plane over to the glideslope, based on the cross track distance, via the horizontal component of lift. When the cross track distance is greater than 2 feet (the plane is right of course), the autopilot uses a square root function to determine the amount of left bank to use for the sideslip. Likewise, when the cross track distance is less than -2 feet (the plane is left of course), the autopilot uses a square root function to determine the amount of right bank. Otherwise, the plane is deemed close enough to the glideslope, and zero bank is used. Figure 8 shows a graph of bank angle versus cross track distance; a sketch of this schedule follows below. It is safe to use the sideslip all the way to landing, since a good crosswind landing entails landing on one wheel at a time. When the plane gets within 60 feet of the touchdown waypoint, the throttle is set to idle, and the plane descends at a rate of two feet per second. Descent rate is controlled by time, since drag forces will eventually slow the plane down; this creates a gentler landing than GPS altitude alone would allow. The plane will try to level off when it is half a foot off the ground so it can settle gently after enough airspeed is lost.
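The paper does not give the exact square-root gain, so the following Python sketch of the bank-angle schedule uses an assumed gain chosen to roughly reproduce the ±30 degree saturation visible in Figure 8:

    import math

    MAX_BANK_DEG = 30.0  # saturation limit seen in Figure 8
    DEADBAND_FT = 2.0    # "close enough" band around the glideslope
    K_BANK = 3.5         # assumed gain: ~70 ft of error saturates the bank

    def sideslip_bank(xtd_feet):
        """Bank-angle command (degrees) from cross track distance (feet).
        Positive xtd (right of course) commands left (negative) bank."""
        if abs(xtd_feet) <= DEADBAND_FT:
            return 0.0
        bank = min(K_BANK * math.sqrt(abs(xtd_feet) - DEADBAND_FT),
                   MAX_BANK_DEG)
        return -bank if xtd_feet > 0 else bank

The square root gives an aggressive correction for small errors that flattens out for large ones, while the deadband keeps the wings level once the plane is essentially on the glideslope.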

Figure 8. Bank Angle (degrees) vs. Cross Track Distance (feet).

Results Obtained
The plane successfully flies to all of its waypoints and makes a gentle landing. In addition, the control system tuned on the one specific plane also works well for other planes with different handling and flight characteristics. The GPS update rate of 4 Hz and its precision of four decimal places are sufficient to navigate and land a model airplane successfully around a small flying area.

Significance of Results
With the successful navigation and landing via GPS waypoints and the working control systems in the simulator, confidence can be gained when implementing a real UAV using a model airplane. One can design a model airplane using modeling software, specify its physics parameters, implement it in the simulator, and use the simulator to test control systems before building the actual UAV.

References 1. "CRRCsim: a Model-Airplane Flight Simulation Program for Linux." SourceForge. 16 July 2005. 1 Apr. 2008 . 2. Olson, Curtis L. "Autopilot Tuning." FlightGear Flight Simulator. 4 Feb. 2004. 1 Apr. 2008 . 3. Olson, Curtis L. "PID Algorithm Implementation." FlightGear Flight Simulator. 4 Feb. 2004. 1 Apr. 2008 . 4. Westhuysen, Deon. "PPJoy - Parallel Port Joystick Driver for Windows 98, Me, 2000 and XP." 1 Apr. 2008 . 5. Williams, Ed. "Aviation Formulary V1.43." 31 July 2006. 1 Apr. 2008 . Three Dimensional Dynamic Visualization of Bone Remodeling

Student Researcher: Ryan E. Tomlinson

Advisor: Dr. Christopher J. Hernandez

Case Western Reserve University Biomedical Engineering

Abstract
Astronauts may seriously increase their risk of bone fracture after returning to a weight-bearing environment from outer space due to bone loss. Since bone mineral density is not a good predictor of bone fracture risk, here we present automated techniques that use serial milling imaging to examine bone remodeling. Serial milling imaging is a technique in which the cross-section of a specimen is imaged and then cut away, repeatedly, until the entire specimen volume is acquired. In this project, each cross-section is made up of a mosaic of fluorescent images with in-plane resolution of 1.7 µm and out-of-plane resolution of 5 µm. Non-uniform illumination in each image of the mosaic is removed using a retrospective background function. The mosaic of images is tiled together into a single cross-section using normalized cross correlation. Signal originating from fluorescent material below the focal plane is removed by subtracting a portion of subsequent images. The cross-sectional images are vertically aligned using a fiduciary marker. An iterative thresholding technique is used for segmentation, and morphological processing is used to remove noise and artifacts. The resulting binary images are stacked to create a three-dimensional view of the specimen. We demonstrate this technique on bone labeled with multiple fluorescent markers.

Project Objectives
3D dynamic histomorphometry has several distinct advantages over 2D dynamic and 3D static histomorphometry. These advantages include increased accuracy and repeatability compared to 2D dynamic, and the ability to measure remodeling activity, unlike 3D static. However, 3D dynamic histomorphometry poses several technical challenges to overcome. Here we present the image processing solutions to the major issues impacting 3D dynamic histomorphometry.

First, we developed the protocols necessary to obtain two labels of bone formation in rat trabecular bone with fluorescent markers. Two such fluorescent marker sets were developed: one for use in rat bone and one for use in human bone. Second, we created the image processing toolset necessary to convert the cross-sectional mosaics into single cross-sectional images devoid of noise and confounding artifacts. Last, we developed a system to reliably stack the two-dimensional images into a three-dimensional view of bone formation and resorption. We are also able to demonstrate the feasibility of human bone formation studies using this technique.

Methodology Used
For this project, we inject rats with bone formation markers with a one-week time interval between markers. Current studies in a rat model utilize Xylenol Orange and Calcein as fluorescent labels. Various tetracycline-based formation markers are also being investigated for clinical use of this system. After harvesting the L5 vertebrae, the bones are fixed in formalin for 24 hours and embedded in polymethylmethacrylate (PMMA) tinted with Sudan Black dye.

The serial milling imaging system is computer controlled to repeatedly cut away the top of the specimen and capture a mosaic of images for each cross-section. The system automatically changes filter sets to capture UV signal (bone), FITC signal (Calcein), and TRITC signal (Xylenol Orange). After the full volume of the specimen is acquired, a series of two-dimensional image processing steps is applied, as follows.

Non-uniform illumination correction
Non-uniform illumination (NUI) is a large-scale, low-amplitude intensity gradient caused by the light source. NUI adversely affects attempts to quantify image content by reducing contrast in areas with poor illumination. Although bad acquisition can be corrected with prospective correction (calibration), object-dependent shading requires retrospective correction (estimation of the original image from the corrupted image). Since bone and fluorescent labels only fluoresce when directly illuminated with light of the appropriate wavelength, our images contain object-dependent shading. In order to correct the image, a new image is created by blurring the original (corrupted) image with a large Gaussian kernel. From each of 16 regions of the blurred image, the darkest pixel is selected. These extracted points are fit to a second order polynomial surface representing the illumination profile, in the form of Equation 1:

f(x, y) = a0 + a1·y + a2·x + a3·y² + a4·x² + a5·y·x    (1)

Finally, the corrected image is obtained by dividing the original image by the illumination profile f(x,y).1,2 An example of this automated correction technique can be seen in Figure 1.
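As an illustration of this correction, here is a minimal Python/NumPy sketch; the Gaussian blur width is an assumed value, and the 4x4 grid of regions follows the 16-region description above:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def nui_correct(img, sigma=50.0):
        """Retrospective NUI correction: fit Equation 1 to the darkest
        pixel of each of 16 regions of a blurred copy, then divide."""
        blurred = gaussian_filter(img.astype(float), sigma)
        rows, cols = blurred.shape
        xs, ys, vs = [], [], []
        for i in range(4):                      # 4 x 4 = 16 regions
            for j in range(4):
                region = blurred[i*rows//4:(i+1)*rows//4,
                                 j*cols//4:(j+1)*cols//4]
                r, c = np.unravel_index(np.argmin(region), region.shape)
                ys.append(i*rows//4 + r)
                xs.append(j*cols//4 + c)
                vs.append(region[r, c])
        x, y = np.array(xs, float), np.array(ys, float)
        # Equation 1: f(x,y) = a0 + a1*y + a2*x + a3*y^2 + a4*x^2 + a5*y*x
        A = np.column_stack([np.ones_like(x), y, x, y**2, x**2, y*x])
        a, *_ = np.linalg.lstsq(A, np.array(vs), rcond=None)
        Y, X = np.mgrid[0:rows, 0:cols].astype(float)
        profile = a[0] + a[1]*Y + a[2]*X + a[3]*Y**2 + a[4]*X**2 + a[5]*Y*X
        return img / np.maximum(profile, 1e-9)  # guard against divide-by-zero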

Out of Plane Fluorescence Subtraction
Unlike other imaging modalities, serial milling imaging leaves a large portion of the specimen below the focal plane of interest. Thus, specimen below the plane of imaging will fluoresce, and this fluorescence will be captured in images from the focal plane. Using an opaque embedding medium, such as tinted polymethylmethacrylate, reduces the intensity of this out-of-plane fluorescence. However, to remove the remaining fluorescence in the current image (Ii), we utilize the image captured directly below the plane of interest (Ii+1). Note that the fluorescence transmitted through the medium will follow an exponential decay, denoted by e^(−µs); the 2µs in Equation 2 reflects the light passing through the medium on both the excitation and emission paths. There will also be some dispersion of light as it returns from below the focal plane to the camera, denoted by h(x, y). Therefore, the true signal from the focal plane (Ci) can be calculated by Equation 2. An example of this automated technique is seen in Figure 3.

Ii − ( e^(−2µs) · Ii+1 ) ⊗ h(x, y) = Ci    (2)
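A minimal sketch of Equation 2, with an assumed attenuation value µs and a Gaussian used as a stand-in for the dispersion kernel h(x,y):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def remove_out_of_plane(I_i, I_below, mu_s=0.5, dispersion_sigma=3.0):
        """Equation 2: C_i = I_i - (e^(-2*mu*s) * I_(i+1)) convolved with
        h(x,y). mu_s and dispersion_sigma are assumed values here."""
        bleed = gaussian_filter(
            np.exp(-2.0 * mu_s) * I_below.astype(float), dispersion_sigma)
        return np.clip(I_i.astype(float) - bleed, 0.0, None)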

In-Plane Alignment
In order to maintain high resolution without sacrificing field of view, it is necessary to capture a mosaic of images for each cross-section of the specimen. This is accomplished by moving the camera relative to the stationary specimen. Although the motion of the camera is highly repeatable, the accuracy of the camera's positioning is lower than the resolution of the images. Therefore, we employ normalized cross-correlation to position the overlapping images relative to each other in the final mosaic. Since cross-correlation is relatively expensive computationally, only the known image overlap plus twice the machine inaccuracy is cross-correlated. This automated technique provides precise in-plane alignment of images without significant computational expense.
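A sketch of the restricted normalized cross-correlation search described above, shown for the horizontal direction only (the real mosaic placement presumably searches both axes):

    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation score of two equal-size patches."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def best_horizontal_offset(left_tile, right_tile, overlap, slack):
        """Search only +/- slack pixels (twice the stage inaccuracy)
        around the nominal overlap for the shift maximizing NCC."""
        strip_l = left_tile[:, -overlap:].astype(float)
        padded = np.pad(right_tile.astype(float), ((0, 0), (slack, slack)))
        best_score, best_dx = -1.0, 0
        for dx in range(-slack, slack + 1):
            strip_r = padded[:, slack + dx: slack + dx + overlap]
            score = ncc(strip_l, strip_r)
            if score > best_score:
                best_score, best_dx = score, dx
        return best_dx

Restricting the search window to the known overlap plus twice the machine inaccuracy is what keeps the cost low.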

Vertical Alignment
Utilizing the same cross-correlation method used for in-plane alignment, we perform vertical alignment between cross-sectional images with a fiduciary marker. Several holes are drilled through the specimen and filled with epoxy. Because the epoxy fluoresces with higher intensity than bone, we are able to isolate the drill holes in each cross-sectional image. To vertically align the specimen, each cross-section is aligned relative to a drill hole in the first cross-section. The drill hole with the highest correlation coefficient (strongest correlation) is chosen for vertical alignment. This automated technique provides superior vertical alignment with small labor costs and little computational expense.

Segmentation
Because of the large intensity variations present in bone, global thresholding is not an appropriate option for segmenting these images. Therefore, an implementation of an iterative thresholding algorithm is employed.3 This technique initially segments the image globally, and then iteratively reclassifies each pixel based on its immediate neighborhood. After many iterations, the original image is segmented accurately. Figure 2 shows the increasingly accurate segmentation with increasing iterations. Although this algorithm is computationally expensive, the intensity variations in these images demand an intensive method.

Conclusions
After two-dimensional processing, the cross-sectional images are stacked into a three-dimensional model of bone with bone formation markers using AMIRA (Mercury Systems). This view is shown in Figure 4. Three-dimensional histomorphometry allows us to make quantitative measurements not available using other forms of histomorphometry.

Figures

Figure 1. NUI correction applied to the uncorrected image (left) results in the corrected image (right). Note the increased illumination in the corners of the image, resulting in higher accuracy in segmentation.

Figure 2. As the iterative thresholding algorithm iterates, the segmentation becomes increasingly accurate. After 100 iterations, noise is completely removed and the segmentation is complete.

Figure 3. This underlying image is used to remove signal originating from below the focal plane. Arrows in the Target Image indicate out of plane fluorescence. The Final Image showcases the efficacy of this out of plane fluorescence subtraction technique.

Figure 4. This figure illustrates one of the unique facets of this imaging technique: capturing multiple fluorescent signals. In this three-dimensional view of bone, the yellow and orange regions are signal from bone formation labels. Also, a bone resorption cavity is marked with the black arrow.

Acknowledgments
The authors would like to acknowledge CJ Slyfield and the MMM Lab Group for extensive help during the preparation of this manuscript. Additionally, the authors note that this work was supported in part by NIH/NIAMS R21 AR054448, Case Alumni Association, and Case Western Reserve University.

References
1. Russ, J. C. (1999). The Image Processing Handbook, 3rd ed. IEEE Press, Boca Raton, Florida.
2. Tomaževič, D., Likar, B., and Pernuš, F. (2002, Dec.). "Comparative Evaluation of Retrospective Shading Correction Methods." J. Microsc. 208(3), pp. 212-223.
3. Wu, H. S., Berba, J., and Gil, J. (2000, March). "Iterative Thresholding for Segmentation of Cells from Noisy Images." J. Microsc. 197(3), pp. 296-304.

Optical Tracking and Verification for Autonomous Satellite Research

Student Researcher: Ashley M. Verhoff

Advisor: Dr. Albert B. Bosse

University of Cincinnati Department of Aerospace Engineering and Engineering Mechanics

Abstract
Autonomous docking of spacecraft with existing satellites is an extremely useful prospect. From refueling to service and maintenance to orbital alterations and transfers, this technology offers many valuable possibilities. In support of this goal, an optical tracking and verification program for determining the position of an autonomous ground vehicle is being developed. Two movable cameras and a computer code utilize computer vision algorithms to track infrared LEDs and employ stereo vision geometry to calculate their three-dimensional locations from two-dimensional images. This system is necessary for verifying the accuracy of the algorithms employed by the robot during scenarios in which the vehicle is docking with a mock satellite bus.

Project Objectives
The goal of this project was to continue previous work on developing a C++ program that, by interfacing with two cameras and a pan-tilt platform, would have the ability to "see" and track infrared LEDs for position determination. This objective was accomplished in two phases: application of computer vision functions and utilization of epipolar geometry techniques. Once the cameras were able to distinguish the LEDs based upon threshold brightness levels, their centroids could be calculated, their two-dimensional positions within the images determined, and finally basic geometric expressions formulated such that the LED locations in three-dimensional space could be calculated.

Methodology Used
Once communication between the two cameras and the C++ program was established, various computer vision functions were used to both recognize and track three infrared LEDs. These calculations yielded initial estimates for the two-dimensional locations of the LEDs in each camera's field of view. However, these were very rough estimates: the functions used were able to distinguish large bright areas from the dark background, but the centroids of these illuminated areas were not as accurate as required. Thus, a routine based upon calculating mass centroids was developed. A square area (40 pixels by 40 pixels) surrounding the initial centroid was analyzed by the C++ program. The brightness level of each pixel seen by the cameras was used as that pixel's "mass" and, with known distances from a predetermined x-axis and y-axis in the camera's two-dimensional coordinate system, the coordinates of a new centroid were found that emulated the center of mass of each infrared LED.
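A minimal sketch of that mass-centroid refinement (the 40x40 pixel window matches the description above; the original program is C++, and names here are illustrative):

    import numpy as np

    def refined_centroid(image, cx, cy, half=20):
        """Refine a rough LED centroid (cx, cy): each pixel's brightness
        in a 40x40 window acts as its "mass", and the window's center of
        mass is returned. Assumes the window lies inside the image."""
        win = image[cy - half:cy + half, cx - half:cx + half].astype(float)
        total = win.sum()
        if total == 0:
            return float(cx), float(cy)
        ys, xs = np.mgrid[0:win.shape[0], 0:win.shape[1]]
        x_cm = (win * xs).sum() / total + (cx - half)
        y_cm = (win * ys).sum() / total + (cy - half)
        return x_cm, y_cm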

With more exact two-dimensional coordinates determined, simple geometric relations were utilized to convert from the two-dimensional coordinate system offered by the camera images to three-dimensional locations with respect to the camera coordinate system. Equations 1, 2, and 3, respectively, were used in this conversion to yield the three-dimensional x-, y-, and z-coordinates of each LED’s centroid.

X = (XRPOINT − PPX) × BASELINE / (XLPOINT − XRPOINT)    (1)

Y = (YRPOINT − PPY) × BASELINE / (XLPOINT − XRPOINT)    (2)

Z = BASELINE × FOCAL_LENGTH / (XLPOINT − XRPOINT)    (3)

In the above equations, the position of each LED centroid in the right camera's field of view is (XRPOINT, YRPOINT); the centroid in the left camera's field of view is (XLPOINT, YLPOINT). The coordinates of the projection of the center of each camera onto its image plane, known as the principal point, are (PPX, PPY). The variable FOCAL_LENGTH, as its name suggests, is the focal length of each camera, and is measured by calibrating the cameras. The BASELINE length is the distance between the centers of the two cameras.
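Equations 1-3 translate directly into a few lines of code; a minimal Python sketch with parameter names taken from the equations (the original program is C++):

    def triangulate(xl, xr, yr, ppx, ppy, baseline, focal_length):
        """Equations 1-3: 3-D LED position from matched left/right image
        coordinates; the common denominator is the stereo disparity."""
        disparity = xl - xr
        X = (xr - ppx) * baseline / disparity
        Y = (yr - ppy) * baseline / disparity
        Z = baseline * focal_length / disparity
        return X, Y, Z

Note that all three coordinates share the same denominator, so centroid errors that change the disparity affect depth (Z) most strongly at long range, where the disparity is small.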

Results Obtained
In order to verify the accuracy of this optical metrology system, three infrared LEDs were attached to a cart and the cart was moved in a predetermined trajectory in the field of view of the cameras. The cart was pushed away from the laboratory reference frame origin (along the z-axis) in three increments of approximately 300 mm each. The orientation of the cart was then altered by rotation about the y-axis approximately 70 s after test initiation. All motion of the cart was kept within the stationary field of view of the cameras. As the cart was maneuvered, the three-dimensional locations of the LEDs in the laboratory frame of reference and the Euler angles of the rigid LED cluster were output to a data file and plotted versus time. The position and orientation results of this verification test are shown in Figures 1 and 2, respectively.

Significance and Interpretation of Results
As shown in Figures 1 and 2, the optical metrology system yields reasonably accurate results for position determination of infrared LEDs. However, results from additional verification tests, such as when the pan-tilt platform was directed to move, do not appear to be as accurate. Future work includes verification, and possibly modification, of the calibration procedure, as the cameras currently must be calibrated without their infrared filters. The robotic arm may also be employed to quantitatively determine the accuracy of the system at various radial and axial distances from the cameras, and those results could be used to correct systematic errors.

Figures

Figure 1. Time History of 3-D Coordinates. Figure 2. Time History of Euler Angles.

Acknowledgments
The author would like to thank Eric Miller, who began this project last year and assisted with its continuation, and Dr. Albert Bosse, who offered direction and support throughout the past two quarters.

Design and Implementation of an Intelligent Balloon for Real-Time Environment Monitoring

Student Researcher: Thomas V. Vo

Advisor: Dr. Julie Zhao

The University of Akron Electrical and Computer Engineering Department

Abstract
The University of Akron has a team of mechanical and electrical engineers that has designed a balloon system to carry a payload to near-space and have it descend back to Earth via a parachute.

For the duration of the trip, the payload transmits real-time sensor information about its external and internal environment to a base-station receiver on the ground. In addition, tracking information is transmitted in order to provide a means of recovering the balloon payload once it lands.

The payload box and its contents were designed with the consideration that the payload may undergo heavy vibrations, extremely cold temperatures, and strong impacts upon landing, and may possibly even land in water. The payload and its contents were designed to be as rugged as possible, because a failure in the system due to software (e.g., getting stuck in a loop), hardware (e.g., a severed connection), or a combination of both could cause loss of tracking information and most likely the loss of the payload.

Although a general overview of the project has been presented, this paper will focus on the electrical engineering design and implementation aspects of the payload box.

Project Objectives
During a balloon's ascent to and descent from near-space altitudes (70,000+ feet), the balloon payload will experience a harsh environment: heavy vibrations, low pressures, and extremely cold temperatures. In order to measure the physical phenomena of this environment, a controller, GPS receiver, and other sensors had to be identified that could function properly under the given extreme temperature conditions. A dependable communication system was also required, in order to receive the data, successfully track the balloon, and get real-time environment information. To help ensure the recovery of the payload, the robustness of the system, in terms of both software and electrical connections, was a major consideration.

System Design and Implementation

Environment Measuring & Tracking Devices
A combined temperature and humidity sensor was used to measure conditions at different heights throughout the balloon's journey. The transducer used for this application was the Humirel Relative Humidity (RH)/Temperature module. The humidity sensing range is 1-99% RH with an accuracy of ±3% RH; the temperature sensor has a range of -30˚C to 70˚C. An internal temperature sensor was also used so that the temperatures outside and inside the payload could be compared. This transducer is capable of measuring temperatures from -55˚C to 125˚C with ±5% accuracy. Lastly, an absolute pressure sensor, the MPX5100A, was used on the payload. It is a piezo-resistive transducer capable of measuring pressures in the range of 15-115 kPa.

In order to track the payload, the GPS receiver from the LassenIQ Evaluation Kit was used with an RS-232 serial communication interface.

Radio
The communication system consisted of the portable hand-held amateur radio TH-D7 and a TM-D700A as a base-station receiver. Although both devices are capable of two-way communication, the current system implements a one-way transmission of packets from within the balloon payload, using the hand-held radio to send data to the base-station receiver. The TH-D7 and TM-D700A both contain a Terminal Node Controller for the transmission of data packets. A more detailed description of the data being sent, as well as the packet format, is given later. The required radio transmission power was low: only a quarter of a watt.

Controller & Breakout Board
The main system controller used was the JAVA-based SNAP Module. The module is a 72-pin SIMM board that provides multiple digital I/O, 3 serial ports, and a real-time clock, and supports many different communication protocols that are common for various digital sensors. This module was connected to a STEP+ evaluation board that allowed for the expansion of additional sensors and provided a prototyping area for additional components. The STEP+ board provided an onboard temperature sensor and 4 ADC channels. The GPS and radio devices were also connected to the serial DE-9 connectors provided on the STEP+ board. The STEP+ board was modified in order to satisfy the modular and reliability design philosophy; this was done by placing screw-in terminals in the prototyping area. The screw-in terminals provide a mechanically reliable connection that allows for easy sensor and relay device connections without soldering. One screw-in terminal set was used to connect the four-channel ADC device, which allowed for easy connections to any sensor circuit with an output voltage from 0 to 5 volts at a resolution of up to 16 bits. The second screw-in terminal set was used to provide up to four low-current, 3.3V outputs specifically for driving relay circuits. The sensors connected to the terminals were the previously mentioned temperature/humidity sensor combination (each part taking one channel), the pressure sensor, and the additional temperature sensor. Three of the four relay outputs are used to activate the digital camera, the balloon payload's nylon rope cutter, and the 120 dB siren.

Relay Circuits
The digital camera relay circuit was a very simple design. The only action that this camera needs to perform is capturing images every 15 seconds. For the microcontroller to control this action, wires were soldered across the shutter pins and then connected across a relay.

Another relay was used to activate an electrical cutting device. The cutter is a small box with a nichrome wire inside. A thin nylon rope that holds the balloon to the payload is looped through this box and inside a loop of the nichrome wire. When a voltage is applied across the nichrome wire, current passes through it, causing the wire to heat up considerably. The wire heats up enough for the nylon rope to melt, and in doing so releases the payload from the balloon. A relay is used to switch this voltage onto the nichrome wire when a signal is applied to it from the microcontroller. The microcontroller initiates this signal only when the balloon reaches a specific altitude, as determined by the GPS module; in case the GPS drops out for some reason, a timer is also used to initiate the cutter signal. The altitude takes precedence over the timer, which serves only as a backup in case of a GPS failure.

The last relay used will be for a siren to emit a loud, high pitched sound. The main reason for the siren is to assist in the location of the payload once it has reached the ground after its descent. The controller will activate the siren relay based upon the predicted time it will take for the payload to reach the ground.

The completed payload box, with its various components labeled, can be seen in Figure 1.

Figure 1. 1 - Handheld Radio, 2 - Digital Cameras, 3 - Relay Circuits, 4 - SNAP Microcontroller, 5 - GPS Receiver, 6 - Audible Siren, and 7 - Rope Cutter Box.

Software
The software was written in JAVA and followed the same modular and rugged philosophy. The software system implemented an internal software watchdog to ensure that, in the event an unexpected error caused the system to lock up, the system would be reset. This again followed the philosophy of robustness. The pseudocode for the system is the following:

Initialization:
  Open GPS serial port.
  Open radio serial port and put it in converse mode.
  Initialize ADC device.
  Initialize 1-wire internal temperature device.
  Start periodic camera-shutter activation timer for every 15 seconds.
  Start single-shot secondary wire-cutter activation timer for a time past the expected altitude cut-down.
  Start single-shot siren activation timer for a time before the balloon is expected to touch ground.
  Start watchdog timer with a 1 minute timeout.

Main loop:
  Feed watchdog timer and reset its timeout.
  Read and store most current GPS data: altitude, latitude, and longitude.
  Read and store most current internal temperature.
  Read and store all four ADC channel voltages.
  Format stored data into proper sizes to be placed into the communication packet.
  Ensure that data is valid and of proper size for the packet.
  Assemble packet, starting with a flag character and affixing sensor data after it.
  Calculate CRC16 checksum on current packet, then append the result to the end of the packet.
  Transmit packet through radio on serial port.
  Store transmitted packet internally in onboard memory.
  Wait 12 seconds and repeat main loop.

Timers:
  The camera, siren, and wire-cutter timers run on separate threads, concurrently with the main system. Each activates when its corresponding activation threshold has been reached. An internal data log file records when each activation occurred.
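The flight code is JAVA on the SNAP module; as a language-neutral illustration, here is a minimal Python sketch of the software watchdog pattern from the pseudocode above (the reset action is a placeholder):

    import os
    import threading

    class SoftwareWatchdog:
        """If the main loop fails to feed the watchdog within the
        timeout, the system is reset."""
        def __init__(self, timeout_s=60.0):
            self.timeout_s = timeout_s
            self._timer = None
            self.feed()

        def feed(self):
            # Cancel the pending reset and re-arm the one-minute timer.
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self.timeout_s, self._expired)
            self._timer.daemon = True
            self._timer.start()

        def _expired(self):
            # Placeholder reset action; the real module reboots itself.
            os._exit(1)

The main loop simply calls feed() once per iteration; any hang longer than the timeout triggers the reset.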

Data Transmission Packet Format
The packet format can be seen in Table 1. A flag character is sent first, consisting of one byte. This is followed by the GPS altitude in meters; since the balloon can reach over 25,000 meters, 5 bytes are required to represent the 5 ASCII characters. Next is the latitude, which consists of 9 data bytes in decimal form. Since the number of bytes is kept constant for each variable, the decimal point is not needed: it is removed before transmission and reinserted after reception, leaving the formatted latitude at only 8 bytes. The same is done for each value containing a decimal point, in order to save transmission power by not sending redundant information. The next piece of data is the longitude, which consists of 10 bytes. The first digit of the longitude is always 0, since the region in which the balloon will be tracked never changes that value, so it is removed as well. After removing that leading zero, as well as the decimal point, the formatted longitude is only 8 bytes long. The internal temperature sensor is a 1-wire device whose temperature is given in degrees Celsius. Since the temperature can go below 0, the value is converted to absolute temperature to eliminate the need to represent negative values; with its fixed decimal point removed, this leaves a 4-byte data value. The next 4 pieces of data come from the 4 ADC channels and are values ranging from 0 to 5 volts with a precision of up to three decimal places; removing the decimal point leaves these values at 4 bytes as well (this data is converted to the physical parameter it measures on the receiving side). Lastly, a CRC16 checksum is calculated over the entire packet up to this point. This value is at most 4 bytes and is appended to the packet.

Table 1. Sensor data formatted into packet.

Data:        Flag | Altitude | Latitude  | Longitude  | Temp. (Int.) | Temp. (Ext.) | Humidity (Ext.) | Temp. | Pressure | CRC16
# of Bytes:  1    | 5        | 8         | 8          | 4            | 4            | 4               | 4     | 4        | 4
Data Read:   x    | aaaaa    | bbbb.cccc | ddddd.eeee | ff.ff        | g.hhh        | i.jjj           | l.mmm | n.ooo    | pppp
Format Sent: F    | aaaaa    | bbbbcccc  | ddddeeee   | ffff (+273)  | ghhh         | ijjj            | lmmm  | nooo     | pppp
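A sketch of the packet assembly and checksum, following Table 1's fixed field widths (the original system is JAVA; the CRC16 variant shown, CCITT with polynomial 0x1021, is an assumption, since the paper does not name one, and positive coordinate values are assumed to fit the fixed widths):

    def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
        """Bitwise CRC16 (CCITT polynomial 0x1021; assumed variant)."""
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
                crc &= 0xFFFF
        return crc

    def build_packet(alt_m, lat, lon, t_int_c, adc):
        """Fixed-width ASCII packet per Table 1: flag, altitude, lat/lon
        with decimal points stripped, internal temperature in absolute
        degrees, four ADC readings, then the CRC16 in hex."""
        body = (b"F"
                + b"%05d" % round(alt_m)
                + (b"%09.4f" % lat).replace(b".", b"")        # bbbbcccc
                + (b"%010.4f" % lon)[1:].replace(b".", b"")   # drop leading 0
                + (b"%05.1f" % (t_int_c + 273)).replace(b".", b"")
                + b"".join((b"%05.3f" % v).replace(b".", b"") for v in adc))
        return body + b"%04X" % crc16_ccitt(body)

    # Example: build_packet(25123, 41.1234, 81.4321, 22.5,
    #                       [3.123, 1.456, 4.001, 2.345])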

Conclusion
The Intelligent Balloon system was designed to be as modular as possible while simultaneously providing a rugged solution, both in terms of its electrical connections and its software architecture. The electrical connections for the various sensors and relay circuits were made through screw-in terminals that provided solid, easily swappable connections. The software was written to provide a robust system that stores packets internally in the event that data packets are missed, allowing for recovery once the payload is retrieved. The software also provided a safeguard against lock-ups by implementing a watchdog timer. Overall, the system was successfully designed and is, from an electrical standpoint, considered a rugged solution.

Teaching Science and Using the Sheltered Instruction Observation Protocol (SIOP) Teaching Strategies

Student Researcher: Trudy Wilson-Simmons

Advisor: Dean Jane A. Zaharias

Cleveland State University College of Education

Abstract
One of the reasons that America's standardized test scores in science have been declining is that our schools have seen an increase in English as a Second Language (ESL) students since the year 2000. These students are less than proficient in English. For years, they have been put into the mainstream of education and expected to sink or swim. Most classrooms have an "English only" policy (meaning that the students are expected to speak English in the classroom). This is a very difficult task for some students, for many reasons. The responsibility lies with educators to help ESL students bridge the gap between their language barriers and their academic success. Why? Because when the Ohio Department of Education administers the Ohio Achievement Test (OAT) or the Ohio Graduation Test (OGT), the first thing it looks at is the overall performance of each school district. Eventually it checks the status of ESL students and students with learning disabilities, but not until the proficient and non-proficient scores are publicized.

Project Objective
The objective of this lesson is to use the Sheltered Instruction Observation Protocol (SIOP) teaching method (e.g., using flashcards for better memorization, using English and Spanish words on the flashcards as a scaffolding aid, using acronyms for memorization, etc.) to teach science to a select group of students, and to observe whether there is a noticeable increase in the students' learning and retention skills.

Method
I am presently assigned to Cedarbrook Middle School in Painesville, Ohio. Cedarbrook serves a very diverse population of students: 33.7% Hispanic, 24% African American, 14.7% Multi-Racial, 1% Asian, and 27.5% Caucasian; of that racial make-up, 79.7% are economically disadvantaged.

The students in my research class are all 7th grade students. Originally this research project was geared toward Hispanic ESL students, but rather than omitting any of the students, the study was conducted with the students available in the room. Hence, my study contains a mixture of students with different learning abilities and nationalities.

The Lesson Plan for Unit One: The Solar System (Language Arts Across the Curriculum), 9:30-10:15

Goals/Objectives (each day): Students will learn study strategies that will allow them to access and retain information, scaffold instruction, and promote higher order thinking skills.

Standards/Benchmarks: Language Arts, AP 1, 5, 7, 8; MS 1, 4, 6; and RA 1.

Monday, March 24th (Pretest):
• Teacher will explain to students the objective of this lesson.
• Students will engage in a volunteer pretest for the unit.

Tuesday, March 25th (Modeled Instruction - Guided Reading & Note Taking):
• Teacher will model reading and guide students through the reading process.
• Students will be given vocabulary flashcards using English and Spanish words.
• Students will follow along with the story and summarize any additional information wanted on the back of their flashcards.

Wednesday, March 26th (Independent Instruction):
• Students will review vocabulary words using their flashcards.
• Students will work independently or with a partner to review the cards.
• Students will view the NASA educational website to research more information on the solar system.

Thursday, March 27th (Learning Games):
• Students will engage in a board game that will teach the vocabulary words.
• Students will engage in an activity matching pictures and definitions to the flashcards.

Friday, March 28th (Posttest):
• Students will be administered a posttest.

Results Obtained
The data collected was a pretest, a posttest, and journal notes (reflections) kept throughout the research process. At the beginning of the study, there were a total of ten students; however, student number eight dropped out. The pre- and posttest have a raw score of 26. As a class, the students got an average of 10.6 questions correct (40.77% correct) on the pretest, and 17.3 questions correct (66.54% correct) on the posttest. The students had an overall improvement of 25.77% (keep in mind that students 6-9 are not proficient in English: speaking, writing, or listening). The pre- and posttest results are as follows:

[Chart: PRE TEST 1 / POST TEST 1 raw scores (out of 26) for Students 1-10; Series 1 = pretest, Series 2 = posttest.]

Acknowledgments
Thanks to Dean Jane Zaharias of Cleveland State University for being a role model for students and teaching us how to aim for our goals. Thanks to Dr. Scott Sowell for extending his expertise in advice and direction for all of his students. Also, a special thanks to Dr. Ronald Beebe and Diane Corrigan, Cleveland State University (EDB 511, Classroom Inquiry) teachers, and Principal Denise Ward at Cedarbrook Middle School in Painesville, Ohio, for making it possible for me to work with the students.

Reference
1. Finnegan, Ivan (2007). The Solar System. San Francisco, CA: The Five Mile Press Pty Ltd.

Biped Robot Actuated By Shape Memory Alloys

Student Researcher: Michael J. Zimcosky

Advisor: Dr. Mohammad Elahinia

The University of Toledo Mechanical Engineering Department

Abstract
There is a need for an active ankle foot orthosis (AAFO) that more closely mimics human gait for patients with drop foot. Drop foot is a neuromuscular disorder which results in a loss or reduced function of the ankle's dorsiflexor muscles (the muscles which point the toe upward). Current AAFOs are powered by DC motors or pneumatic actuators, which are bulky and limit the range of motion of the AAFO. The University of Toledo Dynamic and Smart Systems Lab is developing an AAFO which will be actuated by shape memory alloy (SMA) wire. The advantages of an SMA actuator include its high force-output-to-mass ratio (80,000:1), large recoverable strains of up to 10%, and a stress-strain relationship similar to that of muscle tissue.

The first and current step in the development of a SMA actuated AAFO is to create a biped robot actuated by SMA wires. The biped will be used to develop and validate a SMA actuator and its control logic, which will then be applied to the AAFO.

Project Objectives
The objective of this project is to build a biped robot which has a knee joint actuated by SMA wires, and is able to walk continuously on flat terrain. This will include developing an actuator for the knee joint and the control logic for the biped.

Methodology Used
SMAs are used for two different functions: the shape memory effect and the pseudo-elastic effect (also called superelasticity). Both effects are the result of a solid-state phase transformation between martensite and austenite. The shape memory effect allows an SMA to be plastically deformed, up to 10% of its original length, and then recover the entire deformation when heat is applied. The SMA is originally in a twinned martensite phase, which has a relatively low modulus of elasticity. When stress is induced, the SMA plastically deforms into a deformed martensite phase, and will remain deformed until the temperature of the material rises above the austenite transformation temperature (ATT). Once the SMA is heated to the ATT, a stress is produced in the material, causing it to contract to its original length and transform into an austenite phase. When heat is removed and the material drops below the ATT, the SMA returns to a twinned martensite phase (see Figure 1). The pseudo-elastic effect allows the SMA to be used as a spring which deforms (up to 10%) along a stress-strain curve that plateaus at a known stress. When this stress is reached, the material can continue to be deformed with no increase in external force. This effect results when the SMA's ATT is below the ambient temperature. The pseudo-elastic SMA is originally in an austenite phase, which has a relatively high modulus of elasticity. As deformation increases, stress also increases, proportional to the modulus of elasticity. When the deformation reaches 1-3% of its original length, the material transforms from the harder austenite phase to a softer martensite phase, allowing the material to deform without an increase in stress (see Figures 2 and 3).

The knee joint actuator will consist of a shape memory effect SMA wire as the flexor (causing the lower leg to rotate about the knee) and a pseudo-elastic SMA wire as the extensor (returning the lower leg to vertical). The flexor wire will be attached 0.5 in below the knee and anchored to the top of the thigh, resembling the hamstring in the human leg, while the extensor wire is mounted in a similar fashion on the front of the leg, resembling the quadriceps. When electrical current is run through the flexor wire, its temperature increases beyond the ATT, creating an internal stress which causes the wire to contract. The external force caused by this stress creates a moment about the knee, causing the lower leg to rotate. This rotation elongates the extensor wire, transforming it into martensite and producing a lesser moment in the direction opposite that of the flexor wire. When current is removed from the flexor wire, its temperature falls below the ATT, returning it to a martensite phase. The moment about the knee from the extensor is now greater than that of the flexor, causing the lower leg to rotate back to vertical. (See Figure 4.)

The biped's movement is being considered in the sagittal (side) plane only. To keep the robot stable in the frontal plane, a four-legged design is being used. The two inner legs rotate in sync with each other, as do the two outer legs. This causes the system to act as two legs, similar to a person walking with crutches. The biped's structural design is based on McGeer's passive dynamic biped robot, which is famous for its energy-efficient gait cycle and its ability to descend an inclined plane actuated only by gravity. Modifications to McGeer's design will be made for weight reduction and for mounting the actuators. A DC servo motor will be used to actuate the hip joint. (See Figure 5.)

A closed loop controller will be developed in Simulink to control the biped. Feedback will come from optical encoders mounted on each of the knees, as well as from the servo motor at the hip.

Results Obtained
Currently a single leg of the robot has been constructed to test the SMA actuator before the biped is assembled. A controller has been developed to cycle knee flexion in a sinusoidal path, based on feedback from the optical encoder at the knee joint. Figure 6 shows the current capability of the SMA actuator.
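The control law itself lives in Simulink; as a language-neutral illustration, a minimal sketch of such a closed loop might look like the following (gains, amplitude, period, and current limit are placeholder assumptions, not the tuned values used on the leg):

    import math

    class PID:
        """Textbook PID loop; gains are placeholders."""
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_err = 0.0

        def update(self, error, dt):
            self.integral += error * dt
            deriv = (error - self.prev_err) / dt
            self.prev_err = error
            return self.kp * error + self.ki * self.integral + self.kd * deriv

    def knee_reference(t, amplitude_deg=30.0, period_s=4.0):
        """Sinusoidal knee-flexion reference angle in degrees."""
        return amplitude_deg * 0.5 * (1 - math.cos(2 * math.pi * t / period_s))

    def control_step(pid, t, measured_deg, dt, i_max=2.0):
        """One iteration: PID on the encoder error, clipped to [0, i_max]
        amps because heating can only contract the flexor wire; cooling
        and the pseudo-elastic extensor return the leg."""
        u = pid.update(knee_reference(t) - measured_deg, dt)
        return min(max(u, 0.0), i_max)

The one-sided current limit reflects the actuator's asymmetry: the SMA can pull but not push, which is also one plausible contributor to the phase delay noted below.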

Significance and Interpretation of Results
There is a 1-2 second phase delay when the knee flexion changes direction, but it is unknown if this will adversely affect the operation of the biped. Installation of the actuator on the robot is the next step.

Figures/Charts

Figure 1. Shape Memory Effect (University of Alberta, http://www.cs.ualberta.ca/~database/MEMS/sma_mems/sma.html).

Figure 2. Pseudo-Elastic Effect (University of Alberta, http://www.cs.ualberta.ca/~database/MEMS/sma_mems/sma.html).

Figure 3. Pseudo-Elastic Stress-Strain Curve (www.nitisurgical.com).

Figure 4. Actuator Configuration (flexor and extensor wires and the resulting actuator moment).

Figure 5. Biped based on McGeer's Robot.

Figure 6. Knee Flexion.