NASA/OHIO SPACE GRANT CONSORTIUM

2010-2011 ANNUAL STUDENT RESEARCH SYMPOSIUM PROCEEDINGS XIX

Final Launch of STS-135 Atlantis, July 8, 2011 NASA Kennedy Space Center

April 8, 2011 Held at the Ohio Aerospace Institute Cleveland, Ohio

TABLE OF CONTENTS Page(s)

Table of Contents ...... 2-8
Foreword ...... 9
Member Institutions ...... 10
Acknowledgments ...... 11
Agenda ...... 12-14
Group ...... 15
Other Symposium ...... 16-20

Student Name College/University Page(s)

Allen, Joshua E...... Wilberforce University ...... 21-22 Live Video Using Qt with C++

Allison, Jennifer E...... Lakeland Community College ...... 23-25 Natural Convection and Evaporative Cooling of Containment

Ash, Stephanie D...... Ohio Northern University...... 26-27 Computational LEED Analysis of the (1x1) Structure of Au(111)

Balderson, Aaron M...... Marietta College ...... 28-29 Methane Invasion into Freshwater Aquifers in Susquehanna County, Pennsylvania

Barker, Sydney M...... University of Cincinnati ...... 30-32 Intelligent Algorithms for Maze Exploration and Exploitation

Barnes, Caleb J...... Wright State University ...... 33-40 A High Resolution Spectral Difference Method for the Compressible Navier-Stokes Equations

Baylor, Brandon S...... Marietta College ...... 41-44 A Look into Marcellus Shale Completion

Bendele, Dean T...... Marietta College ...... 45-48 Enhanced Drilling and Fracturing Techniques for Horizontal Well Stimulus

Bennett, Heather M...... University of Cincinnati ...... 49-51 Rock Out: The Rock Cycle on Earth and Moon

Benton, Melissa R...... The University of Toledo ...... 52-54 Evaluating the Potential for Thermally Enhanced Forward Osmosis

Blake, Adam M...... Wright State University ...... 55-57 Developing Lithium Ion Powered Wheelchair to Facilitate Student Learning

Bodnar, Karin E...... The University of Akron ...... 58-60 The Effects of Electromagnetic Radiation on the Galvanic Corrosion of Metals

Bradford, Robyn L...... Central State University ...... 61-64 A Study of Concentrated and Distributed Ballast Weight on a Racing Shift Kart Chassis


Bridges, Royel S...... Wilberforce University ...... 65-66 NRC Regulation of Medical Uses of Radiation

Brinson, Tanisha M...... Wilberforce University ...... 67-68 How Can Multicast Packets Be Used To Pass Across A Virtual Switch Without Data Loss?

Bryant, Rachel L...... Wright State University ...... 69-70 Wireless Charging System

Carter, Jeffery W...... Ohio Northern University...... 71-75 CFD Analysis of Wind Tunnel Blockage

Chan, Catelyn H...... Cuyahoga Community College ...... 76-77 Effects of a Centralized Electronic Health Record

Charvat, Robert C...... University of Cincinnati ...... 78-83 The SIERRA Project: Unmanned Aerial Systems for Emergency Management

Cobb, Katherine M...... University of Dayton ...... 84-87 Investigation of Small Ring Carbamates and Thioncarbamates and Analysis of Moringa Oleifera Extract

Coburn, Kimberly M...... The University of Toledo ...... 88-91 The Village of Ottawa Hills Storm Sewer Mandates

Cosby, Lauren E...... University of Dayton ...... 92-94 Fabrication of Nanostructured Sensors for Detection of Volatile Organic Compounds

Cross, Devin M...... The University of Akron ...... 95-98 Sustainable Solar-Powered Fixed Wing Flight

Croston, Michael E...... The University of Akron ...... 99-100 SAE Baja Floatation and Maneuverability on Water

Dandino, Charles M...... University of Cincinnati ...... 101-104 Counter-Spinning Carbon Nanotube Yarns

Daniels, Jesse E...... Central State University ...... 61-64 A Study of Concentrated and Distributed Ballast Weight on a Racing Shift Kart Chassis

Dawson, Alexander J...... Miami University ...... 105-108 Experimental Investigation of Magneto-Rheological Based Elastomers Based on Hard Magnetic Fillers

Day, Jason H...... The University of Toledo ...... 109-111 Spectroscopy: Exploiting the Interaction of Light and Matter

DeChellis, Danielle M...... Youngstown State University ...... 112-113 Lunar Math

Dehner, Gina M...... University of Cincinnati ...... 114-115 The Night Sky


DiBenedetto, Joseph M...... Ohio University ...... 116-117 Commercial UAV Autopilot Testing and Test Bed Development

Edwards, David M...... Ohio University ...... 118-119 Evaluation of a High-Cost Autopilot and Certificate of Authorization Process

Edwards, Kristen D...... Central State University ...... 120-121 A Load Balancing, Scalable Parallel Algorithm for Simulating the Control of Steady State Heat Flow Through a Metal Sheet

Flateau, Davin C...... University of Cincinnati ...... 122-125 Observing Transiting Extrasolar Planets at the University of Cincinnati

Fleming, Michelle K...... Youngstown State University ...... 126-128 Biomechanical Investigation: Hip Fracture Repair and Removal of Hardware

Foster, Daniel R. E...... The Ohio State University ...... 129-136 Elastic Constants of Ultrasonic Additive Manufactured Al 3003-H18

Fries, Kaitlin M...... University of Dayton ...... 137-139 Synthesis and Characterization of Polymer Electrolyte Material for High Temperature Fuel Cells

Gambone II, Thomas M...... The University of Akron ...... 140-142 Digital Image Correlation (Dic / Digic)

Gerlach, Adam R...... University of Cincinnati ...... 143-149 Robust 3d Pose Estimation of Articulating Rigid Bodies

Grage, Danielle L...... University of Cincinnati ...... 150-154 Jet Acoustics: The Effects of Forced Mixers and Chevrons

Guzman, Nicole D...... The Ohio State University ...... 155-160 Biophysical and Biochemical Characterization of Microrna Profiling of Breast Cancer Cell- Secreted Microvesicles

Hall, Pierre A...... The University of Akron ...... 161-165 Sustainable Personal Transportation

Haraway, Malcolm X...... Wilberforce University ...... 166-167 The Advantages of Nuclear Energy

Hoffman, James M...... University of Dayton ...... 168-171 Finite Difference Modeling of a Low Cost Solar Thermal Collector

Hudak, Marguerite J...... The University of Akron ...... 172-173 Going Solar! The Power to Generate Energy

Hupp, Marsha E...... Marietta College ...... 174-177 Prevention of Liquid Loading In Horizontal Gas Wells

Hutchinson, Amanda E...... Cleveland State University ...... 178-179 The Great Planetary Debate


Hyden, Kathryn R...... Youngstown State University ...... 180-182 Fiber Metal Laminates

Iliff, Christopher J...... Ohio Northern University...... 183-184 Math and Comets: Discovering the Orbital Period of a Comet

Ismail, Tariq H...... Youngstown State University ...... 185-190 Antacid Tablet Race

Jenkins, Emma B...... The University of Akron ...... 191-199 Inhibition of Amyloid Beta 1-42 Peptide Aggregation

Jennings, Alan L...... University of Dayton ...... 200-204 Memory-Based Open-Loop Control Optimization for Unbounded Resolution

Johnson, Brooke R...... Youngstown State University ...... 205-208 Finite Element Analysis of Soft Biological Tissue with Comparisons to Tensile Testing

Johnson, Phillip E...... Cuyahoga Community College ...... 209-210 Engine Reconstruction to Run On Water

Jones, Melissa A...... Wright State University ...... 211-214 Quantitative Analysis of Haptic Performance Using Human Machine Interaction and Multi-Task Performance Model

Kecskemety, Krista M...... The Ohio State University ...... 215-222 Investigation Into the Impact of Wake Effects on the Aeroelastic Response and Performance of Wind Turbines

King, Kari J...... University of Cincinnati ...... 223-224 Geometry in Rockets

Klepac, Katherine J...... Cleveland State University ...... 225-226 Exploring Space

La Croix, Daniel E...... Cedarville University ...... 227-229 The Creation of a Web-Based Steel and Aluminum Microstructure and Properties Library

Layton, Kara E...... Cedarville University ...... 230-231 Modeling the Solar System

Less, James M...... The University of Toledo ...... 232-233 Cruising With Sir Isaac: Using Newton Cars to Investigate Motion

Lund, Elise J...... Marietta College ...... 234-235 Phases of the Moon

Mbalia, Kamau B...... Central State University ...... 236-237 Solar Powered Water Purification System


McGee, Myron L...... Central State University ...... 238-239 Analysis of Jet Engine Turbine Blade Manufacturing Process Uncertainties for Enhanced Turbine Blade Reliability and Performance

Mendenhall, Leah R...... Marietta College ...... 240-241 What’s the “Scope” About Gyroscopes?

Miracle, Tanya L...... The University of Akron ...... 242-245 Creation of a Super-Hydrophobic Surface on Stainless Steel Using Fluorocarbon Based Organosilane Coatings

Mitchener, Michelle M...... Cedarville University ...... 246-247 Expression of E2F2 during Conjugation in T. thermophila

Morris, Nathaniel J...... Central State University ...... 248-249 Multispectral Sensing and Image Stabilization

Murray, Amy V...... Ohio Northern University...... 250-253 Robotic Football Players

Newsom, Susan M...... Terra State Community College ...... 254-255 Nuclear Fission Power versus Nuclear Fusion Power

Nguyen, Loc P...... Wright State University ...... 256-258 Effect of Ink Formulation and Sintering Temperature on the Microstructure of Aerosol Jet® Printed YSZ Electrolyte for Solid Oxide Fuel Cells

Nyers, Rebecca G...... Cleveland State University ...... 259-260 Evaluating Global Climate Conditions

O’Connor, Thomas P...... Cedarville University ...... 261-263 Rehabilitation Engineering Design Project

Payne, Jillian M...... Cedarville University ...... 264-265 Earth vs. Mars

Phillips, Shannon L...... Cleveland State University ...... 266-267 This is Rocket Science

Piontkowski, Renée D...... Cuyahoga Community College ...... 268-269 Electronic Health Records and Its Impact on the Healthcare Industry

Post, Therese M...... University of Dayton ...... 270-271 NASA - Friend of Education - Re-Introducing Students to NASA

Ragan, William Tyler ...... Marietta College ...... 272-274 Hydraulic Fracture Design in the Marcellus Shale

Richards, Danielle N...... Wilberforce University ...... 275-277 Improvement of Coplanar Grid CZT Detector Performance with Silicon Dioxide Deposition


Rogers, David A...... Ohio Northern University...... 278-280 Path Following Algorithm Development

Rupp, Bradley B...... The University of Toledo ...... 281-284 Synthesis and Characterization of Polycarbonate Nanocomposites Using In-Situ Polymerization

Russell, Allison N...... Cedarville University ...... 285-286 Elemental Light Spectroscopy

Scheidegger, Carré D...... Cleveland State University ...... 287-288 Biogeography-Based Optimization with Distributed Learning

Schmidt, Joel E...... University of Dayton ...... 289-292 Synthesis of FeSb2 Nanorods for Use as Low Temperature Thermoelectric Materials

Seitz, Ciara C...... Cleveland State University ...... 293 Responsive Polymers

Sellar, Jessica L...... Miami University ...... 294-297 Autonomous Golf Cart Project

Shapiro, Daniel K...... Ohio University ...... 298-300 Augmentation of the DME Signal Format for Possible APNT Applications

Slattery, Christopher J...... Ohio Northern University...... 301-304 SAE Aero Competition Improvements

Smith, Bartina C...... University of Dayton ...... 305-308 Removal of a Bittering Agent Potentially Released to Water Supplies: Implications for Drinking Water Treatment

Smith, David M...... Columbus State Community College ...... 309-310 Using Three Dimensional Printing in Product Development

Smith, Matthew G...... Ohio Northern University...... 311-313 The Wing in Ground Effect on an Airfoil

Snyder, Zachary J...... Owens Community College ...... 314-315 Vital Monitoring Systems

Studmire, Brittany M. M...... Cleveland State University ...... 316-317 Optimization of Algae Lipid Measurements and Biomass Recovery

Studmire, Tyra P...... Cleveland State University ...... 318 Biofuels as an Alternative Source for Aviation

Sweeney, Kevin M...... Ohio University ...... 319-321 The Rotation Rate Distribution of Small Near-Earth Asteroids

Sylvester, Jorge A...... The University of Akron ...... 322-323 Zosteric Acid Integrated Thermoreversible Gel for the Prevention of Post-Surgical Adhesions


Tillie, Charles F...... Cleveland State University ...... 324-325 Characterization and Modeling of Thin Film Deposition Processes

Tocchi, Zachary M...... The University of Akron ...... 326-327 Problem Based Learning Approach to Designing Aircraft

Wensing, Patrick M...... The Ohio State University ...... 328-334 Optimization of a High Jump for a Prototype Biped

Williams, Michael D...... Wilberforce University ...... 335 Advances in Technology: Building a Personal Computer

Willingham, Rachael L...... The University of Toledo ...... 336-339 Tensile Testing of Auxetic Fiber Networks and Their Composites

Wolfarth, Ryan A...... Miami University ...... 340-342 Transfer and Storage of High Rate GPS Data

Wukie, Nathan A...... University of Cincinnati ...... 343-345 Bleed Hole Simulations for Mixed Compression Inlets

Yeager, Tara N...... Youngstown State University ...... 346-347 Reaping Rocks

Yeh, Benjamin D...... Cedarville University ...... 348-350 Creep and Subsidence of the Hip Stem in THA

Proceedings may be cited as: Ohio Space Grant Consortium (April 8, 2011) 2010-2011 Annual Student Research Symposium Proceedings XIX. Ohio Aerospace Institute, Cleveland, Ohio.

Articles may be cited as: Author (2011) Title of Article. 2010-2011 Ohio Space Grant Consortium Annual Student Research Symposium Proceedings XIX, pp. xx-xx. Ohio Aerospace Institute, Cleveland, Ohio, April 8, 2011.

FOREWORD

The Ohio Space Grant Consortium (OSGC) is a member of the National Space Grant College and Fellowship Program funded by Congress and administered by NASA Headquarters. The OSGC supports graduate fellowships and undergraduate scholarships for students studying toward degrees in Science, Technology, Engineering and Mathematics (STEM) disciplines at OSGC member colleges or universities. The awards are made to United States’ citizens, and since 1989, more than $4.1 million in financial support has been awarded to over 655 undergraduate scholars and 164 graduate fellows working toward degrees. The students are competitively selected from hundreds of applicants.

Funds for the fellowships and scholarships are provided by the National Space Grant Program. Matching funds are provided by the member universities, the Ohio Aerospace Institute (OAI), and private industry. Note that this year $696,260 will be directed to scholarships and fellowships representing contributions from NASA, Ohio Aerospace Institute, member universities, and industry.

On Friday, April 8, 2011, all OSGC Scholars and Fellows reported on these projects at the Nineteenth Annual Student Research Project Symposium held at the Ohio Aerospace Institute in Cleveland, Ohio. In eight different sessions, Fellows and Senior Scholars offered 15-minute oral presentations on their research projects and fielded questions from an audience of their peers and faculty, and received written critiques from a panel of evaluators. Junior, Community College, Education, and Bridge Scholars presented posters of their research and entertained questions from all attendees during the afternoon poster session. All students were awarded Certificates of Recognition for participating in the annual event.

Research reports of students from the following schools are contained in this publication:

Affiliate Members
•The University of Akron
•Case Western Reserve University
•Cedarville University
•Central State University
•Cleveland State University
•University of Dayton
•Ohio Northern University
•The Ohio State University
•Ohio University
•University of Cincinnati
•The University of Toledo
•Wilberforce University
•Wright State University

Participating Universities
•Marietta College
•Miami University
•Youngstown State University

Community Colleges
•Columbus State Community College
•Cuyahoga Community College
•Owens Community College
•Terra Community College

MEMBER INSTITUTIONS

Lead Institution   Representative
Ohio Aerospace Institute ...... Ms. Ann O. Heyward

Affiliate Members   Campus Representative
Air Force Institute of Technology ...... Dr. Jonathan T. Black
Case Western Reserve University ...... Dr. Jaikrishnan R. Kadambi
Cedarville University ...... Dr. Robert Chasnov
Central State University ...... Dr. Gerald T. Noel, Sr.
Cleveland State University ...... Ms. Pamela C. Charity
Miami University ...... Dr. Tim Cameron
Ohio Northern University ...... Dr. Jed E. Marquart
Ohio University ...... Dr. Roger D. Radcliff
The Ohio State University ...... Dr. Füsun Özgüner
The University of Akron ...... Dr. Craig C. Menzemer
University of Cincinnati ...... Dr. Gary L. Slater
University of Dayton ...... Dr. John G. Weber
The University of Toledo ...... Dr. Lesley M. Berhan
Wilberforce University ...... Dr. Edward A. Asikele
Wright State University ...... Dr. P. Ruby Mawasha

Participating Institutions   Campus Representative
Marietta College ...... Dr. Benjamin H. Thomas
Youngstown State University ...... Dr. Hazel Marie

Community Colleges   Campus Representative
Columbus State Community College ...... Mr. Jeffery M. Woodson
Cuyahoga Community College ...... Ms. Sandy L. Robinson
Lakeland Community College ...... Dr. Frederick W. Law
Lorain County Community College ...... Dr. George Pillainayagam
Owens Community College ...... Ms. Tamara Williams
Terra Community College ...... Dr. James Bighouse

Government Liaisons   Representatives
NASA Glenn Research Center:
- Ms. Dovie E. Lacy
- Mr. James B. Fitzgerald
- Ms. Darla J. Jones
- Dr. M. David Kankam
- Ms. Susan M. Kohler
Wright-Patterson Air Force Base – Research:
- Mr. Wayne A. Donaldson
Wright-Patterson Air Force Base – Education:
- Ms. Kathleen Schweinfurth
- Ms. Kathleen A. Levine

ACKNOWLEDGMENTS

Thank you to all who helped with the OSGC’s Nineteenth Annual Research Symposium!

Ohio Aerospace Institute
Mark Cline
Matthew Grove
John M. Hale
Craig Hamilton
Michael L. Heil
Ann O. Heyward
Deborah Kalchoff
Gary R. Leidy

Evaluators
Edward Asikele
Lesley M. Berhan
Cynthia C. Calhoun
Mark Cline
Ben Ebenhack
Douglas A. Feikema
James H. Gilland
Patricia Grospiron
Craig Hamilton
Albert J. Juhasz
Donald W. Majcher
Jessica Mazzola
Ashlie B. McVetta
Cathy Mowrer
Jay N. Reynolds
Daniela Ribita
Aaron Rood
Benjamin H. Thomas
John G. Weber
George Williams

Campus Representatives

Affiliate Members:
Dr. Jonathan T. Black, Air Force Institute of Technology
Dr. Jaikrishnan R. Kadambi, Case Western Reserve University
Dr. Robert Chasnov, Cedarville University
Dr. Gerald T. Noel, Sr., Central State University
Ms. Pamela C. Charity, Cleveland State University
Dr. Tim Cameron, Miami University
Dr. Jed E. Marquart, Ohio Northern University
Dr. Roger D. Radcliff, Ohio University
Dr. Füsun Özgüner, The Ohio State University
Dr. Craig C. Menzemer, The University of Akron
Dr. Gary L. Slater, University of Cincinnati, Director, Ohio Space Grant Consortium
Dr. John G. Weber, University of Dayton
Dr. Lesley M. Berhan, The University of Toledo
Dr. Edward A. Asikele, Wilberforce University
Dr. P. Ruby Mawasha, Wright State University

Participating Institutions Dr. Benjamin H. Thomas, Marietta College Dr. Hazel Marie, Youngstown State University

Community Colleges Mr. Jeffery M. Woodson, Columbus State Community College Ms. Sandy L. Robinson, Cuyahoga Community College Dr. Frederick W. Law, Lakeland Community College Dr. George Pillainayagam, Lorain County Community College Ms. Tamara Williams, Owens Community College Dr. James Bighouse, Terra Community College

Special thanks go out to the following individuals:
Michael L. Heil and the Ohio Aerospace Institute for hosting the event.
Ann O. Heyward, Ohio Aerospace Institute, for all of her contributions to the OSGC.
James H. Gilland, Ohio Aerospace Institute, for his motivating post-luncheon speech.
Jay N. Reynolds, Cleveland State University, for organizing the Poster Session.
Ohio Aerospace Institute staff whose assistance made the event a huge success!
Silver Service Catering (Scot and Mary Lynne)
Sharon Mitchell Photography

NASA on celebrating 50 years!

2011 OSGC Student Research Symposium
Hosted By: Ohio Aerospace Institute (OAI)
22800 Cedar Point Road • Cleveland, OH 44142 • (440) 962-3000
Friday, April 8, 2011

AGENDA

8:00 AM – 8:30 AM Sign-In / Breakfast / Refreshments / Student Portraits ...... Lobby

8:30 AM – 8:45 AM Welcome and Introductions – Gary L. Slater ...... Forum (Lobby Level) Director, Ohio Space Grant Consortium

8:45 AM – 10:30 AM Oral Presentations – All Senior Scholars and Fellows (105 minutes)
Session 1 (Groups 1, 2, 3, and 4)
Group 1 ...... Forum (Lobby Level)
Group 2 ...... President’s Room (Lower Level)
Group 3 ...... Industry Room A (2nd Floor)
Group 4 ...... Industry Room B (2nd Floor)

10:30 AM – Noon Poster Presentations (90 minutes) ...... Lobby All Junior, Community College, Education, and Bridge Scholars

12:05 PM – 1:00 PM Luncheon Buffet ...... Atrium / Sunroom (Lower Level)

1:00 PM – 1:30 PM "Who Are You Calling an Engineer?" – Jim Gilland ...... Sunroom Senior Scientist/Research Team Manager, Ohio Aerospace Institute

1:30 PM Group Photograph ...... Lobby / Atrium Stairwell

1:45 PM – 3:30 PM Oral Presentations (Continued) – All Senior Scholars and Fellows (105 minutes)
Session 2 (Groups 5, 6, 7, and 8)
Group 5 ...... Forum (Lobby Level)
Group 6 ...... President’s Room (Lower Level)
Group 7 ...... Industry Room A (2nd Floor)
Group 8 ...... Industry Room B (2nd Floor)

3:35 PM Presentation of Best Poster Awards ...... Sunroom

3:45 PM Symposium Adjourns


STUDENT ORAL PRESENTATIONS SESSION 1 – 8:45 AM to 10:30 AM (105 minutes)

Group 1 – Mechanical Engineering/ Group 2 – Mechanical Engineering Aerospace Engineering

FORUM (AUDITORIUM – LOBBY LEVEL) PRESIDENT’S ROOM (LOWER LEVEL) Evaluators: Lesley Berhan, Douglas Feikema, Evaluators: Jim Gilland, Ashlie McVetta John Weber, and George Williams

Mechanical Engineering: 8:45 Charles M. Dandino, Senior, UCincinnati 8:45 Jeffrey W. Carter, Senior, Ohio Northern 9:00 James M. Hoffman, Senior, UDayton 9:00 Michelle K. Fleming, Senior, Youngstown State 9:15 Brooke R. Johnson, Senior, Youngstown State 9:15 Thomas P. O'Connor, Senior, Cedarville 9:30 Daniel E. La Croix, Senior, Cedarville 9:30 Jessica L. Sellar, Senior, Miami U 9:45 Daniel K. Shapiro, Senior, Ohio U Aerospace Engineering: 10:00 Rachael L. Willingham, Senior, UToledo 9:45 Danielle L. Grage, Senior, UCincinnati 10:15 Caleb J. Barnes, MS 2, Wright State 10:00 Robert C. Charvat, MS 1, UCincinnati 10:15 Adam R. Gerlach, Doctoral 1, UCincinnati

Group 3 – Electrical Engineering/ Group 4 – Chemical Engineering/ Robotics, Electrical, and Computer Engineering Chemical and Biomolecular Engineering

INDUSTRY ROOM A (2ND FLOOR) INDUSTRY ROOM B (2ND FLOOR) Evaluators: Edward Asikele, Cynthia Calhoun Evaluators: Don Majcher, Daniela Ribita

Electrical Engineering: Chemical Engineering: 8:45 Karin L. Bodnar, Senior, UAkron 8:45 Lauren E. Cosby, Senior, UDayton 9:00 Royel S. Bridges, Senior, Wilberforce 9:00 Emma B. Jenkins, Senior, UAkron 9:15 Amy V. Murray, Senior, Ohio Northern 9:15 Tanya L. Miracle, Senior, UAkron 9:30 Danielle N. Richards, Senior, Wilberforce 9:30 Bradley B. Rupp, Senior, UToledo 9:45 Ryan A. Wolfarth, Senior, Miami U 9:45 Joel E. Schmidt, Senior, UDayton

Robotics, Electrical, and Computer Engineering: Chemical and Biomolecular Engineering: 10:00 Patrick M. Wensing, MS 2, Ohio State 10:00 Nicole D. Guzman, Doctoral 1, Ohio State

Electrical Engineering: 10:15 Alan L. Jennings, Doctoral 2, UDayton


STUDENT ORAL PRESENTATIONS (Cont.)

SESSION 2 – 1:45 PM to 3:30 PM (105 minutes)

Group 5 – Mechanical Engineering Group 6 – Astrophysics/Physics/ Biochemistry/Biology and Chemistry/ Biomedical Engineering

FORUM (AUDITORIUM – LOBBY LEVEL) PRESIDENT’S ROOM (LOWER LEVEL) Evaluators: Douglas Feikema, Jim Gilland Evaluators: Daniela Ribita, George Williams

1:45 Adam M. Blake, Senior, Wright State Astrophysics: 2:00 Alexander J. Dawson, Senior, Miami U 1:45 Kevin M. Sweeney, Senior, Ohio University 2:15 Kathryn R. Hyden, Senior, Youngstown Physics: 2:30 David A. Rogers, Senior, Ohio Northern 2:00 Davin C. Flateau, Senior, UCincinnati 2:45 Christopher J. Slattery, Senior, Ohio Northern Biochemistry: 3:00 Daniel R. E. Foster, Doctoral 2, Ohio State 2:15 Katherine M. Cobb, Senior, UDayton Biology and Chemistry: 2:30 Kaitlin M. Fries, Senior, UDayton Biomedical Engineering: 2:45 Jorge A. Sylvester, Senior, UAkron

Group 7 – Industrial Systems Engineering/ Group 8 – Computer Engineering/ Manufacturing Engineering/Environmental Petroleum Engineering Engineering/Civil Engineering

INDUSTRY ROOM A (2ND FLOOR) INDUSTRY ROOM B (2ND FLOOR) Evaluators: Ben Ebenhack, Aaron Rood, Evaluators: Edward Asikele, Cathy Mowrer Ben Thomas

Industrial Systems Engineering: Computer Engineering: 1:45 Loc Phuoc Nguyen, Senior, Wright State 1:45 Joshua E. Allen, Senior, Wilberforce Manufacturing Engineering 2:00 Thomas M. Gambone, II, Senior, UAkron 2:00 Robyn L. Bradford, Senior, Central State 2:15 Brandon J. Leake, Senior, Wilberforce 2:15 Myron'Tyshan L. McGee, Senior, Central State Environmental Engineering: Petroleum Engineering: 2:30 Kamau B. Mbalia, Senior, Central State 2:30 Brandon S. Baylor, Senior, Marietta College Civil Engineering: 2:45 Dean T. Bendele, Senior, Marietta College 2:45 Kimberly M. Coburn, Senior, UToledo 3:00 Marsha E. Hupp, Senior, Marietta College 3:00 Bartina C. Smith, MS 1, UDayton 3:15 William Tyler Ragan, Senior, Marietta College

April 8, 2011

Welcome Session

OSGC Director Dr. Gary L. Slater (University of Cincinnati) welcomes attendees to the Nineteenth Annual Student Research Symposium.

Oral Presentations—Morning Session

Amy V. Murray (Ohio Northern University) presents “Football Robotics”.
Bradley B. Rupp (The University of Toledo) presents “Synthesis and Characterization of Polycarbonate Nanocomposites Using In-situ Polymerization”.
Danielle N. Richards (Wilberforce University) presents “Improvement of Coplanar Grid CZT Detector Performance with Silicon Dioxide Deposition”.

Adam M. Blake (Wright State University) presents “Developing Lithium Ion Powered Wheelchair to Facilitate Students Learning”.
Robert C. Charvat (University of Cincinnati) presents “SIERRA Program (Surveillance for Intelligent Emergency Response Robotic Aircraft)”.
Thomas P. O’Connor (Cedarville University) presents “Vocational Rehabilitation Design Project”.

Oral Presentations — Morning Session

Brooke R. Johnson (Youngstown State University) presents “Finite Element Analysis of Soft Biological Tissue with Comparisons to Tensile Testing”.
James M. Hoffman (University of Dayton) presents “Use of Thin Liquid Films in a Low Cost Solar Thermal Panel”.
Michelle K. Fleming (Youngstown State University) presents “Biomechanical Investigation: Hip Fracture Repair”.

Danielle L. Grage (University of Cincinnati) presents “Jet Exhaust Aeroacoustics and Correlating Flowfield”.
Alan L. Jennings (University of Dayton) presents “Developmental Learning Applied to Autonomous Robotics”.

Keynote Speaker
Afternoon Session

Kamau Mbalia (Central State University) presents “Solar Powered Water Purification System”.
Davin C. Flateau (University of Cincinnati) presents “Extrasolar Planet Study and Characterization with Precision Photometry”.
Dr. James H. Gilland, Senior Scientist at OAI, keynote speaker, presents “Who Are You Calling an Engineer?”

Kimberly M. Coburn (The University of Toledo) presents “EPA Stormwater Management for Small Municipalities”.
Christopher J. Slattery (Ohio Northern University) presents “SAE Aero Competition Improvements”.
Jorge A. Sylvester (The University of Akron) presents “Novel Anti-cancer Selective Enzyme Inhibitors”.

Poster Presentations

Heather M. Bennett (University of Cincinnati) presents “The Rock Cycle—On Earth and Out of this World”.
Tanisha M. Brinson (Wilberforce University) presents “How Can We Successfully Get Multicast Packets From the Virtual NIC Without Experiencing Increased Latency?”
Jason H. Day (The University of Toledo) presents “Space Spectroscopy”.

Danielle M. DeChellis (Youngstown State University) presents “Lunar Math”.
David M. Edwards (Ohio University) presents “High-Cost Autopilot and Certificate of Authorization Process”.
Malcolm X. Haraway (Wilberforce University) presents “Advantages of Nuclear Energy”.

Elise J. Lund (Marietta College) presents “Phases of the Moon”.
Jillian M. Payne (Cedarville University) presents “What is Necessary for Life on a Planet Other Than Earth?”
Ciara C. Seitz (Cleveland State University) presents “Stimuli Responsive Particles”.

David M. Smith (Columbus State Community College) presents “The Past, Present, and Future of Computers in Design Engineering”.
Zachary J. T. Snyder (Owens Community College) presents “Biomedical Electronics/Computer Electronics”.
Charles F. Tillie (Cleveland State University) presents “Characterization of Chemical Vapor Deposition Processes”.

Poster Presentations

Rachel L. Bryant (Wright State University) presents “Wireless Charging System”.
Devin M. Cross (The University of Akron) presents “Sustainable Solar-Powered Fixed Wing Flight”.
Cedarville scholars Daniel La Croix (left) and Thomas P. O’Connor (right).

(From left to right) OSGC Director Gary L. Slater (University of Cincinnati) and Robert Chasnov (Cedarville University).
(From left to right) Marietta scholars Aaron M. Balderson and Dean T. Bendele, talking to advisor Ben W. Ebenhack.
Lesley M. Berhan (The University of Toledo) and Michelle M. Mitchener (Cedarville University) discuss “Identification of Modifications to Ets2-Responsive Genes in Fibroblasts”.

Junior, Education, and Community College students presenting their posters during the Poster Session.
(From left to right) Robert C. Charvat (University of Cincinnati), Patrick M. Wensing (The Ohio State University), and Alan L. Jennings (The University of Dayton).
Jay N. Reynolds (Cleveland State University) talks with Michael E. Croston (The University of Akron) about his project “SAE Baja Floatation and Maneuverability on Water”.

Aaron M. Balderson (Marietta College) and Jai Kadambi (Case Western Reserve University) discuss Aaron’s project “Marcellus Shale Production”.
Zachary J. T. Snyder (Owens Community College) and Craig Hamilton (OAI) discuss Zachary’s project, “Vital Monitoring Systems”.
OSGC Associate Director, Gerald T. Noel (Central State University) (left) and Edward Asikele (Wilberforce University) (right).

Poster Presentations

Danielle N. Richards (Wilberforce University) browses materials on the OSGC Display.
Heather M. Bennett (University of Cincinnati) and Gerald T. Noel (Central State University) discuss Heather’s poster “The Rock Cycle—On Earth and Out of this World”.
Zachary J. T. Snyder (Owens Community College) discusses “Vital Monitoring Signs” with his advisor, Tekla Madaras.

(From left to right) University of Akron students Devin M. Cross, Michael E. Croston, and Pierre A. Hall discussing a research poster.
(From left to right) Wilberforce University: Michael D. Williams, Brandon J. Leake, Royel S. Bridges, Danielle N. Richards, Edward Asikele (Campus Representative), Malcolm X. Haraway, Joshua E. Allen, and Tanisha M. Brinson.
(From left to right) University of Cincinnati: Robert C. Charvat, Gary L. Slater (OSGC Director), Charles M. Dandino, Danielle L. Grage, and Davin C. Flateau.

“Best Junior Poster” Carré D. Scheidegger Cleveland State University “Biogeography-Based Optimization with Distributed Learning and Intelligence”

“Best Education Poster” Leah R. Mendenhall Marietta College “Gyroscopes”

“Best Community College Poster” Jennifer E. Allison Lakeland Community College “Natural Convection and Evaporative Cooling”

Poster contest winners from left to right: Jennifer E. Allison (Lakeland Community College), Carré D. Scheidegger (Cleveland State University), and Leah R. Mendenhall (Marietta College).

Live Video Using Qt with C++

Student Researcher: Joshua E. Allen

Advisor: Dr. Edward Asikele

Wilberforce University Department of Computer Engineering

Abstract The Data Systems Branch at NASA Glenn Research Center in Cleveland, Ohio, is part of the Facilities Division, one of the largest divisions at the center. The Data Systems Branch supports the facilities by providing software development, maintenance, and upgrades for all the facilities at NASA. Escort is a data acquisition system that was created for NASA about 30 years ago and is used by numerous branches to collect data from their experiments. Because Escort is such an old system, keeping it up with the demands of engineers and researchers would require upgrades costing hundreds of thousands of dollars.

Project Objectives The challenge for my division was to find a replacement for this aging system altogether. Three people in my division were each assigned a different software package with which to develop a prototype display that would replace Escort with savable and reusable customized displays. We had to configure the software to provide:
• Resizable widgets (graphs, text boxes, dials, etc.)
• Customizable fonts (color, text style, size)
• Live video
We also had to determine the cost and license availability of each package.

We evaluated three different software packages:
• LabVIEW Data Systems
• LabDeck
• Qt using C++

I had to create a window that allowed an engineer to view live video. The video window had to be resizable without losing its resolution. A running time stamp, play, pause, stop, rewind, fast-forward, and volume control also had to be embedded.
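A minimal sketch of such a window is shown below. It uses the Qt 5 QtMultimedia classes (QMediaPlayer and QVideoWidget); the actual prototype was built against the Nokia-era Qt available in 2011, so the classes, module layout, placeholder file name, and seek step here are illustrative rather than details taken from the project.

```cpp
// Sketch of a resizable video window with transport controls (Qt 5).
#include <QApplication>
#include <QWidget>
#include <QVBoxLayout>
#include <QHBoxLayout>
#include <QPushButton>
#include <QSlider>
#include <QLabel>
#include <QTime>
#include <QUrl>
#include <QMediaPlayer>
#include <QVideoWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QWidget window;
    QMediaPlayer player;
    QVideoWidget *video = new QVideoWidget;              // resizes with the window
    player.setVideoOutput(video);
    player.setMedia(QUrl::fromLocalFile("sample.avi"));  // placeholder source

    // Transport controls: play, pause, stop, rewind, fast-forward, volume.
    QPushButton *play    = new QPushButton("Play");
    QPushButton *pause   = new QPushButton("Pause");
    QPushButton *stop    = new QPushButton("Stop");
    QPushButton *rewind  = new QPushButton("<<");
    QPushButton *forward = new QPushButton(">>");
    QSlider *volume = new QSlider(Qt::Horizontal);
    volume->setRange(0, 100);
    volume->setValue(50);
    QLabel *timeLabel = new QLabel("00:00:00");          // running time stamp

    QObject::connect(play,    &QPushButton::clicked, [&]{ player.play(); });
    QObject::connect(pause,   &QPushButton::clicked, [&]{ player.pause(); });
    QObject::connect(stop,    &QPushButton::clicked, [&]{ player.stop(); });
    QObject::connect(rewind,  &QPushButton::clicked,
                     [&]{ player.setPosition(player.position() - 5000); });
    QObject::connect(forward, &QPushButton::clicked,
                     [&]{ player.setPosition(player.position() + 5000); });
    QObject::connect(volume, &QSlider::valueChanged,
                     [&](int v){ player.setVolume(v); });
    QObject::connect(&player, &QMediaPlayer::positionChanged,
                     [&](qint64 ms){
                         timeLabel->setText(QTime(0, 0).addMSecs(int(ms))
                                                .toString("hh:mm:ss"));
                     });

    QHBoxLayout *controls = new QHBoxLayout;
    controls->addWidget(play);
    controls->addWidget(pause);
    controls->addWidget(stop);
    controls->addWidget(rewind);
    controls->addWidget(forward);
    controls->addWidget(volume);
    controls->addWidget(timeLabel);

    QVBoxLayout *layout = new QVBoxLayout(&window);
    layout->addWidget(video, /*stretch=*/1);   // video area scales with the window
    layout->addLayout(controls);

    window.resize(640, 480);
    window.show();
    return app.exec();
}
```

Because the video widget sits in a layout with a stretch factor, the video area grows and shrinks with the window, which is the behavior the resizable-display requirement calls for.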

Methodology There was a branch meeting the day I finished, at which each prototype was presented. My branch chief was there along with branch chiefs from other NASA centers. The LabVIEW prototype was presented first; it allowed resizable widgets to be placed on the screen and different fonts, but it did not allow live video or the ability to save a display.

The LabDeck prototype was presented second; it allowed resizable widgets, different fonts, and a savable display, but it did not allow live video. My mentor presented our Qt project, and it met all of the requirements:

• Savable displays
• Resizable widgets
• Customizable fonts
• Live video

We compared the costs and licensing for all three software packages evaluated:

LabVIEW
• $5000 for the software & licensing per computer

LabDeck
• Requires a LabVIEW license

Qt using C++
• Free software download & license

We do not know how long Nokia will support Qt, and if Nokia stops supporting it, it is unclear whether NASA would be able to support it internally.

Results It took about four weeks to finish my video window, and it accomplished all of the requirements. The final decision has not been made yet; it is still being deliberated by the heads at NASA.

Qt is clearly the best option, and the creator of the LabDeck prototype also voiced this after her presentation.

Once a decision is made, work can begin on the replacement of a multi-million dollar system that is out of date and cannot keep up with the cutting-edge needs of today and tomorrow.

Acknowledgments The author would like to thank the Data Systems Branch, Testing Division at the NASA Glenn Research Center.

Natural Convection and Evaporative Cooling of Containment

Student Researcher: Jennifer E. Allison

Advisor: Hiram Reppert

Lakeland Community College Department of Nuclear Engineering Technology

Abstract The Westinghouse AP1000 is a reactor designed to use natural convection and evaporative cooling in the event of a design basis accident to cool the reactor and maintain it at a safe level without any manual intervention. My project will take the temperature data collected from a mock containment structure as it was heated and then cooled following the parameters of this reactor. I will analyze the data and determine the pros and cons of this type of cooling and how it best fits into the commercial nuclear power industry.

The Westinghouse AP 1000 reactor is a two-loop pressurized water reactor based on the AP 600 design. It can produce approximately 1154 megawatts of power. The design was modified in order to have a safer and more efficient plant. The AP 1000 is also the “first Generation III + reactor to receive final design approval” (Wikipedia). Part of the design includes a passive safety system that relies on natural convection to cool the reactor in the case of a design basis accident. The AP 1000 “plant is designed to achieve and maintain safe shutdown condition without any operator action and without the need for ac power or pumps” (Westinghouse). With this type of design there is less chance of error occurring from the failure of an active component in the event of an accident. In fact, for an AP 1000 plant there are, on average, “50% fewer safety-related valves, 80% less safety-related piping, 85% less control cable, 35% fewer pumps, 45% less seismic building volume” (Westinghouse) than in other plants. The water tank that supplies cooling water holds enough water for three days of cooling, and if additional water is needed more can be added to the tank.

Project Objectives The purpose of this experiment was to determine if natural convection and evaporative cooling would be an effective method of emergency cooling on the Westinghouse AP 1000 containment structure. Also, the data collected was used to determine if the temperature inside of containment could be stabilized with this method.

Methodology Used A model of this containment would be constructed with two heaters in the bottom to simulate the heat generated by the nuclear fuel. The heaters will be connected to a dial that allows the heat to be adjusted and controlled. Then the temperature data would be taken at certain time intervals. This data would then be used to calculate the convection transfer from containment. Before beginning the experiment the containment structure should be checked to make sure it is correctly assembled, and the thermocouples should be calibrated. Also make sure that the heaters are plugged in and functioning before they are turned on. Then take data at intervals appropriate to the amount of heat being added.

The experiment had two parts. In Part I heat was applied gradually, starting with one heater at half power and stepping up to both heaters at full power. Once the full heat leveled out, water was added to bring the temperatures back down. The data for Part I was broken up into each level of power on the heater and the cooling effects of the water. Part II took both heaters at maximum and then applied water when the temperature leveled out at a peak. This was to see how the water cooled the containment and to see if the temperature could be maintained using natural convection.

Begin Part I by plugging in one Variac heater, setting it at 60 V output, and turning it on. Monitor the temperatures at various locations, including the exhaust and air intake. Take the temperature readings at steady intervals, about every fifteen minutes. When the temperatures have leveled out, turn the heater output up to 120 V and continue taking readings every fifteen minutes. When the temperatures level out again, add the second Variac heater at 60 V output and continue taking readings.

Once this levels out, turn the second heater to 120 V output and take readings every five minutes. When this has leveled out, add the water and continue reading temperatures every five minutes. This concludes Part I; use the temperature readings to average each level of thermocouples and create a graph showing the increasing and decreasing temperatures. Turn off the heaters and clean up the water when finished.

Part II begins by turning both Variac heaters to 120V, after making sure that all connections are still intact and the heaters are plugged in. Take readings every five minutes until the temperature levels out. After the last reading turn on the water and continue taking readings as the temperature falls. Then average the temperatures at each level of thermocouples and create a graph of the readings. Turn off the heaters and clean up the water when finished.

Use the collected data to calculate the conduction and convection heat transfers in the containment. The height and diameter of the containment vessel and the thickness of the sheet metal will need to be measured before or after the experiment when the metal is cool.
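As a rough sketch of that calculation, the program below estimates the natural-convection loss from the outer wall with the Churchill-Chu vertical-surface correlation and then the conduction temperature drop across the sheet metal that such a heat flow implies. All dimensions, temperatures, and air properties in it are placeholder values, not measurements from this experiment, and the correlation is one common choice rather than a method prescribed by the lab.

```cpp
#include <cmath>
#include <cstdio>

// Back-of-the-envelope estimate of natural-convection heat loss from the mock
// containment wall, plus the conduction temperature drop across the thin
// sheet-metal shell implied by that heat flow. All values are placeholders.
int main()
{
    const double PI = 3.14159265358979;

    // Geometry (placeholders; use the measured height, diameter, thickness).
    const double H = 1.0;                       // m, vessel height
    const double D = 0.5;                       // m, vessel diameter
    const double t = 0.0015;                    // m, sheet-metal thickness
    const double A = PI * D * H;                // m^2, lateral surface area

    // Temperatures (placeholders for averaged thermocouple readings).
    const double Ts   = 350.0;                  // K, outer wall surface
    const double Tinf = 295.0;                  // K, room air

    // Approximate air properties near the film temperature (Ts + Tinf)/2.
    const double nu    = 1.8e-5;                // m^2/s, kinematic viscosity
    const double alpha = 2.5e-5;                // m^2/s, thermal diffusivity
    const double k_air = 0.028;                 // W/(m K)
    const double beta  = 2.0 / (Ts + Tinf);     // 1/K (ideal gas, 1/T_film)
    const double g     = 9.81;                  // m/s^2

    // Churchill-Chu correlation for a vertical surface (one common choice).
    const double Ra = g * beta * (Ts - Tinf) * H * H * H / (nu * alpha);
    const double Pr = nu / alpha;
    const double Nu = std::pow(
        0.825 + 0.387 * std::pow(Ra, 1.0 / 6.0) /
        std::pow(1.0 + std::pow(0.492 / Pr, 9.0 / 16.0), 8.0 / 27.0), 2.0);
    const double h = Nu * k_air / H;            // W/(m^2 K)

    // Newton's law of cooling for the convective loss,
    // then Fourier's law for the drop across the steel shell.
    const double q      = h * A * (Ts - Tinf);  // W
    const double k_st   = 45.0;                 // W/(m K), carbon steel (approx.)
    const double dTwall = q * t / (k_st * A);   // K

    std::printf("h = %.1f W/m^2K, q = %.0f W, wall dT = %.3f K\n", h, q, dTwall);
    return 0;
}
```

In steady state the conduction and convection rates must match, which is why the thin steel wall shows only a fraction-of-a-degree temperature drop in an estimate of this kind.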

Results Obtained In Part I there was a stepping up of the temperatures until water was added and then a steep drop off occurred. This shows that the heat can be maintained at different levels, which in a nuclear reactor would allow the power to be maintained. The steep drop off illustrates how quickly it can be cooled in the event of an accident that would trigger emergency cooling.

In Part II there was a steep climb in temperature until it leveled out, and then a steep fall-off when the water was added. This is consistent with the expectation that dropping water onto containment and allowing natural convection and evaporation to act will cool it. The steep climb came from bringing the heaters to a maximum before applying the cooling water. If more time had been available to observe, the temperature would have leveled off to that of the surrounding atmosphere and been maintained there as long as cooling water was available.

Significance and Interpretation of Results The slight variations in the collected data can be attributed to restarting the heaters on the different days of the lab experiment and from averaging the temperatures taken at the upper, middle, and lower levels of the containment. The averages were taken because the placement of the thermocouples could have been directly over a heater or under where the water entered. The mock containment we used was not a perfect replica but it still illustrated how natural convection is a viable method for cooling containment.

The data clearly show the steep drop in temperature when the water was added to the heated containment vessel, and overall they fit what was expected entering the experiment: that dropping water onto a heated containment would quickly cool it.

The water was not left on for a long enough period of time to completely determine whether the temperature could be maintained at the lower levels. From the temperatures taken it would appear that it was headed in that direction, but I think more data is needed for a positive conclusion. As noted above, other variations in the collected data can be attributed to averaging the temperatures taken at the upper, middle, and lower levels of the containment. In order to really verify the data the experiment should be repeated and probably lengthened. Adding more thermocouples and temperature readings would also help to add accuracy.

This type of cooling is good for the industry because it requires no AC power for the safe shutdown of the plant. Particularly in light of what has happened in Japan this is an excellent feature to have. It also requires fewer safety related components which would mean that there is less chance of failure.

A possible con of this structure is the effect the water will have on any metal inside the containment structure. Because it uses natural convection, moisture in the air will be circulated and, if not properly monitored, could cause some rust.

Currently the United States has seven plants that have requested to build this design, and two have broken ground on construction. The design was approved by the Nuclear Regulatory Commission in January of 2006.

Figures

Figure 1. Part I Temperature Curve

Figure 2. Part II Temperature Curve

Acknowledgments I would like to acknowledge my faculty advisor, Hiram Reppert, for designing the mock containment structure used in this experiment and coming up with the parameters for the temperature readings.

References 1. “AP 1000” en.wikipedia.org. 13 December 2010. Wikipedia. 7 February 2011. 2. “Issued Design Certification-Advanced Passive 1000 (AP 1000), Rev. 15” nrc.gov. 13 December 2010. Nuclear Regulatory Commission. 9 February 2011 3. “Westinghouse AP 1000” ap1000.westinghousenuclear.com. 2011. Westinghouse. 7 February 2011.

Computational LEED Analysis of the (1x1) Structure of Au(111)

Student Researcher: Stephanie D. Ash

Advisor: Dr. Mellita Caragiu

Ohio Northern University Department of Physics and Astronomy

Abstract One of the major outcomes of the study of solid-gas interfaces is the knowledge of the actual position of the atoms in the top-most atomic planes of the solid surface. Among the most successful techniques in the study of solid surfaces kept under ultra high vacuum conditions is low-energy electron diffraction (LEED). The present study investigates the surface of clean gold, cut along the crystallographic plane (111). Computational LEED analysis of experimental data provided by collaborators at Penn State University reveals an unreconstructed Au(111) surface with the main feature being the relaxation of the top-most atomic layers.

Project Objectives Due to the increasing interest in the basic properties of the Au surface, we have undertaken the project of studying the clean Au surface cut along the (111) crystallographic plane, by applying low-energy electron diffraction (LEED). In principle, the surface of a crystal can exhibit relaxation of the top-most atomic layers - associated with a deviation of the interlayer distance from the bulk value, as well as a reconstruction of the outer atomic planes - which would determine a different surface unit cell than in the case of unreconstructed surfaces. The current investigation posed the question of what exactly the modifications of a clean Au(111) surface might be, and in addition planned to obtain the exact geometrical coordinates of the gold atoms in the few top most layers.

Methodology Used A LEED investigation consists of two parts: experimental and computational. A low-current LEED instrument has been used to collect the data. Initially, electrons are sent perpendicular to the surface, and their energy is varied between 100 and 450eV. Some of these electrons are elastically scattered and collected in the form of bright diffraction spots on a screen. The intensity of each spot (beam) is extracted as a function of the energy of the electrons forming that particular beam. Thus, the experimental Intensity versus Energy plots - I(E) - are obtained.

The computational analysis consists of calculating theoretical I(E) curves [1], which correspond to very particular positions of atoms in the surface layers. Each possible arrangement of atoms gives a different set of I(E) curves. These curves are compared with the experimental ones, and only if the theoretical and experimental curves are similar enough does one conclude that the suggested arrangement of atoms matches the real position of atoms in the atomic layers. The similarity between the curves is judged by a reliability factor (R-factor), which should take values as small as possible, i.e., close to 0.2-0.3 for a good match [2].
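For illustration, the routine below computes a Pendry-style R-factor [2] between one experimental and one theoretical I(E) curve sampled on the same uniform energy grid. The variable names, the finite-difference derivative, and the handling of the imaginary inner potential V0i are simplifications for the sketch and are not taken from the Barbieri/Van Hove SATLEED package used in this work.

```cpp
#include <vector>
#include <cstddef>

// Pendry-style reliability factor between an experimental and a theoretical
// I(E) curve sampled on the same, uniformly spaced energy grid.
// v0i is the imaginary part of the inner potential (eV), typically a few eV.
double pendryRFactor(const std::vector<double>& I_exp,
                     const std::vector<double>& I_th,
                     double dE, double v0i)
{
    auto yFunction = [&](const std::vector<double>& I, std::size_t i) {
        // Logarithmic derivative L = I'/I via a central difference.
        double dI = (I[i + 1] - I[i - 1]) / (2.0 * dE);
        double L  = dI / I[i];
        // Pendry Y function: Y = L / (1 + v0i^2 L^2).
        return L / (1.0 + v0i * v0i * L * L);
    };

    double num = 0.0, den = 0.0;
    for (std::size_t i = 1; i + 1 < I_exp.size(); ++i) {
        double ye = yFunction(I_exp, i);
        double yt = yFunction(I_th, i);
        num += (ye - yt) * (ye - yt);
        den += ye * ye + yt * yt;
    }
    return num / den;   // ~0.2-0.3 indicates a good fit, 0 a perfect one
}
```

In practice a value like this is computed for every symmetry-inequivalent beam and every trial geometry, and the structure with the lowest averaged R-factor is retained.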

Results Obtained Two sets of experimental data were analyzed, both coming from the same Au(111) crystal with the only difference consisting in the way the surface was prepared prior to being probed by electrons. In one instance the surface has been annealed to 255ºC for about 30 minutes before the LEED images were acquired (referred to as the annealed sample), as opposed to the other instance in which this step has been omitted (and the crystal will be called unannealed).

The LEED pattern (Figure 1 panel a) points out that the surface is unreconstructed, with a (1x1) unit cell, as illustrated in panel b of the same figure.

Figure 1. a) LEED pattern corresponding to unreconstructed (1x1) Au(111); b) representation of the Au(111) surface, showing layers A, B, and C; the surface unit cell is represented by the grey parallelogram.

The best fit experimental and theoretical I(E) curves for both annealed and unannealed samples are reproduced in Figure 2 (panels a and b, respectively).

Figure 2. Best fit I(E) curves for the (1x1) Au(111) structure; the continuous curves represent the experimental data, while the dashed curves correspond to the calculated I(E). Panel a refers to the annealed sample, and panel b to the unannealed one.

The LEED calculation indicates a relaxation of the first six atomic layers, as shown in Table 1. As an interesting fact, the annealed sample exhibits a slight expansion of the first interlayer spacing, as opposed to a slight contraction of the same interlayer spacing in the case of the unannealed sample.

Table 1. The top five interlayer spacings for the two cases analyzed: annealed and unannealed Au(111).

              Annealed sample   Unannealed sample
d(Au1-Au2)    2.38 ± 0.18 Å     2.33 ± 0.19 Å
d(Au2-Au3)    2.36 ± 0.03 Å     2.35 ± 0.04 Å
d(Au3-Au4)    2.33 ± 0.05 Å     2.33 ± 0.04 Å
d(Au4-Au5)    2.37 ± 0.06 Å     2.35 ± 0.05 Å
d(Au5-Au6)    2.29 ± 0.07 Å     2.32 ± 0.09 Å
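One way to read Table 1 is to express each fitted spacing as a percentage relaxation relative to the ideal bulk spacing. The short program below does this assuming the textbook value d_bulk = a / sqrt(3) with a = 4.08 Å for fcc gold, a number taken from standard references rather than quoted in this paper.

```cpp
#include <cmath>
#include <cstdio>

// Express the annealed-sample spacings of Table 1 as a relaxation percentage
// relative to the assumed bulk Au(111) spacing d_bulk = a / sqrt(3).
int main()
{
    const double a      = 4.08;                 // Angstrom, Au lattice constant
    const double d_bulk = a / std::sqrt(3.0);   // ~2.355 Angstrom for (111)

    const double annealed[] = {2.38, 2.36, 2.33, 2.37, 2.29};   // Table 1, col 1
    for (double d : annealed) {
        double pct = 100.0 * (d - d_bulk) / d_bulk;
        std::printf("d = %.2f A  ->  %+.1f%% vs. bulk\n", d, pct);
    }
    return 0;
}
```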

References 1. A. Barbieri, M. A. Van Hove, private communication - to acknowledge the use of the "Barbieri/Van Hove SATLEED package" and the "Barbieri/Van Hove phase shift package"; 2. J. B. Pendry, J. Physics C 13, 937 (1980).

Methane Invasion into Freshwater Aquifers in Susquehanna County, Pennsylvania

Student Researcher: Aaron M. Balderson

Advisor: Benjamin Thomas, Ph.D.

Marietta College Department of Petroleum Engineering

Three wells operated by Cabot Oil & Gas (Baker #1, Gesford #3 and Gesford #9) were ordered to be plugged by the Pennsylvania Department of Environmental Protection due to natural gas invading freshwater aquifers (Watson). These wells were all located in Dimock Township in Susquehanna County, Pennsylvania (Consent Order), denoted by a predominant, black square on Figure 1, part of the Marcellus Shale play actively being drilled. The gas invasion affected 19 homes (Consent Order) and water tests showed dissolved methane contents of the water to be 50 mg/L (Water Well). Cabot Oil & Gas was required to deliver water to homes temporarily, prepare and implement a plan to check the integrity of the wells, and provide new vent stacks or extend existing vent stacks on the water supplies (Consent Order). In addition to the cost of these actions Cabot Oil & Gas paid $570,000 in settlement of civil and monthly stipulated penalties (Consent Order). A review of the drilling and the completion process for these wells could reveal some possible remedial actions that need to be taken in the future to avoid methane invasion into freshwater aquifers.

Figure 1. Marcellus Shale-Appalachian Basin Natural Gas Play. geology.com, n.d. Web. 4 Apr 2011. . (modified from original version)

The Gesford #3 well was completed on December 16, 2008 (Watson). At this location freshwater was detected at 350 feet (Watson). Pipe was set and cemented, across the freshwater zone, before Cabot Oil & Gas drilled into a gas bearing zone located at 1459 feet (Watson). The zone was expelling 900 million cubic feet of gas per day (Watson). Drilling continued to 1673 feet where pipe was set and cemented (Watson) to provide another string of protection for the freshwater aquifers and to shut in the gas bearing zone. The well was completed into the target zone where a depth of 7058 feet was reached (Watson). Once the production string was set the well began producing (Watson). All wells are designed to have the natural gas flow into the production string and to the surface, not into any adjoining formations to the well bore. Something failed though because gas entered some adjoining formation which resulted in gas invading the freshwater zone. The gas could be from the gas expelled at 1459 feet or it could be gas from the target formation.

During the drilling of the Gesford #9 well, a freshwater zone was encountered at 350 feet (Watson). In order to protect the zone, 857 feet of pipe was set and cemented (Watson). A logging tool was run in the casing that determined cement adequately filled the annulus (Watson). Therefore the freshwater zone was protected by a layer of steel and a layer of cement. Drilling proceeded only to a depth of 1911 feet (Exhibit 1) before the well was ordered to be plugged. A production string was never set because the target formation was never encountered. Furthermore, no natural gas invasion was recorded. Nonetheless, gas was discovered in the freshwater aquifers and therefore could have originated from this well.

Freshwater was discovered at 990 feet while drilling the Baker #1 well, and pipe and cement were set at 1094 feet to protect this zone (Watson). Another pipe was set at 1534 feet, which further protected the water zone (Watson). The first recorded gas show was at 5908 feet in the Mahantango Shale. Drilling continued to a depth of 7450 feet. Cementing problems were encountered but were remediated. A logging tool showed that cement adequately filled the annulus. Despite the efforts to protect the water zone, natural gas still invaded.

These three wells are located in the Marcellus Shale, a gas play that is actively being drilled. All three of these wells were ordered to be plugged, which successfully stopped the gas invasion (Watson). Studying these wells may uncover procedures that need to be implemented to ensure gas invasion does not occur anymore.

References 1. Exhibit 1. 31 Mar. 2011. Cabot Oil & Gas Corporation. 24 May 2010 . 2. Watson, Robert W. Report of Cabot Oil & Gas Corporation's Utilization of Effective Techniques for Protecting Fresh Water Zones/Horizons During Natural Gas Well Drilling, Completion and Plugging Activities. 31 Mar. 2011. . 3. Consent Order and Settlement Agreement. 31 Mar. 2011. Commonwealth of Pennsylvania: Department of Environmental Protection. 15 Dec. 2010 . 4. Water Well Test Data. 31 Mar. 2011. Department of Environmental Protection and Cabot Oil & Gas. .

Intelligent Algorithms for Maze Exploration and Exploitation

Student Researcher: Sydney M. Barker

Advisor: Dr. Kelly Cohen

University of Cincinnati Department of Aerospace Systems

Abstract The purpose of the project is to develop maze exploration algorithms for a multi-agent system, using autonomous robots, that allow the agents to successfully navigate through an array of different mazes based on the game Theseus and the Minotaur. Theseus and the Minotaur is a maze game in which Theseus tries to get to the exit of each maze without being eaten by the Minotaur. For every one move Theseus makes, the Minotaur can make two. The mazes become progressively harder as each maze is completed. The objective of the research is to create a Fuzzy Inference System (FIS) in MATLAB that can be implemented on an autonomous multi-agent system so that the multi-agent system (two robots) can autonomously traverse any maze. The goal is to have the robots simulate the Theseus and the Minotaur game without any human interaction.

Methodology Used The research will begin by first playing the game Theseus and the Minotaur on the computer in order to reveal the Minotaur’s tendencies, weaknesses, and predictability and develop strategies for traversing the maze for Theseus. The expertise gained from playing the game will be used to formulate preliminary inputs, outputs, membership functions, degrees of membership, and rules for the Theseus and the Minotaur FIS. To further refine the inputs, outputs, membership functions, rules, and degrees of membership, an m-file in MATLAB will be written in order to create an interactive maze game resembling Theseus and the Minotaur game. Once all the inputs, outputs, membership functions, degrees of membership, and rule base are finalized, the FIS will be created in MATLAB using the Fuzzy Toolbox function. The FIS will be first tested in MATLAB to make sure that the FIS is fully functional and there are no problems with the components of the FIS as well as to check if any information is missing in the FIS. Once the fuzzy based decision making algorithm is tested and validated, it will be applied to the laboratory mobile robots (LEGO Mindstorms NXT 2.0) which will navigate in representative maze environments built in the lab.

Results Obtained From playing the Theseus and the Minotaur game, I noticed three main weaknesses that the Minotaur has and used those weaknesses to formulate inputs, outputs, membership functions, and rules for the FIS to be used by Theseus. My first set of inputs, outputs, membership functions, and rules was very broad; therefore, the fuzzy system was very large and too complex. Some of the rules incorporated the same inputs and membership functions, and some contradicted each other. I trimmed the number of inputs and created a single output in order to eliminate the repetitive and unnecessary rules and make my system more efficient and less complex. Fifteen interactive mazes were created and tested in MATLAB. The m-file prompts the user to choose where he or she wants Theseus to move next, and the computer calculates the Minotaur’s next two moves. Creating and analyzing the interactive mazes helped to refine the current FIS and make a more functional system. A single m-file was created which contains the FIS created for Theseus and a reactionary heuristic (also used in the interactive mazes) for the Minotaur, since the Minotaur’s moves are predictable. The m-file first prompts the user to choose which of the twenty maze designs created he or she would like to work with. MATLAB presents the requested maze and plots the initial positions of Theseus and the Minotaur. The m-file is set up as a WHILE loop and continues to run the same loop until either Theseus has reached the exit or the Minotaur has caught Theseus. Testing the FIS in MATLAB revealed many problems with my Fuzzy Inference System. The FIS had too many rules in the rule base; as a result, some of the rules overlapped and contradicted each other, which forced the FIS to arbitrarily choose one of the rules when calculating the output, and as a result the output was not correct. Another problem with the FIS was that it was too complex: there were too many inputs to consider when calculating the output.
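Before returning to the FIS itself, it helps to pin down the opponent. The reactionary Minotaur heuristic mentioned above is not spelled out in this report, so the sketch below assumes the rule commonly published for the Theseus and the Minotaur puzzle (two moves per turn, horizontal step toward Theseus preferred, then vertical, otherwise pass), with a placeholder Maze type standing in for however the wall layout is actually stored.

```cpp
#include <cstdio>

struct Pos { int x, y; };

// Placeholder maze: canMove() should consult the real wall layout; here it
// lets every step through so the sketch stays self-contained.
struct Maze {
    bool canMove(Pos, int, int) const { return true; }
};

// One Minotaur turn (two moves) under the commonly published rule:
// step horizontally toward Theseus if no wall blocks it, otherwise step
// vertically toward him, otherwise stay put for that move.
Pos minotaurTurn(const Maze& maze, Pos minotaur, Pos theseus)
{
    for (int move = 0; move < 2; ++move) {
        int dx = (theseus.x > minotaur.x) - (theseus.x < minotaur.x);
        int dy = (theseus.y > minotaur.y) - (theseus.y < minotaur.y);
        if (dx != 0 && maze.canMove(minotaur, dx, 0))
            minotaur.x += dx;                  // horizontal move preferred
        else if (dy != 0 && maze.canMove(minotaur, 0, dy))
            minotaur.y += dy;                  // then vertical
        // otherwise the Minotaur skips this move
    }
    return minotaur;
}

int main()
{
    Maze maze;
    Pos m = minotaurTurn(maze, Pos{0, 0}, Pos{3, 1});
    std::printf("Minotaur moves to (%d, %d)\n", m.x, m.y);  // (2, 0) here
    return 0;
}
```

With the walls filled in, this is the predictable opponent model that the Theseus FIS has to out-plan.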

too many inputs to consider when calculating the output. In order to minimize the complexity of a single FIS, cascade learning was implemented. Cascade learning takes a large system and splits the system up into different sets and sub-sets. In the case of the FIS in this research project, in order to get the desired output (Theseus' move), multiple small, simple fuzzy inference systems were created and then implemented together to get the final output. The m-file was debugged in MATLAB in order to observe which rules in the FIS were overlapping or missing in the rule base, and then a preliminary cascade model was made. The model was revised several times until the final model was created, which contained four cascades. The changes that were made included changing and adding rules, changing the m-file script, changing membership functions, adding inputs, and changing the order of the FIS sets. Apart from making small changes, strategies needed to be developed for Theseus to use when he is close to the maze exit and when he is far from the exit. The three strategies were named the "End Game", "Minotaur Lure", and "Minotaur Trap" strategies. The end game strategy is used by Theseus when Theseus is close to the exit and the Minotaur is far from Theseus and the exit. The end game strategy consists of a simple heuristic that does not require "fuzzy" thinking, but instead follows a straightforward path to the exit. The "Minotaur Trap" strategy is used when Theseus and the Minotaur are close to the exit or Theseus is far from the exit. In both cases Theseus is required to lure the Minotaur from the exit and trap him so that Theseus can safely reach the exit. The "Minotaur Lure" strategy is used when the Minotaur is close to the exit and Theseus is far away from the exit. In this case, Theseus must move towards the exit in a manner that will lure the Minotaur from the exit and into the middle of the maze. Once the Minotaur is lured away from the exit, Theseus implements the Minotaur Trap strategy to trap the Minotaur and then move towards the exit. The final m-file was tested on 20 maze designs and completed all 20 mazes.
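
The WHILE-loop structure described above can be sketched in a few lines of MATLAB. The self-contained version below runs on an open 10-by-10 grid with no interior walls, which is a deliberate simplification: the project's m-file also encodes the maze walls, and Theseus' move comes from the cascaded FIS rather than from the greedy stepToward rule used here for both agents.

% Self-contained sketch of the simulation loop described above, on an open
% 10x10 grid with no interior walls (a simplification of the project's maze
% m-file, which also encodes walls and calls the cascaded FIS for Theseus).
theseus  = [10 1];          % [row col] start positions (illustrative)
minotaur = [1 10];
exitCell = [1 1];

caught = false;
while ~isequal(theseus, exitCell) && ~caught
    theseus = stepToward(theseus, exitCell);      % stand-in for the FIS output
    for k = 1:2                                   % Minotaur moves twice per turn
        minotaur = stepToward(minotaur, theseus); % reactionary heuristic
        caught   = isequal(minotaur, theseus);
        if caught, break; end
    end
end
if caught
    fprintf('Theseus was caught.\n');
else
    fprintf('Theseus reached the exit.\n');
end

function p = stepToward(p, target)
% Move one square horizontally or vertically toward the target.
d = target - p;
if abs(d(1)) >= abs(d(2)) && d(1) ~= 0
    p(1) = p(1) + sign(d(1));
elseif d(2) ~= 0
    p(2) = p(2) + sign(d(2));
end
end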

Creating maze exploration algorithms using fuzzy logic as an approach will be useful in traversing mazes where there are many paths that lead to the solution. By implementing fuzzy logic in the exploration algorithms, the optimal path to the solution can be found. The Fuzzy Inference System will be given the maze design and will use the inputs, outputs, membership functions, and rules to calculate the best solution and output that solution to the multi-agent system: the Theseus robot and the Minotaur robot. The best method for creating an algorithm using fuzzy logic is through a cascade learning concept. By splitting the fuzzy inference system into smaller sets, the entire system is simplified and the output can be calculated more quickly.

Figure 1. Theseus and the Minotaur Maze Game

Figure 2. Fuzzy Toolbox in MATLAB


Figure 3. Maze Design Created in MATLAB

Figure 4. Final FIS with Cascade Learning Tree Model

Acknowledgments The author of this paper would like to extend thanks to Dr. Kelly Cohen, Aerospace Professor at the University of Cincinnati and Project Advisor. Dr. Cohen provided his lab, needed materials, reference reading materials, and guidance for the current work and for future work on the research project. The author would also like to extend thanks to Chelsea Sabo, an Aerospace graduate student at the University of Cincinnati. Chelsea served as the graduate assistant to the research project; she helped get the Fuzzy Inference System started and provided input and guidance throughout the research period. The author would like to thank Pablo Mora for help in writing the code for the interactive mazes. Thanks are extended to Cody Lafountain, who helped correct the programming mistakes in MATLAB. Thanks are given to the McNair Scholars program and the Ohio Space Grant Consortium for the grants that aided in developing the research project.

References
1. Butler, Charles, and Caudill, Maureen, Naturally Intelligent Systems, The MIT Press, Cambridge, Mass, 2000, Chaps. 2-5.
2. Dixon, K. R., Khosla, P. R., and Malak, R. J., "Incorporating Prior Knowledge and Previously Learned Information into Reinforcement Learning Agents," Institute for Complex Engineered Systems Technical Report Series, 31 Jan. 2000.
3. Huser, J., Peters, L., and Surmann, H., "A Fuzzy System for Indoor Mobile Robot Navigation," Fourth IEEE International Conference on Fuzzy Systems, FUZZ-IEEE 95, IEEE, 1995, pp. 83-88.
4. Ishikawa, S., "A Method of Autonomous Mobile Robot Navigation by Using Fuzzy Control," Advanced Robotics, 9th ed., 1995, pp. 29-52.
5. Kosko, B., Fuzzy Thinking: The New Science of Fuzzy Logic, Hyperion, New York, 1993.

A High Resolution Spectral Difference Method for the Compressible Navier-Stokes Equations

Student Researcher: Caleb J. Barnes

Advisors: Dr. George Huang and Dr. Joseph Shang

Wright State University Department of Mechanical and Materials Engineering

Abstract A high-resolution numerical simulation procedure has been developed to model steep-gradient regions such as shock jumps and flame fronts, achieving higher-order resolution using an orthogonal-polynomial Gauss-Lobatto grid and adaptive polynomial refinement. The method is designed to be computationally stable and accurate, with the ability to resolve shock discontinuities, and is capable of solving a system of equations such as the Euler equations for compressible flow. Both a shock tube and a shock-acoustic wave interaction case were tested. A method for removing the Gibbs phenomenon was employed by selectively activating an artificial diffusion term in regions of spurious oscillation. The method's first-order solution was validated by comparison with a first-order Roe scheme solution. Polynomial refinement was applied by uniformly increasing the polynomial degree for each cell to produce higher-order solutions. Refinement was shown to produce solutions approaching the analytical values.

Introduction and Objectives There are several approaches for computationally modeling fluid dynamics. These include the finite difference, finite element, and spectral methods to name a few. Finite element and finite difference methods are frequently used and offer a wide range of well-known numerical schemes. These schemes can vary in terms of computational accuracy but are typically of lower order. If a more accurate solution is desired, it is common practice to refine the mesh either globally or in a region of interest. This can often be a complicated or time consuming process as global mesh refinement will greatly increase the computation time while local refinement requires an elaborate refinement operation. Alternatively, polynomial refinement has been used to improve the solution accuracy and has been shown to converge more quickly than mesh refinement in some cases [1,2]. For finite difference methods, polynomial refinement is performed by including neighboring node values in a higher order polynomial [3]. This can increase the complexity of the scheme especially near the boundaries where nodes do not exist to construct the higher order polynomials. Finite element methods instead increase the number of unknown values within the cell itself to construct a higher order solution [4].

Spectral methods are considered a class of solution techniques that use sets of known functions to solve differential equations [5]. Basic spectral methods solve series expansions of trial functions using the method of weighted residuals (MWR). These trial functions can be truncated to the desired order of accuracy. Either collocation or Galerkin approaches may be used in the method of weighted residuals: collocation uses Dirac delta functions at the collocation points as test functions, while the Galerkin method uses the trial functions themselves as test functions [5].

A number of drawbacks plague the practicality of spectral methods. For instance, spectral methods require a simple domain, and increased resolution can only be obtained by increasing the approximation order. However, computational efficiency greatly decreases as the approximation order is increased. Additionally, increasing the order greatly reduces the size of the time step that may be used [6,7]. To this end, researchers investigated applying spectral methods to globally decomposed domains. Such methods are capable of obtaining a high level of accuracy and converge exponentially [5]. In the past, several spectral schemes have been proposed and demonstrated, such as the spectral element methods [8], multi-domain spectral methods [6,7], spectral volume methods [9-11], and more recently the spectral difference methods [12,13]. The spectral difference method developed by Liu et al. provides a high level of flexibility, accuracy, and efficiency with lower complexity than other advanced schemes such as the discontinuous Galerkin method [14-16].

Huang et al. [1,2] expanded the spectral difference method to treat time in the same fashion as space and implemented a self-adaptation approach to the polynomial refinement procedure. Additionally, a new approach for discontinuity capturing and an implicit discretization were applied. These improvements created a more robust polynomial refinement procedure, sharper discontinuity resolution, and allowed for larger time steps. For the current research, Huang's method is further expanded to a system of equations and solves the Euler equations. A new shock-capturing method for the Euler equations is identified by defining artificial diffusion guidelines and selectively applying the diffusion term where it is needed. Additionally, a new flux definition is introduced at the cell interfaces by upwinding the characteristic values crossing the interface. The objective of the current study is to develop a state-of-the-art scheme for the compressible Navier-Stokes equations that is capable of arbitrarily high order of accuracy and adaptively refines polynomial approximations where needed.

Methodology Used The compressible Navier-Stokes equations may be greatly simplified in certain circumstances by ignoring viscous interactions, reducing the system to the Euler equations: a coupled system of nonlinear hyperbolic partial differential equations often used to model high Reynolds number flows where boundary layer development is considered insignificant compared to the overall flowfield. The Euler equations can be written in conservation form for one dimension as shown in Eqns. 1 & 2. The inviscid Navier-Stokes equations are heavily investigated in the current study because the hyperbolic limit of the compressible Navier-Stokes equations is often the most difficult to tackle [17].

(1)

(2)
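
The images for Eqns. 1 & 2 are not reproduced here. For reference, and assuming the text refers to the standard form, the one-dimensional Euler equations in conservation form can be written as

\[
\frac{\partial U}{\partial t} + \frac{\partial F(U)}{\partial x} = 0,
\qquad
U = \begin{pmatrix} \rho \\ \rho u \\ E \end{pmatrix},
\qquad
F(U) = \begin{pmatrix} \rho u \\ \rho u^{2} + p \\ u\,(E + p) \end{pmatrix},
\]

where the pressure is closed by the ideal-gas relation \(p = (\gamma - 1)\bigl(E - \tfrac{1}{2}\rho u^{2}\bigr)\).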

The equations are cast into delta form by defining . Eqn. 3 is then solved for and substituted into Eqn. 1 producing Eqn. 4. This is known as the delta form [17]. (3)

(4)
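
The delta-form construction of Eqns. 3 & 4 is likewise not reproduced. As a guide only, a generic delta form in the spirit of [17] (the paper's exact equations may differ in detail) defines \(\Delta U^{n} = U^{n+1} - U^{n}\), linearizes the flux as \(F^{n+1} \approx F^{n} + A^{n}\,\Delta U^{n}\) with \(A = \partial F/\partial U\), and solves

\[
\left[\frac{I}{\Delta t} + \frac{\partial}{\partial x}\,A^{n}\right]\Delta U^{n}
  = -\,\frac{\partial F^{n}}{\partial x},
\qquad
U^{n+1} = U^{n} + \Delta U^{n},
\]

so that the left-hand side is handled implicitly for stability while the right-hand side carries the physics, consistent with the description that follows.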

Here, the right-hand side (RHS) is solved explicitly and represents the physics of the original PDE. The left-hand side (LHS) is solved implicitly and discretized for numerical stability. Eqn. 4 is discretized in terms of the center and neighboring stencil points on the LHS, and the RHS is treated as a source term.

(5)

Including a term to account for numerical dissipation, Eqn. 4 becomes Eqn. 6, and the resulting coefficients in Eqn. 5 are represented by Eqns. 7-11.

(6)

(7)

(8)

(9)

(10)

(11)

The coefficients given above are defined using 1st order upwinding for the advective terms and 2nd order central differencing for the diffusive terms. The value of is the correction to each iteration and will go to 0 with convergence along with the RHS. On the RHS, high-order methods are applied to

compute the flux and time derivatives. While high-order methods can be applied in the time direction [1,2], a first-order approximation is used for the time derivative on the RHS in the current work.

The nature of the delta scheme allows the LHS of the above equations to be first-order accurate without affecting the overall accuracy of the solution in order to obtain numerical stability. The overall accuracy is determined by how the RHS of the equation is treated. Here, the real physics is applied and updated in an iterative fashion through changes in the primitive variables. The flux gradient and derivatives of the primitive variables are determined using the methods described later in this paper.

Polynomial Interpolation For the highest order of approximation, every node is included for the solution of each unknown as is done in basic spectral methods. This is accomplished by reconstructing the solution using a series of polynomials shown in Eqn. 12.

(12)

(13)

The shape function can be differentiated as shown in Eqn. 14 and used to replace in Eqn. 12 to approximate the polynomial derivative as shown in Eqn. 15.

(14)

(15)

The shape function derivative values can be represented by a square matrix that depends only on the choice of the roots. Therefore, the coefficient matrix can be calculated in advance and stored as input in order to reduce computation time. The flux derivative term on the RHS of Eqn. 6 is determined using this definition. Higher order differentiation is obtained by consecutively applying the shape function derivative to the previously calculated derivatives as demonstrated in Eqn. 16.

(16)

The choice of the roots is an important consideration, as it uniquely defines the shape function. Any number of distributions can be chosen, including uniformly spaced roots. One difficulty in using polynomial refinement is the occurrence of the Runge phenomenon, which is the result of oscillations at the end points of an interpolation when high-degree polynomials are applied over a region of space [18]. The Runge phenomenon is greatly diminished or eliminated by choosing a non-uniform placement for the unknown values, or roots, and several such distributions exist. For the current study, the Lobatto mesh is a convenient choice because it includes the endpoints of the interval [6,7]. While single-domain systems are useful for solving simple geometry problems, this is often not the case in two- and three-dimensional flow problems. The method described so far is now applied to individual subdomains. The terms cell, subdomain, and local domain are used interchangeably in the current work.
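
A minimal MATLAB sketch of the precomputed derivative matrix described above is given below. It assembles a Lagrange differentiation matrix from barycentric weights; Chebyshev-Gauss-Lobatto points are used only because they have a closed form (the paper's Lobatto mesh may be Legendre-based), and the test function is an arbitrary choice used to check the matrix.

% Differentiation matrix D for Lagrange interpolation through N+1 nodes,
% built from barycentric weights so that D*f approximates f'(x) at the nodes.
% Chebyshev-Gauss-Lobatto points are used for convenience; any Lobatto-type
% (endpoint-including) distribution works the same way.
N = 8;
x = cos(pi*(0:N)'/N);            % Chebyshev-Gauss-Lobatto nodes on [-1,1]

% Barycentric weights w_j = 1 / prod_{k ~= j} (x_j - x_k)
w = zeros(N+1,1);
for j = 1:N+1
    w(j) = 1/prod(x(j) - x([1:j-1, j+1:N+1]));
end

% Off-diagonal entries D(i,j) = (w(j)/w(i)) / (x(i)-x(j)); rows sum to zero.
D = zeros(N+1);
for i = 1:N+1
    for j = 1:N+1
        if i ~= j
            D(i,j) = (w(j)/w(i)) / (x(i) - x(j));
        end
    end
    D(i,i) = -sum(D(i,:));
end

% Quick check: differentiate a smooth test function.
f  = exp(x).*sin(2*x);
df = exp(x).*(sin(2*x) + 2*cos(2*x));
fprintf('max derivative error: %.2e\n', max(abs(D*f - df)));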

Two types of boundary conditions are now necessary due to the multi-domain formulation: global and local. The global boundary conditions refer to the conditions at the global domain edges, which are simply the traditional boundary conditions. The boundary conditions here may be set implicitly or explicitly. The other boundary type is the local boundary condition. This is necessary because each subdomain requires an independent solution procedure. Implicit methods such as the block tri-diagonal solver or Gauss-Seidel iterations can be applied to each cell, but the interaction with the adjacent cells must be accounted for. This is handled by extending the cell's stencil to the first internal node of the adjacent cells on both sides. These external nodes serve as the boundary conditions to the local domain.

Interface Fluxes The benefit of the Lobatto formulation becomes clear when defining flux values at the subdomain interface. The flux terms at the subdomain interfaces need to be treated differently in order to allow waves to propagate through the interface. A number of flux difference and flux vector splitting methods have been proposed and used over the years [3]. Probably the most popular technique is Roe's approximate Riemann solver [19]. An alternative flux splitting method was used to pass the flux values across a cell interface. This method involves calculating the characteristic values on the left and right sides of the cell interface and choosing the upwind characteristic values based on the averaged eigenvalues across the interface.

The characteristic values are determined by calculating the conservative quantities for nodes on the left and right of the cell interface and decoupling the system by multiplying through by the left eigenvectors as shown in Eqn. 17.

(17)

Two nodes are located at the center cell interface because each cell has a node located at the cell edge. This is advantageous because it allows for conservation to be satisfied directly for both polynomials across the interface. A basis for choosing the characteristic value to propagate must be determined once the characteristic values on both sides of the interface have been found. By calculating the average primitive variables across the cell interface, the average eigenvalues at the subdomain boundaries are found and used to choose the upwind characteristic direction. The characteristic splitting method is used to guarantee the fluxes satisfy conservation. In order to choose the upwind characteristic values, Eqn. 17 is modified to Eqns. 18-20.

(18)

(19)

(20)

Eqns. 18 & 19 calculate the left and right characteristic values, while the upwind values are chosen based on the wave speed using Eqn. 20. The resulting fluxes are then computed from the upwinded characteristic values using Eqn. 21, where and are determined using the averaged primitive values.

(21)

The primary difference between the Roe scheme and the upwind characteristic method lies in the fact that the Roe scheme uses the flux values calculated directly on both sides of the interface, whereas the present method decouples the flux to apply upwinding directly to the characteristic variables. A comparison of the first-order solution for both the Roe scheme and the present method will be shown later in the results.
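
A compact MATLAB sketch of this interface treatment for the one-dimensional Euler equations is given below. The example left/right states, the plain arithmetic averaging of the primitive variables, and the numerical inverse used for the left eigenvectors are assumptions made for illustration; the paper states only that averaged primitives are used to select the upwind characteristics.

% Sketch of the upwind-characteristic interface flux for the 1D Euler
% equations.  UL and UR are conserved states [rho; rho*u; E] on either side
% of a cell interface; gam is the ratio of specific heats.  The arithmetic
% averaging of primitives below is an illustrative assumption.
gam = 1.4;
UL  = [1.0;   0.0; 2.5 ];            % example left/right states (Sod-like)
UR  = [0.125; 0.0; 0.25];

prim = @(U) [U(1); U(2)/U(1); (gam-1)*(U(3) - 0.5*U(2)^2/U(1))];  % [rho; u; p]
qa   = 0.5*(prim(UL) + prim(UR));    % averaged primitive variables
rho = qa(1); u = qa(2); p = qa(3);
a   = sqrt(gam*p/rho);               % averaged sound speed
H   = a^2/(gam-1) + 0.5*u^2;         % averaged total enthalpy

lam = [u-a; u; u+a];                 % averaged eigenvalues (wave speeds)
R   = [ 1,       1,       1;         % right eigenvectors (columns)
        u-a,     u,       u+a;
        H-u*a,   0.5*u^2, H+u*a ];
L   = inv(R);                        % left eigenvectors (rows)

wL  = L*UL;  wR = L*UR;              % characteristic values on each side
wUp = wL;  wUp(lam < 0) = wR(lam < 0);   % upwind selection by wave speed

Fint = R*(lam.*wUp);                 % interface flux from upwinded characteristics
disp(Fint.');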

Adaptive Polynomial Refinement The variable nature of the current method opens the door to several benefits provided by polynomial interpolation. That is, the ability to transfer between varying degrees of approximation. If a poor solution is detected in a region, the polynomial order can be increased to enhance the resolution in that cell. The converse is true as well. If a smaller approximation becomes sufficient, the polynomial degree may be reduced locally to save computation time in future time steps. Such a process is known as adaptive polynomial refinement [1,2]. Adaptive polynomial refinement requires the conservative variables to be known either from a previous time step or from the initial conditions and this may be of low order. The known values are used to determine the second derivative using Eqn. 16 which is useful for determining the need for solution refinement. Refinement is determined by comparing the area under each function with a higher degree approximation. If little to no change is found, the polynomial approximation is sufficient. A new value, , will now be defined in order to determine the level of refinement required.

(22)

(23)

The integral is evaluated using Gaussian quadrature for each conservation law. The tolerance is chosen as 0.001, and the integration is carried out over the element. The comparison degree is determined by the initial polynomial degree within the cell and is incremented by two each iteration until Eqn. 23 is satisfied. This process is repeated for each subdomain at every time step, while local polynomial approximations are allowed to vary across the global domain.
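
The refinement loop just described can be sketched as follows in MATLAB: the integral of a stand-in cell profile is recomputed with the polynomial degree raised by two until the change falls below the 0.001 tolerance. Because Eqns. 22 & 23 are not reproduced above, the relative-change stopping test and the test function are illustrative assumptions rather than the paper's literal criterion.

% Sketch of the refinement test: recompute the cell integral with the degree
% raised by two until the change falls below the tolerance.  The function f
% is a stand-in for the reconstructed conservative variable, and the stopping
% test is a plausible reading of Eqns. 22-23, not the paper's literal form.
f   = @(x) exp(-50*(x - 0.2).^2);   % stand-in for a steep in-cell profile
tol = 1e-3;

p    = 3;                           % starting polynomial degree in the cell
Iold = gaussint(f, p);
done = false;
while ~done && p < 99
    p    = p + 2;                   % raise the degree by two, as in the paper
    Inew = gaussint(f, p);
    done = abs(Inew - Iold) <= tol*max(abs(Inew), 1);
    Iold = Inew;
end
fprintf('refined to degree %d, integral %.6f\n', p, Iold);

function I = gaussint(f, p)
% Gauss-Legendre quadrature on [-1,1] with enough points to integrate a
% degree-p reconstruction exactly (Golub-Welsch recurrence).
n    = ceil((p+1)/2) + 1;
beta = 0.5./sqrt(1 - (2*(1:n-1)).^(-2));
T    = diag(beta,1) + diag(beta,-1);
[V,D] = eig(T);
[x,idx] = sort(diag(D));
w    = 2*V(1,idx).^2;
I    = w*f(x);
end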

Discontinuity Capturing Eliminating the Gibbs phenomenon is no trivial task when using high order polynomial reconstructions. The Gibbs phenomenon is the appearance of spurious oscillations when a continuous function is used to model a discontinuity or a very steep regime [20]. Treating the Gibbs phenomenon is an important consideration in solving the Euler equations, as many cases of interest involve discontinuities. Huang et al. (2005) preferred the use of an artificial viscosity term to eliminate Gibbs phenomena over the use of flux limiters [1,2]. The use of a flux limiter reduces the accuracy to 2nd order and can risk the elimination of physical extrema. Alternatively, an artificial viscosity term is introduced to smooth oscillations and is selectively activated for each cell depending on the presence of oscillations in the solution or its derivatives [1,2]. The magnitude of the diffusion is determined based on the flux gradient for Roe's approximate Riemann solver [19] applied to the internal cell node values.

The change of flux across a cell drawn around an internal node using Roe’s method (Eqn. 24) is modified to Eqn. 25 using a series of approximations.

(24)

(25)

Choosing only the maximum magnitude eigenvalue in the Jacobian matrix reduces the coefficient to a scalar value as shown in Eqns. 26 & 27. (26)

(27)

The diffusion coefficient is determined independently for each cell and applied only to the internal nodes in order to prevent diffusion from crossing the cell boundary. Therefore, increasing the diffusion value for each cell will cause the solution to approach the first-order solution for the given number of cells by forcing the polynomial to become linear.

a. Diffusion Limiter A method for detecting Gibbs oscillations is applied in [2] for scalar hyperbolic equations and adapted for the Euler equations in the present study. The diffusion limiter is activated when non-physical oscillations are forming. The formation of non-physical oscillations in a cell solution is detected by comparing the area under the derivative of the conservative variables between two polynomial approximations, as shown in Eqns. 28 & 29. The two parameters, the polynomial degree and the derivative order, are determined empirically. The polynomial degree is generally chosen to be 100, while a derivative order of two is found to be sufficient. The tolerance parameter is chosen as 0.001. The sensitivity of the algorithm may be increased by increasing the derivative order, because higher order derivatives are more sensitive to oscillation. The integral is evaluated using Gauss quadrature on the Lobatto points.

(28)

(29)

The above process is similar in nature to the previous adaptive polynomial process, but there are a few important distinctions between the two. First, the integral comparison is between two significantly different polynomial orders and the derivative order is allowed to vary. Second, the diffusion term is only

applied once at the beginning of each time step. The result of this search simply activates/deactivates the diffusion terms provided in Eqn. 6 for the current time step.

The artificial diffusion term modifies the solution by smoothing the reconstruction between the subdomain end points. Very large diffusion constants approaching infinity drive the reconstruction to a linear function reducing the subdomain solution to first order. However, a lower diffusion constant allows for a smooth reconstruction that suppresses Gibbs oscillations. The derivation provided above is a theoretical guideline meant to ensure the diffusion is at least sufficiently diffusive, but may not be the optimum value. It will be shown later that this guideline is generally overly dissipative and can be adjusted for better performance.

Results Obtained The method was tested on several benchmark cases for performance and accuracy. For the sake of brevity, only the shock tube solution is discussed in the present work because it is one of the most standard test cases for new methods and contains a number of interesting flow phenomena including a shock wave, expansion wave, and contact discontinuity [21].

The upwind characteristic method for conserving flux values at cell interfaces was tested against the Roe scheme and in all cases produced almost identical solutions for simple first order cases as demonstrated in Figure 1. The artificial diffusion term was then incorporated and high order solutions were generated using the definition for the diffusion constant defined in Eqn. 28. The diffusion limiter was tested against constant diffusion solutions and the results are shown in Figure 2. Polynomial refinement is shown in Figure 3 at the shock for polynomial degrees of 9, 29, and 99 for 20 cells. It may be noted that the solution is approaching the analytical values as the polynomial refinement is increased.

It was found that the convergence to the exact solution can be greatly accelerated using a simple scaling factor on the diffusion constant replacing the one half term with a scalar . The solution using 20 cells and 19th degree polynomials is shown for the shock-wave using the nominal p-value of 0.5 and a reduced value of 0.1 in Figure 4. The reduced p-value solution for 19th degree polynomials was then compared to the nominal p-value solution for 99th degree polynomials in Figure 5. It was found that the 19th degree polynomial solution can surpass the 99th degree solution using the simple scaling factor. Finally, the 9th degree polynomial solution using 20 cells is compared to the 1st order solution using 200 cells in Figure 6. The high order scheme produces a solution much closer to the analytical values than the 1st order scheme as expected. It should be noted here that the polynomial order is uniform for the high order solution. A much better result can be obtained with 200 solution points if adaptive polynomial refinement were used allowing lower degree polynomials near smooth regions and higher degree polynomials near the discontinuities.

Significance and Interpretation of Results A new spectral difference method was demonstrated for the inviscid limit of the compressible Navier-Stokes equations which utilizes an alternative conservation scheme at the cell interfaces and a dynamic artificial viscosity term for high-order discontinuity capturing. The upwind characteristic method was shown to match the Roe scheme for first order solutions and is compatible with the current work. The addition of the artificial diffusion term eliminates spurious oscillations produced by high order polynomials at steep gradients and selectively applying the diffusion term produces significantly more accurate results. Polynomial refinement was shown to approach the exact solution. However, the exact solution can be approached more quickly by refining the diffusion term. Currently, the method is capable of outperforming 1st order solutions and provides flexibility for easily refining the solution process. The diffusion term can be better optimized to produce more accurate solutions and the inclusion of adaptive refinement will allow automatic convergence approaching exact solutions.

Figures

Figure 1. First order shock tube solution using Roe scheme vs. upwinded characteristics at t = 0.2 s

Figure 2. Shock tube solution using full diffusion and selective diffusion at t = 0.2 s

Figure 3. Shock tube solution using 9th, 29th and 99th degree polynomial approximations at t = 0.2 s

Figure 4. Shock tube solution for 19th degree polynomial using p values of 0.1 and 0.5 at t = 0.2 s

Figure 5. Shock tube solution using 99th degree polynomial with nominal p value compared with 19th degree polynomial using p = 0.1 at t = 0.2 s

Figure 6. Spectral solution using 20 cells and 9th degree polynomials compared with 1st order solution using 200 cells

Acknowledgments The author would like to thank Dr. Z. J. Wang for providing data for comparison. The generous support of the Ohio Space Grant Consortium is greatly appreciated.

References
1. Huang, P., Wang, Z. J., and Liu, Y., "An Implicit Space-Time Spectral Difference Method for Discontinuity Capturing Using Adaptive Polynomials," AIAA Paper 2005-5255, AIAA, 2006.
2. Huang, P., "High Order Discontinuity Capturing Using Adaptive Polynomials," AIAA Paper 2006-305, AIAA, 2005.
3. Steger, J. L., Warming, R. F., "Flux Vector Splitting of the Inviscid Gasdynamics Equations with Applications to Finite Difference Methods," Journal of Computational Physics, Vol. 40, No. 2, 1981, pp. 263-293.
4. Hughes, T. J. R., The Finite Element Method, Linear Static and Dynamic Finite Element Analysis, Prentice-Hall, Inc., 2000.
5. Gottlieb, D. and Orszag, S., Numerical Analysis of Spectral Methods: Theory and Applications, Society for Industrial and Applied Mathematics, Philadelphia, 1987.
6. Kopriva, D., "A Conservative Staggered-Grid Chebyshev Multidomain Method for Compressible Flows. II. A Semi-Structured Method," Journal of Computational Physics, Vol. 128, 1996, pp. 475-488.
7. Kopriva, D., "A Staggered-Grid Multidomain Spectral Method for the Compressible Navier-Stokes Equations," Journal of Computational Physics, Vol. 143, No. 1, 1998, pp. 125-158.
8. Patera, A., "A Spectral Element Method for Fluid Dynamics: Laminar Flow in a Channel Expansion," Journal of Computational Physics, Vol. 54, No. 3, 1984, pp. 468-488.
9. Wang, Z. J., "Spectral (Finite) Volume Method for Conservation Laws on Unstructured Grids I. Basic Formulation," Journal of Computational Physics, Vol. 178, No. 2, 2002, pp. 210-251.
10. Wang, Z. J., Liu, Y., "Spectral (Finite) Volume Method for Conservation Laws on Unstructured Grids II. Extension to Two-Dimensional Scalar Equation," Journal of Computational Physics, Vol. 179, No. 2, 2002, pp. 665-697.
11. Wang, Z. J., Liu, Y., "Spectral (Finite) Volume Method for Conservation Laws on Unstructured Grids III. One-Dimensional Systems and Partition Optimization," Journal of Scientific Computing, Vol. 20, No. 1, 2004, pp. 137-157.
12. Liu, Y., Vinokur, M., and Wang, Z. J., "Spectral Difference Method for Unstructured Grids I: Basic Formulation," Journal of Computational Physics, Vol. 216, No. 2, 2006, pp. 780-801.
13. Liu, Y., Vinokur, M., and Wang, Z. J., "Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids," Computational Fluid Dynamics 2004, pp. 449-454.
14. Cockburn, B., and Shu, C. W., "TVB Runge-Kutta Local Projection Discontinuous Galerkin Finite Element Method for Conservation Laws II: General Framework," Mathematics of Computation, Vol. 52, No. 186, 1989, pp. 411-435.
15. Cockburn, B., Lin, S. Y., Shu, C. W., "TVB Runge-Kutta Local Projection Discontinuous Galerkin Finite Element Method for Conservation Laws III: One-Dimensional Systems," Journal of Computational Physics, Vol. 84, No. 1, 1989, pp. 90-113.
16. Cockburn, B., Hou, S., Shu, C. W., "TVB Runge-Kutta Local Projection Discontinuous Galerkin Finite Element Method for Conservation Laws IV: The Multidimensional Case," Mathematics of Computation, Vol. 54, No. 190, 1990, pp. 545-581.
17. Tannehill, J. C., Anderson, D. A., and Pletcher, R. H., Computational Fluid Mechanics and Heat Transfer, 2nd Ed., Taylor & Francis, 1997.
18. Cheney, W. and Kincaid, D., Numerical Mathematics and Computing, 6th Ed., Thomson Brooks/Cole, 2008.
19. Roe, P. L., "Approximate Riemann Solvers, Parameter Vectors, and Difference Schemes," Journal of Computational Physics, Vol. 43, No. 2, 1981, pp. 357-372.
20. Gautschi, W., Orthogonal Polynomials: Computation and Approximation, Oxford Science Publications, Oxford University Press, 2004.
21. Sod, G., "A Survey of Several Finite Difference Methods for Systems of Nonlinear Hyperbolic Conservation Laws," Journal of Computational Physics, Vol. 27, No. 1, 1978, pp. 1-31.
22. Shu, C. W., and Osher, S., "Efficient Implementation of Essentially Non-Oscillatory Shock-Capturing Schemes II," Journal of Computational Physics, Vol. 83, No. 1, 1989, pp. 32-78.
23. Shang, J. S., "A High-Resolution Method Using Adaptive Polynomials for Local Refinement," AIAA Paper 2010-0539, AIAA, 2010.

A Look into Marcellus Shale Completion

Student Researcher: Brandon S. Baylor

Advisor: Dr. Ben Thomas

Marietta College Department of Petroleum Engineering

Abstract The major Appalachian Basin topic being discussed by oil and gas producers today is the Marcellus Shale formation. Only since 2004 have businesses been able to aggressively pursue natural gas production from this underground formation. This paper examines hydraulic fracturing development and design in addition to general Marcellus Shale background material. Also, two common fracturing methods are analyzed and compared: the Plug and Perf technique and the Frac Packer method, which goes by many names depending on the company. The case study for this comparison was performed in the Barnett Shale, a successfully developed shale formation in Texas comparable to the Marcellus.

Project Objectives The purpose of this paper is to consider Marcellus Shale fracturing techniques, downhole completion tools, and implications of producing from this shale play. A general discussion of the Marcellus Shale is provided, followed by a discussion on fracturing. To conclude the paper, information is provided on two types of fracture methods. The study of this reservoir will benefit producers of the Marcellus formation and provide an insight into the expected value of this previously untapped formation.

Methodology Used Numerous sources were referenced during the research phase of this paper, all of which can be found in the references section. Information about the history of shale plays in the Lower 48 was obtained online from various industry resources. Fracturing methods and design data were found in industry papers to supplement classroom material. In addition, data about the downhole completion tools were drawn from a Suncor Energy presentation titled Unconventional Completion Information Sharing, which was presented to the petroleum engineering juniors at Marietta College last semester.

The document will focus on the completion methods of the Marcellus Shale formation and its implications. The feasibility of producing from shale formations depends on effective fracturing and stimulation techniques. Usually production from shale plays is not possible unless complex, non-linear fractures are present to connect the large reservoir surface area to the wellbore. Data is also available regarding the general properties of shale formation compared to other plays in the United States. This paper does not address hydraulic fracturing simulation. The information provided on fracturing is simply background discussion to set the context for more detailed models.

Results Shale is a sedimentary rock mostly composed of clay and silt size particles. In most cases shale amasses in low energy environments of deposition, which may include deep water deposits or tidal flats where the smaller particles are able to fall out of suspension in the calm waters. Algae, plants, and other organic materials are then deposited on top of the clay and silt particles; the pressure of compaction gives the shale a laminar structure. This layered arrangement limits both vertical and horizontal permeability of the shale formation. The figure below illustrates the comparative permeability of various shale plays.


Figure 1. Permeability Terminology

As shown by Figure 1, the typical unfractured shale permeability ranges from 0.001 to 0.00001 mD, which is many orders of magnitude lower than typical oilfield rock formations. Because of this extremely low permeability, shale formations were long seen as uneconomic and unfeasible to produce. Now, with modern fracturing techniques to facilitate fluid flow through the formations, shale plays are taking center stage in United States natural gas production.

Current estimates of natural gas potential from United States gas shale plays range from 500 to 1,000 Tcf (1). The natural gas that resides in these formations is found in one of three places: (1) the pore space of the shale, (2) the natural fractures of the shale, or (3) adsorbed on organic matter within the shale (1). It should also be noted that the majority of the hydrocarbons produced from shale plays is dry gas.

Since the Barnett Shale was initially developed around 20 years ago, the technology behind natural gas production in shale plays has advanced significantly. Horizontal drilling, multi-staged hydraulic fracturing, and other stimulation techniques that made the Barnett a success are also being employed on the Marcellus Shale. In 2003 companies began testing these methods on newly drilled Marcellus wells. The efforts were successful, and since that time development of the Marcellus has increased rapidly. Some areas of the Marcellus are being developed more quickly than others, but development is expected to continue to help provide natural gas to the major metropolitan cities of Northern Appalachia (1).

This formation, which is still in its producing infancy, has an estimated 1,500 Tcf of gas-in-place and 262-500 Tcf of reserves (1). This relatively new play has drawn the attention of producers because of its large size and potential economic impact on the U.S. Commonwealth (3). In 2005 wells began to be drilled on a regular basis, although sufficient production data have not yet been collected to accurately predict the EUR for these early producing wells. Pennsylvania oil and gas regulation, for instance, does not require producers to report annual well production until five years have passed. Therefore, 2005 well production will not enter the public domain until the year 2011 (4).


Figure 2. Marcellus Shale Distribution in the Appalachian Basin

Hydraulic fracturing was first introduced in the Appalachian Basin in the early 1960s. Other than horizontal drilling, hydraulic fracturing has been the most successful method to increase economic feasibility of gas shale formations. It is a technique used to raise the permeability of the formation, which in turn allows fluid to be more easily transmitted to the wellbore. Fracturing a well helps to overcome obstructions that hinder the fluids path to the wellbore; these obstacles may include low permeability or damage in the near wellbore area caused during drilling.

As formation fracturing has developed, engineers have improved their understanding of necessary reservoir conditions for successful stimulation. Even today the process of hydraulic fracturing is being polished to more effectively maximize production and fracture pathways. The basis of modern fracturing in the Marcellus Shale was derived from past experience, mainly the Barnett Shale, a highly successful Texas shale formation. Data gathered from cores, logs, or offset wells is often used with an electronic model or simulation, thereby allowing the engineer to see the expected fracturing before it has taken place.

The process of hydraulic fracturing involves a series of events that typically require large amounts of fracture fluid, which is a water-based material. Candidates for formation fracture include low (1 mD or lower) and moderate permeability zones (1-10 mD or higher). The Marcellus Shale clearly falls into the low permeability category. For these types of wells, fracture length is the main priority to successfully stimulate, as opposed to fracture conductivity for moderate permeability zones. Furthermore, shale formations are almost always good candidates because of the ample hydrocarbons and natural fractures.

The first step in any hydraulic fracture is to run a series of pressure tests to ensure that the equipment can safely handle the high operational pressures required for the treatment. Once the pressure test is deemed sufficient, the first sequence begins with an acid treatment. The acid in this case helps to clean up the near-wellbore area and repair any damage that may have been caused by the drilling process. Next, a column of slickwater—enough to fill the well bore and effectively open the rock face of the formation—is pumped down the well. With the help of the friction reducers provided by the slickwater, a series of proppant stages (mixtures of water and fine-grained sand) is pumped down into the fracture to hold open the fractured part of the formation. By running the hydraulic fracture in a series of stages, the engineers and employees on location can better control the fracture performance.

The plug and perf method is a common practice to isolate zones during the hydraulic fracturing of a formation. The basic premise of the method is to run a composite bridge plug (which is mounted to the perf gun) down the hole to the setting depth on either a wireline or a coiled tubing string. The plug is then set by charge while the gun is pulled to the next perforating depth. It is a simple pumping method that provides

good stage isolation and reduces the completion time (relative to some other methods). It also provides the ability to diagnose the stage fractures, and the technique has many suppliers.

Disadvantages of the plug and perf method include having to mill out the set plugs. The plug or guns may become stuck due to setting prematurely, and this would require remedial work with a workover rig. In addition, the integrity of the casing and the adequacy of the bonding in the lateral section may be in question during this process. Lastly, more accurate geology and geophysics are required.

The Frac Packer method is considered a mechanically staged process. The idea is to isolate each zone being fractured with the packer, and then pump the fracture fluid down through the subsequent ports. To activate the ports, metal or ceramic balls are dropped through the packer. The balls get larger sequentially as they land on the seats of different ports. Once a ball seats, the next port opens, and the fracture fluid can be pumped. This method of fracturing can be expected to support an estimated 10,000 psi differential (5). Advantages of the Frac Packer include the availability of around two dozen stages. In addition, the transition through the sequence of stages is quick with the ball seats; essentially, operating time is reduced compared to the plug and perf method. The operation is simple and provides good zonal isolation with no cementing problems. On the downside, the up-front cost is higher for the Frac Packer. Also, the packers are susceptible to damage while running in the hole. Better geology may be required, and the ability to diagnose the fracture may be limited.

A case study was performed in the Barnett Shale comparing the Plug and Perf against the Frac Packer method. “The challenge was to create a system that not only saved time and reduced cost, but also increased production from their horizontal wells to improve their overall return on investment….A detailed analysis was done to compare the production results from two [Frac Packer] wells to two parallel offset plug and perf wells” (8). In both cases, the Frac Packer method yielded better results. One year results of cumulative production showed values 80-143% higher for the Frac Packer (2).

Conclusion It was determined that hydraulic fracturing, combined with horizontal drilling, is the most successful way to economically and efficiently produce natural gas from the Marcellus Shale, a formation which will impact the US economy for years to come. Based on the evidence provided by the Barnett Shale case study, the Frac Packer method provides a better way to fracture the Marcellus. At this time, both horizontal and vertical wells are being utilized in Appalachia. Vertical wells are used for tight interval spacing. Conversely, horizontal wells are used to help reduce the number of wells on a single pad, thereby mitigating the environmental footprint of drilling. Lastly, it should be noted that the fracturing process has minimal effect on shallow freshwater aquifers. The zones are isolated behind adequate casing, and a simple stress profile will show that the fractures actually grow horizontally at a certain depth, therefore never contaminating the freshwater aquifers.

References
1. Arthur, Daniel, et al. "Hydraulic Fracture Considerations for Natural Gas Wells of the Marcellus Shale." Presented at the Ground Water Protection Council, 2008.
2. "Packer Plus Case Study," presented at the AADE National Technical Conference and Exhibition, 31 March 2009.
3. Schweitzer, R. "The Role of Economics on Well and Fracture Design Completions of Marcellus Shale Wells," paper SPE 125975 presented at the 2009 SPE Eastern Regional
4. Thomas, Ben. Marcellus Shale Analysis. Personal interview by Brandon Baylor. 28 March 2010.
5. Vandeponseele, Angela. Suncor Energy. "Unconventional Completion Information Sharing." Presentation at Marietta College.

Enhanced Drilling and Fracturing Techniques for Horizontal Well Stimulus

Student Researcher: Dean T. Bendele

Advisor: Dr. Benjamin Thomas

Marietta College Department of Petroleum Engineering

Abstract While gas shale formations like the Marcellus Shale are being discovered and developed, people in the industry have come to realize how little is known about these types of reservoirs. What is known is how little gas can be recovered from the shale. To increase these recoveries, new forms of drilling, such as horizontal and multilateral wells, are being utilized. With the Marcellus Formation being a relatively new development, not much is known about fracturing in it or how the wells will interact with each other. Simulation programs, such as the one provided by Meyer & Associates, Inc., have been developed. However, the industry would like to know how the wells will interact if hydraulic fracturing makes their drainages overlap, and whether this can be used to advantage.

Project Objectives The Marcellus Shale is opening up as the largest area of natural gas reserves in the United States. It covers an area extending from southern New York to the southwest tip of Virginia and approximately 50 to 100 miles into Ohio. Estimates of its natural gas reserves range from 50 trillion to 500 trillion cubic feet. However, only an estimated 10% of the natural gas is recoverable. As such, hydraulic fracturing is being utilized to help increase these recoverable reserves.

To help make producing the Marcellus Shale economically viable for gas companies, new, enhanced forms of drilling and fracturing must be used. Horizontal drilling must be done in shale plays in order to achieve this economic viability. One form of horizontal drilling is called multi-lateral drilling. This type of drilling forms multiple horizontal wells that stem from a single horizontal that spans the entire lateral length. This is already being used in other developed shale plays, like the Eagleford Shale in South Texas. Usually, these laterals are spaced at distances that keep their drainage radii from overlapping. If the wells were to overlap, the pressure decline interference between the two laterals would negatively affect the flow inside each. However, the possibility of purposely overlapping these wells has not been tested, so this project attempts a fracture prediction spreadsheet for overlapping the fractures created in a hydraulic fracture treatment. The fracturing software from Meyer & Associates is used to confirm the predictions in the spreadsheet.

This project is meant to act as the beginnings of a design proposal. The design is meant to fracture in the lateral lengths and make the fracture half-lengths intercept the central horizontal, theoretically allowing maximum well interference. The two lateral wells would then be closed off, and all the gas would flow into the one central well. Hopefully, this would increase the drainage radius of the well, as well as the potentially produced gas reserves.

Methodology Shale plays, like the Marcellus Shale, have small porosities, which correspond to very small permeabilities. This means flow from the natural fractures in a shale reservoir will be very small. Porosity is the empty space within a rock that can contain hydrocarbons or other fluids, and permeability is the ability of a fluid to flow through a rock. Effective porosity and effective permeability are the connected spaces within a rock that allow flow. Hydraulic fracturing is meant to open the non-connected spaces to allow for greater flow, and to increase the recoverable gas reserves. The problem with shale plays, however, is the lack of hydrocarbons actually contained in pore spaces. Instead, the gas is adsorbed, or fused, to the shale. This means that the recoverable gas will remain low, even with proppant fracture stimulus.

All reservoirs contain natural fractures that allow for some magnitude of flow into or out of the formation. These fractures are caused by several stresses in the formation. The main stresses that cause fractures are the vertical stress and the minimum and maximum horizontal stresses. Some minor, but still important, stresses acting on the formation are tectonic stress and natural reservoir pore pressure (which acts against the vertical and horizontal stresses). The direction in which hydraulic fractures form is mostly dependent on the orientation of the horizontal stresses (this applies to most reservoirs, which are at a depth where the overburden, or vertical stress, is much greater than the maximum horizontal stress). Hydraulic fractures are considered to form perpendicular to the minimum horizontal stress, or parallel to the maximum horizontal stress. This direction defines the fracture length. However, a vertical fracture height is also formed during fracturing. The model used for this project, the Perkins-Kern-Nordgren (PKN) model, assumes a fracture that is dominated by the fracture length.

When fracturing a brittle formation, such as sandstone, the fracture is fairly clean. That is to say, the rock readily splits. Fracturing a shale formation, on the other hand, is not as clean. Shale is more ductile than sandstone. Instead of the formation breaking, the rock slowly parts. This creates a narrow tip in the fracture for some distance, which is followed by a large expansion as the formation parts. This makes the fracture half length a more important number than it would be for a brittle formation. The fracture half length is widely regarded as the effective length of the hydraulic fractures. Models have shown that beyond a fracture’s half length, a negligible amount of proppant will be able to enter a fracture to help prop it open. Without this proppant, the closure stress will close the fracture up to the proppant when the well is put on production.

The equations that are utilized in the PKN model are as follows:

σv = ρH/144 (Vertical Stress)

σv' = σv - αp (Effective Vertical Stress)

σH' = [ν/(1-ν)]σv' (Minimum Horizontal Stress)

σH,max = σH,min + σTect (Maximum Horizontal Stress)

G = E/[2(1+ν)] (Elastic Shear Modulus)

wmax = 2.31[qiµ(1-ν)Xf/G]^(1/4) (Maximum Fracture Width – Newtonian Fluid)

wavg = 0.3[qiµ(1-ν)Xf/G]^(1/4) [(π/4)γ] (Average Fracture Width – Newtonian Fluid)

wmax = 12[(128/(3π))(n'+1)((2n'+1)/n')^n' (0.9775/144)(5.61/60)^n']^(1/(2n'+2)) × (qi^n' K'Xf hf^(1-n')/E)^(1/(2n'+2)) (Maximum Fracture Width – Non-Newtonian Fluid)

wavg = 12[(128/(3π))(n'+1)((2n'+1)/n')^n' (0.9775/144)(5.61/60)^n']^(1/(2n'+2)) × (qi^n' K'Xf hf^(1-n')/E)^(1/(2n'+2)) × [(π/4)γ] (Average Fracture Width – Non-Newtonian Fluid)

KL = ½[(8/3)η + π(1-η)] (Leakoff Volume Coefficient)

qi ti = Af wavg + KL CL (2Af) rp ti^(1/2) (Injected Fluid Volume)

Vpad = Vi[(1-η)/(1+η)] (Fracture Pad Volume)

Tpad = Vpad/qi (Time to create a pad)

ϵ = (1-η)/(1+η) (Time Power Coefficient for Proppant Concentration)

Mp = cp(Vi - Vpad) (Mass of Proppant; cp = pounds per gallon)

Cp = Mp/(2Xf hf) (Proppant Concentration, pounds per square foot)
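
As a minimal numerical sketch of the workflow above, the MATLAB script below evaluates the stress relations and the Newtonian-fluid width expressions and then solves the injected-volume balance for the pump time. The fluid viscosity, the width-averaging factor γ, and the ratio rp are not listed with the reservoir data, so the values used here are assumed for illustration; the paper evidently used the non-Newtonian width relations with fluid parameters n' and K', so these numbers will not reproduce the table in the Results section.

% Sketch of the PKN workflow above using the Newtonian-fluid width relations
% (field units: psi, ft, bpm, cp).  mu, gam_w and rp are assumed illustrative
% values not given with the reservoir data, so results will not match the
% paper's table.

% Reservoir data from the Results section
gradS = 0.874;  E = 3.36e6;  nu = 0.201;  eta = 0.725;
p     = 5030;   alpha = 0.7; CL = 0.001;  qi  = 30;      % injection rate, bpm
Xf    = 300;    hf = 60;
depth = p/0.625;            % ft, inferred here from the pore gradient (assumption)

% Assumed values (not listed in the paper)
mu    = 100;                % fluid viscosity, cp
gam_w = 0.75;               % width-averaging factor gamma
rp    = 0.7;                % ratio of permeable height to fracture height

% Stresses and shear modulus (vertical stress taken as gradient x depth)
sv    = gradS*depth;                 % vertical stress, psi
svEff = sv - alpha*p;                % effective vertical stress
sHeff = (nu/(1-nu))*svEff;           % effective minimum horizontal stress (sigma_H')
G     = E/(2*(1+nu));                % elastic shear modulus

% PKN widths for a Newtonian fluid (inches), then average width in ft
wmax    = 2.31*(qi*mu*(1-nu)*Xf/G)^(1/4);
wavg    = 0.3 *(qi*mu*(1-nu)*Xf/G)^(1/4)*(pi/4)*gam_w;
wavg_ft = wavg/12;

% Injected-volume balance: qi*ti = Af*wavg + KL*CL*(2*Af)*rp*sqrt(ti)
% (a quadratic in sqrt(ti)); volumes kept in bbl with 1 bbl = 5.615 ft^3.
Af = 2*Xf*hf;                                   % fracture area, ft^2
KL = 0.5*((8/3)*eta + pi*(1-eta));
b  = KL*CL*(2*Af)*rp/5.615;                     % bbl/min^(1/2)
c  = Af*wavg_ft/5.615;                          % bbl
s  = (b + sqrt(b^2 + 4*qi*c))/(2*qi);           % sqrt(ti)
ti = s^2;                                       % pump time, min
Vi = qi*ti*42;                                  % injected volume, gal (42 gal/bbl)

fprintf('sigma_v = %.0f psi, effective sigma_Hmin = %.0f psi\n', sv, sHeff);
fprintf('w_avg = %.3f in, ti = %.1f min, Vi = %.0f gal\n', wavg, ti, Vi);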

Results Obtained The following reservoir properties were found for a Marcellus Shale reservoir:

Stress Gradient = 0.874 psi/ft
Young's Modulus (E) = 3,360,000 psi
Poisson's Ratio (ν) = 0.201
Fracture Toughness = 1,595 psi-in^(1/2)
Efficiency (η) = 0.725
Pore Gradient = 0.625 psi/ft
Reservoir Pressure (p) = 5,030 psi
Biot's Constant (α) = 0.7
Leakoff Coefficient (CL) = 0.001 ft/min^(1/2)
Injection Rate (qi) = 30 bpm

Desired Fracture Length (Xf) = 300 ft
Formation Net Height (H) = 60 ft
Proppant Concentration (Cp) = 1.10 ppg

The most important value to be found using the PKN spreadsheet is the injected volume required to obtain the desired fracture length and fracture height. This value was then compared to Meyer & Associates, Inc.'s MShale simulation, as were the fracture width, height, and length. The fracture height and length are both set as desired fracturing criteria. The fracture height was chosen to maintain the fractures within the Marcellus formation (which is assumed to have a thickness of 60 feet), and the fracture length (or a well spacing of 300 feet) was chosen to simulate an overlap of the reservoir fractures. The following table was created to find the volume injected to get a certain height:

hf (ft)   Af (ft^2)   Afwavg   Afwavg/qi   KLCL(2Af)rp   KLCL(2Af)rp/qi   ti (min.)   Vi (gal)
42        25,200      1,192    39.7        49.3          1.64             51.6        64,962
48        28,800      1,363    45.4        56.4          1.88             60.0        75,580
54        32,400      1,533    51.1        63.4          2.11             68.6        86,463
60        36,000      1,703    56.8        70.5          2.35             77.5        97,602

At a fracture height of 60 feet and a fracture length of 300 feet, the projected volume of fluid injected is approximately 98,000 gallons.

However, the result from the MShale program from Meyer & Associates, Inc. shows a different case. Using the PKN model for the simulation, MShale shows a fracture length of 3,251 feet, much larger than the desired 300 feet. What is curious about these results is that the MShale program shows a total injection volume of 100,918 gallons. Although the injected volumes agree closely, an extra 2,951 feet of fracture length is excessive. These results are unusual and raise questions about the accuracy of the Marcellus Shale data used or about user error in the setup of the simulation.

Significance and Interpretation of Results Since the cause of the difference between the spreadsheet results and the simulation results remains undetermined, it cannot be said what conclusions can be taken from this outcome. However, one can speculate that the PKN model may not be as accurate for the Marcellus Shale as it is for more common formation types, like quartz sandstone. This could also become an interesting research topic. For now, however, further variables should probably be considered before ruling out the PKN prediction spreadsheet: variables such as the shale's tensile stress, breakdown pressure, closure stress, and pressure drawdown, which are taken into consideration but not necessarily focused on in the PKN model. Another restriction on the ultimate outcome of the project would be the drainage radius of the Marcellus Shale. Unfortunately, there

47 isn’t enough production history to have determined a definite number for the distance of a Marcellus well’s drainage.

Acknowledgments Dr. Benjamin Thomas and Meyer & Associates, Inc.

Figures and Graphs

Figure 1. Results of MShale simulation

Figure 2. Example of Multilaterals

Figure 3. Example of a PKN Model Fracture

References
1. B. J. Hulsey, Brian Cornette, MicroSeismic Inc., David Pratt, Rex Energy Corporation. "Surface Microseismic Mapping Reveals Details of the Marcellus Shale." Society of Petroleum Engineers. SPE 138806. 2010.
2. Bruce R. Meyer, SPE, Meyer & Associates, Inc. and Lucas W. Bazan, SPE, Bazan Consulting, Inc. "A Discrete Fracture Network Model for Hydraulically Induced Fractures: Theory, Parametric and Case Studies." Society of Petroleum Engineers. SPE 140514. 2011.
3. Bryce B. Yeager, SPE, Energy Corporation of America and Bruce R. Meyer, SPE, Meyer & Associates, Inc. "Injection/Fall off Testing in the Marcellus Shale: Using Reservoir Knowledge to Improve Operational Efficiency." Society of Petroleum Engineers. SPE 139067. 2010.
4. John C. Gottschling, BJ Services Company, U.S.A. "Marcellus Net Fracturing Pressure Analysis." Society of Petroleum Engineers. SPE 139110. 2010.
5. Krisanne L. Edwards and Sean Weissert, EQT Production/SPE; Josh Jackson, Chesapeake/SPE; Donna Marcotte, Consultant/SPE. "Marcellus Shale Hydraulic Fracturing and Optimal Well Spacing to Maximize Recovery and Control Costs." Society of Petroleum Engineers. SPE 140463. 2011.
6. Michael J. Economides, A. Daniel Hill, Christine Ehlig-Economides. "Hydraulic Fracturing for Well Stimulation." Petroleum Production Systems. p. 421. Prentice Hall, PTR. Upper Saddle River, NJ. 1994.
7. R. Henry Jacot, SPE, Atlas Energy Resources; Lucas W. Bazan, SPE, Bazan Consulting Inc.; and Bruce R. Meyer, SPE, Meyer & Associates, Inc. "Technology Integration – A Methodology to Enhance Production and Maximize Economics in Horizontal Marcellus Shale Wells." Society of Petroleum Engineers. SPE 135262. 2010.

Rock Out: The Rock Cycle on Earth and Moon

Student Researcher: Heather M. Bennett

Advisor: Professor Amy Brass

University of Cincinnati Department of Education, Criminal Justice and Human Services

Abstract This three-day unit, which aligns with the Ohio sixth-grade Earth and Space Science standards, adapts activities from the Exploring the Moon and Exploring Meteorite Mysteries teachers’ guides to enhance instruction on the rock cycle. Using models made from common household materials, students demonstrate the importance of the atmosphere and agents of erosion (e.g. surface water, ice and wind) to Earth’s rock cycle by comparison with the Moon. While making edible breccia, they discover how minerals get “mixed up” into new rocks on Earth and on the Moon. Finally, their progress is assessed on the basis of short creative projects comparing the “life and times” of a lunar rock and a terrestrial rock.

Lesson On day one, students begin by viewing footage from the BBC series Planet Earth and pictures from the Apollo 11 mission. The teacher uses NASA lithographs1 and a Venn diagram to help them compare features of the Earth and Moon that influence geology (water, gravity, internal temperatures), while reviewing the terrestrial rock cycle they have read about in their texts. Next, students conduct experiments in four-person cooperative learning groups. Two lumps of salt-and-flour clay mixed with rice cereal represent lunar and terrestrial igneous rocks. Because of friction in Earth’s atmosphere, most meteoroids vaporize before they can collide with the surface; thus only the “Moon rock” lump is bombarded from above with heavy objects. On the other hand, due to the presence of moving air and liquid and solid phases of water on Earth, the terrestrial “rock” is exposed to sand-blasting (by rubbing with sandpaper and blowing across it), glaciers (by pushing ice cubes across the surface), and water erosion (by pouring streams of water from a beaker).2 Students measure each lump before and after treatment, and make notes of their observations during the investigation. The class then analyzes and discusses the differences, the reasons behind them, and what could happen next to the rock fragments in each location.

On day two, students compare Earth’s sedimentary and metamorphic rocks with lunar breccias. Candy-filled popcorn balls model what happens after an asteroid hits the lunar surface. While the syrup mixture is heating, students predict what will happen to igneous rocks with two shapes of “minerals,” one white and cylindrical (marshmallows), the other yellow and rounded (popcorn kernels). On impact, heat and pressure shatter rocks and can change crystal structure (shock metamorphism3). Both “minerals” change; the marshmallows even partially melt, oozing between the other “crystals” to form the “matrix” that holds all the shattered bits together. Pieces of other rocks, including fragments of the asteroid (represented by angular bits of candy), get mixed up and bound in the marshmallow matrix4. Applying pressure to hot “fragments” with buttered hands, students mold edible “breccia” which may be wrapped in plastic and carried home. Once their areas are clean, students record and compare observations with predictions and discuss how metamorphism and sedimentation are similar to and different from this process. They also may begin work on an open-ended group project discussing the “life story of a rock.” For example, they could put on a skit about two “rock groups” (one from the Earth and one from the Moon) sharing their histories on a talent search television program. To guide their projects, and as part of their grade, students create a graphic organizer that summarizes the possible transformations of Moon rocks and Earth rocks.

On day three, groups continue to work on their projects and then present to the entire class.

Objectives
• Students will describe the significance of gravity, water and high internal temperatures to Earth’s rock cycle by comparison with the Moon.
• Students will explain how rocks form and change through an open-ended project comparing the “life stories” of a rock on the Earth and the Moon.

Alignment
Grade 6-8 Earth and Space Sciences Benchmark D. Identify that the lithosphere contains rocks and minerals and that minerals make up rocks. Describe how rocks and minerals are formed and/or classified.
• Grade 6 Indicator 1. Describe the rock cycle and explain that there are sedimentary, igneous and metamorphic rocks that have distinct properties […] and are formed in different ways.5

Underlying Theory It is well known that modern employers want to hire critical thinkers. Since science, technology, engineering and mathematics (STEM) are seen as fields of study that generate and thrive on this habit of mind, President Obama and others concerned about America’s educational competitiveness recommend bolstering the quality of STEM instruction. Central to their appeal is the charge to build lessons around scientific inquiry and to engage traditionally underrepresented groups, including girls, Latino Americans and African Americans.6 In fact, meta-analyses reveal that there is no noticeable male/female gap in science until high school, when girls’ disinclination toward and self-doubts about the subject rapidly emerge and become pronounced.7 Thus, intervention at the middle school level is crucial to building competence and confidence through active, high-level learning. Dynamic learning groups and investigations like these that facilitate the participation of every student are an important part of the solution. The sequence of this unit is couched in the extensively researched 5E Instructional Model, which has been shown to be effective with all demographics of students in guiding them to engage with the subject matter, personally explore it, explain how it works, creatively elaborate on its application to new situations and finally evaluate their progress and growth.8 These ideas are well-founded in constructivism, the philosophy that teachers cannot “impart” knowledge; students must actively build it through guided experience.

The opportunity for choice in the unit supports the inclusion of Gardner’s multiple intelligences. Put simply, students tend to perform better when they are given opportunities to connect their learning with their gifts in music, spatial ability, interpersonal intelligence, and other areas. In contrast to lessons that fixate on memorization and rudimentary comprehension, this unit deepens student learning by recruiting creativity and critical thinking skills. Bloom’s Taxonomy, a framework that ranks types of tasks in terms of their intellectual demand, considers synthesis and analysis higher-order skills.

Student Engagement Incorporating hands-on activities, creativity and technology increases students’ interest and investment in lessons. This unit aims to engage and challenge. Activating background knowledge is also important for success and engagement.9 A review of previous grade-level indicators suggests that students do have extensive prior experience.10 A KWL chart may be used before the lesson to judge individual student readiness and remind students of what they have learned.

Resources
Day 1:
• Clay-dough (2 batches: 4 c. flour, 1 c. salt, 4 c. water, 4 T. oil, cooked at medium low)
• Chocolate and crispy rice cereals
• For each group: disposable plates, beakers, ice cubes, fist-sized rocks, coarse sandpaper

Day 2: Be sure to check student dietary restrictions (peanut allergies, vegan, kosher, etc.) in advance.
• Kitchen supplies: hot plate and electric popcorn popper (plugged in safely away from students), mixing bowl, large saucepan, large stirring spoon, potholder, trivet, stiff spatula, 2 butter knives, large bowls, timer, plastic wrap, mop
• Edible breccia: 3⁄4 c. yellow popcorn kernels (for 57. qt.; approx. 30 popcorn balls) or 5 bags microwaved popcorn, 6 T. margarine, 1 cup light corn syrup, 3 t. cold water, 4 c. powdered sugar, 1 1⁄2 c. marshmallows, 2 t. vanilla. (Melt margarine in pan on hot plate at medium-high heat, and add remaining ingredients. Stir and cook until boiling. Remove from heat. Pour syrup mixture over popcorn and stir with spatula to coat loosely.)
• Mix-ins: chocolate chips, cookie crumbs, chopped nuts
• Supplies for each student: disposable plastic spoons, bowls and plates with 1 pat of butter, lined loose-leaf paper, pencils, sanitizing wipes, paper towels

Day 3:
• Project materials: poster/construction paper, markers, paints, rocks, costumes, props, etc.

Assessment Student achievement will be analyzed on the basis of performance tasks and formative assessments. The keystone piece, the “life story of a rock” project, will be graded on a rubric with input from student self-evaluations. Individually created graphic organizers that summarize the rock cycle as it connects with the projects will also contribute to evaluation. Observation of student work and dialogue during the edible breccia creation and the rock cycle experiment is helpful as an informal measure of comprehension. Finally, the rock cycle lab report and the paragraph describing breccia formation provide gradable written artifacts of development.

Results and Conclusion Organizing this unit represents a significant time investment on the part of a dedicated teacher, but the payoff in terms of student attitude and participation is dramatic. Sixth-graders enthusiastically immerse themselves in a heavily-tested concept that they typically consider rather dull. In NASA’s stimulating extraterrestrial context, they make connections between disciplines that they may always have segregated in their minds. Their discoveries could spark interest in STEM-related careers or in science as simply an interesting study. Certainly one benefit of this unit is the intrinsic motivation it builds by the end as students gradually take responsibility for their own learning. In short, teachers can expect high attendance, high engagement, sticky fingers and plenty of laughter.

References
1. Our solar system lithograph set (2009). NASA, LS-2009-09-003-HQ.
2. Regolith formation (1997). Exploring the Moon: A teacher’s guide with activities for Earth and space sciences, pp. 47-52, NASA, EG-1997-10-116-HQ.
3. Mason, R. (1990). Petrology of the metamorphic rocks. ISBN-10: 0045520275; p. 207.
4. Edible rocks (1997). Exploring meteorite mysteries, 8.1-8.10, NASA, EG-1997-08-104-HQ. Retrieved from http://ares.jsc.nasa.gov/Education/Activities/ExpMetMys/Lesson8.pdf
5. Ohio Academic Content Standards, Science (www.education.ohio.gov)
6. www.whitehouse.gov/the-press-office/2010/09/27/president-obama-announces-goal-recruiting-10000-stem-teachers-over-next-
7. Shibley-Hyde, J. (2005). The gender similarities hypothesis. American Psychologist, 60(6), 581-592. doi: 10.1037/0003-066X.60.6.581
8. Bybee, R. et al. (2006). “BSCS 5E Instructional Model.” Retrieved from www.bscs.org/pdf/bscs5eexecsummary.pdf
9. Marzano, R. J. (2004). Building background knowledge for academic achievement: Research on what works in schools. ASCD. ISBN: 0-87120-972-1
10. Basic concept of gravity (Grade 3 Physical Sciences Indicator 3); the Moon orbits Earth because of gravity and the Moon has a weaker gravitational pull than the Earth (Grade 5 Earth and Space Sciences Indicator 3); water, ice and wind can erode rock (Grade 4 Earth and Space Sciences Indicator 8)

Evaluating the Potential for Thermally Enhanced Forward Osmosis

Student Researcher: Melissa R. Benton

Advisor: Dr. Glenn Lipscomb

The University of Toledo Department of Chemical and Environmental Engineering

Abstract Pressure retarded osmosis (PRO) has been investigated as an energy source using solutions of high and low salinity to achieve a volume flux and, consequently, a pressure buildup due to osmotic pressure differences. The intent of this research is to explore the use of thermal energy rather than concentration to induce a volume flux across a semipermeable membrane. A double-sided, semipermeable reverse osmosis (RO) membrane and an osmosis filtration cell consisting of two polypropylene blocks will be used to contact two solutions of equal salinity and varying temperature. In the future, the results of this project may justify continued development of this PRO technique.

Project Objectives Osmotic pressure describes the pressure required to prevent osmotic flow across a membrane from pure solvent to a solution with a given concentration. The osmotic pressure of a given solution has a direct relationship not only to concentration, but also to solution temperature. This project will explore the effect of thermal gradients on forward osmosis and osmotic pressure.

Although PRO has been extensively studied as an energy source using salinity gradients between solutions, the thermally driven option explored here may prove to be more practical for use in a residential setting. Many homes do not have access to highly concentrated saltwater; however, by using temperature gradients to create an osmotic pressure difference between solutions, both solutions can be maintained at the same concentration.

As water passes through the RO membrane from cold to hot, a concentration gradient and volume flux are expected. This project focuses on volume flux as a result of temperature gradient. To capture the available energy, a valve could be installed on the hot side outlet. Once a certain pressure is reached, the valve would open to a letdown turbine which may be used to produce electricity. The used hot and cold solutions would be let down to a common feed tank with a mixer, maintaining a single solution concentration.

To generate a continuous process, two piping networks would be offset from the feed tank. The first would pass on top of the roof for solar heating or another heat source. The second would run underground to be cooled. This design would allow for renewable, environmentally friendly electricity production.

Methodology Used In order to demonstrate the direct relationship between solution temperature and osmotic pressure, two 0.5 M sodium chloride (NaCl) solutions were studied under two sets of thermal conditions. Solution 1 was maintained at 21.5°C, or room temperature, in both trials. Solution 2 was heated to 45°C and 53°C in Trials 1 and 2 respectively.

Based on the principles of the Morse Equation, osmotic pressure (π) is directly related to the Van’t Hoff factor (i), concentration (M), the ideal gas constant (R), and absolute temperature (T). Due to osmotic pressure differences between solutions, a volume flux will occur from cold to hot across the RO membrane. Volume flux can be found using a concentration change measured by a conductivity probe.

π = iMRT

Vflux = M1V1/M2 – V1
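As a rough illustration of the calculations described above, the sketch below applies the Morse equation and the dilution-based volume flux formula; the 1.0 L reservoir volume and the final hot-side concentration used here are hypothetical values for demonstration, not measurements from the trials.

```python
# Sketch: osmotic pressure via the Morse equation and volume flux from a
# measured concentration change. Numeric inputs are illustrative assumptions
# drawn from the conditions described in the text, not recorded data.
R = 0.08314  # L*bar/(mol*K), ideal gas constant
i = 2        # Van't Hoff factor for NaCl (complete dissociation assumed)
M = 0.5      # mol/L, nominal concentration of both solutions

def osmotic_pressure(molarity, temp_c):
    """Morse equation: pi = i*M*R*T, with T in kelvin; returns bar."""
    return i * molarity * R * (temp_c + 273.15)

pi_cold = osmotic_pressure(M, 21.5)
pi_hot = osmotic_pressure(M, 45.0)
print(f"Osmotic pressure difference: {pi_hot - pi_cold:.2f} bar")

def volume_flux(m1, v1, m2):
    """Volume moved to the hot side inferred from dilution:
    V_flux = M1*V1/M2 - V1, where M2 is the final hot-side concentration."""
    return m1 * v1 / m2 - v1

# Hypothetical example: a 1.0 L hot reservoir diluted from 0.500 M to 0.450 M.
print(f"Inferred volume flux: {volume_flux(0.500, 1.0, 0.450)*1000:.0f} mL")
```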


The experimental apparatus consisted of two polypropylene blocks with a double-sided, semipermeable RO filter and spacers. A hot plate and temperature probe were used to control the temperature of Solution 2. A conductivity probe indicated the concentration of the hot solution throughout the experiment, allowing for calculation of the volume flux.

Results Obtained The data obtained in the laboratory indicates that temperature has a measurable effect on osmotic pressure of a solution. In Trial 1, the hot solution was heated to 45°C±2°C. Over a period of 60 minutes, a volume flux of 237 mL was observed. In Trial 2, the hot solution was heated to 53°C±2°C. In a 60 minute time frame, a volume flux of 206 mL was observed.

Significance and Interpretation of Results The data collected give insight into the viability of continued research on PRO by temperature gradient as an energy option. In both trials a volume flux occurred, indicating that temperature has a significant effect on osmotic pressure. These results can be used to justify continued, in-depth research.

Laboratory errors may have hindered the volume flux from reaching full potential. At the beginning of each trial, the temperature of the hot saline solution was inconsistent due to significant heat loss to the atmosphere and to the cold solution. A more efficient method of insulation may minimize heat losses and improve efficiency. Also, the hot solution reservoir contained some water vapor due to heating which may have concentrated the saline solution. Using conductivity probes on both sides of the membrane would enable the data to accurately reflect the effect of unintentional concentration differences.

Additionally, the system should be explored further using a valve and pressure gauge on the hot solution outlet to determine whether the resultant backpressure is enough to power a turbine to produce electricity. If data suggests a turbine could be powered using the backpressure created, a prototype system could be engineered and tested.

Figures and Tables

Figure 1. Diagram of Forward Osmosis Apparatus


Figure 2. Plot of Volume Flux to Hot Side Over Time

Figure 3. Plot of Maximum Volume Flux as a Result of Temperature

As illustrated in Figures 2 and 3, a higher solution temperature on the hot side induced a more rapid volume flux; however, the total volume flux at 53°C was less than that achieved at 45°C. Extensive testing and economic calculations would allow determination of an optimum operating temperature.

Acknowledgements The author would like to thank Dr. Glenn Lipscomb for his support and guidance throughout the project. The author would also like to thank Rahul Patil and Xi Du for their assistance in the laboratory.

Reference
1. Achilli, Andrea, Tzahi Y. Cath, and Amy E. Childress. Power generation with pressure retarded osmosis: An experimental and theoretical investigation. Journal of Membrane Science, 343 (2009) 42-52.

Developing Lithium Ion Powered Wheelchair to Facilitate Student Learning

Student Researcher: Adam M. Blake

Advisor: Dr. Hong Huang

Wright State University Department of Mechanical and Materials Engineering

Abstract The purpose of this research project is to design a prototype of a power wheelchair to better facilitate the learning experience of students with disabilities. For many disabled individuals, power wheelchairs offer the only source of independent mobility. A large hurdle faced with mobility devices such as power wheelchairs is accessibility. Often, wheelchair users need to get to a place that they simply cannot reach due to the size and weight of the wheelchair. Limitations on reducing size and weight can be primarily found in two aspects of a power chair: the batteries and the motor system. Due to the implementation of large and heavy batteries, the frame must be built larger to accommodate the power source. The same can be said for the motor system. With the current gel cell batteries commonly used for mobility applications, there is very little room for size and weight reduction. If a new source of power can be found to cut down these factors, the efficiency of the chair and its mobility could be greatly increased.

Project Objectives This research proposes to design and prototype a new-generation power chair that will be more convenient for transportation around campus while improving students’ learning experiences. The first phase is to replace the standard battery with a commercially obtained lithium ion battery. Li-ion batteries offer two to three times the energy density of the lead-acid batteries found in most chairs today1. Also, the new battery will weigh less, take up less space on the chair frame, and show improved battery capacity and cycle life. There will be room to make the chair more user-friendly by cutting down weight and size. Once the battery is in place, a test stand will be manufactured, and performance tests comparing the lead-acid battery to the Li-ion battery will be conducted systematically.

With a high performance battery in place, optional integration systems will also be explored. The learning experience for students would be enhanced by making the chair more capable of powering portable electronics. Integration systems such as a computer charger/stand, cellular phone charger, and iPod dock would make the chair more popular among college-age students. The capabilities of the battery to power such devices will be researched.

Methodology Used In order to gain better understanding of the setup and mechanics involved in power wheelchairs, a used power wheelchair was first disassembled. A used Invacare Arrow model chair was acquired, and this model was investigated. In doing this, special note was taken on the setup of the power system and the motors that deliver power to the wheels. In order to judge the power needs of the chair, it is important to first know what kind of batteries were used. The manner in which they are wired into the control center is also vital, considering that different wire-up methods can deliver different voltage and capacity properties.

Once the chair was disassembled, a few critical components were measured for size and weight: the batteries, each drive motor, the drive wheels, and the frame with all parts removed. Once this information was obtained, research progressed into what type of battery would be best suited as a replacement. It was determined that a replacement battery would need a few key qualities. Weight is an important focus, as a reduction in weight would increase the efficiency of the chair, thus increasing time between charges and chair performance. Capacity, measured in amp-hours, is another important aspect. This property relates to a battery’s performance at low amperage over time, given by Peukert’s Equation2:


Equation 1. Peukert’s Equation2: t = C / I^n

Where t is the time of operation, C is the theoretical capacity (Ah), I is the current being drawn (A), and n is the Peukert number. Another key quality by which to judge batteries for this application is energy density. Energy density can be calculated either volumetrically or gravimetrically; the respective equations are shown below.

Equation 2. Volumetric Energy Density: Qv = E·C / V

Equation 3. Gravimetric Energy Density: Qg = E·C / W

Where Q is the energy density, E is the voltage (V), C is the capacity (Ah), W is the weight (kg), and V is the volume (L). Qv is measured in Wh/L, while Qg is measured in Wh/kg. Both are important measurements, because both weight and size are of concern. Once these values are calculated for the stock batteries, comparison will be done to find an acceptable replacement.

Results Obtained During disassembly, the tire, batteries, drive motors, and frame were separated from each other. With this done, each entity was weighed separately. This information was used to analyze which parts of the chair contributed the most weight. Results are summarized in Figure 1. Important to note is the fact that batteries account for 37% of the total weight of the chair, thus the power system will be under analysis for weight reduction first.

Upon removal of the battery pack, it was found that this particular chair utilized two Group 24 GEL batteries, each one 12 Volts with a capacity of 84.5 Amp hours3. The size of each battery measured 10.2” X 6.8” X 9.24”, and the batteries weighed in at 52 pounds apiece. The energy density calculated for the Group 24 GEL batteries implemented on the chair is found to be 96.3 Wh/L volumetrically and 42.97 Wh/kg gravimetrically. Additionally, the batteries were connected in series, as shown in Figure 2 [3]. This is an important fact, because combining batteries in series effectively doubles the voltage while keeping the same capacity. Therefore, it can be said that the chair runs on 24 V, with a capacity of 84.5 Ah. This information provides a good base point for the requirements of a replacement.

Progress then continued into searching for replacements for these gel-cell batteries. As can be seen in Figure 3, the lithium ion batteries (UBBL26 and the 55 Ah, 24 V LiFePO4) are the front-runners in energy density compared to the gel and AGM candidates. With this data known, it appears that the best alternative to the gel cell batteries currently employed would be the 55 Ah, 24 V LiFePO4 battery pack. It offers a 61% decrease in volume and a 64% decrease in weight, which is a very significant improvement. Areas of concern, however, include the 300% cost increase and the slightly decreased capacity rating. Using Equation 1 as a relative approximation of time between charges, with a current estimate of 5 amps for mean usage4 and a Peukert number of 1.3 for gel cells and 1.05 for LiFePO4 [2], the stock batteries should last around 10.4 hours. Comparatively, the LiFePO4 batteries should last around 10.1 hours. This is a somewhat minor difference, and it is only an approximation. Current demand from a chair cannot be calculated exactly, because it often depends on operator habits and terrain4.
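As an informal cross-check of the figures quoted above, the sketch below reproduces the energy-density (Equations 2 and 3) and Peukert runtime (Equation 1) calculations from the Group 24 GEL specifications given in this section; the unit-conversion constants are standard values, and only the runtime is estimated for the LiFePO4 pack since its dimensions and weight are not restated here.

```python
# Sketch of the energy-density (Eq. 2 and 3) and Peukert (Eq. 1) calculations
# using the Group 24 GEL specifications quoted above.
IN3_TO_L = 0.0163871   # liters per cubic inch
LB_TO_KG = 0.453592    # kilograms per pound

# Stock Group 24 GEL battery (each): 12 V, 84.5 Ah, 10.2" x 6.8" x 9.24", 52 lb
E, C = 12.0, 84.5
volume_L = 10.2 * 6.8 * 9.24 * IN3_TO_L
weight_kg = 52 * LB_TO_KG

Qv = E * C / volume_L   # volumetric energy density, Wh/L
Qg = E * C / weight_kg  # gravimetric energy density, Wh/kg
print(f"Gel cell: {Qv:.1f} Wh/L, {Qg:.1f} Wh/kg")   # ~96 Wh/L, ~43 Wh/kg

def peukert_runtime(capacity_ah, current_a, n):
    """Eq. 1: t = C / I**n (hours); a relative approximation only."""
    return capacity_ah / current_a ** n

print(f"Gel (84.5 Ah, n=1.3):    {peukert_runtime(84.5, 5.0, 1.3):.1f} h")   # ~10.4 h
print(f"LiFePO4 (55 Ah, n=1.05): {peukert_runtime(55.0, 5.0, 1.05):.1f} h")  # ~10.1 h
```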

In the future, testing will be done to determine the range of the power chair with a battery of such specifications. These tests will be done on the roller style test stand, in order to allow the chair to remain stationary while the motor is running. Total range of the chair will be tested at varying speeds. Based on battery life in between charges and ability to power portable devices, pros and cons of the new battery system will be evaluated. Future progress will include an evaluation of a frame redesign in order to lessen the weight of the chair and decrease the size.


Figures

Figure 1. Distribution of weight in power chair.

Figure 1 shows the breakdown of weight by component for the Invacare Arrow that was disassembled. Battery weight and frame weight are almost identical, making up a combined 75% of the total weight of the chair. The drive motor assemblies contribute 19% of total weight, and the drive wheels contribute a mere 6% overall.

Figure 2. Wire-up schematic of batteries in the Invacare Arrow [2]
Figure 3. Comparison of Battery Candidates

As can be seen in the chart, the two Li-ion batteries (UBBL26 and the 55 Ah, 24 V LiFePO4) show superior energy densities compared to the stock M24 SLD G gel battery. The 8A22NF-DEKA and HALF U1 18A8 batteries are AGM type, or Absorbent Glass Mat, and offer very similar qualities to the stock battery. The downfall of Li-ion is the price. The UBBL26 offers the highest energy densities, making it the best candidate if it were not for the nearly 1000% price increase. It is hard to justify an increase of this magnitude, so the battery pursued will be the custom 55 Ah, 24 V lithium pack.

References
1. Cooper, Rory, David VanSickle, Steven Albright, Ken Stewart, and Margaret Flannery. "Power wheelchair range testing and energy consumption during fatigue testing." Journal of Rehabilitation Research, 32.3 (1995): 258-263. Print.
2. Doerffel, Dennis; Sharkh, Suleiman Abu. "A critical review of using the Peukert equation for determining the remaining capacity of lead-acid and lithium-ion batteries." Journal of Power Sources, 155.2 (2006): 396-398. Print.
3. Invacare 3G Storm Series Arrow Owner's Manual, Invacare, Elyria, OH 44035, 2010.
4. Bronzino, Joseph. The Biomedical Engineering Handbook, 2nd ed., Vol. 1. Boca Raton, FL: CRC Press LLC, 2000. 141.8-12. eBook.

The Effects of Electromagnetic Radiation on the Galvanic Corrosion of Metals

Student Researcher: Karin E. Bodnar

Advisor: Dr. Nathan Ida

The University of Akron Department of Electrical Engineering

Abstract A 2001 report to the Federal Highway Administration states that the direct costs of corrosion to the U.S. economy represent 3.2 percent of the U.S. Gross Domestic Product, or about $279 billion annually. The report concludes that corrosion has a major impact on the U.S. industrial complex and associated infrastructure as well as an adverse effect on industrial productivity, international competitiveness and security [1]. To address the growing need for corrosion management in performance assessment and systems health monitoring, the University of Akron has an undergraduate degree program in Corrosion and Reliability Engineering.

One specific research goal of the Corrosion and Reliability Engineering program is to understand the effects of electromagnetic radiation on the corrosion of metals. To understand if electromagnetic waves have an effect on the corrosion rate in metals, an exhaustive literature search was performed on relevant and related issues. Based on the results of the literature search, an experimental setup was developed. Preliminary test samples consist of aluminum coupled with steel in corrosion accelerating environments. Samples were placed near a transmission line operating at 80 MHz. By comparing the samples along the length of the transmission line and using control samples, the results of the experimental study will be analyzed. Further research will then be performed on the same metal couplings at higher frequencies and on other metal couplings.

Project Objective The objective of this research is to determine the effects of electromagnetic radiation (EMR) on the corrosion of metal-metal junctions. Specifically, this research will focus on the application of corrosion on antennas in a maritime environment. Experimentally, metal-metal junctions will be subjected to EMR in an accelerated corrosion environment and an experimental procedure for the formation of the oxide layer between metal-metal junctions will be developed.

Methodology Used Laboratory bench-top experiments will be conducted first as a proof of concept to determine electromagnetic radiation (EMR) impacts on corrosion beyond the normal corrosion in atmospheric and maritime environments. In these experiments, the electromagnetic environment will be controlled by selecting frequencies from the spectrum segments of 2 MHz – 5.8 GHz, which are frequencies typical of antenna transmissions. Next, the effects of corrosion products between metals on impedance will be explored through spectroscopy techniques. Finally, the theory of electromagnetic radiation and subsequent corrosion effects will be explored and modeled.

The bench-top experimental setup consists of a power supply operating at a switching frequency of 80 MHz connected to an amplifier that outputs 4 watts across a transmission line. The length of the transmission line is 300 cm. A load box at the opposite end of the transmission line is used to dissipate the power. A test specimen is placed approximately 20 cm from the transmission line along the length of the line. Each test specimen is composed of C1018 carbon steel washers bolted to an aluminum sheet using non-conducting fasteners. Each specimen contained either 8 or 16 carbon-steel-washer-to-aluminum-sheet metal-metal junctions. To create an accelerated corrosion environment, each specimen was sealed in an acrylic tube and a layer of tap water was poured along the bottom of the tube. Figure 1 illustrates the EMR transmission line experimental setup.

Using electrochemical impedance spectroscopy techniques and a standard electrochemical cell, the corrosion layer formed between two metal surfaces can be analyzed. First, a procedure was developed to form a uniform iron oxide layer on the surface of a carbon steel specimen. The procedure requires a potentiostat and a standard electrochemical cell. The electrochemical cell is a device which facilitates chemical reactions through the introduction of electrical energy. In the electrochemical cell, a carbon steel specimen was used as the working electrode, a saturated calomel electrode (SCE) was used as the reference electrode, and platinum-coated niobium was used as the counter electrode. The solution used for the chemical reaction is a mixture of 0.6 M NaCl and 0.01 M NaOH. Using the potentiostat, a voltage of -0.3 volts (direct current) was applied to the specimen for 30 minutes. Electrochemical impedance curves were produced for the oxide layer that formed on the carbon steel. This technique for the formation of an oxide layer on a single carbon steel specimen will be used in future spectroscopy analyses of steel-oxide-steel junctions and is thus capable of quantifying the diode-like behavior of metal-oxide-metal junctions.

In Electromagnetic Theory, it is well known that a current can be modeled as a sinusoidal waveform varying in magnitude as a function of position along an open transmission line, as shown by Figure 2. If the current distribution along the transmission line is known, then the magnetic field can be modeled at any point in space along the line. The position of the specimen in relation to the magnetic field distribution is shown in Figure 3. Using this information, a finite element model and a lumped circuit model will be developed to represent the parameterized and general characteristics of the formation of rust on the surface of the specimen. Once each model is developed individually, they can be combined to create a dynamic model of the effect of electromagnetic radiation on the formation of rust between metal-metal surfaces [2].
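The following sketch is only an illustration of the kind of modeling described here, not the authors' finite element or lumped circuit model: it evaluates a standing-wave current magnitude along an open-ended line and the quasi-static magnetic field 20 cm away. The frequency, line length, and specimen distance come from the text; the free-space propagation speed and the 1 A current amplitude are assumptions for illustration only.

```python
# Illustrative sketch: standing-wave current on an open-ended transmission
# line and the quasi-static magnetic field it produces at the specimen.
import math

C0 = 3.0e8                 # m/s, assumed propagation speed (free space)
FREQ = 80e6                # Hz, operating frequency from the text
LENGTH = 3.0               # m, transmission line length from the text
R_SPECIMEN = 0.20          # m, distance from line to specimen
I_MAX = 1.0                # A, assumed current amplitude (placeholder)

beta = 2 * math.pi * FREQ / C0   # phase constant (rad/m)

def current_magnitude(z):
    """|I(z)| for an open-circuited line; current is zero at the open end z = LENGTH."""
    return I_MAX * abs(math.sin(beta * (LENGTH - z)))

def h_field(z, r=R_SPECIMEN):
    """Quasi-static magnetic field (A/m) of a long straight current at distance r."""
    return current_magnitude(z) / (2 * math.pi * r)

for z_cm in range(0, 301, 50):
    z = z_cm / 100
    print(f"z = {z_cm:3d} cm: |I| = {current_magnitude(z):.2f} A, H = {h_field(z):.2f} A/m")
```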

Results Obtained Preliminary results of the EMR transmission line testing are shown in Figure 4. The figure shows the magnitude of the magnetic field along the length of the transmission line. The magnetic field distribution along the antenna varies in magnitude as a function of the wavelength of the antenna. From these test results, a potential correlation between the magnitude of the magnetic field and the rate of corrosion formed on the specimen was noted. Washers 1, 2, and 6, which sat at relatively high magnetic field amplitudes, showed relatively more corrosion than washers 3 and 4, which sat at relatively low magnetic field amplitudes. Additionally, it was observed that more corrosion formed on the side of each steel-washer-to-aluminum junction closest to the transmission line, or source, than on the side farther from the source. This can be seen in the corrosion products between washers 1, 3, 5, and 7 at the bottom of the figure: the edge of the metal-metal junction facing the source was observed to have more corrosion product than the edge farther away from the source.

In order to confirm the results seen from the EMR Transmission Line testing, more testing will be done with Al-Steel samples using a new experimental setup. Additionally, experiments will be performed with other sample materials, such as Fe-Al samples and at other frequencies within the range of 2 MHz – 5.8 GHz.

The results of the electrochemical impedance spectroscopy for the formation of an oxide layer on carbon steel samples will be used as a basis for future spectroscopy of metal-oxide-metal junctions. Using the techniques described in the Methodology section, the real versus imaginary impedance of the oxide-steel junction was plotted every 30 minutes. Figure 5 shows that after the corrosion product is formed on the surface of the carbon steel specimen, the real vs. imaginary impedance of the specimen decreases significantly, indicating a change in the corrosion rate of the specimen.

Figures and Tables

Figure 1. EMR Transmission Line Experimental Setup
Figure 2. Current Distribution along an Open Transmission Line

Figure 3. Specimen and Magnetic Field Lines
Figure 4. EMR Transmission Line Experimental Results

Figure 5. Real Impedance vs. Imaginary Impedance of Carbon Steel Specimen

Future research will focus on repeating the EMR experiment with Al-Steel samples and with a new experimental setup so that measurements can be made to quantify the observed correlations. Additionally, new experiments with other sample materials, such as Fe-Al samples and new experiments at other frequencies within the controllable range will be performed. Finally, the modeling of EMR corrosion to determine the effects of corrosion products between metals will be refined.

References
1. www.corrosioncost.com
2. Dr. Nathan Ida, Dr. Joe Payer, and Dr. Xi Shan, for their technical advisement and expertise in the areas of electromagnetics and corrosion.

A Study of Concentrated and Distributed Ballast Weight on a Racing Shift Kart Chassis

Student Researchers: Robyn L. Bradford and Jesse E. Daniels

Advisor: Dr. Mahmoud Abdallah

Central State University Department of Manufacturing Engineering

Project Description Many shift kart chassis are failing prematurely due to the mandatory addition of ballast weight in order to meet minimum weight requirements. Currently, there is little information known about this problem (i.e., the specific types of failures that are occurring, the conditions under which they are occurring and the exact failure locations). The purpose of this project is to investigate the problem of shift kart chassis failure by comparing the stress induced from concentrated and distributed ballast weight. Literature suggests that concentrating weight on the chassis close to the center of gravity and as low as possible may decrease stress on the kart, as opposed to distributing weight around the kart. Testing this hypothesis will include a comparison of theoretical stress results using finite element analysis and experimental stress results using strain gauges. Three different scenarios will be tested: (1) the addition of weight bolted to the back of the driver’s seat, (2) weight added to the chassis sides, and (3) low-placed concentrated weight near the center of gravity (CG). The goal for the project is to design a ballast weight assembly that reduces chassis cycle fatigue and increases chassis life, reliability, and safety.

Introduction A shifter kart (shift kart) is a scaled-down version of a Formula 1 race car (Figure 1). They are low-riding racing vehicles that are designed without a suspension and a differential. The karts can either have a 125 or 250 cc two-stroke engine with a 5 or 6 speed manual transmission. The top speed for a 125 cc shift kart is about 85 mph and the 250 cc model can reach up to 150 mph. Acceleration is rapid from 0 to 60 mph in about 3 seconds; and they can stop quickly with a 4-wheel hydraulic braking system. [2]

Shift kart racing is quite popular and is regarded as an entry-level motorsport. To create an even playing field, all karts must meet specific requirements that differ according to racing class. Although shift karts are much less expensive than other types of racing vehicles, the price range for a new kart can be in the thousands of dollars. As such, racers want to protect their investment and do all that they can to reduce the likelihood of chassis failure.

Project Goal and Objectives The project goal is to design a ballast weight assembly that reduces chassis failure from cycle fatigue; and the objectives for this study are as follows:

• To conduct a literature search on shift kart chassis failure • To conduct a literature search on ballast weight practices • To model a chassis using CAD software • To perform stress analysis using finite element analysis (FEA) • To design a data acquisition system (DAS) to measure, record, and analyze deflection from static loads at critical points on the frame identified by FEA • To conduct static load tests and collect strain data using the DAS

Methodology Used Literature Search - An internet search was conducted to gather information on shift kart chassis failure. Of particular interest was finding specific information on failure types, the location of failures, and the operating conditions under which they occurred. The results for failure proved inconclusive as only a few anecdotal references by racers to chassis failure were found in their blogs. The search for current ballast weight practices was more successful. Literature results show that there are two primary techniques for adding weight to shift karts. Either weight is bolted to the back of the racing seat, or it is added along the

sides of the chassis. These practices are believed to cause increased stresses to the frame that result in premature failure from cycle fatigue.

Literature results also indicate that the optimal location for ballast weight is low placement close to the center of gravity (CG), occupying as little area as possible. In other words, concentrated ballast is preferable to distributed ballast, with benefits including increased acceleration and agility. [3] The operating assumption here is that the improper location and configuration of ballast weight on the shift kart are causing increased stresses to the frame and contributing to failure. If stresses can be significantly reduced, then the durability and life of the kart may be extended.

Theoretical Stress Analysis - A standard carbon steel alloy (ASTM-A36) shift kart was purchased for modeling and testing purposes. The chassis was used to develop a 3D solid CAD model using Solid Edge as shown in Figure 2. This CAD model was then exported to ALGOR to create 3D solid finite element models and meshes for static stress analysis. A summary of the test conditions is given in Table 1.

Table 1. Summary of Test Conditions
Condition  Description
C0         Null (no ballast)
C1         Ballast bolted to the back of the racing seat
C2         Concentrated ballast located low and near the CG

For this study, the S3 racing class was chosen, which establishes a minimum weight requirement of 395 lb according to the Superkarts! USA (SKUSA) rule book. Superkarts! USA is a North American regulatory organization for shift kart racing. [4] The actual kart weight is 192 lb and the driver’s weight is 176 lb, so 395 lb − (192 lb + 176 lb) = 27 lb (approximately 30 lb) of ballast is needed. ALGOR’s static stress contour plots identified high stress regions on the frame for (1) weight bolted to the back of the racing seat; and (2) concentrated weight near the center of gravity. Ballast added to the sides of the frame will be examined experimentally.

Experimental Stress Analysis – With complex structures like the shift kart chassis, the experimental determination of stress requires strain measurements that are subsequently converted into stress values. Thus, experimental stress analysis will be done by measuring the deformation of the shift kart chassis under load. To accomplish this, a laptop-based data acquisition system (DAS) was designed to measure, record, and analyze bending and torsional strain at the high stress points identified by ALGOR. The four main components of the DAS are:

1) The metallic foil 120 ohm strain gauges.
2) The Wheatstone bridge-based strain module with built-in signal conditioning (bridge completion, amplification, filtering, voltage excitation of 2.5 V, analog-to-digital converter, etc.).
3) A laptop computer.
4) LabVIEW software (a graphical programming environment by National Instruments that transforms the laptop into a virtual measuring instrument).

The critical stress regions from ALGOR were used for locating and mounting the strain gauges to the shift kart. A half-bridge strain gauge configuration that measures bending strain but rejects axial strain is being used (axial strain is considered negligible). The circuit diagram for the half-bridge circuit is shown in Figure 3. Voltage measurements from the half-bridge circuit are converted to strain using the following formula:

ε = (-2Vr / GF) × (1 + RL / Rg)    (1)

where ε is strain (in/in, m/m), Vr is the voltage ratio the internal channels use in the voltage-to-strain conversion equation, GF is the gauge factor, RL is the lead resistance, and Rg is the nominal gauge resistance [5][6]. Strain measurements from the DAS can then be converted to stress using Young’s modulus (modulus of elasticity), the ratio of stress to strain:

E = σ / ε = (F / A0) / (ΔL / L0)    (2)

where E is Young’s modulus (psi or N/m²), σ is stress (psi or N/m²), F is the force (load) applied to the object, A0 is the original cross-sectional area through which the force is applied, ΔL is the amount by which the length of the object changes, and L0 is the original length of the object. Solving for stress in Equation 2 yields the following formula:

σ = E·ε    (3)

where Young’s modulus for ASTM-A36 steel is 29 × 10^6 psi (200 × 10^9 N/m²).

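A minimal sketch of the conversion chain defined by Equations 1-3 (voltage ratio to strain to stress) is shown below; the gauge factor, lead resistance, and example voltage ratio are illustrative assumptions, since the report does not list them.

```python
# Sketch of the DAS conversion chain in Equations 1-3: half-bridge voltage
# ratio -> strain -> stress. GF, R_LEAD, and the example reading are
# illustrative assumptions, not values from the report.
GF = 2.0         # gauge factor (assumed typical metallic-foil value)
R_LEAD = 0.5     # ohms, lead-wire resistance (assumed)
R_GAUGE = 120.0  # ohms, nominal gauge resistance (from the report)
E_A36 = 29e6     # psi, Young's modulus of ASTM-A36 steel

def half_bridge_strain(vr):
    """Equation 1: bending strain from the measured voltage ratio Vr."""
    return (-2.0 * vr / GF) * (1.0 + R_LEAD / R_GAUGE)

def stress_psi(strain):
    """Equation 3: stress = E * strain."""
    return E_A36 * strain

vr_measured = -100e-6  # hypothetical example voltage ratio
eps = half_bridge_strain(vr_measured)
print(f"strain = {eps*1e6:.1f} microstrain, stress = {stress_psi(eps):.0f} psi")
```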
Results Obtained The ALGOR stress contour plots show that ballast weight bolted to the back of the racing seat results in increased stresses, primarily around welds in four regions of the chassis, as indicated in Figure 5. Stresses from a concentrated weight are shown in Figure 6. The concentrated ballast resulted in a small increase in stress in some of the weld joints and a decrease in stress in others. Experimental static load tests on the physical kart will be conducted soon.

Significance and Interpretation of Results Since there was an increase in stress due to concentrated weight, dynamic testing will be conducted in the future to understand the nature of chassis cycle fatigue from weight added higher than the CG height (as in the case of weight bolted to the top back of the racing seat).

The ALGOR stress profiles obtained thus far have allowed for the development of a preliminary design solution as shown in Figure 7. The proposed ballast assembly is made of two metal pieces (actual metal to be determined). The assembly mounts around the frame in the corner closest to the CG.

For the kart model used in this study, the CG is identified in Figure 2. Because of the racing seat position, a portion of the weight must be located underneath the seat. To accommodate this, a hole will be drilled through the seat and the weight assembly to securely mount it in place with a nut and bolt.

An important design advantage is the assembly’s simplicity and ease of use. The weight can be easily disassembled and then quickly adjusted to add different weight as necessary to meet the minimum weight requirements for various racing classes.

Figures/Charts

Figure 1. Rosso Korsa shift kart [1]
Figure 2. CAD drawing of the physical shift kart chassis with the center of gravity (CG) labeled.


Figure 3. Circuit diagram for the half-bridge strain gauge configuration [5].

Figure 4. ALGOR stress contour plot for the null condition (no ballast).
Figure 5. ALGOR stress contour plot for ballast weight bolted to the back of the racing seat.

Figure 6. ALGOR stress contour plot for the concentrated ballast condition.
Figure 7. A conceptual CAD drawing of a ballast weight assembly (isometric view).

Acknowledgments The authors would like to thank their advisor Dr. Mahmoud Abdallah and Mr. Mel Shirk, Engineering CNC Lab Technician. Special thanks are also extended to the Ohio Space Grant Consortium (OSGC), the Ohio Aerospace Institute, and Dr. Gerald T. Noel, Central State University OSGC Campus Representative, for this research opportunity. Appreciation is also given to Mr. Gorgui Ndao, Manager for Central State University’s Center for Student Opportunities (CSO), and Ms. Marian Hoey, CSO Administrative Assistant. Other individuals who contributed greatly to this research project were Mr. Johnny Daniels and Mr. Nate Morris. Thank you all for your support.

References
1. http://racinggokartsforsale.com/racing-go-karts-for-sale/shifter-kart-rosso-korsa.php
2. http://www.125ccshifterkarts.com/
3. Longacre Racing, Ballast Placement Tips [online resource], URL: http://www.thedirtforum.com/ballastplacement.htm [cited 19 January 2011].
4. Superkarts! USA, URL: http://www.superkartsusa.com/
5. National Instruments, Strain Gauge Configuration Types [online resource], URL: http://zone.ni.com/devzone/cda/tut/p/id/4172 [cited 28 January 2011].
6. National Instruments, Choosing the Right Strain-Gauge for Your Application [online resource], URL: http://zone.ni.com/devzone/cda/tut/p/id/3092 [cited 11 December 2010].
7. Zecher, J., Finite Element Analysis Tutorial Using ALGOR Version 14, SDC Publications, Indianapolis, Indiana, 2003, Chaps. 4, 11.
8. Beckman, B., The Physics of Racing, Part 1: Weight Transfer [online resource], URL: http://phors.locost7.info/phors01.htm [cited 19 January 2011].
9. Figliola, R. S. and Beasley, D. E., Theory and Design for Mechanical Measurements, 4th ed., John Wiley & Sons, Hoboken, New Jersey, 2006, Chap. 11.

NRC Regulation of Medical Uses of Radiation

Student Researcher: Royel S. Bridges

Advisor: Dr. Edward Asikele

Wilberforce University Department of Engineering and Computer Science

Abstract The NRC, or Nuclear Regulatory Commission, is a government agency that regulates the safety of nuclear power production and other civilian uses of nuclear materials. The research conducted here examines the role of the NRC in regulating the medical use of nuclear materials. As stated on the NRC website, the NRC has regulatory authority over the possession and use of byproduct, source, or special nuclear material in medicine. In this case, a byproduct is a secondary or incidental product deriving from a nuclear chemical reaction or a biochemical pathway, and is not the primary product being produced. Byproduct materials are used in many medical practices and medicines, including calibration sources, radioactive drugs, bone mineral analyzers, portable fluoroscopic devices, brachytherapy sources, and other devices.

Project Objectives The objective of this research is to support the discussion encompassing the role of the Nuclear Regulatory Commission in regulating the medical use of radiation and nuclear materials. The Nuclear Regulatory Commission, or NRC, is a government agency that regulates the safety of nuclear power production and other civilian uses of nuclear materials. Quoted directly from the NRC website, the NRC has regulatory authority over the possession and use of byproduct, source, or special nuclear material in medicine. In this case, byproduct is a secondary or incidental product deriving from a nuclear chemical reaction usually taking place in nuclear reactors. Byproduct materials are used in many medical practices and medicines. Some of these practices and uses are in calibration sources, radioactive drugs, bone mineral analyzers, portable fluoroscopic imaging devices, brachytherapy sources, and other devices. The main medical uses of radiation discussed within the research are x-rays, chemotherapy, homeopathic treatment, teletherapy, brachytherapy, therapeutic medicine, radiopharmaceuticals, and nuclear medicine.

Methodology Used The research is analytical in nature, supplemented by fundamental background research. By gathering and analyzing facts and available information about the Nuclear Regulatory Commission, a critical evaluation was made of the NRC’s substantial role in regulating the medical use of radiation. Some generalization was possible after this critical evaluation and analysis.

Results Obtained After the research and analyses were conducted, it was clear and evident that the Nuclear Regulatory Commission has a significant role in the regulation of the medical use of radiation. With the regulation of government agencies such as the NRC, radiation can be used for magnificent things, most importantly to save lives. The research also shows that, with the regulated use of radiation, advancements in the medical field and technology have been made and are now safe and open to the greater public.

Significance and Interpretation of Results The NRC, together with the Agreement States, regulates the radiation exposure of patients and workers through licensing, inspection, and enforcement of its requirements. The NRC regulates the medical use of radioactive material in 15 states, while the Agreement States comprise the other 35 states that have entered into agreements with the NRC to regulate the use of certain radioactive materials under the Atomic Energy Act of 1954. The NRC regulates the use of radioactive materials through a regulatory process that includes five components:

1. Regulation
2. Licensing, Decommissioning, and Certification
3. Oversight
4. Operational Experience
5. Research, Support and Decisions

The first of the five components, Regulation, involves developing regulations and guidance for applicants and licensees. The second, Licensing, Decommissioning, and Certification, involves licensing or certifying applicants to use nuclear materials or operate nuclear facilities, and decommissioning that permits license termination. The third component, Oversight, is the overseeing of licensee operations and facilities to ensure that licensees comply with safety requirements. The fourth, Operational Experience, is the evaluation of operational experience at licensed facilities or involving licensed activities. The fifth, Research, Support and Decisions, includes conducting research, holding hearings to address the concerns of parties affected by agency decisions, and obtaining independent reviews to support the NRC’s regulatory decisions.

Other objectives of the research are to discuss the NRC’s involvement in regulating working-environment safety and the handling of radioactive medicine and low-level radioactive waste through licensing and certification of those who administer nuclear medicine and dispose of it. Low-level radioactive waste includes contaminated protective shoe covers and clothing, wiping rags, mops, filters, medical tubes, swabs, injection needles, syringes, and laboratory animal carcasses and tissues. The NRC also licenses the nuclear medical physicists and technologists who administer radiopharmaceuticals and works to ensure that radioisotopes are used properly and effectively and handled in accordance with governmental standards.

In conclusion, it is no secret that nuclear radiation can be harmful if not handled properly, and there is a history of accidents that solidifies this point. Although radionuclides can be harmful, with the regulation of government agencies such as the NRC, radiation can be used for magnificent things. One of the most important uses of radiation is in medicine, where it can be used to save lives. With the use and regulation of radiation, advancements in the medical field and technology have been made and are now safe and open to the greater public.

Figures/Charts

Figure 1. The Nuclear Regulatory Process

Acknowledgments and References
1. "AAPM File Not Found." AAPM: The American Association of Physicists in Medicine. Web. 07 Apr. 2011.
2. "NRC: Backgrounder on Medical Use of Radioactive Materials." NRC: Home Page. Web. 07 Apr. 2011.
3. "NRC: Byproduct Materials." NRC: Home Page. Web. 07 Apr. 2011.
4. "NRC: Fact Sheet on Byproduct Materials." NRC: Home Page. Web. 07 Apr. 2011.
5. "NRC: Fact Sheet on Medical Use of Radioactive Materials." NRC: Home Page. Web. 07 Apr. 2011.
6. "NRC: Radioactive Waste." NRC: Home Page. Web. 07 Apr. 2011.
7. "Nuclear Medicine Technologists." U.S. Bureau of Labor Statistics. Web. 07 Apr. 2011.
8. "Online Manual for Nuclear Handling." http://las.perkinelmer.com/content/manuals/gde_safehandlingradioactivematerials.pdf

How Can Multicast Packets Be Used To Pass Across A Virtual Switch Without Data Loss?

Student Researcher: Tanisha M. Brinson

Advisor: Dr. Edward Asikele

Wilberforce University Department of Computer Science

Abstract In computer networking, multicast is the delivery of a message or information to a group of destination computers simultaneously in a single transmission from the source, creating copies automatically in other network elements, such as routers, only when the topology of the network requires it. Multicast is most commonly implemented as IP multicast, which is often employed in Internet Protocol (IP) applications of streaming media and Internet television. In IP multicast, the implementation of the multicast concept occurs at the IP routing level, where routers create optimal distribution paths for datagrams sent to a multicast destination address. At the Data Link Layer, multicast describes one-to-many distribution such as Ethernet multicast addressing, Asynchronous Transfer Mode (ATM) point-to-multipoint virtual circuits (P2MP), or InfiniBand multicast.

Project Objectives This project deals with aspects of IP multicast as well as TV multicasting. IP multicast is a technique for one-to-many communication over an IP infrastructure in a network. It scales to a large receiver population by not requiring prior knowledge of who or how many receivers there are. Multicast uses network infrastructure efficiently by requiring the source to send a packet only once, even if it needs to be delivered to a large number of receivers; the nodes in the network take care of replicating the packet to reach multiple receivers only when necessary.
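As a minimal illustration of IP multicast group membership (separate from the project's virtual switch setup), the sketch below shows a receiver joining a multicast group over UDP; the group address and port are arbitrary example values.

```python
# Minimal sketch of an IP multicast receiver: the host joins a multicast group
# so a single transmission from the source reaches every member.
import socket
import struct

GROUP = "239.1.1.1"   # administratively scoped multicast group (example)
PORT = 5004           # example port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Ask the kernel (and, via IGMP, the upstream switches/routers) to add this
# host to the multicast group on the default interface.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, sender = sock.recvfrom(1500)
    print(f"{len(data)} bytes from {sender}")
```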

Methodology Used A virtual switch is a software program that allows one virtual machine (VM) to communicate with another. Just like its counterpart, the physical Ethernet switch, a virtual switch does more than just forward data packets; it can intelligently direct communication on the network by inspecting packets before passing them on. Some vendors embed virtual switches right into their virtualization software, but a virtual switch can also be included in a server's hardware as part of its firmware. One of the key challenges with server virtualization has been to figure out a way to allow network administrators to move VMs across physical hosts without having to stop and reconfigure them individually. Moving VMs across physical hosts in a scalable way is time consuming and, if not done right, can potentially expose the network to security breaches; these concerns may prevent an enterprise from taking its virtualization initiative beyond simple server consolidation to more dynamic resource allocation. That is where advancements in virtual switches can help. Because a virtual switch is intelligent, it can potentially be used to ensure the integrity of a VM's profile, including its network and security settings, as the VM is migrated across physical hosts on the network.

Results Obtained The figure below shows how the VMware Infrastructure follows a modular design so that all resources can be shared and assigned as needed. Virtual and physical networking components are designed the same way. If you need to share some of your physical or logical resources, you simply need to have them available and then configure them for use. This helps to create the most flexibility and, if done correctly, the most efficiency. Here in Figure 1 you can see that VMs can be connected to each other through a virtual switch component, and then to physical NICs as needed. In Figure 1 you will also find that the management network is separate (and isolated) from the rest of the network, thus increasing security for the management of your infrastructure.


References 1. Khnaser, Elias N. (2009). VCP Exam Cram: VMware Certified Professional. Que Publishing. 2. http://en.wikipedia.org/wiki/Multicast 3. http://virtualizationadmin.com/articles-tutorials/vmware-esx-articles/installation-deployment/vmware-understanding-virtual-switch.html 4. http://www.coleengineering.com/

Wireless Charging System

Student Researcher: Rachel L. Bryant

Advisor: Dr. Yan Zhuang

Wright State University Department of Electrical Engineering

Abstract Most modern electronics, such as laptop computers, cell phones, and mp3 devices, require a cable connection to a power source when charging. This connection can be inconvenient and restrictive. The ability to charge a device wirelessly, with the same efficiency as a cable connection, would be ideal. There are already a few existing ways that power can be transmitted wirelessly, but most lack sufficient range and efficiency to be put into practical use. Using resonance coupling, a technique in which frequencies at a transmitter are matched to the frequencies at a receiver to increase the inductive properties of the system, wireless power transfer becomes more efficient. The goal is to design the appropriate matching network for an existing set of resonance coils to wirelessly charge a lithium-ion polymer battery quickly and efficiently.

Project Objectives The purpose of this project is to create a matching network capable of maximizing the efficiency of power transferred between two existing coils. The goal is to reach better than 40% efficiency over 3 meters between the coils, and wirelessly charge a 3.7 volt lithium-ion polymer battery. To achieve this main objective of the project, smaller goals must be met. The first goal is to learn how to use a Hewlett-Packard impedance analyzer in order to measure the resonant frequency of the two copper coils and the impedance of the coils at that resonant frequency. Due to the age of the model owned by Wright State University, there is no faculty member with the knowledge required to perform these tests. Using the impedance found at the resonant frequency, calculations must be performed in order to create circuit prototypes that will maximize the power transfer to the load. Simulation software is to be used to test the power transfer efficiency of the entire system before the circuit boards are milled. After all these small goals are met, the milled circuit boards will be incorporated into the charging system and tested by monitoring how effectively a lithium-ion polymer battery is wirelessly charged.

Methodology Used Maximum power transfer occurs when the load impedance is matched to the source impedance. A matching network can be placed between a load and a transmission line, and is designed so that the characteristic impedance of the transmission line is equal to the impedance seen looking into the matching network (Pozar, 2005, p. 222). This project requires a network designed to be matched to 50 Ω, which is the impedance of the power amplifier that will be used during testing. The resonant frequency of the copper coils first needed to be measured, because energy will be transferred most easily at this frequency. To find the resonant frequency, the transmitter coil was connected to an impedance analyzer. This step in the project required an above-average understanding of the HP impedance analyzer and a specialized test fixture. The impedance analyzer was set up to show the magnitude and phase of the impedance of the coil as many frequency sweeps were performed. The receiver coil went through the same testing. The magnitude and phase data were transformed into real and imaginary parts. The resonant frequency was then determined by looking at when the imaginary part of the impedance was zero. The impedance of the coils at the resonant frequency was observed.

The impedance of the copper coils at the resonant frequency then needed to be normalized using

zL = ZL / Zo,

where Zo is the impedance of the power amplifier, 50 Ω, ZL is the impedance of the coils at resonant frequency, and zL is the normalized impedance. The normalized impedance was then plotted on the Smith Chart, and the chart was used to help characterize the impedance matching prototypes.
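As a minimal numerical sketch of the two steps just described (locating the zero crossing of the imaginary part of the impedance and normalizing by 50 Ω), the short script below uses illustrative sweep data; the array values and variable names are placeholders, not the measured data.

import numpy as np

Z0 = 50.0  # power amplifier impedance (ohms)

# Example sweep data: frequency (Hz), impedance magnitude (ohms), phase (degrees).
freq = np.linspace(10.0e6, 11.5e6, 1501)
mag = np.full_like(freq, 934.0)          # placeholder magnitude values
phase = np.linspace(80.0, -80.0, 1501)   # placeholder phase values

# Convert magnitude/phase into real and imaginary parts.
Z = mag * np.exp(1j * np.deg2rad(phase))

# Resonance: where the imaginary part of the impedance crosses zero.
idx = np.argmin(np.abs(Z.imag))
f_res = freq[idx]
ZL = complex(Z[idx])

zL = ZL / Z0   # normalized impedance for the Smith Chart
print(f"resonant frequency ~ {f_res / 1e6:.2f} MHz")
print("ZL =", ZL, "ohms, zL =", zL)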

The remaining part of the project requires knowledge of the substrates that will be used to mill the circuit boards. Wright State University has several substrates, but the materials of which they are made are unknown. Contact with the manufacturer is in progress to receive characteristic information on the substrates. When the wavelength and other information are determined, the circuit can be fully simulated, milled, and tested.

Results Obtained Figure 1 shows the data collected during the frequency sweeps. The resonant frequency for both coils is 10.8 MHz. The impedance of both coils was measured to be 934 Ω at the resonant frequency. Figure 2 shows the circuit prototype found from the Smith Chart, where d is the length of the transmission line and l is the length of the open shunt stub.

Figure 1. The imaginary part of the impedance crosses zero at 10.8 MHz.

Figure 2. Circuit prototype where d and l are in terms of wavelength to be determined by the characteristics of the substrate
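The lengths d and l in Figure 2 come from the Smith Chart construction; as a rough cross-check, the sketch below solves the same single open-shunt-stub matching problem numerically, assuming the measured 934 Ω coil impedance is purely real at resonance (an assumption made only for this illustration).

import numpy as np

Z0 = 50.0        # reference impedance of the matching network (ohms)
ZL = 934.0       # coil impedance at resonance, assumed purely real here
zL = ZL / Z0     # normalized load impedance

# Move along the line toward the generator until Re{y(d)} = 1.
d_vals = np.linspace(1e-4, 0.4999, 200000)        # stub position in wavelengths
t = np.tan(2 * np.pi * d_vals)
z_d = (zL + 1j * t) / (1 + 1j * zL * t)           # normalized input impedance at d
y_d = 1.0 / z_d
idx = np.argmin(np.abs(y_d.real - 1.0))
d = d_vals[idx]
b = y_d.imag[idx]

# An open-circuit shunt stub of length l has normalized input susceptance tan(2*pi*l);
# choose l so the stub cancels the residual susceptance +jb.
l = (np.arctan(-b) / (2 * np.pi)) % 0.5

print(f"d ~ {d:.4f} wavelengths, l ~ {l:.4f} wavelengths")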

Reference 1. Pozar, David. (2005). Microwave Engineering (3rd ed). New Jersey: John Wiley & Sons.

CFD Analysis of Wind Tunnel Blockage

Student Researcher: Jeffrey W. Carter

Advisor: Dr. Jed Marquart

Ohio Northern University Department of Mechanical Engineering

Abstract Although the Navier-Stokes equations and their subsequent simplifications have been around for some time, the ability to solve them with a computer is relatively new. Increased computing power has allowed Computational Fluid Dynamics (CFD) to become more capable and accurate. The ability to simulate fluid flow and its interactions with boundaries has limitless potential and applications. There are several reasons why CFD has an advantage over experimental testing: there is no need to construct a physical model or prototype, the cost is almost always lower, and one test run generates more data (velocity, density, and pressure at every grid point). However, there is constant discussion as to which approach is better or should be used in certain situations. This experiment examined a common wind tunnel effect through CFD analysis.

When experiments are performed inside of a wind tunnel there are certain considerations that must be made. Among them is the blockage factor correction. Wind tunnel experiments can be altered and less accurate if the blockage factor is not represented in the results. The purpose of this project was to numerically examine wind tunnel flow blockage by comparing the resulting CFD values (i.e., coefficient of drag) to previously published experimental or analytical results for a given geometry inside a wind tunnel. This was accomplished using the CFD software packages Pointwise, Cobalt, and Fieldview.

Project Objectives The geometry used was modeled on the test section of the Ohio Northern University College of Engineering's open-circuit wind tunnel. Figure 1 shows the tunnel modeled in Pointwise as a wire frame.

Figure 1. Wind tunnel test section (12” x 12” x 24”) with a 4-inch cube, shown as a wire frame

As shown, the dimensions of the wind tunnel were 12”x12”x24”. Five different cube sizes were used in the analysis; the side lengths were 1 inch, 2 inch, 4 inch, 7 inch, and 9 inch. This gave a percent blockage range of roughly 1-56%. The center of the cube was placed at the center of the wind tunnel for every cube size. The boundary conditions used can be seen in Figures 2 and 3.

Figure 2. Boundary conditions (inlet: source, Riemann invariants; outlet: sink, corrected mass flow; cube: solid wall, adiabatic, no slip, no trips)

Figure 3. Boundary conditions (tunnel walls: solid wall, slip)

To get a better understanding of the boundary conditions that were applied to each surface, a boundary condition file example is displayed in Figure 4.

Figure 4. Example Cobalt boundary condition file (tunnel wall: solid wall, slip; inlet: source, Riemann invariants; outlet: sink, corrected mass flow; cube: solid wall, adiabatic, no slip, no trips)

The boundary domains were meshed in Pointwise as unstructured. From there the mesh was exported to Gridgen. This had to be done to obtain a tight mesh around the cube in order to capture the boundary layer. To do this, anisotropic tetrahedral meshing was used. Figure 5 shows an example of one of the tight meshes using this method.

Figure 5. Anisotropic Boundary Layer Mesh

In this analysis all extrusions had inputs very close to the following: an initial distance normal to the surface, Δs, of 0.001 inches, a growth rate of 1.3, and 20 layers.
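For a sense of scale, these extrusion inputs imply a total boundary-layer mesh height given by the geometric series below; this is a quick back-of-the-envelope check, not a value reported in the study.

# Total height of the anisotropic extrusion: a geometric series of layer thicknesses.
ds = 0.001      # initial layer thickness normal to the surface (inches)
growth = 1.3    # growth rate between successive layers
layers = 20     # number of layers

total_height = ds * (growth**layers - 1) / (growth - 1)
print(f"approximate extrusion height: {total_height:.3f} in")   # ~0.63 in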

Methodology This analysis was solved using the turbulent Navier-Stokes equations along with the continuity, momentum, and energy equations. In the solver used, Cobalt, the Spalart-Allmaras turbulence model was used. This model is based on the Reynolds-Averaged Navier-Stokes (RANS) equations. The following equations [2] are the very general forms of the governing equations used by Cobalt. Equation 1 is the mass conservation equation.

(1)

Equations 2, 3, and 4 are the momentum equations: Mx, My, and Mz, respectively.

(2)

(3)

(4)

Equation 5 is the conservation of energy.

(5)

The coefficient of drag is computed in Cobalt through equation 6.

(6)
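The equation images did not reproduce here; for reference, the standard forms assumed for the mass conservation equation (1) and the drag coefficient definition (6) are given below (the exact notation used by Cobalt may differ).

$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0, \qquad C_d = \frac{D}{\tfrac{1}{2}\,\rho_\infty V_\infty^2\, A_{\mathrm{ref}}}$$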

Results Obtained The .job file was run for all of the cases and the .out file was evaluated. First, convergence had to be reached for each case. With convergence reached, the results could be interpreted. The original Excel worksheet with the compiled data has been attached to this report. Figure 6 shows the initial data received from Cobalt.

Figure 6. Uncorrected Cd and Blockage Ratio

This data was clearly unexpected. Although there is the expected and noticeable trend showing a positive correlation between the blockage ratio and the coefficient of drag, the values for Cd at the high blockage ratios were drastically different from what was expected. After some thought, it was decided that the mesh quality around the cube should be investigated. Although the mesh rating was adequate for all of the cases and the anisotropic tetrahedral height, number of layers, and growth rate were kept the same, the length in the

two dimensions along the face of the cube changed significantly. Table 1 shows the number of points on the cube’s constructing connectors as well as the resulting Cd.

Table 1. Nodes Per Inch on Cube Data

Cube Size (in)   Blockage Ratio   Cube Mesh Size (nodes/in)   Cd
1                1%               15                          1.01
2                3%               15                          4.13
4                11%              7.5                         19.61
7                34%              4.3                         144.5
9                56%              3.9                         630.6

After examining this data, it is easy to see the negative correlation between the number of points on the cube’s connectors (Cube Mesh Size) and the coefficient of drag. To examine this more carefully, it would be necessary to increase the Cube Mesh Size to approximately 15 nodes per inch. However, this would inherently increase the total cell count. A short study was completed using the 7 inch cube and comparing the Cube Mesh Size to the total cell count. The result is shown below in Figure 7.

Figure 7. Mesh Sizing Study

Given the current processing speed, it would be impossible to adequately mesh the cubes so that all cases had an acceptable Cube Mesh Size. Instead, another source of error was found: human error. In the .job file, the reference area must be entered by the user to adequately calculate Cd, so the solution found in each case was incorrect by a factor of the cube’s length squared. Using this corrected information, Figure 8 was created.
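A minimal sketch of the correction just described, assuming the reference area had been left at 1 in² so that each reported Cd must be divided by the cube side length squared (raw values taken from Table 1):

# Apply the reference-area correction to the raw Cd values from Table 1.
cases = {1: 1.01, 2: 4.13, 4: 19.61, 7: 144.5, 9: 630.6}   # side length (in) -> raw Cd

for side, cd_raw in cases.items():
    cd_corrected = cd_raw / side**2   # correct for the missing reference area (side^2)
    error = abs(cd_corrected - 1.05) / 1.05 * 100   # percent error vs. theoretical 1.05
    print(f"{side} in cube: corrected Cd = {cd_corrected:.2f} ({error:.1f}% vs. 1.05)")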

Figure 8. Corrected Blockage Cd Data

The theoretical coefficient of drag for a cube facing normal to the flow is 1.05 [3]. The resulting corrected data matched what was expected very closely. At lower blockage ratios, the percent error was less than 4%. However, as the blockage ratio increased, in line with trends projected by previous research [4], the error grew to an unacceptable amount, showing the need for a correction factor based on blockage ratio. According to certain wind tunnel testing books, the allowable degree of blockage is 5 to 6% in low-speed tunnels [5]. For future work, it would be ideal to create a correction factor for the coefficient of drag based on the blockage ratio. To aid in this, more cases could be examined to increase the number of data points. The mesh size was another concern throughout this analysis; however, the cell count could be cut to a fourth by using two symmetry planes running the length of the wind tunnel. This would allow for tighter mesh controls, specifically tighter control of the Cube Mesh Size.

Another idea for future work is to modify the boundary conditions. The wind tunnel walls were modeled as slip walls to reduce the complexity. It would be worthwhile to examine what changes result from switching the slip walls to no-slip walls.

Acknowledgments I would like to thank Cody Esbenshade for his significant contributions throughout the completion of this project. Also, I would like to thank Dr. Jed Marquart for his aid in meshing the models.

References 1. Sahini, Deepak. “Wind Tunnel Blockage Corrections: A Computational Study.” Texas Tech University. August 2004. February 10, 2011. 2. Tu, Jiyuan, Guan Heng Yeoh, and Chaoqun Liu. Computational Fluid Dynamics: A Practical Approach. Amsterdam: Butterworth-Heinemann, 2008. Print. 3. “Drag Coefficient.” Wikipedia, the Free Encyclopedia. Web. 18 Feb. 2011. 4. “Aeronautical Engineering: Wind-tunnel Blockage Corrections, Wind Tunnel Models, Wind Tunnel Testing.” AllExperts Questions & Answers. Web. 18 Feb. 2011. 5. Gorlin, S. M., and I. I. Slezinger. “Wind Tunnels and Installations.” Wind Tunnels and Their Instrumentation. Jerusalem: Jerusalem Program for Scientific Translations, 1966. 22-24. Print.

Effects of a Centralized Electronic Health Record

Student Researcher: Catelyn H. Chan

Advisor: Kathy Loflin

Cuyahoga Community College Department of Health Information Management Technology

Abstract Currently, research and experiments are being conducted to facilitate a centralized, electronic health record that can be used by all facilities to improve the quality of patient care, provide a universal terminology for all medical personnel and institutions, reduce clinical errors, and provide a longitudinal, centralized electronic health record that practitioners and patients can access anytime, anywhere.

Project Objective With the implementation of a centralized electronic health record in the near future, there will still be impediments that prevent the practitioner or patient from accessing the health record. With HIPAA and government regulations, a patient will still need to follow the proper protocol for treatment. I propose having all the patient’s data and medical information stored in a centralized database. In any type of situation, the practitioner and patient can access that information. The question, and where my research starts, is where to store this confidential information and what security measures can be taken to prevent theft and unauthorized use. The database would have to be at a location that is secure, with qualified, trained professionals performing the work. I suggest that the organization be non-profit and regulated by the government, but most importantly, not run by the government. The forerunner and best candidate is the Mayo Clinic. They are already conducting research and projects to have a centralized EHR. They are eminent in their research and results, and they are currently the lead investigator for this project.

Methodology Used A patient’s medical record starts from birth. At that time, the doctor enters the medical information of the infant into their system. From there, the doctor sends the data via secure encryption to the Mayo Clinic. The Mayo Clinic receives this information and logs it in their database. They back it up to other onsite storage sites. The scenario changes and the infant goes to a new primary care physician. The new primary care physician can easily access this information because the ROI is already in the centralized system. However, let’s say the patient is an 85-year-old male who has had a heart attack and has just emigrated from another country. How would one find his past medical history to best treat him? The centralized EHR would have to go beyond the United States. To achieve the most optimal level of patient care, all countries would need to adopt the centralized EHR at the Mayo Clinic. I propose that all facilities and institutions can input medical information into the Mayo Clinic’s data system, but for security purposes, cannot retrieve it. Also, the Mayo Clinic would need maximum security measures such as anti-virus protection, firewalls, encryption, extensive background checks of employees, and internal and external locks.

Results The final result of having a centralized EHR at the Mayo Clinic would be less costly and more efficient for all those involved. It is debatable if everyone will be pleased with having a centralized database that stores one’s confidential medical information, but it would be safe and any type of care from a visit to the emergency room, pharmacy, or pediatrician would be accessible. Patients will also have options to limit others from gaining access to their centralized health record. Patients will be able to decide who can view their information by setting controls. Medical information such as HIV-status, substance abuse, or mental illness will still have the same regulations and restrictions according to law, but patients will have a choice to allow who can have access to their information, and if they decide they do not want the Mayo Clinic to control their information, then that is their right. All patients will be advised orally and in writing of the stipulations if they choose to limit access to their confidential information. It will be

similar to receiving notification of their HIPAA rights, and they will be asked to sign a waiver. The electronic health record has already started to replace paper-based and hybrid records, and consumers and patients need to be educated about the imminent changes coming in the near future. Technological advances have helped to improve the quality of patient care, and the outcome will be overall healthcare improvement.

The SIERRA Project: Unmanned Aerial Systems for Emergency Management

Student Researcher: Robert C. Charvat

Advisors: Dr. Kelly Cohen and Dr. Manish Kumar

The University of Cincinnati Department of Aerospace Systems and Fire Science

Abstract The SIERRA (Surveillance for Intelligent Emergency Response Robotic Aircraft) Project is an organization of University of Cincinnati faculty, graduate students, and undergraduate students who are currently developing next generation UAS (Unmanned Aerial Systems) for emergency management applications in the area of wildland fires. The team has partnered with both the West Virginia Division of Forestry and Marcus UAV Corporation to develop a program which can provide cost-effective solutions to wildland fire management. The program features the implementation of both a Tactical UAS and a computer based EMRAS (Emergency Management Resource Allocation System), which is expected to lead the way to a new generation of affordable technology to improve wildland fire response and management.

Introduction Wildland fires are a natural occurrence that, throughout much of the country, is essential to a healthy ecosystem. These fires, though, can be man-caused and are known nationwide to cause large amounts of destruction. Each year the United States spends an estimated $1 billion on wildland fires and suffers damages as high as $20 billion. Though Federal and State agencies have a significant amount of resources to battle these fires, the fires continue to threaten natural resources and citizens.

Table 1. Wildland Acres Burned in the United States by Year

Figure 1. Typical Fire Time Line

There are many different methods in wildland fire fighting; most rely on the basic principle of separating a fire from its fuel source to prevent it from spreading. This is accomplished by the creation of a fire line which surrounds the fire, preventing it from accessing other areas to burn. To complete this task, an Incident Commander or first responder will arrive at a fire as other fire response units begin to arrive, and will organize a method to contain the fire. This consists of determining the conditions of the fire, including the terrain, weather conditions, and fuel sources. Additionally, aspects such as risk to civilians, capability of current resources, and containment method must be considered. This part can take some time before a clear and decisive plan is implemented. With the implementation of this plan, fire personnel conduct the mission, receive updated orders, and contain the fire. What is important to realize is that a fire tends to grow at an ever-increasing rate the longer it goes without a response. For example, a fire that covered only 1 acre in its first 20 minutes may cover 4 acres 20 minutes later. A quick response is essential to prevent fires from growing quickly out of control.
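The 1-acre-to-4-acres example is consistent with a simple scaling argument: if the fire front advances at a roughly constant rate, the burned area grows with the square of the elapsed time. This back-of-the-envelope relation is added here for illustration and is not part of the original report.

$$A(t) \approx A(t_0)\left(\frac{t}{t_0}\right)^{2}, \qquad A(40\ \text{min}) \approx 1\ \text{acre} \times \left(\frac{40}{20}\right)^{2} = 4\ \text{acres}$$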

Supporting the fire management techniques are the ICS (Incident Command System) and the LCES (Look-out, Communication, Escape route, and Safety Zone) System. These systems account for the management structure and safety aspects for wildland fire fighting. In the ICS System, a tiered management method is used such that all personnel report to a single Incident Commander to manage the mission. In the LCES System, a set of lookouts overlook all personnel to ensure management of their location relative to dangers. A communication system is implemented to communicate information, an escape route system is established to provide a way of removing personnel from dangerous situations, and safety zones are established in case of emergencies. In a perfect world this allows for information to flow from all levels to the Incident Commander who with perfect information can guide the personnel to fight the fire. Also in a perfect world the LCES system can rapidly detect changes in conditions and quickly respond to fire personnel situational awareness issues to maintain safety.

Figure 2. The ICS System Tiered Structure

Figure 3. LCES System featuring lookouts and crews

Project Objectives The ICS and LCES Systems have been known to work very well, though some fire activity has demonstrated that it is possible for these systems to break down with deadly consequences. In previous research conducted during 2008-2010, the team discovered that in the areas of Emergency Management Task Optimization and Unmanned Systems there was potential for improved performance of fire personnel and resources with these technologies. Though the team had demonstrated this via computer modeling, a clear and concise understanding of how this could work at the operational level was lacking. To implement these technologies in a cost-effective manner would require the team to dive deep into the wildland fire problem and the UAS community to gain the operational understanding required. With this in mind, the SIERRA Project was founded on the fundamentals of developing fiscally responsible technologies with active emergency management organizations, which could lead to a new generation of affordable technologies to improve wildland fire response and management.


Figure 4. Integrated LCES System with UAS Component

Figure 5. Featured NASA System Engineering Successes

Methodology To assist in focusing on fiscally responsible methods for the project, the team selected the Systems Engineering Process. Systems Engineering is known for its great successes as used by NASA in reducing costs when developing very large, unique space projects such as the Space Shuttle. Systems Engineering does this by managing development using tools which relate performance measures to money and time. Doing so allows complex systems to be easily measured in monetary terms, which allows for substantial savings even when dealing with qualitative concepts in engineering. With Systems Engineering established in the project, the team was able to begin measuring the success of the program in potential monetary cost savings.

The team quickly worked to form partnerships which would allow for an in-depth understanding of how many of the problems in wildland fire fighting could be modeled and understood. One of these relationships has been with the State of West Virginia Division of Forestry. West Virginia, along with many other states, is characterized by mountainous terrain and forests, which, when combined with a population spread throughout the land, can create a situation in which any fire could quickly threaten natural resources and human lives. Combined with problems due to coal mine fires and continued population growth outside of the cities, the wildland fire situation can be described as challenging. This made West Virginia a prime candidate for demonstrating how new technology could reduce the risk to the population and save money. This relationship allowed team members to become certified wildland fire fighters (FFT2) and to develop a hands-on understanding of how the wildland fire situation works and could be improved.

Figure 6. West Virginia Wildland Fire training

Figure 7. 2010 Florida Tiger Bay Fire Exercise

The team also focused on developing relationships with the UAS manufacturing community. This was noted to be a key aspect: if a solution were developed but could not be attained in an affordable manner consistent with the systems engineering process, it would be useless to the operational community. The team has formed a working relationship with Marcus UAV Corporation, a UAS manufacturer in Georgia. This relationship has been important, as it has allowed the SIERRA team to develop monetary

standards for expected costs related to UAS, which need to be considered when selecting systems that could become operational.

Figure 8. UC relative relationship to Industry and Government

Figure 9. Example of ICS Breakdowns

With an understanding of industry-related costs and trends, and with a background in creating situational awareness software, the group proceeded to develop the system requirements. The UAS would have to be small enough to be hand thrown, featuring a live camera feed, a 1-hour flight time, and an integrated mapping system supporting autonomous flight. The Zephyr UAV system was found to meet this requirement and was noted to be a candidate for any testing. The EMRAS (Emergency Management Resource Allocation System) would have to run on the same laptop as the UAS ground station and be able to integrate data from that system, a live picture feed, and a prediction software tool. This system would work by providing an integrated data approach to the Incident Commander to give them a better idea of the fire situation and provide recommendations of what they may be able to do.

Figure 10. Selected UAS System for Demonstration

Figure 11. EMRAS

Results and Discussion The team reached several conclusions during the first phase of research, which primarily aimed to develop an advanced understanding of the wildland fire situation. The first conclusion was that the overall weakness of the ICS and the LCES System was not their function but the complications that occur when communication breaks down, as it can in a disaster situation. What was noted is that as radios begin to need recharging, personnel become fatigued, and the disaster grows larger, it becomes more difficult for the Incident Commander to control the situation. As the disaster grows larger, the Incident Commander has more personnel through which information must flow up and down, and that information can be lost, changed, or interpreted incorrectly. Orders also flow down through more personnel, where they can be lost, changed, or interpreted incorrectly. Beyond the increase in the time it takes for information and orders to flow, problems can become drastically more complicated as they compound.

The team understood that there are two main ways of fixing this problem. The first was to change the system throughout all emergency management response organizations. The main problem with this

would be the required retraining of potentially millions of existing fire personnel, the lack of a feasible option for testing a system like this, and the lack of a clear understanding of how a program so large could be paid for. The other alternative was to introduce the appropriate UAS and situational awareness platform in a manner which could positively affect both the ICS and LCES Systems by offsetting possible communication problems with greatly improved surveillance capabilities and improved resource management. This was the selected option, as it was the most cost effective to test.

Integrating the UAS into the current system would provide benefits in both the LCES and ICS System in the following ways:

ICS – The Incident Commander could have a direct look into the fire situation cutting out information loss as information flows directly to him or her

LCES – The lookouts and crews could now have an improved situational awareness which would allow them to become more autonomous from the Incident Commander in managing their own safety

Figure 12. Integrated ICS and LCES System with UAS Component and related description of benefits

Working with partners, the SIERRA Team is currently planning a UAS demonstration for late May 2011 in West Virginia. This demonstration is designed to show how, by utilizing these technologies, a clear comparison can be made to improved situational awareness, resulting in faster fire response and decreased costs. The demonstration is planned to be conducted at an approximately 40-acre burn being conducted by the West Virginia Division of Forestry, at which the UAS will be able to demonstrate its main surveillance capabilities. The overall goal of the event is to demonstrate that a low cost, low altitude, low training requirement system can meet the surveillance requirements needed to provide the increased situational awareness to support this mission.

Figure 13. Expected Improved Fire Time Line due to UAS System

Figure 14. Area to be burned in Late May 2011


A successful test demonstration is essential to clearly show the potential of a UAS to save money, as it is expected to open the door for potentially thousands of these systems to be introduced. This will not only save money, but it is also expected to save lives. In closing, a successful demonstration supports the continued success of using advanced aerospace technology to support traditionally non-aerospace applications and reinforces the trends begun by NASA during the space programs by showing that technology developed for space and aerospace applications can introduce changes into our lives that previously could only be seen in movies.

Acknowledgments The author would like to thank OSGC for their support, as well as the State of West Virginia Division of Forestry, the University of Cincinnati, Marcus UAV Corporation, NASA (various offices), Dr. Kelly Cohen, and Dr. Manish Kumar for their continued support of the project. Special thanks to SIERRA Team member Bryan Brown for supporting this paper.

References 1. National Wildfire Coordinating Group Incident Standards Working Team, “Glossary of Wildland Fire Terminology,” PMS 205, Nov. 2008. 2. West Virginia Forestry, West Virginia University, and West Virginia Natural Resources, “Mid-Atlantic Wildfire Training Academy, 2010 Photo Gallery,” 2010. 3. National Interagency Coordination Center, “Wildland Fire Summary and Statistics Annual Report 2010,” Boise, Idaho, 2010. 4. National Interagency Coordination Center, “Wildland Fire Summary and Statistics Annual Report 2009,” Boise, Idaho, 2009. 5. National Interagency Coordination Center, “Wildland Fire Summary and Statistics Annual Report 2008,” Boise, Idaho, 2008. 6. National Interagency Coordination Center, “Wildland Fire Summary and Statistics Annual Report 2007,” Boise, Idaho, 2007. 7. National Interagency Coordination Center, “Wildland Fire Summary and Statistics Annual Report 2006,” Boise, Idaho, 2006. 8. National Interagency Coordination Center, “Wildland Fire Summary and Statistics Annual Report 2005,” Boise, Idaho, 2005. 9. National Wildfire Coordinating Group, “Fireline Handbook, NWCG Handbook 3,” PMS 410-1, Boise, Idaho, March 2004. 10. National Wildfire Coordinating Group Incident Standards Working Team, “Incident Response Pocket Guide,” PMS 461, Boise, Idaho, January 2006. 11. National Wildfire Coordinating Group, “Fireline Handbook Appendix B: Fire Behavior,” PMS 410-2, Boise, Idaho, April 2006. 12. Georgia Forestry Commission, “A Guide for Prescribed Fire in Southern Forests,” Technical Publication R8-TP 11, Dry Branch, Georgia, February 1989. 13. Fuller, Margaret, “Forest Fires: An Introduction to Wildland Fire Behavior, Management, and Prevention,” Wiley Nature Editions, John Wiley & Sons, New York, New York, 1991.

Investigation of Small Ring Carbamates and Thioncarbamates and Analysis of Moringa Oleifera Extract

Student Researcher: Katherine M. Cobb

Advisor: Dr. Vladimir Benin

University of Dayton Department of Chemistry

Abstract The Moringa oleifera tree is an entirely edible plant shown to have many nutritional and medicinal benefits. A deeper understanding of the nature of this so-called “miracle tree” and synthesis of similar compounds has great potential for improving health in impoverished nations, and its chemical composition and uses are worth exploring further. The first part of this project involved an exploration of synthetic routes toward analogs of a substructure of pterygospermin, a hypothesized antibiotic compound in Moringa extracts: 3-methyl-1,3-oxazetidine-2-thione and its oxygen analog 3-methyl-1,3-oxazetidine-2-one. Syntheses were moderately successful in yielding intermediate products, but the desired cyclic carbamates/thioncarbamates were not obtained. The second part of this project was further analysis and classification of Moringa root extracts through soxhlet and liquid-solid extraction. The existence of pterygospermin was partially justified by the initial researchers by its decomposition into benzylisothiocyanate and benzoquinone. Proton NMR analyses showed evidence for the presence of benzylisothiocyanate, but benzoquinone was not seen, providing evidence that pterygospermin is not present in Moringa extract.

Project Objectives The Moringa oleifera tree, native to northern India and many parts of Africa, is a tree discovered to have significant nutritional and medicinal properties. The trees have been used in their native countries as a form of folk medicine and in the United States as a natural dietary supplement. Recently the tree’s medicinal characteristics have been attributed to the presence of isothiocyanates; but in the 1950s, research regarding the chemical components of the roots resulted in the classification of a hypothesized antibiotic compound, pterygospermin, which was partially justified by decomposition into benzylisothiocyanate and benzoquinone [1, 2]:

[Scheme: pterygospermin → 2 benzylisothiocyanate + benzoquinone]

The first part of this research explores synthetic procedures to produce a thione substructure of pterygospermin, 3-methyl-1,3-oxazetidine-2-thione (1), and its oxygen analog, 3-methyl-1,3-oxazetidine-2-one (2):

[Structures of compounds 1 and 2]

These are fascinating compounds and, based on their resemblance to the β-lactam structure of penicillin and other thiolactams, they may also exhibit antibiotic properties. The second part of this research involved further analysis and classification of the chemical composition of Moringa oleifera trees using soxhlet and liquid-solid extraction techniques. A better understanding of the chemical components of these trees could lead to global benefits in health and prevention of illness.

Methodology Three different synthetic routes were explored in the synthesis of compounds 1 and 2, where proton NMR was used to evaluate the progress of each reaction. Route 1 was approached two different ways.

Route 1a involves reacting bis(trimethylsilyl)alkyl amine with a chloroformate/chlorothionformate to form a TMS-substituted carbamate/thioncarbamate. This compound would be subsequently reacted with formaldehyde to form a methyl(TMS) ether-substituted carbamate/thioncarbamate. After deprotection of this compound, the resulting alkoxy anion would attack the carbonyl group to form the desired cyclocarbamate/cyclothioncarbamate. This scheme was attempted four different times, reacting heptamethyldisilazane with various chloroformates/chlorothionformates.

[Scheme: Route 1a (L = O, S)]

Route 1b involves reacting bis(trimethylsilyl)alkyl amine first with chloromethyl(trimethylsilyl)ethyl ether to form N-N-bis(trimethylsilyl)-TMS-ethane nitroacetal. This would then be reacted with a chloroformate/chlorothionformate to form a (methyl TMS) ether substituted carbamate/thioncarbamate, which would be deprotected to produce the desired cyclic carbamate/thioncarbamate. This scheme was attempted once reacting heptamethyldisilazane with chloromethylethyl ether.

[Scheme: Route 1b (L = O, S)]

Route 2 involves reacting an alkyl substituted amine with chloromethyl(trimethylsilyl)ethyl ether to form an O-alkylaminol. This compound would then be reacted with a chloroformate/chlorothionformate to form a methyl(TMS)ethyl ether-substituted carbamate/thioncarbamate. Deprotection of this compound would result in an alkoxy anion that would subsequently cyclize, forming the desired cyclic carbamate/thioncarbamate. Aniline was reacted with chloromethylethyl ether, and the resulting compound was subsequently reacted with ethyl chloroformate. This reaction was repeated three times. In the second trial, the product of the first reaction was left exposed to the atmosphere to observe possible decomposition. In trial three, the temperature of the reaction was maintained at 0°C.

[Scheme: Route 2 (L = O, S)]

Route 3 involves reacting an alkyl-substituted amine with a chloroformate/chlorothionformate. The resulting carbamate/thioncarbamate would then be reacted with paraformaldehyde in the presence of TMS-chloride to form an N-chloromethyl carbamate/thioncarbamate. This would then be reacted with TMS ethanol to form a methyl(TMS)ethyl ether-substituted carbamate/thioncarbamate. Deprotection of this compound would result in an alkoxy anion that would subsequently cyclize, forming the desired cyclic carbamate/thioncarbamate. Route 3 was first attempted by reacting aniline with 4-nitrophenylchloroformate or phenylchlorothioncarbonate, but these reactions were unsuccessful in yielding the carbamate/thioncarbamate products. The first reaction was then bypassed by using a commercially available carbamate, phenylurethane. Phenylurethane was reacted with paraformaldehyde in the presence of TMS-chloride. The resulting compound was then reacted with TMS ethanol. The product of this reaction was then exposed to fluoride ions to facilitate ring closure.

[Scheme: Route 3 (L = O, S)]

In the second part of this experiment, chopped moringa tree roots were placed in a soxhlet extraction apparatus using toluene as a solvent. This extraction was run for 24 hours. The resulting extract was passed through a silica gel column, and the various fractions were analyzed via proton NMR. Two liquid-solid extractions were then done in which chopped moringa roots were submerged in toluene and stirred for one week. The resulting extract was passed through a silica gel column, and the various fractions were analyzed via proton NMR.

Results Obtained Proton NMR analyses of the first reaction of Route 1a were consistent with the formation of a product where the TMS group was replaced by a hydrogen in addition to the desired TMS-substituted product or as a result of decomposition of this product (Figure 1). Proton NMR of the first reaction of Route 1b gave sufficient evidence that the desired product was not obtained. Route 1 was thus unsuccessful in yielding stable products and was discontinued.

Proton NMR analyses of Route 2 and literature data supported the formation of an amine trimer (Figure 2) from the second reaction in the scheme, indicating that the reaction sequence was largely unsuccessful in yielding the desired intermediate products [3, 4].

Route 3 was the most successful scheme in this experiment. Proton NMR gave evidence for the successful formation of the desired N-chloromethyl carbamate intermediate. Proton NMR of the product of the subsequent reaction showed evidence for the possible presence of the methyl(TMS)ethyl ether- substituted carbamate in a mixture of other compounds. The product was then exposed to fluoride ions to facilitate ring closure. Proton NMRs were taken periodically to see if any changes occurred in the solution, but changes were too minimal to be conclusive.

Problems were encountered in trying to purify products of Route 3 via silica gel filtration. Given the presence of a functional group that resembles an acetal and that acetals are unstable in acidic environments like that of silica gel, products were expected to be unstable in the column. Further synthetic experiments should be explored for Route 3, employing effective purification techniques.

After 24 hours in the soxhlet extraction apparatus, noticeable discoloration occurred, indicating possible decomposition. A smaller than expected amount of extract was obtained from the extraction process. Proton NMR of the crude extract gave evidence for the presence of benzylisothiocyanate, but no other peaks were distinguishable (Figure 3). Proton NMR of various fractions isolated from the liquid-solid extraction also showed benzylisothiocyanate, but not benzoquinone. The fact that benzoquinone is a stable compound with known NMR shifts indicates that pterygospermin is not present in Moringa extract.

Acknowledgments I would like to thank the University of Dayton Honors Department and Chemistry Department for their support and funding of this project. I would also like to thank Dr. Vladimir Benin for all the time and effort he put in to help me with this research.

Figures and Tables

Figure 1. 1H NMR of Reaction 1 Route 1a Figure 2. 1H NMR of Amine Trimer of Route 2

Figure 3. 1H NMR of Crude Moringa Extract

References 1. Rao, R. R. and M. George, Investigations of Plant Antibiotics. Part III. Pterygospermin—The Antibacterial Principle of the Roots of Moringa Pterygosperma. Indian Journal of Medical Research, 1949. 37(2): p. 159-167. 2. Kurup, P. A. and P. L. N. Rao, Antibiotic Principle from Moringae Pterygosperma Part I. Purification. Journal of the Indian Institute of Science, 1952. 34 pt a: p. 219-227. 3. Majumdar, Susruta and Kenneth B. Sloan. “Synthesis, hydrolyses and dermal delivery of N-alkyl-N-alkoxycarbonylaminomethyl (NANAOCAM) derivatives of phenol, imide, and thiol containing drugs.” Bioorganic & Medicinal Chemistry Letters 16 (2006) 3590–3594. 4. Majumdar, Susruta and Kenneth B. Sloan. “Practical Synthesis of N-Alkyl-N-alkyloxycarbonylaminomethyl Prodrug Derivatives of Acetaminophen, Theophylline, and 6-Mercaptopurine.” Synthetic Communications 36 (2006) 3537–3548.

The Village of Ottawa Hills Storm Sewer Mandates

Student Researcher: Kimberly M. Coburn

Advisors: Dr. Douglas Nims and Dr. Cyndee Gruden

The University of Toledo Department of Civil Engineering

Abstract In 1987, Congress amended the Clean Water Act to require the United States Environmental Protection Agency (EPA) to establish a National Pollution Discharge Elimination System (NPDES). Phase I of this endeavor focused on municipalities of greater than 100,000 people. Now, the EPA has started implementation of Phase II, which focuses on the requirements that smaller municipalities (MS4s) must meet in order to qualify for their NPDES permit. The requirements for the permit include public education, public involvement, illicit discharge detection, construction site runoff, and pollution prevention, which can be viewed in detail in Chapter 3745-39 of the Ohio Administrative Code.

The primary goal of this research is to assist a specific MS4, the Village of Ottawa Hills, in a way that will meet the current and future EPA expectations. This not only includes the development of a detailed map of their storm water infrastructure and implementation of a plan to detect and address non-storm water discharges, but it also includes educating and involving the public in the matters of storm water control.

This research investigated several different case studies of small and medium municipalities that provided a variety of detailed options that can be used in order to comply with the NPDES regulations. This research also consisted of the use of state-of-the-art mapping software and surveying equipment, which resulted in a final product in the form of a digital map that is not only comprehensive but accurate, and which can be shared among several different organizations.

Project Objectives The primary objective for this project is to assist the Village of Ottawa Hills, Ohio, in meeting the MS4 requirements set forth by the Ohio EPA. The requirements for the MS4’s NPDES permit, as summarized from Chapter 3745-39 of the Ohio Administrative Code, include:

1. Public education by distributing educational materials and performing outreach about the potential effects of storm water discharge on the water quality.

2. Public involvement by inviting them to participate in the creation and execution of a storm water management panel.

3. Developing an illicit discharge detection plan and an elimination plan for the storm water system including creating a system map and informing the population.

4. Construction runoff control for erosion and sedimentation control for construction greater than one acre which could include silt fences and temporary detention basins.

5. Post construction runoff control to prevent or minimize water quality impacts.

6. Pollution prevention and reduction by a program developed by municipal operations that would include instructions on pollution prevention measures and methods.

In order to meet these regulations, this research included the investigation of the laws and current practices that are in place. There is also a goal of cooperation with different government organizations such as the City of Toledo, Lucas County Engineers and the Ohio EPA in order to not only meet these

regulations, but to be able to communicate and share information. The project will be completed by the sharing of existing information and then field verifying the storm infrastructure. The information is then used to create a finished product that will be usable by the Village of Ottawa Hills offices, the Engineering Firm SSOE, and other local area government organizations.

Methodology The first step of this research was to investigate the methods that can be used in order to meet or exceed the expectations of the Ohio EPA and the Village of Ottawa Hills. This was done by investigating EPA-provided resources and contacting the applicable staff, such as Lynette Hablitzel of the Ohio EPA Northwest District. Several municipalities were contacted in regard to how they met EPA compliance. Also, audits from the Ohio EPA of other similar villages, which contained suggestions for meeting the public education and involvement criteria, were gathered with the aid of Jason Fyffe, Ohio EPA Permit Specialist.

It was decided that several things could be done in order to help meet the requirements for public education and involvement. A presentation about the storm water program will be given at a public meeting for the Village of Ottawa Hills. There is also going to be an article written in the local newspaper, the Village Voice, about the current storm water management efforts. The fourth, fifth, and sixth parts of the regulations were already met in the form of Village Ordinances 2007-2 and 2009-8.

The third part of the regulations is often the most difficult part to fulfill. It required the mapping of the entire storm system and then the creation of a plan to monitor for illicit discharges. However, in order to map the storm infrastructure the village must have a complete inventory of all of their storm water assets. Therefore, organizations such as the Ohio Department of Transportation, Lucas County Engineers, the City of Toledo, and the local engineering firm SSOE were contacted and their plans consolidated with the village’s records. These plans were then all electronically scanned and indexed for later convenience.

When the plans were all organized, two main software programs were considered for mapping the storm infrastructure. For design purposes, AutoCAD is often the superior program to use because of its flexibility in creating design elements. However, it has a distinct disadvantage when used for mapping compared to ArcGIS (GIS), because AutoCAD is limited in the amount of information which can be stored in data labels. GIS can have an unlimited number of labels or attributes in the form of data fields that are all stored and easily recalled through queries in its database, as seen in Figure 2. Another interesting feature of GIS is the capability of the attributes to hyperlink to multiple external files. Most importantly, GIS information can be imported and exported directly from an AutoCAD file and several other programs in the form of a usable layer.
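As an illustration of the kind of attribute query described above, the sketch below uses the geopandas library to load a storm-infrastructure layer and pull out catch basins; the file name and field names are hypothetical placeholders, not the Village's actual GIS schema.

import geopandas as gpd

# Hypothetical storm-infrastructure layer with a "FEATURE" attribute field.
gdf = gpd.read_file("ottawa_hills_storm.shp")

# Attribute query: select only the catch basins and report how many there are.
catch_basins = gdf[gdf["FEATURE"] == "Catch Basin"]
print(f"{len(catch_basins)} catch basins in the layer")

# Reproject to a geographic coordinate system (WGS 84) for use with handheld GPS units.
catch_basins_wgs84 = catch_basins.to_crs(epsg=4326)
print(catch_basins_wgs84[["FEATURE", "geometry"]].head())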

The versatility of the GIS program has made it a standard in the mapping industry that is utilized by several government agencies and even GoogleMaps. The local governments surrounding the Village of Ottawa Hills also utilize GIS to manage their inventory. They are actually required to manage the water and wastewater infrastructure, while the Village is only responsible for the storm water infrastructure. Therefore, it was logical to be able to obtain their GIS layers and incorporate the village’s storm water infrastructure into that so all groups can benefit from the data.

Once the data was obtained from Lucas County Engineer Robert Neubert, a template map was created in GIS in order to produce new elements that would be consistent with the data that was provided. The current estimate of elements is 880 catch basins or manholes, 869 storm lines, and 57 outfalls. When drawn in GIS, the elements will be projected into the NAD 1983 HARN Ohio State Plane so it is possible to accurately identify objects with GPS devices such as the GMS-2, shown in Figure 3. This state-of-the-art device was obtained from the Lucas County Engineers for free since the project is being done for one of their patron villages.

Results Obtained This research was able to meet its objectives and assist the Village of Ottawa Hills in meeting their NPDES permit requirements. Several different alternatives were investigated, which allowed a variety of options to be explored before the final GIS map model, seen in Figures 1 and 2, was decided upon for the third part of the regulations. This map was created by sharing information, analyzing, and surveying. This research helped to meet the first and second regulations of public education and involvement through a presentation at an upcoming village meeting and an article written for the local newspaper. The results of this research for the Village of Ottawa Hills will not only be used to meet the present expectations of the EPA, but will have long-term applications that can be used for future village endeavors such as planning new construction, maintenance, renovations, and much more.

Acknowledgments This project partially fulfilled the requirements of CIVE 4750 Senior Capstone Design at the University of Toledo. Additional team members working on this project were University of Toledo students William Gharst and Justin Snyder.
Douglas Nims, Ph.D., P.E., Associate Professor, [email protected], 1-419-530-8122
Cyndee Gruden, Ph.D., P.E., Associate Professor, [email protected], 1-419-530-8128
Mark Thompson, Village Manager
Lynette Hablitzel, EPA Storm Water Advisor
Jason Fyffe, Storm Permit Specialist
Robert A. Neubert, Jr., CET, CST, Lucas County Engineer Certified Technician
Bryan D. Ellis, P.E., P.S., Glass City Engineering & Surveying, LLC, Surveying Professor

Figures

Figure 1. Example of In Progress GIS Map for Ottawa Hills

Figure 1 shows an example of the interactive GIS software map that was put together for the Village of Ottawa Hills. It is composed of many different projected layers that can be points, shapes, and images. Each of these layers can have different attributes associated with it such as seen in the following Figure 2. The left of this figure shows the different layers, while on the right are the attributes that have been assigned to a catch basin.


Figure 2. Example of GIS Layers with Associated Attributes

Figure 3. Example of Surveying Equipment and Location

Figure 3 shows an example of the GMS-2 surveying device to the left running the ArcPAD GIS Software. This handheld GPS equipment can be used to edit datasets in the field and even has a camera that can capture and link images such as the catch basin inlet part of the infrastructure, shown to the right.

References 1. Lawriter LLC, 2008. Ohio Administrative Code Chapter 3745-39: Phase II Storm Water Rules for Small Municipal Separate Storm Sewer Systems. http://codes.ohio.gov/oac/3745-39 2. GMS-2 Handheld GIS Mapping System. Topcon Positioning Systems, Inc., 2009. http://www.topconpositioning.com/uploads/tx_tttopconproducts/GMS2_Broch_7010_0766_RevC.pdf

Fabrication of Nanostructured Sensors for Detection of Volatile Organic Compounds

Student Researcher: Lauren E. Cosby

Advisor: Dr. Karolyn Hansen

University of Dayton Department of Chemical Engineering

Abstract In the past several years, technology has had a breakthrough in the implementation of micro-cantilevers in sensor devices. This project is a biomimetic approach, combining biology with sensor technology, which has a variety of applications. One of these applications is odor detection and sensing of certain analytes in the atmosphere for detection of Volatile Organic Compounds (VOCs), such as methanol, ethanol, acetone, and toluene. In order for a sensor to recognize these analytes, we are looking to integrate biology and biochemical concepts for detection. The project is currently in the device development stage, focusing on the nanostructured sensor. Silicon nitride (Si3N4) microcantilevers are utilized because they are more flexible and have increased sensitivity to resonance frequency shifts due to molecular binding. Titanium (Ti) nanorods are deposited onto the cantilever tips to increase the surface area of the biosensor, increasing the number of molecular binding sites and the sensitivity.

Potential applications for this sensor technology include: forensic assays, medical diagnostics, war fighter protection, homeland security, and environmental assessment.

Project Objectives Presently, the two main objectives at this stage of the project are to characterize the nanostructured surface and to develop an exposure chamber for testing. We want to quantify the increase in surface area, determine what area is available for binding, and control VOC exposure to the surface. We also want to be able to determine the effectiveness of functionalization.

Methodology Used The first phase of development included work on the deposition of nanostructures onto a cantilever. First it was important to identify specific cantilever characteristics that would be measured to provide a means of analysis. Si and Si3N4 cantilevers were compared on material qualities such as flexibility, melting point, and other material characteristics. CNTs and nanorods were constructed on the cantilever tips through Electron Beam Evaporation (E-beam Evaporation): the source material, Silicon Dioxide (SiO2) or Titanium Dioxide (TiO2), was heated and vaporized onto the tip. CNTs and nanorods were compared to observe which structure would be more suitable for the overall design. The density, length, diameter and morphology of the CNTs, nanorods, and cantilever surface are being measured through image analysis performed on the Scanning Electron Microscope (SEM) with ImageJ image processing. Efforts are ongoing to find an efficient means of measuring the density, length, diameter, and areal coverage of the nanorods. Research has also been conducted on exposure setups, and a gas mixing system was constructed to vary the analyte composition and observe the effects of varying the surface chemistry. Collaboration with Georgia Tech has resulted in an exposure setup similar to the following image.

Figure 1. VOC Exposure Chamber

Nitrogen will serve as the carrier gas throughout the system. One pathway will act as the reference stream while the other conduit will carry the VOC stream; both are controlled with mass flow controllers and will enter the measurement chamber for testing. The analyte bubbler, shown as the purple cylinder, will contain the analyte of choice, which is carried to the measurement chamber by the nitrogen flowing through the system. In the measurement chamber, a laser will be reflected off the cantilever and will register any change at the tip surface using the Doppler effect. If analytes have bound to the surface, the resonance frequency will shift and the laser will return at a different angle. Running an outflow stream through a GC is being considered to verify the analyte-to-carrier-gas ratio.
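Because the chamber mixes a bubbler stream with a pure reference stream, the analyte-to-carrier ratio can be estimated from the two mass-flow-controller setpoints. The sketch below assumes the bubbler saturates its nitrogen stream at the analyte's vapor pressure; the flow rates and the acetone vapor pressure used here are placeholder values, not settings from this project.

```python
def analyte_fraction(bubbler_flow_sccm, dilution_flow_sccm,
                     vapor_pressure_kpa, total_pressure_kpa=101.325):
    """Approximate analyte mole fraction at the chamber inlet, assuming the
    bubbler output is saturated at the analyte's vapor pressure."""
    sat_fraction = vapor_pressure_kpa / total_pressure_kpa
    analyte_flow = bubbler_flow_sccm * sat_fraction
    total_flow = bubbler_flow_sccm + dilution_flow_sccm
    return analyte_flow / total_flow

# Hypothetical setpoints: 50 sccm through the bubbler, 450 sccm dilution N2,
# acetone vapor pressure of roughly 24 kPa near room temperature.
frac = analyte_fraction(50, 450, 24)
print(f"approximate analyte mole fraction: {frac:.2%}")
```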

The next stage of the process is to functionalize the nanorods, first with silane, then observe preliminary molecular binding of VOCs and assess the coverage on the cantilever. In order to do so, the cantilever undergoes a solvent wash before deposition.

Results Obtained Deposition of both CNTs and nanorods onto the cantilever tips has been completed. In Figure 2, the CNTs are restricted to one side; iron (Fe) was used as a catalyst to control the location of growth. Figure 3 depicts titanium nanorods deposited at a 45º angle. It is important to note the difference in size between the two nanostructures. Although CNTs are longer and seem to provide more surface area, nanorods can be deposited at lower temperatures, maintaining the integrity of the device, and they still provide a considerable number of sites for analyte binding.

Figure 2. CNT functionalized cantilever surface. Figure 3. Ti-nanorod functionalized substrate showing surface density.
Work is still being done to review the coverage density, count per area, and spacing between each nanostructure.

In order to functionalize the nanostructures with silane, the cantilevers go through a process similar to the solvent wash. However, as the samples were set to air dry, severe clumping occurred, damaging the surface. Clumping is defined here as an increase in the density of nanostructures in a specific area and a morphing of the original formation, as observed in the following image.

Figure 4. Clumping of nanostructured material.

After some review, CO2 critical point drying appeared to be a reasonable way to fix the problem. It was important to first understand the cause of clumping: as the solvent evaporates, surface tension exerts force on the structures and pulls them together. One approach is to use a solvent with a lower surface tension, but that approach did not work well. With critical point drying, as the temperature is increased, surface tension decreases, the meniscus flattens, and the phase change from liquid to vapor occurs gradually, relieving the stresses of surface tension. For CO2 critical point drying, the optimum conditions are 31.5 C and 1072 psi, and drying was achieved at these conditions. After trial runs, the cantilevers were imaged using SEM and the integrity of the cantilever surface was found to be maintained.

Figure 5. Clumped nanostructure after air drying after solvent wash. Figure 6. Nanostructure after trial CO2 drying.

Since CO2 drying was successful, testing has continued to confirm the preliminary results. Subsequently, silane deposition will begin in place of the solvents. Raman spectroscopy will be used to verify that amine functional groups are present on the nanostructured surfaces.

Future Work After the functionalization of the nanorods, the device will be exposed to VOCs. The response to, and any challenges posed by, known concentrations and mixtures of VOCs will be evaluated. Surface chemistry will also be varied, depositing peptides and DNA sequences to observe differences in VOC absorbency. According to a paper by Joel White et al., certain DNA sequences can identify certain VOCs, which will allow for different pattern recognition to better model an odor detection device. Finally, the responses from optical and electrical assessments of bending, deflection and piezoresistivity will be analyzed, and responses in binding between different concentrations will be observed. This will allow optimization of the nanorod density on cantilever surfaces for optimal device response.

Acknowledgments and References Funding was provided by the Biotronics Program of the Air Force Research Laboratory through UES, Inc. contract # S-845-001-001a to KMH, and by the AFRL Minority Leaders Program and the Ohio Space Grant Consortium to LC.

Titanium nanorod images are courtesy of Dr. Andrew Sarangan and Piyush Shah (UDRI).

Sustainable Solar-Powered Fixed Wing Flight

Student Researcher: Devin M. Cross

Advisor: Dr. Tom Hartley

The University of Akron Department of Aerospace Systems and Mechanical Engineering

Abstract Flight has opened the world to humanity and airborne transportation has become a necessity for our society. We depend on aircraft to research, protect, patrol, and transport. One could travel anywhere in the world at the cost of a plane ticket, but like all other modes of transportation, flight has its limitations. A plane can only fly as far as its fuel can take it before it has to land to refuel. To add to this, almost all aircraft in use at present depend on a limited, non-renewable, polluting resource to fly. Without oil, all aircraft (with exception only to gliders and bicycle planes) would be grounded hunks of machinery, destined to collect dust and dirt on runways. This could be an inevitable future for aviation as the amount of oil left in the world decreases more and more rapidly, but the solution to aviation's energy resource problem could lie above the skies through which these aircraft fly. Solar energy is not a new concept, but one that is often overlooked too quickly when compared to fossil fuels. The fact of the matter is that the sun shines more energy on our planet in a day than all of the world's electrical power plants put together, working at maximum output, could ever hope to produce; it would take forty-four million electrical power plants producing a billion watts of electricity each to equal the power provided by the sun. Averaged over a year, approximately 342 watts of solar energy (in the form of electromagnetic radiation) hit each square meter of the earth's outer atmosphere when facing the sun. As this energy travels through our planet's atmosphere, the gases absorb and re-radiate some of it. One must also account for the albedo, the ability of a given type of surface on the earth to reflect back some to most solar radiation. One might be discouraged from relying on such an unpredictable source of energy, but since the available solar energy only diminishes the farther it travels through our atmosphere, an aircraft could take advantage of more of this free energy the higher it flies. By incorporating solar/photovoltaic cells into the aerodynamic structure of an aircraft, such as a plane, it is plausible that the higher it flies in the earth's atmosphere during the day, the more solar radiation the aircraft can use to power its flight, to the point of being able to sustain long periods of flight. A fixed-wing aircraft, with its large rigid wing, provides the most suitable and convenient aerodynamic structure in which to incorporate a photovoltaic device. If sustainable solar-powered flight is possible, what optimizations to the plane and the solar devices are needed? What are the costs, what is the availability of such technology, and what are such an aircraft's limitations? Before building such a "flying" machine, one must research whether sustainable solar-powered flight is possible.

Project Objectives The objective of this project is to learn and investigate whether, with current knowledge and technology, it is possible to construct a sustainable solar-powered airplane and if feasible, at what costs and to what limits.

Methodology Used In order to understand what a solar-powered airplane would require to function properly, I first needed to understand how much solar radiation would be readily available to power the aircraft, whether as a range or a definite number. How the earth's atmosphere and surfaces absorb and reflect solar radiation had to be studied and understood. After learning what solar energy is available for solar cell use, I needed to familiarize myself with electrical engineering concepts and study how a photovoltaic cell works. From there, I investigated what different types of solar panels were available and how they differed from each other in material make-up, construction, function, efficiency, cost, and availability. After teaching myself what I was unfamiliar with in the way of electrical engineering, material science, chemistry, and astrophysics, I reviewed and further investigated how a plane is able to fly, its limitations, and how it can be optimized through specific material usage, structural design modifications, and specific aerodynamic

structure selection and/or modification. To further optimize the plane, the mathematics for calculating and improving lift and thrust while minimizing drag needed to be studied.

It was then decided to use an old R/C aircraft constructed by the University of Akron SAE Aero Design team in 2007 as a model for a smaller solar-powered aircraft. "LAZOR", as the plane was named, was an ideal candidate for analysis in that it was available, had flown and landed successfully, and was designed to carry a payload much larger than its own weight, which could be used as a representation of carrying a solar module. The plane weighed 6.8 pounds empty and could lift 45.82 pounds (as calculated), meaning that a photovoltaic device weighing 32.02 pounds could be substituted in for the maximum payload "LAZOR" can fly with. From "LAZOR's" engine testing data, it was found that, using a 13-6 Bolly twin-blade propeller and an E4010 muffler, the OS61FX gas-powered engine could produce 6.8458 pounds of thrust (30.46203796 N). Given that the plane would need a take-off velocity of 53 feet per second (36.1 mph) and my assumption that for a manufactured aircraft the propeller efficiency would be at least 80 percent (the typical range is 50-87%), I was able to calculate the power demanded for take-off by using the equation for thrust:

T = [(Pe)(ηprop)]/v, and rearranging it to solve for the engine's demanded power:

Pe = [(T)(v)]/(ηprop). From this calculation, it can be determined how much electrical power would be needed to power the OS61FX's equivalent and maintain the same engine/power plant performance, as the two powers would be equal. Once the required electrical power was determined, it could be deduced what kind of solar panel could or could not be used based on a solar panel's listed power output, size, and weight.
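As a quick numerical check of the rearranged relation above, the sketch below evaluates Pe for the LAZOR figures quoted in this report (30.462 N of thrust, a 53 ft/s take-off speed, 80% propeller efficiency). The function name and unit-conversion constant are mine, not part of the original analysis.

```python
def required_engine_power(thrust_n, velocity_m_s, prop_efficiency):
    """Pe = T * v / eta_prop: power needed to produce a given thrust
    at a given flight speed with a propeller of the stated efficiency."""
    return thrust_n * velocity_m_s / prop_efficiency

FT_PER_S_TO_M_PER_S = 0.3048

# Figures quoted in the report for "LAZOR".
thrust = 30.46203796                        # N (6.8458 lbf)
takeoff_speed = 53 * FT_PER_S_TO_M_PER_S    # ~16.15 m/s
pe = required_engine_power(thrust, takeoff_speed, 0.8)
print(f"required electrical power: {pe:.1f} W")   # ~615 W, matching the report
```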

Knowing the power required for "LAZOR" to take off with a maximum payload, a comparison was then made between "LAZOR", which could represent a modern-day UAV such as the Stalker, and a larger commercial jet. I chose to compare "LAZOR" to the Boeing 747-400, as it is a newer airplane and is therefore assumed to be at the "cutting edge" of modern commercial planes used on a day-to-day basis. The 747 is listed as having engines that produce a maximum thrust force of 63,300 pounds each and a take-off velocity of 180 miles per hour (80.6 m/s). Plugging these numbers into the engine power equation above, the required power for the 747's engines was determined and compared with "LAZOR's". After all of the above was carried out and analyzed, a rough conclusion was drawn as to whether sustainable solar-powered fixed-wing flight is possible.

Results Obtained The graph below shows the available solar cells and their efficiencies.

It was found that a solar cell's efficiency correlated directly with its cost and inversely with its availability for purchase; in other words, the more efficient the solar device, the more expensive and difficult it was to find. Three-junction concentrator photovoltaic cells have the greatest efficiencies, at a minimum of approximately 33 percent and a maximum of 40.7 percent. The next best are two-junction concentrator solar cells, followed by single-junction GaAs (gallium arsenide) concentrator solar cells. These photovoltaic cells may provide the best efficiencies, but unfortunately they would be the most difficult to incorporate into an aerodynamic structure, since they require a Fresnel/curved lens to concentrate solar rays onto the actual solar cell (semiconductor). They are also extremely difficult to find available for purchase.

Thin-film solar cells would be the easiest to incorporate into an aerodynamic structure, such as the wing or fuselage of an airplane. They are able to bend around surfaces yet maintain their durability and functionality, making thin-film photovoltaic cells ideal for use in the skin/shell of the plane. Single-junction GaAs thin-film solar panels were found to have the highest efficiency of this type, at approximately 24.7 percent, which is nearly half the efficiency of the better three-junction concentrator solar cells. Efficiency aside, single-junction GaAs thin-film solar panels are more available to the public for purchase, though expensive at approximately $10,000 per square meter.

Calculating the power that "LAZOR's" OS61FX engine requires for a 53 ft/s take-off at 6.8458 pounds of thrust while lifting 45.82 pounds (conversion to metric required):

Pe = [(T)(v)]/(ηprop) = [(6.8458 lbf)(53 ft/s)]/(0.8) = [(30.46203796 N)(16.1544 m/s)]/(0.8) = 615.199325 W

Therefore, 615.199325 watts of electrical power would need to be provided by the solar panel. In researching purchasable higher-efficiency panels, it was found that no solar panel with an efficiency between 15 and 20 percent came anywhere near producing that much power. For example, the newer "Sanyo 205 watt HIT Power Solar Panel 30 Volt" is easily available for purchase and weighs 35.3 pounds, which is less than the maximum payload "LAZOR" can carry, but with a module efficiency of only 16.3 percent it produces, at maximum output, 205 watts of electricity. That is a third of the power required by "LAZOR's" engine. It must also be noted that "LAZOR's" wing, which is the most logical place to incorporate a solar cell on the plane, has an area of only approximately 6.25 square feet, while Sanyo's panel has an area of 13.56 square feet, more than double that of "LAZOR's" wing. From this, it is easily concluded that a much more efficient solar cell would be needed to get "LAZOR" anywhere close to leaving the ground. It can also be said that an airplane like "LAZOR" was not built to the specifications of being powered solely by a photovoltaic device; therefore it may be unrealistic to think that we can use a specific and/or typical plane design such as that of "LAZOR" or a Boeing 747.
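The power and area comparison in the paragraph above can be summarized in a few lines of arithmetic. The sketch below only restates the report's quoted numbers (615 W required, a 205 W / 13.56 sq ft panel, a 6.25 sq ft wing); scaling the panel output by the wing area that could carry it is my own illustrative assumption.

```python
required_power_w = 615.199325      # from the take-off calculation above
panel_power_w = 205.0              # Sanyo HIT panel rated output
panel_area_ft2 = 13.56             # Sanyo HIT panel area
wing_area_ft2 = 6.25               # "LAZOR" wing area

# Assume (illustratively) that output scales with the fraction of the panel
# that could actually fit on the wing.
power_on_wing = panel_power_w * (wing_area_ft2 / panel_area_ft2)

print(f"full panel supplies {panel_power_w / required_power_w:.0%} of the need")
print(f"wing-sized portion supplies only ~{power_on_wing:.0f} W "
      f"({power_on_wing / required_power_w:.0%} of the need)")
```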

Finally, calculating the power required for a Boeing 747-400 to take off at 180 miles per hour with four engines producing 63,300 pounds of thrust apiece (propeller-based engines are assumed for ease of comparison):

Pe = [(T)(v)]/(ηprop) = [4(63,300 lbf)(180 mph)]/(0.8) = [4(281,668.6148 N)(80.55555556 m/s)]/(0.8)

Pe = 113,449,858.7 W

Dividing “LAZOR’s” required engine power into the 747’s:

PeB/PeL = (113,449,858.7 W)/(615.199325 W) = 184,411.5; that is, roughly 184,000 times more power is required for the 747's (propeller-equivalent) engines.

Dividing "LAZOR's" wing area into the 747's (5,600 square feet per wing):
AwingB/AwingL = [2(5,600 ft²)]/(6.25 ft²) = 1,792 times more wing area than "LAZOR"
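The two ratios the report compares can be reproduced directly from its quoted figures; the sketch below is only that arithmetic, with the 747 power taken from the value computed above.

```python
lazor_power_w = 615.199325        # required power for "LAZOR"
boeing_power_w = 113_449_858.7    # required power computed above for the 747-400
lazor_wing_ft2 = 6.25
boeing_wing_ft2 = 2 * 5_600       # two wings at 5,600 sq ft each

power_ratio = boeing_power_w / lazor_power_w    # ~184,000
area_ratio = boeing_wing_ft2 / lazor_wing_ft2   # 1,792

print(f"power ratio:     {power_ratio:,.0f}")
print(f"wing-area ratio: {area_ratio:,.0f}")
print(f"the power ratio exceeds the area ratio by a factor of ~{power_ratio / area_ratio:.0f}")
```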

Comparing the two calculated ratios above, it is easy to see that the power ratio of the two planes is much larger than the wing area ratio. Based on these calculations and the previous conclusion that it would be difficult enough to find a solar cell light and small enough to fit into "LAZOR's" wing structure yet efficient enough to power "LAZOR's" engine, the fact that the power ratio is more than 100 times larger than the wing area ratio indicates that sustainable solar-powered fixed-wing flight could not be achieved with a plane design anywhere close to that of the Boeing 747-400.

Interpretation of Results From an engineering standpoint, in reviewing my research results I have come to the conclusion that sustainable solar-powered flight is not fully achievable with larger, faster-moving aircraft, such as the Boeing 747-400, or with a conventional airplane design in general. Few solar/photovoltaic cells are even efficient enough to provide substantial power to an OS61FX engine. If the earth's atmosphere is hit with 342 watts of solar radiation per square meter on average, "LAZOR's" engine would require two solar panels covering the entire wing surface and working at nearly 100 percent efficiency to reach an output of 615.199325 watts. On top of that, solar panels that are half as efficient as the most efficient modules cost from thousands to tens of thousands of dollars, and the most efficient solar panels are not readily available for the public to purchase. All that being said, I believe that sustainable solar-powered flight can be achieved if the aircraft is designed and constructed with the purpose of being powered solely by solar radiation in mind. These aircraft would need to be constructed from lighter materials, would have to have larger wing forms to provide an ideal surface for a solar cell, and would most likely travel at subsonic speeds, since traveling fast with current forms of propulsion requires a great deal of thrust and, therefore, a great deal of power. Using airfoil designs that have large coefficients of lift at low speeds (laminar flow), such as the Selig 1223 or Eppler 423, could also aid in creating aircraft that can lift more and/or heavier solar modules to provide more power for propulsion. Today's small, lightweight, subsonic aircraft, such as gliders, ultra-lights, and UAVs, could particularly benefit from the use of solar power since they are so small and light. Overall, solar energy is still a free, clean, and abundant resource that flying machines especially can take advantage of, since they can travel closer to the upper reaches of our planet's atmosphere where the sun's radiation is most intense and useful.

References 1. The Balance of Power in the Earth-Sun System. (2005, September). NASA Facts. Retrieved March 1, 2011: http://www.nasa.gov/pdf/135642main_balance_trifold21.pdf 2. Solar Radiation at Earth. (2010, June 9). Windows To The Universe. Retrieved March 9, 2011: http://www.windows2universe.org/earth/climate/sun_radiation_at_earth.html 3. Sanyo 205 Watt HIT Power Solar Panel 30 Volt >HIP-205NKHA5. (2011). Ecodirect.com. Retrieved March 6, 2011: http://www.ecodirect.com/ProductDetails.asp?ProductCode=Sanyo-HIP-205NKHA5 4. What is Fresnel Lens?. (2006). bhlens.com. Retrieved March 6, 2011: http://www.bhlens.com/ 5. Gallium Arsenide Solar Cells. (2002). Photovoltaic Systems Research & Development. Retrieved March 6, 2011: http://photovoltaics.sandia.gov/doc/PVFSCGallium_Arsenide_Solar_Cells.htm 6. How a Propeller Works. (2005). Propulsion by Propellers. Retrieved February 15, 2011: http://www.mh-aerotools.de/airfoils/propuls4.htm 7. 747 Fun Facts. (2011). Boeing. Retrieved March 6, 2011: http://www.boeing.com/commercial/747family/pf/pf_facts.html

SAE Baja Floatation and Maneuverability on Water

Student Researcher: Michael E. Croston

Advisor: Dr. Richard Gross

The University of Akron Department of Mechanical Engineering

Abstract Every year SAE Baja holds three competitions. One of these requires that the Baja car float and maneuver on water. This requires the development of a floatation device along with some way of propelling the car. The most commonly considered device would be a propeller, but in this case a propeller is not a practical option. Since the driving force on the car is the wheels, they will be utilized along with modified fenders.

Project Objectives This project deals with the design of a floatation device, fenders, and a paddle system to maneuver a Baja car on water. The design can be broken down into three components: steering, drive, and floatation. The front wheels will act like a rudder on a ship to steer the car around obstacles. The rear fenders and the tires will work together to accelerate the car. The floatation must keep the car from sinking; it will need to support the weight of the car and driver at a determined height. The materials need to withstand not only the aquatic portion of the competition but also hours of rigorous off-road conditions.

Methodology Used To make the car float, foam will need to be added to the underside of the car. The amount of foam required depends on the weight of the car and driver. The weight is taken as the average weight of a Baja team driver, 175 lbs, plus the weight of the car, 450 lbs. Polyethylene foam is the material of choice because of its durability and light weight. It can yield without tearing, allowing it to absorb hard impacts, and it is classified as closed-cell, meaning that it will not absorb water or other fluids easily ("Polyethylene Closed Cell"). Every cubic foot of foam weighs 2 lbs but can support over 50 lbs. This shows that approximately 11.5 cubic feet of foam will be needed.
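The foam sizing above is a simple load-over-buoyancy division, sketched below. The supported total is the 450 lb car plus the 175 lb driver; the usable support per cubic foot is left as a parameter, since the report only states that a cubic foot of this foam "can support over 50 lbs" (a value of roughly 54 lb per cubic foot, assumed here, reproduces the quoted 11.5 cubic feet).

```python
def foam_volume_cuft(car_lb, driver_lb, support_lb_per_cuft):
    """Cubic feet of closed-cell foam needed to support the car and driver,
    given the usable (net) support each cubic foot of foam provides."""
    return (car_lb + driver_lb) / support_lb_per_cuft

# The stated lower bound of 50 lb/cu ft gives an upper bound on the volume.
print(f"{foam_volume_cuft(450, 175, 50):.1f} cu ft at 50 lb/cu ft")     # 12.5
# A slightly higher support value (an assumption) matches the report's estimate.
print(f"{foam_volume_cuft(450, 175, 54.3):.1f} cu ft at ~54 lb/cu ft")  # ~11.5
```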

The driving force for the car will be the rear fenders along with the tires. The tires were chosen on the basis that they have deep tread, which will be able to pick up water but still perform on dirt. The rear fenders were designed to meet three qualifications: they must be light, strong, and effective. Three materials were chosen for the rear fender assembly. The bracket that holds the fender to the rear axle bearing housing is AISI 304 stainless steel because it is strong, corrosion resistant, and can be welded easily to the bearing housing. The fenders are made of carbon fiber because it is extremely light and its tensile strength is about four times that of high-carbon steel. Carbon fiber is also very easily formed to various contours ("Properties of Carbon"). The deflector paddle is made from 6061 aluminum alloy because it is light, corrosion resistant, and can be easily bent to shape.

The key feature of the fenders is the deflector paddle which will use the velocity of the water to propel the car forward. Without the paddles on the fender almost all of the velocity of the water is lost. Two different designs were theoretically tested using the linear momentum equation.

For the test, a steady flow rate will be assumed and taken into account. Therefore the equation can be simplified into the following (Munson et al. [Page 201-202]).

From this equation it can be concluded that if everything is considered constant, except for area, the angled design will yield more forward thrust.
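A minimal numerical illustration of that conclusion, under the steady-flow assumption, treats the forward thrust as the momentum flux the paddle intercepts, F ≈ ρ·A·V² for a stream fully redirected by the paddle. This is a simplified stand-in for the control-volume equation in Munson et al., not the team's actual analysis, and the water speed and paddle areas below are hypothetical.

```python
RHO_WATER = 998.0   # kg/m^3

def paddle_thrust(area_m2, water_speed_m_s, rho=RHO_WATER):
    """Idealized thrust from a paddle that fully redirects the stream it
    intercepts: force = rate of momentum change = rho * A * V^2."""
    return rho * area_m2 * water_speed_m_s ** 2

v = 2.0              # m/s, hypothetical water speed thrown off the tire
flat_area = 0.010    # m^2, hypothetical flat-deflector intercept area
angled_area = 0.015  # m^2, hypothetical (larger) angled-deflector intercept area

print(f"flat design:   {paddle_thrust(flat_area, v):.1f} N")
print(f"angled design: {paddle_thrust(angled_area, v):.1f} N")
```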

Figure 1. Comparison of the paddle designs.

Results Theoretically, the results indicate that the angled design will be the better choice for more efficient acceleration. In the summer of 2011, the flotation and maneuverability will be tested in real-world conditions at the SAE Baja competition.

References 1. Munson, Bruce R., et al. Fundamentals of Fluid Mechanics. Hoboken: John Wiley & Sons, Inc., 2009. Print. 2. “Polyethylene Closed Cell Foam Sheets.” The Foam Factory, Inc., 2010. Web. 12 Dec. 2010. . 3. “Properties of Carbon Fiber.” Carbon Fiber Tube Shop, 7 Apr. 2010. Web. 22 Nov. 2010. .

Counter-Spinning Carbon Nanotube Yarns

Student Researcher: Charles M. Dandino

Advisor: Dr. Mark Schulz

University of Cincinnati Department of Mechanical Engineering

Abstract Carbon nanotubes (CNTs) have remarkable properties that make them a topic of high interest in materials science. CNTs can be grown in dense groups on silicon substrates such that they resemble nanoscale forests. Through a very particular method, fibers from these forests can be pulled like cotton to form a very thin ribbon which can, in turn, be twisted to make a very fine, high-strength thread. However, this twisting stores a relatively large amount of energy within the thread, which can cause it to unwind when a weight is suspended from it. Additionally, the imbalance causes the thread to twist on itself and, in some cases, knot if it is not kept under adequate tension. These side effects of twisting make the thread difficult to handle and unacceptable for some applications. To remedy this problem, this work explores the option of spinning multiple CNT threads together in the direction opposite to that in which each individual thread was spun, so as to relieve the energy while still keeping the fibers together.

Project Objectives This project aims to qualitatively compare a control group of CNT thread and yarn samples that have been traditionally spun with a test group of CNT yarn that was counter spun to see which has the least internal potential energy. This comparison demonstrates whether or not the method of counter-spinning has promise as a way of achieving equilibrium in spun threads.

Methodology To achieve meaningful results, five test samples were prepared. Each sample of thread or yarn was twisted at 9 turns per mm. The first three samples (CMD1, CMD2, and CMD4) were threads (single ply) spun from three different substrates named accordingly; 10 meters of each thread was produced. 5 meters of CMD124 yarn (multi-ply) was also produced by simultaneously spinning from all three substrates, as shown in Figure 1. Finally, a 5 meter test sample, CMD124BCS, was created by counter-spinning the first three samples (CMD1, CMD2, CMD4) from their bobbins onto a separate bobbin, as shown in Figure 2. The three bobbins, made of aluminum, were allowed to rotate freely on a steel axle to minimize resistance to rotation without the added complexity of bearings. This provided a sample of each of the threads that comprise CMD124BCS so that each could be tested individually to check for anomalies within the threads. Similarly, the traditionally spun yarn, CMD124, was used as a direct reference against CMD124BCS, since these yarns have the most similar properties. Each of these sample threads and yarns was put through the following tests five times (except the ESEM imaging, which was only performed once) and the data were averaged.

There are three standard tests used to characterize CNT threads and yarns: the four-probe resistivity test, the mechanical tensile test, and environmental scanning electron microscope (ESEM) imaging. The first test, the four-probe resistivity test, supplies a measured current to the outer two probes and measures the voltage drop across the inner two probes. The CNT threads and yarns were wrapped around each probe once to increase contact. See Figure 3 for the schematic. This was used to calculate resistivity by the following equation:
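The equation itself did not survive extraction here; the sketch below uses the standard relation for a uniform round conductor, resistivity = R·A/L with R = V/I, which is consistent with the resistance and resistivity columns reported in Table 1. The example current, voltage, probe spacing, and diameter are placeholder values, not measurements from this work.

```python
import math

def four_probe_resistivity(voltage_v, current_a, length_m, diameter_m):
    """Resistivity of a round thread from a four-probe measurement:
    rho = R * A / L, with R = V/I and A = pi*d^2/4 (uniform cross-section assumed)."""
    resistance = voltage_v / current_a
    area = math.pi * diameter_m ** 2 / 4.0
    return resistance * area / length_m

# Placeholder values roughly in the range of the reported samples.
rho = four_probe_resistivity(voltage_v=1.5, current_a=1e-3,
                             length_m=0.018, diameter_m=20.8e-6)
print(f"resistivity ~{rho:.2e} ohm*m")
```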

The second test, the tensile test, used a high sensitivity load cell and precision tensile tester connected to a data acquisition unit to slowly pull the thread till it yielded. The data acquisition unit tracked the length

the thread had been pulled and the associated tensile force along the thread. This was used to calculate ultimate strength and the modulus of elasticity by the following equations:
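The referenced equations were likewise lost to extraction; the sketch below uses the usual engineering definitions, ultimate strength = peak force divided by cross-sectional area, and modulus = stress over strain in the initial linear region, which are consistent with the diameter, strength, and modulus columns of Table 2. The force and elongation numbers are placeholders, not data from this study.

```python
import math

def ultimate_strength(peak_force_n, diameter_m):
    """Ultimate tensile strength = peak force / cross-sectional area."""
    area = math.pi * diameter_m ** 2 / 4.0
    return peak_force_n / area

def elastic_modulus(force_n, diameter_m, gauge_length_m, elongation_m):
    """Young's modulus from one point in the initial linear region:
    E = stress / strain = (F/A) / (dL/L)."""
    area = math.pi * diameter_m ** 2 / 4.0
    return (force_n / area) / (elongation_m / gauge_length_m)

d = 20.8e-6   # m, thread diameter (placeholder, same order as Table 2)
print(f"ultimate strength ~{ultimate_strength(0.065, d):.2e} Pa")
print(f"modulus ~{elastic_modulus(0.02, d, 0.010, 0.00023):.2e} Pa")
```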

The third test was to use an ESEM to obtain photographs of the threads and yarns to examine their twist angle, mode of failure, and obtain an accurate measurement of the diameter.

The final test was a torsion test. This nonstandard test was designed as a way to gauge how far a thread or yarn was from equilibrium. In this test, 10cm samples were taken from each thread or yarn sample and a 1.0g mass was suspended from the end by wrapping the thread or yarn around a hook on the mass and then adhering the thread with a thin coating of super glue that was not allowed to creep up the thread or yarn. The mass was then suspended and allowed to spin at a slow pace while the experimenter counted the revolutions the mass made before it settled in equilibrium. The super glue was scraped from the mass after each run to maintain a relatively constant mass (starting mass: 0.9941g, ending mass: 1.0043g).

Results The results of the four-probe electrical test, summarized in Table 1, showed that the resistivities of the samples were comparable, with no more than a 2x difference between any two samples. In these tests, the order of magnitude is typically all that is examined for improvement or degradation between samples.

The mechanical test showed that the counter-spun test sample (CMD124BCS, the sample in question in this paper) demonstrated a considerably lower ultimate strength than its control (CMD124), as shown in Table 2. Furthermore, its ultimate strength was notably lower than that of two of the three threads that comprised it. The counter-spun yarn also failed in a pattern similar to the one exhibited when a sample slips in the machine rather than failing. After this pattern was noticed several times despite significant care to avoid slip, it was determined that the pattern was actually indicative of the yarn failing one thread at a time rather than all at once. See Figure 4 for typical failure patterns and Figure 5 for counter-spun failure patterns.

The ESEM test showed that the twist angles and diameters of the threads and yarns (which have been shown to be critical to thread strength) were consistent with the values for ultimate strength that were found in samples from other tests. The images also showed how the typical yarn failed, Figure 6, and how the counter-spun yarn failed, Figure 7. ESEM images also verified that the counter-spun yarn was failing in individual threads: Figure 8 shows a yarn sample from CMD124BCS that begins with three threads at the top of the picture and then has a vacancy in the helix beginning approximately halfway down the yarn.

The torsion test, summarized in Table 3, showed that, under the tension of a 1.0g mass suspended under the influence of gravity, a 10cm length of thread was out of equilibrium. The samples, originally spun at 9 turns per mm (or 900 turns per 10cm) rotated 138 to 314 times (on average, depending on the sample) before reaching equilibrium. However, the standard deviation from the five tests run on each sample ranged from 16 to 30 revolutions to equilibrium demonstrating that this measurement was not highly reproducible, but still statistically relevant for a qualitative comparison. The traditionally spun (control) yarn rotated an average of 314 times before reaching equilibrium and the counter spun yarn (test yarn) only rotated an average of 178 times before reaching equilibrium.

Significance / Interpretation of Results The electrical test results clearly demonstrated that the counter-spinning technique used does not have a significant impact on the resistivity of the yarn. This is important for applications such as electrical

wiring in lightweight applications where counter-spun yarn might be used due to its increased ability to be easily handled without twisting on itself or becoming tangled.

The mechanical test results show that the counter-spun yarn (CMD124BCS) is 38% weaker than its traditionally spun counterpart (CMD124). This is a significant decrease in strength; however, both yarns were spun at 9 turns per mm even though the diameter of the counter-spun yarn is ~38 µm while the diameter of the traditionally spun yarn is only ~28 µm. 9 turns per mm is ideal for a thread diameter smaller than either of these, so it is closer to ideal for the traditionally spun yarn. The turns-per-mm measurement is an intermediate metric between the machine settings and the actual goal, which is twist angle. The ideal twist angle is ~30 degrees. The fibers in the traditionally spun yarn are much closer to 30 degrees than the fibers in the counter-spun yarn. In fact, the counter-spun yarn exhibits a tight thread wind with a twist angle exceeding 30 degrees and a very shallow fiber angle of about 10 degrees. This is a result of the unwinding from the counter-spinning method. This low fiber angle is why the yarn has a lower internal energy, but it may also be contributing to the decreased strength. The fiber angles are shown in Figure 9 and Figure 10.
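The link between turns per mm, diameter, and twist angle described above can be sketched with the standard helix-geometry approximation for the surface twist angle, tan(angle) = π·d·(turns per unit length). This is a textbook relation used here only for illustration; it is not necessarily how the angles in this study were measured, and the exact values it gives differ from the imaged fiber angles discussed above.

```python
import math

def surface_twist_angle_deg(diameter_m, turns_per_m):
    """Approximate surface twist angle of a twisted yarn:
    tan(angle) = pi * d * turns_per_length."""
    return math.degrees(math.atan(math.pi * diameter_m * turns_per_m))

turns_per_m = 9 * 1000   # 9 turns per mm, as used for all samples
for label, d in [("traditionally spun (~28 um)", 28e-6),
                 ("counter-spun (~38 um)", 38e-6)]:
    print(f"{label}: ~{surface_twist_angle_deg(d, turns_per_m):.0f} deg")
```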

The torsion test showed that there was a decrease in the average rotations to equilibrium from 314 for the control sample (CMD124) to only 178 for the test sample (CMD124BCS). While this rotation count cannot easily be converted into an actual energy calculation, it does provide conclusive qualitative evidence that the counter spinning method does reduce the internal energy of the yarn. Additional work can be done to optimize the process such that the resulting yarn is produced to be near equilibrium upon completion.

Acknowledgments The author would like to thank Dr. Mark Schulz for his continued support of this and related research and projects. The author would also like to thank Mr. Joe Kleuner for around the clock technical support and guidance on CNT spinning techniques and breakthroughs.

Figures and Tables

Figure 1. CMD124 Spinning setup Figure 2. Counter-Spinning of CMD124BCS

Figure 3. Four Probe Schematic (current source/ammeter across the outer probes, voltmeter across the inner probes, both connected to a data acquisition unit). Figure 4. Typical Yarn Failure Pattern


Figure 5. Counter-Spun Yarn Failure Pattern Figure 6. Typical Yarn Failure

Figure 7. Counter-Spun Yarn Failure Figure 8. Counter-Spun Yarn

Figure 9. Fiber Angle of Typical Yarn Figure 10. Fiber Angle of Counter-Spun Yarn

Table 1. Four Probe Resistivity Test Summary
Sample      Resistance (Ω)   Resistivity (Ωm)
CMD1        1.47E+03         2.81E-05
CMD2        1.53E+03         5.62E-05
CMD4        1.54E+03         3.92E-05
CMD124      8.08E+02         2.82E-05
CMD124BCS   5.91E+02         3.42E-05

Table 2. Mechanical Tensile Test Summary
Sample      Diameter (m)   Ultimate Strength (Pa)   Modulus (Pa)
CMD1        2.08E-05       1.90E+08                 2.52E+09
CMD2        2.89E-05       7.84E+07                 1.29E+09
CMD4        2.40E-05       1.53E+08                 1.22E+09
CMD124      2.81E-05       1.81E+08                 1.19E+09
CMD124BCS   3.62E-05       1.13E+08                 8.76E+08

Table 3. Torsion Test Summary
Sample      Avg. Rotations   Range   Std. Dev.
CMD1        231              59      30
CMD2        138              75      27
CMD4        190              56      20
CMD124      314              54      21
CMD124BCS   178              48      16

Table 4. Equipment List
Brand      Model Number    Equipment
Keithley   2182A           Nanovoltmeter
Keithley   6220            Precision Current Source
Mark-10    ESM301          Motorized Stand
Mark-10    BG05            Load Cell
Dell       Inspiron B120   Laptop
UC         N/A             CNT Spinning Machine

Experimental Investigation of Magneto-Rheological Elastomers Based on Hard Magnetic Fillers

Student Researcher: Alexander J. Dawson

Advisor: Dr. Jeong-Hoi Koo

Miami University Department of Mechanical and Manufacturing Engineering

Abstract This study investigates a new generation of magnetorheological elastomers (MREs) based on hard magnetic materials. The type and dispersion of the filler material affect how the MREs respond to an applied magnetic field. A random dispersion of soft magnetic particles, such as iron, results in an MRE whose stiffness varies with the applied magnetic field. Unlike "soft" MREs, a dispersion of hard magnetic materials aligned in an electromagnetic field produces an MRE with magnetic poles. When a magnetic field is applied perpendicularly to these poles, the filler particles generate torque and cause rotational motion of the MRE blend. The primary goal of this project is to fabricate and test the properties of MREs filled with hard magnetic particles (H-MREs). This experimental work investigated the effect of different filler materials, filler percentages, and base elastomer types by measuring the blocked force and the displacement of H-MREs under varying magnetic fields.

Introduction A dispersion of hard magnetic materials aligned in an electromagnetic field produces an anisotropic, magnetically poled MRE similar to a flexible permanent magnet. The nature of MREs consisting of hard magnetic materials (H-MREs), both the permanency of the embedded particles and the flexibility of the surrounding elastomer, creates a material with applications quite different from those of conventional MREs. Since the particles are embedded in a semi-rigid medium, the application of a magnetic field perpendicular to the polarity of the H-MRE will cause motion as the particles attempt to align with the applied field. The resulting motion and force can be used as a magnetic-field-controlled actuator.
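The torque that drives this actuation can be estimated, for illustration, from the torque per unit volume on a remanently magnetized material in a perpendicular field, τ = M_r × B. This is a generic magnetostatics estimate, not the authors' model, and the composite remanent magnetization below is a hypothetical value; only the mold dimensions are taken from this paper.

```python
import math

def torque_per_volume(remanent_magnetization_a_m, flux_density_t, angle_deg=90.0):
    """Torque density (N*m per m^3) on a remanently magnetized medium:
    tau = M_r * B * sin(angle between M_r and B)."""
    return remanent_magnetization_a_m * flux_density_t * math.sin(math.radians(angle_deg))

M_r = 5.0e4   # A/m, hypothetical remanence of a dilute NdFeB-filled elastomer
B = 0.025     # T, upper end of the displacement-test field range
sample_volume = 63.5e-3 * 31.8e-3 * 2.5e-3   # m^3, mold dimensions from the paper

tau = torque_per_volume(M_r, B) * sample_volume
print(f"estimated torque on the whole sample: ~{tau*1e3:.2f} mN*m")
```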

Limited research has been performed on this new generation of MREs consisting of hard magnetic materials (H-MREs), so they are the focus of this study. In particular, this study focuses on experimental evaluation of H-MREs in an effort to determine the feasibility of utilizing these materials as bending-type actuators. In an effort to characterize H-MREs as new elastomeric actuators, this work studies the blocked force and the displacement of H-MRE samples under varying magnetic fields, using samples with different filler materials, filler percentages, and base elastomer types to investigate the effect of these properties on the blocked force and displacement response.

Methodology Sample Fabrication Volume percentages for optimal MR effect have been established for conventional MREs [1], but new work must be conducted to produce comparable values for H-MREs. As such, H-MRE samples were fabricated with particle volume percentages of 10%, 20%, 30%, 35%, and 40%, cured with a two-component elastomer resin in a rectangular prism mold with dimensions of 63.5×31.8×2.5 mm. The silicone rubber acts as the binding agent, contributing to the stiffness and flexibility of the cured sample, and the embedded magnetic particles produce the MR effect. Three types of Dow Corning elastomer resins were studied: HSII, HSIII, and HSIV.

In the initial work four types of permanent magnetic particles were studied in an effort to determine which particle produced the greatest blocked force and displacement response. The four particles include: barium hexaferrite (BaFe12O19), strontium ferrite (SrFe12O19), samarium cobalt (SmCo5), and neodymium magnet (Nd2Fe14B). Samples were produced to resin specifications with the addition of the magnetic

particles and aligned in a two-Tesla magnetic field for one hour. The total curing time was 24 hours at room temperature.

From the initial work it was concluded that H-MREs filled with neodymium particles exhibited the greatest combined displacement and blocked force response under an applied magnetic field; thus, for the experiments involving filler percentage and base elastomer type, neodymium particles were used exclusively.

Experimental Setup In order to produce a varying magnetic field, a two-coil modified-transformer electromagnet with an iron core was constructed. The coils consist of 750 turns of 20 AWG wire. Current was supplied to the coils via a TDK Lambda ZUP 36-24 power supply. The samples were positioned in a fixed-free cantilever configuration in the test region (i.e., the gap in the transformer electromagnet) via a square aluminum clamp and oriented such that the length of the sample was perpendicular to the direction of the magnetic field and the width was parallel. In order to ensure the uniformity of the magnetic field generated in the test region, a magnetic finite element analysis (using FEMM) was conducted.
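For a rough sense of scale, the flux density in the gap of an iron-core electromagnet can be estimated with the ideal magnetic-circuit relation B ≈ μ0·N·I/g, which neglects core reluctance and fringing (the reason an FEMM analysis was run instead). The coil current and gap length below are hypothetical; only the 2 × 750 turn count comes from this paper.

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def gap_flux_density(total_turns, current_a, gap_m):
    """Idealized gap field of an iron-core electromagnet: B ~ mu0*N*I/g,
    valid only when core reluctance and fringing are negligible."""
    return MU_0 * total_turns * current_a / gap_m

# Hypothetical operating point: 0.5 A through both 750-turn coils, 10 mm gap.
B = gap_flux_density(total_turns=2 * 750, current_a=0.5, gap_m=0.010)
print(f"estimated gap flux density: ~{B*1e3:.0f} mT")
```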

A high-resolution load cell and a laser displacement sensor were used to measure the blocked force and the displacement of the beam, respectively.

Results and Discussion Effect of Filler Material For the displacement experiments, the magnetic field was varied linearly from 0 to 25 mT. This range was chosen due to the nonlinearity of the H-MRE motion at higher magnetic flux. Figure 1 gives evidence that the particle type and magnetic properties affect the displacement response. For each particle type studied, the displacement generated by the application of a magnetic field has a positive linear correlation over the range studied. Linearity is an important characteristic for both the use and the characterization of actuators. Although the bending motion of the H-MREs is only linear over a small range, considerable displacements, in excess of 10 mm for neodymium and barium hexaferrite, were achieved (see Fig. 1 (a)). The non-linearity of the H-MRE displacement response at high displacement is a consequence of the elastomer medium in which the particles are embedded; as the material bends in response to the magnetic field, it functions as a non-linear spring. Considering the high bending achieved, H-MREs may prove to be the magnetic analog to electroactive polymers. Moreover, compared to other solid-type magnetic materials (such as magnetostrictive materials), the deformation of H-MREs is quite significant.

Similarly to the displacement response, the blocked force response exhibits a linear trend; however, the blocked force maintains linearity at much higher magnetic field levels. For the blocked force experiments, the magnetic flux density was varied from 0 to 152 mT. The non-linearity present in the displacement response at high magnetic fields (> 25 mT) is not evident in the blocked force response due to the nature of the loading. Since the beam (H-MRE) is held horizontal during the loading, the beam simply transmits the torque produced by the particles attempting to align with the magnetic field and does not provide a restorative force due to strain in the elastomer.

Figure 1 (b) shows the blocked force variation of H-MREs with different particle types. For the neodymium sample, the blocked force reaches nearly 1,200 mN at the maximum magnetic flux density considered in the study (i.e., 150 mT). The magnitudes of these blocked forces are considered sufficient to actuate components or mechanisms in small-scale adaptive structures and mechanical systems, such as MEMS devices.

It is interesting to note that the blocked force does not follow the same trend as the displacement (see Fig. 1) although the type of magnetic particle affects both responses. Neodymium based H-MRE exhibits the greatest blocked force; however, its displacement is comparable to that of the barium ferrite, suggesting

that the generated responses are consequences of different properties of the embedded particles or are affected by an interaction between the filler particles and the surrounding elastomer.

Effect of Filler Percentage To investigate the effect of filler volume percentage on the dynamic response, samples were fabricated from HSIV elastomer resin and neodymium (Nd2Fe14B) particles. Displacement and blocked force as a function of the applied magnetic field for each filler volume percentage can be seen in Figures 2 (a) and (b), respectively.

As shown in Figure 2 (a), the samples demonstrate significant displacement responses over the range of magnetic flux density considered in this study. For the 40% filler sample, the tip displacement is nearly 20 mm, approximately 65% of the H-MRE sample's length, at a magnetic flux density of 25 mT. While the response and the magnetic field show a linear relationship over a small range of magnetic flux density (i.e., up to about 7 mT), the relationship becomes nonlinear at higher flux densities. Figure 2 (a) further shows that the displacement response increases as the filler volume content increases, except for the 30% volume content sample.

As shown in Figure 2 (b), for each filler percentage the blocked force increases linearly as the magnetic field increases. The measured blocked force for the 40% filler sample is over 800 mN at a magnetic flux density of 25 mT. The relationship between the blocked force response and the filler percentage is non-linear: the change in the slope of the blocked force as a function of the applied magnetic field is relatively small for filler percentages between 10% and 30%, but it is significant at filler percentages above 30%.

Effect of Base Resin Type To investigate the effect of the base elastomer type on the dynamic response, samples were fabricated with a fixed volume percentage of 20% neodymium particles and three types of elastomer resins (HS II, HS III, and HS IV). Displacement and blocked force as a function of the applied magnetic field for each base elastomer type/stiffness can be seen in Figures 3 (a) and (b), respectively. For a given magnetic flux density, the sample with the softest resin (HS IV) shows the largest displacement and blocked force.

Significance of Work and Conclusions The focus of this study has been the dynamic response, specifically the blocked force and displacement, of H-MREs with different filler materials, filler percentages, and base elastomer types under an applied magnetic field. In fabricating the H-MREs, neodymium powders were used as filler particles and were mixed with a silicone elastomeric resin. The mixture was then exposed to a strong magnetic field to align the embedded particles. Using these H-MRE samples and an electromagnet test setup, the dynamic responses of the samples were investigated. The results indicate that the blocked force increases linearly over the range of magnetic flux density considered in this study. The results further indicate that the displacement of the samples increases linearly within a small range of magnetic flux density, but the displacement responses show nonlinear behavior over the higher flux density range considered. Overall, the trends in the response of the H-MREs provide promising implications for their role as magnetically controlled actuators. Future work will need to focus on the development of a model that predicts the response of H-MRE beams with a variety of properties to an applied magnetic field, so that they may be more effectively employed as actuators.

Figures and Charts

Figure 1. Filler type comparison of (a) displacement and (b) blocked force (displacement in mm and blocked force in mN versus magnetic flux in mT for the Nd2Fe14B, BaFe12O19, SrFe12O19, and SmCo5 samples).

Figure 2. Filler percent comparison of (a) displacement and (b) blocked force (HSIV samples with 10%, 20%, 30%, 35%, and 40% filler).

Figure 3. Base elastomer type comparison of (a) displacement and (b) blocked force (HSII, HSIII, and HSIV samples at 20% filler).

Acknowledgments The author would like to thank Dr. Edelmann of the Electron Microscopy Facility at Miami University for providing the scanning electron microscope images. He would also like to thank Dr. Pechan, Dr. Dou, and Brian Kaster in the Physics Department at Miami University for curing the samples. Lastly, the author would like to thank his advisor for all the help and opportunities he has provided throughout the course of the project.

The outcome of the research presented in this paper has been published in two international conference proceedings. For further information, refer to the following:
J. H. Koo, A. Dawson and H. J. Jung, "Fabrication and Characterization of New Magneto-Rheological Elastomers with Embedded Hard Magnetic Particles," Proceedings of the 12th International Conference on Electrorheological Fluids and Magnetorheological Suspensions (ERMR 2010), August 16-20, 2010, Philadelphia, PA, USA.
J. H. Koo, A. Dawson, Y. K. Kim, and K. S. Kim, "Experimental Investigation of Magneto-Rheological Elastomers based on Hard Magnetic Fillers," Proceedings of the 21st International Conference on Adaptive Structures and Technologies (ICAST 2010), October 4-6, 2010, State College, PA, USA.

References 1. Lokander, M., & Stenberg, B. Performance of isotropic magnetorheological rubber materials. Polymer Testing, 22 (2003), 245-251.

Spectroscopy: Exploiting the Interaction of Light and Matter

Student Researcher: Jason H. Day

Advisor: Dr. Rebecca Schneider

The University of Toledo Department of Adolescent to Young Adult Education

Abstract By combining elements of NASA's educational guides "Building the Coolest X-ray Satellite" and "NASA Launchpad: Analyzing Spectra," students learn about properties of the electromagnetic spectrum. Students also learn that visible light is a portion of the electromagnetic spectrum and that many portions of the spectrum are studied and tell us much about the nature of our universe.

Lesson Plan Engage: 1. Using an image of a rainbow, or sunlight passing through a prism to display a rainbow on the whiteboard, ask students to explain what they observe. (Lead students to understand that sunlight is made up of all the colors of the visible spectrum, and each frequency of light is affected differently by the prism material.) 2. Ask students if it is possible to spread out the light into individual wavelengths. (Lead students to understand that with better spectrometers we can resolve more wavelengths of light.) 3. Ask students if we might put this property of light to any use. 4. Also, ask students how important our vision is, and what sort of information we get from it. Then ask whether we might benefit from being able to see more of the electromagnetic spectrum, and how this might be possible.

Explore: (Do the Explore activity from NASA's Analyzing Spectra guide: making a spectroscope.)

Explain: 1. Have students explain the types of spectra that we can observe. Make sure students understand the differences between the types. 2. Students should complete the questions on page 6 of NASA's Analyzing Spectra Student Guide. 3. Ask students to relate their observations of different spectra to different light sources in space, such as stars. Discuss how different stars may have different spectra. 4. Discuss with students that visible light is only a small part of the EM spectrum and that different radiation sources in space emit different spectra of other types of EM radiation, which we can detect through other technologies. 5. Discuss with students how we humans are "measuring" the visible light of our spectra. Talk about how a spectrometer might measure it. Discuss the fact that EM radiation has energy that can be measured by the spectrometer.

Elaborate: 1. Ask students what they feel when they stick their bare arms out into the sunlight on a spring day. (Students should be led to understand that they feel the "warmth of the sun.") 2. Ask students what has happened to give them the sensation of warmth. 3. Ask students if there is another effect that they can experience when out in the summer sun. (Tanning) 4. Again ask students what they think causes tanning. 5. Ask students what they think would happen if they split up the sunlight using a prism and stuck their bare arm under the red portion of the light, or the blue portion. (Students should be led to understand that toward the red end, actually in the infrared, is where they get the

warming effects, and toward the blue end, in the UV, is where they get the tanning effect.) 6. Ask students why these two ends of the visible spectrum behave this way. Why do you get two different results, one not bad at all, the other potentially cancer-causing? (Students should be led toward the idea that light has energy and that each color or wavelength carries a different energy; see the sketch after this list.) 7. Show the video segment "X-ray Spectroscopy and the Micro-calorimeter" from Building the Coolest X-ray Satellite, and do the discussions. 8. Perform Activity 1: Modeling and Using the Electromagnetic Spectrum, from NASA's "Building the Coolest X-ray Satellite".
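For the teacher's own reference on item 6, the energy-per-wavelength idea can be made concrete with the Planck relation E = hc/λ; the sketch below compares an infrared, a red, and an ultraviolet wavelength. This calculation is background support, not part of the NASA guides themselves, and the example wavelengths are simply representative values.

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c/lambda, reported in electron-volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

for label, wl in [("infrared (1000 nm)", 1000),
                  ("red (650 nm)", 650),
                  ("ultraviolet (300 nm)", 300)]:
    print(f"{label}: ~{photon_energy_ev(wl):.2f} eV per photon")
```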

Evaluate: 1. The questions throughout this lesson, and those provided in the two resources, can be used to assess student understanding. 2. Additionally, students will summarize their understanding of the interaction of electromagnetic radiation and matter that results in our science of spectroscopy by writing about what they know of the following topics: (1) What is the relationship between energy and the various regions of the electromagnetic spectrum? (2) What is the essence of the interaction between electromagnetic radiation and matter that spectroscopy exploits? (3) What regions of the electromagnetic spectrum are used for spectroscopy?

Learning/Performance Objectives 1. Student defines spectroscopy as the study of electromagnetic radiation emitted from an object. 2. Student explains that different sources of EM radiation emit different kinds of EM radiation. 3. Student distinguishes between a continuous spectrum, emission spectrum, and absorption spectrum. 4. Student states that EM radiation, including visible light, has energy. 5. Student explains that spectrometers exploit the interaction between light and matter, specifically the measurement of the energy of EM radiation. 6. Student explains that most matter in space emits and absorbs all sorts of EM radiation (visible, UV, infrared, gamma ray, X-ray, radio, microwave) that we can detect and study.

Alignment with Ohio Academic Content Standards Physical Science Standard, Benchmark G: Demonstrate that waves (e.g. sound, seismic, water, and light) have energy and waves can transfer energy when they interact with matter; Indicator 18. Demonstrate that electromagnetic radiation is a form of energy. Recognize that light acts as a wave. Show that visible light is a part of the electromagnetic spectrum.

Indicator 19: Recognize that electromagnetic waves can be propagated without a medium, i.e., through space.

Earth and Space science standard, Benchmark A: Explain how technology can be used to gather evidence and increase our understanding of the universe. Indicator 1: Explain how scientists obtain information about the universe by using technology to detect electromagnetic radiation that is emitted, reflected or absorbed by stars and other objects. Indicator 3: Explain how information about the universe is inferred by understanding that stars and other objects in space emit, reflect, or absorb electromagnetic radiation, which we then detect.

Underlying Pedagogy So much of science and technology is interconnected and learning one thing, such as the phenomenon of visible light diffraction into the colors of the rainbow, frequently opens up new paths of understanding to explore. I believe that students will feel a sense of empowerment when they not only learn about a phenomenon, such as visible light diffraction, but also its significance in the technology of the world that

they live in. I think too much of science is taught as a study of encyclopedic knowledge that is inert. The dynamic nature of scientific knowledge is only apparent when it is applied through technology and experiment. The interaction of students and the pieces of simple technology – such as the spectrometer they build in NASA's Analyzing Spectra guide – gives them an intimate connection to the phenomenon; they are witnessing it, they are not being told about it or told how they should feel about it or think about it; the student using technology he or she built is simply present with his or her own thoughts and ideas about the phenomenon she or he is observing. I think this can be a stirring moment. But when we take the student from this beginning understanding and broaden it so that they can understand a bigger picture of the ideas they have been introduced to – such as the fact that through our technology we can exploit the entire EM spectrum in our study of the cosmos – they will develop a sense of empowerment as they realize that science is not a bunch of unfathomable mysteries, but rather ideas and concepts that are enjoyable to think about and understand. To move students toward this goal, using hands-on projects and constructing simple technology to use is a powerful way to teach scientific concepts and phenomena.

Type and Level of Student Involvement Students must be active group members when working on the construction of the spectrometer. They also need to actively participate in the discussions in the Engage and the Elaborate portions. The entire class must be actively involved in the Activity from “Building the coolest X-ray Satellite.”

Resources Required Portions of two of NASA's Educator Guides are used in the lesson. The teacher should be familiar with both of them, and can find additional ideas in these two Educator Guides: NASA eClips Educator Guide: NASA's Launchpad: Analyzing Spectra, and Building the Coolest X-ray Satellite. There are materials required for building the spectrometer in the Explore portion and for demonstrating the energy of the various portions of the EM spectrum in the Elaborate portion, which are identified in the two Educator Guides mentioned above.

Implementation Results This lesson is expected to be taught at the end of April or the beginning of May 2011.

Assessment and Results This lesson is expected to be taught at the end of April or the beginning of May 2011.

Critique and Conclusion of the Project This lesson is expected to be taught at the end of April or the beginning of May 2011.

111 Lunar Math

Student Researcher: Danielle M. DeChellis

Advisor: Ms. Karen Henning

Youngstown State University Department of Mathematics and Education

Abstract In my project, I will use NASA materials to create a lesson plan that teaches students to calculate the distance between the Earth and Moon. In this lesson, 7th and 8th grade students will use sports balls of different sizes to create scale models of the Earth and Moon. Using the “Moon ABC’s Fact Sheet,” students will be able to formulate a scale for the different sports balls. As they focus on proportions and conversions, the students, separated into groups, will answer questions about their scale models of Earth and the Moon. In conclusion, they will see whether their scale models fit into the classroom. This will help them see how far apart the two bodies really are.

Lesson This lesson will teach students about conversions and proportions by allowing them to create their own scale models of the Earth and the Moon. After giving the students a brief description of the distance between the Earth and Moon and what scale models are, the teacher can demonstrate the idea of the activity. The teacher will show the class two cutouts of scale models of the Earth and Moon, and show them how far apart they should be placed based on the chosen scale.

Next, the students should be separated into groups of three or four to complete the activity. Each group will be given a sports ball (predetermined by the teacher) to represent the Earth. The students will then complete the teacher modified version of the worksheet from the Exploring the Moon: A Teacher’s Guide with Activities. After receiving the sports ball that represents Earth, the group will have to choose, from a variety of other sports balls, which ball would best represent the model of the Moon in comparison to their given Earth. After completing the worksheet, the group will have determined a scale and will use a meter tape to determine whether their Earth and Moon would fit in the classroom. Then they will present to the class their model, and why they chose that specific ball to represent the Moon. Last, each individual in the group will turn in a completed worksheet, showing that they understood all conversions and kept track of all units necessary. After each group has presented their scale models, the class can then have a discussion about what they have learned, and the teacher can pose questions to the students that they should be able to answer after completing the lesson.
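The proportion-and-conversion work at the heart of the activity can be summarized in a short calculation. The sketch below is one possible illustration of that arithmetic, assuming approximate textbook values for the real diameters and distance (close to those on the “Moon ABC’s Fact Sheet”) and a basketball of about 24 cm as a hypothetical Earth ball.

# Scale-model calculation: given a ball chosen to represent Earth, find the
# matching Moon-ball diameter and the scaled Earth-Moon distance.
# Approximate real-world values (close to those on the Moon ABC's Fact Sheet):
EARTH_DIAMETER_KM = 12756.0
MOON_DIAMETER_KM = 3476.0
EARTH_MOON_DISTANCE_KM = 384400.0

def scale_model(earth_ball_diameter_cm):
    """Return (scale in cm per km, Moon ball diameter in cm, scaled distance in m)."""
    scale = earth_ball_diameter_cm / EARTH_DIAMETER_KM      # cm of model per km of reality
    moon_ball_cm = MOON_DIAMETER_KM * scale
    distance_m = EARTH_MOON_DISTANCE_KM * scale / 100.0     # convert cm to m
    return scale, moon_ball_cm, distance_m

# Example: a basketball (~24 cm across) standing in for Earth.
scale, moon_cm, dist_m = scale_model(24.0)
print(f"Moon ball should be about {moon_cm:.1f} cm across")   # roughly 6.5 cm, close to a tennis ball
print(f"Scaled Earth-Moon distance: about {dist_m:.1f} m")    # roughly 7 m: does it fit the classroom?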

Objectives The learning objectives of this class activity are to help students calculate the distance between scale models of the Earth and Moon. Participation in this activity will help the students achieve a greater understanding of the vast distance between the two bodies. Students will also learn how to create scales using proportions.

Alignment This activity aligns with the Ohio Academic Content Standards by meeting several different 7th and 8th grade Mathematics standards. This lesson uses standards that pertain to measurement techniques. Students will solve problems that involve proportional relationships, for example, the scale proportion between the Earth and Moon. Students will also meet several geometric and spatial standards, such as using proportional reasoning to express relationships between parts of similar and congruent figures, the similar figures being the Earth and the Moon.

Underlying Theory This lesson could be classified as a lesson based on Jerome Bruner’s Discovery Learning Theory, which is similar to the inquiry-based and constructivist approaches. During this lesson, students become actively

involved in a problem-solving situation to gain knowledge about the existing problem by using previous knowledge and newly acquired knowledge. They interact with their group members by manipulating objects and discussing questions to create an accurate scale model of the Earth and Moon. The teacher helps to facilitate learning by guiding students to find their desired model and understand how conversions and proportions are useful in everyday life.

Student Engagement According to Bloom’s Taxonomy, this lesson uses each of Bloom’s six levels of the cognitive domain. Students will use the first level, Knowledge, to write and define what they know and will learn about the definitions of scales and proportions. Then, students will master levels two and three, Comprehension and Application, by understanding the definition of a scale and creating their own scales and models of the Earth and Moon. Students will then analyze and synthesize their models, and see if they can use other objects to create two similar models of the Earth and Moon. Last, students will evaluate the models they have created and understand how the models are smaller replicas of the Earth and Moon in outer space. As the levels progress, the students will become increasingly involved with the activity as well as with their peers. This activity will introduce the students to higher-level thinking and require them to become actively involved.

Resources Each group will be given a sports ball to symbolize the Earth. Then each group will be provided with several other different sports balls to choose from to symbolize the Moon (the variety of sports balls will be determined by how many and which ones are available for classroom use). The students will also be given a copy of the “Moon ABC’s Fact Sheet” so they have an accurate measurement of the Earth and the Moon’s diameter. In addition to the fact sheet, they will receive another worksheet, based off of NASA’s “Distance to the Moon” worksheet, which will include the diameters of several of the most common sports balls that are available. A calculator will also be provided to aid the students in their calculations. Lastly, a meter tape will be provided to aid the students in placing their Earth and Moon the correct distance apart in the classroom.

Results Although this classroom activity has yet to be implemented, I believe each group will provide the classroom with their scale model of the Earth and Moon. They will give an explanation of what their scale, in centimeters, was and how they got that measurement. Along with their scale model, they will individually hand in a teacher modified version of the “Distance to the Moon” worksheet from NASA’s Exploring the Moon: A Teacher’s Guide with Activities for Earth and Space Sciences.

Assessment Students will be assessed by their presentation and explanation of their scale model. They will also be assessed by the accuracy of the worksheet they complete and turn in. A great deal of attention should go toward making sure they show work for their conversions and that all the units are correct.

Conclusion This classroom activity will give students an opportunity for a hands-on approach to learning about conversions and proportions. It will enable students to integrate science and mathematics using common objects to grasp the concept of lunar size and its relationship to Earth.

References 1. Exploring the Moon: A Teacher’s Guide with Activities for Earth and Space Sciences. “Distance to the Moon”. pg 25-28. 2. Ohio Academic Content Standards. http://www.ode.state.oh.us/ 3. Bloom’s Taxonomy of Learning Domains. http://www.nwlink.com/~donclark/hrd/bloom.html 4. Discovery Learning (Bruner). http://www.learning-theories.com/discovery-learning-bruner.html

113 The Night Sky

Student Researcher: Gina M. Dehner

Advisor: Linda Plevyak

University of Cincinnati Department of Early Childhood Education

Abstract The Night Sky is a four day unit designed for first grade students to learn about stars and constellations. The lessons will focus on the definition and characteristics of stars and constellations. The students will have an opportunity to complete an experiment to answer the question “Where are the stars during the day?” Students will also be able to create a fictitious constellation and write a story about it. To conclude the unit there will be a guest speaker joining the class to show tools and technology that can be used to view and study stars and constellations.

Lesson Overview During day one of the unit the students will be engaged through listening to information about stars from various books. They will participate in a discussion about stars and list characteristics of stars. Following the discussions the students will complete an activity where they list four characteristics of stars on small star cutouts that will be attached to construction paper. This activity will be hands-on and engaging while providing a way for the students to express their knowledge gained during the whole class discussion.

Day two of the lesson will be all about constellations. The students will listen to books about constellations and will answer posed questions. After the discussion the students will have two activities to work on. During the first activity they will receive pictures of constellations and connect the dots to show what they think each constellation is a picture of. Children will share their pictures with their peers, allowing them to see that people can see different pictures while looking at the same constellation. Then the students will create their own fictitious constellation and write a story about that constellation.

During day three the students will be engaged in an experiment to answer the question, “Where are the stars during the day?” Each group will be given a flashlight and will work together to come up with an answer. As the students are working I will be walking around posing questions to the groups to assist them with the experiment. I will aim my questions to get the students thinking about how bright the classroom is and how the light from the flashlight cannot be seen while shining on the ceiling. To bring closure I will hold a class discussion about the experiment to see what the class concluded. I will then discuss how the daytime sky, like the classroom, is very bright, and how it is hard to see the stars for the same reason it was hard to see the light from the flashlight.

A guest speaker will visit the class on day four to show the class the tools and technology used to view and study stars and constellations. After the presentation I will give the unit assessment to the class.

Objectives  The students will describe four characteristics of stars.  The students will define what a star is.  The students will define what a constellation is.  The students will create a fictitious constellation and create a story about it.

Alignment  Scientific Ways of Knowing Nature of Science 1.2 Demonstrate good explanations based on evidence from investigations and observations.

114 Underlying Theory This unit supports the constructivist theory of Jerome Bruner. Students have their own ideas about stars and constellations, their naïve conceptions. This lesson will help them to acquire new knowledge about the night sky, particularly about the definitions and characteristics of stars and constellations. This new knowledge, acquired by researching the topics in books and participating in class discussions will be accommodated or assimilated into the information the students already know. The experiment on the third day will support the constructivist theory by allowing the students to discover new concepts about stars through a hands-on experience. The students will engage in inquiry by making observations about stars, posing questions, using tools to gather data, and communicating the results with their group.

Assessment Throughout the lesson I will use many formative assessments to track the progress of the students. These include teacher observations, collection of work, and questioning. These formative assessments will help me to see if the students understood the material presented to them.

I will also assess the students with a summative assessment on the final day of the unit. This assessment is designed to see what the students learned throughout the unit and if they meet the standards laid out for the lesson. The assessment contains three parts:

1. What is a star? 2. What is a constellation? 3. Draw an example of a constellation in the box below (a box was provided on the assessment).

Assessment Results Of the thirty-six students assessed with the end-of-unit test:

28 students were able to correctly define a star (appropriate answer: ball of glowing gas). 29 students were able to correctly define a constellation (appropriate answer: stars that make a picture). 31 students were able to correctly draw an example of a constellation.

A rubric was followed while scoring the test.

4 points (A) - 3 questions were answered correctly 3 points (B) - 2 questions were answered correctly 2 points (C) - 1 question was answered correctly 1 point (D) - An attempt was made but no questions were answered correctly 0 points (F) - No attempt was made and no questions were answered correctly

Conclusion Overall the students did well on the test. A majority of the scores were 4’s which is equivalent to an A. The students were able to list the definition of a star and constellation and were able to give an example of a constellation. Their answers showed that the majority of the students met the objectives for the unit.

The formative assessments were also very positive showing the students’ understanding of stars and constellations. The stories and pictures about the constellations showed an understanding of what a constellation is and how there can be stories about constellations.

115 Commercial UAV Autopilot Testing and Test Bed Development

Student Researcher: Joseph M. DiBenedetto

Advisor: Dr. Michael Braasch

Ohio University Department of Electrical Engineering

Abstract There are many commercial uses for small-scale Unmanned Aerial Systems (UAS) in domestic airspace. Before UAS can be permitted into airspace over populated areas they must be proven safe and reliable. There are many concerns to be addressed with UAS in commercial airspace. They need to be able to accurately follow a flight path, and sense and avoid obstacles that they might encounter. They need to be able to do these tasks reliably and efficiently. There are several commercial-off-the-shelf autopilots available that have a wide variety of capabilities. As a start to testing these commercial-off-the-shelf autopilots, a test bed is needed. After development of the initial test bed, testing of a low-cost commercial-off-the-shelf autopilot began. The functionality and reliability of the unit were tested, as well as the failsafe sensors built into the system.

Project Objectives This project deals with the development of a test bed for small-scale UAS and the testing of a low-end commercial-off-the-shelf autopilot, the Ardupilot designed by DIY Drones. The objectives of the project are to develop a test bed that is suitable for testing small-scale UAS systems in an accurate manner and to test the reliability and robustness of the Ardupilot. This includes determining payload capabilities and flight duration for the test bed as well as verifying the capabilities and testing the user interface of the Ardupilot. Safety is a very important factor in the National Airspace System, so we will also be looking for anything that could compromise safety for other aircraft in the area, as well as for structures or people on the ground in the area of flight.

Methodology Used The GALAH, designed by Autonomous Unmanned Air Vehicles, was chosen to serve as the base for the test bed because of the size of the payload bay and the capability for the payload bay to be interchangeable, so more than one payload can be tested in a shorter amount of time. A Desert Aircraft DA-50 two-cycle engine was chosen to give more power and to add some weight to the rear of the airframe. This increase in weight at the rear of the airframe allows for a larger payload, and the larger engine provides more power, eliminating dead weight in the rear of the airframe that would otherwise reduce payload. After assembling the GALAH and installing all of the components needed for the GALAH to be air ready, a Certificate of Authorization (COA) was obtained. The pilot obtained a class two flight physical and passed the written pilot's test for full-scale aircraft, and the observer obtained a class two flight physical. Since the GALAH is a pusher-engine airframe, meaning that the engine faces the opposite way than on a normal airframe and pushes the airframe through the air instead of pulling it, a ballast weight was added to the front of the airframe to obtain the proper center of gravity. Preliminary tests on the ground were conducted to check the control surfaces and the engine. Then short test flights were flown to test the airframe of the GALAH, and the airframe was checked to make sure that there were no signs of wear and that it remained structurally sound. After a few short test flights with no problems, extended test flights were flown to test the power of the engine in the air, the response of the airframe to moving the control surfaces, and to gauge maximum flight duration times with the given fuel tanks. After these tests were completed the airframe was ready to test a UAS system.

Because the code for the Ardupilot is open source and not very well documented, a very controlled approach was used. After the Ardupilot was assembled with all the components and connectors attached, initial testing started on the workbench. A connection was established to communicate with the Ardupilot using an FTDI-to-USB cable. This allowed the Ardupilot to be configured to the specifications needed to allow testing. The GPS signal was simulated to allow testing of the initialization process and to be

able to test the board to make sure all connections were properly attached. After all of the connections were verified the Ardupilot was attached to a scale-model truck for initial testing on the ground. This was done because the way the Ardupilot handles many of the processes needed to control the vehicle is not clearly defined, which raised safety concerns about putting the Ardupilot in control of an aircraft. After installing the Ardupilot in the model truck, testing of the GPS antenna was conducted. Once the GPS consistently obtained a locked signal, testing would have proceeded to evaluate the Ardupilot's control of the truck. Then the capabilities and robustness of the Ardupilot would have been tested. Had those tests gone well, the Ardupilot would have been placed in the test bed and tested again with the airframe.

Results Obtained The GALAH test bed final airframe weight was 22 pounds, including a five-pound weight in the nose of the airframe to maintain the proper center of gravity. The payload capacity of the airframe was five to eight pounds, depending on where in the payload bay the payload is placed: the farther the payload is from the center of gravity, the greater its effect on the center of gravity. The maximum flight duration is 20 to 25 minutes depending on how hard the engine is run during the flight; the harder the engine is run, the lower the flight time. The maximum flight time also factors in having some fuel left in the fuel tanks at the end of the flight for safety reasons. The airframe responds very well to the control surfaces being moved and handles very elegantly. The conclusion is that the GALAH-based test bed is reliable and robust enough to serve as a UAS test bed and meets the requirements for testing needs.

The Ardupilot was not found to be reliable enough for further testing during the initial testing on the ground. The GPS signal took an extremely long time to lock and then did not maintain a lock reliably. The GPS lock failed in every test that was conducted and there were no obstructions in the test field that should have produced errors in the signal. When the GPS failed the Ardupilot lost all functionality until the GPS lock was reestablished. The initial GPS testing was done with the model truck motors disabled and secondary testing was done with the motors enabled. When the GPS signal was lost the Ardupilot became motionless. Several times when the GPS signal was lost, the signal was not reacquired even after 25 minutes. The system was deemed too unreliable to proceed with testing any further after the GPS signal was lost consistently and the cause was tracked back to the Ardupilot itself.

117 Evaluation of a High-Cost Autopilot and Certificate of Authorization Process

Student Researcher: David M. Edwards

Advisor: Dr. Michael Braasch

Ohio University Department of Electrical Engineering

Abstract Unmanned Aerial Systems (UAS) are becoming commonplace in the military, but they have yet to reach their potential in commercial airspace. Before putting UAS in the National Airspace System (NAS), verification of their safety is required. While much of the focus has been on sensing an object, there has been less emphasis on avoiding it. The purpose of the project is to evaluate the strengths and weaknesses of a high-cost autopilot and determine possible problems when putting a small-scale UAS containing a high-cost autopilot into the National Airspace System. The process of the FAA’s Certificate of Authorization (COA) will be discussed, as well as its strengths and weaknesses.

Project Objectives Before the strengths and weaknesses of the autopilot can be evaluated, a certificate of authorization must be obtained. A Certificate of Authorization (COA) is a document that is obtained from the FAA that allows a user to fly an Unmanned Aerial System (UAS). This involves submitting paperwork to the FAA about everything from descriptions of the UAS to the pilot and aircrew qualifications. Once a COA has been obtained, then the strengths and weaknesses of the high-cost autopilot can be evaluated. Concerns such as user interface issues, entering a waypoint that is impossible for the UAS to reach, and other unknown safety issues can be evaluated.

Methodology Used The first goal was to obtain a COA to fly the UAS that will contain the selected high-cost autopilot. This began by creating a description of the UAS, which contained the dimensions and weight of the aircraft, a description of the control station, and a description of the communication systems. Next, the performance characteristics of the aircraft had to be addressed. This included the climb rate, descent rate, turn rate, cruise speed, operating altitudes, approach speed, gross takeoff weight, and a description of the launch and recovery procedures. Following the performance characteristics, descriptions of the lost link, lost communications, and emergency procedures were written. Next, a satellite map of the operations area was provided, as well as coordinates for the airfield. Finally, descriptions of the pilot and observer qualifications are required. The pilot must pass the private pilot written exam, and both the pilot and the observer must have class 2 medical certificates. After a few iterations of submitting and correcting the COA application, a COA was obtained for the GALAH airframe, which was designed by Autonomous Unmanned Air Vehicles and was chosen to be used with the high-cost autopilot. The empty weight of the GALAH is about 22 pounds. It has a wingspan of 2.37 meters, and a length of 1.88 meters. Our current configuration allows for a 20 to 25 minute flight-time and a cruise speed of 65 knots.

The high-cost autopilot chosen was the MicroPilot MP2028LRC. It is capable of autonomous takeoff and landing, and has a dual communication link on two different frequencies (900MHz and 2.4 GHz). If communication is lost on one of these frequencies, the other frequency can be used to control the aircraft. The Horizon software that is included with the system allows for communication with the autopilot while in flight, allowing waypoints to be changed during a mission. It also shows the airspeed, altitude, and location of the aircraft. Testing of the autopilot began with the aircraft on the ground and with the engine off. The first test to be performed is the verification of the switching mechanism that switches control between the autopilot and the Pilot In Command (PIC). The autopilot’s control of the servos will be verified by physically moving the UAS.

118 Results Obtained One of the strengths discovered while going through the COA application process was the level of detail required from the person and/or organization submitting the COA. It required that every last detail of the aircraft be known, and that procedures for loss of communication and emergencies be resolved. This helped the person and/or organization submitting the COA prepare for situations that might not otherwise have been thought of prior to the submission of the COA. One of the weaknesses found when going through the COA application was the time that it took to complete the process. It took about 5 months from the first submission of the COA to when the COA was approved (including corrections to the COA application). Another weakness found was in the COA application itself. Many parts of the application were designed for large-scale UAS and were not designed for small-scale UAS. This made parts of the application difficult to fill out due to the lack of relevance.

One of the strengths discovered when setting up the autopilot system was that it is extremely detailed. There are many parameters and settings that can change the characteristics of the autopilot, including how it controls the aircraft and how it behaves during a failure. The autopilot is able to identify certain failures such as a loss of electric power, a loss of engine power, or a loss of communications, and can be programmed to react differently in each situation. For example, if the autopilot loses communications with the ground, the autopilot can be programmed to fly back and circle the landing strip. Another strength discovered when setting up the autopilot system was related to the waypoints. An early concern was the possibility of entering a waypoint into the autopilot that was out of the range of the UAS. There is a setting where you can define the area for allowable waypoints. Any waypoint entered outside of this defined area returns an error from the Horizon software (a simple boundary check of this kind is sketched below). One of the weaknesses discovered when setting up the autopilot system was caused by the complexity of the autopilot. It was fairly easy to make a user error and change a value that the user never meant to change. This required the parameters and settings to be checked more frequently.
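As a generic illustration of the allowable-waypoint check described above (this is not the Horizon software's interface, and the boundary coordinates are hypothetical), a rectangular boundary test might look like the following.

# Generic illustration of an allowable-waypoint check like the one described above.
# This is not the Horizon software's API; the boundary values are hypothetical.
ALLOWED_AREA = {"lat_min": 39.310, "lat_max": 39.330,
                "lon_min": -82.120, "lon_max": -82.090}

def waypoint_allowed(lat, lon, area=ALLOWED_AREA):
    """Return True if the waypoint lies inside the rectangular allowed area."""
    return (area["lat_min"] <= lat <= area["lat_max"] and
            area["lon_min"] <= lon <= area["lon_max"])

for wp in [(39.321, -82.101), (39.400, -82.101)]:
    print(wp, "accepted" if waypoint_allowed(*wp) else "rejected: outside defined area")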

The COA process took much longer than was expected, so the autopilot was only tested on the ground. Further testing is needed in the air to make additional conclusions, but several conclusions can already be drawn from the experience gained with the autopilot. To safely place autopilots in the NAS, the autopilot must be able to detect a failure and react to it according to a pre-programmed procedure for that particular failure. There must also be a PIC who is able to take control in the event of unexpected behavior or failure of the autopilot. Finally, the system must have a robust and reliable bypass switch, so that the PIC can switch between autopilot control and PIC control.

119 A Load Balancing, Scalable Parallel Algorithm for Simulating the Control of Steady State Heat Flow Through a Metal Sheet

Student Researcher: Kristen D. Edwards

Advisor: Mr. Robert Marcus

Central State University Department of Computer Science and Mathematics

Research Objectives The objective of this research project is to enhance the scalable parallel algorithm that simulates the control of steady-state heat flow through a rectangular metal sheet to determine an optimum load-balancing method for cooling residual heat on the sheet. The simulation sets initial boundary conditions of 1000ºC applied to three edges and an ice bath of 0ºC applied to the other edge. The simulation distributes a section of rows to each task in the parallel partition and then the steady-state heat flow condition is determined. Various numbers of tasks were used so that the optimum speed-up and scalability of the algorithm could be determined. The project used MATLAB surface plots to display the various heat topologies and MATLAB two-dimensional function plots to display the speed-up graphs.

In past research it was determined that when the hot edges (1000ºC) were lowered, relative maxima would occur along certain rows and columns of the sheet. These hottest rows and columns were cooled, but that did not effectively cool the entire sheet. The current technique cools every 50th row and 50th column of the sheet, and the sheet was effectively cooled without cooling every cell. The load-balancing technique will allow us to distribute all the rows of the sheet across the partition of tasks in an efficient manner so that any number of tasks can be used.
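The report does not spell out the numerical scheme, but a common way to reach the steady-state condition on a grid like this is iterative relaxation. The serial sketch below assumes Jacobi iteration and uses an illustrative grid size rather than the 1000 x 1000 sheet used in the study; it shows the computation each parallel task would perform on its block of rows.

import numpy as np

# Minimal serial sketch of the steady-state computation (the parallel version
# gives each task a block of rows and exchanges boundary rows every sweep).
# Jacobi relaxation is assumed; the report does not name the exact scheme.
N = 200                                              # illustrative grid size
sheet = np.zeros((N, N))
sheet[0, :] = sheet[:, 0] = sheet[:, -1] = 1000.0    # three hot edges, 1000 C
sheet[-1, :] = 0.0                                   # ice-bath edge, 0 C

for _ in range(20000):
    interior = 0.25 * (sheet[:-2, 1:-1] + sheet[2:, 1:-1] +
                       sheet[1:-1, :-2] + sheet[1:-1, 2:])
    converged = np.max(np.abs(interior - sheet[1:-1, 1:-1])) < 1e-3
    sheet[1:-1, 1:-1] = interior
    if converged:
        break

print("maximum internal temperature:", sheet[1:-1, 1:-1].max())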

Figure 1. Steady-State Heat with Three Hot Edges
Figure 2. Resetting Hot Edges: 1000ºC to 500ºC

Figure 3. Reset Every 50th Row and 50th Column
Figure 4. Speed-up: (Run time for one task) / (Run time for n tasks)

Conclusion The dimensions of the sheet used were 1000 x 1000 and the steady-state heat conditions were determined. The three hot edges were cooled to 500ºC and the heat topology showed that the maximum internal heat was 740ºC. Every 50th row and 50th column was reset to 500ºC and steady-state conditions were

determined again. The heat topology showed that the maximum internal heat was reduced to 590ºC. This was done while the sheet remained distributed.

The load-balancing technique determined the number of rows to distribute to each task by dividing 1000 by the number of tasks in the parallel partition and then determining the remainder from that division. The remainder indicated how many tasks in the partition would get an extra row (a short sketch of this distribution is given below). This project showed that the maximum speed-up was achieved using a partition with 28 tasks.
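A minimal sketch of that row-distribution rule, written in plain Python rather than the C/MPI implementation used in the project:

def distribute_rows(total_rows, num_tasks):
    """Every task gets total_rows // num_tasks rows; the first
    (total_rows % num_tasks) tasks each get one extra row."""
    base, extra = divmod(total_rows, num_tasks)
    return [base + 1 if rank < extra else base for rank in range(num_tasks)]

counts = distribute_rows(1000, 28)   # the partition size that gave the best speed-up
print(counts, sum(counts))           # first 20 tasks get 36 rows, the rest get 35; total 1000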

References 1. Parallel Programming in C with MPI and OpenMP, by Michael J. Quinn. 2. Using MPI: Portable Parallel Programming with the Message-Passing Interface, 2nd ed., by Gropp, Lusk, and Skjellum. 3. Programming in MATLAB, by Marc E. Herniter. 4. C Programming: A Modern Approach, by K. N. King. 5. Computer Concepts with C++ Essentials, by Cay Horstmann. 6. High Performance Cluster Computing, Vol. 2, Programming and Applications, by Rajkumar Buyya.

121 Observing Transiting Extrasolar Planets at the University of Cincinnati

Student Researcher: Davin C. Flateau

Advisor: Dr. Michael Sitko

University of Cincinnati Department of Physics

Abstract The University of Cincinnati Observatory was refitted for the purpose of creating an ongoing extrasolar planet transit observation program. Transits of two extrasolar planets across the discs of their host stars were observed with the 35.6 cm-diameter Schmidt-Cassegrain telescope housed at the Observatory. Light curves were created and transit parameters were then calculated. A full transit of the planet HAT-P-13 b was observed on 2011-03-02 with a transit duration of 3.22 ± 0.05 hours and a depth of 10.05 ± 0.4 millimagnitudes (mmag), which are within 0.31% and 4.7% of established literature values, respectively. The second half of the transit of planet XO-2 b was observed on 2011-02-20, yielding a total transit length of 2.67 ± 0.04 hours and a depth of 14.80 ± 0.48 mmag, which are within the measurement error range of the best-known literature values. More precise measurements of many different transiting extrasolar planets are expected in the future as the telescope is mechanically altered to improve sidereal tracking. The need for a larger-format camera is noted in order to capture more photometric comparison stars, reducing the error associated with each target magnitude measurement. Transit measurements of other planets continue; all data will be submitted to the Transiting Extrasolar Planets and Candidates scientific database.

Overview and Rationale The discovery and characterization of planets around other stars has been a field of intense scientific study over the past two decades. To date, over 530 planets around other stars have been identified, revealing solar systems that differ from ours in dramatic ways, leading to a revolution in theories of planet formation and evolution.

The first extrasolar planets were found with the technique of radial velocity measurement – measuring the subtle “wobble” imparted on a star, usually by a Jupiter-sized or larger planet orbiting very close to its star (known as a “Hot Jupiter”). More recently, the discovery of transiting planets – the slight dimming of a star’s light as a planet passes in front of it from Earth’s vantage point – has accelerated the discovery rate of exoplanets, and yielded tight constraints on planetary orbits, masses, and even atmospheric compositions. This is the method behind NASA’s Kepler planet-finding satellite, which is currently verifying over 1200 suspected transiting planet candidates. The current number of verified transiting extrasolar planets stands at 125, with new discoveries announced every month from various telescopic surveys.

With the recent technological advances in telescopes, software and CCD detectors, meaningful contributions to exoplanet science can be accomplished with modest equipment. Precision photometry – the precise measurement of the light curve of a star during a planet’s transit – can be accomplished with readily available commercial CCD equipment and relatively small-aperture telescopes. Many larger telescopes, whose time is carefully allocated to many different observers and targets, cannot monitor an individual extrasolar planet host star for long periods of time. Smaller telescopes that can devote significant time to monitoring the light curves from these stars – both in and out of transit – can contribute to the constraints on a planet’s radius, orbital period, and orbital distance, and to the detection of any undiscovered objects in the system, such as other planets and asteroids. A necessary first step for such a program is to observe a variety of existing, known extrasolar planet transits and compare the results to the best-known published values.

Background The orbital planes of planetary systems throughout the Universe have a random orientation to our line of sight, resulting in a 10% chance that a planetary system will transit its host star (Borucki & Summers

1984). During a transit, a planet traveling across its host star’s disk obscures a percentage of the star’s light, producing a characteristic “transit curve.” A simplified transit geometry can be found in Figure 1.

Figure 1. Left: Simplified geometry of a planetary transit against its host star. Only the combined star-planet flux is observed. During transit, the observed flux drops as the planet obscures part of the starlight. The flux rises again as the planet’s dayside emerges into view. The flux will drop again as the planet is occulted by the star. Right: Geometry of a transit, showing the four contact points with the stellar limbs, the impact parameter (b), depth (δ) and the transit time variables t and τ (Winn 2010).

Using the logarithmic magnitude scale of astronomy, where a 1 magnitude difference is a 2.5-fold change in brightness, 1 millimagnitude (mmag) is equivalent to 0.1% of a star’s brightness. Bright extrasolar planets that can be monitored with small telescopes typically produce transits with depths from 1 to 30 mmag, or from 0.1% to 3.0% of the star’s brightness (Gary 2010).
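As a quick check of that conversion, the millimagnitude depths quoted in this report translate to fractional dimmings as follows (a small sketch; the depths are simply the values discussed in the text):

# Convert a transit depth in millimagnitudes to a fractional drop in flux,
# using the magnitude relation delta_m = -2.5 * log10(flux_ratio).
def depth_fraction(depth_mmag):
    return 1.0 - 10.0 ** (-depth_mmag / 2500.0)

for depth in (1.0, 10.05, 14.8, 30.0):
    print(f"{depth:5.2f} mmag  ->  {100.0 * depth_fraction(depth):.2f}% dimming")
# 1 mmag is roughly 0.1%; 10.05 mmag is about 0.92%; 14.8 mmag about 1.35%; 30 mmag about 2.7%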

A time-series of photometric measurements using a telescope and CCD camera can be used to create a light curve of the change in the star’s brightness over time. Differential or ensemble photometry is a technique of comparing the target star brightness to other non-variable stars in the same field. For n comparison stars, this technique reduces the stochastic error measurement of the transiting star to

SE_ensemble = sqrt(SE_1^2 + SE_2^2 + … + SE_n^2) / n

where SE_1, SE_2, etc. are the stochastic error measurements of the individual comparison stars due to readout noise and atmospheric effects (Gary 2010).
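A minimal sketch of that ensemble step, with made-up flux and error values purely for illustration (the error combination follows the expression above):

import numpy as np

# Differential/ensemble photometry step: divide the target flux by the mean of
# n comparison-star fluxes; the numbers below are hypothetical, for illustration only.
target_flux = 15230.0                                  # counts in one exposure
comp_fluxes = np.array([22110.0, 18400.0, 30125.0])    # comparison-star counts
comp_errors = np.array([0.004, 0.005, 0.003])          # per-star stochastic errors (mag)

relative_flux = target_flux / comp_fluxes.mean()
ensemble_error = np.sqrt(np.sum(comp_errors ** 2)) / len(comp_errors)

print(f"relative flux = {relative_flux:.4f}")
print(f"ensemble reference error = {ensemble_error:.4f} mag")  # smaller than any single star's error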

Transit parameters such as the depth of the transit (δ), the length of the transit, and the time of mid-transit can be derived from the light curve by fitting the data to a model of the transit and minimizing “chi-squared,” the sum-of-squares statistic between the model and the data, given as

χ² = Σ_i [(f_i(obs) − f_i(calc)) / σ_i]²

where f_i(obs) is the observed value of the relative flux for observation i, f_i(calc) is the corresponding flux calculated from the model, and σ_i is the measurement uncertainty. An expected “goodness of fit” (~1) to the model can be calculated with the reduced chi-squared, χ²_N, which normalizes χ² for a set of n_p free parameters and n_x data points (Winn 2010 and Haswell 2010):

χ²_N = χ² / (n_x − n_p)
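Expressed as code, the two statistics above are simply (a small sketch using NumPy):

import numpy as np

def chi_squared(f_obs, f_calc, sigma):
    """Sum-of-squares misfit between observed and model fluxes."""
    return np.sum(((f_obs - f_calc) / sigma) ** 2)

def reduced_chi_squared(f_obs, f_calc, sigma, n_free_params):
    """Chi-squared normalized by the degrees of freedom (n_x - n_p);
    values near 1 indicate a good fit."""
    return chi_squared(f_obs, f_calc, sigma) / (len(f_obs) - n_free_params)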

123 Procedure and Observations The University of Cincinnati Observatory was refit to conduct observations in a new transiting extrasolar planet observation program. The Observatory, which is located on the main University of Cincinnati campus, currently houses a 14” LX-200 GPS pier-mounted, computer-controlled telescope. A new primary detector, a Santa Barbara Instrument Group ST-7 XME CCD camera, was obtained to take photometric measurements. A focal reducer was converted to allow experimentation with imaging at various f/ratios to reduce exposure times. New software was acquired to assist in telescope pointing, image acquisition, and photometry processing, and to confirm the polar alignment of the telescope to within 0.2 arcminutes of the north celestial pole. Precision tracking is needed during exposure to keep stars on the same CCD pixels, reducing uncertainties due to pixel-to-pixel response variations and flat-field normalization.

Transits of the planets XO-2 b and HAT-P-13 b were chosen for observation due to their altitude above the horizon during transit, transit depth, and the availability during cloud-free nights. Transit timing information was obtained from the Transiting Extrasolar Planets and Candidates online resource (TRESCA).

An observation field of view was chosen to include the exoplanet host star, nearby stars that could be used as comparison stars in the differential photometry, and a sufficiently bright guide star that could be used on the camera’s autoguiding chip to correct for small errors in tracking. Both transits were observed with a clear blue-blocking filter, allowing as much light to fall on the CCD chip as possible while filtering out scattered moonlight present in the background sky.

Each observation was evaluated for tracking quality and other defects such as intrusive cosmic-ray events and instrument failures. Acceptable observations were calibrated, aligned, and evaluated with an artificial-star photometry method using MaxIm-DL. Ensemble photometry was performed with the selected comparison stars and the results were output as a numerical table.

A computational spreadsheet was used to analyze each set of observational data and fit the observations to a simplified trapezoidal transit model (Gary 2010). An atmospheric extinction curve was fit to each set of observations, compensating for the dimming or brightening of the imaged stars due to changing altitude and varying seeing conditions. Ingress time, egress time, transit depth, a general offset, and transit slope were varied to minimize χ²_N between the data and the model. Uncertainties for each parameter were calculated by individually varying the parameters to 1-sigma of the best-fit parameters.
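The fitting step might look roughly like the sketch below. This is not the spreadsheet actually used; the trapezoid parameterization (with an assumed fixed ingress/egress fraction) and the use of SciPy's Nelder-Mead minimizer are illustrative choices only.

import numpy as np
from scipy.optimize import minimize

def trapezoid_model(t, t_ingress, t_egress, depth, offset, slope):
    """Relative flux for a simplified trapezoidal transit: flat out of transit,
    linear ingress/egress ramps, and a flat bottom reduced by 'depth'.
    The ramp duration is assumed to be a fixed fraction of the transit length."""
    ramp = 0.15 * (t_egress - t_ingress)          # assumed ingress/egress duration
    flux = np.ones(len(t), dtype=float)
    in_transit = (t > t_ingress) & (t < t_egress)
    # Fraction of the full depth reached: 0 at the contact points, 1 on the flat bottom.
    frac = np.clip(np.minimum(t - t_ingress, t_egress - t) / ramp, 0.0, 1.0)
    flux[in_transit] -= depth * frac[in_transit]
    return offset + slope * t + flux              # residual air-mass trend as offset + slope

def fit_transit(t, f_obs, sigma, guess):
    """Minimize reduced chi-squared over (t_ingress, t_egress, depth, offset, slope)."""
    def red_chi2(p):
        model = trapezoid_model(t, *p)
        return np.sum(((f_obs - model) / sigma) ** 2) / (len(t) - len(p))
    return minimize(red_chi2, guess, method="Nelder-Mead")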

Figures 2 and 3 present the transit observations and derived transit parameters against the simplified model.

Figure 2. The light curve for the transit of HAT-P-13 b observed on March 2, 2011. The individual observations (+) are averaged into groups of 7 (circles with error bars). Tracking was observed to be poorer than that of XO-2, due to a periodic curve adjustment made before the observation. The observations were fit to a basic transit light curve model by minimizing the reduced χ²_N between the ensemble photometry results and the model (minimized to 1.39). Mid-transit time was calculated to be 5.21 ± 0.02 UT (the expected value based on Bakos et al. 2009 is 5.403 UT based on a period of 2.91626 days, so 11.5 ± 1.4 minutes early). Transit depth was calculated to be 10.05 ± 0.4 mmag (9.6 ± 1.0 mmag, Gary 2010). Total transit time was calculated to be 3.22 ± 0.05 hr (3.23 hr, Bakos). The lower line and data represent residuals.

Figure 3. The light curve for the transit of XO-2 b observed on February 20, 2011. The individual observations (+) are averaged into groups of 9 (orange circles with error bars). The observations were fit to a basic transit light curve model by minimizing the reduced chi-squared χ²_N between the ensemble photometry results and the model (minimized to 1.08). This resulted in a mid-transit time of 3.08 ± 0.01 UT (the expected value from Burke et al. 2007 is 3.051 UT based on a period of 2.61586 days, so 1.6 ± 1.2 min late). Transit depth was calculated to be 14.8 ± 0.48 mmag (14.2 mmag). The lower line and data represent residuals.

Conclusions The parameters obtained from the transit light curves compare well with the established literature values. As telescope tracking is improved, stochastic error and measurement scatter will be reduced, improving the precision of the observations and of the parameters derived from the light curves. As observations both in and out of transit are made of future targets, the data will be presented to TRESCA for inclusion in their database of extrasolar planet transits. More precision can be gained with the addition of a CCD camera with a larger CCD chip, as more comparison stars could be used for the ensemble photometry.

Acknowledgements The assistance of the following people is gratefully acknowledged: Ohio Space Grant, Dr. Gary Slater, the University of Cincinnati Department of Physics, Dr. Michael Sitko, Dr. Richard Gass, John Markus, Robert Schrott, Robert Harris, John Whitaker, Lincoln Bryant, Aaron Eiben, Bruce Gary, Dr. Arne Henden, Ray Gralak, Ted Agos, Tom Krajci, and Joe Garlitz.

References 1. Bakos, G. Á., Howard, A. W., Noyes, R. W., and 15 others 2009 The Astrophysical Journal, 707, 446 2. Borucki, W. J. & Summers, A. L. 1984, Icarus, 58, 121 3. Burke, Christopher J.,McCullough, P. R., Valenti, Jeff A., and 15 others 2007 The Astrophysical Journal 671, 2115 4. Gary, Bruce L., 2010 Exoplanets for Amateurs, Third Edition 5. Haswell, C. A., 2010 Transiting Exoplanets, Cambridge University Press 6. TRESCA, Transiting Exoplanets and Candidates, Czech Astronomical Society, Website, http://var2.astro.cz/EN/tresca/index.php 7. Winn, J.N. 2010 Exoplanet Transits and Occultations, Exoplanets (Seager, S. editor), 55

125 Biomechanical Investigation: Hip Fracture Repair and Removal of Hardware

Student Researcher: Michelle K. Fleming

Advisor: Dr. Hazel Marie

Youngstown State University Department of Mechanical Engineering

Abstract As more of the world’s population begins to live longer, more active lives, the prevalence of fractures due to injury and osteoporosis will increase. Due to an increase in the retirement age and the activity level of elderly people, the examination of hip fractures and their consequences has become very important and will continue to be of great importance over the next few years. When repairing hip fractures, simple screws, arthroplasty, dynamic hip screws, and intramedullary hip screws are often used as a means of repair. In any of these cases, hardware can become problematic, requiring the removal of the hardware and additional surgery related to that procedure. One dangerous consequence of hardware removal is refracture, leading to extensive recovery time and medical costs for patients and their families. My project will investigate some of the testing methods associated with determining solutions to minimize complications after the removal of hardware.

Project Objectives With the increase in hip fractures, and given that at least 50 percent of patients with a hip fracture will never live independently again (Schechner, 2010), the removal of hardware and subsequent refracture will only lead to increased medical care and rehabilitation costs and will also increase the already high mortality rate of elderly patients who fracture a hip. The current methods of hip fracture repair require surgery, hospitalization and inactivity that can lead to other illnesses, complications and death in addition to excessive medical costs. When patients receive any type of fracture repair that includes hardware, there are risks associated with the hardware. Hardware can cause illness in the body or, as in the focus of my project, can begin to push out of its place, causing excruciating pain to the bone and other tissue and impairing movement, depending on the location of the hardware.

This kind of pain can greatly affect the quality of life in addition to increasing Medicare costs, making the research that leads to a solution to this issue very important. It is not feasible to use live humans as a means to determine which method is the best method to treat a patient after removal of hardware; my project will focus on the development of testing methods and protocols that can be used to gather information to help improve treatment after removal of hardware.

Methodology In order to determine which type of repair after removal of hardware is best, information about the type of repairs used for hip fracture repair, current testing methods and protocols and modeling options were investigated.

It was determined that testing using an Instron tensile testing machine, together with SolidWorks, AutoCAD and finite element analysis, would be the best method of testing with the resources available at Youngstown State University. The experiment with the Instron machine was developed by YSU Mechanical Engineering graduate student Janet Gbur and supervised by Dr. Hazel Marie.

In order to complete the testing associated with the Instron machine, bones made of resin with a modulus of elasticity similar to real bone were obtained. A protocol was created to make the testing as uniform as possible, selecting controls, the type of repair that would be completed on the bones, and the type of testing that would be done to determine values of stress and strain for the loads applied at stress risers. Typical surgeries used for hip fracture repair were performed on sets of bones, and then different types of hardware removal were performed. The hardware removal ranged from complete to partial removal and also included the insertion of cement after

removal. This process was long and expensive, taking months to almost a year to obtain all testing materials and to work out how to handle testing problems not previously discussed in work by other researchers. There were many factors affecting the ability to complete the project, including the waiting process associated with surgery on the bones and possible error based on how meticulously the surgery was performed each time.

One of the most important aspects of the testing was the need to make the testing of each bone as uniform as possible so that results would not be altered by different measuring procedures, imperfections in the bone, locations of strain gages, or human error by the researchers conducting the experiment.

The other method that can be used to determine whether one type of repair after removal of hardware is better than another is through 2D and 3D modeling with finite element analysis. This is also a meticulous process, requiring the same measuring and investigation as the experimental process, and it is limited by the modeling skills of the person modeling the femur and screws. While working on this process it was discovered that modeling these elements in phases (basic cylinders, 90-degree angles, and then gradually increasing the angle of the femoral head) allowed for an improvement of the model each time. This allowed models to be tested using finite element analysis to see whether the insertion of cement worked better than just leaving holes in the bone. We also discovered that more anatomically accurate femur models can be purchased for finite element analysis testing, which makes the process more accurate but is much more expensive.

Results Obtained While determining which process was best to use for testing, it became clear that cost was our biggest limitation, so the best methodology depends primarily on budget and on what is sought from the results. For the most basic testing and information at the lowest cost, modeling done in multiple phases in 3D applications like SolidWorks and imported into finite element analysis allows the most basic information to be determined and validated against research from other scientists and articles. This method allowed the experiment to be duplicated at little cost but required a great deal of time, and it does not provide enough valuable information to grant a researcher permission to move to real-world testing. Purchasing an anatomically correct model is much more expensive but provides more support for researchers trying to get testing phases funded; it allows for very accurate and detailed information on stress with or without hardware or bone cement. The experimental testing phase, although by far the most interesting, did not provide results that were as uniform as expected. The manufacturing of the bone and any stress risers related to it (some models had to be returned due to cracking upon arrival) affected the results of the testing. The surgeries performed by the surgeon were not all uniform, and this is believed to have caused fracture during testing. With partial hardware removal, some of the hardware was not secured properly, which may have also affected results. This process allows for more investigation of different elements of the results obtained. Not only did we receive data about the types of repairs, but there was data to analyze about the surfaces, locations and types of fractures that occurred during this experiment. It also makes a good case for a protocol for human testing.

Significance of Results After looking at the most feasible ways of resolving removal-of-hardware complications, it has been determined that a combination of all three methods stated above, focusing on fewer types of repair at a time, is the best method for completing this research. The elements of this testing were too broad. In the experimental phase, too many types of repairs were tested at once. In the modeling phase, many versions of models were formed but did not cover all of the real-world factors because of the basic assumptions made. Each experiment needs to be more focused on one type of repair in the experimental and modeling phases and then analyzed before moving to the next type of repair, so that each element that affects the experiment can be considered.

127 Figures and Charts

References 1. Behrens, Bernd-Arno, Ingo Nolte, Patrick Wefstaedt, Christina Stukenborg-Colsman, and Anas Bouguecha. "Numerical Investigations on the Strain-adaptive Bone Remodeling in the Periprosthetic Femur: Influence of the Boundary Conditions." BioMedical Engineering OnLine 8.1 (2009): 7-16. 2. Lee, Clive. "Properties of Bone Cement:" The Well-Cemented Total Hip Arthroplasty. Berlin: Springer, 2005. Print. 3. Lee, Taeyong. "Predicting Failure Load of the Femur with Simulated Osteolytic Defects." Annals of Biomedical Engineering 35.4 (2007): 642-650. 4. Rudmann, K. E. "Compression or Tension? The Stress Distribution in the Proximal." BioMedical Engineering OnLine 5.12 (2006). 5. Schechner, Zvi, Luo Gangming, Jonathan J. Kaufman, and Robert S. Siffert. "A Poisson Process Model for Hip Fracture Risk." International Federation for Medical and Biological Engineering (2010): 1-13. 6. "WebMD." WebMD for Healthwise.com. Healthwise Incorporated, 27 May 2009. Web. 1 April 2011. .

128 Elastic Constants of Ultrasonic Additive Manufactured Al 3003-H18

Student Researcher: Daniel R. E. Foster

Advisor: Dr. S. Suresh Babu

The Ohio State University Department of Material Science and Engineering (Welding Engineering Program)

Abstract Ultrasonic Additive Manufacturing (UAM), also known as Ultrasonic Consolidation (UC), is a layered manufacturing process in which thin metal foils are ultrasonically bonded to a previously bonded foil substrate to create a net part. Optimization of process variables (amplitude, normal load and velocity) is done to minimize voids along the bonded interfaces. This work, however, pertains to the evaluation of bonds in UAM builds through ultrasonic testing of the builds’ elastic constants. Results from UAM parts indicate anisotropic elastic constants and a reduction of up to 48% in elastic constant values compared to a control sample. In addition, UAM parts are approximately transversely isotropic, with the two elastic constants in the plane of the Al foils having nearly the same value, while the properties normal to the foil direction have much lower values. The above reduction was attributed to interfacial voids. In contrast, the measurements from builds made with very high power ultrasonic additive manufacturing (VHP-UAM) showed a drastic improvement in elastic properties, approaching values similar to those of bulk aluminum.

Project Objectives Ultrasonic Additive Manufacturing (UAM), also known as Ultrasonic Consolidation (UC), is a new manufacturing process in which metallic parts are fabricated from metal foils. The process uses a rotating cylindrical sonotrode to produce high frequency (20 kHz), low amplitude (20 to 50 μm) mechanical vibrations that induce normal and shear forces at the interfaces between 150 µm thick metallic foils [1]. The large shear and normal forces are highly localized, breaking up any oxide films and surface contaminants on the material surface and allowing intimate metal-to-metal contact. As the ultrasonic consolidation process progresses, the static and oscillating shear forces cause elastic-plastic deformation. This deformation also leads to localized high temperatures through adiabatic heating. The presence of high temperatures may trigger recrystallization and atomic diffusion across the interface, leading to a completely solid-state bond [2]. This process is repeated, creating a layered manufacturing technique that continuously consolidates foil layers onto previously deposited material. After every few foil layers, CNC contour milling is used to create the desired part profile with high dimensional accuracy and appropriate surface finish [3].

Accurate values for elastic constants are needed to predict thermomechanical processes during a UAM build, as well as the final properties. Modeling of the UAM process has been done by Huang et al. [4] as well as Zhang et al. [5]. These authors assumed isotropic properties of UAM builds, similar to those of bulk metals, in their models. Assuming such a condition is indeed convenient, but it is unclear whether UAM builds satisfy this ideal condition. Hopkins et al. [6] and Hahnlen et al. [7] have shown that the strength of a UAM part depends on the orientation of the testing direction with respect to the build directions. The authors, however, did not report the elastic constants. It is quite conceivable that elastic constants also depend on material direction. Therefore, the current research focuses on the measurement of the elastic constants in the 3 material directions.

Methodology Used Ultrasonic Testing (UT) was used to evaluate material properties. An ultrasonic transducer is used to convert an electrical signal to an elastic wave that propagates through the test sample. By comparing the time-of-flight of the propagating wave against known reference materials, the elastic wave velocity of a material can be determined. The two types of elastic waves most widely used in UT are longitudinal (compression) and transverse (shear) waves. Longitudinal waves travel through materials as a series of compressions and dilations. Transverse wave propagation causes particle vibration transverse to the wave propagation direction. The phase velocities of both types of waves in a given material are dependent on the elastic properties of the materials being tested [8].

Measuring wave velocities in a material can be used to determine elastic constants. In Figure 1, side-to-side motions parallel to the incident plane indicate shear wave propagation into the plane. The motions normal to the plane indicate longitudinal wave propagation into the plane. The velocity of each wave is dependent on the material density and directional stiffness. Note the direction coordinate system in which axis-1 is the sonotrode rolling direction, axis-2 is transverse to the rolling direction (sonotrode vibration direction), and axis-3 is the build direction. The resulting shear and longitudinal wave velocities can then be used to find the elastic constants of the sample. From the stiffness matrix, the elastic compliance matrix can be calculated using Sij = (Cij)^-1 [9]. Knowledge of the stiffness and compliance matrices is important, as they are used to calculate stresses and strains (σi = Cij εj and εi = Sij σj) and the engineering constants G, E, and ν.
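As a minimal sketch of this calculation (not the authors' code), the snippet below converts measured wave velocities into stiffness constants through C = ρV² (the relation shown in Figure 1) and inverts an assembled stiffness matrix to obtain the compliance matrix. The density and velocity numbers, and the assumption that all off-diagonal terms equal C12, are illustrative assumptions rather than measured data from this study.

```python
import numpy as np

# Illustrative inputs (assumed values, not measurements from this study)
rho = 2700.0                                   # nominal density of Al 3003, kg/m^3
velocities = {"V33": 6.35e3, "V44": 3.11e3, "V55": 3.11e3}   # example wave speeds, m/s

# Diagonal stiffness constants follow C_ii = rho * V_ii^2 (the relation in Figure 1)
stiffness_gpa = {k.replace("V", "C"): rho * v**2 / 1e9 for k, v in velocities.items()}
print(stiffness_gpa)                           # roughly {'C33': 108.9, 'C44': 26.1, 'C55': 26.1}

# With a full 6x6 stiffness matrix C (Voigt notation), the compliance matrix is
# S = C^-1, and strains follow eps = S @ sigma.
C = np.diag([115.7, 112.6, 108.9, 26.1, 26.1, 25.2])      # control-sample diagonal, GPa
for i, j in [(0, 1), (0, 2), (1, 2)]:                     # off-diagonals assumed equal to C12
    C[i, j] = C[j, i] = 62.2
S = np.linalg.inv(C)                                      # compliance matrix, 1/GPa
sigma = np.array([0.010, 0, 0, 0, 0, 0])                  # 10 MPa uniaxial stress, in GPa
eps = S @ sigma                                           # resulting strain components
```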

The process parameters used to create UAM samples with 37%, 59% and 65% bonded areas are listed in Table 1. All builds were made with a 149 ºC preheat of the substrate. Consolidation for these samples was carried out on the Beta Ultrasonic Consolidation System created by Solidica Inc., which is capable of applying up to 2.2 kN of normal force and 26 μm of vibrational amplitude. The consolidation of each layer was performed using a tacking pass followed by a welding pass on each metallic foil. A single tape width build was used to construct the 37% and 65% bonded area builds by continuously depositing one foil on top of the previously deposited foil. The 59% bonded area samples used the same process parameters as the 37% build, except the layering pattern was changed to the "brick wall" layering sequence. In this building sequence, 2 adjacent foil layers were deposited, followed by 3 adjacent foil layers on top. This pattern is repeated to construct the build. Care was taken so that samples used from the "brick wall" build did not contain foil edges.

The 98% bonded area sample was made using the Very High Powered Ultrasonic Additive Manufacturing (VHP UAM) System produced by Edison Welding Institute and Solidica Inc. This higher-powered ultrasonic consolidation system is capable of applying up to 45 kN of normal force and 52 µm of vibrational amplitude. The additional vibrational amplitude and normal force induce a greater amount of plastic deformation at the faying interfaces compared to the Beta system, leading to an increase in bonded area. The VHP UAM system was used to construct a single tape width UAM build without a preheat or tacking pass. The VHP UAM system used is a prototype and is not fully automated, in contrast to the fully automated Beta system. Due to the difficulties of manually aligning and laying foil, the VHP UAM sample height was limited; therefore, only V33, V44, and V55 measurements could be obtained.

It was uncertain whether bond quality varied throughout the depth of UAM builds. Variations in bonding throughout a build's height could have an effect on the accuracy of velocity measurements. For that reason it was important to quantify how much variation was occurring. To investigate this, a step sample was constructed with step heights increasing from 2.5 mm to 30 mm. To create this step build, a 152 mm by 25 mm by 30 mm 59% bonded area UAM block was created in which steps of increasing height were cut using EDM. Wave velocity was measured along axis-3 of each step. Changes in measured velocity at different steps would be an indication that the bonded area varied throughout the depth.

It was decided to use "percentage of bonded area" instead of "linear weld density" (LWD) to evaluate the amount of bonding at the welding interface. The LWD measurements use a cross-section of a UAM build, as seen in Figure 2a. By comparing the length of voids at the interfaces to the total length of the interface, the amount of bonding (LWD) can be measured. This method of evaluation can be subjective, with adjacent interfaces displaying a range of bonded values [6]. "Percentage of bonded area" measurements evaluate interface bonding by surveying fracture surfaces (Figure 2b). Bonded areas are deformed and have greater heights than unbonded areas. Because this type of measurement has been shown to be repeatable and accurate, it was chosen as the primary technique used to determine the amount of interfacial bonding. To measure percent bonded area, high contrast images of the fracture surface were taken with a Meijer optical microscope. Fiji image processing software [8] was then used to determine a suitable contrast threshold at which bonded areas would be highlighted due to their darker appearance (Figure 2c). Once an appropriate threshold was found, the area fraction of the highlighted sections was used as the "percent bonded area" measurement. The VHP UAM sample could not be constructed to a suitable height to be fractured in order to obtain a percentage of bonded area measurement; consequently, LWD was measured and assumed to be the same as percentage of bonded area for that sample.
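A minimal sketch of the same area-fraction idea is shown below, assuming a grayscale fracture-surface image is already available as a NumPy array; this is not the authors' Fiji workflow, and the threshold value is an illustrative assumption.

```python
import numpy as np

def percent_bonded_area(gray_image, threshold):
    """Fraction of pixels darker than `threshold`, taken here as bonded area.

    gray_image : 2D NumPy array of intensities (0 = dark, 255 = bright)
    threshold  : intensity below which a pixel is counted as bonded
    """
    bonded = gray_image < threshold          # bonded regions appear darker
    return 100.0 * bonded.sum() / bonded.size

# Illustrative use with synthetic data (a real image would be loaded instead)
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(480, 640))
print(f"Percent bonded area: {percent_bonded_area(img, threshold=100):.1f}%")
```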

Results Obtained and Interpretation of Results Characterization of Step Build Measured velocities from the step build are displayed in Figure 3a. Neither longitudinal (V33) nor shear (V55) velocity measurements on the step build show significant change with step height. This observation suggests that the amount of bonded area (and therefore stiffness) does not vary significantly with build height. The minimum UAM thickness needed to get accurate measurements is 5 mm, which is consistent with recommendations from ASTM standard E494 [9]. V33 could be measured up to 23 mm before scattering of the ultrasonic waves due to the voids prevented velocity measurements, while V55 could be measured at all step heights above 5 mm.

Characterization of Control Sample The foils used for UAM have an H18 temper designation. Under the H18 designation, the initially annealed aluminum was cold rolled to a 75% reduction in thickness. Cold rolling can affect material properties, causing mechanical properties to differ in each material direction. To ensure that the stiffness of the UAM samples was not simply due to this initial condition, an Al 3003-H14 "control sample" was ultrasonically tested. Although the H14 temper is not cold worked to the degree of the H18 temper (50% thickness reduction compared to 75%), it can provide stiffness values that are close to those of Al 3003-H18 foil. Measurements from the control sample could also be used to rule out the possibility that observed stiffness changes in UAM builds were due to the initial rolled state of the foil.

Al 3003-H14-control sample had slightly higher elastic constant values compared to literature values for Al 3003 [10] for all elastic constants except C66 (Table 2a). The largest difference was in C11, C22, C33, and C12 which had changes of 12%, 9%, 6% and 19%. C44, C55, and C66 differed from literature values by a maximum of ±3%. The slightly larger constants obtained for C44, C55, and C66 are within acceptable experimental error (±4%), but comparison for the C11, C22, C33 and C12 cases show significantly higher stiffness constants even when measurement errors are considered. Overall, cold rolling Al 3003 has a slight stiffening effect on the material.

Characterization of UAM Samples Elastic constants for UAM samples with 37%, 59% and 65% bonded areas are compared to the Al 3003-H14 control sample in Table 2 b-d. In all cases the UAM material had lower elastic constant values than the control material. The disparity between UAM samples and the control sample decreased as the percentage of bonded area increased.

The difference between the control sample and UAM samples in Tables 2 b-d is especially large for C12, C33, C44 and C55. In the case of the 37% bonded area sample, each of these constants displayed a reduction between 31-51% compared to the control sample. The magnitude of the measured elastic constants increased with an increase in bonded area. For example, in the case of 59% bonded area there was a reduction of 23-37% vs. control, and in the case of 65% bonded area there was a reduction in elastic constants between 10-28% vs. control.

Constants C11, C22, and C66 also had a reduction in value that was dependent on percentage of bonded area, but their reduction was not as extensive as that of C12, C33, C44, and C55. At 37% bonded area, the UAM samples had a 1-20% difference in C11, C22 and C66 compared to the control. As the percentage of bonded area increased for these constants, the discrepancy between UAM and control results reduced with a maximum of 14% reduction in the case of 59% bonded area and 16% in the case of 65% bonded area.

The elastic constants in the VHP UAM sample were not significantly affected by the presence of voids, due to its 98% bonded area, and therefore had properties similar to those of the control sample. There was a 0.5% difference in C33 and a 7% difference in C44 and C55 compared to the control, as seen in Table 2e. Although the elastic constants in the other material directions could not be tested, it is likely that those properties would be close to those of the control sample. This is because material properties in the axis-1 and axis-2 directions are less severely affected by voids at the interfaces than in the axis-3 direction. Since the VHP UAM sample demonstrates that the properties in the axis-3 direction are similar to those of the control, the properties in the axis-1 and axis-2 directions are likely also close to those of the control. These results indicate that the multidirectional stiffness of the VHP UAM sample material is likely very close to that of the Al tapes used to construct the sample.

Stiffness values for C33, C44 and C55 from Table 2 b-d are plotted in Figure 3b. Effective stiffness decreases linearly with a decrease in percentage of bonded area, and the data from VHP UAM support this observation. As a result of the linear relationship between percentage of bonded area and stiffness, the bond quality of UAM components can be determined non-destructively. By measuring longitudinal or shear wave velocity and relating that value to a curve such as Figure 3c, the bond quality of a UAM part can be determined. The linear relationship between wave velocity and percentage of bonded area is of great importance because it allows UAM parts to be characterized on the shop floor in minutes, right after consolidation, without the need for time-consuming cutting, polishing and optical microscopy.
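As a minimal illustration of this non-destructive idea (an assumed sketch, not the authors' calibration), a linear fit of stiffness against percentage of bonded area can be inverted to estimate bond quality from a new stiffness measurement derived from wave velocity. The C33 values below come from Table 2; the 90 GPa query value is illustrative.

```python
import numpy as np

# Approximate C33 values from Table 2 versus percentage of bonded area
bonded_area = np.array([37.0, 59.0, 65.0, 98.0])      # %
c33 = np.array([53.3, 68.8, 78.2, 109.2])             # GPa

slope, intercept = np.polyfit(bonded_area, c33, 1)     # least-squares calibration line

def estimate_bonded_area(c33_measured):
    """Invert the calibration line to estimate % bonded area from C33 (GPa)."""
    return (c33_measured - intercept) / slope

# Example: a shop-floor measurement of C33 = 90 GPa (illustrative value)
print(f"Estimated bonded area: {estimate_bonded_area(90.0):.0f}%")
```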

LWD and percentage of bonded area are compared in Table 1. LWD measurements indicate a much higher degree of bonding than the percentage of bonded area measurements. The 37% bonded area sample had a 75% LWD, the 59% bonded area sample had a 91% LWD, and the 65% bonded area sample had a 91% LWD. Stiffness vs. LWD for C33, C44 and C55 is plotted in Figure 3d. The resulting linear trend between stiffness and LWD is much poorer than that between stiffness and percentage of bonded area in Figure 3b. Plotting stiffness vs. LWD resulted in a linear trend with large deviations from the trend line, while the stiffness vs. percentage of bonded area plot followed the linear trend much more closely. This is an indication that the percentage of bonded area measurement technique is more consistent and accurate than LWD in determining bond quality in UAM components.

It is hypothesized that the change in material stiffness is due to the presence of voids at the welding interface. These void volumes are filled with no matrix material and thus have negligible mass and strength. As a result, when the material is loaded, the bulk foil portion of the UAM part yields a small amount, while the interface region under the same load will yield excessively. This is because the load bearing cross-sectional area at the interface is smaller due to the presence of the voids for a given load (Figure 4). The combined loading response from the bulk foils and interface region results in an overall greater yielding of the part. This phenomenon creates a component with an effective stiffness that is lower than the foils used to construct it.

The yielding of a UAM part is analogous to springs in parallel and in series. C33, C44 and C55 relate to the stiffness of the part when the partially bonded interfaces are in the load path, while C11, C22 and C66 relate to the material stiffness when the interfacial region is parallel to the load path. When spring elements are in series, as in the case of C33, C44 and C55 (strains in the 33, 13, 31, 23, or 32 directions), all elements are in the load path and experience the same load but displace by different amounts based on their stiffness. The lower-stiffness interfacial regions yield excessively under the load while the bulk foil yields as expected, resulting in parts with lower effective stiffness in those directions. When spring elements are in parallel, as in the case of C11, C22 and C66 (strains in the 11, 22, 12, or 21 directions), the interfacial regions as well as the bulk foil regions displace by the same amount; however, most of the load is carried by the bulk regions of the foil and not by the partially bonded region. This is why there is a smaller stiffness reduction in the axis-1 and axis-2 material directions compared to the axis-3 direction.
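The spring analogy can be written down directly; the sketch below (with arbitrary, illustrative stiffness numbers rather than measurements from this study) shows how a compliant interface lowers the series combination far more than the parallel one.

```python
def series_stiffness(k_foil, k_interface):
    # Springs in series share the load; compliances (1/k) add.
    return 1.0 / (1.0 / k_foil + 1.0 / k_interface)

def parallel_stiffness(k_foil, k_interface, foil_fraction=0.9):
    # Springs in parallel share the displacement; stiffnesses add,
    # weighted here by an assumed volume fraction of bulk foil.
    return foil_fraction * k_foil + (1.0 - foil_fraction) * k_interface

k_foil, k_interface = 100.0, 20.0   # arbitrary units, interface much softer than foil
print(series_stiffness(k_foil, k_interface))     # ~16.7: large reduction (axis-3-like)
print(parallel_stiffness(k_foil, k_interface))   # ~92.0: small reduction (axis-1/2-like)
```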

The stiffness components in the 1 and 2 directions are very similar, demonstrating that the material is approximately transversely isotropic. C11 and C22, along with C44 and C55, in each case have values that are within 3.5% of each other. This indicates an axis of symmetry and that the material can be approximated as transversely isotropic. For transversely isotropic materials, an additional elastic constant can be calculated using the relation

C12 = C11 - 2*C66  (1)

Calculations of C12 are listed in Table 2 b-e. Ongoing work is focusing on computational simulation of such preferred deformation by prescribing position-dependent properties.

This discovery of lower effective stiffness and transverse isotropy is of great importance. Accurate elastic constants are needed to model the UAM process as well as to adopt UAM parts in general engineering design. For example, modeling the lateral displacement of a large UAM build due to the shear forces caused by the sonotrode itself has been considered important. This tendency has been used to rationalize the process's inability to make UAM builds above a critical height. Modeling such phenomena would need an accurate shear modulus in the transverse direction, which is C44 (C44 = G23). If one were to compare stress results using the 37% bonded area G23 (18.1 GPa) to the isotropic G (25.9 GPa), there would be an error of 31%. An error of this size could lead to simulation results that do not accurately describe the response to loads that UAM parts experience.

Conclusions Parts made by ultrasonic additive manufacturing were evaluated with ultrasonic testing. Wave velocity, percentage of bonded area, and material stiffness do not change significantly with build height; neither shear nor longitudinal wave velocity showed a significant change with build height.

The effective stiffness of Al 3003-H18 UAM parts was reduced due to the presence of voids. When a load is applied, the interfacial welded regions that contain the voids yield more than the bulk foil regions, resulting in the overall part being less stiff than the aluminum used to construct it. The reduction in stiffness components can be as high as 50% in the axis-3 direction, while it is only up to 18% in the axis-1 and axis-2 directions. Using percentage of bonded area is more accurate than using Linear Weld Density in determining bond quality, as it more closely follows a linear trend with stiffness.

Al 3003-H18 UAM components are approximately transversely isotropic. Material properties in the axis-1 and axis-2 material directions are approximately the same (a maximum of 3.5% difference), while the material properties in the axis-3 direction are much lower. Properties measured in samples made by VHP UAM were close to those of the bulk material. The cold-worked state of the Al 3003 foils used in UAM is not the cause of the stiffness reduction in Al 3003 UAM parts; cold working Al 3003 increases elastic constants by as much as 10% in the H14 state. Wave velocity measurements can be used as a non-destructive test to evaluate the bond quality of UAM builds.

Figures and Charts

Figure 1. Ultrasonic velocity relation to elastic constants and material direction. Each measured wave velocity relates to the corresponding stiffness constant and the material density through Vii = sqrt(Cii/ρ) (shown for V11, V22, V33, V44, V55 and V66), with axis-1 the sonotrode rolling direction, axis-2 the transverse direction, and axis-3 the sheet-normal (build) direction.

Figure 2. Cross-section of UAM build (a), UAM fracture surface (b), image threshold measurement using Fiji (c)

Figure 3. Wave velocity vs. build height for the step build (a), stiffness vs. percentage of bonded area (b), wave velocity vs. percentage of bonded area (c), stiffness vs. LWD (d)

Figure 4. Schematic of UAM response to loading: a UAM structure of Al foils separated by partially bonded interfaces, shown with no load and under a load F, with excessive yielding in the interfacial regions.

Table 1. UAM process parameters with respective bond quality

Sample | % bonded area | Linear Weld Density | Tacking pass (Force N / Amplitude µm / Speed mm/s) | Welding pass (Force N / Amplitude µm / Speed mm/s)
1 | 37% | 75% | 200 / 9 / 51 | 1000 / 26 / 42
2 | 59% | 91% | 200 / 9 / 51 | 1000 / 26 / 42
3 | 65% | 91% | 350 / 12 / 33 | 1000 / 25 / 28
4 | 98% | 98% | Not used | 5500 / 26 / 35.5

Table 2. Elastic constant comparison of control and UAM builds (values in GPa, ±5%)

(a) Literature values for isotropic Al 3003 vs. UT testing of Al 3003-H14
Constant | Literature Al 3003 | Al 3003-H14 | Difference
C11 | 102 | 115.7 | 12%
C22 | 102 | 112.6 | 9%
C33 | 102 | 108.9 | 6%
C44 | 25.9 | 26.1 | 1%
C55 | 25.9 | 26.1 | 1%
C66 | 25.9 | 25.2 | -3%
C12 | 50.2 | 62.2 | 19%

(b) Al 3003-H14 vs. 37% bonded area UAM sample
Constant | Al 3003-H14 | 37% UAM | Difference
C11 | 115.7 | 92 | -20%
C22 | 112.6 | 94.6 | -16%
C33 | 108.9 | 53.3 | -51%
C44 | 26.1 | 18.1 | -31%
C55 | 26.1 | 18.1 | -31%
C66 | 25.2 | 25 | -1%
C12 | 62.2 | 41.9 | -33%

(c) Al 3003-H14 vs. 59% bonded area UAM sample
Constant | Al 3003-H14 | 59% UAM | Difference
C11 | 115.7 | 99.5 | -14%
C22 | 112.6 | 100.2 | -11%
C33 | 108.9 | 68.8 | -37%
C44 | 26.1 | 19.9 | -24%
C55 | 26.1 | 20.6 | -21%
C66 | 25.2 | 25.8 | 2%
C12 | 62.2 | 47.9 | -23%

(d) Al 3003-H14 vs. 65% bonded area UAM sample
Constant | Al 3003-H14 | 65% UAM | Difference
C11 | 115.7 | 96.7 | -16%
C22 | 112.6 | 99.5 | -12%
C33 | 108.9 | 78.2 | -28%
C44 | 26.1 | 23.4 | -10%
C55 | 26.1 | 23.1 | -11%
C66 | 25.2 | 25 | -1%
C12 | 62.2 | 47.7 | -23%

(e) Al 3003-H14 vs. 98% bonded area VHP UAM sample
Constant | Al 3003-H14 | VHP UAM | Difference
C33 | 108.7 | 109.2 | 0.5%
C44 | 26.1 | 28.1 | 7%
C55 | 26.1 | 28.1 | 7%

Acknowledgments The authors would like to thank Dr. Marcelo Dapino, Christopher Hopkins, Ryan Hahnlen, and Sriraman Ramanujam of Ohio State University, as well as Matt Short and Karl Graff of Edison Welding Institute (EWI), for their input on the project. Financial support for this research was provided by the Ohio Space Grant Consortium (OSGC) and Ohio's Third Frontier Wright Project. This research is currently under review in Elsevier's peer-reviewed journal Ultrasonics.

References
1. D. R. White, Ultrasonic Consolidation of Aluminum Tooling, Advanced Materials and Processes, 161 (2003) 64-65.
2. R. L. O'Brien, Ultrasonic Welding, in: Welding Handbook, 1991, pp. 783-812.
3. G. D. Janaki Ram, C. Robinson, Y. Yang, B. E. Stucker, Use of Ultrasonic Consolidation for Fabrication of Multi-Material Structures, Rapid Prototyping Journal, 13 (2007) 226-235.
4. C. J. Huang, E. Ghassemieh, 3D Coupled Thermomechanical Finite Element Analysis of Ultrasonic Consolidation, Material Science Forum, 539-543 (2007) 2651-2656.
5. C. S. Zhang, L. Li, Effect of Substrate Dimensions on Dynamics of Ultrasonic Consolidation, Ultrasonics, 50 (2010) 811-823.
6. C. D. Hopkins, Development and Characterization of Optimum Process Parameters for Metallic Composites made by Ultrasonic Consolidation, in: Mechanical Engineering, The Ohio State University, Columbus, 2010.
7. R. M. Hahnlen, Development and Characterization of NiTi Joining Methods and Metal Matrix Composite Transducers With Embedded NiTi by Ultrasonic Consolidation, in: Mechanical Engineering, The Ohio State University, Columbus, 2010.
8. W. Rasband, J. Schindelin, A. Cardona, FIJI, pacific.mpi-cbg.de, 2010.
9. ASTM E494-10, Standard Practice for Measuring Ultrasonic Velocity in Materials, ASTM International, 2001.
10. J. W. Bray, ASM Handbooks Online, 2 (1990) 29-61.

Synthesis and Characterization of Polymer Electrolyte Material for High Temperature Fuel Cells

Student Researcher: Kaitlin M. Fries

Advisor: Dr. Vladimir Benin

University of Dayton Department of Chemistry

Abstract Poly [2-phospho-p-phenylene-bis(benzimidazole)] (PPBI) polymer was successfully synthesized by direct polymerization, using the monomers 2-phosphonoterephthalic acid and 3,3'-diaminobenzidine tetrahydrochloride. Techniques employed to confirm the chemical structures of both the monomer and the polymer included melting point and NMR. The thermal properties were characterized by TGA. In the future, this membrane has the potential to be used as the PEM material for fuel cell applications.

Project Objectives PBI membranes can be widely beneficial in fuel cell applications, allowing cells to be operated at temperatures up to 200 ºC with no water management necessary. In the past, the high fuel permeability and limited temperature capabilities of conventional polymer electrolytes have prevented such applications. Additionally, conventional membrane electrolytes are often based on costly perfluorinated polymers, making commercialization difficult. Nafion has been the material of choice in the past, but its conductivity depends strongly on high water content. This limits the operating temperature in practical systems to about 80 ºC, because the electrolyte will otherwise dry out and lose conductivity. It is understood that high temperature operation is generally hampered by three significant drawbacks: (1) loss of hydration of the PEM and the concomitant increase in membrane resistance; (2) polymer membrane degradation, in some cases above 120 ºC; and (3) lack of intermediate proton conductors in the range of 100-400 ºC with a unique proton 'solvation' species supporting conduction in that regime. This has had significant bearing on the direction of high temperature PEM research. The main objective of this work is the investigation and development of PBI polymer membrane electrolytes for high temperature PEM fuel cells. These membranes are being designed to decrease fuel permeability by modification of the transport mechanism within the membrane. A significant aspect of this study is also to address, via PBI structure design, the issue of acid leaching that is detrimental to fuel cell operation at high temperatures, especially in the case of PBI-phosphoric acid systems.

Results & Discussion Preparation of 2-Phosphonoterephthalic Acid The synthesis of 2-phosphonoterephthalic acid was conducted according to Branion and Benin's procedure, as shown in Scheme 1.

Scheme 1. Synthesis of 2-Phosphonoterephthalic Acid

(Reagents and conditions shown in the scheme: Mg/THF, reflux; KMnO4, water/t-butanol; HCl.)

Structure confirmation was obtained through melting point and NMR. The melting point was found to be 297 ºC, similar to the literature value of 298 ºC. NMR data were collected using DMSO as the solvent.

Synthesis of Poly [2-phospho-p-phenylene-bis(benzimidazole)] The synthesis of Poly [2-phospho-p-phenylene-bis(benzimidazole)] (PPBI) was conducted according to the following reaction scheme.

Scheme 2. Synthesis of Poly [2-phospho-p-phenylene-bis(benzimidazole)] (PPBI)
(Condensation of 2-phosphonoterephthalic acid with 3,3'-diaminobenzidine tetrahydrochloride in PPA at 180°, 77% yield.)

The viscosity of the polymer produced was determined using methanesulfonic acid as the solvent. The intrinsic viscosity of PPBI was observed to be 4.5 mg/dL (see Figure 2). A large intrinsic viscosity indicates a high molecular weight polymer: the higher the molecular weight, the larger the molecule and the more solvent molecules it will block, making the solution more viscous. Thermogravimetric analysis (TGA) data were also collected in both nitrogen and air in order to determine the temperature at which polymer decomposition occurs (see Figure 3). The onset of decomposition occurs around 550 ºC in air.

Conclusion
• 2-Phosphonoterephthalic acid was successfully prepared.
• Poly [2-phospho-p-phenylene-bis(benzimidazole)] was successfully synthesized via a condensation reaction.
• The synthesized polybenzimidazole polymer can withstand temperatures over 500 ºC before decomposition occurs. This characteristic makes it highly favorable for use in fuel cell membranes.
• In order to understand this material's full application potential, it must be cast into a film so that proton conductivity and mechanical properties can be studied. Current attempts at casting this film have yet to produce results, most likely because the polymer was not entirely in solution due to too little dissolution time. The solubility of PPBI depends on many parameters, such as time, solvent, and polymer behavior, which require multiple fine-tuning attempts to produce a film. In addition to increasing the polymer's dissolution time, it is possible that methanesulfonic acid may not be an ideal solvent for this particular polymer; different solvents affect film-forming ability differently, and the introduction of acid side chains may not be compatible with the use of MSA as a solvent.
• While TGA data show that this material can withstand high temperatures, further research must be done to characterize the polymer's behavior before it can be used for practical applications.

Figures

Figure 1. NMR data for 2-Phosphonoterephthalic Acid

Figure 2. Solution viscosity graph for Poly [2-phospho-p-phenylene-bis(benzimidazole)]

Figure 3. Thermal Stability of PPBI in air vs. nitrogen

Acknowledgments The author of this paper would like to thank the Materials and Manufacturing Directorate at Wright Patterson Air Force Base for supplying the laboratory supplies and equipment necessary to conduct this research. The author would also like to thank Southwestern Ohio Council for Higher Education (SOCHE) for funding this project. Lastly, the author would like to thank Dr. Vladimir Benin and Dr. Thuy Dang for their guidance and support throughout this study.

References
1. Barbir, F., and Gómez, T. (1997). Efficiency and economics of proton exchange membrane (PEM) fuel cells. International Journal of Hydrogen Energy, 22(10-11), 1027-1037.
2. Branion, S., & Benin, V. (2006). Preparation of some substituted terephthalic acids. Synthetic Communications, 36(15), 2121-2127.
3. Watkins, D. S., in: L. J. M. J. Blomen, M. N. Mugerwa (Eds.). (1993). Fuel cell systems. (p. 493). New York.
4. Fontanella, J., Wintersgill, M., Wainright, J., Savinell, R., & Litt, M. (1998). High pressure electrical conductivity studies of acid doped polybenzimidazole. Electrochimica Acta, 43(10-11), 1289-1294.
5. Hasiotis, C., Qingfeng, L., Deimede, V., Kallitsis, J., Kontoyannis, C., and Bjerrum, N. (2001). Development and characterization of acid-doped polybenzimidazole/sulfonated polysulfone blend polymer electrolytes for fuel cells. Journal of the Electrochemical Society, 148, A513.

Digital Image Correlation (DIC / Digic)

Student Researcher: Thomas M. Gambone, II

Advisor: Dr. Joan Carletta

The University of Akron Department of Electrical and Computer Engineering

Abstract Working with my advisor Dr. Carletta, in conjunction with electrical engineering graduate students, I became involved in an ongoing research project for developing an embedded machine-vision-based displacement sensing system. The intent of the project was to broaden my experience working with embedded computer systems, Linux / UNIX system programming, and machine vision systems and algorithms. My part in the research project was to aid in the hardware and software implementation of the sensing system on an embedded computer. Additionally, I was involved with the implementation and optimization of various machine vision algorithms required to allow for the detection of particle or pixel displacement in a stream of images of a material test specimen or of a moving marked target (such as painted markings on a model, or an actual bridge). The overall project was a coordinated effort between the electrical and civil engineering departments at the University of Akron. The end result of the research and engineering work I did with Dr. Carletta and her graduate students will be an embedded image displacement sensing system, possibly specialized for one or the other of the aforementioned applications, but ideally capable of being generalized to be reused in any application where a dedicated vision-based displacement sensor is required.

Project Objectives The end objective of the project was to develop a standalone vision-based displacement sensing system. Beyond that broad goal, more granular and concrete tasks were needed to justify the project for independent study credit, as well as to provide a better way to assess whether or not those parts of the project were completed. As such, my actual work on this research project consisted of managing and maintaining the Linux OS and application software for our embedded computers, and improving existing digital image correlation code initially written by another research team member. Finally, to reinforce the independent study element of the research project, I conducted experiments with near-infrared lighting and filters to see whether such lighting techniques would prove beneficial or detrimental to our algorithm's performance.

Discussion Having worked with the Digital Image Correlation (DIC) research team since the beginning of Summer 2010, I have learned a great deal in the course of nearly a year. I gained much insight into what it is like to participate in collaborative research, and I also learned what sort of deliverables are expected of a research project and of the researchers. Overall, my experiences with the Digital Image Correlation research team have taught me many valuable lessons and have motivated me to pursue a research-based career. In addition to aiding in my career direction, I have also gained much confidence in myself and in my competence as an engineer.

The first objective of the project was to become acquainted with the hardware available, to push it to its maximum capability, and then to evaluate whether this would be acceptable for continued use in the project. By available hardware, I am referring to an embedded single-board computer called the Fox Board LX832, a now-obsolete product marketed by Acme Systems Ltd. This computer was based on an AXIS CRIS-architecture processor. The board ran a very light embedded Linux distribution; as such, the processor was only capable of running relatively few service applications and had very little performance left for our intensive algorithm after accounting for the overhead of the OS and the minimum set of service applications. Quantitatively, when a benchmark test of our algorithm was run on the board, it took 32 seconds to run our most optimized code at the time, compared to the mere milliseconds it took on an average student laptop. After obtaining this result, as well as realizing other hardware limitations with regard to the camera input capabilities, it was decided to move to a newer model of the Fox Board.

The latest board in use by the DIC research team is the Fox Board G20. This board was, and is, a vast improvement over the older Fox Board. Equipped with an ARM9 processor, the board is capable of running the full, or very close to full, ARM port of the Debian Linux distribution. This meant many good things. For one, the principal Linux expert / manager, Tom Gambone (me), was already heavily experienced with that distribution. Additionally, having a more capable processor allowed for on-target compilation, which, compared to the difficulties of training and education about cross-compilation required for the previous board, was a grand improvement to the development process. Finally, the camera drivers which came with the distribution actually worked rather well; as such, the process of developing functional interface code became much easier. Unfortunately, even this board did not have everything our project required. Mainly, the way the board was designed, specifically its implementation of USB 2.0, introduced a bottleneck which will likely eliminate it as a candidate for continued use as the research progresses. Having reached this conclusion, it was decided that for the time being the board would continue to be used, but for testing purposes only, and that when the time came for a more serious and final implementation, another board candidate would be acquired. At the time of this writing, the expected candidate is a Beagle Board xM; equipped with a multimedia-optimized processor and interfaces implemented to specification, this board should be capable of doing everything the Fox Board can do and more.

Moving on, in addition to hardware selection concerns and decisions, I was also actively involved in software development and maintenance. A fellow team member, Shilpa Kunchum, was initially tasked with writing program code in Matlab, and then C, which could calculate displacement of features within test images. As an electrical engineering student, Shilpa was not a die-hard programmer; as such, her code was strictly functional. My contribution to the project in this respect was to clean up and analyze the code she produced to further improve our project. As of this writing, I have taken the initial functional code and transformed it into significantly more readable and optimized C code. As always, there is still room for improvement, but at the very least, the code as I leave it will certainly be reusable and understandable. We have yet to do benchmark tests on the current Fox Board target; however, using a relative benchmark comparing the run time of our original code against that of the current code, there is a clear improvement.
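As a minimal sketch of the kind of displacement calculation described above (a generic normalized cross-correlation search written for illustration, not the team's actual C code), one can locate a small template patch from a reference image inside a later frame and report the pixel shift:

```python
import numpy as np

def find_displacement(template, search_window):
    """Return the (row, col) location of `template` inside `search_window`
    using normalized cross-correlation (NCC) over all valid positions."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    best_score, best_shift = -np.inf, (0, 0)
    for r in range(search_window.shape[0] - th + 1):
        for c in range(search_window.shape[1] - tw + 1):
            patch = search_window[r:r + th, c:c + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            score = (t * p).mean()            # NCC score in [-1, 1]
            if score > best_score:
                best_score, best_shift = score, (r, c)
    return best_shift, best_score

# Illustrative use with synthetic images: shift a frame by (3, 5) pixels
rng = np.random.default_rng(1)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, shift=(3, 5), axis=(0, 1))
shift, score = find_displacement(frame0[20:36, 20:36], frame1)
print(shift)   # location of the patch in frame1; displacement = shift - (20, 20)
```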

Finally, the latest part of the project I was responsible for involved experiments with near-infrared lighting. For this I ordered a selection of near-infrared light emitting diodes (LEDs) with a wavelength of 940 nm, along with an assortment of LED flashlights, the plan being to replace the white LEDs in the flashlights with the IR ones. As of this writing, I have built a test light on prototyping board to allow for the rapid production of test images. In addition to obtaining and building IR light sources, I also performed some modifications to a duplicate of our main computer vision camera (a Logitech QuickCam Webcam C600 2MP camera). The main modification was the removal of the camera's factory-installed IR-block filter and its replacement with a self-made visible-light-block, IR-pass filter. The latter was accomplished by having an overexposed roll of 35mm film developed, then layering the film and affixing it in front of the camera lens. At the time of this writing, no conclusive results have been obtained from running benchmark correlation programs on images with visible-light and IR components versus images with only IR components. Two supporting reasons for the continued use of IR-only lighting are that doing so would allow for better optical isolation and highlighting of our end target specimen, and that there are plans to attempt to switch our camera into a mode in which it sends RAW, or Bayer, image data. The latter means that our camera would send direct sensor data before some or all of the post-processing hardware in the camera's SoC ASIC. Switching to this option is in the works because it would reduce the incoming data by 66%, and the data omitted is suspected not to be relevant for our purposes. Hence, less data would immediately translate to better performance with respect to our algorithm.

Conclusion / Results As mentioned before, due to the vast scope and duration of this project, my responsibilities were limited. However, even with these limits, I gained ample experience and insight into embedded Linux development, machine vision, and researching in general. To summarize my results, I succeeded in appropriately deciding where to direct our computer hardware requirements and research. I proved valuable in the sense that I am a legitimate programmer, due to my core skill set and desire, as well as the abundant training and experience I have accumulated over the years as a computer engineering major and intern. Finally, I demonstrated my ability to creatively and logically find solutions to problems through my efforts to encourage, and perform, experiments with near-IR lighting.

Robust 3D Pose Estimation of Articulating Rigid Bodies

Student Researcher: Adam R. Gerlach

Advisor: Dr. Bruce Walker

University of Cincinnati Department of Aerospace Systems

Abstract The Defense Advanced Research Projects Agency (DARPA) is currently developing a new class of servicer spacecraft to perform autonomous rendezvous and docking of target spacecraft. These spacecraft are characterized by heightened levels of autonomy in both orbital and close proximity maneuvering, and by unparalleled situational awareness through the use of 3D vision and real-time pose estimation. To be successful, these spacecraft require technical advances in path planning, compliance control, machine vision, and real-time pose estimation. The spin-image pose estimation algorithm, along with recent performance enhancements introduced by Gerlach, has been shown as a robust method for real-time pose estimation. Unfortunately, the spin-image algorithm is limited to only estimating the pose of rigid bodies.

This research extends a variation of the spin-image algorithm called the c*-image algorithm, introduced by Gerlach, towards novel approaches for real-time pose estimation of articulated bodies. Such capabilities are required for estimating the pose of spacecraft with articulating surfaces such as solar panels, the individual joint-angle pose of robotic manipulators, or even 3D medical image registration.

Project Objectives Pose estimation is the process of determining the position and orientation of a solid, three-dimensional object relative to an observer. Humans naturally perform pose estimation on a regular basis. When someone picks up a pencil or tries to parallel park an automobile, they are using pose estimation. In the first case, they must determine the relative position and orientation of the pencil in order to properly orient their hand and fingers to pick it up. To initiate a proper trajectory to park successfully, they must determine the relative positions and orientations of surrounding objects, like other parked cars and the curb, and then plan the trajectory to be followed to avoid these objects, continuously using pose estimation to track and correct the trajectory. In these two cases, the pose is determined using visual sensing, i.e. stereo vision, and tactile feedback. However, pose can be also derived from audio, radar, and other measurements that provide relative position and orientation information. These simple examples demonstrate the significant role pose estimation plays in a human’s ability to interact with its environment, whether that environment is static or dynamic.

Many pose estimation algorithms exist in the literature, but the spin-image pose estimation algorithm provides the most accurate results [Planitz 2005] by determining surface point correspondences while being robust to sensor noise and placing no surface restrictions on the object of interest, other than it must be pose-distinct (i.e. unlike a sphere). In practice, the requirement for pose-distinctness is rarely an issue because the objects of interest are almost never spheres (although an extension to the spin-image algorithm that utilizes the color texture of the object is capable of estimating pose of non-pose-distinct objects with pose-distinct color patterns [Brusco 2005]). The spin-image algorithm also assumes that the object of interest is a rigid body.

One of the fundamental assumptions employed when doing surface point correspondence for pose estimation is that the object of interest only undergoes rigid transformations. This assumption greatly simplifies the problem of pose estimation by reducing the degrees of freedom of the transformation from a countably infinite number to six. This assumption is valid for many applications. However, in many practical applications, this assumption is not true. Examples include estimating the pose of an articulating robotic manipulator or of a spacecraft with articulated solar panels. In these examples, the objects of interest are oriented such that a rigid transformation describes their pose, but features on the objects themselves also undergo changes in their relative orientations. Fortunately, for many of these applications, the motions of the features of the objects relative to the objects themselves are rigid-body motions and are typically constrained to fewer than six degrees of freedom.

The primary objective of this project is to extend the use of the spin-image algorithm towards articulating rigid bodies. This extends the practical uses of the spin-image pose estimation algorithm to a larger class of objects, while also enabling the estimation of the relative poses between the articulating bodies.

Methodology Used There are many 3D pose estimation algorithms based upon measured 3D surface geometry. They are sometimes referred to as surface matching or surface registration algorithms. A majority of these algorithms rely on surface correspondence to estimate pose. Surface correspondence is the process of identifying the same surface point on different representations of a surface. It is the basis for the fundamental differences between pose estimation algorithms because established algorithms [Horn 1987, Thompson 1958, Schut 1960, Oswal 1968] exist for computing pose from a list of surface point correspondences. Pose estimation algorithms that rely on surface correspondence to estimate pose, like the spin-image pose estimation algorithm, require a reference surface model M, called the model, reference, or truth model, and an observed surface model S derived from some measurement of the 3D surface of interest, called the scene.

Corresponding surface points on different surface representations can be thought of as fixed points in space described relative to different 3D coordinate systems, as illustrated in Figure 1. These surface representations can be derived from computer-generated surface models or 3D surface scans of real-world objects.

Figure 1. Surface Points Described in Different Coordinate Systems (from [Horn 1987])

The Spin-Image [Johnson 1997, Johnson and Hebert 1997, 1998, 1999] is a method of representing the local topography of an arbitrary surface as a 2D histogram that is invariant to rigid transformations. The similarities of scene and model spin-images allow for the generation of plausible scene-model point correspondences that can then be used to generate pose estimates.

A spin-image Pi for a specified mesh vertex oi is constructed by calculating the parallel distances β and the perpendicular distances α relative to the surface normal vector ni at oi of all neighboring surface points xi within a specified maximum distance. Bilinear interpolation is used to accumulate these distances in a 2D histogram. Bilinear interpolation spreads the accumulations to neighboring points in the 2D histogram, making it less susceptible to measurement noise and mesh errors. This is illustrated in Figure 2.
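A minimal sketch of this construction is given below. It uses simple nearest-bin accumulation rather than the bilinear interpolation the authors describe, and all array names, bin sizes, and test data are illustrative assumptions.

```python
import numpy as np

def spin_image(o, n, points, bin_size, image_width):
    """Accumulate a 2D spin-image histogram for an oriented point (o, n).

    o, n        : 3-vectors, vertex position and unit surface normal
    points      : (N, 3) array of neighboring surface points
    bin_size    : edge length of one histogram bin
    image_width : number of bins along each axis
    """
    d = points - o
    beta = d @ n                                                      # signed distance along the normal
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta**2, 0.0))  # radial distance from the normal axis
    img = np.zeros((image_width, image_width))
    i = (0.5 * image_width - beta / bin_size).astype(int)             # rows: larger beta toward the top
    j = (alpha / bin_size).astype(int)                                # cols: alpha away from the axis
    keep = (i >= 0) & (i < image_width) & (j >= 0) & (j < image_width)
    np.add.at(img, (i[keep], j[keep]), 1.0)                           # nearest-bin accumulation
    return img

# Illustrative use: a small random patch of points around the origin
rng = np.random.default_rng(0)
pts = rng.normal(scale=0.05, size=(500, 3))
P = spin_image(np.zeros(3), np.array([0.0, 0.0, 1.0]), pts, bin_size=0.01, image_width=20)
```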

Figure 2. The Creation of a Spin-Image from a Region (from [Planitz 2005])

Point correspondences are determined by first generating a spin-image for every vertex of the reference model (collectively called the spin-image stack), and for a randomly selected subset of vertices in the scene model. The similarity measure C(P, Q) between the spin-images from a point in the scene P and from a point in the model Q is calculated using:

(1)

(2)

where pi and qi are the values of the i-th bins of P and Q, respectively, N is the total number of bins with non-zero values in both P and Q, and λ is a weighting parameter. Eq. (1) and Eq. (2) show that the similarity measure C is defined using a modified normalized correlation coefficient (NCC) R(P,Q) (in Eq. (1)) which only takes into account bins of P and Q that are non-zero in both P and Q. By only using the non-zero bins in both P and Q, the modified NCC is robust to sensor noise, object occlusions, and scene clutter. However, the modified NCC can erroneously report a high correlation between spin-images with little overlap. Through the change of variables performed by the hyperbolic arctangent (in Eq. (2)), the variance of the transformed modified NCC becomes a simple function of the number of overlapping bins in the two spin-images, N. Thus the similarity measure provides a method for weighing both the correlation between two spin-images and the confidence in the calculated correlation value.
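A minimal sketch of this similarity measure is shown below, written following the standard Johnson and Hebert formulation that the text describes; the exact form of Eqs. (1) and (2), including the 1/(N-3) variance term, is assumed here rather than reproduced from the paper.

```python
import numpy as np

def spin_image_similarity(P, Q, lam=3.0):
    """Similarity C(P, Q) between two spin-images (2D histograms).

    Uses only bins that are non-zero in both images, computes the normalized
    correlation coefficient R over those bins, and penalizes small overlaps
    through the variance of atanh(R), taken here as 1/(N-3).
    """
    p, q = P.ravel(), Q.ravel()
    mask = (p != 0) & (q != 0)          # overlapping non-zero bins only
    N = int(mask.sum())
    if N < 4:                           # too little overlap to be meaningful
        return -np.inf
    p, q = p[mask], q[mask]
    num = N * np.sum(p * q) - p.sum() * q.sum()
    den = np.sqrt((N * np.sum(p * p) - p.sum() ** 2) *
                  (N * np.sum(q * q) - q.sum() ** 2)) + 1e-12
    R = np.clip(num / den, -0.999999, 0.999999)
    return np.arctanh(R) ** 2 - lam * (1.0 / (N - 3))
```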

Possible point correspondences are then found by using matched points from the scene and model with the largest similarity measures. After a series of correspondence filtering and grouping procedures, which reduce the multiple possible point correspondences to a single correspondence, the pose is estimated by using a closed form solution [Horn 1987] for determining the rigid transformation that minimizes the L2 norm between the registered models.
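For the final step, a closed-form rigid alignment from point correspondences can be sketched as follows. This uses the SVD-based (Kabsch) solution rather than the quaternion formulation of Horn cited by the authors, but it minimizes the same L2 registration error; the test rotation and point set are illustrative.

```python
import numpy as np

def rigid_transform(model_pts, scene_pts):
    """Least-squares rotation R and translation t with scene ~ R @ model + t.

    model_pts, scene_pts : (N, 3) arrays of corresponding points.
    """
    mc, sc = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (model_pts - mc).T @ (scene_pts - sc)     # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = sc - R @ mc
    return R, t

# Illustrative check: recover a known rotation about the z-axis plus a translation
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
model = np.random.default_rng(2).random((50, 3))
scene = model @ R_true.T + np.array([0.1, -0.2, 0.3])
R_est, t_est = rigid_transform(model, scene)
print(np.allclose(R_est, R_true))   # True
```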

To summarize, the spin-image algorithm can be completely described by the following operations:
1. Spin-Image Generation
2. Spin-Image Matching
3. Correspondence Filtering
4. Spin-Image Grouping
5. Pose Determination

c*-image The c*-image was developed by Gerlach [Gerlach 2010] as a reduced-order non-linear mapping of a spin-image to provide an efficient method for accelerating the calculation of surface point correspondences. While evaluating the performance benefits of c*-images, it was observed that c*-images not only accelerate the algorithm, but also provide a highly efficient method for identifying features of interest on an object's surface. Figure 3a shows a simple spacecraft with its features of interest circled. Figure 3b has a color scale representing the magnitude of the c*-image for each surface point. Notice that the c*-image algorithm is effective in identifying not only the features of interest but also all the components of the spacecraft, i.e. the nozzle, base, sidewalls, and pillars. Because of the c*-image's ability to efficiently segment features on the surface of an object, pose estimation of articulating rigid bodies is a natural extension of its practical uses.

Figure 3. (a) Simple Spacecraft Model with Features of Interest Circled, (b) Feature Segmentation using the c*-image Algorithm


To compute a c*-image from a spin-image, spin-image signatures of dimension n [Assfalg 2007] must first be computed for the spin-image stack; this is called the signature stack. A spin-image signature is a reduced-order representation of the spin-image composed of the summation of the bins in the spin-image within n independent regions, normalized by the total sum of all the bins in the spin-image. These independent regions are defined as sectors of (+) crowns (cp), sectors of (-) crowns (cn), and circular sectors (s), as shown in Figure 4. The computation of the spin-image signatures for each independent region is defined by Equations 3-5, respectively, where I(i,j) is row i and column j of the spin-image I, and np, nm, and ns are the number of positive crowns, negative crowns, and circular sectors, respectively.

Figure 4. Spin-image Signature: (a) (+) Crowns, (b) (-) Crowns, and (c) Circular Sectors (from [Assfalg 2007])

(3)

(4)

(5)
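Since Equations 3-5 amount to region-wise bin sums divided by the total bin sum, a minimal sketch of a signature computation is given below; the particular way the regions are indexed here (equal-width bands) is an illustrative assumption, not Assfalg's exact crown and sector partitioning.

```python
import numpy as np

def spin_image_signature(spin_img, n_regions=8):
    """Reduced-order signature: per-region bin sums over the total bin sum.

    Here the regions are simply equal-width column bands of the spin-image;
    Assfalg's crowns and circular sectors partition the image differently.
    """
    total = spin_img.sum()
    if total == 0:
        return np.zeros(n_regions)
    bands = np.array_split(spin_img, n_regions, axis=1)   # one band per region
    return np.array([band.sum() for band in bands]) / total

sig = spin_image_signature(np.random.default_rng(3).random((20, 20)))
print(sig.sum())   # signatures are normalized, so the elements sum to 1
```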

The optimal number of fuzzy cluster centers, c*, is determined for the signature stack using the validity index [Kim 2001], and the fuzzy c-means clustering algorithm [Bezdek 1984]. The clustering algorithm not only produces the location of the c* cluster centers, but also the degree of membership to each cluster of each spin-image signature in the signature stack. The degree of membership in each cluster can then be used to characterize an individual spin-image, and this is called the c*-image. The c*-image representation of the signature stack is known as the c*-stack. If c* < n, the degree of membership in each cluster is a further reduction in the order of the spin-image representation over that of the spin-image signature.
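A minimal sketch of the clustering step follows: a bare-bones fuzzy c-means with a fixed number of clusters, in which each row of the membership matrix plays the role of a c*-image. The validity-index search for the optimal c* and the software the authors used are not reproduced here, and the signature data are synthetic stand-ins.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Return cluster centers (c, d) and memberships U (N, c) for data X (N, d)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]                 # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1)), axis=2)
    return centers, U

# Each row of U is a c*-image: the signature's degree of membership in each cluster
signatures = np.random.default_rng(4).random((200, 8))   # stand-in signature stack
centers, c_star_stack = fuzzy_c_means(signatures, c=4)
print(c_star_stack[0], c_star_stack[0].sum())            # memberships sum to ~1
```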

Once a c*-image is calculated, it can be checked against those in the c*-stack by evaluating the distance between them by means of the normalized sum square error, l:

(6)

where mj* is the j-th c*-image in the c*-stack and s* is the scene c*-image. Because the sum of all the individual elements of a c*-image equals unity and the elements are always non-negative (both properties of fuzzy clusters), the maximum error between two c*-images occurs when they each have maximum membership to different cluster centers.
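The exact normalization in Eq. (6) is not reproduced here; a hedged sketch is given below, dividing the sum of squared differences by its largest possible value of 2 (attained when two c*-images have full membership in different clusters), which is one reasonable reading of "normalized sum square error."

```python
import numpy as np

def c_star_error(m_star, s_star):
    """Normalized sum square error between two c*-images (membership vectors).

    Both vectors are non-negative and sum to 1, so the squared difference is
    at most 2; dividing by 2 maps the error into [0, 1]. The exact
    normalization used in the paper's Eq. (6) may differ.
    """
    diff = np.asarray(m_star, dtype=float) - np.asarray(s_star, dtype=float)
    return float(diff @ diff) / 2.0

print(c_star_error([1, 0, 0], [0, 1, 0]))   # 1.0, maximally dissimilar
print(c_star_error([1, 0, 0], [1, 0, 0]))   # 0.0, identical
```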

Results Obtained Research in automatic pose estimation of articulating rigid bodies using the c*-image is still in its early stages of development for a robotic manipulator and a spacecraft with articulating solar cells (Figure 5). Much work must still be performed in order to find a general solution for all articulating rigid bodies.

Figure 5. 3D Models of (a) Mitsubishi PA-10 7 DOF Robotic Manipulator, (b) Spacecraft

Preliminary research indicates that the following use of the c*-image may lead to a general algorithm for estimating the pose of articulating bodies.

Consider the spacecraft with articulated solar panels discussed above. A user must first interactively select the surface points belonging to each individual articulating body. In this case, these would be each solar panel considered separately (red and blue in Figure 6a) and the spacecraft body (green in Figure 6a). Spin- and c*-stacks are then computed for each articulating body as if they were standalone objects, like the spacecraft body and solar panel in Figures 6b and 6c. The average of all the c*-images in each c*-stack then becomes the representative c*-image, Ci*, for the articulating body belonging to that c*-stack.

Figure 6. (a) Spacecraft Model with Articulating Bodies Highlighted, (b) Spacecraft Body, (c) Solar Panel

To estimate pose, a 3D scene model is captured, a c*-image for each surface point is computed, and the normalized sum squared error, lj, is computed relative to Ci* for i = 1…n and j = 1…m, where n is the number of articulating bodies and m is the number of scene surface points. lj is then compared against the preselected thresholds ti*. If lj > ti*, then surface point j is flagged as not belonging to articulating body i. For the scenario when lj < t1…n*, surface point j is considered to belong to all of the articulated bodies. The problem of pose estimation of articulated rigid bodies is then treated as pose estimation of n independent rigid bodies, but now surface points marked as not belonging to the i-th articulated body do not contribute to the spin-images used in estimating that object's pose. This procedure effectively uses the c*-image to segment the object prior to estimating pose.
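A minimal sketch of this labeling step is shown below, under the reading above that a small error relative to a body's representative c*-image means the point is kept for that body; the threshold values, array names, and synthetic memberships are illustrative assumptions.

```python
import numpy as np

def segment_points(scene_c_star, representative_c_star, thresholds):
    """Boolean mask (m, n): True if scene point j is kept for articulating body i.

    scene_c_star          : (m, c) c*-images of the scene surface points
    representative_c_star : (n, c) average c*-image Ci* per articulating body
    thresholds            : (n,) error thresholds ti*, one per body
    """
    diff = scene_c_star[:, None, :] - representative_c_star[None, :, :]
    err = np.sum(diff ** 2, axis=2) / 2.0            # normalized sum square error l_j
    return err <= np.asarray(thresholds)[None, :]    # l_j > t_i* means excluded from body i

# Illustrative use with random memberships for 3 bodies and 100 scene points
rng = np.random.default_rng(5)
labels = segment_points(rng.dirichlet(np.ones(4), size=100),
                        rng.dirichlet(np.ones(4), size=3),
                        thresholds=[0.2, 0.2, 0.2])
print(labels.shape, labels.any(axis=1).mean())   # (100, 3), fraction assigned to any body
```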

Even though a substantial amount of effort has been put forth to develop a general algorithm for 3D pose estimation of articulating rigid bodies, the performance of the algorithm has yet to be verified. A majority of the time spent on this project has been dedicated to porting and optimizing the author's spin-image and c*-image software library from MATLAB [MATLAB] to C++. MATLAB's inability to render and visibly manipulate large 3D models, as well as its slow computational performance, proved to be limiting factors in the author's ability to estimate pose of articulating rigid bodies and to visualize results. Currently, the spin-image algorithm, as well as advanced methods, has been implemented and tested in C++ using Qt [Qt] and the Visualization ToolKit (VTK) [VTK]. The c*-image and the articulating-body algorithm described above must still be implemented.

Significance and Interpretation of Results The proposed solution for estimating pose of articulating rigid bodies is unique in that it utilizes information computed in the pose estimation process (the spin-image) to segment the individual bodies of the object. Plus, the c*-image not only provides an effective method for segmenting those bodies, but it also reduces the computation load required in estimating pose, thus reducing the overall time required as shown by Gerlach [Gerlach 2010].

One natural aspect of articulating rigid bodies that is currently ignored in this proposed algorithm is the fact that the relative degrees of freedom of each rigid body are typically constrained to fewer than six. For the spacecraft example, once the pose is estimated for the spacecraft body, information regarding the relative location of each solar panel to the spacecraft body, as well as the fact that the solar panels have only rotational degrees of freedom about a known point, could be utilized. Considering this information may drastically reduce the computational effort required in estimating the pose of articulated bodies.

Due to the substantial effort required to port the author's spin-image and c*-image software library from MATLAB to C++, results for estimating the pose of articulating bodies have yet to be achieved. However, the new C++ library has proven to be significantly faster (100x) and more reliable than its MATLAB counterpart. This, together with advanced visualization algorithms, will prove to be a valuable asset in understanding the internals of the proposed pose estimation algorithm.

References
1. Assfalg, J., Bertini, M., and Bimbo, A. D., “Content-Based Retrieval of 3-D Objects Using Spin Image Signatures”, IEEE Trans. on Multimedia, Vol. 9, No. 3, 2007, pp. 589-599.
2. Bezdek, J. C., Ehrlich, R., Full, W., “FCM: The Fuzzy c-Means Clustering Algorithm,” Computers & Geosciences, Vol. 10, No. 2-3, 1984, pp. 191-203.
3. Brusco, M., Andreetto, M., Giorgi, A., and Cortelazzo, G. M., "3D Registration by Textured Spin-Images," Proc. of the 5th Inter. Conf. on 3-D Digital Imaging and Modeling, 2005, pp. 262-269.
4. Gerlach, A., “Performance Enhancements of the Spin-Image Pose Estimation Algorithm,” Master’s Thesis, University of Cincinnati, OH, 2010.
5. Horn, B., "Closed-Form Solution of Absolute Orientation Using Unit Quaternions," Journal of the Optical Society of America, Vol. 4, No. 4, 1987, pp. 629-642.
6. Johnson, A. E., “Spin-Images: A Representation for 3-D Surface Matching”, Ph.D. Dissertation, Robotics, Carnegie Mellon University, PA, 1997.
7. Johnson, A. E., Hebert, M., "Surface Registration by Matching Oriented Points," International Conf. on Recent Advances in 3-D Digital Imaging and Modeling, Ottawa, 1997, pp. 121-128.
8. Johnson, A. E., Hebert, M., "Surface Matching for Object Recognition in Complex 3D Scenes," Image and Vision Computing, Vol. 16, 1998, pp. 635-651.
9. Johnson, A. E., Hebert, M., "Using Spin-Images for Efficient Object Recognition in Cluttered 3D Scenes," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 21, No. 5, 1999, pp. 433-449.
10. Kim, D. J., Park, Y. W., and Park, D. J., "A Novel Validity Index for Determination of the Optimal Number of Clusters," IEICE Trans. on Information and Systems, Vol. E84-D, No. 2, 2001, pp. 281-285.
11. MATLAB, Software Package, Ver. R2008b, MathWorks, URL: http://www.mathworks.com/

148
12. Oswal, H. L., and Balasubramanian, S., "An exact solution of absolute orientation," Photogrammetric Engineering, Vol. 34, 1968, pp. 1079-1083.
13. Planitz, B. M., Maeder, A. J., and Williams, J. A., "The Correspondence Framework for 3D Surface Matching Algorithms," Computer Vision and Image Understanding, Vol. 97, No. 3, 2005, pp. 347-383.
14. Qt, Software Package, Ver. 4.7, Nokia Corporation, URL: http://qt.nokia.com/
15. Schut, G. H., "On Exact Linear Equations for the Computation of the Rotational Elements of Absolute Orientation," Photogrammetria, Vol. 16, 1960, pp. 34-37.
16. Thompson, E. H., "A Method for the Construction of Orthogonal Matrices," The Photogrammetric Record, Vol. 3, No. 13, 1958, pp. 55-59.
17. VTK, Software Package, Ver. 5.9.0, Kitware Inc., URL: http://www.vtk.org/

149 Jet Acoustics: The Effects of Forced Mixers and Chevrons

Student Researcher: Danielle L. Grage

Advisor: Dr. Ephraim Gutmark

University of Cincinnati Department of Aerospace Engineering Abstract There is a growing concern within the aviation industry to reduce the perceived noise from engine operation. One method of mitigating the sound from jet engines involves using chevrons at the exit of the engine to control the development of coherent structures by increasing the mixing between the core and fan streams. Forced mixing devices have also been successfully used over the past several decades to increase the mixing between the core and bypass flows and effectively decrease the overall noise signature resulting from jet exhaust. This report first provides an overview of the lessons learned from forced mixers and then recounts the experimentation conducted at the University of Cincinnati’s Aeroacoustic Test Facility using four nozzle configurations: a confluent nozzle, a 12-lobe low-penetration (LP) nozzle, an 8-lobe LP nozzle, and an 8-lobe high-penetration (HP) nozzle. The resulting acoustic spectra showed that the addition of chevrons provided a low frequency benefit and a high frequency penalty. While all three chevrons provided a net acoustic benefit, the 8HP nozzle provided the greatest reduction in Overall Sound Pressure Level (OASPL). It was also noted that while penetration level has a strong effect on the acoustic performance of a chevron, the number of lobes did not have a significant impact.

Project Objectives Despite over half a century of research in flow generated noise, the prediction of jet noise from complex configurations continues to be a problem within the aviation industry. In 2003, Wright et al. went as far as to say “little is known about the actual mechanism of this turbulent noise generation”6. Computational techniques are able to predict noise from round jets, but the “techniques and the computational power required to implement them have not yet been developed to the point of usefulness for design applications. Conversely, empirical techniques are easily implemented but are often disappointing in their reliability”6. Tester and Fisher have commented that “in spite of the fundamental understanding of the link between the turbulent flow and far-field noise provided by Lighthill some fifty years ago, the prediction of jet noise on purely theoretical grounds remains somewhat elusive”5. This lack of accurate noise prediction methods for the complex geometries applicable to modern jet engines forces engine companies to design, build, and perform relatively expensive, iterative experimental tests to determine the acoustic performance of a design2.

While the specific mechanisms behind noise creation and propagation are not yet fully understood, it is well accepted that the source of noise is related to the shear interactions and the local Turbulence Kinetic Energy (TKE) of the flow. As a result, the significant sources of noise within a flow occur where coherent structures, also called ‘organized structures’, are present. The most desirable tool, therefore, would be a correlation between some geometric design feature or flow parameter and the acoustic performance of a configuration. The aim of this experimentation, therefore, is to seek such a defining parameter and understand its effect on the noise of the jet. The two parameters that will be studied as a part of this research are chevron penetration level and number of lobes.

Methodology As was discussed before, it is generally accepted that the main sources of noise within a flow come from areas of high turbulent kinetic energy (TKE). Therefore it logically follows that a device capable of lowering the TKE of a flowfield and limiting the formation of large-scale coherent structures would have the potential to improve the acoustic performance of a flowfield. Forced, or lobed, mixers feature multiple tubes or chutes to divide the inner core jet into several individual smaller jets, which “results in enhanced mixing of the high-velocity inner flow with the surrounding outer bypass stream and a reduction in the length of the jet plume”1.

150 Several studies have already been completed with the goal of trying to understand how various mixer shapes affect jet noise. In 1998 Pinker and Strange completed a study that looked at both a confluent mixer and a forced mixer in static and flight conditions4. From the results shown in Figure 1a, it is clear that the lobed mixer provides a low frequency benefit at the cost of increased high frequency noise. It can also be noted that the Sound Pressure Level (SPL) generated by the flow is lower in flight than at the static condition. Another such study, conducted by Wright et al. in 2003, looked at two lobed mixer designs with varying penetration, where the penetration of a mixer is defined as “the radial distance from the peak of the upper lobe to the valley of the lower lobe”6. A comparison of the sound pressure levels (SPL) for the low and high penetration mixers is shown in Figure 1b. The higher penetration mixer had better performance at low frequencies but was found to produce more sound at higher frequencies.

With these published results in mind, experimentation was conducted at the University of Cincinnati’s anechoic Aeroacoustic Test Facility, shown in Figure 2a, which features a dual-flow coaxial nozzle, seen in Figure 2b. The four chevron designs that were tested are depicted in Figure 3: a confluent nozzle, a 12- lobe low-penetration (LP) nozzle, an 8-lobe LP nozzle, and an 8-lobe high-penetration (HP) nozzle. The acoustic signature of these nozzles was measured using a microphone array that covered arc angles from 70 degrees to 170 degrees.

Significance and Interpretation of Results After recording all of the data, the performance of the nozzles was evaluated, and the frequency spectra of the four configurations are compared in Figure 4. There is a large overall decrease in SPL at the lower frequencies, and the 8HP displays the best performance of the tested configurations. Each of the chevrons, in addition to the low frequency benefits, also causes a high frequency penalty, or increase in SPL, similar to the forced mixers. However, of the three configurations, the 8LP has the lowest penalty. Based on the prior understanding of coherent structures and their association with jet noise, it can be concluded that the higher the penetration of the chevron, the more intense the resulting mixing. This increased shear-layer mixing causes the generation of additional high-frequency noise near the exit, resulting in a diminished acoustic benefit.

The effect of the number of chevrons is shown as well. For low frequencies the 12LP nozzle behaves most similarly to the 8LP nozzle; however, at high frequencies the 12LP produces a benefit more similar to that of the 8HP. This is a strong indicator that the effect of chevron count becomes more important at higher frequencies. However, the overall effect of lobe count is significantly less than the penetration effect.

The previous results concerned the sound pressure level (SPL) associated with the various chevron geometries. It is also important to consider the effect these chevrons have on the overall sound pressure level, or OASPL, shown in Figure 5a. The OASPL is useful for identifying the “raw physical effects of the various chevron nozzle parameters on the jet acoustics” 1. It is clear, in Figure 5b, that despite the high frequency penalty there is still an overall reduction in OASPL for all angles relative to the baseline nozzle. It should be noted, however, that while all three chevron designs considered in this research provided a net acoustic benefit “the chevron penetration could be increased to a point where the high frequency noise degradation outweighed the low frequency benefits, resulting in an overall acoustic penalty”3.
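As a point of reference for how the OASPL values in Figure 5 relate to the spectra in Figure 4, the short sketch below (in Python) shows the standard energy summation of band SPLs into a single overall level; the band values used are made-up illustration numbers, not measurements from this study.

import math

def oaspl(spl_bands_db):
    """Energy-sum a list of band sound pressure levels (dB) into one overall level (dB)."""
    return 10.0 * math.log10(sum(10.0 ** (spl / 10.0) for spl in spl_bands_db))

print(oaspl([85.0, 88.0, 90.0, 87.0, 82.0]))   # ~94.2 dB; the loudest bands dominate the total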

Overall, the 8HP chevron is the most effective at reducing the jet noise. Finally, the chevrons are most effective at the aft angles, above 130 degrees, which “indicates that the chevrons are particularly effective at suppressing the noise generated by the large-scale coherent structures” and results in the peak OASPL shifting forward in terms of directivity1.

These findings are consistent with the understanding within the field of jet noise that lower frequencies primarily propagate to aft angles while higher frequencies tend to be biased toward forward angles. That is, since the spectra showed the chevrons to have the greatest benefit at lower frequencies, it makes sense that the OASPL decrease would be greatest at the aft angles. The overall decrease in OASPL also indicates that the chevrons are effective in modifying the jet plume and, particularly, the size or location of the noise-generating mechanisms.

151 Figures and Tables

(a) (b) Figure 1. Acoustic results of forced mixing for (a) flight and static conditions3 (b) varying penetration5.

(a) (b) Figure 2. University of Cincinnati Aeroacoustic Test Facility (a) overhead schematic (b) dual-flow nozzle

Figure 3. Test hardware: confluent, 12-lobe low-penetration (LP), 8-lobe LP, 8-lobe high-penetration (HP).

152

Figure 4. Sound Pressure Level (SPL) spectrum for the four tested nozzle configurations1.

(a) (b) Figure 5. Overall SPL (OASPL) for (a) all nozzle configurations (b) chevrons relative to baseline nozzle1.

Acknowledgments The author would like to thank Dr. Ephraim Gutmark for his oversight, guidance, and support throughout the completion of this project. Also, the author greatly appreciated the help of Dr. Jeff Kastner in providing technical feedback and for providing helpful references and resources on the subject of aeroacoustics. Finally, the author would like to thank Dr. Muni Majjigi, Dr. Richard Cedar, and Dr. Denis Lynch for their feedback and direction.

References
1. Callender, B., Gutmark, E., and Martens, S. (2005). Far-Field Acoustic Investigation into Chevron Nozzle Mechanisms and Trends. AIAA Journal, 43 (1), 87-95.
2. Garrison, L., Lyrintzis, A., Blaisdell, G., and Dalton, W. (2005). Computational Fluid Dynamics Analysis of Jets with Internal Forced Mixers. AIAA/CEAS Aeroacoustics Conference. Monterey, California: AIAA 2005-2887.

153
3. Harrison, S., Rask, A., Gutmark, E., Martens, S., and Wojno, J. (2009). Jet Noise Reduction by Fluidicly Enhanced Chevrons On Separate Flow Exhaust Systems. AIAA Aerospace Sciences Meeting. Orlando, FL: AIAA 2009-850.
4. Pinker, R., and Strange, P. (1998). The Noise Benefits of Forced Mixing. AIAA-1998-2256.
5. Tester, B., and Fisher, M. (2004). A contribution to the understanding and prediction of jet noise generation in forced mixers. AIAA/CEAS Aeroacoustics Conference. AIAA 2004-2897.
6. Wright, C., Blaisdell, G., and Lyrintzis, A. (2003). The Effects of Various Mixer Shapes on Jet Noise. Aeroacoustics Conference and Exhibit. Hilton Head, South Carolina: AIAA 2003-3251.

154 Biophysical and Biochemical Characterization of Microrna Profiling of Breast Cancer Cell-Secreted Microvesicles

Student Researcher: Nicole D. Guzman

Advisor: Michael E. Paulaitis

The Ohio State University Department of Chemical and Biomolecular Engineering

Abstract The development of minimally invasive clinical biomarkers for the detection and monitoring of human cancers would greatly reduce the worldwide health burden of this disease. To date, none of the biomarkers recommended by the American Society of Clinical Oncology can accurately predict the risk of breast cancer metastasis development or a response to advanced treatment. Consequently, factors such as disease-free interval, previous therapy, site of disease and number of metastatic sites are used to monitor the impact of treatment on patients with metastatic breast disease. Therefore, the development of clinically validated metastatic detection and prediction markers remains an unmet challenge. Currently, the search for easily accessible, sensitive and reliable biomarkers that can be sampled from body fluids such as serum or urine is ongoing. The recent discovery of stable levels of microRNAs (miRs) in the bloodstream has motivated research to elucidate their potential as early detection cancer biomarkers. Extracellular miRs travel the body protected from degradation within cell-secreted microvesicles that range in size from 40 to 1000 nanometers. Exosomes, a specific subpopulation of microvesicles, have been extensively studied and are known to contain miRs implicated in tumor development and progression. We have characterized microvesicle morphology and size distribution using cryo-TEM and dynamic light scattering. From cryo-TEM we can conclude that microvesicles are spherical in shape, their surface is covered with glycoproteins, and they contain electron dense material that is released when they burst. A clear bimodal distribution is observed for all 3 cell lines, indicating that the size range of exosomes is between 40 and 180 nm. In addition, we have selectively captured microvesicles using an anti-EGFR antibody microarray and have profiled the miR content of the vesicles and of their cells of origin. Results indicate that certain miRs are selectively shuttled under cancerous states.

Project Objectives MicroRNAs (miRs) are a recently discovered class of small non-coding RNAs of approximately 22 nucleotides in length, which post-transcriptionally regulate gene expression and have been found to play a critical role in many homeostatic and pathological processes. Recent studies have systematically analyzed miR expression in cancer tumors and showed that these tumors exhibit distinct miR signatures compared to normal tissues. These discoveries have motivated efforts to devise miR expression profiling technologies for the early diagnosis, prognosis and response to treatment of many human cancers. Current technologies to isolate and analyze miRs are limited to tissue biopsies and include a myriad of assays which are time consuming, labor intensive, semi-quantitative and prohibitively expensive for routine clinical application. However, the breakthrough observations that miRs are found circulating within a multitude of physiological fluids, including serum, urine and saliva, have bolstered studies which directly analyze miRs from these easily accessible biofluids.

Extracellular miRs travel the body protected from degradation within cell secreted microvesicles that range in size between 40-1000 nanometers. Exosomes, a specific subpopulation of microvesicles, have been extensively studied and are known to contain miRs implicated in tumor development and progression. Consequently, isolating tumor-derived exosomes from the rest of the blood’s background will lead to the development of more robust assays for cancer blood-based biomarkers. In addition, the reliability of blood-based assays of miR signatures using well-characterized model systems, rather than tissue or blood samples, has yet to be validated. The overall goal of this study is to develop a novel assay capable of detecting cancer-specific miR signatures in peripheral blood based on: (1) the isolation and characterization of miR-containing exosomes as a subpopulation of blood-borne microvesicles released by tumors, (2) the comparison of miR expression profiles between the exosomes and the cells from which

155 they originated, and (3) validation of the technology by carrying out a small feasibility trial analyzing blood from metastatic breast cancer patients.

Methodology Used Cell Culture and Microvesicle Isolation. Breast cell lines were grown in 100 mm cell culture dishes at 5% CO2 and 37°C in DMEM supplemented with 10% fetal bovine serum (FBS) and 1% Penicillin-Streptomycin to 50-60% confluency. These subconfluent cultures were washed in Phosphate Buffered Saline (PBS) and further incubated in serum-free media for 48 hrs until confluent. At this point, culture supernatant was decanted and microvesicles were isolated following the differential centrifugation protocol of Thery et al. Briefly, conditioned media was initially depleted of both live and dead cells by consecutive 300 x g 10 min and 2,000 x g 20 min spins. After these initial spins, cell-free supernatant was transferred to polycarbonate ultracentrifuge bottles and depleted of cell debris by a single 10,000 x g 30 min spin using a Beckman Coulter Type 70 Ti rotor. This final supernatant was then ultracentrifuged at 100,000 x g for 70 min to pellet the microvesicle population. The resulting pellet was washed in sterile PBS to eliminate contaminating proteins and centrifuged again at 100,000 x g for 70 min. To concentrate the sample, the resuspended pellet was transferred to 500 μl thick-walled polycarbonate tubes and spun down using a TLA-120.1 rotor for Beckman Coulter’s Optima TLX tabletop ultracentrifuge. All spins were carried out at 4°C.

Microvesicle Characterization-CryoTEM. Cryo-transmission electron microscopy images were obtained using a FEI Tecnai G2 Spirit Transmission Electron Microscope coupled with an Orius TEM CCD high resolution Gatan Camera. Briefly, 10 μl of microvesicle sample was blotted onto a previously glow discharged Lacey Formvar/Carbon coated 200 mesh copper grid. Sample preparation was carried out within a controlled environment vitrification system (CEVS) at 25°C to prevent fluctuations in temperature and humidity during blotting. Vitrification of the film was achieved by plunging the grid into liquid ethane and then transferring it to liquid nitrogen. Grids were kept at -165°C during the whole imaging process, using a specialized Gatan cryo-holder, to prevent sample perturbation.

Microvesicle Characterization-Dynamic Light Scattering. Submicron particle size distribution measurements were obtained using a BI-200SM Laser Light Scattering Goniometer (Brookhaven Instruments Limited, UK). A total volume of 1 ml of microvesicle suspension was required to obtain particle distributions. All measurements were performed at 25°C and at a 90° scattering angle. The translational diffusion coefficient was obtained by plotting relaxation time versus the magnitude of the scattering vector. The obtained diffusion coefficients were converted to the apparent hydrodynamic radii using the Stokes-Einstein relation.
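A short worked example of the Stokes-Einstein conversion just mentioned is sketched below in Python; the diffusion coefficient value and the use of water's viscosity at 25°C are illustrative assumptions, not measured values from this study.

import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
T = 298.15            # 25 C expressed in kelvin
eta = 0.89e-3         # dynamic viscosity of water at 25 C, Pa*s (assumed solvent)

def hydrodynamic_radius(D):
    """Stokes-Einstein relation: R_h = k_B*T / (6*pi*eta*D), with D in m^2/s and R_h in m."""
    return k_B * T / (6.0 * math.pi * eta * D)

D = 4.9e-12                                       # example diffusion coefficient, m^2/s
print(hydrodynamic_radius(D) * 1e9, "nm radius")  # ~50 nm radius, i.e. ~100 nm diameter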

Antibody Microarray. Antibody microarrays are fabricated by printing a solution of 0.5 mg/ml antibody on a polymer film-coated glass slide (Nexterion slide H) using a PerkinElmer noncontact microarrayer. The individual antibody spots consist of ~0.33 nl of solution and are 150 microns in diameter with a center-to-center distance of 400 microns. MVs were labeled with a phospholipid membrane dye (Vybrant DiI labeling solution). Microarray images were acquired using a Proscan Array fluorescence scanner.

microRNA Profiling. A previously concentrated 50 μl microvesicle sample in PBS was spun down in a thick-walled polycarbonate tube using a TLA-120.1 rotor for Beckman Coulter’s Optima TLX tabletop ultracentrifuge at 4°C. Microvesicles were hypotonically lysed by resuspending the pellet in 20 μl of RNase-free water. One microliter of sample was required to quantify RNA using a Nanodrop 2000 (Thermo Scientific). RNA profiling was done either by using a qRT-PCR (Applied Biosystems) machine or by loading 3 μl of sample onto a 12-sample cartridge (Nanostring).

Results Obtained After ultracentrifugation, which separates the microvesicles from cells, cell debris and contaminating proteins, we determined the particle size distribution and relative abundance of the microvesicles released by our 3 model cell lines using Dynamic Light Scattering (DLS). As observed in Figure 1, microvesicles from the three model cell lines exhibited bimodal size distributions. For all three cell lines, exosomes composed approximately one fifth of the total microvesicle population. Interestingly, the relative

156 abundance of exosomes from the early mesenchymal and mesenchymal cell lines was found to be 5% higher than that of the epithelial MCF10a cell line. A summary of the microvesicle subpopulation size distributions and relative abundances is shown in Table 1.

Verification of microvesicle isolation using the ultracentrifugation technique is shown in Figure 2, which also confirms that exosomes are spherical in shape. Figure 2a shows the resolution of the two layers of the lipid membrane, Figure 2b demonstrates how the contents of a vesicle are released when it bursts, and Figure 2c shows how glycoproteins coat the surface of the microvesicle.

We have selectively captured microvesicles based on their characteristic protein surface markers, as observed in Figure 3. Anti-EGFR antibody was printed as the capture molecule on a microarray slide. As observed, EGFR levels were significantly higher for the MDA MB231 MVs compared to the MVs of the other two cell lines.

The complete profile of more than 600 human miRs was obtained for the cells and MVs of the 3 model cell lines. Figure 4 shows the calculated ratio of partition coefficients for 12 selected miRs for MCF10A and MDA MB 231 cells and their MVs. Values near 1.0 indicate no difference in the fractional distribution of miRs as the cell state changed from noncancerous to cancerous. Values above or below 1.0 indicate a selective partitioning.
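The exact formula behind the partition coefficients in Figure 4 is not reproduced here; purely as an illustration of the comparison being made, the Python sketch below assumes a simple definition (the MV-to-cell expression ratio for each miR) and computes the ratio of that coefficient between the cancerous and noncancerous cell lines. The counts shown are invented.

def partition_coefficient(mv_counts, cell_counts):
    """Assumed definition: ratio of a miR's abundance in MVs to its abundance in the parent cells."""
    return mv_counts / cell_counts

def coefficient_ratio(mv_mda, cell_mda, mv_mcf10a, cell_mcf10a):
    """Near 1.0 -> no change in shuttling with cell state; far from 1.0 -> selective shuttling."""
    return partition_coefficient(mv_mda, cell_mda) / partition_coefficient(mv_mcf10a, cell_mcf10a)

print(coefficient_ratio(mv_mda=480, cell_mda=120, mv_mcf10a=150, cell_mcf10a=140))   # ~3.7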

Significance and Interpretation of Results Our overall strategy for assay development has focused on using 3 model breast cell lines which represent 3 different states of the epithelial-to-mesenchymal transition (EMT). The MCF7 and MDA MB 231 cell lines were chosen from the NCI60 panel to model the early mesenchymal and mesenchymal states (15,18), while the MCF 10A cell line was selected to represent a non-malignant epithelial phenotype. Our results establish that we can recover 100 µg of microvesicles from the supernatant of 60 million MDA MB 231 cells, and roughly half this amount from an equivalent number of MCF10a cells. The amount of cell-secreted microvesicles recovered from the MCF7 cell culture supernatant is approximately halfway between these two extremes. These observations correlate with previous findings which have shown an increase in the concentration of blood-secreted microvesicles during disease states (1).

In addition, given that we expect the peripheral blood of cancer patients to contain a mixture of both tumor-derived exosomes and exosomes secreted by other, irrelevant cell sources, as well as indirect tumor-derived exosomes emanating from crosstalk between tumor cells and cells of the immune system (1,14), any future determination of cancer-specific miR signatures must be able to distinguish and sort according to exosome origin. We have shown that our microarray assay can selectively capture microvesicles based on surface markers that correlate with different cell types. Once we have spatially segregated vesicles based on cell origin, we can then characterize the miR signatures of the bound vesicles by qRT-PCR or Nanostring analysis as a function of microvesicle protein surface markers. Once we have completely defined the EMT miRs of interest within our 3-cell-line model system, we can proceed to compare this subset with the subset obtained from breast cancer patient blood samples.

We are investigating the effect cell state has on the selective shuttling of miRs. Obtaining the partition coefficients between cells and vesicles allows us to investigate what miRs are being actively shuttled out of the cell. Next, we related the partition coefficients with cell state and observed that there are changes in the partition coefficients with cell state. Identification of those miRs which differ the most can lead to potential cancer biomarker candidates.

157 Figures and Charts

Figure 1. The size distribution of microvesicles recovered from MCF10A, MCF7 and MDA MB 231 cell culture supernatants by dynamic light scattering. Two subpopulations of microvesicles are observed for all 3 cell lines.

Table 1. Microvesicle Subpopulation Diameters and Relative Abundances.

Cell Line      Avg. Diameter (nm)   Avg. Diameter (nm)   Relative Percentage   Relative Percentage
               Population 1         Population 2         Population 1          Population 2
MCF 10A        73.90                384.14               17.04%                82.96%
  St. Dev.     21.63                189.71               0.03                  0.03
MCF 7          78.51                276.17               22.32%                77.68%
  St. Dev.     25.90                100.64               0.03                  0.03
MDA MB 231     94.25                320.45               22.62%                77.38%
  St. Dev.     59.87                159.95               0.05                  0.05

(a) (b) (c) Figure 2. CryoTEM micrographs of the MDA MB 231 cell line. (a) resolution of the bilipid membrane, (b) burst microvesicle releasing its contents, (c) resolution of glycoproteins on the surface of the vesicle.

158 (a) (b) (c,d) Figure 3. Microarray images of fluorescently labeled MVs captured using anti-epidermal growth factor receptor (EGFR) for (a) MDA MB231, (b) MCF7, (c) MCF10a, and (d) the supernatant of MVs obtained after isolation (control).

Figure 4. Nanostring results comparing the miR ratio of partition coefficients between MCF10A and MDA MB 231 for cells and MVs.

Acknowledgments The author wishes to thank her advisor Professor Michael E. Paulaitis and her labmate Kitty Agarwal for all their help and support. Special thanks to Dr. Ringel and Dr. Saji at the OSU Medical Center for their help with cell culture, Shahid Rameez for his help with the A4F.

159 References
1. Baj-Krzyworzeka, M., et al., Tumour-derived microvesicles carry several surface determinants and mRNA of tumour cells and transfer some of these determinants to monocytes. Cancer Immunol Immunother, 2006. 55(7): p. 808-18.
2. Blenkiron, C., et al., MicroRNA expression profiling of human breast cancer identifies new markers of tumor subtype. Genome Biol, 2007. 8(10): p. R214.
3. Gregory, P. A., et al., The miR-200 family and miR-205 regulate epithelial to mesenchymal transition by targeting ZEB1 and SIP1. Nat Cell Biol, 2008. 10(5): p. 593-601.
4. Harris, L., et al., American Society of Clinical Oncology 2007 update of recommendations for the use of tumor markers in breast cancer. J Clin Oncol, 2007. 25(33): p. 5287-312.
5. Heneghan, H. M., et al., Circulating microRNAs as novel minimally invasive biomarkers for breast cancer. Ann Surg. 251(3): p. 499-505.
6. Huang, Q., et al., The microRNAs miR-373 and miR-520c promote tumour invasion and metastasis. Nat Cell Biol, 2008. 10(2): p. 202-10.
7. Hunter, M. P., et al., Detection of microRNA expression in human peripheral blood microvesicles. PLoS One, 2008. 3(11): p. e3694.
8. Iorio, M. V., et al., MicroRNA gene expression deregulation in human breast cancer. Cancer Res, 2005. 65(16): p. 7065-70.
9. Kang, D., et al., Proteomic analysis of exosomes from human neural stem cells by flow field-flow fractionation and nanoflow liquid chromatography-tandem mass spectrometry. J Proteome Res, 2008. 7(8): p. 3475-80.
10. Ma, L., J. Teruya-Feldstein, and R.A. Weinberg, Tumour invasion and metastasis initiated by microRNA-10b in breast cancer. Nature, 2007. 449(7163): p. 682-8.
11. Mitchell, P.S., et al., Circulating microRNAs as stable blood-based markers for cancer detection. Proc Natl Acad Sci U S A, 2008. 105(30): p. 10513-8.
12. Nole, F., et al., Variation of circulating tumor cell levels during treatment of metastatic breast cancer: prognostic and therapeutic implications. Ann Oncol, 2008. 19(5): p. 891-7.
13. Raposo, G., et al., B lymphocytes secrete antigen-presenting vesicles. J Exp Med, 1996. 183(3): p. 1161-72.
14. Ratajczak, J., et al., Membrane-derived microvesicles: important and underappreciated mediators of cell-to-cell communication. Leukemia, 2006. 20(9): p. 1487-95.
15. Tavazoie, S.F., et al., Endogenous human microRNAs that suppress breast cancer metastasis. Nature, 2008. 451(7175): p. 147-52.
16. Thery, C., et al., Isolation and characterization of exosomes from cell culture supernatants and biological fluids. Curr Protoc Cell Biol, 2006. Chapter 3: p. Unit 3 22.
17. Valadi, H., et al., Exosome-mediated transfer of mRNAs and microRNAs is a novel mechanism of genetic exchange between cells. Nat Cell Biol, 2007. 9(6): p. 654-9.
18. Volinia, S., et al., A microRNA expression signature of human solid tumors defines cancer gene targets. Proc Natl Acad Sci U S A, 2006. 103(7): p. 2257-61.

160 Sustainable Personal Transportation

Student Researcher: Pierre A. Hall

Advisor: Dr. Tom Hartley

The University of Akron Department of Electrical Engineering

Abstract The present study examines how efficient the new types of personal transportation are and how applicable they are to everyday urban life in all types of weather conditions. The personal transportation has to get the individual and their load across short to medium distances efficiently and in an earth-friendly way. Background research was performed on multiple vehicles and data were collected; from the collected data, metrics were made for comparison. Different aspects of the vehicles' performance were compared: distance traveled, recharge time, weight, speed, and cost. After making charts and tables to analyze the collected data, the vehicle with the best metrics for the given conditions was chosen. After the vehicle is chosen it will undergo numerous tests, and multiple calculations will be performed to verify the marketed performance of the vehicle. After all the calculations have been made and the marketed claims have been verified or disproved, suggestions will be made to improve the vehicle's performance. Different methods of improvement are being considered, such as replacing batteries or solar charging methods.

Objectives In this particular research, a wide range of electric two-wheeled vehicles were researched and reviewed with regard to which were more efficient and more advantageous under particular conditions. After the vehicles were collected and their data were archived, an analysis process began, comparing the different vehicles and ranking them in different categories of performance. The vehicle that scored well in all the categories was the one that was purchased. When the purchased vehicle arrives, many tests and calculations will be done to see if it was advertised accurately and also to see what types of enhancements can be made to the vehicle to make it as efficient as possible.

Methods The methods used in this research called for extensive background research and a system of classification. This particular research focused only on two classes of all-electric wheeled vehicles: bikes and scooters. After extensive research of many vehicles, the specifications of the vehicles were examined and charts were made according to the different performance elements that were targeted. From the charts made in Microsoft Excel each vehicle was ranked, then a scoring system was implemented, and from that the vehicle with the best score was picked based on its performance (a sketch of one such scoring scheme follows the tables below). Once the vehicle arrives, tests will be conducted. At that point some of the planned methods may not work when the vehicle is being tested, and new methods and/or formulas will have to be constructed and used.

In my research I gathered as much information on as many electric vehicles as I could, then recorded each vehicle's specifications. Specifications (specs) are detailed information regarding the performance of the vehicle. In this case the specs that I was interested in were weight, range of the vehicle per charge, speed, battery chemistry, amp-hours, voltage, the maximum load the vehicle can carry, and the cost. After gathering the information for each vehicle, the vehicles were then classified on different performance aspects: speed, recharge time, carrying capacity, cost, and distance traveled per charge. The charts that were made from this information are:

161

Table 1. Speed

Vehicle            Speed (mph)
Tri-Track          110
Electra Electric   80
Esarati            50
Blade              45
Jackal             40
Falcon Hummer      40
Oxygen             28
eGO                24
E-Bikeboard        22
Currie             15
Yike Bike          15
Xport              14
Segway X2          12.5
Bik.e              12.5
Viza Volt          12

Table 2. Distance Traveled Per Charge

Vehicle            Distance (miles)
Blade              100
Esarati            100
Tri-Track          95
Oxygen             35
E-Bikeboard        30
Jackal             25
Falcon Hummer      25
eGo                25
Bik.e              12.5
Segway X2          12
Viza Volt          8
Currie             8
Xport              8
Yike Bike          6.2

Table 3. Vehicle Cost

Vehicle            Cost ($, High-Low)
Electra            35000
Tri-Track          19995
Segway X2          6695
Blade              5995
Yike Bike          4300
Jackal             3500
Esarati            3000
E-Bikeboard        2500
Oxygen             2500
eGo                2100
Xport              700
Currie             500
Viza Volt          275

162
Table 4. Vehicle Recharge Time

Vehicle            Recharge Time (Long-Short)
Jackal             6 hrs
Viza Volt          6 hrs
Xport              6 hrs
eGo                6 hrs
Falcon Hummer      4 hrs
Oxygen             3.5 hrs
Blade              1 hr
Yike Bike          40 mins

Table 5. Vehicle Load Capacity

Vehicle            Load (Heavy-Light, lbs)
Falcon Hummer      400
eGo                350
E-BikeBoard        330
Segway X2          260
Xport              240
Yike Bike          220
Viza Volt          200
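The Python sketch below illustrates the ranking-and-scoring idea described in the Methods using a few of the tabulated specs. The equal-weight, rank-sum scheme and the subset of vehicles shown are assumptions made for illustration only; the actual selection used more categories and the author's own spreadsheet weighting, so the ordering here need not match the final choice.

specs = {
    #               speed(mph)  range(mi)  cost($)  load(lbs)
    "E-Bikeboard": (22,         30,        2500,    330),
    "eGo":         (24,         25,        2100,    350),
    "Xport":       (14,          8,         700,    240),
    "Segway X2":   (12.5,       12,        6695,    260),
}

def rank_scores(specs):
    """Rank the vehicles in each category (1 = best) and sum the ranks; the lowest total is best."""
    names = list(specs)
    higher_is_better = [True, True, False, True]   # cost is the only "lower is better" column
    totals = {name: 0 for name in names}
    for col, hib in enumerate(higher_is_better):
        ordered = sorted(names, key=lambda n: specs[n][col], reverse=hib)
        for rank, name in enumerate(ordered, start=1):
            totals[name] += rank
    return sorted(totals.items(), key=lambda item: item[1])

print(rank_scores(specs))   # lowest rank total wins under this particular weighting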

Once the E-Bikeboard was selected from the charts, a series of calculations will be performed to see if the vehicle was accurately advertised. To test whether the range is accurate, the vehicle will be driven around a track while measuring how many laps it completes on a single charge, or if a track isn't available I will have to try another way to measure the distance. By equipping a GPS and tracking the distance traveled I can see the total range of the vehicle. To calculate the speed of the vehicle, the formula:

v = d/t (Giancoli, Douglas C. 2008) will be used, where v is the speed, d is the distance traveled, and t is the elapsed time. Then, after observing different battery attributes like voltage and amp-hours, the formulas to be used are:

V = IR, where V = Voltage, I = Current, and R = Resistance (Giancoli, Douglas C. 2008)

Q = It, where Q = Charge (amp-hours), I = Current, and t = Time (Giancoli, Douglas C. 2008)
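The numbers below are purely illustrative (they are not measured E-Bikeboard values); the Python sketch simply applies the formulas listed above, v = d/t and Q = It, plus the watt-hour product of pack voltage and amp-hours.

def average_speed(distance_mi, time_hr):
    return distance_mi / time_hr        # v = d / t, in mph

def delivered_charge(current_a, time_hr):
    return current_a * time_hr          # Q = I * t, in amp-hours

def pack_energy_wh(voltage_v, amp_hours):
    return voltage_v * amp_hours        # stored energy estimate, in watt-hours

print(average_speed(30.0, 1.4))         # ~21.4 mph average over an assumed range test
print(delivered_charge(10.0, 2.0))      # 20 Ah drawn at a steady 10 A for 2 hours
print(pack_energy_wh(48.0, 20.0))       # 960 Wh for an assumed 48 V, 20 Ah pack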

The battery attributes are needed by my research partner, who is doing research on batteries, will be constructing a charging station from solar panels, and will then construct a battery management program that will monitor how the battery is performing. Eventually a new battery that has better performance than the one currently on the E-Bikeboard will be placed on the vehicle. Better

163 performance means that it will put out higher amp-hours, hold a charge longer, take less time to charge, and remain more stable over time.

After the vehicle has been tested for performance, observations of the motor will begin for further research options. Doing new research on electric motors will allow me to apply that knowledge to this vehicle, which will further add to the overall modifications applied to the vehicle and will make it more efficient and better suited to the environment the vehicle will be used in. The intended formulas to calculate motor performance are:

P = VI, where P = Power, V = Voltage, and I = Current (Giancoli, Douglas C. 2008)

E = QU = Pt = I²Rt (Giancoli, Douglas C. 2008)

where E = Energy, U = Electrical Potential Difference, Q = Charge, I = Current, P = Power, R = Resistance, and t = Time.

Results After receiving the E-Bikeboard, several tests were performed to verify whether the specs were accurate. After the results were gathered it was clear that the vehicle performed as the manufacturer advertised, and in some areas better.

Test             Advertised   Actual
Range            25 mi        ~30 mi
Top Speed        22 mph       23 mph
Payload          330 lbs      +330 lbs
Recharge Time    3.5 hrs      4.5 hrs

References 1. "Design & Performance | YikeBike - The World's First Super Light Electric Folding Bike." YikeBike - The World's First Super Light Electric Folding Bike. | Urban Freedom. Web. May 2010. . 2. "Detailed Specifications for the Segway X2." Segway of Hawaii Sales, Service, Segway Tours & Rentals in Waikiki at the Hilton Hawaiian Village. Web. May 2010. . 3. "Electric Motor-Scooters and Motorcycles - Electric-Bikes.com." Electric Bikes, Scooters and Other Light Electric Vehicles (LEV). Web. May 2010. . 4. "Electric Scooters Best Buys: Currie Scooters - Electric-Bikes.com." Electric Bikes, Scooters and Other Light Electric Vehicles (LEV). Web. May 2010. . 5. "Electric Scooters Best Buys: XPort SLX - Electric-Bikes.com." Electric Bikes, Scooters and Other Light Electric Vehicles (LEV). Web. May 2010. .

164 6. "Electric Scooters: BikeBoard - Electric-Bikes.com." Electric Bikes, Scooters and Other Light Electric Vehicles (LEV). Web. May 2010. . 7. "Electric Scooters EGO-2 Cycle - Electric-Bikes.com." Electric Bikes, Scooters and Other Light Electric Vehicles (LEV). Web. May 2010. . 8. "Electric Scooters: Forsen - Electric-Bikes.com." Electric Bikes, Scooters and Other Light Electric Vehicles (LEV). Web. May 2010. . 9. "Electric Scooters: The Viza Volt - Electric-Bikes.com." Electric Bikes, Scooters and Other Light Electric Vehicles (LEV). Web. May 2010. . 10. "Lexus Hybrid Bicycle: Both-wheel Drive | Crave - CNET." Technology News - CNET News. Web. May 2010. . 11. "Lexus Hybrid Bike: What The…?" Car News, Car Reviews, Spy Shots and Videos - 4wheelsnews.com. Web. May 2010. . 12. Melanson, Donald. "Volkswagen Rolls out Foldable 'Bik.e' Electric Bicycle Concept." Engadget. Web. May 2010. . 13. "North America: EGO Vehicles." Home: EGO Vehicles. Web. May 2010. . 14. "OxygenSpecPage." Welcome To Electric Motorsport. Web. May 2010. . 15. "Segway Powers EN-V with Vision for 2030 Transport at Expo 2010 Shanghai." The Last Mile. Web. May 2010. . 16. "TriTrack Street." Electric Bikes, Scooters and Other Light Electric Vehicles (LEV). Web. May 2010. .

165 The Advantages of Nuclear Energy

Student Researcher: Malcolm X. Haraway

Advisor: Dr. Edward Asikele

Wilberforce University Department of Computer and Nuclear Engineering

Abstract The future of nuclear energy is the future of the world. Nuclear energy is a new and improved source of energy. Oil, gas, and coal have been great sources of energy for a long time, but transitioning to nuclear energy offers many more advantages. Nuclear energy is by far more efficient than other types of energy. Producing nuclear energy is also a cleaner, more reliable, and less expensive process. Compared to any other source of energy, nuclear energy has the most advantages.

Project Objectives The objective of this research project is to enlighten individuals and the world about the rise in nuclear energy and the advantages of using this type of energy. I want to show all the aspects of where nuclear energy prevails over other sources of energy. This includes cost, efficiency, amount, availability, quality, etc.

Methodology Used In this research I examined various sources of energy and compared them. I used coal, gas, oil and petroleum, then compared them to uranium to show the advantages of using nuclear energy.

Results Obtained Here are a few of the results that I have obtained. They show that nuclear energy stands out as by far the least expensive source of energy.

U. S. Electricity Production Costs and Components 1995 - 2009, In 2009 cents per kilowatt-hour

                  Total Production Costs                      Fuel Costs
Year     Coal    Gas     Nuclear  Petroleum         Coal    Gas     Nuclear  Petroleum
1995     2.56    3.73    2.69     5.83              1.96    3.02    0.80     4.20
1996     2.41    4.56    2.52     5.93              1.88    3.86    0.72     4.57
1997     2.33    4.62    2.64     5.33              1.81    3.95    0.71     4.17
1998     2.28    4.04    2.45     3.75              1.73    3.44    0.70     3.03
1999     2.20    4.37    2.21     4.50              1.67    3.86    0.64     3.47
2000     2.15    7.24    2.16     6.48              1.63    6.67    0.60     5.69
2001     2.20    7.30    2.05     5.99              1.66    6.66    0.56     5.17
2002     2.18    4.63    2.01     5.71              1.63    4.01    0.53     4.78
2003     2.15    6.37    1.98     6.86              1.60    5.72    0.53     5.77
2004     2.23    6.40    1.93     6.52              1.66    5.85    0.53     5.54
2005     2.42    7.99    1.87     8.94              1.85    7.47    0.49     7.97
2006     2.52    6.91    1.90     10.31             1.93    6.37    0.49     8.93
2007     2.57    6.68    1.89     10.78             1.96    6.16    0.50     9.33
2008     2.80    7.80    1.96     17.63             2.20    7.27    0.51     15.69
2009     2.97    5.00    2.03     12.37             2.30    4.44    0.57     9.82

166 Production Costs = Operations and Maintenance Costs + Fuel Costs. Production costs do not include indirect costs and are based on FERC Form 1 filings submitted by regulated utilities. Production costs are modeled for utilities that are not regulated. Source: Ventyx Velocity Suite Updated: 5/10
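To make the per-kilowatt-hour differences in the table concrete, the short Python sketch below applies the 2009 production costs to an assumed household consumption of 11,000 kWh per year; the consumption figure is an illustrative assumption, not part of the original data.

production_cost_2009 = {"Coal": 2.97, "Gas": 5.00, "Nuclear": 2.03, "Petroleum": 12.37}  # cents/kWh
annual_kwh = 11000   # assumed household consumption, kWh per year

for source, cents_per_kwh in sorted(production_cost_2009.items(), key=lambda item: item[1]):
    dollars_per_year = annual_kwh * cents_per_kwh / 100.0
    print(f"{source:10s} ${dollars_per_year:8.2f} per year")
# Nuclear is lowest at about $223/year, versus roughly $327 for coal and $550 for gas.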

Significance and Interpretation of Results Based on my results, it is a valid argument that nuclear energy has many advantages over other sources of energy. The significance of this research lies in its relevance to the growth of our world, the evolution of nuclear engineering, and the adoption of a new source of energy.

Figures and Charts Monthly Fuel Cost to U.S. Electric Utilities 1995 – 2009, In 2009 cents per kilowatt-hour

Source: Ventyx Velocity Suite Updated: 5/10

Acknowledgments
• Dr. Edward Asikele
• Wilberforce University
• Ohio Space Grant

References 1. http://www.benefitsofnuclearpower.com/ 2. http://www.nei.org/resourcesandstats/graphicsandcharts/

167 Finite Difference Modeling of a Low Cost Solar Thermal Collector

Student Researcher: James M. Hoffman

Advisor: Dr. Kevin Hallinan

University of Dayton Department of Mechanical Engineering

Abstract While current solar thermal systems are relatively efficient, their high copper content or use of evacuated tubes means that they are quite expensive. While these materials are integral in providing excellent thermal conduction and energy absorption to the working fluid, they make solar thermal heating a cost prohibitive venture for all but the most hard-line green enthusiasts. In an effort to replace these costly components, a new panel using small cells of water or a propylene glycol solution insulated by cells of air to maximize heat transfer to the working fluid and minimize heat losses is being examined. The panel itself is made of readily available and relatively cheap materials, meaning that it could be fully integrated into a building to serve as the entire roof. As building multiple prototypes is expensive, a finite difference model to predict panel performance under varying conditions and configurations is being developed. This model will be further improved by attempting to match it to test data when available. The desired result is a tool that not only predicts basic performance but also predicts performance prior to installation. While increasing public acceptance of this technology is one large hurdle to potential widespread use, lowering the economic barrier, along with today’s governments’ many renewable energy stimuli, is the first step to promote this form of carbon zero heating.

Project Goals The objective of this investigation is to develop a computerized finite difference model to predict the performance of the proposed solar thermal panel. This requires correctly identifying and modeling all energy transfers taking place: solar radiation into the panel, energy transfer from the working fluid through the panel to the outside world, and energy transfer through the back of the panel into the structure that it is mounted on. Once these basic energy balances have been accounted for, an attempt will be made to correlate the resulting “pure physics” model to test data, and if necessary small adders and scalers will be applied to better match the data. Following this, the model will be used to determine the sensitivity of the panel’s thermal efficiency to various changes in both design and operating conditions. This will be used to attempt to optimize the panel design and to analyze panel performance under a variety of weather conditions. The model will be used to analyze multiple thicknesses of the GE Lexan being considered for the model face plate, where solar transmittance and thermal resistivity are being traded off (types 2R and 3T as seen in the GE Structured Products Lexan Thermoclear technical sheet). In addition, the case of evacuating the top level of cells will be examined; while this is undesirable from a cost standpoint, it may be necessary to achieve a smaller thermal loss to the ambient world than leaving the cells filled with air. Figure 1 details an axial cutaway view of the assembly, showing all major materials and heat flows existing in the panel.

Figure 1. Axial Cutaway of Proposed Panel and Heat Flows

168 Modeling Methodology The model itself is a finite difference model written in the cross-platform and open-source TCL/TK scripting language with a guided graphical user interface (GUI). This language was chosen because its interpreter is available for anyone to download for free online. While it is understood that MATLAB is often the language of choice for technical computing, it was desired that this model could eventually be operated by anyone wishing to use it, so proprietary forms of computer coding were avoided.

In addition, TCL/TK supports the creation of graphical user interfaces for the user to operate the program. This makes the program significantly more “friendly” to the operator, as the use of a command-line shell is not requisite, and the user can more plainly see both the available options for the simulation and the data fields for required inputs. Furthermore, the operator can load in files via the familiar Windows Explorer rather than having to code in filenames.

The model calculates the energy balances and working fluid temperatures at set elements – finite lengths along the panel – determined by the user. The user enters data for initial fluid temperatures, ambient weather conditions (manual input & TMY3 comma separated values files are both supported) and material properties. Three energy balances are solved for each element on the panel via an iterative solver: Thermal energy from the working fluid to the back plate of the collector and out the back of the panel, thermal energy from the working fluid through the face of the panel to ambient, and solar energy gains to the fluid. These calculations take many factors into account as detailed below.

Energy balance for heat leaving through the back of the collector (this finds Tplate): 0 = convection from fluid to back of collector - conduction through insulative backing + solar radiation transmitted through working fluid

Energy balance for heat leaving out the face of the collector (this finds Tcollector): 0 = convection from fluid to collector surface – conduction through collector face + solar radiation absorbed by collector face

Energy to the working fluid: Efluid = solar radiation – convection to the top and bottom of collector.
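A minimal sketch of the per-element balances above is given below, written in Python rather than the author's TCL/TK code. All coefficients, optical properties, element size, and flow rate are illustrative assumptions, and the closed-form solution shown replaces the model's iterative solver for simplicity.

def solve_element(T_fluid, T_amb, T_indoor, G,
                  h_f=250.0,          # assumed fluid-to-wall convection coefficient, W/m^2-K
                  U_face=3.0,         # assumed face plate + outside-air conductance, W/m^2-K
                  U_back=0.5,         # assumed conductance of the insulated backing, W/m^2-K
                  tau=0.75,           # assumed solar transmittance of the face plate
                  alpha_face=0.08,    # assumed solar absorptance of the face plate
                  alpha_fluid=0.80):  # assumed fraction of transmitted solar absorbed in the fluid
    """Solve the three word-equations above for one element; all fluxes on a W/m^2 basis."""
    # Face balance: 0 = h_f*(T_fluid - T_face) - U_face*(T_face - T_amb) + alpha_face*G
    T_face = (h_f * T_fluid + U_face * T_amb + alpha_face * G) / (h_f + U_face)
    # Back-plate balance: 0 = h_f*(T_fluid - T_plate) - U_back*(T_plate - T_indoor)
    #                       + tau*(1 - alpha_fluid)*G   (solar transmitted through the fluid)
    T_plate = (h_f * T_fluid + U_back * T_indoor + tau * (1.0 - alpha_fluid) * G) / (h_f + U_back)
    # Fluid balance: absorbed solar minus convection to the face and back surfaces
    q_fluid = tau * alpha_fluid * G - h_f * (T_fluid - T_face) - h_f * (T_fluid - T_plate)
    return T_face, T_plate, q_fluid

def march_along_panel(T_in, T_amb, T_indoor, G, n_elem=20, m_dot=0.02, cp=4186.0, dA=0.1):
    """Step the working fluid through n_elem elements of area dA (m^2) each."""
    T = T_in
    for _ in range(n_elem):
        _, _, q = solve_element(T, T_amb, T_indoor, G)
        T += q * dA / (m_dot * cp)   # each element's net gain raises the fluid temperature
    return T

# Example: 10 C water entering on a 0 C day with 800 W/m^2 of incident solar radiation
print(march_along_panel(T_in=10.0, T_amb=0.0, T_indoor=20.0, G=800.0))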

For the case of the upper panel cells being evacuated, only radiation losses were considered rather than convection through the collector face. While this is an approximation, as there is still some slight conduction through the panel ribs, there is not yet any test data for this case that could be used to correlate the model (Mahjouri).

For the energy balances, the standard heat transfer rate equations for convection, conduction, and radiation were used (Incropera and DeWitt 2002, Kissock 2010): qconv = h·A·(Ts - T∞), qcond = (k·A/L)·(T1 - T2), and qrad = ε·σ·A·(Ts^4 - Tsurr^4).

In order to calculate the convective heat transfer, the working fluid was approximated as a thin film, and a thin-film correlation (Kabov et al. 2004) was used to approximate the convection coefficient.

While the model calculates all of the major energy balances, some small factors are left out, assumed to have minimal effect or be taken care of in the model adders and scalers. These omitted factors include all forms of radiation other than solar, radiation absorbed by the face plate (its transmittance is considered), and properly determining U values of partial amounts of material. While the cumulative effect of these is small, a slight derate scaler to the thermal resistance of the collector face was applied to match the model with projections from test data.

169 In order to aid in panel design, a “parameter mode” was coded which permits the user to create a base case and then perturb a single input parameter across a range to examine that parameter’s effect on thermal efficiency and the exit temperature of the working fluid. This can eventually be used to create a set of derivatives – a list of pertinent inputs and the effect of a small delta in each input on the panel performance – or to show panel performance under the same ambient conditions as the working fluid begins to heat up.
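The Python sketch below captures the idea of "parameter mode" in a few lines; run_model is a placeholder callback standing in for the actual TCL/TK model, and the parameter name and values are illustrative.

def sweep_parameter(base_case, name, values, run_model):
    """Hold the base case fixed, vary one input, and record (value, exit_temp, efficiency)."""
    results = []
    for value in values:
        case = dict(base_case, **{name: value})   # copy the base case with one input changed
        exit_temp, efficiency = run_model(case)
        results.append((value, exit_temp, efficiency))
    return results

# e.g. sweep_parameter(base_case, "insulation_in", [0.5, 1.0, 1.5, 2.0, 2.5, 3.0], run_model)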

The model’s output is a csv (comma separated values) text file containing the solved conditions at each element on the panel. This file format can easily be read into MS Excel or Matlab to produce plots of the data, or further manipulate it for other purposes.

Results and Future Goals Using the model, preliminary sensitivity analyses to indoor temperature, outdoor temperature, inches of insulation on the backing, and liquid velocity were carried out. In addition, studies were executed to determine the operation of the panel for a variety of fluid inlet temperatures and to compare two panel face materials. These are intended to be used to aid in the design of future panel prototypes and to help determine an optimum configuration.

One item discovered is that it is quite advantageous to use the triple wall material as opposed to the double wall, as the decrease in solar transmittance was relatively small compared to the insulative gain from the layer of air at higher fluid temperatures. Figure 2 shows the operating efficiencies of the two materials over a fluid input temperature range at two ambient temperatures. Not only is the triple wall Thermoclear shown to be more efficient at higher operating temperatures, but one can also observe the decrease in the efficiency gap between the readings at the two ambient temperatures. Note that efficiencies greater than 100% are a result of the initial fluid temperature being lower than ambient; efficiency was calculated based on the solar incident energy received and the fluid temperature rise.

Figure 2. Graph of Thermal Efficiencies for Various Faceplates

Another study used parameter mode to investigate how much insulation was necessary for the backing. This study ran the model for several indoor temperatures and perturbed the inches-of-insulation input from one half of an inch to three inches. As increasing insulation eventually reaches a point of diminishing returns and it is a design goal to keep costs down, finding the optimum amount of insulation is

170 an important item. The study revealed that the best configuration would use between two and two and one half inches of foam insulation.

Figure 3 shows the model’s prediction for the evacuated upper cell condition versus the standard configuration. While it is obvious that evacuating the cells would greatly lower heat transfer to ambient, the model’s prediction seems to be overly optimistic, as testing of commercial evacuated tube collectors by the Florida Solar Energy Center showed lower efficiencies. (Mahjouri) This suggests that accounting for only radiation losses is not a valid assumption. Once the evacuated cell version can be built and tested, test data will be used to attempt to correct the model’s predictions to more realistic values.

Figure 3. Graph Comparing Efficiencies of Standard and Evacuated Cell Configurations

The model also has room for growth; students doing experimentation with the actual panel have only been able to provide transient data that did not reach a steady-state maximum temperature, and much of the data is not consistent, as repeated test conditions did not produce closely similar results. While a small extrapolation was carried out to predict the maximum fluid temperature, it is desirable to obtain steady-state test data in order to more accurately match the model to the panel. In order to fully match the model to an actual panel, more test data along with validation of the testing instruments is required.

References
1. GE Structured Products, “Lexan Thermoclear Technical Manual”, GE Structured Products Technical Report.
2. Mahjouri, “Vacuum Tube Liquid-Vapor Collectors”, Thermo Technologies.
3. Kissock, K., (2010) “Energy Efficient Process Heating”.
4. Incropera, F., DeWitt, D., (2002) “Fundamentals of Heat and Mass Transfer”, Textbook, pp. 52-53, 326-328.
5. Kabova, O.A., Chinnov, E.A., Legros, J-C., (2004) “Three-Dimensional Deformations in Non-uniformly Heated Falling Liquid Film at Small and Moderate Reynolds Numbers”, Fortschr-Ber. VDI, Reihe 3, Nr. 817, pp. 62-80.

171 Going Solar! The Power to Generate Energy

Student Researcher: Marguerite J. Hudak

Advisor: Dr. John Milam

The University of Akron Department of Education

Abstract Electricity: where would we be without it today? It is something that the majority of us use every single day in some way, shape, or form; yet, how many of us are actually aware of just how much we are using, at what cost, and on a larger scale – what impact it has? Electricity, like all types of energy, must be produced somehow through the use of resources, whether renewable or nonrenewable. Today, most electricity is produced using nonrenewable energy resources, which once consumed, cannot be replenished; however, my project focuses on a renewable energy resource: solar energy. Over the years with advancements in the areas of science and technology, we have adopted the use of renewable sources of energy, including solar energy, which has become more popular as an alternative way to generate electricity.

Therefore, through this project, students will have the opportunity to explore and learn about the very relevant topic of solar energy while practicing essential mathematical skills. To begin with, they would work through the problems in lessons 12, 13, and 14 of NASA's "A Brief Mathematical Guide to Earth Science and Climate Change." This would serve as an introduction which would provide them with some background information about electricity, including acquainting them with some necessary terminology.

Next, students would begin working on the major portion of the project, which involves a real-life situation in which they would perform various mathematical calculations in order to determine the cost-effectiveness of installing a solar panel system on a home, given various scenarios. Also, in order to make this activity even more relevant to the students, they would utilize the electric bills from their residence; if these were not available, other data would be provided. Time would also be taken for students to come up with possible explanations for the changes in the cost of electric bills from month to month, in addition to formulating ideas for how they could reduce the amount of electricity that they consume.

Then, as a result of performing the necessary calculations, students would conclude whether or not they would make the investment of purchasing the solar panel system, and also provide their reasoning behind why they would or would not make the purchase. Furthermore, as with any purchase, the cost and benefits should be weighed, and this lesson gives students practice with performing this life-long skill. In addition, students might also choose to keep in mind the other benefits (unrelated to cost) of installing a solar panel system, such as it being a clean form of energy that does not ultimately result in polluting the air. Finally, students would discuss and then write about the impacts of using solar energy versus not using solar energy.

Procedure with Questions

Example of Electric Bill Data:

Month     | Kilowatt Hours (kWh) | Cost (dollars)
January   | 862  | 103.44
February  | 757  | 90.84
March     | 712  | 85.44
April     | 793  | 95.16
May       | 753  | 90.36
June      | 961  | 115.32
July      | 1282 | 153.84
August    | 1601 | 192.12
September | 1349 | 161.88
October   | 935  | 112.20
November  | 807  | 96.84
December  | 881  | 105.72

How much are you paying per kWh?
• For example, looking at January’s bill, dividing 103.44 by 862 results in a cost of $0.12 per kWh. Doing this same calculation for each and every month results in the same cost.

What is your yearly power consumption (usage) in megawatt hours (MWh)? 1 MWh = 1,000 kWh. Round your answer to the nearest whole number.
• The sum of the kWh used for the 12 months = 11,693 kWh / 1,000 = 11.693 ≈ 12 MWh

Using your spreadsheet in Excel, determine the average monthly electricity usage in kWh and the average monthly cost of your electric bill. Round your answer for the average electricity usage to the nearest whole number.
• Average monthly electricity usage = Total kWh usage for the year / 12 = 974 kWh

• Average monthly cost of the electric bill = Total cost for the year ($1,403.16) / 12 = $116.93

What is your total projected output for one year?
• Your roof has dimensions of 40 feet by 17 feet, and each solar panel is 5 feet by 3 feet. *However, you need clearance around the perimeter of the panels and therefore cannot place panels all the way up to the edges of the roof.* So, how many panels can you fit? 36 panels

• Each solar panel is ideally rated at 175 watts (W) per panel, which means that each panel is theoretically capable of producing 175 W of electricity.

• Multiplying the total wattage (produced by the total number of panels) by a factor of 1.2 (which applies only to Cleveland) will give you the total projected output for one year. 36 panels x 175 W = a 6,300 W system; 6,300 x 1.2 = 7,560

What is the cost of the system?
• The cost of the system depends on the total rated wattage of the panels; installation costs $6 per watt. Therefore, the cost of the system is (6,300 W)($6) = $37,800.

• Would you be able to afford the 36-panel system given a budget of…

o $20,000? No. $30,000? No. $40,000? Yes!

• What if instead you wanted to purchase a smaller system (with fewer than 36 panels)? Would this be possible with the smaller budgets of $20,000 and $30,000 if you purchased a system that would provide you with…

o 75% coverage (of your roof)? (This would be 27 panels.)
   - $20,000? – No. (27 x 175 W x $6 = $28,350)
   - $30,000? – Yes!

o 50% coverage? (This would be 18 panels.)
   - $20,000? – Yes! (18 x 175 W x $6 = $18,900)
   - $30,000? – Yes!
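The worksheet arithmetic above can be checked with a short script. This is a minimal sketch using only the example bill data and the lesson's assumed constants (175 W panels, the 1.2 Cleveland factor, $6 per installed watt); it is not part of the NASA guide itself.

```python
# A minimal sketch of the worksheet arithmetic above, using the example bill data.
monthly = {  # month: (kWh, cost in dollars)
    "Jan": (862, 103.44),  "Feb": (757, 90.84),   "Mar": (712, 85.44),
    "Apr": (793, 95.16),   "May": (753, 90.36),   "Jun": (961, 115.32),
    "Jul": (1282, 153.84), "Aug": (1601, 192.12), "Sep": (1349, 161.88),
    "Oct": (935, 112.20),  "Nov": (807, 96.84),   "Dec": (881, 105.72),
}

total_kwh  = sum(kwh for kwh, _ in monthly.values())      # 11,693 kWh for the year
total_cost = sum(cost for _, cost in monthly.values())    # $1,403.16 for the year
print(f"cost per kWh:         ${monthly['Jan'][1] / monthly['Jan'][0]:.2f}")
print(f"average usage/month:  {total_kwh / 12:.0f} kWh")
print(f"average bill/month:   ${total_cost / 12:.2f}")

panels        = 36                 # panels that fit the 40 ft x 17 ft roof
system_watts  = panels * 175       # 6,300 W rated system
annual_output = system_watts * 1.2 # lesson's Cleveland factor -> 7,560
install_cost  = system_watts * 6   # $6 per rated watt -> $37,800
print(f"36-panel system: {system_watts} W, projected output {annual_output:.0f}, cost ${install_cost:,}")

for coverage, n in (("100%", 36), ("75%", 27), ("50%", 18)):
    cost = n * 175 * 6
    print(f"{coverage} ({n} panels): ${cost:,} -> fits $20k: {cost <= 20000}, fits $30k: {cost <= 30000}")
```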

References 1. National Aeronautics and Space Administration (n.d.). A brief mathematical guide to earth science and climate change. Retrieved from http://spacemath.gsfc.nasa.gov/SMBooks/SMEarthV2.pdf

173 Prevention of Liquid Loading In Horizontal Gas Wells

Student Researcher: Marsha E. Hupp

Advisor: Dr. Benjamin Thomas

Marietta College Department of Petroleum Engineering

Abstract Over the past few years, oil and gas companies have become very interested in shale gas reservoirs. However, shale formations are known to have very low natural permeability. Horizontal drilling and hydraulic fracturing have become necessary to make the gas production profitable. By taking advantage of these newer technologies, companies can now economically drill and produce these formations. An old problem, however, has arisen: liquid loading. Wells are losing production due to water and condensate building up in the wellbore, causing a large hydrostatic pressure on top of the gas formation. This hydrostatic pressure decreases the drawdown pressure from the reservoir pressure (PR) to the bottom hole flowing pressure (Pwf). By using artificial lift, the liquid is removed from the wellbore, allowing the gas to flow. Essentially, the well’s hydrostatic pressure is reduced dramatically, which increases the well’s drawdown pressure.

Project Objectives The purpose of this research report is to investigate methods used to prevent liquid loading in horizontal gas wells. By implementing one of the three types of artificial lift: gas lift, plunger lift or sucker rod pumping, the gas production rates and overall profit of a given well can be maximized. The three types will be described and an example of a real well application provided. Information regarding horizontal wells, pressure drawdown and how the problem of liquid loading begins is also included.

Methodology Used For horizontally drilled wells, the well is drilled vertically down (the straight-hole portion) to a designated depth. Thereafter the “deviated” portion of the well is drilled. As the drilling continues, the wellbore angle is built from the vertical toward the intended horizontal drilling phase. Drilling the deviated portion of the hole requires drilling several hundred feet of rock while building the “angle from vertical.” When the well has built eighty degrees of deviation from the original vertical wellbore, the well is considered horizontal. From there, the well extends horizontally through the formation containing the hydrocarbons. Due to abnormalities in the wellbore, the lateral portion is never one hundred percent horizontal. This creates low portions within the wellbore where fluid can accumulate. The horizontal portion of the wellbore is often equal or greater in volume than the vertical portion. The first horizontal well was drilled in 1927, but drilling horizontally on a large scale began in the 1980s. Horizontal wells increase reservoir contact, which in turn increases production. Gas wells can have several horizontal completions, which means the well can contain several laterals. These laterals are able to reach more of the reservoir than a normal vertical well and can be more than 4,000 feet long. Horizontal wells are environmentally beneficial as they minimize the amount of land affected by drilling rigs, drilling pads and wellheads.

In order to have the highest flow rate out of the well, the drawdown pressure must remain high. Drawdown pressure refers to the difference between the reservoir pressure and the bottom hole flowing pressure. The reservoir pressure is the average pressure within the reservoir formation. The reservoir has its highest formation pressure at the start of a well’s life. This pressure depletes over time as the volume of hydrocarbons and water within the reservoir is removed. The bottom hole flowing pressure is the hydrostatic pressure of the wellbore fluids, which is equal to the flowing gradient of the reservoir fluids multiplied by the depth. The fluids within the wellbore can be gas, oil and water. The flowing gradient for water is higher than that for oil or condensate.

Liquid loading becomes an even larger problem in mature gas fields. Most gas wells have liquid production from inception; however, since the reservoir pressure is initially so high, a high drawdown pressure is maintained. The reservoir pressure declines over time, which allows liquid to build up in the wellbore, creating a high hydrostatic pressure. When the hydrostatic pressure is greater than the reservoir pressure, the well will quit flowing. Liquid loading often occurs in horizontal wells. At first, the well will flow as normal, producing gas along with liquid at a constant rate. Eventually, the reservoir pressure will decrease, causing less of a pressure drawdown. The liquid will begin to build up in the low spots of the horizontal portion of the wellbore. Thereafter, slug flow will begin to occur. Slug flow is multiphase flow alternating between liquid plugs (slugs) and large gas pockets. Finally, the fluid will build up in the wellbore enough to fully restrict the well. Artificial lift extends the economic life of the well. It decreases the flowing bottom hole pressure and creates a larger pressure drawdown, thus enhancing produced volumes.
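A rough numerical illustration of this pressure bookkeeping is sketched below. The gradients, depth and reservoir pressure are assumed values chosen only to show how an accumulated liquid column raises the bottom hole flowing pressure (Pwf) and shrinks the drawdown (PR - Pwf); they are not data from the wells discussed in this report.

```python
# Hedged sketch of the drawdown calculation described above. All inputs are
# illustrative assumptions, not field data from this paper.

def bottomhole_flowing_pressure(gas_gradient_psi_ft, gas_column_ft,
                                liquid_gradient_psi_ft=0.0, liquid_column_ft=0.0):
    """Pwf = pressure of the flowing-gas column plus any accumulated liquid column."""
    return gas_gradient_psi_ft * gas_column_ft + liquid_gradient_psi_ft * liquid_column_ft

reservoir_pressure = 2500.0   # psi, assumed PR
depth              = 7000.0   # ft, assumed vertical depth
gas_gradient       = 0.05     # psi/ft, assumed flowing gas gradient
water_gradient     = 0.45     # psi/ft, roughly a fresh-water hydrostatic gradient

# Unloaded well vs. a well with a 2,000 ft liquid column built up in the wellbore.
pwf_dry    = bottomhole_flowing_pressure(gas_gradient, depth)
pwf_loaded = bottomhole_flowing_pressure(gas_gradient, depth - 2000.0,
                                         water_gradient, 2000.0)

print(f"drawdown, unloaded well: {reservoir_pressure - pwf_dry:.0f} psi")
print(f"drawdown, loaded well:   {reservoir_pressure - pwf_loaded:.0f} psi")
```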

Results Obtained Gas lift is used in the early life of the well for fluid removal, when the bottomhole pressure is still high and the well is still producing large volumes of liquid and gas. Gas lift is the continuous or intermittent injection of high pressure gas into the annulus. Through a gas lift valve carried in a mandrel, the high pressure gas is able to enter the tubing and mix with the produced reservoir fluids. It lightens the fluid gradient with gas, which creates a lower hydrostatic pressure. By lowering the hydrostatic pressure, the pressure drawdown increases. This allows the well to produce at a higher rate. For horizontal wells, gas lift can only be applied down to a vertical deviation of 70 degrees. Gas lift is limited to 70 degrees because of wireline activity, such as installation and workovers. Figure 1 provides a gas lift installation on a vertical wellbore.

Due to the lack of horizontal well case studies regarding gas lift, a vertical oil well was utilized to show the positives of gas lift. This well has a vertical deviation of approximately 45 degrees. The well’s production without gas lift was approximately 3 MMscfd of gas and 9,000 bpd of liquid. However, after gas lift was implemented, gas production rose to 4 MMscfd and liquid production rose to 11,000 bpd.

Plunger lift is the only form of the three artificial lift techniques that uses only the reservoir pressure to produce the well. Plunger lift is used to sweep the fluid in the tubing string to surface. A plunger is a piston acting as a mechanical interface between the formation gas and produced liquids. Plunger lift helps extend the life of a well as long as there is enough formation gas to push the plunger to surface. First, the well is shut in allowing formation gas to accumulate in the annulus through natural separation. Second, the reservoir pressure builds up in the casing annulus to a certain value and the production tubing opens at surface. Finally, a rapid transfer of gas moves from the casing to the tubing which creates a change in pressure across the plunger and liquids. This causes the plunger to move upward pushing all the liquids above it toward the surface via the tubing. For horizontal wells, conventional plunger lift check valves can be set at a maximum vertical deviation of 45 degrees while modified plunger lift check valves can be set at a maximum of 70 degrees. Figure 2 provides an example of plunger lift in a vertical well.

There are over 1,750 producing horizontal wells in the Greater Sierra Field, and approximately 1,100 of those wells are owned by Encana. The Greater Sierra Field is located 90 kilometers east of Fort Nelson in British Columbia. The main formation in this field is the carbonate Devonian Jean Marie Formation. The primary method to unload liquid from these low rate gas wells is the use of plunger lift technology. Plunger lift is also more effective in wells where the liquid is condensate rather than water, since condensate is less dense than water. In this field, studies on plunger lift technology have been abundant. While plunger lift has been successful in the Greater Sierra Field, optimization has been challenging. Operators have been re-designing where to land the bumper spring to maximize production. They have also been utilizing different types of plungers. The production data is shown in Table 1. Although production did not increase after the plunger lift system was implemented, it did keep the well producing economically. Since the well was able to flow at a rate below its critical rate, this increased the expected ultimate recovery of the well.

Sucker rod lift is also known as a rod pump. It is used once the well’s bottomhole pressure has dropped below roughly 300 psi. At this point, the hydrostatic pressure has become greater than the reservoir pressure and the well will no longer flow without artificial lift. For this scenario, a rod pump is the most economical form of lift. Rod pumps consist of a downhole pump connected to the pumping unit at the surface by rods. The pumping unit provides the power to stroke the pump up and down. The pump consists of a standing valve and a traveling valve. On each upstroke, the plunger moves upward, the standing valve opens, the traveling valve closes and fluid is produced to the surface equipment. On the down stroke, the standing valve closes and the traveling valve opens to take in another load of fluid. Pumping units can operate at 6 to 20 strokes per minute. Gas can then be produced up the annulus when fluid is effectively removed. Production rates of liquid can be roughly 100 barrels per day. For horizontal wells, the deviated wellbore limits the depth of the rods and the downhole pump. By lowering the pump into the curved portion of the well, hydrostatic pressure can be reduced and greater production rates can be obtained. Figure 3 provides a diagram of a sucker rod pump.

Oryx Energy began drilling horizontal wells in the Pearsall Field in 1985. The field is located in South Texas, approximately 100 miles southwest of San Antonio. Oil is produced from the Austin Chalk formation. Prior to 1985, wells in the Pearsall Field were vertical wells with sucker rod lift. Wells in this field came on at very high rates, around 1,000 bopd. However, these wells would begin to decline in production, and when rates dropped below 100-200 bfpd, they needed to be put on artificial lift. When sucker rod lift was first implemented in the horizontal wells, the pump was located in the vertical portion. Once the well had been pumped off, lowering the pumps was the next step. However, to what depth could they be lowered in the curved portion without causing damage to the pump, rods or wellbore? Equation 1 determines the maximum length a rigid tool can be run through a dog-leg to avoid any potential problems; a sketch of this check follows below. By lowering the pumps into the lateral, an additional 200 psi of drawdown was available. This created a production increase. Table 2 outlines the production increase for horizontal wells in the Pearsall Field.
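The sketch below evaluates the dog-leg check of Equation 1 (see the Figures, Tables and Equations section) as reconstructed in this report: L = 2·Ro·[1 - (Rx/Ro)^2]^0.5 with Ro = R + 0.5·ID, Rx = R - 0.5·ID + OD and R = 5730/A. The casing ID, pump OD and dog-leg severity in the example call are illustrative assumptions, not Pearsall Field values.

```python
# A minimal sketch of the dog-leg check referenced above (Equation 1).
import math

def max_rigid_tool_length(dogleg_deg_per_100ft, hole_id_ft, tool_od_ft):
    """Maximum length of a rigid tool that can pass through a dog-leg (Equation 1)."""
    R  = 5730.0 / dogleg_deg_per_100ft        # radius of curvature, ft
    Ro = R + 0.5 * hole_id_ft                  # radius to the outer wall of the hole
    Rx = R - 0.5 * hole_id_ft + tool_od_ft     # radius swept by the tool body
    return 2.0 * Ro * math.sqrt(1.0 - (Rx / Ro) ** 2)

# Example (assumed values): 6 deg/100 ft dog-leg, 4.95 in hole ID, 3.5 in pump OD.
print(f"max rigid tool length: {max_rigid_tool_length(6.0, 4.95 / 12.0, 3.5 / 12.0):.1f} ft")
```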

Significance and Interpretation of Results The theory behind liquid loading is straightforward. As the reservoir pressure decreases, the pressure drawdown decreases unless artificial lift is implemented. By using gas lift, plunger lift or sucker rod lift, the hydrostatic head can be lowered dramatically, causing a decrease in the bottomhole flowing pressure, which in turn increases the pressure drawdown. The case studies provided all showed an increase in production, whether in gas or oil production, or created a greater expected ultimate recovery. Liquid loading is a prevalent problem hindering horizontal gas wells across the world, including the Marcellus Shale. By utilizing these three artificial lift techniques, liquid can be removed, allowing production and profit to increase.

Figures, Tables and Equations

Figure 1. Gas Lift Concept and Gas Lift Valves

Figure 2. Plunger Lift Diagram

Table 1. Production Data in the Greater Sierra Field

Time (months) | Production Rate (m3/day)
0             | 56,000
12            | 14,000
36            | 10,000
Critical Flow | 12,000
Plunger Lift  | 9,000

Figure 3. Sucker Rod Lift

Table 2. Production in Pearsall Field

Group | Number of Wells | Average Vertical Test (bopd/bwpd/Mscfd) | Average Lateral Test (bopd/bwpd/Mscfd) | Average Incremental BOPD
I     | 14 | 106/36/174 | 218/134/251 | 112
II    | 13 | 109/34/151 | 160/138/274 | 51
III   | 20 | 106/30/172 | 167/125/257 | 61

L = 2 * Ro * [1 - (Rx / Ro)^2]^0.5

Ro = R + 0.5 * ID

Rx = R - 0.5 * ID + OD

R = 5730 / A

A = dog leg angle in degrees/100 ft

Equation 1. Maximum Length a Tool Can Be Set in the Lateral

Acknowledgments The author of this paper would like to thank Dr. Benjamin Thomas, Dr. Robert Chase and Professor David Freeman for their support and project guidance while conducting research and planning out this research paper.

References
1. Brown, Kermit E. Gas Lift Theory and Practice Including a Review of Petroleum Engineering Fundamentals. Petroleum Publishing Company, 1973. Print.
2. Cortines, J.M., G.S. Hollabaugh. “Sucker-Rod Lift in Horizontal Wells in Pearsall Field, Texas.” Society of Petroleum Engineers. 1992.
3. Economides, Michael J., A. Daniel Hill and Christine Ehlig-Economides. Petroleum Production Systems. Prentice Hall, 1994. Print.
4. Gas Lift Case Study provided by BP.
5. Hein, Norman W. et al. “Sucker Rod Lifting Horizontal and Highly Deviated Wells.”
6. Oilfield Glossary.
7. Sask, D., D. Kola and T. Tuftin. “Plunger Lift Optimization in Horizontal Gas Wells: Case Studies and Challenges.” Canadian Society for Unconventional Gas and Society of Petroleum Engineers International. 2010.
8. Swearingen, Jerry, Engineer for Weatherford. Personal interview. 28 Mar. 2011.

177 The Great Planetary Debate

Student Researcher: Amanda E. Hutchinson

Advisor: Dr. Jane Zaharias

Cleveland State University Department of Teacher Education

Abstract During a 2-day lesson about the planets, the students were required to complete an activity designed to replicate authentic debate within the scientific community about what criteria a Solar System body needed to meet in order to be considered a planet. The students were given NASA data about different objects in the Solar System and had to create a list of criteria that all of our 8 planets meet. Students applied these criteria to the data for the objects and then had to list the objects that were planets. The students then had to stand up to present and defend their choices to their classmates.

Lesson Objectives The main goal of this lesson was twofold: having the students acquire the requisite knowledge about the planets in the Solar System while learning how to generate an argument and participate in a simulation of scientific debate. The student objectives of the lesson were as follows:
• Describe the properties and features of the eight planets.
• Define satellite, identify two planets that have no satellites, and describe the major satellites of the other planets.
• Analyze criteria and apply them to a set of data.
• Generate an argument based on real world data.
The student objectives are measurable and rooted in Bloom’s Taxonomy of Learning Objectives.

Lesson I designed this lesson with the intention of my students learning about what makes a celestial body a planet and why the dwarf planet Pluto no longer fits the criteria to be a planet. The data for the lesson came from the New Horizons: To Pluto and Beyond guide, in a lesson titled “What is a Planet?”; I made some modifications to suit my classroom needs. The students were given a double-sided page that contained some background information, the procedure for the laboratory activity, the data table of 17 celestial bodies, a place to list their chosen criteria and a table to list the bodies they chose to be planets.

Data from NASA

Object | Orbits | Distance from body it orbits (km) | Density (g/cm3) | Radius (km) | Mass (kg) | Period of Orbit (yrs) | Period of Rotation (hrs) | Satellites
1  | Sun       | 413 million  | 2.1 | 467       | 0.008 x10^23   | 4.6   | 9.1     | 0
2  | Pluto     | 19,600       | 1.2 | 593       | 0.016 x10^23   | 6.39  | 153.3   | 0
3  | Sun       | 149 million  | 5.5 | 6371      | 59.7 x10^23    | 1     | 23.9    | 1
4  | Object 14 | 238,000      | 1.2 | 249       | 0.00073 x10^23 | 1.37  | 32.9    | 0
5  | Sun       | 8826 million | 2.3 | 1200      | 0.166 x10^23   | 560   | ?       | 1
6  | Sun       | 172 million  | 2.4 | irregular | 7.2 x10^23     | 1.76  | 5.27    | 0
7  | Object 8  | 671,000      | 3   | 1569      | 0.48 x10^23    | 3.6   | 86.4    | 0
8  | Sun       | 778 million  | 1.3 | 69,911    | 18990 x10^23   | 11.9  | 9.9     | 63
9  | Sun       | 227 million  | 3.9 | 3390      | 6.4 x10^23     | 1.88  | 24.6    | 2
10 | Sun       | 57 million   | 5.4 | 2440      | 3.3 x10^23     | 0.24  | 1407.5  | 0
11 | Object 3  | 384,000      | 3.3 | 1738      | 0.74 x10^23    | 27.3  | 655     | 0
12 | Sun       | 4504 million | 1.6 | 24,624    | 1024 x10^23    | 164.8 | 16.1    | 13
13 | Sun       | 5914 million | 2   | 1150      | 0.13 x10^23    | 247.9 | 153.3   | 3
14 | Sun       | 1429 million | 0.7 | 58,232    | 5684 x10^23    | 29.5  | 10.7    | 46
15 | Sun       | 230 million  | ?   | 3         | ?              | 5.5   | 1.7     | 0
16 | Sun       | 2871 million | 1.3 | 25,362    | 868 x10^23     | 84.02 | 17.2    | 27
17 | Sun       | 108 million  | 5.2 | 6052      | 48.7 x10^23    | 0.62  | -5823.4 | 0

On the first day, the students were asked to create a list of criteria, based on measurable characteristics, which all planets would meet. The students were then asked to apply their developed set of criteria to a table that contained the known properties of 17 celestial bodies. Analyzing the data with the criteria, the students were able to come up with a list of 8 celestial bodies that are the planets in our Solar System.
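As an aside, the criteria-matching step the students performed by hand can be expressed compactly in code. The sketch below transcribes only three of the 17 objects from the table above and uses example criteria a student group might propose (orbits the Sun directly, radius above some cutoff); these are illustrative choices, not the IAU definition, and the snippet is not part of the lesson handout.

```python
# A hedged illustration of applying a criteria list to the NASA data table above.
# Only objects 3, 11 and 13 are transcribed; the criteria are example choices.

objects = [
    {"id": 3,  "orbits": "Sun",      "radius_km": 6371, "mass_1e23_kg": 59.7},
    {"id": 11, "orbits": "Object 3", "radius_km": 1738, "mass_1e23_kg": 0.74},
    {"id": 13, "orbits": "Sun",      "radius_km": 1150, "mass_1e23_kg": 0.13},
]

def meets_criteria(body):
    # Example criteria a group might choose: orbits the Sun directly and is "large enough".
    return body["orbits"] == "Sun" and body["radius_km"] > 2000

planets = [body["id"] for body in objects if meets_criteria(body)]
print("objects classified as planets:", planets)   # -> [3]
```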

The second day was the group presentations of their chosen criteria and the resulting objects that fit their criteria. Each group was given 5 minutes to present followed by 3 minutes to field questions and

criticisms from their peers. After all presentations were complete, the students came up with a final list of which celestial bodies were the planets.

At the conclusion of the lesson, I provided the students with the current International Astronomical Union’s resolution concerning the criteria for a planet and the identities of the celestial bodies given in the table. We then took the rest of the class period to discuss the IAU’s resolution, why Pluto was reclassified, and why debate is still occurring in the scientific community.

Pedagogy This lesson was a great example of using inquiry-based education in a science classroom to produce authentic results. Current educational research advocates the use of inquiry-based instruction as the preferred method in science instruction. This lesson follows these guidelines by limiting the amount of direct instruction and allowing the students to explore the data and draw from prior experiences with the topic. The inquiry nature of the lesson challenged the students to think beyond the data presented and to look critically at their own work and the work of their peers and experts in the field of astronomy. The lesson also follows Bloom’s Taxonomy to incorporate higher-level thinking in the students. This lesson took place over the first 2 days of the Solar System half of our Sun and its Solar System unit.

The lesson is aligned with the following State of Ohio Academic Content Science Standards:
Earth-Space Science - 9 - Benchmark C: Indicator 3
Doing Scientific Inquiry - 11 - Benchmark A: Indicators 2, 6
Nature of Science - 11 - Benchmark A: Indicator 3

Assessment The assessment of the laboratory activity contained three parts. Each part represents a different learning goal: gaining knowledge of what a planet is, generating an argument and participating in a scientific debate. In my class, all laboratory activities are graded out of 25 points. For this laboratory activity, the students were graded on the following parts: development of the criteria, application of the criteria to the data and the presentation of their planetary choices to the class.

Results The students were highly engaged throughout all parts of the lesson. They became especially lively during the debate portion of the laboratory activity. Since the completion of the lesson, the students have often referred back to the activities that they completed, most notably the debate portion. Based on this feedback, I will look for more ways to implement debate-themed lessons within my curriculum. Academically, the students performed well on this activity, with the average score being 21/25, or 84%. This is slightly higher than the normal laboratory grade. This achievement could be the result of several factors, including a more engaging lesson and a portion of the grade being participation in the debate, which gave students a way to earn points without having to do much extra written work.

Conclusion This was a well-designed lesson, as it promoted inquiry, presented essential content knowledge in an engaging manner, was properly aligned with state standards and produced authentic results. Without the use of NASA’s data concerning the properties of different celestial bodies in our Solar System, this lesson would not have been possible as it was written. I plan to reuse this lesson in my future classes and will make modifications as new information about planets is gathered and adopted by the scientific community.

179 Fiber Metal Laminates

Student Researcher: Kathryn R. Hyden

Advisor: Dr. Pedro Cortes

Youngstown State University Department of Mechanical Engineering

Abstract Studies have shown that the combination of high-performance composite material with tough metals increases the mechanical performance of the resulting hybrid material. Such fiber-metal laminates combine the superior specific strength, stiffness, and fatigue properties of composites with the excellent machinability and toughness of most engineering metals. The present work focuses on developing a novel lightweight structure constituted by reinforced thermoplastic materials based on self-reinforced polypropylene, magnesium, and aluminum alloy. The fiber-metal laminates will be manufactured using a low cost molding press process by alternating layers of PURE® composite with magnesium or aluminum layers. Various configurations are being manufactured and tested in order to optimize the dynamic properties of the structure. Both fracture and impact properties of the hybrid structure will be examined. Thus far, only preliminary testing has been conducted but the results look promising.

Project Objectives Currently, many of the fiber-metal laminates being manufactured are based on thermosetting composite materials. Unfortunately, thermosetting based structures are associated with a number of limitations including low interlaminar fracture toughness, long processing cycles, and repair difficulties. However, many of these drawbacks could be circumvented with the use of advantageous thermoplastic materials rather than thermosetting based structures. The use of thermoplastic materials allows for shorter processing times, superior fracture toughness and excellent impact properties. Recent developments in the area of hybrid structures have shown that the inclusion of CURVE®, a thermoplastic self-reinforced material, in fiber-metal laminates greatly improves impact properties. It would therefore be expected that the inclusion of a high-impact thermoplastic composite would make these laminates attractive for blast applications such as bullet proof vests, ship structures, cargo containers, and explosive proof bins.

This project looks into incorporating reinforced thermoplastic materials into fiber-metal laminates in order to develop a lightweight structure possessing desirable impact and toughness properties. The fiber-metal laminates will consist of alternating layers of Aluminum 2024-T3 and PURE® composite, which is a high-impact, self-reinforced thermoplastic structure. Fracture toughness and impact properties of the fiber-metal laminates will be examined for a range of loading and temperature conditions as well as various layering configurations.

Methodology Used Before manufacturing the fiber-metal laminates, the PURE® composite needed to be tested in order to determine its material properties. The PURE® composite was available in large sheets, allowing various thicknesses to be obtained easily by placing multiple layers in a low cost molding press. Thus far, test specimens have been created using 12 layers. The sheets are cut into rectangles and stacked in a frame. The frame is then placed in the press at a specified temperature. Once the press has been closed, the block is allowed to heat for two minutes and then left to cool to room temperature before the pressure is removed. After being removed from the press, the blocks can be cut into test specimens with a band saw and polished to ensure that edge cracks cannot affect the results.

The fracture toughness of the PURE® composite is being evaluated using a general fracture toughness equation which is given by:

180 P 2 C Gc  2b a

Where P is the applied force, b the width of the specimen, C is the specimen compliance, and a is the crack length. The fracture toughness tests are being performed with the use of an Instron and a 100 N load cell. At this point, both Mode II and Mode I/II tests have been completed and can be seen in Fig. 1. As it can be seen from Fig. 1, Mode I/II requires that a notch be cut through half the thickness of the specimen, where the crack will begin. Mode II uses the same loading arrangement as Mode I/II but does not have the portion of the specimen removed.

For both cases, tests were run with extension rates of 0.1 mm/min, 1 mm/min, and 10 mm/min. Required data was gathered in order to calculate the fracture toughness of the material. The specimens being tested were created with a double layer of aluminum foil 25 cm from the edge. This provides a consistent place for the crack to begin propagating during testing. Liquid paper is then painted along the side of the specimen, allowing the crack to be visibly seen as it grows with an increased load. As the load was applied, load readings and extension lengths were taken at crack lengths from 0 to 20 mm in increments of 2.5 mm.
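The data reduction implied by the fracture toughness equation can be sketched as follows. The load, extension and width values below are placeholders rather than measured data from this study; the point is only to show compliance C = extension/load being fitted against crack length so that dC/da, and hence Gc = (P^2/2b)(dC/da), can be evaluated.

```python
# A hedged sketch of the compliance-method reduction described above.
# All numbers are placeholders, not measurements from this work.
import numpy as np

b = 0.02                                   # specimen width, m (assumed)
crack_len = np.linspace(0.0, 0.020, 9)     # crack lengths 0-20 mm in 2.5 mm steps
load      = np.array([90, 85, 80, 76, 72, 68, 65, 62, 60.0])                 # N
extension = np.array([0.5, 0.9, 1.4, 2.0, 2.7, 3.5, 4.4, 5.4, 6.5]) * 1e-3   # m

compliance = extension / load              # C = extension / load at each crack length
# Fit C(a) to a smooth curve so dC/da can be evaluated at each recorded crack length.
coeffs = np.polyfit(crack_len, compliance, 2)
dC_da  = np.polyval(np.polyder(coeffs), crack_len)

Gc = load ** 2 / (2.0 * b) * dC_da         # J/m^2 at each recorded crack length
print(np.round(Gc, 1))
```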

Another primary concern is the impact properties of the structures. Though no impact testing has been completed yet, tests will be performed in the future and the materials will be evaluated under low and high loading impact conditions. For the low impact testing, a drop hammer tower with a piezoelectric cell will be used in order to achieve loading conditions up to 5 m/s. Here, a high speed camera will be used to record the impact event. In the case of the high velocity impact tests, a gas gun will be used in order to achieve loading rates up to the specimen’s perforation. The low and high velocity impact tests will be performed under a wide range of temperatures. A series of optical analyses will also be performed to elucidate the failure mechanisms of the samples tested during impact events. A prediction of the specific perforation energy under low and high impact loading conditions will also be carried out in this research study.

Results Obtained At this point the results are solely preliminary, as many unexpected obstacles have arisen in the testing and manufacturing process. As was previously mentioned, no impact testing has been completed and the fiber-metal laminate has yet to be constructed. Therefore the only results currently available are those obtained from testing the PURE® composite samples in Mode II and Mode I/II fracture toughness.

Thus far, the results seem encouraging with the exception of a few inconsistent outcomes. Because of the inconsistent results it was necessary to run many tests in order to determine which tests were deficient. It was discovered that the toughness of the material is significantly affected by how the specimens are heated in the press. If they are placed in the press after it has already been brought to the appropriate temperature, they seem to perform better than if they are placed in the press and then slowly heated to the desired temperature.

The most complete results at this point in the research are for Mode II fracture toughness testing and are illustrated in the figures below. It was concluded from the Mode II testing at 1 mm/min that the fracture toughness was approximately 2650 J/m2. It can be seen through the graphs that there are some inconsistencies but as more results are available, it will be possible to more accurately gauge the success of the research and any advantage of using PURE® composite.

181 Figures and Tables

Figure 1. Schematic testing geometries for Mode I, Mode I/II and Mode II (left to right)

Figure 2. Load vs. Extension

Figure 3. Specimen Compliance

Figure 4. Fracture Toughness vs. Crack Length

References
1. P. Cortes, The Mechanical Properties of High-Temperature Fiber-Metal Laminates, University of Liverpool, PhD Thesis (2005).
2. W.J. Cantwell, P. Cortes, R. Abdullah, G. Carrillo-Baeza, L. Mosse, M. Cardew-Hall, P. Compston and S. Kalyanasundaram. “Novel fiber-metal laminates based on thermoplastic matrices”. 5th ICCST (2005).

182 Math and Comets: Discovering the Orbital Period of a Comet

Student Researcher: Christopher J. Iliff

Advisor: Dr. Sandra Schroeder

Ohio Northern University Department of Mathematics

Abstract For my lesson I will be discussing the trajectory of comets when their paths are elliptical. I will first pose the question: is there a way to determine whether or not we will be able to see a comet again after its initial sighting, and if we can, are we able to determine when it will reappear? I will start a discussion about what the students already know about comets and use this to lead into a discussion of their flight paths and, eventually, an overall review of ellipses. This review will include topics such as conic sections, eccentricity, and minor and major axes. At the end of this first day I will give the students a worksheet in order to reinforce what we discussed about ellipses and strengthen their understanding for later applications.

The next day I will apply our understanding of ellipses to the orbit of a comet. I will ask the class how we can be sure we will see a comet again when we’ve only seen it once. I will review Kepler’s Laws of Planetary Motion and discuss the equation of an ellipse. When this is done, I will pose an example of a comet and work through the problem with the class to find the length of the comet’s orbit and when we will be able to see it again. After that I will hand out a worksheet that will allow the students to apply their newly learned knowledge of the paths of comets and the math behind them. After students are given time to complete this worksheet in small groups, we will go over it as a class and then review what we have learned throughout the past few days.
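For reference, the kind of worked example described above reduces to a few lines of code. This is a minimal sketch assuming Kepler's third law in astronomical units and years (T^2 = a^3 for bodies orbiting the Sun); the perihelion and aphelion values are approximate figures for Halley's Comet and are not taken from the lesson materials.

```python
# A small sketch of the comet calculation described above, assuming Kepler's
# third law in AU and years (T^2 = a^3 for bodies orbiting the Sun).

def orbital_period_years(perihelion_au, aphelion_au):
    semi_major_axis = (perihelion_au + aphelion_au) / 2.0   # half the long axis of the ellipse
    return semi_major_axis ** 1.5                           # T = a^(3/2) in years

# Example values close to Halley's Comet: perihelion ~0.59 AU, aphelion ~35 AU.
period = orbital_period_years(0.59, 35.1)
print(f"orbital period: about {period:.0f} years")          # roughly 75 years
```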

Objectives
• The student will be able to work with the definition of ellipses as well as the graphs and properties related to this shape.
• The student will be able to use equations and all given information to work with the orbits of comets.
• The student will be able to determine when he or she may be able to see the comet again.

Benchmarks
• Formulate a problem or mathematical model in response to a specific need or situation, determine information required to solve the problem, choose a method for obtaining this information, and set limits for an acceptable solution.
• Apply mathematical knowledge and skills routinely in other content areas and practical situations.

Methodology When discussing the pedagogy related to this lesson plan, I believe that it relates to one where discovery is the key basis of learning. Students are encouraged to expand on what they already know to discover new and interesting phenomena that happen in space. Using this new information, they can then apply mathematical concepts and models to derive a new equation for determining something that was not possible with the beginning equations. This creates an environment where students form their own conclusions from the information they already know, instead of reading them from a book. By using mathematical reasoning in this way, students are also strengthening their mathematical skills. Students involved in this process must be actively engaged in order to come to a conclusion, because they must make a series of smaller conclusions that lead to the answer of the main question. Given the idea of a discovery-based lesson plan, all the background knowledge for Kepler’s Laws of Planetary Motion, ellipses and descriptions of comets can be found in a variety of resources, including the internet.

Assessment When implemented in class, the class enjoyed getting out of the textbook for a while and, in essence, writing their own textbook chapter full of their own conclusions. At first, I was a little worried about how they would react when I brought up the idea of space and comets in a math class, but after some conversation they warmed up to the idea and gradually gave me their own input on what they knew about the subject. Unfortunately, given my time frame I was not able to give them a written assessment over the material covered in the lesson, so instead I had them write a one-paragraph response to the lesson. The consensus seemed to be that they enjoyed the discovery aspect of the lesson and that the information about ellipses and Kepler’s Laws was absorbed well.

Acknowledgments In conclusion, I really enjoyed implementing the things I learned during the workshop and other information I have learned about STEM since. The students really enjoyed the lesson plan, which makes me think the use of NASA materials will be very beneficial throughout my teaching career. My only concern was the lack of a written assessment, which I was not able to implement because of the small timeframe of my field experience. Overall, this was a great experience, and I look forward to working with NASA and using the materials that they offer to get my students excited about the mathematics that isn’t just on our own planet, but part of the universe as a whole.

184 Antacid Tablet Race

Student Researcher: Tariq H. Ismail

Advisor: Ms. Karen Henning

Youngstown State University Department of Education

Abstract This is an introductory lesson to Reaction Rates. In this lesson the NASA core materials for the Antacid Tablet Race lesson will be used, but modified. This lesson has three parts. The student will learn to think using experiences and past knowledge, analyze experimental data, make a hypothesis on reaction rates from empirical data, and by doing so learn the concept of reaction rates and some of the factors that affect it. By the end of this lesson students should understand that temperature and surface area have an impact on Reaction Rates, and how that relates to rocket fuel and propellants.

Lesson Students will initially start by coming up with a hypothesis of what reaction rates are and what types of factors can affect reaction rates, using their prior knowledge of the topic or anything they have experienced in their lives. They will also give one example to support their hypothesis. This initial part should take 10 minutes and all answers should be placed on the Reaction Rates worksheet.

Students will then perform a set of guided experiments and record their data on the provided worksheet. These experiments are based on the Antacid Tablet Race lesson created by NASA core. The Reaction Rates worksheet outlines the steps that are to be followed in this modified lesson. After completing the experiments, students will analyze their data and revise or confirm their initial hypothesis. This second part should take 40 minutes.

Finally, the teacher will go over the key concepts of the lesson and clear up any questions that students have. In this part of the lesson the teacher will need to make sure that students understand the results they should have obtained, and what this means in relation to rocket fuels and propellants. This conclusion should take only 10 minutes.

Objectives
• To understand Reaction Rates
• To explore some factors that affect Reaction Rates
• To understand how Reaction Rates are vital in fuels and propellants.

Alignment
• Subject: Science
• Standard: Physical Sciences
  Students demonstrate an understanding of the composition of physical systems and the concepts and principles that describe and predict physical interactions and events in the natural world. This includes demonstrating an understanding of the structure and properties of matter, the properties of materials and objects, chemical reactions and the conservation of matter. In addition, it includes understanding the nature, transfer and conservation of energy, as well as motion and the forces affecting motion, the nature of waves and interactions of matter and energy. Students also demonstrate an understanding of the historical perspectives, scientific approaches and emerging scientific issues associated with the physical sciences.
  Grade Range: By the end of the 9-10 program. Benchmark B: Explain how atoms react with each other to form other substances and how molecules react with each other and other atoms to form even different substances.
• Standard: Scientific Inquiry
  Students develop scientific habits of mind as they use the processes of scientific inquiry to ask
valid questions and to gather and analyze information. They understand how to develop hypotheses and make predictions. They are able to reflect on scientific practices as they develop plans of action to create and evaluate a variety of conclusions. Students are also able to demonstrate the ability to communicate their findings to others.
  Grade Range: By the end of the 9-10 program.

Underlying Theory This lesson involves two types of scientific inquiry, discovery learning and guided inquiry. In the first part of the lesson students are forced to draw upon real life experiences to discover a new concept. Then in the second part of the lesson students are guided through a set of experiments and questions that should lead them to understanding the concepts of Reaction Rates. This lesson also uses the constructivist approach, as students construct knowledge through their experiences and experiments. Finally, this lesson also involves the use of the scientific method in forming a hypothesis, testing it, and revising it based on empirical data.

These skills that are used in the lesson are all essential skills for a scientist and will enable students to understand science better. Students should be able to retain information longer due to the fact that they are constructing their own knowledge through their experiences.

Student Engagement Students are constantly engaged in this lesson, as they have to answer questions on a worksheet, perform a set of experiments, and use the scientific method.

Resources Every student needs a Reaction Rates handout. Then each group of three to four students will need:
• 2 beakers (500 ml)
• 6 antacid tablets
• a pair of tweezers or forceps
• a stopwatch
• boiling water
• ice water
• water at room temperature
• a pestle and mortar

Results Students should find that temperature and surface area have an impact on Reaction Rates.

Conclusions
1. The Reaction Rate is the speed of a reaction. This is the speed at which the molecules within the reactants interact with each other.
2. The powdered tablet reacts faster than the whole tablet; this is because the powdered tablet has a larger surface area in contact with the water. Therefore the reaction rate depends on the surface area that is in contact with the other reactants.
3. The hot water should have reacted the fastest, while the iced water should have reacted the slowest. This means that reaction rates are affected by temperature. This is because particles that are warmer have more energy, causing more frequent collisions.
4. Some other factors that affect reaction rates are concentration, catalysts, reaction order and pressure.
5. Atoms and molecules are the particles of matter that are interacting in a chemical reaction. Reaction rates are actually the speed at which bonds are broken and formed between molecules or atoms of their respective elements.
6. Rocket fuels and propellants are usually liquids, gases, or very finely ground solids because they can react easily due to the large surface area available to react compared to larger solid sources.
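For teachers who want to extend the analysis, the recorded times can be converted into relative rates with a few lines of code. The sketch below uses placeholder times rather than classroom data, and simply treats the relative reaction rate as 1/(average dissolution time), in line with conclusions 1-3 above.

```python
# A minimal sketch of turning the recorded dissolution times into relative
# reaction rates (rate taken as 1/time). The times below are placeholders.

trials = {
    "crushed tablet, tap water": [25, 28, 27],     # seconds, assumed
    "whole tablet, tap water":   [62, 60, 65],
    "whole tablet, hot water":   [35, 33, 34],
    "whole tablet, iced water":  [110, 118, 114],
}

for condition, times in trials.items():
    avg_time = sum(times) / len(times)
    rate = 1.0 / avg_time                          # relative reaction rate, 1/s
    print(f"{condition:28s} avg {avg_time:5.1f} s   relative rate {rate:.3f} 1/s")
```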

186 Chemical Reactions, Kinetics: Reaction Rates!!!!

Good morning! Today we are going to be learning about reaction rates! Do not worry about getting any wrong answers on part A of today's lesson; this is just to see what prior knowledge you have of the topic. You may consult your neighbors, but try and come up with some ideas on your own. Please answer the following questions below:

Part A

Using your prior knowledge:

1) What do you think a Chemical Reaction Rate is? ______

2) Which factors do you think affect reaction rates (3 maximum)? ______

3) Using the above information, come up with an initial hypothesis of what a Reaction Rate is and what Factors affect it. ______

4) Give a real world example of one of your factors that affects reaction rates and try to explain why this is so: ______

Now move on to part B and let’s see what the experiments can tell us about reaction rates!

Part B: In part B we will perform a practical experiment. Please follow the directions below and record the data in the tables below each section. Please work in groups of 4 and remember to wear protective eye wear and lab coats. Materials for the set of experiments:
• 2 beakers (500 ml)
• 6 antacid tablets
• a pair of tweezers or forceps
• a stopwatch
• boiling water
• ice water
• water at room temperature
• a pestle and mortar

Experiment 1: Determination of Whether Powdered or Whole Antacid Tablets React Faster.

Method:
1. Fill two 500 ml beakers with 250 ml of water each; make sure both are filled with water from the same source so that they are approximately the same temperature.
2. Now crush up one of the antacid tablets using the pestle and mortar.
3. Try to empty the entire crushed tablet onto a wax paper sheet.
4. Have one student ready with two stopwatches, one in each hand. Then pour the crushed and the whole antacid tablets into the beakers. As soon as the substances are dropped in, the timing person should start the stopwatches.

5. When the substances are done reacting with the water, stop the respective stopwatch and record the time taken in the table below.
6. Repeat this experiment two more times.

Results: Crushed vs. Whole

Trial #  | Crushed tablet time taken to react (s) | Whole tablet time taken to react (s)
1        |                                        |
2        |                                        |
3        |                                        |
Averages |                                        |

Experiment 2: The effects of Temperature on Reaction Rates

Method:
1. Fill two 500 ml beakers with 250 ml of water each, one beaker with boiling water from the kettle and one with tap water. (Be careful when handling hot water; wear oven gloves.)
2. Using a thermometer, measure and record the temperature of the water in each beaker for every trial.
3. Place an antacid tablet into each beaker of water; make sure to start a timer as soon as each tablet is dropped into the beaker.
4. Measure and record the time taken for the respective tablets to dissolve in the table below.
5. Repeat this experiment two more times with exactly the same settings.
6. Now, instead of using boiling water, use iced water provided by the teacher. Run this experiment three times as well and record the data in the tables below.

Results:

Boiling Water vs. Tap Water

Trial #  | Temperature of Boiling Water (°C) | Time taken for tablet to dissolve (s) | Temperature of Tap Water (°C) | Time taken for tablet to dissolve (s)
1        |  |  |  |
2        |  |  |  |
3        |  |  |  |
Averages |  |  |  |

Iced Water vs. Tap Water

Trial #  | Temperature of Iced Water (°C) | Time taken for tablet to dissolve (s) | Temperature of Tap Water (°C) | Time taken for tablet to dissolve (s)
1        |  |  |  |
2        |  |  |  |
3        |  |  |  |
Averages |  |  |  |

Part C: Now that we have our empirical data, we can analyze it and use the analysis to revise our hypothesis. Please answer the following questions based on your data from above.

1. From the first experiment, what can we conclude, based on the reaction times, about the reaction rates of the powdered and whole tablets? ______
2. What difference between the powdered and whole tablet exists that makes the two react at totally different rates? (Hint: Think of the amount of substance in contact with the water.) ______
3. From the second experiment, what can we conclude, based on the reaction times at the different water temperatures, about the reaction rates of the antacid tablet? ______
4. Which had the fastest (highest) and which had the slowest (lowest) reaction times (rates)? ______
5. From our results, we can make the conclusions that: ______
6. Using the answers to these questions and our results, revise your hypothesis on what a reaction rate is and the factors affecting it. ______
7. Almost done! Finally, are these the only factors that you suspect affect reaction rates? If not, please list a few more that you think are possible factors. (Do not worry if you are wrong; we will be discussing more factors in the upcoming lessons.) ______
8. What could be possible sources of error in our experiments (3 minimum)? ______
9. What role could atoms and molecules play in reaction rates? ______

Part D: Good job on the experiments and guided inquiry. Below is a summary of the results you should have found (often, due to experimental error, we do not achieve the expected results) and the conclusions you should have come up with.

1. Reaction Rate is the speed of a reaction. This is the speed at which the molecules of the two reactants interact with each other. In these experiments, the reaction times reflect the reaction rates, as the time taken for the reaction indicates the speed of the reaction.
2. The powdered tablet reacts faster than the whole tablet; this is due to a larger surface area in contact with the water. Therefore the reaction rate depends on the amount of surface area that is in contact with the other reactants.
3. The hot water should have reacted faster than the tap water and iced water, while the iced water should have reacted the slowest. This means that reaction rates are affected by temperature. This is because particles that are warmer have more energy and bump more vigorously into other molecules, causing them to react faster due to more frequent collisions.
4. Other factors that affect reaction rates, which we will talk about in the following weeks, are concentration, catalysts, reaction order and pressure.
5. Atoms and molecules are the particles of matter that are interacting in a chemical reaction. Reaction rates are actually the speed at which bonds are broken and formed between molecules or atoms of their respective elements.

Please take a look at the following websites for more information:

For atoms and molecules

Surface Area

190 Inhibition of Amyloid Beta 1-42 Peptide Aggregation

Student Researcher: Emma B. Jenkins

Advisors: Qiuming Wang and Jie Zheng

The University of Akron Department of Chemical and Biomolecular Engineering

Abstract Alzheimer’s disease (AD) is a chronic neurodegenerative disease that affects an estimated 5.2 million Americans and is the seventh-leading cause of death. A neuropathological hallmark of AD is the formation of insoluble amyloid plaques in the human brain caused by the misfolding and self-assembly of the 39- to 43-residue-long amyloid beta (Aβ) peptides. It is believed that soluble Aβ oligomers, rather than monomers (the initial species) or insoluble mature fibrils (the final species), are the major toxic agents responsible for synaptic dysfunction in the brains of patients with AD. Inhibition of the formation of these toxic Aβ oligomers has emerged as an approach to developing medications for AD. Due to the highly hydrophobic nature of Aβ and the characteristic cross-β structure in Aβ oligomers/fibrils, promising Aβ inhibitors should have the following chemical features to interfere with Aβ aggregation: short sequences (less than 14 residues or groups) for uncomplicated synthesis and characterization, and aromatic end groups. In this work, we examined the efficacy of small molecular inhibitors against the formation of highly ordered aggregates using fluorescence spectrometry and atomic force microscopy (AFM). The results indicate that a rigid end group is more beneficial for inhibition than a flexible carbon chain tail. Two of the inhibitors tested showed high inhibition potential when used at a concentration ratio of 5:1 (inhibitor:Aβ). However, the effectiveness greatly decreased when the concentration ratio was decreased to 3:1; therefore, it is recommended that future cell toxicity testing be done at the higher concentration.

Background Amyloid fibril formation plays a role in at least 25 different diseases, including Alzheimer’s disease, type 2 diabetes and Parkinson’s disease (1-6). The amyloid aggregates associated with AD are comprised of 39 to 43 amino acids (7-17). The toxic species believed to play a key role in AD is the intermediate species of Aβ1-42. Aβ1-42 is formed from the cleavage of the transmembrane amyloid precursor protein (APP) by β- and γ-secretases. APP is also cleaved in a healthy cellular process; however, this is done by α- and γ-secretases, as shown in Fig. 1 (18). The mechanism of Aβ morphology changes is debated. The different species include Aβ oligomers, fibers and plaques. Aβ plaques are extracellular fibrillar deposits. Soluble oligomers have been observed prior to fibril formation, leading researchers to believe they are an intermediate species; however, recent research suggests that oligomers are not obligate intermediates in the fibrillization pathway, and are, rather, an alternate pathway that may or may not lead to fiber formation (19).

There are a few current therapeutic strategies for AD, but the benefits from them are often small or unapparent (20). Patients with neurodegenerative diseases have overstimulation of the N-methyl-D-aspartate (NMDA) receptor by glutamate. One current medication to combat this is Memantine, which is an uncompetitive NMDA-receptor antagonist (21). Another therapeutic method is to administer the medication called Donepezil. This is a cholinesterase inhibitor that works by blocking acetylcholinesterase and butyrylcholinesterase, which are enzymes responsible for hydrolyzing acetylcholine (22). There are some side effects from these medications; however, the primary motivation for finding a new treatment method is simply the lack of effectiveness of current medications.


Figure 1. The process by which cleavage of APP results in Aβ1-42 (Beta-amyloid) if done by β- and γ- secretases or is a normal cellular process if done by α- and γ-secretases (18).

Due to an increased need to fight against AD, it is critical to find effective pharmaceutical treatments for this disease. There are two possible approaches that various researchers are considering: Aβ aggregation inhibition or enzyme control. For enzyme control, the goal is to control the concentrations of the β- and α-secretases to inhibit the process from occurring. The effects of limiting β-secretase are not fully understood because its other functions in the body are not yet known; thus, this option was considered inferior. As for aggregation inhibition, possible inhibitors include short residue peptides, nanoparticles, Aβ antibodies or small molecular inhibitors. The focus of this work did not deal with short residue peptides; however, previous research has shown them to be viable candidates for inhibition (23). Previous research has yet to find nanoparticles with significant inhibition. Another option is the use of Aβ antibodies, which have been shown to be quite effective in lab scale testing (24-26). Unfortunately, the testing on humans was cancelled in phase II of the clinical trials (27). It was found that certain antibodies had significant inhibition in some patients; however, other patients had adverse reactions to the same antibodies, resulting in death. Previous work shows the potential of small molecular inhibitors, which are the focus of this research (12, 28-37). There are some conflicting studies regarding the optimal small molecular inhibitor chemical structure. Reinke et al. found that effective inhibitors were comprised of phenyl end groups, with a hydroxyl substitution on one end group, and had a linker region that was slightly flexible (8-16 Å, 2-3 freely rotating carbons) (12). However, Zhou et al. found that flexible hydrophobic tails provided better inhibition than rigid aromatic end groups (38). The aim of this research was to find effective Aβ inhibitors to prevent the aggregation of the toxic Aβ oligomers associated with AD.

In recent studies, using Thioflavin T (ThT) as a fluorescent marker of Aβ is a common method to determine inhibitor efficacy. One proposed mechanism involves the micelles formed by the hydrophilic head and hydrophobic tail of ThT, shown in Fig. 2. The binding of the micelles to amyloid fibrils amplifies the emission fluorescence, meaning the higher the Aβ fibril concentration, the higher the fluorescence intensity (39). Although ThT is widely used in Aβ inhibition research, the difficulties with it are also widely known, but not fully understood. It has been found in previous research that for some inhibitors the ThT signal will remain low even when AFM shows the presence of aggregation (29, 40). One possible theory for this phenomenon is that the inhibitor quenches the sample: an electron from the inhibitor becomes excited and does not return to the ground state, being absorbed by another species in the solution. This results in an artificially low reading from the ThT assay. Therefore, the application of AFM in combination with fluorescence is favored in order to validate the results.
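
As an illustration of how such readings are typically compared (this is not part of the paper's protocol, and the numbers below are hypothetical), a background-subtracted inhibitor trace can be expressed as a fraction of the control trace at matched time points:

```python
import numpy as np

# Hypothetical ThT fluorescence readings (arbitrary units) at matched
# incubation times for a control (Abeta only) and an inhibitor sample.
time_h    = np.array([0, 2, 4, 6, 8, 24, 48])
control   = np.array([5, 40, 120, 260, 390, 520, 560])
inhibitor = np.array([5, 8, 15, 30, 45, 90, 110])

# Subtract the time-zero background, then express the inhibitor signal
# as a fraction of the control at each time point.
control_net   = control - control[0]
inhibitor_net = inhibitor - inhibitor[0]
relative = np.divide(inhibitor_net, control_net,
                     out=np.zeros_like(inhibitor_net, dtype=float),
                     where=control_net > 0)

# Percent inhibition at the final time point.
print(f"Inhibition at 48 h: {100 * (1 - relative[-1]):.1f}%")
```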

Figure 2. ThT chemical structure.

Project Objectives The objective of this project was to find a small molecular inhibitor that successfully inhibits Aβ aggregation. The seven inhibitors that were tested, shown in Fig. 3, have many of the characteristics described in the literature as necessary for inhibition, including some with rigid, aromatic end groups and some with more flexible hydrophobic tails. An additional benefit is that these potential inhibitors present a holistic therapeutic method, being derived mainly from Chinese herbs.

Figure 3a. Chemical Structures for Inhibitors SM-1, SM-2, SM-3 and SM-4 (left to right)

Figure 3b. Chemical Structures for Inhibitors Gu-1, Gu-2 and Gu-3 (top to bottom and left to right)

Methodology Aβ1-42 Preparation Aβ1-42 (American Peptide Inc., Sunnyvale, CA) was purchased in lyophilized form and stored at -20°C. A homogeneous solution of Aβ in the unstructured monomer conformation that is free of seeds is desired for testing purposes. To obtain this, Aβ was dissolved in 100% 1,1,1,3,3,3-hexafluoro-2-propanol (HFIP), allowed to sit for 2 hours, sonicated for 30 minutes, and centrifuged for 30 minutes at 4°C and 14,000 rpm. The HFIP is used to break hydrogen bonds in any of the peptides that are not in the monomer conformation. Sonication is used to remove any preexisting aggregates or seeds (41). After this mixing and separating process, only about 75% of the top Aβ solution was removed, to avoid getting any of the aggregates or seeds into the solution. This portion was then frozen with liquid nitrogen and dried with a freeze dryer. The Aβ samples were stored at -80°C until use.

PBS Preparation The phosphate buffered saline (PBS) solution was prepared by dissolving dry powder in 1 liter de-ionized water. This yields a 0.01 M PBS solution, containing 0.138 M NaCl and 0.0027 M KCl, with a pH of 7.4 at 25°C. Before being combined with the ThT, the PBS was filtered using a 0.45 μm pore size filter.

ThT Preparation A fresh concentrated solution (1000 mM) of ThT-PBS was made weekly. Small aliquots of this solution were stored at -20°C and one was thawed daily to prepare the fresh ThT-PBS solution at a concentration of 20 μM that was used in the fluorescence samples. This method was suggested in the literature to help lessen the sporadic behavior of ThT (42, 43).

Sample Preparation First, 10 μL of dimethyl sulfoxide (DMSO) were added to 0.1 mg of freeze-dried Aβ1-42. This was stirred on a vortex stirrer for approximately 5 seconds, sonicated for 1 minute, and allowed to sit for 5 minutes. After sitting, the Aβ-DMSO mixture was added to 1 mL of the filtered PBS (10 mM aqueous, 37°C) for the control solutions. This resulted in a concentration of 20 μM Aβ. For the first round of inhibitor solutions, the Aβ-DMSO mixture was added to a mixture of 1 mL filtered PBS and 2 μL of inhibitor-DMSO. The inhibitor-DMSO mixture was prepared by adding DMSO to the selected inhibitor until a concentration of 50 mM was reached. The resulting concentrations of the Aβ-inhibitor solutions were 20 μM Aβ and 100 μM inhibitor. After the most successful candidates were found, testing was done at decreased inhibitor concentrations (20 μM Aβ and 60 μM inhibitor). After mixing, all solutions were incubated at 37°C, without agitation. Fluorescence samples of each solution were taken approximately every two hours the first day after being mixed and once or twice a day the following two days. The samples contained 2 mL of the 20 μM ThT-PBS solution and 10 μL of the Aβ-inhibitor-PBS solution. The samples sat for 10 minutes before testing, under foil, to give the ThT and Aβ time to interact. AFM samples were prepared using the same solutions: 10 μL for 0 hour-1 day samples and 20 μL for samples after 1 day. The solution was placed on a freshly peeled mica surface, allowed to sit for 1-2 minutes, rinsed with de-ionized water, air dried, and allowed to sit for 1 day before AFM testing.
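
As a rough check on the stated concentrations (assuming a molar mass of about 4514 g/mol for Aβ1-42, a value not given in the text, and a total volume of roughly 1.01 mL):

```latex
C_{A\beta} \approx \frac{0.1\ \text{mg}\,/\,4514\ \text{g mol}^{-1}}{1.01\ \text{mL}} \approx 22\ \mu\text{M} \;(\text{nominally }20\ \mu\text{M}),
\qquad
C_{\text{inh}} \approx \frac{2\ \mu\text{L}\times 50\ \text{mM}}{1.01\ \text{mL}} \approx 100\ \mu\text{M}.
```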

It is important to note that the concentration of Aβ used in these solutions is much higher than the concentration that is present in Alzheimer’s patients. The actual concentration is less than 1 μM, as compared to the 20 μM used in this testing. Likewise, the inhibitor concentration may need to be lower to be safe in the human body.

Results and Discussion Inhibition behavior of SM group small molecular inhibitors on Aβ aggregation The fiber generation processes of SM-1, SM-2, SM-3 and SM-4, with a 5:1 concentration ratio (inhibitor:Aβ), were monitored by fluorescence spectroscopy and are graphed in Fig. 4. The more Aβ fibers in the sample, the more binding with ThT will occur, resulting in increased fluorescence intensity. By comparing the intensity of the control experiment (20 μM Aβ incubated at 37°C) to that of the inhibitors, it is clear that within 50 hours of incubation time, all four of these inhibitors greatly decrease Aβ aggregation at this concentration. The inhibitor efficacy from greatest to least is SM-1, SM-2, SM-3, SM-4, according to the fluorescence data. The repeat experiments produced data nearly the same as the first experiments, providing validation for the use of ThT fluorescence.
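
One common way to summarize such ThT time courses, offered here only as an illustration (it is not claimed to be the authors' analysis, and the data points are hypothetical), is to fit a sigmoidal growth model and compare lag times and plateau intensities between control and inhibitor samples:

```python
import numpy as np
from scipy.optimize import curve_fit

def tht_sigmoid(t, F0, Fmax, k, t_half):
    """Boltzmann-type sigmoid commonly used for ThT aggregation kinetics."""
    return F0 + (Fmax - F0) / (1.0 + np.exp(-k * (t - t_half)))

# Hypothetical control-sample readings (hours, arbitrary fluorescence units).
t = np.array([0, 2, 4, 6, 8, 10, 24, 30, 48], dtype=float)
F = np.array([5, 12, 45, 150, 300, 420, 540, 555, 560], dtype=float)

p0 = [F.min(), F.max(), 0.5, 8.0]              # rough initial guess
(F0, Fmax, k, t_half), _ = curve_fit(tht_sigmoid, t, F, p0=p0, maxfev=10000)

lag_time = t_half - 2.0 / k                    # conventional lag-time estimate
print(f"plateau = {Fmax:.0f} a.u., t_1/2 = {t_half:.1f} h, lag = {lag_time:.1f} h")
```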

The AFM images for the control are shown in Fig. 5. Aβ proto-fibers, approximately 2 nm high and 800 nm long, form quickly, as shown in the image after 4 hours of incubation. The aggregation increases dramatically, and after 8 days of incubation, very large fibril aggregates are apparent. These images support the inhibition performance described above when compared to the AFM images for the SM samples in Fig. 6-8. Although SM-3 and SM-4 do not inhibit Aβ aggregation as well as SM-1 and SM-2, they still greatly decrease the aggregation found in the control samples. After 10 hours of incubation time, the SM-1 and SM-2 samples look quite similar, showing mostly Aβ oligomers. The 1 day samples for these two also look similar, showing only a few small proto-fibrils. In addition, it took numerous scans on the AFM to find any proto-fibrils. A difference can be noticed after 2 days of incubation time; here, SM-1 shows worse inhibition than SM-2, with a few globular deposits as opposed to the single proto-fibril in the SM-2 sample. After 8 days of incubation, both SM-1 and SM-2 have globular deposits. It was clear early on that SM-3 and SM-4 were outperformed by the first two inhibitors. SM-3 forms many large fiber-like aggregates at 5.6 hours, worsening at 8.6 hours. SM-4 forms much smaller fiber aggregates at 8 hours of incubation time.

194

Figure 4. Fluorescence intensity vs. incubation time at 37°C for inhibitors SM-1, SM-2, SM-3, SM-4, the control with no inhibitor and the repeat experiments for them

Figure 5. Control AFM images at 0h, 4h, 7.5h, 1 day, and 8 days incubation time (5X5 μm)

Figure 6. SM-1 AFM images at 10 hours, 1 day, 2 days and 8 days incubation time (scans at 5X5 μm)

Figure 7. SM-2 AFM images at 10 hours, 1 day, 2 days and 8 days incubation time (scans at 5X5 μm)


Figure 8. SM-3 AFM images at 5.6 and 8.6 hours incubation time and SM-4 at 8 hours (scans at 2X2 μm)

Inhibition behavior of dilute SM-1 and SM-2 on Aβ aggregation Fig. 9 illustrates that SM-1 and SM-2 still have inhibition potential at the diluted concentration ratio of 3:1 (inhibitor:Aβ); however, the effectiveness is greatly decreased when compared to the 5:1 concentration ratio samples. According to the fluorescence data, the performance of the diluted SM-1 and SM-2 is very similar, with SM-2 performing better from 0-20 hours and SM-1 performing better after that. The AFM images (Fig. 10 & 11) for the two are comparable as well. The first samples (at 3.3 and 6.3 hours) are nearly indistinguishable. After 26 hours of incubation, SM-1 forms a small amount of long, thin fibers and SM-2 forms a small amount of shorter, more globular aggregates. By comparing the AFM images to those for the control experiment (Fig. 5), it is not clear if there is any inhibition after 1 day of incubation.

Figure 9. Fluorescence intensity vs. incubation time at 37°C for inhibitors SM-1 and SM-2 at a concentration ratio of 3:1 (inhibitor:Aβ), and the control experiment

Figure 10. SM-1 (3:1 concentration ratio of inhibitor: Aβ) AFM images at 3.3 h, 6.3 h and 26 h incubation time (scans at 5X5 μm)

196

Figure 11. SM-2 (3:1 concentration ratio of inhibitor: Aβ) AFM images at 3.3 h, 6.3 h and 26 h incubation time (scans at 5X5 μm)

Inhibition behavior of Gu group on Aβ aggregation The fiber generation processes of Gu-1, Gu-2 and Gu-3, with a 5:1 concentration ratio (inhibitor: Aβ), were monitored by fluorescence spectroscopy and are graphed in Fig.12. The fluorescence data indicate that the Gu samples do slightly inhibit Aβ aggregation, as compared to the control experiments; however, they do not do so as effectively as the SM inhibitors. These graphs suggest that Gu-3 is the most successful out of this grouping. The intensity of Gu-2 goes below that of Gu-3 around 30 hours incubation time; however, Gu-2 has great variance in measurements, and thus, the data may not be reliable.

The AFM images, Fig. 13, present similar results. Inhibitors Gu-1 and Gu-2 can be eliminated very quickly, by looking at the 9.3 hour samples. Although they are different types, both inhibitors allow a great deal of aggregation; Gu-1 permits globular aggregation and Gu-2 permits long proto-fibril aggregates. Gu-3 is superior to the first 2 inhibitors, with no aggregation at 9.2 hours. Small aggregation starts after 1 day and increases to large fibril aggregates at 30.2 hours, producing much less desirable results than SM-1 and SM-2.

Figure 12. Fluorescence intensity vs. incubation time at 37°C for inhibitors Gu-1, Gu-2, Gu-3, at a concentration ratio of 5:1 (inhibitor: Aβ) and the control with no inhibitor

197

Figure 13. AFM images: Gu-1, Gu-2 and Gu-3 at 9.3 h and Gu-3 at 30 h (scans at 5X5 μm)

Conclusion In comparison to the SM samples, all of the Gu samples have low inhibition potential. This validates the results of Reinke et al., which specify that a rigid end group is beneficial for inhibition, rather than a flexible tail as described by Zhou et al. (12, 38). The inhibitors with the most inhibition potential are SM-1 and SM-2, with the best performance belonging to SM-1. The only difference between SM-1 and SM-2 is the presence of a double bond in the cyclopentane epoxide in SM-2. According to Reinke et al., end phenyl groups are preferred, which may lead one to think the presence of the double bond in the ring would increase inhibitor effectiveness; however, they did not test simple aromatic end groups or epoxides, only phenyl groups and carbon chains (flexible tails). This could explain why the anticipated result was not found. Additionally, the performances of SM-1 and SM-2 are extremely close, and the differences may be due to experimental error. SM-1 and SM-2 inhibit better than SM-3 and SM-4 due to the left-side phenyl end groups on both of them, as compared to the cyclohexane group with no double bonds, which is the end group for SM-3 and SM-4. A similar justification explains why Gu-3 outperforms Gu-1 and Gu-2. Gu-1 has one flexible carbon chain end group and Gu-2 has a cyclohexane epoxide with only one double bond, as compared to Gu-3 with two phenyl end groups. Future work should consist of cell toxicity testing for SM-1 and SM-2 at the most effective concentration, which was a ratio of 5:1 (inhibitor:Aβ).

Acknowledgments This research would not have been possible without the help of Qiuming Wang and Dr. Jie Zheng at the University of Akron. I would like to thank Qiuming for testing the AFM samples and helpful guidance in the project. I would also like to thank the Ohio Space Grant Consortium for the funding.

References
1. B. L. Caughey, P. T., Annu. Rev. Neurosci. 26, 267 (2003).
2. S. M. B. Chen, V.; Hamilton, J. B.; O'Nuallain, B.; Wetzel, R., Biochemistry 41, 7391 (2002).
3. F. D. Chiti, C. M., Annu. Rev. Biochem. 75, 333 (2006).
4. F. E. K. Cohen, J. W., Nature 426, 905 (2003).
5. P. T. L. Lansbury, H. A., Nature 443, 774 (2006).
6. D. J. Selkoe, Nat. Cell Biol. 6, 1054 (2004).
7. Alzheimer's Association, Alzheimer's & Dementia: The Journal of the Alzheimer's Association 4, 110 (2008).
8. J. Ghiso et al., Neurobiology of Aging 17, S145 (1996).
9. E. Hellström-Lindahl, European Journal of Pharmacology 393, 255 (2000).
10. N. Marks, M. J. Berg, Neurochemistry International 52, 184 (2008).

11. M. Meyer-Luehmann, Spires-Jones, T.-L., Prada, C., Garcia-Alloza, M., de Calignon, A., Rozkalne, A., J. Koenigsknecht-Talboo, Holtzman, D.-M., Bacskai, B.-J., Hyman, B.-T., Nature 451, 720 (2008).
12. A. A. Reinke, J. E. Gestwicki, Chemical Biology & Drug Design 70, 206 (2007).
13. D. J. Selkoe, Clinical Neuroscience Research 1, 91 (2001).
14. D. J. Selkoe, Behavioural Brain Research 192, 106 (2008).
15. B. S. Shastry, Neurochemistry International 43, 1 (2003).
16. Urb et al., Neurochemistry International 46, 471 (2005).
17. L. Ye et al., Journal of Neurochemistry 105, 1428 (2008).
18. Medical Images, in National Institute on Aging (2010).
19. R. K. Mihaela Necula, Saskia Milton, and Charles G. Glabe, The Journal of Biological Chemistry 282, 10311 (2007).
20. D. Kantor (2010).
21. R. D. Barry Reisberg, Albrecht Stöffler, Frederick Schmitt, Steven Ferris, and Hans Jörg Möbius, The New England Journal of Medicine 384, 1333 (2003).
22. M. R. F. S. L. Rogers, R. S. Doody, R. Mohs, L. T. Friedhoff, Neurology 50, 136 (1998).
23. D. S. G. Michael Baine, Elelta Z. Shiferraw, Theresa P. T. Nguyen, Luiza A. Nogaj and David A. Moffet, Journal of Peptide Science (2009).
24. K. S. Bacskai BJ, Christie RH, Carter C, Games D, Seubert P, Schenk D, Hyman BT, Nat. Med. 7, 369 (2001).
25. K. S. Bacskai BJ, McLellan ME, Games D, Seubert P, Schenk D, Hyman BT, Journal of Neuroscience 22, 7873 (2002).
26. E. A. S. Julianne A. Lombardo, Megan E. McLellan, Stephen T. Kajdasz, Gregory A. Hickey, Brian J. Bacskai, and Bradley T. Hyman, The Journal of Neuroscience 23, 10879 (2003).
27. B. Solomon, Drugs of Today 43 (2007).
28. J. L. Fei Yin, Xiuhong Ji, Yanwen Wang, Jeffrey Zidichouski, Junzeng Zhang, Neurochem. Int. (2011).
29. O. R. Gunnar T. Dolphin, Myriam Ouberai, Pascal Dumy, Julian Garcia, and J.-L. Reymond, ChemBioChem 10, 1325 (2009).
30. W. E. Klunk, M. L. Debnath, J. W. Pettergrew, Neurobiology of Aging 16, 541 (1995).
31. W. E. Klunk, Y. Wang, M. L. Debnath, D. P. Holt, C. A. Mathis, Neurobiology of Aging 21, 21 (2000).
32. H. H. Li Ling Lee, Young-Tae Chang, and Matthew P. DeLisa, Protein Science 18, 277 (2009).
33. K. Ono, K. Hasegawa, H. Naiki, M. Yamada, Journal of Neuroscience Research 75, 742 (2004).
34. E. K. Ryu, Y. S. Choe, K.-H. Lee, Y. Choi, B.-T. Kim, Journal of Medicinal Chemistry 49, 6111 (2006).
35. J. H. L. Seong Rim Byeon, Ji-Hoon Sohn, Dong Chan Kim, Kye Jung Shin, I. M.-J. Kyung Ho Yoo, Won Koo Lee and Dong Jin Kima, Bioorganic & Medicinal Chemistry Letters 17, 1466 (2007).
36. F. Yang et al., Neurobiology of Aging 25, S158 (2004).
37. A. L. A. B. A. Yankner, Proc. Natl. Acad. Sci. USA 91, 12243 (1994).
38. C. J. Yu Zhou, Yaping Zhang, Zhongjie Liang, Wenfeng Liu, Liefeng Wang, Cheng Luo, Tingting Zhong, Yi Sun, Linxiang Zhao, Xin Xie, Hualiang Jiang, Naiming Zhou, Dongxiang Liu, and Hong Liu, J. Med. Chem. 53, 5449 (2010).
39. C. C. Ritu Khurana, Cristian Ionescu-Zanetti, Sue A. Carter, Vinay Krishna, Rajesh K. Grover, Raja Roy, Shashi Singh, Journal of Structural Biology 151, 229 (2005).
40. G. M. M. M. A. Findeis, C. C. Arico-Muendel, H. W. Benjamin, A. M. Hundal, J.-J. Lee, J. Chin, M. Kelley, J. Wakefield, N. J. Hayward, S. M. Molineaux, Biochemistry 38, 6791 (1999).
41. D. M. Walsh, A. Lomakin, G. B. Benedek, M. M. Condron, D. B. Teplow, J. Biol. Chem. 272, 22364 (Aug 29, 1997).
42. L. F. R. Eisert and L. R. Brown, Anal. Biochem. 353 (2006).
43. L. Riggs (North Carolina State University, 2006), pp. 21.

199 Memory-Based Open-Loop Control Optimization for Unbounded Resolution

Student Researcher: Alan L. Jennings

Advisor: Dr. Raúl Ordóñez

University of Dayton Department of Electrical and Computer Engineering

Abstract In the interest of autonomous motion development, an algorithm is presented for unbounded waveform resolution. Rather than one optimization in a very large space, incremental improvements are made so that resolution can always be increased. This is accomplished by a memory-based model and cubic spline interpolation. Nodes can be added to cubic splines, so that all previous memory can be transferred to the higher dimension space. The transfer is exact, so there is no corruption of the data. The memory-based model determines confidence, offers search directions when support is insufficient and provides gradient information for the optimization. Results show practical accuracy and reasonable scalability.

Project Objectives The primary objective is an algorithm that develops optimal functions, u(t) in Rn, t in [0,1], such that a desired, scalar objective, yd, is obtained. What this means is that once the desired value of an objective is determined, it is simple to produce a waveform that minimizes a scalar cost, J. A system diagram is given in Figure 1 and will be explained in Methodology. Possible applications include finding a motor startup waveform that requires the least cold-cranking amps (u is voltage, t is time, y is final RPM’s and J is the maximum current during startup), airfoil design (u is the top and bottom camber, t is distance along cord, and y and J are aerodynamic properties), or determining methods for surmounting obstacles (u is the voltage to each motor, y is a measure of the obstacle and J is a reliability measure). A general purpose algorithm is desired, meaning that it should apply to a wide class of problems and not need to know specifics about the system. In this sense, the system is a ‘black box,’ meaning that all the algorithm uses are (u, y, J) triplets. This way, modeling errors can be reduced, especially for cases where the models have significant limitations such as friction and acoustics. These optimizations are typically numerical or also known as direct optimizations.
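
Restated compactly (the notation below is introduced here for clarity and is not taken verbatim from the paper), with a denoting the vector of node values and u_a the interpolated waveform, the per-target problem is:

```latex
a^*(y_d) \;=\; \arg\min_{a\in\mathbb{R}^N} \; J\!\left(u_a\right)
\quad \text{subject to} \quad y\!\left(u_a\right) = y_d,
\qquad u_a(t) = \operatorname{spline}(a;\,t), \; t\in[0,1].
```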

In practice, numeric optimization to shape a function is done by optimizing the function’s value at nodes (aka knots, or collocation points) and the continuous function is determined by an interpolation. Interpolation introduces a resolution limitation, based on the number of nodes. However, nodes are required to move from the infinite dimension of continuous functions to a finite dimension. Typically, the nodes are selected, the optimization conducted, and for practical purposes, that is the end of it. The reason for stopping is that high dimension optimizations are computationally challenging, due to both finite precision and number of function evaluations. Intermediate results are lost and a new optimization would have to start near scratch. It is true that previous results are likely to provide a good starting point, but this is not guaranteed.

The approach taken in this work is inspired by human development [1-2]. Rather than a single optimization, experiences are collected over an extended period of time, often years for true proficiency [3]. Practice involves directing experiences to where performance is desired to improve. Becoming adept is rewarded by being able to have fine distinctions in motions. Similarly, this work employs a memory- based model that captures knowledge and can transfer it to higher resolutions. The ‘Curse of Dimensionality’ still applies [4], but is mitigated since only one dimension is unsupported in the new resolution. The persistence of system knowledge is one of the novel aspects in this work and can only be guaranteed by ensuring all low resolution waveforms are members of the space of high resolution waveforms. Typical optimization tolerances should be obtained, so the evaluation of the method is based on number of function evaluations. Fewer evaluations mean that learning would be quicker so the algorithm would be more practical for a physical system.

200 There are some limitations to the scope of the problem that are worth mentioning. Due to the curse of dimensionality, global optimums are not pursued. For a nonlinear system, local results are not guaranteed to provide any relevance to results outside of a neighborhood. This means that neighborhoods throughout the entire space of optimization must be checked. The global optimization space expands exponentially, so any global optimization will be limited by complexity to low resolutions or very long run times. Multiple objectives are not addressed. To support multiple objectives, either for outputs or costs, the candidates must be organized in the higher dimension space. Not every combination of outputs may be feasible, requiring demarcations. These are not fundamental or insurmountable problems, but are outside the scope of this work. Note that the input function need not be a single dimension, but can be signals to multiple components. This is simple because optimal results are organized by the output dimension, not the input dimension.

Methodology The concept of the algorithm can be thought of as learning by trial and error. To prevent excessive repetition, memory is used to record previous trials. More than just that, the memory function also provides the local shape of the function. The optimization then works with the memory function to determine the best points, given a constraint: y(u(t))=yd. This brings out one of the advantages of a memory function: data 'spills over' to support other objectives. So even if the trial did not have the desired yd, it provides support for a future yd. The memory function uses measures to assess whether it has sufficient support when queried, and if support is insufficient, then it determines neighboring trials until it has sufficient confidence in its results. Results of the optimizations are collected for creating a 'reflex' function that has a ready response for a given yd. For this work, a set of yd's is used, and interpolation of the optimal input functions is not addressed. Typically, linear interpolation of a* would be sufficient, though a thorough analysis would identify discontinuities of a* as yd changes and check for expansion of the range of yd. The focus of this paper is on the process organized by the flow of data in Figure 1.

The input to the system is constructed by cubic interpolation, which reduces the degrees of freedom from infinity to the number of nodes. For cubic interpolation to work well, time should be scaled to a specified interval. Once an optimal input, a*, is identified for a sufficient number of values of yd, thus generating the reflex function, an additional node is added. For every point in the memory database, the higher dimension equivalent can be created by interpolation of the lower dimension function. Cubic interpolation has a cubic function across each segment, and therefore has a constant third derivative. Typically, continuity of the function and its first and second derivatives is maintained at each node. The two remaining boundary conditions in this work use the 'not-a-knot' condition, meaning that continuity of the third derivative is maintained at the boundary nodes. By adding nodes in the interior and taking their values from interpolating the lower dimension function, continuity of the third derivative is maintained. Since there is no change in smoothness, the function lies in both spaces exactly. That means that results obtained for the lower resolution can be used in the higher dimension space (once their location is found). A method for adding nodes for a one dimension u(t) is shown in Figure 2. This method provides reasonably even support without needing to know more about the process. If more is known about the process, then a better selection of the new node location is straightforward to implement.
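
The exact-transfer claim can be checked numerically with a not-a-knot cubic spline; the sketch below uses SciPy, arbitrary node values, and one particular choice of new node location, all of which are illustrative rather than taken from the paper:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Low-resolution node set on [0, 1] with arbitrary (hypothetical) node values.
x_lo = np.linspace(0.0, 1.0, 6)
a_lo = np.array([0.2, 0.7, 0.4, 0.9, 0.3, 0.6])
s_lo = CubicSpline(x_lo, a_lo, bc_type='not-a-knot')

# Add one interior node at the midpoint of a middle segment, taking its
# value from the low-resolution spline itself.
x_new = 0.5 * (x_lo[2] + x_lo[3])
x_hi = np.sort(np.append(x_lo, x_new))
a_hi = s_lo(x_hi)
s_hi = CubicSpline(x_hi, a_hi, bc_type='not-a-knot')

# The two interpolants agree to round-off everywhere on [0, 1], so the
# low-resolution waveform is an exact member of the higher-resolution space.
t = np.linspace(0.0, 1.0, 1001)
print(np.max(np.abs(s_hi(t) - s_lo(t))))   # on the order of machine epsilon
```

Because the original spline already satisfies every condition defining the refined not-a-knot interpolant, the two agree to machine precision, which is the property that lets stored low-resolution results be reused directly.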

The memory function chosen for this work is locally weighted regression (LWR) [5]. LWR is done on parameters of a to the system output y and cost J. To evaluate the LWR function at a test point, a polynomial regression is performed on the samples in memory where the error from each sample is weighted by the distance they are from the test point. As with typical regressions, least squared error is assumed. This can be written as E=(y-xT A)T W2 (y-xT A), where y is a vector of the sample dependent variables, W is a diagonal vector having the weight for each sample, and x is the basis function evaluated at each sample point. The optimal gain is A= (xT W2 x)-1 xT W2 y. Note that even if only a linear regression is done, the results are nonlinear, due to the nonlinear effect of moving the center of the weighing function to the new test point. A LWR is specified by a set of basis and a weighting function. Beyond using a bell function, minute differences in the weighting function are minor [6]. For this work a Gaussian bell on the Euclidian norm is used while the tails are clipped to zero, so that samples outside a radius, h, are ignored. Because an optimization is conducted on the LWR function, at least a second order set of basis should be used, so that the regression will easily capture the behavior of the extremum. Even though

the regression coefficients in A are not the exact local polynomial, they are a sufficient approximation for the purposes of optimization.
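
A sketch of the locally weighted regression described above, with a quadratic basis and a clipped Gaussian weight; the function names, the radius h, and the demo data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def lwr_predict(A_samples, y_samples, a_query, h=0.3):
    """Locally weighted regression estimate of y (or J) at a_query.

    A_samples : (m, d) array of stored inputs (node vectors a)
    y_samples : (m,)   array of the corresponding outputs or costs
    h         : weighting radius; samples farther than h get zero weight
    """
    d = A_samples.shape[1]

    def basis(A):
        # Quadratic basis: 1, a_i, and a_i * a_j for i <= j.
        cols = [np.ones(len(A))]
        cols += [A[:, i] for i in range(d)]
        cols += [A[:, i] * A[:, j] for i in range(d) for j in range(i, d)]
        return np.column_stack(cols)

    dist = np.linalg.norm(A_samples - a_query, axis=1)
    w = np.exp(-(dist / h) ** 2)
    w[dist > h] = 0.0                        # clip the Gaussian tails to zero

    X = basis(A_samples)
    # Weighted least squares: A = (X^T W^2 X)^-1 X^T W^2 y, solved stably
    # as the least-squares solution of (W X) A = W y.
    coef, *_ = np.linalg.lstsq(w[:, None] * X, w * y_samples, rcond=None)
    return (basis(a_query[None, :]) @ coef)[0]

# Hypothetical demo: noisy samples of y = (sum of a)^2 in a 3-node space.
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(500, 3))
y = A.sum(axis=1) ** 2 + 0.01 * rng.standard_normal(500)
print(lwr_predict(A, y, np.array([0.4, 0.5, 0.6])))   # close to 2.25
```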

New data can be obtained to add support, so a fixed range for h is chosen based on the problem. Such as with all forms of polynomial regression, each basis must be independent; otherwise the results are not unique. More practically, the eigenvalues of (xT W2 x) can be used to estimate the support provided to each independent dimension. The eigenvectors can then be used to direct random new samples to the unsupported directions. This directs trials to locations that provide significant, rather than repetitive data. An added benefit to using LWR, is that the regression already has a mechanism for dealing with uncertain data, so noise can automatically be handled, only requiring a higher support threshold.
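
The support check itself can be sketched as follows (again an illustration; the threshold and the strategy for mapping weak basis directions back to new input-space trials are problem-dependent choices):

```python
import numpy as np

def support_directions(X, w, threshold=1e-3):
    """Estimate local support of a weighted regression from eig(X^T W^2 X).

    X : (m, p) basis matrix evaluated at the stored samples
    w : (m,)   clipped Gaussian weights centered on the query point
    Returns all eigenvalues plus the eigenvectors whose eigenvalues fall
    below the threshold, i.e. basis directions with too little support.
    """
    G = X.T @ (w[:, None] ** 2 * X)          # X^T W^2 X
    eigvals, eigvecs = np.linalg.eigh(G)     # symmetric, so eigh is appropriate
    weak = eigvals < threshold
    return eigvals, eigvecs[:, weak]

# Hypothetical demo: a duplicated basis column gives a near-zero eigenvalue,
# flagging a direction where new trials should be generated before trusting
# the regression.  (Mapping a weak basis direction back to a perturbation of
# the input a is problem specific and is not shown here.)
X_demo = np.column_stack([np.ones(50), np.linspace(0, 1, 50), np.linspace(0, 1, 50)])
eigvals, weak_dirs = support_directions(X_demo, np.ones(50))
print(eigvals.round(6), weak_dirs.shape)
```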

The optimal input function, a*, is for a given output value, yd, so the optimization is constrained. Numerical optimizations guide candidate points to the manifold where the constrain holds. Well developed commercial optimizations were tested. From the constrained optimizations available in MATLAB, active-set (which uses gradients) outperformed sequential-quadratic-programming, SQP, (which uses a gradient and Hessian). SQP works by repetitively composing a local quadratic optimization and stepping to the extremum of this local problem and repeating. SQP particularly had problems dealing with how the LWR function would change as more data was collected, and would also require more trials to be generated. Active set focuses more on determining active constraints and how they restrict a gradient descent. These commercial optimizations are focused on high precision and fast convergence, so they assume a numerically exact function. Despite this, active-set would work and so it was used in this work.

Results Obtained This algorithm was applied to a mathematical problem to evaluate its performance. The dimension of the input waveform is one, the system output is the average value: y=∫ u(τ) dτ and the system cost is the accumulation of the square of the difference from a sine wave, J2=∫ (u(τ)-(sin(2π τ)+2)/4)2 dτ. The interpolation of a is clipped at 0 and 1. This technically would be part of the system, not interpolation, but makes notation simpler. The saturation applied on u rather than a has the added benefit that the problem is effectively bounded (since interpolation values outside of (0,1) have an ever decreasing effect) without having hard constraints on the independent variable.
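
A sketch of this benchmark, with the output and cost evaluated by simple quadrature on a spline-parameterized input; the grid, node count, and the use of SciPy's SLSQP solver on the true functions (rather than the LWR model and MATLAB's active-set routine used in the paper) are assumptions made only to illustrate the problem setup:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize

tau = np.linspace(0.0, 1.0, 2001)                    # quadrature grid on [0, 1]
target = (np.sin(2.0 * np.pi * tau) + 2.0) / 4.0     # reference sine profile

def u_of(a):
    nodes = np.linspace(0.0, 1.0, len(a))
    u = CubicSpline(nodes, a, bc_type='not-a-knot')(tau)
    return np.clip(u, 0.0, 1.0)                      # saturation applied to u

def y_of(a):                                         # system output: mean of u
    return np.trapz(u_of(a), tau)

def J_of(a):                                         # RMS deviation from the sine
    return np.sqrt(np.trapz((u_of(a) - target) ** 2, tau))

# Minimize J subject to y = yd, starting from a flat waveform at yd.
yd, n_nodes = 0.7, 5
res = minimize(J_of, x0=np.full(n_nodes, yd), method='SLSQP',
               constraints={'type': 'eq', 'fun': lambda a: y_of(a) - yd})
print(res.x, y_of(res.x), J_of(res.x))               # J approaches |yd - 1/2|
```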

The only input that can have a cost of zero is that case when yd=1/2, since a deviation is needed from (sin(2π τ)+2)/4 to have an average value different than 1/2. Where there is an average difference, the root- mean-squared (RMS) is minimized when the difference is uniform. When the saturation takes effect, then a uniform difference is not possible. Where the error would be largest, u(t) would be sinusoidal and the rest would saturate at a lower difference. A trigonometric function was chosen since it is an infinite order function, as shown by the infinite series expansion, so there should always be some error. Results do behave as predicted, and a surface showing the optimal waveforms at 2, 5 and 10 nodes are shown in Figure 3.
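
Ignoring the saturation, the shape of this predicted optimum follows from a short variational argument (a reconstruction of the reasoning, not text from the paper), which also explains the linear trend of the converged cost noted with Figure 4:

```latex
\min_{u}\int_0^1 \bigl(u(\tau)-g(\tau)\bigr)^2\,d\tau
\;\;\text{s.t.}\;\; \int_0^1 u(\tau)\,d\tau = y_d,
\qquad g(\tau)=\tfrac{\sin(2\pi\tau)+2}{4}.
```

```latex
\frac{\delta}{\delta u}\!\left[\int_0^1 (u-g)^2\,d\tau
+\lambda\!\left(\int_0^1 u\,d\tau - y_d\right)\right]
= 2\,(u-g)+\lambda = 0
\;\Rightarrow\;
u^*(\tau)=g(\tau)+\Bigl(y_d-\tfrac{1}{2}\Bigr),
\qquad J^*=\bigl|y_d-\tfrac{1}{2}\bigr|,
```

since the mean of g over [0, 1] is 1/2; the optimal deviation is a uniform offset, and the cost grows linearly in the distance of yd from 1/2.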

Other functions were tested including adding kernel weights to the cost function, so that the RMS was focused onto a specific region. Different sets of yd were tested including some random sets with large and small gaps between values of yd. The cost would occasionally increase going to higher dimensions, but this is the estimated cost from the LWR; and slight differences between the true function and its estimate explain this phenomenon. The progression of J(yd) with number of nodes is shown in Figure 4, note that it converges to a linear relation of the difference from yd=1/2.

The size of the sample population remained manageable. The lowest rate of population increase would be linear, since each new dimension would require a similar level of support as every prior independent dimension. However, obtaining this rate would show that the optimization did not provide much improvement over prior resolutions. If the sample moves to the border and outside of the support from the lower dimension starting point, then the population size should approach doubling. The further the optimal point lies, the more samples would be expected. Results of the population size are summarized by a chart in Figure 5. The jump from 2 to 3 nodes would typically require a five-fold increase in samples,

and a doubling from 3 to 4 nodes. After that, the population would typically grow by 65%. Increasing the number of yd's increased the number of samples, but at a decreasing rate: going from 13 points to 23 (a 77% increase) typically resulted in a 44% increase in samples; going from 23 to 46 points (a 100% increase) typically resulted in a 27% increase. This shows that neighboring yd's are giving and receiving support. Though the number of trials may seem large, it is manageable for a robot that could perform something on the order of 1 trial/sec continuously, resulting in learning over a period of days. This rate would be very optimistic for most practical tests, but it shows that results can be obtained as quickly as there is motivation to support them.

Significance and Interpretation of Results The goal of this work was to develop an algorithm for autonomously optimizing functions and results show that practical accuracy can be obtained. The method efficiently uses data by transferring it to higher dimensions, so that high dimension searches only initially require support for the new dimension. There are no fundamental limitations to transferring this algorithm to physical tests since the LWR can handle noisy data. In fact, a useful extension would be to make the cost function a measure of the reliability of obtaining the desired output, for example the standard deviation of yd given u(t). In addition, results should be clustered to provide fundamentally different alternatives, identify when the reflex function should not interpolate neighboring results and explore to see if the range of yd can be extended. In addition, the concept of clustering would provide a mechanism for coupling global search methods.

Autonomous motion learning is a significant breakthrough since the method is entirely undirected. The algorithm only needs access to outputs and to know the dimension of the input. Therefore a computer-generated robot (one designed without human input) would be able to learn control strategies without an explicit comprehension of its environment. There are some limitations that still require a breakthrough. Currently all samples are stored regardless of statistical significance. One explanation for human learning capacity is our ability to forget irrelevant details; however, what is irrelevant is often not apparent while learning. Forgetting also helps with adapting to environmental changes. Fortunately, there is some interesting work being done to answer these questions.

Figures and Charts

(Plot for Figure 2: node locations as resolution increases, epoch 1 through 12 versus scaled time 0 to 1.)

Figure 1. A model of the system is built from the history of outputs (y) and costs (J) for a given input (a). The optimization is conducted on the model. If the model has insufficient support, then it determines additional trials to run until sufficient support is built. The results of the optimization are optimal inputs (a*) for a given yd, which can be readily generated.

Figure 2. For a cubic interpolant to be a member of a higher resolution interpolation, the original set of nodes must be contained in the higher resolution set. One possible expansion is shown above. The optimal expansion is problem dependent.

203

Figure 3. Reflex functions are shown for various yd for 3, 4 and 10 nodes. Note that results quickly converge to sinusoidal behavior.

Samples in Population
Number of Nodes | 13 yd samples | 23 yd samples | 46 yd samples
2  | 463    | 518     | 581
3  | 2,153  | 3,173   | 3,886
4  | 4,653  | 5,243   | 7,766
5  | 6,388  | 8,148   | 10,076
6  | 9,958  | 12,578  | 15,826
7  | 16,868 | 28,323  | 29,531
8  | 30,053 | 51,158  | 62,911
9  | 50,328 | 76,508  | 112,766
10 | 70,508 | 115,223 | 153,671

Figure 4. Along with the interpolant approaching sinusoidal behavior, the cost quickly approaches the theoretical limits.

Figure 5. Population results for various resolutions. Note that at 1 Hz, there are over 80,000 samples in a day, so even the longest runs could complete within 2 days if done at that rate.

Acknowledgments The author is very grateful to the Ohio Space Grant Consortium for financial support and organization of informative gatherings. Professor Ordóñez is also deserving of thanks for his guidance and equipment.

References
1. M. Lungarella and G. Metta, "Beyond Gazing, Pointing and Reaching: a survey of developmental Robotics," Proc. of the 3rd Int. Workshop on Epigenetic Robots: Modeling Cognitive Development in Robotic Systems, vol 101, pp 1-9, 2003.
2. A. L. Jennings and R. Ordonez, "Biomimetic Learning, Not Learning Biomimetics: A Survey of Developmental Learning," National Aerospace and Electronics Conference (NAECON), IEEE, pp 11-17, July 2010. doi: 10.1109/NAECON.2010.5712917
3. K. Ericsson, "The Scientific Study of Expert Levels of Performance: General Implications for Optimal Learning and Creativity," High Ability Studies, vol 9, no 1, pp 75-100, 1998. doi: 10.1080/1359813980090106
4. R. E. Bellman, "Dynamic Programming," Princeton University Press, 1958.
5. W. S. Cleveland, "Robust Locally Weighted Regression and Smoothing Scatterplots," Journal of the American Statistical Association, vol 74, no 368, pp 829-836, Dec 1979.
6. S. Kadade and G. Shakhnarovich, "Large Scale Learning: Locally Weighted Regression," CMSC 3590 Lecture Notes, Toyota Technological Institute at Chicago, 2009.

204 Finite Element Analysis of Soft Biological Tissue with Comparisons to Tensile Testing

Researcher: Brooke R. Johnson

Advisor: Dr. Hazel Marie

Youngstown State University Department of Mechanical Engineering

Abstract Hernias are much too common in the surgical world. With any operation involving an incision into the abdominal cavity, there is up to a one in ten chance of a post-operative hernia. Surgical technology is improving while the rate of recurring hernias stays steady. Recurrence is based on physical factors and repair techniques. This project is an attempt to reduce the number of recurring hernias and improve wound repair. It is hypothesized that using mesenchymal stem cells on an enhanced collagen matrix will increase collagen deposition and the tensile strength of the abdominal fascial tissue. Thus far, the research looks promising; the experimental results indicate that with the new techniques, additional energy is required to rupture the samples. The tissue samples are from fifth generation inbred rats and are therefore considered to be clones. The abdominal fascia of the rats was herniated and then treated and allowed to heal for periods of four and eight weeks. Once the tissue was harvested, dumbbell shaped segments were tested on a 100 Newton load cell tensiometer moving at a constant rate of 10 millimeters per minute. The force-deflection data was recorded. A finite element model of the tissue was created to find the localized biomechanical properties in and around the scar tissue.

Project Objectives Application of autologous mesenchymal stem cells (MSCs) to incisions, in hopes of reducing recurring hernias, will be analyzed through experimental and finite element analysis (FEA) simulation. Fifth generation inbred Lewis rats were utilized in the experiment. The rats were broken into three treatment groups for hernias: control, collagen paste with platelet rich plasma (PRP), and collagen paste with platelet rich plasma (PRP) and MSCs. These were further broken down into four and eight week healing periods. There are seven rats per test group. Two tensile test specimens are harvested from each rat. Through the experiment, biomechanical properties of the herniated tissue with the different treatment groups, such as modulus of elasticity, ultimate tensile strength, yield strength and strain hardening, were established. A finite element analysis will be created to find the localized biomaterial properties in and around the incisional site. The experimental data was utilized to validate and compare against the finite element analysis.

Methodology Used The experiment was broken down into 3 phases. Phase I was the harvest and culture of the MSCs for the rats. Phase II was the induction and repair of the incisional hernia. Phase III is the harvesting of the tissue from the rats. The tissue is then pulled with a standard force extension tensile test.

The MSCs are harvested from rat tibia bone marrow. The MSCs are then filtered and fed until the time of the hernia surgery. The rats are anesthetized for surgery. During the surgery, a six centimeter lateral fascial incision was cut on the abdominal wall and each was sutured with six nylon sutures. Biological additives were applied to the incision depending on the experimental group, as shown in Table 1.

Table 1. Experimental Group Treatments

Experimental Group # (7 Lewis Rats per Group) | Technique of Closure | Time Before Specimen Harvest (Healing Time)
Group 1A | Midline abdominal incision with primary suture repair | Four Weeks
Group 1B | Midline abdominal incision with primary suture repair | Eight Weeks
Group 2A | Same as Group 1 + PRP + Colla Tape™ | Four Weeks
Group 2B | Same as Group 1 + PRP + Colla Tape™ | Eight Weeks
Group 4A | Same as Group 2 + MSCs | Four Weeks
Group 4B | Same as Group 2 + MSCs | Eight Weeks

The rats are given four and eight weeks of healing time before the tissue is harvested. The tissue is harvested directly after the euthanasia of the rats. The tissue was cut into a dumbbell shape to ensure that it would fail in the center of the specimen instead of at the grips of the Instron tensiometer. Each specimen's thickness, width, and initial length were then recorded, and the specimen was taken to the tensiometer for tensile testing. The Instron tensiometer pulled the tissue at a rate of 10 mm/min until complete rupture of the specimen. Merlin, the computer software, recorded the force and extension data, which was then converted into a stress and strain graph as shown in Figure 1.
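
The conversion from force-extension to stress-strain can be sketched as follows; the rectangular cross-section assumption, the fraction of points taken as the linear region, and all variable names and numbers are placeholders rather than the study's actual processing:

```python
import numpy as np

def stress_strain(force_N, extension_mm, width_mm, thickness_mm, gauge_len_mm):
    """Convert a force-extension record into engineering stress and strain."""
    area_mm2 = width_mm * thickness_mm              # initial rectangular cross-section
    stress_MPa = np.asarray(force_N) / area_mm2     # N / mm^2 = MPa
    strain = np.asarray(extension_mm) / gauge_len_mm
    return stress_MPa, strain

def modulus_and_toughness(stress_MPa, strain, linear_frac=0.3):
    """Elastic modulus from the early (assumed linear) region and modulus of
    toughness as the area under the full stress-strain curve (mJ/mm^3)."""
    n = max(2, int(linear_frac * len(strain)))
    modulus = np.polyfit(strain[:n], stress_MPa[:n], 1)[0]
    toughness = np.trapz(stress_MPa, strain)
    return modulus, toughness

# Example with placeholder numbers (not the study's data):
F = np.linspace(0.0, 8.0, 50); dL = np.linspace(0.0, 4.0, 50)
s, e = stress_strain(F, dL, width_mm=4.0, thickness_mm=0.8, gauge_len_mm=20.0)
print(modulus_and_toughness(s, e))
```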

Figure 1. 4B4I Recorded Stress And Strain Curve

The pre-test photographs of specimens were used to determine close approximations of each specimen’s geometry. Splines were created in SolidWorks using the width and length dimensions from the photographs. The thickness of the part was inputted as the average of the top, middle, and bottom thicknesses. Because the actual tissue varies in biomechanical properties depending on the relative location to the scar tissue, the solid model is then dissected into different pieces varying in size to be able to apply different biomechanical properties to each piece. The two largest are the outer end pieces. They are the furthest away from the scar tissue and are assumed to have uniform properties. The parts decrease in size as the parts get closer to the center part. The center part is the smallest piece and represents the actual scar tissue at the site of the hernia repair. These parts are then reassembled as a SolidWorks assembly. Comparison to the specimen photograph and the actual model is shown in Figure 2.

206

Figure 2. Specimen and 3-D Model Compared

For each specimen 3-D model, two FEA analyses were completed. The first FEA analysis had a constant experimental modulus of elasticity. This was used to find a thickness of the 3-D model that would result in the experimental extension for the given force applied.

The second FEA analysis used varied moduli of elasticity for each part. The second FEA used the 3-D model thickness and force used in the first FEA analysis. Iterations were performed on the moduli of elasticity until the extension of the model matched the extension in the experiment. The SolidWorks assembly of the specimen is then imported into the ALGOR finite element analysis software package as a static stress with linear material properties analysis. The element type for each part was set as "brick". The assembly is then meshed as a solid mesh with an exact mesh size of .35 inches. This ensured that the model had at least four nodes across the midline of the scar. The material properties of each part were manually entered into ALGOR. The assembly was fixed with a surface boundary condition on the bottom surface as shown in Figure 3. The model was then run and dithered on extension.
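
Because the analyses are linear-elastic, the extension of the model scales roughly inversely with an assigned modulus, which suggests the following update rule for the manual iteration described above (a sketch of the logic for one region at a time; the actual iterations were carried out in ALGOR on the multi-region model):

```python
def update_modulus(E_current, extension_model, extension_experiment):
    """One scaling update for a single region: in a linear-elastic model the
    extension varies as 1/E, so matching the measured extension suggests
    E_new = E_old * (model extension / experimental extension)."""
    return E_current * (extension_model / extension_experiment)

# Example with placeholder values: the model stretches 1.8 mm where the test
# showed 1.5 mm, so the assumed modulus was too soft and is scaled up by 20%.
print(update_modulus(E_current=2.0, extension_model=1.8, extension_experiment=1.5))
```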

Figure 3. 3-D Model Boundary Conditions

207 Results Obtained

Figure 4. Eight Week Treatment Groups Compared

The tensile test results show the MSC collagen treated specimens have a 150%-350% increase in modulus of toughness (as shown in Figure 4). The modulus of toughness is the energy required to completely rupture the material. This supports the expectation that hernias treated with the MSC collagen will have less recurrence, because the tissue can absorb more energy before it ruptures. Also, the project shows that the application of MSCs decreases the healing time of the hernia repair. More experimental data will be collected to see if the results are consistent and accurate.

The FEA models created have also been validated through the experimental results. These models only considered the elastic portion of the response. More models will be created and examined with the use of a strain hardening modulus. By utilizing a strain hardening modulus, the 3-D model will be able to replicate the specimen tensile test more accurately.

References 1. Vidović, D. D., Jurišić, D. D., Franjić, B. D., Glavan, E. E., Ledinsky, M. M., & Bekavac-Bešlin, M. M., 2006, Factors affecting recurrence after incisional hernia repair. Hernia, 10(4), 322-325. 2. Anthony T., Bergen P. C., Kim L. T., Henderson, M., Fahey, T., Rege, R. V., and Turnage, R. H., 2000, Factors Affecting Recurrence Following Incisional Herniorrhaphy. European Journal of Surgery, 24(1), pp. 95-100. 3. Marx, R. E., Carlson, E. R., Eichstaedt, R. M., et al., 1998, Platelet Rich Plasma: Growth Factor Enhancement for Bone Grafts. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. Endod. 85: 638. 4. Badiavas E. V et al., Participation of Bone Marrow Derived Cells in Cutaneous Wound Healing. Journal of Cellular Physiology 2003; 196:245-250. 5. Ulrich, H., 2010, Perspectives of Stem Cells. Springer Media, London New York. 118-119. 6. Vaananen, H. K., 2005, “Mesenchymal Stem Cells,” Annals of Medicine, 37(7), pp. 469-479.

208 Engine Reconstruction to Run On Water

Student Researcher: Phillip E. Johnson

Advisor: Dr. Donna Moore-Ramsey

Cuyahoga Community College Department of and Design

Abstract It's time for the whole globe to make a massive change in how we manufacture automobiles, especially since everyone likes to throw around the word "Green". We now live in a society where many backyard mechanics have already made cars that run on water (yes, that's right, good ol' H2O) and gas-water hybrids. For years nobody thought it was possible, but now that the cat is out of the bag, it's only a matter of time before it becomes the great fix for many of society's pollution problems.

Methodology To get the process going, one approach is to first install stainless steel valves along with an exhaust system constructed entirely of the same material. To this day, automakers use cast iron exhaust manifolds and steel valves, which corrode badly and force you to spend unnecessarily large amounts of money when your exhaust falls off the car every couple of years.

The hydrogen process dissociates the water molecule by way of voltage charges, ionizes the combustible gases with electrons, and then prevents the water molecules from re-forming during ignition, producing extreme energy beyond the normal conditions known to man today. The complete process is also safe for humans, wildlife, the ozone layer, etc.

The next thing would be to use pulsating energy and what is called a water fuel cell, which splits water into oxygen and hydrogen so that the whole car, or motorcycle, can be run strictly on hydrogen alone. There are a few ways this has been achieved within the past 20 years with other devices added, such as a pulsating transformer which can step up voltage during running operation of the vehicle, and a blocking diode that prevents the electrical current from shorting out. You might also want to add an LC circuit that acts as a capacitor during electrical operation. The insulator to the flow of natural water forms the capacitance; i.e., water at this stage becomes part of the circuit in the array of resistance between electrical ground and positive potential, helping to limit the current flow within the pulsing circuit. Inductors then act as a modulator which steps up the oscillation of a given charge within the capacitance of a pulse-charging network in order to charge the voltage to a high enough range beyond the voltage zone. Physical blocking of components and circuits prevents the voltage from reaching infinity, or so to speak, overloading and getting hot. The voltage across the inductance for this to work is VL = VT XL / (XL - XC). Once the voltage pulse is turned off, the voltage potential goes back to a "ground state" to start the process all over again.

Stanley Meyer and Daniel Dingel were absolutely ahead of their time with these theories back in the day, and now lots of other people all over the world are doing the same thing to get away from being dependent on big gasoline prices at the pump. Once the hydrogen and oxygen are split, the hydrogen runs the whole car by going into the combustion chamber of the engine, burning hot enough to operate the car, with a regulator that determines the amount of hydrogen needed inside the engine. In fact, only a small battery with very minimal current is needed to split the water. You would need about four or five seconds before the car starts up because there is no fuel pump on the car; that part becomes useless. The cell then creates a lot of hydrogen that is sucked into the intake manifold, and since hydrogen is 14 times lighter than air, it goes immediately to the combustion chamber, therefore making the engine run on water. You cannot mix gasoline with water, first of all, and you cannot mix hydrogen with gas either.

The air filter is available but is replaced by a tube, so there is no filter any more. It is set inside and rounded to produce a certain drill for the air flow. The gas tank is not being used, so it can be removed. The amount of gas per cylinder is then controlled via solenoid valves. The system has no additional pump or injection system; it is all controlled via vacuum. As far as the spark plugs go, they can optionally be used and modified for better ignition. The engine and the transmission can stay unchanged. The muffler system was replaced with two end pipes, and the sound of the engine was identical to the sound normally produced by every other engine on the road today. No other obvious noises or emissions were present according to the Philippine government. The temperature of the bubbling water was contained at about 40 Celsius. This whole demonstration of the working prototype was done on November 28 and 29, 1999, in Manila. The car was a 1955 Corolla model with a 1.6 liter engine, and it was videotaped for its authenticity and then shared worldwide on television. Dingel said that he earned his mechanical degree by dint of effort and perfected it by simple practical experience that could be the equivalent of a Ph.D., and Stan Meyer had no higher education at all either; yet both men were able to pull off an astounding feat that could help mankind just by tinkering in the back yard. This idea does not store hydrogen in large quantities that could pose an explosion risk, unlike gasoline. The split water burns directly in the combustion chamber of the engine block. The Philippine government almost went into production of the car back in 2000, but then the government got cold feet and didn't put it out. It was supposed to be signed off by President Joseph Estrada, but then a new President was elected the following year. In 1998, Stan's version could run for about 100 miles per gallon, so think about that for a minute and do the math. Hydrogen stations are not the answer either, because it would still be the same chaotic mess as gas pumps. People need to be able to fuel up anywhere they are, not only when they come across a station.

When Daniel Dingel first got the idea going, he made the vehicle run on 50 percent gas and 50 percent hydrogen, but after more tinkering and research, he was able to get rid of the gas altogether and make the automobile run strictly on water by itself, and so did Stanley Meyer. Meyer also said that the process was so simple that he had to make the idea look complicated on paper just to please the people down at the patent office, because no one believed it to be true; but without a doubt, he, Daniel, and others showed the world that it was no hoax, and that you could run a vehicle off of any water on this planet, from rain to toilet water. There are also numerous motorcycles already using this technology, as far away as Australia, where a man named Steve Ryan has been featured on the news in his city.

The United States military had wanted to use this technology in its tanks, jeeps, etc., and it would only cost about $1,500 to equip your vehicle, a savings that everyone can afford, according to the late Stanley Meyer. Also, with Stan living in Ohio, that could have put the state right back on top, where it used to be when it was a high-production state manufacturing steel products for markets all over the globe.

Results Obtained Thirty years is too long for various countries to ignore technology that can change everything for a better way of living and breathing. Mr. Dingel has had his idea pushed aside time and time again, but now the idea is mainstream; hopefully, that will change things just like the Internet did. The public, for the most part, is sleeping on something that is needed for the earth to renew itself, like a band-aid on a bruise. If companies stop polluting the atmosphere, then slowly over time Mother Nature will heal itself. The state of California could definitely use the technology as soon as possible because a brown blanket of filthy air covers a lot of its cities, so just imagine how you're breathing compared to them. That state has the most pollution listed in the country, which could be fixed only if the government wanted it to. It's amazing how a small idea that's been here for some time isn't being utilized to fix a lot of problems.

The technology is harmless, good for the environment, and safer around kids than a gas tank spill, which could cause an explosion; water in its purest form cannot explode, and that in itself is better than riding around with airbags in a vehicle that has a flammable liquid in its rear compartment. A rear collision of supreme impact could kill a lot of people for no reason, especially when that tank can be converted to hold nothing but water. It just makes plain sense for the majority of the public's benefit.

210 Quantitative Analysis of Haptic Performance Using Human Machine Interaction and Multi-Task Performance Model

Student Researcher: Melissa A. Jones

Advisor: Dr. Chandler A. Phillips

Wright State University Department of Biomedical Engineering

Abstract Previous haptic research characterized a human-machine-interaction model for a human operator performing five simultaneous tasks using a common joystick and multi-attribute task battery (MATB) programming. To further develop this research, the tasks (light, dial, communication, frequency and channels) will be performed using a force feedback haptic stick in conjunction with a vibrating mouse. There exist three design alternatives for the mouse selection: autonomous, semi-autonomous or fully haptic. Additionally, integration of an audio cue for the monitoring task may improve performance. Traditionally, the monitoring section is purely visual. However, the addition of distinct alert tones would allow for either partial audio-visual or total audio-visual integration.

The human-machine-interaction model was redesigned to reflect a randomized study to test operator performance of tasks using haptic feedback and no haptic feedback, as well as audio-visual and visual-only presentation. Subjects will perform two 20-minute trials of each type, during one of which the feedback is activated and in the other, the feedback is inactivated. Three levels of total machine-initiated baud rate (βIN) are generated by the MATB program and three human operator baud rates (βO) are recorded during testing. The total baud ratio (β̄) is defined as the ratio of βO to βIN. The research refines the previously developed human-machine-interaction model and studies operator interaction with the addition of haptic feedback and audio feedback. Previous results indicate an improvement in the effectiveness of human performance in multiple task information processing using force feedback. Thus, the addition of either audio cues or haptic mouse interaction may further increase human operator performance in the MATB environment.
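
In symbols, with the notation used above:

```latex
\bar{\beta} \;=\; \frac{\beta_O}{\beta_{IN}},
\qquad \beta_{IN} = \text{machine-initiated input baud rate (bits/s)},\quad
\beta_O = \text{human operator output baud rate (bits/s)}.
```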

Introduction Recently, various investigators have quantitatively analyzed human performance using human machine interaction and a multi-task performance model. Human machine performance is quantifiable as a stimulus from a machine and as a response from the human operator [1]. The levels of difficulty can be increased and the resulting effect on performance may be measured as baud rate (bits per second) for several tasks simultaneously. The MATB (multi-attribute task battery) program is used to quantitatively assess human performance in multiple task information processing. Few studies have been conducted which analyze human performance during multiple task information processing. However, one study develops a quantitative model of the human operator and machine interaction during multiple task performance, while considering implicit strategy [1]. Implicit strategy is defined as being one in which a specific goal has been specified, no strategy important information has been provided, no feedback about performance is given, and the person is unaware of the performance of other individuals [1].

To further develop this research, multisensory input will be used in conjunction with the previously established haptic stick.

Project Objectives There appear to be no studies that consider the effect of force feedback on human operator performance during multiple task information processing. Consequently, the objective of this study is to develop a quantitative model of the human operator and machine interaction during multiple task performance using a force feedback haptic stick, audio signals, and a feedback mouse, and to compare the results to those obtained in previous studies that used only a joystick.

211 Methodology Used Subjects The subjects for this study were volunteers from Wright State University and Wright Patterson Air Force Base. Each subject was trained briefly on each of the tasks and then required to complete two 20-minute experimental trials. The first and second trials differed in that one was performed with force feedback while the other did not include force feedback. Additional testing alternatives include one trial with audio cues integrated into the monitoring section and one which is entirely visual.

MATB Tasks For this study, three display windows were utilized within the MATB environment [Figure 4]. These displays included system monitoring, communications and tracking. The subjects were instructed to ignore the other three windows: scheduling, resource management and pump status. The programs within MATB are run using a script file customized to include haptic input or no haptic input depending on the chosen scenario.

The monitoring task required the monitoring of both lights (F5 and F6) as well as the gauges (F1-F4). The normal state is that F5 is lit with a green light, and F6 is unlit. The subject responds by left clicking inside the offending box when the green light goes out in F5 or a red light appears in F6. The arrows within the gauges normally fluctuate one unit vertically up or vertically down. A response of a left click within the gauge is required when the arrows fluctuate more than one unit in either direction. For audio-visual integration, an audio alert is heard at the onset of a monitoring stimulus which prompts the user to click the offending area. An additional design includes a vibrational alert when the user’s cursor has been placed on a “clickable area” or a periodic vibrational reminder to attend to the monitoring section.
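The decision rule described in this paragraph can be sketched as a small predicate. This is a hedged illustration only; the argument names and the one-unit threshold encoding are assumptions about how the display state might be represented, not the MATB code.

```python
# Hedged sketch of the monitoring-task response rule described above.
# Argument names and the deflection encoding are illustrative assumptions.

def monitoring_response_required(f5_green_lit, f6_red_lit, gauge_deflections):
    """Return True when the operator should left-click in the monitoring window."""
    light_fault = (not f5_green_lit) or f6_red_lit             # F5 out, or F6 red
    gauge_fault = any(abs(d) > 1 for d in gauge_deflections)   # beyond the normal one-unit swing
    return light_fault or gauge_fault

# Example: F5 is still lit, F6 is dark, but gauge F3 has drifted two units.
print(monitoring_response_required(True, False, [0, 1, 2, -1]))  # True
```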

The tracking task requires the subject to use the haptic stick to keep the circular cursor within a large circular target. The level of drift of the cursor is directly related to the difficulty (high, medium, low) of the task. When the cursor drifts from the center, the force feedback haptic stick provides a vibration alert that returns the subject’s attention to the task so the drift can be corrected.

The experimental design was a 1×5 within-subjects factorial design. The independent variable was the MATB input baud rate, and the total baud ratio was the measured outcome.

Results This study indicates an improvement in the effectiveness of human performance in multiple task information processing using force feedback [Tables 1-3]. Performance is most notably improved in the tracking task, but minor improvements are seen in the lights and dials sections. The graphs illustrate the relationship between trials with and without haptic feedback for subject 1 only (Figures 1-3). The use of force feedback appears to improve user performance in the tracking task while not decreasing performance in the other tasks. The rate of improvement does not seem to be related to the difficulty level of the tasks; however, the higher input baud rates appear to affect all of the tasks. Further testing is to be completed in summer 2011 to determine the effect of adding a haptic mouse as well as audio cues. We predict an increase in performance with one or both of these additions, as the current technology allows subjects to ignore tasks which they might deem insignificant (dials).

Significance and Interpretation of Results Based on output data and baud rates, performance improves in the targeting task during implementation of the force feedback and the haptic stick. Use of the haptic stick does not appear to have a direct effect on the monitoring and communications tasks, but each task shows mild improvement in performance. It appears that, for the monitoring and communications tasks, practice has the greatest effect on performance. In addition to practice, increasing the input baud rate of the tasks seems to improve concentration on the targeting task, but decrease performance on the monitoring and communications sections. Therefore, it appears that, as the input baud rate increases, the output baud rate decreases. This indicates that as the difficulty increases, the performance decreases. During flight, pilots receive a large number of stimuli ranging from verbal commands to multiple visual demands. The effectiveness of training devices may improve given the integration of a force feedback mechanism.

212

Figures

Table 1. Output baud rate for tracking task
              Subject 1            Subject 2            Subject 3            Subject 4
Time (min)    Haptic   No Haptic   Haptic   No Haptic   Haptic   No Haptic   Haptic   No Haptic
0-3.99        0.4932   0.32        0.9635   0.7         0.685    0.753       0.77     0.97
4-7.99        0.4475   0.26        0.9909   0.86        0.872    0.804       0.95     0.79
8-11.99       0.6347   0.56        1        0.98        0.689    0.511       0.96     0.95
12-15.99      0.2877   0.23        0.9909   0.92        0.973    0.954       0.86     0.56
16-19.99      0.7302   0.68        0.9953   0.96        0.995    0.944       0.78     0.5

Figure 1. Targeting task for subject 1 shows clear improvement in performance with haptic stick. Practice may improve performance as subject 1 performed the trial with the haptic stick after performing without the haptic stick.

Table 2. Output baud rate for dials task
              Subject 1              Subject 2               Subject 3              Subject 4
Time (min)    Haptic   No Haptic     Haptic     No Haptic    Haptic   No Haptic     Haptic   No Haptic
0-3.99        0.35     0.263         0.6176471  0.5588235    0.5      0.53846154    0.038    0.5
4-7.99        0.115    0.111         0.8125     0.6428571    0.4      0.9           0        0.4
8-11.99       0.4      0.154         0.6363636  0.8666667    0.323    0.61764706    0.0625   0.5625
12-15.99      0.375    0.219         0.5277778  0.5238095    0.462    0.92592593    0.077    0.3076923
16-19.99      0.75     0.429         0.75       0.6538462    0.792    0.8125        0.147    0.2903226

Figure 2. Graph of output baud rate for dials. This graph compares haptic and no haptic for subject 1, and shows a distinct improvement in performance with the addition of haptic feedback.

Table 3. Output baud rate for lights task
              Subject 1              Subject 2               Subject 3                Subject 4
Time (min)    Haptic   No Haptic     Haptic     No Haptic    Haptic     No Haptic     Haptic     No Haptic
0-3.99        0.821    0.714286      0.8311688  0.8311688    0.8        0.775         0.425      0.675
4-7.99        0.8      0.775         0.7560976  0.7560976    0.7857143  0.78571429    0.464286   0.75
8-11.99       0.8      0.675         0.9        0.9          0.8607595  0.87341772    0.6        0.85
12-15.99      0.756    0.822785      0.7857143  0.7857143    0.875      0.85365854    0.475      0.75
16-19.99      0.825    0.85          0.835443   0.835443     0.8813559  0.9           0.5        0.6585366

Figure 3. Output baud rate for subject 1 during lights task. This figure compares haptic with no haptic without a clear improvement being shown.

Figure 4. The multi-attribute task battery (MATB) environment

References
1. Phillips, C. A., Repperger, D. W., Kinsler, R., Bharwani, G., and Kender, D. (2007). A quantitative model of the human–machine interaction and multi-task performance: A strategy function and the unity model paradigm. Computers in Biology and Medicine, 37(9), 1259-1271.
2. Comstock, J. R., and Arnegard, R. J. (1992). The multi-attribute task battery for human operator workload and strategic behavior research. Technical Report 104174, NASA Langley Research Center, Hampton, VA.

214 Investigation Into the Impact of Wake Effects on the Aeroelastic Response and Performance of Wind Turbines

Student Researcher: Krista M. Kecskemety

Advisor: Dr. Jack J. McNamara

The Ohio State University Department of Mechanical and Aerospace Engineering

Abstract Wind turbines are currently a rapidly expanding form of renewable energy. However, there are numerous technological challenges that must be overcome before wind energy provides a significant amount of power in the United States. One of the primary challenges in wind turbine design and analysis is accurately accounting for the wake. This study represents a step towards including wake effects in wind turbine design and analysis codes. A free wake model is developed using a time-marching vortex line method, and subsequently coupled with FAST, an open source wind turbine aeroelastic code. The resulting model is verified and validated against existing results. Subsequently, comparisons are made in power predictions and aeroelastic responses between the developed free wake model and typical wind turbine aerodynamic modeling approaches such as Blade Element Momentum theory and dynamic inflow. Results indicate that the free wake model has an impact on the power, blade loads, and blade deflections.

Project Objectives Renewable energy is an important area of research due to rising energy demands, finite fossil fuel supplies, and growing environmental concerns. Over the last 25 years, wind turbine technology has increased power output dramatically - up to 5 MW electrical output machines - making wind energy a viable form of renewable energy1. This motivated a recent study by the U.S. Department of Energy2 into the potential future role of wind energy in the United States. The primary conclusion was that 20% of the United States’ energy could originate from wind technologies by 2030. However, to meet this milestone, power production must increase from the 2.5 GW of wind power installed in 2006 to 16 GW installed per year by 2018. Furthermore, several significant technological aspects are noted as important to make this 20% a reality, including improved structural and aerodynamic modeling tools2.

This need for improved modeling and prediction tools was recently demonstrated in a blind study by the National Renewable Energy Laboratory (NREL)3 which assessed the accuracy of several different wind turbine prediction codes relative to experimental measurements using the NASA Ames 24.4 m by 36.6 m wind tunnel. Wide variations in predictions were observed. The aerodynamic forces on the blades will produce both power and blade bending forces. Thus, these are two of the parameters investigated in the study. At high wind speeds, predictions varied between 30%-275% of the measured power, and 60%-125% of the measured blade bending force. For enforced no-stall conditions, which provide the most favorable conditions for comparison, predictions varied between 25%-175% of the measured power and 85%-150% of the measured blade bending force.

A likely root cause of these errors is inaccurate modeling of the complex aerodynamic environment of wind turbines4,5. One of the primary complications of wind turbine aerodynamics is the unsteady nature of the flow as a combined result of: yawed conditions, ambient wind shear, ambient turbulence, blade displacements, the flow velocity deficit due to the tower shadow, and wake effects5. Because the wind turbine is operating at relatively low tip speeds, changes to the flow due to these effects can have a large impact on the effective angle-of-attack of the blades5. Several different approaches, of varying complexity, have been utilized to approximate this aerodynamic environment, such as Blade Element Momentum (BEM) theory, dynamic inflow models, vortex wake models, and Computational Fluid Dynamics (CFD)5. The simplest and lowest fidelity approach is BEM; a two dimensional, semi-empirical theory that assumes the wake is in equilibrium with the blade forces5,6 and that the pressure drop across the actuator disc (i.e., a zero thickness circular surface resembling an infinite number of rotor blades7) is

constant8. Attempts to account for neglected effects include corrections for: a finite number of blades, the addition of inflow models to more accurately represent the wake, and coupling to dynamic stall models5. Dynamic stall is an essential effect in wind turbine design and analysis since it can lead to large increases in the blade loads and displacements and is typically included using semi-empirical models9. However, these models are of limited value when coupled to BEM since dynamic stall is significantly affected by the wake5. A partial correction for the wake can be obtained using dynamic inflow models based on a set of differential equations that approximate the nonuniform, time varying inflow conditions caused by the wake9. However, dynamic inflow models approximate the representation of the wake5.

Vortex wake methods attempt to model the essential features of the wake while maintaining computational feasibility. These methods consist of either prescribed wakes or free vortex wakes (FVW). Both prescribed wakes10,11, and free wakes4, 12, 13 have been utilized in wind turbine modeling. In some instances a hybrid free/prescribed approach is used to obtain values required by the prescribed wake14,15. Prescribed wakes are simple to implement and computationally inexpensive; however they are not accurate across a broad range of applications since they require empirical tuning16. In contrast to prescribed wakes, FVW methods model the convection and diffusion of the wake as it is shed and trailed from the rotor blade into the flow17. Thus, FVW models are based on fewer assumptions and are more computationally expensive16. Beyond FVW methods, more accurate modeling of the aerodynamics requires CFD solutions to the Euler or Navier Stokes equations. However, the extreme computational costs, in addition to inadequacies at capturing important features, such as the three dimensional vortex wake and the downstream convection of vorticity, make CFD an impractical option for most applications in wind turbines9.

Of the nineteen different codes included in the NREL blind study3, 9 implemented BEM, 3 implemented dynamic inflow, 4 utilized CFD, and 3 used vortex wake models. Based on the results of this study, it is clear that these methods are either too primitive, or in the case of CFD, too computationally expensive to solve the governing equations of the flow accurately. A commonly used wind turbine design and analysis code, NREL’s open source code FAST, demonstrates the use of primitive and inaccurate aerodynamic theories. The unsteady aerodynamic loads are computed using the open source NREL code AeroDyn. Currently, AeroDyn implements either BEM or a dynamic inflow model, Generalized Dynamic Wake (GDW), to compute the aerodynamic coefficients.6, 18 Clearly, considering the results of the NREL study, improvements over these modeling approaches are needed.

Motivated by this aerodynamic modeling gap, Gupta and Leishman4, 19, 20 recently investigated application and extension of the Maryland Free Wake Model17, 21-25, used extensively in rotorcraft research, to wind turbines. Comparisons with experimental data demonstrated that the developed time-marching free wake model was numerically stable and provided good to excellent predictions of the wake geometry and induced loads. Based on the model developed by Gupta and Leishman4, 19, 20, a FVW model was recently developed by the authors26. Verification and validation results demonstrated good to excellent agreement with experimental and computational results26.

This paper highlights preliminary results generated during the initial phases of installing a FVW model in a comprehensive wind turbine analysis code. The specific objectives are:
1. Incorporation of a free vortex wake model into the open source NREL wind turbine aerodynamic modeling code, AeroDyn, and then integration into NREL’s FAST code, a comprehensive wind turbine analysis and design tool.
2. Comparison of the predicted power performance using different aerodynamic modeling approaches, e.g., BEM, dynamic inflow, and free vortex wake, on a representative wind turbine.
3. Investigation of the effect of the wake on a flexible wind turbine structure by comparing the blade loads and deflections predicted using the different aerodynamic modeling approaches.
Fulfilling these objectives represents a necessary step towards improved wind turbine design and analysis tools, and assists in the goal of 20% wind power in the U.S. by 2030.

216 Methodology Used Free Vortex Wake Model Free vortex wake models are either based on relaxation methods for the time integration, which enforce wake periodicity17, 22, 23, or time-marching approaches17, 21, 24, 27. Several different methods are used to model the wake structure, such as: vortex filaments4, 14-16, vortex points/blobs13, screw surface vortex sheets10, 11 and vortex lattice/vortex sheets12.

The FVW model used in this study is primarily based on the Maryland Free Wake4, 17, 19, 20, 23, 24 time-marching, vortex filament approach. First, the bound circulation on the blade is computed using a Weissinger-L model28, adapted for rotating lifting surfaces4, 22, 29. The Biot-Savart law is applied to compute the induced velocity on each vortex filament using the bound circulation and the self-induced wake velocities. A predictor-corrector time-marching sequence is then used to convect the vortex filaments. Subsequently, the positions of the vortex filaments are used to compute the induced velocities on the blade. The effective angle-of-attack on the blade is then computed and used to obtain the blade bound circulation for the next time step. A detailed description of this process can be found in Ref. 26.
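The two numerical kernels named in this paragraph, the Biot-Savart induced-velocity evaluation and the predictor-corrector convection of the wake markers, can be sketched as follows. This is a minimal illustration under stated assumptions (straight-line filament segments, a simple core cutoff, a Heun-type corrector); it is not the Maryland Free Wake implementation, and the function and variable names are placeholders.

```python
import numpy as np

# Hedged sketch of the two kernels described above; not the actual FVW code.

def segment_induced_velocity(p, a, b, gamma, core=1e-3):
    """Biot-Savart velocity induced at point p by a straight vortex filament
    segment from a to b carrying circulation gamma (simple core cutoff)."""
    r1, r2, r0 = p - a, p - b, b - a
    cr = np.cross(r1, r2)
    denom = np.dot(cr, cr) + (core * np.linalg.norm(r0)) ** 2
    k = gamma / (4.0 * np.pi) * np.dot(r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))
    return k * cr / denom

def convect_wake(markers, self_induced_velocity, v_inf, dt):
    """Predictor-corrector (Heun) time march of the free wake markers,
    convected by the free stream plus the self-induced wake velocity."""
    v0 = v_inf + self_induced_velocity(markers)
    predictor = markers + dt * v0
    v1 = v_inf + self_induced_velocity(predictor)
    return markers + 0.5 * dt * (v0 + v1)
```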

FVW-FAST Integration In order to study the effects of the structural response on the aerodynamics of the FVW model, the model must be coupled to a structural solver. To accomplish this, the open source code FAST was chosen, as it is a common computational model used in the wind turbine community. Currently, FAST uses AeroDyn to compute its aerodynamics. Two options are available in AeroDyn for computing the induced velocity on the blade: 1) BEM with a skewed wake correction and 2) dynamic inflow (GDW). The aerodynamic force and moment coefficients are then computed using either the Leishman-Beddoes dynamic stall model30 or a table look-up in static airfoil data tables. Note that the dynamic stall model is not used for the results in this study, as the incorporation of a dynamic stall model is currently in progress.

The integration process is shown in Fig. 1. First, the blade locations and velocity vectors are passed to the FVW from FAST. Subsequently, the FVW model computes the blade lift coefficient, blade effective angle-of-attack, and sectional velocity. The effective angle-of-attack is then used to compute the drag coefficient using the table lookup. Next, the forces are passed back to FAST to compute the next set of structural responses. FAST uses a modal representation of each blade and the tower to compute the structural response using an Adams-Bashforth-Adams-Moulton predictor-corrector integration scheme31.
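The exchange described above can be summarized as one coupled time step. The sketch below is schematic only: the object and method names (blade_kinematics, advance, lookup_cd, and so on) are illustrative placeholders, not the actual FAST/AeroDyn interface routines.

```python
# Schematic of the per-step FVW-FAST data exchange described above (Fig. 1).
# All names here are illustrative placeholders, not real FAST/AeroDyn calls.

def coupled_step(fast_state, fvw_state, airfoil_table, dt):
    # FAST -> FVW: current blade node locations and velocity vectors.
    positions, velocities = fast_state.blade_kinematics()

    # FVW: lift coefficient, effective angle of attack, and sectional velocity.
    cl, alpha_eff, v_section, fvw_state = fvw_state.advance(positions, velocities, dt)

    # Drag coefficient from a static airfoil table at the effective angle of
    # attack (no dynamic stall model, as noted in the text).
    cd = airfoil_table.lookup_cd(alpha_eff)

    # FVW -> FAST: sectional aerodynamic forces drive the modal structural
    # update (Adams-Bashforth-Adams-Moulton predictor-corrector in FAST).
    forces = fast_state.assemble_aero_forces(cl, cd, v_section)
    fast_state = fast_state.advance_structure(forces, dt)
    return fast_state, fvw_state
```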

Verification and Validation The free wake aerodynamic model was verified and validated in Ref. 26. The model showed good agreement with experimental32 and computational results from Ref. 4 in predicting the wake geometry. Additionally, the numerical scheme was found to be linearly and nonlinearly stable.

The FVW-FAST model must be verified and validated with and without the structural degrees-of-freedom (DOF) added. For the model with the structural effects, 3 DOFs are included for each blade. Each blade includes 2 flapwise DOFs and 1 edgewise DOF. The flapwise DOF is the blade out-of-plane motion and the edgewise DOF corresponds to the blade in-plane motion.

For validation, experimental results are used from the NREL Unsteady Aerodynamics Experiment Phase VI33, 34. CFD results from Sezer-Uzol et al.35 are used for verification. These inviscid CFD results are generated using PUMA2, a 3D, compressible, Euler solver. The model used in this study is based on the NREL Phase VI horizontal axis wind turbine33, 34. Table 1 lists the parameters for the configuration examined. Details about the blade’s varying chord and twist distribution are available in Ref. 33. This verification and validation simulation has an unyawed 7 m/s wind speed. The CP distribution along the normalized chord from both the CFD model and the experimental measurements is used to compute the lift coefficient at each spanwise location. The computed lift coefficient across the blade span for the FVW is compared to CFD and experimental results in Fig. 2 for an unyawed flow with V∞=7 m/s. Results using BEM and the Weissinger-L (W-L) model with no far wake induced velocities are also included for comparison. In addition, the NRMSE and L∞ errors for the FVW with a 6 DOF structure, FVW with a rigid structure, BEM, W-L, and CFD are compared in Table 2. It is clear that CFD overpredicts the lift

coefficient, while the FVW model has excellent agreement. BEM also overpredicts the lift coefficient, and the approach yields errors up to 30% and an RMS error over 20%. The use of the W-L model only, which neglects the far wake induced velocities, exhibits the worst agreement relative to the experiment; however, it is similar to the BEM prediction. Thus, the effect of the induced velocities from the far wake has a significant impact on the blade loading in this configuration. It is interesting that the FVW models outperform the CFD analysis, as the latter is considered higher fidelity. This may be due to the neglected viscosity in the CFD analysis, which is accounted for in the FVW approach using a viscous core model. Additionally, the CFD results are only computed for 2 rotor revolutions, whereas the FVW results are generated with 30 revolutions. Finally, note that the inclusion of elastic degrees-of-freedom has a negligible impact on the blade lift coefficient, indicating that the NREL Phase VI has stiff blades.
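For reference, the two error measures reported in Table 2 might be computed as below. The normalization convention (here, the range of the experimental data) is an assumption, since the text does not state which normalization is used.

```python
import numpy as np

# Illustrative computation of the Table 2 error measures; normalization by
# the experimental data range is an assumption.

def nrmse(predicted, measured):
    predicted, measured = np.asarray(predicted), np.asarray(measured)
    rmse = np.sqrt(np.mean((predicted - measured) ** 2))
    return rmse / (measured.max() - measured.min())

def linf_error(predicted, measured):
    predicted, measured = np.asarray(predicted), np.asarray(measured)
    return np.max(np.abs(predicted - measured))
```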

The computational expenses of the different theories are an important consideration when making the above comparisons. The FVW solution requires approximately 25 minutes on a single 2.27 GHz Intel Xeon processor to compute 30 wake revolutions. In contrast, the Sezer-Uzol et al. CFD analysis35 required at minimum 1.7 days on 128 dual 3.2 GHz Intel Xeon processors to compute 2 revolutions. The BEM solution requires approximately 10 s on a single 2.27 GHz Intel Xeon processor. Thus, it is clear that the FVW model provides an optimal balance between accuracy and computational expense. Also, it is important to note that the computation time of the FVW could be significantly decreased if a parallel computing environment were used.

Results Obtained and Interpretation of Results To assess how the various aerodynamic models affect the aeroelastic response of wind turbines, simulations are performed using the developed FVW model, BEM, and dynamic inflow (GDW) coupled with FAST. Due to the high structural stiffness of the NREL UAE Phase VI wind turbine, a different wind turbine was modeled here to examine the aeroelastic response and power predictions. This representative wind turbine is described in Table 3 and is based on the blades of the WindPACT 1.5 MW wind turbine available from the NREL certification test files36. The wind turbine had 10 DOFs for the cases where structural response was considered. These DOFs included: 2 blade flapwise DOFs, 1 blade edgewise DOF, 2 tower fore-aft bending DOFs, and 2 tower side-to-side bending DOFs. The results shown in this section are for the flexible structure and only include the rigid structure as a comparison when the difference is not negligible.

Power Predictions Power predictions are an important measure of wind turbine performance; therefore, the rotor power was computed using the FVW, BEM, and GDW for three different wind speeds, V∞=8 m/s, 10 m/s, and 12 m/s. These wind speeds were chosen because they represent unstalled conditions, a necessary requirement since the incorporation of the FVW model does not yet account for stall effects (static or dynamic). The power predicted in Fig. 3 shows that BEM and GDW predict much higher power, between 50-90% higher, than the FVW model. It is evident that the induced velocities in the FVW model result in a decrease in power.
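For context on the chosen wind speeds, rotor power scales with the cube of wind speed through the standard actuator-disc relation P = 0.5·ρ·πR²·V∞³·CP, so the 8-12 m/s range spans roughly a factor of 3.4 in available power. The sketch below illustrates this scaling; the rotor radius and power coefficient are illustrative placeholders, not values taken from the paper.

```python
import math

# Cube-law context for the three unstalled wind speeds compared above.
# The radius and Cp below are illustrative placeholders, not paper values.

def rotor_power_watts(v_inf, radius_m, cp, rho=1.225):
    return 0.5 * rho * math.pi * radius_m ** 2 * v_inf ** 3 * cp

for v in (8.0, 10.0, 12.0):
    p_mw = rotor_power_watts(v, radius_m=35.0, cp=0.45) / 1e6
    print(f"V = {v} m/s -> ~{p_mw:.2f} MW")   # roughly 0.54, 1.06, 1.83 MW
```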

Aeroelastic Response In addition to power predictions, the loads, moments, and blade deflections are also important when analyzing and designing a wind turbine. Figures 4-6 show the flapwise moments and forces at the blade root and the flapwise tip deflections generated by the three aerodynamic models at V∞=12 m/s. Additional results at different wind speeds and in the edgewise directions can be found in Ref. 26. Across the various wind speeds, the FVW predicts lower forces, moments, and tip deflections, and in some cases the prediction from BEM and GDW can be almost double that of the FVW.

Conclusions A free vortex wake (FVW) model, based on previous work by Leishman et al.4, 17, 19-25, was developed and coupled to FAST, an aeroelastic wind turbine computational modeling tool. The integrated aeroelastic model was verified and validated against data in the open literature based on CFD35 and experimental observations32-34. Finally, the developed FVW model was compared to standard-practice approximate

aerodynamic models commonly used in wind turbine design and aeroelastic analysis; namely, Blade Element Momentum theory (BEM) and dynamic inflow (GDW). Results demonstrated that:
1. The coupled FVW-FAST model showed excellent agreement with experimental results for the lift coefficient of the NREL Phase VI wind turbine in unyawed 7 m/s flow. The resulting lift coefficients were closer to the experiment than the CFD predictions.
2. An examination of the power predictions indicated that the inclusion of wake effects through the FVW has an impact on the power, with instances of 90% higher power predicted by BEM and GDW than by the FVW.
3. The FVW also has a significant impact on the predicted aeroelastic response of the wind turbine. The FVW decreases the blade root forces and moments, as well as the blade tip deflections, compared to BEM and GDW.

Additional work is needed to account for the effects of static and dynamic stall with the FVW model. Since a dynamic stall model is already present in AeroDyn, this is a relatively straight-forward extension that is in progress. Furthermore, additional studies will be carried out on the impact of both yawed and turbulent flow fields, which are prevalent in the operation of wind turbines.

Figures and Charts

Figure 1: Flowchart of the FVW-FAST coupling.
Figure 3: Rotor Power

219

Acknowledgment This work is supported in part by an allocation of computing time from the Ohio Supercomputer Center.

References [1] Hansen, M., Sørensen, J., Voutsinas, S., Sørensen, N., and Madsen, H., “State of the art in wind turbine aerodynamics and aeroelasticity,” Progress in Aerospace Sciences, Vol. 42, No. 4, 2006, pp. 285 – 330. [2] United States Department of Energy, “20% Wind Energy by 2030,” www.20percentwind.org. [3] Simms, D., Schreck, S., Hand, M., and Fingersh, L., “NREL Unsteady Aerodynamics Experiment in the NASA-Ames Wind Tunnel: A Comparison of Predictions to Measurements,” Tech. Rep. NREL/TP-500-29494, NREL, June 2001. [4] Gupta, S., Development of a Time-Accurate Viscous Lagrangian Vortex Wake Model for Wind Turbine Applications, Ph.D. thesis, University of Maryland, 2006. [5] Leishman, J., “Challenges in Modeling the Unsteady Aerodynamics of Wind Turbines,” 21st ASME Wind Energy Symposium and the 40th AIAA Aerospace Sciences Meeting, Reno, NV, January AIAA- 2002-37, 2002. [6] Laino, D. and Hansen, A., User’s Guide to the Wind Turbine Aerodynamics Computer Software AeroDyn, Windward Engineering LC, 2002. [7] Johnson, W., Helicopter Theory, Dover Publications, New York, 1994. [8] Burton, T., Sharpe, D., Jenkins, N., and Bossanyi, E., Wind Energy Handbook, John Wiley & Sons, Chichester, England, 2001. [9] Leishman, J., Principles of Helicopter Aerodynamics, Cambridge University Press, Cambridge, 2006.

220 [10] Chattot, J., “Helicoidal vortex model for wind turbine aeroelastic simulation,” Computers & Structures, Vol. 85, No. 11-14, 2007, pp. 1072 – 1079, Fourth MIT Conference on Computational Fluid and Solid Mechanics. [11] Chattot, J., “Optimization of Wind Turbines Using Helicoidal Vortex Model,” Journal of Solar Energy Engineering, Vol. 125, 2003, pp. 418–424. [12] Pesmajoglou, S. and Graham, J., “Prediction of Aerodynamic Forces on Horizontal Axis Wind Turbines in Free Yaw and Turbulence,” Journal of Wind Engineering and Industrial Aerodynamics, Vol. 86, 2000, pp. 1–14. [13] Voutsinas, S., “Vortex Methods in Aeronautics: How to Make Things Work,” International Journal of Computational Fluid Dynamics, Vol. 20, No. 1, 2006, pp. 3–18. [14] Coton, F. and Wang, T., “The Prediction of Horizontal Axis Wind Turbine Performance in Yawed Flow Using an Unsteady Prescribed Wake Model,” Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, Vol. 213, 1999, pp. 33–43. [15] Coton, F., Wang, T., and Galbraith, R., “An Examination of Key Aerodynamic Modeling Issues Raised by the NREL Blind Comparison,” Wind Energy, Vol. 5, 2002, pp. 199–212. [16] Currin, H. D., Coton, F. N., and Wood, B., “Dynamic Prescribed Vortex Wake Model for AERODYN/FAST,” Journal of Solar Energy Engineering, Vol. 130, No. 3, 2008, pp. 031007. [17] Leishman, J. G., Bhagwat, M. J., and Bagai, A., “Free-Vortex Filament Methods for the Analysis of Helicopter Rotor Wakes,” Journal of Aircraft, Vol. 39, No. 5, 2002, pp. 759–775. [18] Moriarty, P., AeroDyn Theory Manual, NREL/TP-500-36881, 2005. [19] Gupta, S. and Leishman, J., “Accuracy of the Induced Velocity from Helicoidal Vortices Using Straight-Line Segmentation,” AIAA Journal, Vol. 43, No. 1, 2005, pp. 29–40. [20] Gupta, S. and Leishman, J., “Stability of Methods in the Free-Vortex Wake Analysis of Wind Turbines,” 23rd ASME Wind Energy Symposium and the 42nd AIAA Aerospace Sciences Meeting, Reno, NV, January AIAA-2004-827, 2004. [21] Crouse Jr., G. and Leishman, J., “A New Method for Improved Rotor Free-Wake Convergence,” 31st AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, January AIAA-1993-872,1993. [22] Bagai, A. and Leishman, J., “Rotor Free-Wake Modeling using a Pseudo-Implicit Technique- Including Comparisons with Experimental Data,” 50th Annual Forum of the American Helicopter Society, Washington, D.C., May 1994, pp. 29–39. [23] Bagai, A. and Leishman, J., “Free-Wake Analysis of Tandem, Tilt-Rotor and Coaxial Rotor Configurations,” Journal of the American Helicopter Society, Vol. 41, No. 3, July 1996, pp. 196– 207. [24] Bhagwat, M. and Leishman, J., “Stability, Consistency and Convergence of Time Marching Free- Vortex Rotor Wake Algorithms,” Journal of the American Helicopter Society, Vol. 46, No. 1, January 2001, pp. 59–71. [25] Bhagwat, M. and Leishman, J., “Time-Accurate Free-Vortex Wake Model for Dynamic Rotor Response,” Aeromechanics 2000, American Helicopter Society Specialist Meeting, Atlanta, GA, November 2000. [26] Kecskemety, K. and McNamara, J., “The Influence of Wake Effects and Inflow Turbulence on Wind Turbine Loads,” AIAA Journal, "Paper in Press", 2011. Also, AIAA-2010-2654. [27] Kini, S. and Conlisk, A., “Nature of Locally Steady Rotor Wakes,” Journal of Aircraft, Vol. 39, No. 5, 2002, pp. 750–758. [28] Weissinger, J., “The Lift Distribution of Swept-Back Wings,” Tech. rep., NACA TM 1120, 1947. [29] Ribera, M., Helicopter Flight Dynamics Simulation with a Time-Accurate Free-Vortex Wake Model, Ph.D. 
thesis, University of Maryland, 2007. [30] Leishman, J. and Beddoes, T., “A Generalized Method for Airfoil Unsteady Aerodynamic Behavior and Dynamic Stall Using the Indicial Method,” Proceedings of the 42nd Annual Forum of the American Helicopter Society, Washington, DC, June 1986, pp. 243–266. [31] Jonkman, J. and Buhl, M., “New Developments for NWTCs FAST Aeroelastic HAWT Simulator,” 42nd AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, January AIAA-2004-504, 2004. [32] Haans, W., Sant, T., van Kuik, G., and van Bussel, G., “Measurement and Modelling of Tip Vortex Paths in the Wake of a HAWT under Yawed Flow Conditions,” 43rd AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, January AIAA-2005-590, 2005.

221 [33] Hand, M., Simms, D., Fingersh, L., Jager, D., Cotrell, J., Schreck, S., and Larwood, S., “Unsteady Aerodynamics Experiment Phase VI: Wind Tunnel Test Configurations and Available Data Campaigns,” Tech. Rep. NREL/TP-500-29955, NREL, December 2001. [34] Duque, E. P. N., Burklund, M. D., and Johnson, W., “Navier-Stokes and Comprehensive Analysis Performance Predictions of the NREL Phase VI Experiment,” 41st AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, January AIAA-2003-355 , 2003. [35] Sezer-Uzol, N. and Long, L. N., “3-D Time-Accurate CFD Simulations of Wind Turbine Rotor Flow Fields,” 44th AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, January AIAA-2006-394, 2006. [36] “NWTC Design Codes (FAST by Jason Jonkman, Ph.D.),” Last modified 31-March-2010; accessed June-2010.

222 Geometry in Rockets

Student Researcher: Kari J. King

Advisor: Sally Moomaw

University of Cincinnati Department of Early Childhood Education

Abstract This third grade mini unit allowed students in a suburban classroom to research and explore rocket composition and purpose. After researching and discussing rockets, students then designed a two-dimensional rocket using geometric shapes on graph paper. The students then built a three-dimensional model rocket based on their design. After building the rocket, students created a purpose and hypothesis for the astronauts to test in space. To close the mini unit, students watched “Toys in Space,” a NASA Core product.

Project Objectives This project had two objectives that were created based on the Ohio Third Grade Science and Math Standards. The math objective includes: “The students will learn about the compositions of rockets and geometrical concepts used in creating rockets by designing a two dimensional rocket and creating a three dimensional model.” This objective supports not only the use of geometric shapes and symmetry, but also the early mathematical concept of taking an idea in two-dimensional form and creating a three-dimensional model from the blueprint. The science objective includes: “The students will discover the purposes of rockets as they research and design their own three dimensional rockets and establish a destination and hypothesis for the astronauts to research.” This objective supports students taking a deeper look into space exploration. By knowing more about space exploration, students were able to generate their own questions they had about space and set a purpose for their rocket model.

Methodology This mini unit is designed around the Social Learning Theories of Lev Vygotsky. Vygotsky believed that true learning occurs when students are given the opportunity to interact with each other. By interacting with one another in a small group setting, students can increase their skills and knowledge by discussing, disagreeing, compromising, and growing together. Through this, they are learning process (negotiating) and content (the shapes and purpose of rockets). These lessons give the students an opportunity to work together in a small, four-person support circle, where they can make decisions, designs, and discoveries about geometry and rockets. The lessons designed were student centered and open for student discovery and decision making through exploration of materials and information.

Results 98% of students met both objectives designed for this lesson. The students were assessed through their final portfolio which included their two-dimensional rocket design, geometric shapes and symmetry used, three-dimensional product, and purpose sheet.

Significance and Interpretation of Results Although students had little background knowledge on rockets, the assessment results were high for the overall class. The students were very interested in this project and were actively engaged throughout the mini unit. Because student interest was so high, the project’s success and the students’ level of learning were increased.

223 Figures and Charts

Acknowledgments and References There were no references used in this project. I do want to thank Sally Moomaw for inspiring me to enjoy and appreciate math and science and expand my experiences in the field of STEM.

224 Exploring Space

Student Researcher: Katherine J. Klepac

Advisor: Dr. Robert Ferguson

Cleveland State University Department of Education

This lesson was intended as an introductory lesson to the solar system and space science because, upon entering the classroom I was to teach in, the students had not yet started learning about outer space but were just finishing their earth science unit. Their teacher, my mentor teacher, wanted me to create a lesson that would just begin to scratch the surface of our solar system and what is out there so her kids could start pondering what they were to be learning about in the weeks following. I am teaching eighth graders in an inner-city Cleveland school, and, unfortunately, the students’ abilities are not quite up to par with the rest of the eighth graders throughout the state, especially when it comes to reading. The students are unable to read at the level they should, and there are many students in the class with discipline issues who create a lack of focus for their peers. Because of this, I chose to approach the lesson by breaking the students up into groups and creating four stations. I thought this would be a great way to better control the class as a whole and also a good way to easily and individually monitor the work being done by each student. Therefore, I broke them into groups of five students each and created the following four stations: 1) the Inner Planets, 2) the Outer Planets, 3) Satellites and the Moon, and 4) Relationships between the Earth and the Sun. I created a worksheet packet that the students were to carry with them to every station. They were to use the information at the stations to answer questions throughout the packet. In some questions, there was a color written next to the blank in parentheses, but the students did not know why it was there until the final assessment of the lesson; they were just told that they would need it later. I believed this to be a good way to keep the students curious throughout the entire lesson.

The first two stations incorporated reading because the teacher told me the students needed to work on their reading skills in order to prepare for the rapidly approaching Ohio Achievement Assessment (OAA). In these stations, students were to read small captions and look at statistics and NASA images on each planet that they were exploring, discovering characteristics specific to each planet. In the first station, they read about Mercury, Venus, Earth, and Mars. They were asked simple, introductory questions about each planet’s distinguishing attributes, such as: I am the closest planet to the sun; I am the planet with the thickest atmosphere; and I am the only planet with liquid water. They were to fill in the blanks for each question, pairing the statement with the planet it described. All of the information necessary for the students to complete the questions was found in the planet papers that I created.

As for the second station, it was very similar, the only difference being the planets being researched. In station two, the outer planets were the focus, Jupiter, Saturn, Neptune, and Uranus. The students were to do the same thing in this station as they did in the first, finding solutions to statements such as: I have a giant red hurricane that has never gone away, I have the most spectacular ring system in the universe, I tilt the “wrong way”, and so on.

As for the third station, this is where the NASA website was used, and it was critical in the students’ learning. This station was created to incorporate technology into the lesson, which the students really enjoyed. The students stated that they very seldom got to use the computers, which were about the extent of the technology found in their classroom. The objective of this station was getting the students to think critically about satellites and what they are. One of the main things I wanted them to get out of this station was that satellites are not necessarily only manmade, but that objects such as the moon are satellites as well. They were to go to the website http://www.nasa.gov/audience/foreducators/topnav/materials/listbytype/How_Do_Satellites_Help_Us.html

225 and view the short video on satellites, giving them an understanding of what satellites were. Because the video was short and the rest of the stations would take longer to complete than this one that solely used this video, the students were instructed to visit the website: http://www.nasa.gov/audience/forkids/kidsclub/flash/index.html. As an instructor, I really enjoyed this website because I thought it gave the kids a fun, interactive way to explore their solar system. Not only that, but it gave them options so not all of the students had to do the same activity. The workshop I attended in the fall showed me how to navigate the NASA website and learn how to incorporate it into my lesson, and the kids absolutely enjoyed it.

As for the last station, it was more of an interactive station between the students and myself. Using a globe and a flashlight, the students and I discussed what makes a day on Earth, and then a year. They were to draw what they saw in their packets and illustrate how and why the Earth has days and years.

As a final assessment, each group was given a black poster board and an outline of all eight planets. They were to color each planet with the color they had written next to it in their worksheet packet. They were also given a sun and a moon to color and place on the poster board. If the students had all eight planets in the correct order and in the correct color, then they and I knew they had completed the assignment correctly. There was only one group that created their poster incorrectly; that group colored one planet the wrong color. This assessment created a wonderful way for me to get immediate feedback on the effectiveness of my lesson. I was quickly and easily able to judge which students understood the lesson and which did not.

Although it was just an introductory lesson, there were still content standards addressed. Station three stressed the standard, “Name and describe tools used to study the universe (e.g., telescopes, probes, satellites and spacecraft)” (http://ims.ode.state.oh.us/ode/ims/acs/Benchmarks/Default.asp). Station four tackled the standard that reads, “describe how objects in the solar system are in regular and predictable motions that explain such phenomena as days, years, and seasons” (http://ims.ode.state.oh.us/ode/ims/acs/Benchmarks/Default.asp). Students also practiced their reading skills, which help them all across the board when it comes to schooling, and their ability to follow directions, both of which are extremely crucial to succeeding on the OAA.

I believe that this lesson was wonderful in introducing the solar system and all of its bodies to the students. Although, at times, it was hard to get the students to refocus and come back to their work, overall, the group method worked well in controlling behavior, and the students enjoyed the ability to get up for a minute to move to the next station rather than just sitting at their desks to learn as they usually do. I incorporated technology and NASA websites to enhance the students’ learning and help them better understand satellites, which they really enjoyed. I was able to receive immediate feedback on the success of my lesson by observing how well the students did on their final assessment. There were content standards addressed as well, so learning was not only fun and interactive, but also productive. The students enjoyed this lesson, and they also learned a lot from it.

The websites http://www.nasa.gov/audience/foreducators/topnav/materials/listbytype/How_Do_Satellites_Help_Us.html and http://www.nasa.gov/audience/forkids/kidsclub/flash/index.html are crucial to acknowledge because they were critical in the students’ understanding of satellites in the third station. Also deserving recognition is http://ims.ode.state.oh.us/ode/ims/acs/Benchmarks/Default.asp because this is where the content standards are found.

226 The Creation of a Web-Based Steel and Aluminum Microstructure and Properties Library

Student Researcher: Daniel E. La Croix

Advisor: Dr. Timothy Dewhurst

Cedarville University Department of Engineering and Computer Science

Abstract Heat treating a metal can significantly alter its mechanical properties by changing the microstructure of the material. The heat treatment process involves heating a material to an elevated temperature (up to 1600°F) for a predetermined amount of time and then cooling the material either rapidly (quenching) or slowly (annealing) depending on the desired results. As a capstone project for the Properties of Materials course at Cedarville University, student teams must design four different heat treatments for a steel or aluminum alloy. After implementing these heat treatments, the team must test the mechanical properties of each specimen (strength, ductility, hardness, etc.) and examine the microstructure of each specimen to observe how it has been altered. A web-based library consisting of an organized collection of the data obtained from the heat treatment project would be a valuable tool to the engineering field in both industry and academia.

Project Objectives The goal of this project is to create a meticulously organized and highly accessible web-based library which will provide a visual and tabular collection of data pertaining to the mechanical properties of various steel and aluminum alloys. To make the achievement of this objective feasible, an understanding of the heat treatment process and experience in performing a heat treatment is required. Implementation of the web-based database requires compilation of all the information obtained from the heat treatment projects performed in 2007-2010; standardization of the stress-strain curves for each steel and aluminum alloy; addition of a length scale to each micrograph; tabulation of the mechanical properties and the heat treat specifications for each specimen; and development of the framework and content of the web-based library.

Methodology Used In preparation for creating the web-based library, heat treatments of various steel and aluminum alloys were performed. The objective of these heat treatments was to obtain mechanical properties and micrographs for each specimen of the various alloys. Four heat treatments were designed for each steel and aluminum alloy. For steel alloys, the heat treatments included two austenitized, quenched, and tempered specimens; one normalized specimen; and one annealed specimen. For aluminum alloys, the heat treatments included one underaged specimen, one aged specimen, one overaged specimen, and one annealed specimen. Extensive research was performed to determine the proper temperature and time for each heat treatment as well as the proper quench for each steel and aluminum alloy. This research involved the utilization of steel and aluminum phase diagrams, continuous cooling transformation (CCT) diagrams, hardenability curves, Grossman charts, and other heat treatment resources. Once the heat treatments had been completed, both a tension test and a hardness test were performed on each specimen to obtain the desired mechanical properties (strength, ductility, hardness, etc.) of the material. Finally, each specimen was cut and polished in preparation for obtaining a micrograph to examine the microstructure of the material.
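The property extraction from the tension-test data can be sketched as below. The paper does not describe its exact reduction procedure, so the 0.2% offset construction, the elastic-fit window, and the variable names here are assumptions for illustration.

```python
import numpy as np

# Illustrative reduction of a tension-test curve to the tabulated properties:
# modulus from the initial slope, yield strength by the 0.2% offset method,
# and ultimate tensile strength as the peak stress.  A hedged sketch only.

def tensile_properties(strain, stress_ksi, offset=0.002, elastic_limit=0.0015):
    strain = np.asarray(strain)
    stress_ksi = np.asarray(stress_ksi)
    elastic = strain <= elastic_limit                            # assumed linear region
    E = np.polyfit(strain[elastic], stress_ksi[elastic], 1)[0]   # modulus (ksi)
    offset_line = E * (strain - offset)                          # 0.2% offset construction
    yield_idx = int(np.argmax(stress_ksi <= offset_line))        # first crossing
    return {"modulus_ksi": float(E),
            "yield_ksi": float(stress_ksi[yield_idx]),
            "uts_ksi": float(stress_ksi.max())}
```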

Following the completion of the heat treatments, all of the results obtained, along with the specifications for each heat treatment performed, were compiled and standardized. This standardization is a continuous process since it must be completed each year as new data is collected by different student groups. Once the current data had been compiled and standardized, HTML pages were created to display the stress-strain curves, hardness data, mechanical properties, and micrographs of each steel and aluminum alloy. These pages were arranged within a website, and a hierarchy of links was created so that navigation between alloys would be straightforward. Currently, this website has not been

227 published on the internet as it still requires some additional data and aesthetic refinements. However, the final goal of this project is to publish the website as a web-based library on the Cedarville University webpage so that engineers can have access to the valuable heat treatment and mechanical properties data for the steel and aluminum alloys which have been tested. Furthermore, it is our hope that this library will continue to be updated with additional data obtained from future heat treatments.
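A page-per-alloy library of this kind could be generated from the standardized tables along the lines of the sketch below. The file layout, field names, and page structure are assumptions for illustration, not the actual Cedarville implementation; the example values are taken from Table 1.

```python
from pathlib import Path

# Hedged sketch of generating one library page per alloy from standardized
# property tables.  Field names and layout are illustrative assumptions.

def write_alloy_page(alloy, treatments, out_dir="library"):
    rows = "".join(
        f"<tr><td>{t['name']}</td><td>{t['yield_ksi']}</td>"
        f"<td>{t['uts_ksi']}</td><td>{t['hardness_hra']}</td></tr>"
        for t in treatments)
    html = (f"<html><body><h1>{alloy}</h1>"
            "<table><tr><th>Heat treatment</th><th>Yield (ksi)</th>"
            f"<th>UTS (ksi)</th><th>Rockwell A (60 kgf)</th></tr>{rows}</table>"
            "</body></html>")
    Path(out_dir).mkdir(exist_ok=True)
    Path(out_dir, alloy.replace(" ", "_") + ".html").write_text(html)

# Example rows drawn from Table 1 (A36 Steel).
write_alloy_page("A36 Steel", [
    {"name": "As Received", "yield_ksi": 46.5, "uts_ksi": 66.8, "hardness_hra": 50.0},
    {"name": "Annealed", "yield_ksi": 39.4, "uts_ksi": 60.1, "hardness_hra": 49.3},
])
```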

Results Obtained Figure 1 below shows the stress-strain curves and hardness data across the cross section of the as received, tempered 302, tempered 572, normalized, and annealed specimens of A36 Steel. Table 1 shows the mechanical properties of each specimen. These plots and table were created from the data obtained from the tension tests and hardness tests performed on each specimen. Figure 2 shows the micrographs which were obtained from each of the cut and polished specimens. These results are typical of those obtained by each of the student groups for steel alloys.

Figure 1. Stress-strain curves and hardness across the cross section of various A36 Steel specimens

Table 1. Mechanical properties of various A36 Steel specimens
                                  As Received   Tempered 302   Tempered 572   Normalized   Annealed
Yield Strength (ksi)              46.5          122.8          135.2          50.2         39.4
Ultimate Tensile Strength (ksi)   66.8          155.2          150.7          64.3         60.1
Modulus of Elasticity (ksi)       28,900        28,600         24,600         20,600       28,400
Rockwell A Hardness (60 kgf)      50.0          66.0           66.7           46.7         49.3

Figure 2. Micrographs of various A36 Steel specimens at 100x magnification (panels: As Received, Tempered 302, Tempered 572, Normalized, Annealed)

Figure 3 below shows the stress-strain curves and hardness data across the cross section of the as received, underaged, aged, overaged, and annealed specimens of 7075 Aluminum. Table 2 shows the

228 mechanical properties of each specimen. These plots and table were created from the data obtained from the tension tests and hardness tests performed on each specimen. Figure 4 shows the micrographs which were obtained from each of the cut and polished specimens. These results are typical of those obtained by each of the student groups for aluminum alloys.

Figure 3. Stress-strain curves and hardness across the cross section of various 7075 Aluminum specimens

Table 2. Mechanical properties of various 7075 Aluminum specimens
                                  As Received   Underaged   Aged    Overaged   Annealed
Yield Strength (ksi)              64.1          60.9        60.2    59.9       13.2
Ultimate Tensile Strength (ksi)   73.0          71.3        71.8    71.2       30.4
Modulus of Elasticity (ksi)       8,800         8,500       8,600   8,500      8,500
Rockwell A Hardness (60 kgf)      53.1          54.2        53.6    53.4       23.9

Figure 4. Micrographs of various 7075 Aluminum specimens at 100x magnification (panels: As Received, Underaged, Aged, Overaged, Annealed)

Acknowledgments The stress-strain curves, hardness data, mechanical properties, and micrographs of the A36 Steel specimens presented in this report were prepared based on the data collected by the Cedarville University student group comprised of Jason Bender, Casey Hinzman, Andrew Hood, Brad Latario, and Andrew Shrank.

The stress-strain curves, hardness data, mechanical properties, and micrographs of the 7075 Aluminum specimens presented in this report were prepared based on the data collected by the Cedarville University student group comprised of Joshua Brown, Andrew Knesnik, Carl Kobza, Daniel La Croix, and Barry Westefeld.

229 Modeling the Solar System

Student Researcher: Kara E. Layton

Advisor: Dr. Jennifer Hutchison

Cedarville University Department of Science and Math, Department of Education

Abstract The concept of ratios and proportions will provide the main focus for my lesson. Practically, this lesson will come after several days of introductions to proportions and a basic knowledge in using them to solve problems. The lesson is designed for students in an 8th or 9th grade Algebra I class. The students will be working in groups to design individual models of some of the planets in the solar system, based on the lessons “Distance to the Moon” and “Diameter of the Moon” found in the Exploring the Moon educator’s guide. Through these lessons, students will be better able to understand how proportions and ratios apply to nature. Additionally, students will gain a better grasp of the enormity of our solar system.

Lesson The lesson is based on some lessons found in the Exploring the Moon educator’s guide. The lessons focus on scaling down the moon in order to model it practically. This same idea can be extended to all of the planets in our solar system, as well as the distances between them. Due to the vastness of our solar system, physically modeling all of these planets and the distances between them in the classroom is somewhat impractical, since a high school classroom simply is not large enough. The work, however, can be done, allowing the student to begin to conceptualize our solar system.

Students begin this project by completing two main worksheets (one is shown to the right), either as a class or as homework. These worksheets help the students determine scaled distances between planets in the solar system as well as scaled diameters of the planets. After completing this work, students can use their measurements either to construct objects with the same measurements or to find other, already-made objects (like sports balls or fruit) to create a model of some of the planets in the solar system. Students then connect these objects using wire or string to create a fully scaled model of some planets in the solar system.
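The proportional reasoning the worksheets walk through amounts to choosing one scale factor and applying it to every diameter and distance. The sketch below illustrates this; the astronomical values are approximate, and the 10 cm model Earth is an arbitrary choice for illustration.

```python
# Proportional scaling as exercised in the worksheets: one ratio, applied to
# every diameter and distance.  Planetary values are approximate; the 10 cm
# model Earth is an arbitrary illustrative choice.

true_diameter_km = {"Earth": 12_742, "Mars": 6_779, "Jupiter": 139_820}
true_distance_km = {"Earth": 149.6e6, "Mars": 227.9e6, "Jupiter": 778.5e6}

scale = 10.0 / true_diameter_km["Earth"]   # cm of model per km of reality

for planet in true_diameter_km:
    model_diameter_cm = true_diameter_km[planet] * scale
    model_distance_m = true_distance_km[planet] * scale / 100
    print(f"{planet}: {model_diameter_cm:.1f} cm ball, {model_distance_m:.0f} m from the Sun")

# Even with a 10 cm Earth, the model Sun-Earth distance is over a kilometer,
# which is why the full model does not fit inside a classroom.
```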

Objectives  To help students connect the use of proportions with their practical application in modeling.  To help students apply what they learn about proportions, ratios, and modeling to everyday scenarios.  To allow students to produce visual materials in order to communicate what they learned.  To give students a greater understanding of the enormous size of their solar system.

Alignment This lesson aligns with Ohio’s Measurement Standard #7, “Apply proportional reasoning to solve problems involving indirect measurements or rates.”

Methodology The lesson attempts to simplify the vast and complex universe in which we live, making the study of it more practical for high school students. The lesson relies on students’ prior knowledge of ratios and proportions. In a sense, the lesson leans more toward a constructivist approach where students expand known information to new circumstances. In this case, students take what they know about proportions

230 and ratios and apply it in order to scale down the solar system. Though the brunt of the work comes from worksheets, the hands on aspect of students actually creating a model of some planets allows the students to interact with the information they are learning. Hopefully applying their knowledge of proportions to this circumstance will allow students to recognize future situations in which they can use ratios and proportions.

Results and Interpretation Prior to this lesson, students had difficulty connecting what they had learned about proportions and ratios to their uses in daily life. Making this connection helped the students truly learn the information. Students were also each able to make a model that they could take home and use later. This tangible item gave the students something by which to remember the lesson. The worksheets, the second of which is shown to the right, provided an example to follow when using proportions later. Overall, the lesson served its purpose of assisting students in connecting the idea of ratios and proportions to their uses in daily life, as well as allowing the students to understand how incredibly vast the solar system is.

Acknowledgments and References
1. Mathematics Academic Content Standards. Modified Jun 09 2010. Ohio Department of Education. http://www.ode.state.oh.us/GD/Templates/Pages/ODE/ODEDetail.aspx?page=3&TopicRelationID=1704&ContentID=801&Content=86689
2. Canright, Shelley. Exploring the Moon Educator Guide. Aug 23, 2009. http://www.nasa.gov/audience/foreducators/topnav/materials/listbytype/Exploring.the.Moon.html

231 Cruising With Sir Isaac: Using Newton Cars to Investigate Motion

Student Researcher: James M. Less, M. S.

Advisor: Mark A. Templin, Ph.D.

The University of Toledo Department of Education

Abstract Cruising with Sir Isaac is an engaging, highly motivating lesson in which students experience and learn how Newton’s Laws of Motion apply to the movement of objects in their lives. This project is based on the “Newton Car” lesson plan published in the National Aeronautics and Space Administration (NASA) Rocket Educator’s Guide (NASA, 2008). During this inquiry-based project, students work in teams to collect data on the motion of the “Newton Car” when various conditions are changed, including the mass of a canister projectile, the applied force from rubber bands, and frictional surfaces. Students collaborate with their classmates to analyze and present their data in a meaningful way that demonstrates their understanding of Newton’s Laws of Motion.

Project Objectives This investigation will help students understand and relate Newton’s Laws of Motion to activities in their lives. The following objectives will be achieved during the project. A rubric is used and provided to students to assess their achievement of these objectives.
 Students will collect data on factors (mass, applied force, and friction) that affect the motion of a “Newton” car.
 Students will draw conclusions and apply Newton’s Laws of Motion to their data.
 Students will collaborate with classmates in preparing graphs and presenting their findings.

Methodology This project is based on the “Newton Car” lesson plan published in NASA’s Rocket Educator’s Guide (NASA, 2008), which includes instructions for building and operating the Newton Cars under conditions involving variable mass (for canister projectiles) and variable applied force (number of rubber bands for catapult). I modified the NASA (2008) lesson by including another test variable for “road surface” to investigate the effect of friction on the cars’ motion. Thus, students operated and collected data for the Newton Cars when launching them on a variety of different surfaces.
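To prepare the Day 3 graphs, each team needs to reduce its repeated runs to an average distance for every condition tested. The sketch below shows one way to do that; the condition labels and distances are invented for illustration and are not actual classroom data.

```python
# Hypothetical Newton Car trial log: distance moved (cm) for each condition.
# Conditions vary one factor at a time: canister mass, rubber-band count, surface.
trials = {
    ("50 g",  "3 bands", "tile"):   [62, 58, 64],
    ("100 g", "3 bands", "tile"):   [81, 77, 85],
    ("100 g", "5 bands", "tile"):   [120, 118, 126],
    ("100 g", "5 bands", "carpet"): [74, 70, 69],
}

for (mass, force, surface), distances in trials.items():
    mean = sum(distances) / len(distances)   # average of the two or three runs
    print(f"mass={mass:>5}, force={force:>7}, surface={surface:>6}: "
          f"mean distance = {mean:.1f} cm")
```

Comparing the means across conditions is what lets students tie the distance traveled back to the projectile mass, the applied force, and the road surface.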

The inquiry-based project is aligned with Ohio Academic Content Standards (Ohio Department of Education, 2002) for Ninth Grade Physical Science (Indicators 23 and 24) and Scientific Inquiry (Indicators 3, 5 and 6).

The project incorporates three 50-minute lessons within a motion and forces unit that supplements the text Glencoe Physical Science (McLaughlin & Thompson, 1999).

Day 1: Cruising with Sir Isaac
 Opening (5 minutes): Teacher greets students, introduces the project, and reviews the objectives.
 Activity 1 (5 minutes): Teacher calls on volunteers to assist in demonstrating a Newton Car.
 Activity 2 (5 minutes): Teacher assigns teams and reviews procedures and safety requirements.
 Activity 3 (30 minutes): Teams construct their Newton Cars, fill and weigh canisters, operate Newton Cars, and measure and record the distances moved. Teams conduct two or three test runs with the same mass in the canister. Then students change the mass and conduct another set of test runs.
 Closing (5 minutes): Teacher questions students about what they learned from their investigation and discusses how mass relates to the motion of the Newton Cars.

Day 2: Still Cruising
 Opening (5 minutes): Teacher greets students and reviews the objectives for the day.
 Activity 1 (30 minutes): Students continue to work in teams and collect data to investigate the effect of applied force and friction on the cars’ motion. The teams conduct two or three test runs with the same mass in the canisters but vary first the number of rubber bands and then the “road surface.”
 Activity 2 (10 minutes): After students have collected data and cleaned their work areas, the teacher will lead the class in a discussion of how Newton’s Laws of Motion were demonstrated during the project.
 Closing (5 minutes): Teacher questions students about what they learned and discusses the report assignment, referring students to the rubric that was passed out on the first day.

Day 3: Collaborating and Presenting Data
 Opening (5 minutes): Teacher greets students and reviews the objectives for the day.
 Activity 1 (5 minutes): Teacher introduces the collaboration activity, assigns students to groups, and discusses the rubric for group presentations of data.
 Activity 2 (20 minutes): Students work in small groups to share and prepare a presentation of their data and findings to the class.
 Activity 3 (15 minutes): Groups make a two- or three-minute presentation to the class, which summarizes their data and how it applies to Newton’s Laws of Motion.
 Closing (5 minutes): Teacher recaps the objectives from the project.

Results Students were engaged and worked well together in teams to collect the data. The students were able to demonstrate Newton’s First Law of Motion by observing that the car did not move when the canister was not positioned within the “V” of the rubber band. The students were able to apply Newton’s Second Law of Motion by measuring the distance that the car moved when the mass of the projectile canister and the applied force of the rubber bands were changed. Third, the students readily observed that the projection of the canister and the acceleration of the car were in opposite directions. This observation helped them comprehend and apply the Third Law of Motion.

Significance and Interpretation of Results I conducted this project before formally presenting the students with material from the text on forces and Newton’s Laws of Motion so that they could build their knowledge on concrete experiences. Although this is a highly engaging hands-on project, it is important that the teacher provide structure in the form of opening instructions, rubrics, and wrap-up questions/discussions so that students can make sense of their observations and data.

Acknowledgments and References I thank the faculty and staff involved with The University of Toledo UT3 program and the Ohio Aerospace Institute for their support and encouragement. My references for this project include:

1. McLaughlin, C. W., and Thompson, M. (1999). Glencoe Physical Science. Westerville, OH: Glencoe/McGraw-Hill.
2. National Aeronautics and Space Administration. (2008). Newton Car. In NASA, Rockets Educator’s Guide with Activities in Science, Technology, Engineering and Mathematics (pp. 51-55). Kennedy Space Center, FL: National Aeronautics and Space Administration.
3. Ohio Department of Education. (2002). Academic content standards: K-12 Science. Columbus: Ohio Department of Education.

Phases of the Moon

Student Researcher: Elise J. Lund

Advisor: Dr. Cathy Mowrer

Marietta College Department of Education

Abstract Observing the phases of the moon is an Ohio standard introduced in second grade. Students need to observe the night sky and see how the moon changes every night. By studying the phases of the moon, learners will be able to see the difference between waxing and waning. The following lesson plan report shows teachers how to teach and model learning about the phases of the moon. The plan includes directions on how to create a moon phases book, a demonstration, and a writing extension activity. All different types of learning styles are accounted for with various accommodations. Overall, the lesson plan will help students visually see the phases of the moon by creating books and watching a demonstration.

Objective The student will be able to name the phases of the moon with 85% accuracy, after creating moon booklets and observing a demonstration.

Alignment
1.) Standard: Earth and Space Sciences (Grade 2)
Benchmark: The Universe: Observe constant and changing patterns of objects in the day and night sky
Grade-Level Indicators: Observe and describe how the moon appears a little different every day but looks nearly the same again about every 4 weeks; observe the moon.

Lesson
DAY #1:
‐ After looking at the phases of the moon pictures, place them in order and tape them to the board so all students can refer to them
‐ Label the phases of the moon with note cards: new moon, waxing crescent, first quarter, waxing gibbous, full moon, waning gibbous, last quarter, waning crescent, and new moon
‐ Pass out the moon book template: one sheet of yellow, one sheet of black (images below)
o Tell students how the yellow represents the lit portion of the moon and the black represents the unlit portion of the moon
‐ Instruct students to cut out all of the phases of the moon
‐ Show the model moon book around the class so they see an example
‐ Help the students put their book in order, starting with the new moon
o WAXING: lit side on the RIGHT and WANING: lit side on the LEFT
‐ After students have their book in order and it has been checked, the teacher punches a hole on the left side
‐ Have students tie the yellow ribbon through the holes on each page and tie a bow

DAY #2: ‐ Look at the moon books the students made yesterday, introduce and sing the following songs:

Phases of the Moon (To the tune of: Row, Row, Row Your Boat)
Phases of the moon, first it is waxing:
Waxing crescent, first quarter, waxing gibbous, full moon
Now the moon is getting small, it is waning:
Waning gibbous, last quarter, waning crescent, new moon

If the Moon is… (To the tune of: If You’re Happy and You Know It)
If the moon is getting bigger it’s on the right
That is where you’ll see more moonlight
When the moon is getting large
Then WAXING is in charge
If the moon is getting bigger it’s on the right
If the moon is getting tiny it’s on the left
Soon there will not be any moon left!
When the moon is getting SMALL
WANING is what it’s called
If the moon is getting tiny it’s on the left

‐ Explain how the moon is ALWAYS HALF LIT
‐ Perform NASA demonstration: teacher holds the ball out and moves around, showing how the lamp always lights half of the moon
o Ask: what part of the moon is lit during a full moon? (the part we see, facing Earth)
o Ask: what part of the moon is lit during a new moon? (the part we don’t see, facing the Sun)
‐ Complete the NASA demonstration page and ask for volunteers to come up and hold the “moon”
‐ Then ask students to determine which phase of the moon we are seeing
‐ Think about which phase it is: WAXING on the RIGHT, WANING on the LEFT
‐ Introduce extension activity: Moon Journals - observe the moon and make a creative journal.

Results Obtained Students responded well to the hands-on approach of making the moon phases booklets. They were able to see the phases of the moon by looking at the moon lamp and the NASA demonstration. All types of learners were engaged in the activity: auditory learners listened to the moon phases songs, visual learners saw the pictures, and kinesthetic learners created the moon book. Students participated throughout the lesson and activated prior background knowledge.

Significance of Results Grading the moon phases books is up to the teacher. The books are simply the means to reach the objective, which is being able to name all of the phases of the moon in order. Overall, the students learned the phases of the moon by creating their books and singing the songs. Students enjoyed showing their books to each other and to their families. They had fun and learned about the moon!

Chart (for accommodations)
Group: Accommodation:
Below grade-level: pre-cut moon phases and help assembling; moon phases written directly on the book
Above grade-level: make secret flaps to hide what the moon phase is called, like a quiz!
Visual learners: moon demonstration with lamp and ball; moon phases with note cards labeling the phases
Auditory learners: saying/singing the phases of the moon song
Kinesthetic learners: moon book in hands while looking at pictures

Acknowledgements and References: Images used from NASA.gov through Google images, NASA moon demonstration

Solar Powered Water Purification System

Student Researcher: Kamau B. Mbalia

Advisor: Dr. Krishnakumar Nedunuri

Central State University Department of Environmental Engineering

Abstract Entering my senior year at Central State University, I have taken numerous courses and completed several internships. During my studies I have seen the full range of options available in the field of Environmental Engineering, and with my background and career interests, water quality always seemed to be the best fit. With an additional interest in renewable and alternative energy, however, it appeared difficult to incorporate both interests into my senior design project. After weeks of thinking and consulting with Dr. Nedunuri, we settled on a Solar Powered Water Purification System.

Methodology The Solar Powered Water Purification System allows us to combine both of my career interests. After deciding on the project, we needed to take the necessary steps to begin constructing the system. With assistance from Mr. Clark Fuller and Central State University’s connections with the National Renewable Energy Laboratory, we were able to acquire a solar panel, a SunModule SW 185 mono. Its performance under standard test conditions is:

Maximum Power Pmax: 185 Wp
Open Circuit Voltage Voc: 44.8 V
Maximum Power Point Voltage Vmpp: 36.3 V
Short Circuit Current Isc: 5.50 A
Maximum Power Point Current Impp: 5.10 A

After the acquisition of the solar panel, the next step was determining how to use it to power the eventual water purification system. Again with the assistance of Mr. Fuller, we obtained a DC car battery, which is needed to keep the system powered during periods of little or no sunlight. By charging the battery and running the system simultaneously, we would have enough energy to power the system throughout the night. The next step is determining the proper method of connecting the solar panel to the battery for optimal power. To do this we are reviewing a similar project by Michael Speed in order to compare methods, and we are contacting various companies that supply such solar panels to get their input on how to connect the two.
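The day/night energy balance described above can be checked with rough arithmetic. The sketch below is only an illustration: the battery capacity, pump draw, sun-hours, and loss factor are assumed placeholder values, not figures from the project; only the 185 Wp panel rating comes from the specifications listed earlier.

```python
# Back-of-the-envelope sizing: can the SW 185 panel run a small DC pump all day
# and leave enough charge in a car battery to carry the system through the night?
# All load, battery, and insolation figures below are assumptions for illustration.

panel_watts_peak = 185.0    # from the SunModule SW 185 specifications above
peak_sun_hours = 4.5        # assumed average daily insolation, h
battery_ah = 60.0           # assumed 12 V car battery capacity, Ah
battery_volts = 12.0
pump_watts = 20.0           # assumed DC pump draw, W
system_efficiency = 0.75    # assumed wiring / charge-control losses

energy_in_wh = panel_watts_peak * peak_sun_hours * system_efficiency
pump_wh_per_day = pump_watts * 24.0
battery_wh = battery_ah * battery_volts

print(f"daily energy harvested ~ {energy_in_wh:.0f} Wh")
print(f"daily pump consumption ~ {pump_wh_per_day:.0f} Wh")
print(f"battery reserve        ~ {battery_wh:.0f} Wh "
      f"(~{battery_wh / pump_watts:.0f} h of pump-only runtime)")
```

Comparing the harvested energy against the 24-hour pump consumption is what determines whether the battery actually carries the system through the night or slowly discharges.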

After this, we will need to acquire a DC pump to move the water through the system; this should not be too much of a hassle. Before we obtain the pump, however, calculations are needed to determine the power required, among other parameters. This depends on the vessel through which the water will actually flow. The vessel is expected to be 27.56 in tall with a diameter of 20 in and will have three partitions. The first is the inlet point of the water and contains a weir, which allows particles to settle at the bottom of this partition while the less contaminated water flows over the weir and on to the lower partition, where a filter bed will be placed. The filter bed traps the smaller particles, allowing less material to move through. Finally, the third partition will hold a gravel bed that allows for maximum filtering of the water flowing through the

system. Currently, I am designing the system in AutoCAD; this makes it easy to go back and make adjustments where necessary.
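The pump calculation referred to above is essentially a hydraulic-power estimate of the form P = rho * g * Q * H / eta. The sketch below uses the 27.56 in column height from the design; the flow rate and pump efficiency are assumptions for illustration only, and a real calculation would also need to include friction and filter-bed losses.

```python
# Hydraulic power needed to lift water through the filter column.
# Flow rate and efficiency below are assumptions; only the column height
# comes from the design dimensions given in the text.

rho = 1000.0          # water density, kg/m^3
g = 9.81              # gravitational acceleration, m/s^2
Q = 2.0 / 60000.0     # assumed flow: 2 L/min converted to m^3/s
H = 27.56 * 0.0254    # column height from the design, inches -> m (~0.70 m)
eta = 0.35            # assumed efficiency of a small DC pump

P_hydraulic = rho * g * Q * H          # watts of useful pumping work
P_electrical = P_hydraulic / eta       # watts drawn from the battery/panel
print(f"hydraulic power ~ {P_hydraulic:.2f} W, "
      f"electrical draw ~ {P_electrical:.2f} W at {eta:.0%} efficiency")
```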

Conclusion We have come a long way on this project, but we still have a long way to go. Calculations, panel connections, acquisition of a pump, and, lastly, the actual construction of the holding tank still need to be completed. We are making headway and will continue to do so. This project is not yet complete, but it has the potential to bring knowledge as well as publicity to the opportunities that Central State University and the Environmental Engineering Department have to offer.

Analysis of Jet Engine Turbine Blade Manufacturing Process Uncertainties for Enhanced Turbine Blade Reliability and Performance

Student Researcher: Myron L. McGee

Advisor: Dr. Abayomi Ajayi-Majebi

Central State University Department of Manufacturing Engineering

Introduction The objective of this research is to quantify uncertainties in jet engine blade manufacturing process variables and to rank the process variables in order of sensitivity and importance, using reliability-based design optimization theories and approaches. Identifying critical variables makes it possible to focus resources on key variables. It also helps avoid wasting resources on controlling marginal and unimportant variables that do not affect the probability of blade failure as a result of manufacturing process variations. Potential benefits include advancing Air Force research information that supports safer blade design, and thus safer aviation mission operations for the Warfighter, as well as recommendations to support continuous improvement efforts for the jet engine blade manufacturing process. Because of my three-year involvement with Wright-Patterson Air Force Base as an intern in the Manufacturing Technology Division, my efforts with Dr. Ajayi-Majebi include connecting that internship with this research endeavor.

What is a Jet Engine Turbine Blade? Jet engine turbine blades, also referred to simply as turbine blades, are the components that make up a turbine. A turbine is a rotary engine that extracts energy from a fluid flow and converts it into work. A jet engine combustor produces high-temperature, high-pressure gas, and the turbine blades are the components designed to harness the energy produced at those high temperatures and pressures. In order to endure this environment, turbine blades are most commonly manufactured from expensive superalloys. Turbine blades are fabricated using investment casting methods inside vacuum chambers; they are finely machined to a precise shape, and laser machining is used to add tiny cooling holes to the blade. Along with these manufacturing techniques, manufacturers also implement a variety of cooling methods, including thermal barrier coatings, internal air channels, and boundary layer cooling, which affect the blades’ working life.

Each engine manufacturer uses a proprietary alloy for its turbine blades. These high-strength alloys, called "superalloys", are primarily nickel-based. Even with these high-tech alloys, gas turbine engine temperatures often exceed the melting point of these materials. As a result, the turbine blades require complex cooling mechanisms to maintain component temperatures beneath the melting point of the alloy. Applications of Turbine Blades include compressors, turbines, and rotors. Compressors are mechanical devices that compress a gas increasing pressure. Turbines as stated above are rotary engines that extract energy from a fluid flow and convert it into work. Rotors are rotating parts of mechanical devices, for example generators, alternators or pumps. The rotor of a turbine is powered by fluid pressure.

Revealing Significant Effects in the Manufacturing Processes In order to reveal significant effects in the manufacturing processes of turbine blades, one must map the manufacturing processes from start to finish. This mapping ensures that the processes are accurately captured and that the significant effects can be exposed through theory and experimentation. The theories this research would implement are reliability-based design optimization theories and approaches, such as the first- and second-order reliability methods, the bathtub hazard rate curve, and the importance sampling method.

First/second-order reliability method
The first/second-order reliability method (FORM/SORM) is considered to be one of the most reliable computational methods for structural reliability. Its accuracy generally depends on three parameters: the curvature radius at the design point, the number of random variables, and the first-order reliability index. FORM is an analytical approximation in which the reliability index is interpreted as the minimum distance from the origin to the limit state surface in standardized normal space (u-space). The most likely failure point (design point) is found using mathematical programming methods. The second-order reliability method (SORM) was established as an attempt to improve the accuracy of FORM; it is obtained by approximating the limit state surface in u-space at the design point by a second-order surface.
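For the special case of a linear limit state with independent normal variables, the FORM reliability index has a closed form, which makes a compact illustration of the reliability-index idea described above. The strength and stress statistics below are placeholders, not turbine-blade data.

```python
from math import sqrt
from statistics import NormalDist

# First-order reliability for a linear limit state g = R - S, where
# R (capacity) and S (demand) are independent normal random variables.
# In this special case the reliability index is exact:
#   beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2),  Pf = Phi(-beta)
# The numbers below are assumed placeholders for illustration only.

mu_R, sigma_R = 900.0, 60.0   # e.g. blade strength statistics, MPa (assumed)
mu_S, sigma_S = 650.0, 80.0   # e.g. operating stress statistics, MPa (assumed)

beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)
pf = NormalDist().cdf(-beta)   # probability that demand exceeds capacity
print(f"reliability index beta = {beta:.2f}, probability of failure ~ {pf:.2e}")
```

For nonlinear limit states or non-normal variables, FORM searches for the design point numerically and SORM adds the second-order surface correction, as described in the text.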

The Bathtub Hazard Rate Curve
The bathtub hazard rate curve is used to describe failure rates for many engineering components. This curve is a result of three types of failures: (1) quality, (2) stress-related, and (3) wear out. The bathtub curve can be divided into three distinct parts: (1) burn-in period, (2) useful life period, and (3) wear out period. Failures during the burn-in period are typically due to poor quality control, inadequate manufacturing methods, poor processes, human error, substandard materials and workmanship, and inadequate debugging.

During the useful life period, the hazard rate remains constant and there are various reasons for failure to occur: undetectable defects, low safety factors, higher random stress than expected, abuse, human errors, natural failures, explainable causes, etc.

During the wear out period, the hazard rate increases, causing “wear out region” failures. These failures include wear due to aging, corrosion and creep, short design-in life of the item under consideration, poor maintenance, wear due to friction, and incorrect overhaul practices.

Our goals still include developing a flow chart and relationship diagrams for the turbine blade manufacturing process in support of jet-engine fail-safe operations, while keeping in mind desirable properties of the blade materials, which may include high ultimate tensile and yield strength, high temperature resistance, high fatigue and crack growth resistance, high creep resistance, and low weight.

After the research is complete, the impact of uncertainties in the manufacturing processes will be quantified as an aid to turbine blade manufacturers in support of high-performance, robust turbine blade manufacturing.

References
1. Klutke, Georgia-Ann. A Critical Look at the Bathtub Curve. Vol. 52, No. 1, 2003.
2. Dhillon, B. S. Design Reliability: Fundamentals and Applications. CRC Press LLC, 1999. pp. 83-84.
3. Kristoff, Susan. “Gas Turbine Blade Metallurgy and Fabrication.” Designing Turbine Components for the Stresses of Operation (2009): n. pag. Web. 7 Apr 2011.
4. Verderaime, V. “Illustrated Structural Application of Universal First-Order Reliability Method.” NASA Technical Paper 3501 (1994).
5. Zhao, Yan-Gang. “A general procedure for first/second-order reliability.”

What’s the “Scope” About Gyroscopes?

Student Researcher: Leah R. Mendenhall

Advisor: Dr. Cathy Mowrer

Marietta College Department of Education

Abstract My project, and the concept of gyroscopes in general, is based on the discoveries of Sir Isaac Newton. The project focuses specifically on Newton’s First Law of Motion and the force of gravity. Students will use gyroscopes and their unique abilities to explore and examine these topics. The lesson is designed to allow students to see and feel firsthand how forces are at play in the world we live in. Students will also take a look at how people use gyroscopic technology in everyday objects, and how NASA uses gyroscopic technology as well.

Project Objectives  Given instruction, the student will be able to explain and see the effect of the forces acting on a gyroscope, in motion and out of motion.

 Given instruction and the opportunity to see and interact with demonstrations, the student will be able to explain Newton’s First Law and how it affects a bicycle tire in and out of motion.

Project Objective Alignment Grades 3 – 5 Physical Science Benchmark C: Describe the forces that directly affect objects and their motion.

Grade Three Physical Science: Forces and Motion indicators #3 and 4.

Project Objective Discussion The most important information that students learn and understand from the lesson is that there are forces all around that affect objects in certain ways. As long as students are able to describe the forces that are at play on a gyroscope in and out of motion, I will feel that the project was successful. It was designed to allow students to learn these ideas through actually seeing and feeling the forces at play. The objectives listed align well with this idea, and stem directly from the Ohio Content Standards that are also listed.

Suggested Materials The materials list is left largely to the instructor. I suggest that as much of the lesson be as hands-on as possible; this allows students to become involved in the lesson in a way that lends itself to a better, more memorable experience. However, thanks to technology, many videos are available online that would serve as supplements for the lesson. A gyroscope is definitely needed for demonstrations during this project, since students need to see firsthand what gyroscopes are capable of doing. The decision to assemble a suspended wheel and to have other gyroscopic manipulatives, such as tops and Frisbees, is left to the instructor.

Lesson This lesson centers on gyroscopes. Through hands-on demonstrations and instruction, students will be able to actually see, feel, and understand the forces that act on gyroscopes and, as a result, how those forces apply in other areas of everyday life. This lesson is not one that students can do alone. The instructor and students

must remain working and learning together throughout the entire lesson in order for the concepts to be fully conveyed. Knowing this, the first step is to assemble the What’s the “SCOPE” About Gyroscopes? books that will aid students and teachers throughout the lesson. The book leads students and teacher through each step of the lesson; each page either has a topic for discussion or poses a question. For example, page two of the booklet explains gyroscopes and the specific components that make them up, while page seven poses a question about a still gyroscope. Instructors have latitude to delve deeper into the information on a specific page or not. At the end of the lesson, students should be able to meet the objectives listed above.

Methodology Used Several different methodologies play a role in this lesson. The first is the interactive lecture, which involves lecturing by the instructor but also allows breaks for student interaction and hands-on experiences. The What’s the “SCOPE” About Gyroscopes? booklet is designed to allow and promote interactive discussion. Each page contains either information about a specific subject pertinent to gyroscopes or a question related to gyroscopes; the pages that pose questions include activities for students to try and places for them to make predictions and write results. Instructors have free rein to go into depth on the topics they wish to and to provide the resources that they are able.

Other methodologies involved in the lesson include group discussions, individual or group research, and reading. Group discussions about the information in the What’s the “SCOPE” About Gyroscopes? booklets will help students understand and retain the information more quickly and easily. In the individual or group research, students actually attempt the small experiments in the booklets, either alone or with a group, depending on the resources available. The reading is that of the booklet itself and/or the supplemental materials that the instructor chooses to provide.

Results Obtained Because circumstances did not allow me the opportunity to try my lesson in a classroom, I am not able to present results about how it was received or how effective it was. I was able to show it to a fifth grader, who was nothing less than fascinated by the different abilities of gyroscopes and the suspended bicycle wheel. She was very curious about how it worked, and once I explained what the gyroscope and suspended wheel were doing, she could describe the forces acting on each manipulative.

Significance and Interpretation of Results As stated above, I was not able to implement my lesson in a classroom. However, I am confident that students in the third, fourth, and/or fifth grades would be interested in learning how gyroscopes work, and would be thrilled at the opportunity to play with them themselves. My hope is that students would be learning without even knowing they were.

Acknowledgments and References The first acknowledgement that I would like to make is to my advisor, Dr. Cathy Mowrer. She has been a great teacher and role model for me, and I am and will be always grateful for everything that she has given me. I would also like to acknowledge my father for first introducing me to the concept of gyroscopes at an early age. I was and still am fascinated by their gravity defying abilities. I must also say THANK-YOU to Google Images for the use of their vast resources. And lastly, I must acknowledge the NASA/Ohio Space Grant Consortium for this amazing opportunity.

Creation of a Super-Hydrophobic Surface on Stainless Steel Using Fluorocarbon Based Organosilane Coatings

Student Researcher: Tanya L. Miracle

Advisor: Dr. Bi-min Newby

The University of Akron Department of Chemical and Biomolecular Engineering

Abstract The purpose of this study is to create a super-hydrophobic surface on stainless steel using a fluorocarbon-based organosilane coating. Stainless steel is a very desirable material because of its physical properties: it has strength, corrosion resistance, low bacterial attachment, and is hydrophobic in nature. All of these properties make it a widely used material in hospitals, construction, and many industrial applications. Creation of a super-hydrophobic surface on stainless steel (contact angles above 150°), on which water almost completely beads up, could make it an even more desirable material. Not only would it be useful for ease of cleaning, but possible corrosion prevention implications would make it invaluable. This study first looks into the creation and reproducibility factors involved with a super-hydrophobic surface on stainless steel using (heptadecafluoro-1,1,2,2-tetrahydrodecyl)trichlorosilane (FTS). Possible factors that would affect its creation and reproducibility are examined in detail, with separate experiments used to prove or disprove their contribution. After reproducibility was established, the study then moved on to corrosion testing of the modified stainless steel surface in a temperature-elevated saltwater environment. During the corrosion testing, it was found that the FTS-modified stainless steel resisted pitting corrosion, while non-modified samples began to corrode as early as four weeks into the study.

Project Objectives There are four main project objectives. The first, evaluating and assessing the main factor contributing to the creation of super-hydrophobic surface characteristics on the stainless steel, was investigated by changing various parameters during modification. Once that objective was achieved, surface property changes on the stainless steel were monitored and evaluated over a period of time. Along with this objective, the stability of the organosilane layer in various environments was investigated. The final objective was to assess the protection of the stainless steel against pitting corrosion in a saltwater environment.

Methodology Used Materials (Heptadecafluoro-1,1,2,2-tetra-hydrodecyl)trichlorosilane was purchased from Gelest (catalog# SIH5841.0). This molecule was used in its purchased state; no alteration was made to the molecule before deposition. HPLC hexane (less than 0.01% water content) was used as solvent in order to minimize water content. The substrate used, 304 stainless steel, was purchased from McMaster-Carr Corporation. This is a multi-purpose stainless steel containing about 18% chromium, 8% nickel, and the balance (74%) iron which would be used in industrial applications and meets ASTM A666 standards.

Procedure A solution deposition method was used for the modification of the stainless steel by the organosilane. A 200:1 by weight mixture of solvent to organosilane was prepared in a small sample bottle and sealed. The stainless steel was then cut into approximately 1 cm x 5 cm coupons. These coupons were then cleaned (degreased) using first ethanol, followed by acetone, and finished in deionized water; each cleaning step included two minutes of sonication. Nitrogen gas was used to dry samples before further steps were taken. The samples were then oxidized using a Jelight Model 42 UV/ozone oxidation chamber for six minutes. The coupons were immediately removed and placed in a glass Petri dish. The prepared solution of FTS was added and the samples were allowed to soak in the covered Petri dish for approximately one hour. They were then removed and allowed to air dry. Contact angle measurements

were immediately taken. The modified stainless steel coupons were allowed to age in ambient surroundings (20°C and approximately 60% relative humidity) for one week before corrosion studies were conducted. All aging was followed with contact angle measurements. Measurements were taken of steady angles, advancing angles, and receding angles so that surface characterization could be done. Methylene iodide was also used to obtain contact angle data; this information was needed to obtain surface energy values.
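The surface-energy values reported later come from the water and methylene iodide contact angles. The report does not state which two-liquid model was used, so the sketch below assumes the common Owens-Wendt geometric-mean approach, with commonly quoted literature values for the probe liquids and placeholder contact angles rather than measured data.

```python
import numpy as np

# Owens-Wendt (geometric mean) estimate of solid surface energy from two probe
# liquids. For each liquid: gamma_L*(1+cos(theta))/2 = sqrt(gd_S*gd_L) + sqrt(gp_S*gp_L).
# Illustrative sketch only; not necessarily the model used in this study.

# (total, dispersive, polar) components in mJ/m^2 -- commonly quoted literature values
water = (72.8, 21.8, 51.0)
methylene_iodide = (50.8, 50.8, 0.0)

def solid_surface_energy(theta_water_deg, theta_mi_deg):
    liquids = [(water, theta_water_deg), (methylene_iodide, theta_mi_deg)]
    A, b = [], []
    for (g_tot, g_d, g_p), theta in liquids:
        A.append([np.sqrt(g_d), np.sqrt(g_p)])
        b.append(g_tot * (1 + np.cos(np.radians(theta))) / 2)
    x, y = np.linalg.solve(np.array(A), np.array(b))  # x = sqrt(gd_S), y = sqrt(gp_S)
    return x**2 + y**2                                # total solid surface energy

# Example (placeholder) contact angles in degrees for water and methylene iodide
print(f"estimated surface energy ~ {solid_surface_energy(120.0, 95.0):.1f} mJ/m^2")
```

Higher water and methylene iodide contact angles drive the estimated surface energy down, which is the trend reported for the aged FTS coatings later in this report.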

Corrosion Testing To evaluate the protection of the FTS modified stainless steel against corrosion, salt water studies were used. Three samples were placed into each testing jar, one modified with FTS coating, one control of simple stainless steel, and one of oxidized stainless steel. Separation of samples was achieved by using polydimethylsiloxane (PDMS) in the lid. The samples were then pushed into the PDMS to maintain separation. This allowed the testing to focus on pitting corrosion as opposed to crevice corrosion which could occur if two samples were in contact with each other in a corrosive environment. Contact angles, obtained with a Ramé-Hart goniometer and One Touch Capture software, were followed over the course of the corrosion study. Optical microscope pictures were also used to evaluate surface changes. The corrosion study was conducted over a six week period and changes were tracked over this time.

Results Obtained
Creation of the Super-hydrophobic Surface As discussed in the introduction, super-hydrophobic surfaces are in demand for many applications. Creation of these surfaces is beset with problems, reproducibility being the largest of the problems seen [16-17]. Reproducibility was, therefore, the first obstacle that needed to be overcome. Since the creation of the original super-hydrophobic surface was unexpected, the variables behind why the surface became super-hydrophobic over time had to be taken into account. Separate experiments were run on the FTS-modified stainless steel coupons in order to find the main contributing factor to this surface creation. Variables that were manipulated one at a time included temperature, humidity levels, and concentration.

Temperature was changed in two separate experiments. In the first, the stainless steel coupon was modified using a 600:1 solvent-to-FTS ratio. The coupon was then removed and heated on a hot plate to about 75°C for about 20 minutes. A color change was noticed and the coupon took on a deep red tone. Contact angle measurements showed no large change over time, with the average remaining near 90°. The second temperature manipulation was made during the modification itself: the samples in the solution were heated to approximately 60°C, on the reasoning that the added energy would allow the molecules to react faster with the substrate. Contact angle measurements were again followed and no large change was detected.

Humidity levels were next considered as a possible avenue of re-creation and explanation. Since organosilanes react quickly with water in a hydrolysis reaction, humidity levels were suspected to be the cause. Two separate experiments were again conducted on the coupons to determine whether humidity was an influencing factor. If hydrolysis needed to occur to create the super-hydrophobic surface, water would need to be present either during the reaction or shortly after to promote it. Initially, small amounts of water were introduced into the FTS/HPLC hexane solution in an effort to reach maximum solubility in the hexane itself. The stainless steel coupons were then modified in this solution. Contact angles were followed and remained lower (about 80° on average). In the second humidity experiment, stainless steel coupons were modified with the 600:1 ratio FTS solution. After modification, the coupons were immediately placed in a humidified chamber. Humid air was bubbled into the chamber on a consistent basis over a two-week period. Humidity levels were tracked and remained at the highest reading of 100%; this could also be seen by the condensation of water on the sides of the chamber, indicating that the air was saturated with water vapor. The coupons were placed in a polyethylene Petri dish with many small holes punched in the top, which allowed the humid air to enter but kept the pieces from sitting directly in the water that gathered on the sides and bottom of the chamber. The coupons were periodically removed and contact angles were taken. After the two-week period, only very small changes in the angles were seen (approximately 5°). This indicated that the humidity levels in the air had no real bearing on the

creation of the super-hydrophobic surface. Humidity was then eliminated as the major contributing factor.

Since neither humidity nor temperature had a large effect on the change in contact angle over time, concentration was considered next. Coupons were modified with two separate concentrations of FTS solution. The first solution was the 600:1 ratio solution used in all previous experiments; these coupons were allowed to sit in ambient conditions and contact angles were measured over the course of about one month. A second solution with a 200:1 ratio was then used to modify separate stainless steel coupons. After modification, the stainless steel coupons were evaluated using contact angle measurements, which were followed and plotted over time. A trend was quickly identified and can be seen in Figure 1. This trend shows the evolution of the super-hydrophobic surface over about a one-month period. Since this trend was not seen until the concentration was changed, a direct correlation can be made between the concentration and the creation of the super-hydrophobic surface.

Aging time            Avg. contact angle
Day of modification   107°
Day 2                 120°
Day 4                 117°
Day 7                 133°
Day 12                129°
Day 20                141°
Day 27                150°
Day 41                141°

Figure 1. Changes seen in the contact angles (ambient temperature and humidity exposure) are readily visible even before formal measurements are made.

With concentration identified as the contributing factor, a reasonable conclusion is that the organosilane molecules on the surface are rearranging toward a lower-energy equilibrium. As seen in Figure 2, the graph of contact angle versus time supports this conclusion: as the molecules rearrange, the initial change is steep, but as they reach their equilibrium point the change flattens out. Surface energies calculated from the contact angle measurements of both water drops and methylene iodide drops dropped from 29.2 mJ/m2 (measured at 7 days aged) to 13.5 mJ/m2 (measured at 27 days aged). This demonstrates the rearrangement toward a more ordered, lower-surface-energy configuration of the molecules, which again supports the increasing contact angle seen in the graph. Lower surface energies are directly linked to more homogeneous, hydrophobic surfaces, indicating that the molecules have rearranged to a more consistent, even distribution across the stainless steel surface [4-7].

Figure 2. Changes in the contact angle on the FTS modified stainless steel coupons exhibit a positive power relationship with contact angles increasing with time.
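The power-law trend described in the Figure 2 caption can be checked against the averaged angles tabulated in Figure 1 with a simple log-log fit. This is my own illustrative fit, not the author's analysis; the day-of-modification point is treated as day 1, and the day-41 value, which dips after the ~150° peak, is left out.

```python
import numpy as np

# Fit theta(t) = a * t**b to the averaged contact angles from Figure 1.
# "Day of modification" is taken as day 1 (an assumption); day 41 omitted.
days = np.array([1, 2, 4, 7, 12, 20, 27], dtype=float)
angles = np.array([107, 120, 117, 133, 129, 141, 150], dtype=float)

b, log_a = np.polyfit(np.log(days), np.log(angles), 1)   # linear fit in log-log space
a = np.exp(log_a)
print(f"theta(t) ~ {a:.1f} * t^{b:.3f}")   # small positive exponent => slow power-law rise
```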

Corrosion Testing Corrosion testing was conducted over a six-week period in which the stainless steel coupons were constantly exposed to a saltwater environment at 60°C. Initially, no changes were seen in any of the coupons, and after three weeks in the saltwater solution there were still no changes on the FTS-modified stainless steel coupons. The oxidized coupons and the simply cleaned coupons, however, began to show color changes under low optical microscope magnification (4X), as seen in Figure 4. This color change is indicative of corrosive changes [9-14].

[Optical micrographs, panels A, B, and C; scale bars: 10 μm]

Figure 4. Here optical microscope pictures (4X) of the stainless steel-oxidized coupon (A) and stainless steel-cleaned only (B) coupon both show color changes, while the FTS modified coupon (C) remains unchanged.

After only four weeks in the saltwater solution, pitting is seen on both the simply cleaned stainless steel and the oxidized stainless steel coupons. Again the FTS-modified coupons remain unchanged (Figure 5). The super-hydrophobic nature of the FTS-modified coupons was observed before the coupons were removed from the solution: coupons previously modified with the organosilane coating were covered in small bubbles not observed on either of the non-modified stainless steel pieces. Since these bubbles were seen only on the modified pieces, it can be speculated that the super-hydrophobic nature of these pieces acts as a protective barrier between the stainless steel and the saltwater. Since the modified surface is already proven super-hydrophobic by contact angle measurements, repulsion of the saltwater and attraction of trapped air and gases in the water seems a logical explanation for why this barrier creates a layer of protection around the stainless steel coupon.

[Optical micrographs after four weeks, panels A, B, and C; scale bars: 10 μm]

Figure 5. Oxidized stainless steel (A) and stainless steel-cleaned only (B) under low optical microscope magnification (4X) both begin to develop pitting corrosion after four weeks’ time, while FTS-modified coupons (C) show no signs of change under the same magnification.

Final evaluation of corrosion protection was done at six weeks. Very little change was seen in any of the samples between the four-week evaluation (Figure 5) and the six-week evaluation. Contact angles on FTS-treated stainless steel pieces remained close to 110°, while contact angle measurements on the oxidized stainless steel coupons and the simply cleaned stainless steel coupons were extremely low (< 5°). This indicates that the surfaces of these pieces have become hydrophilic, most likely because corrosive changes and pitting created a heterogeneous surface with high surface energy. Oxidation reactions occurring on the surface of the stainless steel also contribute to this hydrophilic nature.

Acknowledgments I would like to thank Dr. Bi-Min Zhang Newby for her advisement on this project, without the use of her lab, materials, and expertise this project would not have been possible.

I would also like to thank Dr. Joe Payer for his advisement on the project. Dr. Payer’s corrosion expertise gave me much insight as to how to set up testing. Further collaboration on future research is hoped for.

Expression of E2F2 during Conjugation in T. thermophila

Student Researcher: Michelle M. Mitchener

Advisor: Dr. Alicia E. Schaffner

Cedarville University Department of Science and Mathematics

Abstract A cell’s potential to proliferate, differentiate, and respond to its environment is based on the cell’s ability to alter gene expression at the level of transcription. Transcription is in part regulated by DNA-binding proteins called transcription factors. These factors bind various promoter/enhancer elements leading to the activation or repression of specific target genes. Transcription factors exert their effects by interacting with other proteins such as chromatin remodeling complexes and histone modification enzymes.

This study involves analyzing gene regulation at the transcriptional level in Tetrahymena thermophila. Scientists know very little about transcription factors in this organism and even less about chromatin remodeling and chromatin modifications. Transcription factor E2F2 has previously been shown to be upregulated during T. thermophila conjugation. Presently we are analyzing E2F2 levels before and during conjugation using Western analysis. Future studies will involve determining whether chromatin modifications, such as acetylation or methylation, are necessary for this process.

Project Objectives In an article entitled “Microarray Analyses of Gene Expression during the Tetrahymena thermophila Life Cycle,” published in 2009, Miao et al. found that the Tetrahymena gene TTHERM_00695710, encoding the homolog of the transcription factor E2F2/E2Fc, was expressed specifically during conjugation. They noted a high correlation of expression between E2F2 and Dp-2 in early conjugation and between E2F3 and Dp-2 in late conjugation. Thus we decided to explore protein expression of E2F2 in the various stages of conjugation in Tetrahymena thermophila.

Methodology Used
• Tetrahymena thermophila strains CU428.2 and CU427.4 were grown in 20% m/v proteose peptone solution, 0.1 mM FeCl3, at 28°C to cell densities of 2-3 x 10^5 cells/mL.
• Tetrahymena were starved in 10 mM Tris (pH 7.4) solution for 24 hours at 28°C. 5 mL of cell culture were removed at time t = 0, 3, 6, 9, 12, 15, 22, and 26 hours, centrifuged into pellets, and frozen.
• After 18-22 hours of starvation, 50 mL volumes of each strain were combined. 10 mL of cell culture were removed at time t = 0, 1.5, 3, 4.5, 6.5, and 7.5 hours, centrifuged into pellets, and frozen. Successful conjugation was observed at t = 1.5 h.
• Cells were prepared for SDS-PAGE analysis in three ways:
• Whole cell extracts were resuspended in 200 µL of 1x SDS buffer (5% β-mercaptoethanol, 10% glycerol, 2% SDS, 60 mM Tris-HCl (pH 6.8)).
• Macronuclei were isolated from whole cell extracts using a modification of the Gorovsky et al. protocol from “Isolation of micro- and macronuclei of Tetrahymena pyriformis” (1975).
• Some macronuclei were also lysed with 1% Triton X-100 in physiological buffer.
• Western Blot Analysis
• SDS-PAGE followed by incubation with antibodies against E2F2 and TBP (control).

Results Obtained Thus far we have successfully grown and mated cultures of Tetrahymena thermophila. The picture below shows T. thermophila four hours into conjugation.


Figure 1. T. thermophila four hours into conjugation

Preliminary Western blot data (not pictured) suggests successful isolation of proteins from the macronucleus of conjugating Tetrahymena and potential expression of E2F2. Presently, we are working to eliminate some nonspecific binding of our antibodies to obtain a better idea of the proteins being expressed during the various stages of conjugation.

Acknowledgments I would like to thank my advisor, Dr. Alicia E. Schaffner, for her continued support throughout this research endeavor and for my peers in her research class who helped me in various ways. I would also like to thank Dr. Heather G. Kuruvilla for encouraging us to pursue studying this organism and providing us with several protocols. Finally, I am grateful to the Ohio Space Grant Consortium program for providing me the opportunity to carry out this study at Cedarville University.

Multispectral Sensing and Image Stabilization

Student Researcher: Nathaniel J. Morris

Advisor: Dr. Augustus Morris, Jr.

Central State University Department of Manufacturing Engineering

Abstract Remote sensing is the best technique for determining the effects of fire on vegetation and forests. The image sensor for this remote sensing should be at an altitude of 36 km to capture a sufficient amount of affected area. The payload will be mounted on the High Altitude Student Payload (HASP) and must conform to HASP’s interface requirements and regulations. As HASP ascends to 36 km above sea level, the payload and its internal components must survive extremely low temperatures and near-vacuum conditions. Since HASP is lifted by a small-volume zero-pressure balloon, the HASP platform is subjected to unpredictable rotations and tilting along its axes. This unpredictable movement will be compensated for by the electronics inside the payload to guarantee an effective, high-quality remote sensing experiment.

Project Objectives The scope of the project is to develop a payload that fits within the HASP interface requirements and regulations while performing effective remote sensing of fire-affected areas. The remote sensing will determine how “high-intensity fires” affect the health of vegetation and the restoration of forests. In order to gather data at low distortion and 30 m resolution, a stabilization platform is needed. The stabilization platform will be implemented in the small student payload mounted on HASP to control the image sensor’s orientation; it will have 3 degrees of freedom (DOF) and be controlled by servo adjustments. For each image taken of the fire-affected area, the orientation and the geographical center of the image must be recorded during the flight and extracted for use in a geospatial analysis.

Methodology Used The two areas of concern in this project are the remote sensing and the 3 DOF image sensor platform. In order to successfully collect images of fire-affected areas, the platform on which the image sensor is mounted is required to self-adjust for any rotation and axis tilt that the payload experiences. Therefore, there is a need for an orientation sensor and a rotation sensor: the orientation sensor is a 2-axis accelerometer and the rotation sensor is a digital compass, both well suited to this remote sensing application. Each axis of the 3 DOF platform is controlled by an individual servo. The axes are defined by where the 2-axis accelerometer and the compass sensor are mounted on the 3 DOF platform. In this case, the 2-axis accelerometer is mounted on the XY platform that controls the overall tilt, and the digital compass is isolated on a small rotating platform that extends from the XY platform on which the image sensor is mounted. The XY platform is controlled by two servos that pivot the platform about a ball joint, and the small rotating platform is controlled by one servo attached to the XY platform. In this way the 3 DOF platform is fully compensated along all three axes.

The servos, sensors, and payload health are interfaced with the electrical controls within the payload. Specifically, the servos are controlled by a PIC microcontroller that uses the sensors as input data. The results of the PIC microcontroller are outputs that send positional commands to the servos. When the sensors fall within a tolerance that minimizes the distortion in the remote sensing data, the PIC controller sends a command to the image sensor to capture the data.
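The control behavior described above reduces to a read-correct-capture loop. The following is a minimal sketch of that logic written in Python for readability only; the actual flight code runs on a PIC microcontroller, and the sensor reads, servo interface, tolerances, and capture trigger shown here are hypothetical placeholders rather than the project's real interfaces.

```python
import time

TILT_TOLERANCE_DEG = 0.5       # assumed tolerance for acceptably low image distortion
HEADING_TOLERANCE_DEG = 1.0    # assumed

def read_accelerometer():
    # Placeholder: would return (tilt_x, tilt_y) in degrees from the 2-axis accelerometer.
    return 0.0, 0.0

def read_compass():
    # Placeholder: would return the payload heading in degrees from the digital compass.
    return 0.0

def command_servo(axis, correction_deg):
    # Placeholder: would move the named servo by the requested correction.
    pass

def trigger_capture():
    # Placeholder: would command the image sensor to capture a frame.
    pass

def stabilization_step(target_heading=0.0):
    tilt_x, tilt_y = read_accelerometer()
    heading_error = read_compass() - target_heading

    # Drive each axis back toward level and toward the target heading.
    command_servo("x", -tilt_x)
    command_servo("y", -tilt_y)
    command_servo("rotation", -heading_error)

    # Capture only once all three axes are within tolerance, mirroring the
    # "fall within a tolerance, then capture" behavior described in the text.
    if (abs(tilt_x) < TILT_TOLERANCE_DEG and
            abs(tilt_y) < TILT_TOLERANCE_DEG and
            abs(heading_error) < HEADING_TOLERANCE_DEG):
        trigger_capture()

# Bounded here for illustration; the flight loop would run continuously.
for _ in range(100):
    stabilization_step()
    time.sleep(0.05)   # ~20 Hz update rate (assumed)
```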

The remote sensing is the most important part of this project. The image sensor, at an altitude of 36 km, points directly down toward the Earth’s surface, capturing fire-affected areas. To verify healthy vegetation and the restoration of fire-affected areas, the remote sensing has to be done in specific wavelength bands: the visible and the near infrared. The wavelength range for the visible is 400 nm –

700 nm and for the near infrared is 750 nm – 1400 nm. These two wavelength bands were chosen because the reflectance of healthy vegetation varies dramatically from the visible to the near infrared. Variation in reflectance defines the reflectance profile for particular objects on the ground; the two wavelength bands therefore help differentiate vegetation from other objects, as depicted in Figure 1.

Figure 1. Reflectance with respect to wavelength

Since images are going to be collected after the fire has affected an area, there has to be a way to compare them against images of the same area from before the fire. One such source is the USGS satellite image database. The Landsat 5 satellite performs remote sensing at 30 m resolution in the visible and near infrared, so Landsat 5 images are a perfect source to compare with the payload’s remote sensing images. Once the images from both sources are obtained, the geospatial software ERDAS IMAGINE 2011 can be used to create overlays of where healthy vegetation is located after the fire and where it was located before the fire.
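One standard way to turn exactly these two bands into a per-pixel measure of vegetation health is a normalized difference index (NDVI). The report does not say which index is computed in ERDAS IMAGINE, so the following is only a hedged sketch, with small numpy arrays standing in for the red and near-infrared rasters.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and NIR reflectance."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + 1e-9)   # small epsilon avoids divide-by-zero

# Toy 2x2 "images": healthy vegetation reflects strongly in NIR and weakly in red.
red_band = np.array([[0.05, 0.06], [0.20, 0.25]])   # placeholder reflectances
nir_band = np.array([[0.45, 0.50], [0.22, 0.24]])

post_fire_ndvi = ndvi(red_band, nir_band)
print(post_fire_ndvi)            # ~0.8 = vigorous vegetation, ~0 = bare or burned ground
print(post_fire_ndvi > 0.4)      # simple "healthy vegetation" mask for an overlay
```

Differencing such a mask against one built from the pre-fire Landsat 5 scene is one way the before/after overlay described above could be produced.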

Results Obtained The only result that has currently been determined is the precision of the 3 DOF platform. The platform went through a performance test that resulted in an error of ± 0.05° along all three axes. This means that the 3 DOF platform introduces an error that is within the 30 m resolution needed by the image sensor. Since the 3 DOF platform will not affect the location of the payload image, the location of the image will match up well with the USGS comparison data.

References
1. Arnold, L., Gillet, S., Lardiere, P., Schneider, J., “A test for the search for life on extrasolar planets,” Astronomy & Astrophysics, September 2, 2002.
2. IPAC, “Near, Mid & Far Infrared,” http://www.ipac.caltech.edu/outreach/Edu/Regions/irregions.html. Accessed 05-01-2011.

Robotic Football Players

Student Researcher: Amy V. Murray

Advisor: Dr. Jed Marquart

Ohio Northern University Department of Electrical and Computer Engineering and Computer Science

Abstract The goal of this project was to design and build three robots capable of functioning as football players. A quarterback, a wide receiver, and a center were designed by a group of nine team members at Ohio Northern University. The University of Notre Dame started this event as a senior design project two years ago and has hosted the event each year. Notre Dame asked Ohio Northern to be a part of the event for the first time, and the robots created by both colleges will participate in a robot football game. Each football player, with the exception of the quarterback, must fit within a 16 x 16 x 24 inch box prior to each play. Once a play has been initiated, a player may extend arms, nets, or projections to aid in offense or defense. Each player is powered by a 24 V drive train, but separate batteries can be used to power the other functional parts. An accelerometer with an LED light is attached to the top of each player and is used to indicate a tackle: if a robot is hit with a force that exceeds the allowable tolerance of the accelerometer, the robot is considered tackled and shuts down for 2 seconds. PIC microcontrollers were used to operate the players and communicate between robot and controller. Each player must participate in the competition Combine, which is a series of capability tests to compare the players’ functionality.

Project Objectives The objective of this project was to create three robotic football players that can play in a robotic football game. The teams consist of eleven robots each, and the game will be played eight on eight. The students at Ohio Northern University (ONU) were responsible for creating a quarterback, a receiver, and a center that function together on one of the two teams at Notre Dame. The design requirements specified for this competition are stated in The Rules of Collegiate Mechatronic Football [1] and are outlined below [4]:

 Players must be DC powered with no more than a 24 volt circuit voltage.
 Players must have a kill switch mounted to their top surface.
 Players may weigh no more than 30 lbs.
 Players must incorporate a microprocessor in some form.
 Players must include a tip over/tackle sensor; the sensor will cause a two second power off (an illustrative sketch of this sensor logic follows the list).
 Players must have an LED light to indicate status; the LED will have a diffusion lens to allow visibility of the LED from any direction.
 A player's base plate must be made of solid HDPE no thinner than one half inch.
 No material is allowed beyond the perimeter of the base plate that impedes the ability of an opponent to contact a player’s base plate and thus block or tackle a player.
 The centerline of a player's base plate must be located 3.0 ± 0.1 inches above the playing surface and remain in that position at all times.
 All players, except the quarterback, must fit within a 16 inch square, 24 inch tall box at the beginning of any play.
 Tires must be mounted on rigid, solid wheels.
 Players must be readily identifiable from the sidelines as a member of their team and have visible numbers.
 Players can have no more than two extensible arms consisting only of rotational joints. Each arm may extend no more than 18 inches in any direction from the center of the joint at which it connects to the player. Arms may have cross sectional dimensions no greater than 3 inches, except for the terminus of the arm, considered to be its final 4 inches, which may have a cross sectional dimension no greater than 5 inches. Cross sectional dimensions of arms containing any flexible materials like fabric or netting are measured with the materials fully stretched.
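As an illustration only, the following minimal Python sketch shows the tip over/tackle rule described above: when the measured acceleration exceeds a threshold, the drive is shut off for two seconds and the status LED signals the tackle. The threshold value and the read/set helper routines are assumed placeholders; they do not reproduce the team's actual PIC firmware.

import time

TACKLE_THRESHOLD_G = 3.0   # assumed tolerance; the actual value is not given in the paper
SHUTDOWN_SECONDS = 2.0     # required two-second power off


def read_accel_g():
    """Placeholder for sampling the accelerometer (magnitude in g)."""
    return 0.0


def set_drive(enabled):
    """Placeholder for enabling or disabling the 24 V drive train."""
    print("drive", "on" if enabled else "off")


def set_led(tackled):
    """Placeholder for the diffused status LED."""
    print("LED", "tackled" if tackled else "ok")


def monitor_once():
    """One pass of the tackle-monitor loop."""
    if read_accel_g() > TACKLE_THRESHOLD_G:
        set_led(True)
        set_drive(False)
        time.sleep(SHUTDOWN_SECONDS)
        set_drive(True)
        set_led(False)


if __name__ == "__main__":
    monitor_once()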

The functional capabilities of each of the players are defined in “The Engineering Design Requirements for Mechatronic Football Players and Performance Evaluation Tests Papers” [2], hereafter referred to as the Combine Rules. The capabilities of each of the players created by the ONU team are summarized below [3], [4]:

The quarterback must be:
 Capable of taking the ball from the center and executing a hand-off
 Capable of throwing a pass
 Capable of throwing the ball between 5-15 ft. with a precision of at least 60% and further than 15 ft. with a precision of at least 40%
 Capable of taking the ball from the center in less than 5 sec. every time (CAPB1-04)

The receiver must be:
 Capable of accepting the ball from a quarterback or quarterback/passer
 Capable of holding the ball
 Capable of receiving a pass from a quarterback or quarterback/passer

The center must be:
 Capable of maneuvering in order to block rushing defensive players
 Capable of rushing the line to tackle offensive players
 Capable of holding the ball
 Capable of interacting with the quarterback

In addition, each player design must adhere to the following to pass the tests explained in the Combine Rules:
 The team must be capable of removing and replacing the battery source in less than five minutes.
 Capable of maintaining an average speed of 10 ft/sec over a distance of 50 ft.
 Capable of maintaining a straight path with a maximum deviation of less than ± 2.5 ft.
 Capable of moving to a desired location 4 ft. away in less than 10 sec. and within a final position tolerance of ± 4.0 in.

Each player is expected to perform in a series of tests described in the Combine Rules. The tests include the Maintenance Test, Speed Test, Controllability Test, Positioning Test, Throwing Precision Test, Handoff Test, and Player Weight Test. Pictures for the tests can be seen in the Figures and Tables section of this document. Each test is described below [5]; a simple pass/fail check for the drive-related thresholds is sketched after the list:
 Maintenance Test: Team members must be able to remove and replace batteries in less than 5 minutes.
 Speed Test: A player must travel 50 feet at an average speed of 10 ft/sec; it may start any distance behind the starting line to reach top speed.
 Controllability Test: A player must travel 50 feet in less than 10 seconds with a maximum deviation of ± 2.5 feet, starting from a stopped position at the line.
 Positioning Test: A player must move from one square to another in less than 10 sec with a tolerance of ± 4.0 in.
 Throwing Precision Test: Throw a ball between 5-15 ft at 60% precision and over 15 ft at 40% precision.
 Handoff Test: The handoff between center, QB, and running back must be completed in less than 5 seconds with 100% reliability for 5 handoffs.
 Player Weight Test: A player cannot weigh more than 30 pounds, or it will not be allowed to participate in the game or any other Combine test.
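As referenced above, the following is a minimal Python sketch of pass/fail checks against the drive-related Combine thresholds (Speed, Controllability, and Positioning). Only the limits come from the Combine Rules summarized here; the run values in the example are hypothetical.

def passes_speed_test(distance_ft, elapsed_s):
    """Average speed over the 50 ft course must be at least 10 ft/sec."""
    return distance_ft / elapsed_s >= 10.0


def passes_controllability_test(elapsed_s, max_deviation_ft):
    """50 ft in under 10 s with deviation within +/- 2.5 ft."""
    return elapsed_s < 10.0 and abs(max_deviation_ft) <= 2.5


def passes_positioning_test(elapsed_s, final_error_in):
    """Reach the target square in under 10 s within +/- 4.0 in."""
    return elapsed_s < 10.0 and abs(final_error_in) <= 4.0


if __name__ == "__main__":
    # Hypothetical run data for one player.
    print("Speed:", passes_speed_test(50.0, 4.8))
    print("Controllability:", passes_controllability_test(9.1, 1.8))
    print("Positioning:", passes_positioning_test(7.5, 3.0))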

Design Process The design process was broken down into several steps. First, the team performed preliminary testing on ideas for each design, and a design decision was made based on those tests. A detailed design was modeled in SolidWorks, a 3-D modeling program, and the detailed drawings were used to construct pieces out of quarter-inch HDPE (high density polyethylene) using a jigsaw. The pieces were bolted together using angle brackets. After construction, each of the players was tested using the tests described in the Combine Rules.

The base unit for the players was designed to be compatible with each of the robots. It consisted of a square half-inch HDPE piece. The wheels were centered on two sides to provide maximum maneuverability, and casters were used to prevent tipping. The microprocessor boards and the main 12 V battery were placed in the base as well. A picture of the final design of the base can be seen in Figure 1.

There were three preliminary designs considered for the quarterback: a pitching machine, a trebuchet, and an air cannon. Tests were conducted to measure the throwing accuracy and precision of each device. An estimate was made for the parameters, and a decision matrix was then used to determine the best possible configuration. The group concluded from the results of the decision matrix and testing that the optimal design for the quarterback was the football pitching machine. By tilting one of the pitching wheels, the ball is given a spiral similar to an actual football throw, which increases the accuracy and precision of the throw. To implement this passing system, the quarterback required two additional DC motors and a common motor controller for the throwing wheels, plus a DC motor and motor controller for the positioning rod. Both systems are powered by the 12 V system used by the drive train. The throwing system is capable of operating at three speeds and is controlled by three separate buttons on the remote control. A fourth button activates the positioning rod, and touch switches signal the microprocessor when to retract it. Both systems are controlled by the PIC microcontroller. The final design of the quarterback can be seen in Figure 2.

Three design configurations were considered for the receiver: the arcade game, the butterfly net, and the carnival tent. Each configuration was tested by throwing the ball with the quarterback pitching machine and measuring its ability to catch the ball. A decision matrix was used to determine the best configuration. The final design of the receiver was a combination of these configurations. The receiver consists of two rotating arms that extend to hit the ball into a basket at the base of the robot. The receiver system was implemented by using two servo motors and gears to extend the arms. The system operates in either the open or closed state using only one extra button on the controller. At the beginning of play the arms are in the closed position and then extend to increase the area to catch the ball. The final design of the receiver can be seen in Figure 3.

The center was designed to be compatible with the quarterback, which was designed first. The center consists of a horizontally rotating arm and a claw to grip the ball. The ball is picked up off the top of the center, rotated, and dropped into the basket on the quarterback, where it is pushed through the wheels by the positioning rod. The final design of the center can be seen in Figure 4. The arm rotates using a vertical shaft that is turned by a gear connected to a servo motor. The angle of the arm rotation is constant and requires only one control button to travel from the starting position to the drop position. The gripper is operated with a servo motor and controlled by a separate button.

Combine Testing Results Each player was tested using the tests described in the Combine Rules. The results are shown below, and the expected score for the Combine was calculated; each test was weighted differently for the different players. Based on the testing results, the quarterback is expected to receive a score of 9.3, the receiver a score of 9.4, and the center a score of 9.1 out of 10 possible points. Based on these results, it is expected that the ONU robots will perform well at the Combine event and in the game that follows.
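Since each Combine score is a weighted combination of test results, a minimal Python sketch of such a weighted-average calculation is shown below. The per-test scores and weights are hypothetical placeholders; the paper reports only the expected totals (for example, 9.3 out of 10 for the quarterback), not the underlying weighting.

def combine_score(test_scores, weights):
    """Weighted average of per-test scores, each on a 0-10 scale."""
    total_weight = sum(weights.values())
    return sum(test_scores[t] * w for t, w in weights.items()) / total_weight


if __name__ == "__main__":
    # Hypothetical quarterback scores and weights for illustration only.
    qb_scores = {"speed": 9.0, "controllability": 9.5, "positioning": 9.0,
                 "throwing": 9.5, "handoff": 9.5}
    qb_weights = {"speed": 1, "controllability": 1, "positioning": 1,
                  "throwing": 2, "handoff": 2}
    print(round(combine_score(qb_scores, qb_weights), 1))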

Figures and Tables

Figure 1. Base Unit Design
Figure 2. Quarterback Final Design

Figure 3. Final Receiver Design
Figure 4. Final Center Design

Acknowledgments The author of this paper would like to thank the other ONU team members for all their hard work and Dr. Sami Khorbotly and Dr. John-David Yoder for advising the team.

References
1. University of Notre Dame, The Rules of Collegiate Mechatronic Football, Version 15, March 2010.
2. University of Notre Dame, Department of Aerospace and Mechanical Engineering, Engineering Design Requirements for Mechatronic Football Players and Performance Evaluation Tests Papers, Version 11, January 2010.
3. Robotic Football Project – Capabilities and Requirements, PA4: 23 MAR 2011.
4. Robotic Football Project – Design Proposal for Review, Rev 1: 23 MAR 2011.
5. Robotic Football Project – Progress Report, Rev 1: 23 MAR 2011.

253 Nuclear Fission Power versus Nuclear Fusion Power

Student Researcher: Susan M. Newsom

Advisor: Dr. James Bighouse

Terra State Community College Department of Nuclear Power Technology

Abstract The Atomic-Age idea of a nuclear-powered car has been around since Ford developed a concept car in 1960 called the Ford Nucleon. Because nuclear power produces very low carbon emissions relative to other sources, it is considered a possible source of reliable power for the automotive industry. At present, uranium is the most productive alternative energy source. The safety concerns of the reactors, the danger of uranium/plutonium falling into the wrong hands, and the disposal of waste, weighed against the benefits of clean power and the reduction of the green footprint by saving natural resources, are my areas of interest. Possibilities include: 1) nuclear-fueled hydrogen may be harvested to create clean, safe, affordable hydrogen fuel; 2) nuclear reactors could power stations where motorists charge highly efficient batteries; 3) miniature nuclear reactors could replace the engine and would only need to be refueled every three to five years. Small research reactors have been used to power satellites, so information is already available, and with some concept work these may be able to be converted or adapted to vehicle use. Nuclear fission occurs as the atom splits and releases energy and two or three neutrons. Each of these neutrons can cause another nuclear fission. Since energy is released in every atomic fission, chain reactions provide a steady supply of energy. Nuclear fusion is the process of uniting the nuclei of two light elements to form one heavier nucleus. The mass difference is liberated in the form of energy. Controlled nuclear fusion is difficult because high temperatures are needed for initiation.

In comparison, in nuclear fission about 0.1% of the mass is converted to energy, while in nuclear fusion about 0.5% of the mass may be converted to energy.

Using E = mc², the amount of energy liberated can be calculated when the mass loss is known. The energy equivalent of this amount of mass, expressed in joules, is impressive. Adapting the concept of nuclear fission on a small scale to be usable for everyday consumption at the consumer level would be a huge global benefit.
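As a worked illustration of the E = mc² relation and the mass-conversion fractions quoted above, the short Python sketch below computes the energy liberated when a given mass fraction is converted. The 1 kg of fuel is an arbitrary illustrative quantity, not a figure from the paper.

C = 2.998e8  # speed of light, m/s


def energy_from_mass_loss(mass_kg, fraction_converted):
    """Return the energy in joules liberated by converting a fraction of the mass."""
    return mass_kg * fraction_converted * C**2


if __name__ == "__main__":
    fuel_mass = 1.0  # kg, illustrative
    print("Fission (~0.1%):", energy_from_mass_loss(fuel_mass, 0.001), "J")
    print("Fusion  (~0.5%):", energy_from_mass_loss(fuel_mass, 0.005), "J")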

Project Objectives My objective is to research the possibilities of small-scale nuclear reactors as an alternative fuel source for transportation.

Methodology Used I used information from various textbooks to understand the basics of nuclear fission/fusion power. I then refined my research to current applications and advancements, especially self-sustaining fusion reactions. To investigate the options for a nuclear-powered engine, I took a logical approach and reviewed the feasibility of a nuclear car operating in the public arena. My conclusions were based on the compilation of this data from the perspectives of both an engineer and a consumer. Additional research in these same target areas is ongoing.

Results Obtained The benefits of a nuclear-powered car are substantial. Rarely would it need to be refueled; it would have almost no emissions when adequately shielded; and it would always be on, due to the constant energy from the mini-plant. The major concern is shielding: the amount required would make the car practically immobile from the weight. Environmental safety concerns resulting from car collisions and the disposal of spent fuel would also be extensive. Even though research into controlled fusion has been going on for the past 50 years, no self-sustaining controlled fusion reactions have been successful. Scientists from the Los Alamos National Laboratory have created a molecule known as uranium nitride. The new molecule contains depleted uranium, which is relatively harmless from a radiological standpoint. This new molecule may prove to be another avenue to explore.

Significance and Interpretation of Results Recent global events lead to the conclusion that the general public has a greater fear of nuclear power than anticipated. The mistrust, uncertainty, and lack of education surrounding the entire concept of anything nuclear-powered, from the news media to the layman, do not balance against the benefits and advancements in technology. When an average American is asked, “How does gasoline make a car run?”, the answer 90% of the time is the same: it just does; an “I BELIEVE” syndrome. The same acceptance does not apply to the world of fission/fusion power. Also, the rekindled interest in nuclear energy has driven up the price of uranium. The logistics and costs of such an endeavor may prove unreasonable.

References
1. Bland, Eric. “New Kind of Uranium Could Power Your Car.” Discovery News, 5 October 2010. Accessed 27 February 2011.
2. Hein, Pattison, Arena, and Best. Introduction to General, Organic & Biochemistry. Danvers, MA: John Wiley & Sons, Inc., 2009. pp. 469-487.
3. NEI. 2010. Accessed 2 March 2011.
4. Panoptik. Wikipedia. 3 August 2007. Accessed 20 February 2001.
5. Silverman, Jason. “Can a car run on nuclear power?” How Stuff Works, 16 June 2008. Accessed 17 March 2011.

255 Effect of Ink Formulation and Sintering Temperature on the Microstructure of Aerosol Jet® Printed YSZ Electrolyte for Solid Oxide Fuel Cells

Student Researcher: Loc P. Nguyen

Advisors: Dr. Mary A. Sukeshini, Michael Rottmayer, and Dr. Thomas L. Reitz

Wright State University Department of Industrial/Systems Engineering

Abstract Solid oxide fuel cells (SOFCs) have gained attention as a promising technology with wide applications in both stationary (power plant) and transportation settings. The Aerosol Jet® printing (AJP) method of fabricating the components of a SOFC has the advantages of maskless deposition of patterned layers and high reproducibility of layer thickness and microstructure. These features are lacking in traditional methods of fabrication such as screen printing and spray coating. The aim of this study is to evaluate the impact of new ink formulations and sintering temperature on the microstructure of yttria-stabilized zirconia (YSZ) electrolyte layers. The YSZ ink will be formulated using a solvent system of terpineol/butanol along with dispersants and binders. Additionally, formulations containing new solvents and dispersants will be investigated. The optimized ink formulations will be deposited by the AJP method and the films subsequently sintered in air at 1200-1400 ºC. The resulting microstructure will be characterized via Scanning Electron Microscopy (SEM) and assessed for optimal grain growth and density.

Motivation and Objective As mentioned earlier, the SOFC is a promising technology with wide application in both stationary (power plant) and transportation settings. There are numerous additional benefits of a SOFC: high electric conversion efficiency; superior environmental performance; cogeneration of combined heat and power; fuel flexibility; size and siting flexibility; and both transportation and stationary applications. Because of these benefits, I was extremely interested in this area of SOFC research. The primary objective of this project was to evaluate the impact of three new ink formulations and sintering temperature on the microstructure of YSZ electrolyte layers. The following sections discuss the procedures used in this project.

Procedures An anode substrate was pressed using nickel oxide/yttria-stabilized zirconia (NiO/YSZ) powder. The powder was a mixture of 60 weight percent nickel oxide and 40 weight percent yttria-stabilized zirconia. Each substrate required approximately 2 grams of the mixture. The powder was loaded into the pressing machine and pressed for approximately 45 seconds at a pressure of 6,000 psi, released, and then pressed again at 15,000 psi for about 2 minutes and 30 seconds. Finally, the substrate was bisque fired at 950°C for about 1 hour.

The first YSZ ink was formulated using a solvent system of terpineol/butanol along with the dispersant Disperbyk D-111 and the YSZ powder. It also contained Polyalkylene Glycol (PAG) and Butyl Benzyl Phthalate (BBP) plasticizers and two binders, Polyvinyl Butyral (PVB) and Ethyl-Cellulose 3000. The second ink formulation was prepared using a different solvent system, Butyl Carbitol Acetate (BCA)/terpineol, and the PVB binder was removed from this formulation. In the third formulation, Solsperse 3000 was used in place of the Disperbyk D-111; everything else remained the same as in the second ink formulation.

Figure 1 shows the viscosity results for the ink formulations, including the three new formulations and an existing formulation (Butanol-Terpineol-Solsperse). The existing data are included to compare the behavior of that ink to the three new inks, since they have some similarities and differences in their solvent systems and dispersants. From the diagram, we can clearly see two different behaviors of the inks: Newtonian and shear-thinning. A fluid whose viscosity remains essentially constant as the shear rate changes exhibits Newtonian behavior, while a fluid whose viscosity decreases as the shear rate increases exhibits shear-thinning behavior.
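To make the Newtonian versus shear-thinning distinction concrete, the short Python sketch below fits a power-law (Ostwald-de Waele) model, viscosity = K * shear_rate**(n - 1), to viscosity-versus-shear-rate data; a flow-behavior index n close to 1 indicates Newtonian behavior and n < 1 indicates shear thinning. The data points are hypothetical placeholders, not the measured ink curves shown in Figure 1.

import numpy as np


def flow_behavior_index(shear_rate, viscosity):
    """Fit log(viscosity) = log(K) + (n - 1) * log(shear_rate); return n."""
    slope, _ = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
    return slope + 1.0


if __name__ == "__main__":
    shear = np.array([1.0, 10.0, 100.0, 1000.0])          # 1/s, hypothetical
    shear_thinning_ink = np.array([8.0, 3.2, 1.3, 0.5])   # Pa*s, hypothetical
    newtonian_ink = np.array([2.0, 2.0, 2.1, 1.9])        # Pa*s, hypothetical
    print("n (shear-thinning ink):", round(flow_behavior_index(shear, shear_thinning_ink), 2))
    print("n (Newtonian ink):", round(flow_behavior_index(shear, newtonian_ink), 2))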


Figure 1. Viscosity of Inks

Methodology The Aerosol Jet® printing (AJP) method was used to deposit the YSZ ink formulations on the NiO/YSZ substrates. Figure 2 below is a schematic of the AJP method. The compressed gas (N2) is expanded through the atomizer nozzle to produce a high-velocity jet. Due to the Bernoulli effect, the ink is drawn into the atomizer nozzle, and the high-velocity gas stream breaks the liquid stream into droplets and suspends them in the flow. The large droplets impact the sidewalls of the reservoir and drain back into it, while smaller particles remain suspended in the gas. A virtual impactor was used to reduce the gas flow while minimizing the amount of atomized material lost during the flow reduction. The atomized particles in the gas have sufficient forward momentum, as a result of the high-velocity jet, to continue along their original trajectory. Figure 3 shows the deposition head/nozzle of the AJP system. A sheath gas flow surrounds the jet to prevent contact between the jet and the nozzle wall. The focused aerosol is deposited on the substrates.

Figure 2. Aerosol Jet® Printing Method
Figure 3. Deposition Head/Nozzle

Summary and Observations Figure 4 shows the sintered cells at four different sintering temperatures. In this study, three YSZ ink formulations were used for the electrolyte, and twelve NiO/YSZ substrates were used to print the YSZ films. From Figure 4, we observed that the cells warped at 1300°C and 1350°C but not at 1250°C and 1400°C. The films were homogeneous at all four temperatures.

We were able to obtain Scanning Electron Microscopy (SEM) images only for the 1400°C sintered cells, due to time constraints. The SEM images are shown in Figure 5 below. From the images, we saw that none of the YSZ films were fully sintered. The films were crack free, but somewhat porous.

Figure 4. Sintered YSZ Films at Different Sintering Temperature

Figure 5. SEM Images

Acknowledgments First of all, I would like to express my deepest thanks to the NASA/Ohio Space Grant program for providing generous funding towards my education for the 2010–2011 school year. I also would like to thank Dr. Sukeshini, research faculty at Wright State University, who provided a great amount of help in this research study. Finally, I would like to thank Dr. Reitz (AFRL), Mr. Rottmayer (AFRL), and Mr. Jenkins (UES) for their cooperation on the project.

258 Evaluating Global Climate Conditions

Student Researcher: Rebecca G. Nyers

Advisor: Dr. Patricia Long

Cleveland State University Department of Middle Childhood Education - Science and Language Arts

Abstract In observance of World Meteorological Day, March 23, 2011, students will engage in learning activities related to this year's theme, "Climate for You". This project involves an in-depth study of global weather (short-term) and climate (long-term) patterns and the implications climate change will have for the future of life on Earth. Students will investigate local weather patterns and explore weather patterns found in other geographic regions of the world. In particular, weather patterns associated with specific climatic zones will be analyzed. Students will examine the variables of weather and climate, historical climate patterns, and current and future trends toward climate change. Students will engage in the Graphing S’COOL Data activity, which entails locating and analyzing data reports for temperature, pressure, relative humidity, and cloud type. Then they will create three scatterplots: Temperature vs. Pressure, Relative Humidity vs. Pressure, and Temperature and Relative Humidity vs. Pressure. Independently, students will analyze weather and climate data online at the Met Office education site, and in expert learning groups they will construct a climatogram for a region of their choice.

At the Youth Corner of the World Meteorological Organization (WMO) website, http://www.wmo.int/youth, students will learn about the goals of this organization and watch a series of four video clips about the WMO, El Niño and La Niña, the Greenhouse Effect, and the Hole in the Ozone Layer and its effect on ecosystems.

At the website Unite for Climate (http://uniteforclimate.org/about/introduction/connecting-classrooms/), students will learn how climate affects the lives of people and living things everywhere on Earth, what climate change is, how it is tracked, and its correlated effects. At http://www.wmo.int/youth/, students will visit the links on the left-hand side of the page: Introduction: Why does climate change matter, What is climate change, What is the Greenhouse Effect, What are the effects of climate change, and Ten key concepts. The NASA educator guide Investigating the Climate System: Global Awareness Tour will be used, as well as the online NASA resource Tropical Rainfall Measuring Mission (TRMM).
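For teachers who want a concrete starting point for the Graphing S’COOL Data activity described above, the minimal Python sketch below produces two of the scatterplots (Temperature vs. Pressure and Relative Humidity vs. Pressure). The observation values are made-up placeholders; real values would come from the students' S’COOL data reports.

import matplotlib.pyplot as plt

# Hypothetical observations standing in for student-collected S'COOL data.
pressure_hpa = [1012, 1008, 1004, 998, 995, 1001]
temperature_c = [18, 21, 23, 26, 27, 22]
relative_humidity = [65, 58, 52, 44, 40, 55]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.scatter(pressure_hpa, temperature_c)
ax1.set_xlabel("Pressure (hPa)")
ax1.set_ylabel("Temperature (°C)")
ax2.scatter(pressure_hpa, relative_humidity)
ax2.set_xlabel("Pressure (hPa)")
ax2.set_ylabel("Relative Humidity (%)")
fig.tight_layout()
plt.show()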

Project Objectives
 Distinguish between weather and climate
 Identify the four major elements of weather and climate (temperature, humidity, air pressure, wind speed)
 Interpret weather/climate graphs and identify global patterns in weather and climate
 Explain the factors that influence global climate
 Describe how the varying climates found in different global biomes affect species adaptations to their environment
 Explain human effects on climate

Alignment with Ohio Academic Content Standards
 Earth and Space Sciences (Standard); Earth Systems (Benchmark); Analyze data on the availability of fresh water that is essential for life and for most industrial and agricultural processes. Describe how rivers, lakes, and groundwater can be depleted or polluted, becoming less hospitable to life and even becoming unavailable or unsuitable to support life (Indicator)
 Read a weather map to interpret local, regional, and national weather (Indicator)
 Describe how temperature and precipitation determine climatic zones (biomes) (e.g., deserts, grasslands, tundra, alpine, rainforest) (Indicator)

Teaching Methodology The methodology used in this unit project incorporates both social constructivist learning theory and project-based learning. The social constructivist approach emphasizes active teacher and student involvement in learning, relevant societal connections, and acknowledgement of the social and cultural nature of learning through the use of peer collaboration and cooperative learning groups. Moreover, students build upon prior learning to create more advanced constructs of knowledge. Aligning this series of lessons with a global concern also allows students to understand the applications of what they are learning to real life. Finally, this project is integrated among multiple core content areas, primarily science, math, geography, and reading.

Results Obtained Currently, I am student teaching in a Language Arts and Social Studies classroom. However, I will use these NASA resources in future Science instruction and recommend them to other Science, Math, and Technology educators.

Significance and Interpretation of Results Global warming and climate change are rather controversial topics that transcend all countries across the globe. The purpose of this project is to guide students in research, utilizing reputable websites and resources, and in drawing objective conclusions from the information and data they obtain. In order for students to understand the magnitude of the role climate plays in our world, they must first develop a fundamental understanding of weather and climate. This unit is sequenced so that students will build upon their prior knowledge of weather and climate and apply emerging awareness about weather and climate trends to a contextualized unit of study.

Figures and Charts

Figure 1. These graphs illustrate Earth's annual mean temperatures and were retrieved from http://data.giss.nasa.gov/gistemp/2010november/fig2.gif

Figure 2. Examples of a climatogram

References
1. National Aeronautics and Space Administration. (June 2003). Investigating the Climate System, Global Awareness Tour (pp. 91-96). Washington, D.C.: NASA.
2. See Abstract for complete references.

260 Rehabilitation Engineering Design Project

Student Researcher: Thomas P. O’Connor

Advisor: Dr. Robert Chasnov

Cedarville University Department of Mechanical Engineering

Abstract As a member of the Kettering Vocational Rehabilitation Senior Design Team, I have the opportunity to work with the Kettering Rehabilitation Center and MONCO Enterprises to help the employees at MONCO with their job performance. MONCO Enterprises provides employment opportunities, work training, placement services, and employment support for individuals with developmental disabilities. Our team is involved with creating work-assist devices to enable the employees to complete involved tasks and equip them to be a greater asset to MONCO Enterprises. One particular job that is available is the bagging of different items. Specifically, as my project took shape, I focused on the dexterity required to open the plastic bags and the shakiness of the employee's hands while holding the bag during filling. Focusing on these two aspects, I started working on a design that would open a bag and hold it in an open position during filling. The major goals I addressed were making the employees' job easier and increasing their productivity by allowing them to use both hands to fill each bag rather than splitting their concentration between holding and filling it. Through helping these employees complete their jobs well, my goal is to play a part in enhancing their quality of life.

Project Objectives As a team, the objective of our efforts for this senior design project was to supplement MONCO’s Microenterprise Pets & People with assistive technology and streamline the activities involved so that MONCO could make this small part of their business profitable and more efficient. The microenterprise Pets & People is a section of MONCO that employs workers to bake dog biscuits. Although supervisors prepare the dough, the employees do every other step of the process by hand. This includes tasks like cutting out dog bones with cookie cutters from sheets of dough, placing them on cookie sheets, and then bagging and sealing the bags to be shipped.

By the end of the project, as a team we hope to be able to quadruple the production of dog bones, allowing MONCO to meet the increasing demand for their product and to be more profitable with their time. Included in that goal, we will need to be able to bag and ship a fourfold increase in dog bone production. This would take us from approximately two hundred and fifty bags a day to one thousand bags a day. Because of the workers' situation we could not force them to work faster or set daily quotas, meaning that we had to create devices that inherently increased the ease and efficiency of the job. As a team, we observed that the employees currently struggle with some of the movements associated with the bagging process: breaking the initial seal on the bag, holding the bag steady while filling, and being reliant on the supervisors for empty bags.

Specifically, as my project took shape, I focused on the dexterity required to open the plastic bags and the shakiness of the employee's hands while holding the bag during filling. Focusing on these two aspects, I started working on a design that would open a bag and hold it in an open position during filling. The major goals I addressed were making the employees' job easier and increasing their productivity by allowing them to use both hands to fill each bag rather than splitting their concentration between holding and filling it.

Methodology This design project for a work-assist device began by brainstorming and trying to mimic the pinching and opening of a plastic bag through mechanical means. Eventually this took the form of a vacuum-powered suction cup design that used the linear motion of a drawer to engage the bag and then pull the bag open.


I began by designing the drawer in SolidWorks, leaving many of the components simplified because the CAD sketch was intended mainly to capture the aesthetics and how each part related to the others. Once the plans were finalized, I began building the drawer in my shop at home, using 3/4 in. hardwood for the entire drawer. The first thing I did was build the inside of the drawer so that I could properly measure for and size the outside. Starting with the size of the bag, I wanted to leave about 1/2 in. on either side so that a new bag could easily be dropped in. From there I used 8 inches as the starting width for the inside of the drawer and made the front face of the drawer 13 in. tall in order to have plenty of room to attach the suction cups. I bought all of the supplies for the first prototype at Lowes, and because of a limited selection of drawer sliders, I bought the shortest slider I could find; this determined the depth the drawer had to be, and from there I made the appropriate cuts. One reason the drawer was built this way, with each measurement determining the next, was that I was working with wood and sliders I knew nothing about before modeling the part in CAD, and there was plenty of time to make adjustments in the event of a second prototype.

To finish this prototype, I built a makeshift table to serve as a test setup by attaching the drawer to the right side and placing some particleboard on top as a table top. From this point I used the vacuum pump from the Engineering Project Lab on campus and was able to run tests with easy access to the front and the back of the setup. For the testing of the suction cups, I ordered many different shapes and sizes of Anver suction cups (http://www.anver.com/document/company/vacuum_cups.htm) through a distributor in Ohio. The figures shown in the Figures and Tables section of the report show first the CAD drawing of the drawer design, then two pictures of the actual test setup, and finally five pictures of suction cups that I used while trying to open a bag with the vacuum.

Results Obtained With the test setup complete, I ran several tests trying to understand which suction cups worked best, as well as which vacuum pump setup would work best. Throughout the semester there were several breakthroughs in understanding how my drawer design works or does not work. The major shortcoming of the bag opener was that the plastic was an unfortunate thickness: if the plastic were slightly thicker, it would behave more like a flat surface and the suction cups would have sealed to it well, and if the bag were more flexible, the bag would pull further up into the suction cup and also seal. But being the plastic it was, the bag was fairly stiff and would develop a single crease along an edge of a suction cup. Among the results from these tests: the best-working shape of suction cup was the oval cup made of silicone, which worked better than nitrile, and the best place to grab the bag was right at its seam, which was the most rigid part and had less flexible area that would crease. The final diagnosis for the vacuum setup was that running two independent vacuum lines to the two opposite suction cups worked best, so that if one side of the drawer lost suction with the bag it would not drop the bag completely. In the end the drawer was able to open a bag a few times but never consistently, and without consistency it was not a viable solution to the dexterity issues we were trying to address. At the end of the semester, as the project was coming to a close and I was not able to think of any changes to the drawer design that would enhance or upgrade its performance, I began to try to come up with a purely mechanical solution that would at least allow any employee to open a bag if they were not able to do so themselves.

Figures and Tables

Figure 1. Solidworks model of the drawer prototype

Figure 2. Picture of the test setup showing the drawer in the open position.

Figure 3. Selection of suction cups that were tested during the design process.

263 Earth vs. Mars

Student Researcher: Jillian M. Payne

Advisor: Dr. Robert Chasnov

Cedarville University Department of Science and Math, and Education

Abstract The main goal of this project is for students to compare and contrast Earth and Mars as if they were NASA scientists planning to plant a colony on Mars. At the end of the project, students will present a proposal to the class on whether they think humans could survive on Mars and what would need to change to make this happen. The materials that will be used for this project are the NASA Planetary Geology Teacher’s Guide and Activities and Mars Exploration: Is There Water on Mars? An Educator’s Guide for Physical and Earth and Space Science. The unit is drawn from different activities presented in these packets; you could use more or fewer of the activities, depending on how much time you want to spend on the project. This project could also be done intermittently within an Earth and Space Science unit.

Lesson The first step will be to place the students in groups of 4 or 5 depending on the size of the class. Have students make a list of the things they think are needed to sustain life on a planet. This can be done as homework. The list should contain such things as water, atmosphere (with Oxygen), orbiting a sun, rotating planet, etc. This will help students to start thinking along the lines of this project.

On the first day of the unit, the activity will be Exercise 1 from Planetary Geology. For homework, have students look up pictures of different geological events and print out one of each. Students can then answer the questions provided for the first part of the activity. Students will then work on the second part of the activity in their groups and answer the corresponding questions. Any work not completed should be done for homework.

The second day’s activity will be Exercise 7 from Planetary Geology. In their project groups, students will do the activity with a turntable to demonstrate the Coriolis Effect and they will answer the questions provided about Earth’s atmosphere and Mar’s atmosphere. If there is time left, have students discuss in their groups the importance of an atmosphere to a planet.

The third day will be Exercise 10 from Planetary Geology. Students will answer the questions provided using the pictures from the activity. They will answer the questions in their groups. They will only answer the questions pertaining to Earth and Mars and the mapping of these terrestrial planets.

The fourth day of the unit the students will do Activity 6 from Mars Exploration. Discuss with the students the properties of water. Be sure to talk about how pressure and temperature play a part in the phases of water. The handout Appendix B provides additional information. Have the students work on the activity, comparing the temperature and pressure graphs of the Mars Pathfinder mission. They will answer the questions provided in their groups.
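As an optional discussion aid for Activity 6 above, the short Python sketch below gives a simplified check of whether liquid water could be stable at a given temperature and surface pressure, using the triple point of water (273.16 K, about 611.66 Pa) and a basic Clausius-Clapeyron estimate of the boiling curve. It is a classroom approximation, not a full phase diagram, and the Earth and Mars readings in the example are hypothetical.

import math

TRIPLE_POINT_T = 273.16   # K
TRIPLE_POINT_P = 611.66   # Pa
L_VAPORIZATION = 2.5e6    # J/kg, approximate latent heat of vaporization
R_VAPOR = 461.5           # J/(kg K), gas constant for water vapor


def saturation_pressure(temp_k):
    """Clausius-Clapeyron estimate of water's vapor pressure (Pa)."""
    return TRIPLE_POINT_P * math.exp(
        (L_VAPORIZATION / R_VAPOR) * (1.0 / TRIPLE_POINT_T - 1.0 / temp_k))


def liquid_water_possible(temp_k, pressure_pa):
    """Liquid needs T above freezing and ambient pressure above the boiling curve."""
    return temp_k > 273.15 and pressure_pa > saturation_pressure(temp_k)


if __name__ == "__main__":
    # Hypothetical readings: a mild day on Earth vs. a mild spot on Mars.
    print("Earth (293 K, 101325 Pa):", liquid_water_possible(293.0, 101325.0))
    print("Mars  (280 K, 600 Pa):   ", liquid_water_possible(280.0, 600.0))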

On the fifth day of the unit, have your students do Activity 7 from Mars Exploration. Discuss as a class where water would be found on Mars. Look at past missions to Mars and see where they have landed and what sort of information they might find. The handout Appendix A is helpful with this activity. The last day of the unit should be used for planning and research. This could be done during class time, or you could have the students do this on their own time. Have your students start discussing whether they think that a colony could be planted on Mars. The students should end up with a proposal statement by the end of the class period, which they should turn in to the teacher. Have each group of students start researching so that they can present their proposal. Make sure the students visit the NASA website, specifically the site NASA – Exploration Systems Mission Directorate, and read “2004 Space Feature Story Special: Sibling Rivalry: A Mars/Earth Comparison”.

At the end of the project, the students will present their proposals as if they are presenting to a board at NASA. As I mentioned before, this unit could be done intermittently in an Earth and Space Science unit, and thus the presentation could be done at the end of this unit. However if the project is done on its own, this can be done a few days after the project is completed so that the students will have ample time to do research and put together their presentation. The presentations should include:

1. The list that they made at the beginning of the project of what is necessary for life to survive on a planet.
2. A comparison and contrast of Earth and Mars in size, temperature, atmosphere, and other geological features.
3. What factors would have to be changed or modified so that humans could live on Mars (if the students think there are any).
4. At least one problem in space science that needs to be solved before humans could make the trip and live on another planet (travel time, food supplies on a space ship, etc.).

Objectives
 Students will be able to identify and think critically about the properties necessary for life.
 Students will be able to compare and contrast Earth and Mars.
 Students will be able to think critically about the possibilities of humans inhabiting another planet and the problems that come with this.

Alignment
 Grade Eight, Earth Systems, Benchmark E: Describe the processes that contribute to the continuous changing of Earth's surface (e.g., earthquakes, volcanic eruptions, erosion, mountain building and lithospheric plate movements).
 Grade Nine, The Universe, Benchmark C: Explain the 4.5-billion-year history of Earth and the 4-billion-year history of life on Earth based on observable scientific evidence in the geologic record.
   o Indicator 3: Explain that gravitational forces govern the characteristics and movement patterns of the planets, comets and asteroids in the solar system.

265 This is Rocket Science

Student Researcher: Shannon L. Phillips

Advisor: Dr. Robert Ferguson

Cleveland State University Department of Education – Mathematics 7-12th

Abstract I will be using the launch and orbit of a Space Shuttle mission as a day-one introductory overview for Pre-Calculus and other advanced mathematics classes. I believe one of the most common questions regarding the subject of math is why we study it. I want to use the first day of school to fill students with awe, sparking their imaginations as I demonstrate some of the achievements we have attained through the study and use of mathematics.

The Lesson I will start off the class by turning off the lights without warning and showing the launch of a Space Shuttle. As I turn the lights back on and introduce myself, the data table for Position, Acceleration, and Velocity will appear on the overhead. We can begin talking about the event we have just witnessed and how the Space Shuttle goes from zero to almost 18,000 mph in 8.5 minutes. Then we can discuss the Position, Acceleration, and Velocity graphs, and how they are quadratic in nature.

The next phase of the lesson will focus on the Shuttle in orbit, together with the data table of one 90-minute orbit around the Earth. The focus will be on determining the position and velocity with respect to the x, y, and z axes, or in NASA terms the M50 (Aries-Mean-of-1950) coordinate system. These graphs will be sinusoidal in nature and will allow us to develop parametric equations for finding the Space Shuttle's position, velocity, and acceleration at any point in time.
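For the teacher's own preparation, the Python sketch below works through the two discussion points above: the average acceleration implied by reaching roughly 18,000 mph in about 8.5 minutes, and a simple parametric model of position and speed for a circular 90-minute orbit. The circular orbit and its radius are simplifying assumptions for a first-day demonstration, not actual M50 ephemeris data.

import math

MPH_TO_MPS = 0.44704


def average_acceleration(final_speed_mph, minutes):
    """Average acceleration in m/s^2 (and in g's) over the ascent."""
    a = final_speed_mph * MPH_TO_MPS / (minutes * 60.0)
    return a, a / 9.81


def circular_orbit_state(t_seconds, period_s=90 * 60, radius_m=6.778e6):
    """Position (x, y) and speed for an idealized circular orbit of the given period."""
    omega = 2.0 * math.pi / period_s
    x = radius_m * math.cos(omega * t_seconds)
    y = radius_m * math.sin(omega * t_seconds)
    speed = radius_m * omega
    return (x, y), speed


if __name__ == "__main__":
    accel, gees = average_acceleration(18000.0, 8.5)
    print("Average ascent acceleration: %.1f m/s^2 (%.1f g)" % (accel, gees))
    pos, speed = circular_orbit_state(22.5 * 60)  # a quarter of the orbit
    print("Quarter-orbit position (m):", pos, "orbital speed (m/s): %.0f" % speed)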

Objectives
 To give the students an idea of what this course has in store for them
 To pique interest and engage the students in the study of math beyond the calculations
 To demonstrate the “big” picture behind the individual math problems

Alignment  This introduction will be geared towards Pre-Calculus and beyond and will touch on the application of what we are going to learn in the coming year.

Underlying Theory This lesson is based in inquiry and engagement. The students need to learn that this is my class and that we are here to learn and have fun. Math does not have to be the dry lecture that most of us experienced when we were in school. In the past, most teachers avoided giving students an overall view of the course at the beginning, out of fear of confusing them. I believe that the more advanced students of today need to be shown the end results in order to be inspired to learn the steps. The students who just may not be interested in math need to know that I am there to make this experience worth their time and effort. I also want to set the pace for the year from the very first moment that they walk into my classroom.

Student Engagement End the lesson by letting them know that we will actually be constructing and launching our own rockets and collecting and analyzing the same type of launch data.

Resources Since this is the first day of class, activities will be limited, but I will hand out guidelines on their semester project for them to review and to begin forming ideas. This will be centered around an activity like building and launching their own rockets so that they can generate and analyze their own data.

Conclusion I want my students to leave their first day of class excited and awakened to a new way of learning about math. I want to generate discussion and interest in what is coming next. I am hard pressed to think of something that can capture the imagination of youth more than NASA and space. NASA’s web site and research have provided me with amazing visuals and concepts that excite the imagination.

267 Electronic Health Records and Its Impact on the Healthcare Industry

Student Researcher: Renée D. Piontkowski

Advisor: Dr. Donna Moore-Ramsey

Cuyahoga Community College Department of Health Information Management Technology

Change - to make the form, nature, content, future course, etc., of (something) different from what it is or from what it would be if left alone: to change one's name; to change one's opinion; to change the course of history.

Electronic Health Records (EHR) is changing our world. Paper-based patient records are evolving into the electronic realm and the transformation is making an encouraging impact on healthcare.

Electronic recordkeeping is improving many different areas of patient care. Record transactions happen daily, and the implementation of the EHR is simplifying the documentation. Whether a patient is registering at a physician’s office or an ambulatory care facility, or being admitted into a hospital, the process is smooth and quick. Timeliness is significant when dealing with a medical illness, and being able to accurately document demographic and clinical data swiftly helps. When information is entered in a computerized program developed specifically for medical use, certain data cannot be omitted; programs are designed to require that all fields be completed. Patients will no longer deal with completing repetitious forms or experience unnecessary delays in meeting with their doctor. Staff can review a person’s primary information, such as their personal or medical history, any adverse reactions to medication they may have, and their health insurance coverage, within seconds of their arriving at the medical facility.

The interoperability between medical providers within different hospital systems is no longer just a vision. Being able to communicate with different medical professionals will make a tremendous difference on how our healthcare system will care for the sick.

Providers can quickly review a person’s health history, evaluate signs and symptoms of a current complaint, and quickly receive test results to assist in diagnosing the patient. If medication is part of the patient’s remedy, a provider can electronically order a prescription so that treatment can begin quickly.

Recently, two major health systems, the Cleveland Clinic Health System and The MetroHealth System, have been participating in sharing clinical information through EpicCare, a computerized EMR software program from Epic Systems. This software program allows the medical provider to retrieve an individual’s medical information during a health encounter. Communication between physicians and other medical professionals is vital when it comes to patient treatment. Thus, implementing a tool such as this can effectively reduce clinical errors or decrease the need for repetitive medical testing.

Patients receiving treatment at a facility will experience a decrease in wait time because the facility performing the services won’t have to phone for medical information. The treating facility will already have access to the medical records.

Although the cost to install a program such as Epic is quite expensive, the U.S. Department of Health and Human Services (HHS) has established “Meaningful Use” incentives for Electronic Health Records. Healthcare providers can receive payments and/or benefits that were established to encourage adoption of the EHR. Certain criteria must be met by the provider before these incentives are obtained. For instance, a medical provider can receive advantages from Medicare and Medicaid once a commitment to the EHR has been made. To learn more about the criteria required to obtain incentives, you can log on to www.cms.gov.

Medical reimbursement for services will become less cumbersome. Some software programs will automatically assign ICD-9 codes, which reduces the number of days in the billing cycle.


Maintaining patient files will be simplified. Electronic files will eliminate the need to retrieve files from a paper-based records department or an off-site storage facility. Another advantage is more than one person can review the EHR at the same time.

The issues of privacy and security will be more easily enforced. Management will be able to run a variety of reports to assist in managing the Health Insurance Portability and Accountability Act (HIPAA) regulations. Audit checks will be completed quickly. Employees will use passwords to sign into their computer to perform job tasks. Encryption is another tactic that will be implemented to ensure patient privacy.

Public Health can advance through EHR technology. Rural area healthcare facilities will have access to major medical universities in order to conduct research and track trends in a geographical location. With the practice of telemedicine, patients no longer have to travel many miles to receive care from specialists. Doctors will have the capability to check information through means of a computerized database, registry and/or index.

Risk Management and Quality Assurance departments would have more time to deal with other compliance issues within their facility. The EHR allows record audits to be performed at a much faster pace and, again, allows more than one person to access the record at the same time. Searches could be performed to verify that the organization’s policies and procedures are being followed by staff. Confirming good employee performance will assist the facility in receiving accreditation from healthcare agencies. Medical entities are businesses, and they work to uphold a good reputation in the public eye.

The EHR allows for timely and accurate documents which in turn aids the legal department in its duty to represent the entity.

The Department of HHS is expecting Electronic Health Records to go into effect in October 2013. EHR continues to grow and Electronic Health Records will certainly make its impact on the healthcare industry.

References 1. www.CMS.gov 2. dictionary.reference.com

269 NASA - Friend of Education - Re-Introducing Students to NASA

Student Researcher: Therese M. Post

Advisor: Dr. Diana Hunn

University of Dayton Department of Education and Applied Professions

Abstract The goal of my topic is a three-fold look at the past, present, and future of NASA. First, is the historical presentation of NASA as a cultural and scientific part of American life; the second is to present to students the scientific benefits and knowledge that have come from NASA research (intended and unintended), and lastly, to present NASA as a vital part of the future of this nation. NASA represents the cutting edge of science and the STEM initiatives that are taking place all over the country which are so important in the science classroom. Even beyond the classroom, though, is the importance in the boardrooms of our nation’s corporations as they look for innovation and new products. As an extension, I propose an educational idea that could bring NASA’s new Robonaut 2 into the classroom to assist teachers in reaching all their students.

Project Objectives After attending the information seminar in November, I was unsure about the topic I would undertake for my poster presentation. The array of knowledge and the content area that NASA presents is vast, almost like space itself. As I have continued in my student teaching from the fall of last year and through the winter and spring of 2011, I have become convinced that today’s middle school students are very unfamiliar with the rich history, culture and scientific value that NASA has brought to our nation and the world for the last 50 years. When I have questioned students in science classes about JFK and the mission to land a man on the moon by the end of the 60s, most of them are unaware of the history behind this great milestone. They don’t know about Sputnik or Voyager. They don’t know what the Hubble Telescope has done for them. Our younger students need to understand the impact that NASA has had on their lives. We cannot assume they have absorbed this information through the internet, TV, in past classrooms or other sources. As I have talked about NASA and the space program in the classes I have taught, I see that students lack a basic knowledge about the interwoven history of NASA, education and classic American culture. In order to bring NASA into our classrooms, it is necessary to supply our students with a good knowledge and solid foundation about what NASA has meant to this country. In addition to that, students should gain a store of information about the benefits and inventions that have improved the life of mankind through the work of NASA. After they understand this, it will be possible to use the student curiosity and familiarity with NASA to bring NASA’s face into the classroom (Robonaut 2). Let’s re-name him Eddie for education. This will provide science teachers with a method for integrating science, language arts and social studies through this mini-unit on NASA and then to use Eddie to assist them in reaching all students in the classroom.

Methodology Used The methodology I used for my project was the format of a mini-unit on NASA that would integrate content in the areas of science, social studies, and language arts. Recent educational research has shown that integrating content areas is an effective way of teaching middle-school-age students. We can combine several content areas to attract and hold the attention of many students at once (Gardner’s multiple intelligences, brain-based thinking, visual learning, Vygotsky’s ZPD, Piaget’s schema theory). Most modern educators are also enthusiastic about bringing language arts into all the content areas in order to improve the reading and comprehension skills of middle school students, and this could easily be accomplished with this type of unit in the classroom. This mini-unit could be taught in about a week and could involve the science, social studies, and language arts areas; it would probably be easy to include math and a humanities aspect if teachers wanted to expand the scope to include all content areas. As an extension of this lesson, following the scientific method, I propose using the prototype Robonaut 2 in a modified version that would have reading/comprehension programs designed to aid the teacher in analyzing the reading and comprehension levels of students using tests such as Cloze, Maze, QRI, Fry, and Interest Inventories.

Acknowledgments Advisor: Dr. Diana Hunn, University of Dayton, School of Education

References
1. Benefits of Space Exploration. www.nasa.gov; “Public Reaps Benefits of NASA Research,” Washington Times, July 2009.
2. History of NASA. www.history.nasa.gov
3. U.S. and world history facts. www.nagc.org

271 Hydraulic Fracture Design in the Marcellus Shale

Student Researcher: William Tyler Ragan

Advisor: Dr. Benjamin Thomas

Marietta College Department of Petroleum Engineering

Abstract Shale gas has become one of the most sought-after commodities in the energy industry because of recent advances in hydraulic fracturing. Hydraulic fracturing involves pumping large volumes of fluid into rock formations to create fractures. Once the formation is fractured, a highly permeable proppant (sand) is pumped and placed in the fracture to keep it open and allow the hydrocarbons to flow. This process is vital in shale formations because of shale’s low permeability. The fracture helps access gas deep in the formation and allows operators to produce shale formations economically. However, dense population and increased fracturing activity have caused water availability problems for oil and gas operations in the Northeast. As a result, it has become important to limit water usage, which is hard to accomplish when most new wells drilled in the Marcellus Shale need to be hydraulically fractured to produce economically.

The objective of this project is to maintain or increase fracture height, half-length, and stimulated reservoir volume while significantly reducing the volume of water pumped, by adjusting parameters such as viscosity, slurry rate, proppant concentration, and other variables. Since it is not economical to vary parameters and run experiments on actual fracture jobs, a 3-D fracture simulation software package called MShale will be used to model the experiments. Reservoir properties, wellbore data, and current fracture designs from Marcellus Shale operations will be used to construct these models and create realistic situations.

Project Objectives The Marcellus Shale (located in New York, West Virginia, Pennsylvania, and Ohio) has become one of the most important plays in the energy industry over the past several years. However, due to the high population density and the increased hydraulic fracturing activity, water shortage problems have started to plague the oil and gas operators in this area. As a result, it is vital to develop solutions to these water shortage problems, because most wells have to be hydraulically fractured to produce economically in the Marcellus Shale. The objective of this project is to reduce the total volume of fracture fluid by 25% while maintaining or increasing fracture height, length, and the stimulated reservoir volume. This will be done by altering parameters in current Marcellus Shale fracture designs and testing them on a 3-D fracture simulator. The tests will then be evaluated to see the applicability of these alterations, and applicable alterations will be coupled together to find a new fracture design that accounts for the water reduction without decreasing the fracture length, height, or the stimulated reservoir volume.

Methodology In order to create hydraulic fracture simulations that are as accurate as possible, a hydraulic fracture program called MShale was used. This program was selected because of shale's tendency to have natural fractures. Most hydraulic fracture simulators do not account for natural fractures in formations and tend to overestimate dominant fracture half-lengths because they do not account for fluid leakoff into the natural fractures. MShale, however, allows oil and gas operators to develop simulations that account for natural fractures. These natural fractures are vital to the model because they are extremely important in accessing the natural gas stored in shale. The natural fractures are modeled in cubic form, and an example can be seen in Figure 1 of the Figures and Tables section.

An example of a Marcellus Shale fracture simulation provided by Meyer and Associates was used to provide important information such as rock stresses, formation depth, leakoff coefficients, and a variety of other data. In order to provide a complete model, a fracture design was taken from an SPE paper that discussed current operations in the Marcellus Shale. This fracture design can be seen in Table 1 of the

272 Figures and Tables Section. These parameters were input into MShale to create an accurate model of a single-stage fracture job.

Engineers can only control certain variables in a hydraulic fracture design. These parameters include fracture fluid viscosity, fracture fluid density, fluid loss additives, pad volume, treatment volume, injection rate, proppant size, fluid type, and proppant concentration. However, this project focused on altering the fluid type, proppant size, injection rate, and proppant concentration to account for the 25% volume reduction.

The first step in producing results involved running the original fracture design in the simulator and recording the fracture half-length, height, and stimulated reservoir volume. These values were then used as the base case against which design alterations were compared. The next step was to reduce the volume of fluid pumped by 25% and record the results. The first alteration to account for the volume reduction involved varying the injection rate and measuring the results. Four variations of rate were run: a 10% and a 20% reduction in rate, and a 10% and a 20% increase in rate. The same variations were then run with proppant concentration and the results were recorded. The next alteration involved changing the fluid type. Simulations were run with slickwater (with 1 gal/1,000 gal of friction reducer), a linear gel, and a cross-linked gel. The types of linear gel (40 PPT AG-21R, GW-21, 12.8 GPT GW-24L by BJ) and cross-linked gel (FracGel 25 lb/Mgal WG-31 by Halliburton) were chosen at random because the different types of linear and cross-linked gels were not the focus of this research. The last simulations run involved altering the proppant size. These alterations included proppant sizes such as 100 mesh, 20/40, and 70/140, which were chosen because they are typical proppant sizes used in hydraulic fracturing. As was the case in the current fracture design, different sizes of proppant were not paired together in one design; each simulation was run with only one size of proppant. It should be noted that each alteration was made separately and run with respect to the original design and the 25% volume reduction.
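Because MShale is run interactively, keeping track of the single-parameter cases described above is mostly a bookkeeping problem. The following Python sketch is illustrative only; the case names, dictionary fields, and the idea of recording results by hand are my own assumptions, not part of the original workflow. It simply enumerates the alterations so each MShale run can be logged against the base case:

    # Hypothetical bookkeeping for the simulation matrix described above.
    # MShale runs are set up and executed manually; this sketch only enumerates
    # the cases and stores results keyed by case name.
    from itertools import chain

    base_case = {"rate_change": 0.0, "conc_change": 0.0,
                 "fluid": "slickwater", "proppant": "original sizes"}

    def variations():
        """Yield (name, overrides) pairs for every single-parameter alteration."""
        for pct in (-0.20, -0.10, +0.10, +0.20):      # injection rate changes
            yield (f"rate{pct:+.0%}", {"rate_change": pct})
        for pct in (-0.20, -0.10, +0.10, +0.20):      # proppant concentration changes
            yield (f"conc{pct:+.0%}", {"conc_change": pct})
        for fluid in ("slickwater", "linear gel", "crosslinked gel"):
            yield (f"fluid={fluid}", {"fluid": fluid})
        for mesh in ("100 mesh", "20/40", "70/140"):
            yield (f"proppant={mesh}", {"proppant": mesh})

    results = {}
    for name, overrides in chain([("base", {})], variations()):
        case = {**base_case, **overrides}
        # Run MShale manually with the settings in `case`, then record the outputs:
        results[name] = {"half_length_ft": None, "height_ft": None, "srv_gal": None}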

Upon the completion of the different simulations, a thorough analysis of the results was conducted and the variations that provided good results were noted. These variations were then paired together and more simulations were run to provide an optimum fracture design. The results from the different simulations were compared and one design was chosen.

Results Obtained The base case produced a fracture half-length of 1,553 ft, a height of 122 ft, and a stimulated reservoir volume of 1.8318x10^9 gallons. Reducing the volume of the treatment by 25% produced a fracture half-length of 1,358 ft, a height of 103 ft, and a stimulated reservoir volume of 1.3334x10^9 gallons. Alterations to rate made a significant impact on the output parameters, but decreasing the rate produced the most desirable results. Running variations in fluid type also produced significant changes to the output parameters. The linear and cross-linked gels both produced significant decreases in half-length and stimulated reservoir volume. They also dramatically increased the fracture height and fractured into the formations above and below the Marcellus Shale. Both of these results are undesirable.

The changes in proppant size had minimal effects on any of the three measured outputs, so the original proppant sizes were retained in the new design. However, the changes in proppant density did have some effect on the three measured outputs. Increasing the proppant density increased the fracture half-length and stimulated reservoir volume, while decreasing the proppant density decreased these parameters. The height remained the same for all alterations.

The 20% increase in proppant density, the current proppant sizes, slickwater, and the 20% decrease in rate were paired together to provide the optimum fracture design for this research. This design produced a fracture half-length of 1,406 ft, a height of 100 ft, and a stimulated reservoir volume of 1.3931x10^9 gallons. The design can be seen in Table 2 of the Figures and Tables Section. This new design does not fully compensate for the volume reduction, and to fully determine its success one would have to run a production-versus-cost analysis. However, due to the cost associated with that process, it could not be performed for this research project.

273 Figures and Tables

Table 1. Current Marcellus Well Fracture Design

Slurry Rate (bpm)  Stage Volume (gal)  Stage Type  Fluid Type  Proppant Type  Slurry Density (ppg)  Total Time (min)
50  100000  Pad    Slickwater  -         0    48
50  75000   Slug   Slickwater  100 Mesh  0.5  86
50  5000    Sweep  Slickwater  0         0    88
50  75000   Slug   Slickwater  100 Mesh  0.7  126
50  5000    Sweep  Slickwater  0         0    129
50  50000   Slug   Slickwater  30/50     1    152
50  5000    Sweep  Slickwater  0         0    155
50  50000   Slug   Slickwater  30/50     1.5  179
50  5000    Sweep  Slickwater  0         0    181
50  50000   Slug   Slickwater  30/50     2    205
50  5000    Sweep  Slickwater  0         0    207
50  50000   Slug   Slickwater  30/50     2.5  231
50  5000    Sweep  Slickwater  0         0    233
50  25000   Slug   Slickwater  20/40     2.5  245
50  5000    Sweep  Slickwater  0         0    248
50  25000   Slug   Slickwater  20/40     3    260

Table 2. New Marcellus Well Fracture Design

Slurry Rate (bpm)  Stage Volume (gal)  Stage Type  Fluid Type  Proppant Type  Slurry Density (ppg)  Total Time (min)
40  75000   Pad    Slickwater  -         0     45
40  60000   Slug   Slickwater  100 Mesh  0.6   81
40  3750    Sweep  Slickwater  0         0     83
40  60000   Slug   Slickwater  100 Mesh  0.84  119
40  3750    Sweep  Slickwater  0         0     121
40  37500   Slug   Slickwater  30/50     1.2   143
40  3750    Sweep  Slickwater  0         0     146
40  37500   Slug   Slickwater  30/50     1.8   168
40  3750    Sweep  Slickwater  0         0     170
40  37500   Slug   Slickwater  30/50     2.4   193
40  3750    Sweep  Slickwater  0         0     195
40  37500   Slug   Slickwater  30/50     3     218
40  3750    Sweep  Slickwater  0         0     220
40  18750   Slug   Slickwater  20/40     3     233
40  3750    Sweep  Slickwater  0         0     235
40  18750   Slug   Slickwater  20/40     3.6   248

Figure 1. Cubic Natural Fracture Model

The spaces represent the natural fractures and the cubes represent the rock matrix. Natural fractures are extremely hard to predict, but this model provides a good approximation.

References 1. Economides, Michael J., A. D. Hill, and Christine Ehlig-Economides. Petroleum Production Systems. Englewood Cliffs, NJ: PTR Prentice Hall, 1994. Print. 2. Fontaine, J., N. Johnson, and D. Scheon. "Design, Execution, and Evaluation of a 'Typical' Marcellus Shale Slickwater Stimulation: A Case History." SPE (2008). Web. 3. Meyer and Associates. 4. http://www.fekete.com/software/piper/media/webhelp/c-te-reservoir.htm

274 Improvement of Coplanar Grid CZT Detector Performance with Silicon Dioxide Deposition

Student Researcher: 1Danielle N. Richards

Advisors: 2Dr. Arnold Burger, 2MSc Michael Groza, 2Dr. Liviu Matei, 2Vladimir Buliga

Wilberforce University 1Department of Engineering, 2Material Science & Applications Group, REU/SPR, Fisk University

Abstract This research paper presents an attempt to improve the response of a CZT (cadmium zinc telluride) detector in a coplanar grid configuration by applying an insulator layer on top of the gold anode grids. The chosen insulator in this experiment is SiO2 deposited by RF sputtering. Insulator deposition was performed at room temperature, without any extra heating of the CZT crystal other than the heat produced by the sputtering process. Detector response to high energy gamma radiation (662 keV) was determined before and after silicon dioxide coverage of the anode area. The energy resolution at 662 keV improved from 3.2% before silicon dioxide deposition to 2.8% after insulator deposition on top of the grids.

Introduction CZT is the best choice for room-temperature radiation detectors, but in large crystals performance is limited by the difference in μτ values for electrons (~5x10^-3 cm^2/V) compared with ~5x10^-5 cm^2/V for holes. For high energy gamma radiation, special contacts should be used to limit the effect of hole trapping. The coplanar grid (CPG) configuration is the most efficient solution, besides the Frisch capacitive grid [1] and the small pixel [2], for removing the effect of hole trapping in large radiation detectors made of semiconductors with a large difference between electron and hole transport. The coplanar grid anode consists of two interlaced fine forks, spaced 100-200 microns apart and biased one relative to the other at up to 200 V. The principle of operation of a coplanar grid detector is presented elsewhere [3]. Considering surface defects such as surface-terminated tellurium inclusions, and the high electric field developed between the grids (up to 20 kV/cm, close to the dielectric strength of air), the inter-grid current may become unstable, compromising the proper function of CPG devices. Covering the anode face of the detector with a good insulating material such as silicon dioxide is expected to lead to a reduction in surface leakage currents and improve the stability of detector performance over time. This paper discusses the improvement of detector performance as well as short-term detector stability.
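To make the electron/hole asymmetry concrete, the short back-of-the-envelope calculation below (not from the paper; the field is estimated from the 2000 V cathode bias and 9 mm crystal thickness quoted later) compares mean carrier drift lengths λ = μτE:

    # Rough, illustrative calculation: why hole trapping matters in CZT.
    # mu*tau values are those quoted above; the electric field is an assumed average
    # from the 2000 V cathode bias across the 0.9 cm thick crystal described later.
    mu_tau_e = 5e-3   # cm^2/V, electrons
    mu_tau_h = 5e-5   # cm^2/V, holes
    E = 2000 / 0.9    # V/cm, assumed average field

    lambda_e = mu_tau_e * E   # mean electron drift length before trapping, cm
    lambda_h = mu_tau_h * E   # mean hole drift length before trapping, cm

    print(f"electron drift length ~ {lambda_e:.1f} cm, hole drift length ~ {lambda_h:.2f} cm")
    # Electrons drift ~11 cm (far more than the 0.9 cm crystal); holes only ~0.1 cm,
    # so hole trapping dominates and single-polarity (coplanar grid) sensing is needed.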

Experiment and Results The chosen insulator is SiO2 deposited by RF sputtering. The sputtering process is a PVD (physical vapor deposition) method that deposits a coat of material on a substrate (here, the CZT crystal). Different pressures of Ar + 5% O2 gas mixtures were applied, and gold was used because it gave the best contact. Insulator deposition was performed at room temperature, without any extra heating of the CZT crystal other than the heat produced by the sputtering process. The following deposition conditions were attempted to achieve the best results.

Process gas pressure (Ar + O2) [torr]   RF power [W]   Deposition time [min.]   rs after deposition [Ω/sq.]   rs after 20 hrs. [Ω/sq.]
1.0 x 10^-2                             100            10                       1.4 x 10^11                   -
2.5 x 10^-2                             100            10                       2.6 x 10^10                   -
8.0 x 10^-3                             100            10                       7.1 x 10^11                   -
8.0 x 10^-3                             80             20                       1.6 x 10^12                   2.5 x 10^13

The CZT crystal used for our study was a 20x20x9 mm^3 crystal grown by Orbotech, Israel. The crystal was polished on all six faces down to 0.05 μm particle size abrasive, then cleaned in DI water and methanol, and gold contacts were applied by RF sputtering. The coplanar grid anode shown in Figure 1 was fabricated using the photolithographic technique, which is the process of selectively removing parts of a thin film or bulk substrate, or of using light to create a pattern.

275 [Figure 1 labels: Grid 1 (collecting grid), Grid 2 (non-collecting grid), grounding grid.]

Figure 1. Photograph of the CPG CZT detector, anode side. The anode strips as well as the grounding grid are RF-sputtered gold, 0.1 μm thick. The cathode side is solid gold, RF sputtered on the cadmium-rich face.

Detector response to 662 keV gamma radiation was measured with -2000 V applied at the cathode and 160 V on the collecting grid, with a shaping time of 1 microsecond. The energy resolution at 662 keV measured before covering the anode area with silicon dioxide was 3.2%; the detector anode was then RF sputtered with SiO2. The silicon dioxide deposition was done at 80 W for 20 minutes. The process gas was a mixture of argon and oxygen (5%) at a pressure of 8 x 10^-3 torr. The SiO2 layer thickness was estimated at about 80 nm with an rms roughness of 15 nm. The surface resistivity of the silicon dioxide layer was measured and found to be 2.5 x 10^13 Ω/square, substantially higher than the surface resistivity of CZT. The surface resistivity equation used was rs [Ω/sq.] = (V (V) / I (A)) x (L (cm) / g (cm)). The schematic of the detection response setup is presented in Figure 2, and the spectra before and after silicon dioxide deposition are shown in Figures 3(a) and 3(b), respectively.
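As a small illustration of the surface resistivity equation above, the sketch below assumes L is the length of the measured strip pair and g the gap between them (my reading of the symbols), and uses made-up meter readings; only the 2.5 x 10^13 Ω/sq. figure comes from the measurement reported here:

    # Minimal sketch of the surface resistivity equation, with hypothetical readings.
    def surface_resistivity(V_volts, I_amps, L_cm, g_cm):
        """rho_s [ohm/sq] = (V / I) * (L / g); L = strip length, g = inter-strip gap (assumed meanings)."""
        return (V_volts / I_amps) * (L_cm / g_cm)

    # Example with assumed numbers: 100 V across a 2 cm long strip pair separated by a
    # 0.015 cm (150 um) gap, drawing 0.5 nA of surface leakage current.
    rho_s = surface_resistivity(V_volts=100, I_amps=0.5e-9, L_cm=2.0, g_cm=0.015)
    print(f"rho_s = {rho_s:.2e} ohm/sq")   # ~2.7e13 ohm/sq, of the order of the reported 2.5e13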

Figure 2. The block schematic of radiation detection system used to evaluate the detection response of CPG detector

Detection response before and after insulator deposition on the anode was measured under identical conditions using standard equipment: A250CF preamplifiers, a LeCroy model DA1855A differential amplifier, an Ortec model 671 shaping amplifier, and a Canberra Multiport II multi-channel analyzer. The energy resolution at 662 keV measured after covering the anode with SiO2 was 2.8%.

[Figure 3, panels (a) and (b): 137Cs spectra (counts vs. channel) from the coplanar grid CZT detector; cathode bias 2500 V, G1 bias 160 V, shaping time 0.5 μs. (a) No SiO2, FWHM at 662 keV = 21.23 keV; (b) SiO2 on top of the grids, FWHM at 662 keV = 18.71 keV.]

276 Figure 3. 137Cs spectra obtained with a 20x20x9 mm^3 CZT detector in coplanar grid configuration without (a) and with (b) a silicon dioxide layer on top of the coplanar grids anode. The SiO2 layer covering the coplanar grids led to about a 12% improvement in energy resolution at 662 keV, from 3.2% to 2.8%.

The relative improvement (12%) in energy resolution after SiO2 coverage of the CPG anode is more evident in the zoomed 662 keV peak region shown in Figure 4.
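The quoted percentages follow directly from the FWHM values in Figures 3 and 4; a short check (Python, illustrative only):

    # Quick check of the quoted numbers (FWHM values taken from Figures 3 and 4).
    E_peak = 662.0          # keV, 137Cs photopeak
    fwhm_before = 21.23     # keV, no SiO2
    fwhm_after = 18.71      # keV, SiO2 on top of the grids

    res_before = fwhm_before / E_peak                            # ~3.2 %
    res_after = fwhm_after / E_peak                              # ~2.8 %
    rel_improvement = (fwhm_before - fwhm_after) / fwhm_before   # ~12 %

    print(f"{res_before:.1%} -> {res_after:.1%}, relative improvement {rel_improvement:.0%}")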

[Figure 4: overlaid 137Cs spectra (counts vs. channel, channels 620-680) for the coplanar grid CZT detector. Black: no SiO2, FWHM at 662 keV = 21.23 keV. Blue: SiO2 on top of the grids, FWHM at 662 keV = 18.71 keV.]

Figure 4. Zoomed 662 keV peaks from Figures 3a and 3b for better visualization of the difference in energy resolution before and after SiO2 deposition.

One month after SiO2 protection of the anode area, the detector was tested again under identical conditions; its performance was not reduced but slightly improved, probably due to better settling of the insulator layer with time at room temperature. During this entire period the detector was kept in normal ambient conditions in the lab.

Conclusions Covering the polished, gold-contacted anode area of a coplanar grid CZT detector with silicon dioxide leads to an improvement in detector performance of about 13%: the energy resolution at 662 keV was reduced from 3.2% to 2.8%, and detector quality is better preserved under exposure to the ambient atmosphere. Future goals are variations of the main parameters, namely the silicon dioxide layer thickness and the inter-grid gap. This will shed more light and provide useful information for future studies.

Acknowledgments NSF, Center of Physics and Chemistry of Material (Fisk “CPCOM”), Dr. Arnold Burger, MSc Michael Groza, Dr. Liviu Matei, Vladimir Buliga, REU Interns.

References 1. Arnold Burger, Michael Groza, Yunlong Cui, Utpal N. Roy, Damian Hillman, Mike Guo, Longxia Li, Gomez W. Wright, and Ralph B. James, “Development of portable CdZnTe spectrometers for remote sensing of signatures from nuclear materials”, phys. stat. sol. (c) 2, No. 5, 1586–1591 (2005) / DOI 10.1002/pssc.200460839 2. A. E. Bolotnikov, G. S. Camarda, G. A. Carini, G. W. Wright, L. Li, A. Burger, M. Groza, and R. B. James, “Large area/volume CZT nuclear detectors”, phys. stat. sol. (c) 2, No. 5, 1495–1503 (2005) / DOI 10.1002/pssc.200460831 3. P. N. Luke, “Single-polarity charge sensing in ionization detectors using coplanar electrodes”, Appl. Phys. Lett. 65 (22), 28 November 1994

277 Path Following Algorithm Development

Student Researcher: David A. Rogers

Advisor: Dr. David Mikesell

Ohio Northern University Department of Mechanical Engineering

Abstract Currently a senior design team at Ohio Northern University is working on developing an autonomous golf cart. Part of this process is developing a path following algorithm for the golf cart. However, the algorithm needs to fit within the scope of the project. The project is meant to supply a modular platform so that future groups or classes can expand upon the design from this year's team. Therefore, the algorithm chosen needs to be relatively simple to grasp, but still able to successfully maneuver the cart to its specifications. My goal is to research numerous algorithms in order to choose the one that best fits the goals of the golf cart project.

Project Objectives The development of fully autonomous vehicles and driver assist functions has become increasingly popular. In the mid-2000s, DARPA hosted a competition in which teams developed autonomous vehicles capable of driving through unpaved, suburban, or urban settings [1]. Google has also developed a fleet of autonomous vehicles which has logged over 1000 hours of urban driving [2]. These recent developments are what spurred Ohio Northern University to have students design an autonomous system for a senior design project. This system is to be able to safely and accurately navigate the campus at Ohio Northern. However, it was decided that this project had too large a scope for a single group to accomplish in one school year. The objective for the first year of this project was to create a system which could navigate a series of preprogrammed GPS points. One of the major obstacles to the creation of this system is the development of a path following algorithm. The team had to decide whether to design one from scratch or use one of several possible algorithms already in existence.

This research project looks into finding the best path following algorithm for this autonomous system. Once the optimal solution is found, the algorithm can be written and implemented on the control systems being designed for the vehicle. This should prove to be the final step leading to the completion of all of the goals outlined for the team's senior design project.

Methodology There were many design decisions which went into choosing the correct algorithm. The algorithm had to fit the main criteria of the overall project. First and foremost, the algorithm chosen needed to be fairly simple to implement. As stated previously, this is intended to be a multiyear project with a limited goal for the first year. With so much other design and implementation work required to realize the goals of the project, an overly complex path follower would take too much time away from other important aspects of the vehicle. In addition, since the vehicle will not be fully autonomous after this year, a high-end path follower is not as necessary.

Another of the main criteria for this cart is to promote future development. One of the big attractions of this project is that it can provide a modular platform for future senior design projects or class projects to build on. Implementing a rather rudimentary algorithm could encourage a future group to work exclusively on developing a more efficient and precise system. It is expected that future years will add various sensors to the vehicle for collision avoidance and path tracking. Since the vehicle is currently only supposed to work with a GPS/INS receiver, an algorithm designed to work with other sensors would be unnecessary, again emphasizing the advantages of implementing a rather simple system for this year's design.

278 Due to the reasons stated above, implementing a system which had already been realized and proven seemed the best course of action. Developing a path following algorithm from the ground up would take up time which the group did not have. The next step was to find suitable algorithms from which to choose. Three such systems came from papers written by current professors at Ohio Northern. The first was designed for use in automobiles and was created by the project's advisor, Dr. David Mikesell [3]. The second was designed for use on the Mars rovers and was written by Dr. Eric Baumgartner [4]. The last was designed for use on wheelchairs and was written by Dr. J.D. Yoder and Dr. Eric Baumgartner [5].

Each of these designs was closely studied to determine how well it could accomplish the goals set out for the path follower. A large weight was put on how closely the system for which the algorithm was designed resembled the system for this project, mostly to make the algorithm as simple as possible to implement.

The vehicle being used for this project is a golf cart. This made the first system the most accessible, since it was designed for use on automobiles. In addition, the main components being made available by the school were the same components used in Dr. Mikesell's system: the RT500 GPS/INS receiver and the National Instruments CompactRIO. The system itself was also fairly easy to conceptualize and implement. Using this system would allow the group to save money on the main components and help to reduce the overall budget for the project, which was important for the university. This system would also provide a solid base on which to expand. While the path follower is designed to work with a GPS/INS, the addition of various sensors in future years could only improve the accuracy of the system.

The main problem with the second system is the platform for which it was designed. The Mars rovers are inherently very different from a golf cart. While changing the algorithm to work with the cart would not be too difficult in the long run, it would take some time away from other parts of the project. The system was also optimized to perform very complex maneuvers which are not physically possible for a golf cart to replicate, and further development would be needed to reduce that complexity to work with the available system. Since the rovers were designed to be driven remotely, the algorithm reflects this. Also, the rovers were not designed to work with GPS, which is the system desired for this project.

The final system was much different from the previous two. The main difference is that this system was meant to work with paths which were taught; this would work only for basic testing purposes if used on this project. Expanding on this system could be as much work as starting from scratch with a different path follower. Also, this system was designed for use with an array of sensors in tight environments, as opposed to using a GPS or other positioning system in wide-open environments. While this system would not be very viable as the main algorithm for the project, it could prove useful in future years. Once more sensors are implemented and the cart is used around static and dynamic obstacles, this system could provide a basis from which to work.

Results Obtained It was decided that the path following algorithm designed by Dr. David Mikesell was the most viable option for the golf cart. The similarity of the projects and the accessibility of the right equipment and the original author were found to be very attractive features of this route. These resources could prove to be very important as the cart gets closer to completion.

At the time this report was written, no path following algorithm had been implemented on the golf cart. The team had recently completed installing a drive-by-wire system on the vehicle. The next steps toward completing the year's goals include writing the path following algorithm outlined in this paper and coordinating it with the control systems installed on the golf cart. With the amount of time left before the end of the school year, the team plans to be able to complete all goals and have the cart driving on a designated path of GPS waypoints.

279 Acknowledgments The author of this paper first and foremost would like to thank the rest of the team working on this project, Alan Hall, Brandon Helms, Nick Secue, and Josh Stone. The author would also like to thank SEA inc. for providing several key components for the project. Finally, the author would like to thank Dr. David Mikesell and Dr. Nathaniel Bird for their assistance on the project.

Figures and Tables

[Figure 2 summarizes the planned path following logic: look ahead a distance L from the current GPS position; find the nearest waypoint farther than L (point A) and the waypoint before it (point B); interpolate a target point G between B and A at distance L from the cart; compute the heading change Δ toward G; command a velocity inversely related to Δ (large Δ = small velocity); plot the current position, A, B, and G on the laptop display; and stop the cart once it is past the last point in the sequence.]

Figure 1. Wiring diagram for the cart's control system.

Figure 2. Logic diagram for the planned path following algorithm.
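For reference, a minimal Python sketch of the look-ahead logic summarized in Figure 2 is given below. This is only an illustration of the general approach; it is not Dr. Mikesell's algorithm or the team's implementation, and all names, gains, and the interpolation step size are hypothetical:

    import math

    def follow_step(pos, heading, waypoints, L=5.0, v_max=2.0):
        """One iteration of a simple look-ahead follower (all names/gains hypothetical).
        pos/heading come from the GPS/INS; waypoints is an ordered list of (x, y)."""
        # Find the first waypoint A farther than the look-ahead distance L.
        idx = next((i for i, w in enumerate(waypoints) if math.dist(pos, w) > L), None)
        if idx is None:                       # past the last point: stop the cart
            return 0.0, 0.0
        A, B = waypoints[idx], waypoints[max(idx - 1, 0)]
        # Interpolate a target point G on segment B->A at distance L from the cart.
        t, G = 0.0, B
        while t < 1.0 and math.dist(pos, G) < L:
            t += 0.01
            G = (B[0] + t * (A[0] - B[0]), B[1] + t * (A[1] - B[1]))
        # Heading change toward G; slow down for large corrections.
        target_heading = math.atan2(G[1] - pos[1], G[0] - pos[0])
        delta = (target_heading - heading + math.pi) % (2 * math.pi) - math.pi
        velocity = v_max / (1.0 + abs(delta))     # big delta -> small velocity
        return delta, velocity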

References 1. "GM Says Driverless Cars Could Be on the Road by 2018." Wired. 07 Jan. 2008. Web. 10 Oct. 2010. 2. "Autonomous Vehicles Are Probably Google's Most Ambitious Project to Date." Softpedia. 11 Oct. 2010. Web. 15 Oct. 2010. 3. A. Sidhu, D. R. Mikesell, D. A. Guenther, R. Bixel, G. Heydinger, "Development and Implementation of a Path-Following Algorithm for an Autonomous Vehicle," M.S. Thesis, Ohio State University, Columbus, OH, 2007. 4. E. T. Baumgartner, H. Aghazarian, A. Trebi-Ollenmu, "Rover Localization Results for the FIDO Rover," Proceedings of SPIE, 4571, pp. 34-44, Boston, MA, October 2001. 5. J. D. Yoder, E. Baumgartner, and S. B. Skaar, "Reference Path Description for an Autonomous Powered Wheelchair," IEEE Transactions on Robotics and Automation, 8, pp. 2012-2017, May 1994.

280 Synthesis and Characterization of Polycarbonate Nanocomposites Using In-Situ Polymerization

Researcher: Bradley B. Rupp

Advisor: Dr. Maria Coleman

The University of Toledo Department of Chemical and Environmental Engineering

Abstract Polycarbonate was synthesized using in-situ polymerization in the presence of carbon nanofibers and alumina nanowhiskers to make a concentrated nanocomposite. The polycarbonate produced showed a high molecular weight as well as a high yield. Films were fabricated by blending the nanocomposite polycarbonate with pure polycarbonate to a desired fiber concentration. These films showed improved mechanical properties in storage modulus as compared to films produced with pure polycarbonate, with the nanocomposite films having higher moduli at lower carbon nanofiber loadings.

Project Objectives This work is a follow-up to a previous publication by Hakim-elahi et al. [1], which analyzed the effects of alumina nanowhisker nanocomposites, formed using both blending and in-situ polymerization over a range of temperatures, on mechanical and optical properties. Nanocomposites were produced using in-situ polymerization of alumina with polycarbonate as well as by blending pure polycarbonate with raw alumina. The films produced showed an increase in tensile properties (i.e., Young's modulus and tensile strength) over the range of fiber loadings, while optical transparency diminished as the alumina loading increased. Results showed that increasing the temperature of the polymerization reaction gave increased yield and molecular weight, which led to increases in mechanical properties while maintaining transparency. Functionalizing the alumina nanowhiskers using in-situ polymerization rather than blending with raw fibers was also shown to improve the tensile and optical properties of the cast films [1].

The work of this paper focused on applying the techniques of the previous publication to a new system of carbon nanofiber (CNF) nanocomposites. Since the discovery of carbon nanotubes (CNTs) and CNFs in the early 1990s [2, 3], considerable work has been devoted to the use of CNTs [4] and CNFs [2, 5] to reinforce polymer matrices because of their remarkable mechanical and electrical properties. Carbon nanofibers were polymerized with polycarbonate to form a nanocomposite network. A new synthesis technique [6], different from that of the previous paper, was utilized which greatly increased the yield and molecular weight (MW) of the polycarbonate. The new technique predissolved triphosgene in dichloromethane before addition to the polymerization solution and eliminated the use of a triethylamine catalyst. It also greatly reduced the reaction time from around 24 hours to less than 5 hours [1]. The new synthesis was used to reexamine some of the mechanical properties of the alumina polycarbonate nanocomposite system and to produce CNF PC samples to see if the increase in MW impacts the mechanical properties of the cast films. The films cast from CNF samples were tested for storage modulus and glass transition temperature.

Methodology Pure polycarbonate as well as alumina nanowhisker and carbon nanofiber nanocomposites were produced using a condensation polymerization. The nanocomposites were made using an in-situ polymerization. The reaction takes place in a ratio of three moles of bisphenol A to one mole of triphosgene, with the triphosgene having a ten percent excess. The in-situ polymerization takes advantage of hydroxyl groups bound to the surface of the fibers. Those bound hydroxyl groups allow the polycarbonate to grow chains off of the surface of the fibers, functionalizing them, as well as grow in the bulk. In the reaction, one mole of bisphenol A goes to produce one mole of a PC repeat unit. Using this ratio, the yield of the reaction was obtained using the following formula:

Yield = [ (Mass of Product - Mass of Fiber) / (MW of PC Repeat Unit) ] / [ (Mass of BPA Reacted) / (MW of BPA) ] x 100%

This assumes that no fibers were lost in the process. Using this assumption, the estimated fiber loading on a volume percent basis is given by:

Vol% Fiber = [ (Mass of Fiber Reacted) x ρPC ] / [ (Mass of Product) x ρFiber ] x 100%
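A small Python sketch of the two formulas above is shown below; the repeat-unit and BPA molecular weights are textbook values, and the densities and product mass in the example are assumed for illustration only:

    def pc_yield(mass_product_g, mass_fiber_g, mass_bpa_g,
                 mw_repeat=254.3, mw_bpa=228.3):
        """Percent yield: moles of PC repeat units formed per mole of BPA reacted.
        Repeat-unit and BPA molecular weights are textbook values, not from the paper."""
        mol_repeat = (mass_product_g - mass_fiber_g) / mw_repeat
        mol_bpa = mass_bpa_g / mw_bpa
        return 100.0 * mol_repeat / mol_bpa

    def fiber_vol_pct(mass_fiber_g, mass_product_g, rho_fiber=1.9, rho_pc=1.2):
        """Approximate fiber volume percent in the product; the densities (g/cm^3)
        are assumed, illustrative values for CNF and polycarbonate."""
        return 100.0 * (mass_fiber_g * rho_pc) / (mass_product_g * rho_fiber)

    # Hypothetical batch: 5.000 g BPA charged, 0.181 g CNF, 5.2 g product recovered.
    print(pc_yield(5.2, 0.181, 5.000))      # percent yield
    print(fiber_vol_pct(0.181, 5.2))        # estimated vol% CNF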

To prepare a master batch of polycarbonate, 8.03 mmol (2.383 g) of triphosgene was dissolved in 10 mL of dichloromethane using a sonicator (Ultrasonic Cleaner, Model FS60, Fisher Scientific, Pittsburgh, PA) for 30 minutes. Then 21.9 mmol (5.000 g) of bisphenol A and 0.181 g of CNF (for the pure PC master batch the CNF was not added; for alumina PC, 0.361 g of alumina was added) were dissolved in 60 mL of pyridine in a sonicator for 30 minutes. The pyridine solution was placed in an ice bath and stirred for 5 minutes until the solution became cold. The dichloromethane solution was added dropwise to the pyridine solution, and once all of it was added, the mixture was stirred for 30 minutes in the ice bath. After 30 minutes, the mixture was allowed to react at room temperature for 4 hours under constant mixing. Once the reaction was complete, the polycarbonate was recovered by precipitation with methanol and washed several times with methanol. The polycarbonate was allowed to air dry overnight in a fume hood, and then dried in a vacuum oven at 100°C for 24 hours.
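The charge quantities in this recipe are consistent with the stated 3:1 BPA:triphosgene ratio and 10% triphosgene excess; a quick check (molecular weights are textbook values, not taken from the paper):

    MW_BPA = 228.29          # g/mol (textbook value)
    MW_TRIPHOSGENE = 296.75  # g/mol (textbook value)

    mol_bpa = 5.000 / MW_BPA                 # ~21.9 mmol, as in the recipe
    mol_triphosgene = (mol_bpa / 3) * 1.10   # 3:1 BPA:triphosgene with 10% excess
    mass_triphosgene = mol_triphosgene * MW_TRIPHOSGENE

    print(f"{mol_bpa*1e3:.1f} mmol BPA -> {mol_triphosgene*1e3:.2f} mmol "
          f"({mass_triphosgene:.3f} g) triphosgene")   # ~8.03 mmol, ~2.38 g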

To prepare a film of the polycarbonate, the polycarbonate master batch was diluted to a desired concentration of CNF or alumina using the purchased polycarbonate. Then, 1.7 grams of the polycarbonate mixture was dissolved in 17 mL of dichloromethane using a sonicator for 30 minutes. The solution was stirred for 12 hours in a fume hood, and then was transferred to a clean glass dish for casting in a fume hood for 12 hours. The film was dried in a vacuum oven for 24 hours at 160°C.

The unbound polycarbonate was separated from the fibers using Soxhlet extraction [7]. The polycarbonate was extracted for 48 hours using tetrahydrofuran (THF). The molecular weight (MW) of the polycarbonate produced was determined by gel permeation chromatography using a SCL-10Avp Shimadzu high-performance liquid chromatograph (Columbia, MD). The eluent was HPLC-grade dichloromethane and the column operated at a flow rate of 1 mL/min. Calibration used polystyrene standards in the range of 980 to 500,000 g/mol. The temperature-dependent dynamic mechanical properties of the PC films were measured using a TA Instruments Q800 (New Castle, DE) series dynamic mechanical analyzer (DMA) in tensile mode at an oscillation frequency of 1.0 Hz. DMA data were collected from room temperature to 300°C at a heating rate of 2°C/min. The DMA gave the storage moduli and the glass transition temperature (Tg) for the samples produced. These results were compared to previous data for PC nanocomposite samples.

Results Obtained Polycarbonate produced using the in-situ polymerization exhibited a higher molecular weight with a high yield as compared to PC produced previously. Functionalization of the fibers by the PC was shown by increases in storage modulus. The films cast using the nanocomposites showed good mechanical properties as compared to pure PC. A storage modulus of 2942 MPa was achieved at a loading of 0.31 vol% of carbon nanofibers, as compared to 1930 MPa for pure PC. Lower loadings of CNF also showed an increase in glass transition temperature as compared to pure PC, increasing from 154.5°C for pure PC to 162°C at loadings of 0.16 and 0.31 vol%.

The dynamic mechanical testing shows a general trend of increasing storage modulus over that of the pure polycarbonate reported in literature. However, the storage modulus decreases at higher loadings of CNF as the fibers start to take up an increased volume in the nanocomposite. The lower loadings also have the advantage of increasing the glass transition temperature as compared to pure polycarbonate.

282 Future work will focus on obtaining more data on the tensile properties for CNF PC films as well as investigating the dispersion tendencies of the carbon nanofibers in the polymer matrix. The approach for testing the mechanical properties of the CNF PC will be reproduced for films of alumina PC to see the effects of the MW of the master batch polymer on the tensile properties of the films cast.

Acknowledgments I would like to thank my advisor Dr. Maria Coleman for her support and guidance on the project. Also, I would like to thank Nima Hakim-elahi for teaching me everything that I needed to know. I would also like to thank The University of Toledo Honors College and Department of Chemical Engineering as well as the NASA Ohio Space Grant Consortium.

Figures and Tables

[Figure 1 reaction scheme: bisphenol A reacts with 1/3 equivalent of triphosgene to form the polycarbonate repeat unit (n repeats), grown in the presence of the fiber.]

Figure 1. Schematic of in-situ polymerization with fiber

Table 1. Average molecular weights and yields for PCs

Polycarbonate    Pure     CNF      Alumina   Purchased
Average Yield    88.9%    96.1%    94.3%     -
Average MW       51,920   72,700   NA*       64,000

Figure 2. Storage moduli of various percentages of CNF PC loadings and pure PC

283

Figure 3. Tan delta plot of CNF PC and pure PC composites giving the Tg of the polymers

Table 2. Storage moduli at 60°C and glass transitions for CNF PC samples

CNF Loading                     0.0%        0.16%         0.31%         0.62%         0.94%         1.25%
Storage Modulus @ 60°C (MPa)    1930 [5]    2805 ± 297    2942 ± 261    2760 ± 183    2550 ± 110    2408 ± 61
Tg (°C)                         154.5 [1]   162.0 ± 2.0   161.7 ± 2.5   158.7 ± 2.3   154.0 ± 3.5   155.3 ± 1.2

References 1. Hakim-elahi, H. R., Hu, L., Rupp, B. B., Coleman, M. R. Polymer 2010; 51: 2494-2502. 2. Gao, Y., He, P., Lian, J., Wang, L., Qian, D., Zhao, J., Wang, W., Schulz, M., Zhou, X., and Shi, D. Journal of Macromolecular Science: Physics 2006; 45: 671-679. 3. Iijima, S. Nature 1991; 354: 56-58. Carbon nanotube discovery. 4. Lau, K. T., and Hui, D. Carbon 2002; 40: 1597-1617. 5. Zhou, Y., Pervin, F., Jeelani, S., and Mallick, P. K. Journal of Materials Processing Technology 2008; 198: 445-453. 6. Sun, S. J., Liao, Y. C., and Chang, T. C. Journal of Polymer Science 2000; 38: 1852-1860. 7. Li, X. and Coleman, M. R. Carbon 2008; 46: 1115-1125. Soxhlet extraction.

284 Elemental Light Spectroscopy

Student Researcher: Allison N. Russell

Advisor: Dr. Robert Chasnov

Cedarville University Department of Science and Math, and Education

Abstract This project teaches students how to read and measure visible light spectra of elemental gases. In this lab the students will observe various light sources, including tubes filled with different gases. As electricity passes through these tubes, the gas glows and light is given off. Students will compare the spectra of these gas tubes with incandescent (regular light bulb) sources and fluorescent light fixtures. Specifically, they will be asked to identify the gas used to fill fluorescent light tubes.

Lesson The basis for this activity came from the “Supernova Chemistry” lesson from NASA’s educational materials online.

The students will divide into pairs and rotate with their partner through 10 stations, using the spectrometer placed at each station to take their measurements. The 10 stations will include an incandescent light bulb, a hydrogen gas tube, a helium gas tube, a neon gas tube, a mercury gas tube, a nitrogen gas tube, a "Plant Grow" light bulb, a compact fluorescent light fixture, chemical light sticks, and a fluorescent light source. At each station the students will use the spectrometer to measure the spectrum of each light or gas tube and record it on their paper. After visiting each station, the students will be given a paper with the composition of various stars and the Sun. They will then need to give a possible spectrum for each based on the spectra they recorded from the stations.

Objectives • Students will observe visible spectra of known elements and identify an unknown element or combination of elements by their visible spectra.

Alignment Grade Eleven Physical Science Benchmark A: Make appropriate choices when designing and participating in scientific investigations by using cognitive and manipulative skills when collecting data and formulating conclusions from the data.

Grade Nine Physical Science: Scientific Inquiry indicator #5.

Underlying Theory Atomic spectroscopy is an extremely important tool for scientists. Because the electron patterns around every kind of atom are unique, and because these electrons interact with light in different ways owing to their different positions, you can determine what kinds of atoms are present in a substance by the kind of light absorbed or emitted by the substance. Every atom has a kind of "fingerprint" in the normal light spectrum that is measured with a device called a spectrometer. This instrument uses a diffraction grating as a prism, splitting the incoming light into its component colors.
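For teachers who want to show the arithmetic behind the spectrometer, the grating relation d·sin(θ) = m·λ can be demonstrated with a short calculation; the grating ruling and angle below are assumed example values, not part of this lesson:

    import math

    # Illustrative grating calculation (values assumed): the spectrometer's grating
    # disperses light according to d * sin(theta) = m * lambda.
    lines_per_mm = 600                      # assumed grating ruling
    d_nm = 1e6 / lines_per_mm               # groove spacing in nm (~1667 nm)

    def wavelength_nm(theta_deg, order=1):
        """Wavelength diffracted to angle theta (degrees) in the given order."""
        return d_nm * math.sin(math.radians(theta_deg)) / order

    print(round(wavelength_nm(23.0)))   # ~651 nm, i.e. red light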

Student Engagement The content knowledge at the beginning of this unit was much more passive for the students than the activity. The activity is a very hands-on approach, where the responsibility for success rests solely on the students. The students are very much involved since they are trying to determine unknowns based on their known information.

285 Resources At each station there will be one of the following: an incandescent light bulb, a hydrogen gas tube, a helium gas tube, a neon gas tube, a mercury gas tube, a nitrogen gas tube, a “Plant Grow” light bulb, a compact fluorescent light fixture, chemical light sticks, and a fluorescent light source.

Each student will need colored pencils and a paper containing blank spectrum graphs for each station. They will fill out each graph according to what they observe when they measure each light or tube. They will also need a paper of blank spectrum graphs to fill out for the stars and Sun worksheet.

Results After the students marked their measurements of the spectra of the known elements, they were easily able to determine the gases in the unknowns.

Assessment The students sketch the spectrum of each known and unknown source. They use the graphs of the knowns to determine the gases in the unknowns.

Conclusion This project gave students a concrete example of how light spectroscopy is used by scientists and in everyday life. After graphing the spectra, the students were able to see the wavelengths emitted by different gases and use that knowledge to determine the makeup of unknowns.

286 Biogeography-Based Optimization with Distributed Learning

Student Researcher: Carré D. Scheidegger

Advisor: Dr. Daniel Simon

Cleveland State University Department of Electrical and Computer Engineering

Abstract My research presents hardware and experimental testing of an evolutionary algorithm known as biogeography-based optimization (BBO) and extends it to distributed learning. BBO is an evolutionary algorithm based on the theory of biogeography, which describes how nature geographically distributes organisms. This paper introduces a new BBO algorithm that does not require a centralized computer to optimize, which we call distributed BBO. BBO and distributed BBO have been developed by observing nature, and this has resulted in algorithms that optimize solutions for different situations and problems. I use fourteen common benchmark functions to simulate the results of BBO and distributed BBO, and also apply both algorithms to optimize robot control algorithms. I present not only simulation results, but also experimental results using BBO to optimize the control algorithms of a swarm of mobile robots. The results show that centralized BBO gives better solutions to a problem and would be a better choice compared to any of the new forms of distributed BBO. However, distributed BBO allows the user to find a somewhat less optimal solution while avoiding the need for a centralized, coordinating controller.

Project Objectives One objective of this research was to develop a distributed version of BBO that would be compatible with robot simulations, benchmark simulations, and experimental mobile robots. Using this distributed BBO (DBBO) algorithm, I could accurately compare the performance of centralized BBO against distributed BBO. The performance analyzed is how well the algorithms optimize problem solutions for real-world applications and simulations. The algorithm that results in the lowest costs performs most optimally, and this analysis could help decide which algorithm is better suited for different applications.

Methodology Used First, I ran benchmark simulations of the centralized BBO algorithm. Fourteen common benchmark functions were used, and I was able to see which functions BBO optimized best. Next, I used a MATLAB program designed to imitate an experimental robot in order to simulate how BBO would optimize the problem solution. Finally, the BBO algorithm was applied to four experimental mobile robots in order to observe how the algorithm works in real-world applications. In order to have a common data set to compare, I applied the same steps used for centralized BBO to distributed BBO: using the benchmark functions, the robot simulation function, and the four mobile robots, we again analyzed how DBBO optimized each situation.
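For readers unfamiliar with BBO, the sketch below shows one generation of a generic BBO loop in Python: candidate solutions ("islands") are ranked by cost, fit solutions are assigned high emigration and low immigration rates, features migrate between islands accordingly, and a small mutation rate is applied. This is an illustrative outline only, not the MATLAB implementation used in this work:

    import random

    def bbo_step(population, cost, p_mutate=0.02, lower=-5.0, upper=5.0):
        """One generation of a basic BBO loop (illustrative, not the author's code).
        population: list of candidate solutions (lists of floats); cost: function to minimize."""
        ranked = sorted(population, key=cost)            # best (lowest cost) first
        n = len(ranked)
        # Linear migration rates: good solutions emigrate more and immigrate less.
        emigration = [(n - i) / (n + 1) for i in range(n)]
        immigration = [1 - mu for mu in emigration]
        new_pop = []
        for i, island in enumerate(ranked):
            child = list(island)
            for d in range(len(child)):
                if random.random() < immigration[i]:
                    # Choose a donor island in proportion to its emigration rate.
                    donor = random.choices(ranked, weights=emigration, k=1)[0]
                    child[d] = donor[d]
                if random.random() < p_mutate:
                    child[d] = random.uniform(lower, upper)
            new_pop.append(child)
        # Elitism: keep the best solution from the previous generation.
        new_pop[-1] = list(ranked[0])
        return new_pop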

Results Obtained

                       BBO     DBBO/2   DBBO/4   DBBO/6
Minimum Cost           7.48    7.23     7.30     7.16
Maximum Cost           7.99    8.12     8.07     8.10
Average Cost           7.68    7.78     7.77     7.76
Std Dev of Min. Costs  0.119   0.169    0.147    0.193

Figure 1. Results of the robot computer simulations using MATLAB.

287

Figure 2. Results of the performance of BBO versus DBBO with 2, 4, and 6 peers for the 14 benchmark functions.

Figure 3. Plot of eight generations' minimum cost and average cost values of the distributed BBO/2 program on four mobile robots

Significance and Interpretation of Results The benchmark results, robot simulation results, and mobile robot results showed that DBBO performed worse than BBO overall. BBO optimized the benchmark functions and the robot simulations to lower cost values than DBBO was capable of for 2, 4, or 6 peers. However, we did observe that DBBO with 6 peers performed better than DBBO with 2 peers; increasing the number of peers increases the exchange of data for optimization. Lastly, the mobile robot experiments successfully showed that distributed BBO is capable of optimization: it can successfully optimize a problem solution or task without a centralized computer. Although DBBO performs less optimally than BBO, some experiments or situations may call for a non-centralized system, for which DBBO would be ideal.

Acknowledgements and References 1. Simon, D. Biogeography-Based Optimization. IEEE Transactions on Evolutionary Computation, vol. 12, no. 6, 702–713 (2008) 2. Quammen, D.: The Song of the Dodo: Island Biogeography in an Age of Extinction. Simon & Schuster, New York (1997) 3. Lozovyy, P., Thomas, G., Simon, D.: Biogeography-Based Optimization for Robot Controller Tuning, in: Computational Modeling and Simulation of Intellect: Current State and Future Perspectives (B. Igelnik, editor) IGI Global, in print (2011)

288 Synthesis of FeSb2 Nanorods for Use as Low Temperature Thermoelectric Materials

Student Researcher: Joel E. Schmidt1,2

Advisor: Dr. Douglas S. Dudis1

1Air Force Research Laboratory Materials and Manufacturing Directorate Thermal Sciences and Materials Branch Wright-Patterson Air Force Base, OH 45433

2University of Dayton Department of Chemical Engineering

Abstract Iron antimonide (FeSb2) is promising for low temperature thermoelectric applications since it exhibits the record high thermoelectric power factor at 12 K of any material at any temperature. This material could be used for sensor cooling on military and scientific space missions, which require cryogenic cooling for IR, γ-ray, and x-ray sensors. However, the thermoelectric potential of FeSb2 is limited by its high thermal conductivity of ~500 W/mK, which results in a ZT value of 0.005. Therefore, the purpose of this project is to explore methods to reduce the thermal conductivity of FeSb2 while leaving its other thermoelectric parameters unaffected. Nanostructuring will be used in FeSb2 to induce phonon scattering; however, because of the different mean free path of the electron, the electrical conductivity should not be affected. Solvothermal synthesis has already been used in the literature to produce FeSb2 nanorods, so this method will be reproduced, and SEM and TEM will be used to characterize the diameter and aspect ratio. Additionally, a sodium naphthalenide reduction will be explored to produce FeSb2 nanoparticles which can then be used to seed nanorod growth in solution synthesis using metal salts.

Project Objectives Many scientific and military applications require cryogenic sensor cooling for IR, γ-ray, and x-ray sensors.1-4 The development of efficient, low temperature thermoelectric materials could lead to a revolution in cryogenic cooling for these sensors on satellites which could greatly increase scientific knowledge by replacing bulky, unreliable dewar cooling and magnetic refrigeration systems with a vibration and cryogen free, long lasting, reliable system. The figure of merit for thermoelectric materials is the dimensionless ZT value where:

ZT = σS²T / κ

In this equation σ is the electrical conductivity, S is the Seebeck coefficient, T is the absolute temperature, and κ is the thermal conductivity. ZT corresponds to the Carnot efficiency of the material, and the higher the ZT value the more efficient the energy conversion. As seen in the equation, to maximize the thermoelectric efficiency it is necessary to simultaneously maximize σ and S while minimizing κ. This is difficult, as these parameters are interrelated in most materials, and optimizing thermoelectric efficiency requires tradeoffs among the three parameters.

The material FeSb2 has an enormous Seebeck coefficient of ~-45,000 μV/K at 10 K which, combined with an electrical resistivity of ~0.1 Ω cm, leads to a record high thermoelectric power factor of ~2300 μW K^-2 cm^-1 at 12 K. The Seebeck coefficient as a function of temperature for FeSb2 is shown in Figure 1; it is highly temperature dependent and shows that this material is suited for low temperature applications. The thermoelectric power factor as a function of temperature is shown in Figure 2. This power factor is 65 times larger than that of state of the art materials at any temperature. However, the thermoelectric figure of merit, ZT, is only 0.005 at 12 K because of the high thermal conductivity, which is close to 500 W m^-1 K^-1. Since ZT is proportional to 1/κ, a drastic reduction of thermal conductivity would be necessary to make the material into a practical thermoelectric device. If the thermal

289 conductivity could be reduced to 1 W/mK it would lead to a ZT of 2.5, much larger than would be necessary to make FeSb2 a viable thermoelectric material.5-7 It has been shown in silicon nanowires that it is possible to reduce the thermal conductivity to 100 times below that of the bulk material using nanostructuring.8,9 Therefore, analogous nanostructuring in FeSb2 should lead to a drastic reduction in thermal conductivity and could improve the overall ZT.
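The quoted ZT values can be checked from ZT = (power factor)·T/κ using the approximate numbers above, converted to consistent units; small differences from the quoted 0.005 and 2.5 come only from the rounding of the inputs (Python, illustrative):

    # Back-of-the-envelope check of the ZT values quoted above, using the approximate
    # literature numbers cited in the text, converted to W, K, and cm.
    T = 12.0                          # K
    power_factor = 2300e-6            # W K^-2 cm^-1 (~2300 uW K^-2 cm^-1)
    kappa_bulk = 500 / 100            # 500 W m^-1 K^-1 -> 5 W cm^-1 K^-1
    kappa_target = 1 / 100            # 1 W m^-1 K^-1 -> 0.01 W cm^-1 K^-1

    zt_bulk = power_factor * T / kappa_bulk        # ~0.0055, consistent with the quoted ZT ~ 0.005
    zt_target = power_factor * T / kappa_target    # ~2.8, of the same order as the quoted ZT ~ 2.5
    print(zt_bulk, zt_target)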

The purpose of this project will be to investigate methods to nanostructure FeSb2 to determine structure property relationships for various nanostructures and synthesis methods. It is proposed that nanostructuring may have a stronger impact on κ than on ρ or S, thereby leading to an increase in the overall ZT. This is because nanostructuring of the material will lower the particle dimensions below that of the mean free path of the phonon, scattering the phonons, but not disrupting the electrons. Solvothermal synthesis techniques will be explored in this work as well as seeded nanorod growth to develop structure-property-synthesis technique relationships for FeSb2 so that an optimized thermoelectric material can be developed.

Methodology Used The purpose of this project is to develop synthesis methods to tailor the structure of FeSb2 and subsequently characterize its effect on properties. Solvothermal synthesis offers one approach for nanostructuring FeSb2 but it comes from literature describing the synthesis of FeSb2 nanorods for lithium ion battery anodes.10-12 The literature preparation routes were duplicated to validate results.

Nanorod synthesis was conducted in a Parr pressure reactor with a glass liner. FeCl3•6H2O and SbCl3 were dissolved in anhydrous ethanol and then combined with NaBH4 in the reactor. The synthesis was conducted at 250°C for three days. The products were removed by filtration and then washed with water to remove the byproduct salts and any remaining sodium borohydride. Products were characterized using SEM and XRD.

Nanoparticle synthesis was conducted using a sodium naphthalenide reduction. The solution was made by first combining sodium metal and naphthalene in triglyme and stirring at room temperature until complete dissolution occurred. FeCl3 and SbCl3 were added to the solution and refluxed at 300°C for two hours. The products were removed from solution by filtration and washed with water. Products were characterized using SEM and XRD.

Results Obtained The high pressure solvothermal synthesis produced a black powder with silver flakes. XPS was used to confirm that the flakes were antimony metal. Mechanical separation was used to remove the larger flakes from the powder. The powder was characterized using XRD and the results are shown in Figure 3. Images of the powder were taken using SEM and are shown in Figure 4.

The sodium naphthalenide reduction produced a fine black powder as its product. XRD of the product obtained is shown in Figure 5. SEM images of the product are shown in Figure 6.

Significance and Interpretation of Results The XRD images of the product of the solvothermal synthesis confirm the presence of the desired product FeSb2 but also show significant antimony metal contamination. This contamination is a result of the reduction method used. Since different metal salts are reduced at different rates it is likely that the SbCl3 was quickly reduced by the sodium borohydride forming solid antimony metal. The SEM images of the product indicate that small particles have been formed instead of the desired high aspect ratio nanorods. In order to use this product for a thermoelectric material a method to separate the antimony metal from the FeSb2 will have to be developed. Additionally, higher resolution TEM images for the product need to be obtained to verify if nanorods have been obtained or just nanopowder.

The XRD images of the product of the sodium naphthalenide reduction show FeSb2 as well as a small amount of antimony metal. Additionally, the broad peaks are evidence of a nanopowder since XRD peaks become broad as particle size decreases. The SEM images seem to show a nanopowder which is

290 agglomerated in larger particles. TEM images need to be taken to find the particle size. Once particle size is determined it is proposed to use this powder as seed material for nanorod growth.

Solution based metal salt reduction synthesis techniques show promise for easy synthesis of metal alloy thermoelectric materials. If the reaction and purification parameters can be optimized it is possible that these techniques could produce large amounts of thermoelectric materials. Currently the reduction in thermal conductivity is only theoretical but once nanorod growth is accomplished the thermal conductivity will be measured to find the amount of reduction in thermal conductivity due to nanostructuring.

Figures

Figure 1. Seebeck coefficient as a function of temperature and applied magnetic field. The upper right inset gives the same information over a wider temperature range, indicating the material is useful at low temperatures. The lower right inset shows the magneto-thermopower.5

Figure 2. Thermoelectric power factor as a function of temperature plotted for different applied magnetic fields. In the upper right the lattice thermal conductivity is shown.5

Figure 3. XRD of the product of solvothermal synthesis. ♦ indicates FeSb2 peaks and ▲ indicates antimony metal peaks.

Figure 4. SEM images of the product of solvothermal synthesis.

291

Figure 5. XRD of the product of sodium naphthalenide reduction. ♦ indicates FeSb2 peaks and ▲ indicates antimony metal peaks.

Figure 6. SEM images of the product of sodium naphthalenide reduction.

References 1. Timmerhaus, K. D.; Reed, R. P. Cryogenic engineering: fifty years of progress; Springer: 2007. 2. Horn, S. B. Cryogenic cooler system. U.S. Patent 5,385,010, Jan 31, 1995. 3. Johnson, A. L. Spacecraft borne long life cryogenic refrigeration status and trends. Cryogenics 1983, 23 (7), 339-347. 4. Kaiser, G.; Bohm, U.; Binneberg, A.; Linzen, S.; Seidel, P. Advanced Stirling cryogenic unit for cooling of a highly sensitive HTS/Hall-magnetometer used in a system for nondestructive evaluation. IEEE Transactions on Applied Superconductivity 2001, 11 (Part 1), 852-854. 5. Bentien, A.; Johnsen, S.; Madsen, G. K. H.; Iversen, B. B.; Steglich, F. Colossal Seebeck coefficient in strongly correlated semiconductor FeSb2. Europhysics Letters 2007, 80, 17008. 6. Sun, P.; Oeschler, N.; Johnsen, S.; Iversen, B. B.; Steglich, F. FeSb2: Prototype of huge electron- diffusion thermoelectricity. Physical Review B 2009, 79 (15), 153308. 7. Sun, P.; Oeschler, N.; Johnsen, S.; Iversen, B. B.; Steglich, F. Thermoelectric properties of the narrow-gap semiconductors FeSb2 and RuSb2: A comparative study. IOP Publishing: 2009; p. 012049. 8. Boukai, A. I.; Bunimovich, Y.; Tahir-Kheli, J.; Yu, J. K.; Goddard, W. A.; Heath, J. R. Silicon nanowires as efficient thermoelectric materials. Nature 2008, 451 (7175), 168-171. 9. Hochbaum, A. I.; Chen, R.; Delgado, R. D.; Liang, W.; Garnett, E. C.; Najarian, M.; Majumdar, A.; Yang, P. Enhanced thermoelectric performance of rough silicon nanowires. Nature 2008, 451 (7175), 163-167. 10. Xie, J.; Zhao, X. B.; Cao, G. S.; Zhao, M. J.; Zhong, Y. D.; Deng, L. Z. Electrochemical lithiation and delithiation of FeSb2 anodes for lithium-ion batteries. Materials Letters 2003, 57 (30), 4673-4677. 11. Xie, J.; Zhao, X.; Cao, G.; Zhao, M. Electrochemical Li-storage Properties of Nanosized FeSb2 Prepared by Solvothermal Method. J. Mater. Sci. Technology 2006, 22 (1), 31-34. 12. Xie, J.; Zhao, X. B.; Mi, J. L.; Tu, J.; Qin, H. Y.; Cao, G. S.; Tu, J. P. Low-Temperature Solvothermal Synthesis of FeSb2 Nanorods as Li-Ion Batteries Anode Material. Electrochemical and Solid State Letters 2006, 9 (7), 336.

292 Responsive Polymers

Student Researcher: Ciara C. Seitz

Advisor: Dr. Nolan Holland

Cleveland State University Department of Chemical and Biomedical Engineering

Abstract Elastin-like polypeptides (ELPs) are responsive polymers which consist of repeats of a five amino acid sequence. This sequence is GVGVP (G = glycine, V = valine, P = proline), where the first valine can be replaced by any of the 19 other naturally occurring amino acids and the second can be replaced by any except proline. By changing the valines in the sequence, a polypeptide with a reduced critical solution temperature is made. These ELPs are a common system, designed using recombinant DNA technology in order to control the structure of the material and biosynthesized in bacterial expression systems. This procedure is desirable because, at a lower temperature, characterization can more easily be achieved.

Project Objectives The main objective of the project is to use the polymers in the lab and replace the valines with either phenylalanine or leucine to produce new polypeptides. By switching the valines it is possible for the polymer to exhibit lower transition temperatures. The polymers I am working with will be made to various lengths and then fully characterized. This is possible because these are stimuli-responsive materials, which can be altered by changes to their local environment such as temperature, salt concentration, pH levels, etc. These changes become desirable when developing hydrogel materials for drug delivery and tissue engineering applications. The materials are made to overcome slow response times and the small magnitude of the response of traditional responsive hydrogels. In this case the valines are replaced to allow the polypeptides to exhibit a transition temperature close to room temperature. Once this is achieved further related tests can be done.

Methodology Used The polymers used were made by the doctoral students with whom I work in the lab. The initial polymers were GLGVP, where the first valine was replaced with leucine, and GVGFPGVGFP, where the second valine was replaced with phenylalanine. The GLGVP is to be lengthened to a sequence of 40 or 60 repeats, and the GVGFPGVGFP is to be lengthened to a sequence of 32 or 64 repeats. The longer sequence is more desirable, but 40 and 32 can still be characterized. In order to lengthen the sequences, a series of experimental protocols is followed: a plasmid of the DNA, which is the vector (a basic type of DNA molecule), is used with an insert, which is a piece of DNA that is placed into the vector in order to replicate it. These specific protocols are followed until the desired lengths are achieved.

Results Obtained So far I have been able to obtain the GLGVP sequence at a length of 20 repeats successfully. As for the GVGFPGVGFP sequence, I have been able to obtain a length of 16 repeats; however, more tests need to be done to confirm this. The total process takes about three to four consecutive days; with small errors, each set takes about five days.

References
1. Ali Ghoorchian, James T. Cole, and Nolan B. Holland. "Thermoreversible Micelle Formation Using a Three Armed Star Elastin-like Polypeptide." Macromolecules, Volume 43, No. 9 (2010): 4340-4345.
2. "Essential Biochemistry - DNA Sequencing." Wiley. Web. 07 Apr. 2011.
3. "Molecular Biotechnology: Protein Sequencing." Cleveland Clinic Lerner Research Institute. Web. 07 Apr. 2011.

293 Autonomous Golf Cart Project

Student Researcher: Jessica L. Sellar

Advisor: Robert Setlock

Miami University Department of Mechanical and Manufacturing Engineering

Abstract The purpose of this project has been to employ up-and-coming technologies in an academic setting. Our focus has been on the development of a driverless golf cart which has allowed the incorporation of multiple departments within the Miami University School of Engineering and Applied Science.

Project Objectives With a limited budget, the team has chosen to implement the simplest, cheapest, and most efficient system possible to make a fully autonomous vehicle, using research to expand its knowledge on the subjects of mechanical and electrical systems, as well as computer programming. The team foresees the use of autonomous systems in a variety of sectors including commercial, mass transportation, aerospace, and military but has chosen to specialize in commercial application for this project in the short-term.

Commercial application covers a wide base, from driverless vehicles to autonomous snow plows and lawn mowers. The area the team chose to focus on was based on the driverless car concept, which related closely to the demographic at hand. Along with an inevitable decrease in automobile accidents due to human error, lower usage of fuel and energy, and overall decreased time of transport, the issue is also one of convenience. Our team is seeking to create a prototype solution of a semiautonomous vehicle and continue expanding to eventually implement a fully autonomous system for commercial use.

Preliminary Research At the onset of the research phase, patents and currently employed systems were looked at in-depth. Among these applications is the automated guided vehicle, or AGV [8]. These compact vehicles are commonly used in a manufacturing setting to deliver materials, transport parts, and other simple, brute force tasks. Often, factories employing AGVs have a complete infrastructure with embedded wires or a chemical paint path on the factory floor. These methods allow for bidirectional motion of unidirectional vehicles. Bang-bang control of AGVs is used, with the only human input being to program a start and an end point into the vehicle and have it maneuver its own way on the simplest, least congested, and most efficient path. AGVs are typically used in large volumes and thus the key autonomous aspect of their functionality comes from their ability to sense the positioning of other vehicles and avoid collisions and congestion as much as possible. In the first stages of this project, sensing will be limited to remaining on the desired path and identifying stationary obstacles but we hope to eventually extrapolate its abilities to sense and avoid moving objects. Although embedded wires simplify mechanical design, lower costs, improve performance, and add to functionality, driving by wire introduces issues of unpredictability of other drivers, lack of lateral control, emergency braking issues, and the need for a new infrastructure.
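To make the bang-bang control idea above concrete, the sketch below switches between full-left and full-right steering based only on the sign of the lateral path error. It is written in Python for illustration; the dead band, steering command values, and sign convention are assumptions and do not come from any of the AGV systems cited here.

def bang_bang_steering(lateral_error_ft, dead_band_ft=0.1, full_turn_deg=30.0):
    """Return a steering command (degrees) from the lateral path error.

    Positive error means the vehicle is to the right of the embedded
    wire/paint line, so the controller replies with a full left turn,
    and vice versa.  Inside the dead band the wheels are held straight
    so the vehicle does not chatter on the centerline.  All names and
    values here are illustrative placeholders.
    """
    if lateral_error_ft > dead_band_ft:
        return -full_turn_deg   # too far right -> steer hard left
    if lateral_error_ft < -dead_band_ft:
        return +full_turn_deg   # too far left  -> steer hard right
    return 0.0                  # on the path -> wheels straight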

Other groups of autonomous vehicles are the convoy type and the “home-base” type. Vehicles such as these report to a superior command, whether that be a leading car or a stationary base. Convoy type vehicles could potentially be used in future applications for commercial vehicles to transport from place to place and will likely be seen in the second stage of the vehicle’s journey toward full autonomy. Base station transport lends itself to a multitude of potential problems, as it would be difficult to implement over long distances but would be useful for military or space application.

Design Generation and Implementation The team initially considered using and altering a full-sized commercial automobile for the project but cost and space constraints limited our capacity for this application. For this reason, we chose to implement our designs on a standard 2007 model EZ-GO golf cart.

294 Due to the nature of the project and its inevitable time constraint, the team decided on breaking up tasks to accomplish its goals most efficiently, making four teams split according to discipline. Mechanical systems were divided into steering and braking methods and electrical and programming systems were grouped into communications and controls sub-teams respectively.

Steering The first task undertaken by the mechanical team was the issue of steering the vehicle. A variety of methods were considered for the task of steering. Among these were a simple stepper motor, a linear actuator, and a rack and pinion. It was decided that the stepper motor would be more useful for rotational actuation than for linear motion and would only create the need for a more complex system. The rack and pinion concept also created an unnecessary level of complexity. Although the linear actuator was arguably the most expensive alternative, it perfectly fit the requirements and constraints for the task at hand. Additional components were then selected, including a tie rod, and mounting brackets. The needed capacity of the actuator was measured by a load cell in a number of trials. It was found that the actuator would need to satisfy a desired stroke length of nine inches and would need to undergo an average static force of approximately 113 lb. First, linear actuators were found with at least a 9-inch stroke capability and around 120-lb force capability such as the one by Firgelli Auto. This actuator also had an actuation speed of 1⁄2 inch per second and complimentary mounting brackets. It was then necessary to consider a method of positional feedback. Our research showed that feedback incorporated in the actuator was a far cheaper and more convenient alternative to external feedback. It was also found that the cheapest alternative, an actuator with feedback from Firgelli Auto only came in 8-inch stroke and 12-inch stroke options. Considering the need for 9 inches of actuation length, the 12-inch actuator was researched but was found to have a total compacted length of 17.9 inches and an extended length of 29.9 inches, much longer than the allotted length equal to the size of the front axle (24 inches). Because of this, it was decided that a 1⁄2 inch at either extreme of the actuation cycle could be sacrificed in order to accommodate the Firgelli actuator with feedback. With this, the 8-inch stroke length actuator was chosen with a compact length of 13.9 inches and an extended length of only 21.9 inches. A number of linear actuators with feedback were considered, though the final design incorporated an 8-inch stroke linear actuator from Firgelli Auto with 150-lb force capacity and built-in potentiometric feedback.
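As a rough check of the selection logic above, the following sketch re-walks the actuator trade using only the numbers quoted in this section (the 9-inch required stroke, 113-lb average static force, 24-inch axle envelope, and the two Firgelli options). The pass/fail structure is illustrative and not the team's actual sizing calculation.

# Sketch of the steering-actuator selection logic described above.
# All numbers come from the text; sacrificing 1/2 inch of travel at
# each extreme is the team's stated design decision, restated here
# for illustration only.

REQUIRED_STROKE_IN = 9.0      # measured steering travel
AVG_STATIC_FORCE_LB = 113.0   # load-cell measurement
AXLE_ENVELOPE_IN = 24.0       # available length along the front axle
SACRIFICE_PER_END_IN = 0.5    # travel given up at each extreme

candidates = {
    # name: (stroke, force capacity, compacted length, extended length), inches and lbf
    "Firgelli 8-in w/ feedback":  (8.0, 150.0, 13.9, 21.9),
    "Firgelli 12-in w/ feedback": (12.0, 150.0, 17.9, 29.9),
}

for name, (stroke, force, compact, extended) in candidates.items():
    stroke_ok = stroke >= REQUIRED_STROKE_IN - 2 * SACRIFICE_PER_END_IN
    force_ok = force >= AVG_STATIC_FORCE_LB
    fits_axle = extended <= AXLE_ENVELOPE_IN
    print(f"{name}: stroke ok={stroke_ok}, force ok={force_ok}, fits axle={fits_axle}")

Running this reproduces the narrative: the 12-inch unit fails the axle-length check, while the 8-inch unit passes once the half inch of travel at each end is given up.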

Braking Methods of braking such as hydraulics, pneumatics, and linear actuation were considered. Again a linear actuator was considered the most efficient and easiest method to employ, as well as meeting the cost constraint. It was decided during the design process that hydraulics and pneumatics would both require too many accessories. With only about a 9-inch space allowance, the linear actuator was found to be the ideal method of actuation for 4 inches of necessary actuation at a designated force capacity. There has been no ideal method of attachment found for the actuator so a completely original fixture will be designed and used to fasten the actuator to the preexisting braking panel on the golf cart.

Communications Through research the team found that full autonomy of the golf cart would not be possible to execute in the eight months of the academic school year, as many current autonomous systems take years to develop completely. For our application, we considered using the wire-line lead, the paint line (similar to the chemical paint line used by some AGVs), and WiFi control. Through the design portion of the project, the team decided to implement WiFi-type control for the golf cart. Due to the nature of the project, it was a necessity that the system be adaptable to future systems. Because of these considerations, the team decided to use WiFi for the initial stage to test the efficiency of the cart's system before commencing development of fully autonomous systems. Looking into WiFi systems, the team has considered using Skype from an on-board computer for video feedback and Xbox 360 controllers. WiFi is a relatively cheap and long range option, with the cost of WiFi antennae running anywhere from $35 to $200 with a wide range of sizes and capabilities.

An online source suggests using a GPS and an on-screen display for first-person view driving, which would be the method used through the WiFi control system [8]. Data loggers are also incorporated into these systems, which allow for the integration of sensors, another important feature for our application.

295 The system would operate by hooking the computer's webcam into the data logger's input channel, using a PC-to-transmitter (PCTx) interface and a wireless transmitter on the output. The PCTx interface researched by the team, made by Endurance R/C, is relatively low cost and generates up to nine output channels. Differential GPS systems typically cost upwards of thousands of dollars. The researched alternative was a relatively cheap non-differential GPS created by the Space and Naval Warfare Systems Center [1]. This system incorporates inexpensive sensors and Kalman filters to mimic the responsibilities of a differential GPS while attempting to eliminate the error encountered from using a differential GPS. Kalman filters estimate position through dead reckoning, triangulation, and other methods. Dead reckoning predicts the vehicle's position at a certain point in time in the future based on its current direction and velocity, using sensors that record wheel rotation and steering direction. Localization is when the vehicle uses a combination of dead reckoning and cameras to localize itself in its environment. Dead reckoning allows for absolute positioning while cameras can track and map the area. Using just cameras for positioning feedback, however, leads to the problem of "perceptual aliasing", which is essentially being unable to differentiate between two places based solely on appearances [5].
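A minimal sketch of the dead-reckoning prediction step described above is given below. The bicycle-model form and the 5.5 ft wheelbase are assumptions made for illustration; the cited system would fuse such predictions with GPS and camera measurements in a Kalman filter rather than use them alone.

import math

def dead_reckon(x_ft, y_ft, heading_rad, speed_fps, steer_rad, dt_s,
                wheelbase_ft=5.5):
    """One dead-reckoning update for the cart's estimated pose.

    Wheel-rotation sensors give speed, the steering potentiometer gives
    the steer angle, and the pose is propagated forward by dt seconds
    using a simple bicycle model.  The 5.5 ft wheelbase is an assumed,
    illustrative value; this is only the prediction step, not the full
    Kalman filter described in the references.
    """
    x_ft += speed_fps * math.cos(heading_rad) * dt_s
    y_ft += speed_fps * math.sin(heading_rad) * dt_s
    heading_rad += (speed_fps / wheelbase_ft) * math.tan(steer_rad) * dt_s
    return x_ft, y_ft, heading_rad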

This non-differential GPS system lends itself to usage in smaller vehicles that require a higher degree of accuracy in a smaller space. Errors in traditional differential GPS systems stem from high-frequency noise and long-term drift. The first arises from long-range satellites “coming in and out of view of the receiver”, causing errors of up to hundreds of feet [4]. Secondly, drift arises from atmospheric inconsistencies that occur in the space between the satellite and the receiver. Researching wireless transmitters, the team found integrated transmitting and receiving (Tx and Rx) systems with a variety of ranges [12]. Ultimately, we have decided on a fail-safe method of using a fairly low-range system to avoid the undirected escape of the vehicle. One 315-MHz RF Link transmitter proved to be ideal with its extreme light weight and low cost [18].

The method being used for the communications employs the use of sensors for positioning feedback of the vehicle. The most viable candidate to be used in multiple locations throughout the cart is the Wide Beam Sensor by Laser Technology, Inc. [2]. The Wide Beam is a laser sensor useful for collision avoidance and proximity detection and is extremely accurate up to 164 feet due to its redundancies in circuitry. Eventually radar and ultrasound may be used for full autonomy of the golf cart. The plan for communication among the systems is rough at this point and still in the process of being fully developed.

Controls With both the steering and braking actuation methods being simple linear actuators with feedback, two individual controllers will be used for the simultaneous control of the two systems. Initially, an integrated controller was considered the most viable option, but with future testing in mind, two separate controllers would inevitably be easier to troubleshoot than one integrated unit. Arduino controllers were recommended as both an inexpensive and reliable method of control (Figure 6).

As far as the programming method, multiple languages were considered as possible candidates for use. At the onset, the common languages Java, C, C++, and C# were considered. It was decided that object-oriented programming was the most viable option to employ for our application. Object-oriented programming defines objects as systems and is alleged to be simpler to use, with an easier-to-read interface. It also claims to be more efficient, faster, more robust, and to make it easier to avoid errors in coding. Among the object-oriented languages Java, C++, and C#, the team decided on Java because team members cumulatively had the most previous experience with Java. The Linux operating system will be used for its ease of use, cleanliness of interface, and ability to run simple programs.

The controls team will employ the use of a transistor switch for the voltage differential between the controllers and the actuators (1-2.7V versus 12V). A relay was considered for this application but due to the physical attributes of relays, they are more subject to breakage. A proportional-integral-derivative (PID) controller will also be used as a feedback mechanism, making the braking and steering motions smoother.
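A minimal positional PID sketch for an actuator with potentiometric feedback is shown below. The gains, the 12 V output clamp, and the use of Python are illustrative assumptions; the team's controller is planned in Java on Arduino hardware and would be tuned on the cart itself.

class PID:
    """Minimal positional PID loop for an actuator with potentiometer feedback.

    Gains and the output clamp are placeholders chosen for illustration,
    not the team's tuned values.
    """
    def __init__(self, kp=2.0, ki=0.1, kd=0.05, out_limit=12.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint_in, measured_in, dt_s):
        # error between commanded and measured actuator position (inches)
        error = setpoint_in - measured_in
        self.integral += error * dt_s
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt_s
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        # clamp to the 12 V actuator drive range
        return max(-self.out_limit, min(self.out_limit, out))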

296 Future Work Initially, the team would like to have the golf cart moving by computer control by the end of the academic year. In the future, WiFi components will be added for remote manned control. Additional components to be added in the future include an integrated controller, sensors, possibly gyroscopes and accelerometers for sensing, a camera and laptop, and radar for the fully autonomous system.

References
1. Bruch, Michael H., G. A. Gilbreath, J. W. Muelhauser, and J. Q. Lum. "Accurate waypoint navigation using non-differential GPS." Print.
2. "Dead Reckoning." Wikipedia. Mar 2010. Web. Jan 2011.
3. "GPS, On-Screen Display, and Data Logging." FPVpilot.com. Web. Mar 2011.
4. Kelly, Alonzo. "A 3D State Space Formulation of a Navigation Kalman Filter for Autonomous Vehicles." (1994): Print.
5. Krishnarmurthy, Nirup N., Rajan Batta, and Mark H. Karwan. "Developing Conflict-Free Routes for Automated Guided Vehicles." Operations Research 41.6 (1993): 1077-90.
6. "Mars Exploration Rover Mission." NASA Jet Propulsion Laboratory, California Institute of Technology. USA.gov, 01 Apr 2011. Web. 2 Apr 2011.
7. Morris, J. "It Was All a Dream." Four Fold Business Solutions, 20 Nov 2010. Web. Feb 2011.
8. Parent, Michael, and Ming Yang. "Road Map Towards Full Driving Automation."
9. "Programming Languages." White Fang. 19 Mar 2010. Web. Mar 2011. http://cm.bell-labs.com/cm/cs/who/dmr/chist.html
10. "The Java Programming Language." CSUSB, 08 Sep 2007. Web. Mar 2011.

297 Augmentation of the DME Signal Format for Possible APNT Applications

Student Researcher: Daniel K. Shapiro

Advisor: Dr. Michael DiBenedetto

Ohio University Department of Mechanical and Electrical Engineering

Abstract This project describes a proof-of-concept reception test of the phase-modulated interrogation pulse pairs of the DME signal format on DME channel 17X. Two types of signals were tested at various pulse-pair-per-second rates. The first is a signal compliant with current FAA DME specifications and is referred to as the normal signal. The second is a signal compliant with FAA specifications except that a phase modulation scheme is applied to the second Gaussian pulse of the DME signal format.

Project Objectives A need has been identified for an alternative to the Global Positioning System (GPS) for position, navigation, and timing functionality. A current proposal is to embed the data and timing capability into the Distance Measuring Equipment (DME) uplink/downlink signal format. Any augmentations made to the current signal scheme must be transparent to current end users. It has been proposed that schemes using phase modulation of the carrier signal, and otherwise consistent with the current DME signal format, should be transparent to current end users.

This project looks into developing a method for assessing the compatibility of phase modulated signals with the normal signal. To perform such assessments, a repeatable method of test signal generation and measurement of the compatibility must be developed. Once the experiment methodology is proven repeatable, the compatibility of proposed signal formats can be examined and assessed.

Methodology In order to make assessments of the compatibility of the proposed phase modulation scheme with the normal signal, an accepted and repeatable method of testing the transparency of proposed schemes is needed. A scheme is considered transparent when the service provided to currently equipped users is unaffected by the use of such a scheme. For transparency, it is essential that the reception and signal processing of interrogations/reply pulse pairs received provides the same performance as when the normal signal format is used.

The DME test signals for this study were generated using the Ohio University Avionics Engineering Center’s Transponder Traffic Load Emulator (TTLE). The TTLE can be configured to provide either DME interrogation or reply pulse pairs at a user specified rate/rate profile. A Thales model 415SE DME, a commercially available unit currently fielded in the National Airspace System, was utilized to ensure that standard reception and signal processing techniques were applied.

To observe whether the experimental setup was capable of generating the desired phase-modulated signal, a digital oscilloscope was used to view the output of the TTLE. Using a carrier frequency below the DME band, so that the number of oscillations in the Gaussian pulse envelope could be observed, the time from a carrier signal peak of the first pulse to a carrier signal peak of the second pulse of a DME pulse pair was measured. For a normal DME signal, this time should be an integer number of periods at the carrier frequency, whereas for the phase modulated signal it should be an integer plus some fraction depending on the amount of phase modulation applied. If a 180° phase modulation was applied, the time for this modified signal should be an integer number of periods plus one half period. Such a modulated signal was used to validate the TTLE setup. With confidence that the equipment was capable of generating the desired output signal, the next step in the experiment could begin.
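The timing check described above can be expressed numerically as in the sketch below. The 12-microsecond pulse-pair spacing and the 1-MHz test carrier are illustrative placeholder values, not the exact settings used with the TTLE.

def peak_to_peak_time_us(pulse_spacing_us, carrier_freq_mhz, phase_shift_deg=0.0):
    """Expected time from a carrier peak in pulse 1 to the nearest peak in pulse 2.

    For the normal signal the result is an integer number of carrier
    periods; with 180 degrees of phase modulation on the second pulse it
    is an integer number of periods plus one half period.  The spacing
    and carrier frequency passed in below are assumed example values.
    """
    period_us = 1.0 / carrier_freq_mhz          # carrier period in microseconds
    extra = (phase_shift_deg / 360.0) * period_us
    n_periods = round(pulse_spacing_us / period_us)
    return n_periods * period_us + extra

print(peak_to_peak_time_us(12.0, 1.0))         # normal signal: 12.0 us (integer periods)
print(peak_to_peak_time_us(12.0, 1.0, 180.0))  # modulated: 12.5 us (integer + half period)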

298 Each time a valid pulse pair is received by the 415SE, the dead-time-gate transitions from a high to a low state for 60 microseconds. This transition can be used to count the number of valid pulse pairs; this count includes pulse pairs from the internal monitor, the TTLE, and any interrogators operating on the DME channel. A computer program counts the desired dead-time-gate transitions applied to the input of a National Instruments I/O card, which is connected to the 415SE's dead-time-gate test point. The activity generated by the TTLE was monitored by counting pulses on the dead-time-gate of the 415SE.
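The counting step can be sketched as follows. Thresholding a sampled voltage trace is only a stand-in for the actual program, which used a National Instruments I/O card connected to the dead-time-gate test point, and the 2.5 V threshold is an assumed value.

def count_valid_pulse_pairs(samples, threshold_v=2.5):
    """Count high-to-low transitions of the dead-time-gate signal.

    `samples` is a sequence of sampled gate voltages; each falling edge
    through the threshold marks one valid pulse pair received by the
    415SE (monitor, TTLE, or over-the-air interrogations alike).  The
    threshold value and the idea of thresholding raw voltages are
    illustrative assumptions.
    """
    count = 0
    prev_high = samples[0] > threshold_v if samples else False
    for v in samples[1:]:
        high = v > threshold_v
        if prev_high and not high:   # falling edge: gate went low
            count += 1
        prev_high = high
    return count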

To avoid the need to apply for and receive a Frequency Transmission Authorization, the 415SE DME was operated in a reception only mode. Because the DME would not be broadcasting interrogation replies it was not connected to a standard DME antenna, but instead an antenna simulator, which matches the impedance of a regular antenna and allows the system monitor to function normally. In this experiment the tests were operated at a fixed distance and at the minimum signal strength necessary for the normal DME signal to be received by the DME while connected to the antenna simulator.

During the experiment the TTLE was set to generate a fixed average number of pulse pairs per second and left to run for a period of one hour for each trial. Using identical signal strength, baseline data was collected using the normal DME signal and compared to data collected using the phase modulated signal. The experiment was repeated at average rates of 1000, 2000, and 3000 pulse pairs per second to simulate a variety of different system loading conditions.

Results Obtained The first trials showed that the Thales 415SE had a monitor baseline of approximately 55 pulse pairs per second with no external input to the system. With this in mind, it was expected that subsequent tests would have pulse counts equal to the output rate plus the monitor rate.

Comparison of the phase modulation scheme tested and the normal DME signal shows much promise for the user-transparency of the scheme. Table 1, in the Figures and Tables section, shows that there was less than 0.6% difference between the number of received pulse pairs on any of the baselines and the corresponding phase modulated tests. Note that the difference between the two 1,000 pulse-pairs-per-second baseline scenarios was 0.01%.

Acknowledgements The author of this paper would like to thank the Ohio University Avionics Engineering Center for providing use of the equipment to conduct this research. Specifically, the author would like to thank Mr. Carl Hawes and Mr. Frank Alder for their cooperation and assistance on the project's setup and operation. The Thales 415SE DME used to support this testing is Government Furnished Equipment, special thanks is offered to Mr. Greg Rugila of the Federal Aviation Administration.

Figures and Tables

Figure 1. Transponder Traffic Load Emulator (TTLE) Block Diagram

299

Figure 2. Monitor Block Diagram

Table 1. Summary of Experimental Results

References 1. R. Kelly, and D. Cusick. "Distance Measuring Equipment and Its Evolving Role in Aviation," Advances in Electronics and Electron Physics, Vol. 68, 1968 Academic Press, New York. 2. C. Cohenour. "Transponder Traffic Load Emulator for the Thales Model 415 SE Low Power DME," Ohio University, Avionics Engineering Center, Technical Memorandum OU/AEC 03- 08TM00071/1.2-2, April 2003 3. M. DiBenedetto, et al. "Initial Comments on Proposed Distance Measuring Equipment Timing and Data Transmission Schemes," Ohio University, Avionics Engineering Center, Technical Memorandum OU/AEC 10-01TM00071/3.3-1, February 2010.

300 SAE Aero Competition Improvements

Student Researcher: Christopher J. Slattery

Advisor: Dr. Jed Marquart

Ohio Northern University Department of Mechanical Engineering

Abstract The Society of Automotive Engineers holds a series of collegiate competitions designed to give students a chance to apply their newfound engineering knowledge to “real world” challenges. One such competition that I have grown particularly fond of is the SAE Aero competition. The object of the competition is to lift as much weight as possible under the constraints designated in the rules. Some of the constraints include: set take-off and landing distances, length, width and height limits, material restrictions, and the use of a standard unmodified engine.

Our team’s (Ohio Northern Black Swan) major design goals for this year’s aircraft are to reduce the weight to 10lbs, simplify the construction process, and increase the accessibility. Material testing will be performed experimentally and computationally in an attempt to lower the aircraft weight. The fuselage accessibility issue will be addressed by placing all serviceable components in centralized locations, and by creating large access panels. Lastly, the construction process will be streamlined by implementing DFM and DFA engineering principles, and by fully utilizing what modern technology has to offer.

Project Objectives The goal of the SAE Aero Design Competition is to design, manufacture, and successfully fly a remote controlled aircraft capable of carrying a large payload while adhering to the SAE Aero Design competition requirements as summarized below [1]:
• Takeoff within 200 feet in less than 3 minutes
• Land within 400 feet without bouncing off the runway
• All aircraft components must remain attached from takeoff to landing
• Total length, width, and height less than or equal to 225 inches
• Fly one 360 degree circuit of the field
• Weigh no more than 55 pounds with payload and fuel
• Powered by a single, unmodified O.S. 61FX with E-4010 Muffler
• Payload must be fully enclosed in the fuselage but has no dimensional restrictions

This is the second year this team has competed in SAE Aero. In April 2010, team Mac Attack competed at SAE Aero East in Fort Worth, TX. Mac Attack proved to be a very competitive aircraft, placing 15th overall. While this was a leap over past designs, there was ample room for improvement. Areas for improvement were broken into 3 categories [2]:
1. Performance. The team's main goal in this "heavy lifting" competition was to improve performance and lift more weight. Ways to improve performance include: increase lift, decrease system weight, and lower drag. Through research and optimization, a new fuselage design improved the performance of this aircraft by cutting weight and lowering drag.
2. Serviceability. The biggest struggle from last year's design was the difficulty in working on the aircraft if repairs were necessary. It is inevitable that problems will arise, so making the aircraft easy to access was of utmost importance. This feature would also appeal to an industry customer.
3. Manufacturability. When marketing a product to industry, manufacturing cost is one of the driving factors in a product's success. An effective way to keep costs down is to implement DFM (design for manufacturing) and DFA (design for assembly). The team furthered its development of self-aligning parts through the

301 utilization of CNC laser cutting and custom build jigs in an attempt to shorten/simplify the manufacturing process.

Improvements Performance As mentioned, areas of potential performance gain include: increased lift, decreased system weight, and lower overall drag. In past years, the fuselage size was limited by payload compartment size requirements. The absence of these requirements in this year’s competition has driven the team to streamline the fuselage and improve the drag profile. Since one goal is to reduce overall drag, the fuselage frontal area was cut by 42.5 percent over last year’s design (as seen in Figure 1).

Figure 1. (a) Fuselage Drag Profile, (b) Fuselage Profile

The fuselage was modeled to resemble a NACA 0012 symmetric airfoil, giving it the potential to have minimal drag and produce lift at positive angles of attack. Since competition rules state that the payload is required to be fully enclosed in the fuselage, the fuselage size is ultimately dependent on the size of the payload bay. This year's payload was designed to be 3.5”X 4.5”X 7.5” and was placed at the center of gravity to eliminate CG fluctuations when weight is added. Space in the fuselage for mounting the throttle, nose gear, servos, radio receiver, battery, and fuel tank was allotted.

Since the fuselage design chosen was a semi-monocoque design, material selection was an integral part of the design process. The chosen material must be as light as possible while maintaining the necessary structural rigidity. The team analyzed four different material options for use in the fuselage: Balsa, Laminated Balsa (2, 3, & 4 ply), Lite-Ply, and Aircraft Ply. Balsa ply is handmade by alternating the grain direction of 1/16” balsa sheets and adhering them together using epoxy. After testing, it was found that 0.125 inch Lite Plywood and the 2- and 3-ply balsa were the best candidates. The 2-ply balsa is 26% lighter than Lite-Ply with only an 8% reduction in bending stiffness and a 5% reduction in tensile strength. Due to warpage and material inconsistencies, Lite Plywood was not used for anything that required perfect alignment. While the balsa ply is harder and more expensive to make, it was crafted from the straightest and stiffest pieces of balsa. The grain directions were also altered to add strength where needed.
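A quick figure-of-merit check of the balsa-ply versus Lite-Ply trade quoted above can be sketched as follows, with Lite-Ply normalized to 1.0. The relative values come directly from the test results stated in the text; the absolute properties are not needed for the comparison.

# Figure-of-merit check for the balsa-ply vs. Lite-Ply trade described above.
materials = {
    #              weight, bending stiffness, tensile strength (relative to Lite-Ply)
    "Lite-Ply":    (1.00, 1.00, 1.00),
    "2-ply balsa": (0.74, 0.92, 0.95),   # 26% lighter, -8% stiffness, -5% strength
}

for name, (w, stiff, strength) in materials.items():
    print(f"{name}: stiffness/weight = {stiff / w:.2f}, strength/weight = {strength / w:.2f}")

# The 2-ply balsa comes out roughly 24% better in stiffness-to-weight and
# 28% better in strength-to-weight, consistent with the team's choice of
# balsa ply where weight mattered most.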

All bulkheads were made of the balsa ply because of its strength to weight ratio, flatness, and its freedom to alter stiffness properties. The materials selected for the fuselage provide the necessary strength and stability for the loads the fuselage will encounter in flight, and keeps the aircraft weight as low as possible.

Serviceability No matter how well engineered a plane is, there is always the potential for something to go wrong. The biggest complaint about last year's competition plane, and other commercially available R/C aircraft was the lack of accessibility to internal components. It was the team's goal to centrally locate all major components in areas that can be easily accessed. The result was a fuselage with three large, easily removable access panels (figure 2). The largest is located on the belly of the aircraft and allows access to the payload compartment, nose gear, all servos, and radio equipment. The second is a smaller panel located on the top of the aircraft and provides access to the fuel tank (Competition Requirement). The third is an engine cowling that gives unobstructed access to the entire engine.

302

Figure 2. Fuselage Access Panels

Manufacturability In the past the team found out the hard way how labor intensive building an R/C aircraft could be. While SolidWorks has always been utilized to design the entire aircraft and all of its components, it does not necessarily guarantee that the final product will be easy to build. Having some forethought and design intent will go a long way in simplifying the manufacturing process.

A recent advancement in team Black Swan's manufacturing methods was the utilization of a CNC laser engraver. This machine can rapidly and very precisely cut complex parts out of an assortment of materials. The laser not only saved time in cutting out the components, but also allowed the designers to create complex self-aligning parts that would be impossible to cut by hand. Such parts fit together using tongue-and-groove notching, making construction a quick and precise process.

Since it was not always feasible to have every part align itself, alignment jigs were also cut using the laser. These jigs were especially useful on the rear, free floating fuselage bulkheads. Figure 3 below depicts that specific scenario where jigs were used and how beneficial they proved to be.

Figure 3. Construction Jigs

With a rapidly improving manufacturing plan in place, the team decided to take it to the next level this year and attempt to implement some common DFM and DFA principles. While no radical changes to the manufacturing plan were made, a number of small implementations drastically improved the process this year.

DFM In an attempt to both lighten and simplify the aircraft, components were designed to be multifunctional. For instance, the supporting bulkhead for the nose landing gear doubles as the mounting bulkhead for the engine. Another example would be that the mounting bulkhead for the rear landing gear also serves as the mounting location for the wing and payload.

When designing the aircraft, an attempt was made to standardize the hardware used. As a result, there are only three fastener sizes, all with hex heads to minimize potential wear. Since this is a model aircraft made of wood, it is not practical to expect tight tolerances between parts. Subsequently, many of the alignment holes and notches were slightly oversized to eliminate the potential for tolerancing mishaps.

303 DFA Through proper design intent, the assembly process was very simple this year. The use of subassemblies made the fuselage very simple to put together (figure 4). The fuselage was broken down into three main subassemblies which were joined together through the help of alignment jigs. Each subassembly consisted of interlocking, self-aligning parts. Similar parts were either labeled "left" or "right", or had asymmetric features allowing them to only fit in their designated locations.

Figure 4. Notice the colors differentiating between the various subassemblies

Once the initial joining of the fuselage subassemblies was complete, only one fuselage setup position was required to complete nearly all remaining construction. Special care was also taken to provide unobstructed access for all fasteners and removable components within the fuselage.

Overall, the implementation of these basic DFM and DFA principles was well worth the added design time because it made manufacturing and assembly much easier than past builds. To date, it is estimated that the team has saved 100+ man-hours over the previous year's build. A current picture can be seen in Figure 5.

Figure 5. Aircraft Construction Complete

Acknowledgements The author of this paper would like to thank his SAE Aero team (Ohio Northern University Black Swans) for all of their hard work and dedication to this project. This project would not have been a success without them!

References 1. SAE International , “2011 Collegiate Design Series – Aero Design East and West Rules,” Warrendale, PA. 2. A. Murray, C. Slattery, et. al, “SAE Aero Design East 2011 – Black Swan Report,” Ohio Northern University, 2011.

304 Removal of a Bittering Agent Potentially Released to Water Supplies: Implications for Drinking Water Treatment

Student Researcher: Bartina C. Smith

Advisor: Kenya Crosson, Ph.D.

University of Dayton Department of Civil and Environmental Engineering

Abstract Bittering agents are non-toxic compounds added to toxic consumer products to discourage large-scale ingestion by humans or animals. The “Antifreeze Bittering Act of 2009” (H.R. 615) was introduced to the U.S. House of Representatives on January 21, 2009, and it mandates the addition of 30-50 mg/L denatonium benzoate, a bittering agent, to antifreeze and engine coolant. At 1-10 mg/L, denatonium benzoate’s bitter taste can be detected, and water with 30-100 mg/L denatonium benzoate (DB) is unpalatable. Although denatonium benzoate’s environmental fate in soil and water systems has been modeled, it has not been empirically studied, and concern exists that the unintentional or intentional release of DB-spiked antifreeze or engine coolant could adversely impact drinking water supplies by rendering water unpalatable. This project addresses concerns related to the potential release of DB to water supplies by determining if powdered activated carbon (PAC) treatment, a common method employed to remove taste and odor contaminants from water, is suitable for DB removal. If H.R. 615 is passed and significant releases of antifreeze and engine coolant to water supplies occur, the affected water could be unpalatable. Drinking water treatment facilities relying on DB-contaminated water supplies may need to depend on existing treatment processes or invest in new treatment options to provide consumers with suitable drinking water. Results herein indicated that PAC removed low concentrations of DB best at the 24-hour contact time and higher PAC doses. At a higher DB concentration, less DB removal by PAC was achieved. A bituminous-based carbon performed slightly better than a lignite-based carbon under all conditions. Future research will investigate additional activated carbons and natural water spiked with denatonium benzoate to assess the impact of natural organic matter on adsorption.

Introduction The U.S. Congress has pending legislation mandating the addition of denatonium benzoate at a dosage of 30 ppm to antifreeze and engine coolants (1). Denatonium benzoate (DB) is a bittering agent added to antifreeze to discourage ingestion of antifreeze by humans and animals. DB is known to travel with groundwater (1). In the drinking water treatment process, powdered activated carbon (PAC) or granular activated carbon (GAC) treatment is used for the removal of tastes, odors, and organic contaminants (2).

Objectives/Hypothesis The objective of this research is to investigate the effectiveness of powdered activated carbon treatment in the removal of denatonium benzoate from water. The contact time, influence of carbon type, and optimal carbon dose will also be determined. We have developed the following hypotheses:
• powdered activated carbon is effective in the adsorption of denatonium benzoate, achieving at least 50% removal;
• adsorption will increase with an increase in powdered activated carbon dose;
• the expected contact time for adsorption is 24 hours at 25 degrees centigrade.

Literature Review/Background Every year an estimated 10,000 dogs, cats, and children fall victim to accidental poisoning by antifreeze. In 2003, an estimated 1,400 children were hospitalized due to antifreeze poisoning, according to the Consumer Product Safety Commission (1). Three to four teaspoons of antifreeze can be

305 deadly if consumed by a child. A few licks of the sweet-tasting substance can be deadly for a dog or cat. Alcoholics have consumed the substance as a replacement for alcohol.

Denatonium benzoate is a harmless, bitter-tasting substance that in small quantities discourages ingestion (1). Denatonium benzoate is a preferred bittering agent because it is inexpensive. One teaspoon of denatonium benzoate is needed for every 50 gallons of antifreeze, which amounts to about 3 to 4 cents per gallon (1).

Manufacturers claim that denatonium benzoate is biodegradable and is not known to bioaccumulate (2). Researchers have found that the denatonium ion does not biodegrade during the typical wastewater treatment process, and that the denatonium ion is responsible for the bitter taste of the compound (2).

A drinking water treatment plant's main objective is to produce high quality water that is safe for human consumption and conforms to state and federal standards. One method to achieve this goal is activated carbon treatment. Activated carbon is commonly used for the control of naturally occurring and synthetic chemicals in drinking water. Therefore, the authors set out to examine the suitability of activated carbon treatment for the removal of denatonium benzoate from water.

Materials and Methods Adsorption isotherm tests were conducted using a bituminous coal-based PAC and a lignite coal-based PAC. Isotherm tests were used to determine an activated carbon's ability to adsorb denatonium benzoate (MP Biomedicals, LLC, Solon, Ohio). Isotherm tests were conducted by spiking reverse osmosis water with 10, 20, and 50 mg/L concentrations of powdered carbon slurry and 5 mg/L or 70 mg/L of denatonium benzoate in 42-mL amber vials with no headspace. The carbon slurry was prepared by combining ground and sieved carbon (0.1919 grams of less than 365 mesh size) with 250 mL ultrapure Millipore™ water. The carbon slurry was stored in a desiccator under a vacuum seal. The stock solution of DB was prepared with ultrapure Millipore™ water and kept refrigerated.

All experiments were conducted in ultrapure Millipore™ water. Blanks were prepared and analyzed for each experiment. Samples were prepared in triplicate. The amber vials containing the appropriate DB and PAC concentrations were placed onto a rotating shaker for continuous mixing at 25°C for 24 hours. When removed from the shaker, the samples were filtered through a 0.22-µm Millipore Millex sterile syringe filter and placed into a quartz cuvette to obtain an absorbance reading at 270 nm using a Shimadzu 1201 UV-VIS spectrophotometer (Shimadzu, Columbia, MD). The absorbance measurement was used with a calibration curve (prepared daily) to determine the DB concentration remaining in solution.
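The calibration-curve step can be sketched as a simple linear fit of absorbance at 270 nm against known DB standards, which is then inverted for the unknown samples. The standard concentrations and absorbance readings below are made-up placeholders, not the authors' measurements.

import numpy as np

# Fit a daily calibration curve from DB standards, then invert it to
# convert a filtered sample's absorbance into concentration.
std_conc_mg_L = np.array([0.0, 1.0, 2.0, 5.0, 10.0])      # placeholder standards
std_abs_270nm = np.array([0.00, 0.021, 0.043, 0.108, 0.214])  # placeholder readings

slope, intercept = np.polyfit(std_conc_mg_L, std_abs_270nm, 1)

def db_concentration(absorbance):
    """Convert a sample's absorbance at 270 nm to DB concentration (mg/L)."""
    return (absorbance - intercept) / slope

print(db_concentration(0.065))   # e.g. roughly 3 mg/L remaining after PAC treatment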

Results Contact time and isotherm tests were conducted to evaluate effects of operating conditions, carbon type, and carbon dose on DB adsorption. The contact time results are shown in Table 1. At 24 hours, the optimal contact time for DB adsorption onto carbon was reached.

Table 1. Impact of Contact Time on DB Adsorption by PAC A and PAC B

          Contact Time (hrs)    % Denatonium Benzoate Removed
PAC A            24                          42
                 30                          25
                 48                          40
PAC B            24                          38
                 30                           3
                 48                          23

306 At 5 mg/L initial DB concentration, PAC A and PAC B achieved 70-73% DB removal efficiency at low PAC doses (2.5 and 5 mg/L). At the 10 mg/L PAC dose, PAC A adsorbed 80% of DB compared to 73% for PAC B (Figure 1).

Figure 1. Percent Removal of DB (initial DB concentration 5 mg/L)

Adsorption of DB onto PAC A and PAC B was compared via a Freundlich isotherm. Results indicated that PAC A, the bituminous-based activated carbon, adsorbed DB better than PAC B, the lignite-based activated carbon. Calculated Freundlich isotherm adsorption capacity constants, K (mg/g)(L/mg)^(1/n), for PAC A and PAC B were 358 and 108, respectively. These values support the conclusion that PAC A exhibited improved adsorption (Figure 2).
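The Freundlich fit behind the reported K values can be sketched as a log-log linear regression of the isotherm q_e = K * C_e^(1/n), where q_e is the mass of DB adsorbed per gram of PAC and C_e is the equilibrium concentration. The (C_e, q_e) pairs below are made-up placeholders rather than the measured isotherm points.

import numpy as np

C_e = np.array([0.5, 1.0, 1.5, 2.0])          # mg/L DB remaining in solution (placeholder)
q_e = np.array([250.0, 360.0, 430.0, 490.0])  # mg DB adsorbed per g PAC (placeholder)

# Linearize: log10(q_e) = log10(K) + (1/n) * log10(C_e)
one_over_n, logK = np.polyfit(np.log10(C_e), np.log10(q_e), 1)
K = 10 ** logK
print(f"K = {K:.0f} (mg/g)(L/mg)^(1/n), 1/n = {one_over_n:.2f}")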


Figure 2. Freundlich Isotherm for DB Adsorption onto PAC A and PAC B (Co= 5mg/L DB)

When the simulated contaminant level was raised to 70 mg/L, the removal efficiency for both PACs dropped (29% for PAC A and 14% for PAC B) (Figure 3).

307

Figure 3. Percent Removal of DB (Co= 70 mg/L DB)

Conclusions/Future Work After conducting several contact time tests, both powdered activated carbons had the most success at removing denatonium benzoate at the 24-hour contact time. PAC A and PAC B are both reasonable treatment options for removing lower concentrations of denatonium benzoate (Co = 5 mg/L) from ultrapure reverse osmosis water. For higher concentrations of denatonium benzoate (Co = 70 mg/L), increased PAC doses or other treatment methods may be warranted to decrease DB concentration levels to those which would make water palatable (<30 mg/L DB).

Since organic matter will compete with DB for adsorption onto activated carbon, it is expected that the observed DB removal efficiencies would decrease for full-scale treatment conditions. Therefore, future studies will include conducting experiments using natural water spiked with denatonium benzoate. Experiments will also be conducted under various pH conditions and with other types of activated carbon.

Acknowledgments The authors would like to acknowledge the University of Dayton Sustainability, Energy, and the Environment Initiative Seed Grant program.

References
1. Tutuni, Peter. "Denatonium Benzoate - A Bittering Agent to Prevent Antifreeze Poisoning." 8/15/2009.
2. Li, L.; Quinlivan, P.; Knapp, D. 2002. "Effects of activated carbon surface chemistry and pore structure on the adsorption of organic contaminants from aqueous solution."
3. Bonacquisti, Tom. 2006. "H.R. 2567, the Antifreeze Bittering Act of 2005."

308 Using Three Dimensional Printing in Product Development

Student Researcher: David M. Smith

Advisor: Jeffery Woodson

Columbus State Community College Department of Mechanical Engineering Technology

Introduction In August 2008 an article appeared in Mechanical Engineering Magazine entitled "Digital Product Development." The article was written by Mark A. Burgess, Chief Engineer for Boeing's Phantom Works, an advanced research and development division at Boeing. (Boeing) The article examined the progression of developing new products from designing them in 2 dimensions on paper to the introduction of computers to the design process and the ability to create 3 dimensional models. With the use of computers in 3D modeling, parts can now be fully created in the virtual world and have a great deal of their functionality and fit examined before money is ever spent in creating a prototype. (Burgess)

Gordon Moore, one of the co-founders of Intel, wrote a paper in 1965 entitled "Cramming More Components onto Integrated Circuits." In this paper he laid the groundwork for what would later become Moore's Law, as his predictions proved to be true. The law states that "the number of transistors on an integrated circuit will double every 24 months." (Intel) This means that computers will become more powerful every two years. Over time, the prediction has fluctuated between 18 and 24 months but, overall, has held true. (Dorsch)

With these two ideas in hand, Burgess poses this question: "Now consider, if you will, 20 years from now. If Gordon Moore's prediction continues to hold true, then the computer capability available to mechanical engineers will be almost 10,000 times what it is today. That's four orders of magnitude. With 10,000 times increased computing capability, what will our design and analysis systems be like?" While 20 years have not passed, two and a half have since the Burgess article was written, and the question that can be asked is: what are computers capable of now? The next wave appears to be three dimensional (3D) printing.

Printing in 3D Three dimensional printing, also known as rapid prototyping, was initially invented in 1984, and it started to become a reliable, functional reality in the late 1990s. (Stereolithography) The idea behind it is similar to how an inkjet printer works. Whereas an inkjet printer "sprays" or deposits ink onto paper to create the desired image or text, a 3D printer uses another medium, usually some type of polymer or plastic, to lay down successive layers which continue to be built up in the vertical direction until the model is created. (Sherman) The result is a model of the part which can be examined in the real world, measured for fit, shown to a customer, etc.

Printing in 3D – More Than a Model What if it could go beyond that? What if the part that was created was a fully finished, usable part? What if a company needed to make several customized parts? Experts now predict that as much as 20% of the 3D printing that is done produces completed, usable parts, and this number is expected to rise to as high as 50% by 2020. This is possible because, whereas older 3D printers were limited to plastics, newer models are able to use a variety of materials including titanium alloys, glass, and even concrete. (Dillow) Another big advantage to printing a finished part in 3D is cost savings. One-off parts can be made without the cost of developing expensive tooling or other related manufacturing necessities, and the part can be changed in an instant to reflect a change in the design.

Specific Examples of 3D Printing
1. TV entertainer Jay Leno uses 3D printing to replicate obsolete car parts that are hard to find; those models serve as the basis for replicating the parts at a machine shop. (Charleton)

309 2. Bespoke Prosthetics, a company co-founded by an orthopedic surgeon, is creating fully functional prosthetic limbs at up to 90% less than the cost of traditional prosthetics. (Vance)
3. A company in California is developing a printer that could one day build homes out of concrete. The printer is so large it would need to be carried on a tractor trailer. (Vance)
4. European-based aircraft manufacturer Airbus is developing a way to print an entire aircraft wing. They currently use the technology for parts related to their landing gear assemblies. (Dillow)

Conclusion Computers have revolutionized every aspect of life, and the field of engineering and design is no exception. As computers have increased in power, CAD (Computer Aided Drafting) software has given way to the technology of 3D printing. Printing in 3D is the next frontier in engineering and manufacturing. Producing parts that can be prototyped, tested, and even used as finished products could begin to eliminate the need for large scale manufacturing operations. Many manufacturers could offer the option of making custom products at a lower price than is currently available for mass-produced goods. The use of 3D printing to produce custom prosthetics is an example of this; in addition to cost savings, each prosthetic is custom designed for the best individual fit. (Vance) These advancements have come in the last two to three years; what will be possible in five or ten more years? Designers and engineers are only limited by their imaginations and the power of their computers.

Acknowledgment Jeff Woodson-Project Advisor

References 1. "Boeing: Phantom Works Home." The Boeing Company. Web. 06 Apr. 2011. . 2. Burgess, Mark A. ""Digital Product Development," Feature Article, August 2008." Mechanical Engineering Magazine Online Redirect. Aug. 2008. Web. 06 Apr. 2011. . 3. Charleton, Gene. "Printing in Three Dimensions : Discovery News." Discovery News: Earth, Space, Tech, Animals, Dinosaurs, History. Discovery News, 10 Sept. 2010. Web. 06 Apr. 2011. . 4. Dillow, Clay. "Using 3-D Printing Tech, British Airbus Engineers Aim to Print Out an Entire Aircraft Wing | Popular Science." Popular Science | New Technology, Science News, The Future Now. Popular Science, 14 Feb. 2011. Web. 06 Apr. 2011. . 5. Dorsch, Jeffrey. "Does Moore's Law Still Hold Up?" EDA Vision The Online Magazine for EDA Professionals. EDA, Sept. 2003. Web. 6 Apr. 2011. . 6. Moore, Gordon. "Cramming More Components Onto Integrated Circuits." Intel. Web. . Electronics, Volume 38, Number 8, April 19, 1965. 7. "Moore's Law and Intel Innovation." Laptop, Notebook, Desktop, Server and Embedded Processor Technology - Intel. Web. 06 Apr. 2011. . 8. Sherman, Lilli M. "3D Printers Lead Growth of Rapid Prototyping." Plastics Technology Online. Aug. 2004. Web. 6 Apr. 2011. . 9. "STEREOLITHOGRAPHY." PHOTOPOLYMER. Web. 06 Apr. 2011. . 10. Vance, Ashlee. "3-D Printing Spurs a Manufacturing Revolution." New York Times. 13 Sept. 2010. Web. 6 Apr. 2011. <3-D Printing Spurs a Manufacturing Revolution>.

310 The Wing in Ground Effect on an Airfoil

Student Researcher: Matthew G. Smith

Advisor: Dr. Jed E. Marquart

Ohio Northern University Department of Mechanical Engineering

Abstract When an aircraft descends for a landing it experiences a number of effects caused by the ground. One of these effects is known as the wing in ground effect. This occurs when the aircraft comes within a distance of the ground equal to its wingspan. At this point the aircraft experiences a decrease in drag and an increase in lift. As it approaches a distance of one half its wingspan, the effect intensifies.

To further explore this phenomenon, using computational fluid dynamics, the airflow over an airfoil was analyzed at three different distances from the ground.

Project Objectives Many studies have been conducted on the phenomenon of the wing in ground effect. To further understand this phenomenon the main objectives of this work are: 1) To study the change in the pressure distribution that a wing experiences as it descends for its landing. 2) To study the change in the drag and lift coefficients as the airfoil approaches the ground.

Methodology Used To successfully conduct this research in a low cost manner, computational fluid dynamics was applied. The first step to this method was to specify the three preferred distances that the airfoil would be from the ground. These distances are shown in Table 1.

With these distances specified, the software Pointwise was used to create an unstructured mesh around the airfoil for each corresponding distance. An example of a generated mesh is shown in Figure 1 and Figure 2. As seen in Figure 1, the mesh is extremely fine around the airfoil and the downwash region trailing the airfoil. This region is shown in greater detail in Figure 2. In this mesh, the main focus is this region because it is critical to the flow analysis.

Once each mesh is generated successfully, the flow analysis is then conducted using the software Cobalt. To display the results, Fieldview was used.

Results Obtained First the Reynolds number was calculated to determine if the flow over the airfoil would be laminar or turbulent. This calculation was completed using the values shown in Table 2. The resulting Reynolds number was 675,580, which implies turbulent flow.

Next the average velocity of the radio controlled aircraft for which this airfoil was modeled was determined. This velocity was 45 mph, which results in a Mach number of 0.059. With these values determined, the analysis for each trial was completed using the flow solver Cobalt and the Navier-Stokes equations.
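The flow-condition numbers quoted above can be reproduced directly from the Table 2 values, as in the short sketch below; the sea-level speed of sound used for the Mach number is an assumed standard value, since it is not listed in the paper.

# Reproducing the flow-condition numbers from Table 2 (inch-pound units).
chord_in = 19.375
rho_snails_per_in3 = 1.146e-7       # density, lbf*s^2/in^4
V_in_per_s = 792.0                  # 45 mph expressed in in/sec
mu_lbf_s_per_in2 = 2.603e-9         # dynamic viscosity

Re = rho_snails_per_in3 * V_in_per_s * chord_in / mu_lbf_s_per_in2
print(f"Re = {Re:,.0f}")            # ~675,600 -> turbulent flow

# Standard sea-level speed of sound (~1116 ft/s = 13,392 in/s) is an
# assumed value; it is not given in the paper.
a_in_per_s = 1116.0 * 12.0
print(f"Mach = {V_in_per_s / a_in_per_s:.3f}")   # ~0.059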

The resulting lift and drag coefficients can be seen in Table 3.

An image of the flow over the airfoil at a distance of 27.5 inches is shown in Figure 3. As the distance to the ground decreases, the pressure on the bottom surface of the airfoil increases. In turn, this should result in an increase in lift. This can be seen in Figure 4 where a distance of 2.0 inches from the ground is analyzed. Also by comparing Figures 5 and 6, it can be seen that as the distance to the ground decreases, the flow in the downwash region is interrupted by the ground.

Significance and Interpretation of Results In theory, as the airfoil descends for its landing, the pressure gradients on the bottom surface increase in magnitude. This phenomenon results in an increase in speed, an increase in lift, and a decrease in drag. As shown below in Table 3, the numerical results for Trial 3 do not agree with this.

This was caused by human error when editing the job files for Cobalt. For example, when the grid is exported, if double precision is selected, then double precision must also be selected before each job file is run in Cobalt. The error in Trial 3 is just that: the grid was exported in double precision, but the Trial 3 job file was run in single precision. This resulted in the downwash region being treated almost as if it were a boundary condition. After much troubleshooting, this was found to be the problem.

In the near future, to get the correct results, this trial will be conducted again until convergence is reached in double precision. It is expected that as the airfoil gets closer to the ground, the lift coefficient will increase and the drag coefficient will decrease.

Figures/Charts

Table 1. Airfoil distance from the ground
Trial #   Distance (inches)
Trial 1   55
Trial 2   27.5
Trial 3   2

Table 2. Values used in Reynolds number calculation
Variable          Value
Chord Length      19.375 in
Density           1.146×10^-7 snails/in^3
Velocity          792 in/sec
Viscosity         2.603×10^-9 lbf·sec/in^2
Reynolds Number   675,580

Table 3. Computational fluid dynamics results

Trial #   CL      CD
1         1.32    0.047
2         1.166   0.0151
3         1.176   0.0127

Figure 1. Generated mesh for a distance of 27.5 inches.


Figure 2. Fine mesh on the surface of the airfoil.
Figure 3. Flow over the airfoil at 27.5 inches from the ground.

Figure 4. Flow over the airfoil at 2.0 inches from the ground

Figure 5. Downwash region for the airfoil at 27.5 inches from the ground

Figure 6. Downwash region for the airfoil at 2.0 inches from the ground

Acknowledgments I would like to thank Dr. Jed Marquart for his assistance in the completion of this research.

Vital Monitoring Systems

Student Researcher: Zachary J. T. Snyder

Advisors: Neurer Satish and Lynn Kendall

Owens Community College Department of Biomedical Electronics

Abstract Vital monitoring systems play a crucial part in any space mission. These systems are comprised of a collection of biosensors that feed health and physiology data to a compact device which is worn by the astronaut. These devices are similar to an airplane’s “black box” which is able to record data. Unlike the “black box”, the device worn by the astronauts is able to transmit the data in real time. These devices are less bulky and allow the astronaut to move around more freely than old monitoring systems. In 2004, a team of researchers from NASA and Stanford University developed a system named LifeGuard. Not only are these devices beneficial to astronauts, this technology would have a great impact in the private sector. "There are tons of applications for medical use, home use, athletic training and uses in many other areas," said Greg Kovacs, a Stanford University professor and one of the LifeGuard project leaders.

Project Objective My objective is to research the use of the LifeGuard system, more specifically the Crew Physiological Observation Device (CPOD), which is used to monitor the vitals of astronauts in space. I will also research the effectiveness of this system and what improvements can be made to incorporate its use in commercial/medical applications.

Results Obtained The LifeGuard system was developed by researchers from NASA and Stanford University in 2004. The LifeGuard system is an assortment of biosensors that take health and physiology data and send it to a compact device worn around the waist, which records the astronaut’s vitals. The data is then sent wirelessly in real time to doctors on the ground. Telemedicine plays a very important role in any space mission. Telemedicine, as defined by the International Society for Telemedicine and eHealth, is “the delivery of healthcare services, where distance is a critical factor, by all healthcare professionals using information and communications technologies for the exchange of valid information for diagnosis, treatment and prevention of disease and injuries, research and evaluation, and for the continuing education of healthcare providers, all in the interest of advancing the health of individuals and their communities.”i With this device, doctors are able to practice telemedicine over long distances and even into space.

The current LifeGuard system is very effective for what it was designed to do. Currently the unit is able to measure respiration, pulse, blood pressure, temperature, and body orientation. The unit, which is referred to as the Crew Physiological Observation Device (CPOD), is only able to transmit the collected data through a wired RS-232 connection or through Bluetooth. This data can be downloaded to a base station, which interprets the recorded data and displays streaming data in real time. The CPOD operates on 2 AAA batteries for up to twenty-four hours and records up to eight hours of data.ii

The enhancements that could be made to improve the CPOD would be using the VHF, UHF, and/or ISM radio bands instead of Bluetooth, which would allow for a greater range and make the CPOD more viable in medical applications. The greater range these bands offer would also make the device more useful in the commercial field. Since the CPOD was designed for extreme conditions and for use in military applications, the addition of a GPS unit would be very beneficial; military command staff would be able to track soldiers through GPS while monitoring their vitals. To improve the operating time of the unit, a small solar panel could be affixed to the unit itself or wired to it. With the advancements made in data storage, the size of the internal memory could be increased substantially. With these enhancements the CPOD could be even more effective.

Significance From a biomedical standpoint, I feel that the results from further development and research would make the Crew Physiological Observation Device (CPOD) very crucial, not only for space flight, but also in medical applications. Even though the CPOD is already a small package, it could be scaled down considerably. With the current technology we have today for creating circuit boards and longer-lasting batteries, like the ones utilized in cell phones, these systems could run longer and more efficiently. Having solar-powered capabilities could also be beneficial. Enhancing the vital monitoring system with GPS, longer battery life, and greater wireless capabilities would allow the CPOD to be used in more diverse medical applications.

References 1. Asaravala, Amit. "A Black Box for Human Health." Wired. 13 Apr. 2004. Web. 11 Mar. 2011. 2. "Glossary of Telemedical Terms Q - Z." ISfTeH - International Society for Telemedicine & eHealth. Web. 05 Feb. 2011. 3. Malik, Tariq. "Scientists Create Wearable Health Monitor." Space.com / Msnbc.com. 13 May 2004. Web. 11 Jan. 2011. 4. Malik, Tariq (SPACE.com). "Researchers Build a 'Black Box' Astronauts Can Wear." CNN. 13 May 2004. Web. 11 Jan. 2011. 5. Stanford University. "LifeGuard System Specs." 29 Apr. 2003. Web. 15 Feb. 2011. 6. Stanford University. "Specs." LifeGuard - Wearable, Wireless Physiological Monitor. Web. 1 Apr. 2011.

Optimization of Algae Lipid Measurements and Biomass Recovery

Student Researcher: Brittany M. M. Studmire

Advisor: Dr. Joanne Belovich

Cleveland State University Department of Chemical and Biomedical Engineering

Abstract The need for a sustainable fuel has become more apparent over the years as concerns about the limited amounts of crude oil continue to increase. One such source of a sustainable alternative fuel is microalgae. The particular microalga used for this research is Scenedesmus dimorphus, a green, unicellular alga that is often oval-shaped with simple cell walls and that produces an impressive amount of lipids when placed under stress to survive. It is studied along with digestate, a nutrient-rich byproduct of digested animal manure, as a possible nutrient source for optimal algal growth and lipid production.

Project Objectives The purpose of this research was to optimize the lipid production of algae by using digestate as a nutrient source.

Methodology Scenedesmus dimorphus was grown in media rich in digestate to see if digestate could be a feasible nutrient source to increase lipid productivity. The concentrations of digestate in water were varied at 1%, 1.5%, 2%, and 5% (vol %). S. dimorphus was cultured in 250 mL Erlenmeyer flasks with a working volume of 150 mL. Cultures were maintained at 32 °C and agitated at 400 RPM in a shaking water bath, with light levels between 500-600 foot-candles at the water’s surface. 0.1 L/min of 5% carbon dioxide in air (v/v) entered each flask. The experiment was conducted for 15 days, with 1-2 mL samples taken every 24 hours for absorbance readings. After the allotted 15 days, all samples were centrifuged in 50 mL centrifuge tubes, allowing all biomass solids to settle. The supernatant was discarded, and the biomass was dried in an oven set to 45-50 °C. Once the biomass was completely dry, it was ground via mortar and pestle and allocated into its respective lipid tubes. 10 mL of hexane/isopropanol (2:3 v/v) was added to each tube, and the tubes were placed on the shaker platform overnight (12-18 hrs). Lipid and solvent were then extracted via pipette and placed in lipid tubes. Both lipids and biomass were then placed under the manifold to dry. Dry weights were then used to calculate the percentage of lipids.
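The final dry-weight calculation can be summarized in a few lines. The sketch below is a hedged illustration: the report only states that dry weights were used, so defining lipid content as dried lipid mass over total recovered dry mass, and the example masses, are my assumptions.

```python
def lipid_percent(dry_lipid_mass_g, dry_residual_biomass_g):
    """Lipid content as a percentage of the total dry mass recovered."""
    total_dry_mass = dry_lipid_mass_g + dry_residual_biomass_g
    return 100.0 * dry_lipid_mass_g / total_dry_mass

# Example with made-up masses (grams):
print(lipid_percent(0.012, 0.088))  # -> 12.0 (% lipids)
```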

Results Obtained Results show that the use of digestate gave a significant increase in lipid production versus the use of our standard 3N-BBM media solution. A lower concentration of digestate produced better lipid results than a higher concentration. Filtering out digestate solids before making the media solution also contributed to higher lipid yields. It was concluded that a 1% filtered digestate concentration was optimal for lipid production.

Acknowledgments Dr. Joanne Belovich

Charts

Figure 1. Lipid content comparing 3N-BBM media with 2% and 5% concentrations of digestate in media.

Figure 2. Lipid content comparing effects of filtering digestate solids.

Biofuels as an Alternative Source for Aviation

Student Researcher: Tyra P. Studmire

Advisor: Dr. Bilal Bomani

Cleveland State University Department of Science

Abstract Whether or not a plane can fly depends mainly on fuel. With fuel in high demand and prices on the rise, it has become more difficult to obtain fuel. Biofuels can help put this problem to rest, so we study and experiment with two sources: seawater algae and arid-land halophytes. However, in using these two sources, there are several constraints to consider. For example, we cannot use fresh water, for it competes with human consumption. Only 2.5% of water on Earth is fresh water (only 1% of this water is accessible for direct human use), while 97% of Earth’s water is saline. Thus we ask ourselves, why not use the biggest source of water? We also do not use any food products to produce the fuel, for this also competes with human consumption. Lastly, we do not use arable land, for this also competes with food crops.

There are a number of halophytes being used: Salicornia virginica, also known as pickleweed; Salicornia europaea, also known as glasswort; Rhizophora mangle, also known as red mangrove; Kosteletzkya virginica, also known as seashore mallow; and lastly, Salicornia bigelovii. Instead of using normal fertilizer for the plants, we use fish. The fish provide a type of fertilization that is good for the plants and keeps them healthy. So in order for the plants to be healthy, the fish also must be happy. Taking care of the fish and the plants is a big role for the interns. It is our top priority.

Acknowledgments I would like to acknowledge Dr. Bilal Mark McDowell Bomani who is constructing this experiment, and who is the advisor of the intern’s research.

The Rotation Rate Distribution of Small Near-Earth Asteroids

Student Researcher: Kevin M. Sweeney

Advisor: Dr. Thomas S. Statler

Ohio University Department of Physics and Astronomy

Abstract This observational campaign focuses on observing small (<500 meters) Near-Earth Asteroids in order to determine their rotational periods. Since 2006, the group has observed 83 objects, finding reliable period solutions for 18 and lower limits for an additional 28 asteroids. A possibly multiply periodic light curve is discussed that may be the signature of a non-principal axis rotator. This data will aid future statistical studies of the rotation rate distribution by occupying an underpopulated region in the current compilation of data found in the literature.

Project Objectives Asteroids provide a unique opportunity for examining the conditions of the early Solar System. This is because the materials that coalesced to form these bodies have since remained more or less unaltered. The physics of asteroids is also interesting because their size scale is in an intermediate range where the forces of gravity and tensile strength are comparable. This conflict between the forces that determine the object’s stability leads to complicated behavior during disruptive events such as collisions (Asphaug 2002). Astronomers hope to contribute to the development of minor body science by observing their behavior on a large scale so that the statistics of their properties may be resolved.

A readily observable characteristic is the distribution of rotation rates, which has served as a major source of information regarding asteroid structure in recent years. Figure 3 presents a survey from Holsapple (2008) of rotational period measurements for the entire range of sizes. A wealth of data is developing for asteroids with diameters greater than a few hundred meters, but smaller objects remain relatively obscure.

This project focuses on gathering data for objects with diameters less than approximately 500 meters. This is accomplished by observing as many asteroids as possible, spending just enough time on each to get a reliable period solution before moving on to the next. Doing so will give future statistical studies tighter constraints by more thoroughly defining the transition region between the gravitational and material strength regimes.

Since joining this group two years ago, I have played an integral role in the production of the results presented below. Having taken part in every step of the process, from observation to data reduction to light curve analysis, I contributed to 34 of the 83 total light curves produced thus far.

Methodology Close approaching Near-Earth Asteroids (NEAs) are the most practical targets. Observing strategy is determined by the technological limitations of the telescope equipment. Using the 2.4-meter Hiltner telescope at MDM Observatory, a 30 second time exposure is required to detect most asteroids on the sky. In conjunction with a 90 second CCD camera readout time, this means that we are able to take an image approximately every two minutes. Ideally, we take images at this rate for about four hours per asteroid. Under good observing conditions, this window gives enough time to unambiguously determine the rotation period of the object in most cases.

Due to the large distances to even the relatively nearby NEAs, the objects are not resolvable by ground-based telescopes. Rather, they appear as a point source of light in the images. The brightness of this point varies as the asteroid rotates and the amount of reflected sunlight reaching us changes with the geometry. This means that measuring the brightness of the asteroid in each image and finding the periodicity of the resulting time series provides a means of finding the rotational period.


The process of measuring the amount of light detected from a point source is called photometry. The technique we use for doing this is known as point spread function (PSF) fitting, where the PSF is the effect the telescope has in distorting a point source into a specific shape on the image. PSFs can be accurately described by a two dimensional Gaussian function. This function is found for the asteroid by first finding one for each of the stars in the background of the image, averaging them all, and then applying the final averaged PSF to the asteroid. The amount of light detected in the shape of this function at the asteroid’s coordinates in the image is the brightness of the asteroid integrated throughout the time exposure.
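As an illustration of the PSF-fitting idea (not the group's actual reduction pipeline), the sketch below fits a circular two-dimensional Gaussian to a synthetic star cutout with SciPy and converts the fit into an integrated brightness; the synthetic image, noise level, and parameter names are all invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian2d(xy, amp, x0, y0, sigma, sky):
    """Circular 2D Gaussian on a constant sky background."""
    x, y = xy
    return sky + amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))

# Synthetic 25x25 pixel cutout around a fake point source
yy, xx = np.mgrid[0:25, 0:25].astype(float)
truth = gaussian2d((xx, yy), 500.0, 12.3, 11.7, 2.1, 50.0)
image = truth + np.random.default_rng(0).normal(0.0, 5.0, truth.shape)

# Fit the PSF model to the cutout
p0 = (image.max() - np.median(image), 12.0, 12.0, 2.0, np.median(image))
popt, _ = curve_fit(gaussian2d, (xx.ravel(), yy.ravel()), image.ravel(), p0=p0)
amp, x0, y0, sigma, sky = popt

# Sky-subtracted integrated brightness of a 2D Gaussian: 2*pi*amp*sigma^2
flux = 2.0 * np.pi * amp * sigma ** 2
print(f"fitted width = {sigma:.2f} px, integrated counts = {flux:.0f}")
```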

Plotting these brightness values over time results in what astronomers call a light curve (e.g., Figure 1). Unfortunately, the two-minute time spacing between data points is comparable to the rotation period of most small asteroids, so very few points are gathered per rotation. This is the reason for the lack of an obvious periodicity in the typical light curve of Figure 1. For sparse time series data sets such as this, the method of phase dispersion minimization (PDM) is highly effective in revealing the period (Stellingwerf 1978). PDM is based on the idea that a periodic signal repeatedly folded back onto itself at its period will flawlessly line up into a single sequence showing the shape of one cycle. In practice, for a sparse data set this amounts to folding the series at many different trial periods and determining which fold has the minimal average scatter between adjacent points in the folded curve. Figure 2 shows the effectiveness of this process in determining the period for light curves like Figure 1.
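A minimal version of the PDM idea can be written in a few lines. The sketch below is only illustrative, in the spirit of Stellingwerf (1978): the synthetic light curve, the ten phase bins, and the trial-period grid are my own choices, not the group's analysis code.

```python
import numpy as np

def pdm_theta(t, mag, period, n_bins=10):
    """PDM statistic: mean within-bin variance over total variance (smaller is better)."""
    phase = (t / period) % 1.0
    total_var = np.var(mag, ddof=1)
    weighted_var, dof = 0.0, 0
    for k in range(n_bins):
        in_bin = mag[(phase >= k / n_bins) & (phase < (k + 1) / n_bins)]
        if in_bin.size > 1:
            weighted_var += (in_bin.size - 1) * np.var(in_bin, ddof=1)
            dof += in_bin.size - 1
    return (weighted_var / dof) / total_var if dof > 0 else np.inf

# Synthetic sparse light curve: ~one point every two minutes over four hours,
# with a true rotation period of 180 seconds.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 4 * 3600.0, 120))
mag = 0.3 * np.sin(2 * np.pi * t / 180.0) + rng.normal(0.0, 0.05, t.size)

trial_periods = np.linspace(60.0, 300.0, 2000)
theta = [pdm_theta(t, mag, p) for p in trial_periods]
print(f"best-fit period ~ {trial_periods[int(np.argmin(theta))]:.1f} s")  # ~180 s
```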

Results Obtained 83 asteroids have been observed over 34 nights since the start of this project in 2006. Of these data sets, 18 yielded reliable period solutions, and lower limits were found for 28 objects. Figure 3 overlays these results on the Holsapple (2008) survey. Diamonds and downward arrows represent this project’s period solutions and lower limits, respectively.

Significance and Interpretation of Results The data show many fast-rotating asteroids spinning well above the two-hour rotation limit for larger objects, as expected for asteroids of this size. The lower-limit estimates for asteroids without a successful period determination also suggest several small, slow-rotating asteroids. Both of these sets, along with future additions from this ongoing project, will greatly enhance the quality of statistical studies of the rotation rate distribution of asteroids.

A light curve of particular interest is that of 2008CP, an NEA with an estimated diameter of 54 meters. The original, unfolded light curve has a clear periodicity of about 50.16 minutes. This periodic signal, however, follows a long-term trend throughout the entire 3.5 hour observation. In addition to continued observations, future work will include analyzing this light curve and exploring the interpretation of this phenomenon as a multiply periodic signal. If this is the case, the second period in the data may be the signature of what is known as a non-principal axis rotator. These are objects that are undergoing a precession along with their spin, which would be the source of the second period.

Figures

Figure 1. Light curve for the NEA 2009HK73.


Figure 2. Light curve in Figure 1 folded at its most probable period of 180 seconds.

Figure 3. Distribution of rotation rates as a function of object size. Figure adapted from Holsapple (2008), in which small dots represent a survey of recent data. Diamonds represent period solutions from this work, along with downward arrows symbolizing lower limits for objects of which an exact period could not be found. The dashed line is a theoretical rotation limit from the Holsapple paper, determined by material strength at small diameters and gravitational force at larger sizes.

Acknowledgments I would like to thank Dr. Thomas Statler for his mentorship throughout this project. I also owe Desiree Cotto-Figueroa a great deal of credit for her instruction and infinite patience. I am also grateful to Dr. Joseph Shields and David Riethmiller for their invaluable assistance in completing this work.

References 1. Asphaug E., Ryan E. V., Zuber M. T. 2002. Asteroid Interiors. Asteroids III 463-484. 2. Pravec P., et al. 2005. Tumbling Asteroids. Icarus 173:108-131. 3. Holsapple K. A. and Michel P. 2008. Tidal Disruptions. Icarus 193:283-301. 4. Stellingwerf R. F. 1978. Period Determination Using Phase Dispersion Minimization. The Astrophysical Journal 224:953-960.

Zosteric Acid Integrated Thermoreversible Gel for the Prevention of Post-Surgical Adhesions

Student Researcher: Jorge A. Sylvester

Advisor: Dr. Bi-min Newby

The University of Akron Department of Biomedical Engineering

Abstract Abdominal adhesions are fibrous bands that form when injury to the peritoneum occurs, such as in surgery. The incidence of adhesions is 90% in gynecological surgeries and 97% in general surgery laparotomy procedures [Fox Ray 1994; Ellis et al. 1999]. The complications include chronic pain, infertility, and small bowel and intestinal obstruction. Injury to the peritoneum causes protein-rich, serosanguineous fluid to cover the injured area. Adhesions will form between peritoneal surfaces that come into contact while covered in this protein-rich fluid [El-Mowafi and Diamon 2003]. Zosteric acid has been shown to be an effective, non-toxic anti-foulant that prevents the attachment of organisms to surfaces [Barrios et al. 2005; Ram et al. 2010]. These organisms release a protein-rich fluid onto the surface, which allows subsequent attachment. A biodegradable polymer, poly(lactic-co-glycolic) acid (PLGA), will be used in conjunction with a thermoreversible gel, Pluronic® F-127 (PF-127), to achieve a localized, controlled delivery of zosteric acid.

Objectives Due to the high incidence of abdominal adhesion formations after gynecological surgeries, an effective, preventative method is desired. Ideally, zosteric acid will be used for this purpose since it has been shown to be an anti-foulant. It will be delivered to the surgical site at the end of the surgical procedure via PLGA and Pluronic® F-127. Pluronic® F-127 is liquid at low temperatures and becomes a gel at body temperature. The thermoreversibility of Pluronic® F-127 allows the zosteric acid to be delivered as a liquid and become a gel once it comes into contact with the body. Encapsulation of zosteric acid in micro and nanoparticles allows the zosteric acid to be delivered at different rates due to nanoparticles degrading faster.

Methodology Zosteric acid was synthesized using p-coumaric acid and pyridine sulphur trioxide complex in the presence of dimethylformamide as shown in the diagram. Zosteric acid was then encapsulated in 50/50 PLGA using a water-in-oil-in-water method to produce encapsulated zosteric acid in micro and nanoparticles. Free zosteric acid along with the encapsulated form were both mixed into PF-127 gel. PF- 127 is a thermoreversible gel which is a liquid at low temperatures and turns into a gel at 37°C, which is approximately human body temperature. This allows for easy application during surgeries and for localized delivery. Cytotoxicity will be assessed using mouse L929 fibroblast cell cultures. Effective dosage will be assessed using peritoneal macrophage/monocyte assays. These results will then be used in a standardized, published swine model for quantitative evaluation of pelvic adhesion formation [Cheung et al. 2009]. The adhesions will be measured using Material Testing System (MTS) machine platform which measures force vs. displacement.

Results Zosteric acid was synthesized using a new, less-toxic method. The UV-vis spectroscopy results of the zosteric acid synthesis are shown below; they show a peak at 273 nm, which corresponds to zosteric acid. Mass spectrometry analysis of the synthesis product confirmed that zosteric acid was synthesized. However, proton nuclear magnetic resonance analysis is needed to determine the purity. Encapsulation into 50/50 PLGA and mixing into Pluronic® F-127 will be done in the future, along with cytotoxicity testing and effective dosage testing. A published swine model (Cheung et al., 2009) will be used to test the final product.

UV-vis Analysis (figure): absorbance (Abs.) vs. wavelength (0-700 nm) for the zosteric acid synthesis product.

Acknowledgments The author would like to thank Dr. Bi-min Newby and Mrs. Michelle Chapman from Summa Health System for all of the help on the project.

References 1. Barrios CA, Xu Q.-w., Cutright TJ, Zhang Newby B.-m., Incorporating zosteric acid into silicone coatings to achieve its slow release while reducing fresh water bacterial attachment, Colloids Surf B: Biointerfaces 2005; 41: 83-93. 2. Cheung M, Chapman M, Kovacik M, Noe D, Ree N, Fanning J, Fenton BW, A Method for the consistent creation and quantitative testing of postoperative pelvic adhesions in a porcine model, J Invest Surg 2009; 22(1):56-62. 3. Ellis H, Moran BJ, Thompson JN, Parker MC, Wilson MS, Menzies D, McGuire A, Lower AM, Hawthorn RJS, O’Brien F, Buchan S, Crowe AM, Adhesion-related hospital readmissions after abdominal and pelvic surgery: a retrospective cohort study; Lancet 1999; 353: 1476-1480. 4. El-Mowafi D, Diamon M, Are pelvic adhesions preventable?, Surg. Techn. Int. 2003; 11: p222-35. 5. Fox Ray N, Denton W, Thamer M, Henderson S, and Perry S, Abdominal adhesiolysis: inpatient care and expenditures in the United States in 1994. J Am Coll Surg 1998; 186: 1-9. 6. Ram J, Purohit S, Zhang Newby B-m, Cutright TJ, Evaluation of the natural product antifoulant, zosteric acid, for preventing the attachment of quagga mussels – A preliminary study, Natural Product Res., 2010 under review.

Characterization and Modeling of Thin Film Deposition Processes

Student Researcher: Charles F. Tillie

Advisor: Dr. Jorge E. Gatica

Cleveland State University Department of Chemical and Biomedical Engineering

Abstract With the rise of environmental awareness and the renewed importance of environmentally friendly processes, surface pre-treatment processes based on chromates have been targeted for elimination by the United States Environmental Protection Agency (EPA). Indeed, chromate-based processes are subject to regulations under the Clean Water Act and other environmental initiatives, and there is today a marked movement to phase these processes out in the near future. Therefore, there is a clear need to develop new approaches in coating technology aimed at providing practical alternatives to chromate-based coatings in order to meet EPA mandates. This research focuses on calorimetric analysis and mathematical modeling to develop an alternative process.

Project Objectives The overall goal of characterizing the chemical vapor deposition reaction has many components to it. Thermal characterization of the solutions being used to grow the films must be completed, including the specific heat and heat of vaporization. Distinguishing between the thermal effects of the surface reaction and vaporization must be completed to ensure an accurate model. It is also necessary to develop a data analysis methodology to retrieve the kinetic parameters for the chemical vapor deposition reaction. The laboratory environment in which the films will be grown must also be modeled. This includes determining the optimum location of the stage inside the furnace and modeling the flow of air through the furnace.

Methodology Used These analyses are completed using a state-of-the-art Differential Scanning Calorimeter (DSC), a research-grade MDSC: Q200 Modulated DSC with Mass Flow Control from TA Instruments. This device measures the amount of heat flow required to raise the temperature of a sample of solution at a user-specified rate against an empty reference pan. When used conventionally, the DSC provides the user with the necessary data to complete thermal characterization of a substance, like finding the heat of vaporization. The governing equation for the heat flow in the pans is

q = m · cp · (dT/dt)

By assuming that the mass remains constant during a run, this can be rearranged to isolate the specific heat of the substance, cp:

cp = (q/m) · (dt/dT)

where q is the measured heat flow and dt/dT is the inverse of the heating rate. Thus, for a known mass of sample, m, it is possible to determine the specific heat over a given temperature interval. To determine the parameters of cp (in the form cp = a + bT + cT^2 + dT^3), polynomial regression via MATLAB was used.

A plot of heat flow against temperature can be used to see the effects of phase change on the heat flow. When working with a liquid sample, the line will start to deviate from its trend and form a peak when the

material begins to vaporize. Once vaporization is complete, the original trend again resumes. Thus, the heat of vaporization can be evaluated by calculating the area of the peak. This was also done in MATLAB using appropriate integration methods.
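For concreteness, the sketch below reproduces the two reduction steps in NumPy, standing in for the MATLAB routines actually used; the heating rate, sample mass, and synthetic heat-flow trace are placeholders, and the cubic baseline coefficients are taken from the fit reported in the Results section.

```python
import numpy as np

heating_rate = 10.0 / 60.0   # assumed dT/dt of 10 C/min, expressed in C/s
mass = 0.010                 # assumed sample mass, g

# Placeholder DSC trace: temperature (C) and heat flow q (J/s), built from a cp
# baseline (coefficients from the Results section) plus a fake vaporization peak.
T = np.linspace(30.0, 220.0, 500)
baseline_cp = 1.27 + 9.95e-3 * T - 5.22e-5 * T**2 + 1.12e-7 * T**3
q = mass * heating_rate * baseline_cp \
    + 0.002 * np.exp(-0.5 * ((T - 150.0) / 5.0) ** 2)

# Specific heat from q = m * cp * (dT/dt)
cp = q / (mass * heating_rate)

# Cubic fit cp = a + b*T + c*T^2 + d*T^3, excluding the peak region
mask = (T < 120.0) | (T > 180.0)
d, c, b, a = np.polyfit(T[mask], cp[mask], 3)
fitted_baseline_q = mass * heating_rate * (a + b * T + c * T**2 + d * T**3)

# Heat of vaporization: area of the peak above the baseline. Since the data are
# sampled in temperature, dividing the integral by dT/dt converts it to joules,
# and dividing by the mass gives J/g.
h_vap = np.trapz(q - fitted_baseline_q, T) / heating_rate / mass
print(f"cp(100 C) ~ {a + b*100 + c*100**2 + d*100**3:.2f} J/(g*C), h_vap ~ {h_vap:.0f} J/g")
```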

Results Obtained Below are the preliminary results for the cp and the heat of vaporization. For the cp, the following correlation with temperature was found from fitting the line:

cp = 1.27 + 9.95×10^-3 T - 5.22×10^-5 T^2 + 1.12×10^-7 T^3

The heat of vaporization was found to be 270.5968 J/g. However, these are preliminary results that are unconfirmed. More tests will be run to confirm the reproducibility of these numbers.

Problem Based Learning Approach to Designing Aircrafts

Student Researcher: Zachary M. Tocchi

Advisor: Dr. Lynn Pachnowski

The University of Akron Department of Education

Abstract Oftentimes, students do not see that their classes are all connected in some way, shape, or form. In science, students may have to write lab reports, which use skills learned in Language Arts; they may have to do some calculations, which are learned in math class; and there may be background information or research needed, which is learned in social studies. A Problem-Based Learning (PBL) activity is one where students approach the project from all aspects of their education to complete one project. Therefore, the goal of my project is to not only incorporate math and science in one activity, but to include social studies, language arts, and technology as well. Each aspect of this project will be covered in all areas of the students’ classes: in science, they will complete the “Glow with the Flow” activity and build their design; in math, they will create and analyze the data related to the “Glow with the Flow” activity; in social studies, they will research the history of aircraft and air resistance; and in language arts, students will work on their presentation as well as a short paper about their aircraft.

Theoretical Framework In order to incorporate all areas of a student’s education in one project, I will create a PBL activity which assigns students a specific role to play throughout, has them figure out what they “know” about the problem and “need to know” (KNK), then has them do research, further investigate the KNK, and finally create a presentation of a solution to the problem. The reason a PBL is being used is partly because students will be put into problem-based situations throughout their lives. In fact, courses in the medical field can be entirely problem based. The American Journal of Pharmaceutical Education recently published an article about a course run in a PBL format versus the same course run in a lecture format. According to this article, students in the PBL format of the class scored significantly higher on examinations. This was most likely because students were hands-on during the course. The PBL I have created has the students taking the role of employees of a private company closely related to NASA. Their goal is to provide NASA with a better aircraft that has less drag and air resistance. This project will be presented to the students in the school auditorium as a local engineer, acting as a high-ranking employee of NASA, speaks to the class as a whole. After his presentation, students will complete the KNK as a whole group by adding things to the lists. This PBL will utilize NASA’s “Glow with the Flow” activity, which will help students identify a way to create a new aircraft with less drag. Students are also told that they can “contact NASA” with questions they may have, which translates to the teachers answering their students’ questions. After students gather enough information and answer the KNK completely, they are to work in small groups to propose how they will reduce air resistance on aircraft.

Lesson Overview After the KNK charts are made, students will choose a few of the “need to know” topics to research individually and share that information with the class. They will continuously revisit the KNK and update it with new “knows” and new “need to knows” as they learn information and ask new questions, until they feel they have enough information to complete the project. Students will be introduced to the subject of reducing air resistance and drag through various research options and through in-class labs such as the NASA activity “Glow with the Flow.” Through the activity, students will be learning and practicing essential mathematical skills: using mathematical models to represent and understand quantitative relationships, analyzing change in changing contexts, collecting data and organizing it in a presentable fashion, and applying problem-solving strategies to real-life situations. At this point, students will list ideas they have for a new type of aircraft and will then design their best idea on the computer. Next, students will build their design and test it against their list of constraints. Using data collected from their aircraft, students will interpret their data to see if any changes should be made to their design. Students

will create a poster and present it during a final meeting with NASA representatives where each group will present their idea.

Objectives
- Students will understand how an airplane flies and focus their attention on the shape of the plane, not the internal parts. They will complete an activity about how an airplane flies.
- The students will then use their knowledge in completing the PBL that I developed.

Alignment The activity “Glow with the Flow” lists numerous math, science, and technology standards covered in the activity. The following are Common Core standards for English which correlate to creating an essay about the student’s air craft: “Conduct short research projects to answer a question, drawing on several sources and refocusing the inquiry when appropriate; gather relevant information from multiple print and digital sources; assess the credibility of each source; and quote or paraphrase the data and conclusions of others while avoiding plagiarism and providing basic bibliographic information for sources.” From the Social Studies standards, “Gather relevant information from multiple authoritative print and digital sources, using advanced searches effectively; assess the strengths and limitations of each source in terms of the specific task, purpose, and audience; integrate information into the text selectively to maintain the flow of ideas, avoiding plagiarism and overreliance on any one source and following a standard format for citation.” Standards for all areas of education can be tied into this project. Most cannot be applied to the project until the students decide which way the project is going to go.

Assessment Verbal assessments will be used throughout the PBL. I will be checking in with each student and group to see their progress on the project. The final assessment of the project will be done in a poster session where the students will present their ideas for a new aircraft to “NASA employees” which will most likely be local engineers. They will be graded on how feasible it would be to build this aircraft, how creative their ideas are, and their presentation skills. Each individual course would also have grades for the project. Students could take a test over the math skills used during the “Glow with the Flow” activity. Social Studies teachers could evaluate their student’s ability to research. Science teachers could score students on how they explored the “Glow with the Flow” activity. Finally, Language Arts teachers could grade a student’s presentation. The ways you can assess a PBL are limitless.

Conclusion This project allows students to really draw connections among all of their classes, which is not always easy. Unfortunately, today’s education clumps everything into blocks, and there are no obvious connections that students can make. After doing this PBL, students will have a better idea of how their classes relate as well as how some of the skills they are learning apply to everyday life. It is almost impossible for a student to not feel involved during a PBL. Every student will eventually find some little aspect of this problem that interests them.

References 1. National Aeronautics and Space Administration (n.d.). Geometry and algebra: glow with the flow. National Aeronautics and Space Administration. Retrieved from http://nasa.ibiblio.org/details.php?view=combo&subject=Mathematics&start=0&videoid=6020 2. Common Core State Standards Initiative. English Language Arts Standards. Retrieved from http://corestandards.org/the-standards/english-language-arts-standards 3. Common Core State Standards Initiative. English Language Arts Standards for Social Studies. Retrieved from http://corestandards.org/the-standards/english-language-arts-standards/writing-hst/grades-11-12/ 4. Romero, R. M., Eriksen, S. P., & Haworth, I. S. (2010). Quantitative Assessment of Assisted Problem-based Learning in a Pharmaceutics Course. American Journal of Pharmaceutical Education, 74(4), 1-9. Retrieved from EBSCOhost.

Optimization of a High Jump for a Prototype Biped

Student Researcher: Patrick M. Wensing

Advisor: Dr. David E. Orin

The Ohio State University Department of Electrical and Computer Engineering

Abstract The creation of legged machines capable of dynamic locomotion has the potential to greatly advance our abilities to explore other planets. Legs offer a variety of advantages in terms of mobility and agility that can be observed in nature. Still, legged robots have yet to leverage these advantages, as their mechanical designs and associated motion control capabilities lag far behind the biological realm. It is only through a concert of advances in mechanical design and intelligent control development that the performance capabilities of legged systems will be furthered.

As a preliminary study into dynamic movement, this work develops a maximum-height jump for a prototype biped. The jump development is accomplished through the selection of an intelligent parameterization of the jump control system. This parameterization reduces the infinite dimensional space of all jump controllers to a more tractable search space for optimization. Biologically inspired swarm optimization techniques are then used to find controllers that push the system to its performance boundaries. Results are presented that showcase a family of jump controllers that outperform hand-tuned control strategies by nearly 35 percent.

Introduction Robotics has long played a pivotal role in space exploration, most notably in recent history with the Mars rovers Spirit and Opportunity. While these wheeled robots are easier to control than legged robots, they require a clear path for navigation. In contrast, legged robots require only discrete footholds for transportation and can navigate a wider variety of terrain conditions. Although the use of legged robots would provide a variety of advantages for space exploration, our current methods to control legged systems are far from robust. If legged robots are to be used reliably in unfamiliar environments, an ability to maintain their stability through a variety of motions needs to be further developed.

A substantial proportion of the legged robots to date have only been endowed with the ability to perform quasi-static movements and are largely incapable of dynamic motion. Quasi-static movements are slow-speed movements where system accelerations are negligible. As a result, the stability of such motions is dominated by the position of the system’s center of gravity. That is, on even terrain, if the center of gravity resides over the system’s foot support, the system is statically stable. In contrast, dynamic movements are characterized by large accelerations, rapid changes in direction, and often experience periods of flight during which the system is largely uncontrollable. These motions require high-power actuators and responsive real-time control. While this presents substantial challenges to the engineering of these systems, the capability to quickly respond to surroundings through dynamic motion will be pivotal to any future autonomous legged system in operation.

Controlling and characterizing the stability of dynamic movements is a difficult task. The dynamic stability of some motions can be verified through the use of the celebrated Zero Moment Point criteria [1]. At every instant in a motion, a unique point on the ground plane, called the Zero Moment Point (ZMP), can be calculated. Just as the projection of our center of gravity (CoG) onto the ground plane provides a test of static stability for stationary configurations, a ZMP inside the foot support provides a certificate of dynamic stability for an instant of motion. Still, many motions, such as a human walk, place the ZMP at the edge of the foot support, resulting in a controlled fall that is interrupted only by the periodic occurrence of subsequent footfalls. This represents one of the major challenges to the performance of dynamic movements. That is, the system often must evolve through uncontrollable regions of its state-space only to regain that control through future actions.

Dynamically stable bipeds have been developed by a number of researchers. Perhaps the most research has focused on the development of dynamic walking robots. While many are familiar with Honda’s humanoid ASIMO, this system fails to achieve truly dynamic walking due to overly restrictive quasi-static control approaches [2]. Many simple legged robots have been created for the sole purpose of dynamic walking, and have been constructed to walk passively down slight inclines [3]. Others have achieved walking on more biologically realistic systems but still have failed to demonstrate a variety of movements [4, 5]. Recently, researchers have demonstrated the extension of walking control strategies to that of a run in a planar biped [6]. One of the most impressive bipeds to date remains Raibert’s dynamic biped that was capable of a stable running gait incorporated with a front flip [7]. The control approaches for these systems are widely varied; some use complex mathematics, while others employ simple heuristics developed from an intuitive understanding of the system.

Figure 1. (a) Honda’s ASIMO, a quasi-static humanoid capable of non-dynamic walking. (b) Cornell’s powered biped. Inspired by passive-dynamic bipeds, it is the most efficient walking biped to date. (c) TU Delft’s Flame, a dynamic biped that is specialized for walking. (d) Raibert’s 3D biped. Still one of the most capable bipeds to date, this system can run stably and perform front flip jumps.

To address the control complexities in dynamic legged systems, many researchers have turned to biological inspiration to formulate and tune their control strategies. Heuristics observed from biological systems have helped to formulate and tune knowledge-based fuzzy reasoning systems in both quadrupeds and bipeds [8, 9]. Others have employed genetic algorithms to optimize controllers for quadruped gallops and other dynamic motions [10]. Each of these strategies leverages the intelligence of the control designer, as opposed to a strictly model-based control approach. A synergy of these previous approaches will be taken in this work; heuristics will be used to construct an intelligent controller formulation, while biologically inspired optimization techniques will then be used to tune the controller for maximum performance.

Objectives The primary goal of this work is to investigate the high-jump capabilities of the prototype biped KURMET. KURMET is a planar biped that was built at OSU specifically to enable the study of dynamic locomotion. The first goal of this work is to identify an appropriate control strategy for the performance of a jump. Key parameters in this control strategy will be outlined, and this parameterization will implicitly introduce a family of jump controllers within the infinite dimensional space of all possible jump controllers. The second goal of this work is to produce a maximal-height jump with KURMET. This will be accomplished through optimization over the family of jump controllers produced in the previous step. The goal of the work is not just to find locally optimal controllers, but rather to find globally optimal controllers that represent the maximum performance capabilities of the system. Finally, it is desired to compare optimal control approaches to hand-tuned controllers produced through heuristic tuning on the experimental system.

Methodology This section will be laid out as follows. First, the kinematic and dynamic models of the robot will be described along with a description of the dynamic simulation environment. Next, a jump control approach will be described that is based on a finite state machine. The state machine allows for different control objectives to be attained throughout the different phases of the jump. Key parameters in each state will be identified in order to reduce the dimensionality of the control policies to be considered. Next, a two-stage optimization strategy will be introduced which employs a global optimizer for exploration and a local optimizer for fine-tuning to arrive at maximal jump-height control solutions.

Methodology – Biped Model To investigate dynamic movements in biped robots, an experimental robot, KURMET, was created at The Ohio State University. KURMET is a five-link planar biped, constrained by a boom [11] and is shown in Figure 2(a). KURMET has a mass of approximately 15 kg and has a standing height of 50 cm from the ground to the hip when its legs are fully extended. Each leg includes a thigh and a shank, which are each actuated by a series-elastic actuator (SEA).

Series elastic actuators have proven useful in the performance of dynamic movements, due to their ability to provide the explosive leg power demanded by motions such as a jump. The model for the hip SEA is shown in Figure 2(b). As opposed to a direct drive actuator, where the motor directly drives the link, the SEA incorporates an element of compliance in series between the motor and the link through the addition of a flexible spring. This compliance allows for smaller impact forces and smaller impact losses. That is, upon touchdown, some of the kinetic energy of the system is converted to spring potential energy. This energy is converted back to kinetic energy prior to the subsequent take-off. A detailed model of the series-elastic actuator can be found in [12]. More technically, the SEAs include a unidirectional compliance feature that disengages compliance when shortening the leg. While this allows for more precise leg positioning during flight, the complexities of this feature lead to complications for control. All four SEAs are located in the body of the biped to minimize leg inertia.

Figure 2. KURMET: (a) Prototype system and (b) Series Elastic Actuator (SEA) model for the hip axis (from [9]). Note: a similar actuator is used to drive the shank axis.

This work relies heavily on the use of dynamic simulation for controller evaluation. A high-fidelity model of the biped has been developed within the RobotBuilder [13] simulation environment and is shown in Figure 3. The biped is modeled as a series of articulated rigid bodies, where solid modeling software was used to estimate the inertial parameters of each link. RobotBuilder employs the DynaMechs package to efficiently calculate the system’s rigid body dynamics. All the actuator dynamics are modeled as well, including SEA and motor dynamics. The ground is modeled as a soft contact with a spring and damper in the vertical and lateral directions. More details on the model can be found in [9].

Methodology – Control Approach In order to perform computational optimization of the jump controller, a parameterization of the controller must be developed. In general, the space of all jump controllers is infinite dimensional. One may seek to optimize joint torques as a function of time, or to explicitly optimize a feedback policy based on the system state. Either of these approaches results in an infinite dimensional optimization problem.

Instead, this section will develop a finite dimensional parameterization of the jump controller through extension of a state machine controller that was previously developed in [9].

Figure 3. KURMET in the RobotBuilder simulation environment. RobotBuilder provides efficient dynamics algorithms to quickly simulate the system’s performance under a given control strategy.

In this work a state machine is used to sequence the legs through the various phases of the jump motion. The structure of the state machine is shown in Figure 4. Beginning in the pre-touchdown (PRE_TD) state, the state machine operates from top of flight (TOF) through one complete jump cycle. During each phase of the jump, there are different control objectives. Encoding the jump controller as a state machine allows a control structure to be specified in each state that is tailored to meeting these phase-specific objectives.

Figure 4. State machine employed to coordinate the jump behavior. States outlined in bold indicate states that occur during ground contact.

The pre-touchdown phase positions the legs for touchdown, starting at TOF and ending at touchdown. The most important details of this state are the starting and ending configurations. As a result, this state’s control parameters are selected as the starting height, and the hip and knee touchdown angles.

The hold phase acts to reverse the vertical velocity of the biped. This can be accomplished through the passive dynamics provided by the series-elastic actuators. That is, by holding the motors at a constant position, the legs will flex at touchdown and divert energy to the leg springs. To simplify the control structure, this state’s control policy is parameterized by hold times for the hip and knee actuators.

The thrust phase adds energy to the system by actively deflecting the springs in the series-elastic actuators. The injection of this energy into the system is critical to its performance. As a result, a number of parameters are included for this state. These parameters include the thrust time and the thrust amount on both the hip and the knee actuators. An additional parameter is used to specify the termination of this state, which occurs just prior to liftoff. To estimate when liftoff is approaching, the deflection in each spring is monitored. Termination then occurs once this deflection has fallen below a threshold percentage of the maximum deflection. This threshold percentage is critical and is chosen as an additional parameter.

The pre-liftoff phase positions the motors to the desired liftoff configuration. Due to the nature of the series-elastic actuators, this action inherently removes energy from the system. As a result, in addition to the final liftoff angles, the timing of this phase is critical. A trajectory time and final liftoff angles are used as parameters for this state. The pre-TOF state is perhaps the simplest state and returns the legs to the previously used touchdown configuration. This helps to prepare the system for the following jump cycle. The complete set of 13 controller parameters is summarized below in Table 1.

Table 1. Summary of controller parameterization.
State           # of Parameters   Parameter Description
Pre-Touchdown   3                 Starting height (1); hip and knee touchdown angles (2)
Hold            2                 Hip and knee hold time (2)
Thrust          5                 Hip and knee thrust time (2); hip and knee thrust amount (2); deflection threshold (1)
Pre-Liftoff     3                 Return trajectory time (1); hip and knee lift-off angles (2)
Pre-TOF         0                 None
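One possible way to hold the 13 parameters of Table 1 as a flat vector for the optimizer is sketched below; the field names and ordering are my own labels for illustration, not the authors' implementation.

```python
from dataclasses import dataclass, astuple

@dataclass
class JumpControllerParams:
    # Pre-Touchdown (3)
    starting_height: float
    hip_touchdown_angle: float
    knee_touchdown_angle: float
    # Hold (2)
    hip_hold_time: float
    knee_hold_time: float
    # Thrust (5)
    hip_thrust_time: float
    knee_thrust_time: float
    hip_thrust_amount: float
    knee_thrust_amount: float
    deflection_threshold: float
    # Pre-Liftoff (3)
    return_trajectory_time: float
    hip_liftoff_angle: float
    knee_liftoff_angle: float

    def to_vector(self):
        """Flatten to the 13-element vector x = (x1, ..., x13) seen by the optimizer."""
        return list(astuple(self))
```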

Methodology – Control Optimization The goal of this section will be to describe how specific values for the 13 parameters identified previously were selected to produce a maximal height jump. The optimization problem was solved with computational optimization algorithms in two steps. First, a particle swarm optimization algorithm was employed to explore the high-dimensional parameter space for promising regions of parameters. Second, a local non-gradient optimizer was used to fine-tune the controller’s performance around the best solution found in the first step.

The jump optimization problem to be solved can roughly be cast in the non-linear programming framework. The goal is to optimize the resultant jump height of the system under the control policy parameterized by some x=(x1, x2, …, x13), where each xi provides a specific value for the i-th controller parameter. This optimization is constrained, since the resulting jump must not violate the kinematic limits of the joints and must not result in the ground contact at the knee. The complete formulation is shown in Figure 5(a). Each of the functions in this equation can be evaluated through dynamic simulation of the biped under the control policy parameterized by x. Alternatively, an unconstrained approximation to this problem can be adopted that sufficiently penalizes any violation of the inequality constraints as in Figure 5(b). Here the penalty coefficients p1 and p2 must be selected sufficiently large to prevent an optimum from violating either constraint.

Figure 5: (a) Constrained version of the jump optimization problem. (b) Unconstrained approximation by linear additive penalty method.
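A skeletal version of the penalized objective in Figure 5(b) might look like the following; simulate_jump is a stub standing in for the dynamic simulation, and the particular constraint measures and penalty values are placeholders.

```python
def simulate_jump(x):
    """Stub: run the dynamic simulation under the controller parameterized by x and
    return (jump_height, joint_limit_violation, knee_contact_violation), where the
    violation terms are zero when the corresponding constraint is satisfied."""
    raise NotImplementedError

def penalized_objective(x, p1=1.0e3, p2=1.0e3):
    """Unconstrained approximation: minimize -height plus linear additive penalties."""
    height, joint_violation, knee_violation = simulate_jump(x)
    return -height + p1 * max(0.0, joint_violation) + p2 * max(0.0, knee_violation)
```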

Particle swarm optimization (PSO) provides an effective approach to search for globally optimal solutions to high-dimensional unconstrained optimization problems. Much like the genetic algorithm mimics the optimization process of biological evolution, particle swarm optimization attempts to mimic the process through which ideas are communicated and refined through social influence [15]. A swarm consists of N individuals (x_1, ..., x_N), where each individual x_i = (x_i,1, ..., x_i,13) encodes a controller. Each individual explores the space in the direction v_i, which is influenced by personal and social interaction through the following iterative updates:

v_i ← χ [ v_i + φ1 (x_pbest,i − x_i) + φ2 (x_lbest,i − x_i) ] + w
x_i ← x_i + v_i

332

Here, x_pbest,i and x_lbest,i represent particle i's personal best controller and the best controller found in its social network, respectively. While each particle's social network can be defined arbitrarily, a fully connected social topology was used in this work. The variables φ1 and φ2 are stochastic variables that randomize the acceleration influence from personal and social information, and χ is a constriction coefficient that can be selected to accelerate or decelerate exploration. Finally, w is a small random "turbulence" term included to further encourage exploration and prevent stagnation. In order to provide a balance between exploration and exploitation of good solutions, the particle swarm iterations were divided into two sections. For the first half of optimization, the constriction coefficient and the ranges for φ1, φ2, and w were selected to encourage exploration. During the second half of optimization, these coefficients were modified as described in Table 2 to encourage swarm convergence to the most promising areas of the search space.

Table 2. PSO parameter selection for different stages of optimization. Note: the second stage of optimization favors acceleration towards the best solution in the social network, and constricts particle velocities to encourage convergence of the swarm. (~U[a,b] denotes a uniform random variable on [a,b].)
Parameter                    Exploration Stage   Convergence Stage
Constriction coefficient χ   0.6                 0.4
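The velocity and position updates described above can be sketched as follows; the specific coefficient values and ranges are illustrative stand-ins rather than the exact entries of Table 2.

```python
import numpy as np

def pso_step(x, v, x_pbest, x_lbest, chi=0.6, phi_max=2.05, w_scale=0.01, rng=None):
    """One PSO update for a swarm of shape (N, 13): constriction chi, stochastic
    personal/social pulls phi1 and phi2, and small random turbulence w."""
    rng = rng if rng is not None else np.random.default_rng()
    phi1 = rng.uniform(0.0, phi_max, x.shape)   # pull toward each particle's personal best
    phi2 = rng.uniform(0.0, phi_max, x.shape)   # pull toward the best in the social network
    w = rng.normal(0.0, w_scale, x.shape)       # turbulence to prevent stagnation
    v_new = chi * (v + phi1 * (x_pbest - x) + phi2 * (x_lbest - x)) + w
    return x + v_new, v_new
```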

In order to ensure local optimality, a non-gradient optimization approach was employed to clean up the final results of the PSO optimizer. The Nelder-Mead downhill simplex method was selected based on its low number of required function evaluations, a process that requires costly dynamic simulation in this case. This algorithm worked by successively improving the worst performing particle in a 14-particle collection. Gradient information can be approximated in this algorithm based on the relative location of the 13 superior particles. Details of the algorithm can be found in [16]. This procedure of global search and local optimization was repeated to obtain a number of candidate global optima.
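The local fine-tuning step could equally be run with SciPy's built-in Nelder-Mead simplex, as a stand-in for a custom implementation; the tolerances below are arbitrary examples, and penalized_objective refers to the sketch above.

```python
from scipy.optimize import minimize

def refine_locally(x_best, objective):
    """Polish the best PSO solution with the Nelder-Mead downhill simplex method."""
    result = minimize(objective, x_best, method="Nelder-Mead",
                      options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 2000})
    return result.x, result.fun
```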

Results Obtained The two-stage optimization approach produced a number of locally optimal controllers ranging in jump heights from 1.02 m to 1.05 m. Here, jump heights are measured from the ground to the center of the left hip. While these many local optima were found with a wide range of initial configurations and thrust profiles, the optimum closest to the hand-tuned controller will be presented for ease of comparison. A comparison of the hip actuator performance and hip height for these jumps is shown in Figures 6(a-b).

Figure 6. (a) Left hip motor and link trajectory comparison for hand-tuned vs. optimized controllers. Note the superior link velocity at liftoff (450 ms) attained by the optimized solution. (b) Height trajectory comparison.

While both controllers result in a deep knee bend, a characteristic shared by all optima found, the hand-tuned controller fails to effectively manage the take-off phase of the jump. The hand-tuned controller allows the passive dynamics to dominate around liftoff, while the optimized controller continues to deflect the SEA spring until further deflection would violate the knee kinematic limits. This result informs future system designs, which would benefit from robust hyperextension limit hardware and the ability to perform a more extreme leg contraction.

333 Conclusions This work has laid the foundation for further development of dynamic motions on legged systems. The intuitively designed controller structure along with appropriate parameter selection allows human intelligence to be incorporated into the design of motion controllers in a manner that is unparalleled in model-based approaches. The two-stage optimization approach will be generally applicable to further motions once an appropriate control structure has been specified. Specifically, future work aims to expand on this research by considering a full 3D humanoid model and combinations of other dynamic movements such as a dynamic run into a running jump.

Acknowledgments The author would like to gratefully acknowledge the Ohio Space Grant Consortium for their support of this work, Yiping Liu for the sound research upon which this extension was based, and Dr. David E. Orin for his patient guidance.

References
1. M. Vukobratovic and B. Borovac, "Zero-Moment Point - Thirty Five Years of Its Life," International Journal of Humanoid Robotics, vol. 1, no. 1, pp. 157–173, 2004.
2. M. Hirose, Y. Haikawa, T. Takenaka, and K. Hirai, "Development of humanoid robot ASIMO," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2001.
3. S. Collins, A. Ruina, R. Tedrake, and M. Wisse, "Efficient Bipedal Robots Based on Passive-Dynamic Walkers," Science, vol. 307, no. 5712, pp. 1082–1085, 2005.
4. D. Hobbelen, T. de Boer, and M. Wisse, "System overview of bipedal robots Flame and TUlip: Tailor-made for Limit Cycle Walking," in Proc. Int. Conf. on Intelligent Robots and Systems, pp. 2486–2491, October 2008.
5. T. Yang, E. Westervelt, J. Schmiedeler, and R. Bockbrader, "Design and control of a planar bipedal robot ERNIE with parallel knee compliance," Autonomous Robots, vol. 25, no. 4, pp. 317–330, 2008.
6. B. Morris, E. R. Westervelt, C. Chevallereau, G. Buche, and J. W. Grizzle, "Achieving bipedal running with RABBIT: Six steps toward infinity," in Fast Motions in Biomechanics and Robotics, pp. 277–297, Springer Berlin/Heidelberg, 2006.
7. M. Raibert, Legged Robots That Balance. MIT Press, 1986.
8. L. Palmer and D. Orin, "Intelligent control of high-speed turning in a quadruped trot," Journal of Intelligent and Robotic Systems, pp. 47–68, 2009.
9. Y. Liu, P. Wensing, D. Orin, and J. Schmiedeler, "Fuzzy Controlled Hopping in a Biped Robot," to appear in Proc. Int. Conf. on Robotics and Automation, May 2011.
10. D. P. Krasny and D. E. Orin, "Generating high-speed dynamic running gaits in a quadruped robot using an evolutionary search," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, pp. 1685–1696, August 2004.
11. B. Knox, "Design of a biped robot capable of dynamic maneuvers," Master's thesis, The Ohio State University, Dept. of Mechanical Engineering, 2008.
12. S. Curran, D. Orin, B. Knox, and J. Schmiedeler, "Analysis and optimization of a series-elastic actuator for jumping in robots with articulated legs," in Proc. of 2008 ASME Dynamic Systems and Control Conference, pp. DSCC2008-2142:1–8, October 2008.
13. S. Rodenbaugh, "RobotBuilder: a graphical software tool for the rapid development of robotic dynamic simulations," Master's thesis, The Ohio State Univ., Dept. of Electrical & Computer Engineering, 2003.
14. S. McMillan, D. Orin, and R. McGhee, "DynaMechs: An object oriented software package for efficient dynamic simulation of underwater robotic vehicles," in Underwater Robotic Vehicles: Design and Control, pp. 73–98, TSI Press, 1995.
15. R. Eberhart, Y. Shi, and J. Kennedy, Swarm Intelligence. Morgan Kaufmann, 2001.
16. J. Nelder and R. Mead, "A simplex method for function minimization," Computer Journal, vol. 7, pp. 308–313, 1965.

334 Advances in Technology: Building a Personal Computer

Student Researcher: Michael D. Williams

Advisor: Dr. Edward Asikele

Wilberforce University Department of Computer Engineering

Abstract The goal of this project is to thoroughly explain and diagram the resources and procedures used to build a personal computer. The parts used are: a case, to act as the enclosure for the system; a motherboard, which acts as the interface through which all of the other devices connect; a processor, which handles all of the computing calculations and operations; a hard drive, used to store permanent data; random access memory (RAM) for running applications; a DVD burner to read CDs and DVDs; and, lastly, a dedicated graphics card for increased visual performance.

Project Objectives The main objective of this project is to show the cost and performance benefits of building a personal computer versus purchasing one through a retail vendor. My budget was set at $600.00 at the outset. The performance should rival that of similar quad-core processor systems, which range from $800.00 to $1,000.00 in price.

Methodology Used The parts being used in my system are as follows:

Case: Apevia X-QBOII Black Steel MicroATX Mini Tower
Power Supply: Apevia 500W Power Supply
Motherboard: ASUS M4A88T-M MicroATX motherboard
Processor: AMD Phenom II X4 945 Deneb 3.0GHz 95W Quad Core Processor
RAM: G.SKILL 4GB 240-Pin DDR3 1333 SDRAM
Hard Drive: SAMSUNG Spinpoint F3 500GB 7200RPM SATA 3.0Gb/s 3.5" Internal Hard Drive
Optical Drive: ASUS SATA 24x DVD Burner
Video Card: SAPPHIRE RADEON HD 5670 512MB 128-bit DDR5 PCI Express 2.1 x16 Video Card
Operating System: Windows 7 Ultimate

For comparison, the closest equivalent system found on the Dell website was a Dell Studio XPS 7100 Quad Core, which totaled $699.99 and did not include a copy of Windows; it shipped instead with a Linux distribution.

Results Obtained After completion of the project, the final total for the parts came to $519.06, well under the intended budget. Compared to the retail system, this represents a savings of about $180.00 for equivalent performance. Construction of the computer was fairly simple following the instructions in the motherboard manual, and the completed system worked without issue.

Acknowledgments Dr. Edward Asikele – Wilberforce University – Dean of Professional Studies Division

References 1. Newegg.com – Once You Know You Newegg – Parts and resources procurement

335 Tensile Testing of Auxetic Fiber Networks and Their Composites

Student Researcher: Rachael L. Willingham

Advisor: Dr. Lesley M. Berhan

The University of Toledo Department of Mechanical, Industrial and Manufacturing Engineering

Abstract This material development project reports on the continuation of an investigation of the hypothesis that an auxetic (i.e. negative Poisson’s ratio) composite can be produced by embedding an auxetic fiber network within a conventional non-auxetic matrix. First presented are the results of tensile tests performed on samples of compressed mats of sintered stainless steel fibers, known to have a negative Poisson’s ratio out-of-plane. Next, results of tensile tests on composite samples made by infusing the mats with a polyurethane polymer (commercially available Gorilla Glue) are reported. Finally, results of the tensile test behavior observed in the polymer are reported for comparison. The results show that the composite samples had an increased effective stiffness over the auxetic fiber mats, yet still displayed negative Poisson’s ratios. The results thus confirm that embedding an auxetic fiber network in a conventional polymer matrix is a feasible approach towards developing auxetic fiber reinforced composites.

Project Objectives In theory, a material with a negative Poisson’s ratio (i.e. an auxetic material) has improved hardness, impact resistance, fracture toughness, and shear modulus over a non-auxetic material, or one with a positive Poisson’s ratio, and the same Young’s modulus [1]. Interest in auxetics is motivated by the mechanical properties of these materials and the potential for enhancements that could be obtained by replacing a conventional positive Poisson’s ratio component with its unconventional auxetic complement. Although random fibrous architectures have been experimentally determined to exhibit auxetic behavior, there is very little research on this class of auxetic materials.

Recently, Tatlier and Berhan reported on the behavior of auxetic fiber networks and on the parameters that give rise to negative Poisson's ratio behavior in these materials [2]. Later, the research group initiated a study of the feasibility of developing auxetic fiber reinforced composites by embedding auxetic fiber networks within a conventional positive Poisson's ratio matrix material [3]. This study focuses on stiffer matrix materials in order to study the effect of the stiffness of the matrix relative to that of the reinforcing network on the auxetic behavior of the composite.

Methodology In order to make comparisons between the composites, tensile tests were performed on plain compressed mats of sintered stainless steel fibers which are known to be auxetic [4]. This also served to verify the accuracy of the tests, as the results were compared to values that had been previously reported [3].

Tensile tests were performed on three dog bone samples for each porosity of mat, with and without polymer, and on the polymer itself. The tests were conducted using an Instron 5560 universal testing machine, set at a rate of 1 mm/min. The thickness of each sample was measured during the test using a Laserlinc laser extensometer mounted on the Instron. At 0.1 s intervals, the load (F), longitudinal displacement (∆l), and mat thickness were recorded. At each interval, the longitudinal strain (ε1), longitudinal stress (σ1), and transverse strain (ε3) were determined using the following calculations:
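ε1 = ∆l / l0,   σ1 = F / (w0 t0),   ε3 = (t − t0) / t0   (written here in conventional form as a reconstruction, consistent with the quantities defined below, rather than as a verbatim copy of the paper's expressions),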

where l0 is the initial gauge length,

336 t0 and w0 are the initial sample thickness and width, and t is the instantaneous sample thickness measured by the laser extensometer.

Figure 1 depicts a typical plot of a tensile test: longitudinal stress versus longitudinal strain and transverse strain versus longitudinal strain. For each sample, the effective stiffness was calculated from the slope of the elastic region of the stress-strain plot. Additionally, the Poisson's ratio (ν13) for each sample was determined; it can be defined as
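ν13 = −ε3 / ε1   (a reconstruction in the standard form, consistent with the slope-based procedure described next).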

The Poisson’s ratio was taken as the negative of the slope of the elastic region from the plot of the transverse strain against the longitudinal strain.

To fabricate the composite samples, the stainless steel mats were infused with a polyurethane polymer (Gorilla Glue). The mats were coated with Gorilla Glue and then the glue was forced into the mat by compressing the samples between wax paper under weight. Then the mats were left to dry for at least 48 hours before testing. The composites were tested following the same procedure outlined above, and their mechanical characteristics were determined in the same manner.

Finally, samples of the polymer were made by allowing the glue to dry while being compressed between two pieces of wax paper to create small ‘sheets’ that the dog bone samples were cut from. The polymer samples were also tested following the same procedure and calculations listed above.

Results Obtained Table 1 lists the average values for Poisson’s ratio and effective stiffness for each porosity of mat. Each porosity of the stainless steel mats was found to have a negative Poisson’s ratio, as depicted in Figure 2. These results are consistent with previously obtained values and confirm the accuracy and consistency of the test method.

The average values for Poisson's ratio and effective stiffness for the composite mats are listed in Table 2. A comparison of the effective stiffness values from Table 1 and Table 2 shows that embedding the mats in a stiff polymer increases the effective stiffness of the composite. For the 60% porosity composites, the mat dominated the deformation of the sample. Based on the relative stiffness of the polymer and its behavior, the polymer initially dominates the properties of the 70% and 80% composites. Figure 3 depicts typical test results for a 70% composite sample. The Poisson's ratio is initially positive due to the presence of the polymer. After this first region, in which the polymer dominates, the sample thickness begins to increase; one explanation is that the brittle polymer fails and the mat deformation then dominates. The 80% composite samples yielded results similar to the 70% composites, as shown in Figure 4.

Figure 4 shows that the transverse strain versus longitudinal strain curves for the composites exhibit the same general trend as the mats themselves, but that infusing the mat with the polymer had effects that varied with mat porosity. This is because, depending on the porosity, the polymer accounted for a greater or lesser share of the governing mechanical properties (in the case of the 60% mats, the polymer has very little effect). In short, the surrounding polymer only restricts the deformation of the mat initially; the deformation behavior of the composite is governed largely by the embedded network.

Finally, Table 3 lists the average values of Poisson's ratio and effective stiffness for the polymer, polyurethane Gorilla Glue, for comparison. Figure 5 presents a typical plot of the stress-strain and transverse strain versus longitudinal strain curves for a Gorilla Glue sample. Gorilla Glue dries as a foam-like substance that has a varied and random structure, but it behaves as a conventional non-auxetic material with a positive Poisson's ratio.

337 Significance and Interpretation of Results The results show that there exists a potential for developing new auxetic composites by embedding an auxetic fiber network in a conventional polymer matrix and that the effective stiffness of the matrix component chosen is crucial in determining the desired mechanical properties.

Figures and Tables

Figure 1. Plot of longitudinal stress versus longitudinal strain and transverse strain versus longitudinal strain for an 80% porosity stainless steel mat.

Figure 2. Plot of transverse strain versus longitudinal strain for mats of different porosities.

Table 1. Average Poisson’s ratios and effective stiffnesses for compressed, sintered stainless steel mats. Mat Porosity Poisson’s Ratio (ν13) Effective Stiffness (MPa) 60% -18.387 1200.2 70% -7.665 1123.5 80% -5.755 830.8


Figure 3. Plot of longitudinal stress versus longitudinal strain and transverse strain versus longitudinal strain for a 70% porosity stainless steel mat embedded in polymer matrix.

338 Table 2. Average properties of composite samples made from Bekipor ST FP3 mats and polyurethane polymer (Gorilla Glue).

Composite Mat Poisson’s Ratio (ν13) Effective Stiffness (MPa) Porosity Region 1 Region 2 Region 1 Region 2 60% - -19.59 - 1493.3 70% 3.33 -8.359 196.17 1372.0 80% -1.84 -5.631 140.59 859.03

Figure 4. Plot of transverse strain versus longitudinal strain for auxetic composites.

Table 3. Average properties of the polyurethane polymer (Gorilla Glue).
                Poisson's Ratio (ν13)    Effective Stiffness (MPa)
Gorilla Glue    3.499                    41.851

Figure 5. Plot of longitudinal stress versus longitudinal strain and transverse strain versus longitudinal strain for a polymer (Gorilla Glue) sample.

Acknowledgments The author of this paper would like to thank The University of Toledo and Dr. Berhan for supplying the materials and equipment to conduct this research, and for project support, set up and organization.

References
1. Evans, K. E. and Alderson, A. (2000), "Auxetic Materials: Functional Materials and Structures from Lateral Thinking!," Advanced Materials, 12: 617–628.
2. Tatlier, M. and Berhan, L. (2009), "Modelling the negative Poisson's ratio of compressed fused fibre networks," physica status solidi (b), 246: 2018–2024.
3. Jayanty, S., Crowe, J. and Berhan, L. (2011), "Auxetic fibre networks and their composites," physica status solidi (b), 248(12): 73–81.
4. Delannay, F. (2005), International Journal of Solids and Structures, 42: 2265–2285.

339 Transfer and Storage of High Rate GPS Data

Student Researcher: Ryan A. Wolfarth

Advisor: Dr. Peter Jamieson

Miami University Department of Electrical and Computer Engineering

Abstract The thrust to modernize the GPS has resulted in the new wideband L5 signal which is being broadcast by the next generation of GPS satellites. The wideband nature of the L5 signal makes it particularly useful for the research that is being conducted at Miami to characterize the ionosphere during scintillations. The goal of this research is to develop advanced algorithms for GPS signal tracking during heavy scintillation events. This report describes an incomplete solution for transferring and storing L5 signal data at 25 megabytes per second with a USRP2 and personal computer running GNU Radio.

Introduction The GPS system is subject to many forms of interference that degrade the signal structure, which results in less accurate position solutions. One form of interference is caused by the ionosphere, which has not been accurately modeled due to its dynamic nature; it behaves differently depending on variables such as solar cycle activity or man-made RF interference. The resulting scintillation errors are frequency dependent: signals of different frequencies are affected dissimilarly. The GPS signal structure is composed of three components:

1. A Gold code, which is used to initially acquire and sometimes track the overall signal.
2. Navigation data that contains satellite position information (used to solve for user position).
3. A high frequency carrier wave.

Since these three components are of different frequencies, ionospheric interference can cause dissimilar reduction in amplitude, or worse, a phase difference between the three signal components.

Miami University has established an experimental receiver array at the High Frequency Active Auroral Research Program (HAARP) facility in Gakona, Alaska. This station consists of three GPS receivers of various grades which capture GPS signals traveling through the ionosphere. The overarching goal of this research is to develop advanced algorithms for GPS signal tracking during heavy scintillation events. This research began in 2009 and is still underway. All data sets presented herein were collected during the solar minimum in Gakona, Alaska.

Data collected during the solar minimum will have fewer scintillation errors because of the reduced magnetic interference from solar activity. Figure 1 shows a plot of the solar cycle since January 2000 [1]; predicted values are also included. The Y-axis of the plot indicates the number of sunspots present. These data are of interest because the sunspot number is a good measure of the severity of magnetic interference.

The location of HAARP isolates our experiments from heavy forms of man-made interference. This location, along with the solar minimum, allowed us to monitor a controlled environment. Controlled scintillation events are generated by heating a section of the ionosphere with a high frequency phased array RADAR. Satellites passing over the Earth’s magnetic zenith are targeted because it was discovered that the area closest to the magnetic zenith is the most prone to artificial scintillations. Figure 2 shows a sky plot of GPS satellites over an 8 hour period with the magnetic zenith highlighted as the area of interest.

Figure 2 also shows the lack of satellite density directly overhead at Gakona. This makes it desirable to track satellites from the Russian GLONASS GNSS, because they are more concentrated in the northern region of the globe, where the GPS satellite constellation is sparse.

340 An example of the collected data is shown in Figure 3. The blue regions of the X-axis represent times when the HAARP array was actively heating the ionosphere. The S4 index, also given in Figure 3, is a metric of scintillation severity. It is apparent from the figure that as the ionosphere continues to be heated, the total electron content (TEC) increases, which results in a less desirable carrier-to-noise ratio (C/N0).

Scintillation events like the ones created at HAARP are a difficult source of error for modern GPS receivers. Further research in this area is required to gain a better understanding of how to develop advanced algorithms to mitigate ionospheric scintillation errors.

Problem Description The data collected and presented in the introduction were all obtained from L1 and L2 GPS signals. The data rate required for these signals is only approximately 2 MB/s. Additionally, the resolution of these data sets was only 1 bit. The introduction of the GPS L5 signal presented a signal at a lower frequency with increased bandwidth. It is desirable to obtain a higher-resolution L5 signal in order to further understand the scintillation events described above.

The Universal Software Radio Peripheral 2 (USRP2) is a generic software receiver that can be configured to receive any or all of the three GPS signals. We desired to add an L5 data collection component to our receiver array in Gakona that could be easily transported and modified for similar roles in other locations. The USRP2 fits this role perfectly, but the remaining problem involved data transfer and storage. The software receiver nature of the USRP2 allows it to do all data processing in-house and write the data out to a Gigabit Ethernet port. However, we needed it to be configurable for use with a remote triggering system already in place and running GNU Radio (Linux-based software).

Increased resolution was also desired: 4 bits per sample versus 1 bit per sample. We are required to sample at 25 MHz due to the wideband nature of the L5 signal. This sampling rate, coupled with the increased resolution, requires an overall data rate of 25 MB/s. Thus, the following problem was posed:

Construct a data transfer system that takes input data from the USRP2 and writes to hard disk at 25 MB/s.
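For reference, the 25 MB/s figure is consistent with complex (I/Q) sampling at 4 bits per component; this breakdown is an assumption, since the sample format is not spelled out above: 25 MS/s × 2 components (I, Q) × 4 bits = 200 Mbit/s = 25 MB/s.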

Solution Implementation Several possibilities were considered before settling on one solution. These included an FPGA-based packet sniffer and a striped RAID array. The packet sniffer was rejected because of its increased complexity. The RAID array was a viable solution, but its physical size exceeded the space available.

The solution that was implemented involved a cyclic buffer in main memory on the local host running GNU Radio. The setup was connected as follows:

• USRP2 running as a receiver front-end to give binary data out for post processing.
• Linux-based computer running GNU Radio linked to the USRP2 via Gigabit Ethernet.
• Cyclic buffer 1 on the Gigabit Ethernet hardware card that passed data to main memory via a PCI interface.
• Cyclic buffer 2 in main memory to essentially increase the buffer capacity of the hardware cyclic buffer.
• Cyclic buffer 2 wrote to hard drive.

By extending the Ethernet card's cyclic buffer into main memory, we hoped to write more data to the hard drive before an overflow occurred. However, the creation of the main memory cyclic buffer caused the system to begin paging memory while trying to handle the incoming data and the writes to hard disk. This resulted in high CPU usage combined with memory leaks. This diagnosis was confirmed by reducing the data rate and observing data successfully written to the hard disk.
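As a rough illustration of the double-buffering idea only (not the GNU Radio flowgraph actually used), a software ring buffer between a capture thread and a disk-writer thread might be structured as follows in Python; the file name, block size, buffer depth, and stand-in data source are all hypothetical.

import io, os, queue, threading

CHUNK = 1 << 20                      # 1 MiB blocks from the receiver
ring = queue.Queue(maxsize=256)      # up to ~256 MiB buffered in main memory

def writer(path):
    # Drain the ring buffer to disk until the end-of-capture marker arrives.
    with open(path, "wb") as f:
        while True:
            block = ring.get()
            if block is None:
                break
            f.write(block)

def capture(source, n_blocks):
    # source.read(CHUNK) stands in for blocks arriving from the Ethernet card.
    for _ in range(n_blocks):
        try:
            ring.put(source.read(CHUNK), timeout=1.0)
        except queue.Full:
            raise RuntimeError("overflow: the disk writer could not keep up")
    ring.put(None)                   # signal end of capture

if __name__ == "__main__":
    t = threading.Thread(target=writer, args=("l5_samples.bin",))
    t.start()
    capture(io.BytesIO(os.urandom(4 * CHUNK)), n_blocks=4)  # stand-in data source
    t.join()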

341 Conclusion and Future Work The errors we encountered were caused by a basic hardware consideration that was overlooked in the design process. Our future solution will include a striped RAID array in a rack-based system.

This project has allowed us to gain more experience with generic software receiver platforms. Additionally, we have a greater understanding of hardware that will aid us in creating our next iteration of a transportable L5 data collection system.

Figures

Figure 1. Number of sun spots over the past 10 years.
Figure 2. Sky plot over Gakona, AK. The magnetic zenith is highlighted as the target area.

Figure 3. Total electron content (top), carrier-to-noise ratio (middle), and S4 index (bottom).

References
1. NOAA, "Solar Cycle Progression," Swpc.noaa.gov, 2011. Web. 31 March 2011.
2. Morton, Jade, Zhou, Qihou, and Cosgrove, Mathew, "A Floating Vertical TEC Ionosphere Delay Correction Algorithm for Single Frequency GPS Receivers," Proceedings of the 63rd Annual Meeting of The Institute of Navigation, Cambridge, MA, April 2007, pp. 479–484.

342 Bleed Hole Simulations for Mixed Compression Inlets

Student Researcher: Nathan A. Wukie

Advisor: Dr. Paul Orkwis

University of Cincinnati Department of Aerospace Engineering and Engineering Mechanics

Abstract The University of Cincinnati is working with the Air Force Research Laboratory (AFRL) at Wright-Patterson Air Force Base (WPAFB) on a project titled Shock Wave Boundary-Layer Interactions (SWBLI). The purpose of this project is to investigate how shock waves in a mixed-compression inlet affect the inlet boundary layer, and in turn the inlet flow field. One problem with mixed-compression inlets occurs at the point where a shock wave meets the flow boundary layer. At that point, a pocket of the boundary layer flow becomes separated, which is known as a separation bubble. That section of separated flow can then grow to block a significant portion of the inlet area. If such a condition is not handled properly, the result could be an unstart of the inlet or, in an actual application, an unstart of the engine. To counteract this effect, sections of the tunnel are aspirated through bleed holes to allow a portion of the flow to be bled off, which diminishes the size of the separated pockets around the areas of bleed. The bleed holes are being modeled separately to demonstrate the capability of simulating bleed hole systems.

Figure 1. CAD geometry of SWBLI model in wind tunnel

Project Objectives The purpose of the research presented here is to demonstrate the feasibility of simulating discrete bleed holes in the aspirated sections of the model in order to provide a higher level of fidelity for the SWBLI model. There are several key objectives that will contribute to achieving that goal. The first objective is to develop a gridding system for the bleed model that will facilitate ease of implementation into the larger model, and also minimize the required number of grid cells. The second objective is to produce an extremely fine grid and produce a reference solution to which future results may be compared. Lastly, a simulation will be run using conditions representative of those in the actual model for preliminary comparison to experimental results.

Methodology The Chimera gridding method was used for this project, which allowed for increased flexibility in the development of the simulation grid. Chimera gridding allows different grid blocks to overlap in the simulation, making it much easier to develop the grids that are required for more complex geometries. The OVERFLOW solver was then used to run the solutions. When a solution is computed with the OVERFLOW code, the option exists to iterate the solution on coarser grid levels first before solving for the entire number of grid points. At each successive grid level, the number of points is divided by two in each direction, so that a coarser grid level solves for one-eighth the number of grid points of the next finer grid level. By allowing a simulation to run first on coarser grid levels and then moving up to medium and fine grid levels as the solution progresses, the time required to obtain a converged solution is diminished. The actual generation of the grid is done in POINTWISE® in conjunction with several in-house software programs that facilitate the mesh generation process for the SWBLI project.
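To make the grid-sequencing arithmetic concrete, the short Python illustration below (with made-up grid dimensions, not those of the actual bleed-hole grids) shows the point count dropping by roughly a factor of eight per coarsening level.

# Illustrative only: point counts for successive coarsened grid levels.
ni, nj, nk = 129, 97, 65                      # hypothetical fine-grid dimensions
for level in ("fine", "medium", "coarse"):
    print(f"{level} level: {ni * nj * nk:,} points")
    ni, nj, nk = (ni + 1) // 2, (nj + 1) // 2, (nk + 1) // 2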

It was decided that a methodology and solution for a single, unit hole would be developed and then the resulting methodology would be replicated to generate an array of bleed holes for simulation. A hole was

343 taken from a SWBLI aspiration plate geometry and then used as a unit hole model and a computational grid was developed to fit that geometry.

Results and Discussion Several iterations of grid development produced two main grid systems. The first of the two, called "v5", was used as a reference for future development and can be seen in the top row of Figure 2. It has the highest resolution of the developed grids, which is the reason it is being used as a comparison for future, coarser grids. For comparison, the "v5" version of the grid has 492,525 cells just in the hole region, while the "v6" version has only 42,300. A series of solutions was run with conditions representative of those that would be seen in the experimental wind tunnel/SWBLI model. A freestream Mach number of 2.0 was imposed at the inlet plane of the simulation, and the outlet plane of the freestream was an extrapolated mass flow boundary condition. The outlet plane of the plenum region used a specified pressure outflow ratio, defined as the imposed static backpressure at that outlet divided by the freestream static pressure.

Figure 3 shows a comparison of the results produced by the two gridding schemes. The plots shown are at a slice plane parallel to the freestream that passes through the center plane of the bleed hole. There are some minor differences in the results produced by the two grid schemes. In earlier simulations, there was some indication that the simulations were in fact unsteady, which could be the reason for the difference, but further investigation is needed to confirm those results.

Figure 2. Grid versions v5 (top) and v6 (bottom).
Figure 3. v5 (top) and v7 (bottom) - Mach and total pressure.

Project Status At this point, two different gridding schemes have been developed for future use on the project. One acts as a reference solution for future iterations of grid development, and the second serves as a continuation of the grid development process. One prohibitive aspect of simulating entire arrays of bleed holes is the number of computational cells that such simulations require. Further coarsening of the hole grids will be done to find a suitable compromise between accuracy relative to the reference solution and the size of the computational grid.

An additional aspect of the project is to develop an automated method for generating a computational array of bleed holes. Some preliminary development in this area has been started but requires further work for implementation. Once a method for generating the bleed hole arrays has been developed, the next challenge will be incorporating those arrays into the larger SWBLI simulation. Simulations of the whole SWBLI model have been run already and will continue to be run to investigate other aspects of the project. The current simulations are being run with a distributed mass flow boundary condition to simulate the bleed regions but this method does not capture the discrete nature of the bleed holes. Comparisons will be made between the distributed boundary condition results and the discrete bleed hole results to characterize the differences between the two approaches, determine if those differences are significant, and if so, in what ways.

Acknowledgments The author would like to thank Dr. Paul Orkwis, Dr. Mark Turner, Daniel Galbraith, Marshall Galbraith, Jonathan Ratzlaff, Alex Apyan, and Richard Wolf for their support during this project.

344 References
1. Seddon, J. and Goldsmith, E. L., Intake Aerodynamics, 2nd Ed., Blackwell, 1999.
2. Galbraith, D. S., Galbraith, M. C., Turner, M. G., Orkwis, P. D., and Apyan, A., "Preliminary Numerical Investigation of a Mach 3 Inlet Configuration with and without Aspiration and Micro-Ramps," AIAA 2010-1095 (source of the SWBLI CAD models).
3. OVERFLOW Manual, V2.1.

345 Reaping Rocks

Student Researcher: Tara N. Yeager

Advisor: Ms. Karen Henning

Youngstown State University Department of Education

Abstract The classifications and characteristics of Earth rocks and minerals are the basis for my project. In order to begin this project, lessons on the types and features of common rocks and minerals will have to be discussed. This project pertains to those students who are enrolled in an Astronomy course. While students gain knowledge about common rocks on our planet, they will be challenged to use those characteristics, along with the “Moon ABCs Fact Sheet,” in order to determine the origin of rocks on the lunar surface. Students will be working in groups of 2 or 3 to carry out this activity. After determining the characteristics of their rocks, they will need to fill out their “My Own Rock Chart” to the best of their abilities. This project allows students to use their deeper cognitive skills to form a hypothesis on how moon rocks originated based on the known information about Earth rocks.

Lesson The basis for this project came from the "Reaping Rocks" lesson located in NASA's Rocket Educator Guide. First, each group needs to collect 10 different rock samples from around their community. These collected rocks will be the main specimens studied throughout the activity. The students will need to obtain microscopes from the classroom in order to carefully examine the characteristics of each rock.

Once students have completed the examinations, they will need to fill out the provided “My Own Rock Chart” in order to organize the gathered information. Based upon the observations and the information located on the “Moon ABCs Fact Sheet,” students will be required to formulate intellectual hypotheses describing how lunar rocks originated. After coming up with their hypothesis, students will be required to complete the handout, “Lunar Rocks: Where Did They Come From?”

Objective . To make predictions about the origin of lunar rocks by first collecting, describing, and classifying neighborhood rocks.

Learning Theory This activity leans towards a constructivist approach to learning. The students' own way of thinking and learning is the underlying focus of this theory. Based on given information and information acquired through observation, students have the ability to shape their own ideas into a hypothesis related to the activity. The students acquire a plentiful amount of background information dealing with rocks and minerals on our planet, Earth. Along with the given knowledge, students can apply their newly gained knowledge to constructively come up with their own ideas about how lunar rocks originated. The hands-on experience with microscopes gives students the opportunity to feel like real geologists, who must answer questions like this every day. This activity will be used to sum up the section on the Moon and its characteristics.

Resources In order to complete the activity, students will need the following: 10 rock samples and an egg crate to display the rock samples. After the rocks are placed in the egg crate, the students will need labels to record where in the community each rock came from. In addition, students will be provided microscopes to observe the characteristics of their rocks.

346 Assessment The students will need to complete the handouts given to them in class. These handouts include "My Own Rock Chart" and "Lunar Rocks: Where Did They Come From?" The associated worksheets will be used to determine the students' grades based on completion and accuracy. The results from this activity will be examined again at a later date, when lunar rock samples are received from the NASA center.

Conclusion At the end of this activity, students are challenged to predict what lunar rocks look like and where they come from. All of their predictions are based on the knowledge given to them at the beginning of the activity. Students will be required to keep their rocks and results in order to compare their predictions with actual lunar rocks provided by the NASA center.

Figures

347 Creep and Subsidence of the Hip Stem in THA

Student Researcher: Benjamin D. Yeh

Advisor: Dr. Timothy L. Norman

Cedarville University Department of Engineering and Computer Science

Abstract Total hip arthroplasty (THA) is the surgical procedure of replacing the hip joint with a prosthetic implant. An important component of this implant is the hip stem and how it interfaces with the femur and remains stable over time. There are two primary means of providing the fixation between the femoral stem and the femur itself: using either a cemented or an uncemented stem. In an uncemented stem implantation, the stem is in direct contact with the cortical bone at the midshaft and distal end and with the cancellous bone at the proximal end. The primary support comes from press-fit conditions between the cortical bone and the stem. In the cemented stem, support is provided by the bone via bone cement. One undesirable behavior in THA is distal subsidence (downward movement in the femoral canal) of the stem within the bone. This can occur due to stem separation from the bone or cement and/or it can be due to viscoelastic behavior of the bone or bone cement known as creep. A cemented stem likely has subsidence due to creep of both the bone and bone cement. Using uncemented stems removes the variable of viscoelastic behavior in the cement and perhaps lessens or reduces creep-induced subsidence. This study required the development of a 3-D finite element model of the femur and hip stem. The effects of cortical bone creep with and without bone cement on stem subsidence were determined using ABAQUS finite element software. By running simulations with and without the viscoelastic behavior of the cortical bone of the uncemented stem included in the analysis, the effects of the bone creep can be determined.

Project Objectives This project is focused on examining the stresses and displacements in the environment of an uncemented hip stem in total hip arthroplasty. Early objectives were to research known publications relating to the topic of THA. Research was focused on the effects of viscoelastic behavior in the bone (bone creep) on the subsidence and stability of the stem under constant load. The uncemented stem was chosen for investigation because there has already been research published on cemented stems [1] and the full 3-D model of the uncemented stem could be used as an extension of research conducted using uncemented axisymmetric models [2]. The primary objective was to investigate whether the effects of viscoelastic behavior in the cortical bone constitute a significant portion of the overall subsidence of the stem.

Methodology Finite element analysis software was used to perform the analysis in this study. The particular package used was ABAQUS. The actual geometries of the stem and femur were modeled in SolidWorks before being exported to ABAQUS for meshing and analysis. The stem model was created using the Depuy AML stem as a reference for geometric proportions and dimensions. Because there is a wide range of possible stem sizes in use, the stem was selected as a best fit for the femur model being used. These size selection criteria were based on the Depuy surgical technique manual for the AML stem [3]. The femur model was an electronic model created from an actual femur. To ensure applicability of this study, the femur dimensions were measured and compared with average femur sizes; the model was found to fall within the average range for an adult femur. To simplify analysis, only the proximal end of the femur was included in the final model. The entire femur was modeled as a solid homogeneous section with the material properties of cortical bone. This approximation excludes the effect that cancellous bone has on the behavior of the stem; however, it is reasonable because the primary support comes from the press fit at the distal end.

Because the models of the stem and femur were developed separately, they had to be assembled before performing the analysis. The Depuy surgical technique manual was again referenced to provide the actual sequence of steps used in implanting the stem. However, some exceptions were made for the sake of the

348 study. The femur was reamed to the exact size of the stem because this study is limited to analyzing a 100% contact area model in the press fit region. The actual alignment of the stem in the femur was conducted according to the manual, using the bone features as references. The loading was applied as a point load on the proximal face of the stem; it was 269.7 Newtons in the lateral direction and 1,338.1 Newtons in the distal direction. Because the stem being modeled is uncemented, it has a coated surface that is intended to allow the bone to grow into the surface of the stem. Therefore, for the simulation, the stem was modeled as completely fixed to the bone. The transverse viscoelasticity is represented by the following expression for the time-dependent strain rate [2].

(1)

The analysis was performed for two cases, one with the viscoelastic behavior modeled in the cortical bone and one without the viscoelastic behavior of bone included (Figure 1).

Results Obtained The maximum subsidence of the distal tip of the stem was identical for both cases: the analyses with and without viscoelastic behavior both resulted in a distal subsidence of -0.0040207 mm. The maximum displacement occurred at the proximal end of the stem due to bending; however, displacement relative to the femur was zero due to the fixed interface condition. The maximum transverse stress in the bone was caused by the bending and occurred near the distal tip of the stem. The largest stresses in the femur were compressive along the medial edge.

Significance and Interpretation of Results The bone creep did not contribute significantly (< 0.1%) to the overall subsidence of the stem under these conditions. This result was anticipated because of the lack of press fit conditions in the distal region of the stem. The bone creep modeled is driven by stress in the transverse directions. Without the radial stresses caused by the press fit, the viscoelastic behavior did not have much effect and therefore the contributions to subsidence were minimal. The largest displacements in the stem were at the proximal end as a result of the stem bending (Figure 2). The primary stress in this model was in the distal direction with only a small amount of transverse stress caused by bending in the stem (Figure 3).

Figure 1. Full model results, non-viscoelastic case, von Mises stresses.
Figure 2. Hip stem, non-viscoelastic case, von Mises stress.
Figure 3. Femur, distal end, maximum transverse stress.

349 Further Work Goals for continued work in this area include examining the effects of viscoelastic behavior in both the cortical and cancellous bone on stem subsidence and stability while using press fit conditions with <100% contact area [2][4]. This will involve redesigning the model to allow the use of press fit conditions as well as to simulate various amounts of contact. Performing multiple simulations with a range of conditions will allow for a greater understanding of the effects of bone creep in stem fixation. Additionally, the simulations will be performed without the viscoelastic effects in the bone modeled. This will give a baseline against which the results including viscoelastic behavior can be compared, giving a method of isolating the effects of bone creep. The hypothesis to be investigated is that a stem with decreased bone-stem contact will experience increased levels of subsidence compared to a 100% contact model.

References
1. Norman, T. L., Thyagariajan, G., Saligrama, V. C., Gruen, T. A., and Blaha, J. D., 2001, "Stem surface roughness alters creep induced subsidence and 'taper-lock' in a cemented femoral hip prosthesis," Journal of Biomechanics 34, 1352–1333.
2. Schultz, T. R., Blaha, J. D., Gruen, T. A., and Norman, T. L., 2006, "Cortical Bone Viscoelasticity and Fixation Strength of Press-Fit Femoral Stems: A Finite Element Model," Journal of Biomechanical Engineering 128, 7–12.
3. http://www.depuy.com/sites/default/files/products/files/DO_AML_Hip_Surgtech_0612-71-050r1.pdf
4. Norman, T. L., Todd, M. B., SanGregory, S. L., and Dewhurst, T. B., "Partial Stem-Bone Contact Area Significantly Reduces Stem Stability," accepted to the 52nd Annual Meeting, Orthopaedic Research Society, Mar. 19-22, 2006, pg. 680.

350